# Method of moments for estimating uncertainty distributions

## Abstract

Uncertainty theory is a branch of mathematics for modeling human uncertainty, and uncertain statistics is a methodology for collecting and interpreting experts' experimental data within uncertainty theory. In order to estimate uncertainty distributions from experts' experimental data, this paper presents a method of moments and designs a numerical method to find the moment estimates of the unknown parameters.

## Introduction

Probability theory is a branch of mathematics concerned with the analysis of random phenomena. A probability distribution describes the range of possible values of a random variable and their probabilities. In order to determine the probability distribution, random statistics (i.e., classical mathematical statistics) was proposed as a methodology for collecting and interpreting test data on a system by probability theory. On the other hand, fuzzy set theory, initiated by Zadeh via the membership function, is a mathematical model of vague quantitative or qualitative data, frequently generated by means of natural language. In order to determine the membership function, fuzzy statistics was developed, including fuzzy point estimation by a fuzzy decision-making approach and so on.

However, a lot of surveys have shown that some information and knowledge usually represented in human language, like 'about 1,000 km,' 'roughly 60 kg,' 'high speed,' and 'small size,' involves neither randomness nor fuzziness. When the sample size is too small (or even absent) to estimate a probability distribution, we have to invite some domain experts to evaluate their belief degree that each event will occur. Since human beings usually overweight unlikely events, the belief degree may have a much larger variance than the real frequency. One might regard the belief degree as a subjective probability or a fuzzy membership degree. However, the examples  tell us that this is inappropriate, because probability theory and fuzzy set theory may lead to counterintuitive results in this case.

In order to model this type of imprecise quantity, uncertainty theory was founded by Liu  in 2007 and refined by Liu  in 2010, becoming a branch of mathematics based on the normality, duality, subadditivity, and product axioms. Based on Liu's uncertainty theory, basic and important theoretical work such as uncertain process , uncertain calculus , uncertain differential equation , uncertain logic , uncertain inference , and uncertain risk analysis  has been established. Meanwhile, as an application of uncertainty theory, Liu  proposed a spectrum of uncertain programming and applied it to system reliability design, the facility location problem, the vehicle routing problem, the project scheduling problem, and so on. Other references related to uncertainty theory are Gao , Gao et al. , Peng and Iwamura , and Liu and Ha . Nowadays, uncertainty theory is well developed in both theory and practice. For the recent developments of uncertainty theory, readers may consult the book .

One important issue in uncertainty theory is how to determine the uncertainty distribution of an uncertain variable. In order to answer this question, uncertain statistics was first presented by Liu  in 2010 as a methodology for collecting and interpreting expert's experimental data by uncertainty theory. In uncertain statistics, Liu  first suggested an empirical uncertainty distribution and proposed a principle of least squares as the method for estimating the unknown parameters based on the expert's experimental data. After that, Chen and Ralescu  employed uncertain statistics to estimate the travel distance between Beijing and Tianjin. In 2012, Wang et al. recast the Delphi method as a process to determine uncertainty distributions  and presented a method of uncertain hypothesis testing for determining whether the views of two domain experts are identical (i.e., whether or not they have the same uncertainty distribution) .

Based on experts’ experimental data and the empirical uncertainty distribution, the k th sample moment is defined in this paper. Then, a method of moments for estimating the unknown parameters of uncertainty distribution is proposed. The remainder of this paper is organized as follows. The next section is intended to introduce some concepts in uncertainty theory as they are needed. Some basic concepts of uncertain statistics are introduced in Section ‘Beginning of uncertain statistics.’ The uncertain moment method for estimating parameters is proposed in Section ‘Method of moments.’ A numerical method to find the moment estimate of unknown parameters is designed in Section ‘Numerical method.’ Finally, a conclusion is drawn in Section ‘Conclusions.’

## Preliminaries

In this section, we will introduce some useful definitions about uncertain measure, uncertain variable, uncertain moment, and so on.

Let Γ be a nonempty set, and let L be a σ-algebra over Γ. Each element Λ ∈ L is called an event. A number M{Λ} indicates the belief degree that Λ will occur. The uncertain measure M was introduced as a set function satisfying the following three axioms (Liu ):

Axiom 1. M{Γ}=1.

Axiom 2. M{Λ}+M{Λc}=1 for any event Λ.

Axiom 3. For every countable sequence of events {Λ i }, we have

$\text{M}\left\{\bigcup _{i=1}^{\infty }{\Lambda }_{i}\right\}\le \sum _{i=1}^{\infty }\text{M}\left\{{\Lambda }_{i}\right\}.$

Although probability measure satisfies the above three axioms, probability theory is not a special case of uncertainty theory because product probability measure does not satisfy the following product axiom (Liu ):

Axiom 4. Let (Γ k ,L k ,M k ) be uncertainty spaces for k=1,2,…. The product uncertain measure M is an uncertain measure satisfying

$\text{M}\left\{\prod _{k=1}^{\infty }{\Lambda }_{k}\right\}=\underset{k=1}{\overset{\infty }{\wedge }}{\text{M}}_{k}\left\{{\Lambda }_{k}\right\},$

where Λ k are arbitrarily chosen events from L k for k=1,2,…, respectively.

The concept of uncertain variable was introduced by Liu  as a measurable function from an uncertainty space (Γ,L,M) to the set of real numbers. The expected value operator of an uncertain variable was defined by Liu  as

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]={\int }_{0}^{+\infty }\text{M}\left\{\xi \ge x\right\}\mathit{\text{dx}}-{\int }_{-\infty }^{0}\text{M}\left\{\xi \le x\right\}\mathit{\text{dx}},$

provided that at least one of the two integrals is finite. The k th moment of an uncertain variable ξ is defined by $E\left[{\xi }^{k}\right]$, where k is a positive integer.

For any x, the function $\Phi \left(x\right)=\text{M}\left\{\xi \le x\right\}$ is called the uncertainty distribution of the uncertain variable ξ. Peng and Iwamura  presented a sufficient and necessary condition for an uncertainty distribution: a function $\Phi :\Re \to \left[0,1\right]$ is an uncertainty distribution if and only if it is an increasing function except Φ(x)≡0 and Φ(x)≡1. Moreover, if the inverse function Φ−1(α) exists and is unique for each α ∈ (0,1), then Φ(x) is called regular, and the inverse function Φ−1(α) is called the inverse uncertainty distribution of ξ. If an uncertain variable ξ has a regular uncertainty distribution Φ(x) and its expected value exists, then the expected value is given by the following formula:

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]={\int }_{0}^{1}{\Phi }^{-1}\left(\alpha \right)\mathrm{d\alpha .}$

In addition, Liu  proved a series of operational laws of independent uncertain variables for calculating the uncertainty distribution of a monotone function of uncertain variables. By these operational laws, the uncertainty distributions of the sum, difference, product, quotient, maximum, and minimum of n uncertain variables are easily obtained. The details are shown in  or .
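
To make the two expected-value formulas above concrete, here is a small numerical sketch (ours, not from the paper): for the linear uncertainty distribution Φ(x) = x on [0, 1], whose inverse uncertainty distribution is Φ−1(α) = α, both formulas give E[ξ] = 1/2.

```python
# Sketch: check numerically that the measure-based definition of E[xi] and
# the inverse-distribution formula agree for Phi(x) = x on [0, 1].
# For a nonnegative variable, E[xi] = int_0^infty M{xi >= x} dx,
# which here reduces to int_0^1 (1 - Phi(x)) dx.

N = 100_000  # midpoint-rule subintervals

def phi(x):
    # linear uncertainty distribution on [0, 1]
    return min(max(x, 0.0), 1.0)

def phi_inv(alpha):
    # its inverse uncertainty distribution
    return alpha

e_measure = sum(1.0 - phi((i + 0.5) / N) for i in range(N)) / N
e_inverse = sum(phi_inv((i + 0.5) / N) for i in range(N)) / N
# both approximate E[xi] = 1/2
```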

## Beginning of uncertain statistics

Uncertain statistics is based on the expert's experimental data rather than historical data. One question is how to obtain the expert's experimental data. Liu  designed a questionnaire survey for collecting expert's experimental data; that is, we invite one or more domain experts, each of whom is asked to complete a questionnaire about the meaning of an uncertain variable ξ like 'about 1,000 km' individually.

We first ask the domain expert to choose a possible value x that the uncertain variable ξ may take. Then, we ask him, 'How likely is ξ to be less than x?' and denote his belief degree by α. Thus, we obtain an expert's experimental data point (x, α) from the domain expert. Repeating the above process, we obtain the expert's experimental data.

Let (x1, α1),(x2, α2),…,(x n , α n ) be the expert’s experimental data that meet the following condition:

${x}_{1}<{x}_{2}<\cdots <{x}_{n},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}0\le {\alpha }_{1}\le {\alpha }_{2}\le \cdots \le {\alpha }_{n}\le 1.$
(1)

Based on the above data, Liu  presented the following empirical uncertainty distribution:

$\Phi \left(x\right)=\left\{\begin{array}{ll}0,& \text{if}\phantom{\rule{0.3em}{0ex}}x<{x}_{1}\\ {\alpha }_{i}+\frac{\left({\alpha }_{i+1}-{\alpha }_{i}\right)\left(x-{x}_{i}\right)}{{x}_{i+1}-{x}_{i}},& \text{if}\phantom{\rule{0.3em}{0ex}}{x}_{i}\le x\le {x}_{i+1},\phantom{\rule{0.3em}{0ex}}1\le i<n\\ 1,& \text{if}\phantom{\rule{0.3em}{0ex}}x>{x}_{n}.\end{array}\right.$
(2)
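
The empirical uncertainty distribution (2) is simply linear interpolation through the expert's experimental data, with jumps to 0 below x1 and to 1 above x n . A minimal sketch (the function name is ours, not the paper's):

```python
def empirical_distribution(x, data):
    """Empirical uncertainty distribution (2) built from expert's
    experimental data [(x1, a1), ..., (xn, an)] with x1 < ... < xn
    and nondecreasing belief degrees a1 <= ... <= an."""
    xs = [p[0] for p in data]
    als = [p[1] for p in data]
    if x < xs[0]:
        return 0.0            # below the smallest data point
    if x > xs[-1]:
        return 1.0            # above the largest data point
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            # linear interpolation between (x_i, a_i) and (x_{i+1}, a_{i+1})
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return als[i] + (als[i + 1] - als[i]) * t
    return als[-1]            # x equals the largest data point
```

At the data points the function returns the elicited belief degrees α i ; note the jump from α n to 1 just above x n .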

Assume that an uncertainty distribution to be determined has a known functional form with one or more unknown parameters, say Φ(x;θ1, θ2,…, θ p ), where θ1, θ2,…, θ p are unknown parameters. How do we estimate these unknown parameters? Liu  presented the principle of least squares, which says that the unknown parameters θ i , i=1,2,…, p are the solution of the minimization problem,

$\underset{{\theta }_{1},\dots ,{\theta }_{p}}{min}\sum _{i=1}^{n}{\left(\Phi \left({x}_{i};{\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)-{\alpha }_{i}\right)}^{2}.$
(3)

Let ξ be an uncertain variable with uncertainty distribution Φ(x;θ1, θ2,…, θ p ), where θ1, θ2,…, θ p are unknown parameters. If the uncertainty distribution Φ(x;θ1, θ2,…, θ p ) is regular, then Φ−1 is the inverse uncertainty distribution of ξ. Assume that we have some expert's experimental data (x1, α1),(x2, α2),…,(x n , α n ) satisfying the condition (1). Then, for any α i , we have $\text{M}\left\{\xi \le {\Phi }^{-1}\left({\alpha }_{i}\right)\right\}={\alpha }_{i}$, i=1,2,…, n. This means that the principle of least squares can be recast in the following form:

$\underset{{\theta }_{1},\dots ,{\theta }_{p}}{min}\sum _{i=1}^{n}{\left({\Phi }^{-1}\left({\alpha }_{i};{\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)-{x}_{i}\right)}^{2},$
(4)

where the unknown parameters θ i , i=1,2,…, p are the solution of the minimization problem.

If the inverse uncertainty distribution of the uncertain variable ξ is simple and easy to calculate, we should solve the optimization problem (4) rather than the optimization problem (3) to estimate the unknown parameters. For example, let (x1, α1),(x2, α2),…,(x n , α n ) be the expert's experimental data, and let the uncertain variable ξ to be determined be a normal uncertain variable, which has the following uncertainty distribution:

$\Phi \left(x\right)={\left(1+\text{exp}\left(\frac{\pi \left(e-x\right)}{\sqrt{3}\sigma }\right)\right)}^{-1},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}x\in \Re ,$

where e and σ are two unknown parameters. The inverse uncertainty distribution of normal uncertain variable ξ is

${\Phi }^{-1}\left(\alpha \right)=e+\frac{\sigma \sqrt{3}}{\pi }ln\frac{\alpha }{1-\alpha },\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\alpha \in \left(0,1\right).$

By solving the optimization problem (4), it is easy to obtain the following estimate values:

$ê=\stackrel{̄}{x}-\stackrel{̂}{\sigma }\frac{\sqrt{3}}{\mathrm{n\pi }}\sum _{i=1}^{n}\text{ln}\frac{{\alpha }_{i}}{1-{\alpha }_{i}}$

and

$\stackrel{̂}{\sigma }=\frac{\sqrt{3}}{3}\mathrm{\pi n}\frac{\stackrel{̄}{x}{\sum }_{i=1}^{n}ln\frac{{\alpha }_{i}}{1-{\alpha }_{i}}-\sum _{i=1}^{n}{x}_{i}ln\frac{{\alpha }_{i}}{1-{\alpha }_{i}}}{{\left({\sum }_{i=1}^{n}ln\frac{{\alpha }_{i}}{1-{\alpha }_{i}}\right)}^{2}-n{\sum }_{i=1}^{n}{\left(\text{ln}\frac{{\alpha }_{i}}{1-{\alpha }_{i}}\right)}^{2}}$

where $\stackrel{̄}{x}=\left({x}_{1}+{x}_{2}+\cdots +{x}_{n}\right)/n$.
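
Because Φ−1(α) is linear in ln(α/(1−α)), problem (4) for the normal uncertainty distribution is an ordinary linear regression, and the estimates above are exactly its slope and intercept. A sketch (the helper name is ours), checked by a round trip on data generated exactly from a normal uncertainty distribution with e=2 and σ=1:

```python
import math

def normal_least_squares(data):
    """Least-squares estimates (e_hat, sigma_hat) for the normal uncertainty
    distribution, by regressing x_i on l_i = ln(alpha_i / (1 - alpha_i))."""
    n = len(data)
    xs = [x for x, _ in data]
    ls = [math.log(a / (1.0 - a)) for _, a in data]
    xbar = sum(xs) / n
    lbar = sum(ls) / n
    # regression slope c = sigma * sqrt(3) / pi from the normal equations
    c = (sum((x - xbar) * (l - lbar) for x, l in zip(xs, ls))
         / sum((l - lbar) ** 2 for l in ls))
    sigma_hat = c * math.pi / math.sqrt(3.0)
    e_hat = xbar - c * lbar
    return e_hat, sigma_hat

# round-trip check: data generated exactly from Phi^{-1} with e=2, sigma=1
true_e, true_sigma = 2.0, 1.0
alphas = [0.2, 0.4, 0.6, 0.8]
data = [(true_e + true_sigma * math.sqrt(3.0) / math.pi * math.log(a / (1 - a)), a)
        for a in alphas]
e_hat, sigma_hat = normal_least_squares(data)
```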

## Method of moments

In this section, a method of moments based on expert’s experimental data is proposed to estimate the unknown parameters. Firstly, we present the k th moment of the empirical uncertainty distribution (2), which is the uncertainty distribution of some uncertain variable ξ.

Theorem 1. Let (x i , α i ), i=1,2,…, n be the expert’s experimental data that meet the following condition:

$0\le {x}_{1}<{x}_{2}<\cdots <{x}_{n},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}0\le {\alpha }_{1}\le {\alpha }_{2}\le \cdots \le {\alpha }_{n}\le 1.$
(5)

Then for any positive integer k, the uncertain variable ξ with the empirical uncertainty distribution (2) has the k th moment

$E\left[{\xi }^{k}\right]={\alpha }_{1}{x}_{1}^{k}+\frac{1}{k+1}\sum _{i=1}^{n-1}\sum _{j=0}^{k}\left({\alpha }_{i+1}-{\alpha }_{i}\right){x}_{i}^{j}{x}_{i+1}^{k-j}+\left(1-{\alpha }_{n}\right){x}_{n}^{k}.$
(6)

Proof. Since $0\le {x}_{1}<{x}_{2}<\cdots <{x}_{n}$, by (2), we have

$E\left[{\xi }^{k}\right]={\int }_{0}^{+\infty }\text{M}\left\{{\xi }^{k}\ge x\right\}\mathit{\text{dx}}={\int }_{0}^{+\infty }k{x}^{k-1}\text{M}\left\{\xi \ge x\right\}\mathit{\text{dx}}={\int }_{0}^{+\infty }k{x}^{k-1}\left(1-\Phi \left(x\right)\right)\mathit{\text{dx}}.$

Substituting the empirical uncertainty distribution (2), integrating separately over $[0,{x}_{1}]$, the intervals $[{x}_{i},{x}_{i+1}]$, $i=1,2,\dots ,n-1$, and $[{x}_{n},+\infty )$, and using the identity ${x}_{i+1}^{k+1}-{x}_{i}^{k+1}=\left({x}_{i+1}-{x}_{i}\right)\sum _{j=0}^{k}{x}_{i}^{\phantom{\rule{0.3em}{0ex}}j}{x}_{i+1}^{k-j}$, we obtain (6).

The theorem is proved.

Definition 1. Let ξ be an uncertain variable. For any positive integer k, if the k th moment of ξ exists, then $E\left[{\xi }^{k}\right]$ is called the k th theoretical moment of ξ.

Definition 2. Let (x1, α1),(x2, α2),…,(x n , α n ) be the expert's experimental data that meet the condition (5). For any positive integer k,

$\stackrel{̄}{{\xi }^{k}}={\alpha }_{1}{x}_{1}^{k}+\frac{1}{k+1}\sum _{i=1}^{n-1}\sum _{j=0}^{k}\left({\alpha }_{i+1}-{\alpha }_{i}\right){x}_{i}^{\phantom{\rule{0.3em}{0ex}}j}{x}_{i+1}^{k-j}+\left(1-{\alpha }_{n}\right){x}_{n}^{k}$
(7)

is called the k th sample moment.
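
Since Equation 7 is a finite sum, the sample moments can be computed directly from the data. The following sketch (function name ours) evaluates it:

```python
def sample_moment(data, k):
    """k-th sample moment (Equation (7)) of the expert's experimental data
    [(x1, a1), ..., (xn, an)] with 0 <= x1 < ... < xn."""
    xs = [p[0] for p in data]
    als = [p[1] for p in data]
    # jump terms at x1 (from 0 to a1) and at xn (from an to 1)
    m = als[0] * xs[0] ** k + (1.0 - als[-1]) * xs[-1] ** k
    for i in range(len(data) - 1):
        # sum_{j=0}^{k} x_i^j * x_{i+1}^{k-j}
        s = sum(xs[i] ** j * xs[i + 1] ** (k - j) for j in range(k + 1))
        m += (als[i + 1] - als[i]) * s / (k + 1)
    return m

data = [(1.0, 0.25), (2.0, 0.5), (3.0, 0.75)]
m1 = sample_moment(data, 1)  # first sample moment
m2 = sample_moment(data, 2)  # second sample moment
```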

Secondly, we introduce a method of moments for estimating the unknown parameters; the method is described as follows.

Method of moments Let ξ be an uncertain variable with uncertainty distribution Φ(x;θ1, θ2,…, θ p ) where θ1, θ2,…, θ p are unknown parameters. Let (x1, α1), (x2, α2),…,(x n , α n ) be the expert's experimental data satisfying the condition (5). Let $E\left[{\xi }^{k}\right]$ and $\stackrel{̄}{{\xi }^{k}},k=1,2,\dots ,p$ be the k th theoretical and k th sample moments, respectively. As a general procedure for estimating the unknown parameters θ1, θ2,…, θ p , a system of equations is presented:

$E\left[{\xi }^{k}\right]=\stackrel{̄}{{\xi }^{k}},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}k=1,2,\dots ,\mathrm{p.}$
(8)

Since $E\left[{\xi }^{k}\right]=k{\int }_{0}^{+\infty }{x}^{k-1}\left(1-\Phi \left(x;{\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)\right)\mathit{\text{dx}}$, the above system of equations is equivalent to the following form:

$k{\int }_{0}^{+\infty }{x}^{k-1}\left(1-\Phi \left(x;{\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)\right)\mathit{\text{dx}}=\stackrel{̄}{{\xi }^{k}},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}k=1,2,\dots ,\mathrm{p.}$
(9)

A solution of the equations, θ1, θ2,…, θ p , is called a moment estimate. We denote the estimate values by $\stackrel{̂}{{\theta }_{1}},\stackrel{̂}{{\theta }_{2}},\dots ,\stackrel{̂}{{\theta }_{p}}$, respectively.

Example 1. Assume (x i , α i ), i=1,2,…, n are the expert's experimental data satisfying the condition (5) and that the uncertainty distribution of the uncertain variable ξ has a functional form with one unknown parameter a as follows:

$\Phi \left(x;a\right)=\left\{\begin{array}{ll}a{x}^{\frac{1}{2}},& 0\le x\le 1/{a}^{2}\\ 1,& x>1/{a}^{2},\end{array}\right.\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}a>0.$
(10)

By Equation 8, the estimate value of the unknown parameter a is the solution of the equation

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\stackrel{̄}{\xi },$

where $\stackrel{̄}{\xi }$ is defined by Equation 7. Since $E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\frac{1}{3{a}^{2}}$, we have

$\frac{1}{3{a}^{2}}=\frac{{\alpha }_{1}+{\alpha }_{2}}{2}{x}_{1}+\sum _{i=2}^{n-1}\frac{{\alpha }_{i+1}-{\alpha }_{i-1}}{2}{x}_{i}+\left(1-\frac{{\alpha }_{n-1}+{\alpha }_{n}}{2}\right){x}_{n}.$
(11)

Hence, we obtain the estimate value of the unknown parameter a,

$â={\left\{3\left[\frac{{\alpha }_{1}+{\alpha }_{2}}{2}{x}_{1}+\sum _{i=2}^{n-1}\frac{{\alpha }_{i+1}-{\alpha }_{i-1}}{2}{x}_{i}+\left(1-\frac{{\alpha }_{n-1}+{\alpha }_{n}}{2}\right){x}_{n}\right]\right\}}^{-\frac{1}{2}}.$
(12)

Assume that the expert's experimental data are obtained as follows:

$\left(\frac{1}{16},\frac{1}{4}\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(\frac{1}{4},\frac{1}{2}\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(\frac{16}{25},\frac{4}{5}\right).$

By Equation 12, we have the moment estimate value

$â=1.0268,$

and the moment estimate distribution is

$\Phi \left(x\right)=1.0268{x}^{\frac{1}{2}}.$

Example 2. Let ξ be an uncertain variable with the uncertainty distribution

$\Phi \left(x;a,b\right)=\mathit{\text{ax}}+b,\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(a>0\right)$
(13)

where a, b are two unknown parameters. Assume (x i , α i ), i=1,2,…, n are the expert’s experimental data satisfying the condition (5). By Equation 8, we will solve the following system of equations:

$\left\{\begin{array}{l}E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\stackrel{̄}{\xi }\\ E\left[{\xi }^{2}\right]=\stackrel{̄}{{\xi }^{2}}\end{array}\right.$

where $\stackrel{̄}{\xi }$ and $\stackrel{̄}{{\xi }^{2}}$ are defined by Equation 7. In addition, the inverse uncertainty distribution is ${\Phi }^{-1}\left(\alpha \right)=\frac{\alpha -b}{a}$. We have

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]={\int }_{0}^{1}{\Phi }^{-1}\left(\alpha \right)\mathrm{d\alpha }=\frac{1-2b}{2a},$

and

$E\left[{\xi }^{2}\right]={\int }_{0}^{1}{\left({\Phi }^{-1}\left(\alpha \right)\right)}^{2}\mathrm{d\alpha }=\frac{1+3{b}^{2}-3b}{3{a}^{2}}.$

Thus, we have the following system of equations

$\left\{\begin{array}{l}\frac{1-2b}{2a}=\stackrel{̄}{\xi }\\ \frac{1+3{b}^{2}-3b}{3{a}^{2}}=\stackrel{̄}{{\xi }^{2}}\end{array}\right.$

Then the unique solution of the above equations

$\left\{\begin{array}{l}â=\frac{1}{2\sqrt{3}}{\left(\stackrel{̄}{{\xi }^{2}}-{\left(\stackrel{̄}{\xi }\right)}^{2}\right)}^{-\frac{1}{2}}\\ \stackrel{̂}{b}=\frac{1}{2}\left(1-2â\stackrel{̄}{\xi }\right)\end{array}\right.$
(14)

is the moment estimate value of unknown parameters a and b, and the moment estimate distribution is

$\Phi \left(x\right)=\mathrm{âx}+\stackrel{̂}{b}.$

Assume that we have the following expert’s experimental data:

$\left(0.4,0.1\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(1.0,0.2\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(1.5,0.3\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(2.0,0.4\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(3.0,0.7\right),\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\left(4.0,0.9\right).$

From Equations 7 and 14, we have moment estimate values $â=0.2442$, $\stackrel{̂}{b}=-0.0519$ and the moment estimate distribution is Φ(x)=0.2442x−0.0519.
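
The computation in Example 2 can be reproduced mechanically. The sketch below (ours, not the authors' code) computes the sample moments by Equation 7 and then the estimates by Equation 14; last-digit differences from the values quoted above stem from intermediate rounding.

```python
import math

def sample_moment(data, k):
    # k-th sample moment, Equation (7)
    xs = [p[0] for p in data]
    als = [p[1] for p in data]
    m = als[0] * xs[0] ** k + (1.0 - als[-1]) * xs[-1] ** k
    for i in range(len(data) - 1):
        s = sum(xs[i] ** j * xs[i + 1] ** (k - j) for j in range(k + 1))
        m += (als[i + 1] - als[i]) * s / (k + 1)
    return m

data = [(0.4, 0.1), (1.0, 0.2), (1.5, 0.3), (2.0, 0.4), (3.0, 0.7), (4.0, 0.9)]
m1 = sample_moment(data, 1)
m2 = sample_moment(data, 2)
# closed-form moment estimates (14) for Phi(x; a, b) = a*x + b
a_hat = 1.0 / (2.0 * math.sqrt(3.0) * math.sqrt(m2 - m1 ** 2))
b_hat = 0.5 * (1.0 - 2.0 * a_hat * m1)
```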

## Numerical method

In this section, we introduce Newton's method to compute the moment estimates of the unknown parameters. A general algorithm for this parameter estimation problem is proposed as follows.

Assume (x i , α i ), i=1,2,…, n are the expert's experimental data satisfying the condition (5) and Φ(x;θ1, θ2,…, θ p ) is the uncertainty distribution of the uncertain variable ξ with unknown parameters θ1, θ2,…, θ p . The numerical method is to solve Equations 8,

$E\left[{\xi }^{k}\right]=\stackrel{̄}{{\xi }^{k}},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}k=1,2,\dots ,p$

where $\stackrel{̄}{\xi },\stackrel{̄}{{\xi }^{2}},\dots ,\stackrel{̄}{{\xi }^{p}}$ are the sample moments, which are constants calculated by Equation 7.

Denote a function $F:{ℝ}^{p}\to {ℝ}^{p}$,

$F\left({\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)=\left(\begin{array}{c}{F}_{1}\left({\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)\\ {F}_{2}\left({\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)\\ \cdots \\ {F}_{p}\left({\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)\end{array}\right),$

where

${F}_{k}\left({\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)=k{\int }_{0}^{+\infty }{x}^{k-1}\left(1-\Phi \left(x;{\theta }_{1},{\theta }_{2},\dots ,{\theta }_{p}\right)\right)\mathit{\text{dx}}-\stackrel{̄}{{\xi }^{k}},\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}k=1,2,\dots ,\mathrm{p.}$

Equation 8 is equivalent to F(θ1, θ2,…, θ p )=0.

If the function F is differentiable and the Jacobian matrix of F is easily computed, the procedure can be summarized as follows:

Step 1. Estimate the k th sample moments by Equation (7), k=1,2,…, p.

Step 2. Give an initial value of parameters ${\theta }^{0}=\left({\theta }_{1}^{0},{\theta }_{2}^{0},\dots ,{\theta }_{p}^{0}\right)$.

Step 3. Compute the inverse matrix J F (θk)−1 of the Jacobian matrix J F (θk), k=0,1,2,…, where

${J}_{F}\left({\theta }^{k}\right)=\left(\begin{array}{cccc}\frac{\partial {F}_{1}}{\partial {\theta }_{1}}& \frac{\partial {F}_{1}}{\partial {\theta }_{2}}& \cdots & \frac{\partial {F}_{1}}{\partial {\theta }_{p}}\\ \frac{\partial {F}_{2}}{\partial {\theta }_{1}}& \frac{\partial {F}_{2}}{\partial {\theta }_{2}}& \cdots & \frac{\partial {F}_{2}}{\partial {\theta }_{p}}\\ \cdots & \cdots & \cdots & \cdots \\ \frac{\partial {F}_{p}}{\partial {\theta }_{1}}& \frac{\partial {F}_{p}}{\partial {\theta }_{2}}& \cdots & \frac{\partial {F}_{p}}{\partial {\theta }_{p}}\end{array}\right).$
(15)

The integrals in the equation can be computed by some numerical methods.

Step 4. Calculate θk+1 by the following equation:

${\theta }^{k+1}={\theta }^{k}-{J}_{F}{\left({\theta }^{k}\right)}^{-1}F\left({\theta }^{k}\right).$
(16)

Step 5. Repeat Steps 3 and 4 while $|{\theta }^{k+1}-{\theta }^{k}|>\alpha$ and k<M, where α and M are given positive numbers.

Step 6. If $|{\theta }^{n}-{\theta }^{n-1}|\le \alpha$, report the last iterate ${\theta }^{n}$ as the solution.

Remark 1. If the distribution Φ(x;θ1, θ2,…, θ p ) is continuously differentiable in the parameters, we have

$\frac{\partial {\int }_{0}^{+\infty }\left(1-\Phi \left(x;{\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)\right)\mathit{\text{dx}}}{\partial {\theta }_{1}}=-{\int }_{0}^{+\infty }\frac{\mathrm{\partial \Phi }\left(x;{\theta }_{1},{\theta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)}{\partial {\theta }_{1}}\mathrm{dx.}$

Remark 2. If the function F is not differentiable or the Jacobian matrix is hard to compute, we can approximate the partial derivative

$\frac{\partial {F}_{i}\left({\theta }_{1},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{j},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)}{\partial {\theta }_{j}}$

by the difference quotient

$\frac{{F}_{i}\left({\theta }_{1},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{j}+{h}_{j},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)-{F}_{i}\left({\theta }_{1},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{j},\cdots \phantom{\rule{0.3em}{0ex}},{\theta }_{p}\right)}{{h}_{j}},$

where i, j=1,2,…, p, and h1, h2,…, h p are given small positive numbers.
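
The six steps above can be condensed into a short generic routine. The sketch below (all names ours, not from the paper) uses the difference quotient of Remark 2 in place of the analytic Jacobian and a small Gaussian elimination for the p × p linear solve; it is demonstrated on a toy system with known root (1, 1) rather than on a moment problem.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for the small p x p system
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= factor * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def newton(F, theta0, tol=1e-10, max_iter=100, h=1e-7):
    # Steps 2-6: iterate theta <- theta - J_F(theta)^{-1} F(theta), with the
    # Jacobian approximated by finite differences (Remark 2)
    theta = list(theta0)
    p = len(theta)
    for _ in range(max_iter):
        f = F(theta)
        J = [[0.0] * p for _ in range(p)]
        for j in range(p):
            pert = theta[:]
            pert[j] += h
            fj = F(pert)
            for i in range(p):
                J[i][j] = (fj[i] - f[i]) / h
        d = solve(J, f)                       # solve J d = F(theta)
        theta = [t - di for t, di in zip(theta, d)]
        if max(abs(di) for di in d) <= tol:   # Step 5 stopping rule
            break
    return theta

# toy demonstration: theta1 + 2*theta2 = 3 and theta1^2 = theta2, root (1, 1)
root = newton(lambda t: [t[0] + 2.0 * t[1] - 3.0, t[0] ** 2 - t[1]], [2.0, 2.0])
```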

Example 3. Assume that an uncertainty distribution has a lognormal form with two unknown parameters e and σ, that is,

$\Phi \left(x|e,\sigma \right)={\left(1+exp\left(\frac{\pi \left(e-lnx\right)}{\sqrt{3}\sigma }\right)\right)}^{-1},$

and the expert’s experimental data are (0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9). We have

$F\left(e,\sigma \right)=\left(\begin{array}{c}{\int }_{0}^{+\infty }\left(1-{\left(1+exp\left(\frac{\pi \left(e-lnx\right)}{\sqrt{3}\sigma }\right)\right)}^{-1}\right)\mathit{\text{dx}}-1.86\\ 2{\int }_{0}^{+\infty }x\left(1-{\left(1+exp\left(\frac{\pi \left(e-lnx\right)}{\sqrt{3}\sigma }\right)\right)}^{-1}\right)\mathit{\text{dx}}-4.43\end{array}\right)$

and

${J}_{F}\left(e,\sigma \right)=\left(\begin{array}{cc}{\int }_{0}^{+\infty }\frac{\mathrm{\pi f}\left(x|e,\sigma \right)}{\sqrt{3}\sigma {\left(1+f\left(x|e,\sigma \right)\right)}^{2}}\mathit{\text{dx}}& -{\int }_{0}^{+\infty }\frac{\pi \left(e-lnx\right)f\left(x|e,\sigma \right)}{\sqrt{3}{\sigma }^{2}{\left(1+f\left(x|e,\sigma \right)\right)}^{2}}\mathit{\text{dx}}\\ 2{\int }_{0}^{+\infty }x\frac{\mathrm{\pi f}\left(x|e,\sigma \right)}{\sqrt{3}\sigma {\left(1+f\left(x|e,\sigma \right)\right)}^{2}}\mathit{\text{dx}}& -2{\int }_{0}^{+\infty }x\frac{\pi \left(e-lnx\right)f\left(x|e,\sigma \right)}{\sqrt{3}{\sigma }^{2}{\left(1+f\left(x|e,\sigma \right)\right)}^{2}}\mathit{\text{dx}}\end{array}\right),$

where $f\left(x|e,\sigma \right)=exp\left(\frac{\pi \left(e-lnx\right)}{\sqrt{3}\sigma }\right)$. All of these integrals are computed by Simpson's rule, and the initial value is (0.6, 0.5).

A run of the algorithm shows that the approximate solution of the equation F(e, σ)=0 is $ê=0.50$ and $\stackrel{̂}{\sigma }=0.47$, which leads to the moment estimate distribution

$\Phi \left(x|0.50,0.47\right)={\left(1+exp\left(\frac{\pi \left(0.50-lnx\right)}{\sqrt{3}×0.47}\right)\right)}^{-1}.$
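
The whole Example 3 pipeline can be sketched as follows (a hypothetical implementation, not the authors' code). Since the lognormal distribution is regular and positive, its k th theoretical moment can equivalently be computed from the inverse uncertainty distribution as ${\int }_{0}^{1}{\left({\Phi }^{-1}\left(\alpha \right)\right)}^{k}\mathrm{d\alpha }$ (because ξ raised to the k is an increasing function of ξ), which avoids the improper integrals over x; a midpoint quadrature and a finite-difference Newton iteration then recover parameters close to the values reported above.

```python
import math

data = [(0.6, 0.1), (1.0, 0.3), (1.5, 0.4), (2.0, 0.6), (2.8, 0.8), (3.6, 0.9)]

def sample_moment(data, k):
    # k-th sample moment, Equation (7)
    xs = [p[0] for p in data]
    als = [p[1] for p in data]
    m = als[0] * xs[0] ** k + (1.0 - als[-1]) * xs[-1] ** k
    for i in range(len(data) - 1):
        s = sum(xs[i] ** j * xs[i + 1] ** (k - j) for j in range(k + 1))
        m += (als[i + 1] - als[i]) * s / (k + 1)
    return m

def theoretical_moment(e, sigma, k, n=20000):
    # E[xi^k] = int_0^1 (Phi^{-1}(alpha))^k d(alpha) by the midpoint rule,
    # with Phi^{-1}(alpha) = exp(e + (sqrt(3)*sigma/pi) * ln(alpha/(1-alpha)))
    c = math.sqrt(3.0) * sigma / math.pi
    total = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        total += math.exp(k * (e + c * math.log(a / (1.0 - a))))
    return total / n

m = [sample_moment(data, 1), sample_moment(data, 2)]

def F(theta):
    # residuals of the moment equations (8)
    e, sigma = theta
    return [theoretical_moment(e, sigma, k) - m[k - 1] for k in (1, 2)]

theta = [0.6, 0.5]  # initial value used in the text
h = 1e-6            # step for the finite-difference Jacobian (Remark 2)
for _ in range(50):
    f = F(theta)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        pert = theta[:]
        pert[j] += h
        fj = F(pert)
        for i in range(2):
            J[i][j] = (fj[i] - f[i]) / h
    # explicit 2x2 solve of J d = F(theta)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    d = [(J[1][1] * f[0] - J[0][1] * f[1]) / det,
         (J[0][0] * f[1] - J[1][0] * f[0]) / det]
    theta = [theta[0] - d[0], theta[1] - d[1]]
    if max(abs(x) for x in d) < 1e-9:
        break
e_hat, sigma_hat = theta
```

The quadrature size and step h are our choices; a finer rule near α = 1 would reduce the small bias in the second moment.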

## Conclusions

Uncertain statistics is a methodology for collecting and interpreting expert's experimental data by uncertainty theory. The method of moments proposed in this paper is a new way to estimate the unknown parameters of an uncertainty distribution. The method depends only on the expert's experimental data and is easy to implement in real experiments.

## References

1. Zadeh L: Fuzzy sets. Inform. Control 1965, 8: 338–353. 10.1016/S0019-9958(65)90241-X

2. Liu B: Why is there a need for uncertainty theory? J. Uncertain Syst 2012, 6(1):3–10.

3. Liu B: Uncertainty theory. Berlin: Springer; 2007.

4. Liu B: Uncertainty Theory: a Branch of Mathematics for Modeling Human Uncertainty. Berlin: Springer; 2010.

5. Liu B: Fuzzy process, hybrid process and uncertain process. J. Uncertain Syst 2008, 2(1):3–16.

6. Liu B: Some research problems in uncertainty theory. J. Uncertain Syst 2009, 3(1):3–10.

7. Li X, Liu B: Hybrid logic and uncertain logic. J. Uncertain Syst 2009, 3(2):83–94.

8. Liu B: Uncertain set theory and uncertain inference rule with application to uncertain control. J. Uncertain Syst 2010, 4(2):83–98.

9. Liu B: Uncertain risk analysis and uncertain reliability analysis. J. Uncertain Syst 2010, 4(3):163–170.

10. Liu B: Theory and Practice of Uncertain Programming. Berlin: Springer; 2009.

11. Gao X: Some properties of continuous uncertain measure. Int. J. Uncertainty, Fuzziness Knowl. Based Syst 2009, 17(3):419–426. 10.1142/S0218488509005954

12. Gao X, Gao Y, Ralescu D: On Liu’s inference rule for uncertain systems. Int. J. Uncertainty, Fuzziness Knowl. Based Syst 2010, 18(1):1–11. 10.1142/S0218488510006349

13. Peng Z, Iwamura K: A sufficient and necessary condition of uncertainty distribution. J. Interdiscip. Math 2010, 13(3):277–285. 10.1080/09720502.2010.10700701

14. Liu Y, Ha M: Expected value of function of uncertain variables. J. Uncertain Syst 2010, 4(3):181–186.

15. Liu B: Uncertainty Theory. Berlin: Springer; . http://orsc.edu.cn/liu/ut.pdf

16. Chen X, Ralescu D: B-spline method of uncertain statistics with application to estimating distance. J. Uncertain Syst 2012, 6(4):256–262.

17. Wang X, Gao Z, Guo H: Delphi method for estimating uncertainty distributions. Inform.: Int. Interdiscip. J 2012, 15(2):449–460.

18. Wang X, Gao Z, Guo H: Uncertain hypothesis testing for two experts’ empirical data. Math. Comput. Modell 2012, 55: 1478–1482. 10.1016/j.mcm.2011.10.039

## Acknowledgements

This work was supported by National Natural Science Foundation of China (No. 61073121) and Hebei Natural Science Foundation (No. G2013402063 and No. F2012402037).

## Author information


### Corresponding author

Correspondence to Xiaosheng Wang.

Wang, X., Peng, Z. Method of moments for estimating uncertainty distributions. J. Uncertain. Anal. Appl. 2, 5 (2014). https://doi.org/10.1186/2195-5468-2-5


### Keywords

• Uncertainty theory
• Uncertainty distribution
• Uncertain statistics
• Moment estimate 