# Extreme values and integral of solution of uncertain differential equation

## Abstract

An uncertain differential equation is a type of differential equation driven by an uncertain process. This paper gives the uncertainty distributions of the extreme values, first hitting time, and integral of the solution of an uncertain differential equation. Some numerical solution methods are also documented.

## Background

Probability theory, since it was founded by Kolmogorov in 1933, has been a crucial tool for modeling indeterminate phenomena when the probability distributions of the possible events are available. However, due to economic or technological reasons, we often cannot obtain the sample data from which the probability distribution would be estimated via statistics. In this case, we have to invite domain experts to evaluate the belief degree that each event will happen. Since human beings tend to overweight unlikely events, the belief degree usually has a much larger range than the real frequency (Kahneman and Tversky ). As a result, it cannot be treated as a probability; otherwise counterintuitive results may follow. An extreme counterexample was given by Liu .

In order to model the belief degree, an uncertainty theory was founded by Liu  and refined by Liu  based on the normality, duality, subadditivity, and product axioms. So far, it has been applied to many areas and has produced many branches, such as uncertain programming (Liu ), uncertain risk analysis (Liu ), uncertain inference (Liu ), uncertain logic (Liu ), and uncertain statistics (Liu ).

In order to describe the evolution of an uncertain phenomenon, Liu  proposed the concept of uncertain process. Then Liu  designed the Liu process, an uncertain process with stationary and independent normal uncertain increments, and founded uncertain calculus to deal with the integral and differential of an uncertain process with respect to Liu process. Then Liu and Yao  extended the uncertain integral from a single Liu process to multiple ones. Besides, Chen and Ralescu  founded uncertain calculus with respect to general Liu process. As a complement, Yao  founded uncertain calculus with respect to uncertain renewal process.

Uncertain differential equation was first proposed by Liu  as a type of differential equation driven by Liu process. Chen and Liu  gave an analytic solution for the linear uncertain differential equation. Following that, Liu  and Yao  gave methods for solving two types of nonlinear uncertain differential equations. Then Yao and Chen  proposed a numerical method for solving uncertain differential equations. As extensions of uncertain differential equation, uncertain differential equation with jumps was proposed by Yao , and uncertain delayed differential equation was studied by, among others, Barbacioru , Liu and Fei , and Ge and Zhu . In addition, backward uncertain differential equation was proposed by Ge and Zhu .

Due to the paradox of stochastic finance theory (Liu ), Liu  presented an uncertain stock model via uncertain differential equation, and gave its European option pricing formulas, opening the door to uncertain finance theory. Then Chen  derived the American option pricing formulas for the stock model. After that, Peng and Yao  presented a mean-reverting uncertain stock model, and Chen et al. proposed an uncertain stock model with periodic dividends. Besides, Chen and Gao  proposed an uncertain interest rate model, and Liu et al. proposed an uncertain currency model. In addition, Zhu  applied uncertain differential equation to optimal control problems.

With the many applications of uncertain differential equation, the study of the properties of its solutions has also developed well. Chen and Liu  gave a sufficient condition for an uncertain differential equation to have a unique solution. Then Gao  weakened the condition. After that, Yao et al. gave a sufficient condition for an uncertain differential equation to be stable.

In this paper, we will consider the extreme values, first hitting time, and integral of the solution of an uncertain differential equation. The rest of this paper is organized as follows. In the section of Preliminary, we review some basic concepts about uncertain variables, uncertain processes, and uncertain differential equations. After that, in the section of Extreme values, we study the extreme values of the solution of an uncertain differential equation and give their uncertainty distributions. Then, by the relationship between the first hitting time and the extreme values, we give an uncertainty distribution of the first hitting time of the solution in the section of First hitting time. Following that, we consider the integral of the solution and give its inverse uncertainty distribution in the section of Integral. At last, some remarks are given in the section of Conclusions.

## Preliminary

In this section, we will first review some basic concepts and results in uncertainty theory. Then we introduce the concept of uncertain process, uncertain calculus and uncertain differential equation.

### Uncertainty theory

#### Definition 1.

(Liu ) Let $ℒ$ be a σ-algebra on a nonempty set Γ. A set function $ℳ:ℒ\to \left[0,1\right]$ is called an uncertain measure if it satisfies the following axioms:

Axiom 1: (Normality Axiom) $ℳ\left\{\mathrm{\Gamma }\right\}=1$ for the universal set Γ.

Axiom 2: (Duality Axiom) $ℳ\left\{\mathrm{\Lambda }\right\}+ℳ\left\{{\mathrm{\Lambda }}^{c}\right\}=1$ for any event Λ.

Axiom 3: (Subadditivity Axiom) For every countable sequence of events Λ 1,Λ 2,⋯, we have

$ℳ\left\{\bigcup _{i=1}^{\infty }{\mathrm{\Lambda }}_{i}\right\}\le \sum _{i=1}^{\infty }ℳ\left\{{\mathrm{\Lambda }}_{i}\right\}.$

Besides, in order to provide the operational law, Liu  defined the product uncertain measure on the product σ-algebra $\mathcal{L}$ as follows.

Axiom 4: (Product Axiom) Let $\left({\mathrm{\Gamma }}_{k},{\mathcal{L}}_{k},{ℳ}_{k}\right)$ be uncertainty spaces for k = 1,2,⋯ Then the product uncertain measure $ℳ$ is an uncertain measure satisfying

$ℳ\left\{\prod _{k=1}^{\infty }{\mathrm{\Lambda }}_{k}\right\}=\underset{k=1}{\overset{\infty }{\wedge }}{ℳ}_{k}\left\{{\mathrm{\Lambda }}_{k}\right\}$

where Λ k  are arbitrarily chosen events from ${\mathcal{L}}_{k}$ for k = 1,2,⋯, respectively.

An uncertain variable is essentially a measurable function from an uncertainty space to the set of real numbers. In order to describe an uncertain variable, a concept of uncertainty distribution is defined as follows.

#### Definition 2.

(Liu ) The uncertainty distribution of an uncertain variable ξ is defined by

$\mathrm{\Phi }\left(x\right)=ℳ\left\{\xi \le x\right\}$

for any $x\mathrm{\in }\mathfrak{R}.$

Expected value is regarded as the average value of an uncertain variable in the sense of uncertain measure.

#### Definition 3.

(Liu ) The expected value of an uncertain variable ξ is defined by

$E\left[\xi \right]={\int }_{0}^{+\infty }ℳ\left\{\xi \ge x\right\}\mathrm{d}x-{\int }_{-\infty }^{0}ℳ\left\{\xi \le x\right\}\mathrm{d}x$

provided that at least one of the two integrals exists.

Assuming ξ has an uncertainty distribution Φ, Liu  proved that the expected value of ξ is

$E\left[\xi \right]={\int }_{0}^{+\infty }\left(1-\mathrm{\Phi }\left(x\right)\right)\mathrm{d}x-{\int }_{-\infty }^{0}\mathrm{\Phi }\left(x\right)\mathrm{d}x.$

The inverse function Φ−1 of the uncertainty distribution Φ of an uncertain variable ξ is called the inverse uncertainty distribution of ξ if it exists and is unique for each α ∈ (0,1). The inverse uncertainty distribution plays a crucial role in the operations of independent uncertain variables.

#### Definition 4.

(Liu) The uncertain variables ξ 1,ξ 2,⋯,ξ n  are said to be independent if

$ℳ\left\{\bigcap _{i=1}^{n}\left({\xi }_{i}\in {B}_{i}\right)\right\}=\underset{i=1}{\overset{n}{\wedge }}ℳ\left\{{\xi }_{i}\in {B}_{i}\right\}$

for any Borel sets B 1,B 2,⋯,B n  of real numbers.

#### Theorem 1.

(Liu ) Let ξ 1,ξ 2,⋯,ξ n  be independent uncertain variables with uncertainty distributions Φ 1,Φ 2,⋯,Φ n , respectively. If f(x 1,x 2,⋯,x n ) is strictly increasing with respect to x 1,x 2,⋯,x m  and strictly decreasing with respect to x m+1,x m+2,⋯,x n , then ξ = f (ξ 1,ξ 2,⋯,ξ n ) is an uncertain variable with an inverse uncertainty distribution

${\mathrm{\Phi }}^{-1}\left(\alpha \right)=f\left({\mathrm{\Phi }}_{1}^{-1}\left(\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\mathrm{\Phi }}_{m}^{-1}\left(\alpha \right),{\mathrm{\Phi }}_{m+1}^{-1}\left(1-\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\mathrm{\Phi }}_{n}^{-1}\left(1-\alpha \right)\right).$

#### Theorem 2.

(Liu and Ha ) Let ξ 1,ξ 2,⋯,ξ n  be independent uncertain variables with uncertainty distributions Φ 1,Φ 2,⋯,Φ n , respectively. If f (x 1,x 2,⋯,x n ) is strictly increasing with respect to x 1,x 2,⋯,x m  and strictly decreasing with respect to x m+1,x m+2,⋯,x n , then the expected value of the uncertain variable ξ = f(ξ 1,ξ 2,⋯,ξ n ) is

$E\left[\xi \right]={\int }_{0}^{1}f\left({\mathrm{\Phi }}_{1}^{-1}\left(\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\mathrm{\Phi }}_{m}^{-1}\left(\alpha \right),{\mathrm{\Phi }}_{m+1}^{-1}\left(1-\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\mathrm{\Phi }}_{n}^{-1}\left(1-\alpha \right)\right)\mathrm{d}\alpha .$
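As an illustration of how Theorem 2 is used in practice, the integral over α can be approximated by a midpoint Riemann sum. The sketch below is our own, not from the paper; it assumes linear uncertain variables L(a,b), whose inverse uncertainty distribution is Φ−1(α) = a + α(b − a), and the names `inv_linear` and `expected_value` are illustrative.

```python
# Numerical sketch of Theorem 2 (our own illustrative code).

def inv_linear(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + alpha * (b - a)

def expected_value(f, inc_invs, dec_invs, n=10000):
    """Approximate E[f(xi_1, ..., xi_k)] by a midpoint Riemann sum over alpha.

    f must be strictly increasing in the arguments whose inverse
    distributions are listed in inc_invs, and strictly decreasing in
    those listed in dec_invs, as Theorem 2 requires.
    """
    total = 0.0
    for k in range(n):
        alpha = (k + 0.5) / n
        # Increasing arguments take alpha; decreasing ones take 1 - alpha.
        args = [p(alpha) for p in inc_invs] + [p(1.0 - alpha) for p in dec_invs]
        total += f(*args)
    return total / n

# E[xi1 - xi2] with xi1 ~ L(0, 2) and xi2 ~ L(1, 3); the exact value is
# E[xi1] - E[xi2] = 1 - 2 = -1 by linearity for independent uncertain variables.
e = expected_value(lambda x, y: x - y, [inv_linear(0.0, 2.0)], [inv_linear(1.0, 3.0)])
```

Here f(x,y) = x − y is increasing in x and decreasing in y, so the integrand evaluates the first inverse distribution at α and the second at 1 − α.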

### Uncertain process

In order to model the evolution of uncertain phenomena, an uncertain process was proposed by Liu  as a sequence of uncertain variables indexed by time or space.

#### Definition 5.

(Liu ) Let T be an index set, and let $\left(\mathrm{\Gamma },ℒ,ℳ\right)$ be an uncertainty space. An uncertain process is a measurable function from $T×\left(\mathrm{\Gamma },ℒ,ℳ\right)$ to the set of real numbers, i.e., for each t ∈ T and any Borel set B of real numbers, the set

$\left\{{X}_{t}\in B\right\}=\left\{\gamma |{X}_{t}\left(\gamma \right)\in B\right\}$

is an event.

#### Definition 6.

Let X t  be an uncertain process and let z be a given level. Then the uncertain variable

${\tau }_{z}=\text{inf}\left\{t\ge 0|{X}_{t}=z\right\}$

is called the first hitting time that X t  reaches the level z.

Independent increment uncertain process is an important type of uncertain processes. Its formal definition is given below.

#### Definition 7.

(Liu ) An uncertain process is said to have independent increments if

${X}_{{t}_{0}},{X}_{{t}_{1}}-{X}_{{t}_{0}},{X}_{{t}_{2}}-{X}_{{t}_{1}},\cdots \phantom{\rule{0.3em}{0ex}},{X}_{{t}_{k}}-{X}_{{t}_{k-1}}$

are independent uncertain variables, where t 0 is the initial time and t 1,t 2,⋯,t k  are any times with t 0 < t 1 < ⋯ < t k .

For a sample-continuous independent increment process X t , Liu  proved the following extreme value theorem:

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}{X}_{t}\le x\right\}=\underset{0\le t\le s}{\text{inf}}ℳ\left\{{X}_{t}\le x\right\},$
$ℳ\left\{\underset{0\le t\le s}{\text{inf}}{X}_{t}\le x\right\}=\underset{0\le t\le s}{\text{sup}}ℳ\left\{{X}_{t}\le x\right\}.$

#### Definition 8.

(Liu ) An uncertain process is said to have stationary increments if for any given t > 0, the increments X t+s  − X s  are identically distributed uncertain variables for all s>0.

### Uncertain calculus

#### Definition 9.

(Liu ) An uncertain process C t  is said to be a canonical Liu process if

(i) C 0 = 0 and almost all sample paths are Lipschitz continuous,

(ii) C t  has stationary and independent increments,

(iii) every increment C s+t  − C s  is a normal uncertain variable with expected value 0 and variance t², whose uncertainty distribution is

${\mathrm{\Phi }}_{t}\left(x\right)={\left(1+\text{exp}\left(-\frac{\pi x}{\sqrt{3}t}\right)\right)}^{-1},\phantom{\rule{1em}{0ex}}x\in \mathfrak{R}.$
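As a quick illustration (our own code, not part of the definition), the distribution in (iii) and its inverse can be evaluated directly; solving Φ t (x) = α gives x = (√3 t/π) ln(α/(1 − α)). The function names are our own.

```python
import math

def liu_increment_cdf(x, t):
    """Uncertainty distribution Phi_t of the increment C_{s+t} - C_s,
    a normal uncertain variable with expected value 0 and variance t^2."""
    return 1.0 / (1.0 + math.exp(-math.pi * x / (math.sqrt(3.0) * t)))

def liu_increment_inv(alpha, t):
    """Inverse uncertainty distribution, obtained by solving Phi_t(x) = alpha."""
    return (math.sqrt(3.0) * t / math.pi) * math.log(alpha / (1.0 - alpha))
```

Note that the distribution is symmetric about 0, consistent with the duality axiom: Φ t (x) + Φ t (−x) = 1.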

#### Definition 10.

(Liu ) Let X t  be an uncertain process and C t  be a canonical Liu process. For any partition of the closed interval [a,b] with a = t 1 < t 2 < ⋯ < t k+1 = b, the mesh is written as

$\mathrm{\Delta }=\underset{1\le i\le k}{\text{max}}|{t}_{i+1}-{t}_{i}|.$

Then the Liu integral of X t  with respect to C t  is defined by

$\underset{a}{\overset{b}{\int }}{X}_{t}\mathrm{d}{C}_{t}=\underset{\mathrm{\Delta }\to 0}{\text{lim}}\sum _{i=1}^{k}{X}_{{t}_{i}}·\left({C}_{{t}_{i+1}}-{C}_{{t}_{i}}\right)$

provided that the limit exists almost surely and is finite. In this case, the uncertain process X t  is said to be Liu integrable.

#### Definition 11.

(Liu ) Let C t  be a canonical Liu process, and μ s  and σ s  be two uncertain processes. Then the uncertain process

${Z}_{t}={Z}_{0}+{\int }_{0}^{t}{\mu }_{s}\mathrm{d}s+{\int }_{0}^{t}{\sigma }_{s}\mathrm{d}{C}_{s}$

is called a Liu process with drift μ t  and diffusion σ t . The differential form of Liu process is written as

$\mathrm{d}{Z}_{t}={\mu }_{t}\mathrm{d}t+{\sigma }_{t}\mathrm{d}{C}_{t}.$

#### Theorem 3.

(Liu ) (Fundamental Theorem of Uncertain Calculus) Let C t  be a canonical Liu process, and h(t,c) be a continuously differentiable function. Then Z t  = h(t,C t ) is a Liu process with

$\mathrm{d}{Z}_{t}=\frac{\partial h}{\partial t}\left(t,{C}_{t}\right)\mathrm{d}t+\frac{\partial h}{\mathrm{\partial c}}\left(t,{C}_{t}\right)\mathrm{d}{C}_{t}.$

### Uncertain differential equation

An uncertain differential equation is essentially a type of differential equation driven by Liu process.

#### Definition 12.

(Liu ) Suppose C t  is a canonical Liu process, and f and g are two given functions. Then

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

is called an uncertain differential equation.

Yao and Chen  proposed a concept of α-path, and found a connection between an uncertain differential equation and a spectrum of ordinary differential equations.

#### Definition 13.

(Yao and Chen ) Let α be a number with 0 < α < 1. An uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

is said to have an α-path ${X}_{t}^{\alpha }$ if it solves the corresponding ordinary differential equation

$\mathrm{d}{X}_{t}^{\alpha }=f\left(t,{X}_{t}^{\alpha }\right)\mathrm{d}t+|g\left(t,{X}_{t}^{\alpha }\right)|{\mathrm{\Phi }}^{-1}\left(\alpha \right)\mathrm{d}t$

where Φ−1(α) is the inverse uncertainty distribution of a standard normal uncertain variable, i.e., ${\mathrm{\Phi }}^{-1}\left(\alpha \right)=\frac{\sqrt{3}}{\pi }\text{ln}\frac{\alpha }{1-\alpha }.$

#### Theorem 4.

(Yao and Chen ) Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Then

$ℳ\left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}=\alpha ,$
$ℳ\left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}=1-\alpha .$

As a corollary, the solution X t  has an inverse uncertainty distribution ${\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right)={X}_{t}^{\alpha }.$

## Extreme values

In this section, we study the extreme values of the solution of an uncertain differential equation, and give their uncertainty distributions. In addition, we design some numerical methods to obtain the uncertainty distributions.

### Supremum

#### Theorem 5.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Then for a strictly increasing function J(x), the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right).$

#### Proof.

Since J(x) is a strictly increasing function, we have

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\right\}\supset \left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}$

and

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\right\}\supset \left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\right\}=\alpha ,$

i.e., the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right).$

In order to calculate the inverse uncertainty distribution of the supremum, we design a numerical method as below. □

Step 1: Fix α in (0,1), and fix h as the step length. Set i = 0, N = s/h, ${X}_{0}^{\alpha }={X}_{0}$, and H = J(X 0).

Step 2: Employ the recursion formula

${X}_{i+1}^{\alpha }={X}_{i}^{\alpha }+f\left({t}_{i},{X}_{i}^{\alpha }\right)h+g\left({t}_{i},{X}_{i}^{\alpha }\right){\mathrm{\Phi }}^{-1}\left(\alpha \right)h,$

and calculate ${X}_{i+1}^{\alpha }.$

Step 3: Set $H←\text{max}\phantom{\rule{.5em}{0ex}}\left(H,J\left({X}_{i+1}^{\alpha }\right)\right),i←i+1.$

Step 4: Repeat Step 2 and Step 3 for N times.

Step 5: The inverse uncertainty distribution of

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

is determined by

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=H.$
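Steps 1 to 5 can be sketched in Python as follows. This is our own illustrative code, not the paper's: the function names, the step length, and the linear test equation dX t  = dt + 2dC t  in the example are assumptions.

```python
import math

def inv_std_normal(alpha):
    """Inverse uncertainty distribution of a standard normal uncertain
    variable: (sqrt(3)/pi) * ln(alpha / (1 - alpha))."""
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def sup_inverse_distribution(f, g, J, x0, s, alpha, h=1e-3):
    """Psi_s^{-1}(alpha) for sup_{0<=t<=s} J(X_t) with J strictly increasing,
    via the Euler recursion along the alpha-path (Steps 1-5)."""
    n = max(1, round(s / h))
    h = s / n
    x, best = x0, J(x0)
    c = inv_std_normal(alpha)
    for i in range(n):
        # One Euler step of the alpha-path ODE.
        x = x + f(i * h, x) * h + g(i * h, x) * c * h
        best = max(best, J(x))
    return best

# Example: dX_t = dt + 2 dC_t with X_0 = 0, J(x) = x, s = 1, alpha = 0.8.
# The alpha-path is X_t^alpha = (1 + 2 Phi^{-1}(alpha)) t, so the supremum
# is attained at t = 1 because the slope is positive for alpha = 0.8.
psi_inv = sup_inverse_distribution(lambda t, x: 1.0, lambda t, x: 2.0,
                                   lambda x: x, 0.0, 1.0, 0.8)
```

For constant coefficients the Euler recursion reproduces the α-path exactly, so the example can be checked against the closed form 1 + 2Φ−1(0.8).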

#### Theorem 6.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t . Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then for a strictly increasing function J(x), the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\mathrm{\Psi }}_{s}\left(x\right)=\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Proof.

Since ${X}_{t}^{\alpha }={\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right),$ we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right)\right)\right\}=\alpha$

by Theorem 5. Write

$x=\underset{0\le t\le s}{\text{sup}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right)\right),$

i.e.,

$\alpha =\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

Then we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le x\right\}=\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

In other words, the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\mathrm{\Psi }}_{s}\left(x\right)=\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Theorem 7.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Then for a strictly decreasing function J(x), the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right).$

#### Proof.

Since J(x) is a strictly decreasing function, we have

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right)\right\}\supset \left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}$

and

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right)\right\}\supset \left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right)\right\}=\alpha ,$

i.e., the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{1-\alpha }\right).$

#### Theorem 8.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t . Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then for a strictly decreasing function J(x), the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\mathrm{\Psi }}_{s}\left(x\right)=1-\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Proof.

Since ${X}_{t}^{1-\alpha }={\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right),$ we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{sup}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right)\right)\right\}=\alpha$

by Theorem 7. Write

$x=\underset{0\le t\le s}{\text{sup}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right)\right),$

i.e.,

$\alpha =1-\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

Then we have

$ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\le x\right\}=1-\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

In other words, the supremum

$\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\mathrm{\Psi }}_{s}\left(x\right)=1-\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

### Infimum

#### Theorem 9.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Then for a strictly increasing function J(x), the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right).$

#### Proof.

Since J(x) is a strictly increasing function, we have

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\right\}\supset \left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}$

and

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\right\}\supset \left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\right\}=\alpha ,$

i.e., the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right).$

In order to calculate the inverse uncertainty distribution of the infimum, we design a numerical method as below. □

Step 1: Fix α in (0,1), and fix h as the step length. Set i = 0, N = s/h, ${X}_{0}^{\alpha }={X}_{0}$, and H = J(X 0).

Step 2: Employ the recursion formula

${X}_{i+1}^{\alpha }={X}_{i}^{\alpha }+f\left({t}_{i},{X}_{i}^{\alpha }\right)h+g\left({t}_{i},{X}_{i}^{\alpha }\right){\mathrm{\Phi }}^{-1}\left(\alpha \right)h,$

and calculate ${X}_{i+1}^{\alpha }$ and $J\left({X}_{i+1}^{\alpha }\right).$

Step 3: Set $H←\text{min}\left(H,J\left({X}_{i+1}^{\alpha }\right)\right),i←i+1.$

Step 4: Repeat Step 2 and Step 3 for N times.

Step 5: The inverse uncertainty distribution of $\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$ is determined by

${\Upsilon }_{s}^{-1}\left(\alpha \right)=H.$
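As with the supremum, the recursion above can be sketched in Python, tracking the running minimum instead of the maximum. The code and the test equation dX t  = −dt + 0.5dC t  are our own illustrative assumptions, not the paper's.

```python
import math

def inv_std_normal(alpha):
    """Inverse uncertainty distribution of a standard normal uncertain variable."""
    return (math.sqrt(3.0) / math.pi) * math.log(alpha / (1.0 - alpha))

def inf_inverse_distribution(f, g, J, x0, s, alpha, h=1e-3):
    """Upsilon_s^{-1}(alpha) for inf_{0<=t<=s} J(X_t) with J strictly
    increasing: the same Euler alpha-path recursion as for the supremum,
    but keeping the minimum of J along the path (Steps 1-5)."""
    n = max(1, round(s / h))
    h = s / n
    x, worst = x0, J(x0)
    c = inv_std_normal(alpha)
    for i in range(n):
        x = x + f(i * h, x) * h + g(i * h, x) * c * h
        worst = min(worst, J(x))
    return worst

# Example: dX_t = -dt + 0.5 dC_t with X_0 = 0, J(x) = x, s = 1, alpha = 0.2.
# The alpha-path slope -1 + 0.5 Phi^{-1}(0.2) is negative, so the infimum
# is attained at t = 1.
ups_inv = inf_inverse_distribution(lambda t, x: -1.0, lambda t, x: 0.5,
                                   lambda x: x, 0.0, 1.0, 0.2)
```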

#### Theorem 10.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t . Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then for a strictly increasing function J(x), the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\Upsilon }_{s}\left(x\right)=\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Proof.

Since ${X}_{t}^{\alpha }={\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right),$ we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right)\right)\right\}=\alpha$

by Theorem 9. Write

$x=\underset{0\le t\le s}{\text{inf}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(\alpha \right)\right),$

i.e.,

$\alpha =\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

Then we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le x\right\}=\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

In other words, the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\Upsilon }_{s}\left(x\right)=\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Theorem 11.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Then for a strictly decreasing function J(x), the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right).$

#### Proof.

Since J(x) is a strictly decreasing function, we have

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right)\right\}\supset \left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}$

and

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right)\right\}\supset \left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)>\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right)\right\}\ge ℳ\left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right)\right\}=\alpha ,$

i.e., the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)=\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{1-\alpha }\right).$

#### Theorem 12.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t . Assume X t has an uncertainty distribution Φ t (x) at each time t. Then for a strictly decreasing function J(x), the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\Upsilon }_{s}\left(x\right)=1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

#### Proof.

Since ${X}_{t}^{1-\alpha }={\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right),$ we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le \underset{0\le t\le s}{\text{inf}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right)\right)\right\}=\alpha$

by Theorem 11. Write

$x=\underset{0\le t\le s}{\text{inf}}J\left({\mathrm{\Phi }}_{t}^{-1}\left(1-\alpha \right)\right),$

i.e.,

$\alpha =1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

Then we have

$ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le x\right\}=1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

In other words, the infimum

$\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)$

has an uncertainty distribution

${\Upsilon }_{s}\left(x\right)=1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(x\right)\right).$

## First hitting time

In this section, we study the first hitting time of the solution of an uncertain differential equation, and give its uncertainty distribution in different cases.

### First hitting time of strictly increasing function of the solution

#### Theorem 13.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

with an initial value X 0, respectively. Given a strictly increasing function J(x), and a level z > J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=1-\text{inf}\left\{\alpha \in \left(0,1\right)\phantom{\rule{0.3em}{0ex}}|\phantom{\rule{0.3em}{0ex}}\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\ge z\right\}.$

#### Proof.

Write

${\alpha }_{0}=\text{inf}\left\{\alpha \in \left(0,1\right)\phantom{\rule{0.3em}{0ex}}|\phantom{\rule{0.3em}{0ex}}\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\ge z\right\}.$

Since J(x) is a strictly increasing function, we have

$\left\{{\tau }_{z}\le s\right\}\supset \left\{J\left({X}_{t}\right)\ge J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}\ge {X}_{t}^{{\alpha }_{0}},\forall t\right\},$
$\left\{{\tau }_{z}>s\right\}\supset \left\{J\left({X}_{t}\right)<J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}<{X}_{t}^{{\alpha }_{0}},\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\tau }_{z}\le s\right\}\ge ℳ\left\{{X}_{t}\ge {X}_{t}^{{\alpha }_{0}},\forall t\right\}=1-{\alpha }_{0},$
$ℳ\left\{{\tau }_{z}>s\right\}\ge ℳ\left\{{X}_{t}<{X}_{t}^{{\alpha }_{0}},\forall t\right\}={\alpha }_{0}.$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\tau }_{z}\le s\right\}=1-{\alpha }_{0}.$

This completes the proof. □

For a strictly increasing function J(x), in order to calculate the uncertainty distribution Ψ(s) of the first hitting time τ z  that J(X t ) reaches z when J(X 0) < z, we design a numerical method as below.

Step 1: Fix ε as the accuracy, and fix h as the step length. Set N = s/h.

Step 2: Employ the recursion formula

${X}_{i+1}^{\epsilon }={X}_{i}^{\epsilon }+f\left({t}_{i},{X}_{i}^{\epsilon }\right)h+g\left({t}_{i},{X}_{i}^{\epsilon }\right){\mathrm{\Phi }}^{-1}\left(\epsilon \right)h$

for N times, and calculate ${X}_{i}^{\epsilon },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{max}}J\left({X}_{i}^{\epsilon }\right)\ge z,$

then return 1 − ε and stop.

Step 3: Employ the recursion formula

${X}_{i+1}^{1-\epsilon }={X}_{i}^{1-\epsilon }+f\left({t}_{i},{X}_{i}^{1-\epsilon }\right)h+g\left({t}_{i},{X}_{i}^{1-\epsilon }\right){\mathrm{\Phi }}^{-1}\left(1-\epsilon \right)h$

for N times, and calculate ${X}_{i}^{1-\epsilon },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{max}}J\left({X}_{i}^{1-\epsilon }\right)<z,$

then return ε and stop.

Step 4: Set α 1 = ε, α 2 = 1 − ε.

Step 5: Set α = (α 1 + α 2)/2.

Step 6: Employ the recursion formula

${X}_{i+1}^{\alpha }={X}_{i}^{\alpha }+f\left({t}_{i},{X}_{i}^{\alpha }\right)h+g\left({t}_{i},{X}_{i}^{\alpha }\right){\mathrm{\Phi }}^{-1}\left(\alpha \right)h$

for N times, and calculate ${X}_{i}^{\alpha },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{max}}J\left({X}_{i}^{\alpha }\right)<z,$

then set α 1 = α. Otherwise, set α 2 = α.

Step 7: If |α 2 − α 1| ≤ ε, then return 1 − α and stop. Otherwise, go to Step 5.
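The bisection scheme above can be condensed into a short program. The sketch below is an illustration rather than part of the source: it assumes C t  is a standard Liu process, so that the inverse standard normal uncertainty distribution is Φ−1(α) = (√3/π) ln(α/(1 − α)), and the function names are our own.

```python
import math

def inv_std_normal(alpha):
    # Inverse uncertainty distribution of a standard normal uncertain
    # variable: Phi^{-1}(alpha) = (sqrt(3)/pi) * ln(alpha / (1 - alpha))
    return math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))

def alpha_path_max(f, g, x0, J, s, h, alpha):
    # Euler recursion for the alpha-path; returns max_{1<=i<=N} J(X_i^alpha)
    n = int(s / h)
    x, t, m = x0, 0.0, -math.inf
    for _ in range(n):
        x = x + f(t, x) * h + g(t, x) * inv_std_normal(alpha) * h
        t += h
        m = max(m, J(x))
    return m

def hitting_distribution(f, g, x0, J, z, s, h=1e-3, eps=1e-4):
    # Psi(s) for a strictly increasing J with z > J(x0)
    if alpha_path_max(f, g, x0, J, s, h, eps) >= z:      # Step 2
        return 1 - eps
    if alpha_path_max(f, g, x0, J, s, h, 1 - eps) < z:   # Step 3
        return eps
    a1, a2 = eps, 1 - eps                                # Step 4
    while abs(a2 - a1) > eps:                            # Steps 5-7
        a = (a1 + a2) / 2
        if alpha_path_max(f, g, x0, J, s, h, a) < z:
            a1 = a
        else:
            a2 = a
    return 1 - (a1 + a2) / 2
```

For instance, for dX t  = dC t  with X 0 = 0, J(x) = x and z = 1, the routine returns approximately 1 − Φ(1) ≈ 0.14, where Φ is the standard normal uncertainty distribution.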

#### Theorem 14.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t  with an initial value X 0. Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then given a strictly increasing function J(x) and a level z > J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).$

#### Proof.

Since the event {τ z  ≤ s} is equivalent to the event

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\ge z\right\}$

provided z > J(X 0), it follows from Theorem 6 that

$\begin{array}{ll}\phantom{\rule{1em}{0ex}}\mathrm{\Psi }\left(s\right)& =ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\ge z\right\}\phantom{\rule{2em}{0ex}}\\ =ℳ\left\{\underset{0\le t\le s}{\text{sup}}{X}_{t}\ge {J}^{-1}\left(z\right)\right\}\phantom{\rule{2em}{0ex}}\\ =1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).\phantom{\rule{2em}{0ex}}\end{array}$

This completes the proof. □
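To illustrate Theorem 14, consider (as an added example under stated assumptions, not from the source) the linear equation dX t  = μdt + σdC t  with constants μ > 0, σ > 0, and take J(x) = x. Its solution X t  = X 0 + μt + σC t  is a normal uncertain variable with uncertainty distribution

$\mathrm{\Phi }_{t}\left(x\right)={\left(1+\text{exp}\left(\frac{\pi \left({X}_{0}+\mu t-x\right)}{\sqrt{3}\sigma t}\right)\right)}^{-1}.$

For z > X 0, the distribution Φ t (z) is decreasing in t, so the infimum in Theorem 14 is attained at t = s, giving

$\mathrm{\Psi }\left(s\right)=1-{\mathrm{\Phi }}_{s}\left(z\right)={\left(1+\text{exp}\left(\frac{\pi \left(z-{X}_{0}-\mu s\right)}{\sqrt{3}\sigma s}\right)\right)}^{-1}.$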

#### Theorem 15.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

with an initial value X 0, respectively. Given a strictly increasing function J(x) and a level z < J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\Upsilon \left(s\right)=\text{sup}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\le z\right\}.$

#### Proof.

Write

${\alpha }_{0}=\text{sup}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\le z\right\}.$

Then

$\left\{{\tau }_{z}\le s\right\}\supset \left\{J\left({X}_{t}\right)\le J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}\le {X}_{t}^{{\alpha }_{0}},\forall t\right\},$
$\left\{{\tau }_{z}>s\right\}\supset \left\{J\left({X}_{t}\right)>J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}>{X}_{t}^{{\alpha }_{0}},\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\tau }_{z}\le s\right\}\ge ℳ\left\{{X}_{t}\le {X}_{t}^{{\alpha }_{0}},\forall t\right\}={\alpha }_{0},$
$ℳ\left\{{\tau }_{z}>s\right\}\ge ℳ\left\{{X}_{t}>{X}_{t}^{{\alpha }_{0}},\forall t\right\}=1-{\alpha }_{0}.$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\tau }_{z}\le s\right\}={\alpha }_{0}.$

This completes the proof. □

For a strictly increasing function J(x), in order to calculate the uncertainty distribution ϒ(s) of the first hitting time τ z that J(X t ) reaches z when J(X 0) > z, we design a numerical method as below.

Step 1: Fix ε as the accuracy, and fix h as the step length. Set N = s/h.

Step 2: Employ the recursion formula

${X}_{i+1}^{\epsilon }={X}_{i}^{\epsilon }+f\left({t}_{i},{X}_{i}^{\epsilon }\right)h+g\left({t}_{i},{X}_{i}^{\epsilon }\right){\mathrm{\Phi }}^{-1}\left(\epsilon \right)h$

for N times, and calculate ${X}_{i}^{\epsilon },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{min}}J\left({X}_{i}^{\epsilon }\right)>z,$

then return ε and stop.

Step 3: Employ the recursion formula

${X}_{i+1}^{1-\epsilon }={X}_{i}^{1-\epsilon }+f\left({t}_{i},{X}_{i}^{1-\epsilon }\right)h+g\left({t}_{i},{X}_{i}^{1-\epsilon }\right){\mathrm{\Phi }}^{-1}\left(1-\epsilon \right)h$

for N times, and calculate ${X}_{i}^{1-\epsilon },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{min}}J\left({X}_{i}^{1-\epsilon }\right)\le z,$

then return 1 − ε and stop.

Step 4: Set α 1 = ε, α 2 = 1 − ε.

Step 5: Set α = (α 1 + α 2)/2.

Step 6: Employ the recursion formula

${X}_{i+1}^{\alpha }={X}_{i}^{\alpha }+f\left({t}_{i},{X}_{i}^{\alpha }\right)h+g\left({t}_{i},{X}_{i}^{\alpha }\right){\mathrm{\Phi }}^{-1}\left(\alpha \right)h$

for N times, and calculate ${X}_{i}^{\alpha },i=1,2,\cdots \phantom{\rule{0.3em}{0ex}},N.$ If

$\underset{1\le i\le N}{\text{min}}J\left({X}_{i}^{\alpha }\right)\le z,$

then set α 1 = α. Otherwise, set α 2 = α.

Step 7: If |α 2 − α 1| ≤ ε, then return α and stop. Otherwise, go to Step 5.
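This mirrored bisection admits an equally short sketch (again an illustration, not part of the source; it assumes a standard Liu process with Φ−1(α) = (√3/π) ln(α/(1 − α)), and the names are our own):

```python
import math

def inv_std_normal(alpha):
    # Phi^{-1}(alpha) for a standard normal uncertain variable
    return math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))

def alpha_path_min(f, g, x0, J, s, h, alpha):
    # Euler recursion for the alpha-path; returns min_{1<=i<=N} J(X_i^alpha)
    n = int(s / h)
    x, t, m = x0, 0.0, math.inf
    for _ in range(n):
        x = x + f(t, x) * h + g(t, x) * inv_std_normal(alpha) * h
        t += h
        m = min(m, J(x))
    return m

def hitting_distribution_down(f, g, x0, J, z, s, h=1e-3, eps=1e-4):
    # Upsilon(s) for a strictly increasing J with z < J(x0)
    if alpha_path_min(f, g, x0, J, s, h, eps) > z:        # lowest path never reaches z
        return eps
    if alpha_path_min(f, g, x0, J, s, h, 1 - eps) <= z:   # highest path already reaches z
        return 1 - eps
    a1, a2 = eps, 1 - eps
    while abs(a2 - a1) > eps:                             # bisection on alpha
        a = (a1 + a2) / 2
        if alpha_path_min(f, g, x0, J, s, h, a) <= z:
            a1 = a
        else:
            a2 = a
    return (a1 + a2) / 2
```

For dX t  = dC t  with X 0 = 0, J(x) = x and z = −1, this returns approximately Φ(−1) ≈ 0.14.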

#### Theorem 16.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t  with an initial value X 0. Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then given a strictly increasing function J(x) and a level z < J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).$

#### Proof.

Since the event {τ z  ≤ s} is equivalent to the event

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le z\right\}$

provided z < J(X 0), it follows from Theorem 10 that

$\begin{array}{ll}\phantom{\rule{1em}{0ex}}\mathrm{\Psi }\left(s\right)& =ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le z\right\}\phantom{\rule{2em}{0ex}}\\ =ℳ\left\{\underset{0\le t\le s}{\text{inf}}{X}_{t}\le {J}^{-1}\left(z\right)\right\}\phantom{\rule{2em}{0ex}}\\ =\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).\phantom{\rule{2em}{0ex}}\end{array}$

This completes the proof. □

### First hitting time of strictly decreasing function of the solution

#### Theorem 17.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

with an initial value X 0, respectively. Given a strictly decreasing function J(x), and a level z > J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=\text{sup}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\ge z\right\}.$

#### Proof.

Write

${\alpha }_{0}=\text{sup}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}^{\alpha }\right)\ge z\right\}.$

Since J(x) is a strictly decreasing function, we have

$\left\{{\tau }_{z}\le s\right\}\supset \left\{J\left({X}_{t}\right)\ge J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}\le {X}_{t}^{{\alpha }_{0}},\forall t\right\},$
$\left\{{\tau }_{z}>s\right\}\supset \left\{J\left({X}_{t}\right)<J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}>{X}_{t}^{{\alpha }_{0}},\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\tau }_{z}\le s\right\}\ge ℳ\left\{{X}_{t}\le {X}_{t}^{{\alpha }_{0}},\forall t\right\}={\alpha }_{0},$
$ℳ\left\{{\tau }_{z}>s\right\}\ge ℳ\left\{{X}_{t}>{X}_{t}^{{\alpha }_{0}},\forall t\right\}=1-{\alpha }_{0}.$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\tau }_{z}\le s\right\}={\alpha }_{0}.$

This completes the proof. □

#### Theorem 18.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t  with an initial value X 0. Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then given a strictly decreasing function J(x) and a level z > J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).$

#### Proof.

Since the event {τ z  ≤ s} is equivalent to the event

$\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\ge z\right\}$

provided z > J(X 0), it follows from Theorem 10 that

$\begin{array}{ll}\phantom{\rule{1em}{0ex}}\mathrm{\Psi }\left(s\right)& =ℳ\left\{\underset{0\le t\le s}{\text{sup}}J\left({X}_{t}\right)\ge z\right\}\phantom{\rule{2em}{0ex}}\\ =ℳ\left\{\underset{0\le t\le s}{\text{inf}}{X}_{t}\le {J}^{-1}\left(z\right)\right\}\phantom{\rule{2em}{0ex}}\\ =\underset{0\le t\le s}{\text{sup}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).\phantom{\rule{2em}{0ex}}\end{array}$

This completes the proof. □

#### Theorem 19.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

with an initial value X 0, respectively. Given a strictly decreasing function J(x) and a level z < J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\Upsilon \left(s\right)=1-\text{inf}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\le z\right\}.$

#### Proof.

Write

${\alpha }_{0}=\text{inf}\left\{\alpha \in \left(0,1\right)\,\middle|\,\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}^{\alpha }\right)\le z\right\}.$

Then

$\left\{{\tau }_{z}\le s\right\}\supset \left\{J\left({X}_{t}\right)\le J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}\ge {X}_{t}^{{\alpha }_{0}},\forall t\right\},$
$\left\{{\tau }_{z}>s\right\}\supset \left\{J\left({X}_{t}\right)>J\left({X}_{t}^{{\alpha }_{0}}\right),\forall t\right\}=\left\{{X}_{t}<{X}_{t}^{{\alpha }_{0}},\forall t\right\}.$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\tau }_{z}\le s\right\}\ge ℳ\left\{{X}_{t}\ge {X}_{t}^{{\alpha }_{0}},\forall t\right\}=1-{\alpha }_{0},$
$ℳ\left\{{\tau }_{z}>s\right\}\ge ℳ\left\{{X}_{t}<{X}_{t}^{{\alpha }_{0}},\forall t\right\}={\alpha }_{0}.$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\tau }_{z}\le s\right\}=1-{\alpha }_{0}.$

This completes the proof. □

#### Theorem 20.

Let X t  be the solution of an uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t  with an initial value X 0. Assume X t  has an uncertainty distribution Φ t (x) at each time t. Then given a strictly decreasing function J(x) and a level z < J(X 0), the first hitting time τ z  that J(X t ) reaches z has an uncertainty distribution

$\mathrm{\Psi }\left(s\right)=1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).$

#### Proof.

Since the event {τ z  ≤ s} is equivalent to the event

$\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le z\right\}$

provided z < J(X 0), it follows from Theorem 6 that

$\begin{array}{ll}\phantom{\rule{1em}{0ex}}\mathrm{\Psi }\left(s\right)& =ℳ\left\{\underset{0\le t\le s}{\text{inf}}J\left({X}_{t}\right)\le z\right\}\phantom{\rule{2em}{0ex}}\\ =ℳ\left\{\underset{0\le t\le s}{\text{sup}}{X}_{t}\ge {J}^{-1}\left(z\right)\right\}\phantom{\rule{2em}{0ex}}\\ =1-\underset{0\le t\le s}{\text{inf}}{\mathrm{\Phi }}_{t}\left({J}^{-1}\left(z\right)\right).\phantom{\rule{2em}{0ex}}\end{array}$

This completes the proof. □

## Integral

In this section, we study the integral of the solution of an uncertain differential equation, and give its uncertainty distribution. Besides, we design a numerical method to obtain the uncertainty distribution.

### Theorem 21.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Assume J(x) is a strictly increasing function. Then the integral

${\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)={\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t.$

### Proof.

Since J(x) is a strictly increasing function, we have

$\begin{array}{l}\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t\right\}\supset \left\{J\left({X}_{t}\right)\le J\left({X}_{t}^{\alpha }\right),\forall t\right\}=\left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}\end{array}$

and

$\begin{array}{l}\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t>{\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t\right\}\supset \left\{J\left({X}_{t}\right)>J\left({X}_{t}^{\alpha }\right),\forall t\right\}=\left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}.\end{array}$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t\right\}\ge ℳ\left\{{X}_{t}\le {X}_{t}^{\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t>{\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t\right\}\ge ℳ\left\{{X}_{t}>{X}_{t}^{\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t\right\}=\alpha .$

In other words, the integral

${\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)={\int }_{0}^{s}J\left({X}_{t}^{\alpha }\right)\mathrm{d}t.$

This completes the proof. □

### Example 1.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation dX t  = f(t,X t )dt + g(t,X t )dC t , respectively. Consider a function h(t,x) = exp(−r t) x. Since h(t,x) is strictly increasing with respect to x, the integral

${\int }_{0}^{s}h\left(t,{X}_{t}\right)\mathrm{d}t={\int }_{0}^{s}\text{exp}\left(-\mathit{\text{rt}}\right){X}_{t}\mathrm{d}t$

has an inverse uncertainty distribution

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)={\int }_{0}^{s}h\left(t,{X}_{t}^{\alpha }\right)\mathrm{d}t={\int }_{0}^{s}\text{exp}\left(-\mathit{\text{rt}}\right){X}_{t}^{\alpha }\mathrm{d}t.$
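The discounted integral in Example 1 can be evaluated numerically along an α-path. The sketch below is illustrative (it assumes a standard Liu process, an Euler discretization of the α-path, and a right Riemann sum; the function name is ours):

```python
import math

def discounted_integral_alpha_path(f, g, x0, r, s, alpha, h=1e-3):
    # Approximates int_0^s exp(-r t) X_t^alpha dt, i.e. Psi_s^{-1}(alpha)
    # for h(t, x) = exp(-r t) x in Example 1.
    c = math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))  # Phi^{-1}(alpha)
    n = int(s / h)
    x, t, total = x0, 0.0, 0.0
    for _ in range(n):
        # Euler step for the alpha-path of dX = f dt + g dC
        x = x + f(t, x) * h + g(t, x) * c * h
        t += h
        total += math.exp(-r * t) * x * h  # Riemann sum of the discounted path
    return total
```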

When J(x) is a strictly increasing function, in order to calculate the uncertainty distribution of the integral of J(X t ), we design a numerical method as below.

Step 1: Fix α in (0,1), and fix h as the step length. Set i = 0, N = s/h, and ${X}_{0}^{\alpha }={X}_{0}.$

Step 2: Employ the recursion formula

${X}_{i+1}^{\alpha }={X}_{i}^{\alpha }+f\left({t}_{i},{X}_{i}^{\alpha }\right)h+g\left({t}_{i},{X}_{i}^{\alpha }\right){\mathrm{\Phi }}^{-1}\left(\alpha \right)h,$

and calculate ${X}_{i+1}^{\alpha }$ and $J\left({X}_{i+1}^{\alpha }\right).$

Step 3: Set i ← i + 1.

Step 4: Repeat Step 2 and Step 3 for N times.

Step 5: The inverse uncertainty distribution of

${\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t$

is determined by

${\mathrm{\Psi }}_{s}^{-1}\left(\alpha \right)=\sum _{i=1}^{N}J\left({X}_{i}^{\alpha }\right)h.$
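The five steps above amount to one Euler sweep followed by a Riemann sum. A minimal sketch, assuming a standard Liu process (so Φ−1(α) = (√3/π) ln(α/(1 − α))) and our own function name:

```python
import math

def integral_inverse_distribution(f, g, x0, J, s, alpha, h=1e-3):
    # Approximates Psi_s^{-1}(alpha) = sum_{i=1}^{N} J(X_i^alpha) * h
    # for the integral of J(X_t) over [0, s].
    c = math.sqrt(3) / math.pi * math.log(alpha / (1 - alpha))  # Phi^{-1}(alpha)
    n = int(s / h)
    x, t, total = x0, 0.0, 0.0
    for _ in range(n):
        # Step 2: Euler recursion for the alpha-path
        x = x + f(t, x) * h + g(t, x) * c * h
        t += h
        # Step 5: accumulate the Riemann sum
        total += J(x) * h
    return total
```

For dX t  = dC t  with X 0 = 0 and J(x) = x, the exact value is Φ−1(α) · s²/2, which the sum approaches as h → 0.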

### Theorem 22.

Let X t  and ${X}_{t}^{\alpha }$ be the solution and α-path of the uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t},$

respectively. Assume J(x) is a strictly decreasing function. Then the integral

${\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)={\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t.$

### Proof.

Since J(x) is a strictly decreasing function, we have

$\begin{array}{ll}\phantom{\rule{6pt}{0ex}}\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t\right\}& \supset \left\{J\left({X}_{t}\right)\le J\left({X}_{t}^{1-\alpha }\right),\forall t\right\}\phantom{\rule{2em}{0ex}}\\ =\left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}\phantom{\rule{2em}{0ex}}\end{array}$

and

$\begin{array}{ll}\phantom{\rule{6pt}{0ex}}\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t>{\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t\right\}& \supset \left\{J\left({X}_{t}\right)>J\left({X}_{t}^{1-\alpha }\right),\forall t\right\}\phantom{\rule{2em}{0ex}}\\ =\left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}.\phantom{\rule{2em}{0ex}}\end{array}$

By Theorem 4 and the monotonicity of uncertain measure, we have

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t\right\}\ge ℳ\left\{{X}_{t}\ge {X}_{t}^{1-\alpha },\forall t\right\}=\alpha$

and

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t>{\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t\right\}\ge ℳ\left\{{X}_{t}<{X}_{t}^{1-\alpha },\forall t\right\}=1-\alpha .$

It follows from the duality axiom of uncertain measure that

$ℳ\left\{{\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t\le {\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t\right\}=\alpha .$

In other words, the integral

${\int }_{0}^{s}J\left({X}_{t}\right)\mathrm{d}t$

has an inverse uncertainty distribution

${\Upsilon }_{s}^{-1}\left(\alpha \right)={\int }_{0}^{s}J\left({X}_{t}^{1-\alpha }\right)\mathrm{d}t.$

This completes the proof. □

## Conclusions

This paper considered the solution of an uncertain differential equation, and gave the uncertainty distributions of its extreme values, first hitting time, and integral. In addition, we designed some numerical methods to obtain the uncertainty distributions.

## References

1. Kahneman D, Tversky A: Prospect theory: An analysis of decision under risk. Econometrica 1979,47(2):263–292. 10.2307/1914185

2. Liu B: Why is there a need for uncertainty theory. J. Uncertain Syst 2012,6(1):3–10.

3. Liu B: Uncertainty Theory. Springer, Berlin; 2007.

4. Liu B: Uncertainty Theory: A Branch of Mathematics for Modeling Human Uncertainty. Springer, Berlin; 2010.

5. Liu B: Theory and Practice of Uncertain Programming. Springer, Berlin; 2009.

6. Liu B: Uncertain risk analysis and uncertain reliability analysis. J. Uncertain Syst 2010,4(3):163–170.

7. Liu B: Uncertain set theory and uncertain inference rule with application to uncertain control. J. Uncertain Syst 2010,4(2):83–98.

8. Liu B: Uncertain logic for modeling human language. J. Uncertain Syst 2011,5(1):3–20.

9. Liu B: Fuzzy process, hybrid process and uncertain process. J. Uncertain Syst 2008,2(1):3–16.

10. Liu B: Some research problems in uncertainty theory. J. Uncertain Syst 2009,3(1):3–10.

11. Liu B, Yao K: Uncertain integral with respect to multiple canonical processes. J. Uncertain Syst 2012,6(4):249–254.

12. Chen X, Ralescu DA: Liu process and uncertain calculus. J. Uncertainty Anal. Appl 2013., 1: Article 3

13. Yao K: Uncertain calculus with renewal process. Fuzzy Optimization and Decis. Mak 2012,11(3):285–297.

14. Chen X, Liu B: Existence and uniqueness theorem for uncertain differential equations. Fuzzy Optimization and Decis. Mak 2010,9(1):69–81.

15. Liu YH: An analytic method for solving uncertain differential equations. J. Uncertain Syst 2012,6(4):243–248.

16. Yao K: A type of nonlinear uncertain differential equations with analytic solution. 2013.

17. Yao K, Chen X: A numerical method for solving uncertain differential equations. J. Intell. and Fuzzy Syst. 2013,25(3):825–832. 10.3233/IFS-120688

18. Barbacioru I: Uncertainty functional differential equations for finance. Surv. Math. Appl 2010, 5: 275–284.

19. Liu HJ, Fei WY: Neutral uncertain delay differential equations. Inf.: An Int. Interdiscip. J 2013,16(2):1225–1232.

20. Ge X, Zhu Y: Existence and uniqueness theorem for uncertain delay differential equations. J. Comput. Inf. Syst 2012,8(20):8341–8347.

21. Ge X, Zhu Y: A necessary condition of optimality for uncertain optimal control problem. Fuzzy Optimization and Decis. Mak 2013,12(1):41–51.

22. Liu B: Toward uncertain finance theory. J. Uncertainty Anal. Appl 2013., 1: Article 1

23. Chen X: American option pricing formula for uncertain financial market. Int. J. Oper. Res 2011,8(2):32–37.

24. Peng J, Yao K: A new option pricing model for stocks in uncertainty markets. Int. J. Oper. Res 2011,8(2):18–26.

25. Chen X, Liu YH, Ralescu DA: Uncertain stock model with periodic dividends. Fuzzy Optimization and Decis. Mak 2013b,12(1):111–123.

26. Chen X, Gao J: Uncertain term structure model of interest rate. Soft Computing 2013,17(4):597–604. 10.1007/s00500-012-0927-0

27. Liu YH, Ralescu DA, Chen X: Uncertain currency model and currency option pricing. International Journal of Intelligent Systems, to be published

28. Zhu Y: Uncertain optimal control with application to a portfolio selection model. Cybern. Syst 2010,41(7):535–547. 10.1080/01969722.2010.511552

29. Gao Y: Existence and uniqueness theorem on uncertain differential equations with local Lipschitz condition. J. Uncertain Syst 2012,6(3):223–232.

30. Yao K, Gao J, Gao Y: Some stability theorems of uncertain differential equation. Fuzzy Optimization and Decis. Mak 2013,12(1):3–13.

31. Liu YH, Ha MH: Expected value of function of uncertain variables. J. Uncertain Syst 2010,4(3):181–186.

32. Liu B: Extreme value theorems of uncertain process with application to insurance risk model. Soft Comput 2013,17(4):549–556. 10.1007/s00500-012-0930-5

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61273044).

## Author information


### Corresponding author

Correspondence to Kai Yao.

## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Yao, K. Extreme values and integral of solution of uncertain differential equation. J. Uncertain. Anal. Appl. 1, 2 (2013). https://doi.org/10.1186/2195-5468-1-2 