Open Access

Dependent-Chance Programming on Sugeno Measure Space

Journal of Uncertainty Analysis and Applications 2017 5:7

https://doi.org/10.1186/s40467-017-0061-8

Received: 10 January 2017

Accepted: 28 June 2017

Published: 11 July 2017

Abstract

To solve the optimization problem of selecting the decision with the maximal chance of meeting a Sugeno event in a Sugeno environment, dependent-chance programming on Sugeno measure space is proposed; it can be considered a generalization of stochastic dependent-chance programming. First, the theoretical framework of dependent-chance programming on Sugeno measure space is established. Second, a Sugeno simulation-based hybrid approach, combining a back propagation neural network and a genetic algorithm, is presented to construct approximate solutions of complex dependent-chance programming models on Sugeno measure space. Finally, numerical examples are given to illustrate the effectiveness of the approach.

Keywords

Sugeno measure space · Dependent-chance programming · Sugeno simulation · Hybrid approach

Introduction

There are many uncertainties in decision sciences, engineering, information sciences, system sciences, etc. Through uncertain mathematical programming, we can solve optimization problems in uncertain environments. The first method of uncertain mathematical programming is the expected value model (EVM) [1–4], which optimizes the expected objective functions subject to some expected constraints. The second method is chance-constrained programming (CCP) [5–10], which solves optimization problems by assigning a confidence level at which the constraints hold. Occasionally, a complex decision system undertakes multiple tasks called events, and the decision maker wishes to maximize the chance functions of satisfying these events [11]. To solve such problems, Liu initiated the third method of uncertain mathematical programming, named dependent-chance programming (DCP). He first proposed DCP in stochastic environments [12] and then gave the theoretical frameworks of DCP in fuzzy environments [11, 13], random fuzzy DCP [14], and fuzzy random DCP [9]. In the past few years, DCP has been used to solve many optimization problems, such as the dynamic facility layout problem [15], the bi-level resource-constrained project scheduling problem [16], and inventory modeling problems without and with backordering [17].

Despite these multiple versions of DCP, some limitations remain. For example, stochastic DCP is built on the probability measure, which must satisfy additivity, and fuzzy DCP deals with problems containing fuzziness. In reality, however, the additivity requirement cannot be easily satisfied or might not be satisfied at all [18], and the problems at hand may involve no fuzziness. Therefore, we introduce DCP on the Sugeno measure space, that is, the measure space based on the Sugeno measure, one of the representative nonadditive measures and an important generalization of the probability measure [19]. Let us give an example of purchasing apples to illustrate the point. For convenience, let the universe of discourse consist of two properties characterizing the apples, suitable price (a) and suitable quality (b), say X = {a, b}. Let P(X) denote the power set of X and μ describe an importance degree, or purchasing possibility, of the elements of P(X). Apples with too high a price (no suitable price) and too low a quality (no suitable quality) will not be purchased; in this case, the purchasing possibility equals 0. Apples with both suitable price and suitable quality will be purchased; in this case, the purchasing possibility equals 1. The two properties need not carry equal weight, which might result in purchasing possibilities of 0.5 for apples with only a suitable price and 0.2 for apples with only a suitable quality. Let
$$ \mu (E)=\left\{\begin{array}{c}\hfill 0,\kern0.8em E=\phi \hfill \\ {}\hfill 0.5,\kern0.6em E=\left\{ a\right\}\hfill \\ {}\hfill 0.2,\kern0.5em E=\left\{ b\right\}\hfill \\ {}\hfill \kern0.6em 1,\kern1em E= X.\kern0.4em \hfill \end{array}\right. $$

This measure expresses the subjectivity permeating the above problem. Evidently, μ is nonadditive (μ(X) ≠ μ({a}) + μ({b})); that is, μ is not a probability measure. It can be shown that μ is a Sugeno measure with λ = 3 [19]. The Sugeno measure and Sugeno measure space have been studied by many scholars. Wang and Klir [19] gave the basic definitions and properties of the Sugeno measure. Ha et al. [18] proposed the key theorem and the bounds on the rate of uniform convergence of learning theory on Sugeno measure space. Ha et al. [20, 21] gave the key theorem and the theoretical foundations of statistical learning theory based on fuzzy random samples on Sugeno measure space. Shi and Gao [22] studied quality evaluation of lexical cohesion based on the Sugeno measure. Zhang and Zhang [23] proved the Borel-Cantelli lemma for the Sugeno measure. To solve optimization problems on Sugeno measure space, Ha et al. [24] proposed expected value models and Zhang et al. [25] proposed chance-constrained programming on Sugeno measure space; the elemental concepts and properties were given, and hybrid algorithms were proposed to solve these programs.

The remainder of this paper is organized as follows. "Preliminaries" section discusses the g λ variable and its characterization, redefines its expected value and variance, and then revises the strong law of large numbers in [24]. "Dependent-Chance Programming on Sugeno Measure Space" section first proposes the concepts of Sugeno environment, event, and chance function and then gives the principle of uncertainty, which is the theoretical basis of DCP on Sugeno measure space; at the end of that section, the theoretical framework of DCP on Sugeno measure space is established. "A Hybrid Approach to Solve the DCP on Sugeno Measure Space" section gives a Sugeno simulation-based hybrid approach, consisting of a back propagation (BP) neural network and a genetic algorithm (GA), to solve DCP on Sugeno measure space. "Numerical Examples" section provides numerical examples to illustrate the methodology and the effectiveness of the approach. Finally, conclusions are drawn in the last section.

Preliminaries

For the sake of convenience and completeness of our investigations, we offer some basic definitions and properties.

Definition 1 [19] Let X be a nonempty set, ζ be a nonempty class of subsets of X, and μ be a nonnegative real-valued set function on ζ. If there exists \( \lambda \in \left(-\frac{1}{ \sup \mu},\infty \right)\cup \left\{0\right\} \), where \( \sup \mu ={ \sup}_{E\in \zeta}\mu (E) \), such that
$$ \mu \left({\displaystyle \underset{i=1}{\overset{\infty }{\cup }}{E}_i}\right)=\left\{\begin{array}{ll}\frac{1}{\lambda}\left\{{\displaystyle \prod_{i=1}^{\infty}\left[1+\lambda \cdot \mu \left({E}_i\right)\right]}-1\right\},\hfill & \lambda \ne 0\hfill \\ {}{\displaystyle \sum_{i=1}^{\infty}\mu \left({E}_i\right),}\hfill & \lambda =0\hfill \end{array}\right. $$

for any disjoint class {E i } of sets in ζ whose union is also in ζ, then we say that μ satisfies the σ-λ-rule (on ζ).
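For finitely many disjoint sets, the σ-λ-rule reduces to μ(A ∪ B) = μ(A) + μ(B) + λ · μ(A) · μ(B). The following minimal Python sketch (variable names are ours) uses this to confirm the claim in the apple example above that μ is a Sugeno measure with λ = 3:

```python
# sigma-lambda-rule for two disjoint sets:
# mu(A ∪ B) = mu(A) + mu(B) + lam * mu(A) * mu(B)
lam = 3.0
mu_a, mu_b = 0.5, 0.2  # mu({a}), mu({b}) from the apple example

mu_union = mu_a + mu_b + lam * mu_a * mu_b
print(mu_union)  # ≈ 1.0, matching mu(X) = 1
```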

Definition 2 Let \( \mathcal{F} \) be a σ-algebra of subsets of a nonempty set X and μ be a non-additive real-valued set function on \( \mathcal{F} \). Then, μ is called a Sugeno measure if it satisfies the σ-λ-rule and μ(X) = 1 [18]. Usually, the Sugeno measure μ is denoted by g λ . Then, the triple \( \left( X,\mathcal{F},{g}_{\lambda}\right) \) is called a Sugeno measure space [19].

Obviously, g λ is a flexible non-additive measure because the parameter λ can take different numeric values [18]. When λ = 0, g λ reduces to a probability measure and a Sugeno measure space reduces to a probability measure space. Therefore, we stipulate that λ ≠ 0 in the remainder of the article.

The following theorem shows the transformations between Sugeno measure and probability measure.

Theorem 1 [19] If g λ is a Sugeno measure and
$$ {\theta}_{\lambda}(x)=\frac{ \ln \left(1+\lambda x\right)}{ \ln \left(1+\lambda \right)}\left( x\in \left(-\frac{1}{\lambda},+\infty \right)\right), $$

then \( {\theta}_{\lambda}\circ {g}_{\lambda} \) is a probability measure.

Conversely, if P is a probability measure and
$$ {\theta_{\lambda}}^{-1}(x)=\left[{\left(1+\lambda \right)}^x-1\right]/\lambda \left( x\in \left(-\infty, +\infty \right)\right), $$

then \( {\theta_{\lambda}}^{-1}\circ P \) is a Sugeno measure.
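As a sanity check on Theorem 1, the following Python sketch (function names are ours) implements θ λ and its inverse and verifies that θ λ maps the nonadditive apple-example measure (λ = 3) to an additive one:

```python
import math

def theta(lam, x):
    """theta_lambda(x) = ln(1 + lam*x) / ln(1 + lam)."""
    return math.log(1.0 + lam * x) / math.log(1.0 + lam)

def theta_inv(lam, x):
    """theta_lambda^{-1}(x) = ((1 + lam)^x - 1) / lam."""
    return ((1.0 + lam) ** x - 1.0) / lam

lam = 3.0

# theta_inv inverts theta on (-1/lam, +inf)
assert abs(theta_inv(lam, theta(lam, 0.7)) - 0.7) < 1e-12

# theta ∘ g_lambda is additive on the apple example:
# theta(mu({a})) + theta(mu({b})) = theta(mu(X)) = 1
print(theta(lam, 0.5) + theta(lam, 0.2))  # ≈ 1.0
```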

Definition 3 [18] Let \( \left( X,\mathcal{F},{g}_{\lambda}\right) \) be a Sugeno measure space. A function ξ : X → ℜ is called a g λ variable if \( \left\{\omega \left|\xi \left(\omega \right)\le x\right.\right\}\in \mathcal{F} \) for all x ∈ ℜ.

Definition 4 [18] The Sugeno distribution function of a g λ variable ξ is defined as
$$ {F}_{g_{\lambda}}(x)={g}_{\lambda}\left\{\xi \le x\right\},\forall x\in \Re . $$
Example 1 [24] A g λ variable ξ has a Sugeno normal distribution if its Sugeno distribution function is
$$ {F}_{g_{\lambda}}(x)=\left\{\begin{array}{c}\hfill \frac{1}{\lambda}\left\{{\left(1+\lambda \right)}^{\frac{1}{\sqrt{2\pi}\sigma}{\displaystyle {\int}_{-\infty}^x{e}^{-\frac{{\left( t-\mu \right)}^2}{2{\sigma}^2}} dt}}-1\right\},\lambda \ne 0\hfill \\ {}\hfill \frac{1}{\sqrt{2\pi}\sigma}{\displaystyle {\int}_{-\infty}^x{e}^{-\frac{{\left( t-\mu \right)}^2}{2{\sigma}^2}} dt},\begin{array}{ccc}\hfill \hfill & \hfill \hfill & \hfill \hfill \end{array}\lambda =0,\hfill \end{array}\right. $$

denoted by ξ ~ SN(μ, σ 2, λ), where μ, σ, and λ are real numbers with σ > 0.
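Equivalently, by Theorem 1, the Sugeno normal distribution function is θ λ −1 applied to the ordinary normal distribution function. A small Python sketch (function names are ours), computing the normal CDF via the error function:

```python
import math

def theta_inv(lam, x):
    # theta_lambda^{-1}(x) = ((1 + lam)^x - 1) / lam
    return ((1.0 + lam) ** x - 1.0) / lam

def normal_cdf(x, mu, sigma):
    # Phi((x - mu) / sigma) via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def sugeno_normal_cdf(x, mu, sigma, lam):
    """F_{g_lambda}(x) for xi ~ SN(mu, sigma^2, lam) with lam != 0."""
    return theta_inv(lam, normal_cdf(x, mu, sigma))

# At x = mu the normal CDF equals 1/2, so F = ((1 + lam)^0.5 - 1)/lam:
print(sugeno_normal_cdf(0.0, 0.0, 1.0, 3.0))  # (4**0.5 - 1)/3 = 1/3
```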

Example 2 A g λ variable ξ has a Sugeno λ ‐ 0 ‐ 1 distribution if its Sugeno distribution function is as follows:

when λ ≠ 0,
$$ {F}_{g_{\lambda}}(x)=\left\{\begin{array}{c}\hfill \begin{array}{cc}\hfill 0,\begin{array}{cc}\hfill \hfill & \hfill \hfill \end{array}\hfill & \hfill \begin{array}{cc}\hfill \hfill & \hfill \kern3em x\le \kern.4em 0\hfill \end{array}\hfill \end{array}\hfill \\ {}\hfill \left[{\left(1+\lambda \right)}^x-1\right]/\lambda, \begin{array}{ccc}\hfill \hfill & \hfill 0< x<1\hfill & \hfill \hfill \end{array}\hfill \\ {}\hfill 1,\begin{array}{cc}\hfill \begin{array}{cc}\hfill \hfill & \hfill \hfill \end{array}\hfill & \hfill \begin{array}{cc}\hfill \hfill & \hfill \kern0.7em x\ge 1,\hfill \end{array}\hfill \end{array}\hfill \end{array}\right. $$
and when λ = 0,
$$ {F}_{g_{\lambda}}(x)=\left\{\begin{array}{c}\hfill \begin{array}{cc}\hfill 0,\begin{array}{cc}\hfill \hfill & \hfill x\le 0\hfill \end{array}\hfill & \hfill \begin{array}{cc}\hfill \hfill & \hfill \hfill \end{array}\hfill \end{array}\hfill \\ {}\hfill x,\begin{array}{ccc}\hfill \hfill & \hfill 0< x<1\hfill & \hfill \hfill \end{array}\hfill \\ {}\hfill 1,\begin{array}{cc}\hfill \begin{array}{cc}\hfill \hfill & \hfill x\ge 1,\hfill \end{array}\hfill & \hfill \begin{array}{cc}\hfill \hfill & \hfill \hfill \end{array}\hfill \end{array}\hfill \end{array}\right. $$

denoted by ξ ~ SU(λ), where λ is a real number.

We can note that SN(μ, σ 2, λ) reduces to the normal distribution N(μ, σ 2) and SU(λ) reduces to the uniform distribution U(0, 1) when λ = 0.

In the following Definitions 5 and 6, the expected value and the variance of a g λ variable are redefined, which revise the definitions in [24].

Definition 5 Let ξ be a g λ variable and \( {F}_{g_{\lambda}}(x) \) be the distribution function of ξ. If \( {\displaystyle {\int}_{-\infty}^{\infty}\left| x\right| d}{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]<\infty \), then we call \( {\theta_{\lambda}}^{-1}\left\{{\displaystyle {\int}_{-\infty}^{\infty } xd{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}\right\} \) an expected value of ξ and denote it by \( {E}_{g_{\lambda}}\left(\xi \right) \) or E(ξ).

Definition 6 Let ξ be a g λ variable. If \( {E}_{g_{\lambda}}\left\{{\left[\xi -{\theta}_{\lambda}\left({E}_{g_{\lambda}}\xi \right)\right]}^2\right\} \) exists, then we call it the variance of ξ and denote it by \( {D}_{g_{\lambda}}\left(\xi \right) \) or D(ξ).
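Definition 5 can be checked numerically. For the Sugeno λ-0-1 distribution, θ λ [F] is the U(0, 1) distribution function, so the inner integral equals 1/2 and E(ξ) = θ λ −1(1/2). The Python sketch below (function names are ours) approximates the Riemann-Stieltjes integral directly and compares with that closed form:

```python
import math

def theta(lam, x):
    return math.log(1.0 + lam * x) / math.log(1.0 + lam)

def theta_inv(lam, x):
    return ((1.0 + lam) ** x - 1.0) / lam

def su_cdf(lam, x):
    """Sugeno lambda-0-1 distribution function (lam != 0)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return theta_inv(lam, x)

def expected_value(lam, cdf, lo, hi, n=100_000):
    """Approximate theta_inv( integral of x d(theta[F(x)]) ) on [lo, hi]
    by a right-endpoint Riemann-Stieltjes sum (Definition 5)."""
    total, prev = 0.0, theta(lam, cdf(lam, lo))
    for i in range(1, n + 1):
        x = lo + (hi - lo) * i / n
        cur = theta(lam, cdf(lam, x))
        total += x * (cur - prev)
        prev = cur
    return theta_inv(lam, total)

lam = 3.0
approx = expected_value(lam, su_cdf, -1.0, 2.0)
exact = theta_inv(lam, 0.5)  # theta[F] is the U(0,1) CDF, so the integral is 1/2
print(abs(approx - exact))  # small numerical error
```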

Definition 7 [18] The joint distribution function \( {F}_{g_{\lambda}}:{\Re}^2\to \left[0,1\right] \) of a g λ vector (ξ, η) is defined as \( {F}_{g_{\lambda}}\left( x, y\right)={g}_{\lambda}\left\{\xi \le x,\eta \le y\right\} \), for any x, y.

Definition 8 [18] The g λ variables ξ and η are called independent if for all x and y,
$$ {F}_{g_{\lambda}}\left( x, y\right)={\theta_{\lambda}}^{-1}\left\{{\theta}_{\lambda}\left[{g}_{\lambda}\left(\xi \le x,\eta <\infty \right)\right]\cdot {\theta}_{\lambda}\left[{g}_{\lambda}\left(\xi <\infty, \eta \le y\right)\right]\right\}. $$

Definition 9 [24] Suppose that ξ, ξ 1, ξ 2, ⋯ are g λ variables defined on the Sugeno measure space \( \left( X,\mathcal{F},{g}_{\lambda}\right) \). We say that the sequence {ξ n } converges in Sugeno measure to ξ if \( \underset{n\to \infty }{ \lim }{g}_{\lambda}\left\{\left|{\xi}_n-\xi \right|\ge \varepsilon \right\}=0 \) for every ε > 0. In this case, we write \( \underset{n\to \infty }{ \lim }{\xi}_n=\xi \) (g λ ) or \( {\xi}_n\overset{g_{\lambda}}{\to}\xi \) .

Definition 10 [24] Suppose that ξ, ξ 1, ξ 2, ⋯ are g λ variables defined on the Sugeno measure space \( \left( X,\mathcal{F},{g}_{\lambda}\right) \). The sequence {ξ n } is said to converge almost surely (a.s.) to ξ if and only if there exists a set A with g λ (A) = 0 such that \( \underset{n\to \infty }{ \lim }{\xi}_n\left(\omega \right)=\xi \left(\omega \right) \) for every ω ∉ A. In this case, we write \( \underset{n\to \infty }{ \lim }{\xi}_n=\xi \) (g λ  − a. s.) or \( {\xi}_n\overset{g_{\lambda}-\mathrm{a}.\mathrm{s}.}{\to}\xi \) .

In the following, the strong law of large numbers for g λ variables is proved, which revises the theorem in [24].

Lemma 1 Let ξ 1, ξ 2, ⋯, ξ n be independent g λ variables. If Eξ k  < ∞ and |ξ k | ≤ c (k = 1, 2, ⋯, n), then for every ε > 0
$$ {g}_{\lambda}\left\{\underset{k\le n}{ \max}\left|{S}_k-{\theta}_{\lambda}\left( E{S}_k\right)\right|\ge \varepsilon \right\}\le \frac{{\left(1+\lambda \right)}^{\frac{{\displaystyle {\sum}_{k=1}^n{\theta}_{\lambda}\left[ D{\xi}_k\right]}}{\varepsilon^2}}-1}{\lambda}. $$

Proof We stipulate that \( {S}_n={\displaystyle {\sum}_{k=1}^n{\xi}_k} \). Let

\( {A}_k=\left\{\underset{j\le k}{ \max}\left|{S}_j-{\theta}_{\lambda}\left[ E\left({S}_j\right)\right]\right|<\varepsilon \right\} \),

and
$$ {B}_k={A}_{k-1}-{A}_k=\left\{\left|{S}_1-{\theta}_{\lambda}\left[ E\left({S}_1\right)\right]\right|<\varepsilon, \cdots, \left|{S}_{k-1}-{\theta}_{\lambda}\left[ E\left({S}_{k-1}\right)\right]\right|<\varepsilon, \left|{S}_k-{\theta}_{\lambda}\left[ E\left({S}_k\right)\right]\right|\ge \varepsilon \right\}. $$
Then, the sets B k , k = 1, 2, ⋯, n, are disjoint. Let A 0 = X. We can see that
$$ {A}_n^c={A}_0-{A}_n=\left({A}_0-{A}_1\right)\cup \left({A}_1-{A}_2\right)\cup \cdots \cup \left({A}_{n-1}-{A}_n\right)={\displaystyle \underset{k=1}{\overset{n}{\cup }}}{B}_k $$
and
$$ {B}_k\subset \left\{\left|{S}_{k-1}-{\theta}_{\lambda}\left[ E\left({S}_{k-1}\right)\right]\right|<\varepsilon, \left|{S}_k-{\theta}_{\lambda}\left[ E\left({S}_k\right)\right]\right|\ge \varepsilon \right\}. $$
Moreover, we have
$$ \begin{array}{l}{\displaystyle {\int}_{\kern-0.5em {B}_k}{\left|{S}_n-{\theta}_{\lambda}\left[ E\left({S}_n\right)\right]\right|}^2 d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}={\theta}_{\lambda} E{\left|\left({S}_n-{\theta}_{\lambda}\left[ E\left({S}_n\right)\right]\right){\chi}_{B_k}\right|}^2\\ {}\ge {\theta}_{\lambda} E{\left|\left({S}_k-{\theta}_{\lambda}\left[ E\left({S}_k\right)\right]\right){\chi}_{B_k}\right|}^2={\displaystyle {\int}_{-\infty}^{+\infty }{\left[\left({S}_k-{\theta}_{\lambda}\left[ E\left({S}_k\right)\right]\right){\chi}_{B_k}\right]}^2 d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}\\ {}\ge {\varepsilon}^2{\displaystyle {\int}_{\kern-0.5em {B}_k} d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}\ge {\varepsilon}^2{\theta}_{\lambda}{g}_{\lambda}\left({B}_k\right).\end{array} $$
Then,
$$ \begin{array}{c}{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left( D{\xi}_k\right)}={\theta}_{\lambda}\left( D{S}_n\right)\ge {\displaystyle \sum_{k=1}^n{\displaystyle {\int}_{B_k}{\left|{S}_n-{\theta}_{\lambda}\left[ E\left({S}_n\right)\right]\right|}^2 d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}}\\ {}\ge {\varepsilon}^2{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left[{g}_{\lambda}\left({B}_k\right)\right]}={\varepsilon}^2{\theta}_{\lambda}\left[{g}_{\lambda}\left({\displaystyle \underset{k=1}{\overset{n}{\cup }}{B}_k}\right)\right].\end{array} $$
Thus,
$$ {g}_{\lambda}\left({A}_n^c\right)={g}_{\lambda}\left({\displaystyle \underset{k=1}{\overset{n}{\cup }}{B}_k}\right)\le \frac{{\left(1+\lambda \right)}^{\frac{{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left[ D{\xi}_k\right]}}{\varepsilon^2}}-1}{\lambda}. $$
That is
$$ {g}_{\lambda}\left\{\underset{k\le n}{ \max}\left|{S}_k-{\theta}_{\lambda}\left( E{S}_k\right)\right|\ge \varepsilon \right\}\le \frac{{\left(1+\lambda \right)}^{\frac{{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left[ D{\xi}_k\right]}}{\varepsilon^2}}-1}{\lambda}. $$
Lemma 2 [24] Let ξ, ξ 1, ξ 2, ⋯ be g λ variables. Then, the following statements are equivalent:

  (1) \( {\xi}_n\overset{g_{\lambda}- a. s.}{\to}\xi \);

  (2) \( \forall \varepsilon >0,{g}_{\lambda}\left\{{\displaystyle \underset{k=1}{\overset{\infty }{\cap }}{\displaystyle \underset{n= k}{\overset{\infty }{\cup }}\left(\left|{\xi}_n-\xi \right|\ge \varepsilon \right)}}\right\}=0 \);

  (3) \( \forall \varepsilon >0,\underset{k\to \infty }{ \lim }{g}_{\lambda}\left\{{\displaystyle \underset{n= k}{\overset{\infty }{\cup }}\left(\left|{\xi}_n-\xi \right|\ge \varepsilon \right)}\right\}=0 \).
Lemma 3 Let ξ 1, ξ 2, ⋯, ξ n be independent g λ variables. If Eξ k  < ∞ and Dξ k  < ∞ (k = 1, 2, ⋯, n) and \( {\displaystyle \sum_n{\theta}_{\lambda}\left[{D}_{g_{\lambda}}\left(\frac{\xi_n}{n}\right)\right]}<\infty \) , then
$$ {\displaystyle \sum_{k=1}^n\left\{\frac{\xi_k}{k}-{\theta}_{\lambda}\left[ E\left(\frac{\xi_k}{k}\right)\right]\right\}}\overset{g_{\lambda}- a. s.}{\to }0. $$

Proof Let \( {\xi_k}^{\prime }=\frac{\xi_k}{k} \), k = 1, 2, ⋯, n. Then ξ k ′ (k = 1, 2, ⋯, n) are also independent g λ variables. It follows that \( {S_n}^{\prime }={\displaystyle \sum_{k=1}^n{\xi_k}^{\prime }}={\displaystyle \sum_{k=1}^n\frac{\xi_k}{k}} \) and \( {\theta}_{\lambda}\left( E{S}_n^{\prime}\right)={\theta}_{\lambda}\left\{ E\left[{\displaystyle \sum_{k=1}^n\left(\frac{\xi_k}{k}\right)}\right]\right\} \). We need to prove \( {S_n}^{\prime }-{\theta}_{\lambda}\left( E{S}_n^{\prime}\right)\overset{g_{\lambda}- a. s.}{\to }0 \).

Clearly, \( {\displaystyle \underset{k}{\cup}\left\{\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left( E{S}_{n+ k}^{\prime}\right)\right|\ge \varepsilon \right\}}={\displaystyle \underset{k}{\cup}\left\{\underset{v\le k}{ \max}\left|{S}_{n+ v}^{\prime }-{\theta}_{\lambda}\left( E{S}_{n+ v}^{\prime}\right)\right|\ge \varepsilon \right\}} \) is the union of a non-decreasing sequence of sets. By Lemma 1, we have
$$ {g}_{\lambda}\left\{{\displaystyle \underset{k=1}{\overset{\infty }{\cup }}\left\{\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left( E{S}_{n+ k}^{\prime}\right)\right|\ge \varepsilon \right\}}\right\}=\underset{m\to \infty }{ \lim }{g}_{\lambda}\left\{{\displaystyle \underset{k=1}{\overset{m}{\cup }}\left\{\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left( E{S}_{n+ k}^{\prime}\right)\right|\ge \varepsilon \right\}}\right\} $$
$$ =\underset{m\to \infty }{ \lim }{g}_{\lambda}\left\{\underset{k\le m}{ \max}\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left( E{S}_{n+ k}^{\prime}\right)\right|\ge \varepsilon \right\}\le \underset{m\to \infty }{ \lim}\frac{{\left(1+\lambda \right)}^{\frac{{\displaystyle \sum_{k=1}^m{\theta}_{\lambda}\left( D{\xi}_{n+ k}^{\prime}\right)}}{\varepsilon^2}}-1}{\lambda}=\frac{{\left(1+\lambda \right)}^{\frac{{\displaystyle \sum_{k= n+1}^{\infty }{\theta}_{\lambda}\left( D{\xi}_k^{\prime}\right)}}{\varepsilon^2}}-1}{\lambda}. $$
Since \( {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[\left( D{\xi}_k^{\prime}\right)\right]}={\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[ D\left(\frac{\xi_k}{k}\right)\right]<}\infty \), we have \( {\displaystyle \sum_{k= n+1}^{\infty }{\theta}_{\lambda}\left( D{\xi}_k^{\prime}\right)}\to 0 \) (n → ∞). Then,
$$ {g}_{\lambda}\left\{{\displaystyle \underset{k}{\cup}\left\{\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left[ E\left({S}_{n+ k}^{\prime}\right)\right]\right|\ge \varepsilon \right\}}\right\}\to 0. $$

Thus, \( {g}_{\lambda}\left\{{\displaystyle \underset{n= k}{\overset{\infty }{\cup }}\left\{\left|{S}_n^{\prime }-{\theta}_{\lambda}\left[ E\left({S}_n^{\prime}\right)\right]\right|\ge \varepsilon \right\}}\right\}\le {g}_{\lambda}\left\{{\displaystyle \underset{k}{\cup}\left\{\left|{S}_{n+ k}^{\prime }-{\theta}_{\lambda}\left[ E\left({S}_{n+ k}^{\prime}\right)\right]\right|\ge \varepsilon \right\}}\right\}\to 0 \).

By Lemma 2, we have \( {S_n}^{\prime }-{\theta}_{\lambda}\left[ E\left({S}_n^{\prime}\right)\right]\overset{g_{\lambda}- a. s.}{\to }0 \). That is
$$ {\displaystyle \sum_{k=1}^n\left\{\frac{\xi_k}{k}-{\theta}_{\lambda}\left[ E\left(\frac{\xi_k}{k}\right)\right]\right\}}\overset{g_{\lambda}- a. s.}{\to }0. $$

Lemma 4 [24] Let A 1, A 2, ⋯ be a sequence of sets. If \( {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[{g}_{\lambda}\left({A}_k\right)\right]}<\infty \), then \( {g}_{\lambda}\left\{{\displaystyle \underset{n=1}{\overset{\infty }{\cap }}{\displaystyle \underset{k\ge n}{\cup }{A}_k}}\right\}=0 \).

Lemma 5 Suppose that ξ 1, ξ 2, ⋯, ξ n , ⋯ are identically distributed g λ variables whose Sugeno distribution function is \( {F}_{g_{\lambda}}(x) \), with the same expected value a (a < ∞). Let \( {\xi}_k^{\ast }={\xi}_k{\chi}_{\left\{\left|{\xi}_k\right|\le k\right\}}\left(\omega \right) \), k = 1, 2, ⋯. If \( {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[{g}_{\lambda}\left\{{\xi}_k^{\ast}\ne {\xi}_k\right\}\right]}<\infty \) and \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}-\frac{1}{n}{\theta}_{\lambda}\left[ E\left({\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}\right)\right]\overset{g_{\lambda}- a. s.}{\to }0 \), then
$$ \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k}-{\theta}_{\lambda}\left[ a\right]\overset{g_{\lambda}- a. s.}{\to }0. $$
Proof Let \( {\overline{\xi}}_n=\frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k} \), \( {\overline{\xi}}_n^{\ast }=\frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }} \), \( E{\overline{\xi}}_n^{\ast }= E\left(\frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}\right)={\theta}_{\lambda}^{-1}\left[\frac{1}{n}{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left( E{\xi}_k^{\ast}\right)}\right] \), and Eξ k  = a. We have
$$ \left|{\overline{\xi}}_n-{\theta}_{\lambda}(a)\right|=\left|{\overline{\xi}}_n-{\overline{\xi}}_n^{\ast }+{\overline{\xi}}_n^{\ast }-{\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]+{\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]-{\theta}_{\lambda}(a)\right| $$
$$ \le \left|{\overline{\xi}}_n-{\overline{\xi}}_n^{\ast}\right|+\left|{\overline{\xi}}_n^{\ast }-{\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]\right|+\left|{\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]-{\theta}_{\lambda}(a)\right| $$
(1)

Because \( {\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left[{g}_{\lambda}\left\{{\xi}_k^{\ast}\ne {\xi}_k\right\}\right]}<\infty \), we conclude that \( \left|{\overline{\xi}}_n-{\overline{\xi}}_n^{\ast}\right|\overset{g_{\lambda}- a. s.}{\to }0 \) from Lemma 4 and Lemma 2.

By the condition of \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}-\frac{1}{n}{\theta}_{\lambda}\left[ E\left({\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}\right)\right]\overset{g_{\lambda}- a. s.}{\to }0 \), we have
$$ \left|{\overline{\xi}}_n^{\ast }-{\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]\right|\overset{g_{\lambda}- a. s.}{\to }0. $$

Because \( {\theta}_{\lambda}\left[ E\left({\xi}_n^{\ast}\right)\right]={\displaystyle {\int}_{- n}^n x d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}\to {\displaystyle {\int}_{-\infty}^{\infty } x d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]}={\theta}_{\lambda}( a) \) as n → ∞, then

\( {\theta}_{\lambda}\left[ E\left({\overline{\xi}}_n^{\ast}\right)\right]=\frac{1}{n}{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}\left( E{\xi}_k^{\ast}\right)}\to \frac{1}{n}{\displaystyle \sum_{k=1}^n{\theta}_{\lambda}(a)}={\theta}_{\lambda}(a) \) as n → ∞.

By (1), we have \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k}-{\theta}_{\lambda}(a)\overset{g_{\lambda}- a. s.}{\to }0 \) (n → ∞).

Lemma 6 [26] Let x 1, x 2, ⋯ be a sequence of real numbers with \( {\displaystyle \sum_{k=1}^{\infty}\frac{x_k}{k}}<\infty \); then, we have \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{x}_k}\to 0 \) as n → ∞.

Theorem 2 (Strong law of large numbers) Let ξ 1, ξ 2, ⋯, ξ n , ⋯ be independent and identically distributed g λ variables with the same expected value a (a < ∞). Then, we have
$$ \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k}-{\theta}_{\lambda}(a)\overset{g_{\lambda}-\mathrm{a}.\mathrm{s}.}{\to }0. $$

Proof Let \( {\xi}_k^{\ast }={\xi}_k{\chi}_{\left\{\left|{\xi}_k\right|\le k\right\}}\left(\omega \right) \).

Since Eξ k  < ∞, we have \( {\displaystyle {\int}_{-\infty}^{+\infty}\left| x\right|} d{\theta}_{\lambda}\left[{F}_{g_{\lambda}}(x)\right]<\infty \); then, \( {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[ E\left(\frac{\xi_k^{\ast 2}}{k^2}\right)\right]}<\infty \) [28]. Thus,
$$ {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[ D\left(\frac{\xi_k^{\ast }}{k}\right)\right]}\le {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[ E\left(\frac{\xi_k^{\ast 2}}{k^2}\right)\right]}<\infty . $$

By Lemma 3, we have \( {\displaystyle \sum_{k=1}^n\left\{\frac{\xi_k^{\ast }}{k}-{\theta}_{\lambda}\left[ E\left(\frac{\xi_k^{\ast }}{k}\right)\right]\right\}}\overset{g_{\lambda}- a. s.}{\to }0 \). By Lemma 6, we have \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}-\frac{1}{n}{\theta}_{\lambda}\left[ E\left({\displaystyle \sum_{k=1}^n{\xi}_k^{\ast }}\right)\right]\overset{g_{\lambda}- a. s.}{\to }0 \). By Lemma 5, we have \( \frac{1}{n}{\displaystyle \sum_{k=1}^n{\xi}_k}-{\theta}_{\lambda}(a)\overset{g_{\lambda}- a. s.}{\to }0 \) since \( {\displaystyle \sum_{k=1}^{\infty }{\theta}_{\lambda}\left[{g}_{\lambda}\left\{{\xi}_k^{\ast}\ne {\xi}_k\right\}\right]}<\infty \).

That proves the theorem.
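Theorem 2 can also be illustrated numerically. By Theorem 1, a g λ variable with Sugeno distribution F can be sampled through the associated probability measure θ λ ∘ g λ , whose distribution function is θ λ ∘ F; for ξ ~ SU(λ) this is exactly the U(0, 1) distribution, the Sugeno expected value is a = θ λ −1(1/2), and the theorem predicts that the sample mean tends to θ λ (a) = 1/2. A hedged Python sketch (names are ours):

```python
import math
import random

def theta(lam, x):
    return math.log(1.0 + lam * x) / math.log(1.0 + lam)

def theta_inv(lam, x):
    return ((1.0 + lam) ** x - 1.0) / lam

lam = 3.0
a = theta_inv(lam, 0.5)  # Sugeno expected value of SU(lam)

# Sampling under theta ∘ g_lambda: for SU(lam) this is plain U(0,1) sampling.
random.seed(0)
n = 50_000
mean = sum(random.random() for _ in range(n)) / n

print(abs(mean - theta(lam, a)))  # close to 0, as Theorem 2 predicts
```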

Dependent-Chance Programming on Sugeno Measure Space

Uncertain Environment, Event, Chance Function, and Principle of Uncertainty

Uncertain environment, event, and chance function are basic concepts in DCP. We redefine them in Sugeno decision systems at the beginning of this section.

Definition 11 Let x be a decision vector and ξ be a g λ vector. Then, the Sugeno constraints represented by
$$ {g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0,\kern0.5em j=1,2,\cdots, p $$
(2)

are called a Sugeno environment.

Definition 12 Let x be a decision vector and ξ be a g λ vector. Then, a system of Sugeno inequalities
$$ {h}_k\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, k=1,2,\cdots, q $$
(3)

is called a Sugeno event.

Definition 13 Let x be a decision vector and ξ be a g λ vector. Then, the chance function of an event characterized by (3) is defined as the Sugeno measure of the event, i.e.,
$$ f(x)={g}_{\lambda}\left\{{h}_k\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, k=1,2,\cdots, q\right\} $$

subject to the Sugeno environment (2).

Definition 14 [14] Let x = (x 1, x 2, ⋯, x n ) be a decision vector and r(x) = r(x 1, x 2, ⋯, x n ) be an n-dimensional function. Then, the ith decision variable x i is said to be degenerate if
$$ r\left({x}_1,\cdots {x}_{i-1},{x_i}^{\prime },{x}_{i+1},\cdots, {x}_n\right)= r\left({x}_1,\cdots {x}_{i-1},{x_i}^{\prime \prime },{x}_{i+1},\cdots, {x}_n\right) $$

for any x i ′ and x i ″; otherwise, it is nondegenerate. In this case, the set of all nondegenerate decision variables is called the nondegenerate set under r(x) and is denoted by ND[r(x)].

For example, r(x 1, x 2, x 3, x 4, x 5, x 6) = x 1 − x 2 + 3x 5 is a function of the 6-dimensional decision vector. The variables x 1, x 2, x 5 are nondegenerate, and x 3, x 4, x 6 are degenerate. So,
$$ N D\left[ r\left({x}_1,{x}_2,{x}_3,{x}_4,{x}_5,{x}_6\right)\right]=\left\{{x}_1,{x}_2,{x}_5\right\}. $$
Definition 15 Let x be a decision vector, ξ be a g λ vector, and E be an event characterized by h k (x, ξ) ≤ 0, k = 1, 2, , q in the Sugeno environment g j (x, ξ) ≤ 0, j = 1, 2, , p. If jJ and ND[g j (x, ξ)] ∩ ND[h k (x, ξ)] ≠ ϕ, we write
$$ {\varepsilon}^{**}= N D\left[{g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\right]\cup N D\left[{h}_k\left(\boldsymbol{x},\boldsymbol{\xi} \right)\right],\kern0.5em k=1,2,\cdots, q,\kern0.5em j\in J $$

Then the jth constraint g j (x, ξ) is called a dependent constraint of the event E if ND[g j (x, ξ)] ∩ ε * * ≠ ϕ; otherwise, it is independent.

Definition 16 Let E be a Sugeno event characterized by h k (x, ξ) ≤ 0, k = 1, 2, ⋯, q in the Sugeno environment g j (x, ξ) ≤ 0, j = 1, 2, ⋯, p, where x is a decision vector and ξ is a g λ vector. Then, for each decision x and each realization of the g λ vector ξ, the Sugeno event E is said to be consistent in the Sugeno environment if the following two conditions hold: (1) h k (x, ξ) ≤ 0, k = 1, 2, ⋯, q, and (2) g j (x, ξ) ≤ 0, j ∈ J*, where J* is the index set of all dependent constraints.

Generally, a decision meets an event if it meets both the event itself and the dependent constraints [14]. So, we obtain the following principle of uncertainty in the Sugeno environment, which is the theoretical basis of DCP on Sugeno measure space.

Principle of Uncertainty The chance of a Sugeno event is the Sugeno measure of the event which is consistent in the Sugeno environment.

Let x be a decision vector and ξ be a g λ vector. There are m events E i characterized by h ik (x, ξ) ≤ 0, k = 1, 2, ⋯, q i for i = 1, 2, ⋯, m in the Sugeno environment g j (x, ξ) ≤ 0, j = 1, 2, ⋯, p. According to the principle of uncertainty, the chance function of the ith event E i in the Sugeno environment is
$$ {f}_i\left(\boldsymbol{x}\right)={g}_{\lambda}\left\{\begin{array}{l}{h}_{i k}\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, k=1,2,\cdots, {q}_i\hfill \\ {}{g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, j\in {J}_i\hfill \end{array}\right\} $$
where
$$ {J}_i=\left\{ j\in \left\{1,2,\cdots, p\right\}\left|{g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0\ \mathrm{is}\ \mathrm{a}\ \mathrm{dependent}\ \mathrm{constraint}\ \mathrm{of}\ {E}_i\right.\right\} $$

for i = 1, 2, , m.
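In practice, such a chance function can be estimated by Sugeno simulation: estimate the probability of the consistent system of inequalities by Monte Carlo under the associated probability measure θ λ ∘ g λ (Theorem 1), then map the estimate back through θ λ −1. A hedged Python sketch with an illustrative event and dependent constraint of our own (not from the paper), where ξ is uniform under the associated probability measure:

```python
import random

def theta_inv(lam, x):
    return ((1.0 + lam) ** x - 1.0) / lam

def chance(lam, x, n=50_000, seed=1):
    """Monte Carlo estimate of f(x) = g_lambda{h(x, xi) <= 0, g(x, xi) <= 0}
    for the illustrative event h(x, xi) = xi - x and dependent constraint
    g(x, xi) = xi - 2x, with xi ~ U(0, 1) under theta ∘ g_lambda."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        xi = rng.random()
        if xi - x <= 0 and xi - 2 * x <= 0:
            hits += 1
    return theta_inv(lam, hits / n)  # probability estimate -> Sugeno measure

# For x = 0.5 the event reduces to xi <= 0.5, so the probability is 1/2
# and the chance is approximately theta_inv(3, 1/2) = 1/3:
print(chance(3.0, 0.5))  # ≈ 1/3
```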

DCP on Sugeno Measure Space

In this part, we extend DCP to the Sugeno measure space and thereby construct the framework of DCP on the Sugeno measure space. To maximize the chance function of an event subject to a Sugeno environment, we give the following dependent-chance single-objective programming model on Sugeno measure space:
$$ \left\{\begin{array}{ll}\hfill \max & {g}_{\lambda}\left\{{h}_k\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, k=1,2,\cdots, q\right\}\hfill \\ {}\hfill s. t. & {g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0,\kern0.5em j=1,2,\cdots, p,\hfill \end{array}\right. $$

where x is a decision vector, ξ is a g λ vector, h k (x, ξ) ≤ 0, k = 1, 2, , q represent an event, and g j (x, ξ) ≤ 0, j = 1, 2, , p represent a Sugeno environment.

To maximize multiple chance functions subject to a Sugeno environment, we give the following dependent-chance multi-objective programming model on Sugeno measure space:
$$ \left\{\begin{array}{l}\max \left[\begin{array}{c}{g}_{\lambda}\left\{{h}_{1k}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_1\right\}\\ {}{g}_{\lambda}\left\{{h}_{2k}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_2\right\}\\ {}\cdots \\ {}{g}_{\lambda}\left\{{h}_{mk}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_m\right\}\end{array}\right]\\ {}\mathrm{s}.\mathrm{t}.\\ {}{g}_j\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ j=1,2,\cdots,p,\end{array}\right. $$

where x is a decision vector, ξ is a \( {g}_{\lambda} \) vector, \( {h}_{ik}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0 \), \( k=1,2,\cdots,{q}_i \) for \( i=1,2,\cdots,m \) represent the events, and \( {g}_j\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0 \), \( j=1,2,\cdots,p \) represent a Sugeno environment.

In a multi-objective decision-making system, goal programming is posed to minimize the deviations, positive, negative, or both, between the objective functions and the ideal objective targets, which are arranged in a certain priority structure set by the decision maker [11]. Dependent-chance goal programming on Sugeno measure space may be considered as an extension of goal programming to the Sugeno decision system. We give the following dependent-chance goal programming on Sugeno measure space:
$$ \left\{\begin{array}{ll}\min & {\displaystyle \sum_{j=1}^l{P}_j{\displaystyle \sum_{i=1}^m\left({u}_{ij}{d}_i^{+}+{v}_{ij}{d}_i^{-}\right)}}\\ {}\mathrm{s}.\mathrm{t}. & \\ {} & {g}_{\lambda}\left\{{h}_{ik}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_i\right\}+{d}_i^{-}-{d}_i^{+}={b}_i,\ i=1,2,\cdots,m\\ {} & {g}_j\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ j=1,2,\cdots,p\\ {} & {d}_i^{+},{d}_i^{-}\ge 0,\ i=1,2,\cdots,m,\end{array}\right. $$
where x is a decision vector, ξ is a \( {g}_{\lambda} \) vector, \( {P}_j \) is the preemptive priority factor expressing the relative importance of the goals, and \( {u}_{ij} \) and \( {v}_{ij} \) are the weighting factors corresponding to the positive and negative deviations, respectively, for goal i with priority j assigned,
$$ {d}_i^{+}=\max \left\{{g}_{\lambda}\left\{{h}_{ik}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_i\right\}-{b}_i,0\right\},\ i=1,2,\cdots,m $$
and
$$ {d}_i^{-}=\max \left\{{b}_i-{g}_{\lambda}\left\{{h}_{ik}\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0,\ k=1,2,\cdots,{q}_i\right\},0\right\},\ i=1,2,\cdots,m $$

are the positive and negative deviations from the target of goal i, respectively, \( {g}_j \) is a function in the system constraints, \( {b}_i \) is the target value of goal i, l is the number of priorities, m is the number of goal constraints, and p is the number of system constraints.
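In code, the positive and negative deviations of an achieved chance value from its goal target can be computed as follows (a minimal sketch; the function name is ours):

```python
def deviations(chance, target):
    """Positive and negative deviations of an achieved chance from its goal target.

    d_plus  = max(chance - target, 0): amount by which the goal is exceeded.
    d_minus = max(target - chance, 0): shortfall from the goal.
    At most one of the two is nonzero.
    """
    return max(chance - target, 0.0), max(target - chance, 0.0)

# The chance 0.75 falls short of the target 0.80 by 0.05.
d_plus, d_minus = deviations(0.75, 0.80)
```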

Example 3 Now, we give a simple example of the DCP on Sugeno measure space:
$$ \left\{\begin{array}{ll}\max & {g}_{\lambda}\left\{{x}_1+{x}_2=4\right\}\\ {}\mathrm{s}.\mathrm{t}. & \\ {} & \left({x}_1+3{x}_2\right)/4\le \xi\\ {} & 2{x}_3+{x}_4\ge 20\\ {} & {x}_1,{x}_2,{x}_3,{x}_4\ \mathrm{are}\ \mathrm{positive}\ \mathrm{integers},\end{array}\right. $$
where ξ is a discrete \( {g}_{\lambda} \) variable whose Sugeno distribution is given in Table 1:

Table 1

The Sugeno distribution of ξ

$$ \begin{array}{c|ccc} x & 1/2 & 7/4 & 3\\ \hline {g}_{\lambda}\left\{\xi = x\right\} & 1/10 & 1/8 & 1/2\end{array} $$

and λ = 2.

In this model, the event E is characterized by \( {x}_1+{x}_2=4 \). The dependent constraints of the event E are (1) \( \left({x}_1+3{x}_2\right)/4\le \xi \) and (2) \( {x}_1,{x}_2 \) are positive integers. By the principle of uncertainty, the chance function is
$$ {g}_{\lambda}\left\{\begin{array}{l}{x}_1+{x}_2=4\\ {}\left({x}_1+3{x}_2\right)/4\le \xi\\ {}{x}_1,{x}_2\ \mathrm{are}\ \mathrm{positive}\ \mathrm{integers}\end{array}\right\}. $$

Obviously, \( \left({x}_1,{x}_2\right) \) can be (1, 3), (2, 2), or (3, 1), for which the dependent constraint requires ξ ≥ 2.5, ξ ≥ 2, and ξ ≥ 1.5, respectively. We therefore compare the values:

$$ {g}_{\lambda}\left\{\xi \ge 2.5\right\}={g}_{\lambda}\left\{\xi =3\right\}=1/2, $$
$$ {g}_{\lambda}\left\{\xi \ge 2\right\}={g}_{\lambda}\left\{\xi =3\right\}=1/2, $$
and
$$ \begin{array}{ll}{g}_{\lambda}\left\{\xi \ge 1.5\right\} & ={g}_{\lambda}\left(\left\{\xi =3\right\}\cup \left\{\xi =7/4\right\}\right)\\ {} & ={g}_{\lambda}\left\{\xi =3\right\}+{g}_{\lambda}\left\{\xi =7/4\right\}+\lambda \cdot {g}_{\lambda}\left\{\xi =3\right\}\cdot {g}_{\lambda}\left\{\xi =7/4\right\}\\ {} & =3/4.\end{array} $$

Therefore, the best solution is \( \left({x}_1,{x}_2\right)=\left(3,1\right) \) with the corresponding chance 3/4.
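The enumeration above can be reproduced with a short script (a sketch under the data of Table 1; the helper names are ours):

```python
LAM = 2.0                                   # lambda of the Sugeno measure
DIST = {0.5: 1/10, 1.75: 1/8, 3.0: 1/2}     # g_lambda({xi = x}) from Table 1

def sugeno_union(measures, lam):
    """Sugeno measure of a union of disjoint events:
    g(A u B) = g(A) + g(B) + lam * g(A) * g(B), folded over the list."""
    total = 0.0
    for m in measures:
        total = total + m + lam * total * m
    return total

def chance(x1, x2):
    """Chance that the dependent constraint (x1 + 3*x2)/4 <= xi holds."""
    threshold = (x1 + 3 * x2) / 4
    return sugeno_union([g for x, g in DIST.items() if x >= threshold], LAM)

# Positive integers with x1 + x2 = 4: (1, 3), (2, 2), (3, 1).
candidates = [(x1, 4 - x1) for x1 in range(1, 4)]
best = max(candidates, key=lambda p: chance(*p))
print(best, chance(*best))   # -> (3, 1) 0.75
```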

A Hybrid Approach to Solve the DCP on Sugeno Measure Space

Sugeno Simulation

Since the values arising in Sugeno programming can rarely be computed analytically, we resort to Sugeno simulation as an attractive alternative. For the sake of solving general DCP on Sugeno measure space, we must deal with the following type of uncertain function
$$ U\left(\boldsymbol{x}\right):\boldsymbol{x}\to {g}_{\lambda}\left\{ f\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0\right\} $$

by Sugeno simulation.

Example 4 Let ξ be a \( {g}_{\lambda} \) vector and \( f:{\mathbb{R}}^n\to \mathbb{R} \) be a measurable function. In the following, we obtain \( L={g}_{\lambda}\left\{ f\left(\boldsymbol{x},\boldsymbol{\xi}\right)\le 0\right\} \) by Sugeno simulation.

In order to construct a \( {g}_{\lambda} \) variable ξ with Sugeno distribution \( {F}_{g_{\lambda}}\left(\cdot \right) \), a uniformly distributed variable u over the interval [0, 1] is produced first, and then ξ is assigned to be \( {F_{g_{\lambda}}}^{-1}(u) \) [24]. Therefore, we generate \( {\omega}_k \) according to the Sugeno measure \( {g}_{\lambda} \) and produce \( {\boldsymbol{\xi}}_k=\boldsymbol{\xi}\left({\omega}_k\right) \) for \( k=1,2,\cdots,N \).

Let N′ denote the number of vectors satisfying the system of inequalities \( f\left(\boldsymbol{x},{\boldsymbol{\xi}}_k\right)\le 0 \), \( k=1,2,\cdots,N \), and
$$ h\left(\boldsymbol{x},{\boldsymbol{\xi}}_k\right)=\left\{\begin{array}{ll}1,\hfill & \mathrm{if}\kern0.2em f\left(\boldsymbol{x},{\boldsymbol{\xi}}_k\right)\le 0\hfill \\ {}0,\hfill & \operatorname{otherwise}.\hfill \end{array}\right. $$

Then, we have \( E\left[ h\left(\boldsymbol{x},{\boldsymbol{\xi}}_k\right)\right]= L \) for all k and \( {N}^{\prime }={\displaystyle \sum_{k=1}^N h\left(\boldsymbol{x},{\boldsymbol{\xi}}_k\right)} \).

It follows from Theorem 2 that when N → ∞,
$$ \frac{N^{\prime }}{N}=\frac{{\displaystyle \sum_{k=1}^N h\left( x,{\xi}_k\right)}}{N}\overset{g_{\lambda}- a. s.}{\to}\frac{{\left(1+\lambda \right)}^L-1}{\lambda}. $$
Thus, L can be estimated by ln[1 + λ(N′/N)]/ln(1 + λ) provided that N is sufficiently large.
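A minimal sketch of this estimator follows; the event function `f` and the sampler `sample_xi` are hypothetical placeholders to be supplied by the user:

```python
import math
import random

def estimate_L(x, f, sample_xi, lam, n=10_000, seed=0):
    """Estimate L = g_lambda{ f(x, xi) <= 0 } by Sugeno simulation.

    `sample_xi(rng)` draws xi according to g_lambda (e.g., by
    inverse-transforming a uniform variate through the Sugeno
    distribution [24]).  By Theorem 2, N'/N converges to
    ((1 + lam)**L - 1) / lam, so L is recovered as
    ln(1 + lam * N'/N) / ln(1 + lam).
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if f(x, sample_xi(rng)) <= 0)  # N'
    return math.log1p(lam * hits / n) / math.log1p(lam)

# Hypothetical check: an event {xi <= r} whose limiting frequency N'/N
# is exactly r, chosen so that the true chance is L = 0.5 under lam = 2.
lam, L_true = 2.0, 0.5
r = ((1 + lam) ** L_true - 1) / lam
L_hat = estimate_L(None, lambda x, xi: xi - r, lambda rng: rng.random(), lam)
```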

A Hybrid Approach

In order to solve the DCP on Sugeno measure space, we propose in this part a Sugeno simulation-based hybrid approach which combines a BP neural network with a GA. The form of the DCP on Sugeno measure space is as follows:
$$ \left\{\begin{array}{ll} \max \hfill & {g}_{\lambda}\left\{{h}_k\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, k=1,2,\cdots, q\right\}\hfill \\ {} s. t.\hfill & \hfill \\ {}\hfill & {g}_j\left(\boldsymbol{x},\boldsymbol{\xi} \right)\le 0, j=1,2,\cdots, p.\hfill \end{array}\right. $$

Firstly, we generate the input-output data for the uncertain function by Sugeno simulation. Secondly, we train the BP neural network to approximate the underlying functional relationship U and predict its outputs. Thirdly, we make use of the GA to enhance the optimization process and arrive at a solution of the optimization problem. Finally, we take the best chromosome, found by selection, crossover, and mutation, as the optimal solution.

The hybrid algorithm for solving the DCP on Sugeno measure space can be summarized as follows:
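The original algorithm listing appears as a figure and is not reproduced here; a high-level pseudocode sketch of the four steps described above, with hypothetical helper names, is:

```
Input:  DCP model on Sugeno measure space, lambda, BP and GA parameters
Output: approximately optimal decision x*

1. Generate input-output pairs (x, U(x)), where
   U(x) = g_lambda{h_k(x, xi) <= 0, k = 1, ..., q},
   by Sugeno simulation.
2. Train a BP neural network on these pairs to approximate U.
3. Initialize a population of feasible chromosomes (decisions).
4. Repeat for the given number of generations:
     evaluate each chromosome's fitness with the trained network;
     apply selection, crossover, and mutation.
5. Report the best chromosome found as x*.
```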

Numerical Examples

Here, we give two numerical examples to illustrate the effectiveness of the approach.

Example 5 Let us consider the following DCP on Sugeno measure space:
$$ \left\{\begin{array}{ll}\max & {g}_{\lambda}\left\{{x}_1+{x}_2+2{x}_3+2{x}_4=5\right\}\\ {}\mathrm{s}.\mathrm{t}. & \\ {} & {x_1}^2+{x_2}^2\le {\xi}_1\\ {} & {x_3}^2+{x_4}^2\le 3{\xi}_2\\ {} & {x}_1,{x}_2,{x}_3,{x}_4\ge 0,\end{array}\right. $$

where λ = 5; \( {\xi}_1 \) is a Sugeno normally distributed variable characterized by \( {\xi}_1\sim SN\left(3,{2}^2,4\right) \); \( {\xi}_2 \) is a λ-0-1 distributed variable characterized by \( {\xi}_2\sim SU(3) \).

The event of this model is \( {x}_1+{x}_2+2{x}_3+2{x}_4=5 \), and the chance function of the event is
$$ f\left(\boldsymbol{x}\right)={g}_{\lambda}\left\{\begin{array}{l}{x}_1+{x}_2+2{x}_3+2{x}_4=5\\ {}{x_1}^2+{x_2}^2\le {\xi}_1\\ {}{x_3}^2+{x_4}^2\le 3{\xi}_2\\ {}{x}_1,{x}_2,{x}_3,{x}_4\ge 0\end{array}\right\}. $$
It is easy to see that \( {x}_4=\frac{5-{x}_1-{x}_2-2{x}_3}{2} \). In order to solve the model, we generate 3000 input-output data for the uncertain function \( U:\left({x}_1,{x}_2,{x}_3\right)\to f\left(\boldsymbol{x}\right) \) by Sugeno simulation. Next, a three-layer BP neural network with a 3-5-1 topology is constructed to approximate the uncertain function U. The activation functions used in the BP neural network are a hyperbolic tangent function (hidden layer) and a linear function (output layer). The BP neural network is trained to adjust the weights and thresholds of the connections between layers and to minimize the root mean squared error (RMSE) of the output layer. The maximum number of iterations, the learning rate, the momentum term, and the tolerance criterion for the BP neural network are set to 5000, 0.0001, 0.85, and 0.001, respectively. Although the settings of these parameters may not be optimal, they ensure the convergence of the learning process realized by the BP neural network. Figure 1 shows the values of the RMSE error function in successive iterations.
Fig. 1

Training error curves for f(x) obtained for 3000 input–output data

Moreover, we use the GA to improve the solution of the DCP on Sugeno measure space. Here, the population size, the number of generations, the mutation rate, and the crossover rate of the GA are set to be 30, 300, 0.2, and 0.3, respectively. It should be mentioned here that the settings of these GA parameters may not be optimal. However, under these conditions, the value of fitness (the objective function) is improved. The improvements are illustrated in Fig. 2.
Fig. 2

Values of objective function for 300 generations of the GA

Finally, the best solution obtained in the above way is
$$ {x}^{*}=\left({x}_1,{x}_2,{x}_3,\ {x}_4\right)=\left(1.1379,0.7077,0.7579,0.8183\right) $$

with the corresponding chance f(x*) = 0.9067.

Example 6 Let us consider the following dependent-chance goal programming on Sugeno measure space:
$$ \left\{\begin{array}{ll}\mathrm{lexmin} & \left\{{d}_1^{+},{d}_2^{+},{d}_3^{+}\right\}\\ {}\mathrm{s}.\mathrm{t}. & \\ {} & {g}_{\lambda}\left\{{x}_1+{x_4}^2=2\right\}+{d}_1^{-}-{d}_1^{+}=0.80\\ {} & {g}_{\lambda}\left\{{x}_2+{x_5}^2=2\right\}+{d}_2^{-}-{d}_2^{+}=0.85\\ {} & {g}_{\lambda}\left\{{x}_3+{x_6}^2=3\right\}+{d}_3^{-}-{d}_3^{+}=0.85\\ {} & {x}_1^2+{x}_5+{x}_4^2\le 0.5{\xi}_1\\ {} & {x}_3+{x}_2^2+{x}_6\le 2.5{\xi}_2\\ {} & {x}_i\ge 0,\ i=1,2,\cdots,6\\ {} & {d}_i^{+},{d}_i^{-}\ge 0,\ i=1,2,3,\end{array}\right. $$

where λ = 3; \( {\xi}_1 \) and \( {\xi}_2 \) are \( {g}_{\lambda} \) variables characterized by \( {\xi}_1\sim SN\left(3,1,2\right) \) and \( {\xi}_2\sim SN\left(2,1,3\right) \), respectively.

The event at the first priority level is \( {x}_1+{x_4}^2=2 \), whose chance function is
$$ {f}_1(x)={g}_{\lambda}\left\{\begin{array}{l}{x}_1+{x_4}^2=2\hfill \\ {}{x}_1^2+{x}_5+{x}_4^2\le 0.5{\xi}_1\hfill \\ {}{x}_1,{x}_4,{x}_5\ge 0\hfill \end{array}\right\}. $$

The event at the second priority level is \( {x}_2+{x_5}^2=2 \), whose chance function is

$$ {f}_2\left(\boldsymbol{x}\right)={g}_{\lambda}\left\{\begin{array}{l}{x}_2+{x}_5^2=2\\ {}{x}_1^2+{x}_5+{x}_4^2\le 0.5{\xi}_1\\ {}{x}_3+{x}_2^2+{x}_6\le 2.5{\xi}_2\\ {}{x}_i\ge 0,\ i=1,2,\cdots,6\end{array}\right\}. $$

The event at the third priority level is \( {x}_3+{x_6}^2=3 \), whose chance function is
$$ {f}_3(x)={g}_{\lambda}\left\{\begin{array}{l}{x}_3+{x_6}^2=3\hfill \\ {}{x}_3+{x}_2^2+{x}_6\le 2.5{\xi}_2\hfill \\ {}{x}_2,{x}_3,{x}_6\ge 0\hfill \end{array}\right\}. $$
We can find that \( {x}_4=\sqrt{2-{x}_1} \), \( {x}_5=\sqrt{2-{x}_2} \) and \( {x}_6=\sqrt{3-{x}_3} \). In order to solve this model, we generate 3000 input-output data for the uncertain function U : (x 1x 2x 3) → (f 1(x), f 2(x), f 3(x)) by Sugeno simulation. Next, we construct a three-layer BP neural network of the 3-5-3 topology to approximate the uncertain function U. The BP neural network is trained by the standard BP algorithm with a momentum term while the error function is RMSE. The maximum number of iterations, the learning rate, the momentum term, and the tolerance criterion for the BP neural network are set to be 5000, 0.0001, 0.85, and 0.001, respectively. The values of the error functions obtained in successive iterations are shown in Figs. 3, 4 and 5, respectively.
Fig. 3

Training error curves for f 1(x) obtained for 3000 input–output data

Fig. 4

Training error curves for f 2(x) obtained for 3000 input–output data

Fig. 5

Training error curves for f 3(x) obtained for 3000 input–output data

Then, we use the GA to improve the solution of the DCP on Sugeno measure space, with the population size, number of generations, mutation rate, and crossover rate set to 30, 300, 0.2, and 0.7, respectively. The GA enhances the fitness as shown in Fig. 6.
Fig. 6

Values of objective function for 300 generations of the GA

Finally, the optimal solution is
$$ {x}^{*}=\left(1.0274,2.000,0.0001,0.9863,0,1.7320\right), $$

which satisfies the first and second goals, while the value of the third objective is 0.1495.

In the process of solving the above two models, we can see that the time complexity of the hybrid approach is the sum of the time spent on the Sugeno simulation, on the BP neural network, and on the GA. The time spent on these three parts is essential if we assume that there is no alternative to the hybrid approach.

Conclusions

In this paper, an uncertain mathematical programming named dependent-chance programming (DCP) on Sugeno measure space was proposed. To provide general solutions to this programming, a Sugeno simulation-based hybrid approach integrating a BP neural network and a GA was given. Compared with the existing kinds of DCP, the DCP on Sugeno measure space has the following features: (1) it deals with \( {g}_{\lambda} \) variables, and (2) it may be resorted to when the decision maker wishes to maximize the chance functions of satisfying events in the Sugeno environment.

Further research might be devoted to applications of DCP on Sugeno measure space in areas such as water resources management, waste management planning, and electric power system planning, where some characteristics may not satisfy additivity. Moreover, DCP based on other kinds of variables on Sugeno measure space, such as fuzzy variables, may be studied, and the hybrid approach for DCP may be combined with further algorithms such as PSO.

Declarations

Funding

This work was supported by the Application Basic Research Plan Key Basic Research Project of Hebei Province (no. 16964213D), the Natural Science Foundation of the Hebei Education Department (no. QN2015116), the Innovation Fund for Postgraduates of Hebei Province in 2016 (grant no. 222), and the Plan Project for Science and Technology in Handan City (nos. 1528102058-5, 1534201095-3).

Authors’ contributions

All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no conflict of interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
College of Water Conservancy and Hydropower/School of Science, Hebei University of Engineering
(2)
School of Economics and Management, Handan University

References

  1. Gupta, P., Mittal, G., Mehlawat, M.K.: Multiobjective expected value model for portfolio selection in fuzzy environment. Optim Lett 7(8), 1765–91 (2013)
  2. Li, Y., Xu, W., He, S.: Expected value model for optimizing the multiple bus headways. Appl Math Comput 219(11), 5849–61 (2013)
  3. Yuan, G.: Two-stage fuzzy production planning expected value model and its approximation method. Appl Math Model 36(6), 2429–45 (2012)
  4. Ghasemi, M.R., Ignatius, J., Lozano, S., Emrouznejad, A., Hatami-Marbini, A.: A fuzzy expected value approach under generalized data envelopment analysis. Knowl-Based Syst 89, 148–59 (2015)
  5. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manag Sci 6(1), 73–9 (1959)
  6. Liu, F., Wen, Z., Xu, Y.: A fuzzy fractional chance-constrained programming model for air quality management under uncertainty. Eng Optim 48(1), 135–53 (2016)
  7. López, J., Contreras, J., Mantovani, J.R.S.: Reactive power planning under conditional-value-at-risk assessment using chance-constrained optimisation. IET Gener Transm Distrib 9(3), 231–40 (2015)
  8. Afifi, W.A., Hefny, H.A.: Adaptive Takagi-Sugeno fuzzy model using weighted fuzzy expected value in wireless sensor network. 14th International Conference on Hybrid Intelligent Systems, Kuwait, pp. 225–31 (2014). doi:10.1109/HIS.2014.7086203
  9. Liu, B.: Fuzzy random dependent-chance programming. IEEE Trans Fuzzy Syst 9(5), 721–6 (2001)
  10. Atawia, R., Abou-zeid, H., Hassanein, H.S., Noureldin, A.: Joint chance-constrained predictive resource allocation for energy-efficient video streaming. IEEE J Sel Areas Commun 34(5), 1389–404 (2016). doi:10.1109/JSAC.2016.2545358
  11. Liu, B.: Dependent-chance programming in fuzzy environments. Fuzzy Sets Syst 109(1), 97–106 (2000)
  12. Liu, B.: Dependent-chance programming: a class of stochastic optimization. Comput Math Appl 34(12), 89–104 (1997)
  13. Liu, B.: Dependent-chance programming with fuzzy decisions. IEEE Trans Fuzzy Syst 7(3), 354–60 (1999)
  14. Liu, B.: Random fuzzy dependent-chance programming and its hybrid intelligent algorithm. Inf Sci 141(3–4), 259–71 (2002)
  15. Kaveh, M., Dalfard, V.M., Amiri, S.: A new intelligent algorithm for dynamic facility layout problem in state of fuzzy constraints. Neural Comput & Applic 24(5), 1179–90 (2013)
  16. Zhang, Z., Liu, M., Zhou, X., Chen, L.: A multi-objective DCP model for bi-level resource-constrained project scheduling problems in grounding grid system project under hybrid uncertainty. KSCE J Civ Eng 20(5), 1631–41 (2016)
  17. Samal, N.K., Pratihar, D.K.: Optimization of variable demand fuzzy economic order quantity inventory models without and with backordering. Comput Ind Eng 78, 148–62 (2014)
  18. Ha, M., Li, Y., Li, J., Tian, D.: The key theorem and the bounds on the rate of uniform convergence of learning theory on Sugeno measure space. Sci China Ser F 49(3), 372–85 (2006)
  19. Wang, Z., Klir, G.: Generalized Measure Theory. Springer, New York (2008)
  20. Ha, M., Wang, C., Pedrycz, W.: The key theorem of learning theory based on Sugeno measure and fuzzy random samples. Life Syst Model Intell Comput 6329, 241–9 (2010)
  21. Ha, M., Wang, C., Pedrycz, W.: The theoretical foundations of statistical learning theory based on fuzzy random samples in Sugeno measure space. Trans Inst Meas Control 34(5), 520–6 (2012)
  22. Shi, W., Gao, Y.: A research on quality evaluation of lexical cohesion based on Sugeno measure. Int Conf Mach Learn Cybern 1, 178–82 (2011). doi:10.1109/ICMLC.2011.6016731
  23. Zhang, C., Zhang, H.: Borel-Cantelli lemma for Sugeno measure. Appl Mech Mater 614, 367–70 (2014)
  24. Ha, M., Zhang, H., Pedrycz, W., Xing, H.: The expected value models on Sugeno measure space. Int J Approx Reason 50(7), 1022–35 (2009)
  25. Zhang, H., Ha, M., Xing, H.: Chance-constrained programming on Sugeno measure space. Expert Syst Appl 38(9), 11527–33 (2011)
  26. Chung, K.: A Course in Probability Theory, 3rd edn. Academic Press, New York (2001)

Copyright

© The Author(s). 2017