Modern Stochastics: Theory and Applications
On min- and max-Kies families: distributional properties and saturation in Hausdorff sense
Volume 11, Issue 3 (2024), pp. 265–288
Tsvetelin Zaevski, Nikolay Kyurkchiev
https://doi.org/10.15559/24-VMSTA244
Pub. online: 9 January 2024      Type: Research Article      Open Access

Received: 14 August 2023
Revised: 3 November 2023
Accepted: 2 January 2024
Published: 9 January 2024

Abstract

The purpose of this paper is to explore two probability distributions originating from the Kies distribution defined on an arbitrary domain. The first one describes the minimum of several Kies random variables, whereas the second one is for their maximum – they are named min- and max-Kies, respectively. The properties of the min-Kies distribution are studied in detail, and later some duality arguments are used to examine the max variant. The saturations in the Hausdorff sense are also investigated. Some numerical experiments are provided.

1 Introduction

Several extensions of the exponential distribution are available in the stochastic literature. They are very useful in different practical areas including engineering sciences, meteorology and hydrology, communications and telecommunications, energetics, chemical and metallurgical industry, epidemiology, insurance and banking, etc. One of the most important is the Weibull distribution, first introduced in [18]. We refer also to the books [11–13] for some relatively new results. The original Kies distribution, first introduced in [7], arises by applying the fractional linear transformation $t=\frac{y}{y+1}\Leftrightarrow y=\frac{t}{1-t}$ to the Weibull distribution. In this way its domain changes from the positive real half-line (Weibull) to the interval $(0,1)$. Analogously, the transformation $t=\frac{by+a}{y+1}\Leftrightarrow y=\frac{t-a}{b-t}$, $a\lt b$, leads to the Kies distribution on the interval $(a,b)$ – we refer to [8, 14, 19].
Later, many authors turned to different modifications of these distributions. Refs. [1, 3, 9, 10] use a power transformation to define new families. Some composite distributions are proposed in [3], see also [1, 4, 16, 20].
Alternatively, we propose a polynomial transformation to be applied before the fractional linear one leading to the Kies distribution. The importance of the arising distribution is due to the following property. It turns out that if ${\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}$ are independent Kies distributed random variables defined on one and the same domain, then the new distribution describes the random variable $\min \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$ – this motivates us to name it the min-Kies distribution. Roughly speaking, the mass of this distribution is concentrated toward the left part of its domain. We examine its properties, paying special attention to the probability density function (PDF, hereafter). It exhibits quite variable behavior due to the polynomial component – the value at the left endpoint can be zero, finite, or infinite; many peaks are possible, etc. We discuss also the cumulative distribution function (CDF), its complement (CCDF), the quantile function (QF), some expectations, etc. We study also the tail behavior using some risk measures arising in financial mathematics – Value-at-Risk ($\mathit{VaR}$), Average-Value-at-Risk ($\mathit{AVaR}$), and the Expectile-Based-Risk-Measure ($\mathit{ERM}$). Another important characteristic we examine is the so-called saturation in the sense of the Hausdorff distance. In fact, it measures the speed of occurrence of the random variables – a property which makes it useful in many real-life fields. We derive some semiclosed form formulas.
Analogously to the min-Kies distribution we define the max-Kies one – it describes the random variable $\max \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$. We explore its properties using some duality arguments. They are based on a transformation which symmetrically exchanges the left and right parts of the distribution; thus the mass of a max-Kies distribution is concentrated more in the right part of the domain.
Finally, we present some numerical results using real statistical data for the S&P500 financial index on the one hand, and for unemployment insurance issues on the other. Note that we can describe a left-skewed sample as well as a right-skewed one by the min- or max-Kies distributions.
The paper is structured as follows. We present the base we use later in Section 2. The properties of the min-Kies distribution are examined in Section 3. Its tail behavior and the Hausdorff saturation are investigated in Sections 4 and 5. The max-Kies distribution is considered in Section 6. The numerical results are provided in Section 7.

2 Preliminaries

The following notations shall be used hereafter: an uppercase letter for the cumulative distribution function of a distribution, the same letter overlined for the complementary cumulative distribution function, the corresponding lowercase letter for the probability density function, and the letter Q for the quantile function. Thus if $F(t)$ is the CDF, then $\overline{F}(t)$, $f(t)$, and ${Q_{F}}(t)$ are the corresponding CCDF, PDF, and QF, respectively.
Let $a\lt b$, and let λ and β be positive constants. The Kies distribution on the domain $(a,b)$ is defined through its CDF as
(1)
\[ H(t)=1-\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
Hence, its CCDF is
\[ \overline{H}(t)=\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
The PDF can be obtained differentiating CDF (1) as
(2)
\[ h(t)=\lambda \beta (b-a)\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}}\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
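As a quick numerical illustration, CDF (1) and PDF (2) can be evaluated directly; the following Python sketch (with illustrative parameter values of our choosing) also checks that (2) is indeed the derivative of (1) by a central finite difference.

```python
import math

def kies_cdf(t, a, b, lam, beta):
    """CDF (1) of the Kies distribution on (a, b)."""
    return 1.0 - math.exp(-lam * ((t - a) / (b - t)) ** beta)

def kies_pdf(t, a, b, lam, beta):
    """PDF (2), obtained by differentiating CDF (1)."""
    y = (t - a) / (b - t)
    return (lam * beta * (b - a) * (t - a) ** (beta - 1) / (b - t) ** (beta + 1)
            * math.exp(-lam * y ** beta))

# Illustrative parameters: domain (1, 3), lambda = 1.5, beta = 2.
a, b, lam, beta = 1.0, 3.0, 1.5, 2.0
t, h = 2.0, 1e-6
num_deriv = (kies_cdf(t + h, a, b, lam, beta)
             - kies_cdf(t - h, a, b, lam, beta)) / (2 * h)
assert abs(num_deriv - kies_pdf(t, a, b, lam, beta)) < 1e-6
```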
To characterize this distribution we need to obtain the shape of PDF (2). We do this in the proposition below following the approach used in Proposition 2.1 from [20].
Proposition 2.1.
Let us consider the Kies distribution with parameter set $\{a,b,\lambda ,\beta \}$. The right endpoint of the PDF is zero, $h(b)=0$. Let the function $\gamma (t)$ be defined as
(3)
\[ \gamma (t)=\lambda \beta (b-a){\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}-2t-\beta (b-a)+a+b.\]
The following statements characterize the whole PDF.
  • 1. If $\beta =1$, then the left endpoint of the PDF is $h(a)=\frac{\lambda }{b-a}$. If $\lambda \ge 2$, then the PDF is a decreasing function. Otherwise, if $\lambda \lt 2$, then the PDF increases in the interval $(a,b-\lambda \frac{b-a}{2})$ and decreases for $t\in (b-\lambda \frac{b-a}{2},b)$.
  • 2. If $\beta \gt 1$, then $h(a)=0$. Function (3) has a unique root in the interval $(a,b)$ and the PDF increases before it and decreases after.
  • 3. If $\beta \lt 1$, then $h(a)=\infty $. We need the derivative of function (3) which is
    (4)
    \[ {\gamma ^{\prime }}(t)=\lambda {\beta ^{2}}{(b-a)^{2}}\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}}-2.\]
    Let $\overline{t}$ be defined as
    (5)
    \[ \overline{t}=\frac{(a+b)-\beta (b-a)}{2}.\]
    If ${\gamma ^{\prime }}(\overline{t})\ge 0$, then the PDF is a decreasing function. Otherwise, if ${\gamma ^{\prime }}(\overline{t})\lt 0$, then derivative (4) has two roots in the interval $(a,b)$ which we denote by ${\overline{t}_{1}}\lt {\overline{t}_{2}}$. If $\gamma ({\overline{t}_{2}})\ge 0$, then the PDF is again a decreasing function. Finally, if $\gamma ({\overline{t}_{2}})\lt 0$, then the function (3) has two roots in the interval $(a,b)$ too; let us denote them by ${t_{1}}\lt {t_{2}}$. The PDF decreases for $t\in (a,{t_{1}})$, increases for $t\in ({t_{1}},{t_{2}})$, and again decreases for $t\in ({t_{2}},b)$. Thus it has a local minimum at ${t_{1}}$ and a local maximum at ${t_{2}}$.
Proof.
The PDF values at the endpoints can be derived directly from formula (2). We have to obtain the PDF shape. Suppose first that $\beta =1$. The PDF and its derivative are
\[\begin{aligned}{}h(t)& =\lambda \frac{b-a}{{(b-t)^{2}}}\exp \bigg(-\lambda \frac{t-a}{b-t}\bigg),\\ {} {h^{\prime }}(t)& =\lambda \frac{b-a}{{(b-t)^{4}}}\exp \bigg(-\lambda \frac{t-a}{b-t}\bigg)\big[-2t+2b-\lambda (b-a)\big].\end{aligned}\]
The PDF shape follows the form of the derivative ${h^{\prime }}(t)$.
Suppose that $\beta \ne 1$. The PDF can be written as
(6)
\[ h(t)={e^{-\eta (t)}}{\eta ^{\prime }}(t),\]
where the function $\eta (t)$ is
\[ \eta (t)=\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}.\]
The derivative of the PDF can be written as
\[ {h^{\prime }}(t)={e^{-\eta (t)}}\big({\eta ^{\prime\prime }}(t)-{\big({\eta ^{\prime }}(t)\big)^{2}}\big),\]
where the derivatives of the function $\eta (t)$ are
\[\begin{aligned}{}{\eta ^{\prime }}(t)& =\lambda \beta (b-a)\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}},\\ {} {\eta ^{\prime\prime }}(t)& =\lambda \beta (b-a)\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\big[2t+\beta (b-a)-(a+b)\big].\end{aligned}\]
Some calculations lead to
(7)
\[ {h^{\prime }}(t)=-{e^{-\eta (t)}}\lambda \beta (b-a)\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\gamma (t),\]
where the function $\gamma (t)$ is defined in equation (3). Its derivative is given by formula (4). If $\beta \gt 1$, then the derivative ${\gamma ^{\prime }}(t)$ increases from a negative to a positive value in the interval $(a,b)$. Therefore the function $\gamma (t)$ starts from the negative value $\gamma (a)=(b-a)(1-\beta )$, decreases to a negative minimum, and then increases to infinity. Therefore, derivative (7) has a unique root in the interval $(a,b)$ and hence PDF (6) increases before it and decreases after.
Suppose now that $\beta \lt 1$. The second derivative of function (3) is
\[ {\gamma ^{\prime\prime }}(t)=\lambda {\beta ^{2}}{(b-a)^{2}}\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\big[2t+\beta (b-a)-(a+b)\big].\]
Let $\overline{t}$ be defined as in (5). Note that $\overline{t}\in (a,b)$ since $\beta \lt 1$. The second derivative ${\gamma ^{\prime\prime }}(t)$ is negative for $t\in (a,\overline{t})$ and positive for $t\in (\overline{t},b)$ and thus the derivative ${\gamma ^{\prime }}(t)$ decreases for $t\in (a,\overline{t})$ and increases for $t\in (\overline{t},b)$. If ${\gamma ^{\prime }}(\overline{t})\ge 0$, then ${\gamma ^{\prime }}(t)\gt 0$ for every $t\in (a,b)$. Hence $\gamma (t)$ is an increasing function and thus it is positive because $\gamma (a)=(1-\beta )(b-a)\gt 0$. Therefore PDF (6) is a decreasing function.
Suppose that ${\gamma ^{\prime }}(\overline{t})\lt 0$; then the equation ${\gamma ^{\prime }}(t)=0$ has two roots ${\overline{t}_{1}}\lt {\overline{t}_{2}}$, with ${\gamma ^{\prime }}(t)\gt 0$ for $t\in \{(a,{\overline{t}_{1}})\cup ({\overline{t}_{2}},b)\}$ and ${\gamma ^{\prime }}(t)\lt 0$ for $t\in ({\overline{t}_{1}},{\overline{t}_{2}})$. If $\gamma ({\overline{t}_{2}})\ge 0$, then the function $\gamma (t)$ is positive in the whole interval $(a,b)$ and thus PDF (6) is again a decreasing function. Finally, if $\gamma ({\overline{t}_{2}})\lt 0$, then the equation $\gamma (t)=0$ has two roots ${t_{1}}\lt {t_{2}}$ in the interval $(a,b)$ and $\gamma (t)\gt 0$ for $t\in \{(a,{t_{1}})\cup ({t_{2}},b)\}$ and $\gamma (t)\lt 0$ for $t\in ({t_{1}},{t_{2}})$. This confirms the shape of PDF (6).  □
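The case $\beta \lt 1$ of Proposition 2.1 can be verified numerically. The sketch below (pure Python; the illustrative parameters $a=0$, $b=1$, $\lambda =\beta =0.5$ are chosen so that ${\gamma ^{\prime }}(\overline{t})\lt 0$ and $\gamma ({\overline{t}_{2}})\lt 0$) locates the two roots of function (3) by a sign-change scan and checks that the PDF has a local minimum at the first and a local maximum at the second.

```python
import math

a, b, lam, beta = 0.0, 1.0, 0.5, 0.5    # illustrative: beta < 1, case 3

def h(t):
    """Kies PDF (2)."""
    y = (t - a) / (b - t)
    return (lam * beta * (b - a) * (t - a) ** (beta - 1) / (b - t) ** (beta + 1)
            * math.exp(-lam * y ** beta))

def gamma(t):
    """Function (3): its roots locate the interior extrema of the PDF."""
    return (lam * beta * (b - a) * ((t - a) / (b - t)) ** beta
            - 2 * t - beta * (b - a) + a + b)

# Locate the roots of gamma by scanning for sign changes on a fine grid.
m = 100_000
roots = []
prev = gamma(a + (b - a) / m)
for k in range(2, m):
    t = a + k * (b - a) / m
    cur = gamma(t)
    if prev * cur < 0:
        roots.append(t)
    prev = cur

assert len(roots) == 2
t1, t2 = roots
d = 1e-4
assert h(t1) < h(t1 - d) and h(t1) < h(t1 + d)   # local minimum at t1
assert h(t2) > h(t2 - d) and h(t2) > h(t2 + d)   # local maximum at t2
```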

3 Min-Kies distribution

We establish the min-Kies family through the following definitions.
Definition 3.1.
Let the set $\mathcal{P}$ consist of all polynomial style functions defined on the interval $x\in (0,\infty )$,
(8)
\[ P(x;\lambda ,\beta )={\sum \limits_{i=1}^{n}}{\lambda _{i}}{x^{{\beta _{i}}}},\]
where n is a positive integer, λ and β are positive vectors of dimension n, and ${\beta _{1}}\gt {\beta _{2}}\gt \cdots \gt {\beta _{n}}\gt 0$.
We shall denote by $p(x;\lambda ,\beta )$ the inverse function of (8) in the domain $(0,\infty )$. Note that it is well defined since function (8) increases from $P(0;\lambda ,\beta )=0$ to $P(\infty ;\lambda ,\beta )=\infty $.
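Since function (8) is strictly increasing on $(0,\infty )$, its inverse $p(\cdot ;\lambda ,\beta )$ can be computed numerically. A minimal Python sketch using bisection (the function names `P` and `p_inv` are ours, chosen to mirror the notation):

```python
def P(x, lam, beta):
    """Polynomial-style function (8): sum_i lam_i * x**beta_i."""
    return sum(l * x ** bt for l, bt in zip(lam, beta))

def p_inv(y, lam, beta):
    """Inverse of (8) on (0, inf), computed by bisection."""
    lo, hi = 0.0, 1.0
    while P(hi, lam, beta) < y:      # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P(mid, lam, beta) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam, beta = [1.0, 2.0], [2.0, 0.5]   # beta_1 > beta_2 > 0 as in Definition 3.1
x = 0.7
assert abs(p_inv(P(x, lam, beta), lam, beta) - x) < 1e-9
```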
The following lemma holds.
Lemma 3.2.
Let the functions $A(\cdot )$ and $\alpha (\cdot )$ be defined as
(9)
\[ \begin{aligned}{}A(t)& =P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg),\\ {} \alpha (y)& =\frac{a+bp(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )},\end{aligned}\]
where the functions $P(\cdot ;\lambda ,\beta )$ and $p(\cdot ;\lambda ,\beta )$ are the same as in Definition 3.1. Then $\alpha (y)$ is the inverse function of $A(t)$, on the domains $y\in (0,\infty )$ and $t\in (a,b)$.
Proof.
Note first that the inverse function really exists, because the functions $\frac{t-a}{b-t}$ and $P(x)$ are increasing. We have only to check $A(\alpha (y))=y$:
\[ A\big(\alpha (y)\big)=P\bigg(\frac{\frac{a+bp(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )}-a}{b-\frac{a+bp(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )}};\lambda ,\beta \bigg)=P\big(p(y;\lambda ,\beta )\big)=y.\]
 □
Definition 3.3.
Let a and b be constants such that $0\le a\lt b$, and n, λ, and β satisfy the conditions of Definition 3.1. We define a new distribution named min-Kies via its CDF as
(10)
\[ F(t)=1-\exp \bigg(-P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg)\equiv 1-\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}}}\Bigg).\]
Alternatively, we may define the new Kies style distribution through its CCDF.
Corollary 3.4.
The CCDF of the min-Kies distribution can be obtained as the product of the corresponding original Kies CCDFs:
\[ \overline{F}(t)={\prod \limits_{i=1}^{n}}\overline{H}(t;a,b,{\lambda _{i}},{\beta _{i}}),\]
where $\overline{H}(t;a,b,{\lambda _{i}},{\beta _{i}})$ are the CCDFs of the original Kies distributions with parameters $\{a,b,{\lambda _{i}},{\beta _{i}}\}$.
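In code, Corollary 3.4 says that the min-Kies CDF can be assembled from the component Kies CCDFs. A short Python check with illustrative parameters:

```python
import math

def kies_ccdf(t, a, b, lam, beta):
    """CCDF of a single Kies distribution with parameters {a, b, lam, beta}."""
    return math.exp(-lam * ((t - a) / (b - t)) ** beta)

def min_kies_cdf(t, a, b, lams, betas):
    """Min-Kies CDF (10) as 1 minus the product of the component Kies CCDFs."""
    prod = 1.0
    for l, bt in zip(lams, betas):
        prod *= kies_ccdf(t, a, b, l, bt)
    return 1.0 - prod

# Both presentations of the distribution agree (Corollary 3.4).
a, b = 1.0, 3.0
lams, betas = [1.0, 2.0], [2.0, 0.5]
t = 1.8
direct = 1.0 - math.exp(-sum(l * ((t - a) / (b - t)) ** bt
                             for l, bt in zip(lams, betas)))
assert abs(min_kies_cdf(t, a, b, lams, betas) - direct) < 1e-12
```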
The following theorem discloses the theoretical importance of the min-Kies distribution and motivates its name.
Theorem 3.5.
Let ${P_{1}}(\cdot )$ and ${P_{2}}(\cdot )$ belong to the set $\mathcal{P}$ – note that this set is closed w.r.t. the sum-operator. Let us denote by ${\xi _{P}}$ a min-Kies distributed random variable with underlying polynomial function $P(\cdot )$. The set of such independent random variables is closed w.r.t. the min-operation in the sense that the random variables $\min \{{\xi _{{P_{1}}}},{\xi _{{P_{2}}}}\}$ and ${\xi _{{P_{1}}+{P_{2}}}}$ are identically distributed.
Proof.
We need to slightly modify the proof of the well known fact that the product of the CDFs of two independent random variables is the CDF of their maximum. Using the lower index to distinguish the distribution terms in Corollary 3.4, we derive
\[\begin{aligned}{}{\overline{F}_{{P_{1}}+{P_{2}}}}(t)& ={\overline{F}_{{P_{1}}}}(t){\overline{F}_{{P_{2}}}}(t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t)\mathbb{P}({\xi _{{P_{2}}}}\gt t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t)\mathbb{P}({\xi _{{P_{2}}}}\gt t\mid {\xi _{{P_{1}}}}\gt t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t,{\xi _{{P_{2}}}}\gt t)\\ {} & =\mathbb{P}\big(\min \{{\xi _{{P_{1}}}},{\xi _{{P_{2}}}}\}\gt t\big).\end{aligned}\]
 □
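Theorem 3.5 can also be illustrated by simulation. The sketch below samples two independent single-Kies variables by inverse transform (the quantile of CDF (1) is available in closed form) and compares the empirical CDF of their minimum with the min-Kies CDF (10); names, seed, and parameters are illustrative.

```python
import math, random

def kies_quantile(u, a, b, lam, beta):
    """Closed-form inverse of the single-Kies CDF (1)."""
    y = (-math.log(1.0 - u) / lam) ** (1.0 / beta)
    return (a + b * y) / (1.0 + y)

def min_kies_cdf(t, a, b, lams, betas):
    """Min-Kies CDF (10)."""
    return 1.0 - math.exp(-sum(l * ((t - a) / (b - t)) ** bt
                               for l, bt in zip(lams, betas)))

rng = random.Random(0)                 # fixed seed for reproducibility
a, b = 1.0, 3.0
(l1, b1), (l2, b2) = (1.0, 2.0), (2.0, 0.5)
t, n = 1.8, 200_000

hits = sum(min(kies_quantile(rng.random(), a, b, l1, b1),
               kies_quantile(rng.random(), a, b, l2, b2)) <= t
           for _ in range(n))
# Theorem 3.5: the minimum is min-Kies with polynomial P1 + P2.
assert abs(hits / n - min_kies_cdf(t, a, b, [l1, l2], [b1, b2])) < 0.01
```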
The corollary below gives the PDF of the min-Kies distribution.
Corollary 3.6.
The probability density of the min-Kies distribution defined on the interval $(a,b)$ is
(11)
\[\begin{aligned}{}f(t)& =\exp \bigg(-P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg){P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\frac{b-a}{{(b-t)^{2}}}\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}}}\Bigg)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}\\ {} & =\overline{F}(t)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}.\end{aligned}\]
The behavior of the min-Kies PDF given in formulas (11) is more complicated than that of the original distribution, since it depends on all basic distributions. Hence, we cannot derive closed form results analogous to Proposition 2.1. However, the following statements hold.
Proposition 3.7.
The right endpoint of PDF (11) is $f(b)=0$. The following statements hold for the left endpoint.
  • 1. If ${\beta _{n}}\lt 1$, then $f(a)=\infty $.
  • 2. If ${\beta _{n}}\gt 1$, then $f(a)=0$.
  • 3. If ${\beta _{n}}=1$, then $f(a)=\frac{{\lambda _{n}}}{b-a}$.
Let the function $Y(y)$ be defined on the positive real half-line as
(12)
\[ Y(y)={P^{\prime\prime }}(y;\lambda ,\beta )(1+y)+2{P^{\prime }}(y;\lambda ,\beta )-{\big({P^{\prime }}(y;\lambda ,\beta )\big)^{2}}(1+y)\]
and its roots be ${y_{1}}\lt {y_{2}}\lt \cdots \lt {y_{k}}$ for some integer k. The PDF has extrema at the points ${t_{j}}=\frac{{y_{j}}b+a}{{y_{j}}+1}$. The characterization of these extrema is as follows.
  • 1. If one of the following statements holds
    • • $\{{\beta _{n}}\gt 1\}$,
    • • $\{n=1,{\beta _{1}}=1,{\lambda _{1}}\lt 2\}$,
    • • $\{n\gt 1,{\beta _{n}}=1,{\beta _{n-1}}\lt 2\}$,
    • • $\{n\gt 1,{\beta _{n}}=1,{\beta _{n-1}}\gt 2,{\lambda _{n}}\lt 2\}$,
    • • $\{n\gt 1,{\beta _{n}}=1,{\beta _{n-1}}=2,{\lambda _{n}}\le 1+\sqrt{1+2{\lambda _{n-1}}}\}$,
    then the constant k is odd, the maxima are for the odd j’s, the minima are for the even ones, and the PDF is increasing at the point $t=a$.
  • 2. On the opposite, if one of the following statements holds
    • • $\{{\beta _{n}}\lt 1\}$,
    • • $\{n=1,{\beta _{1}}=1,{\lambda _{1}}\ge 2\}$,
    • • $\{n\gt 1,{\beta _{n}}=1,{\beta _{n-1}}\gt 2,{\lambda _{n}}\ge 2\}$,
    • • $\{n\gt 1,{\beta _{n}}=1,{\beta _{n-1}}=2,{\lambda _{n}}\gt 1+\sqrt{1+2{\lambda _{n-1}}}\}$,
    then k is even, the minima are for the odd j’s, the maxima are for the even ones, and the PDF is decreasing at the point $t=a$.
Proof.
The results when $n=1$ are proven in Proposition 2.1. Suppose now that $n\gt 1$. We derive the PDF value at the left endpoint using the second presentation from equations (11) and having in mind ${\beta _{1}}\gt {\beta _{2}}\gt \cdots \gt {\beta _{n}}\gt 0$. Let us turn to the PDF shape. We derive the PDF’s derivative using the first statement from (11) as
(13)
\[ {f^{\prime }}(t)={e^{-A(t)}}\big({A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\big),\]
where the function $A(\cdot )$ is defined in equation (9). Its derivatives are
\[\begin{aligned}{}{A^{\prime }}(t)& =\frac{b-a}{{(b-t)^{2}}}{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg),\\ {} {A^{\prime\prime }}(t)& =\frac{{(b-a)^{2}}}{{(b-t)^{4}}}{P^{\prime\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)+2\frac{b-a}{{(b-t)^{3}}}{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg).\end{aligned}\]
Therefore
\[\begin{aligned}{}& {A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\\ {} & \hspace{1em}=\frac{b-a}{{(b-t)^{3}}}\bigg[\frac{b-a}{b-t}{P^{\prime\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)+2{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\\ {} & \hspace{2em}-\frac{b-a}{b-t}{\bigg({P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg)^{2}}\bigg].\end{aligned}\]
Changing the variables as $y=\frac{t-a}{b-t}$, or equivalently $t=\frac{yb+a}{y+1}$, we derive
\[\begin{aligned}{}& {A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\\ {} & \hspace{1em}=\frac{{(y+1)^{3}}}{{(b-a)^{2}}}\big[(y+1){P^{\prime\prime }}(y;\lambda ,\beta )+2{P^{\prime }}(y;\lambda ,\beta )-(y+1){\big({P^{\prime }}(y;\lambda ,\beta )\big)^{2}}\big].\end{aligned}\]
Hence, derivative (13) can be written as
\[ {f^{\prime }}(t)={e^{-P(y;\lambda ,\beta )}}\frac{{(y+1)^{3}}}{{(b-a)^{2}}}Y(y),\]
where the function $Y(\cdot )$ is given by formula (12). Hence the min-Kies PDF has extrema exactly at the roots ${y_{1}},{y_{2}},\dots ,{y_{k}}$ of the function $Y(\cdot )$, transformed to ${t_{1}}=\frac{{y_{1}}b+a}{{y_{1}}+1}$, ${t_{2}}=\frac{{y_{2}}b+a}{{y_{2}}+1},\dots ,{t_{k}}=\frac{{y_{k}}b+a}{{y_{k}}+1}$. We have already proved that $f(a)=\infty $ when ${\beta _{n}}\lt 1$. Having in mind that $f(b)=0$ we conclude that k is even, the minima are for the odd j’s, and the maxima are for the even ones. Analogously, we obtain the PDF shape when ${\beta _{n}}\gt 1$, having in mind $f(a)=0$ and $f(b)=0$.
Suppose now that ${\beta _{n}}=1$. We need to find the value $Y({0^{+}})$ to obtain the PDF behavior at the left endpoint $t=a$. Let us decompose the function $P(y;\lambda ,\beta )$ as
\[\begin{aligned}{}P(y;\lambda ,\beta )& ={P_{1}}(y;\lambda ,\beta )+{P_{0}}(y;\lambda ,\beta ),\hspace{2.5pt}\hspace{2.5pt}\text{where}\\ {} {P_{0}}(y)& ={\lambda _{n}}y,\\ {} {P_{1}}(y)& ={\sum \limits_{i=1}^{n-1}}{\lambda _{i}}{y^{{\beta _{i}}}}.\end{aligned}\]
Hence, the value of the function $Y(y)$ near the left endpoint $y=0$ can be written as
(14)
\[\begin{aligned}{}Y\big({0^{+}}\big)& ={P^{\prime\prime }}\big({0^{+}};\lambda ,\beta \big)+2{P^{\prime }}\big({0^{+}};\lambda ,\beta \big)-{\big({P^{\prime }}\big({0^{+}};\lambda ,\beta \big)\big)^{2}}\\ {} & ={P^{\prime\prime }_{0}}\big({0^{+}}\big)+2{P^{\prime }_{0}}\big({0^{+}}\big)-{\big({P^{\prime }_{0}}\big({0^{+}}\big)\big)^{2}}\\ {} & \hspace{1em}+{P^{\prime\prime }_{1}}\big({0^{+}}\big)+2{P^{\prime }_{1}}\big({0^{+}}\big)-{\big({P^{\prime }_{1}}\big({0^{+}}\big)\big)^{2}}\\ {} & \hspace{1em}-2{P^{\prime }_{0}}\big({0^{+}}\big){P^{\prime }_{1}}\big({0^{+}}\big).\end{aligned}\]
We have ${P^{\prime\prime }_{0}}({0^{+}})=0$ and ${P^{\prime }_{0}}({0^{+}})={\lambda _{n}}$. Also, the inequalities ${\beta _{1}}\gt \cdots \gt {\beta _{n-1}}\gt {\beta _{n}}=1$ show ${P^{\prime }_{1}}({0^{+}})=0$. Hence, equation (14) turns to
\[ Y\big({0^{+}}\big)=2{\lambda _{n}}-{\lambda _{n}^{2}}+{P^{\prime\prime }_{1}}\big({0^{+}}\big).\]
If ${\beta _{n-1}}\lt 2$, then ${P^{\prime\prime }_{1}}({0^{+}})=+\infty $ and therefore $Y({0^{+}})=+\infty $, too. Hence the PDF $f(t)$ increases at its left endpoint $t=a$, and thus the number of extrema k is odd and the odd ones are maxima.
Otherwise, if ${\beta _{n-1}}\gt 2$, then ${P^{\prime\prime }_{1}}({0^{+}})=0$ and hence $Y({0^{+}})=2{\lambda _{n}}-{\lambda _{n}^{2}}$. If ${\lambda _{n}}\lt 2$, then $Y({0^{+}})\gt 0$ and the same reasoning as above leads to identical results. On the other hand, if ${\lambda _{n}}\ge 2$, then $Y({0^{+}})\le 0$ and hence the PDF $f(t)$ decreases at the left endpoint $t=a$. Therefore k is even and the minima are with odd numbers.
It remains to investigate the case ${\beta _{n-1}}=2$. Note that ${P^{\prime\prime }_{1}}({0^{+}})=2{\lambda _{n-1}}$ regardless of whether $n=2$ or $n\gt 2$, because ${\beta _{1}}\gt \cdots \gt {\beta _{n-1}}\gt 2$ when $n\gt 2$. Therefore
(15)
\[ Y\big({0^{+}}\big)=2{\lambda _{n-1}}+2{\lambda _{n}}-{\lambda _{n}^{2}}.\]
Considering formula (15) as a quadratic function w.r.t. ${\lambda _{n}}$ we see that $Y({0^{+}})\gt 0$ for ${\lambda _{n}}\in (0,1+\sqrt{1+2{\lambda _{n-1}}})$ and $Y({0^{+}})\lt 0$ for ${\lambda _{n}}\gt 1+\sqrt{1+2{\lambda _{n-1}}}$. The same reasoning as above finishes the proof.  □
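A small numerical check of the key identity in the proof, ${f^{\prime }}(t)={e^{-P(y)}}\frac{{(y+1)^{3}}}{{(b-a)^{2}}}Y(y)$: the sign of a finite-difference derivative of PDF (11) must agree with the sign of function (12). A pure-Python sketch with illustrative parameters:

```python
import math

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]     # illustrative parameters

def P(x):
    return sum(l * x ** bt for l, bt in zip(lam, beta))

def dP(x):
    return sum(l * bt * x ** (bt - 1) for l, bt in zip(lam, beta))

def d2P(x):
    return sum(l * bt * (bt - 1) * x ** (bt - 2) for l, bt in zip(lam, beta))

def Y(y):
    """Function (12); its roots give the extrema of the min-Kies PDF."""
    return d2P(y) * (1 + y) + 2 * dP(y) - dP(y) ** 2 * (1 + y)

def f(t):
    """Min-Kies PDF (11)."""
    y = (t - a) / (b - t)
    return math.exp(-P(y)) * dP(y) * (b - a) / (b - t) ** 2

# The sign of f'(t) must match the sign of Y(y) at y = (t-a)/(b-t).
t, h = 2.2, 1e-6
y = (t - a) / (b - t)
df = (f(t + h) - f(t - h)) / (2 * h)
assert df * Y(y) > 0
```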
We present in Figure 1, in the blue curves, the PDFs of some min-Kies distributions; the corresponding parameters are reported in Table 1. The distribution domain is assumed to be the interval $(1,3)$, i.e. $a=1$ and $b=3$. As can be seen, they exhibit variable shapes. Some typical ones can be viewed in Figure 1a, where we consider two-component distributions. The parameters used are ${\lambda _{1}}=1$, ${\lambda _{2}}=2$, ${\beta _{1}}=2$, and ${\beta _{2}}\in \{0.5,1,1.5\}$. Proposition 3.7 is confirmed – the PDF’s left endpoint is infinite when ${\beta _{2}}=0.5$, it is $\frac{{\lambda _{2}}}{b-a}=1$ when ${\beta _{2}}=1$, and it is zero when ${\beta _{2}}=1.5$. Some PDFs with many peaks are presented in Figure 1b.
Fig. 1.
Min- and max-Kies PDFs
Table 1.
Parameters
parameter (A1) (A2) (A3) (B1) (B2) (B3)
n 2 2 2 5 10 5
${\lambda _{1}}$ 1 1 1 48.8656 3.8367 7.7266
${\lambda _{2}}$ 2 2 2 33.4292 9.5941 0.8782
${\lambda _{3}}$ – – – 5.6038 2.2675 7.4407
${\lambda _{4}}$ – – – 2.9209 2.7862 0.5300
${\lambda _{5}}$ – – – 2.4441 6.3067 4.4290
${\lambda _{6}}$ – – – – 2.6047 –
${\lambda _{7}}$ – – – – 8.7562 –
${\lambda _{8}}$ – – – – 9.5910 –
${\lambda _{9}}$ – – – – 2.0228 –
${\lambda _{10}}$ – – – – 0.9302 –
${\beta _{1}}$ 2 2 2 48.5290 41.8723 39.8993
${\beta _{2}}$ 0.5 1 1.5 38.6619 37.1788 35.9901
${\beta _{3}}$ – – – 34.0970 34.0846 32.7791
${\beta _{4}}$ – – – 5.8855 31.9772 27.8533
${\beta _{5}}$ – – – 1.0243 29.3628 1.6168
${\beta _{6}}$ – – – – 26.4715 –
${\beta _{7}}$ – – – – 23.3149 –
${\beta _{8}}$ – – – – 16.1187 –
${\beta _{9}}$ – – – – 1.8369 –
${\beta _{10}}$ – – – – 0.8744 –
saturation, prime 0.3678 0.4806 0.5469 0.4753 0.5164 0.4891
saturation, dual 0.9611 0.9635 0.9655 0.9999 0.9999 0.9999
compl. sat., prime 0.9611 0.9635 0.9655 0.9999 0.9999 0.9999
compl. sat., dual 0.3678 0.4806 0.5469 0.4753 0.5164 0.4891
Next we discuss the quantile function.
Proposition 3.8.
Let $y\in (0,1)$. The y-quantile of the min-Kies distribution is
\[ Q(y)=\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}.\]
See Definition 3.1 for the meaning of the functions $p(\cdot )$ and $P(\cdot )$.
Proof.
We have to prove $F(Q(y))=y$:
\[\begin{aligned}{}F\big(Q(y)\big)& =1-\exp \bigg(-P\bigg(\frac{Q(y)-a}{b-Q(y)};\lambda ,\beta \bigg)\bigg)\\ {} & =1-\exp \bigg(-P\bigg(\frac{\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}-a}{b-\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}};\lambda ,\beta \bigg)\bigg)\\ {} & =1-\exp \big(-P\big(p\big(-\ln (1-y);\lambda ,\beta \big);\lambda ,\beta \big)\big)\\ {} & =y.\end{aligned}\]
 □
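Proposition 3.8 expresses the quantile through the inverse polynomial $p(\cdot ;\lambda ,\beta )$, so it reduces to the same bisection as before. A Python sketch verifying $F(Q(u))=u$ (the parameters are illustrative):

```python
import math

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]       # illustrative parameters

def P(x):
    return sum(l * x ** bt for l, bt in zip(lam, beta))

def p_inv(v):
    """Bisection inverse of P on (0, inf)."""
    lo, hi = 0.0, 1.0
    while P(hi) < v:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < v else (lo, mid)
    return 0.5 * (lo + hi)

def Q(u):
    """Quantile function from Proposition 3.8."""
    pv = p_inv(-math.log(1.0 - u))
    return (b * pv + a) / (pv + 1.0)

def F(t):
    """Min-Kies CDF (10)."""
    return 1.0 - math.exp(-P((t - a) / (b - t)))

for u in (0.1, 0.5, 0.9):
    assert abs(F(Q(u)) - u) < 1e-9
```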
An important property of the min-Kies distributions is that all their moments are finite.
Corollary 3.9.
The min-Kies distributed random variables have finite moments.
Proof.
Let ξ be a min-Kies distributed random variable and let $m\in \mathbb{N}$. Its m-th moment can be obtained via integration by parts as
\[ \mathbb{E}\big[{\xi ^{m}}\big]={\int _{a}^{b}}{t^{m}}dF(t)={b^{m}}-m{\int _{a}^{b}}{t^{m-1}}F(t)dt\]
and therefore it is finite since $F(t)$ is bounded.  □
Although the expectations cannot be derived in closed form, Lemma 3.2 allows us to present a numerical approach. It is based on the following proposition.
Proposition 3.10.
Let ξ be a min-Kies distributed random variable and the function $\alpha (\cdot )$ be defined as in formulas (9). Let also the function $\mu (\cdot )$ be such that $\mathbb{E}[\mu (\xi )]\lt \infty $. Under these assumptions the expectation $\mathbb{E}[\mu (\xi )]$ can be derived as the Laplace transform of the function $\mu \circ \alpha (y)\equiv \mu (\alpha (y))$ taken at the point one.
Proof.
Changing the variables as $y=A(t)\Leftrightarrow t=\alpha (y)$ – see formulas (9) – we derive
\[\begin{aligned}{}\mathbb{E}\big[\mu (\xi )\big]& ={\int _{a}^{b}}\Bigg(\mu (t)\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}}}\Bigg)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}\Bigg)dt\\ {} & ={\int _{a}^{b}}\mu (t){e^{-P(\frac{t-a}{b-t})}}d\bigg(P\bigg(\frac{t-a}{b-t}\bigg)\bigg)\\ {} & ={\int _{a}^{b}}\mu (t){e^{-A(t)}}dA(t)\\ {} & ={\int _{0}^{\infty }}\mu \big(\alpha (y)\big){e^{-y}}dy.\end{aligned}\]
This finishes the proof.  □
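Proposition 3.10 turns expectations into the integral ${\int _{0}^{\infty }}\mu (\alpha (y)){e^{-y}}dy$. The sketch below evaluates it with the substitution $u=1-{e^{-y}}$ and a midpoint rule; the helper `p_inv` (a bisection inverse of P) and the parameters are ours, not from the paper.

```python
import math

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]     # illustrative parameters

def P(x):
    return sum(l * x ** bt for l, bt in zip(lam, beta))

def p_inv(v):
    """Bisection inverse of P on (0, inf)."""
    lo, hi = 0.0, 1.0
    while P(hi) < v:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < v else (lo, mid)
    return 0.5 * (lo + hi)

def alpha(y):
    """Inverse of A(t) from Lemma 3.2."""
    pv = p_inv(y)
    return (a + b * pv) / (1.0 + pv)

def expectation(mu, m=2000):
    """E[mu(xi)] via Proposition 3.10: the Laplace transform of mu(alpha(.))
    at the point one, computed with u = 1 - exp(-y) and the midpoint rule."""
    return sum(mu(alpha(-math.log(1.0 - (k + 0.5) / m))) for k in range(m)) / m

assert abs(expectation(lambda t: 1.0) - 1.0) < 1e-12
mean = expectation(lambda t: t)
assert a < mean < b          # the mean lies inside the support (a, b)
```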
The following relation between the min-Kies distribution and the gamma function holds.
Proposition 3.11.
Let the function $A(\cdot )$ be defined as in formulas (9). Then the gamma function can be presented as the expectation
\[ \Gamma (x)=\mathbb{E}\big[{\big(A(\xi )\big)^{x-1}}\big],\]
where ξ is a min-Kies distributed random variable.
Proof.
Applying Proposition 3.10 for the function $\mu (t)={(A(t))^{x-1}}$, we derive
\[\begin{aligned}{}\mathbb{E}\big[\mu (\xi )\big]& ={\int _{0}^{\infty }}\mu \big(\alpha (y)\big){e^{-y}}dy\\ {} & ={\int _{0}^{\infty }}{\big(A\big(\alpha (y)\big)\big)^{x-1}}{e^{-y}}dy\\ {} & ={\int _{0}^{\infty }}{y^{x-1}}{e^{-y}}dy=\Gamma (x).\end{aligned}\]
 □

4 Tail behavior

We investigate in this section the tail properties of the min-Kies distribution. We shall present some results based on several risk measures arising from the financial markets, namely, Value-at-Risk ($\mathit{VaR}$), Average-Value-at-Risk ($\mathit{AVaR}$), and the Expectile-Based-Risk-Measure ($\mathit{ERM}$), see Zaevski and Nedeltchev [21]. We also give a presentation of the mean residual life function ($\mathit{MRLF}$). Below we recall the definitions of these measures.
Definition 4.1.
Let $\epsilon \in (0,0.5)$. The $\mathit{VaR}$, $\mathit{AVaR}$, $\mathit{ERM}$, and $\mathit{MRLF}$ for an arbitrary random variable ξ (for the $\mathit{MRLF}$ and $\mathit{ERM}$ we need a finite first or second moment, respectively) are defined as follows.
  • 1. The left $\mathit{VaR}$ value at level ϵ is the quantile $\mathit{VaR}(\epsilon )=Q(\epsilon )$. The right one is $\overline{\mathit{VaR}}(\epsilon )=Q(1-\epsilon )$.
  • 2. The left and right $\mathit{AVaR}$ values are
    \[\begin{aligned}{}\mathit{AVaR}(\epsilon )& =\frac{1}{\epsilon }{\int _{0}^{\epsilon }}\mathit{VaR}(u)du,\\ {} \overline{\mathit{AVaR}}(\epsilon )& =\frac{1}{1-\epsilon }{\int _{\epsilon }^{1}}\mathit{VaR}(u)du.\end{aligned}\]
  • 3. The $\mathit{ERM}$ value is the solution w.r.t. the variable x of the equation
    (16)
    \[ \epsilon \mathbb{E}\big[{(\xi -x)^{+}}\big]=(1-\epsilon )\mathbb{E}\big[{(\xi -x)^{-}}\big],\]
    where ${z^{+}}$ and ${z^{-}}$ stand for $\max (z,0)$ and $\max (-z,0)$.
  • 4. The $\mathit{MRLF}$ is the conditional expectation
\[ m(t)=\mathbb{E}[\xi -t\mid \xi \gt t].\]
Remark 4.2.
In fact, the original definition of the $\mathit{VaR}$ is the quantile taken with the opposite sign. We remove the minus sign since the domain of the Kies distribution is positioned on the positive real half-line. The $\mathit{AVaR}$ is derived by averaging all $\mathit{VaR}$’s below or above the ϵ-quantile.
The original definition of the expectile is the minimizer of the following quadratic problem
\[ \mathit{ERM}(\epsilon ):=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\big\{\mathbb{E}\big[\epsilon {\big({(\xi -x)^{+}}\big)^{2}}+(1-\epsilon ){\big({(\xi -x)^{-}}\big)^{2}}\big]\big\}.\]
Simple calculations show that the minimum is achieved at the solution of equation (16), which is unique. Note that the expectile is well defined for random variables with a finite second moment.
Definition 4.1 shows that $\mathit{VaR}$ of the min-Kies distribution can be derived via Proposition 3.8. Next two propositions discuss the $\mathit{AVaR}$, $\mathit{ERM}$, and $\mathit{MRLF}$.
Proposition 4.3.
The $\mathit{AVaR}$ values of a min-Kies distributed random variable can be derived via the function $\alpha (\cdot )$ defined in equations (9) as
(17)
\[ \begin{aligned}{}\mathit{AVaR}(\epsilon )& =\frac{1}{\epsilon }{\int _{0}^{-\ln (1-\epsilon )}}{e^{-y}}\alpha (y)dy,\\ {} \overline{\mathit{AVaR}}(\epsilon )& =\frac{1}{1-\epsilon }{\int _{-\ln \epsilon }^{\infty }}{e^{-y}}\alpha (y)dy.\end{aligned}\]
Proof.
Let the constants ${x_{1}}$ and ${x_{2}}$ be such that $0\le {x_{1}}\lt {x_{2}}\le 1$. We shall denote hereafter by ${I_{\{\cdot \}}}$ the indicator function. Changing the variables as $y=Q(u)\hspace{2.5pt}\Leftrightarrow u=F(y)$ and using Proposition 3.10 for $\mu (y)=y{I_{y\in (Q({x_{1}}),Q({x_{2}}))}}$ we derive
\[\begin{aligned}{}{\int _{{x_{1}}}^{{x_{2}}}}Q(u)du& ={\int _{Q({x_{1}})}^{Q({x_{2}})}}ydF(y)\\ {} & =\mathbb{E}[\xi {I_{\xi \in (Q({x_{1}}),Q({x_{2}}))}}]\\ {} & ={\int _{0}^{\infty }}{e^{-y}}\alpha (y){I_{\alpha (y)\in (Q({x_{1}}),Q({x_{2}}))}}dy.\end{aligned}\]
Using Lemma 3.2 and having in mind $A(y)=-\ln (1-F(y))$ we derive that $y\in (-\ln (1-{x_{1}}),-\ln (1-{x_{2}}))$ when $\alpha (y)\in (Q({x_{1}}),Q({x_{2}}))$. Therefore
(18)
\[ {\int _{{x_{1}}}^{{x_{2}}}}Q(u)du={\int _{-\ln (1-{x_{1}})}^{-\ln (1-{x_{2}})}}{e^{-y}}\alpha (y)dy.\]
Applying equation (18) for ${x_{1}}=0$ and ${x_{2}}=\epsilon $ we derive the result for $\mathit{AVaR}$. The result for $\overline{\mathit{AVaR}}$ is obtained for ${x_{1}}=1-\epsilon $ and ${x_{2}}=1$.  □
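For the left $\mathit{AVaR}$ in (17), the substitution $u=1-{e^{-y}}$ gives $\mathit{AVaR}(\epsilon )=\frac{1}{\epsilon }{\int _{0}^{\epsilon }}\alpha (-\ln (1-u))du$, i.e. an average of quantiles, since $\alpha (-\ln (1-u))=Q(u)$ by Proposition 3.8. A Python sketch (illustrative parameters; `p_inv` is our bisection helper):

```python
import math

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]     # illustrative parameters

def P(x):
    return sum(l * x ** bt for l, bt in zip(lam, beta))

def p_inv(v):
    """Bisection inverse of P on (0, inf)."""
    lo, hi = 0.0, 1.0
    while P(hi) < v:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < v else (lo, mid)
    return 0.5 * (lo + hi)

def alpha(y):
    """Inverse of A(t) from Lemma 3.2; alpha(-ln(1-u)) = Q(u)."""
    pv = p_inv(y)
    return (a + b * pv) / (1.0 + pv)

def avar(eps, m=2000):
    """Left AVaR from (17): average of the quantiles over (0, eps)."""
    return sum(alpha(-math.log(1.0 - (k + 0.5) / m * eps))
               for k in range(m)) / m

eps = 0.05
var = alpha(-math.log(1.0 - eps))      # VaR(eps) = Q(eps)
assert a < avar(eps) < var             # the tail average lies below the quantile
```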
We need the following lemma before continuing with the expectile tail measure and the mean residual life function.
Lemma 4.4.
The truncated expectation for a min-Kies distributed random variable ξ can be derived as
(19)
\[ \begin{aligned}{}\mathbb{E}\big[{(\xi -t)^{+}}\big]& ={\int _{A(t)}^{+\infty }}{e^{-y}}\alpha (y)dy-t\overline{F}(t),\\ {} \mathbb{E}\big[{(\xi -t)^{-}}\big]& =tF(t)-{\int _{0}^{A(t)}}{e^{-y}}\alpha (y)dy.\end{aligned}\]
Proof.
We rewrite the truncated expectations (19) as
\[\begin{aligned}{}\mathbb{E}\big[{(\xi -t)^{+}}\big]& =\mathbb{E}[\xi {I_{\xi \gt t}}]-t\overline{F}(t),\\ {} \mathbb{E}\big[{(\xi -t)^{-}}\big]& =tF(t)-\mathbb{E}[\xi {I_{\xi \gt t}}]\end{aligned}\]
and apply Proposition 3.10 for the functions $\mu (y)=y{I_{y\gt t}}$ and $\mu (y)=y{I_{y\lt t}}$.  □
The following statements hold for the $\mathit{ERM}$ and the $\mathit{MRLF}$.
Proposition 4.5.
The value of $\epsilon -\mathit{ERM}$ of a min-Kies distributed random variable is the solution of the following equation w.r.t. the variable x
\[ (1-\epsilon ){\int _{0}^{A(x)}}{e^{-y}}\alpha (y)dy+\epsilon {\int _{A(x)}^{+\infty }}{e^{-y}}\alpha (y)dy=\epsilon \overline{F}(x)+(1-\epsilon )F(x).\]
Its mean residual life function can be presented as
\[ m(t)=\frac{{\textstyle\textstyle\int _{A(t)}^{+\infty }}{e^{-y}}\alpha (y)dy}{\overline{F}(t)}-t.\]
Proof.
The first statement holds due to Definition 4.1 and Lemma 4.4. The second statement can be obtained via the representation of the $\mathit{MRLF}$ proven in [5]
\[ m(t)=\frac{\mathbb{E}[{(\xi -t)^{+}}]}{\overline{F}(t)}\]
and Lemma 4.4.  □
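The representation of the mean residual life function in Proposition 4.5 can be checked numerically against the standard identity $\mathbb{E}[{(\xi -t)^{+}}]={\textstyle\int _{t}^{b}}\overline{F}(s)ds$. The sketch below does so for a hypothetical one-component min-Kies distribution; the parameter values are illustrative only.

```python
import numpy as np

# Hypothetical one-component min-Kies (original Kies) distribution on (0, 1).
a, b, lam, beta = 0.0, 1.0, 1.0, 2.0

def P(x):
    return lam * x ** beta

def alpha(y):                          # alpha(y) from equations (9)
    s = (y / lam) ** (1.0 / beta)      # s = p(y)
    return (a + b * s) / (1.0 + s)

def F_bar(s):                          # survival function of the min-Kies law
    return np.exp(-P((s - a) / (b - s)))

def trapezoid(f, x):
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

t = 0.3
A_t = P((t - a) / (b - t))

# MRLF via Proposition 4.5 (the upper limit is truncated at y = 40)
y = np.linspace(A_t, 40.0, 400_001)
m_prop = trapezoid(np.exp(-y) * alpha(y), y) / F_bar(t) - t

# MRLF via m(t) = E[(xi - t)^+] / F_bar(t), E[(xi - t)^+] = int_t^b F_bar(s) ds
s = np.linspace(t, b - 1e-9, 400_001)
m_direct = trapezoid(F_bar(s), s) / F_bar(t)

print(m_prop, m_direct)                # the two values agree up to quadrature error
```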

5 Saturation in the Hausdorff sense

We define the Hausdorff distance in the sense of [15].
Definition 5.1.
Let us consider the max-norm in ${\mathbb{R}^{2}}$: if A and B are the points $A=({t_{A}},{x_{A}})$ and $B=({t_{B}},{x_{B}})$, then $\| A-B\| :=\max \{|{t_{A}}-{t_{B}}|,|{x_{A}}-{x_{B}}|\}$. The Hausdorff distance $d(g,h)$ between two curves g and h in ${\mathbb{R}^{2}}$ is
\[ d(g,h):=\max \Big\{\underset{A\in g}{\sup }\underset{B\in h}{\inf }\| A-B\| ,\underset{B\in h}{\sup }\underset{A\in g}{\inf }\| A-B\| \Big\}.\]
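For curves given by finite point samples, this distance can be approximated directly from the definition. The following sketch is a naive implementation under the max-norm, together with a toy example; the curves and sample sizes are illustrative.

```python
import numpy as np

# Hausdorff distance between two sampled curves under the max-norm of
# Definition 5.1; g and h are (n, 2) and (m, 2) arrays of points.
def hausdorff(g, h):
    gaps = np.abs(g[:, None, :] - h[None, :, :])   # pairwise coordinate gaps
    dist = gaps.max(axis=2)                        # pairwise max-norm distances
    return max(dist.min(axis=1).max(), dist.min(axis=0).max())

# Toy example: two horizontal unit segments at heights 0 and 1; every point of
# either curve is at max-norm distance exactly 1 from the other curve.
t = np.linspace(0.0, 1.0, 101)
g = np.column_stack([t, np.zeros_like(t)])
h = np.column_stack([t, np.ones_like(t)])
print(hausdorff(g, h))   # -> 1.0
```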
We can view the Hausdorff distance as the largest distance from a point on one curve to the nearest point of the other curve, taken over both curves. Next we define the saturation of a distribution.
Definition 5.2.
Let $F(\cdot )$ be the CDF of a distribution with left-finite domain – $[a,b)$, $-\infty \lt a\lt b\le \infty $. Its saturation is the Hausdorff distance between the completed graph of $F(\cdot )$ and the curve consisting of two lines – one vertical between the points $(a,0)$ and $(a,1)$, and another horizontal between the points $(a,1)$ and $(b,1)$.
The following corollary holds for the saturation.
Corollary 5.3.
The saturation d of a min-Kies random variable distributed on the interval $[a,b)$ is the unique solution of the equation
(20)
\[ F(a+d)+d=1.\]
Moreover, the saturation satisfies the restriction $d\lt \min \{b-a,1\}$.
Proof.
Having in mind that the distribution’s left endpoint is a and Definitions 5.1 and 5.2, we see that the saturation has to satisfy $F(a+d)+d=1$. Equation (20) has a unique root because $l(t)=F(a+t)+t-1$ is an increasing continuous function with endpoints $l(0)=-1\lt 0$ and $l(b-a)=b-a\gt 0$.
Obviously $d\lt 1$; moreover, $d\lt b-a$ because $l(b-a)\gt 0$, and hence the inequality $d\lt \min \{b-a,1\}$ holds.  □
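Since $l(t)=F(a+t)+t-1$ is increasing and changes sign, equation (20) can be solved by simple bisection. The sketch below does this for a hypothetical two-component min-Kies distribution; all parameter values are illustrative.

```python
import numpy as np

# Hypothetical two-component min-Kies distribution on [a, b); the parameter
# values below are illustrative only.
a, b = 0.0, 1.0
lam = np.array([1.0, 2.0])
beta = np.array([2.0, 0.5])

def F(t):                              # min-Kies CDF (10)
    x = (t - a) / (b - t)
    return 1.0 - np.exp(-float(np.sum(lam * x ** beta)))

def l(t):                              # increasing, l(0) = -1 < 0, l(b - a) > 0
    return F(a + t) + t - 1.0

# Bisection for the root of equation (20): F(a + d) + d = 1
lo, hi = 0.0, min(b - a, 1.0) - 1e-12  # the saturation satisfies d < min{b-a, 1}
for _ in range(200):
    mid = (lo + hi) / 2.0
    if l(mid) < 0.0:
        lo = mid
    else:
        hi = mid
d = (lo + hi) / 2.0
print(d, F(a + d) + d)                 # the second value is ~ 1
```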
Intuitively, the saturation has to depend only on the domain’s length $b-a$ but not on the particular values of a and b. We formalize this in the following proposition.
Proposition 5.4.
The saturation d of the min-Kies distribution with domain $[a,b)$ satisfies the following equation
(21)
\[ b-a=d\bigg(1+\frac{1}{p(-\ln d;\lambda ,\beta )}\bigg).\]
Proof.
Using equation (20) we derive
\[ P\bigg(\frac{d}{b-a-d};\lambda ,\beta \bigg)=-\ln d,\]
or equivalently
(22)
\[ \frac{d}{b-a-d}=p(-\ln d;\lambda ,\beta ).\]
We can easily check that equations (21) and (22) are equivalent.  □
The next two theorems provide a semi-closed form formula for the saturation.
Theorem 5.5.
Let the function $\gamma (y;c,C,\nu )$ be defined for $y\gt \max \{-\ln (b-a),0\}$, $c\gt 0$, $C\gt 0$, and $\nu \gt 0$ as
\[ \gamma (y;c,C,\nu )=cy{\big((b-a){e^{Cy}}-1\big)^{\nu }}.\]
If the parameters ${\lambda _{i}}$, $i=1,2,\dots ,n$, can be presented as
(23)
\[ {\lambda _{i}}=\gamma \Bigg(y;{c_{i}},{\sum \limits_{i=1}^{n}}{c_{i}},{\beta _{i}}\Bigg)\]
for some positive constants ${c_{1}},{c_{2}},\dots ,{c_{n}}$, then the min-Kies distribution’s saturation is
(24)
\[ d=\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Proof.
Suppose that presentation (23) holds. Hence
\[\begin{aligned}{}{\lambda _{i}}& ={c_{i}}y{\Bigg((b-a)\exp \Bigg(y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg)-1\Bigg)^{{\beta _{i}}}}\\ {} & ={c_{i}}y{\bigg(\frac{b-a-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}\bigg)^{{\beta _{i}}}},\hspace{2.5pt}i=1,\dots ,n,\end{aligned}\]
and therefore
(25)
\[ {\lambda _{i}}{\bigg(\frac{\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{b-a-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}\bigg)^{{\beta _{i}}}}={c_{i}}y,\hspace{2.5pt}i=1,\dots ,n.\]
Negating equations (25), taking exponents, and multiplying over i, we derive
\[ \exp \bigg(-P\bigg(\frac{\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{b-a-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})};\lambda ,\beta \bigg)\bigg)=\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Combining Corollary 5.3 with CDF (10), we see that the saturation is given precisely by formula (24).  □
Theorem 5.6.
The equation
(26)
\[ {\sum \limits_{i=1}^{n}}\frac{{\lambda _{i}}}{y{((b-a){e^{y}}-1)^{{\beta _{i}}}}}=1\]
has a unique solution such that $y\gt \max \{-\ln (b-a),0\}$. If it is denoted by $\overline{y}$, then the saturation is $d={e^{-\overline{y}}}$.
Proof.
Equation (26) has a unique solution because its left-hand side is a function decreasing from infinity to zero. Let the constants ${c_{i}}$, $i=1,\dots ,n$, be defined as
\[ {c_{i}}=\frac{{\lambda _{i}}}{\overline{y}{((b-a){e^{\overline{y}}}-1)^{{\beta _{i}}}}}.\]
Hence presentations (23) hold for these values of ${c_{i}}$. We finish the proof using Theorem 5.5 and having in mind ${\textstyle\sum _{i=1}^{n}}{c_{i}}=1$.  □
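Theorems 5.5 and 5.6 can be verified numerically: the root $\overline{y}$ of equation (26) should reproduce, through $d={e^{-\overline{y}}}$, the root of equation (20). The sketch below checks this for an illustrative two-component example; the parameter values are not from the paper's tables.

```python
import numpy as np

# Illustrative two-component min-Kies distribution on [a, b).
a, b = 0.0, 1.0
lam = np.array([1.0, 2.0])
beta = np.array([2.0, 0.5])

def bisect(fun, lo, hi, it=200):       # fun changes sign on [lo, hi]
    for _ in range(it):
        mid = (lo + hi) / 2.0
        if fun(lo) * fun(mid) > 0.0:   # same sign: the root lies in [mid, hi]
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Root of equation (26); here max{-ln(b - a), 0} = 0
def g26(y):
    return float(np.sum(lam / (y * ((b - a) * np.exp(y) - 1.0) ** beta))) - 1.0

y_bar = bisect(g26, 1e-9, 50.0)
d_26 = np.exp(-y_bar)                  # saturation per Theorem 5.6

# Root of equation (20): F(a + d) + d = 1
def F(t):
    return 1.0 - np.exp(-float(np.sum(lam * ((t - a) / (b - t)) ** beta)))

d_20 = bisect(lambda t: F(a + t) + t - 1.0, 0.0, 1.0 - 1e-12)

print(d_26, d_20)                      # the two values coincide
```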
The CDFs with parameters given in Table 1 can be viewed in Figures 1c–1h as the blue curves. The saturations are marked by the blue squares, and their values are also reported in Table 1, in the first line of its second part.

6 Duality. Max-Kies distribution

We first introduce a specific distributional duality.
Definition 6.1.
Let $F(t)$ be the CDF of a distribution defined on the finite domain $(a,b)$. Then its dual distribution is defined via its CDF as $G(t)=1-F(b+a-t)$.
The following corollary explains the essence of the dual distribution: it moves the left-side behavior of a distribution to the right side and vice versa.
Corollary 6.2.
The probability density function $g(t)$ of the dual distribution can be presented via the PDF $f(t)$ of the original distribution as $g(t)=f(b+a-t)$.
Definition 6.3.
The max-Kies distribution is defined as the dual of the min-Kies one.
Corollary 6.4.
The cumulative distribution function and the survival function of the max-Kies distribution can be presented as
\[\begin{aligned}{}G(t)& =\exp \bigg(-P\bigg(\frac{b-t}{t-a}\bigg)\bigg),\\ {} \overline{G}(t)& =1-\exp \bigg(-P\bigg(\frac{b-t}{t-a}\bigg)\bigg).\end{aligned}\]
The name max-Kies is motivated by the following proposition.
Proposition 6.5.
Let ${\xi _{i}}$, $i=1,2,\dots ,n$, be independent dual-Kies distributed random variables on the domain $(a,b)$ with parameters $\{{\lambda _{i}},{\beta _{i}}\}$. Then the random variable $\max \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$ has a max-Kies distribution.
Proof.
The proof is based on the fact that the CDF of the maximum of independent random variables is the product of their individual CDFs – here these are the CDFs of the underlying dual Kies variables. See the proof of Theorem 3.5.  □
The probability density and the quantile functions of the max-Kies distribution can be presented as
(27)
\[ \begin{aligned}{}g(t)& =\exp \bigg(-P\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg)\bigg){P^{\prime }}\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg)\frac{b-a}{{(t-a)^{2}}},\\ {} {Q_{G}}(y)& =\frac{ap(-\ln y;\lambda ,\beta )+b}{p(-\ln y;\lambda ,\beta )+1}.\end{aligned}\]
The PDF’s shape can be deduced through Proposition 3.7 having in mind Corollary 6.2. It turns out that $g(a)=0$, whereas the right endpoint $g(b)$ can be zero, finite, or infinitely large. As a rule, the behavior of the max-Kies distribution considered from b to a is the same as the behavior of the corresponding min-Kies one but taken from a to b. In this light the max-Kies distribution behaves at the right endpoint as the min-Kies one at the left endpoint.
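The quantile function ${Q_{G}}$ in (27) can be checked numerically to invert the CDF from Corollary 6.4. The one-component sketch below uses illustrative parameter values.

```python
import numpy as np

# Illustrative one-component max-Kies distribution on (a, b) = (0, 1) with
# P(x) = lam * x**beta and p(y) = (y / lam)**(1 / beta).
a, b, lam, beta = 0.0, 1.0, 1.5, 2.0

def G(t):                              # max-Kies CDF, Corollary 6.4
    return np.exp(-lam * ((b - t) / (t - a)) ** beta)

def Q_G(u):                            # quantile function from (27)
    s = (-np.log(u) / lam) ** (1.0 / beta)
    return (a * s + b) / (s + 1.0)

u = np.linspace(0.01, 0.99, 99)
err = np.max(np.abs(G(Q_G(u)) - u))
print(err)                             # ~ 0: Q_G inverts G
```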
Let the functions $A(t)$ and $\alpha (y)$ from equations (9) now be defined as
(28)
\[ \begin{aligned}{}\widetilde{A}(t)& =P\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg),\\ {} \widetilde{\alpha }(y)& =\frac{b+ap(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )}.\end{aligned}\]
Under these assumptions Proposition 3.10 for the expectations still holds.
We now briefly discuss the tail behavior of the max-Kies distribution. The results are similar to their min-Kies analogues, and we omit the proofs. The $\mathit{VaR}$ values, left and right, are again expressed by the quantile function – it is now given by equations (27). The $\mathit{AVaR}$ values need a slight modification compared to formulas (17)
\[\begin{aligned}{}\mathit{AVaR}(\epsilon )& =\frac{1}{\epsilon }{\int _{-\ln \epsilon }^{\infty }}{e^{-y}}\widetilde{\alpha }(y)dy,\\ {} \overline{\mathit{AVaR}}(\epsilon )& =\frac{1}{1-\epsilon }{\int _{0}^{-\ln (1-\epsilon )}}{e^{-y}}\widetilde{\alpha }(y)dy.\end{aligned}\]
Having in mind that formulas (19) for the truncated expectations still hold for the max-Kies distributions, we conclude that Proposition 4.5 for the expectile tail measure and the mean residual life function remains true – the difference is that we now have to use the functions $\widetilde{A}(\cdot )$ and $\widetilde{\alpha }(\cdot )$ given in (28) instead of those from (9).
Finally, let us turn to the saturation of the max-Kies distribution. We formulate the analogues of the results derived in Section 5 in the following theorem.
Theorem 6.6.
The saturation d of the max-Kies distribution satisfies the equation
(29)
\[ b-a=d\big[1+p\big(-\ln (1-d);\lambda ,\beta \big)\big].\]
Let the set Υ be ${\mathbb{R}^{+}}$ if $b-a\ge 1$ and $\Upsilon =(0,-\ln (1-b+a))$ when $b-a\lt 1$. If the parameters ${\lambda _{i}}$, $i=1,\dots ,n$, can be presented as
(30)
\[ {\lambda _{i}}={c_{i}}y{\bigg(\frac{\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})-1}{(b-a-1)\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})+1}\bigg)^{{\beta _{i}}}}\]
for some positive constants ${c_{1}},\dots ,{c_{n}}$ and $y\in \Upsilon $, then the max-Kies saturation is
\[ d=1-\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Moreover, the equation w.r.t. the variable y
(31)
\[ {\sum \limits_{i=1}^{n}}\frac{{\lambda _{i}}}{y}{\bigg[\frac{(b-a-1){e^{y}}+1}{{e^{y}}-1}\bigg]^{{\beta _{i}}}}=1\]
has a unique solution in the set Υ. If we denote it by $\overline{y}$, then the saturation is $d=1-{e^{-\overline{y}}}$.
Proof.
Equation (29) can be proven analogously to Proposition 5.4. Let us turn to the second part of the theorem. Note that the condition $y\in \Upsilon $ guarantees that the constants ${\lambda _{i}}$ defined by formula (30) are positive reals. Using these formulas we derive
\[\begin{aligned}{}1-d& =\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{(b-a-1)\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})+1}{\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})-1}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{b-a-1+\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{1-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{b-a-d}{d}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \bigg(-P\bigg(\frac{b-a-d}{d}\bigg)\bigg)=G(a+d).\end{aligned}\]
This equation together with Corollary 5.3 (note that it holds for the max-Kies distribution, too) proves the second part of the theorem.
Let us consider the left-hand side of equation (31) as a function of the variable $y\in \Upsilon $. It starts from $+\infty $ and decreases to zero. Hence, equation (31) has a unique solution. We finish the proof by applying the values
\[ {c_{i}}=\frac{{\lambda _{i}}}{y}{\bigg[\frac{(b-a-1){e^{y}}+1}{{e^{y}}-1}\bigg]^{{\beta _{i}}}}\]
to the previous statement of the theorem. Note that ${\textstyle\sum _{i=1}^{n}}{c_{i}}=1$.  □
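The two characterizations of the max-Kies saturation in Theorem 6.6 – equations (29) and (31) – can be cross-checked numerically. The sketch below uses an illustrative one-component example with $b-a=1$, so that $\Upsilon ={\mathbb{R}^{+}}$.

```python
import numpy as np

# Illustrative one-component max-Kies distribution with b - a = 1, so the
# admissible set for equation (31) is simply y > 0.
a, b, lam, beta = 0.0, 1.0, 1.0, 2.0

def bisect(fun, lo, hi, it=200):       # fun changes sign on [lo, hi]
    for _ in range(it):
        mid = (lo + hi) / 2.0
        if fun(lo) * fun(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Equation (31)
def g31(y):
    ratio = ((b - a - 1.0) * np.exp(y) + 1.0) / (np.exp(y) - 1.0)
    return (lam / y) * ratio ** beta - 1.0

y_bar = bisect(g31, 1e-9, 50.0)
d_31 = 1.0 - np.exp(-y_bar)            # saturation per the last part of Thm 6.6

# Equation (29): b - a = d * [1 + p(-ln(1 - d))], p(y) = (y / lam)**(1 / beta)
def g29(d):
    return d * (1.0 + (-np.log(1.0 - d) / lam) ** (1.0 / beta)) - (b - a)

d_29 = bisect(g29, 1e-12, 1.0 - 1e-12)

print(d_31, d_29)                      # both give the same saturation
```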
To continue the investigation of the max-Kies distributions, we need to define a saturation of another style.
Definition 6.7.
The complementary saturation when $-\infty \le a\lt b\lt \infty $ is the Hausdorff distance between the completed graph of $F(\cdot )$ and the curve consisting of the following two lines – a horizontal between the points $(a,0)$ and $(b,0)$ and a vertical between the points $(b,0)$ and $(b,1)$.
The following analogue of Corollary 5.3 is true.
Proposition 6.8.
Let us denote by $\overline{d}$ the complementary saturation of the distribution $F(\cdot )$ on the interval $(a,b]$. Then it is the unique solution of the equation
(32)
\[ F(b-\overline{d})=\overline{d}.\]
Moreover, $\overline{d}\lt \min \{b-a,1\}$. As an immediate corollary, if $b-a=1$, then the sum of the two saturations equals the domain's length – this means that the distribution attains both saturations at one and the same point.
Proof.
The proof is very similar to the proof of Corollary 5.3 and we omit it. We can easily check that if $b=a+1$ and $\overline{d}+d=1$, then equations (32) and (20) are equivalent.  □
The following theorem allows us to derive the complementary saturation for the min- and max-Kies distributions.
Theorem 6.9.
The saturation of the prime distribution coincides with the complementary saturation of the dual distribution.
Proof.
Equation (20) together with Definition 6.1 leads to statement (32). Proposition 6.8 finishes the proof.  □
The PDFs of the dual distributions are presented in Figures 1a and 1b as the red curves, with the parameters given in Table 1. The complementary saturations are the lengths of the red squares' sides. All saturations are also reported in Table 1.

7 Numerical results

We now provide two numerical experiments to check how well the distributions defined in this paper describe real statistical samples. The first one is based on historical data for the S&P 500 financial index, whereas the second one concerns unemployment insurance issues. We use the min-Kies distribution because both samples are left-skewed.

7.1 S&P 500 index

We now calibrate the min-Kies distribution to the statistical data used in [20], namely $10\hspace{0.1667em}717$ daily observations of the S&P 500 index between January 2, 1980 and July 1, 2022. We are interested in the market shocks, defined as the dates at which the index falls by more than two percent. Our statistical sample is formed by the lengths of the periods between two consecutive shocks – a total of $N=357$ observations in the range between 1 and 950. Note that these observations are divided by 1000 because the paper [20] is devoted to the Kies-style distribution defined on the interval $(0,1)$. We preserve this scaling in the current paper to keep the statistical sample unchanged; in this way some eventual comparisons are also possible. Let us denote the scaled observations by ${t_{i}}$, $i=1,2,\dots ,N$; they form the domain $(\min \{{t_{i}}\},\max \{{t_{i}}\})$. We divide this interval into a grid with $m=50$ uniformly spaced nodes. Let ${l_{i}^{\mathrm{emp}}}$, $i=1,2,\dots ,m$, be the empirical PDF values at these nodes. We derive them as
\[ {l_{i}^{\mathrm{emp}}}=\frac{m{N_{i}}}{N(\max \{{t_{i}}\}-\min \{{t_{i}}\})},\]
where ${N_{i}}$ is the number of observations falling in the i-th subinterval. Also, let us denote by γ the set of parameters of a min-Kies distribution and by ${l_{i}^{\mathrm{Kies}}}(\gamma )$, $i=1,2,\dots ,m$, the corresponding PDF values of the distribution with parameters γ. Using a modification of the standard least squares method – absolute deviations on a logarithmic scale – we define the cost function as
(33)
\[ L(\gamma ):={\sum \limits_{i=1}^{m}}\big|\ln \big({l_{i}^{\mathrm{emp}}}+\epsilon \big)-\ln \big({l_{i}^{\mathrm{Kies}}}(\gamma )+\epsilon \big)\big|.\]
Note that we introduce a logarithmic correction in formula (33) because some parameters lead to an infinitely large initial PDF value. This logarithm necessitates the use of an additional constant ϵ because some of the empirical PDF values are zero. We set $\epsilon =0.01$.
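The construction of the empirical PDF values and of cost function (33) can be sketched as follows. The sample here is synthetic (a uniform placeholder instead of the scaled between-shock periods), so the numbers are purely illustrative.

```python
import numpy as np

# Synthetic placeholder sample; in the paper the t_i are the scaled
# between-shock periods.  Everything below is illustrative.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=357)

N, m, eps = len(t), 50, 0.01
lo, hi = t.min(), t.max()

# Empirical PDF values on m subintervals of (min t_i, max t_i)
counts, _ = np.histogram(t, bins=m, range=(lo, hi))
l_emp = m * counts / (N * (hi - lo))

def cost(l_model):                     # cost function (33)
    return float(np.sum(np.abs(np.log(l_emp + eps) - np.log(l_model + eps))))

# A model that reproduces the empirical PDF exactly has zero cost
print(cost(l_emp))   # -> 0.0
```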
We calibrate the min-Kies distribution with one, two, three, and four components by minimizing cost function (33) over all possible parameter sets γ. As mentioned above, the values reported in [20] are calibrated under the assumption that the Kies distribution is defined on the interval $(0,1)$. We now recalibrate abandoning this restriction, i.e. the interval endpoints are included in the parameter set γ over which we search. We minimize cost function (33) using nonlinear programming since we face a multidimensional optimization problem – note that the min-Kies models have four, six, eight, and ten parameters, respectively. Since the starting point is essential for the minimization algorithm, we use the already derived parameters as the initial point for the next Kies model. The obtained values are reported in Table 2, in the first five columns, labeled with the letter (A). The results derived in [20] are in column (A0). The number n in the first line shows the number of components of the min-Kies distribution. We can draw two major conclusions. First, the distribution's domain is very close to the unit interval. Second, it is natural that additional components lead to a better fit. We can see from the obtained errors that the two-component distribution leads to an admissible result, whereas three and four components provide very good estimations. We present all PDFs in Figure 2a.
vmsta244_g002.jpg
Fig. 2.
Applications
Table 2.
Numerical estimations
par. (A0) (A1) (A2) (A3) (A4) (B1) (B2) (B3) (B4)
n 1 1 2 3 4 1 2 3 4
${\lambda _{0}}$ 15.7857 15.1377 3053.03 19444234.9 11623604.4 101.708 13.4539 74.0164 193.5577
${\lambda _{1}}$ – – 6.6251 32.1833 15.6489 – 7.2452 0.0123 0.0078
${\lambda _{2}}$ – – – 3.9017 3.2997 – – 8.5378 0.0111
${\lambda _{3}}$ – – – – 0.3394 – – – 9.1257
${\beta _{0}}$ 0.7120 0.6800 4.8953 11.8106 9.6112 2.3473 13.7467 13.3751 14.4112
${\beta _{1}}$ – – 0.3609 1.8377 1.3580 – 1.2393 6.1671 6.9765
${\beta _{2}}$ – – – 0.2259 0.2136 – – 1.2787 5.9488
${\beta _{3}}$ – – – – 0.1186 – – – 1.2987
a 0 0.0048 0.0064 0.0077 0.0080 0.0257 1.1141 1.1132 1.1125
b 1 0.9900 1.0076 1.0165 0.9912 18.8619 8.3748 8.9268 9.1328
error 25.3491 25.2108 22.1423 21.3855 21.0382 4.7166 3.4270 3.3980 3.3908

7.2 Unemployment insurance issues

We now use monthly historical data for the unemployment insurance issues in the period between 1971 and 2018 – a total of 574 observations. The data can be found at https://data.worlddatany-govns8zxewg or in [17, pp. 162–164]. The same data are also used in [2, 6, 22]. We divide all values by $50\hspace{0.1667em}000$ since their minimum is $49\hspace{0.1667em}263$. Thus the range turns from $[49\hspace{0.1667em}263,308\hspace{0.1667em}352]$ to $[0.9853,6.1670]$. This scaling leads to a more convenient domain as well as to some computational simplifications. The results are reported in the second part of Table 2. We can see that the two-, three-, and four-component distributions approximate similar domains, namely near the interval $(1,9)$. Also, the returned errors are statistically indistinguishable. On the contrary, the original Kies distribution ($n=1$) returns a quite different support, $(0.0257,18.8619)$, as well as a significantly larger error. Hence, we conclude that a single additional component is sufficient and of significant importance. All PDFs can be viewed in Figure 2b. Note that the logarithmic correction in the cost function (33) leads to a better fit in the tails than in the distribution's center.
We can see a major difference between these examples – the initial value of the first PDF is infinite, whereas it is zero for the second one. This is due to the coefficient ${\beta _{n}}$ and its position w.r.t. one, see Proposition 3.7.

Acknowledgement

The authors would like to express sincere gratitude to the editor Prof. Yuliya Mishura and to the anonymous reviewers for the helpful and constructive comments which substantially improved the quality of this paper.

References

[1] 
Afify, A., Gemeay, A., Alfaer, N., Cordeiro, G., Hafez, E.: Power-modified Kies-exponential distribution: Properties, classical and Bayesian inference with an application to engineering data. Entropy 24(7), 883 (2022). MR4467767. https://doi.org/10.3390/e24070883
[2] 
Ahmad, Z., Mahmoudi, E., Roozegar, R., Alizadeh, M., Afify, A.: A new exponential-X family: modeling extreme value data in the finance sector. Math. Probl. Eng. 2021, 1–14 (2021). https://doi.org/10.1155/2021/2394931
[3] 
Al-Babtain, A., Shakhatreh, M., Nassar, M., Afify, A.: A new modified Kies family: Properties, estimation under complete and type-II censored samples, and engineering applications. Mathematics 8(8), 1345 (2020). MR4199201. https://doi.org/10.3390/math8081345
[4] 
Alsubie, A.: Properties and applications of the modified Kies–Lomax distribution with estimation methods. J. Math. 2021, 1–18 (2021). MR4346604. https://doi.org/10.1155/2021/1944864
[5] 
Gupta, R., Bradley, D.: Representing the mean residual life in terms of the failure rate. Math. Comput. Model. 37(12–13), 1271–1280 (2003). MR1996036. https://doi.org/10.1016/S0895-7177(03)90038-0
[6] 
He, W., Ahmad, Z., Afify, A., Goual, H.: The arcsine exponentiated-X family: validation and insurance application. Complexity 2020, 1–18 (2020). MR4218892. https://doi.org/10.3390/e22060603
[7] 
Kies, J.: The strength of glass performance. In: Naval Research Lab Report, Washington, D.C., vol. 5093 (1958)
[8] 
Kumar, C.S., Dharmaja, S.: On some properties of Kies distribution. Metron 72(1), 97–122 (2014). MR3176964. https://doi.org/10.1007/s40300-013-0018-8
[9] 
Kumar, C.S., Dharmaja, S.: The exponentiated reduced Kies distribution: Properties and applications. Commun. Stat., Theory Methods 46(17), 8778–8790 (2017). MR3680792. https://doi.org/10.1080/03610926.2016.1193199
[10] 
Kumar, C.S., Dharmaja, S.: On modified Kies distribution and its applications. J. Stat. Res. 51(1), 41–60 (2017). MR3702285. https://doi.org/10.47302/jsr.2017510103
[11] 
Lai, C.-D.: Generalized Weibull distributions. In: Generalized Weibull Distributions, pp. 23–75. Springer (2014). MR3115122. https://doi.org/10.1007/978-3-642-39106-4_2
[12] 
McCool, J.: Using the Weibull Distribution: Reliability, Modeling, and Inference, vol. 950. John Wiley & Sons (2012). MR3014584. https://doi.org/10.1002/9781118351994
[13] 
Rinne, H.: The Weibull Distribution: A Handbook. Chapman and Hall/CRC (2008). MR2477856
[14] 
Sanku, D., Nassarn, M., Kumar, D.: Moments and estimation of reduced Kies distribution based on progressive type-II right censored order statistics. Hacet. J. Math. Stat. 48(1), 332–350 (2019). MR3976180. https://doi.org/10.15672/hjms.2018.611
[15] 
Sendov, B.: Hausdorff Approximations, vol. 50. Springer (1990). MR1078632. https://doi.org/10.1007/978-94-009-0673-0
[16] 
Sobhi, M.A.: The modified Kies–Fréchet distribution: properties, inference and application. AIMS Math. 6, 4691–4714 (2021). MR4220431. https://doi.org/10.3934/math.2021276
[17] 
Vasileva, M., Kyurkchiev, N.: Insurance Mathematics. Plovdiv University Press (2023) (in Bulgarian).
[18] 
Weibull, W.: A statistical distribution function of wide applicability. J. Appl. Mech. 18(3), 293–297 (1951). https://doi.org/10.1115/1.4010337
[19] 
Zaevski, T., Kyurkchiev, N.: Some notes on the four-parameters Kies distribution. C. R. Acad. Bulg. Sci. 75(10), 1403–1409 (2022). MR4504780
[20] 
Zaevski, T., Kyurkchiev, N.: On some composite Kies families: distributional properties and saturation in Hausdorff sense. Mod. Stoch. Theory Appl. 10(3), 287–312 (2023). MR4608189. https://doi.org/10.15559/23-vmsta227
[21] 
Zaevski, T., Nedeltchev, D.: From BASEL III to BASEL IV and beyond: Expected shortfall and expectile risk measures. Int. Rev. Financ. Anal. 87, 102645 (2023). https://doi.org/10.1016/j.irfa.2023.102645
[22] 
Zhenwu, Y., Ahmad, Z., Almaspoor, Z., Khosa, S.: On the genesis of the Marshall-Olkin family of distributions via the T-X family approach: Statistical modeling. Comput. Mater. Continua 67(1), 753–760 (2021). MR4417175. https://doi.org/10.5269/bspm.53071

Copyright
© 2024 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Exponential distribution Weibull distribution Kies distribution min- and max-distributions Hausdorff saturation

MSC2020
41A40 41A46 60E05 62E17

Funding
The first author is financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, project No BG-RRP-2.004-0008.
The second author is financed by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, project No BG-RRP-2.004-0001-C01.


vmsta244_g001.jpg
Fig. 1.
Min- and max-Kies PDFs
Table 1.
Parameters
parameter (A1) (A2) (A3) (B1) (B2) (B3)
n 2 2 2 5 10 5
${\lambda _{1}}$ 1 1 1 48.8656 3.8367 7.7266
${\lambda _{2}}$ 2 2 2 33.4292 9.5941 0.8782
${\lambda _{3}}$ – – – 5.6038 2.2675 7.4407
${\lambda _{4}}$ – – – 2.9209 2.7862 0.5300
${\lambda _{5}}$ – – – 2.4441 6.3067 4.4290
${\lambda _{6}}$ – – – – 2.6047 –
${\lambda _{7}}$ – – – – 8.7562 –
${\lambda _{8}}$ – – – – 9.5910 –
${\lambda _{9}}$ – – – – 2.0228 –
${\lambda _{10}}$ – – – – 0.9302 –
${\beta _{1}}$ 2 2 2 48.5290 41.8723 39.8993
${\beta _{2}}$ 0.5 1 1.5 38.6619 37.1788 35.9901
${\beta _{3}}$ – – – 34.0970 34.0846 32.7791
${\beta _{4}}$ – – – 5.8855 31.9772 27.8533
${\beta _{5}}$ – – – 1.0243 29.3628 1.6168
${\beta _{6}}$ – – – – 26.4715 –
${\beta _{7}}$ – – – – 23.3149 –
${\beta _{8}}$ – – – – 16.1187 –
${\beta _{9}}$ – – – – 1.8369 –
${\beta _{10}}$ – – – – 0.8744 –
saturation, prime 0.3678 0.4806 0.5469 0.4753 0.5164 0.4891
saturation, dual 0.9611 0.9635 0.9655 0.9999 0.9999 0.9999
compl. sat., prime 0.9611 0.9635 0.9655 0.9999 0.9999 0.9999
compl. sat., dual 0.3678 0.4806 0.5469 0.4753 0.5164 0.4891
Theorem 3.5.
Let ${P_{1}}(\cdot )$ and ${P_{2}}(\cdot )$ belong to the set $\mathcal{P}$ – note that this set is closed w.r.t. the sum-operator. Let us denote by ${\xi _{P}}$ a min-Kies distributed random variable with underlying polynomial function $P(\cdot )$. The set of such independent random variables is closed w.r.t. the min-operation in the sense that the random variables $\min \{{\xi _{{P_{1}}}},{\xi _{{P_{2}}}}\}$ and ${\xi _{{P_{1}}+{P_{2}}}}$ are identically distributed.

MSTA

  • Online ISSN: 2351-6054
  • Print ISSN: 2351-6046