1 Introduction
Several extensions of the exponential distribution are available in the stochastic literature. They are very useful in different practical areas including the engineering sciences, meteorology and hydrology, communications and telecommunications, energetics, the chemical and metallurgical industries, epidemiology, insurance and banking, etc. One of the most important is the Weibull distribution, first introduced in [18]. We refer also to the books [11–13] for some relatively new results. The original Kies distribution, first introduced in [7], arises by applying the fractional linear transformation $t=\frac{y}{y+1}\Leftrightarrow y=\frac{t}{1-t}$ to the Weibull distribution. This way its domain turns from the positive real half-line (Weibull) into the interval $(0,1)$. Analogously, the transformation $t=\frac{by+a}{y+1}\Leftrightarrow y=\frac{t-a}{b-t}$, $a\lt b$, leads to the Kies distribution on the interval $(a,b)$ – we refer to [8, 14, 19].
Later many authors turned to different modifications of these distributions. Refs. [1, 3, 9, 10] use a power transformation to define new families. Some composite distributions are proposed in [3]; see also [1, 4, 16, 20].
Alternatively, we propose a polynomial transformation to be used before the fractional linear one leading to the Kies distribution. The importance of the arising distribution is due to the following property. It turns out that if ${\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}$ are independent Kies distributed random variables defined on one and the same domain, then the new distribution describes the random variable $\min \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$ – this motivates us to call it the minKies distribution. Generally speaking, the mass of this distribution is situated more in the left part of its domain. We examine its properties, paying special attention to the probability density function (PDF, hereafter). It exhibits quite variable behavior due to the polynomial component – the value at the left endpoint can be zero, finite, or infinitely large; many peaks are possible, etc. We discuss also the cumulative distribution function (CDF), its complement (CCDF), the quantile function (QF), some expectations, etc. We study also the tail behavior using some risk measures arising in financial mathematics – Value-at-Risk ($\mathit{VaR}$), Average-Value-at-Risk ($\mathit{AVaR}$), and the Expectile-Based-Risk-Measure ($\mathit{ERM}$). Another important characteristic we examine is the so-called saturation in the sense of the Hausdorff distance. In fact, it measures the speed of occurrence of the random variable – a property which determines its large usefulness in many real-life fields. We derive some semi-closed form formulas.
Analogously to the minKies distribution we define the maxKies one – it describes the random variable $\max \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$. We explore its properties using some duality arguments. They are based on a transformation which symmetrically exchanges the left and right parts of the distribution; thus the mass of a maxKies distribution is positioned more in the right part of the domain.
Finally, we present some numerical results using real statistical data for the S&P500 financial index on the one side, and for unemployment insurance issues on the other. Note that we can describe a left-skewed sample as well as a right-skewed one by the min- or maxKies distributions.
The paper is structured as follows. We present the base we use later in Section 2. The properties of the minKies distribution are examined in Section 3. Its tail behavior and the Hausdorff saturation are investigated in Sections 4 and 5. The maxKies distribution is considered in Section 6. The numerical results are provided in Section 7.
2 Preliminaries
The following notations shall be used hereafter: an uppercase letter for the cumulative distribution function of a distribution, the overlined letter for the complementary cumulative distribution function, the corresponding lowercase letter for the probability density function, and the letter Q for the quantile function. Thus if $F(t)$ is the CDF, then $\overline{F}(t)$, $f(t)$, and ${Q_{F}}(t)$ are the corresponding CCDF, PDF, and QF, respectively.
Let $a\lt b$, and let λ and β be positive constants. The Kies distribution on the domain $(a,b)$ is defined through its CDF as
(1)
\[ H(t)=1-\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
Hence, its CCDF is
\[ \overline{H}(t)=\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
The PDF can be obtained by differentiating CDF (1) as
(2)
\[ h(t)=\lambda \beta (b-a)\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}}\exp \bigg(-\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}\bigg).\]
To characterize this distribution we need to obtain the shape of PDF (2). We do this in the proposition below, following the approach used in Proposition 2.1 from [20].
Proposition 2.1.
Let us consider the Kies distribution with parameter set $\{a,b,\lambda ,\beta \}$. The right endpoint of the PDF is zero, $h(b)=0$. Let the function $\gamma (t)$ be defined as
(3)
\[ \gamma (t)=\lambda \beta (b-a){\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}-2t+a+b-\beta (b-a).\]
The following statements characterize the whole PDF.

1. If $\beta =1$, then the left endpoint of the PDF is $h(a)=\frac{\lambda }{b-a}$. If $\lambda \ge 2$, then the PDF is a decreasing function. Otherwise, if $\lambda \lt 2$, then the PDF increases in the interval $(a,b-\lambda \frac{b-a}{2})$ and decreases for $t\in (b-\lambda \frac{b-a}{2},b)$.

2. If $\beta \gt 1$, then $h(a)=0$. Function (3) has a unique root in the interval $(a,b)$ and the PDF increases before it and decreases after.

3. If $\beta \lt 1$, then $h(a)=\infty $. We need the derivative of function (3), which is
(4)
\[ {\gamma ^{\prime }}(t)=\lambda {\beta ^{2}}{(b-a)^{2}}\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}}-2,\]
and the point
(5)
\[ \overline{t}=\frac{a+b-\beta (b-a)}{2}.\]
If ${\gamma ^{\prime }}(\overline{t})\ge 0$, then the PDF is a decreasing function. The same holds if ${\gamma ^{\prime }}(\overline{t})\lt 0$ but $\gamma ({\overline{t}_{2}})\ge 0$, where ${\overline{t}_{1}}\lt {\overline{t}_{2}}$ are the roots of the equation ${\gamma ^{\prime }}(t)=0$. Otherwise, the equation $\gamma (t)=0$ has two roots ${t_{1}}\lt {t_{2}}$ in the interval $(a,b)$, and the PDF decreases in $(a,{t_{1}})$, increases in $({t_{1}},{t_{2}})$, and decreases again in $({t_{2}},b)$.
Proof.
The PDF values at the endpoints can be derived directly from formula (2). We have to obtain the PDF shape. Suppose first that $\beta =1$. The PDF and its derivative are
\[\begin{aligned}{}h(t)& =\lambda \frac{b-a}{{(b-t)^{2}}}\exp \bigg(-\lambda \frac{t-a}{b-t}\bigg),\\ {} {h^{\prime }}(t)& =\lambda \frac{b-a}{{(b-t)^{4}}}\exp \bigg(-\lambda \frac{t-a}{b-t}\bigg)\big[-2t+2b-\lambda (b-a)\big].\end{aligned}\]
The PDF shape follows the form of the derivative ${h^{\prime }}(t)$.
Suppose now that $\beta \ne 1$. The PDF can be written as
(6)
\[ h(t)={\eta ^{\prime }}(t){e^{-\eta (t)}},\]
where the function $\eta (t)$ is
\[ \eta (t)=\lambda {\bigg(\frac{t-a}{b-t}\bigg)^{\beta }}.\]
The derivative of the PDF can be written as
\[ {h^{\prime }}(t)={e^{-\eta (t)}}\big({\eta ^{\prime\prime }}(t)-{\big({\eta ^{\prime }}(t)\big)^{2}}\big),\]
where the derivatives of the function $\eta (t)$ are
\[\begin{aligned}{}{\eta ^{\prime }}(t)& =\lambda \beta (b-a)\frac{{(t-a)^{\beta -1}}}{{(b-t)^{\beta +1}}},\\ {} {\eta ^{\prime\prime }}(t)& =\lambda \beta (b-a)\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\big[2t+\beta (b-a)-(a+b)\big].\end{aligned}\]
Some calculations lead to
(7)
\[ {h^{\prime }}(t)=-{e^{-\eta (t)}}\lambda \beta (b-a)\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\gamma (t),\]
where the function $\gamma (t)$ is defined in equation (3). Its derivative is given by formula (4). If $\beta \gt 1$, then the derivative ${\gamma ^{\prime }}(t)$ increases from a negative to a positive value in the interval $(a,b)$. Therefore the function $\gamma (t)$ starts from the negative value $\gamma (a)=(b-a)(1-\beta )$, decreases to a negative minimum, and then increases to infinity. Therefore, derivative (7) has a unique root in the interval $(a,b)$ and hence PDF (6) increases before it and decreases after.
Suppose now that $\beta \lt 1$. The second derivative of function (3) is
\[ {\gamma ^{\prime\prime }}(t)=\lambda {\beta ^{2}}{(b-a)^{2}}\frac{{(t-a)^{\beta -2}}}{{(b-t)^{\beta +2}}}\big[2t+\beta (b-a)-(a+b)\big].\]
Let $\overline{t}$ be defined as in (5). Note that $\overline{t}\in (a,b)$ since $\beta \lt 1$. The second derivative ${\gamma ^{\prime\prime }}(t)$ is negative for $t\in (a,\overline{t})$ and positive for $t\in (\overline{t},b)$, and thus the derivative ${\gamma ^{\prime }}(t)$ decreases for $t\in (a,\overline{t})$ and increases for $t\in (\overline{t},b)$. If ${\gamma ^{\prime }}(\overline{t})\ge 0$, then ${\gamma ^{\prime }}(t)\gt 0$ for every $t\in (a,b)$. Hence $\gamma (t)$ is an increasing function and thus it is positive because $\gamma (a)=(1-\beta )(b-a)\gt 0$. Therefore PDF (6) is a decreasing function.
Suppose that ${\gamma ^{\prime }}(\overline{t})\lt 0$, and therefore the equation ${\gamma ^{\prime }}(t)=0$ has two roots ${\overline{t}_{1}}\lt {\overline{t}_{2}}$, ${\gamma ^{\prime }}(t)\gt 0$ for $t\in \{(a,{\overline{t}_{1}})\cup ({\overline{t}_{2}},b)\}$, and ${\gamma ^{\prime }}(t)\lt 0$ for $t\in ({\overline{t}_{1}},{\overline{t}_{2}})$. If $\gamma ({\overline{t}_{2}})\ge 0$, then the function $\gamma (t)$ is positive in the whole interval $(a,b)$ and thus PDF (6) is again a decreasing function. Finally, if $\gamma ({\overline{t}_{2}})\lt 0$, then the equation $\gamma (t)=0$ has two roots ${t_{1}}\lt {t_{2}}$ in the interval $(a,b)$, $\gamma (t)\gt 0$ for $t\in \{(a,{t_{1}})\cup ({t_{2}},b)\}$, and $\gamma (t)\lt 0$ for $t\in ({t_{1}},{t_{2}})$. This confirms the shape of PDF (6). □
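The case $\beta \gt 1$ of Proposition 2.1 can be illustrated numerically: the single mode of the Kies PDF is the root of the auxiliary function $\gamma (t)$. A minimal Python sketch follows; the parameter values $a=1$, $b=3$, $\lambda =1$, $\beta =2$ are chosen for illustration only.

```python
import math

def gamma_fn(t, a, b, lam, beta):
    # the auxiliary function gamma(t) of Proposition 2.1; for beta > 1
    # its unique root in (a, b) is the mode of the Kies PDF
    u = (t - a) / (b - t)
    return lam * beta * (b - a) * u**beta - 2.0 * t + a + b - beta * (b - a)

def kies_pdf(t, a, b, lam, beta):
    # PDF (2) of the Kies distribution on (a, b)
    u = (t - a) / (b - t)
    return lam * beta * (b - a) * u**(beta - 1.0) / (b - t)**2 * math.exp(-lam * u**beta)

def bisect(f, lo, hi, iters=200):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

a, b, lam, beta = 1.0, 3.0, 1.0, 2.0           # illustrative values, beta > 1
mode = bisect(lambda t: gamma_fn(t, a, b, lam, beta), a + 1e-9, b - 1e-9)
grid = [a + (b - a) * k / 100_000 for k in range(1, 100_000)]
grid_mode = max(grid, key=lambda t: kies_pdf(t, a, b, lam, beta))
```

For these particular values the root of $\gamma $ is exactly $t=2$, and a brute-force grid search over the PDF agrees with the root-finding result.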
3 MinKies distribution
We establish the minKies family through the following definitions.
Definition 3.1.
Let the set $\mathcal{P}$ consist of all polynomial-style functions defined on the interval $x\in (0,\infty )$,
(8)
\[ P(x;\lambda ,\beta )={\sum \limits_{i=1}^{n}}{\lambda _{i}}{x^{{\beta _{i}}}},\]
where n is a positive integer, λ and β are positive vectors of dimension n, and ${\beta _{1}}\gt {\beta _{2}}\gt \cdots \gt {\beta _{n}}\gt 0$. We denote by $p(\cdot ;\lambda ,\beta )$ the inverse function of $P(\cdot ;\lambda ,\beta )$ – it exists since $P(\cdot ;\lambda ,\beta )$ is increasing.
The following lemma holds.
Lemma 3.2.
Let the functions $A(\cdot )$ and $\alpha (\cdot )$ be defined as
(9)
\[ \begin{aligned}{}A(t)& =P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg),\\ {} \alpha (y)& =\frac{a+bp(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )},\end{aligned}\]
where the functions $P(\cdot ;\lambda ,\beta )$ and $p(\cdot ;\lambda ,\beta )$ are the same as in Definition 3.1. Then $\alpha (y)$ is the inverse function of $A(t)$ on the domains $y\in (0,\infty )$ and $t\in (a,b)$.
Proof.
Note first that the inverse function really exists, because the functions $\frac{t-a}{b-t}$ and $P(x)$ are increasing. We have only to check $A(\alpha (y))=y$:
\[ A\big(\alpha (y)\big)=P\bigg(\frac{\alpha (y)-a}{b-\alpha (y)};\lambda ,\beta \bigg)=P\big(p(y;\lambda ,\beta );\lambda ,\beta \big)=y.\]
□
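Lemma 3.2 can be checked numerically. Since $p(\cdot ;\lambda ,\beta )$ has no closed form for $n\gt 1$, the sketch below (Python; illustrative parameters) inverts P by bracketing and bisection and verifies $A(\alpha (y))=y$.

```python
import math

def P(x, lam, beta):
    # polynomial-style function of Definition 3.1
    return sum(l * x**bb for l, bb in zip(lam, beta))

def p_inv(y, lam, beta):
    # numeric inverse p(y); P is increasing on (0, inf), so bracket and bisect
    lo, hi = 0.0, 1.0
    while P(hi, lam, beta) < y:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P(mid, lam, beta) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def A(t, a, b, lam, beta):
    return P((t - a) / (b - t), lam, beta)

def alpha(y, a, b, lam, beta):
    q = p_inv(y, lam, beta)
    return (a + b * q) / (1.0 + q)

a, b = 1.0, 3.0                       # illustrative domain
lam, beta = [1.0, 2.0], [2.0, 0.5]    # illustrative parameters
for y in (0.1, 1.0, 5.0):
    assert abs(A(alpha(y, a, b, lam, beta), a, b, lam, beta) - y) < 1e-9
```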
Definition 3.3.
Let a and b be constants such that $0\le a\lt b$, and let n, λ, and β satisfy the conditions of Definition 3.1. We define a new distribution named minKies via its CDF as
(10)
\[ F(t)=1-\exp \bigg(-P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg).\]
Alternatively, we may define the new Kies-style distribution through its CCDF.
Corollary 3.4.
The CCDF of the minKies distribution can be obtained as the product of the corresponding original Kies CCDFs:
\[ \overline{F}(t)={\prod \limits_{i=1}^{n}}\overline{H}(t;a,b,{\lambda _{i}},{\beta _{i}}),\]
where $\overline{H}(t;a,b,{\lambda _{i}},{\beta _{i}})$ are the CCDFs of the original Kies distributions with parameters $\{a,b,{\lambda _{i}},{\beta _{i}}\}$.
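This product structure can be verified by simulation: sampling each original Kies component by the inverse-CDF method, taking the minimum, and comparing the empirical survival frequency with the product of the CCDFs. A Python sketch with illustrative parameters:

```python
import math, random

def kies_sample(u, a, b, lam, beta):
    # inverse-CDF sampling for one original Kies(a, b, lam, beta) variate
    w = (-math.log(1.0 - u) / lam) ** (1.0 / beta)
    return (a + b * w) / (1.0 + w)

def minkies_ccdf(t, a, b, lam, beta):
    # product of the component CCDFs (Corollary 3.4)
    x = (t - a) / (b - t)
    return math.exp(-sum(l * x**bb for l, bb in zip(lam, beta)))

random.seed(42)
a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]    # illustrative parameters
N, t0 = 200_000, 1.5
hits = sum(
    1 for _ in range(N)
    if min(kies_sample(random.random(), a, b, l, bb) for l, bb in zip(lam, beta)) > t0
)
emp = hits / N
assert abs(emp - minkies_ccdf(t0, a, b, lam, beta)) < 0.01
```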
The following theorem discloses the theoretical importance of the minKies distribution and motivates its name.
Theorem 3.5.
Let ${P_{1}}(\cdot )$ and ${P_{2}}(\cdot )$ belong to the set $\mathcal{P}$ – note that this set is closed w.r.t. the sum-operator. Let us denote by ${\xi _{P}}$ a minKies distributed random variable with underlying polynomial function $P(\cdot )$. The set of such independent random variables is closed w.r.t. the min-operation in the sense that the random variables $\min \{{\xi _{{P_{1}}}},{\xi _{{P_{2}}}}\}$ and ${\xi _{{P_{1}}+{P_{2}}}}$ are identically distributed.
Proof.
We need to slightly modify the proof of the well known fact that the product of the CDFs of two independent random variables is the CDF of their maximum. Using the lower index to distinguish the distribution terms in Corollary 3.4, we derive
\[\begin{aligned}{}{\overline{F}_{{P_{1}}+{P_{2}}}}(t)& ={\overline{F}_{{P_{1}}}}(t){\overline{F}_{{P_{2}}}}(t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t)\mathbb{P}({\xi _{{P_{2}}}}\gt t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t)\mathbb{P}({\xi _{{P_{2}}}}\gt t\mid {\xi _{{P_{1}}}}\gt t)\\ {} & =\mathbb{P}({\xi _{{P_{1}}}}\gt t,{\xi _{{P_{2}}}}\gt t)\\ {} & =\mathbb{P}\big(\min \{{\xi _{{P_{1}}}},{\xi _{{P_{2}}}}\}\gt t\big).\end{aligned}\]
□
The corollary below gives the PDF of the minKies distribution.
Corollary 3.6.
The probability density of the minKies distribution defined on the interval $(a,b)$ is
(11)
\[\begin{aligned}{}f(t)& =\exp \bigg(-P\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg){P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\frac{b-a}{{(b-t)^{2}}}\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}}}\Bigg)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}\\ {} & =\overline{F}(t)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}.\end{aligned}\]
The behavior of the minKies PDF given in formulas (11) is more complicated than that of the original Kies one, since it depends on all basic distributions. Hence, we cannot derive closed form results analogous to Proposition 2.1. However, the following statements hold.
Proposition 3.7.
The right endpoint of PDF (11) is $f(b)=0$. The following statements hold for the left endpoint: $f(a)=\infty $ if ${\beta _{n}}\lt 1$, $f(a)=\frac{{\lambda _{n}}}{b-a}$ if ${\beta _{n}}=1$, and $f(a)=0$ if ${\beta _{n}}\gt 1$.
Let the function $Y(y)$ be defined on the positive real half-line as
(12)
\[ Y(y)={P^{\prime\prime }}(y;\lambda ,\beta )(1+y)+2{P^{\prime }}(y;\lambda ,\beta )-{\big({P^{\prime }}(y;\lambda ,\beta )\big)^{2}}(1+y),\]
and let its roots be ${y_{1}}\lt {y_{2}}\lt \cdots \lt {y_{k}}$ for some integer k. The PDF has extrema at the points ${t_{j}}=\frac{{y_{j}}b+a}{{y_{j}}+1}$; the minima and maxima alternate, and the kind of the first extremum is determined by the behavior of the PDF at the left endpoint, as established in the proof below.
Proof.
The results when $n=1$ are proven in Proposition 2.1. Suppose now that $n\gt 1$. We derive the PDF value at the left endpoint using the second presentation from equations (11), having in mind ${\beta _{1}}\gt {\beta _{2}}\gt \cdots \gt {\beta _{n}}\gt 0$. Let us turn to the PDF shape. We derive the PDF's derivative using the first statement from (11) as
(13)
\[ {f^{\prime }}(t)={e^{-A(t)}}\big({A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\big),\]
where the function $A(\cdot )$ is defined in equation (9). Its derivatives are
\[\begin{aligned}{}{A^{\prime }}(t)& =\frac{b-a}{{(b-t)^{2}}}{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg),\\ {} {A^{\prime\prime }}(t)& =\frac{{(b-a)^{2}}}{{(b-t)^{4}}}{P^{\prime\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)+2\frac{b-a}{{(b-t)^{3}}}{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg).\end{aligned}\]
Therefore
\[\begin{aligned}{}& {A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\\ {} & \hspace{1em}=\frac{b-a}{{(b-t)^{3}}}\bigg[\frac{b-a}{b-t}{P^{\prime\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)+2{P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\\ {} & \hspace{2em}-\frac{b-a}{b-t}{\bigg({P^{\prime }}\bigg(\frac{t-a}{b-t};\lambda ,\beta \bigg)\bigg)^{2}}\bigg].\end{aligned}\]
Changing the variables as $y=\frac{t-a}{b-t}$, or equivalently $t=\frac{yb+a}{y+1}$, we derive
\[\begin{aligned}{}& {A^{\prime\prime }}(t)-{\big({A^{\prime }}(t)\big)^{2}}\\ {} & \hspace{1em}=\frac{{(y+1)^{3}}}{{(b-a)^{2}}}\big[(y+1){P^{\prime\prime }}(y;\lambda ,\beta )+2{P^{\prime }}(y;\lambda ,\beta )-(y+1){\big({P^{\prime }}(y;\lambda ,\beta )\big)^{2}}\big].\end{aligned}\]
Hence, derivative (13) can be written as
\[ {f^{\prime }}(t)={e^{-A(t)}}\frac{{(y+1)^{3}}}{{(b-a)^{2}}}Y(y),\]
where the function $Y(\cdot )$ is given by formula (12). Hence the minKies PDF has extrema namely at the roots of the function $Y(\cdot )$, ${y_{1}},{y_{2}},\dots ,{y_{k}}$, transformed to ${t_{1}}=\frac{{y_{1}}b+a}{{y_{1}}+1}$, ${t_{2}}=\frac{{y_{2}}b+a}{{y_{2}}+1},\dots ,{t_{k}}=\frac{{y_{k}}b+a}{{y_{k}}+1}$. We have already proved that $f(a)=\infty $ when ${\beta _{n}}\lt 1$. Having in mind that $f(b)=0$ we conclude that k is even, the minima are for the odd j's, and the maxima are for the even ones. Analogously, we obtain the PDF shape when ${\beta _{n}}\gt 1$ having in mind $f(a)=0$ and $f(b)=0$.
Suppose now that ${\beta _{n}}=1$. We need to find the value $Y({0^{+}})$ to obtain the PDF behavior at the left endpoint $t=a$. Let us decompose the function $P(y;\lambda ,\beta )$ as
\[\begin{aligned}{}P(y;\lambda ,\beta )& ={P_{1}}(y;\lambda ,\beta )+{P_{0}}(y;\lambda ,\beta ),\hspace{2.5pt}\hspace{2.5pt}\text{where}\\ {} {P_{0}}(y)& ={\lambda _{n}}y,\\ {} {P_{1}}(y)& ={\sum \limits_{i=1}^{n-1}}{\lambda _{i}}{y^{{\beta _{i}}}}.\end{aligned}\]
Hence, the value of the function $Y(y)$ near the left endpoint $y=0$ can be written as
(14)
\[\begin{aligned}{}Y\big({0^{+}}\big)& ={P^{\prime\prime }}\big({0^{+}};\lambda ,\beta \big)+2{P^{\prime }}\big({0^{+}};\lambda ,\beta \big)-{\big({P^{\prime }}\big({0^{+}};\lambda ,\beta \big)\big)^{2}}\\ {} & ={P^{\prime\prime }_{0}}\big({0^{+}}\big)+2{P^{\prime }_{0}}\big({0^{+}}\big)-{\big({P^{\prime }_{0}}\big({0^{+}}\big)\big)^{2}}\\ {} & \hspace{1em}+{P^{\prime\prime }_{1}}\big({0^{+}}\big)+2{P^{\prime }_{1}}\big({0^{+}}\big)-{\big({P^{\prime }_{1}}\big({0^{+}}\big)\big)^{2}}\\ {} & \hspace{1em}-2{P^{\prime }_{0}}\big({0^{+}}\big){P^{\prime }_{1}}\big({0^{+}}\big).\end{aligned}\]
We have ${P^{\prime\prime }_{0}}({0^{+}})=0$ and ${P^{\prime }_{0}}({0^{+}})={\lambda _{n}}$. Also, the inequalities ${\beta _{1}}\gt \cdots \gt {\beta _{n-1}}\gt {\beta _{n}}=1$ show ${P^{\prime }_{1}}({0^{+}})=0$. Hence, equation (14) turns to
\[ Y\big({0^{+}}\big)=2{\lambda _{n}}-{\lambda _{n}^{2}}+{P^{\prime\prime }_{1}}\big({0^{+}}\big).\]
If ${\beta _{n-1}}\lt 2$, then ${P^{\prime\prime }_{1}}({0^{+}})=+\infty $ and therefore $Y({0^{+}})=+\infty $, too. Therefore the PDF $f(t)$ increases at its left point $t=a$ and thus the number of extrema k is odd and the odd ones are maxima.
Otherwise, if ${\beta _{n-1}}\gt 2$, then ${P^{\prime\prime }_{1}}({0^{+}})=0$ and hence $Y({0^{+}})=2{\lambda _{n}}-{\lambda _{n}^{2}}$. If ${\lambda _{n}}\lt 2$, then $Y({0^{+}})\gt 0$ and the same reasoning as above leads to the identical results. On the other hand, if ${\lambda _{n}}\ge 2$, then $Y({0^{+}})\le 0$ and hence the PDF $f(t)$ decreases at the left point $t=a$. Therefore k is even and the minima are at the odd positions.
It remains to investigate the case ${\beta _{n-1}}=2$. Note that ${P^{\prime\prime }_{1}}({0^{+}})=2{\lambda _{n-1}}$ regardless of whether $n=2$ or $n\gt 2$, because ${\beta _{1}}\gt \cdots \gt {\beta _{n-2}}\gt 2$ when $n\gt 2$. Therefore
(15)
\[ Y\big({0^{+}}\big)=2{\lambda _{n-1}}+2{\lambda _{n}}-{\lambda _{n}^{2}}.\]
Considering formula (15) as a quadratic function w.r.t. ${\lambda _{n}}$, we see that $Y({0^{+}})\gt 0$ for ${\lambda _{n}}\in (0,1+\sqrt{1+2{\lambda _{n-1}}})$ and $Y({0^{+}})\lt 0$ for ${\lambda _{n}}\gt 1+\sqrt{1+2{\lambda _{n-1}}}$. The same reasoning as above finishes the proof. □
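The three cases for the left endpoint can be observed numerically. The sketch below (Python; parameters chosen for illustration) evaluates PDF (11) just right of $t=a$:

```python
import math

def minkies_pdf(t, a, b, lam, beta):
    # second presentation in equations (11)
    u = (t - a) / (b - t)
    tail = math.exp(-sum(l * u**bb for l, bb in zip(lam, beta)))
    return tail * (b - a) / (b - t)**2 * sum(l * bb * u**(bb - 1.0) for l, bb in zip(lam, beta))

a, b, t = 1.0, 3.0, 1.0 + 1e-8        # a point just right of the left endpoint
f_inf  = minkies_pdf(t, a, b, [1.0, 2.0], [2.0, 0.5])   # beta_n < 1: blows up
f_fin  = minkies_pdf(t, a, b, [1.0, 2.0], [2.0, 1.0])   # beta_n = 1: ~ lam_n/(b-a) = 1
f_zero = minkies_pdf(t, a, b, [1.0, 2.0], [2.0, 1.5])   # beta_n > 1: vanishes
assert f_inf > 1e3 and abs(f_fin - 1.0) < 1e-6 and f_zero < 1e-3
```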
We present in Figure 1, in the blue curves, the PDFs of some minKies distributions; the corresponding parameters are reported in Table 1. The distribution domain is assumed to be the interval $(1,3)$, i.e. $a=1$ and $b=3$. As we can see, they exhibit variable shapes. Some typical ones can be viewed in Figure 1a, where we consider two-component distributions. The parameters used are ${\lambda _{1}}=1$, ${\lambda _{2}}=2$, ${\beta _{1}}=2$, and ${\beta _{2}}\in \{0.5,1,1.5\}$. Proposition 3.7 is confirmed – the PDF's left endpoint is infinity when ${\beta _{2}}=0.5$, it is $\frac{{\lambda _{2}}}{b-a}=1$ when ${\beta _{2}}=1$, and it is zero when ${\beta _{2}}=1.5$. Some PDFs with many peaks are presented in Figure 1b.
Table 1.
Parameters
parameter  (A1)  (A2)  (A3)  (B1)  (B2)  (B3) 
n  2  2  2  5  10  5 
${\lambda _{1}}$  1  1  1  48.8656  3.8367  7.7266 
${\lambda _{2}}$  2  2  2  33.4292  9.5941  0.8782 
${\lambda _{3}}$  –  –  –  5.6038  2.2675  7.4407 
${\lambda _{4}}$  –  –  –  2.9209  2.7862  0.5300 
${\lambda _{5}}$  –  –  –  2.4441  6.3067  4.4290 
${\lambda _{6}}$  –  –  –  –  2.6047  – 
${\lambda _{7}}$  –  –  –  –  8.7562  – 
${\lambda _{8}}$  –  –  –  –  9.5910  – 
${\lambda _{9}}$  –  –  –  –  2.0228  – 
${\lambda _{10}}$  –  –  –  —  0.9302  – 
${\beta _{1}}$  2  2  2  48.5290  41.8723  39.8993 
${\beta _{2}}$  0.5  1  1.5  38.6619  37.1788  35.9901 
${\beta _{3}}$  –  –  –  34.0970  34.0846  32.7791 
${\beta _{4}}$  –  –  –  5.8855  31.9772  27.8533 
${\beta _{5}}$  –  –  –  1.0243  29.3628  1.6168 
${\beta _{6}}$  –  –  –  –  26.4715  – 
${\beta _{7}}$  –  –  –  –  23.3149  – 
${\beta _{8}}$  –  –  –  –  16.1187  – 
${\beta _{9}}$  –  –  –  –  1.8369  – 
${\beta _{10}}$  –  –  –  –  0.8744  – 
saturation, prime  0.3678  0.4806  0.5469  0.4753  0.5164  0.4891 
saturation, dual  0.9611  0.9635  0.9655  0.9999  0.9999  0.9999 
compl. sat., prime  0.9611  0.9635  0.9655  0.9999  0.9999  0.9999 
compl. sat., dual  0.3678  0.4806  0.5469  0.4753  0.5164  0.4891 
Next we discuss the quantile function.
Proposition 3.8.
Let $y\in (0,1)$. The y-quantile of the minKies distribution is
\[ Q(y)=\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}.\]
See Definition 3.1 for the meaning of the functions $p(\cdot )$ and $P(\cdot )$.
Proof.
We have to prove $F(Q(y))=y$:
\[\begin{aligned}{}F\big(Q(y)\big)& =1-\exp \bigg(-P\bigg(\frac{Q(y)-a}{b-Q(y)};\lambda ,\beta \bigg)\bigg)\\ {} & =1-\exp \bigg(-P\bigg(\frac{\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}-a}{b-\frac{bp(-\ln (1-y);\lambda ,\beta )+a}{p(-\ln (1-y);\lambda ,\beta )+1}};\lambda ,\beta \bigg)\bigg)\\ {} & =1-\exp \big(-P\big(p\big(-\ln (1-y);\lambda ,\beta \big);\lambda ,\beta \big)\big)\\ {} & =y.\end{aligned}\]
□
An important property of the minKies distributions is that their moments are finite.
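The quantile formula of Proposition 3.8 also gives an inverse-transform sampling recipe, since p can be evaluated by bisection. The sketch below (Python; illustrative parameters) checks the identity $F(Q(y))=y$:

```python
import math

def P(x, lam, beta):
    return sum(l * x**bb for l, bb in zip(lam, beta))

def p_inv(y, lam, beta):
    # numeric inverse of the increasing function P
    lo, hi = 0.0, 1.0
    while P(hi, lam, beta) < y:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if P(mid, lam, beta) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Q(y, a, b, lam, beta):
    # quantile function of Proposition 3.8
    q = p_inv(-math.log(1.0 - y), lam, beta)
    return (b * q + a) / (q + 1.0)

def F(t, a, b, lam, beta):
    # minKies CDF
    return 1.0 - math.exp(-P((t - a) / (b - t), lam, beta))

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]    # illustrative parameters
for y in (0.05, 0.5, 0.95):
    t = Q(y, a, b, lam, beta)          # also usable for inverse-transform sampling
    assert abs(F(t, a, b, lam, beta) - y) < 1e-9
```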
Although the expectations cannot be derived in closed form, Lemma 3.2 allows us to present a numerical approach. It is based on the following proposition.
Proposition 3.10.
Let ξ be a minKies distributed random variable and the function $\alpha (\cdot )$ be defined as in formulas (9). Let also the function $\mu (\cdot )$ be such that $\mathbb{E}[\mu (\xi )]\lt \infty $. Under these assumptions the expectation $\mathbb{E}[\mu (\xi )]$ can be derived as the Laplace transform of the function $\mu \circ \alpha (y)\equiv \mu (\alpha (y))$ taken at the point one.
Proof.
Changing the variables as $y=A(t)\Leftrightarrow t=\alpha (y)$ – see formulas (9) – we derive
\[\begin{aligned}{}\mathbb{E}\big[\mu (\xi )\big]& ={\int _{a}^{b}}\Bigg(\mu (t)\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}}}\Bigg)\frac{b-a}{{(b-t)^{2}}}{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\beta _{i}}{\bigg(\frac{t-a}{b-t}\bigg)^{{\beta _{i}}-1}}\Bigg)dt\\ {} & ={\int _{a}^{b}}\mu (t){e^{-P(\frac{t-a}{b-t})}}d\bigg(P\bigg(\frac{t-a}{b-t}\bigg)\bigg)\\ {} & ={\int _{a}^{b}}\mu (t){e^{-A(t)}}dA(t)\\ {} & ={\int _{0}^{\infty }}\mu \big(\alpha (y)\big){e^{-y}}dy.\end{aligned}\]
This finishes the proof. □
The following relation between the minKies distribution and the gamma function stands.
Proposition 3.11.
Let the function $A(\cdot )$ be defined as in formulas (9). Then the gamma function can be presented as the expectation
\[ \Gamma (s)=\mathbb{E}\big[{\big(A(\xi )\big)^{s-1}}\big],\hspace{1em}s\gt 0,\]
where ξ is a minKies distributed random variable.
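Proposition 3.10 turns expectations into one-dimensional Laplace-type integrals. The Python sketch below (illustrative parameters with ${\beta _{n}}\gt 1$, so the PDF stays bounded) computes $\mathbb{E}[\xi ]$ both ways – via the Laplace representation and by direct integration of $t\,f(t)$:

```python
import math

def P(x, lam, beta):
    return sum(l * x**bb for l, bb in zip(lam, beta))

def p_inv(y, lam, beta):
    lo, hi = 0.0, 1.0
    while P(hi, lam, beta) < y:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if P(mid, lam, beta) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alpha(y, a, b, lam, beta):
    q = p_inv(y, lam, beta)
    return (a + b * q) / (1.0 + q)

def minkies_pdf(t, a, b, lam, beta):
    u = (t - a) / (b - t)
    tail = math.exp(-sum(l * u**bb for l, bb in zip(lam, beta)))
    return tail * (b - a) / (b - t)**2 * sum(l * bb * u**(bb - 1.0) for l, bb in zip(lam, beta))

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 1.5]    # beta_n > 1 keeps the PDF bounded

# E[xi] via Proposition 3.10: the Laplace transform of alpha at the point one,
# midpoint rule on [0, 30] (the e^{-y} tail beyond 30 is negligible)
n, top = 6000, 30.0
h = top / n
mean_lap = h * sum(alpha((k + 0.5) * h, a, b, lam, beta) * math.exp(-(k + 0.5) * h) for k in range(n))

# cross-check: direct integration of t * f(t) over (a, b)
m = 20_000
hm = (b - a) / m
mean_dir = hm * sum((a + (k + 0.5) * hm) * minkies_pdf(a + (k + 0.5) * hm, a, b, lam, beta) for k in range(m))
assert abs(mean_lap - mean_dir) < 2e-3
```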
4 Tail behavior
We investigate in this section the tail properties of the minKies distribution. We present some results based on several risk measures arising from the financial markets, namely, Value-at-Risk ($\mathit{VaR}$), Average-Value-at-Risk ($\mathit{AVaR}$), and the Expectile-Based-Risk-Measure ($\mathit{ERM}$); see Zaevski and Nedeltchev [21]. We also give a presentation of the mean residual life function ($\mathit{MRLF}$). Below we recall the definitions of these measures.
Definition 4.1.
Let $\epsilon \in (0,1)$ and let ξ be a random variable with CDF $F(\cdot )$, CCDF $\overline{F}(\cdot )$, and quantile function $Q(\cdot )$. The risk measures we use are defined as
\[\begin{aligned}{}\mathit{VaR}(\epsilon )& :=Q(\epsilon ),\\ {} \mathit{AVaR}(\epsilon )& :=\frac{1}{\epsilon }{\int _{0}^{\epsilon }}Q(u)du,\\ {} \overline{\mathit{AVaR}}(\epsilon )& :=\frac{1}{\epsilon }{\int _{1-\epsilon }^{1}}Q(u)du,\\ {} \mathit{MRLF}(t)& :=\mathbb{E}[\xi -t\mid \xi \gt t].\end{aligned}\]
The value $\mathit{ERM}(\epsilon )$ is defined as the unique solution w.r.t. x of the equation
(16)
\[ \epsilon \mathbb{E}\big[{(\xi -x)^{+}}\big]=(1-\epsilon )\mathbb{E}\big[{(\xi -x)^{-}}\big].\]
Remark 4.2.
In fact, the original definition of the $\mathit{VaR}$ is the negative of the quantile. We remove the minus sign since the domain of the Kies distribution is positioned on the positive real half-line. The $\mathit{AVaR}$ is derived by averaging all $\mathit{VaR}$'s below or above the ϵ-quantile.
The original definition of the expectile is the minimizer of the following quadratic problem
\[ \mathit{ERM}(\epsilon ):=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\big\{\mathbb{E}\big[\epsilon {\big({(\xi -x)^{+}}\big)^{2}}+(1-\epsilon ){\big({(\xi -x)^{-}}\big)^{2}}\big]\big\}.\]
Simple calculations show that the minimum is achieved at the solution of equation (16), which is unique. Note that the expectile is well defined for random variables with a finite second moment.
Definition 4.1 shows that the $\mathit{VaR}$ of the minKies distribution can be derived via Proposition 3.8. The next two propositions discuss the $\mathit{AVaR}$, $\mathit{ERM}$, and $\mathit{MRLF}$.
Proposition 4.3.
The $\mathit{AVaR}$ values of a minKies distributed random variable can be derived via the function $\alpha (\cdot )$ defined in equations (9) as
(17)
\[\begin{aligned}{}\mathit{AVaR}(\epsilon )& =\frac{1}{\epsilon }{\int _{0}^{-\ln (1-\epsilon )}}{e^{-y}}\alpha (y)dy,\\ {} \overline{\mathit{AVaR}}(\epsilon )& =\frac{1}{\epsilon }{\int _{-\ln \epsilon }^{\infty }}{e^{-y}}\alpha (y)dy.\end{aligned}\]
Proof.
Let the constants ${x_{1}}$ and ${x_{2}}$ be such that $0\le {x_{1}}\lt {x_{2}}\le 1$. We shall denote hereafter by ${I_{\{\cdot \}}}$ the indicator function. Changing the variables as $y=Q(u)\Leftrightarrow u=F(y)$ and using Proposition 3.10 for $\mu (y)=y{I_{y\in (Q({x_{1}}),Q({x_{2}}))}}$, we derive
\[\begin{aligned}{}{\int _{{x_{1}}}^{{x_{2}}}}Q(u)du& ={\int _{Q({x_{1}})}^{Q({x_{2}})}}ydF(y)\\ {} & =\mathbb{E}[\xi {I_{\xi \in (Q({x_{1}}),Q({x_{2}}))}}]\\ {} & ={\int _{0}^{\infty }}{e^{-y}}\alpha (y){I_{\alpha (y)\in (Q({x_{1}}),Q({x_{2}}))}}dy.\end{aligned}\]
Using Lemma 3.2 and having in mind $A(y)=-\ln (1-F(y))$, we derive that $y\in (-\ln (1-{x_{1}}),-\ln (1-{x_{2}}))$ when $\alpha (y)\in (Q({x_{1}}),Q({x_{2}}))$. Therefore
(18)
\[ {\int _{{x_{1}}}^{{x_{2}}}}Q(u)du={\int _{-\ln (1-{x_{1}})}^{-\ln (1-{x_{2}})}}{e^{-y}}\alpha (y)dy.\]
Applying equation (18) for ${x_{1}}=0$ and ${x_{2}}=\epsilon $ we derive the result for $\mathit{AVaR}$. The result for $\overline{\mathit{AVaR}}$ is obtained for ${x_{1}}=1-\epsilon $ and ${x_{2}}=1$. □
We need the following lemma before continuing with the expectile tail measure and the mean residual life function.
Lemma 4.4.
Let $t\in (a,b)$. The truncated expectations of a minKies distributed random variable ξ can be presented as
(19)
\[\begin{aligned}{}\mathbb{E}\big[{(\xi -t)^{+}}\big]& ={\int _{A(t)}^{\infty }}{e^{-y}}\alpha (y)dy-t\overline{F}(t),\\ {} \mathbb{E}\big[{(\xi -t)^{-}}\big]& =tF(t)-{\int _{0}^{A(t)}}{e^{-y}}\alpha (y)dy.\end{aligned}\]
Proof.
We have to rewrite truncated expectations (19) as
\[\begin{aligned}{}\mathbb{E}\big[{(\xi -t)^{+}}\big]& =\mathbb{E}[\xi {I_{\xi \gt t}}]-t\overline{F}(t),\\ {} \mathbb{E}\big[{(\xi -t)^{-}}\big]& =tF(t)-\mathbb{E}[\xi {I_{\xi \lt t}}]\end{aligned}\]
and apply Proposition 3.10 for the functions $\mu (y)=y{I_{y\gt t}}$ and $\mu (y)=y{I_{y\lt t}}$. □
The statements below for the $\mathit{ERM}$ and the $\mathit{MRLF}$ hold.
Proposition 4.5.
The value $\mathit{ERM}(\epsilon )$ of a minKies distributed random variable is the solution of the following equation w.r.t. the variable x:
\[ \epsilon \bigg({\int _{A(x)}^{\infty }}{e^{-y}}\alpha (y)dy-x\overline{F}(x)\bigg)=(1-\epsilon )\bigg(xF(x)-{\int _{0}^{A(x)}}{e^{-y}}\alpha (y)dy\bigg).\]
Its mean residual life function can be presented as
\[ \mathit{MRLF}(t)={e^{A(t)}}{\int _{A(t)}^{\infty }}{e^{-y}}\alpha (y)dy-t.\]
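The change of variables behind the $\mathit{AVaR}$ results can be tested directly: the average of the lower quantiles must coincide with the integral of ${e^{-y}}\alpha (y)$ from equation (18). A Python sketch with illustrative parameters:

```python
import math

def P(x, lam, beta):
    return sum(l * x**bb for l, bb in zip(lam, beta))

def p_inv(y, lam, beta):
    lo, hi = 0.0, 1.0
    while P(hi, lam, beta) < y:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if P(mid, lam, beta) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def alpha(y, a, b, lam, beta):
    q = p_inv(y, lam, beta)
    return (a + b * q) / (1.0 + q)

def Q(u, a, b, lam, beta):
    # quantile function of Proposition 3.8
    q = p_inv(-math.log(1.0 - u), lam, beta)
    return (b * q + a) / (q + 1.0)

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]    # illustrative parameters
eps, n = 0.1, 2000

# AVaR as the average of the lower quantiles (midpoint rule)
h = eps / n
avar_q = sum(Q((k + 0.5) * h, a, b, lam, beta) for k in range(n)) * h / eps

# the same quantity after the change of variables of equation (18)
top = -math.log(1.0 - eps)
hy = top / n
avar_y = sum(math.exp(-(k + 0.5) * hy) * alpha((k + 0.5) * hy, a, b, lam, beta) for k in range(n)) * hy / eps

assert abs(avar_q - avar_y) < 1e-5
```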
5 Saturation in the Hausdorff sense
We define the Hausdorff distance in the sense of [15].
Definition 5.1.
Let us consider the max-norm in ${\mathbb{R}^{2}}$: if A and B are the points $A=({t_{A}},{x_{A}})$ and $B=({t_{B}},{x_{B}})$, then $\| A-B\| :=\max \{|{t_{A}}-{t_{B}}|,|{x_{A}}-{x_{B}}|\}$. The Hausdorff distance $d(g,h)$ between two curves g and h in ${\mathbb{R}^{2}}$ is
\[ d(g,h):=\max \Big\{\underset{A\in g}{\sup }\underset{B\in h}{\inf }\| A-B\| ,\underset{B\in h}{\sup }\underset{A\in g}{\inf }\| A-B\| \Big\}.\]
We can view the Hausdorff distance as the highest optimal path between the curves. Next we define the saturation of a distribution.
Definition 5.2.
Let $F(\cdot )$ be the CDF of a distribution with a left-finite domain $[a,b)$, $-\infty \lt a\lt b\le \infty $. Its saturation is the Hausdorff distance between the completed graph of $F(\cdot )$ and the curve consisting of two lines – one vertical between the points $(a,0)$ and $(a,1)$, and another horizontal between the points $(a,1)$ and $(b,1)$.
The following corollary holds for the saturation.
Corollary 5.3.
The saturation d of the minKies distribution is the unique solution of the equation
(20)
\[ F(a+d)=1-d.\]
Moreover, $d\lt \min \{b-a,1\}$.
Proof.
Having in mind that the distribution's left endpoint is a, and Definitions 5.1 and 5.2, we see that the saturation has to satisfy $F(a+d)+d=1$. Equation (20) has a unique root because $l(t)=F(a+t)+t-1$ is an increasing continuous function with endpoints $l(0)=-1\lt 0$ and $l(b-a)=b-a\gt 0$.
Obviously $d\lt 1$, and therefore the inequality $d\lt \min \{b-a,1\}$ holds because $l(b-a)\gt 0$. □
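Corollary 5.3 reduces the saturation to a one-dimensional root-finding problem, which bisection solves immediately. The Python sketch below reproduces the prime saturation 0.3678 reported in Table 1 for the parameter set (A1):

```python
import math

def minkies_cdf(t, a, b, lam, beta):
    x = (t - a) / (b - t)
    return 1.0 - math.exp(-sum(l * x**bb for l, bb in zip(lam, beta)))

def saturation(a, b, lam, beta):
    # bisection on l(d) = F(a + d) + d - 1, which increases from -1 to b - a
    lo, hi = 0.0, min(b - a, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if minkies_cdf(a + mid, a, b, lam, beta) + mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# parameter set (A1) from Table 1; the reported prime saturation is 0.3678
d = saturation(1.0, 3.0, [1.0, 2.0], [2.0, 0.5])
assert abs(d - 0.3678) < 5e-4
```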
Intuitively, the saturation has to depend only on the domain's length $b-a$, but not on the particular values of a and b. We formalize this in the following proposition.
Proposition 5.4.
The saturation of the minKies distribution depends on the parameters a and b only through the difference $b-a$.
The next two theorems provide a semi-closed form formula for the saturation.
Theorem 5.5.
Let the function $\gamma (y;c,C,\nu )$ be defined for $y\gt \max \{-\ln (b-a),0\}$, $c\gt 0$, $C\gt 0$, and $\nu \gt 0$ as
\[ \gamma (y;c,C,\nu )=cy{\big((b-a){e^{yC}}-1\big)^{\nu }}.\]
If the parameters ${\lambda _{i}}$, $i=1,2,\dots ,n$, can be presented as
(23)
\[ {\lambda _{i}}=\gamma \Bigg(y;{c_{i}},{\sum \limits_{j=1}^{n}}{c_{j}},{\beta _{i}}\Bigg)\]
for some positive constants ${c_{1}},{c_{2}},\dots ,{c_{n}}$, then the minKies distribution's saturation is
(24)
\[ d=\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Proof.
Suppose that presentation (23) holds. Hence
\[\begin{aligned}{}{\lambda _{i}}& ={c_{i}}y{\Bigg((b-a)\exp \Bigg(y{\sum \limits_{j=1}^{n}}{c_{j}}\Bigg)-1\Bigg)^{{\beta _{i}}}}\\ {} & ={c_{i}}y{\bigg(\frac{b-a-\exp (-y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})}{\exp (-y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})}\bigg)^{{\beta _{i}}}},\hspace{2.5pt}i=1,\dots ,n,\end{aligned}\]
and therefore
(25)
\[ {\lambda _{i}}{\bigg(\frac{\exp (-y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})}{b-a-\exp (-y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})}\bigg)^{{\beta _{i}}}}={c_{i}}y,\hspace{2.5pt}i=1,\dots ,n.\]
Taking the exponent of the negated equations (25) and multiplying them, we derive
\[ \exp \bigg(-P\bigg(\frac{\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{b-a-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})};\lambda ,\beta \bigg)\bigg)=\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Combining Corollary 5.3 with CDF (10), we see that the saturation is given namely by formula (24). □
Theorem 5.6.
Let y be the unique solution of the equation
(26)
\[ {\sum \limits_{i=1}^{n}}{\lambda _{i}}{\big((b-a){e^{y}}-1\big)^{-{\beta _{i}}}}=y\]
in the domain $y\gt \max \{-\ln (b-a),0\}$. Then the saturation of the minKies distribution is $d={e^{-y}}$.
Proof.
Equation (26) has a unique solution because its left hand-side is a function decreasing from infinity to zero, whereas the right hand-side is increasing. Let the constants ${c_{i}}$, $i=1,\dots ,n$, be defined as
\[ {c_{i}}=\frac{{\lambda _{i}}}{y}{\big((b-a){e^{y}}-1\big)^{-{\beta _{i}}}}.\]
Hence presentations (23) hold for these values of ${c_{i}}$. We finish the proof using Theorem 5.5 and having in mind ${\textstyle\sum _{i=1}^{n}}{c_{i}}=1$. □
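The semi-closed form can be cross-checked against Table 1: solving the scalar equation (26) for y and setting $d={e^{-y}}$ recovers the prime saturation 0.3678 reported for the parameter set (A1). A Python sketch:

```python
import math

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]    # parameter set (A1) from Table 1

def lhs(y):
    # left hand-side of equation (26): sum_i lam_i * ((b-a) e^y - 1)^(-beta_i)
    base = (b - a) * math.exp(y) - 1.0
    return sum(l * base**(-bb) for l, bb in zip(lam, beta))

# bisection: lhs decreases to zero while the right hand-side y increases
lo, hi = max(-math.log(b - a), 0.0) + 1e-9, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if lhs(mid) > mid:
        lo = mid
    else:
        hi = mid
y = 0.5 * (lo + hi)
d = math.exp(-y)
assert abs(d - 0.3678) < 5e-4
```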
6 Duality. MaxKies distribution
We first introduce a specific distributional duality.
Definition 6.1.
Let $F(t)$ be the CDF of a distribution defined on the finite domain $(a,b)$. Then its dual distribution is defined via its CDF as $G(t)=1-F(a+b-t)$.
The following corollary explains the essence of the dual distributions. In fact, the duality moves the left behavior of a distribution to the right side and vice versa.
Corollary 6.2.
The probability density function of the dual distribution, $g(t)$, can be presented via the PDF of the original distribution, $f(t)$, as $g(t)=f(a+b-t)$.
The name maxKies is motivated by the following proposition.
Proposition 6.5.
Let ${\xi _{i}}$, $i=1,2,\dots ,n$, be independent dualKies distributed random variables on the domain $(a,b)$ with parameters $\{{\lambda _{i}},{\beta _{i}}\}$. Then the random variable $\max \{{\xi _{1}},{\xi _{2}},\dots ,{\xi _{n}}\}$ has a maxKies distribution.
Proof.
The proof is based on the fact that the CDF of the maxKies distribution is the product of the CDFs of the underlying original Kies ones. See the proof of Theorem 3.5. □
The probability density and the quantile functions of the maxKies distribution can be presented as
(27)
\[\begin{aligned}{}g(t)& =\exp \bigg(-P\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg)\bigg){P^{\prime }}\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg)\frac{b-a}{{(t-a)^{2}}},\\ {} \widetilde{Q}(y)& =\frac{b+ap(-\ln y;\lambda ,\beta )}{1+p(-\ln y;\lambda ,\beta )}.\end{aligned}\]
The PDF’s shape can be deduced through Proposition 3.7 having in mind Corollary 6.2. It turns out that $g(a)=0$, whereas the right endpoint $g(b)$ can be zero, finite, or infinitely large. As a rule, the behavior of the maxKies distribution considered from b to a is the same as the behavior of the corresponding minKies one but taken from a to b. In this light the maxKies distribution behaves at the right endpoint as the minKies one at the left endpoint.
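The duality $g(t)=f(a+b-t)$ is easy to confirm numerically by comparing the maxKies PDF, built directly from its own CDF, with the mirrored minKies PDF. A Python sketch with illustrative parameters:

```python
import math

def minkies_pdf(t, a, b, lam, beta):
    u = (t - a) / (b - t)
    tail = math.exp(-sum(l * u**bb for l, bb in zip(lam, beta)))
    return tail * (b - a) / (b - t)**2 * sum(l * bb * u**(bb - 1.0) for l, bb in zip(lam, beta))

def maxkies_pdf(t, a, b, lam, beta):
    # PDF obtained by differentiating the maxKies CDF exp(-P((b-t)/(t-a)))
    v = (b - t) / (t - a)
    tail = math.exp(-sum(l * v**bb for l, bb in zip(lam, beta)))
    return tail * (b - a) / (t - a)**2 * sum(l * bb * v**(bb - 1.0) for l, bb in zip(lam, beta))

a, b = 1.0, 3.0
lam, beta = [1.0, 2.0], [2.0, 0.5]    # illustrative parameters
for k in range(1, 20):
    t = a + (b - a) * k / 20.0
    assert abs(maxkies_pdf(t, a, b, lam, beta) - minkies_pdf(a + b - t, a, b, lam, beta)) < 1e-9
```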
Let the functions $A(t)$ and $\alpha (y)$ from equations (9) be defined now as
(28)
\[ \begin{aligned}{}\widetilde{A}(t)& =P\bigg(\frac{b-t}{t-a};\lambda ,\beta \bigg),\\ {} \widetilde{\alpha }(y)& =\frac{b+ap(y;\lambda ,\beta )}{1+p(y;\lambda ,\beta )}.\end{aligned}\]
Under these assumptions Proposition 3.10 for the expectations still holds.
We discuss now briefly the tail behavior of the maxKies distribution. The results are similar to their minKies analogues and we omit the proofs. The $\mathit{VaR}$ values, left and right, are again expressed by the quantile function – it is now given by equations (27). The $\mathit{AVaR}$ values need a little modification compared to formulas (17):
\[\begin{aligned}{}\mathit{AVaR}(\epsilon )& =\frac{1}{\epsilon }{\int _{-\ln \epsilon }^{\infty }}{e^{-y}}\widetilde{\alpha }(y)dy,\\ {} \overline{\mathit{AVaR}}(\epsilon )& =\frac{1}{\epsilon }{\int _{0}^{-\ln (1-\epsilon )}}{e^{-y}}\widetilde{\alpha }(y)dy.\end{aligned}\]
Having in mind that formulas (19) for the truncated expectations still hold for the maxKies distributions, we conclude that Proposition 4.5 for the expectile tail measure and the mean residual life function remains true – the difference is that we now have to use the functions $\widetilde{A}(\cdot )$ and $\widetilde{\alpha }(\cdot )$ given in (28) instead of those from (9).
Finally, let us turn to the saturation of the maxKies distribution. We formulate the analogues of the results derived in Section 5 in the following theorem.
Theorem 6.6.
The saturation d of the maxKies distribution satisfies the equation
(29)
\[ \exp \bigg(-P\bigg(\frac{b-a-d}{d};\lambda ,\beta \bigg)\bigg)=1-d.\]
Let the set Υ be ${\mathbb{R}^{+}}$ if $b-a\ge 1$ and $\Upsilon =(0,-\ln (1-b+a))$ when $b-a\lt 1$. If the parameters ${\lambda _{i}}$, $i=1,\dots ,n$, can be presented as
(30)
\[ {\lambda _{i}}={c_{i}}y{\bigg(\frac{\exp (y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})-1}{(b-a-1)\exp (y{\textstyle\textstyle\sum _{j=1}^{n}}{c_{j}})+1}\bigg)^{{\beta _{i}}}}\]
for some positive constants ${c_{1}},\dots ,{c_{n}}$ and $y\in \Upsilon $, then the maxKies saturation is
\[ d=1-\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg).\]
Alternatively, the saturation can be obtained as $d=1-{e^{-y}}$, where y is the unique solution in Υ of the equation
(31)
\[ {\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{(b-a-1){e^{y}}+1}{{e^{y}}-1}\bigg)^{{\beta _{i}}}}=y.\]
Proof.
Equation (29) can be proven analogously to Proposition 5.4. Let us turn to the second part of the theorem. Note that the condition $y\in \Upsilon $ guarantees that the constants ${\lambda _{i}}$ defined by formula (30) are positive reals. Using these formulas we derive
\[\begin{aligned}{}1-d& =\exp \Bigg(-y{\sum \limits_{i=1}^{n}}{c_{i}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{(b-a-1)\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})+1}{\exp (y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})-1}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{b-a-1+\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}{1-\exp (-y{\textstyle\textstyle\sum _{i=1}^{n}}{c_{i}})}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \Bigg(-{\sum \limits_{i=1}^{n}}{\lambda _{i}}{\bigg(\frac{b-a-d}{d}\bigg)^{{\beta _{i}}}}\Bigg)\\ {} & =\exp \bigg(-P\bigg(\frac{b-a-d}{d}\bigg)\bigg)=G(a+d).\end{aligned}\]
This equation together with Corollary 5.3 (note that it holds for the max-Kies distribution, too) proves the second part of the theorem. Let us consider the left-hand side of equation (31) as a function of the variable $y\in \Upsilon $. It starts from $+\infty $ and decreases to zero. Hence, equation (31) has a unique solution. We finish the proof applying the values
\[ {c_{i}}=\frac{{\lambda _{i}}}{y}{\bigg[\frac{(b-a-1){e^{y}}+1}{{e^{y}}-1}\bigg]^{{\beta _{i}}}}\]
to the previous statement of the theorem. Note that ${\textstyle\sum _{i=1}^{n}}{c_{i}}=1$. □
We need to define another kind of saturation to continue the investigation of the max-Kies distributions.
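The construction in Theorem 6.6 can be verified numerically. Assuming, as in the proof, the max-Kies CDF $G(t)=\exp (-P(\frac{b-t}{t-a}))$ and the relation $G(a+d)=1-d$ for the saturation, the sketch below solves this equation by bisection and checks that parameters built via formula (30) indeed give $d=1-\exp (-y{\sum }{c_{i}})$; all numerical values are illustrative.

```python
import math

def maxkies_cdf(t, a, b, lam, beta):
    # G(t) = exp(-P((b-t)/(t-a))), as at the end of the proof of Theorem 6.6
    u = (b - t) / (t - a)
    return math.exp(-sum(l * u**bt for l, bt in zip(lam, beta)))

def saturation(a, b, lam, beta, tol=1e-12):
    """Solve G(a+d) = 1-d for d in (0, min(b-a, 1)) by bisection;
    G(a+d) - (1-d) is increasing in d, from -1 to a positive value."""
    lo, hi = 1e-15, min(b - a, 1.0) - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if maxkies_cdf(a + mid, a, b, lam, beta) - (1.0 - mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# choose y in Upsilon and positive constants c_i, then build lambda_i via (30)
a, b = 0.0, 2.0            # b - a >= 1, so Upsilon is the positive half-line
beta = [0.5, 2.0]
c = [0.3, 0.7]
y = 0.8
C = sum(c)
E = math.exp(y * C)
lam = [ci * y * ((E - 1.0) / ((b - a - 1.0) * E + 1.0))**bi
       for ci, bi in zip(c, beta)]

d = saturation(a, b, lam, beta)   # should equal 1 - exp(-y * C)
```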
Definition 6.7.
The complementary saturation when $-\infty \lt a\lt b\lt \infty $ is the Hausdorff distance between the completed graph of $F(\cdot )$ and the curve consisting of the following two segments – a horizontal one between the points $(a,0)$ and $(b,0)$ and a vertical one between the points $(b,0)$ and $(b,1)$.
The following analogue of Corollary 5.3 is true.
Proposition 6.8.
Let us denote by $\overline{d}$ the complementary saturation of the distribution $F(\cdot )$ on the interval $(a,b]$. Then it is the unique solution of the equation
\[ F(b-\overline{d})=\overline{d}.\]
Moreover, $\overline{d}\lt \min \{b-a,1\}$. As an immediate corollary, if $b-a=1$, then the sum of both saturations is equal to the domain’s length – this means that the distribution achieves both its saturations at one and the same point.
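The last claim can be checked directly, assuming (consistently with the proof of Theorem 6.6) that the saturation d solves $F(a+d)=1-d$ while the complementary saturation $\overline{d}$ solves $F(b-\overline{d})=\overline{d}$. If $b-a=1$, then $b-(1-d)=a+d$ and hence

\[ F\big(b-(1-d)\big)=F(a+d)=1-d,\]

so $1-d$ satisfies the complementary equation and, by uniqueness, $\overline{d}=1-d$. Therefore $d+\overline{d}=1=b-a$ and both saturations are achieved at the same point $a+d=b-\overline{d}$.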
The following theorem allows us to derive the complementary saturation for the min- and max-Kies distributions.
Theorem 6.9.
The saturation of the prime distribution coincides with the complementary saturation of the dual distribution.
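This duality can be illustrated numerically. Taking the min-Kies CDF in the form $F(t)=1-\exp (-P(\frac{t-a}{b-t}))$ and its dual max-Kies CDF $G(t)=\exp (-P(\frac{b-t}{t-a}))$, the sketch below solves the saturation equation $F(a+d)=1-d$ for the prime distribution and the complementary equation $G(b-\overline{d})=\overline{d}$ for the dual one, and checks that the two values coincide; the parameters are illustrative and the forms of the scalar equations follow the proof of Theorem 6.6.

```python
import math

def P(x, lam, beta):
    # the polynomial component P(x) = sum_i lam[i] * x**beta[i]
    return sum(l * x**b for l, b in zip(lam, beta))

def bisect(h, lo, hi, tol=1e-12):
    # plain bisection, assuming exactly one sign change of h on (lo, hi)
    negative_left = h(lo) < 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (h(mid) < 0.0) == negative_left:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b = 0.0, 1.5
lam, beta = [0.7, 1.2], [0.8, 2.5]

F = lambda t: 1.0 - math.exp(-P((t - a) / (b - t), lam, beta))  # min-Kies
G = lambda t: math.exp(-P((b - t) / (t - a), lam, beta))        # dual max-Kies

upper = min(b - a, 1.0) - 1e-12
# saturation of the prime (min-Kies) distribution: F(a+d) = 1-d
d_min = bisect(lambda d: F(a + d) - (1.0 - d), 1e-12, upper)
# complementary saturation of the dual (max-Kies) one: G(b-e) = e
e_max = bisect(lambda e: G(b - e) - e, 1e-12, upper)
```

Both scalar equations reduce to $\exp (-P(\frac{d}{b-a-d}))=d$, which is why the two computed values agree.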
7 Numerical results
We shall now provide two numerical experiments to check how the distributions defined in this paper describe real statistical samples. The first one is based on the historical data for the S&P 500 financial index, whereas the second one is for the unemployment insurance issues. We shall use the min-Kies distribution because both samples are left-skewed.
7.1 S&P 500 index
We shall now calibrate the min-Kies distribution to the statistical data used in [20]. There, $10\hspace{0.1667em}717$ daily observations for the S&P 500 index between January 2, 1980 and July 1, 2022 have been used. We are interested in the market shocks, which are defined as the dates at which the index falls by more than two percent. Our statistical sample is formed by the lengths of the periods between two shocks – a total of $N=357$ observations in the range between 1 and 950. Note that these observations are divided by 1000 because the paper [20] is devoted to the Kies style distribution stated on the interval $(0,1)$. We preserve this scaling in the current paper to keep the used statistical sample. Also, in such a way some eventual comparisons are possible. Let us denote the scaled observations by ${t_{i}}$, $i=1,2,\dots ,N$; they form the domain $(\min \{{t_{i}}\},\max \{{t_{i}}\})$. We divide this interval into a grid with $m=50$ uniformly spaced nodes. Let ${l_{i}^{\mathrm{emp}}}$, $i=1,2,\dots ,m$, be the empirical PDF values at these nodes. We derive them as
where ${N_{i}}$ is the number of observations falling in the ith subinterval. Also, let us denote by γ the set of parameters of a min-Kies distribution and by ${l_{i}^{\mathrm{Kies}}}(\gamma )$, $i=1,2,\dots ,m$, the corresponding PDF values of the distribution arising from the parameters γ. Using a modification of the standard least squares method, we define the cost function as
Note that we introduce in formula (33) a logarithmic correction because some parameter sets lead to an infinitely large initial PDF value. This logarithm necessitates the use of an additional constant ϵ because some of the empirical PDF values are zero. We set $\epsilon =0.01$.
(33)
\[ L(\gamma ):={\sum \limits_{i=1}^{m}}\big|\ln \big({l_{i}^{\mathrm{emp}}}+\epsilon \big)-\ln \big({l_{i}^{\mathrm{Kies}}}(\gamma )+\epsilon \big)\big|.\]We calibrate the min-Kies distribution with one, two, three, and four components minimizing cost function (33) over all possible parameter sets γ. As we mentioned above, the values reported in [20] are calibrated under the assumption that the Kies distribution is stated on the interval $(0,1)$. We recalibrate now abandoning this restriction, i.e. the interval endpoints are included in the searched parameter set γ. We minimize cost function (33) using nonlinear programming since we have a multidimensional optimization problem – note that there are four, six, eight, and ten parameters for the different min-Kies models. Since the starting point is very essential for the minimizing algorithm, we use the already derived parameters as the initial point for the next Kies distribution. The received values are reported in Table 2, first five columns entitled by the letter (A). The results derived in [20] are in the column (A0). The number n placed at the first line shows the number of components of the min-Kies distribution. We can make two major conclusions. First, the distribution’s domain is very close to the unit interval. On the other hand, it is natural for the additional components to lead to a better fit. We can see from the obtained errors that the two-component distribution leads to an admissible result, whereas the three and four components provide very good estimations. We present all PDFs in Figure 2a.
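A minimal sketch of this calibration scheme is given below, assuming a one-component min-Kies (i.e. Kies) density and synthetic data in place of the S&P 500 sample, which is not reproduced here; the optimizer, the starting point, and the use of bin centers as grid nodes are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

EPS = 0.01  # the epsilon constant from the cost function (33)

def kies_pdf(t, lam, beta, a, b):
    # one-component min-Kies (Kies) PDF on (a, b):
    # F(t) = 1 - exp(-lam * x**beta), x = (t-a)/(b-t)
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = (t > a) & (t < b)
    x = (t[inside] - a) / (b - t[inside])
    out[inside] = (lam * beta * x**(beta - 1.0)
                   * (b - a) / (b - t[inside])**2
                   * np.exp(-lam * x**beta))
    return out

def cost(gamma, nodes, l_emp):
    # cost function (33): sum of |log(emp + eps) - log(model + eps)|
    lam, beta, a, b = gamma
    if lam <= 0 or beta <= 0 or a >= nodes.min() or b <= nodes.max():
        return 1e10  # reject infeasible parameter sets
    return np.sum(np.abs(np.log(l_emp + EPS)
                         - np.log(kies_pdf(nodes, lam, beta, a, b) + EPS)))

# synthetic left-concentrated sample standing in for the scaled shock data
rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=357)

m = 50
hist, edges = np.histogram(sample, bins=m, density=True)
nodes = 0.5 * (edges[:-1] + edges[1:])  # grid nodes (bin centers)
l_emp = hist                            # empirical PDF values

gamma0 = np.array([1.0, 1.0, -0.01, 1.01])  # lam, beta, a, b starting point
res = minimize(cost, gamma0, args=(nodes, l_emp), method="Nelder-Mead")
```

The logarithmic correction keeps the cost finite even when a candidate density is infinitely large at the left endpoint, which is exactly the situation that motivates formula (33).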
Table 2.
Numerical estimations
par.  (A0)  (A1)  (A2)  (A3)  (A4)  (B1)  (B2)  (B3)  (B4) 
n  1  1  2  3  4  1  2  3  4 
${\lambda _{0}}$  15.7857  15.1377  3053.03  19444234.9  11623604.4  101.708  13.4539  74.0164  193.5577 
${\lambda _{1}}$  –  –  6.6251  32.1833  15.6489  –  7.2452  0.0123  0.0078 
${\lambda _{2}}$  –  –  –  3.9017  3.2997  –  –  8.5378  0.0111 
${\lambda _{3}}$  –  –  –  –  0.3394  –  –  –  9.1257 
${\beta _{0}}$  0.7120  0.6800  4.8953  11.8106  9.6112  2.3473  13.7467  13.3751  14.4112 
${\beta _{1}}$  –  –  0.3609  1.8377  1.3580  –  1.2393  6.1671  6.9765 
${\beta _{2}}$  –  –  –  0.2259  0.2136  –  –  1.2787  5.9488 
${\beta _{3}}$  –  –  –  –  0.1186  –  –  –  1.2987 
a  0  0.0048  0.0064  0.0077  0.0080  0.0257  1.1141  1.1132  1.1125 
b  1  0.9900  1.0076  1.0165  0.9912  18.8619  8.3748  8.9268  9.1328 
error  25.3491  25.2108  22.1423  21.3855  21.0382  4.7166  3.4270  3.3980  3.3908 
7.2 Unemployment insurance issues
We shall now use monthly historical data for the unemployment insurance issues in the period between 1971 and 2018 – a total of 574 observations. The data can be found at https://data.worlddatanygovns8zxewg or in [17, pp. 162–164]. The same data is also used in [2, 6, 22]. We divide all values by $50\hspace{0.1667em}000$ since their minimum is $49\hspace{0.1667em}263$. Thus the range turns from $[49\hspace{0.1667em}263,308\hspace{0.1667em}352]$ to $[0.9853,6.1670]$. This scaling leads to a more convenient domain as well as to some calculation simplifications. The results are reported in the second part of Table 2. We can see that the two-, three-, and four-component distributions approximate similar domains, namely near the interval $(1,9)$. Also, the returned errors are statistically indistinguishable. On the contrary, the original Kies distribution ($n=1$) returns a quite different support, $(0.0257,18.8619)$, as well as a significantly larger error. Hence, we conclude that only one additional component is sufficient and it has a significant importance. All PDFs can be viewed in Figure 2b. Note that the logarithmic correction in the cost function (33) leads to a better fit in the tails than in the distribution’s center.
We can see a major difference between these examples – the initial point of the first PDF is infinity, whereas it is zero for the second one. This is due to the coefficient ${\beta _{n}}$ and its position with respect to one, see Proposition 3.7.