1 Introduction
The Weibull distribution is one of the most important generalizations of the exponential distribution. Despite the loss of the important memorylessness feature, the Weibull distribution has several advantages which determine its wide use in many real-life areas including the engineering sciences [29], meteorology and hydrology [28], communications and telecommunications [26], energetics [22], the chemical and metallurgical industries [4], epidemiology [16], insurance and banking [6], etc. For more theoretical and practical details related to this distribution we refer to the original work [27] as well as the recently published books [18, 20, 17], and [15].
A significant modification of the Weibull distribution, known as the Kies distribution, was first proposed in [9] and has recently been considered in many studies. It reduces the positive real half-line support of the Weibull distribution to the interval $\left(0,1\right)$ by changing the variables as $t=\frac{x}{x+1}$. Later, many authors discussed several modifications. Refs. [11, 21] and [30] examine four-parameter distributions whose domain is translated to an arbitrary positive interval. Refs. [12, 13] take a power of the Kies cumulative distribution function (CDF, hereafter) to define a new family.
A composition approach for the Kies distribution is presented in [2]. In this way the new distribution is defined after the change of variables $t=H\left(x\right)$, where $H\left(x\right)$ is the CDF of an auxiliary distribution. Ref. [25] introduces the Fréchet distribution for this purpose and later in [24] the resulting Kies–Fréchet distribution is applied to model the COVID-19 mortality. Alternatively, [1] and [3] use the exponential and Lomax distributions, respectively. See also [5] for another composition based on the generalized uniform distribution.
In the present article we define a new Kies family by its CDFs, which are constructed as a composition of two other Kies CDFs on the interval $\left(0,1\right)$, $G\left(H\left(t\right)\right)$. Note that this definition is possible due to this common domain. We name the distribution $G\left(\cdot \right)$ the original one and $H\left(\cdot \right)$ the correction. Thus we have a four-parameter family – two parameters for each initial distribution. We investigate the probabilistic properties of the resulting distribution in the light of its compositional essence. We derive many relations between the corresponding terms – CDF, probability density function, quantile function, mean residual life function, different expectations and moments – of the resulting and the initial distributions. Also, we investigate the tail behavior by the use of three risk measures arising in the modern capital markets, namely VaR (abbreviated from Value-at-Risk), AVaR (Average Value-at-Risk, also known as CVaR and TVaR), and the expectile based VaR.
Other important results we derive are related to the so-called Hausdorff saturation. It presents the distance between the CDF and a Γ-shaped curve connecting its endpoints. In fact, the saturation measures how the distribution mass is located in the domain – the distribution is more left-placed when the saturation is lower, and vice versa. Also, when studying specific classes of cumulative distribution functions it is important to know their intrinsic characteristics – the saturation in the Hausdorff sense is precisely such a characteristic. It is important for researchers in choosing an appropriate model for approximating specific data from different branches of scientific knowledge such as Biostatistics, Population dynamics, Growth theory, Debugging and Test theory, Computer virus propagation, and Insurance mathematics. In addition, the use of composite Kies families can also be useful in the study of reaction-kinetic models – a similar study of the dynamics of the classical Kies model is discussed in [14]. We obtain in this paper an interval evaluation of the saturation and investigate its behavior w.r.t. the four distribution parameters. Moreover, we prove a semi-closed form formula for the Hausdorff saturation. Many numerical experiments are provided.
The next question we discuss is related to the inverse problem – the calibration w.r.t. some empirical data. It is accepted in the present literature that maximum likelihood estimation is a good approach for the Kies style distributions due to the available closed form estimator. However, our numerical simulations do not support this opinion. Instead, we construct an algorithm based on the least squares errors. It turns out that this method produces very plausible outcomes.
To illustrate our results we explore empirical data from the real financial markets, namely, for the S&P500 index. It is well known that there are high- and low-volatility periods. In fact, this is the ubiquitous phenomenon of volatility clustering. We extract the periods between two market shocks and examine the distribution of their lengths. We compare the results which the corrected Kies distribution returns with the outcomes of its ancestors, namely, the exponential, the Weibull, and the original Kies distributions.
The paper is organized as follows. Section 2 defines the new class of distributions and discusses their probabilistic properties. The tail behavior is examined in Section 3 through the measures VaR, AVaR, and the expectile based VaR. The Hausdorff distance and the related saturation are considered in Section 4. We discuss the calibration problem in Section 5. Finally, we present a numerical example based on the S&P500 index in Section 6.
2 Definitions and distributional properties
We shall use for convenience the following notations in the whole paper. The cumulative distribution function (CDF, as we mentioned above) of a distribution will be denoted by an uppercase letter, the overlined letter will be used for the complementary cumulative distribution function (CCDF), the corresponding lowercase letter is preserved for the probability density function (PDF), and finally the letter Q indexed by the CDF’s letter will denote the quantile function (QF). For example, if $F\left(t\right)$ is the CDF, then $\overline{F}\left(t\right)$, $f\left(t\right)$, and ${Q_{F}}\left(t\right)$ are the corresponding CCDF, PDF, and QF, respectively. Also, we shall use the Greek letter ξ for random variables, and we shall mark it by the corresponding CDF letter in the subscript, i.e. ${\xi _{F}}$ means a random variable whose CDF is $F\left(t\right)$.
The standard Kies distribution is defined on the domain $\left(0,1\right)$ by its CDF
(1)
\[ H\left(t\right):=1-{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}}\]
for some positive parameters b and k. Inverting CDF (1) we can derive the quantile function for $t\in \left(0,1\right)$:
(2)
\[ {Q_{H}}\left(t\right)=\frac{{\left(-\ln \left(1-t\right)\right)^{\frac{1}{b}}}}{{k^{\frac{1}{b}}}+{\left(-\ln \left(1-t\right)\right)^{\frac{1}{b}}}}.\]
Differentiating equation (1) we obtain the probability density function
(3)
\[ h\left(t\right)=bk{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
The following proposition describes the shape of PDF (3).

Proposition 2.1.
The value of the PDF at the right end of the distribution domain is zero, $h\left(1\right)=0$. Let the function $\alpha \left(t\right)$ for $t\in \left(0,1\right)$ be defined as
(4)
\[ \alpha \left(t\right):=b-1+2t-kb{\left(\frac{t}{1-t}\right)^{b}}.\]
The following statements hold for PDF (3), depending on whether the power b is below, equal to, or above 1.

1. If $b>1$, then PDF (3) is zero at the left endpoint of the domain, $h\left(0\right)=0$. Function (4) has a unique root for $t\in \left(0,1\right)$; we denote it by ${t_{2}}$. The PDF increases for $t\in \left(0,{t_{2}}\right)$, attains a maximum at $t={t_{2}}$, and decreases for $t\in \left({t_{2}},1\right)$.

2. If $b=1$, then the left limit of the PDF is $h\left(0\right)=k$. If $k\ge 2$, then the PDF is a function decreasing from k to 0. Otherwise, if $k<2$, then we introduce the value ${t_{2}}=1-\frac{k}{2}$; note that ${t_{2}}\in \left(0,1\right)$. The PDF starts from the value k for $t=0$, increases to a maximum for $t={t_{2}}$, and decreases to zero.

3. If $b<1$, then $h\left(0\right)=\infty $. The derivative of function (4) is
(5)
\[ {\alpha ^{\prime }}\left(t\right)=2-k{b^{2}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
Let $\overline{t}$ be defined as $\overline{t}:=\frac{1-b}{2}$. The PDF is a decreasing function when ${\alpha ^{\prime }}\left(\overline{t}\right)\ge 0$. Suppose that ${\alpha ^{\prime }}\left(\overline{t}\right)<0$. In this case derivative (5) has two roots in the interval $\left(0,1\right)$; we denote them by ${\overline{t}_{1}}$ and ${\overline{t}_{2}}$. If $\alpha \left({\overline{t}_{2}}\right)\ge 0$, then the PDF decreases in the whole distribution domain. Otherwise, if $\alpha \left({\overline{t}_{2}}\right)<0$, then function (4) has two roots in the interval $\left(0,1\right)$, too; we denote them by ${t_{1}}$ and ${t_{2}}$. The PDF starts from infinity, decreases in the interval $\left(0,{t_{1}}\right)$ having a local minimum for $t={t_{1}}$, increases for $t\in \left({t_{1}},{t_{2}}\right)$ having a local maximum for $t={t_{2}}$, and decreases to zero for $t\in \left({t_{2}},1\right)$.
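As an added numerical illustration (not part of the original derivation; the parameter values are arbitrary), the following Python sketch checks that PDF (3) is indeed the derivative of CDF (1), and that for $b=1$ and $k<2$ the maximum is attained at ${t_{2}}=1-k/2$:

```python
import math

def H(t, k, b):
    # standard Kies CDF (1)
    return 1.0 - math.exp(-k * (t / (1.0 - t)) ** b)

def h(t, k, b):
    # standard Kies PDF (3)
    return (b * k * math.exp(-k * (t / (1.0 - t)) ** b)
            * t ** (b - 1.0) / (1.0 - t) ** (b + 1.0))

k, b = 1.0, 1.0

# h is the derivative of H (central finite difference)
fd = (H(0.37 + 1e-6, k, b) - H(0.37 - 1e-6, k, b)) / 2e-6
assert abs(fd - h(0.37, k, b)) < 1e-6

# for b = 1 and k < 2 the mode is t2 = 1 - k/2 (here 0.5)
t2 = 1.0 - k / 2.0
assert h(t2, k, b) > h(t2 - 0.01, k, b) > h(t2 - 0.02, k, b)
assert h(t2, k, b) > h(t2 + 0.01, k, b) > h(t2 + 0.02, k, b)
```

The same check with $b>1$ or $b<1$ reproduces the other shapes described in Proposition 2.1.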
We introduce and investigate a new class of distributions for which the correction is presented by another Kies distribution (1).
Definition 2.2.
Let a, b, λ, and k be positive constants. Let two Kies distributed random variables be defined by their CDFs
(6)
\[ \begin{aligned}{}H\left(t\right)& :=1-{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}},\\ {} G\left(t\right)& :=1-{e^{-\lambda {\left(\frac{t}{1-t}\right)^{a}}}}.\end{aligned}\]
We define a new distribution in the domain $\left(0,1\right)$ by the CDF
(8)
\[ F\left(t\right):=G\left(H\left(t\right)\right)=1-{e^{-\lambda {\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a}}}}.\]
We shall call it an H-corrected Kies distribution. We name G the original distribution and H the correcting distribution.

Remark 1.
Note that this superposition is possible since the Kies CDF is a function increasing from zero to one on the interval $\left(0,1\right)$.
As a corollary of Definition 2.2 we can establish the quantile function.
Corollary 2.4.
The quantile function of an H-corrected Kies distributed random variable can be derived through the formula
(11)
\[ {Q_{F}}\left(t\right)={Q_{H}}\left({Q_{G}}\left(t\right)\right),\]
where ${Q_{H}}\left(t\right)$ and ${Q_{G}}\left(t\right)$ are the quantile functions of the original Kies distributions (equation (2)) $H\left(t\right)$ and $G\left(t\right)$, respectively.
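The composition rule for the quantile function can be verified numerically. The sketch below (an added illustration with arbitrarily chosen parameters) checks that composing the two Kies quantile functions inverts the composed CDF $F=G\circ H$:

```python
import math

def kies_cdf(t, c, p):
    # Kies CDF with generic parameters (c, p)
    return 1.0 - math.exp(-c * (t / (1.0 - t)) ** p)

def kies_quantile(u, c, p):
    # Kies quantile function, equation (2), with generic parameters (c, p)
    w = (-math.log(1.0 - u)) ** (1.0 / p)
    return w / (c ** (1.0 / p) + w)

lam, k, a, b = 1.5, 2.0, 0.8, 1.3
F  = lambda t: kies_cdf(kies_cdf(t, k, b), lam, a)             # F = G(H(t))
QF = lambda u: kies_quantile(kies_quantile(u, lam, a), k, b)   # Q_F = Q_H(Q_G(u))

for u in (0.05, 0.5, 0.95):
    assert abs(F(QF(u)) - u) < 1e-9
```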
Differentiating equation (8) we obtain for the PDF
(12)
\[ f\left(t\right)=ab\lambda k\hspace{2.5pt}{e^{-\lambda {\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a}}}}{\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a-1}}{e^{k{\left(\frac{t}{1-t}\right)^{b}}}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
More informative is another form of PDF (12) presented in the following proposition.

Remark 2.
Formula (13) means that the PDF of the H-corrected Kies distribution is the initial PDF, taken at the transformed argument, weighted by the corresponding correction’s PDF.
First we have to derive the PDF value of the H-corrected Kies distribution at the left endpoint of the domain, $t=0$ (obviously, the value at the right one, $t=1$, is zero). Analogously to the original Kies distributions, one can expect that $0<f\left(0\right)<\infty $ when $a=b=1$. This is true, but the values $a=b=1$ are far from exhausting the cases in which the left endpoint value of the PDF is finite and nonzero. The proposition below characterizes the PDF’s behavior near zero.
Proof.
We shall use form (12) of the PDF. We can see that the behavior of the PDF near zero depends only on the term
(14)
\[ L:=ab\lambda k{\left({e^{k{t^{b}}}}-1\right)^{a-1}}{t^{b-1}},\]
since all remaining factors in (12) tend to one as $t\to 0$.
Expanding the exponent in the Taylor series, we transform equation (14) to
Suppose first that $a<1$. If b is such that
(16)
\[ b+\frac{b-1}{a-1}<0,\]
then at least one term of the sum above, namely the first one, tends to infinity for $t\to 0$. Therefore $L\to 0$, since $a<1$. Note that inequality (16) is equivalent to $ab>1$. If the inequality (16) is opposite in sign, then all terms of the sum tend to zero, and therefore $L\to \infty $ (note again $a<1$). If we have equality in (16), or equivalently $ab=1$, then the first term tends to k and the rest tend to zero. Hence $L\to ab\lambda k{k^{a1}}=\lambda {k^{a}}$.
(15)
\[ \begin{aligned}{}L& =ab\lambda k{\left(\left({\sum \limits_{n=1}^{\infty }}\frac{{k^{n}}{t^{nb}}}{n!}\right){t^{\frac{b-1}{a-1}}}\right)^{a-1}}\\ {} & =ab\lambda k{\left({\sum \limits_{n=1}^{\infty }}\frac{{k^{n}}{t^{nb+\frac{b-1}{a-1}}}}{n!}\right)^{a-1}}.\end{aligned}\]
Assume now that $a>1$. If inequality (16) holds, equivalently $ab<1$, then $L\to \infty $, since the first term of the sum tends to infinity and $a>1$. If $ab=1$, then only the first term is nonzero – its limit is k – and hence $L\to \lambda {k^{a}}$. If $ab>1$ (oppositely to inequality (16)), we conclude that $L\to 0$, since the sum tends to zero and the power $a-1$ is positive.
Finally, if $a=1$, then the desired result holds because formula (14) turns to $L=b\lambda k{t^{b1}}$. □
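The boundary case $ab=1$ can be illustrated numerically: evaluating PDF (12) close to zero should reproduce the limit $\lambda {k^{a}}$. A short Python sketch (added here for illustration; the parameters are arbitrary up to the constraint $ab=1$):

```python
import math

def corrected_pdf(t, lam, k, a, b):
    # PDF (12) of the H-corrected Kies distribution
    w = math.exp(k * (t / (1.0 - t)) ** b)          # e^{k(t/(1-t))^b}
    return (a * b * lam * k
            * math.exp(-lam * (w - 1.0) ** a)
            * (w - 1.0) ** (a - 1.0) * w
            * t ** (b - 1.0) / (1.0 - t) ** (b + 1.0))

lam, k, a, b = 1.5, 2.0, 2.0, 0.5      # boundary case ab = 1
limit = lam * k ** a                    # predicted left limit λk^a = 6
assert abs(corrected_pdf(1e-8, lam, k, a, b) - limit) < 0.01
```

Repeating the evaluation with $ab<1$ or $ab>1$ shows the divergent and vanishing behaviors, respectively.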
The shape of the PDF of the H-corrected Kies distribution is a consequence of Propositions 2.1 and 2.6. Obviously, it is richer than the shape of the PDF of the original Kies distribution. Various examples are presented in Figure 1. In each of the six subfigures we vary the coefficients λ and a of the original distribution G as $\lambda \in \left\{0.5,\hspace{2.5pt}1,\hspace{2.5pt}1.5\right\}$ and $a\in \left\{0.5,\hspace{2.5pt}2\right\}$. The original Kies distribution G is colored in blue. The rest of the plotted PDFs are produced by the following parameters for the correcting distribution H: $k\in \left\{1,\hspace{2.5pt}2\right\}$ and $b\in \left\{0.5,\hspace{2.5pt}1,\hspace{2.5pt}2\right\}$.
The following proposition for the expectations of the corrected Kies random variables holds.
Proposition 2.7.
Let ${\xi _{F}}$ be an H-corrected Kies distributed random variable with original distribution G, let ${\xi _{G}}$ be an original Kies distributed random variable, and let $\beta \left(\cdot \right)$ be a real valued function. The expectation of the random variable $\beta \left({\xi _{F}}\right)$ is equal to the expectation of the random variable $\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)$. Written formally, that is
(17)
\[ E\left[\beta \left({\xi _{F}}\right)\right]=E\left[\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)\right].\]
Proof.
Using the form of PDF (13) and changing the variables as $x=H\left(t\right)$ (equivalently, $t={Q_{H}}\left(x\right)$) we derive
□
(18)
\[\begin{aligned}{}E\left[\beta \left({\xi _{F}}\right)\right]& ={\underset{0}{\overset{1}{\int }}}\beta \left(t\right)g\left(H\left(t\right)\right)dH\left(t\right)\\ {} & ={\underset{0}{\overset{1}{\int }}}\beta \left({Q_{H}}\left(x\right)\right)g\left(x\right)dx\\ {} & =E\left[\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)\right].\end{aligned}\]

The following corollaries hold.
Corollary 2.8.
The random variables ${\xi _{F}}$ and ${Q_{H}}\left({\xi _{G}}\right)$ are identically distributed under the assumptions of Proposition 2.7.
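Proposition 2.7 and Corollary 2.8 can be checked by quadrature: the first moment of ${\xi _{F}}$ computed from the composed density $g\left(H\left(t\right)\right)h\left(t\right)$ must coincide with $E\left[{Q_{H}}\left({\xi _{G}}\right)\right]$. A midpoint-rule sketch in Python (an added illustration with arbitrary parameters):

```python
import math

def kies_cdf(t, c, p):
    return 1.0 - math.exp(-c * (t / (1.0 - t)) ** p)

def kies_pdf(t, c, p):
    if t <= 0.0 or t >= 1.0:      # the density vanishes at the endpoints here
        return 0.0
    return (p * c * math.exp(-c * (t / (1.0 - t)) ** p)
            * t ** (p - 1.0) / (1.0 - t) ** (p + 1.0))

def kies_quantile(u, c, p):
    w = (-math.log(1.0 - u)) ** (1.0 / p)
    return w / (c ** (1.0 / p) + w)

lam, k, a, b = 1.0, 2.0, 2.0, 1.5
n = 20000
dt = 1.0 / n
mid = [(i + 0.5) * dt for i in range(n)]
# E[ξ_F] via the composed density g(H(t))·h(t)
m1 = sum(t * kies_pdf(kies_cdf(t, k, b), lam, a) * kies_pdf(t, k, b)
         for t in mid) * dt
# E[Q_H(ξ_G)] via the original density g
m2 = sum(kies_quantile(x, k, b) * kies_pdf(x, lam, a) for x in mid) * dt
assert abs(m1 - m2) < 1e-3
```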
Let us consider now the mean residual life function (MRLF, hereafter) of an H-corrected Kies distribution. Usually it is defined as the conditional expectation
\[ {m_{F}}\left(t\right):=E\left[{\xi _{F}}-t\mid {\xi _{F}}>t\right].\]
We shall use an alternative presentation stated in [7]:
(22)
\[ {m_{F}}\left(t\right)=\frac{1}{\overline{F}\left(t\right)}{\underset{t}{\overset{1}{\int }}}\overline{F}\left(s\right)ds.\]
The following proposition for the MRLF holds.
Proof.
Let us consider first the integral in formula (22). Changing the variables as $x=H\left(s\right)$ (equivalently, $s={Q_{H}}\left(x\right)$) and integrating by parts we derive
(24)
\[ \begin{aligned}{}{\underset{t}{\overset{1}{\int }}}\overline{F}\left(s\right)ds& ={\underset{t}{\overset{1}{\int }}}\overline{G}\left(H\left(s\right)\right)ds={\underset{H\left(t\right)}{\overset{1}{\int }}}\overline{G}\left(x\right)d{Q_{H}}\left(x\right)\\ {} & ={\left.\overline{G}\left(x\right){Q_{H}}\left(x\right)\right|_{H\left(t\right)}^{1}}+{\underset{H\left(t\right)}{\overset{1}{\int }}}g\left(x\right){Q_{H}}\left(x\right)dx\\ {} & =-\overline{F}\left(t\right)t+E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>H\left(t\right)}}\right].\end{aligned}\]

3 Tail behavior
Let us consider three measures arising from risk management – $\mathit{VaR}$, $\mathit{AVaR}$, and the expectile based VaR; we shall use the notation $\mathit{EX}$ for the last one. By its original definition, the $\mathit{VaR}$ at level α of a random variable is just the opposite of the quantile function, $\mathit{VaR}(\alpha )=-Q(\alpha )$. Since the domain of the Kies family is the interval $\left(0,1\right)$, we shall simply take $\mathit{VaR}(\alpha ):=Q(\alpha )$. As its name shows, $\mathit{AVaR}$ is an average $\mathit{VaR}$ in some sense – it is defined as
Also, we consider the right tail behavior by defining the following term
The expectile function is related to the quantiles in the following way. The α-quantile of the random variable ξ can be viewed as the lowest solution of the optimization problem
where ${z^{+}}$ and ${z^{-}}$ are notations for $\max \left(z,0\right)$ and $\max \left(-z,0\right)$, respectively. For more details, see, for example, [10]. Analogously, the expectile is defined in [19] as the solution of the following quadratic problem
Note that the expectiles are well defined when the random variable has a finite second moment. For the corrected Kies distributions this is true due to Corollary 2.9. It can be easily proven that expectile (28) is the solution of the following equation w.r.t. the variable x
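Equation (29) also suggests a simple numerical recipe for the expectile: its left-hand side decreases and its right-hand side increases in x, so the root can be found by bisection. The sketch below (an added illustration; the quadrature rule and the parameters are arbitrary choices) also checks the classical fact that the 0.5-expectile equals the mean:

```python
import math

lam, k, a, b = 1.0, 2.0, 1.0, 1.0

def F(t):
    # corrected Kies CDF (8); the guard avoids overflow of exp near t = 1
    z = k * (t / (1.0 - t)) ** b
    if z > 700.0:
        return 1.0
    return 1.0 - math.exp(-lam * (math.exp(z) - 1.0) ** a)

def integral(f, lo, hi, n=1000):
    dt = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dt) for i in range(n)) * dt

def expectile(alpha):
    # bisection on equation (29): α E[(ξ-x)^+] = (1-α) E[(ξ-x)^-], using
    # E[(ξ-x)^+] = ∫_x^1 (1-F) and E[(ξ-x)^-] = ∫_0^x F on the domain (0,1)
    def g(x):
        return (alpha * integral(lambda s: 1.0 - F(s), x, 1.0)
                - (1.0 - alpha) * integral(F, 0.0, x))
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(50):
        m = 0.5 * (lo + hi)
        if g(m) > 0.0:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

mean = integral(lambda s: 1.0 - F(s), 0.0, 1.0)   # μ1 = E[ξ_F]
assert abs(expectile(0.5) - mean) < 1e-3          # the 0.5-expectile is the mean
assert expectile(0.25) < expectile(0.75)          # expectiles increase in α
```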
We derive $\mathit{AVaR}$s and the expectile based $\mathit{VaR}$ in the following two propositions.
(25)
\[ \mathit{AVaR}(\alpha ):=\frac{1}{\alpha }{\underset{0}{\overset{\alpha }{\int }}}\mathit{VaR}(u)du.\]
(26)
\[ \overline{\mathit{AVaR}}(\alpha ):=\frac{1}{1-\alpha }{\underset{\alpha }{\overset{1}{\int }}}\mathit{VaR}(u)du.\]
(27)
\[ Q\left(\alpha \right)=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\left\{E\left[\alpha {\left(\xi -x\right)^{+}}+\left(1-\alpha \right){\left(\xi -x\right)^{-}}\right]\right\},\]
(28)
\[ \mathit{EX}(\alpha ):=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\left\{E\left[\alpha {\left({\left(\xi -x\right)^{+}}\right)^{2}}+\left(1-\alpha \right){\left({\left(\xi -x\right)^{-}}\right)^{2}}\right]\right\}.\]
(29)
\[ \alpha E\left[{\left(\xi -x\right)^{+}}\right]=\left(1-\alpha \right)E\left[{\left(\xi -x\right)^{-}}\right].\]

Proposition 3.1.
We have the following double presentations for $\mathit{AVaR}(\alpha )$ and $\overline{\mathit{AVaR}}(\alpha )$:
(30)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{{\mu _{1}}}{\alpha }-\frac{1-\alpha }{\alpha }\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right]\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}<{Q_{G}}\left(\alpha \right)}}\right]}{\alpha },\\ {} \overline{\mathit{AVaR}}(\alpha )& ={Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>{Q_{G}}\left(\alpha \right)}}\right]}{1-\alpha },\end{aligned}\]
where ${\mu _{1}}$ is the first moment given in Corollary 2.9 and ${m_{F}}\left(\cdot \right)$ is the MRLF.

Proof.
We shall use the following relation between the truncated expectations and the MRLF, the proof of which can be found in [7]:
(31)
\[ E\left[{\left({\xi _{F}}-y\right)^{+}}\right]={m_{F}}\left(y\right)\overline{F}\left(y\right).\]
Having in mind the equations ${x^{-}}={x^{+}}-x$ and (31), and changing the variables as $s={Q_{F}}\left(t\right)\hspace{2.5pt}\Leftrightarrow t=F\left(s\right)$, we derive for the first statement of equation (30)
(32)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{1}{\alpha }{\underset{0}{\overset{\alpha }{\int }}}{Q_{F}}\left(t\right)dt=\frac{1}{\alpha }{\underset{0}{\overset{{Q_{F}}\left(\alpha \right)}{\int }}}sf\left(s\right)ds=\frac{E\left[{\xi _{F}}{I_{{\xi _{F}}<{Q_{F}}\left(\alpha \right)}}\right]}{\alpha }\\ {} & =\frac{{Q_{F}}\left(\alpha \right)P\left({\xi _{F}}<{Q_{F}}\left(\alpha \right)\right)}{\alpha }-\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{-}}\right]}{\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)-\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{+}}\right]}{\alpha }+\frac{E\left[{\xi _{F}}-{Q_{F}}\left(\alpha \right)\right]}{\alpha }\\ {} & =\frac{{\mu _{1}}}{\alpha }-\frac{1-\alpha }{\alpha }\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right].\end{aligned}\]
To derive the second form of the $\mathit{AVaR}$, we use equations (11), (19), and (23) (for the quantile function, the moment, and the MRLF, respectively) and obtain
(33)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{{\mu _{1}}-\left(1-\alpha \right)\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right]}{\alpha }\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right)\right]-\left(1-\alpha \right)\left[{Q_{F}}\left(\alpha \right)+\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>H\left({Q_{F}}\left(\alpha \right)\right)}}\right]}{\overline{F}\left({Q_{F}}\left(\alpha \right)\right)}-{Q_{F}}\left(\alpha \right)\right]}{\alpha }\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}\le {Q_{G}}\left(\alpha \right)}}\right]}{\alpha }.\end{aligned}\]
Let us turn to the right tail term $\overline{\mathit{AVaR}}$. Analogously as above, we obtain
(34)
\[ \begin{aligned}{}\overline{\mathit{AVaR}}(\alpha )& =\frac{1}{1-\alpha }{\underset{\alpha }{\overset{1}{\int }}}{Q_{F}}\left(t\right)dt=\frac{1}{1-\alpha }{\underset{{Q_{F}}\left(\alpha \right)}{\overset{1}{\int }}}sf\left(s\right)ds=\frac{E\left[{\xi _{F}}{I_{{\xi _{F}}>{Q_{F}}\left(\alpha \right)}}\right]}{1-\alpha }\\ {} & =\frac{{Q_{F}}\left(\alpha \right)P\left({\xi _{F}}>{Q_{F}}\left(\alpha \right)\right)}{1-\alpha }+\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{+}}\right]}{1-\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)+\frac{{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\overline{F}\left({Q_{F}}\left(\alpha \right)\right)}{1-\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right).\end{aligned}\]
Writing equation (23) for $t={Q_{F}}\left(\alpha \right)$, we see that
(35)
\[ {Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)=\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>{Q_{G}}\left(\alpha \right)}}\right]}{1-\alpha },\]
which leads to the second form of $\overline{\mathit{AVaR}}$. □

Remark 4.
Note that the second forms of $\mathit{AVaR}$ and $\overline{\mathit{AVaR}}$ can be obtained directly (without using the first forms) by changing the variables as $t={Q_{G}}\left(u\right)\Leftrightarrow u=G\left(t\right)$ in the integrals that define them.
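The first presentation of $\mathit{AVaR}$ in (30) can be cross-checked against definition (25) by direct quadrature. A Python sketch (an added illustration with arbitrary parameters):

```python
import math

lam, k, a, b = 1.0, 2.0, 1.0, 1.0

def F(t):
    # corrected Kies CDF (8); the guard avoids overflow of exp near t = 1
    z = k * (t / (1.0 - t)) ** b
    if z > 700.0:
        return 1.0
    return 1.0 - math.exp(-lam * (math.exp(z) - 1.0) ** a)

def QF(u):
    # Q_F = Q_H(Q_G(u)), Corollary 2.4
    def q(v, c, p):
        w = (-math.log(1.0 - v)) ** (1.0 / p)
        return w / (c ** (1.0 / p) + w)
    return q(q(u, lam, a), k, b)

def integral(f, lo, hi, n=4000):
    dt = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dt) for i in range(n)) * dt

alpha = 0.8
avar_direct = integral(QF, 0.0, alpha) / alpha                # definition (25)
qa = QF(alpha)
m = integral(lambda s: 1.0 - F(s), qa, 1.0) / (1.0 - F(qa))   # MRLF via (22)
mu1 = integral(QF, 0.0, 1.0)                                  # first moment
avar_formula = mu1 / alpha - (1.0 - alpha) / alpha * (qa + m) # first form in (30)
assert abs(avar_direct - avar_formula) < 1e-3
```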
Next we discuss the expectile based VaR. It can be obtained from either of the equations presented in the following proposition.
Proposition 3.2.
The α-expectile based VaR, $\mathit{EX}(\alpha )$, is the solution of the following equivalent equations (w.r.t. the variable t)
(37)
\[ \begin{aligned}{}& \left(1-2\alpha \right){m_{F}}\left(t\right)\overline{F}\left(t\right)+\left(1-\alpha \right)\left(t-{\mu _{1}}\right)=0,\\ {} & t\left(1-\alpha -\left(1-2\alpha \right)\overline{G}\left(H\left(t\right)\right)\right)-E\left[{Q_{H}}\left({\xi _{G}}\right)\left(1-\alpha -\left(1-2\alpha \right){I_{{\xi _{G}}>H\left(t\right)}}\right)\right]=0,\end{aligned}\]
where ${\mu _{1}}$ is the first moment given in Corollary 2.9 and ${m_{F}}\left(\cdot \right)$ is the MRLF.

Proof.
Using the formula ${x^{-}}={x^{+}}-x$ and equation (29) which determines the expectile, we derive
Replacing the truncated expectation from formula (31) we obtain the first equation in (37). It remains to replace the expectation and the MRLF from equations (17) and (23) to derive the second part of (37). □
(38)
\[ \alpha E\left[{\left({\xi _{F}}-t\right)^{+}}\right]=\left(1-\alpha \right)E\left[{\left({\xi _{F}}-t\right)^{+}}-\left({\xi _{F}}-t\right)\right].\]

4 Hausdorff distance and saturation
Let us consider the max-norm in ${\mathbb{R}^{2}}$, i.e. if A and B are the points $A=\left({t_{A}},{x_{A}}\right)$ and $B=\left({t_{B}},{x_{B}}\right)$, then $\| AB\| :=\max \left\{\left|{t_{A}}-{t_{B}}\right|,\left|{x_{A}}-{x_{B}}\right|\right\}$. We define the Hausdorff distance, also known as an H-distance, in the sense of [23].
We can define now the saturation of a distribution.
Definition 4.2.
Let $F\left(\cdot \right)$ be the CDF of a distribution with a left-finite domain $\left[a,b\right)$, $-\infty <a<b\le \infty $. Its saturation is the Hausdorff distance between the completed graph of $F\left(\cdot \right)$ and the curve consisting of two lines – one vertical between the points $\left(a,0\right)$ and $\left(a,1\right)$ and another horizontal between $\left(a,1\right)$ and $\left(b,F\left(b\right)\right)$.
Having in mind that the domain of the Kies distribution is the interval $\left(0,1\right)$, we can prove the following corollary for its saturation.
We shall now prove the following semi-closed form formula for the saturation of CDF (8).
Theorem 4.4.
Let y be a positive parameter and let the function $\gamma \left(y\right)$ be defined as
\[ \gamma \left(y\right):=y{\left({e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right)^{b}}.\]
Suppose that $k=\gamma \left(y\right)$ for some value of y. Then the H-corrected Kies distribution’s saturation is
\[ \overline{d}={e^{-\lambda {\left({e^{y}}-1\right)^{a}}}}.\]
Note that the function $\gamma \left(y\right)$ is strictly increasing in the interval $\left(0,\infty \right)$ and hence it is invertible. Therefore the saturation can be expressed as a function of λ, k, a, and b as
\[ \overline{d}={e^{-\lambda {\left({e^{{\gamma ^{-1}}\left(k\right)}}-1\right)^{a}}}}.\]
Proof.
Applying Corollary 4.3 to CDF (8) we see that the saturation d satisfies the equation
(44)
\[ \mu \left(d\right):=\lambda {\left({e^{k{\left(\frac{d}{1-d}\right)^{b}}}}-1\right)^{a}}+\ln \left(d\right)=0\]
in the interval $\left(0,1\right)$. Note that the solution exists and is unique, because the function $\mu \left(d\right)$ is continuous, increasing, $\mu \left(0\right)=-\infty $, and $\mu \left(1\right)=+\infty $. Let us change the variables as
(45)
\[ z=\frac{1}{k}{e^{k{\left(\frac{d}{1-d}\right)^{b}}}}\Leftrightarrow d=\frac{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}}{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}.\]
Thus the function $\mu \left(d\right)$ defined as formula (44) turns to
(46)
\[ \mu \left(z\right)=\lambda {\left(kz-1\right)^{a}}+\ln \left(\frac{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}}{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}\right).\]
Another change we need is $y=\ln \left(kz\right)$ or, equivalently, $z=\frac{{e^{y}}}{k}$. Thus
(47)
\[ d=\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}},\]
and therefore the function $\mu \left(d\right)$ can be rewritten w.r.t. the variable y as
(48)
\[ \mu \left(y\right)=\ln \left({e^{\lambda {\left({e^{y}}-1\right)^{a}}}}\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}\right).\]
Therefore the equation $\mu \left(y\right)=0$ turns to
(49)
\[ {e^{\lambda {\left({e^{y}}-1\right)^{a}}}}\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}=1\]
or, equivalently,
(50)
\[ k=y{\left({e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right)^{b}}.\]
Substituting k from equation (50) into formula (47), we obtain
(51)
\[ d=\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{y^{\frac{1}{b}}}\left[{e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right]}={e^{-\lambda {\left({e^{y}}-1\right)^{a}}}}.\]
We finish the proof combining equations (50) and (51). □

The behavior of the corrected Kies CDFs can be seen in Figures 2a–2d together with the saturation $\overline{d}$; it is presented by the red points. The red lines form squares, which in fact confirms Corollary 4.3. The quadruplets used for the parameters are $(\lambda ,k,a,b)\in \left\{\left(2,2,5,1\right),\hspace{2.5pt}\left(2,2,1,0.5\right),\hspace{2.5pt}\left(2,1,2,1\right),\hspace{2.5pt}\left(0.5,1,0.5,0.5\right)\right\}$.
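The semi-closed form of Theorem 4.4 is easy to test numerically: choosing y freely, setting $k=\gamma \left(y\right)$, and computing the saturation from the closed-form expression should produce a point satisfying $F\left(\overline{d}\right)=1-\overline{d}$, which is equation (44) rewritten. A sketch (an added illustration; the values are arbitrary):

```python
import math

lam, a, b = 0.8, 1.4, 0.6
y = 0.9
# choose k on the curve k = γ(y) and compute the saturation d in closed form
k = y * (math.exp(lam * (math.exp(y) - 1.0) ** a) - 1.0) ** b
d = math.exp(-lam * (math.exp(y) - 1.0) ** a)

F = lambda t: 1.0 - math.exp(
    -lam * (math.exp(k * (t / (1.0 - t)) ** b) - 1.0) ** a)
# the saturation point satisfies F(d) = 1 - d, i.e. equation (44)
assert abs(F(d) - (1.0 - d)) < 1e-9
```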
Next we discuss an interval approximation of the Hausdorff saturation $\overline{d}$ which is useful in practice. Let us consider first the case $\lambda =a=b=1$. The saturation $\overline{d}$ has to satisfy equation (44) which now can be written as
\[ \mu \left(d\right)={e^{\frac{kd}{1-d}}}-1+\ln d=0.\]
The function $\mu \left(d\right)$ can be approximated very well for small values of d by the function
\[ {\mu _{1}}\left(d\right):={e^{kd}}-1+\ln d.\]
Taking the exponent in the Taylor series we see that the function
(54)
\[ {\mu _{1}}\left(d\right)=kd+\ln d+{\sum \limits_{n=2}^{\infty }}\frac{{\left(kd\right)^{n}}}{n!}\]
can be approximated as $\mathcal{O}\left({d^{2}}\right)$ for small enough values of d by the function
\[ {\mu _{2}}\left(d\right):=kd+\ln d.\]
Let ${d_{1}}$ and ${d_{2}}$ be defined as ${d_{1}}:=\frac{1}{k}$ and ${d_{2}}:=\frac{\ln k}{k}$. We shall check when ${\mu _{2}}\left({d_{1}}\right)<0<{\mu _{2}}\left({d_{2}}\right)$. Obviously the first inequality holds when $k>e$. Assuming that this restriction holds, we see that ${\mu _{2}}\left({d_{2}}\right)=\ln \ln k>0$ and hence the second inequality holds, too. Thus we conclude that the function ${\mu _{2}}\left(d\right)$ has a unique root in the interval $\left(0,1\right)$ and it belongs to the subinterval $\left({d_{1}},{d_{2}}\right)$ when $k>e$, since this function is strictly increasing.

In the next proposition we discuss the general case assuming that $\lambda {k^{a}}>1$.
Proposition 4.5.
Suppose that $\lambda {k^{a}}>1$. Let the parameter b be such that $b<\overline{b}$, where $\overline{b}$ is
(56)
\[ \overline{b}:=\frac{\ln \left(\lambda {k^{a}}\right)}{a}.\]
Then the function ${\mu _{2}}\left(d\right)$ defined as
(57)
\[ {\mu _{2}}\left(d\right):=\lambda {k^{a}}{d^{ab}}+\ln d\]
has a unique root in the interval $\left(0,1\right)$. Moreover, the root belongs to the subinterval $\left({d_{1}},{d_{2}}\right)$, where
(58)
\[ \begin{aligned}{}{d_{1}}& :={\left(\frac{1}{\lambda {k^{a}}}\right)^{\frac{1}{ab}}},\\ {} {d_{2}}& :={\left(\frac{\ln \left(\lambda {k^{a}}\right)}{ab\lambda {k^{a}}}\right)^{\frac{1}{ab}}}.\end{aligned}\]
Note that ${d_{1}}<{d_{2}}$ due to the condition $b<\overline{b}$.

Proof.
Let us consider first function (57) in the particular case $\lambda =a=1$. Thus we have
(59)
\[ \begin{aligned}{}k& >{e^{b}},\\ {} {\mu _{2}}\left(d\right)& =k{d^{b}}+\ln d,\\ {} {d_{1}}& ={\left(\frac{1}{k}\right)^{\frac{1}{b}}},\\ {} {d_{2}}& ={\left(\frac{\ln k}{bk}\right)^{\frac{1}{b}}}.\end{aligned}\]
Obviously, the function ${\mu _{2}}\left(d\right)$ is increasing and ${\mu _{2}}\left({d_{1}}\right)<0$ due to $k>{e^{b}}$. We have for ${\mu _{2}}\left({d_{2}}\right)$:
(60)
\[ {\mu _{2}}\left({d_{2}}\right)=\frac{\ln k}{b}+\frac{1}{b}\left[\ln \left(\frac{\ln k}{b}\right)-\ln k\right]=\frac{1}{b}\ln \left(\frac{\ln k}{b}\right)>0.\]
The last inequality is true again due to $k>{e^{b}}$.

Let us remove the restriction $\lambda =a=1$. Let the function ${\overline{\mu }_{2}}\left(d;k,b\right):=k{d^{b}}+\ln d$ be defined as before – see the second line of (59). Note that we mark the dependence on the variables k and b. We can easily check that the equation ${\mu _{2}}\left(d\right)=0$ is equivalent to ${\overline{\mu }_{2}}\left(d;K,B\right)=0$ for $K=\lambda {k^{a}}$ and $B=ab$, and thus we can use the result derived above. This way the values of K and B lead to formulas (56) and (58). □
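Proposition 4.5 can be checked numerically by verifying that ${\mu _{2}}$ changes sign over $\left({d_{1}},{d_{2}}\right)$; the sketch below (an added illustration with arbitrary admissible parameters) does exactly that:

```python
import math

lam, k, a, b = 2.0, 3.0, 1.5, 0.4
K, B = lam * k ** a, a * b                 # K = λk^a, B = ab as in the proof
assert K > 1.0 and B < math.log(K)         # λk^a > 1 and b < b̄

mu2 = lambda d: K * d ** B + math.log(d)   # function (57)
d1 = (1.0 / K) ** (1.0 / B)
d2 = (math.log(K) / (B * K)) ** (1.0 / B)  # the bounds (58)
assert d1 < d2
assert mu2(d1) < 0.0 < mu2(d2)             # the root is bracketed by (d1, d2)
```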
Let us return to the saturation of the corrected Kies distribution. We have shown above that it is the solution of equation (44), which is equivalent to
(61)
\[ {e^{k{\left(\frac{d}{1-d}\right)^{b}}}}-1-{\left(-\frac{\ln d}{\lambda }\right)^{\frac{1}{a}}}=0.\]
Analogously to the case $a=b=\lambda =1$, formula (54), we can see that after the Taylor expansion of the exponent, the left-hand side of equation (61) can be approximated by the function ${\mu _{2}}\left(d\right)$ near zero as $\mathcal{O}\left({d^{2b}}\right)$. Hence, its root can be used as an approximation of the corrected Kies distribution’s saturation when it is small enough. On the other hand, the function ${\mu _{2}}\left(d\right)$ is a lower approximation of $\mu \left(d\right)$ and therefore ${\mu _{2}}\left(d\right)<\mu \left(d\right)$. Thus the saturation is below the root of the function ${\mu _{2}}\left(d\right)$ and hence $\overline{d}<{d_{2}}$. The question stands, when is ${d_{2}}<1$? Let us define ${b_{1}}$ as
(62)
\[ {b_{1}}:=\frac{\ln \left(\lambda {k^{a}}\right)}{a\lambda {k^{a}}}=\frac{\overline{b}}{\lambda {k^{a}}}.\]
Note that ${b_{1}}<\overline{b}$. We shall show that if $b<\overline{b}$, then ${d_{2}}<1$ only when $b>{b_{1}}$. Using again the notations $K=\lambda {k^{a}}$ and $B=ab$ and having in mind formula (58), we see that ${d_{2}}<1$ when $1>\frac{\ln K}{BK}$, which is equivalent to $b>{b_{1}}$. Note that $K>1$.

On the contrary, ${d_{1}}$ is not always below the saturation $\overline{d}$. It turns out that there exists a value, say ${b_{2}}$, dependent on the other parameters, such that ${d_{1}}<\overline{d}$ for $b<{b_{2}}$, and vice versa. To see this, we consider function (44). Obviously, it is increasing in the distribution domain $\left(0,1\right)$. Also, ${d_{1}}$ increases w.r.t. the parameter b because $\lambda {k^{a}}>1$. Therefore $\overline{\mu }\left(b\right):=\mu \left({d_{1}}\left(b\right)\right)$ is an increasing function, too; note that ${d_{1}}\left(b\right)<1$. Having in mind $\overline{\mu }\left(0\right)=-\infty $, $\overline{\mu }\left(+\infty \right)=+\infty $, and $\mu \left(\overline{d}\left(b\right)\right)=0$, we conclude that indeed ${d_{1}}\left(b\right)<\overline{d}\left(b\right)$ for $b<{b_{2}}$, where ${b_{2}}$ is the unique solution in the interval $\left(0,\infty \right)$ of the equation
(63)
\[ \mu \left({d_{1}}\left(b\right)\right)=0.\]
We shall show that ${b_{2}}<\overline{b}$, too. Let us mark the dependence on b in the terms ${d_{1}}$ and ${d_{2}}$. We can easily check that ${d_{1}}\left(\overline{b}\right)={d_{2}}\left(\overline{b}\right)$ and hence $\overline{d}<{d_{1}}\left(\overline{b}\right)={d_{2}}\left(\overline{b}\right)<1$ (because $\overline{d}<{d_{2}}\left(\overline{b}\right)$ and ${d_{1}}\left(\overline{b}\right)<1$). Therefore $\overline{\mu }\left(\overline{b}\right)=\mu \left({d_{1}}\left(\overline{b}\right)\right)>\mu \left(\overline{d}\right)=0$, since $\mu \left(b\right)$ is an increasing function. Thus we see that ${b_{2}}<\overline{b}$.
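For a quick numerical illustration (our own sketch, not part of the article), the saturation $\overline{d}$ can be located by bisection, since the left-hand side of equation (61) is increasing on $\left(0,1\right)$ and changes sign there. The expression for the upper bound ${d_{2}}$ below follows our reading of formula (58); the parameters are those of Figure 2e ($\lambda =2$, $a=1$, $k=20$), and $b=1$ is our own choice.

```python
import math

# Parameters as in Figure 2e of the text (lambda = 2, a = 1, k = 20); b = 1 is our own choice.
lam, a, k, b = 2.0, 1.0, 20.0, 1.0

def mu(d):
    # Left-hand side of equation (61).
    return math.exp(k * (d / (1.0 - d)) ** b) - 1.0 - (-math.log(d) / lam) ** (1.0 / a)

def saturation(lo=1e-12, hi=0.5, iters=200):
    # mu is increasing on (0, 1), negative near 0 and positive at 0.5 for these
    # parameters, so plain bisection converges to the unique root.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mu(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

d_bar = saturation()
K, B = lam * k ** a, a * b
d2 = (math.log(K) / (B * K)) ** (1.0 / B)  # upper bound d2 (our reading of formula (58))
print(d_bar, d2)
```

For these parameters the computed root indeed lies strictly below ${d_{2}}$, in agreement with $\overline{d}<{d_{2}}$.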
We can formulate these results in the following proposition.
Proposition 4.6.
Suppose that $b<\overline{b}$, where $\overline{b}$ is given in formula (56). Then if $b<{b_{2}}$, where ${b_{2}}$ is the solution of equation (63), then ${d_{1}}<\overline{d}<{d_{2}}$. If in addition $b>{b_{1}}$ for ${b_{1}}$ given in equation (62), then ${d_{2}}<1$. Having in mind that ${b_{1}}\vee {b_{2}}<\overline{b}$, we can formulate the following statements:
As we can see from definition (58), ${d_{1}}$ can be viewed as an increasing function w.r.t. the parameter b. Let us consider the second value ${d_{2}}$. It can be written as ${d_{2}}=\alpha \left(\beta \right)={\left(\beta c\right)^{\beta }}$, where $\beta =\frac{1}{ab}$ and $c=\frac{\ln K}{K}$ with $K=\lambda {k^{a}}$ as above. The function $\alpha \left(\beta \right)$ decreases in the interval $\beta \in \left(0,\frac{1}{ec}\right)$ and increases for $\beta >\frac{1}{ec}$ because its derivative can be written as ${\alpha ^{\prime }}\left(\beta \right)=\alpha \left(\beta \right)\left(\ln \left(\beta c\right)+1\right)$. Thus, we conclude that ${d_{2}}$, considered as a function of the parameter b, ${d_{2}}\left(b\right)$, decreases for $b<{b^{\ast }}$ and increases otherwise, where
(64)
\[ {b^{\ast }}:=\frac{ec}{a}=\frac{e\ln \left(\lambda {k^{a}}\right)}{a\lambda {k^{a}}}=e{b_{1}}.\]
Some calculus shows that ${b^{\ast }}<\overline{b}$ when $\lambda >\frac{e}{{k^{a}}}$, and ${b^{\ast }}>\overline{b}$ otherwise. Note that ${b_{1}}<{b^{\ast }}$.
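These relations are easy to verify numerically. The sketch below is our own check, using $c=\frac{\ln K}{K}$ with $K=\lambda {k^{a}}$; it reproduces the values of ${b_{1}}$, ${b^{\ast }}$, and $\overline{b}$ reported for Figure 2e and confirms that ${d_{2}}\left(b\right)$ attains its minimum at ${b^{\ast }}$.

```python
import math

lam, a, k = 2.0, 1.0, 20.0          # parameters of Figure 2e in the text
K = lam * k ** a
c = math.log(K) / K

b1 = math.log(K) / (a * K)          # formula (62)
b_bar = math.log(K) / a             # threshold b-bar, cf. formula (56)
b_star = math.e * c / a             # minimiser of d2(b); equals e * b1

def d2(b):
    # d2 = (beta * c) ** beta with beta = 1 / (a * b)
    beta = 1.0 / (a * b)
    return (beta * c) ** beta

print(round(b1, 4), round(b_star, 4), round(b_bar, 4))  # 0.0922 0.2507 3.6889
```

The printed values match those quoted below for Figure 2e, and evaluating ${d_{2}}$ on either side of ${b^{\ast }}$ shows the decrease-then-increase behavior.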
The interval approximations of the saturation are presented in Figures 2e and 2f. The values of $\overline{d}$, ${d_{1}}$, and ${d_{2}}$, considered as functions of the parameter b, are colored in blue, red, and orange, respectively. The parameters for the first figure are $\lambda =2$, $a=1$, and $k=20$. The related important values of the parameter b are ${b_{1}}=0.0922$, ${b_{2}}=1.9126$, ${b^{\ast }}=0.2507$, and $\overline{b}=3.6889$. We mark ${b_{1}}$, ${b_{2}}$, and ${b^{\ast }}$ by black, green, and blue points, respectively. In this case ${b_{1}}<{b^{\ast }}<{b_{2}}$, and thus the first case of Proposition 4.6 holds. Also ${b^{\ast }}<\overline{b}$, since $\lambda >\frac{e}{{k^{a}}}$. We can see in Figure 2e that the interval $\left({d_{1}},{d_{2}}\right)$ is a good approximation of the saturation $\overline{d}$ when $b\in \left({b^{\ast }},{b_{2}}\right)$. Otherwise, if $b>{b_{2}}$, then only the bound $\overline{d}<{d_{1}}$ remains.
We choose the parameters $\lambda =2$, $k=1$, and $a=5$ for Figure 2f. Now the important values are ${b_{1}}=0.0693$, ${b_{2}}=0.0134$, ${b^{\ast }}=0.1884$, and $\overline{b}=0.1386$. Note that ${b^{\ast }}>\overline{b}$ because $\lambda <\frac{e}{{k^{a}}}$. Also, ${b_{1}}>{b_{2}}$, and thus the second case of Proposition 4.6 applies. We have $\overline{d}>{d_{1}}$ for $b<{b_{2}}$, but both values are very close. Otherwise, $\overline{d}<{d_{1}}$ when $b\in \left({b_{2}},\overline{b}\right)$.
Let us mention that the relation $\overline{d}<{d_{2}}$ still holds if we remove the restriction $b<\overline{b}$. In this case we have $\overline{d}<{d_{2}}<{d_{1}}<1$.
5 Calibration
The defined corrected Kies distributions depend on four parameters: λ, k, a, and b. The maximum likelihood estimator can be obtained in a closed form for the original Kies distributions, see [13]. Unfortunately, it turns out that this method does not work efficiently in our setting, either in terms of speed or of precision. For this reason we construct a least square errors (LSqE) type algorithm. It falls in the large class of generalized methods of moments (GMM), since it is based on curve fitting to a histogram. Hence we can use the existing results for the GMM; we refer to [8]. These methods produce consistent and asymptotically normal estimators for the Kies distributions, since the whole Kies family exhibits finite moments.
Let us have n observations ${t_{1}},{t_{2}},\dots ,{t_{n}}$. First we calculate the empirical PDF at $m\left(=50\right)$ bins as
(65)
\[ {l_{i}^{emp}}:=\frac{m{N_{i}}}{n\left(\max \left\{{t_{i}}\right\}-\min \left\{{t_{i}}\right\}\right)},\]
where ${N_{i}}$ is the number of observations falling in the i-th bin. Then we derive the PDF values of the corrected Kies distribution with parameters $\left(\lambda ,k,a,b\right)$ at the centers of the bins, say ${l_{i}^{Kies}}\left(\lambda ,k,a,b\right)$, via formula (12). The usual LSqE criterion for minimization is
(66)
\[ L\left(\lambda ,k,a,b\right)={\sum \limits_{i=1}^{m}}{\left({l_{i}^{emp}}-{l_{i}^{Kies}}\left(\lambda ,k,a,b\right)\right)^{2}}.\]
We introduce a small logarithmic modification to reduce the impact of extremely large values of the PDF. We do this because the PDF is infinite at zero for some values of the parameters. Thus we define the cost function as
(67)
\[ L\left(\lambda ,k,a,b\right):={\sum \limits_{i=1}^{m}}\left|\ln \left({l_{i}^{emp}}+\epsilon \right)-\ln \left({l_{i}^{Kies}}\left(\lambda ,k,a,b\right)+\epsilon \right)\right|.\]
We have to minimize the corresponding criterion – (66) or (67) – over all possible parameters $\left\{\lambda ,k,a,b\right\}$. The additional constant ϵ is introduced because some empirical values may be equal to zero. We set this constant to $\epsilon ={10^{-5}}$. Also, we can use criterion (66) if the empirical PDF seems to be finite at its left endpoint. We provide some experiments to validate this algorithm. We generate n corrected Kies distributed random numbers as ${t_{i}}={Q_{F}}\left({r_{i}}\right)$, where ${Q_{F}}\left(\cdot \right)$ is the quantile function given in equation (11), and ${r_{i}}$ are $\left(0,1\right)$-uniformly distributed random numbers. Our choice of n is among $n=1\hspace{2.5pt}000$, $n=10\hspace{2.5pt}000$, $n=100\hspace{2.5pt}000$, and $n=1\hspace{2.5pt}000\hspace{2.5pt}000$. We fix the coefficients λ and k to one, $\lambda =k=1$, and vary a and b among 0.5, 1, and 2. We report in Table 1 the results returned by our calibration algorithm. The fits can be seen in Figure 3. It turns out that this simple LSqE algorithm is quite fast and accurate.
Table 1.
The fitted parameters of the corrected Kies distribution
parameter  real  $n=1\hspace{2.5pt}000$  $n=10\hspace{2.5pt}000$  $n=100\hspace{2.5pt}000$  $n=1\hspace{2.5pt}000\hspace{2.5pt}000$ 
λ  1  1.1918  0.8809  0.9801  1.0315 
k  1  0.8076  1.1230  1.0310  0.9670 
a  0.5  0.5637  0.5541  0.5045  0.4923 
b  1  0.9379  0.8609  0.9744  1.0258 
λ  1  1.1224  0.9164  0.9273  1.0096 
k  1  0.9450  1.0817  1.0653  0.9882 
a  0.5  0.5814  0.5031  0.5451  0.4969 
b  2  1.8257  1.9312  1.7986  2.0200 
λ  1  0.9224  0.7287  0.9904  1.0395 
k  1  1.1869  1.1270  1.0063  0.9797 
a  1  0.7860  1.1613  1.0126  0.9855 
b  0.5  0.5761  0.4087  0.4915  0.5096 
λ  1  0.7780  0.6540  0.9013  0.9728 
k  1  1.1416  1.1456  1.0248  1.0121 
a  1  1.1356  1.2856  1.1234  1.0173 
b  1  0.8307  0.7231  0.8761  0.9783 
λ  1  1.3803  1.4454  0.8623  1.1066 
k  1  0.8388  0.7850  1.0578  0.9413 
a  1  0.8738  0.9411  1.0997  0.9822 
b  2  2.2277  2.2455  1.7768  2.0618 
λ  1  0.5789  1.4045  1.0190  1.1136 
k  1  1.1381  0.8774  0.9984  0.9617 
a  2  2.0803  2.0887  1.9625  2.0007 
b  2  1.8879  1.9627  2.0357  2.0085 
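The validation experiment above can be sketched in a few lines. The snippet below is our own illustration, not the authors' code: it assumes the corrected Kies CDF obtained by composing two Kies CDFs, $F\left(t\right)=1-\exp \left(-\lambda {\left({e^{k{\left(t/\left(1-t\right)\right)^{b}}}}-1\right)^{a}}\right)$ (our reading of the construction), samples via the inverse CDF, and compares binned model probabilities (in place of the exact PDF of formula (12)) with the empirical histogram through the squared criterion (66).

```python
import math
import random

def cdf(t, lam, k, a, b):
    # Corrected Kies CDF as a composition of two Kies CDFs (our reading):
    # F(t) = 1 - exp(-lam * (exp(k * (t/(1-t))**b) - 1)**a) on (0, 1).
    x = math.exp(k * (t / (1.0 - t)) ** b) - 1.0
    return 1.0 - math.exp(-lam * x ** a)

def quantile(u, lam, k, a, b):
    # Inverse of the CDF above, playing the role of Q_F in equation (11).
    x = (math.log(1.0 + (-math.log(1.0 - u) / lam) ** (1.0 / a)) / k) ** (1.0 / b)
    return x / (1.0 + x)

def lsqe_cost(sample, lam, k, a, b, m=50):
    # Squared criterion (66), with binned model probabilities standing in
    # for the PDF at the bin centers.
    lo, hi = min(sample), max(sample)
    width = (hi - lo) / m
    counts = [0] * m
    for t in sample:
        counts[min(int((t - lo) / width), m - 1)] += 1
    cost = 0.0
    for i in range(m):
        l_emp = counts[i] / (len(sample) * width)  # formula (65)
        p = cdf(lo + (i + 1) * width, lam, k, a, b) - cdf(lo + i * width, lam, k, a, b)
        cost += (l_emp - p / width) ** 2
    return cost

random.seed(7)
true = dict(lam=1.0, k=1.0, a=1.0, b=2.0)
sample = [quantile(random.random(), **true) for _ in range(20000)]
cost_true = lsqe_cost(sample, **true)
cost_wrong = lsqe_cost(sample, 1.0, 1.0, 1.0, 0.5)
print(cost_true < cost_wrong)
```

Minimizing such a cost over $\left\{\lambda ,k,a,b\right\}$ (e.g., with a generic optimizer) recovers the true parameters in the spirit of Table 1; here we only check that the cost at the true parameters beats a visibly wrong candidate.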
6 An application
We now investigate the behavior of the S&P500 index. It is one of the most used indicators in the financial markets and provides important information about the world economy. We use daily observations for the period between January 2, 1980 and July 1, 2022 – 10717 observations in total. We derive the so-called log-returns, denoted by ${r_{i}}$, via the equation
(68)
\[ {r_{i}}:=\ln \left(\frac{{S_{i+1}}}{{S_{i}}}\right)\hspace{1em}\mathrm{for}\hspace{2.5pt}i=1,2,\dots ,10716,\]
where ${S_{i}}$ are the observed S&P500 values. The log-returns are presented in Figure 4a. It can be seen that there are periods of calm trading as well as high-volatility periods. This is a well-observed phenomenon in all financial markets – the so-called volatility clustering. The largest downward peak happens on October 19, 1987 (the 1971st observation) – the Black Monday. The S&P500 index loses more than twenty percent – the largest one-day loss ever. We are interested in the lengths of the periods between the shocks. We derive them by obtaining the dates at which the index falls by more than two percent – there are 357 such dates – and then calculating the lengths of the periods between these days. The longest such period contains 950 days – between May 19, 2003 and February 26, 2007. We mark these days with red points in Figure 4. We may view the derived lengths as survival times and examine their distribution. We divide all observations by 1000 to fit the Kies domain, because the maximal value is 950. We calibrate the parameters of four distributions – the corrected Kies and its ancestors, namely, the exponential, Weibull, and original Kies. We use the following parametrization:
The derived parameters are reported in the first part of Table 2. Immediately after them we provide the results returned by the LSqE algorithm described in Section 5. The constant ϵ in cost function (67) is chosen to be $\epsilon =0.01$. It turns out that the corrected Kies distribution is significantly closer to the real observations – its error is 23.1820. The value of this error for the original Kies distribution is 25.3491, whereas for the exponential and Weibull distributions it is 26.6652 and 29.4037, respectively.
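The construction of the inter-shock periods can be sketched as follows. This is our own toy illustration with made-up prices; the rule "the index falls by more than two percent" is implemented as a log-return below $\ln 0.98$, which is one natural reading of the criterion above.

```python
import math

# Made-up daily prices; the real study uses 10717 S&P500 observations.
prices = [100.0, 97.0, 98.0, 99.0, 96.0, 97.0, 98.0]

# Log-returns r_i = ln(S_{i+1} / S_i), equation (68).
returns = [math.log(prices[i + 1] / prices[i]) for i in range(len(prices) - 1)]

# Days on which the index falls by more than two percent.
shock_days = [i for i, r in enumerate(returns) if r < math.log(0.98)]

# Lengths of the calm periods between consecutive shocks, divided by 1000
# to fit the Kies domain (0, 1) as in the text.
gaps = [(later - earlier) / 1000.0 for earlier, later in zip(shock_days, shock_days[1:])]
print(shock_days, gaps)
```

With these toy prices the drops occur on days 0 and 3, producing a single scaled gap of 0.003; on the real data this procedure yields the 357 shock dates and the 950-day maximal period mentioned above.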
Table 2.
Fits to the S&P500 data
parameter  corrected Kies  original Kies  Weibull  exponential  empirical 
λ  3.1091    0.0228  0.0293   
k  55.0876  15.7857  0.8327     
a  0.2086         
b  2.1617  0.7120       
LSqE  23.1820  25.3491  29.4037  26.6652   
VaR  corrected Kies  original Kies  Weibull  exponential  empirical 
0.9  0.0710  0.0627  0.0622  0.0675  0.0690 
0.925  0.0877  0.0732  0.0716  0.0759  0.0920 
0.95  0.1106  0.0883  0.0853  0.0878  0.1300 
0.975  0.1448  0.1149  0.1095  0.1081  0.1800 
$\overline{\mathit{AVaR}}$  corrected Kies  original Kies  Weibull  exponential  empirical 
0.9  0.1200  0.1004  0.0969  0.0968  0.1872 
0.925  0.1337  0.1112  0.1069  0.1052  0.2221 
0.95  0.1513  0.1268  0.1214  0.1171  0.2799 
0.975  0.1763  0.1535  0.1468  0.1374  0.4154 
expectile  corrected Kies  original Kies  Weibull  exponential  empirical 
0.9  0.0666  0.0564  0.0556  0.0591  0.0753 
0.925  0.0745  0.0624  0.0612  0.0643  0.0856 
0.95  0.0857  0.0711  0.0694  0.0718  0.1009 
0.975  0.1044  0.0864  0.0838  0.0849  0.1274 
Having in mind Propositions 2.1 and 2.6 (third statements), we conclude that the initial value of the PDF for both Kies style distributions is infinite, because $b=0.7120<1$ for the original distribution and $ab=0.4509<1$ for the corrected one. Hence, a lot of mass lies in the left part of the domain. This fact confirms the above-mentioned financial phenomenon of volatility clustering. This is true for the Weibull distribution, too, since its parameter k is less than one, $k=0.8327<1$. Also, the initial value of the exponential PDF is relatively large; it is 34.1236.
Additionally, the shape of the calibrated distributions means that the right tails are important. In fact, they present the probabilities of long calm periods in the markets. We compare the results which the four distributions generate for the tail measures $\mathit{VaR}$, $\mathit{AVaR}$, and $\mathit{EX}$ with the empirical ones – see again Table 2. The levels we have chosen are 0.9, 0.925, 0.95, and 0.975; the values of $\mathit{VaR}$, $\mathit{AVaR}$, and $\mathit{EX}$ are derived via equations (11), (30), and (37). Note that here the meaning of these measures is quite different from their traditional use in finance. We can see again that the corrected Kies distribution produces more realistic values in comparison with the other distributions.
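As a cross-check of the tail numbers (our own sketch), the $\mathit{VaR}$ column for the corrected Kies fit can be recomputed directly from the quantile function, assuming the compositional form of the CDF described in the introduction; with the fitted parameters from Table 2 this reproduces the reported values to roughly the displayed precision.

```python
import math

# Fitted corrected Kies parameters from Table 2.
lam, k, a, b = 3.1091, 55.0876, 0.2086, 2.1617

def quantile(u):
    # Inverse of F(t) = 1 - exp(-lam * (exp(k * (t/(1-t))**b) - 1)**a),
    # i.e. the quantile function Q_F of equation (11) under our reading
    # of the composed CDF.
    x = (math.log(1.0 + (-math.log(1.0 - u) / lam) ** (1.0 / a)) / k) ** (1.0 / b)
    return x / (1.0 + x)

for level in (0.9, 0.925, 0.95, 0.975):
    print(level, quantile(level))
```

The four printed quantiles agree with the $\mathit{VaR}$ values 0.0710, 0.0877, 0.1106, and 0.1448 reported for the corrected Kies distribution in Table 2.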
We mention here another purpose for which these distributions can be used. The available historical data generates a relatively small number of observations for the dates with shocks. Note that the discrepancies in the empirical values of all tail measures reported in Table 2 can be explained precisely by this lack of enough observations. Therefore we can use the theoretical distributions as a tool to fill in the missing information. In this light, the fact that the corrected Kies distribution exhibits the closest tail behavior is of additional importance.