Modern Stochastics: Theory and Applications


On some composite Kies families: distributional properties and saturation in Hausdorff sense
Volume 10, Issue 3 (2023), pp. 287–312
Tsvetelin Zaevski, Nikolay Kyurkchiev

https://doi.org/10.15559/23-VMSTA227
Pub. online: 21 March 2023      Type: Research Article      Open Access

Received: 9 November 2022
Revised: 7 January 2023
Accepted: 12 March 2023
Published: 21 March 2023

Abstract

The stochastic literature contains several extensions of the exponential distribution which increase its applicability and flexibility. In the present article, some properties of a new power modified exponential family with an original Kies correction are discussed. This family is defined as a Kies distribution whose domain is transformed by another Kies distribution. Its probabilistic properties are investigated and some bounds for the saturation in the Hausdorff sense are derived. Moreover, a semiclosed-form formula is obtained for this saturation. The tail behavior of these distributions is also examined using three different criteria inspired by the financial markets, namely, the VaR, AVaR, and expectile based VaR. Some numerical experiments are provided, too.

1 Introduction

The Weibull distribution is one of the most important generalizations of the exponential distribution. Despite the loss of the important memorylessness property, the Weibull distribution has several advantages which determine its wide use in many real-life areas including engineering sciences [29], meteorology and hydrology [28], communications and telecommunications [26], energetics [22], chemical and metallurgical industry [4], epidemiology [16], insurance and banking [6], etc. For more theoretical and practical details related to this distribution we refer to the original work [27] as well as the recently published books [18, 20, 17], and [15].
A significant modification of the Weibull distribution known as the Kies distribution was first proposed in [9] and has recently been considered in many studies. It reduces the positive half-line support of the Weibull distribution to the interval $\left(0,1\right)$ via the change of variables $t=\frac{x}{x+1}$. Later, many authors discussed several modifications. Refs. [11, 21] and [30] examine four-parameter distributions whose domain is translated to an arbitrary positive interval. Refs. [12, 13] take a power of the Kies cumulative distribution function (CDF, hereafter) to define a new family.
A composition approach for the Kies distribution is presented in [2]. In this way the new distribution is defined through the change of variables $t=H\left(x\right)$, where $H\left(x\right)$ is the CDF of an auxiliary distribution. Ref. [25] introduces the Fréchet distribution for this purpose and later in [24] the resulting Kies–Fréchet distribution is applied to model COVID-19 mortality. Alternatively, [1] and [3] use the exponential and Lomax distributions, respectively. See also [5] for another composition based on the generalized uniform distribution.
In the present article we define a new Kies family by its CDFs, which are constructed as a composition of two other Kies CDFs on the interval $\left(0,1\right)$, $G\left(H\left(t\right)\right)$. Note that this definition is possible due to this domain. We call the distribution $G\left(\cdot \right)$ the original one and $H\left(\cdot \right)$ the correction. Thus we have a four-parameter family – two parameters for each initial distribution. We investigate the probabilistic properties of the resulting distribution in the light of its compositional essence. We derive many relations between the corresponding terms – CDF, probability density function, quantile function, mean residual life function, different expectations and moments – of the resulting and the initial distributions. We also investigate the tail behavior by the use of three risk measures arising in the modern capital markets, namely VaR (abbreviated from Value-at-Risk), AVaR (Average-Value-at-Risk, also known as CVaR and TVaR), and expectile based VaR.
Other important results we derive are related to the so-called Hausdorff saturation. It represents the distance between the CDF and a Γ-shaped curve connecting its endpoints. In fact, the saturation measures how the distribution mass is located in the domain – the lower the saturation, the more left-placed the distribution, and vice versa. Also, when studying specific classes of cumulative distribution functions it is important to know their intrinsic characteristics – the saturation in the Hausdorff sense is precisely such a characteristic. This characteristic is important for researchers in choosing an appropriate model for approximating specific data from different branches of scientific knowledge such as biostatistics, population dynamics, growth theory, debugging and test theory, computer virus propagation, and insurance mathematics. In addition, the use of composite Kies families can also be useful in the study of reaction-kinetic models – a similar study of the dynamics of the classical Kies model is discussed in [14]. We obtain in this paper an interval evaluation of the saturation and investigate its behavior w.r.t. the four distribution parameters. Moreover, we prove a semiclosed-form formula for the Hausdorff saturation. Many numerical experiments are provided.
The next question we discuss is related to the inverse problem – the calibration w.r.t. some empirical data. It is generally accepted in the literature that maximum likelihood estimation is a good approach for Kies style distributions due to the available closed-form estimator. However, our numerical simulations do not support this opinion. For this reason we construct an algorithm based on least squares errors. It turns out that this method produces very plausible outcomes.
To illustrate our results we explore empirical data from the real financial markets, namely, the S&P500 index. It is well known that there are high- and low-volatility periods. In fact, this is the ubiquitous phenomenon of volatility clustering. We extract the periods between consecutive market shocks and examine the distribution of their lengths. We compare the results which the corrected Kies distribution returns with the outcomes of its ancestors, namely, the exponential, the Weibull, and the original Kies distributions.
The paper is organized as follows. Section 2 defines the new class of distributions and discusses their probabilistic properties. The tail behavior is examined in Section 3 through the measures VaR, AVaR, and expectile based VaR. The Hausdorff distance and the related saturation are considered in Section 4. We discuss the calibration problem in Section 5. Finally, we present a numerical example based on the S&P500 index in Section 6.

2 Definitions and distributional properties

For convenience, we shall use the following notation throughout the paper. The cumulative distribution function (CDF, as we mentioned above) of a distribution will be denoted by an uppercase letter, the overlined letter will be used for the complementary cumulative distribution function (CCDF), the corresponding lowercase letter is reserved for the probability density function (PDF), and finally the letter Q indexed by the CDF’s letter will denote the quantile function (QF). For example, if $F\left(t\right)$ is the CDF, then $\overline{F}\left(t\right)$, $f\left(t\right)$, and ${Q_{F}}\left(t\right)$ are the corresponding CCDF, PDF, and QF, respectively. Also, we shall use the Greek letter ξ for random variables, and we shall mark it by the corresponding CDF letter in the subscript, i.e. ${\xi _{F}}$ denotes a random variable whose CDF is $F\left(t\right)$.
The standard Kies distribution is defined on the domain $\left(0,1\right)$ by its CDF
(1)
\[ H\left(t\right):=1-{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}}\]
for some positive parameters b and k. Inverting CDF (1) we can derive the quantile function for $t\in \left(0,1\right)$:
(2)
\[ {Q_{H}}\left(t\right)=\frac{{\left(-\ln \left(1-t\right)\right)^{\frac{1}{b}}}}{{k^{\frac{1}{b}}}+{\left(-\ln \left(1-t\right)\right)^{\frac{1}{b}}}}.\]
Differentiating equation (1) we obtain the probability density function
(3)
\[ h\left(t\right)=bk{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
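For readers who wish to experiment numerically, formulas (1)–(3) can be coded directly. The following is a minimal sketch in Python assuming only NumPy; the function names are ours and are reused in the later sketches, and the same routines serve for any Kies distribution by passing its own pair of parameters.
```python
import numpy as np

def kies_cdf(t, b, k):
    # CDF (1): H(t) = 1 - exp(-k * (t / (1 - t))**b), t in (0, 1)
    return 1.0 - np.exp(-k * (t / (1.0 - t)) ** b)

def kies_quantile(p, b, k):
    # quantile function (2), the inverse of CDF (1)
    u = (-np.log(1.0 - p)) ** (1.0 / b)
    return u / (k ** (1.0 / b) + u)

def kies_pdf(t, b, k):
    # PDF (3), the derivative of CDF (1)
    r = (t / (1.0 - t)) ** b
    return b * k * np.exp(-k * r) * t ** (b - 1.0) / (1.0 - t) ** (b + 1.0)

# self-check: the quantile function inverts the CDF on a grid
t = np.linspace(0.05, 0.7, 14)
assert np.allclose(kies_quantile(kies_cdf(t, 2.0, 1.5), 2.0, 1.5), t)
```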
The following proposition describes the shape of PDF (3).
Proposition 2.1.
The value of the PDF at the right end of the distribution domain is zero, $h\left(1\right)=0$. Let the function $\alpha \left(t\right)$ for $t\in \left(0,1\right)$ be defined as
(4)
\[ \alpha \left(t\right):=kb{\left(\frac{t}{1-t}\right)^{b}}-\left(2t+b-1\right).\]
The following statements for PDF (3) w.r.t. the position of the power b w.r.t. 1 hold.
  • 1. If $b>1$, then PDF (3) is zero in the left domain’s endpoint, $h\left(0\right)=0$. Function (4) has a unique root for $t\in \left(0,1\right)$, we denote it by ${t_{2}}$. The PDF increases for $t\in \left(0,{t_{2}}\right)$ having a maximum for $t={t_{2}}$ and decreases for $t\in \left({t_{2}},1\right)$.
  • 2. If $b=1$, then the left limit of the PDF is $h\left(0\right)=k$. If $k\ge 2$, then the PDF is a function decreasing from k to 0. Otherwise, if $k<2$, then we introduce the value ${t_{2}}=1-\frac{k}{2}$; note that ${t_{2}}\in \left(0,1\right)$. The PDF starts from the value k for $t=0$, increases to a maximum for $t={t_{2}}$, and decreases to zero.
  • 3. If $b<1$, then $h\left(0\right)=\infty $. The derivative of function (4) is
    (5)
    \[ {\alpha ^{\prime }}\left(t\right)=k{b^{2}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}-2.\]
    Let $\overline{t}$ be defined as $\overline{t}:=\frac{1-b}{2}$. The PDF is a decreasing function when ${\alpha ^{\prime }}\left(\overline{t}\right)\ge 0$.
    Suppose that ${\alpha ^{\prime }}\left(\overline{t}\right)<0$. In this case derivative (5) has two roots in the interval $\left(0,1\right)$; we denote them by ${\overline{t}_{1}}$ and ${\overline{t}_{2}}$. If $\alpha \left({\overline{t}_{2}}\right)\ge 0$, then the PDF decreases in the whole distribution domain. Otherwise, if $\alpha \left({\overline{t}_{2}}\right)<0$, then function (4) has two roots in the interval $\left(0,1\right)$, too; we denote them by ${t_{1}}$ and ${t_{2}}$. The PDF starts from infinity, decreases in the interval $\left(0,{t_{1}}\right)$ having a local minimum for $t={t_{1}}$, increases for $t\in \left({t_{1}},{t_{2}}\right)$ having a local maximum for $t={t_{2}}$, and decreases to zero for $t\in \left({t_{2}},1\right)$.
Proof.
We have that $h\left(1\right)=0$ due to the exponential decay of PDF (3). The value $h\left(0\right)$ and the shape of PDF (3) are derived in Appendix A.  □
We introduce and investigate a new class of distributions for which the correction is presented by another Kies distribution (1).
Definition 2.2.
Let a, b, λ, and k be positive constants. Let two Kies distributed random variables be defined by their CDFs
(6)
\[ \begin{aligned}{}H\left(t\right)& :=1-{e^{-k{\left(\frac{t}{1-t}\right)^{b}}}},\\ {} G\left(t\right)& :=1-{e^{-\lambda {\left(\frac{t}{1-t}\right)^{a}}}}.\end{aligned}\]
We define a new distribution in the domain $\left(0,1\right)$ by the CDF
(7)
\[ F\left(t\right):=G\left(H\left(t\right)\right).\]
We shall call it an H-corrected Kies distribution. We name G the original distribution and H the correcting distribution.
Remark 1.
Note that this superposition is possible since the Kies CDF is a function increasing from zero to one on the interval $\left(0,1\right)$.
Proposition 2.3.
The CDF (7) can be written as
(8)
\[ F\left(t\right)=1-{e^{-\lambda {\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a}}}}.\]
Proof.
We have
(9)
\[ F\left(t\right)=1-{e^{-\lambda {\left(\frac{H\left(t\right)}{1-H\left(t\right)}\right)^{a}}}}=1-{e^{-\lambda {\left(\frac{1}{\overline{H}\left(t\right)}-1\right)^{a}}}}=1-{e^{-\lambda {\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a}}}},\]
since
(10)
\[ \overline{H}\left(t\right)={e^{-k{\left(\frac{t}{1-t}\right)^{b}}}}.\]
 □
As a corollary of Definition 2.2 we can establish the quantile function.
Corollary 2.4.
The quantile function of an H-corrected Kies distributed random variable can be derived through the formula
(11)
\[ {Q_{F}}\left(t\right)={Q_{H}}\left({Q_{G}}\left(t\right)\right),\]
where ${Q_{H}}\left(t\right)$ and ${Q_{G}}\left(t\right)$ are the quantile functions of the original Kies distributions (equation (2)) $H\left(t\right)$ and $G\left(t\right)$, respectively.
Differentiating equation (8) we obtain for the PDF
(12)
\[ f\left(t\right)=ab\lambda k\hspace{2.5pt}{e^{-\lambda {\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a}}}}{\left({e^{k{\left(\frac{t}{1-t}\right)^{b}}}}-1\right)^{a-1}}{e^{k{\left(\frac{t}{1-t}\right)^{b}}}}\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
More informative is another form of PDF (12) presented in the following proposition.
Proposition 2.5.
The PDF of the H-corrected Kies distribution (12) can be written alternatively as
(13)
\[ f\left(t\right)=g\left(H\left(t\right)\right)h\left(t\right).\]
Proof.
The proof is an immediate consequence of superposition (7).  □
Remark 2.
Formula (13) means that the PDF of the H-corrected Kies distribution is the initial PDF weighted by the corresponding correction’s PDF.
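The following is a numerical sketch of Definition 2.2, assuming NumPy and the Kies helpers from the earlier sketch: the closed form (8), the compositional PDF (13), and the quantile formula (11) are implemented and cross-checked against the superposition $G\left(H\left(t\right)\right)$; the parameter values are chosen only for illustration.
```python
import numpy as np
# assumes kies_cdf, kies_pdf, kies_quantile from the earlier sketch

def corrected_cdf(t, lam, k, a, b):
    # closed form (8): F(t) = 1 - exp(-lam * (exp(k*(t/(1-t))**b) - 1)**a)
    return 1.0 - np.exp(-lam * np.expm1(k * (t / (1.0 - t)) ** b) ** a)

def corrected_pdf(t, lam, k, a, b):
    # form (13): f(t) = g(H(t)) * h(t)
    return kies_pdf(kies_cdf(t, b, k), a, lam) * kies_pdf(t, b, k)

def corrected_quantile(p, lam, k, a, b):
    # formula (11): Q_F(p) = Q_H(Q_G(p))
    return kies_quantile(kies_quantile(p, a, lam), b, k)

# cross-checks: (8) equals the superposition (7), and Q_F inverts F
t = np.linspace(0.05, 0.7, 14)
lam, k, a, b = 0.5, 1.0, 2.0, 0.5
assert np.allclose(corrected_cdf(t, lam, k, a, b), kies_cdf(kies_cdf(t, b, k), a, lam))
assert np.allclose(corrected_quantile(corrected_cdf(t, lam, k, a, b), lam, k, a, b), t)
```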
First we have to derive the value of the PDF of the H-corrected Kies distribution at the left domain endpoint $t=0$ (obviously, the value at the right one, $t=1$, is zero). Analogously to the original Kies distributions, one can expect that $0 < f\left(0\right) < \infty $ when $a=b=1$. This is true, but the values $a=b=1$ are far from exhausting the cases in which the left endpoint value of the PDF is finite and nonzero. The proposition below characterizes the PDF’s behavior near zero.
Proposition 2.6.
The left value $f\left(0\right)$ of the H-corrected Kies PDF (13) is:
  • 1. $f\left(0\right)=0$ when $ab>1$;
  • 2. $f\left(0\right)=\lambda {k^{a}}$ when $ab=1$;
  • 3. $f\left(0\right)=\infty $ when $ab<1$.
Proof.
We shall use form (12) of the PDF. We can see that the behavior of the PDF near zero depends only on the term
(14)
\[ L:=ab\lambda k{\left({e^{k{t^{b}}}}-1\right)^{a-1}}{t^{b-1}}.\]
Expanding the exponent in the Taylor series, we transform equation (14) to
(15)
\[ \begin{aligned}{}L& =ab\lambda k{\left(\left({\sum \limits_{n=1}^{\infty }}\frac{{k^{n}}{t^{nb}}}{n!}\right){t^{\frac{b-1}{a-1}}}\right)^{a-1}}\\ {} & =ab\lambda k{\left({\sum \limits_{n=1}^{\infty }}\frac{{k^{n}}{t^{nb+\frac{b-1}{a-1}}}}{n!}\right)^{a-1}}.\end{aligned}\]
Suppose first that $a<1$. If b is such that
(16)
\[ b+\frac{b-1}{a-1}<0,\]
then at least one term of the sum above, namely the first one, tends to infinity for $t\to 0$. Therefore $L\to 0$, since $a<1$. Note that inequality (16) is equivalent to $ab>1$. If the inequality (16) is opposite in sign, then all terms of the sum tend to zero, and therefore $L\to \infty $ (note again $a<1$). If we have equality in (16), or equivalently $ab=1$, then the first term tends to k and the rest tend to zero. Hence $L\to ab\lambda k{k^{a-1}}=\lambda {k^{a}}$.
Assume now that $a>1$. If inequality (16) holds, equivalently $ab<1$, then $L\to \infty $, since the first term of the sum tends to infinity and $a>1$. If $ab=1$, then only the first term is nonzero – its limit is k – and hence $L\to \lambda {k^{a}}$. If $ab>1$ (oppositely to inequality (16)), we conclude that $L\to 0$, since the sum tends to zero and the power is positive.
Finally, if $a=1$, then the desired result holds because formula (14) turns to $L=b\lambda k{t^{b-1}}$.  □
The shape of the PDF of the H-corrected Kies distribution is a consequence of Propositions 2.1 and 2.6. Obviously, it is more varied than the PDF of the original Kies distribution. Various examples are presented in Figure 1. The six subfigures correspond to the values $\lambda \in \left\{0.5,1,1.5\right\}$ and $a\in \left\{0.5,2\right\}$ of the coefficients of the original distribution G; the original Kies distribution G is colored blue. The rest of the plotted PDFs are produced by the following parameters for the correcting distribution H: $k\in \left\{1,2\right\}$ and $b\in \left\{0.5,1,2\right\}$.
Fig. 1.
PDFs of the corrected Kies distributions
The following proposition for the expectations of the corrected Kies random variables holds.
Proposition 2.7.
Let ${\xi _{F}}$ be an H-corrected Kies distributed random variable with original distribution G, ${\xi _{G}}$ be an original Kies distributed random variable, and $\beta \left(\cdot \right)$ be a real valued function. The expectation of the random variable $\beta \left({\xi _{F}}\right)$ is equal to the expectation of the random variable $\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)$. Formally,
(17)
\[ E\left[\beta \left({\xi _{F}}\right)\right]=E\left[\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)\right].\]
Proof.
Using the form of PDF (13) and changing the variables as $x=H\left(t\right)$ (equivalently, $t={Q_{H}}\left(x\right)$) we derive
(18)
\[\begin{aligned}{}E\left[\beta \left({\xi _{F}}\right)\right]& ={\underset{0}{\overset{1}{\int }}}\beta \left(t\right)g\left(H\left(t\right)\right)dH\left(t\right)\\ {} & ={\underset{0}{\overset{1}{\int }}}\beta \left({Q_{H}}\left(x\right)\right)g\left(x\right)dx\\ {} & =E\left[\beta \left({Q_{H}}\left({\xi _{G}}\right)\right)\right].\end{aligned}\]
 □
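Equation (17) can be illustrated by Monte Carlo, assuming the helpers from the earlier sketches: ${\xi _{F}}$ is sampled by the inverse transform ${Q_{F}}\left(U\right)$ with U uniform on $\left(0,1\right)$, ${\xi _{G}}$ is sampled via ${Q_{G}}\left(U\right)$, and the two expectations are estimated from independent samples; the test function β and the parameter values below are arbitrary choices of ours.
```python
import numpy as np
# assumes kies_quantile and corrected_quantile from the sketches above

rng = np.random.default_rng(0)
lam, k, a, b = 1.0, 1.0, 2.0, 0.5
beta = np.sqrt                                   # any real-valued function beta(.)

u1, u2 = rng.uniform(size=400_000), rng.uniform(size=400_000)
lhs = beta(corrected_quantile(u1, lam, k, a, b)).mean()   # E[beta(xi_F)]
xi_G = kies_quantile(u2, a, lam)                          # sample of xi_G
rhs = beta(kies_quantile(xi_G, b, k)).mean()              # E[beta(Q_H(xi_G))]
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```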
The following corollaries hold.
Corollary 2.8.
The random variables ${\xi _{F}}$ and ${Q_{H}}\left({\xi _{G}}\right)$ are identically distributed under the assumptions of Proposition 2.7.
Corollary 2.9.
The H-corrected Kies distributed random variable ${\xi _{F}}$ has finite moments and they can be presented as
(19)
\[ {\mu _{n}}:=E\left[{\xi _{F}^{n}}\right]=E\left[{\left({Q_{H}}\left({\xi _{G}}\right)\right)^{n}}\right]\]
for $n=1,2,\dots \hspace{0.1667em}$.
Proof.
We can obtain the moments integrating by parts as
(20)
\[ E\left[{\xi _{F}^{n}}\right]={\underset{0}{\overset{1}{\int }}}{t^{n}}dF\left(t\right)=1-n{\underset{0}{\overset{1}{\int }}}{t^{n-1}}F\left(t\right)dt\]
and hence they are finite. Formula (19) is an immediate consequence of equation (17).  □
Let us consider now the mean residual life function (MRLF, hereafter) of an H-corrected Kies distribution. Usually it is defined as the conditional expectation
(21)
\[ {m_{F}}\left(t\right):=E\left[{\xi _{F}}-t\hspace{2.5pt}|\hspace{2.5pt}{\xi _{F}}>t\right].\]
We shall use an alternative presentation stated in [7]:
(22)
\[ {m_{F}}\left(t\right):=\frac{1}{\overline{F}\left(t\right)}{\underset{t}{\overset{1}{\int }}}\overline{F}\left(s\right)ds.\]
The following proposition for the MRLF stands.
Proposition 2.10.
The MRLF of an H-corrected Kies distributed random variable ${\xi _{F}}$ can be written as
(23)
\[ {m_{F}}\left(t\right)=\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>H\left(t\right)}}\right]}{\overline{F}\left(t\right)}-t.\]
Proof.
Let us consider first the integral in formula (22). Changing the variables as $x=H\left(s\right)$ (equivalently to $s={Q_{H}}\left(x\right)$) and integrating by parts we derive
(24)
\[ \begin{aligned}{}{\underset{t}{\overset{1}{\int }}}\overline{F}\left(s\right)ds& ={\underset{t}{\overset{1}{\int }}}\overline{G}\left(H\left(s\right)\right)ds={\underset{H\left(t\right)}{\overset{1}{\int }}}\overline{G}\left(x\right)d{Q_{H}}\left(x\right)\\ {} & ={\left.\overline{G}\left(x\right){Q_{H}}\left(x\right)\right|_{H\left(t\right)}^{1}}+{\underset{H\left(t\right)}{\overset{1}{\int }}}g\left(x\right){Q_{H}}\left(x\right)dx\\ {} & =-\overline{F}\left(t\right)t+E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>H\left(t\right)}}\right].\end{aligned}\]
We obtain the desired result combining equations (22) and (24).  □
Remark 3.
A simple validation of Proposition 2.10 can be seen for $t=0$. Then formula (21) leads to $m\left(0\right)=E\left[{\xi _{F}}\right]$ and thus formulas (17) and (23) coincide when $\beta \left(\cdot \right)$ is the identity function.
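The two representations of the MRLF can also be compared numerically — a sketch assuming SciPy and the helpers above: formula (22) is evaluated by quadrature of the survival function and formula (23) by Monte Carlo; the parameters and the point t are chosen only for illustration.
```python
import numpy as np
from scipy.integrate import quad
# assumes corrected_cdf, kies_cdf, kies_quantile from the sketches above

lam, k, a, b = 1.0, 1.0, 2.0, 0.5
t0 = 0.3

# (22): m_F(t) = (1 / Fbar(t)) * int_t^1 Fbar(s) ds; for these parameters
# the survival function is numerically zero well before s = 1
Fbar = lambda s: 1.0 - corrected_cdf(s, lam, k, a, b)
integral, _ = quad(Fbar, t0, 0.999)
m_def = integral / Fbar(t0)

# (23): m_F(t) = E[Q_H(xi_G) 1_{xi_G > H(t)}] / Fbar(t) - t, by Monte Carlo
rng = np.random.default_rng(1)
xi_G = kies_quantile(rng.uniform(size=500_000), a, lam)
trunc = np.where(xi_G > kies_cdf(t0, b, k), kies_quantile(xi_G, b, k), 0.0).mean()
m_mc = trunc / Fbar(t0) - t0
print(m_def, m_mc)   # agree up to quadrature and sampling error
```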

3 Tail behavior

Let us consider three measures arising in risk management – $\mathit{VaR}$, $\mathit{AVaR}$, and expectile based VaR; we shall use the notation $\mathit{EX}$ for the last one. By its original definition, the $\mathit{VaR}$ at level α of a random variable is just the negative of the quantile function, $\mathit{VaR}(\alpha )=-Q(\alpha )$. Since the domain of the Kies family is the interval $\left(0,1\right)$, we shall set $\mathit{VaR}(\alpha ):=Q(\alpha )$. As its name suggests, $\mathit{AVaR}$ is an average $\mathit{VaR}$ in some sense – it is defined as
(25)
\[ \mathit{AVaR}(\alpha ):=\frac{1}{\alpha }{\underset{0}{\overset{\alpha }{\int }}}\mathit{VaR}(u)du.\]
Also, we consider the right tail behavior by defining the following term
(26)
\[ \overline{\mathit{AVaR}}(\alpha ):=\frac{1}{1-\alpha }{\underset{\alpha }{\overset{1}{\int }}}\mathit{VaR}(u)du.\]
The expectile function is related to the quantiles in the following way. The α-quantile of the random variable ξ can be viewed as the lower solution of the optimization problem
(27)
\[ Q\left(\alpha \right)=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\left\{E\left[\alpha {\left(\xi -x\right)^{+}}+\left(1-\alpha \right){\left(\xi -x\right)^{-}}\right]\right\},\]
where ${z^{+}}$ and ${z^{-}}$ are notations for $\max \left(z,0\right)$ and $\max \left(-z,0\right)$, respectively. For more details, see, for example, [10]. Analogously, the expectile is defined in [19] as the solution of the following quadratic problem
(28)
\[ \mathit{EX}(\alpha ):=\underset{x\in \mathbb{R}}{\operatorname{arg\,min}}\left\{E\left[\alpha {\left({\left(\xi -x\right)^{+}}\right)^{2}}+\left(1-\alpha \right){\left({\left(\xi -x\right)^{-}}\right)^{2}}\right]\right\}.\]
Note that the expectiles are well defined when the random variable has a finite second moment. For the corrected Kies distributions this is true due to Corollary 2.9. It can be easily proven that expectile (28) is the solution of the following equation w.r.t. the variable x
(29)
\[ \alpha E\left[{\left(\xi -x\right)^{+}}\right]=\left(1-\alpha \right)E\left[{\left(\xi -x\right)^{-}}\right].\]
We derive $\mathit{AVaR}$s and the expectile based $\mathit{VaR}$ in the following two propositions.
Proposition 3.1.
We have the following double presentations for $\mathit{AVaR}(\alpha )$ and $\overline{\mathit{AVaR}}(\alpha )$:
(30)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{{\mu _{1}}}{\alpha }-\frac{1-\alpha }{\alpha }\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right]\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}} < {Q_{G}}\left(\alpha \right)}}\right]}{\alpha }\\ {} \overline{\mathit{AVaR}}(\alpha )& ={Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}} > {Q_{G}}\left(\alpha \right)}}\right]}{1-\alpha },\end{aligned}\]
where ${\mu _{1}}$ is the first moment given in Corollary 2.9 and ${m_{F}}\left(\cdot \right)$ is the MRLF.
Proof.
We shall use the following relation between the truncated expecations and the MRLF, the proof of which can be found in [7],
(31)
\[ E\left[{\left({\xi _{F}}-y\right)^{+}}\right]={m_{F}}\left(y\right)\overline{F}\left(y\right).\]
Having in mind equations ${x^{-}}={x^{+}}-x$ and (31), and changing the variables as $s={Q_{F}}\left(t\right)\hspace{2.5pt}\Leftrightarrow t=F\left(s\right)$, we derive for the first statement of equation (30)
(32)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{1}{\alpha }{\underset{0}{\overset{\alpha }{\int }}}{Q_{F}}\left(t\right)dt=\frac{1}{\alpha }{\underset{0}{\overset{{Q_{F}}\left(\alpha \right)}{\int }}}sf\left(s\right)ds=\frac{E\left[{\xi _{F}}{I_{\xi < {Q_{F}}\left(\alpha \right)}}\right]}{\alpha }\\ {} & =\frac{{Q_{F}}\left(\alpha \right)P\left({\xi _{F}} < {Q_{F}}\left(\alpha \right)\right)}{\alpha }-\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{-}}\right]}{\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)-\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{+}}\right]}{\alpha }+\frac{E\left[{\xi _{F}}-{Q_{F}}\left(\alpha \right)\right]}{\alpha }\\ {} & =\frac{{\mu _{1}}}{\alpha }-\frac{1-\alpha }{\alpha }\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right].\end{aligned}\]
To derive the second form of the $\mathit{AVaR}$, we use equations (11), (19), and (23) (for the quantile function, the moment and the MRLF, respectively) and obtain
(33)
\[ \begin{aligned}{}\mathit{AVaR}(\alpha )& =\frac{{\mu _{1}}-\left(1-\alpha \right)\left[{Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\right]}{\alpha }\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right)\right]-\left(1-\alpha \right)\left[{Q_{F}}\left(\alpha \right)+\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>H\left({Q_{F}}\left(\alpha \right)\right)}}\right]}{\overline{F}\left({Q_{F}}\left(\alpha \right)\right)}-{Q_{F}}\left(\alpha \right)\right]}{\alpha }\\ {} & =\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}\le {Q_{G}}\left(\alpha \right)}}\right]}{\alpha }.\end{aligned}\]
Let us turn to the right tail term $\overline{\mathit{AVaR}}$. Analogously as above, we obtain
(34)
\[ \begin{aligned}{}\overline{\mathit{AVaR}}(\alpha )& =\frac{1}{1-\alpha }{\underset{\alpha }{\overset{1}{\int }}}{Q_{F}}\left(t\right)dt=\frac{1}{1-\alpha }{\underset{{Q_{F}}\left(\alpha \right)}{\overset{1}{\int }}}sf\left(s\right)ds=\frac{E\left[{\xi _{F}}{I_{\xi >{Q_{F}}\left(\alpha \right)}}\right]}{1-\alpha }\\ {} & =\frac{{Q_{F}}\left(\alpha \right)P\left({\xi _{F}}>{Q_{F}}\left(\alpha \right)\right)}{1-\alpha }+\frac{E\left[{\left({\xi _{F}}-{Q_{F}}\left(\alpha \right)\right)^{+}}\right]}{1-\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)+\frac{{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)\overline{F}\left({Q_{F}}\left(\alpha \right)\right)}{1-\alpha }\\ {} & ={Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right).\end{aligned}\]
Writing equation (23) for $t={Q_{F}}\left(\alpha \right)$, we see that
(35)
\[ {Q_{F}}\left(\alpha \right)+{m_{F}}\left({Q_{F}}\left(\alpha \right)\right)=\frac{E\left[{Q_{H}}\left({\xi _{G}}\right){I_{{\xi _{G}}>{Q_{G}}\left(\alpha \right)}}\right]}{1-\alpha },\]
which leads to the second form of $\overline{\mathit{AVaR}}$.  □
Remark 4.
Note that the second forms of $\mathit{AVaR}$ and $\overline{\mathit{AVaR}}$ can be obtained directly (without using the first forms) by changing the variables as $t={Q_{G}}\left(u\right)\Leftrightarrow u=G\left(t\right)$ in the integral
(36)
\[ \int {Q_{F}}\left(u\right)du=\int {Q_{H}}\left({Q_{G}}\left(u\right)\right)du.\]
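The left-tail average can be checked numerically — a sketch of definition (25) and its second form in (30), assuming SciPy and the helpers above; the level α and the parameters are illustrative.
```python
import numpy as np
from scipy.integrate import quad
# assumes kies_quantile and corrected_quantile from the sketches above

lam, k, a, b = 1.0, 1.0, 2.0, 0.5
alpha = 0.1

# definition (25): AVaR(alpha) = (1/alpha) * int_0^alpha Q_F(u) du
avar_int = quad(lambda u: corrected_quantile(u, lam, k, a, b), 0.0, alpha)[0] / alpha

# second form in (30): AVaR(alpha) = E[Q_H(xi_G) 1_{xi_G < Q_G(alpha)}] / alpha
rng = np.random.default_rng(2)
xi_G = kies_quantile(rng.uniform(size=500_000), a, lam)
below = xi_G < kies_quantile(alpha, a, lam)      # the event {xi_G < Q_G(alpha)}
avar_mc = np.where(below, kies_quantile(xi_G, b, k), 0.0).mean() / alpha

print(avar_int, avar_mc)   # agree up to quadrature and sampling error
```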
Next we discuss the expectile based VaR. It can be obtained through either of the equations presented in the following proposition.
Proposition 3.2.
The α-expectile based VaR, $\mathit{EX}(\alpha )$, is the solution of the following equivalent equations (w.r.t. the variable t)
(37)
\[ \begin{aligned}{}& \left(1-2\alpha \right){m_{F}}\left(t\right)\overline{F}\left(t\right)+\left(1-\alpha \right)\left(t-{\mu _{1}}\right)=0\\ {} & t\left(1-\alpha -\left(1-2\alpha \right)\overline{G}\left(H\left(t\right)\right)\right)-E\left[{Q_{H}}\left({\xi _{G}}\right)\left(1-\alpha -\left(1-2\alpha \right){I_{{\xi _{G}}>H\left(t\right)}}\right)\right]=0,\end{aligned}\]
where ${\mu _{1}}$ is the first moment given in Corollary 2.9 and ${m_{F}}\left(\cdot \right)$ is the MRLF.
Proof.
Using the formula ${x^{-}}={x^{+}}-x$ and equation (29) which determines the expectile we derive
(38)
\[ \alpha E\left[{\left({\xi _{F}}-t\right)^{+}}\right]=\left(1-\alpha \right)E\left[{\left({\xi _{F}}-t\right)^{+}}-\left({\xi _{F}}-t\right)\right].\]
Replacing the truncated expectation from formula (31) we obtain the first equation in (37). It remains to replace the expectation and the MRLF from equations (17) and (23) to derive the second part of (37).  □

4 Hausdorff distance and saturation

Let us consider the max-norm in ${\mathbb{R}^{2}}$, i.e. if A and B are the points $A=\left({t_{A}},{x_{A}}\right)$ and $B=\left({t_{B}},{x_{B}}\right)$, then $\| A-B\| :=\max \left\{\left|{t_{A}}-{t_{B}}\right|,\left|{x_{A}}-{x_{B}}\right|\right\}$. We define the Hausdorff distance, also known as the H-distance, in the sense of [23].
Definition 4.1.
The Hausdorff distance $d\left(g,h\right)$ between two curves g and h in ${\mathbb{R}^{2}}$ is
(39)
\[ d\left(g,h\right):=\max \left\{\underset{A\in g}{\sup }\underset{B\in h}{\inf }\| A-B\| ,\underset{B\in h}{\sup }\underset{A\in g}{\inf }\| A-B\| \right\}.\]
Remark 5.
Roughly said, the Hausdorff distance is the highest optimal path between the curves.
We can define now the saturation of a distribution.
Definition 4.2.
Let $F\left(\cdot \right)$ be the CDF of a distribution with a left-finite domain $\left[a,b\right)$, $-\infty < a < b\le \infty $. Its saturation is the Hausdorff distance between the completed graph of $F\left(\cdot \right)$ and the curve consisting of two lines – one vertical between the points $\left(a,0\right)$ and $\left(a,1\right)$ and another horizontal between $\left(a,1\right)$ and $\left(b,F\left(b\right)\right)$.
Having in mind that the domain of the Kies distribution is the interval $\left(0,1\right)$, we can prove the following corollary for its saturation.
Corollary 4.3.
The saturation of the Kies CDF, $F\left(\cdot \right)$, is the unique solution of the equation
(40)
\[ F\left(d\right)=1-d.\]
Proof.
The proof is an immediate corollary of Definitions 4.1 and 4.2. Note that equation (40) has a unique root because the function $F\left(\cdot \right)$ is increasing and continuous.  □
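Corollary 4.3 reduces the computation of the saturation to a one-dimensional root-finding problem — a minimal sketch assuming SciPy and the Kies helper above; the bracket must be chosen so that $F\left(lo\right)<1-lo$ and $F\left(hi\right)>1-hi$.
```python
from scipy.optimize import brentq
# assumes kies_cdf from the earlier sketch

def saturation(cdf, lo=1e-6, hi=1.0 - 1e-6):
    # root of F(d) - (1 - d) = 0 on (0, 1), cf. Corollary 4.3
    return brentq(lambda d: cdf(d) - (1.0 - d), lo, hi)

# saturation of the plain Kies distribution (1) with b = 1, k = 2
print(saturation(lambda d: kies_cdf(d, 1.0, 2.0)))
```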
We shall now prove the following semiclosed-form formula for the saturation of CDF (8).
Theorem 4.4.
Let y be a positive parameter and the function $\gamma \left(y\right)$ be defined as
(41)
\[ \gamma \left(y\right):=y{\left[{e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right]^{b}}.\]
Suppose that $k=\gamma \left(y\right)$ for some value of y. Then the H-corrected Kies distribution’s saturation is
(42)
\[ d\left(y\right)={e^{-\lambda {\left({e^{y}}-1\right)^{a}}}}.\]
Note that the function $\gamma \left(y\right)$ is strictly increasing in the interval $\left(0,\infty \right)$ and hence it is invertible. Therefore the saturation can be expressed as a function of λ, k, a, and b as
(43)
\[ d\left(\lambda ,k,a,b\right)={e^{-\lambda {\left({e^{{\gamma ^{-1}}\left(\lambda ,k,a,b\right)}}-1\right)^{a}}}}.\]
Proof.
Applying Corollary 4.3 to CDF (8) we see that the saturation d satisfies the equation
(44)
\[ \mu \left(d\right):=\lambda {\left({e^{k{\left(\frac{d}{1-d}\right)^{b}}}}-1\right)^{a}}+\ln \left(d\right)=0\]
in the interval $\left(0,1\right)$. Note that the solution exists and is unique, because the function $\mu \left(d\right)$ is continuous, increasing, $\mu \left(0\right)=-\infty $, and $\mu \left(1\right)=+\infty $. Let us change the variables as
(45)
\[ z=\frac{1}{k}{e^{k{\left(\frac{d}{1-d}\right)^{b}}}}\Leftrightarrow d=\frac{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}}{{\left(\ln \left(kz\right)\right)^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}.\]
Thus the function $\mu \left(d\right)$ defined as formula (44) turns to
(46)
\[ \mu \left(d\right)=\lambda {\left(zk-1\right)^{a}}+\ln \left(d\right).\]
Another change we need is $y=\ln \left(kz\right)$ or, equivalently, $z=\frac{{e^{y}}}{k}$. Thus
(47)
\[ d=\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}\]
and therefore the function $\mu \left(d\right)$ can be rewritten w.r.t. the variable y as
(48)
\[ \mu \left(y\right)=\ln \left({e^{\lambda {\left({e^{y}}-1\right)^{a}}}}\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}\right).\]
Therefore the equation $\mu \left(y\right)=0$ turns to
(49)
\[ {e^{\lambda {\left({e^{y}}-1\right)^{a}}}}\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{k^{\frac{1}{b}}}}=1\]
or, equivalently,
(50)
\[ k=y{\left[{e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right]^{b}}.\]
Substituting k from equation (50) into formula (47), we obtain
(51)
\[ d=\frac{{y^{\frac{1}{b}}}}{{y^{\frac{1}{b}}}+{y^{\frac{1}{b}}}\left[{e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right]}={e^{-\lambda {\left({e^{y}}-1\right)^{a}}}}.\]
We finish the proof combining equations (50) and (51).  □
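Theorem 4.4 can be verified numerically — a sketch assuming SciPy and the helpers from the earlier sketches: the function γ of (41) is inverted by root-finding (the bracket below is adapted to the chosen parameters), and formula (42) is compared with the direct root of equation (44) obtained through the saturation routine from the previous sketch.
```python
import numpy as np
from scipy.optimize import brentq
# assumes corrected_cdf and saturation from the sketches above

lam, k, a, b = 2.0, 2.0, 5.0, 1.0       # one of the quadruplets used in Figure 2

# invert gamma(y) = y * (exp(lam*(exp(y)-1)**a) - 1)**b at the level k, cf. (41)
gamma = lambda y: y * (np.exp(lam * np.expm1(y) ** a) - 1.0) ** b
y = brentq(lambda v: gamma(v) - k, 1e-8, 1.0)     # bracket suited to these parameters

d_semi = np.exp(-lam * np.expm1(y) ** a)          # formula (42)
d_direct = saturation(lambda d: corrected_cdf(d, lam, k, a, b), hi=0.9)
print(d_semi, d_direct)   # the two values coincide up to the root-finding tolerance
```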
The behavior of the corrected Kies CDFs can be seen in Figures 2a–2d together with the saturation $\overline{d}$, which is marked by the red points. The red lines form squares, which in fact confirms Corollary 4.3. The parameter quadruplets used are $(\lambda ,k,a,b)\in \left\{\left(2,2,5,1\right),\left(2,2,1,0.5\right),\left(2,1,2,1\right),\left(0.5,1,0.5,0.5\right)\right\}$.
Fig. 2.
CDFs of the corrected Kies distributions with the Hausdorff saturation
Next we discuss an interval approximation of the Hausdorff saturation $\overline{d}$ which is useful in practice. Let us consider first the case $\lambda =a=b=1$. The saturation $\overline{d}$ has to satisfy equation (44), which now can be written as
(52)
\[ \mu \left(d\right)={e^{k\frac{d}{1-d}}}-1+\ln \left(d\right)=0.\]
The function $\mu \left(d\right)$ can be approximated very well for small values of d by the function
(53)
\[ {\mu _{1}}\left(d\right)={e^{kd}}-1+\ln \left(d\right).\]
Expanding the exponential in a Taylor series, we see that the function
(54)
\[ {\mu _{1}}\left(d\right)=kd+\ln d+{\sum \limits_{n=2}^{\infty }}\frac{{\left(kd\right)^{n}}}{n!}\]
can be approximated as $\mathcal{O}\left({d^{2}}\right)$ for small enough values of d by the function
(55)
\[ {\mu _{2}}\left(d\right)=kd+\ln d.\]
Let ${d_{1}}$ and ${d_{2}}$ be defined as ${d_{1}}:=\frac{1}{k}$ and ${d_{2}}:=\frac{\ln k}{k}$. We shall check when ${\mu _{2}}\left({d_{1}}\right)<0<{\mu _{2}}\left({d_{2}}\right)$. Obviously the first inequality holds when $k>e$. Assuming that this restriction holds, we see that ${\mu _{2}}\left({d_{2}}\right)=\ln \ln k>0$ and hence the second inequality holds, too. Thus we conclude that the function ${\mu _{2}}\left(d\right)$ has a unique root in the interval $\left(0,1\right)$ and it belongs to the subinterval $\left({d_{1}},{d_{2}}\right)$ when $k>e$, since this function is strictly increasing.
In the next proposition we discuss the general case assuming that $\lambda {k^{a}}>1$.
Proposition 4.5.
Suppose that $\lambda {k^{a}}>1$. Let the parameter b be such that $b<\overline{b}$, where $\overline{b}$ is2
(56)
\[ \overline{b}:=\frac{\ln \left(\lambda {k^{a}}\right)}{a}.\]
Then the function ${\mu _{2}}\left(d\right)$ defined as
(57)
\[ {\mu _{2}}\left(d\right):=k{d^{b}}-{\left(-\frac{\ln d}{\lambda }\right)^{\frac{1}{a}}}\]
has a unique root in the interval $\left(0,1\right)$. Moreover, the root belongs to the subinterval $\left({d_{1}},{d_{2}}\right)$ where
(58)
\[ \begin{aligned}{}{d_{1}}& :={\left(\frac{1}{\lambda {k^{a}}}\right)^{\frac{1}{ab}}},\\ {} {d_{2}}& :={\left(\frac{\ln \left(\lambda {k^{a}}\right)}{ab\lambda {k^{a}}}\right)^{\frac{1}{ab}}}.\end{aligned}\]
Note that ${d_{1}}<{d_{2}}$ due to the condition $b<\overline{b}$.
Proof.
Let us consider first function (57) in the particular case $\lambda =a=1$. Thus we have
(59)
\[ \begin{aligned}{}k& >{e^{b}},\\ {} {\mu _{2}}\left(d\right)& =k{d^{b}}+\ln d,\\ {} {d_{1}}& ={\left(\frac{1}{k}\right)^{\frac{1}{b}}},\\ {} {d_{2}}& ={\left(\frac{\ln k}{bk}\right)^{\frac{1}{b}}}.\end{aligned}\]
Obviously, the function ${\mu _{2}}\left(d\right)$ is increasing and ${\mu _{2}}\left({d_{1}}\right)<0$ due to $k>{e^{b}}$. We have for ${\mu _{2}}\left({d_{2}}\right)$:
(60)
\[ {\mu _{2}}\left({d_{2}}\right)=\frac{\ln k}{b}+\frac{1}{b}\left[\ln \left(\frac{\ln k}{b}\right)-\ln k\right]=\frac{1}{b}\ln \left(\frac{\ln k}{b}\right)>0.\]
The last inequality is true again due to $k>{e^{b}}$.
Let us remove the restriction $\lambda =a=1$. Let the function ${\overline{\mu }_{2}}\left(d;k,b\right):=k{d^{b}}+\ln d$ be defined as before – see the second line of (59). Note that we mark the dependence on the variables k and b. We can easily check that the equation ${\mu _{2}}\left(d\right)=0$ is equivalent to ${\overline{\mu }_{2}}\left(d;K,B\right)=0$ for $K=\lambda {k^{a}}$ and $B=ab$ and thus we can use the result derived above. This way the values of K and B lead to formulas (56) and (58).  □
Let us return to the saturation of the corrected Kies distribution. We have shown above that it is the solution of equation (44) which is equivalent to
(61)
\[ {e^{k{\left(\frac{d}{1-d}\right)^{b}}}}-1-{\left(-\frac{\ln d}{\lambda }\right)^{\frac{1}{a}}}=0.\]
Analogously to the case $a=b=\lambda =1$, formula (54), we can see that after the Taylor expansion of the exponential, the left-hand side of equation (61) can be approximated by the function ${\mu _{2}}\left(d\right)$ near zero as $\mathcal{O}\left({d^{2b}}\right)$. Hence, its root can be used as an approximation of the corrected Kies distribution’s saturation when it is small enough. On the other hand, the function ${\mu _{2}}\left(d\right)$ is a lower approximation of $\mu \left(d\right)$ and therefore ${\mu _{2}}\left(d\right)<\mu \left(d\right)$. Thus the saturation is below the root of the function ${\mu _{2}}\left(d\right)$ and hence $\overline{d}<{d_{2}}$. It remains to determine when ${d_{2}}<1$. Let us define ${b_{1}}$ as
(62)
\[ {b_{1}}:=\frac{\ln \left(\lambda {k^{a}}\right)}{a\lambda {k^{a}}}=\frac{\overline{b}}{\lambda {k^{a}}}.\]
Note that ${b_{1}}<\overline{b}$. We shall show that if $b<\overline{b}$, then ${d_{2}}<1$ only when $b>{b_{1}}$. Using again the notations $K=\lambda {k^{a}}$ and $B=ab$ and having in mind formula (58) we see that ${d_{2}}<1$ when $1>\frac{\ln K}{BK}$ which is equivalent to $b>{b_{1}}$. Note that $K>1$.
On the contrary, ${d_{1}}$ is not always below the saturation $\overline{d}$. It turns out that there exists a value, say ${b_{2}}$, dependent on the other parameters, such that ${d_{1}}<\overline{d}$ for $b<{b_{2}}$, and vice versa. To see this, we consider function (44). Obviously, it is increasing in the distribution domain $\left(0,1\right)$. Also, ${d_{1}}$ increases w.r.t. the parameter b because $\lambda {k^{a}}>1$. Therefore $\overline{\mu }\left(b\right):=\mu \left({d_{1}}\left(b\right)\right)$ is an increasing function, too; note that ${d_{1}}\left(b\right)<1$. Having in mind $\overline{\mu }\left(0\right)=-\infty $, $\overline{\mu }\left(+\infty \right)=+\infty $, and $\mu \left(\overline{d}\left(b\right)\right)=0$, we conclude that indeed ${d_{1}}\left(b\right)<\overline{d}\left(b\right)$ for $b<{b_{2}}$, where ${b_{2}}$ is the unique solution in the interval $\left(0,\infty \right)$ of the equation
(63)
\[ \overline{\mu }\left(b\right)=\lambda {\left({e^{{\lambda ^{-\frac{1}{a}}}{\left(1-{\lambda ^{-\frac{1}{ab}}}{k^{-\frac{1}{b}}}\right)^{-b}}}}-1\right)^{a}}-\frac{\ln \lambda }{ab}-\frac{\ln k}{b}.\]
We shall show that ${b_{2}}<\overline{b}$, too. Let us mark the dependence on b in the terms ${d_{1}}$ and ${d_{2}}$. We can easily check that ${d_{1}}\left(\overline{b}\right)={d_{2}}\left(\overline{b}\right)$ and hence $\overline{d}<{d_{1}}\left(\overline{b}\right)={d_{2}}\left(\overline{b}\right)<1$ (because $\overline{d}<{d_{2}}\left(\overline{b}\right)$ and ${d_{1}}\left(\overline{b}\right)<1$). Therefore $\overline{\mu }\left(\overline{b}\right)=\mu \left({d_{1}}\left(\overline{b}\right)\right)>\mu \left(\overline{d}\right)=0$, since $\mu \left(\cdot \right)$ is an increasing function. Thus we see that ${b_{2}}<\overline{b}$.
We can formulate these results in the following proposition.
Proposition 4.6.
Suppose that $b<\overline{b}$, where $\overline{b}$ is given in formula (56). Then if $b<{b_{2}}$, where ${b_{2}}$ is the solution of equation (63), then ${d_{1}}<\overline{d}<{d_{2}}$. If in addition $b>{b_{1}}$ for ${b_{1}}$ given in equation (62), then ${d_{2}}<1$. Having in mind that ${b_{1}}\vee {b_{2}}<\overline{b}$, we can formulate the following statements:
  • • If ${b_{1}}<{b_{2}}$, then
    • 1. ${d_{1}}<\overline{d}<{d_{2}}$ and ${d_{2}}>1$ when $b<{b_{1}}$;
    • 2. ${d_{1}}<\overline{d}<{d_{2}}<1$ when $b\in \left({b_{1}},{b_{2}}\right)$;
    • 3. $\overline{d}<{d_{1}}<{d_{2}}<1$ when $b\in \left({b_{2}},\overline{b}\right)$;
  • • If ${b_{1}}>{b_{2}}$, then
    • 1. ${d_{1}}<\overline{d}<{d_{2}}$ and ${d_{2}}>1$ when $b<{b_{2}}$;
    • 2. $\overline{d}<{d_{1}}<{d_{2}}$ and ${d_{2}}>1$ when $b\in \left({b_{2}},{b_{1}}\right)$;
    • 3. $\overline{d}<{d_{1}}<{d_{2}}<1$ when $b\in \left({b_{1}},\overline{b}\right)$;
As we can see from definition (58), ${d_{1}}$ can be viewed as an increasing function w.r.t. the parameter b. Let us consider the second value ${d_{2}}$. It can be written as ${d_{2}}=\alpha \left(\beta \right)={\left(\beta c\right)^{\beta }}$, where $\beta =\frac{1}{ab}$ and $c=\frac{\ln \left(\lambda {k^{a}}\right)}{\lambda {k^{a}}}$. The function $\alpha \left(\beta \right)$ decreases in the interval $\beta \in \left(0,\frac{1}{ec}\right)$ and increases for $\beta >\frac{1}{ec}$ because its derivative can be written as ${\alpha ^{\prime }}\left(\beta \right)=\alpha \left(\beta \right)\left(\ln \left(\beta c\right)+1\right)$. Thus, we conclude that ${d_{2}}$, considered as a function of the parameter b, ${d_{2}}\left(b\right)$, decreases for $b<{b^{\ast }}$ and increases otherwise, where
(64)
\[ {b^{\ast }}:=e\frac{\ln \left(\lambda {k^{a}}\right)}{a\lambda {k^{a}}}=e{b_{1}}.\]
Some calculus shows that ${b^{\ast }}<\overline{b}$ when $\lambda >\frac{e}{{k^{a}}}$, and ${b^{\ast }}>\overline{b}$ otherwise. Note that ${b_{1}}<{b^{\ast }}$.
The interval approximations of the saturation are presented in Figures 2e and 2f. The values of $\overline{d}$, ${d_{1}}$, and ${d_{2}}$, considered as functions of the parameter b, are colored in blue, red, and orange, respectively. The parameters for the first figure are $\lambda =2$, $a=1$, and $k=20$. The related important values of the parameter b are ${b_{1}}=0.0922$, ${b_{2}}=1.9126$, ${b^{\ast }}=0.2507$, and $\overline{b}=3.6889$. We mark ${b_{1}}$, ${b_{2}}$, and ${b^{\ast }}$ by black, green, and blue points, respectively. In this case ${b_{1}}<{b^{\ast }}<{b_{2}}$ and thus the first case of Proposition 4.6 holds. Also ${b^{\ast }}<\overline{b}$, since $\lambda >\frac{e}{{k^{a}}}$. We can see in Figure 2e that the interval $\left({d_{1}},{d_{2}}\right)$ provides a good enclosure of the saturation $\overline{d}$ when $b\in \left({b^{\ast }},{b_{2}}\right)$. Otherwise, if $b>{b_{2}}$, then the available bound is $\overline{d}<{d_{1}}$.
We choose parameters $\lambda =2$, $k=1$, and $a=5$ for Figure 2f. Now the important values are ${b_{1}}=0.0693$, ${b_{2}}=0.0134$, ${b^{\ast }}=0.1884$, and $\overline{b}=0.1386$. Note that ${b^{\ast }}>\overline{b}$ because $\lambda <\frac{e}{{k^{a}}}$. Also, ${b_{1}}>{b_{2}}$ and thus the second case of Proposition 4.6 applies. We have that $\overline{d}>{d_{1}}$ for $b<{b_{2}}$, but both values are very close. Otherwise $\overline{d}<{d_{1}}$ when $b\in \left({b_{2}},\overline{b}\right)$.
Let us mention that the relation $\overline{d}<{d_{2}}$ still holds if we remove the restriction $b<\overline{b}$. In this case we have $\overline{d}<{d_{2}}<{d_{1}}<1$.
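The bounds of this section can be reproduced numerically — a sketch assuming the helpers from the earlier sketches, using the parameters of Figure 2e with $b=1$ (so that b lies between ${b_{1}}$ and ${b_{2}}$ and the enclosure ${d_{1}}<\overline{d}<{d_{2}}$ of Proposition 4.6 applies).
```python
import numpy as np
# assumes corrected_cdf and saturation from the sketches above

lam, k, a, b = 2.0, 20.0, 1.0, 1.0      # parameters of Figure 2e with b = 1
K, B = lam * k ** a, a * b              # the substitution K = lambda*k^a, B = a*b

b_bar  = np.log(K) / a                  # (56)
b_1    = np.log(K) / (a * K)            # (62)
b_star = np.e * b_1                     # (64)
d_1 = (1.0 / K) ** (1.0 / B)            # (58)
d_2 = (np.log(K) / (B * K)) ** (1.0 / B)
d_sat = saturation(lambda d: corrected_cdf(d, lam, k, a, b), hi=0.9)

print(b_1, b_star, b_bar)   # 0.0922..., 0.2507..., 3.6889..., as reported above
print(d_1, d_sat, d_2)      # d_1 < d_sat < d_2, in line with Proposition 4.6
```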

5 Calibration

The defined corrected Kies distributions depend on four parameters λ, k, a, and b. The maximum likelihood estimator can be obtained in a closed form – for the original Kies distributions, see [13]. Unfortunately, it turns out that this method does not work efficiently in terms of either speed or precision. For this reason we construct a least squares errors (LSqE) type algorithm. It falls into the large class of generalized methods of moments (GMM), since it is based on curve fitting to a histogram. Hence we can use the existing results for the GMM; we refer to [8]. These methods produce consistent and asymptotically normal estimators for the Kies distributions since the whole Kies family exhibits finite moments.
Let us have n observations ${t_{1}}$, ${t_{2}},\dots ,{t_{n}}$. First we calculate the empirical PDF at $m\left(=50\right)$ bins as
(65)
\[ {l_{i}^{emp}}:=\frac{m{N_{i}}}{n\left(\max \left\{{t_{i}}\right\}-\min \left\{{t_{i}}\right\}\right)},\]
where ${N_{i}}$ denotes the number of observations falling in the i-th bin.
Then we derive the PDF values of the corrected Kies distribution with parameters $\left(\lambda ,k,a,b\right)$ in the centers of the bins, say ${l_{i}^{Kies}}\left(\lambda ,k,a,b\right)$, via formula (12). The usual LSqE criterion for minimization is
(66)
\[ L\left(\lambda ,k,a,b\right)={\sum \limits_{i=1}^{m}}{\left({l_{i}^{emp}}-{l_{i}^{Kies}}\left(\lambda ,k,a,b\right)\right)^{2}}.\]
We introduce a little logarithmic modification to minimize the impact of the extremely large values of the PDF. We make this because the PDF is infinitely large at the zero for some values of the parameters. Thus we define the cost function as
(67)
\[ L\left(\lambda ,k,a,b\right):={\sum \limits_{i=1}^{m}}\left|\ln \left({l_{i}^{emp}}+\epsilon \right)-\ln \left({l_{i}^{Kies}}\left(\lambda ,k,a,b\right)+\epsilon \right)\right|.\]
We have to minimize the corresponding criterion – (66) or (67) – over all possible parameters $\left\{\lambda ,k,a,b\right\}$. The additional constant ϵ is introduced, because some empirical values may be equal to zero. We set this constant to be $\epsilon ={10^{-5}}$. Also, we can use criterion (66) if the empirical PDF seems to be finite at its left endpoint. We provide some experiments to validate this algorithm. We generate n corrected Kies distributed random numbers as ${t_{i}}={Q_{F}}\left({r_{i}}\right)$, where ${Q_{F}}\left(\cdot \right)$ is the quantile function given in equation (11), and ${r_{i}}$ are $\left(0,1\right)$-uniformly distributed random numbers. Our choice of n is among $n=1\hspace{2.5pt}000$, $n=10\hspace{2.5pt}000$, $n=100\hspace{2.5pt}000$, and $n=1\hspace{2.5pt}000\hspace{2.5pt}000$. We fix the coefficients λ and k to one, $\lambda =k=1$, and vary a and b among 0.5, 1, and 2. We report in Table 1 the results which are returned by our calibration algorithm. The fits can be seen in Figure 3. It turns out that this simple LSqE algorithm is quite fast and accurate.
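A minimal sketch of the described calibration, assuming SciPy and the helpers from the earlier sketches: a synthetic sample is generated through the quantile function (11), the empirical PDF is built as in (65) (NumPy's density histogram gives exactly this quantity), and cost (67) is minimized by a Nelder–Mead search; the exp-transform of the parameters, which keeps them positive, and the overflow guard are our implementation choices rather than part of the algorithm itself.
```python
import numpy as np
from scipy.optimize import minimize
# assumes corrected_pdf and corrected_quantile from the sketches above

rng = np.random.default_rng(4)
true = np.array([1.0, 1.0, 2.0, 0.5])                     # (lambda, k, a, b)
t = corrected_quantile(rng.uniform(size=10_000), *true)   # sample via (11)

m, eps = 50, 1e-5
dens, edges = np.histogram(t, bins=m, density=True)       # empirical PDF, cf. (65)
centers = 0.5 * (edges[:-1] + edges[1:])

def cost(x):
    # criterion (67); the parameters are exp-transformed to keep them positive
    lam, k, a, b = np.exp(x)
    model = corrected_pdf(centers, lam, k, a, b)
    model = np.nan_to_num(model, nan=0.0, posinf=1e12)    # guard against overflow
    return np.abs(np.log(dens + eps) - np.log(model + eps)).sum()

res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
print(np.exp(res.x))    # fitted (lambda, k, a, b), to be compared with `true`
```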
Table 1.
The fitted parameters of the corrected Kies distribution
parameter real $n=1\hspace{2.5pt}000$ $n=10\hspace{2.5pt}000$ $n=100\hspace{2.5pt}000$ $n=1\hspace{2.5pt}000\hspace{2.5pt}000$
λ 1 1.1918 0.8809 0.9801 1.0315
k 1 0.8076 1.1230 1.0310 0.9670
a 0.5 0.5637 0.5541 0.5045 0.4923
b 1 0.9379 0.8609 0.9744 1.0258
λ 1 1.1224 0.9164 0.9273 1.0096
k 1 0.9450 1.0817 1.0653 0.9882
a 0.5 0.5814 0.5031 0.5451 0.4969
b 2 1.8257 1.9312 1.7986 2.0200
λ 1 0.9224 0.7287 0.9904 1.0395
k 1 1.1869 1.1270 1.0063 0.9797
a 1 0.7860 1.1613 1.0126 0.9855
b 0.5 0.5761 0.4087 0.4915 0.5096
λ 1 0.7780 0.6540 0.9013 0.9728
k 1 1.1416 1.1456 1.0248 1.0121
a 1 1.1356 1.2856 1.1234 1.0173
b 1 0.8307 0.7231 0.8761 0.9783
λ 1 1.3803 1.4454 0.8623 1.1066
k 1 0.8388 0.7850 1.0578 0.9413
a 1 0.8738 0.9411 1.0997 0.9822
b 2 2.2277 2.2455 1.7768 2.0618
λ 1 0.5789 1.4045 1.0190 1.1136
k 1 1.1381 0.8774 0.9984 0.9617
a 2 2.0803 2.0887 1.9625 2.0007
b 2 1.8879 1.9627 2.0357 2.0085
Fig. 3.
PDFs of the corrected Kies distributions

6 An application

We now investigate the behavior of the S&P500 index. It is one of the most used indicators in the financial markets and provides important information about the world economy. We use daily observations for the period between January 2, 1980 and July 1, 2022 – 10717 observations in total. We derive the so-called log-returns, denoted by ${r_{i}}$, via the equation
(68)
\[ {r_{i}}:=\ln \left(\frac{{S_{i+1}}}{{S_{i}}}\right)\hspace{1em}\mathrm{for}\hspace{2.5pt}i=1,2,\dots ,10716,\]
where ${S_{i}}$ are the observed S&P500 values. The log-returns are presented in Figure 4a. It can be seen that there are periods of calm trading as well as high-volatility periods. This is a well observed phenomenon in all financial markets – the so-called volatility clustering. The largest downward peak happens on October 19, 1987 (the 1971st observation) – the Black Monday. The S&P500 index loses more than twenty percent – this is the largest one-day loss ever. We are interested in the lengths of the periods between the shocks. We derive them by obtaining the dates on which the index falls by more than two percent – there are 357 such dates – and then calculating the lengths of the periods between these days. The longest such period contains 950 days – between May 19, 2003 and February 26, 2007. We mark these days with red points in Figure 4. We may view the derived lengths as survival times and examine their distribution. We divide all observations by 1000 to fit the Kies domain, because the maximal value is 950. We calibrate the parameters of four distributions – the corrected Kies and its ancestors, namely, the exponential, the Weibull, and the original Kies. We use the following parametrization:
(69)
\[ \begin{aligned}{}{f_{\mathrm{exponential}}}& :=\frac{1}{\lambda }{e^{-\frac{x}{\lambda }}},\\ {} {f_{\mathrm{Weibull}}}& :=\frac{k}{\lambda }{\left(\frac{x}{\lambda }\right)^{k-1}}{e^{-{\left(\frac{x}{\lambda }\right)^{k}}}}.\end{aligned}\]
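The data preparation described above can be sketched as follows, assuming NumPy; the array `prices` of daily closing values and the shock threshold expressed through log-returns are our stand-ins for the actual data handling, which is not detailed here.
```python
import numpy as np

def shock_free_periods(prices, threshold=-0.02, scale=1000.0):
    # `prices` holds daily closing values S_1, ..., S_N
    r = np.diff(np.log(prices))             # log-returns, equation (68)
    shocks = np.flatnonzero(r < threshold)  # days with a drop of more than 2%
    return np.diff(shocks) / scale          # inter-shock lengths, rescaled to (0, 1)

# illustration with a synthetic price path in place of the real index values
rng = np.random.default_rng(5)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(10_717)))
lengths = shock_free_periods(prices)
print(lengths.size, lengths[:5])
```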
Fig. 4.
S&P500 log-returns and the related estimations
The derived parameters are reported in the first part of Table 2. Immediately after them we report the values of the LSqE criterion returned by the algorithm described in Section 5. The constant ϵ in cost function (67) is chosen to be $\epsilon =0.01$. It turns out that the corrected Kies distribution is significantly closer to the real observations – its error is 23.1820. The value of this error for the original Kies distribution is 25.3491, whereas for the exponential and Weibull distributions it is 26.6652 and 29.4037, respectively.
Table 2.
Fits to the S&P500 data
parameter corrected Kies original Kies Weibull exponential empirical
λ 3.1091 - 0.0228 0.0293 -
k 55.0876 15.7857 0.8327 - -
a 0.2086 - - - -
b 2.1617 0.7120 - - -
LSqE 23.1820 25.3491 29.4037 26.6652 -
VaR corrected Kies original Kies Weibull exponential empirical
0.9 0.0710 0.0627 0.0622 0.0675 0.0690
0.925 0.0877 0.0732 0.0716 0.0759 0.0920
0.95 0.1106 0.0883 0.0853 0.0878 0.1300
0.975 0.1448 0.1149 0.1095 0.1081 0.1800
$\overline{\mathit{AVaR}}$ corrected Kies original Kies Weibull exponential empirical
0.9 0.1200 0.1004 0.0969 0.0968 0.1872
0.925 0.1337 0.1112 0.1069 0.1052 0.2221
0.95 0.1513 0.1268 0.1214 0.1171 0.2799
0.975 0.1763 0.1535 0.1468 0.1374 0.4154
expectile corrected Kies original Kies Weibull exponential empirical
0.9 0.0666 0.0564 0.0556 0.0591 0.0753
0.925 0.0745 0.0624 0.0612 0.0643 0.0856
0.95 0.0857 0.0711 0.0694 0.0718 0.1009
0.975 0.1044 0.0864 0.0838 0.0849 0.1274
Having in mind Propositions 2.1 and 2.6 (third statements), we conclude that the initial value of the PDF for both Kies style distributions is infinity, because $b=0.7120<1$ for the original distribution and $ab=0.4509<1$ for the corrected one. Hence, a lot of mass lies in the left part of the domain. This fact confirms the above-mentioned financial phenomenon of volatility clustering. This is true for the Weibull distribution, too, since its parameter k is less than one; $k=0.8327<1$. Also, the initial value of the exponential PDF is relatively large; it is 34.1236.
Additionally, the shape of the calibrated distributions means that the right tails are important. In fact, they represent the probabilities of long calm periods in the markets. We compare the results which the four distributions generate for the tail measures $\mathit{VaR}$, $\mathit{AVaR}$, and $\mathit{EX}$ with the empirical ones – see again Table 2. The levels we have chosen are 0.9, 0.925, 0.95, and 0.975; the values of $\mathit{VaR}$, $\mathit{AVaR}$, and $\mathit{EX}$ are derived via equations (11), (30), and (37). Note that here the meaning of these measures is quite different from their traditional use in finance. We can see again that the corrected Kies distribution produces more realistic values in comparison with the other distributions.
This is the place to mention another purpose for which we can use these distributions. The available historical data generate a relatively small number of observations for the dates with shocks. Note that the lower empirical values for all tail measures reported in Table 2 can be explained precisely by the lack of enough observations. Therefore we can use the theoretical distributions as a tool to fill in the missing information. In this light, the fact that the corrected Kies distribution exhibits the closest tail behavior is of additional importance.

A Proof of Proposition 2.1

The value of PDF (3) at the left endpoint of the distribution domain, $h\left(0\right)$, can be obtained directly from equation (3). To continue, let us examine the derivative of PDF (3). It can be presented as
(70)
\[ {h^{\prime }}\left(t\right)=k{e^{-k\eta \left(t\right)}}\left[{\eta ^{\prime\prime }}\left(t\right)-k{\left({\eta ^{\prime }}\left(t\right)\right)^{2}}\right],\]
where
(71)
\[ \eta \left(t\right):={\left(\frac{t}{1-t}\right)^{b}}.\]
The derivative of function (71) is
(72)
\[ {\eta ^{\prime }}\left(t\right)=b\frac{{t^{b-1}}}{{\left(1-t\right)^{b+1}}}.\]
Let us consider first the case $b=1$ and therefore
(73)
\[ {\eta ^{\prime }}\left(t\right)=\frac{1}{{\left(1-t\right)^{2}}}.\]
The second derivative of function (71) is
(74)
\[ {\eta ^{\prime\prime }}\left(t\right)=2\frac{1}{{\left(1-t\right)^{3}}}.\]
Having in mind formulas (73) and (74), we see that derivative (70) is positive when $t<{t_{2}}=1-\frac{k}{2}$, and vice versa. Hence, if $k\ge 2$, then derivative (70) is always negative in the distribution domain and therefore the PDF is a decreasing function. Otherwise, if $k<2$, then $0<{t_{2}}<1$, and hence the PDF increases for $t\in \left(0,{t_{2}}\right)$ and decreases for $t\in \left({t_{2}},1\right)$.
Suppose now that $b\ne 1$. The second derivative of function (71) now is
(75)
\[ {\eta ^{\prime\prime }}\left(t\right)=b\frac{{t^{b-2}}}{{\left(1-t\right)^{b+2}}}\left(2t+b-1\right).\]
Taking in attention formulas (72) and (75), we conclude that PDF’s derivative (70) is positive when $\alpha \left(t\right)<0$, and vice versa, where function $\alpha \left(t\right)$ is given in equation (4).
Let us consider the case $b>1$. The derivative ${\alpha ^{\prime }}\left(t\right)$, given in equation (5), is an increasing function with negative left endpoint and positive right endpoint, ${\alpha ^{\prime }}\left(0\right)=-2$ and ${\alpha ^{\prime }}\left(1\right)=+\infty $. Thus it has a unique root in the interval $\left(0,1\right)$, which we denote by ${t_{2}}$. Hence, function (4) decreases for $t\in \left(0,{t_{2}}\right)$, has a minimum for $t={t_{2}}$ and increases to $\alpha \left(1\right)=+\infty $. The shape of the PDF follows, since the left endpoint of the function $\alpha \left(t\right)$ is negative, $\alpha \left(0\right)=-(b-1)<0$.
Finally, suppose that $b<1$. We derive the second derivative of function (4) as
(76)
\[ {\alpha ^{\prime\prime }}\left(t\right)=k{b^{2}}\frac{{t^{b-2}}}{{\left(1-t\right)^{b+2}}}\left(2t+b-1\right).\]
Therefore ${\alpha ^{\prime\prime }}\left(t\right)<0$ for $t\in \left(0,\overline{t}\right)$, $\overline{t}=\frac{1-b}{2}$, and ${\alpha ^{\prime\prime }}\left(t\right)>0$ for $t\in \left(\overline{t},1\right)$. Hence, the derivative ${\alpha ^{\prime }}\left(t\right)$ decreases when $t\in \left(0,\overline{t}\right)$ and increases when $t\in \left(\overline{t},1\right)$. If ${\alpha ^{\prime }}\left(\overline{t}\right)\ge 0$, then the derivative ${\alpha ^{\prime }}\left(t\right)$ is positive in the whole domain and thus $\alpha \left(t\right)$ is an increasing function. Hence, the PDF decreases in the distribution domain, since $\alpha \left(0\right)=1-b>0$.
Suppose that ${\alpha ^{\prime }}\left(\overline{t}\right)<0$. Having in mind ${\alpha ^{\prime }}\left(0\right)={\alpha ^{\prime }}\left(1\right)=+\infty $, we conclude that the derivative ${\alpha ^{\prime }}\left(t\right)$ has two roots, ${\overline{t}_{1}}$ and ${\overline{t}_{2}}$. Also, ${\alpha ^{\prime }}\left(t\right)<0$ for $t\in \left({\overline{t}_{1}},{\overline{t}_{2}}\right)$ and ${\alpha ^{\prime }}\left(t\right)>0$ outside this interval. Therefore the function $\alpha \left(t\right)$ starts from the positive value $\alpha \left(0\right)=1-b$, increases to a local maximum for $t={\overline{t}_{1}}$, decreases to a local minimum for $t={\overline{t}_{2}}$, and increases to infinity as $t\to 1$. Hence, if $\alpha \left({\overline{t}_{2}}\right)\ge 0$, then $\alpha \left(t\right)$ is positive in the whole domain and therefore the PDF is a decreasing function. Otherwise, if $\alpha \left({\overline{t}_{2}}\right)<0$, then the function $\alpha \left(t\right)$ has two roots, ${t_{1}}$ and ${t_{2}}$, and it is negative between them and positive outside. This finishes the proof.

Acknowledgement

The authors would like to express their sincere gratitude to the editor Prof. Yuliya Mishura and to the anonymous reviewers for the helpful and constructive comments which substantially improved the quality of this paper.

Footnotes

2 Note that this inequality is equivalent to $k>e$ when $\lambda =a=b=1$.

References

[1] 
Afify, A.Z., Gemeay, A.M., Alfaer, N.M., Cordeiro, G.M., Hafez, E.H.: Power-modified Kies-exponential distribution: Properties, classical and Bayesian inference with an application to engineering data. Entropy 24(7), 883 (2022). MR4467767. https://doi.org/10.3390/e24070883
[2] 
Al-Babtain, A.A., Shakhatreh, M.K., Nassar, M., Afify, A.Z.: A new modified Kies family: Properties, estimation under complete and type-II censored samples, and engineering applications. Mathematics 8(8), 1345 (2020). MR4199201. https://doi.org/10.3934/math.2021176
[3] 
Alsubie, A.: Properties and applications of the modified Kies–Lomax distribution with estimation methods. J. Math., 2021(2), 1–18 (2021). MR4346604. https://doi.org/10.1155/2021/1944864
[4] 
Berger, M.-H., Jeulin, D.: Statistical analysis of the failure stresses of ceramic fibres: Dependence of the Weibull parameters on the gauge length, diameter variation and fluctuation of defect density. J. Mater. Sci. 38(13), 2913–2923 (2003). https://doi.org/10.1023/A:1024405123420
[5] 
Bhatti, F.A., Ahmad, M.: On a new family of Kies Burr III distribution: Development, properties, characterizations, and applications. Sci. Iran. 27(5), 2555–2571 (2020). ISSN 1026-3098. URL http://scientiairanica.sharif.edu/article_21382.html
[6] 
Emam, W., Tashkandy, Y.: The Weibull claim model: Bivariate extension, Bayesian, and maximum likelihood estimations. Math. Probl. Eng. 2022(1), 1–10 (2022)
[7] 
Gupta, R.C., Bradley, D.M.: Representing the mean residual life in terms of the failure rate. Math. Comput. Model. 37(12–13), 1271–1280 (2003). MR1996036. https://doi.org/10.1016/S0895-7177(03)90038-0
[8] 
Hansen, L.P.: Large sample properties of generalized method of moments estimators. Econometrica: Journal of the Econometric Society 50(4), 1029–1054 (1982). MR0666123. https://doi.org/10.2307/1912775
[9] 
Kies, J.A.: The strength of glass performance. In: Naval Research Lab Report, Washington, D.C., vol. 5093 (1958)
[10] 
Koenker, R.W.: Quantile regression. Cambridge University Press, (2005). ISBN 9780521845731. MR2268657. https://doi.org/10.1017/CBO9780511754098
[11] 
Kumar, C.S., Dharmaja, S.H.S.: On some properties of Kies distribution. Metron 72(1), 97–122 (2014). MR3176964. https://doi.org/10.1007/s40300-013-0018-8
[12] 
Kumar, C.S., Dharmaja, S.H.S.: The exponentiated reduced Kies distribution: Properties and applications. Commun. Stat., Theory Methods 46(17), 8778–8790 (2017). MR3680792. https://doi.org/10.1080/03610926.2016.1193199
[13] 
Kumar, C.S., Dharmaja, S.H.S.: On modified Kies distribution and its applications. J. Stat. Res. 51(1), 41–60 (2017). MR3702285. https://doi.org/10.47302/jsr.2017510103
[14] 
Kyurkchiev, N., Zaevski, T., Iliev, A., Rahnev, A.: A modified three–parameter Kies cumulative distribution function in the light of reaction network analysis. Int. J. Differ. Equ. Appl. 21(2), 1–17 (2022)
[15] 
Lai, C.-D.: Generalized Weibull distributions. In: Generalized Weibull Distributions, pp. 23–75. Springer, (2014). MR3115122. https://doi.org/10.1007/978-3-642-39106-4_2
[16] 
Matsushita, S., Hagiwara, K., Shiota, T., Shimada, H., Kuramoto, K., Toyokura, Y.: Lifetime data analysis of disease and aging by the Weibull probability distribution. J. Clin. Epidemiol. 45(10), 1165–1175 (1992). https://www.sciencedirect.com/science/article/pii/089543569290157I. https://doi.org/10.1016/0895-4356(92)90157-I
[17] 
McCool, J.I.: Using the Weibull distribution: reliability, modeling, and inference, vol. 950. John Wiley & Sons, (2012). MR3014584. https://doi.org/10.1002/9781118351994
[18] 
Prabhakar Murthy, D.N., Xie, M., Jiang, R.: Weibull models. John Wiley & Sons, (2004). MR2013269
[19] 
Newey, W.K., Powell, J.L.: Asymmetric least squares estimation and testing. Econometrica: Journal of the Econometric Society 55(4), 819–847 (1987). MR0906565. https://doi.org/10.2307/1911031
[20] 
Rinne, H.: The Weibull distribution: a handbook. Chapman and Hall/CRC, (2008). MR2477856
[21] 
Dey, S., Nassar, M., Kumar, D.: Moments and estimation of reduced Kies distribution based on progressive type-II right censored order statistics. Hacet. J. Math. Stat. 48(1), 332–350 (2019). MR3976180. https://doi.org/10.15672/hjms.2018.611
[22] 
Seguro, J.V., Lambert, T.W.: Modern estimation of the parameters of the Weibull wind speed distribution for wind energy analysis. J. Wind Eng. Ind. Aerodyn. 85(1), 75–84 (2000). https://www.sciencedirect.com/science/article/pii/S0167610599001221. https://doi.org/10.1016/S0167-6105(99)00122-1
[23] 
Sendov, B.: Hausdorff approximations, vol. 50. Springer, (1990). MR1078632. https://doi.org/10.1007/978-94-009-0673-0
[24] 
Shafiq, A., Lone, S.A., Naz Sindhu, T., El Khatib, Y., Al-Mdallal, Q.M., Muhammad, T.: A new modified Kies Fréchet distribution: Applications of mortality rate of Covid-19. Results Phys. 28, 104638 (2021). https://www.sciencedirect.com/science/article/pii/S2211379721007294. https://doi.org/10.1016/j.rinp.2021.104638
[25] 
Sobhi, M.M.A.: The modified Kies–Fréchet distribution: properties, inference and application. AIMS Math. 6, 4691–4714 (2021). MR4220431. https://doi.org/10.3934/math.2021276
[26] 
Soulimani, A., Benjillali, M., Chergui, H., da Costa, D.B.: Multihop Weibull-fading communications: Performance analysis framework and applications. J. Franklin Inst. 358(15), 8012–8044 (2021). https://www.sciencedirect.com/science/article/pii/S0016003221004701. MR4319388. https://doi.org/10.1016/j.jfranklin.2021.08.004
[27] 
Weibull, W.: A statistical distribution function of wide applicability. J. Appl. Mech. 18(3), 293–297 (1951). https://doi.org/10.1115/1.4010337
[28] 
Wilks, D.S.: Rainfall intensity, the Weibull distribution, and estimation of daily surface runoff. J. Appl. Meteorol. Climatol. 28(1), 52–58 (1989). https://doi.org/10.1175/1520-0450(1989)028<0052:RITWDA>2.0.CO;2
[29] 
Yazhou, J., Molin, W., Zhixin, J.: Probability distribution of machining center failures. Reliab. Eng. Syst. Saf. 50(1), 121–125 (1995). https://www.sciencedirect.com/science/article/pii/095183209500070I. https://doi.org/10.1016/0951-8320(95)00070-I
[30] 
Zaevski, T.S., Kyurkchiev, N.: Some notes on the four-parameters Kies distribution. Comptes rendus de l’Académie bulgare des Sciences 75(10), 1403–1409 (2022). MR4504780

Copyright
© 2023 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Exponential distribution, Weibull distribution, Kies distribution, estimator, tail behavior, Hausdorff saturation
MSC: 33C15, 60E05, 60E10

Funding
This research has been partially supported by Grant No. BG05M2OP001-1.001-0003, financed by the Science and Education for Smart Growth Operational Program (2014–2020) and co-financed by the European Union through the European Structural and Investment Funds. The first author was also supported by the project KP-06-N32/8 with the Bulgarian National Science Fund.

Fig. 1. PDFs of the corrected Kies distributions
Fig. 2. CDFs of the corrected Kies distributions with the Hausdorff saturation
Fig. 3. PDFs of the corrected Kies distributions
Fig. 4. S&P500 log-returns and the related estimations
Table 1.
The fitted parameters of the corrected Kies distribution
parameter real n=1000 n=10000 n=100000 n=1000000
λ 1 1.1918 0.8809 0.9801 1.0315
k 1 0.8076 1.1230 1.0310 0.9670
a 0.5 0.5637 0.5541 0.5045 0.4923
b 1 0.9379 0.8609 0.9744 1.0258
λ 1 1.1224 0.9164 0.9273 1.0096
k 1 0.9450 1.0817 1.0653 0.9882
a 0.5 0.5814 0.5031 0.5451 0.4969
b 2 1.8257 1.9312 1.7986 2.0200
λ 1 0.9224 0.7287 0.9904 1.0395
k 1 1.1869 1.1270 1.0063 0.9797
a 1 0.7860 1.1613 1.0126 0.9855
b 0.5 0.5761 0.4087 0.4915 0.5096
λ 1 0.7780 0.6540 0.9013 0.9728
k 1 1.1416 1.1456 1.0248 1.0121
a 1 1.1356 1.2856 1.1234 1.0173
b 1 0.8307 0.7231 0.8761 0.9783
λ 1 1.3803 1.4454 0.8623 1.1066
k 1 0.8388 0.7850 1.0578 0.9413
a 1 0.8738 0.9411 1.0997 0.9822
b 2 2.2277 2.2455 1.7768 2.0618
λ 1 0.5789 1.4045 1.0190 1.1136
k 1 1.1381 0.8774 0.9984 0.9617
a 2 2.0803 2.0887 1.9625 2.0007
b 2 1.8879 1.9627 2.0357 2.0085
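The fitted values in Table 1 (and the LSqE row of Table 2 below) come from calibrating the distribution parameters to data. One standard recipe is to minimize the squared distance between a parametric CDF and the empirical CDF of the sample. The Python sketch below illustrates only the mechanics of such a least-squares calibration; the corrected Kies CDF is not reproduced here, so the plain two-parameter Weibull CDF is used as a stand-in model, and the routine is not the authors' exact procedure.

import numpy as np
from scipy.optimize import minimize

def empirical_cdf(sample):
    # Sorted data points and the empirical CDF evaluated at them
    x = np.sort(sample)
    return x, np.arange(1, x.size + 1) / x.size

def lsq_error(params, x, ecdf, model_cdf):
    # Least-squares distance between a parametric CDF and the empirical one
    return np.sum((model_cdf(x, *params) - ecdf) ** 2)

# Stand-in model: two-parameter Weibull CDF (used only to exercise the routine)
def weibull_cdf(x, lam, k):
    return 1.0 - np.exp(-lam * x ** k)

rng = np.random.default_rng(0)
sample = rng.weibull(1.5, size=10_000)        # synthetic data, shape 1.5, scale 1
x, ecdf = empirical_cdf(sample)
res = minimize(lsq_error, x0=[1.0, 1.0], args=(x, ecdf, weibull_cdf),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
print(res.x)                                  # fitted (lam, k), close to (1, 1.5)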
Table 2.
Fits to the S&P500 data
parameter corrected Kies original Kies Weibull exponential empirical
λ 3.1091 - 0.0228 0.0293 -
k 55.0876 15.7857 0.8327 - -
a 0.2086 - - - -
b 2.1617 0.7120 - - -
LSqE 23.1820 25.3491 29.4037 26.6652 -
VaR corrected Kies original Kies Weibull exponential empirical
0.9 0.0710 0.0627 0.0622 0.0675 0.0690
0.925 0.0877 0.0732 0.0716 0.0759 0.0920
0.95 0.1106 0.0883 0.0853 0.0878 0.1300
0.975 0.1448 0.1149 0.1095 0.1081 0.1800
$\overline{\mathit{AVaR}}$ corrected Kies original Kies Weibull exponential empirical
0.9 0.1200 0.1004 0.0969 0.0968 0.1872
0.925 0.1337 0.1112 0.1069 0.1052 0.2221
0.95 0.1513 0.1268 0.1214 0.1171 0.2799
0.975 0.1763 0.1535 0.1468 0.1374 0.4154
expectile corrected Kies original Kies Weibull exponential empirical
0.9 0.0666 0.0564 0.0556 0.0591 0.0753
0.925 0.0745 0.0624 0.0612 0.0643 0.0856
0.95 0.0857 0.0711 0.0694 0.0718 0.1009
0.975 0.1044 0.0864 0.0838 0.0849 0.1274
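The empirical columns of the VaR, AVaR and expectile panels above correspond to standard sample estimators of the three risk measures. The following is a minimal Python sketch of one common sample version of each; the synthetic heavy-tailed losses and the chosen levels are for illustration only and do not reproduce the authors' exact computation on the S&P500 log-returns.

import numpy as np
from scipy.optimize import brentq

def var(losses, alpha):
    # Empirical Value-at-Risk: the alpha-quantile of the loss sample
    return np.quantile(losses, alpha)

def avar(losses, alpha):
    # Empirical Average-Value-at-Risk: mean of the losses at or above the VaR level
    v = var(losses, alpha)
    return losses[losses >= v].mean()

def expectile(losses, alpha):
    # Empirical alpha-expectile: root of the asymmetric least-squares
    # first-order condition alpha*E(X-e)^+ = (1-alpha)*E(e-X)^+
    def foc(e):
        return (alpha * np.maximum(losses - e, 0.0).sum()
                - (1.0 - alpha) * np.maximum(e - losses, 0.0).sum())
    return brentq(foc, losses.min(), losses.max())

rng = np.random.default_rng(1)
losses = 0.03 * rng.standard_t(df=4, size=5_000)   # synthetic heavy-tailed losses
for a in (0.9, 0.925, 0.95, 0.975):
    print(a, var(losses, a), avar(losses, a), expectile(losses, a))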
Theorem 4.4.
Let y be a positive parameter and the function $\gamma \left(y\right)$ be defined as
(41)
\[ \gamma \left(y\right):=y{\left[{e^{\lambda {\left({e^{y}}-1\right)^{a}}}}-1\right]^{b}}.\]
Suppose that $k=\gamma \left(y\right)$ for some value of y. Then the H-corrected Kies distribution’s saturation is
(42)
\[ d\left(y\right)={e^{-\lambda {\left({e^{y}}-1\right)^{a}}}}.\]
Note that the function $\gamma \left(y\right)$ is strictly increasing in the interval $\left(0,\infty \right)$ and hence it is invertible. Therefore the saturation can be expressed as a function of λ, k, a, and b as
(43)
\[ d\left(\lambda ,k,a,b\right)={e^{-\lambda {\left({e^{{\gamma ^{-1}}\left(\lambda ,k,a,b\right)}}-1\right)^{a}}}}.\]
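Formula (43) involves the inverse of γ only implicitly. Numerically, the saturation is straightforward to evaluate: one solves $\gamma \left(y\right)=k$ with a one-dimensional root-finder and substitutes the root into formula (42). The following Python sketch does exactly this for formulas (41)–(42); the bracket y_max and the parameter values are illustrative choices and not part of the paper.

import numpy as np
from scipy.optimize import brentq

def gamma_fun(y, lam, a, b):
    # gamma(y) = y * [exp(lam * (e^y - 1)^a) - 1]^b, formula (41)
    return y * (np.exp(lam * np.expm1(y) ** a) - 1.0) ** b

def saturation(lam, k, a, b, y_max=5.0):
    # Solve gamma(y) = k (gamma is strictly increasing with gamma(0+) = 0)
    # and return d = exp(-lam * (e^y - 1)^a), formulas (42)-(43).
    # Enlarge y_max if k is very large relative to lam, a, b.
    y = brentq(lambda s: gamma_fun(s, lam, a, b) - k, 1e-12, y_max)
    return np.exp(-lam * np.expm1(y) ** a)

print(saturation(lam=1.0, k=1.0, a=0.5, b=1.0))   # illustrative parameters only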
