1 Introduction
It is well known that financial institutions need to hold a certain amount of capital to protect against sudden losses associated with debt default risk, operational risk, market risk, liquidity risk, etc. Risk measures are used to quantify this requirement. A risk measure assigns a real number to a random outcome or risk event, expressing the degree of risk associated with that outcome. This concept has found many applications in various fields such as finance, actuarial science, operations research and management.
The simplest and best-known measure is VaR (Value-at-Risk). As long as a portfolio’s value fluctuates without extreme spikes, VaR is useful for predicting the associated risk. During financial crises of various types, however, VaR becomes less capable of giving a complete overview of possible risks, as it is not sensitive to anything beyond its loss threshold. In addition, VaR is not coherent. This problem is partially solved by CVaR (Conditional Value-at-Risk), but not completely. An important disadvantage of CVaR is that it cannot be computed efficiently, even for a sum of arbitrary independent random variables. Furthermore, financial markets may face high volatility and instability; in such circumstances, traders and managers use measures that are more stringent and conservative than VaR and CVaR. In this case a more involved measure can be used, namely the coherent risk measure called Entropic Value-at-Risk (EVaR), introduced in [10] (where it is called “coherent entropic risk measure”) and further studied in [1]. The basic properties of this measure are described in the papers [1–3]. A quantitative analysis of various risk measures, including EVaR, is conducted in [17]. Delbaen [9] provides a relation between EVaR and other comonotone risk measures. The paper [16] extends and generalizes EVaR by involving Rényi entropies; it provides explicit relations among different entropic risk measures and elaborates their dual representations. In [5] the authors show that EVaR, which is not a dynamic risk measure in general, can be a finitely-valued dynamic risk measure for at least one value of the confidence parameter. In [7] the concept of EVaR is applied to portfolio selection, and a new mean-EVaR model with uncertain random returns is established. In [15] the authors explore portfolio construction in the context of Gaussian mixture returns, aiming to maximize expected exponential utility; they also demonstrate that minimizing EVaR can be addressed through convex optimization.
Important properties of EVaR are coherence and strong monotonicity on its domain (see [2]), while monotone risk measures such as VaR and CVaR lack these properties. However, EVaR also has some shortcomings. In particular, there are certain types of distributions to which this measure cannot be applied (for example, distributions whose moment-generating function does not exist). In addition, the calculation of this measure usually reduces to solving an optimization problem, which may consume considerable computation time. The goal of the present paper is to obtain analytical representations of EVaR for certain risk distributions, which helps to mitigate this problem. Namely, we derive explicit formulas for EVaR of the Poisson, compound Poisson, gamma, Laplace, inverse Gaussian and normal inverse Gaussian distributions, which are widely used in risk modelling. It turns out that for the first four of these distributions EVaR is expressed through the so-called Lambert function, which can be efficiently computed with the help of modern mathematical software. In addition, for each distribution we provide graphical illustrations demonstrating the behavior of its EVaR depending on the various distributional parameters.
The paper is organized as follows. In Section 2 we recall the definition of EVaR together with its basic properties. Section 3 contains the calculation of EVaR for selected distributions. In Subsection 3.1 we recall the notion of the Lambert function, which is needed for further calculations. Subsections 3.2–3.7 are devoted to the derivation of EVaR for the Poisson, compound Poisson, gamma, Laplace, inverse Gaussian and normal inverse Gaussian distributions, respectively. Two appendices supplement the paper: Appendix A contains the formula for EVaR of the normal distribution, and Appendix B collects the probability density functions of the distributions considered in Section 3.
2 General properties of EVaR
Let $(\Omega ,\mathcal{F},\mathbf{P})$ be a probability space and let ${\mathcal{L}_{0}}={\mathcal{L}_{0}}(\Omega ,\mathcal{F},\mathbf{P})$ be the space of random variables $X:\Omega \to \mathbb{R}$.
Definition 2.1.
A risk measure ρ is a mapping ${\mathcal{L}_{0}}\to \overline{\mathbb{R}}$ for which the following conditions are satisfied:
Remark 2.2.
We would like to emphasize that we consider risk measures from the point of view of insurance, where a positive X means “loss” (claim amount) and negative X means “profit.” Alternatively, one may take the point of view of finance, where a random variable X represents “profit” if it is positive and “loss” if negative (in this case translation invariance and monotonicity conditions should be reformulated accordingly).
Definition 2.3.
A risk measure $\rho :{\mathcal{L}_{0}}\to \overline{\mathbb{R}}$ is called a coherent risk measure if the following additional conditions are satisfied:
Remark 2.4.
It is worth mentioning that, traditionally [4], coherent risk measures are defined as satisfying four axioms, namely $(i)$ translation invariance, $(ii)$ monotonicity, $(iii)$ positive homogeneity, and $(iv)$ subadditivity.
Note that the conditions $(i)$–$(iv)$ together yield the same requirements for a coherent risk measure; item 1) of Definition 2.1 becomes redundant if the positive homogeneity property holds.
We also mention that the convexity axiom is a weaker assumption than subadditivity combined with positive homogeneity. This axiom was introduced by Föllmer and Schied [11], who proposed to consider a broader class of convex risk measures, satisfying only three axioms, namely convexity, monotonicity and translation invariance.
Definition 2.5.
A risk measure ρ is called law-invariant if $\rho (X)=\rho (Y)$ whenever $X,Y\in {\mathcal{L}_{0}}$ have the same law.
Among various risk measures, the following one is often used in modern risk management practice ([12, 13]).
However, in this paper we focus on another risk measure, namely, EVaR, introduced in [1] according to the following definition (the same measure was introduced in [10] in an equivalent form).
Definition 2.7.
Let $X\in {\mathcal{L}_{0}}$. Assume that its moment-generating function ${m_{X}}(t)=\mathbf{E}{e^{tX}}$ is well defined for all $t\ge 0$. The entropic risk measure EVaR with a confidence level $\alpha \in [0,1)$ (or a risk level $1-\alpha $) is defined as
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{1}{t}\log \frac{{m_{X}}(t)}{1-\alpha }.\]
Remark 2.8.
It is sufficient to assume that the moment-generating function ${m_{X}}(t)$ is well defined for all $t\in [0,A]$ for some $A\gt 0$.
According to [1] and [3], EVaR is a coherent risk measure. In addition, the risk measure EVaR is law-invariant, more precisely, it possesses the following property [2]: For $X,Y\in {\mathcal{L}_{0}}$ with the distribution functions ${F_{X}}$ and ${F_{Y}}$ respectively, whose moment-generating functions exist for all $t\in \mathbb{R}$, the following one-to-one correspondence holds: ${\operatorname{EVaR}_{\alpha }}(X)={\operatorname{EVaR}_{\alpha }}(Y)$ for all $\alpha \in [0,1)$ if and only if ${F_{X}}(t)={F_{Y}}(t)$ for all $t\in \mathbb{R}$.
Remark 2.9.
Entropic risk measures were first studied by Föllmer and Knispel [10], who introduced the so-called coherent entropic risk measure ${\rho _{c}}(X)$, $c\gt 0$, having the following representation (see [10, Prop. 3.1]):
\[ {\rho _{c}}(X)=\underset{\gamma \gt 0}{\inf }\left({e_{\gamma }}(X)+\frac{c}{\gamma }\right),\]
where ${e_{\gamma }}(X)$ is the convex entropic risk measure defined by
\[ {e_{\gamma }}(X)=\frac{1}{\gamma }\log \mathbf{E}{e^{\gamma X}}.\]
As one can see, the definitions of ${\rho _{c}}(X)$ and ${\operatorname{EVaR}_{\alpha }}(X)$ actually coincide, and their parameters c and α are related as $c=-\log (1-\alpha )$.
One of the shortcomings of EVaR is the difficulty of its computation in real-world models: closed-form expressions have been available mostly for the Gaussian distribution. In the next section we obtain analytical representations which help to overcome this shortcoming.
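To illustrate this point, here is a minimal numerical sketch (ours, not from the cited literature; the helper name evar_numeric, the moment-generating function and the optimization bracket are illustrative choices). It evaluates EVaR directly from Definition 2.7 by one-dimensional minimization, which is the generic approach that the closed-form results of Section 3 allow one to avoid.

```python
# Minimal sketch: EVaR via direct minimization of
#   g(t) = (1/t) * log( m_X(t) / (1 - alpha) ),  t > 0,
# as in Definition 2.7. SciPy is assumed to be available.
import numpy as np
from scipy.optimize import minimize_scalar

def evar_numeric(log_mgf, alpha, t_max=50.0):
    """EVaR_alpha(X) given t -> log m_X(t), minimized over t in (0, t_max)."""
    g = lambda t: (log_mgf(t) - np.log(1.0 - alpha)) / t
    return minimize_scalar(g, bounds=(1e-6, t_max), method="bounded").fun

# Example: standard normal X, with log m_X(t) = t^2 / 2; the known closed
# form is sqrt(-2 log(1 - alpha)) (see Appendix A).
alpha = 0.95
print(evar_numeric(lambda t: 0.5 * t**2, alpha))  # ~ 2.4477
print(np.sqrt(-2.0 * np.log(1.0 - alpha)))        # ~ 2.4477
```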
3 Calculation of EVaR for selected distributions
In this section we calculate EVaR for the Poisson, compound Poisson, gamma, Laplace, inverse Gaussian and normal inverse Gaussian distributions. Note that EVaR for the normal distribution was calculated in [1]; the corresponding formula is given in Appendix A. We shall repeatedly express EVaR via the so-called Lambert function, therefore, as the first step, we present its main properties.
3.1 Lambert W function
The Lambert W function [8] is a multi-valued inverse of the function $x\mapsto x{e^{x}}$. In other words, W is defined as a function satisfying
(3.1)
\[ W(x){e^{W(x)}}=x.\]
For positive x this function is single-valued, while for $x\lt -\frac{1}{e}$ it has no real values. For $-\frac{1}{e}\le x\lt 0$ there are two possible real values of $W(x)$ (see Fig. 1). We denote the branch satisfying $W(x)\ge -1$ by ${W_{0}}(x)$ and the branch satisfying $W(x)\le -1$ by ${W_{-1}}(x)$ [8]. Note that ${W_{0}}(x)$ is referred to as the principal branch of the W function. It is obvious that ${W_{0}}(x)$ preserves the sign of x.
Fig. 1.
The two real branches of the Lambert W function: ${W_{0}}(x)$ (solid line) and ${W_{-1}}(x)$ (dashed line)
The derivative of W equals (see [8, Eq. (3.2)])
(3.2)
\[ {W^{\prime }}(x)=\frac{1}{{e^{W(x)}}\left(1+W(x)\right)},\hspace{1em}x\ne -\frac{1}{e}.\]
Taking logarithms of both sides of (3.1) and rearranging the terms, we get the following transformations:
(3.3)
\[\begin{array}{l}\displaystyle \log ({W_{0}}(z))=\log (z)-{W_{0}}(z),\hspace{1em}z\gt 0,\\ {} \displaystyle \log (-{W_{k}}(z))=\log (-z)-{W_{k}}(z),\hspace{1em}k\in \{-1,0\},\hspace{2.5pt}z\in [-\frac{1}{e},0).\end{array}\]
A more general formula, involving branches of the complex logarithm, can be found in [14].
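Since all formulas below are stated in terms of ${W_{0}}$ and ${W_{-1}}$, we note that both real branches are available in standard software. The following small sketch (the sample point $x=-0.2$ is an illustrative choice) evaluates them with SciPy and checks the defining relation (3.1).

```python
# Sketch: the two real branches of the Lambert W function via SciPy.
# lambertw returns complex values, so we take the real part on the real branches.
import numpy as np
from scipy.special import lambertw

x = -0.2                       # a point in [-1/e, 0), where both branches are real
w0 = lambertw(x, 0).real       # principal branch, W_0(x) >= -1
wm1 = lambertw(x, -1).real     # lower branch,     W_{-1}(x) <= -1

# Both values satisfy the defining relation (3.1): W(x) e^{W(x)} = x.
print(w0, w0 * np.exp(w0))     # ~ -0.2592, -0.2
print(wm1, wm1 * np.exp(wm1))  # ~ -2.5426, -0.2
```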
3.2 Poisson distribution
Let us calculate EVaR for a Poisson distribution. Denote $\beta =-\log (1-\alpha )-\lambda $.
Theorem 3.1.
Let $X\sim \operatorname{Pois}(\lambda )$, $\lambda \gt 0$. Then for all $\alpha \in [0,1)$
(3.4)
\[ {\operatorname{EVaR}_{\alpha }}(X)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\beta }{{W_{0}}(\frac{\beta }{e\lambda })},\hspace{1em}& \beta \ne 0,\\ {} \lambda e,\hspace{1em}& \beta =0,\end{array}\right.\]
or, equivalently,
(3.5)
\[ {\operatorname{EVaR}_{\alpha }}(X)=\lambda \exp \left\{1+{W_{0}}\left(\frac{\beta }{e\lambda }\right)\right\},\]
where ${W_{0}}$ stands for the principal branch of the Lambert function. The expression for ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,\lambda )\in [0,1)\times (0,\infty )$.
Proof.
Recall that the moment-generating function of the Poisson distribution equals
\[ {m_{X}}(t)=\exp \left\{\lambda ({e^{t}}-1)\right\}.\]
Therefore, by Definition 2.7,
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{1}{t}\left(-\log (1-\alpha )+\lambda ({e^{t}}-1)\right)=\underset{t\gt 0}{\inf }\frac{\beta +\lambda {e^{t}}}{t}.\]
Let us find the infimum of the function $f(t)=\frac{\beta +\lambda {e^{t}}}{t}$ over $t\gt 0$. Note that f is continuous on $(0,+\infty )$ and tends to $+\infty $ at 0 and $+\infty $. It is obvious at $+\infty $, while at 0 it follows from the fact that
\[ \beta +\lambda {e^{t}}\to \beta +\lambda =\log \frac{1}{1-\alpha }\gt 0\hspace{1em}\text{as}\hspace{2.5pt}t\downarrow 0\]
(for $\alpha =0$ one has $\beta +\lambda =0$ and ${\operatorname{EVaR}_{0}}(X)=\mathbf{E}X=\lambda $, which agrees with (3.4), since ${W_{0}}(-{e^{-1}})=-1$).
Therefore the infimum is in fact the minimal value and is achieved inside $(0,+\infty )$.

Let $\beta \ne 0$. Then the derivative of the function f equals
\[ {f^{\prime }}(t)=\frac{\lambda {e^{t}}(t-1)-\beta }{{t^{2}}}.\]
The condition ${f^{\prime }}(t)=0$ for some $t\gt 0$ leads to the equation
(3.6)
\[ (t-1){e^{t-1}}=\frac{\beta }{e\lambda }.\]
Since for all $\alpha \in [0,1)$
\[ \frac{\beta }{e\lambda }=\frac{1}{e\lambda }\log \frac{1}{1-\alpha }-\frac{1}{e}\ge -\frac{1}{e},\]
we see that the solution of (3.6) can be expressed through the principal branch ${W_{0}}$ of the Lambert function as
\[ {t^{\ast }}=1+{W_{0}}\left(\frac{\beta }{e\lambda }\right).\]
The choice of the principal branch ${W_{0}}$ of the Lambert function corresponds to the fact that we consider $t\gt 0$.

Let us check that the sufficient condition for a local minimum at the point ${t^{\ast }}$ is fulfilled. Using the equality ${e^{W(x)}}=x/W(x)$ (see (3.1)), we get
\[\begin{aligned}{}{f^{\prime\prime }}({t^{\ast }})& =\frac{2\beta +\lambda {e^{t}}({t^{2}}-2t+2)}{{t^{3}}}{\bigg|_{t={t^{\ast }}}}=\frac{2\beta +\lambda {e^{1+{W_{0}}(\frac{\beta }{e\lambda })}}\left(1+{\left({W_{0}}\left(\frac{\beta }{e\lambda }\right)\right)^{2}}\right)}{{\left(1+{W_{0}}\left(\frac{\beta }{e\lambda }\right)\right)^{3}}}\\ {} & =\frac{\beta {\left({W_{0}}\left(\frac{\beta }{e\lambda }\right)+1\right)^{2}}}{{\left(1+{W_{0}}\left(\frac{\beta }{e\lambda }\right)\right)^{3}}{W_{0}}\left(\frac{\beta }{e\lambda }\right)}=\frac{\beta }{\left(1+{W_{0}}\left(\frac{\beta }{e\lambda }\right)\right){W_{0}}\left(\frac{\beta }{e\lambda }\right)}\gt 0,\end{aligned}\]
because ${W_{0}}$ preserves the sign of its argument, therefore $\frac{\beta }{{W_{0}}\left(\frac{\beta }{e\lambda }\right)}\gt 0$ and $1+{W_{0}}\left(\frac{\beta }{e\lambda }\right)={t^{\ast }}\gt 0$ as well. So, the point $t={t^{\ast }}$ minimizes $f(t)$ over $t\gt 0$, and the minimum equals $f({t^{\ast }})=\frac{\beta }{{W_{0}}(\frac{\beta }{e\lambda })}$.

In the case $\beta =0$ we get that the minimum is achieved at the point $t=1$ and equals
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{\lambda {e^{t}}}{t}=\lambda e.\]
Thus the formula (3.4) is proved. Combining it with (3.1), we derive the representation (3.5), which implies the joint continuity in $(\alpha ,\lambda )\in [0,1)\times (0,\infty )$. □
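As a quick numerical sanity check of Theorem 3.1 (the parameter values $\lambda =2$, $\alpha =0.95$ are illustrative choices of ours), one can compare the closed form (3.4) with the direct minimization of $f(t)$:

```python
# Sketch: closed form (3.4) vs. direct minimization of f(t) = (beta + lam e^t)/t.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

lam, alpha = 2.0, 0.95
beta = -np.log(1.0 - alpha) - lam

closed = beta / lambertw(beta / (np.e * lam), 0).real if beta != 0 else np.e * lam
numeric = minimize_scalar(lambda t: (beta + lam * np.exp(t)) / t,
                          bounds=(1e-6, 50.0), method="bounded").fun
print(closed, numeric)  # both ~ 6.36, agreeing up to the optimizer tolerance
```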
3.3 Compound Poisson distribution
Risk measures are often used for models with a large number of identical losses, for example, losses on an insurance portfolio. In such models the individual losses are usually modelled as independent, identically distributed random variables, while the number of claims has a discrete distribution, such as binomial, Poisson, or negative binomial. Suppose that a collective risk model $X:={\textstyle\sum _{i=1}^{\eta }}{\xi _{i}}$ (here ${\textstyle\sum _{i=1}^{0}}=0$ by convention) is given, where the number of insurance claims is modelled by the Poisson distribution $\eta \sim \operatorname{Pois}(\lambda )$, and the losses $\{{\xi _{i}},i\ge 1\}$ are independent, identically distributed random variables with the moment-generating function ${m_{\xi }}(t)=\mathbf{E}{e^{t\xi }}$. It is well known that the moment-generating function of the compound Poisson distribution is given by
\[ {m_{X}}(t)=\exp \left\{\lambda \left({m_{\xi }}(t)-1\right)\right\}.\]
Then for all $\alpha \in [0,1)$,
(3.8)
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{1}{t}\left(-\log (1-\alpha )+\lambda \left({m_{\xi }}(t)-1\right)\right).\]
Now let us consider the Bernoulli distribution for the values of losses.
Theorem 3.2.
Let $\eta \sim \operatorname{Pois}(\lambda )$, ${\xi _{i}}\sim \operatorname{Bern}(p)$, $i\ge 1$, be independent, identically distributed random variables, and $X:={\textstyle\sum _{i=1}^{\eta }}{\xi _{i}}$. Then for any $\alpha \in [0,1)$
\[ {\operatorname{EVaR}_{\alpha }}(X)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\beta }{{W_{0}}(\frac{\beta }{e\lambda p})},\hspace{1em}& \beta \ne 0,\\ {} e\lambda p,\hspace{1em}& \beta =0,\end{array}\right.\]
where $\beta =-\log (1-\alpha )-\lambda p$, and ${W_{0}}$ stands for the principal branch of the Lambert function. The expression for ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,\lambda ,p)\in [0,1)\times (0,\infty )\times (0,1)$. The statement follows immediately from Theorem 3.1, because ${m_{X}}(t)=\exp \{\lambda p({e^{t}}-1)\}$, that is, X has the same distribution as a $\operatorname{Pois}(\lambda p)$ random variable.
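The statement can also be verified numerically against Definition 2.7 applied to the compound moment-generating function directly; the following sketch (with illustrative parameters of ours) does this.

```python
# Sketch: Theorem 3.2 vs. direct minimization with the compound Poisson MGF
#   m_X(t) = exp{ lam * (m_xi(t) - 1) },  m_xi(t) = 1 - p + p e^t  (Bernoulli).
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

lam, p, alpha = 3.0, 0.4, 0.99
a = -np.log(1.0 - alpha)
beta = a - lam * p

closed = beta / lambertw(beta / (np.e * lam * p), 0).real if beta != 0 else np.e * lam * p
log_mgf = lambda t: lam * (1.0 - p + p * np.exp(t) - 1.0)  # = lam p (e^t - 1)
numeric = minimize_scalar(lambda t: (a + log_mgf(t)) / t,
                          bounds=(1e-6, 50.0), method="bounded").fun
print(closed, numeric)  # coincide, since X ~ Pois(lam * p)
```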
Now let us consider the case where the losses have a centered normal distribution.
Theorem 3.3.
Let $X:={\textstyle\sum _{i=1}^{\eta }}{\xi _{i}}$, where $\eta \sim \operatorname{Pois}(\lambda )$, and $\forall \hspace{0.1667em}i\ge 1:$ ${\xi _{i}}\sim \mathcal{N}(0,{\sigma ^{2}})$ are independent, identically distributed random variables. Then for all $\alpha \in [0,1)$,
(3.9)
\[ {\operatorname{EVaR}_{\alpha }}(X)=\left\{\begin{array}{l@{\hskip10.0pt}l}\beta \sigma \frac{{\left(2{W_{0}}(\gamma )+1\right)^{1/2}}}{2{W_{0}}(\gamma )},\hspace{1em}& \beta \ne 0,\\ {} \lambda \sigma \sqrt{e},\hspace{1em}& \beta =0,\end{array}\right.\]
where $\beta =-\log (1-\alpha )-\lambda $, $\gamma =\frac{\beta }{2\sqrt{e}\lambda }$. In addition, ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,\lambda ,\sigma )\in [0,1)\times {(0,\infty )^{2}}$.
Proof.
Substituting the moment-generating function of the normal distribution ${m_{\xi }}(t)=\exp \{\frac{1}{2}{t^{2}}{\sigma ^{2}}\}$ into (3.8), we see that
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{-\log (1-\alpha )+\lambda ({m_{\xi }}(t)-1)}{t}=\underset{t\gt 0}{\inf }\frac{\lambda \exp \left\{\frac{{\sigma ^{2}}{t^{2}}}{2}\right\}+\beta }{t}.\]
So we need to minimize the function
\[ f(t)=\frac{\lambda \exp \left\{\frac{{\sigma ^{2}}{t^{2}}}{2}\right\}+\beta }{t},\hspace{1em}t\gt 0.\]
First, let us consider the case $\beta \ne 0$. The derivative of f equals
\[ {f^{\prime }}(t)=\frac{\lambda \exp \left\{\frac{{\sigma ^{2}}{t^{2}}}{2}\right\}({\sigma ^{2}}{t^{2}}-1)-\beta }{{t^{2}}}.\]
Then the condition ${f^{\prime }}(t)=0$ implies the equation
(3.10)
\[ \exp \left\{\frac{{\sigma ^{2}}{t^{2}}-1}{2}\right\}\cdot \frac{{\sigma ^{2}}{t^{2}}-1}{2}=\frac{\beta }{2\sqrt{e}\lambda }=:\gamma ,\]
which can be solved with the help of the Lambert function (see Subsection 3.1). To this end, we need to check that the right-hand side of equation (3.10) exceeds $-\frac{1}{e}$. This condition holds, because
(3.11)
\[ \gamma =\frac{\beta }{2\sqrt{e}\lambda }=\frac{-\log (1-\alpha )-\lambda }{2\sqrt{e}\lambda }\ge -\frac{1}{2\sqrt{e}}\gt -\frac{1}{e}.\]
Therefore, by the definition (3.1) of the Lambert function, the solution of (3.10) is given by
\[ {t_{\ast }}=\frac{1}{\sigma }{\left(1+2{W_{0}}(\gamma )\right)^{1/2}}.\]
Note that we choose the principal branch ${W_{0}}$ of the Lambert function, because for ${W_{-1}}$ the expression under the square root is negative, while for ${W_{0}}$ it is always positive. Indeed, by (3.11), $\gamma \ge -\frac{1}{2\sqrt{e}}$. Due to the monotonicity of ${W_{0}}$, we have
\[ {W_{0}}(\gamma )\ge {W_{0}}\left(-\frac{1}{2\sqrt{e}}\right)=-\frac{1}{2},\]
where the last equality follows from the definition of the Lambert function (it is easy to see from (3.1) that ${W_{0}}(x)=-\frac{1}{2}$ for $x=-\frac{1}{2\sqrt{e}}$). Thus the value ${t_{\ast }}$ is well defined for any $\alpha \in (0,1)$. Let us check the sufficient condition for a local minimum. The second derivative of f at $t={t_{\ast }}$ equals
\[ {f^{\prime\prime }}({t_{\ast }})=\frac{2\beta +\lambda \exp \left\{\frac{{\sigma ^{2}}{t_{\ast }^{2}}}{2}\right\}\left(2+{\sigma ^{4}}{t_{\ast }^{4}}-{\sigma ^{2}}{t_{\ast }^{2}}\right)}{{t_{\ast }^{3}}}.\]
Since ${t_{\ast }}$ satisfies the equation (3.10), we see that
(3.12)
\[ \lambda \exp \left\{\frac{{\sigma ^{2}}{t_{\ast }^{2}}}{2}\right\}=\frac{\beta }{{\sigma ^{2}}{t_{\ast }^{2}}-1}=\frac{\beta }{2{W_{0}}(\gamma )}.\]
Therefore
\[\begin{aligned}{}{f^{\prime\prime }}({t_{\ast }})& =\frac{2\beta +\frac{\beta }{2{W_{0}}(\gamma )}\left(2+{\left(1+2{W_{0}}(\gamma )\right)^{2}}-\left(1+2{W_{0}}(\gamma )\right)\right)}{{\sigma ^{-3}}{\left(2{W_{0}}(\gamma )+1\right)^{3/2}}}\\ {} & =\frac{{\sigma ^{3}}\beta \left(2{\left({W_{0}}(\gamma )\right)^{2}}+3{W_{0}}(\gamma )+1\right)}{{\left(1+2{W_{0}}(\gamma )\right)^{3/2}}{W_{0}}(\gamma )}\\ {} & =\frac{{\sigma ^{3}}\beta \left(1+{W_{0}}(\gamma )\right)\left(1+2{W_{0}}(\gamma )\right)}{{\left(1+2{W_{0}}(\gamma )\right)^{3/2}}{W_{0}}(\gamma )}=\frac{{\sigma ^{2}}\beta \left(1+{W_{0}}(\gamma )\right)}{{t_{\ast }}{W_{0}}(\gamma )}.\end{aligned}\]
Note that for any $\beta \ne 0$
\[ \frac{\beta }{{W_{0}}(\gamma )}=\frac{\beta }{{W_{0}}\left(\frac{\beta }{2\sqrt{e}\lambda }\right)}\gt 0.\]
As a result, ${f^{\prime\prime }}({t_{\ast }})\gt 0$. The sufficient condition for a local minimum is satisfied. Therefore, the minimal value of $f(t)$ is attained at $t={t_{\ast }}$ and equals
\[ f({t_{\ast }})=\frac{\lambda \exp \left\{\frac{{\sigma ^{2}}{t_{\ast }^{2}}}{2}\right\}+\beta }{{t_{\ast }}}=\frac{\frac{\beta }{2{W_{0}}(\gamma )}+\beta }{{\sigma ^{-1}}{(1+2{W_{0}}(\gamma ))^{1/2}}}=\beta \sigma \frac{{(2{W_{0}}(\gamma )+1)^{1/2}}}{2{W_{0}}(\gamma )},\]
where we have used (3.12).

Let us consider the case $\beta =0$. The problem reduces to finding the value of
\[ \underset{t\gt 0}{\inf }\frac{\lambda \exp \left\{\frac{{\sigma ^{2}}{t^{2}}}{2}\right\}}{t}.\]
The minimum is achieved at the point $t=\frac{1}{\sigma }$. So ${\operatorname{EVaR}_{\alpha }}(X)=\lambda \sigma \sqrt{e}$.
Thus the formula (3.9) is proved. It remains to verify the continuity of ${\operatorname{EVaR}_{\alpha }}(X)$ at the point $\beta =0$. We compute the limit of the expression in (3.9) as $\beta \to 0$ by applying L’Hôpital’s rule, since both the numerator and the denominator tend to 0. Their derivatives can be computed using (3.2) as follows:
\[ \frac{d}{d\beta }{W_{0}}\left(\frac{\beta }{2\sqrt{e}\lambda }\right)=\frac{1}{2\sqrt{e}\lambda \exp \{{W_{0}}(\gamma )\}(1+{W_{0}}(\gamma ))},\]
\[\begin{array}{c}\displaystyle \frac{d}{d\beta }\left(\beta \sigma {\left(2{W_{0}}\left(\frac{\beta }{2\sqrt{e}\lambda }\right)+1\right)^{\frac{1}{2}}}\right)\\ {} \displaystyle =\sigma {(2{W_{0}}(\gamma )+1)^{\frac{1}{2}}}+\frac{\beta \sigma }{2\sqrt{e}\lambda \exp \{{W_{0}}(\gamma )\}(1+{W_{0}}(\gamma )){(2{W_{0}}(\gamma )+1)^{1/2}}}.\end{array}\]
Therefore, the limit equals
\[\begin{aligned}{}& \underset{\beta \to 0}{\lim }{\operatorname{EVaR}_{\alpha }}(X)\\ {} & =\underset{\beta \to 0}{\lim }\Bigg[\sqrt{e}\lambda \exp \{{W_{0}}(\gamma )\}(1+{W_{0}}(\gamma ))\\ {} & \hspace{2em}\hspace{2em}\times \Bigg(\sigma {(2{W_{0}}(\gamma )+1)^{\frac{1}{2}}}\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}+\frac{\beta \sigma }{2\sqrt{e}\lambda \exp \{{W_{0}}(\gamma )\}(1+{W_{0}}(\gamma )){(2{W_{0}}(\gamma )+1)^{1/2}}}\Bigg)\Bigg]\\ {} & =\sqrt{e}\lambda \sigma .\end{aligned}\]
Thus, the continuity is proved. □
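A numerical check of formula (3.9), with illustrative parameters of ours, may look as follows.

```python
# Sketch: formula (3.9) vs. direct minimization of
#   f(t) = (lam * exp(sigma^2 t^2 / 2) + beta) / t.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

lam, sigma, alpha = 2.0, 1.5, 0.95
beta = -np.log(1.0 - alpha) - lam
gamma = beta / (2.0 * np.sqrt(np.e) * lam)

if beta != 0:
    w = lambertw(gamma, 0).real
    closed = beta * sigma * np.sqrt(2.0 * w + 1.0) / (2.0 * w)
else:
    closed = lam * sigma * np.sqrt(np.e)

f = lambda t: (lam * np.exp(0.5 * (sigma * t) ** 2) + beta) / t
numeric = minimize_scalar(f, bounds=(1e-6, 10.0), method="bounded").fun
print(closed, numeric)  # both ~ 6.35
```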
3.4 Gamma distribution
Theorem 3.4.
Let $X\sim G(k,\theta )$ have a gamma distribution. Then for any $\alpha \in [0,1)$,
(3.13)
\[ {\operatorname{EVaR}_{\alpha }}(X)=-k\theta {W_{-1}}\left(-{e^{-1}}{(1-\alpha )^{\frac{1}{k}}}\right).\]
Here ${W_{-1}}$ denotes the branch of the Lambert function that does not exceed $-1$. The expression for ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,k,\theta )\in [0,1)\times {(0,\infty )^{2}}$.
Proof.
The moment-generating function of the gamma distribution is given by
\[ {m_{X}}(t)={\left(1-\theta t\right)^{-k}},\hspace{1em}t\lt \frac{1}{\theta }.\]
Then
\[\begin{aligned}{}{\operatorname{EVaR}_{\alpha }}(X)& =\underset{t\in (0,{\theta ^{-1}})}{\inf }\frac{1}{t}\Big(-\log (1-\alpha )-k\log (1-\theta t)\Big)\\ {} & =\underset{t\in (0,{\theta ^{-1}})}{\inf }\frac{b-k\log (\frac{1}{\theta }-t)}{t},\end{aligned}\]
where $b:=-\log (1-\alpha )-k\log \theta $.

Let us denote $f(t):=\frac{1}{t}(b-k\log (\frac{1}{\theta }-t))$. We are looking for the point of local minimum of the function f on $(0,\frac{1}{\theta })$. Its derivative equals
\[ {f^{\prime }}(t)=\frac{1}{{t^{2}}}\left(\frac{kt}{\frac{1}{\theta }-t}-b+k\log \left(\frac{1}{\theta }-t\right)\right).\]
The condition ${f^{\prime }}(t)=0$ leads to the equation
\[ \frac{1-\theta t}{\theta }=\exp \left\{\frac{b}{k}+1\right\}\exp \left\{-\frac{1}{1-\theta t}\right\},\]
or
(3.14)
\[ -\frac{1}{1-\theta t}\exp \left\{-\frac{1}{1-\theta t}\right\}=-\frac{1}{\theta }\exp \left\{-\frac{b}{k}-1\right\}.\]
The equation (3.14) can be solved using the Lambert function, since its right-hand side is not less than $-\frac{1}{e}$:
\[ -\frac{1}{\theta }\exp \left\{-\frac{b}{k}-1\right\}=-{e^{-1}}{(1-\alpha )^{\frac{1}{k}}}\ge -{e^{-1}}.\]
We see from the condition $t\lt \frac{1}{\theta }$ that $-\frac{1}{1-\theta t}\le -1$, hence, the branch ${W_{-1}}$ of the Lambert function should be chosen. Let $z:=-{e^{-1}}{(1-\alpha )^{\frac{1}{k}}}$. We arrive at the equation
\[ -\frac{1}{1-\theta t}={W_{-1}}(z),\]
its solution is
\[ {t_{\ast }}=\frac{{W_{-1}}(z)+1}{\theta {W_{-1}}(z)}.\]
Let us check the sufficient conditions for a local minimum. The second derivative at ${t_{\ast }}$ equals
(3.15)
\[ {f^{\prime\prime }}({t_{\ast }})=\frac{1}{{t_{\ast }^{3}}}\left(2\log \frac{1}{1-\alpha }+\frac{\theta k{t_{\ast }}(3\theta {t_{\ast }}-2)}{{(1-\theta {t_{\ast }})^{2}}}-2k\log (1-\theta {t_{\ast }})\right).\]
Applying (3.3), we get
\[\begin{aligned}{}{f^{\prime\prime }}({t_{\ast }})& =\frac{{\theta ^{3}}{({W_{-1}}(z))^{3}}}{{({W_{-1}}(z)+1)^{3}}}\bigg(2\log \left(\frac{1}{1-\alpha }\right)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{0.2778em}+k\Big({({W_{-1}}(z))^{2}}+4{W_{-1}}(z)+3+2\log (-{W_{-1}}(z))\Big)\hspace{-0.1667em}\bigg)\\ {} & =\frac{{\theta ^{3}}{({W_{-1}}(z))^{3}}k{({W_{-1}}(z)+1)^{2}}}{{({W_{-1}}(z)+1)^{3}}}\ge 0,\end{aligned}\]
because ${W_{-1}}(z)\le -1$. Thus we have proved that the minimum of f is achieved at the point $t={t_{\ast }}$. After substituting ${t_{\ast }}$ into $f(t)$ and simplifying the resulting expression, we arrive at (3.13). The continuity of ${\operatorname{EVaR}_{\alpha }}(X)$ follows from the continuity of ${W_{-1}}$. □
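The following sketch (illustrative parameters $k=2$, $\theta =0.5$, $\alpha =0.95$, chosen by us) confirms (3.13) against direct minimization over $(0,1/\theta )$.

```python
# Sketch: formula (3.13) for the gamma distribution vs. direct minimization.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

k, theta, alpha = 2.0, 0.5, 0.95
z = -np.exp(-1.0) * (1.0 - alpha) ** (1.0 / k)
closed = -k * theta * lambertw(z, -1).real

g = lambda t: (-np.log(1.0 - alpha) - k * np.log(1.0 - theta * t)) / t
numeric = minimize_scalar(g, bounds=(1e-9, 1.0 / theta - 1e-9),
                          method="bounded").fun
print(closed, numeric)  # both ~ 3.84
```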
3.5 Laplace distribution
Theorem 3.7.
Let $X\sim L(\mu ,b)$ have a Laplace distribution. Then for all $\alpha \in [0,1)$,
(3.16)
\[ {\operatorname{EVaR}_{\alpha }}(X)=\mu -b{W_{-1}}(\gamma ){\left(1+\frac{2}{{W_{-1}}(\gamma )}\right)^{\frac{1}{2}}},\]
where $\gamma =-2{e^{-2}}(1-\alpha )$. The expression for ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,\mu ,b)\in [0,1)\times \mathbb{R}\times (0,\infty )$.
Proof.
The moment-generating function equals
\[ {m_{X}}(t)=\frac{{e^{\mu t}}}{1-{b^{2}}{t^{2}}},\hspace{1em}|t|\lt \frac{1}{b}.\]
Then
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\in (0,{b^{-1}})}{\inf }\frac{1}{t}\left(a+\mu t-\log \left(1-{b^{2}}{t^{2}}\right)\right),\]
where $a:=-\log (1-\alpha )$. Let us find the minimum of the function
(3.17)
\[ f(t)=\frac{1}{t}\left(a+\mu t-\log \left(1-{b^{2}}{t^{2}}\right)\right),\hspace{1em}t\in (0,{b^{-1}}).\]
Its first derivative equals
\[ {f^{\prime }}(t)=-\frac{a({b^{2}}{t^{2}}-1)-{b^{2}}{t^{2}}(\log (1-{b^{2}}{t^{2}})-2)+\log (1-{b^{2}}{t^{2}})}{{t^{2}}({b^{2}}{t^{2}}-1)}.\]
From the condition ${f^{\prime }}(t)=0$, we get the equation
\[ \frac{2{b^{2}}{t^{2}}}{1-{b^{2}}{t^{2}}}=a-\log (1-{b^{2}}{t^{2}}),\]
or
(3.18)
\[ -\frac{2}{1-{b^{2}}{t^{2}}}\exp \left\{-\frac{2}{1-{b^{2}}{t^{2}}}\right\}=-2{e^{-a-2}}=:\gamma .\]
This equation can be solved via the Lambert function, because its right-hand side $\gamma =-2{e^{-2}}(1-\alpha )\ge -{e^{-1}}$. Since $-\frac{2}{1-{b^{2}}{t^{2}}}\lt -1$, we see that the branch ${W_{-1}}$ should be chosen. Thus, we arrive at the following solution to (3.18):
\[ {t_{\ast }}=\frac{1}{b}{\left(1+\frac{2}{{W_{-1}}(\gamma )}\right)^{\frac{1}{2}}}.\]
Note that ${W_{-1}}(t)$ is a decreasing function on $(-{e^{-1}},0)$, therefore
\[ {W_{-1}}(\gamma )\le {W_{-1}}\left(-2{e^{-2}}\right)=-2,\]
so the expression under the square root is positive and ${t_{\ast }}$ is well defined.

The second derivative of f equals
\[\begin{aligned}{}{f^{\prime\prime }}({t_{\ast }})& =\frac{2}{{t_{\ast }^{3}}}\left(a+\frac{{b^{2}}{t_{\ast }^{2}}(3{b^{2}}{t_{\ast }^{2}}-1)}{{(1-{b^{2}}{t_{\ast }^{2}})^{2}}}-\log (1-{b^{2}}{t_{\ast }^{2}})\right)\\ {} & =\frac{1}{{t_{\ast }^{3}}}\left(2a+{({W_{-1}}(\gamma ))^{2}}+5{W_{-1}}(\gamma )+6-2\log 2+2\log (-{W_{-1}}(\gamma ))\right)\\ {} & =\frac{1}{{t_{\ast }^{3}}}\left({({W_{-1}}(\gamma ))^{2}}+3{W_{-1}}(\gamma )+2\right)\gt 0,\end{aligned}\]
where we have used the relation (3.3). Consequently, ${t_{\ast }}$ is a minimum point. Substituting ${t_{\ast }}$ into the function (3.17) and simplifying the resulting expression, we get (3.16). □
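Again, formula (3.16) can be confirmed numerically; the parameters below are illustrative choices of ours.

```python
# Sketch: formula (3.16) for the Laplace distribution vs. direct minimization,
# using log m_X(t) = mu t - log(1 - b^2 t^2) on (0, 1/b).
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

mu, b, alpha = 1.0, 0.8, 0.95
gam = -2.0 * np.exp(-2.0) * (1.0 - alpha)
w = lambertw(gam, -1).real
closed = mu - b * w * np.sqrt(1.0 + 2.0 / w)

f = lambda t: (-np.log(1.0 - alpha) + mu * t - np.log(1.0 - (b * t) ** 2)) / t
numeric = minimize_scalar(f, bounds=(1e-9, 1.0 / b - 1e-9),
                          method="bounded").fun
print(closed, numeric)  # both ~ 5.01
```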
3.6 Inverse Gaussian distribution
Theorem 3.8.
Let $X\sim \operatorname{IG}(\mu ,\lambda )$ have an inverse Gaussian distribution (also known as the Wald distribution). Then for all $\alpha \in [0,1)$
\[ {\operatorname{EVaR}_{\alpha }}(X)=\mu \left(\delta +\sqrt{{\delta ^{2}}-1}\right),\]
where $\delta =1-\frac{\mu }{\lambda }\log (1-\alpha )$. The expression for ${\operatorname{EVaR}_{\alpha }}(X)$ is jointly continuous in $(\alpha ,\mu ,\lambda )\in [0,1)\times {(0,\infty )^{2}}$.
Proof.
The corresponding moment-generating function equals
\[ {m_{X}}(t)=\exp \left\{\frac{\lambda }{\mu }\left(1-{\left(1-\frac{2{\mu ^{2}}t}{\lambda }\right)^{\frac{1}{2}}}\right)\right\},\hspace{1em}t\lt \frac{\lambda }{2{\mu ^{2}}}.\]
Substituting it into the definition of ${\operatorname{EVaR}_{\alpha }}$, we get the following optimization problem:
\[ {\operatorname{EVaR}_{\alpha }}(X)=\underset{t\gt 0}{\inf }\frac{1}{t}\left(-\log (1-\alpha )+\frac{\lambda }{\mu }\left(1-{\left(1-\frac{2{\mu ^{2}}t}{\lambda }\right)^{\frac{1}{2}}}\right)\right).\]
Denote $a=-\log (1-\alpha )$. Let us find the minimum point of the objective function
\[ f(t)=\frac{1}{t}\left(a+\frac{\lambda }{\mu }-\frac{\lambda }{\mu }{\left(1-\frac{2{\mu ^{2}}t}{\lambda }\right)^{\frac{1}{2}}}\right)=\frac{1}{t}\left(a+\frac{\lambda }{\mu }-\frac{\lambda }{\mu }z(t)\right),\]
where $z(t):={(1-\frac{2{\mu ^{2}}t}{\lambda })^{1/2}}$. Then the derivative of $f(t)$ equals
(3.19)
\[ {f^{\prime }}(t)=\frac{1}{{t^{2}}}\left(-\frac{\lambda }{\mu }{z^{\prime }}(t)t-a-\frac{\lambda }{\mu }+\frac{\lambda }{\mu }z(t)\right),\]
where
\[ {z^{\prime }}(t)=-\frac{{\mu ^{2}}}{\lambda }{\left(1-\frac{2{\mu ^{2}}t}{\lambda }\right)^{-1/2}}=-\frac{{\mu ^{2}}}{\lambda z(t)}.\]
Using this formula together with the inverse relation $t=\frac{\lambda }{2{\mu ^{2}}}(1-{z^{2}}(t))$, we can transform (3.19) as follows:
\[\begin{aligned}{}{f^{\prime }}(t)& =\frac{1}{{t^{2}}}\left(\frac{\lambda }{2\mu z(t)}\left(1-{z^{2}}(t)\right)-a-\frac{\lambda }{\mu }+\frac{\lambda }{\mu }z(t)\right)\\ {} & =\frac{\lambda }{2\mu {t^{2}}z(t)}\left({z^{2}}(t)-2\delta z(t)+1\right),\end{aligned}\]
where $\delta =\frac{a\mu }{\lambda }+1\ge 1$. We see that the condition ${f^{\prime }}(t)=0$ is satisfied if and only if $z(t)=\delta \pm \sqrt{{\delta ^{2}}-1}$. The root with the “plus” sign is bigger than 1, which is not possible for $t\gt 0$. Therefore, we need to consider only the value
\[ {z_{\ast }}=\delta -\sqrt{{\delta ^{2}}-1},\]
which corresponds to
\[ {t_{\ast }}=\frac{\lambda }{2{\mu ^{2}}}\left(1-{\left(\delta -\sqrt{{\delta ^{2}}-1}\right)^{2}}\right)\in \left(0,\frac{\lambda }{2{\mu ^{2}}}\right).\]
Note that $z(t)$ is a decreasing function, and
\[ {f^{\prime }}(t)\lt 0\hspace{1em}\text{for}\hspace{2.5pt}t\in (0,{t_{\ast }}),\hspace{2em}{f^{\prime }}(t)\gt 0\hspace{1em}\text{for}\hspace{2.5pt}t\in \left({t_{\ast }},\frac{\lambda }{2{\mu ^{2}}}\right).\]
This means that the minimal value of $f(t)$ is achieved at $t={t_{\ast }}$. It equals
\[\begin{aligned}{}f({t_{\ast }})& =\frac{1}{{t_{\ast }}}\left(a+\frac{\lambda }{\mu }-\frac{\lambda }{\mu }{z_{\ast }}\right)=\frac{\lambda }{\mu }\cdot \frac{\delta -{z_{\ast }}}{{t_{\ast }}}=\frac{2\mu \sqrt{{\delta ^{2}}-1}}{1-{\left(\delta -\sqrt{{\delta ^{2}}-1}\right)^{2}}}\\ {} & =\frac{\mu }{\delta -\sqrt{{\delta ^{2}}-1}}=\mu \left(\delta +\sqrt{{\delta ^{2}}-1}\right).\end{aligned}\]
□
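Note that, unlike the preceding results, this formula does not involve the Lambert function. A short numerical sketch (illustrative parameters of ours):

```python
# Sketch: Theorem 3.8 for the inverse Gaussian distribution vs. direct
# minimization over (0, lambda / (2 mu^2)).
import numpy as np
from scipy.optimize import minimize_scalar

mu, lam, alpha = 1.0, 2.0, 0.95
delta = 1.0 - (mu / lam) * np.log(1.0 - alpha)
closed = mu * (delta + np.sqrt(delta**2 - 1.0))

def f(t):
    log_mgf = (lam / mu) * (1.0 - np.sqrt(1.0 - 2.0 * mu**2 * t / lam))
    return (-np.log(1.0 - alpha) + log_mgf) / t

numeric = minimize_scalar(f, bounds=(1e-9, lam / (2.0 * mu**2) - 1e-9),
                          method="bounded").fun
print(closed, numeric)  # both ~ 4.79
```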
3.7 Normal inverse Gaussian distribution
Consider the class of normal inverse Gaussian distributions $\operatorname{NIG}(\alpha ,\beta ,\mu ,\delta )$. This is a flexible family that includes heavy-tailed and skewed distributions.
Theorem 3.9.
Let $X\sim \operatorname{NIG}(\alpha ,\beta ,\mu ,\delta )$ be a normal inverse Gaussian distribution, $0\le |\beta |\lt \alpha $, $\mu \in \mathbb{R}$, $\delta \gt 0$. Then for the level ${\alpha ^{\prime }}\in [0,1)$,
(3.20)
\[ {\operatorname{EVaR}_{{\alpha ^{\prime }}}}(X)=\mu +\frac{\delta \left(\beta \varphi +\alpha \psi \right)}{{\alpha ^{2}}-{\beta ^{2}}},\]
where
\[ \varphi =\sqrt{{\alpha ^{2}}-{\beta ^{2}}}+\frac{1}{\delta }\log \frac{1}{1-{\alpha ^{\prime }}},\hspace{2em}\psi =\sqrt{{\varphi ^{2}}-{\alpha ^{2}}+{\beta ^{2}}}.\]
Remark 3.10.
-
(i) We have excluded the case $\alpha =\beta =0$, because in this case the moment-generating function is not defined for any $t\ne 0$.
-
(iii) Evidently, $\varphi \ge \psi \ge 0$.
-
(iv) ${t_{\ast }}\in [0,\alpha -\beta ]$, because ${t_{\ast }}=(\alpha -\beta )\frac{\alpha \psi +\beta \psi }{\alpha \varphi +\beta \psi }$ and $\frac{\alpha \psi +\beta \psi }{\alpha \varphi +\beta \psi }\in [0,1]$ if $\varphi \ge \psi \ge 0$.
Proof.
The corresponding moment-generating function is equal to
(3.21)
\[ {m_{X}}(t)=\exp \left\{\mu t+\delta \sqrt{{\alpha ^{2}}-{\beta ^{2}}}-\delta \sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right\},\hspace{1em}t\in [-\alpha -\beta ,\alpha -\beta ],\]
see, e.g., [6]. Then we get the following optimization problem:
\[ {\operatorname{EVaR}_{{\alpha ^{\prime }}}}(X)=\underset{t\in (0,\alpha -\beta ]}{\inf }f(t),\]
where
(3.22)
\[\begin{aligned}{}f(t)& :=\frac{1}{t}\left(\log \frac{1}{1-{\alpha ^{\prime }}}+\mu t+\delta \sqrt{{\alpha ^{2}}-{\beta ^{2}}}-\delta \sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right)\\ {} & =\mu +\frac{\delta }{t}\left(\varphi -\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right).\end{aligned}\]
Then we find the minimum point for positive t:
\[\begin{aligned}{}{f^{\prime }}(t)& =\frac{\delta }{{t^{2}}}\left(\frac{(\beta +t)t}{\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}}-\varphi +\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right)\\ {} & =\frac{\delta \left((\beta +t)t-\varphi \sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}+{\alpha ^{2}}-{(\beta +t)^{2}}\right)}{{t^{2}}\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}}\\ {} & =\frac{\delta \left({\alpha ^{2}}-{\beta ^{2}}-\beta t-\varphi \sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right)}{{t^{2}}\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}}\\ {} & =\frac{\delta \left({({\alpha ^{2}}-{\beta ^{2}}-\beta t)^{2}}-{\varphi ^{2}}({\alpha ^{2}}-{(\beta +t)^{2}})\right)}{{t^{2}}\sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\left({\alpha ^{2}}-{\beta ^{2}}-\beta t+\varphi \sqrt{{\alpha ^{2}}-{(\beta +t)^{2}}}\right)}.\end{aligned}\]
Note that the denominator is positive for all $t\in [0,\alpha -\beta ]$. Indeed, ${\alpha ^{2}}-{\beta ^{2}}-\beta t\ge {\alpha ^{2}}-{\beta ^{2}}-\beta (\alpha -\beta )=\alpha (\alpha -\beta )\ge 0$. Hence, the sign of ${f^{\prime }}(t)$ coincides with that of the function
\[\begin{aligned}{}g(t)& ={\left({\alpha ^{2}}-{\beta ^{2}}-\beta t\right)^{2}}-{\varphi ^{2}}\left({\alpha ^{2}}-{(\beta +t)^{2}}\right)\\ {} & ={t^{2}}\left({\varphi ^{2}}+{\beta ^{2}}\right)+2\beta \left({\varphi ^{2}}-{\alpha ^{2}}+{\beta ^{2}}\right)t-\left({\alpha ^{2}}-{\beta ^{2}}\right)\left({\varphi ^{2}}-{\alpha ^{2}}+{\beta ^{2}}\right)\\ {} & ={t^{2}}\left({\varphi ^{2}}+{\beta ^{2}}\right)+2\beta {\psi ^{2}}t-\left({\alpha ^{2}}-{\beta ^{2}}\right){\psi ^{2}}.\end{aligned}\]
It is not hard to see that the quadratic equation $g(t)=0$ has two solutions
\[ {t_{\pm }}=\frac{\psi \left(\pm \alpha \varphi -\beta \psi \right)}{{\varphi ^{2}}+{\beta ^{2}}},\]
however, the solution with minus sign is obviously negative. Let us consider the solution
\[ {t_{\ast }}=\frac{\psi (-\beta \psi +\alpha \varphi )}{{\varphi ^{2}}+{\beta ^{2}}}=\frac{\psi \left({\alpha ^{2}}{\varphi ^{2}}-{\beta ^{2}}{\psi ^{2}}\right)}{\left({\varphi ^{2}}+{\beta ^{2}}\right)(\beta \psi +\alpha \varphi )}=\frac{\left({\alpha ^{2}}-{\beta ^{2}}\right)\psi }{\alpha \varphi +\beta \psi },\]
where we have used the relation
\[ {\alpha ^{2}}{\varphi ^{2}}-{\beta ^{2}}{\psi ^{2}}={\alpha ^{2}}{\varphi ^{2}}-{\beta ^{2}}\left({\varphi ^{2}}-{\alpha ^{2}}+{\beta ^{2}}\right)=\left({\alpha ^{2}}-{\beta ^{2}}\right)\left({\varphi ^{2}}+{\beta ^{2}}\right).\]
According to Remark 3.10 (iv), ${t_{\ast }}\in [0,\alpha -\beta ]$. Moreover, from the properties of the quadratic function we get that $g(t)\lt 0$ for $t\in [0,{t_{\ast }})$ and $g(t)\gt 0$ for $t\in ({t_{\ast }},\alpha -\beta ]$, and the derivative ${f^{\prime }}(t)$ demonstrates the same behavior. This means that ${t_{\ast }}$ is the minimum point of $f(t)$ on $[0,\alpha -\beta ]$. Substituting this value into (3.22), we obtain (3.20). □
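A numerical check of (3.20), with illustrative parameters of ours (alpha_p stands for the confidence level α′):

```python
# Sketch: formula (3.20) for the NIG distribution vs. direct minimization
# over (0, alpha - beta); a, b, d stand for alpha, beta, delta.
import numpy as np
from scipy.optimize import minimize_scalar

a, b, mu, d, alpha_p = 2.0, 0.5, 0.0, 1.0, 0.95
phi = np.sqrt(a**2 - b**2) + np.log(1.0 / (1.0 - alpha_p)) / d
psi = np.sqrt(phi**2 - a**2 + b**2)
closed = mu + d * (b * phi + a * psi) / (a**2 - b**2)

def f(t):
    log_mgf = mu * t + d * (np.sqrt(a**2 - b**2) - np.sqrt(a**2 - (b + t)**2))
    return (-np.log(1.0 - alpha_p) + log_mgf) / t

numeric = minimize_scalar(f, bounds=(1e-9, a - b - 1e-9),
                          method="bounded").fun
print(closed, numeric)  # both ~ 3.08
```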
4 Concluding remarks
There are not many models for EVaR described in the current scientific literature; most often, only the standard normal distribution is considered. The lack of further results in this area is due to the complexity of calculating EVaR for the majority of the main distributions.
In this paper, EVaR was calculated for several distributions via the Lambert function. We discussed the distributions that are most commonly used in statistical models and loss modelling, namely, the Poisson, compound Poisson, gamma, exponential, chi-squared, Laplace, inverse Gaussian and normal inverse Gaussian distributions.
The results of this work open the possibility of further studying the behavior of EVaR for the models already considered, as well as of extending the class of models for which EVaR can be calculated analytically.
As the reader may observe, all distributions considered in the paper (except the normal inverse Gaussian) belong to the exponential family. We anticipate that explicit formulas for EVaR can be obtained for other members of the exponential family, as well as for other risk distributions. However, this remains a subject for future investigation.