1 Introduction
Several models have been proposed for a discrete time risk process $\{U(t):t=0,1,\dots \}$. The following model, known as a compound binomial process, was first considered in [7]:
(1)
\[ U(t)=u+t-{\sum \limits_{i=1}^{N(t)}}{X_{i}},\]
where $U(0)=u\ge 0$ is an integer representing the initial capital and the counting process $\{N(t):t=0,1,\dots \}$ has a $\text{Binomial}(t,p)$ distribution, where p stands for the probability of a claim in each period. The discrete random variables ${X_{1}},{X_{2}},\dots $ are i.i.d. with probability function ${f_{X}}(x)=ℙ({X_{i}}=x)$ for $x=1,2,\dots $ and mean ${\mu _{X}}$ such that ${\mu _{X}}\cdot p<1$. This restriction comes from the net profit condition. Each ${X_{i}}$ represents the total amount of claims in the i-th period where claims existed. In each period, one unit of currency from premiums is gained. The top-left plot of Figure 1 shows a possible realization of this risk process. The ultimate ruin time is defined as
\[ \tau :=\min \hspace{0.1667em}\{t\ge 1:U(t)\le 0\},\]
as long as the indicated set is not empty, otherwise $\tau :=\infty $. Hence, the probability of ultimate ruin is
\[ \psi (u):=ℙ(\tau <\infty \mid U(0)=u).\]
The reader should be aware that some authors, for example in [15], consider models where ruin occurs only when the condition $U(t)<0$ is satisfied.
One central problem in the theory of ruin is to find $\psi (u)$. For the above model this probability can be calculated using the following relation known as Gerber’s formula [7],
for $u=1,2,\dots $, where ${\overline{F}_{X}}(u)=ℙ({X_{i}}>u)={\textstyle\sum _{x=u+1}^{\infty }}{f_{X}}(x)$.
An apparently simpler risk model is defined as follows:
(4)
\[ U(t)=u+t-{\sum \limits_{i=1}^{t}}{Y_{i}},\hspace{1em}t=0,1,\dots \]
In this case, at each unit of time there is always a claim of size ${Y_{i}}$. If ${\mu _{Y}}$ denotes the expectation of these claims, the net profit condition now reads ${\mu _{Y}}<1$. It can be shown [4, p. 467] that this condition implies $\psi (u)<1$, where the time of ruin τ and the ultimate ruin probability $\psi (u)$ are defined as before. One feature that makes this model simple is that all of its elements are discrete, although some studies [2] incorporate continuous claims.
Under a conditioning argument, it is easy to show that the probability of ruin satisfies the recursive relation
Now, given a compound binomial model (1) we can construct a Gerber–Dickson model (4) as follows. Let ${R_{1}},{R_{2}},\dots $ be i.i.d. $\text{Bernoulli}(p)$ random variables and define ${Y_{i}}={R_{i}}\cdot {X_{i}}$ where ${X_{i}}\in \{1,2,\dots \}$, $i\ge 1$, as in model (1). The distribution of these claims is ${f_{Y}}(0)=1-p$ and ${f_{Y}}(y)=p\cdot {f_{X}}(y)$, for $y\ge 1$.
Conversely, given model (4) and defining $p=1-{f_{Y}}(0)$, we can construct a model (1) by letting the claims ${X_{i}}$ have distribution ${f_{X}}(x)={f_{Y}}(x)/p$, for $x\ge 1$. It can be readily checked that ${\mu _{Y}}=p\cdot {\mu _{X}}$ and that the probability generating functions of $U(t)$ in both models coincide. This shows that models (1) and (4) are equivalent in the sense that $U(t)$ has the same distribution in both models. As expected, the recursive relations (3) and (6) can easily be obtained one from the other, and results obtained for one model can be translated to the other. For example, using model (1), the authors in [3] find the ruin severity and the surplus just before ruin, and a discrete version of the popular Gerber–Shiu function is used in [10] and [11] to solve problems on ruin time and ruin severity. A survey of results and models for several discrete time risk models can be found in [12].
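This equivalence is easy to check empirically. The following minimal sketch (not from the paper; the geometric choice for the ${X_{i}}$, the parameter values and all names are illustrative) simulates the construction ${Y_{i}}={R_{i}}\cdot {X_{i}}$ and verifies ${\mu _{Y}}=p\cdot {\mu _{X}}$ and ${f_{Y}}(0)=1-p$:
```python
# Sketch: claims Y_i = R_i * X_i linking models (1) and (4).
# X_i ~ Geometric on {1, 2, ...} is an illustrative choice of claim sizes.
import numpy as np

rng = np.random.default_rng(seed=1)
p, q, m = 0.4, 0.5, 200_000          # claim probability p; f_X(x) = q(1-q)^(x-1)
R = rng.binomial(1, p, size=m)       # R_i ~ Bernoulli(p)
X = rng.geometric(q, size=m)         # X_i with values in {1, 2, ...}, mean 1/q
Y = R * X                            # f_Y(0) = 1-p and f_Y(y) = p f_X(y), y >= 1

print(Y.mean(), p * X.mean())        # both close to mu_Y = p * mu_X = 0.8
print((Y == 0).mean(), 1 - p)        # both close to f_Y(0) = 0.6
```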
In this work we will use the notation of the Gerber–Dickson risk model (4) and, for simplicity, we will write $f(y)$, $F(y)$ and μ instead of ${f_{Y}}(y)$, ${F_{Y}}(y)$ and ${\mu _{Y}}$, respectively. Also, as time and other auxiliary variables are discrete, we will write, for example, $t\ge 0$ instead of $t=0,1,\dots $ Our main objective is to provide some methods to approximate the ultimate ruin probability for this risk model. The results obtained are the discrete version of those found in [14].
2 The Pollaczek–Khinchine formula
The continuous version of the Pollaczek–Khinchine formula plays a major role in the theory of ruin for the Cramér–Lundberg model. By contrast, its discrete version is seldom mentioned in the literature on discrete time risk models. In this section we develop this formula and later apply it to obtain a general method for calculating ultimate ruin probabilities for claims with particular distributions. The construction for the discrete case closely resembles the one already known for the continuous case.
Assuming $\tau <\infty $, the non-negative random variable $W=|U(\tau )|$ is known as the severity of ruin. It indicates how far the capital drops below zero at the time of ruin. See the top-right plot of Figure 1. The joint probability of ruin and severity not greater than $w=0,1,\dots $ is denoted by $ℙ(\tau <\infty ,W\le w)$. In [5] it is shown that, in particular, for zero initial capital,
(9)
\[ ℙ(\tau <\infty ,W=w\mid U(0)=0)=\overline{F}(w),\hspace{1em}w\ge 0.\]
This probability will be useful in finding the distribution of the size of the first drop of the risk process below its initial capital u (see Theorem 1 below), which will lead us to the Pollaczek–Khinchine formula. For every claim distribution there is an associated distribution which often appears in the calculation of ruin probabilities. This is defined next.
Definition 2.
Let X be a random variable with values in $\{0,1,\dots \}$, probability function $f(x)$, distribution function $F(x)$ and mean $0<\mu <\infty $. The equilibrium distribution associated with $f(x)$ is given by
(10)
\[ {f_{e}}(x)=\frac{\overline{F}(x)}{\mu },\hspace{1em}x=0,1,\dots \]
The probability function defined by (10) is also known as the integrated-tail distribution, although this name is best suited to continuous distributions. For example, the equilibrium distribution associated with a $\text{Geometric}(p)$ claim distribution with mean $\mu =(1-p)/p$ is the same geometric distribution, since
(11)
\[ {f_{e}}(x)=\frac{{(1-p)^{x+1}}}{(1-p)/p}=p{(1-p)^{x}},\hspace{1em}x\ge 0.\]
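As a quick numerical illustration (a sketch under the definitions above; `equilibrium_pmf` is a hypothetical helper and the truncation at 200 terms is arbitrary), the fixed-point property (11) can be checked directly from (10):
```python
# Sketch: equilibrium pmf (10) and the geometric fixed point (11).
import numpy as np

def equilibrium_pmf(f):
    """f[x] = f(x) for x = 0, ..., len(f)-1; returns f_e on the same grid."""
    f = np.asarray(f, dtype=float)
    Fbar = 1.0 - np.cumsum(f)              # Fbar(x) = P(X > x)
    mu = np.sum(np.arange(len(f)) * f)     # E(X); tail beyond the grid neglected
    return Fbar / mu

p = 0.6
x = np.arange(200)
geom = p * (1 - p) ** x                    # f(x) = p(1-p)^x, x = 0, 1, ...
print(np.allclose(equilibrium_pmf(geom), geom))   # True: same geometric
```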
As in the continuous time risk models, let us define the surplus process $\{Z(t):t\ge 0\}$ by
\[ Z(t)=u-U(t)={\sum \limits_{i=1}^{t}}({Y_{i}}-1),\hspace{1em}t\ge 0.\]
This is a random walk that starts at zero, has stationary and independent increments, and satisfies $Z(t)\to -\infty $ a.s. as $t\to \infty $ under the net profit condition $\mu <1$. See the bottom-right plot of Figure 1. In terms of this surplus process, ruin occurs when $Z(t)$ reaches level u or above. Thus, the ruin probability can be written as
\[ \psi (u)=ℙ(Z(t)\ge u\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\ge 1).\]
As $u\ge 1$ and $Z(0)=0$, we can also write
\[ \psi (u)=ℙ(Z(t)\ge u\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\ge 0).\]
We next define the times of records and the severities for the surplus process.
Definition 3.
Let ${\tau _{0}^{\ast }}:=0$. For $i\ge 1$, the i-th record time of the surplus process is defined as
(15)
\[ {\tau _{i}^{\ast }}=\min \hspace{0.1667em}\{t>{\tau _{i-1}^{\ast }}:Z(t)\ge Z({\tau _{i-1}^{\ast }})\},\]
when the indicated set is not empty, otherwise ${\tau _{i}^{\ast }}:=\infty $. The non-negative variable ${Y_{i}^{\ast }}=Z({\tau _{i}^{\ast }})-Z({\tau _{i-1}^{\ast }})$ is called the severity or size of the i-th record, assuming ${\tau _{i}^{\ast }}<\infty $.
The random variables ${\tau _{0}^{\ast }}<{\tau _{1}^{\ast }}<\cdots \hspace{0.1667em}$ represent the stopping times when the surplus process $\left\{Z(t):t\ge 0\right\}$ arrives at a new or the previous maximum, and the severity ${Y_{i}^{\ast }}$ is the difference between the maxima at ${\tau _{i}^{\ast }}$ and ${\tau _{i-1}^{\ast }}$. A graphical example of these record times is shown in the bottom-right plot of Figure 1. In particular, observe that ${\tau _{1}^{\ast }}$ is the first positive time the risk process is less than or equal to its initial capital u, that is,
(16)
\[ {\tau _{1}^{\ast }}=\min \hspace{0.1667em}\{t>0:U(t)\le u\},\]
and the severity is ${Y_{1}^{\ast }}=Z({\tau _{1}^{\ast }})=u-U({\tau _{1}^{\ast }})$, which is the size of this first drop below level u. Also, since the surplus process has stationary and independent increments, all severities share the same distribution, that is,
(17)
\[ {Y_{i}^{\ast }}=Z({\tau _{i}^{\ast }})-Z({\tau _{i-1}^{\ast }})\sim Z({\tau _{1}^{\ast }})-Z(0)={Y_{1}^{\ast }},\hspace{1em}i\ge 1,\]
assuming ${\tau _{i}^{\ast }}<\infty $. We next find that distribution.
Theorem 1.
Let $k\ge 1$. Conditioned on the event $({\tau _{k}^{\ast }}<\infty )$, the severities ${Y_{1}^{\ast }},\dots ,{Y_{k}^{\ast }}$ are independent and identically distributed according to the equilibrium distribution
(18)
\[ ℙ({Y_{i}^{\ast }}=x\mid {\tau _{k}^{\ast }}<\infty )={f_{e}}(x)=\frac{\overline{F}(x)}{\mu },\hspace{1em}x\ge 0.\]
Proof.
By (17), it is enough to find the distribution of ${Y_{1}^{\ast }}$. Observe that ${\tau _{1}^{\ast }}=\tau $ when $U(0)=0$. By (9) and (5), for $x\ge 0$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle ℙ({Y_{1}^{\ast }}=x\mid {\tau _{1}^{\ast }}<\infty )& \displaystyle =& \displaystyle ℙ(u-U({\tau _{1}^{\ast }})=x\mid {\tau _{1}^{\ast }}<\infty )\\ {} & \displaystyle =& \displaystyle ℙ(|U(\tau )|=x\mid \tau <\infty ,U(0)=0)\\ {} & \displaystyle =& \displaystyle ℙ(\tau <\infty ,W=x\mid U(0)=0)/ℙ(\tau <\infty \mid U(0)=0)\\ {} & \displaystyle =& \displaystyle \overline{F}(x)/\mu .\end{array}\]
The independence property follows from the independence of the claims. Indeed, the severity of the i-th record time is
\[ {Y_{i}^{\ast }}=Z({\tau _{i}^{\ast }})-Z({\tau _{i-1}^{\ast }})={\sum \limits_{j={\tau _{i-1}^{\ast }}+1}^{{\tau _{i}^{\ast }}}}({Y_{j}}-1),\hspace{1em}i\ge 1.\]
Therefore,
\[ ℙ\left({\bigcap \limits_{i=1}^{k}}\left({Y_{i}^{\ast }}={y_{i}}\right)\right)=ℙ\left({\bigcap \limits_{i=1}^{k}}\left({\sum \limits_{j={\tau _{i-1}^{\ast }}+1}^{{\tau _{i}^{\ast }}}}({Y_{j}}-1)={y_{i}}\right)\right)={\prod \limits_{i=1}^{k}}\hspace{0.1667em}ℙ\left({Y_{i}^{\ast }}={y_{i}}\right).\]
□
Since the surplus process is a Markov process, the following properties hold. For $i\ge 2$ and $0<s<x$, assuming ${\tau _{i}^{\ast }}<\infty $,
(19)
\[ ℙ({\tau _{i}^{\ast }}=x\mid {\tau _{i-1}^{\ast }}=s)=ℙ({\tau _{i}^{\ast }}-{\tau _{i-1}^{\ast }}=x-s\mid {\tau _{i-1}^{\ast }}=s)=ℙ({\tau _{1}^{\ast }}=x-s).\]
Also, for $k\ge 1$,
(21)
\[ ℙ({\tau _{k+1}^{\ast }}=\infty \mid {\tau _{k}^{\ast }}<\infty )=ℙ({\tau _{1}^{\ast }}=\infty ).\]
The total number of records of the surplus process $\{Z(t):t\ge 0\}$ is defined as the non-negative random variable
\[ K:=\max \hspace{0.1667em}\{i\ge 1:{\tau _{i}^{\ast }}<\infty \},\]
when the indicated set is not empty, otherwise $K:=0$. Note that $0\le K<\infty $ a.s. since $Z(t)\to -\infty $ a.s. under the net profit condition. The distribution of this random variable is established next.
Theorem 2.
Let $\mu <1$ be the mean of claims in the Gerber–Dickson risk process (4). The number of records K has a $\textit{Geometric}(1-\mu )$ distribution, that is,
\[ {f_{K}}(k)=ℙ(K=k)=(1-\mu ){\mu ^{k}},\hspace{1em}k\ge 0.\]
Proof.
The case $k=0$ can be related to the ruin probability with $u=0$ as follows,
\[ {f_{K}}(0)=ℙ({\tau _{1}^{\ast }}=\infty )=1-ℙ({\tau _{1}^{\ast }}<\infty )=1-\psi (0)=1-\mu .\]
Hence, $ℙ(K>0)=\psi (0)=\mu $. Let us see the case $k=1$,
\[ {f_{K}}(1)=ℙ({\tau _{1}^{\ast }}<\infty ,{\tau _{2}^{\ast }}=\infty )=ℙ({\tau _{2}^{\ast }}=\infty \mid {\tau _{1}^{\ast }}<\infty )ℙ({\tau _{1}^{\ast }}<\infty ).\]
By (21),
\[ {f_{K}}(1)=ℙ({\tau _{1}^{\ast }}=\infty )ℙ({\tau _{1}^{\ast }}<\infty )=ℙ(K>0){f_{K}}(0)=\mu (1-\mu ).\]
Now consider the case $k\ge 2$ and let ${A_{k}}=({\tau _{k}^{\ast }}<\infty )$. Conditioning on ${A_{k-1}}$ and its complement,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle ℙ({A_{k}})& \displaystyle =& \displaystyle ℙ({\tau _{k}^{\ast }}<\infty \mid {A_{k-1}})ℙ({A_{k-1}})\\ {} & \displaystyle =& \displaystyle ℙ({\tau _{k}^{\ast }}<\infty \mid {\tau _{k-1}^{\ast }}<\infty )ℙ({A_{k-1}})\\ {} & \displaystyle =& \displaystyle ℙ({\tau _{1}^{\ast }}<\infty )ℙ({A_{k-1}})\\ {} & \displaystyle =& \displaystyle \psi (0)ℙ({A_{k-1}}).\end{array}\]
An iterative argument shows that $ℙ({A_{k}})={(\psi (0))^{k}}$, $k\ge 2$. Therefore,
\[\begin{aligned}{}{f_{K}}(k)& \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}ℙ({\tau _{k+1}^{\ast }}=\infty ,{A_{k}})=ℙ({\tau _{k+1}^{\ast }}=\infty \mid {A_{k}})ℙ({A_{k}})\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}ℙ({\tau _{1}^{\ast }}=\infty ){(\psi (0))^{k}}=(1-\mu ){\mu ^{k}}.\end{aligned}\]
□
In the following proposition it is established that the ultimate maximum of the surplus process has a compound geometric distribution. This will allow us to write the ruin probability as the tail of this distribution.
Theorem 3.
The ultimate maximum of the surplus process satisfies ${\max _{t\ge 0}}Z(t)={\textstyle\sum _{i=1}^{K}}{Y_{i}^{\ast }}$, which has a compound geometric distribution. In particular,
(25)
\[ \psi (u)=ℙ\left({\sum \limits_{i=1}^{K}}{Y_{i}^{\ast }}\ge u\right),\hspace{1em}u\ge 1.\]
Theorem 4 (Pollaczek–Khinchine formula, discrete version).
The probability of ruin for a Gerber–Dickson risk process (4) can be written as
(27)
\[ \psi (u)=(1-\mu ){\sum \limits_{k=1}^{\infty }}ℙ({S_{k}^{\ast }}\ge u)\hspace{0.1667em}{\mu ^{k}},\hspace{1em}u\ge 0,\]
where ${S_{k}^{\ast }}={\textstyle\sum _{i=1}^{k}}{Y_{i}^{\ast }}$.
Proof.
For $u=0$, the right-hand side of (27) reduces to μ, which we know is $\psi (0)$. For $u\ge 1$, by (23) and (25),
\[\begin{aligned}{}\psi (u)& \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}ℙ\left({\sum \limits_{i=1}^{K}}{Y_{i}^{\ast }}\ge u\right)={\sum \limits_{k=0}^{\infty }}ℙ\left({\sum \limits_{i=1}^{K}}{Y_{i}^{\ast }}\ge u\mid K=k\right){f_{K}}(k)\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}(1-\mu ){\sum \limits_{k=1}^{\infty }}ℙ({S_{k}^{\ast }}\ge u)\hspace{0.1667em}{\mu ^{k}}.\end{aligned}\]
□
For example, suppose claims have a $\text{Geometric}(p)$ distribution with mean $\mu =(1-p)/p$. The net profit condition $\mu <1$ implies $p>1/2$. We have seen in equation (11) that the associated equilibrium distribution is again $\text{Geometric}(p)$, and hence the k-th convolution is negative binomial with parameters $k\in \mathbb{N} $ and $p\in (1/2,1)$. Straightforward calculations show that the Pollaczek–Khinchine formula gives the known solution for the probability of ruin,
(28)
\[ \psi (u)={\left(\frac{1-p}{p}\right)^{u+1}},\hspace{1em}u\ge 0.\]
This includes the case $u=0$ in the same formula. Before turning to mixture claim distributions in the next section, the sketch below numerically checks this identity.
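The following minimal sketch (not from the paper; function names, the choice $p=0.6$ and the truncation level are illustrative) evaluates a truncated version of (27) for geometric claims and compares it with the closed form (28):
```python
# Sketch: truncated Pollaczek-Khinchine sum (27) for Geometric(p) claims.
# The severities follow the equilibrium distribution, here again Geometric(p),
# so S_k* ~ NB(k, p) and its tail is available from scipy.
from scipy.stats import nbinom

def psi_pk(u, p, kmax=200):
    """Truncate (27) at kmax terms; mu = (1-p)/p is the claim mean."""
    mu = (1 - p) / p
    total = 0.0
    for k in range(1, kmax + 1):
        tail = 1.0 - nbinom.cdf(u - 1, k, p)   # P(S_k* >= u)
        total += tail * mu ** k
    return (1 - mu) * total

p = 0.6
for u in range(6):
    print(u, psi_pk(u, p), ((1 - p) / p) ** (u + 1))   # matches (28)
```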
3 Negative binomial mixture distributions
Negative binomial mixture (NBM) distributions will be used to approximate the ruin probability when claims have a mixed Poisson (MP) distribution. Although NBM distributions are the discrete analogue of Erlang mixture distributions [18, 19], they cannot approximate every discrete distribution with non-negative support. However, it turns out that they can approximate mixed Poisson distributions. This is stated in [16, Theorem 1], where the authors define NBM distributions through the probability generating function
\[ G(z)=\underset{m\to \infty }{\lim }{\sum \limits_{k=1}^{m}}{q_{k,m}}{\left(\frac{1-{p_{k,m}}}{1-{p_{k,m}}\hspace{0.1667em}z}\right)^{{r_{k,m}}}},\hspace{1em}z<1,\]
where the ${q_{k,m}}$ are positive numbers whose sum over the index k equals 1. This is a rather general definition of a NBM distribution; in this work we will consider a particular case of it.
We will denote by $\text{NB}(k,p)$ the negative binomial distribution with parameters $k\in \mathbb{N} $ and $p\in (0,1)$. Its probability function will be written as $\text{nb}(k,p)(x)$ and its distribution function as $\text{NB}(k,p)(x)$. More precisely, for integers $x\ge 0$,
\[ \text{nb}(k,p)(x)=\left(\genfrac{}{}{0.0pt}{}{k+x-1}{x}\right){p^{k}}{(1-p)^{x}},\hspace{2.5pt}\text{NB}(k,p)(x)=1-{\sum \limits_{i=0}^{k-1}}\text{nb}(x+1,1-p)(i).\]
The case $k=1$ reduces to the $\text{Geometric}(p)$ distribution. It will also be useful to recall that the $\text{NB}(k,p)$ distribution is the distribution of the sum of k independent $\text{Geometric}(p)$ random variables.
Definition 4.
Let ${q_{1}},{q_{2}},\dots $ be a sequence of numbers such that ${q_{k}}\ge 0$ and ${\textstyle\sum _{k=1}^{\infty }}{q_{k}}=1$. A negative binomial mixture distribution with parameters $\boldsymbol{\pi }=({q_{1}},{q_{2}},\dots )$ and $p\in (0,1)$, denoted by $\text{NBM}(\boldsymbol{\pi },p)$, is a discrete distribution with the probability function
(29)
\[ f(x)={\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot \text{nb}(k,p)(x),\hspace{1em}x\ge 0.\]
It is useful to observe that any NBM distribution can be written as a compound sum of geometric random variables. Indeed, let N be a discrete random variable with the probability function ${q_{k}}={f_{N}}(k)$, $k\ge 1$, and define ${S_{N}}={\textstyle\sum _{i=1}^{N}}{X_{i}}$, where ${X_{1}},{X_{2}},\dots $ are i.i.d. r.v.s $\text{Geometric}(p)$-distributed and independent of N. Then, conditioning on the values of N,
\[ ℙ({S_{N}}=x)={\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot ℙ\left({\sum \limits_{i=1}^{k}}{X_{i}}=x\right)={\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot \text{nb}(k,p)(x),\hspace{1em}x\ge 0,\]
and the p.g.f. has the form ${G_{{S_{N}}}}(r)={G_{N}}({G_{X}}(r))$. Thus, given any $\text{NBM}(\boldsymbol{\pi },p)$ distribution with $\boldsymbol{\pi }=({f_{N}}(1),{f_{N}}(2),\dots )$, we have the representation
(30)
\[ {S_{N}}={\sum \limits_{i=1}^{N}}{X_{i}}\sim \text{NBM}(\boldsymbol{\pi },p).\]
The following is a particular way to write the distribution function of a NBM distribution.
Theorem 5.
Let ${S_{N}}\sim \textit{NBM}(\boldsymbol{\pi },p)$ with $\boldsymbol{\pi }=({f_{N}}(1),{f_{N}}(2),\dots )$, and for $x\ge 0$ let $Z\sim \textit{NB}(x+1,1-p)$. Then
(31)
\[ {F_{{S_{N}}}}(x)=𝔼({F_{N}}(Z)).\]
In particular,
(32)
\[ {\overline{F}_{{S_{N}}}}(x)=𝔼({\overline{F}_{N}}(Z)).\]
Proof.
For $x\ge 0$,
\[\begin{aligned}{}{F_{{S_{N}}}}(x)& \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\sum \limits_{k=1}^{\infty }}{f_{N}}(k)\cdot \text{NB}(k,p)(x)\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\sum \limits_{k=1}^{\infty }}{f_{N}}(k)\hspace{0.1667em}\left[1-{\sum \limits_{i=0}^{k-1}}\text{nb}(x+1,1-p)(i)\right]\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\sum \limits_{i=0}^{\infty }}\left[{\sum \limits_{k=1}^{i}}{f_{N}}(k)\right]\hspace{0.1667em}\text{nb}(x+1,1-p)(i)\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}𝔼({F_{N}}(Z)).\end{aligned}\]
□
We will show next that the equilibrium distribution associated to a NBM distribution is again a NBM one. For a distribution function $F(x)$, $\overline{F}(x)$ denotes $1-F(x)$.
Theorem 6.
Let ${S_{N}}\sim \textit{NBM}(\boldsymbol{\pi },p)$, with $\boldsymbol{\pi }=({f_{N}}(1),{f_{N}}(2),\dots )$ and $𝔼(N)<\infty $. The equilibrium distribution of ${S_{N}}$ is $\textit{NBM}({\boldsymbol{\pi }_{e}},p)$, where ${\boldsymbol{\pi }_{e}}=({f_{Ne}}(1),{f_{Ne}}(2),\dots )$ and
(33)
\[ {f_{Ne}}(j)=\frac{{\overline{F}_{N}}(j-1)}{𝔼(N)},\hspace{1em}j\ge 1.\]
Proof.
For $x\ge 0$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f_{e}}(x)& \displaystyle =& \displaystyle \frac{{\overline{F}_{{S_{N}}}}(x)}{𝔼({S_{N}})}=\frac{p{\textstyle\textstyle\sum _{i=0}^{\infty }}{\overline{F}_{N}}(i)\left(\genfrac{}{}{0.0pt}{}{x+i}{i}\right){p^{i}}{(1-p)^{x+1}}}{(1-p)𝔼(N)}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{i=0}^{\infty }}\frac{{\overline{F}_{N}}(i)}{𝔼(N)}\left(\genfrac{}{}{0.0pt}{}{x+i}{i}\right){p^{i+1}}{(1-p)^{x}}.\end{array}\]
Naming $j=i+1$,
\[ {f_{e}}(x)={\sum \limits_{j=1}^{\infty }}\frac{{\overline{F}_{N}}(j-1)}{𝔼(N)}\left(\genfrac{}{}{0.0pt}{}{x+j-1}{x}\right){p^{j}}{(1-p)^{x}}={\sum \limits_{j=1}^{\infty }}{f_{Ne}}(j)\cdot \text{nb}(j,p)(x).\]
□
It can be checked that (33) is a probability function. It is the equilibrium distribution associated to N.
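Theorem 6 can also be verified numerically. The sketch below (not from the paper; the mixing weights and truncation levels are arbitrary choices) compares the equilibrium probability function computed directly from (10) with the $\text{NBM}({\boldsymbol{\pi }_{e}},p)$ probability function built from (33):
```python
# Sketch: the equilibrium distribution of NBM(pi, p) is NBM(pi_e, p).
import numpy as np
from scipy.stats import nbinom

p = 0.7
piN = np.array([0.3, 0.5, 0.2])            # f_N(1), f_N(2), f_N(3)
ks = np.arange(1, len(piN) + 1)
x = np.arange(300)
f = sum(w * nbinom.pmf(x, k, p) for k, w in zip(ks, piN))   # NBM pmf of S_N

fe_direct = (1.0 - np.cumsum(f)) / np.sum(x * f)   # (10): f_e = Fbar / mean

EN = np.sum(ks * piN)
FbarN = np.concatenate(([1.0], 1.0 - np.cumsum(piN)))
piNe = FbarN[:len(piN)] / EN               # (33): f_Ne(j) = FbarN(j-1) / E(N)
fe_thm = sum(w * nbinom.pmf(x, j, p) for j, w in zip(ks, piNe))

print(np.max(np.abs(fe_direct - fe_thm)))  # ~ 0
```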
The following proposition states that a geometric compound of NBM random variables is again NBM. This result is essential for calculating the ruin probability when claims have a NBM distribution.
Theorem 7.
Let $M\sim \textit{Geometric}(\rho )$ and let ${N_{1}},{N_{2}},\dots $ be a sequence of independent random variables with identical distribution $\boldsymbol{\pi }=({f_{N}}(1),{f_{N}}(2),\dots )$. Let ${S_{{N_{1}}}},{S_{{N_{2}}}},\dots $ be random variables with a $\textit{NBM}(\boldsymbol{\pi },p)$ distribution. Then
(34)
\[ S:={\sum \limits_{j=1}^{M+1}}{S_{{N_{j}}}}\sim \textit{NBM}({\boldsymbol{\pi }^{\ast }},p),\]
where ${\boldsymbol{\pi }^{\ast }}=({f_{{N^{\ast }}}}(1),{f_{{N^{\ast }}}}(2),\dots )$ is the distribution of ${N^{\ast }}={\textstyle\sum _{j=1}^{M+1}}{N_{j}}$ and is given by (35) and (36).
Proof.
For $x\ge 1$ and $m\ge 1$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle ℙ(S=x\mid M+1=m)& \displaystyle =& \displaystyle ℙ\left({\sum \limits_{j=1}^{m}}{S_{{N_{j}}}}=x\right)=ℙ\left({\sum \limits_{j=1}^{m}}{\sum \limits_{i=1}^{{N_{j}}}}{X_{i\hspace{0.1667em}j}}=x\right)\\ {} & \displaystyle =& \displaystyle ℙ\left({\sum \limits_{\ell =1}^{{N_{m}}}}{X_{\ell }}=x\right),\end{array}\]
where ${N_{m}}={\textstyle\sum _{i=1}^{m}}{N_{i}}$ and ${X_{\ell }}\sim \text{Geometric}(p)$, for $\ell \ge 1$. Therefore,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle ℙ(S=x)& \displaystyle =& \displaystyle {\sum \limits_{m=1}^{\infty }}ℙ(S=x\mid M+1=m)\hspace{0.1667em}{f_{M+1}}(m)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{m=1}^{\infty }}ℙ\left({\sum \limits_{\ell =1}^{{N_{m}}}}{X_{l}}=x\right){f_{M+1}}(m)\\ {} & \displaystyle =& \displaystyle ℙ\left({\sum \limits_{\ell =1}^{{N^{\ast }}}}{X_{\ell }}=x\right),\end{array}\]
where ${N^{\ast }}={\textstyle\sum _{j=1}^{M+1}}{N_{j}}$. Using Panjer’s formula it can be shown that ${N^{\ast }}$ has distribution ${\boldsymbol{\pi }^{\ast }}$ given by (35) and (36). Since ${X_{\ell }}\sim \text{Geometric}(p)$, ${\textstyle\sum _{\ell =1}^{{N^{\ast }}}}{X_{\ell }}\sim \text{NBM}({\boldsymbol{\pi }^{\ast }},p)$. Lastly, the probability of the event $(S=0)$ can be calculated as follows:
\[ ℙ(S=0)={\sum \limits_{k=1}^{\infty }}{f_{{N^{\ast }}}}(k)\hspace{0.1667em}\text{nb}(k,p)(0)={\sum \limits_{k=1}^{\infty }}{f_{{N^{\ast }}}}(k)\hspace{0.1667em}{p^{k}}={f_{{N^{\ast }}}}(1)\hspace{0.1667em}p+{\sum \limits_{k=2}^{\infty }}{f_{{N^{\ast }}}}(k)\hspace{0.1667em}{p^{k}}.\]
Substituting ${f_{{N^{\ast }}}}(k)$ from (35) and (36), one obtains
\[ ℙ(S=0)=\rho \hspace{0.1667em}{G_{N}}(p)+(1-\rho )\hspace{0.1667em}{G_{N}}(p)\hspace{0.1667em}ℙ(S=0).\]
Therefore,
(37)
\[ ℙ(S=0)=\frac{\rho \hspace{0.1667em}{G_{N}}(p)}{1-(1-\rho )\hspace{0.1667em}{G_{N}}(p)}={G_{M+1}}({G_{N}}(p))={G_{M+1}}({G_{N}}({G_{{X_{i\hspace{0.1667em}j}}}}(0))).\]
The last term is the p.g.f. of a $\text{NBM}({\boldsymbol{\pi }^{\ast }},p)$ distribution evaluated at zero. □
From (35) and (36), it is not difficult to derive a recursive formula for ${\overline{F}_{{N^{\ast }}}}(k)$, namely,
(38)
\[ {\overline{F}_{{N^{\ast }}}}(k)={\overline{F}_{N}}(k)+(1-\rho ){\sum \limits_{i=1}^{k}}{f_{N}}(i)\hspace{0.1667em}{\overline{F}_{{N^{\ast }}}}(k-i),\hspace{1em}k\ge 1,\]
with ${\overline{F}_{{N^{\ast }}}}(0)=1$.
The following result establishes a formula to calculate the ruin probability when claims have a NBM distribution.
Theorem 8.
Consider the Gerber–Dickson model (4) with claims having a $\textit{NBM}(\boldsymbol{\pi },p)$ distribution, where $\boldsymbol{\pi }=({f_{N}}(1),{f_{N}}(2),\dots )$ and $𝔼(N)<\infty $. For $u\ge 1$ define ${Z_{u}}\sim \textit{NegBin}(u,1-p)$. Then the ruin probability can be written as
(39)
\[ \psi (u)={\sum \limits_{k=0}^{\infty }}{\overline{C}_{k}}\cdot ℙ({Z_{u}}=k)=𝔼({\overline{C}_{{Z_{u}}}}),\hspace{1em}u\ge 1,\]
where the sequence ${\left\{{\overline{C}_{k}}\right\}_{k=0}^{\infty }}$ is given by
(40)
\[ {\overline{C}_{0}}=𝔼(N)(1-p)/p,\]
(41)
\[ {\overline{C}_{k}}={\overline{C}_{0}}\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i}}+{\overline{F}_{Ne}}(k)\right],\hspace{1em}k\ge 1,\]
(42)
\[ {f_{Ne}}(i)={\overline{F}_{N}}(i-1)/𝔼(N),\hspace{1em}i\ge 1.\]
Proof.
Let ${R_{0}}={\textstyle\sum _{j=1}^{{M_{0}}}}{Y_{e,j}}$, where ${M_{0}}\sim \text{Geometric}(\rho )$ with $\rho =1-\psi (0)$, and let ${Y_{e,1}},{Y_{e,2}},\dots $ be r.v.s distributed according to the equilibrium distribution associated to $\text{NBM}(\boldsymbol{\pi },p)$ claims. By Theorem 6, we know this equilibrium distribution is $\text{NBM}({\boldsymbol{\pi }_{e}},p)$, where ${\boldsymbol{\pi }_{e}}$ is given by ${f_{Ne}}(j)={\overline{F}_{N}}(j-1)/𝔼(N)$, $j\ge 1$. By (25), for $u\ge 1$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \psi (u)& \displaystyle =& \displaystyle ℙ({R_{0}}\ge u)\\ {} & \displaystyle =& \displaystyle ℙ({R_{0}}\ge u\mid {M_{0}}>0)ℙ({M_{0}}>0)+ℙ({R_{0}}\ge u\mid {M_{0}}=0)ℙ({M_{0}}=0)\\ {} & \displaystyle =& \displaystyle (1-\rho )\hspace{0.1667em}ℙ(R\ge u),\end{array}\]
where $R\sim {\textstyle\sum _{j=1}^{M+1}}{Y_{e,j}}$ with $M\sim \text{Geometric}(\rho )$. By Theorem 7, $R\sim \text{NBM}({\boldsymbol{\pi }^{\ast }},p)$, where ${\boldsymbol{\pi }^{\ast }}$ is given by equations (35) and (36). Now define
\[ {\overline{C}_{k}}:=(1-\rho )\hspace{0.1667em}{\overline{F}_{{N^{\ast }}}}(k),\hspace{1em}k\ge 0.\]
Therefore, using (32),
\[ \psi (u)=(1-\rho )\hspace{0.1667em}ℙ(R\ge u)=(1-\rho )\hspace{0.1667em}𝔼\left({\overline{F}_{{N^{\ast }}}}({Z_{u}})\right)={\sum \limits_{k=0}^{\infty }}{\overline{C}_{k}}\hspace{0.1667em}ℙ({Z_{u}}=k).\]
Finally, we calculate the coefficients ${\overline{C}_{k}}$, where $\rho =1-\psi (0)=1-𝔼(N)(1-p)/p$. First,
\[ {\overline{C}_{0}}=(1-\rho )\hspace{0.1667em}{\overline{F}_{{N^{\ast }}}}(0)=1-\rho =𝔼(N)(1-p)/p,\]
and by (38),
\[ {\overline{C}_{k}}=(1-\rho )\hspace{0.1667em}{\overline{F}_{{N^{\ast }}}}(k)=(1-\rho )\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i}}+{\overline{F}_{Ne}}(k)\right]={\overline{C}_{0}}\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i}}+{\overline{F}_{Ne}}(k)\right].\]
□
As an example consider claims with a geometric distribution. This is a NBM distribution with $\boldsymbol{\pi }=(1,0,0,\dots )$. Equations (40)–(42) yield
\[ {\overline{C}_{k}}={\left[\frac{1-p}{p}\right]^{k+1}},\hspace{1em}k\ge 0.\]
Substituting in (39) together with $\psi (0)=(1-p)/p$, we recover the known solution $\psi (u)={\left[(1-p)/p\right]^{u+1}}$, for $u\ge 0$, mentioned earlier in (28).
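A direct implementation of (39)–(42) reproduces this closed form. The following is a sketch only (`ruin_nbm` is a hypothetical helper and the truncation level kmax is an arbitrary choice):
```python
# Sketch: ruin probability for NBM(pi, p) claims via (39)-(42).
import numpy as np
from scipy.stats import nbinom

def ruin_nbm(u, piN, p, kmax=500):
    """piN[k-1] = f_N(k); returns psi(u) from the recursion (40)-(42)."""
    piN = np.asarray(piN, dtype=float)
    EN = np.sum(np.arange(1, len(piN) + 1) * piN)
    FbarN = np.concatenate(([1.0], 1.0 - np.cumsum(piN)))  # FbarN(0), FbarN(1), ...
    fNe = np.zeros(kmax)
    m = min(len(FbarN), kmax)
    fNe[:m] = FbarN[:m] / EN               # (42): f_Ne(i) = FbarN(i-1) / E(N)
    FbarNe = 1.0 - np.cumsum(fNe)
    C = np.empty(kmax + 1)
    C[0] = EN * (1 - p) / p                # (40); this is also psi(0)
    for k in range(1, kmax + 1):
        conv = np.dot(fNe[:k], C[k-1::-1])       # sum_{i=1}^k f_Ne(i) C_{k-i}
        C[k] = C[0] * (conv + FbarNe[k-1])       # (41)
    if u == 0:
        return C[0]
    k = np.arange(kmax + 1)
    return np.dot(C, nbinom.pmf(k, u, 1 - p))    # (39) with Z_u ~ NB(u, 1-p)

p = 0.7   # geometric claims: pi = (1, 0, 0, ...)
for u in range(6):
    print(u, ruin_nbm(u, [1.0], p), ((1 - p) / p) ** (u + 1))
```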
4 Mixed Poisson distribution
This section contains the definition of a mixed Poisson distribution and some of its relations with NBM distributions.
Definition 5.
Let X and Λ be two non-negative random variables. If $X\mid (\Lambda =\lambda )\sim \text{Poisson}(\lambda )$, then we say that X has a mixed Poisson distribution with mixing distribution ${F_{\Lambda }}$. In this case, we write $X\sim \text{MP}({F_{\Lambda }})$.
Observe the distribution of $X\mid (\Lambda =\lambda )$ is required to be Poisson, but the unconditional distribution of X, although discrete, is not necessarily Poisson. In [13], necessary and sufficient conditions are given to determine whether a given distribution belongs to the MP family. This is done via its probability generating function. A large number of examples of these distributions can be found in [9] and a study of their general properties is given in [8]. In particular, it is not difficult to see that $𝔼(X)=𝔼(\Lambda )$. Indeed, conditioning on the values of Λ,
\[ 𝔼(X)={\int _{0}^{\infty }}𝔼(X\hspace{0.1667em}|\hspace{0.1667em}\Lambda =\lambda )\hspace{0.1667em}d{F_{\Lambda }}(\lambda )={\int _{0}^{\infty }}\lambda \hspace{0.1667em}d{F_{\Lambda }}(\lambda )=𝔼(\Lambda ).\]
Also, the p.g.f. of X can be written as
(44)
\[ {G_{X}}(r)={\int _{0}^{\infty }}{e^{-\lambda (1-r)}}d{F_{\Lambda }}(\lambda ),\hspace{1em}r<1.\]
In [17], a recursive formula to evaluate MP probabilities is given. The following proposition establishes a relationship between the Erlang mixture distribution (used as a mixing distribution) and the negative binomial distribution. The former will be denoted by $\text{ErlangM}(\boldsymbol{\pi },\beta )$, with a similar meaning for the parameters as in the notation $\text{NBM}(\boldsymbol{\pi },p)$ used before. In the ensuing calculations the probability function of a $\text{Poisson}(\lambda )$ distribution is denoted by $\text{poisson}(\lambda )(x)$. The Erlang distribution with parameters $k\in \mathbb{N} $ and $\beta >0$ is denoted by $\text{Erlang}(k,\beta )$; its distribution function is written as $\text{Erlang}(k,\beta )(x)$ and its density function as $\text{erlang}(k,\beta )(x)$, that is,
\[ \text{erlang}(k,\beta )(x)=\frac{{(\beta x)^{k-1}}}{(k-1)!}\hspace{0.1667em}\beta {e^{-\beta x}},\hspace{1em}x>0.\]
When $k=1$ one obtains the exponential distribution with parameter β, which is denoted by $\text{Exp}(\beta )$.
Theorem 9.
Let Λ be a random variable with the distribution $\textit{ErlangM}(\boldsymbol{\pi },\beta )$. The distributions $\textit{MP}({F_{\Lambda }})$ and $\textit{NBM}(\boldsymbol{\pi },\beta /(\beta +1))$ are the same.
Proof.
Let $X\sim \text{MP}({F_{\Lambda }})$. For $x\ge 0$,
\[\begin{aligned}{}ℙ(X=x)& \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\int _{0}^{\infty }}\text{poisson}(\lambda )(x)\cdot {\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot \text{erlang}(k,\beta )(\lambda )\hspace{0.1667em}d\lambda \\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot {\left(\frac{\beta }{\beta +1}\right)^{k}}{\left(\frac{1}{\beta +1}\right)^{x}}\frac{(k+x-1)!}{(k-1)!\hspace{0.1667em}x!}\\ {} & \hspace{2.5pt}\hspace{2.5pt}=\hspace{2.5pt}\hspace{2.5pt}{\sum \limits_{k=1}^{\infty }}{q_{k}}\cdot \text{nb}(k,\beta /(\beta +1))(x).\end{aligned}\]
□
As an example consider the case when $\Lambda \sim \text{Exp}(\beta )$ and $\boldsymbol{\pi }=(1,0,0,\dots )$. By Theorem 9, $ℙ(X=x)=\text{nb}(1,\beta /(\beta +1))(x)$ for $x\ge 0$. That is, $X\sim \text{Geometric}(p)$ with $p=\beta /(\beta +1)$.
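Theorem 9 is easy to check numerically. The sketch below (not from the paper; parameter values and the integration cutoff are arbitrary) computes the MP probability function by numerical integration and compares it with the corresponding negative binomial probabilities, here with $\Lambda \sim \text{Erlang}(2,\beta )$:
```python
# Sketch: MP(Erlang mixture) vs NBM probabilities (Theorem 9),
# with a single Erlang(k, beta) component as the mixing distribution.
from scipy import integrate
from scipy.stats import gamma, poisson, nbinom

beta, k = 3.0, 2
mix = gamma(a=k, scale=1/beta)             # Erlang(k, beta) as a Gamma law

def mp_pmf(x):
    """P(X = x) = integral of poisson(lam)(x) * erlang(k, beta)(lam) dlam."""
    f = lambda lam: poisson.pmf(x, lam) * mix.pdf(lam)
    val, _ = integrate.quad(f, 0, 50)      # upper cutoff: negligible tail
    return val

for x in range(5):
    print(x, mp_pmf(x), nbinom.pmf(x, k, beta / (beta + 1)))   # equal
```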
The next proposition will be useful to show that a MP distribution can be approximated by NBM distributions. Its proof can be found in [8].
Theorem 10.
Let ${\Lambda _{1}},{\Lambda _{2}},\dots $ be positive random variables with distribution functions ${F_{1}},{F_{2}},\dots $, and let ${X_{1}},{X_{2}},\dots $ be random variables such that ${X_{i}}\sim \textit{MP}({F_{i}})$, $i\ge 1$. Then ${X_{n}}\xrightarrow{D}X$ if and only if ${\Lambda _{n}}\xrightarrow{D}\Lambda $, where $X\sim \textit{MP}({F_{\Lambda }})$.
Finally, we establish how to approximate a MP distribution.
Theorem 11.
Let $X\sim \textit{MP}({F_{\Lambda }})$, and let ${X_{n}}$ be a random variable with the distribution $\textit{NBM}({\boldsymbol{\pi }_{n}},{p_{n}})$, for $n\ge 1$, where ${p_{n}}=n/(n+1)$, ${\boldsymbol{\pi }_{n}}=(q(1,n),q(2,n),\dots )$ and $q(k,n)={F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)$. Then ${X_{n}}\xrightarrow{D}X$.
Proof.
First, suppose that ${F_{\Lambda }}$ is continuous. Let ${\Lambda _{1}},{\Lambda _{2}},\dots $ be random variables, where ${\Lambda _{n}}$ has the distribution given by the following Erlang mixture (see [14]),
(45)
\[ {F_{n}}(x)={\sum \limits_{k=1}^{\infty }}q(k,n)\cdot \text{Erlang}(k,n)(x),\hspace{1em}x>0,\]
with $q(k,n)={F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)$. It is known [14] that ${\Lambda _{n}}\xrightarrow{D}\Lambda $. Then, by Theorem 10, ${X_{n}}\xrightarrow{D}X$, where ${X_{n}}\sim \text{MP}({F_{n}})$. This is a $\text{NBM}({\boldsymbol{\pi }_{n}},{p_{n}})$ distribution by Theorem 9, where ${\boldsymbol{\pi }_{n}}=(q(1,n),q(2,n),\dots )$ and ${p_{n}}=n/(n+1)$.
Now suppose ${F_{\Lambda }}$ is discrete. Let ${Y_{n}}\sim \text{NB}(\lambda n,n/(n+1))$, where λ and n are positive integers, and let $Z\sim \text{Poisson}(\lambda )$. The probability generating functions of these random variables satisfy
\[ \underset{n\to \infty }{\lim }{G_{{Y_{n}}}}(r)=\underset{n\to \infty }{\lim }{\left(1+\frac{1-r}{n}\right)^{-\lambda n}}=\exp \{-\lambda (1-r)\}={G_{Z}}(r).\]
Thus, ${Y_{n}}\xrightarrow{D}Z$. On the other hand, suppose that X is a mixed Poisson random variable with the probability function ${f_{X}}(x)$, for $x\ge 0$, and mixing distribution ${F_{\Lambda }}(\lambda )$, for $\lambda \ge 1$. Let ${\{{X_{n}}\}_{n=1}^{\infty }}$ be a sequence of random variables with the distribution
(47)
\[ {f_{n}}(x)={\sum \limits_{k=1}^{\infty }}q(k,n)\cdot \text{nb}\left(k,\frac{n}{n+1}\right)(x),\hspace{1em}n\ge 1,\hspace{0.1667em}x\ge 0,\]
where $q(k,n)={F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)$. Note that for any natural number n, if k is not a multiple of n, then $q(k,n)=0$. Let $k=\lambda \hspace{0.1667em}n$ with $\lambda \ge 1$. Then $q(k,n)={F_{\Lambda }}(\lambda )-{F_{\Lambda }}(\lambda -1/n)={f_{\Lambda }}(\lambda )$. Therefore, for $x\ge 0$,
\[ {f_{n}}(x)={\sum \limits_{\lambda =1}^{\infty }}{f_{\Lambda }}(\lambda )\cdot \text{nb}(\lambda n,n/(n+1))(x).\]
Therefore, letting $n\to \infty $ and using ${Y_{n}}\xrightarrow{D}Z$,
\[ \underset{n\to \infty }{\lim }{f_{n}}(x)={\sum \limits_{\lambda =1}^{\infty }}{f_{\Lambda }}(\lambda )\cdot \text{poisson}(\lambda )(x)={f_{X}}(x),\hspace{1em}x\ge 0,\]
that is, ${X_{n}}\xrightarrow{D}X$.
□
For ${X_{n}}\sim \text{NBM}({\boldsymbol{\pi }_{n}},{p_{n}})$ as in the previous statement, it is easy to see that ${f_{{X_{n}}}}(x)\to {f_{X}}(x)$ for every $x\ge 0$. As a consequence of Theorem 11, for $X\sim \text{MP}({F_{\Lambda }})$, its probability function can be approximated by NBM distributions with suitable parameters. That is, for sufficiently large values of n,
\[ {f_{X}}(x)\approx {\sum \limits_{k=1}^{\infty }}q(k,n)\cdot \text{nb}(k,{p_{n}})(x),\hspace{1em}x\ge 0,\]
where $q(k,n)={F_{\Lambda }}\left(k/n\right)-{F_{\Lambda }}\left((k-1)/n\right)$ and ${p_{n}}=n/(n+1)$.
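In practice the approximating sum must be truncated. The following sketch (not from the paper; n, the truncation level and the Lognormal choice, which reappears in Section 6, are illustrative) compares the truncated NBM approximation with the MP probability function obtained by numerical integration:
```python
# Sketch: NBM approximation of an MP pmf, Lambda ~ Lognormal(-1, 1).
import numpy as np
from scipy import integrate
from scipy.stats import lognorm, poisson, nbinom

mix = lognorm(s=1.0, scale=np.exp(-1.0))   # Lognormal(-1, 1)
n, kmax = 500, 10_000                      # discretization and truncation

def mp_pmf(x):
    f = lambda lam: poisson.pmf(x, lam) * mix.pdf(lam)
    val, _ = integrate.quad(f, 0, np.inf)
    return val

k = np.arange(1, kmax + 1)
q = mix.cdf(k / n) - mix.cdf((k - 1) / n)  # q(k, n)
for x in range(5):
    approx = np.dot(q, nbinom.pmf(x, k, n / (n + 1)))
    print(x, approx, mp_pmf(x))            # close for large n
```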
5 Ruin probability approximations
Here we consider the case when claims in the Gerber–Dickson risk model (4) have a distribution function $F\sim \text{MP}({F_{\Lambda }})$. Let ${\psi _{n}}(u)$ denote the ruin probability when claims have the distribution ${F_{n}}(x)$ defined in Theorem 11. If n is large enough, ${F_{n}}(x)$ is close to $F(x)$, and it is expected that ${\psi _{n}}(u)$ will be close to the unknown ruin probability $\psi (u)$. This procedure is formalized in the following theorem.
Theorem 12.
If claims in the Gerber–Dickson model (4) have a $\textit{MP}({F_{\Lambda }})$ distribution, then
(49)
\[ \psi (u)=\underset{n\to \infty }{\lim }{\psi _{n}}(u),\]
where
(50)
\[ {\psi _{n}}(u)={\sum \limits_{k=0}^{\infty }}{\overline{C}_{k,n}}\hspace{0.1667em}ℙ(Z=k)=𝔼\left({\overline{C}_{Z,n}}\right),\]
with $Z\sim \textit{NB}(u,1/(1+n))$. The sequence ${\left\{{\overline{C}_{k,n}}\right\}_{k=0}^{\infty }}$ is determined by
(51)
\[ {\overline{C}_{0,n}}={\sum \limits_{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)/n,\]
(52)
\[ {\overline{C}_{k,n}}={\overline{C}_{0,n}}\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i,n}}+{\overline{F}_{Ne}}(k)\right],\hspace{1em}k\ge 1,\]
(53)
\[ {f_{Ne}}(i)=\frac{{\overline{F}_{\Lambda }}((i-1)/n)}{{\textstyle\sum _{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)},\hspace{1em}i\ge 1.\]
Proof.
Suppose $X\sim \text{MP}({F_{\Lambda }})$ with $𝔼(X)<1$ and the equilibrium probability function ${f_{e}}(x)$. Let ${X_{1}},{X_{2}},\dots $ be a sequence of $\text{NBM}({\boldsymbol{\pi }_{n}},{p_{n}})$ r.v.s approximating X, where ${\boldsymbol{\pi }_{n}}=(q(1,n),q(2,n),\dots )$, with $q(k,n)={F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)$ and ${p_{n}}=n/(n+1)$. That is,
(54)
\[ {f_{{X_{n}}}}(x)={\sum \limits_{k=1}^{\infty }}q(k,n)\cdot \text{nb}(k,n/(n+1))(x),\hspace{1em}x\ge 0.\]
By (30),
\[ 𝔼({X_{n}})={\sum \limits_{k=1}^{\infty }}k\hspace{0.1667em}q(k,n)\cdot \frac{1/(n+1)}{n/(n+1)}={\sum \limits_{k=1}^{\infty }}(k/n)\cdot [\hspace{0.1667em}{F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)\hspace{0.1667em}].\]
Taking the limit,
(55)
\[ \underset{n\to \infty }{\lim }𝔼({X_{n}})={\int _{0}^{\infty }}x\hspace{0.1667em}d{F_{\Lambda }}(x)=𝔼(\Lambda )=𝔼(X).\]
Now, by Theorem 11, since ${X_{n}}\xrightarrow{D}X$, we have
\[ \underset{n\to \infty }{\lim }{\overline{F}_{{X_{n}}}}(x)={\overline{F}_{X}}(x),\hspace{1em}x\ge 0.\]
Combining the above with (55),
\[ \underset{n\to \infty }{\lim }\frac{{\overline{F}_{{X_{n}}}}(x)}{𝔼({X_{n}})}=\frac{{\overline{F}_{X}}(x)}{𝔼(X)}.\]
This means the equilibrium probability function ${f_{e,n}}(x)$ associated to ${f_{{X_{n}}}}(x)$ satisfies
(56)
\[ \underset{n\to \infty }{\lim }{f_{e,n}}(x)={f_{e}}(x),\hspace{1em}x\ge 0.\]
Using probability generating functions and (56), it is also easy to show that for any $k\ge 1$,
(57)
\[ \underset{n\to \infty }{\lim }{F_{e,n}^{\ast k}}(x)={F_{e}^{\ast k}}(x),\hspace{1em}x\ge 0.\]
Now, let ${X_{n1}},{X_{n2}},\dots $ be i.i.d. random variables with the probability function ${f_{e,n}}(x)$ and set ${S_{k,n}}:={\textstyle\sum _{i=1}^{k}}{X_{ni}}$. By the Pollaczek–Khinchine formula, for $u\ge 0$,
\[ {\psi _{n}}(u)={\sum \limits_{k=1}^{\infty }}ℙ({S_{k,n}}\ge u)\hspace{0.1667em}(1-𝔼({X_{n}})){𝔼^{k}}({X_{n}})={\sum \limits_{k=1}^{\infty }}(1-{F_{e,n}^{\ast k}}(u-1))\hspace{0.1667em}(1-𝔼({X_{n}})){𝔼^{k}}({X_{n}}).\]
Taking the limit as $n\to \infty $, and using (55) and (57),
\[ \underset{n\to \infty }{\lim }{\psi _{n}}(u)={\sum \limits_{k=1}^{\infty }}(1-{F_{e}^{\ast k}}(u-1))\hspace{0.1667em}(1-𝔼(X)){𝔼^{k}}(X)=\psi (u),\hspace{1em}u\ge 1.\]
On the other hand, since claims ${X_{n}}$ have a $\text{NBM}({\boldsymbol{\pi }_{n}},{p_{n}})$ distribution, with ${\boldsymbol{\pi }_{n}}=(q(1,n),q(2,n),\dots )$, $\hspace{1em}q(k,n)={F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n)$ and ${p_{n}}=n/(n+1)$, by Theorem 8,
\[ {\psi _{n}}(u)={\sum \limits_{k=0}^{\infty }}{\overline{C}_{k,n}}\cdot ℙ(Z=k)=𝔼({\overline{C}_{Z,n}}),\hspace{1em}u\ge 1,\]
where $Z\sim \text{NB}(u,1/(n+1))$ and the sequence ${\left\{{\overline{C}_{k,n}}\right\}_{k=0}^{\infty }}$ is given by
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\overline{C}_{0,n}}& \displaystyle =& \displaystyle 𝔼({N_{n}})/n,\\ {} \displaystyle {\overline{C}_{k,n}}& \displaystyle =& \displaystyle {\overline{C}_{0,n}}\hspace{0.1667em}\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i,n}}+{\overline{F}_{Ne}}(k)\right],\hspace{1em}k\ge 1,\\ {} \displaystyle {f_{Ne}}(i)& \displaystyle =& \displaystyle \frac{{\overline{F}_{{N_{n}}}}(i-1)}{𝔼({N_{n}})},\hspace{1em}i\ge 1,\end{array}\]
where ${N_{n}}$ is the r.v. related to probabilities $q(k,n)$. Thus, it only remains to calculate the form of $𝔼({N_{n}})$ and ${\overline{F}_{{N_{n}}}}(i-1)$.
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle 𝔼({N_{n}})& \displaystyle =& \displaystyle {\sum \limits_{j=1}^{\infty }}ℙ({N_{n}}>j-1)={\sum \limits_{j=1}^{\infty }}{\sum \limits_{i=j}^{\infty }}q(i,n)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=1}^{\infty }}{\sum \limits_{i=j}^{\infty }}({F_{\Lambda }}(i/n)-{F_{\Lambda }}((i-1)/n))={\sum \limits_{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n).\end{array}\]
Thus,
\[ {\overline{C}_{0,n}}=𝔼({N_{n}})/n={\sum \limits_{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)/n.\]
Also,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\overline{F}_{{N_{n}}}}(i-1)& \displaystyle =& \displaystyle ℙ({N_{n}}>i-1)={\sum \limits_{k=i}^{\infty }}q(k,n)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{k=i}^{\infty }}({F_{\Lambda }}(k/n)-{F_{\Lambda }}((k-1)/n))={\overline{F}_{\Lambda }}((i-1)/n).\end{array}\]
Then,
\[ {f_{Ne}}(i)=\frac{{\overline{F}_{{N_{n}}}}(i-1)}{𝔼({N_{n}})}=\frac{{\overline{F}_{\Lambda }}((i-1)/n)}{{\textstyle\sum _{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)},\hspace{1em}i\ge 1.\]
□
5.1 First approximation method
Our first proposed approximation of $\psi (u)$ is presented as a corollary of Theorem 12. Note that ${\overline{C}_{0,n}}={\textstyle\sum _{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)/n$ is an upper Riemann sum for the integral of ${\overline{F}_{\Lambda }}$. Thus, ${\overline{C}_{0,n}}\to 𝔼(\Lambda )$ as $n\to \infty $. For the approximation methods we propose, we will take ${\overline{C}_{0,n}}=𝔼(\Lambda )$ for every value of n.
Corollary 1.
Suppose a Gerber–Dickson model (4) with $\textit{MP}({F_{\Lambda }})$ claims is given. For large n,
(58)
\[ \psi (u)\approx {\sum \limits_{k=0}^{\infty }}{\overline{C}_{k,n}}\cdot \textit{nb}(u,1/(1+n))(k),\]
where
(59)
\[ {\overline{C}_{0,n}}=𝔼(\Lambda ),\]
(60)
\[ {\overline{C}_{k,n}}={\overline{C}_{0,n}}\left[{\sum \limits_{i=1}^{k}}{f_{Ne}}(i)\hspace{0.1667em}{\overline{C}_{k-i,n}}+{\overline{F}_{Ne}}(k)\right],\hspace{1em}k\ge 1,\]
(61)
\[ {f_{Ne}}(i)=\frac{{\overline{F}_{\Lambda }}((i-1)/n)}{{\textstyle\sum _{j=0}^{\infty }}{\overline{F}_{\Lambda }}(j/n)},\hspace{1em}i\ge 1.\]
For example, suppose claims have a $\text{MP}({F_{\Lambda }})$ distribution, where $\Lambda \sim \text{Exp}(\beta )$. In this case, claims have a $\text{Geo}(\beta /(1+\beta ))$ distribution with mean value $1/\beta $. By (28),
\[ \psi (u)={\left(\frac{1/(1+\beta )}{\beta /(1+\beta )}\right)^{u+1}}=\frac{1}{{\beta ^{u+1}}},\hspace{1em}u\ge 0.\]
Observe that the net profit condition implies the restriction $1/\beta <1$. We will check that our approximation (58) converges to this solution as $n\to \infty $. First, the following is easily calculated: ${f_{Ne}}(i)={e^{-i\beta /n}}({e^{\beta /n}}-1)$ and ${\overline{F}_{Ne}}(k)={e^{-\beta k/n}}$. After some more calculations, one can obtain
(62)
\[ {\overline{C}_{k,n}}=\frac{1}{\beta }{\left[\frac{1}{\beta }(1-{e^{-\beta /n}})+{e^{-\beta /n}}\right]^{k}}.\]
Substituting (62) into (58) and simplifying, the approximation is indeed seen to converge to $1/{\beta ^{u+1}}$ as $n\to \infty $. In the examples shown in the next section, we have numerically found that the infinite sum (58) converges fast enough that a partial sum can be used without much loss of accuracy.
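The sketch below (not from the paper; `psi_approx` is a hypothetical helper, and n and the truncation levels are arbitrary choices) implements (58)–(61) with ${\overline{C}_{0,n}}=𝔼(\Lambda )$ and compares the output with the exact value $1/{\beta ^{u+1}}$ for $\Lambda \sim \text{Exp}(\beta )$:
```python
# Sketch: first approximation method (58)-(61) with C_{0,n} = E(Lambda).
import numpy as np
from scipy.stats import nbinom

def psi_approx(u, FbarLambda, ELambda, n=100, kmax=2000):
    j = np.arange(kmax + 1)
    FbarL = FbarLambda(j / n)
    fNe = FbarL[:kmax] / np.sum(FbarL)     # (61), truncated normalization
    FbarNe = 1.0 - np.cumsum(fNe)
    C = np.empty(kmax + 1)
    C[0] = ELambda                         # (59)
    for k in range(1, kmax + 1):
        conv = np.dot(fNe[:k], C[k-1::-1])
        C[k] = C[0] * (conv + FbarNe[k-1]) # (60)
    if u == 0:
        return C[0]
    k = np.arange(kmax + 1)
    return np.dot(C, nbinom.pmf(k, u, 1 / (1 + n)))   # (58)

beta = 1.5                                 # net profit condition: 1/beta < 1
for u in range(6):
    exact = beta ** -(u + 1)
    print(u, psi_approx(u, lambda t: np.exp(-beta * t), 1 / beta), exact)
```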
5.2 Second approximation method
Our second method to approximate the ruin probability is a direct application of the Law of Large Numbers [6].
Corollary 2.
Suppose a Gerber–Dickson model (4) with $\textit{MP}({F_{\Lambda }})$ claims is given. For large values of n and m,
(63)
\[ \psi (u)\approx \frac{1}{m}{\sum \limits_{i=1}^{m}}{\overline{C}_{{z_{i}},n}},\]
where ${z_{1}},\dots ,{z_{m}}$ is a random sample from the $\text{NB}(u,1/(1+n))$ distribution and the coefficients ${\overline{C}_{k,n}}$ are given by (59)–(61).
The expression $\psi (u)=ℙ\left({\textstyle\sum _{i=1}^{K}}{Y_{i}^{\ast }}\ge u\right)$, given in (25), produces yet another approximation method, again by the Law of Large Numbers. This method is well known in the classical Cramér–Lundberg model [1, page 465], and it is easily applied in our discrete model. Indeed, for large values of m,
(64)
\[ \psi (u)\approx \frac{1}{m}{\sum \limits_{i=1}^{m}}{w_{i}},\]
where ${w_{1}},{w_{2}},\dots ,{w_{m}}$ is a random sample generated from a random variable W such that $\psi (u)=𝔼(W)$. The random variable W is defined by
\[ W=\left\{\begin{array}{l@{\hskip10.0pt}l}1& \text{if}\hspace{2.5pt}\hspace{2.5pt}{\textstyle\textstyle\sum _{i=1}^{K}}{Y_{i}^{\ast }}\ge u,\\ {} 0& \text{if}\hspace{2.5pt}\hspace{2.5pt}{\textstyle\textstyle\sum _{i=1}^{K}}{Y_{i}^{\ast }}<u.\end{array}\right.\]
The approximation given by (64) will be called the Pollaczek–Khinchine method in this work. It will be used to contrast the new methods in the numerical examples of Section 6.
6 Numerical examples
In this section we apply the proposed approximation methods in the cases when the mixing distribution is Erlang, Pareto and Lognormal. The results obtained show that the approximated ruin probabilities are extremely close to the exact probabilities. The latter were calculated recursively using formulas (5) and (6). In all cases the new approximations were calculated for $u=0,1,2,\dots ,10$ using the software R. For the first proposed approximation method, $n=500$ was used; for the second method, $m=1000$, i.e. 1000 values were generated from a $\text{NB}(u,1/(n+1))$ distribution, again with $n=500$. The sum (58) was truncated at the integer ${k^{\ast }}$, defined as the largest integer where the probability function $\text{nb}(u,1/(1+n))(x)$ is still above the arbitrarily chosen small value 0.00001,
\[ {k^{\ast }}=\max \hspace{0.1667em}\{x\ge 0:\text{nb}(u,1/(1+n))(x)>0.00001\}.\]
Hence, for $x\ge {k^{\ast }}+1$ the probability function is less than or equal to 0.00001 and we neglect those terms in the sum (58).
In order to evaluate the accuracy of the simulation approach (63), it was compared with the Pollaczek–Khinchine method (64). The latter was implemented using 10,000 simulations for each ruin probability approximation.
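For reference, the following is a minimal sketch of the Pollaczek–Khinchine simulation (64) (not from the paper; the geometric test case is chosen so that the exact answer (28) is available, and for MP claims the probability functions f and ${f_{e}}$ would first be computed numerically). It draws $K\sim \text{Geometric}(1-\mu )$ as in Theorem 2 and severities from the equilibrium distribution as in Theorem 1:
```python
# Sketch: Pollaczek-Khinchine simulation (64) with Geometric(p) claims.
import numpy as np

rng = np.random.default_rng(seed=2)
p, u, m = 0.6, 3, 100_000
x = np.arange(400)
f = p * (1 - p) ** x                     # claim pmf on {0, 1, ...}
mu = np.sum(x * f)                       # claim mean (1-p)/p, here 2/3
fe = (1.0 - np.cumsum(f)) / mu           # equilibrium pmf (10)

K = rng.geometric(1 - mu, size=m) - 1    # K ~ Geometric(1-mu) on {0, 1, ...}
Y = rng.choice(x, size=(m, K.max()), p=fe / fe.sum())   # severities ~ f_e
S = np.where(np.arange(K.max()) < K[:, None], Y, 0).sum(axis=1)
print((S >= u).mean(), ((1 - p) / p) ** (u + 1))        # estimate vs (28)
```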
Erlang distribution
In this example we assume claims have a $\text{MP}({F_{\Lambda }})$ distribution with $\Lambda \sim \text{Erlang}(2,3)$. In this case, $𝔼(\Lambda )=2/3$. Table 1 below shows the results of the approximations. Columns E, ${N_{1}}$, ${N_{2}}$ and PK show, for each value of u, the exact value of $\psi (u)$, the approximation by the first method, the approximation by the second method, and the approximation by the Pollaczek–Khinchine method, respectively. Relative errors $(\hat{\psi }-\psi )/\psi $ are also shown. The left-hand plot of Figure 2 shows the values of u against E, ${N_{1}}$, ${N_{2}}$ and PK; the right-hand plot shows the values of u against the relative errors.
Table 1.
Ruin probability approximation for $\text{MP}({F_{\Lambda }})$ claims with $\Lambda \sim \text{Erlang}(2,3)$
u | E | ${N_{1}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | ${N_{2}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | $PK$ | $\frac{\hat{\psi }-\psi }{\psi }$ |
0 | 0.66667 | 0.66667 | 0.00000 | 0.66667 | 0.00000 | 0.66667 | 0.00000 |
1 | 0.40741 | 0.40775 | 0.00084 | 0.40326 | −0.01019 | 0.4089 | 0.00366 |
2 | 0.24280 | 0.24328 | 0.00196 | 0.24551 | 0.01115 | 0.2397 | −0.01276 |
3 | 0.14358 | 0.14401 | 0.00306 | 0.14317 | −0.00282 | 0.1456 | 0.01410 |
4 | 0.08469 | 0.08504 | 0.00414 | 0.08647 | 0.02096 | 0.084 | −0.00818 |
5 | 0.04992 | 0.05018 | 0.00521 | 0.05063 | 0.01419 | 0.0512 | 0.02566 |
6 | 0.02942 | 0.02960 | 0.00628 | 0.02989 | 0.01607 | 0.0311 | 0.05726 |
7 | 0.01733 | 0.01746 | 0.00735 | 0.01732 | −0.00079 | 0.0172 | −0.00763 |
8 | 0.01021 | 0.01030 | 0.00842 | 0.01009 | −0.01208 | 0.0105 | 0.02818 |
9 | 0.00602 | 0.00607 | 0.00949 | 0.00586 | −0.02682 | 0.0061 | 0.01379 |
10 | 0.00355 | 0.00358 | 0.01056 | 0.00335 | −0.05468 | 0.0031 | −0.12559 |
Pareto distribution
In this example claims have a $\text{MP}({F_{\Lambda }})$ distribution with $\Lambda \sim \text{Pareto}(3,1)$. For this distribution, $𝔼(\Lambda )=1/2$. Table 2 shows the approximation results in the same terms as in Table 1. Figure 3 shows the results graphically.
Table 2.
Ruin probability approximation for $\text{MP}({F_{\Lambda }})$ claims with $\Lambda \sim \text{Pareto}(3,1)$
u | E | ${N_{1}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | ${N_{2}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | $PK$ | $\frac{\hat{\psi }-\psi }{\psi }$ |
0 | 0.50000 | 0.50000 | 0.00000 | 0.50000 | 0.00000 | 0.50000 | 0.00000 |
1 | 0.28757 | 0.28751 | −0.00023 | 0.28484 | −0.00950 | 0.29170 | 0.01435 |
2 | 0.18050 | 0.18046 | −0.00022 | 0.18216 | 0.00921 | 0.17690 | −0.01995 |
3 | 0.12014 | 0.12010 | −0.00034 | 0.11960 | −0.00448 | 0.12040 | 0.00215 |
4 | 0.08348 | 0.08344 | −0.00053 | 0.08445 | 0.01159 | 0.08170 | −0.02135 |
5 | 0.06001 | 0.05996 | −0.00076 | 0.06034 | 0.00547 | 0.06080 | 0.01317 |
6 | 0.04437 | 0.04432 | −0.00100 | 0.04450 | 0.00301 | 0.04600 | 0.03681 |
7 | 0.03360 | 0.03356 | −0.00127 | 0.03343 | −0.00528 | 0.03270 | −0.02686 |
8 | 0.02599 | 0.02595 | −0.00154 | 0.02577 | −0.00865 | 0.02280 | −0.12288 |
9 | 0.02049 | 0.02045 | −0.00181 | 0.02019 | −0.01467 | 0.02020 | −0.01419 |
10 | 0.01643 | 0.01639 | −0.00209 | 0.01612 | −0.01858 | 0.01750 | 0.06527 |
Lognormal distribution
In this example we suppose that claims have a $\text{MP}({F_{\Lambda }})$ distribution with $\Lambda \sim \text{Lognormal}(-1,1)$. For this distribution, $𝔼(\Lambda )={e^{-1/2}}$. Table 3 shows the approximation results and Figure 4 shows the related plots.
Fig. 4.
Approximation when claims are $\text{MP}(\Lambda )$ and $\Lambda \sim \text{Lognormal}(-1,1)$
Table 3.
Approximations for $\text{MP}({F_{\Lambda }})$ claims with $\Lambda \sim \text{Lognormal}(-1,1)$
u | E | ${N_{1}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | ${N_{2}}$ | $\frac{\hat{\psi }-\psi }{\psi }$ | $PK$ | $\frac{\hat{\psi }-\psi }{\psi }$ |
0 | 0.60653 | 0.60653 | 0.00000 | 0.60653 | 0.00000 | 0.60653 | 0.00000 |
1 | 0.38126 | 0.38124 | −0.00005 | 0.37816 | −0.00813 | 0.37960 | −0.00436 |
2 | 0.25231 | 0.25238 | 0.00025 | 0.25426 | 0.00772 | 0.25340 | 0.00431 |
3 | 0.17287 | 0.17294 | 0.00042 | 0.17198 | −0.00515 | 0.17520 | 0.01349 |
4 | 0.12128 | 0.12135 | 0.00053 | 0.12282 | 0.01264 | 0.12010 | −0.00976 |
5 | 0.08661 | 0.08666 | 0.00060 | 0.08715 | 0.00624 | 0.08960 | 0.03456 |
6 | 0.06272 | 0.06276 | 0.00064 | 0.06297 | 0.00397 | 0.06280 | 0.00124 |
7 | 0.04597 | 0.04600 | 0.00067 | 0.04574 | −0.00487 | 0.04390 | −0.04498 |
8 | 0.03404 | 0.03406 | 0.00067 | 0.03373 | −0.00914 | 0.03420 | 0.00466 |
9 | 0.02545 | 0.02546 | 0.00066 | 0.02502 | −0.01686 | 0.02420 | −0.04902 |
10 | 0.01919 | 0.01920 | 0.00063 | 0.01874 | −0.02346 | 0.02030 | 0.05791 |
As can be seen from the tables and plots, both approximation methods yield ruin probabilities close to the exact values in the examples considered.
7 Conclusions
We have first provided a general formula for the ultimate ruin probability in the Gerber–Dickson risk model (4) when claims follow a negative binomial mixture (NBM) distribution. The ruin probability is expressed as the expected value of a deterministic sequence $\{{\overline{C}_{k}}\}$ evaluated at a random index k with a negative binomial distribution. The sequence is not given explicitly but can be calculated recursively. We then extended the formula to claims with a mixed Poisson (MP) distribution. The extension was possible because MP distributions can be approximated by NBM distributions.
The formulas obtained yielded two immediate approximation methods. These were tested using particular examples. The numerical results showed high accuracy. The second approximation method showed more stability than the PK method under changes of the initial capital u.
The general results obtained in this work lead to some other questions that we have set aside for further work: error bounds for our estimates, a detailed study of other particular cases of NBM and MP distributions, properties and bounds for the sequence $\{{\overline{C}_{k}}\}$, and the possible extension of the ruin probability formula to more general claim distributions.