1 Introduction
Many probabilistic models estimating the likelihoods of certain events are based on the sequence of sums $\{{\textstyle\sum _{i=1}^{n}}{X_{i}},\hspace{0.1667em}n\in \mathbb{N}\}$, where ${X_{i}}$ are some random variables. Such a sequence is called a random walk. Random walks are usually visualized as branching trees or certain paths in the plane or space; they occur in fields ranging from pure mathematics to many applied sciences. For instance, one may refer to the Case–Shiller home pricing index [9] or, even more generally, to the random walk hypothesis [30]. From a pure mathematics standpoint, one may mention random matrix theory, see, for example, [3, 13, 31] and other related works. Perhaps the closest context where the need to know the distribution of ${\sup _{n\geqslant 1}}{\textstyle\sum _{i=1}^{n}}({X_{i}}-\kappa )$ arises is insurance mathematics. In ruin theory one may assume that the insurer’s wealth ${W_{u}}(n)$ at discrete time moments $n\in \mathbb{N}$ consists of incoming cash premiums and outgoing payoffs (claims), and that ${W_{u}}(n)$ admits the representation:
where $u\in {\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$ is interpreted as the initial surplus ${W_{u}}(0):=u$, $\kappa \in \mathbb{N}$ denotes the arrival rate of premiums paid by customers, and the subtracted sum of random variables represents claims. Here we assume that the random variables ${X_{i}}$ are independent, nonnegative, and integer-valued but not necessarily identically distributed. More precisely, we assume that ${X_{i}}\stackrel{d}{=}{X_{i+N}}$ for all $i\in \mathbb{N}$ and some fixed $N\in \mathbb{N}$. The model (1) can be visualized as a “race” between the deterministic line $u+\kappa n$ and the sum of random variables ${\textstyle\sum _{i=1}^{n}}{X_{i}}$ as n varies, see Figure 1.
(1)
\[ {W_{u}}(n)=u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}=u-{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa ),\]Fig. 1.
Lines $1+n$, $1+2n$, $1+3n$, and random walk ${\textstyle\sum _{i=1}^{n}}{X_{i}}{1_{\{i\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}2=1\}}}+{\textstyle\sum _{i=1}^{n}}{Y_{i}}{1_{\{i\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}2=0\}}}$, where $\mathbb{P}({X_{i}}=0)=0.3$, $\mathbb{P}({X_{i}}=1)=0.1$, $\mathbb{P}({X_{i}}=5)=0.6$ and $\mathbb{P}({Y_{i}}=0)=0.8$, $\mathbb{P}({Y_{i}}=1)=0.1$, $\mathbb{P}({Y_{i}}=10)=0.1$, and n varies from 1 to 20
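The “race” in Figure 1 is straightforward to simulate. The following minimal Python sketch (the two season distributions are those from the caption; the horizon $T=20$, the number of paths, and the common-random-numbers trick are our own choices, not the paper’s) estimates the finite time survival probability for $u=1$ and $\kappa \in \{1,2,3\}$. Note that $\mathbb{E}{S_{2}}=3.1+1.1=4.2$, so among these only $\kappa =3$ satisfies the net profit condition $2\kappa \gt \mathbb{E}{S_{2}}$.

```python
import random

# Two-seasonal claim distributions from Figure 1.
X_VALS, X_PROBS = [0, 1, 5], [0.3, 0.1, 0.6]   # odd seasons
Y_VALS, Y_PROBS = [0, 1, 10], [0.8, 0.1, 0.1]  # even seasons

def survival_estimate(u, kappas, T=20, paths=20000, seed=1):
    """Monte Carlo estimate of P(W_u(n) > 0 for n = 1..T) for several
    premium rates kappa at once. The same claims are reused for every
    kappa (common random numbers), so the estimates are monotone in
    kappa by construction."""
    rng = random.Random(seed)
    survived = {k: 0 for k in kappas}
    for _ in range(paths):
        claims = [
            rng.choices(X_VALS, X_PROBS)[0] if n % 2 == 1
            else rng.choices(Y_VALS, Y_PROBS)[0]
            for n in range(1, T + 1)
        ]
        for k in kappas:
            w, ok = u, True
            for c in claims:
                w += k - c
                if w <= 0:          # ruin: surplus hits zero or below
                    ok = False
                    break
            if ok:
                survived[k] += 1
    return {k: survived[k] / paths for k in kappas}

est = survival_estimate(u=1, kappas=[1, 2, 3])
```

Since survival with a smaller premium rate implies survival with a larger one on the same claim path, the three estimates are ordered by construction, mirroring the three lines of the figure.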
We say that the fixed natural number N is the number of periods or seasons and call the model (1) the N-seasonal discrete-time risk model. The model (1) with $N=1$ is a discrete version of the continuous-time Cramér–Lundberg model (also known as the classical risk process)
(2)
\[ {W_{x}}(t)=x+ct-{\sum \limits_{i=1}^{{P_{t}}}}{\xi _{i}},\hspace{1em}t\geqslant 0,\]
where, analogously as in model (1), $x\geqslant 0$ represents the initial surplus, $c\gt 0$ the premium rate, ${\xi _{i}}$ are independent and identically distributed nonnegative random variables, and ${P_{t}}$ is a Poisson counting process with intensity $\lambda \gt 0$. The model (2) can be further extended, cf. E. Sparre Andersen’s model [2] or the models considered in [6, 7].
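A path of the classical model (2) is equally easy to sample; the short sketch below evaluates the surplus at the claim arrival epochs, where ruin can only occur. The parameter values ($x=10$, $c=1.5$, $\lambda =1$, Exp(1) claims) are arbitrary illustrative choices, not taken from the paper.

```python
import random

def classical_path(x=10.0, c=1.5, lam=1.0, horizon=100.0, seed=7):
    """Sample the Cramer-Lundberg surplus W(t) = x + c*t - (claims up to t)
    at the claim epochs of a Poisson process with intensity lam."""
    rng = random.Random(seed)
    t, claims_total, path = 0.0, 0.0, []
    while True:
        t += rng.expovariate(lam)             # exponential inter-arrival time
        if t > horizon:
            break
        claims_total += rng.expovariate(1.0)  # Exp(1) claim size
        path.append((t, x + c * t - claims_total))  # surplus just after claim
    return path

path = classical_path()
ruined = any(w <= 0 for _, w in path)
```

With $c\gt \lambda \mathbb{E}{\xi _{1}}$ (here $1.5\gt 1$) the continuous analogue of the net profit condition holds, so most sampled paths avoid ruin.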
Being curious whether initial surplus and collected premiums can always cover incurred claims, for the N-seasonal discrete-time risk model (1) we define the finite time survival probability
where T is some fixed natural number, and the ultimate time survival probability
The computation of the finite time survival (or ruin, $\psi (u,\hspace{0.1667em}T):=1-\varphi (u,\hspace{0.1667em}T)$) probability (3) is far easier than the computation of the ultimate time survival (or ruin, $\psi (u):=1-\varphi (u)$) probability (4); see Theorem 4 for the precise expressions of $\varphi (u,\hspace{0.1667em}T)$. Difficulties in computing $\varphi (u)$ arise because $\varphi (\kappa N),\hspace{0.1667em}\varphi (\kappa N+1),\dots $ are expressible via $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$, which are initially unknown; see the formula (5) in the next section. Therefore, the essence of the problem we solve is nothing but finding these initial values $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$. In this work, we demonstrate that the required values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$ satisfy a certain system of linear equations (see the system (16)) whose coefficients are based on certain polynomials and the roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$, where $s\in \mathbb{C}$ and ${G_{{S_{N}}}}(s)$ is the probability-generating function of ${S_{N}}={X_{1}}+\cdots +{X_{N}}$.
(3)
\[ \varphi (u,\hspace{0.1667em}T):=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{T}}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)=\mathbb{P}\Bigg(\underset{1\leqslant n\leqslant T}{\sup }{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\lt u\Bigg),\](4)
\[ \varphi (u):=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{\infty }}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)=\mathbb{P}\Bigg(\underset{n\geqslant 1}{\sup }{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\lt u\Bigg).\]Let us briefly overview the history and some fundamental works on the subject and mention a few recent papers. The foundation of ruin theory dates back to 1903, when the Swedish actuary Filip Lundberg published his work [29], which was republished in 1930 by the Swedish mathematician Harald Cramér, while the random walk formulation, as such, was first introduced by the English mathematician and biostatistician Karl Pearson [33]. The Cramér–Lundberg risk model (2) was extended by the Danish mathematician Erik Sparre Andersen, who allowed claim inter-arrival times to have an arbitrary distribution [2]. The next famous works were published in the late eighties by Hans U. Gerber and Elias S. W. Shiu, see [16, 15, 38, 39]. Likewise, in the second half of the twentieth century, there were many sound studies of the random walk by such authors as William Feller, Frank L. Spitzer, David G. Kendall, Félix Pollaczek and others, see [14, 40, 41, 23, 35] and related works. Moving along the timeline to recent decades, one may reference the notable survey [27]. The various assumptions on the random walk’s structure in models (1) or (2) (cf. [20, 37]), the variety of numerical characteristics of renewal risk models other than those defined in (3) and (4) (cf. [28, 24]), and the research methods (cf. [42]) are the reasons the recent literature is so voluminous. In addition to the references already mentioned, see also the recent works [11, 36, 4, 12, 10, 34, 26, 25, 32, 8].
2 Recursive nature of ultimate time survival probability, basic notations, and the net profit condition
This section starts by deriving the basic recurrence relation for the ultimate time survival probability. The definition (4), the law of total probability, and rearrangements imply
Substituting $u=0$ into the recursive formula (5), we notice that in order to find $\varphi (\kappa N)$ we need to know the values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$. Moreover, if we know the values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$, the same recurrence (5) allows us to compute $\varphi (u)$ for any $u\geqslant \kappa N$ by substituting $u=0,\hspace{0.1667em}1,\dots $ there. Thus, as mentioned in the introduction, all we need is a way to compute these initial values.
(5)
\[\begin{aligned}{}& \varphi (u)=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{\infty }}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)\\ {} & =\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{N}}\Bigg\{u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}\gt 0\Bigg\}\cap {\bigcap \limits_{n=N+1}^{\infty }}\Bigg\{u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}\gt 0\Bigg\}\Bigg)\\ {} & =\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{N}}\Bigg\{{\sum \limits_{i=1}^{n}}{X_{i}}\leqslant u+\kappa n-1\Bigg\}\cap \\ {} & \hspace{1em}\hspace{2.5pt}\cap {\bigcap \limits_{n=N+1}^{\infty }}\Bigg\{u+\kappa N-{\sum \limits_{i=1}^{N}}{X_{i}}+\kappa (n-N)-{\sum \limits_{i=N+1}^{n}}{X_{i}}\gt 0\Bigg\}\Bigg)\\ {} & =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-14.22636pt}\mathbb{P}({X_{1}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{1}})\mathbb{P}({X_{2}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{2}})\cdots \mathbb{P}({X_{N}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{N}})\varphi \Bigg(u\hspace{-0.1667em}+\hspace{-0.1667em}\kappa N\hspace{-0.1667em}-\hspace{-0.1667em}{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg).\end{aligned}\]We now define a series of notations. Recalling that we aim to know the distribution of ${\sup _{n\geqslant 1}}{\textstyle\sum _{i=1}^{n}}({X_{i}}-\kappa )$, we define N random variables:
where ${x^{+}}=\max \{0,\hspace{0.1667em}x\}$, $x\in \mathbb{R}$, is the positive part function. Clearly, just like ${X_{1}},{X_{2}},\dots ,{X_{N}}$, each of the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ takes values in the set $\{0,1,\dots \}$.
(6)
\[\begin{aligned}{}& {\mathcal{M}_{1}}:=\underset{n\geqslant 1}{\sup }{\Bigg({\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\hspace{0.1667em}{\mathcal{M}_{2}}:=\underset{n\geqslant 2}{\sup }{\Bigg({\sum \limits_{i=2}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\dots ,\\ {} & {\mathcal{M}_{N}}:=\underset{n\geqslant N}{\sup }{\Bigg({\sum \limits_{i=N}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\end{aligned}\]Let us denote the probability mass functions of ${\mathcal{M}_{j}}$, their generators ${X_{j}}$ and the sum ${S_{N}}={X_{1}}+{X_{2}}+\cdots +{X_{N}}$:
where $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$ and $i\in {\mathbb{N}_{0}}$. Let ${F_{{X_{j}}}}(u)$ be the distribution function of the random variable ${X_{j}}$, i.e.
(7)
\[ {m_{i}^{(j)}}:=\mathbb{P}({\mathcal{M}_{j}}=i),\hspace{2.5pt}\hspace{2.5pt}{x_{i}^{(j)}}:=\mathbb{P}({X_{j}}=i),\hspace{2.5pt}\hspace{2.5pt}{s_{i}^{(N)}}:=\mathbb{P}({S_{N}}=i),\]
\[ {F_{{X_{j}}}}(u)=\mathbb{P}({X_{j}}\leqslant u)={\sum \limits_{i=0}^{u}}{x_{i}^{(j)}},\hspace{1em}j\in \{1,\hspace{0.1667em}2,\dots ,N\},\hspace{0.1667em}u\in {\mathbb{N}_{0}}.\]
In addition, let ${\overline{S}_{1}}(0):=\{s\in \mathbb{C}:|s|\leqslant 1\}$ and ${S_{1}}(0):=\{s\in \mathbb{C}:|s|\lt 1\}$ be the closed and open unit discs in the complex plane centered at the origin, and denote the probability-generating function of some nonnegative and integer-valued random variable X by
\[ {G_{X}}(s):=\mathbb{E}{s^{X}}={\sum \limits_{i=0}^{\infty }}{s^{i}}\hspace{0.1667em}\mathbb{P}(X=i),\hspace{1em}s\in {\overline{S}_{1}}(0).\]
The definition of the survival probability (4) and the definition of random variable ${\mathcal{M}_{1}}$ imply
Thus, the survival probability computation turns into setting up the distribution function of ${\mathcal{M}_{1}}$. It is simple to explain the core idea of the paper, i.e. how the probabilities ${m_{i}^{(1)}}$, $i\in {\mathbb{N}_{0}}$, are obtained. Let us refer to Feller’s book [14, Theorem on page 198]. The referenced theorem states that if $N=1$ in model (1), i.e. the random walk $\{{\textstyle\sum _{i=1}^{n}}{X_{i}},\hspace{0.1667em}n\in \mathbb{N}\}$ is generated by independent and identically distributed random variables, which are copies of X, then ${({\mathcal{M}_{1}}+X-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{1}}$. For an arbitrary number of periods $N\in \mathbb{N}$, the mentioned distributional property is as follows:
for all $j=2,\hspace{0.1667em}3,\dots ,N$, where ${\tilde{X}_{N}}$ is an independent copy of ${X_{N}}$; see Lemma 2 in Section 5. Metaphorically, the distributional equalities in (9) mean that the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ “can see each other”, and, more precisely, based on the equalities in (9), we can set up a system of corresponding equalities of probability-generating functions
The system (10) contains the desired information on ${m_{i}^{(1)}},\hspace{0.1667em}i\in {\mathbb{N}_{0}}$.
(8)
\[ \varphi (u+1)=\mathbb{P}({\mathcal{M}_{1}}\leqslant u)={\sum \limits_{i=0}^{u}}{m_{i}^{(1)}}\hspace{1em}\text{for all}\hspace{2.5pt}u\in {\mathbb{N}_{0}}.\](9)
\[ {({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}\hspace{1em}\text{and}\hspace{1em}{({\mathcal{M}_{j}}+{X_{j-1}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{j-1}},\](10)
\[ \left\{\begin{array}{l}\mathbb{E}{s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{N}}}}\hspace{1em}\\ {} \mathbb{E}{s^{{({\mathcal{M}_{2}}+{X_{1}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{1}}}}\hspace{1em}\\ {} \hspace{2.5pt}\vdots \hspace{1em}\\ {} \mathbb{E}{s^{{({\mathcal{M}_{N}}+{X_{N-1}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{N-1}}}}\hspace{1em}\end{array}\right..\]In general, the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ can be improper, i.e. it may happen that ${\lim \nolimits_{u\to \infty }}\mathbb{P}({\mathcal{M}_{j}}\lt u)\lt 1$ for some $j=1,\hspace{0.1667em}2,\dots ,N$. However, ${\lim \nolimits_{u\to \infty }}\mathbb{P}({\mathcal{M}_{j}}\lt u)=1$ for all $j=1,\hspace{0.1667em}2,\dots ,N$ if $\mathbb{E}{S_{N}}\lt \kappa N$, see Lemma 1 in Section 5.
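For $N=1$ the referenced Feller identity can also be exploited directly: iterating the map that sends the probability mass function of ${\mathcal{M}_{1}}$ to that of ${({\mathcal{M}_{1}}+X-\kappa )^{+}}$, starting from the point mass at zero, converges to the distribution of ${\mathcal{M}_{1}}$. The sketch below is a standard alternative route for the i.i.d. case, not the method of this paper, and the claim pmf is made up. The relation (11) with $N=\kappa =1$ predicts ${m_{0}^{(1)}}{x_{0}}=1-\mathbb{E}X$, i.e. $\varphi (1)=0.5$ for this pmf.

```python
# Fixed-point iteration on the N = 1 identity (M_1 + X - kappa)^+ =d M_1.
# The pmf below is a made-up example, not taken from the paper.
KAPPA = 1
X_PMF = [0.5, 0.25, 0.25]          # P(X=0), P(X=1), P(X=2); E X = 0.75 < kappa

def lindley_step(m, x_pmf, kappa):
    """Map the pmf m of M to the pmf of (M + X - kappa)^+, M, X independent."""
    new = [0.0] * (len(m) + len(x_pmf))
    for i, mi in enumerate(m):
        for k, xk in enumerate(x_pmf):
            new[max(i + k - kappa, 0)] += mi * xk
    return new[:400]               # truncate the geometrically light tail

m = [1.0]                          # start from the point mass at zero
for _ in range(3000):
    m = lindley_step(m, X_PMF, KAPPA)

phi_1 = m[0]                       # phi(1) = P(M_1 = 0), cf. (8)
```

Since the iterates are stochastically increasing toward the supremum, the truncation and the finite number of iterations only introduce a negligible error under the net profit condition.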
The condition $\mathbb{E}{S_{N}}\lt \kappa N$ is called the net profit condition, and it is crucial for the survival probability $\varphi (u)$. Intuitively, an insurer has no chance to survive in long-term activity if the claims on average are greater than or equal to the collected premiums. This can also be well illustrated by the expected value of ${W_{u}}(n)$ in (1). For instance,
\[ \mathbb{E}{W_{u}}(nN)=u+\kappa nN-\mathbb{E}{\sum \limits_{i=1}^{nN}}{X_{i}}=u+n(\kappa N-\mathbb{E}{S_{N}})\lt 0,\]
if $\mathbb{E}{S_{N}}\gt \kappa N$ and n is sufficiently large. Consequently, a negative value of ${W_{u}}(n)$ is unavoidable. See Theorem 3 in Section 3 for the precise expressions of the survival probability $\varphi (u),\hspace{0.1667em}u\in {\mathbb{N}_{0}}$ when the net profit condition is violated, i.e. $\mathbb{E}{S_{N}}\geqslant \kappa N$.
3 Main results
In this section, using the notation introduced above and recalling that our goal is to obtain the probability mass function of ${\mathcal{M}_{1}}$, we formulate two main theorems for the computation of the ultimate time survival probability under the net profit condition. Theorem 1, implied by the system (10), provides the relations between ${m_{i}^{(1)}},\hspace{0.1667em}{m_{i}^{(2)}},\dots ,{m_{i}^{(N)}}$ and ${x_{i}^{(1)}},\hspace{0.1667em}{x_{i}^{(2)}},\dots ,{x_{i}^{(N)}}$ for all $i\in {\mathbb{N}_{0}}$ and lays the foundation for the computation of the ultimate time survival probability $\varphi (u+1)={\textstyle\sum _{i=0}^{u}}{m_{i}^{(1)}},\hspace{0.1667em}u\in {\mathbb{N}_{0}}$.
Theorem 1.
Suppose that the N-seasonal discrete-time risk model (1) is generated by random variables ${X_{1}},\hspace{0.1667em}{X_{2}},\dots ,{X_{N}}$ and the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$ is satisfied. Then the following statements hold:
1. The probability mass functions of random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ and ${X_{1}},{X_{2}},\dots ,{X_{N}}$ satisfy the relation
(11)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}(\kappa -i-j)+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}(\kappa -i-j)\\ {} & \hspace{2em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}i\hspace{-0.1667em}-\hspace{-0.1667em}j)+\cdots +{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}\hspace{-0.1667em}{x_{j}^{(N-1)}}(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}i\hspace{-0.1667em}-\hspace{-0.1667em}j)\\ {} & \hspace{1em}=\kappa N-\mathbb{E}{S_{N}}.\end{aligned}\] -
2. If $s\in {\overline{S}_{1}}(0)\setminus \{0\}$, then the equality (12) below holds; moreover, if $\alpha \in {\overline{S}_{1}}(0)\setminus \{0,\hspace{0.1667em}1\}$ is a simple root of ${G_{{S_{N}}}}(s)={s^{\kappa N}}$, then (13) holds:
(12)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{M_{N}}}}(s)\bigg(1-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa N}}}\bigg),\end{aligned}\](13)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N}}}}(j-i)+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N-1}}}}(j-i)=0.\end{aligned}\] -
3. If $\alpha \in {\overline{S}_{1}}(0)\setminus \{0,\hspace{0.1667em}1\}$ is a root of ${G_{{S_{N}}}}(s)={s^{\kappa N}}$ of multiplicity r, $r=2,\hspace{0.1667em}3,\dots ,\kappa N-1$, then, for $n=1,\hspace{0.1667em}2,\dots ,r-1$,
(14)
\[\begin{aligned}{}& \hspace{-21.33955pt}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({s^{j}}\big){\bigg|_{s=\alpha }}{F_{{X_{N}}}}(j-i)\\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}}}(s){s^{j-\kappa }}\big){\bigg|_{s=\alpha }}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}+{X_{1}}}}(s){s^{j-2\kappa }}\big){\bigg|_{s=\alpha }}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s){s^{j-\kappa (N-1)}}\big){\bigg|_{s=\alpha }}\hspace{-0.1667em}{F_{{X_{N-1}}}}(j-i)\hspace{-0.1667em}=\hspace{-0.1667em}0,\end{aligned}\] -
4. For $n=\kappa ,\hspace{0.1667em}\kappa +1,\dots $ , the probability mass functions of random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ and ${X_{1}},{X_{2}},\dots ,{X_{N}}$ satisfy the following system of equations
(15)
\[ \hspace{-14.22636pt}\left\{\begin{array}{l@{\hskip10.0pt}l}{m_{n}^{(1)}}{x_{0}^{(N)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(N)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(N)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}}\\ {} {m_{n}^{(2)}}{x_{0}^{(1)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}{1_{\{n=\kappa \}}}\\ {} {m_{n}^{(3)}}{x_{0}^{(2)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(3)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}{1_{\{n=\kappa \}}}\\ {} \hspace{1em}& \hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\vdots \\ {} {m_{n}^{(N)}}{x_{0}^{(N-1)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(N-1)}}\hspace{-0.1667em}-\hspace{-0.1667em}{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(N)}}{x_{n-i}^{(N-1)}}\hspace{-0.1667em}-\hspace{-0.1667em}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}{1_{\{n=\kappa \}}}\end{array}\right.\hspace{-14.22636pt}.\]
Let us comment on how Theorem 1 is used to obtain the distribution of ${\mathcal{M}_{1}}$, i.e. ${m_{i}^{(1)}}$, $i\in {\mathbb{N}_{0}}$. First, we note that the equation ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ has exactly $\kappa N-1$ roots in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$ counted with their multiplicities, see Lemma 4 in Section 5. We denote these roots by ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$. We then create the system of linear equations (see eq. (16)) by replicating the equation (13) $\kappa N-1$ times over the roots ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ and include (11) as the last equation. To illustrate that explicitly, we define the matrices ${\boldsymbol{M}_{1}},\hspace{0.1667em}{\boldsymbol{M}_{2}},\dots ,{\boldsymbol{M}_{N}}$ and ${\boldsymbol{G}_{2}},\hspace{0.1667em}{\boldsymbol{G}_{3}},\dots ,{\boldsymbol{G}_{N}}$:
and set up the system
where ∘ denotes the Hadamard matrix product, also known as the element-wise product, entry-wise product, or Schur product, i.e. two elements in corresponding positions in two matrices are multiplied, see, for example, [21].
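Before turning to the properties of (16), it may help to see the whole pipeline numerically in the simplest case $N=1$, where (16) collapses to the small system discussed in Section 4 (cf. (24)). The sketch below uses a made-up claim pmf with $\kappa =2$; for this pmf the only root of ${s^{\kappa }}={G_{X}}(s)$ in the unit disc besides $s=1$ happens to be real, so plain bisection suffices (general cases need a complex polynomial root solver). The two rows of the linear system come from (13) and (11) with $N=1$.

```python
# N = 1, kappa = 2 instance of the pipeline: locate the in-disc root of
# s^kappa = G_X(s), solve the 2x2 system built from (13) and (11), read
# off phi(1), phi(2), then extend by the recurrence (5).
KAPPA = 2
X_PMF = [0.5, 0.25, 0.15, 0.1]     # made-up P(X = i); E X = 0.85 < kappa
EX = sum(i * p for i, p in enumerate(X_PMF))
F = [sum(X_PMF[:i + 1]) for i in range(len(X_PMF))]   # F_X(0), ..., F_X(3)

def h(s):
    """h(s) = G_X(s) - s^kappa; here its in-disc root != 1 lies in (-1, 0)."""
    return sum(p * s ** i for i, p in enumerate(X_PMF)) - s ** KAPPA

lo, hi = -1.0, 0.0                 # h(-1) < 0 < h(0), so bisection applies
for _ in range(200):
    mid = (lo + hi) / 2.0
    if h(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2.0

# Row from (13): m0*(F(0) + alpha*F(1)) + m1*alpha*F(0) = 0.
# Balance row from (11): m0*(2*x0 + x1) + m1*x0 = kappa - E X.
a11, a12 = F[0] + alpha * F[1], alpha * F[0]
a21, a22 = 2 * X_PMF[0] + X_PMF[1], X_PMF[0]
rhs = KAPPA - EX
det = a11 * a22 - a12 * a21
m0, m1 = -a12 * rhs / det, a11 * rhs / det
phi1, phi2 = m0, m0 + m1           # phi(u+1) = m_0 + ... + m_u, cf. (8)
# Recurrence (5) with N = 1 and u = 1: phi(1) = x0*phi(3) + x1*phi(2) + x2*phi(1).
phi3 = (phi1 - X_PMF[1] * phi2 - X_PMF[2] * phi1) / X_PMF[0]
```

The computed values can be cross-checked against the fixed-point characterization ${({\mathcal{M}_{1}}+X-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{1}}$, which provides an independent route to the same distribution.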
Clearly, the solution of (16) (see Section 4 on solvability and modifications of (16)) gives ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots ,{m_{\kappa -1}^{(1)}}$, while using the system (15) (note that we can have ${m_{0}^{(j)}},\hspace{0.1667em}{m_{1}^{(j)}},\dots ,{m_{\kappa -1}^{(j)}}$, $j\in \{2,\hspace{0.1667em}3,\dots ,N\}$, from (16), too) we can compute ${m_{\kappa }^{(1)}},{m_{\kappa +1}^{(1)}},\dots ,{m_{\kappa N-1}^{(1)}}$ and consequently $\varphi (1),\varphi (2),\dots ,\varphi (\kappa N)$. Having $\varphi (1),\varphi (2),\dots ,\varphi (\kappa N)$ we can obtain $\varphi (0)$ by setting $u=0$ in the recurrence (5) and use the same recurrence (5) to proceed computing $\varphi (\kappa N+1),\varphi (\kappa N+2),\dots \hspace{0.1667em}$. Of course, the survival probabilities $\varphi (\kappa N+1),\hspace{0.1667em}\varphi (\kappa N+2),\dots $ can be computed by the system (15), too. Moreover, we can set up the ultimate time survival probability-generating function.
In view of (8), it is easy to observe that, for $s\in {S_{1}}(0)$, the ultimate time survival probability-generating function $\Xi (s):={\textstyle\sum _{i=0}^{\infty }}\varphi (i+1){s^{i}}$ admits the following expression.
(18)
\[ \Xi (s)={\sum \limits_{i=0}^{\infty }}\varphi (i+1){s^{i}}={\sum \limits_{i=0}^{\infty }}{s^{i}}{\sum \limits_{j=0}^{i}}{m_{j}^{(1)}}={\sum \limits_{j=0}^{\infty }}{m_{j}^{(1)}}\frac{{s^{j}}}{1-s}=\frac{{G_{{\mathcal{M}_{1}}}}(s)}{1-s}.\]
Theorem 2.
Let us assume that the probabilities ${m_{0}^{(j)}},\hspace{0.1667em}{m_{1}^{(j)}},\dots ,{m_{\kappa -1}^{(j)}}$, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, are known beforehand. Then, for $s\in {S_{1}}(0)$ such that ${s^{\kappa N}}\ne {G_{{S_{N}}}}(s)$, the survival probability-generating function $\Xi (s)$ admits the following representation
\[\begin{array}{l}\displaystyle \Xi (s)=\frac{{u^{T}}(s)v(s)}{{G_{{S_{N}}}}(s)-{s^{\kappa N}}},\hspace{1em}\textit{where}\hspace{2.5pt}\\ {} \displaystyle u(s)=\left(\begin{array}{c}{s^{\kappa (N-1)}}\\ {} {s^{\kappa (N-2)}}{G_{{X_{1}}}}(s)\\ {} {s^{\kappa (N-3)}}{G_{{X_{1}}+{X_{2}}}}(s)\\ {} \vdots \\ {} {s^{\kappa }}{G_{{X_{1}}+{X_{2}}+\cdots +{X_{N-2}}}}(s)\\ {} {G_{{X_{1}}+{X_{2}}+\cdots +{X_{N-1}}}}(s)\end{array}\right),\hspace{1em}v(s)=\left(\begin{array}{c}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{1}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{2}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(4)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{3}}}}(j-i)\\ {} \vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{N-1}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{N}}}}(j-i)\end{array}\right).\end{array}\]
The next theorem states that if the net profit condition is unsatisfied, the ultimate time survival is impossible except in some cases when ${S_{N}}$ is degenerate.
Theorem 3.
Suppose that the N-seasonal discrete-time risk model (1) is generated by random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, and the net profit condition is not satisfied. In this case:
1. If $\mathbb{E}{S_{N}}\gt \kappa N$, then $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
2. If $\mathbb{E}{S_{N}}=\kappa N$ and $\mathbb{P}({S_{N}}=\kappa N)\lt 1$, then $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
3. If $\mathbb{P}({S_{N}}=\kappa N)=1$, then the random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$ are degenerate and\[\begin{aligned}{}& u+\kappa {n^{\ast }}-{\sum \limits_{k=1}^{{n^{\ast }}}}{X_{k}}\leqslant 0\Rightarrow \varphi (u)=0,\\ {} & u+\kappa {n^{\ast }}-{\sum \limits_{k=1}^{{n^{\ast }}}}{X_{k}}\gt 0\Rightarrow \varphi (u)=1,\end{aligned}\]where ${n^{\ast }}$ is the value of $n\in \{1,2,\dots ,N\}$ at which $\kappa n-{\textstyle\sum _{k=1}^{n}}{X_{k}}$ attains its minimum.
The last theorem provides an algorithm for the computation of finite time survival probability $\varphi (u,\hspace{0.1667em}T)$. Let us define
where $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$ and ${X_{i}^{(j)}}={X_{i+j-1}}$. It is easy to observe that ${\varphi ^{(N+j)}}(u,\hspace{0.1667em}T)={\varphi ^{(j)}}(u,\hspace{0.1667em}T)$.
(19)
\[ {\varphi ^{(j)}}(u,\hspace{0.1667em}T)=\mathbb{P}\Bigg(\underset{1\leqslant n\leqslant T}{\sup }{\sum \limits_{i=1}^{n}}\big({X_{i}^{(j)}}-\kappa \big)\lt u\Bigg),\]Theorem 4.
For the finite time survival probability (3) of the N-seasonal discrete-time risk model defined in (1), the following holds:
and
if $T\in \{N+1,\hspace{0.1667em}N+2,\dots \}$.
(20)
\[\begin{aligned}{}\varphi (u,1)& =\sum \limits_{i\leqslant u+\kappa -1}{x_{i}^{(1)}},\hspace{1em}\varphi (u,2)=\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}},\dots ,\\ {} \varphi (u,\hspace{0.1667em}N)& =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}},\end{aligned}\](21)
\[\begin{aligned}{}& \varphi (u,T)\hspace{-0.1667em}=\hspace{-0.1667em}\hspace{-21.33955pt}\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-21.33955pt}\hspace{-0.1667em}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\varphi (u+\kappa N-{i_{1}}-\cdots -{i_{N}},\hspace{0.1667em}T-N),\end{aligned}\]
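The recursion (20)–(21) translates directly into a short memoized computation. The sketch below borrows the two season distributions from Figure 1; the premium rate $\kappa =3$ (so that $2\kappa \gt \mathbb{E}{S_{2}}=4.2$) and the chosen horizons are our own illustrative choices.

```python
from functools import lru_cache
from itertools import product

# Finite time survival probability for a 2-seasonal example, computed by
# the recursion (20)-(21): step through at most N seasons, respecting the
# partial-sum constraints, then restart with the surplus that remains.
KAPPA, N = 3, 2
PMF = [{0: 0.3, 1: 0.1, 5: 0.6},          # season 1 claims (X_i of Figure 1)
       {0: 0.8, 1: 0.1, 10: 0.1}]         # season 2 claims (Y_i of Figure 1)

@lru_cache(maxsize=None)
def phi(u, T):
    """Survival probability over T steps with initial surplus u."""
    if T == 0:
        return 1.0
    steps = min(T, N)                     # (20) when T <= N, (21) otherwise
    total = 0.0
    for claims in product(*(PMF[j % N].items() for j in range(steps))):
        s, p, ok = 0, 1.0, True
        for n, (i, pr) in enumerate(claims, start=1):
            s, p = s + i, p * pr
            if s > u + KAPPA * n - 1:     # ruin at step n
                ok = False
                break
        if ok:
            rest = phi(u + KAPPA * steps - s, T - steps) if T > steps else 1.0
            total += p * rest
    return total
```

Because the season pattern restarts after N steps (${\varphi ^{(N+j)}}={\varphi ^{(j)}}$), the same function `phi` can be reused on the right-hand side of the recursion.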
Moreover, for the defined probabilities (19), it holds that
4 Notes on the solution of linear system involving probabilities of ${\mathcal{M}_{1}},{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$
In general, it is not easy to give an explicit solution of system (16) or even to prove that the system’s determinant of size $\kappa N\times \kappa N$ never equals zero. For instance, if $N=1$ and $\kappa \in \mathbb{N}$, then the system (16) is
where $X\stackrel{d}{=}{X_{1}}$, ${x_{i}}={x_{i}^{(1)}}$ and ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa -1}}$ are the simple roots of ${s^{\kappa }}={G_{X}}(s)$ when $s\in {\overline{S}_{1}}(0)\setminus \{1\}$. Then, if ${x_{0}}\gt 0$, the determinant of the system’s matrix in (24) is the Vandermonde-like one,
where ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{N-1}}$ are the simple roots of ${s^{N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$, see [19]. If $N=3$, the main matrix in (25) is
and one may check that for ${s_{0}^{(3)}}=\mathbb{P}({X_{1}}+{X_{2}}+{X_{3}}=0)\gt 0$ the matrix A is nonsingular iff
(24)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{\kappa -1}}{\alpha _{1}^{j}}{F_{X}}(j)& \hspace{-14.22636pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{\alpha _{1}^{j+1}}{F_{X}}(j)& \hspace{-14.22636pt}\dots & \hspace{-14.22636pt}{\alpha _{1}^{\kappa -1}}{x_{0}}\\ {} \vdots & \hspace{-14.22636pt}\vdots & \hspace{-14.22636pt}\ddots & \hspace{-14.22636pt}\vdots \\ {} {\textstyle\sum \limits_{j=0}^{\kappa -1}}{\alpha _{\kappa -1}^{j}}{F_{X}}(j)& \hspace{-14.22636pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{\alpha _{\kappa -1}^{j+1}}{F_{X}}(j)& \hspace{-14.22636pt}\dots & \hspace{-14.22636pt}{\alpha _{\kappa -1}^{\kappa -1}}{x_{0}}\\ {} {\textstyle\sum \limits_{j=0}^{\kappa -1}}{x_{j}}(\kappa -j)& \hspace{-2.84544pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{x_{j}}(\kappa -j-1)& \hspace{-7.11317pt}\dots & \hspace{-14.22636pt}{x_{0}}\end{array}\right)\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} \vdots \\ {} {m_{\kappa -2}^{(1)}}\\ {} {m_{\kappa -1}^{(1)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} \vdots \\ {} 0\\ {} \kappa -\mathbb{E}X\end{array}\right),\end{aligned}\]
\[ \frac{{x_{0}^{\kappa }}}{{(-1)^{\kappa +1}}}{\prod \limits_{j=1}^{\kappa -1}}({\alpha _{j}}-1)\prod \limits_{1\leqslant i\lt j\leqslant \kappa -1}({\alpha _{j}}-{\alpha _{i}})\ne 0,\]
and the probabilities ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots ,{m_{\kappa -1}^{(1)}}$ together with the survival probabilities $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa )$ admit neat closed-form expressions, see [17, Thm. 4]. On the other hand, if $\kappa =1$ and $N\in \mathbb{N}$, then the system (16) is
(25)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{1}})}{{\alpha _{1}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{1}})}{{\alpha _{1}^{N-1}}}\\ {} {x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{2}})}{{\alpha _{2}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{2}})}{{\alpha _{2}^{N-1}}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{N-1}})}{{\alpha _{N-1}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{N-1}})}{{\alpha _{N-1}^{N-1}}}\\ {} {x_{0}^{(N)}}& {x_{0}^{(1)}}& \dots & {x_{0}^{(N-1)}}\end{array}\right)\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} \vdots \\ {} {m_{0}^{(N-1)}}\\ {} {m_{0}^{(N)}}\end{array}\right)\\ {} & \hspace{227.62204pt}=\left(\begin{array}{c}0\\ {} 0\\ {} \vdots \\ {} 0\\ {} N-\mathbb{E}{S_{N}}\end{array}\right),\end{aligned}\](26)
\[ A:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(3)}}& \frac{{x_{0}^{(1)}}{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}& \frac{{x_{0}^{(2)}}{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}\\ {} {x_{0}^{(3)}}& \frac{{x_{0}^{(1)}}{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}& \frac{{x_{0}^{(2)}}{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}\\ {} {x_{0}^{(3)}}& {x_{0}^{(1)}}& {x_{0}^{(2)}}\end{array}\right)\]
\[ \bigg(\frac{{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}-1\bigg)\ne \bigg(\frac{{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}-1\bigg)\]
where ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}}$ are the simple roots of ${s^{3}}={G_{{X_{1}}+{X_{2}}+{X_{3}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$. Computer computations with some chosen random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, $N\geqslant 3$, do not reveal any example in which the system’s matrix in (16) is singular. These observations raise the following conjecture.
Conjecture 1.
Assume that ${s_{0}^{(N)}}=\mathbb{P}({X_{1}}+{X_{2}}+\cdots +{X_{N}}=0)\gt 0$ and ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ are the simple roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$. Then, the system’s matrix in (16) is nonsingular for all $\kappa ,\hspace{0.1667em}N\in \mathbb{N}$. In particular, if $N=3$ and $\kappa =1$, then
\[ \frac{{G_{{S_{3}}}}({\alpha _{1}})}{{\alpha _{1}^{3}}}=\frac{{G_{{S_{3}}}}({\alpha _{2}})}{{\alpha _{2}^{3}}}=1\]
implies
\[ \bigg(\frac{{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}-1\bigg)\ne \bigg(\frac{{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}-1\bigg)\]
and consequently $\det A\ne 0$.
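The conjecture is easy to probe numerically. Below is a minimal sketch in Python (the paper's own computations use Wolfram Mathematica); the distributions on $\{0,1,2\}$ are hypothetical choices satisfying ${s_{0}^{(3)}}\gt 0$ and the net profit condition, not taken from the paper. The sketch locates the two roots ${\alpha _{1}},{\alpha _{2}}$ and evaluates $\det A$:

```python
import numpy as np

# Hypothetical distributions on {0, 1, 2}; any choice with
# s_0^{(3)} = P(X1 + X2 + X3 = 0) > 0 and E S_3 < kappa*N = 3 would do.
x1 = np.array([0.4, 0.4, 0.2])   # P(X1 = k) for k = 0, 1, 2;  E X1 = 0.8
x2 = np.array([0.5, 0.3, 0.2])   # E X2 = 0.7
x3 = np.array([0.5, 0.4, 0.1])   # E X3 = 0.6

def pgf(p, s):
    """G_X(s) = sum_k P(X = k) s^k."""
    return sum(pk * s**k for k, pk in enumerate(p))

# Roots of G_{S_3}(s) = s^3; np.roots expects descending powers.
s3 = np.convolve(np.convolve(x1, x2), x3)   # coefficients of G_{S_3}
poly = s3.copy()
poly[3] -= 1.0
roots = np.roots(poly[::-1])

# Lemma 4: kappa*N - 1 = 2 simple roots in the closed unit disk besides s = 1.
alphas = [r for r in roots if abs(r) <= 1 + 1e-9 and abs(r - 1) > 1e-6]
a1, a2 = alphas

# The matrix A of the conjecture (x_0^{(j)} = P(X_j = 0)).
A = np.array([
    [x3[0], x1[0] * pgf(x3, a1) / a1, x2[0] * pgf(x3, a1) * pgf(x1, a1) / a1**2],
    [x3[0], x1[0] * pgf(x3, a2) / a2, x2[0] * pgf(x3, a2) * pgf(x1, a2) / a2**2],
    [x3[0], x1[0],                    x2[0]],
], dtype=complex)
det = np.linalg.det(A)
print(abs(det))   # nonzero, in line with Conjecture 1
```

For these (and similar hypothetical) parameter choices, the determinant stays bounded away from zero, which is consistent with the conjecture but of course proves nothing.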
Let us comment on how the system (16) is modified if there are multiple roots among ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ and/or the random variable ${S_{N}}$ does not attain its “small” values. It is clear that $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$ implies at least one column of zeros in the main matrix of (16). Note that $\mathbb{P}({S_{N}}\geqslant \kappa N)=1$ violates the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$. Also, $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$ always implies fewer terms on the right-hand side of the recurrence (5), as some values of the probability mass function then equal zero. For instance, if $\mathbb{P}({X_{N}}\geqslant \kappa N-1)=1$, then $\varphi (0)={s_{\kappa N-1}^{(N)}}\varphi (1)$ and only $\varphi (0)$ must be known in order to find $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $ by the recurrence (5). Thus, when $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$, we have to adjust the main matrix in (16) according to the equalities (13) and (11), not including any columns of zeros.
In addition, if some roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ are of multiplicity $r\in \{2,\hspace{0.1667em}3,\dots ,\kappa N-1\}$, then, to avoid identical rows in (16), we must replace the corresponding rows with derivatives as provided in equality (14).
Once again, computational experiments with various chosen random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, $N\geqslant 3$, and $\kappa \geqslant 1$ have not revealed any example in which such a modified (due to multiple roots and/or ${S_{N}}$ not attaining “small” values) system’s matrix in (16) is singular.
5 Lemmas
In this section, we formulate and prove several auxiliary lemmas that are later used to prove theorems formulated in Section 3. Some of the presented lemmas are direct generalizations of statements from [19, Sec. 5], where they are proved for ${X_{j}}-1$, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, while here we need them for ${X_{j}}-\kappa $, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, $\kappa \in \mathbb{N}$.
Proof.
We prove the case $j=1$ only and note that the other cases can be proven similarly.
According to the strong law of large numbers
\[\begin{aligned}{}& \frac{1}{n}{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )=\frac{1}{N}\Bigg(\frac{N}{n}{\sum \limits_{\begin{array}{c}i=1\\ {} i\equiv 1\hspace{0.1667em}\mathrm{mod}\hspace{0.1667em}N\end{array}}^{n}}({X_{i}}-\kappa )+\cdots +\frac{N}{n}{\sum \limits_{\begin{array}{c}i=1\\ {} i\equiv N\hspace{0.1667em}\mathrm{mod}\hspace{0.1667em}N\end{array}}^{n}}({X_{i}}-\kappa )\Bigg)\\ {} & \hspace{1em}\underset{n\to \infty }{\longrightarrow }\frac{1}{N}\big((\mathbb{E}{X_{1}}-\kappa )+\cdots +(\mathbb{E}{X_{N}}-\kappa )\big)=\frac{\mathbb{E}{S_{N}}-\kappa N}{N}=:-\mu \lt 0\hspace{2.5pt}\text{a.s.}\end{aligned}\]
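This almost sure limit is easy to illustrate by simulation. The following sketch (in Python; the distributions, the seed, and $n$ are hypothetical choices for illustration only) builds a $2$-periodic walk with $N=2$, $\kappa =1$ and compares the empirical average of the first n centered steps with $-\mu $:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-periodic example (N = 2, kappa = 1): X_i has law x1 for
# odd i and law x2 for even i; both supported on {0, 1, 2}.
kappa, n = 1, 200_000
x1 = [0.6, 0.2, 0.2]   # E X1 = 0.6
x2 = [0.7, 0.1, 0.2]   # E X2 = 0.5

steps = np.empty(n)
steps[0::2] = rng.choice(3, size=n // 2, p=x1) - kappa   # i = 1, 3, 5, ...
steps[1::2] = rng.choice(3, size=n // 2, p=x2) - kappa   # i = 2, 4, 6, ...

average = steps.mean()              # (1/n) sum_{i=1}^{n} (X_i - kappa)
mu = (0.6 + 0.5 - kappa * 2) / 2    # (E S_N - kappa N) / N = -0.45
print(average, mu)
```

For large n the empirical average concentrates near $-\mu =-0.45$, as the strong law of large numbers dictates.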
Therefore,
\[ \mathbb{P}\Bigg(\underset{j\geqslant n}{\sup }\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg)\underset{n\to \infty }{\longrightarrow }1.\]
Consequently, for any arbitrarily small $\varepsilon \gt 0$, there exists a number ${N_{\varepsilon }}\in \mathbb{N}$ such that
\[\begin{aligned}{}& \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg)\\ {} & \hspace{1em}\geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt -\frac{\mu }{2}\Bigg\}\Bigg)\\ {} & \hspace{1em}\geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg\}\Bigg)\\ {} & \hspace{1em}=\mathbb{P}\Bigg(\underset{j\geqslant n}{\sup }\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg)\geqslant 1-\varepsilon \end{aligned}\]
for all $n\geqslant {N_{\varepsilon }}$. It follows that for any such ε and any $u\in \mathbb{N}$ we have
\[\begin{aligned}{}\mathbb{P}({\mathcal{M}_{1}}\lt u)& =\mathbb{P}\Bigg({\bigcap \limits_{j=1}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)\\ {} & \geqslant \mathbb{P}\Bigg(\Bigg\{{\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg\}\cap \Bigg\{{\bigcap \limits_{j={N_{\varepsilon }}}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg\}\Bigg)\\ {} & \geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)+\mathbb{P}\Bigg({\bigcap \limits_{j={N_{\varepsilon }}}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg)-1\\ {} & \geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)-\varepsilon .\end{aligned}\]
The last inequality implies
where $\varepsilon \gt 0$ is arbitrarily small, and the assertion of the lemma follows. □Lemma 2.
If the net profit condition is satisfied, i.e. $\mathbb{E}{S_{N}}\lt \kappa N$, it holds that ${({\mathcal{M}_{j}}+{X_{j-1}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{j-1}}$, for all $j=2,\hspace{0.1667em}3,\dots ,N$, and ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$, where ${\tilde{X}_{N}}$ is an independent copy of ${X_{N}}$.
Proof.
We prove the equality ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$ only and note that the other ones can be proved by the same arguments. According to Lemma 1, $\mathbb{P}({\mathcal{M}_{1}}\lt \infty )=1$. Let us denote ${\hat{X}_{j}}={X_{j}}-\kappa $ for all $j\in \{1,\hspace{0.1667em}2,\dots ,N-1\}$ and let ${\hat{X}_{N}}$ be an independent copy of ${X_{N}}-\kappa $. Then
\[\begin{aligned}{}& {({\mathcal{M}_{1}}+{\hat{X}_{N}})^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \big\{0,\hspace{0.1667em}\max \{{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\dots \}\big\}+{\hat{X}_{N}}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{0,\hspace{0.1667em}{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\dots \}+{\hat{X}_{N}}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\hspace{0.1667em}\dots \}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}}+{\hat{X}_{N+2}},\dots \}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}\max \big\{0,\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}}+{\hat{X}_{N+2}},\dots \}\big\}\\ {} & \hspace{1em}\stackrel{d}{=}{\mathcal{M}_{N}}.\end{aligned}\]
□Lemma 3.
Let $s\in {\overline{S}_{1}}(0)\setminus \{0\}$ and assume that the net profit condition holds. Then the probability-generating functions of ${X_{1}},{X_{2}},\dots ,{X_{N}}$ and ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,\hspace{0.1667em}{\mathcal{M}_{N}}$ are related in the following way:
(27)
\[ \hspace{-11.38092pt}\left\{\begin{array}{l@{\hskip10.0pt}l}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{1}}}}(s)-{G_{{X_{1}}}}(s){G_{{\mathcal{M}_{2}}}}(s)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{2}}}}(s)-{G_{{X_{2}}}}(s){G_{{\mathcal{M}_{3}}}}(s)\\ {} \hspace{1em}& \hspace{0.1667em}\hspace{0.1667em}\vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}({s^{\kappa }}-{s^{i+j}})\hspace{-8.0pt}\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{N-1}}}}(s)-{G_{{X_{N-1}}}}(s){G_{{\mathcal{M}_{N}}}}(s)\end{array}\right..\]Proof.
Let us demonstrate how the first equality in (27) is derived and note that the remaining ones follow the same logic.
By Lemma 2, the equality of distributions ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$ implies the equality of probability-generating functions ${G_{{\mathcal{M}_{N}}}}(s)={G_{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}(s)$, where ${\tilde{X}_{N}}$ denotes an independent copy of ${X_{N}}$. Then, applying the law of total expectation for the last equality, we obtain
\[\begin{aligned}{}{G_{{\mathcal{M}_{N}}}}(s)& =\mathbb{E}{s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}=\mathbb{E}\big(\mathbb{E}\big({s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}|{\mathcal{M}_{1}}\big)\big)\\ {} & ={\sum \limits_{i=0}^{\infty }}{m_{i}^{(1)}}\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}+{G_{{X_{N}}}}(s){s^{-\kappa }}{\sum \limits_{i=\kappa }^{\infty }}{m_{i}^{(1)}}{s^{i}}\\ {} & ={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}\big(\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}-{s^{i-\kappa }}{G_{{X_{N}}}}(s)\big)+{s^{-\kappa }}{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s).\end{aligned}\]
Multiplying both sides of the last equality by ${s^{\kappa }}$ when $s\ne 0$ and observing that
we get the desired result. □The next lemma provides the number and location of the roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$.
Lemma 4.
Assume that the net profit condition ${G^{\prime }_{{S_{N}}}}(1)=\mathbb{E}{S_{N}}\lt \kappa N$ is valid. Then there are exactly $\kappa N-1$ roots, counted with their multiplicities, of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$.
Proof.
We follow the proof of [18, Lemma 9]. Due to the estimate
on the boundary $|s|=1$ when $\lambda \gt 1$, Rouché’s theorem implies that the functions ${G_{{S_{N}}}}(s)-\lambda {s^{\kappa N}}$ and $\lambda {s^{\kappa N}}$ have the same number of roots in $|s|\lt 1$, and this number is $\kappa N$ due to the fundamental theorem of algebra. When $\lambda \to {1^{+}}$, some roots of ${G_{{S_{N}}}}(s)-\lambda {s^{\kappa N}}$ remain in $|s|\lt 1$ and some migrate to the boundary $|s|=1$. Obviously, $s=1$ is always a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$, and it is a simple root because the net profit condition holds, i.e.
Thus, there remain $\kappa N-1$ roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$; additionally, a point s with $|s|=1$, $s\ne 1$, can be a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ only if the greatest common divisor of $\kappa N$ and all the powers of s occurring in ${G_{{S_{N}}}}(s)$ is greater than one. □
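When the ${X_{i}}$ have finite support, ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ is a polynomial equation and the root count of Lemma 4 can be checked directly. A Python sketch with hypothetical distributions (here $N=2$, $\kappa =2$, so $\kappa N=4$; none of these numbers come from the paper):

```python
import numpy as np

# Hypothetical N = 2, kappa = 2 example, so kappa*N = 4.
kappa, N = 2, 2
x1 = np.array([0.3, 0.4, 0.3])        # E X1 = 1.0
x2 = np.array([0.4, 0.0, 0.0, 0.6])   # P(X2 = 0) = 0.4, P(X2 = 3) = 0.6
sN = np.convolve(x1, x2)              # coefficients of G_{S_2}(s)

# Net profit condition: E S_2 = 2.8 < 4.  Roots of G_{S_2}(s) - s^4 = 0.
poly = sN.copy()
poly[kappa * N] -= 1.0
roots = np.roots(poly[::-1])          # np.roots expects descending powers

inside = [r for r in roots if abs(r) <= 1 + 1e-9 and abs(r - 1) > 1e-6]
print(len(inside))                    # kappa*N - 1 = 3, as Lemma 4 predicts
```

Here the degree-5 polynomial has five roots in total; exactly $\kappa N=4$ of them lie in the closed unit disk, one of which is the simple root $s=1$ excluded above.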
6 Proofs
In this section, we prove all of the theorems formulated in Section 3.
Proof of Theorem 1.
We first prove equality (12). To derive (12), we use the system of equations (27) from Lemma 3. Under the conditions of Lemma 3, $s\ne 0$, and we rearrange the system (27) by multiplying its first equality by 1, the second one by ${G_{{X_{N}}}}(s)/{s^{\kappa }}$, the third one by ${G_{{X_{N}}+{X_{1}}}}(s)/{s^{2\kappa }}$, and so on, up to the last equality, which is multiplied by ${G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)/{s^{\kappa (N-1)}}$. We then add up all these equations and obtain
Here we have used the fact that if the random variables X and Y are independent, then their probability-generating functions satisfy
Thus, the equality (12) is proved. We now derive (13).
(28)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}\big({s^{\kappa }}{G_{{\mathcal{M}_{1}}}}(s)-{G_{{X_{1}}}}(s){G_{{\mathcal{M}_{2}}}}(s)\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}\big({s^{\kappa }}{G_{{\mathcal{M}_{2}}}}(s)-{G_{{X_{2}}}}(s){G_{{\mathcal{M}_{3}}}}(s)\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}\big({s^{\kappa }}{G_{{\mathcal{M}_{N-1}}}}(s)-{G_{{X_{N-1}}}}(s){G_{{\mathcal{M}_{N}}}}(s)\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa (N-1)}}}{G_{{\mathcal{M}_{N}}}}(s)={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)\bigg(1-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa N}}}\bigg).\end{aligned}\]It is obvious that the right-hand side of (28) equals zero if we set $s=\alpha $, where α is a root of ${G_{{S_{N}}}}(s)={s^{\kappa N}},\hspace{0.1667em}s\in {\overline{S}_{1}}(0)\setminus \{1\}$. We then divide both sides of (28) by $\alpha -1$, apply the identity
\[ \frac{{\alpha ^{\kappa }}-{\alpha ^{i+j}}}{\alpha -1}={\alpha ^{j+i}}+{\alpha ^{j+i+1}}+\cdots +{\alpha ^{\kappa -1}},\hspace{1em}\alpha \ne 1,\]
and get
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N}}}}(j-i)+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N-1}}}}(j-i)=0,\end{aligned}\]
which is the claimed equality (13). We now consider the equation (14). Since $s\ne 1$, we divide both sides of (28) by $s-1$ and rewrite its right-hand side as
Clearly, the derivatives
\[ \frac{{d^{n}}}{d{s^{n}}}\big({s^{\kappa (1-N)}}{G_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa N}}-{G_{{S_{N}}}}(s)\big)/(s-1)\big){\bigg|_{s=\alpha }}=0\]
for all $n\in \{1,\hspace{0.1667em}2,\dots ,r-1\}$ if α is a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s),\hspace{0.1667em}s\in {\overline{S}_{1}}(0)\setminus \{1\}$, of multiplicity $r\in \{2,\hspace{0.1667em}3,\dots ,\kappa N-1\}$. Thus, the equality (14) is nothing but the n-th derivative with respect to s of equation (28) (divided by $s-1$) at $s=\alpha $. We now prove equality (11) in Theorem 1. Differentiating both sides of equation (28) with respect to s gives
We continue the proof by letting $s\to {1^{-}}$ in (29). Because the net profit condition holds, i.e. $\mathbb{E}{S_{N}}\lt \kappa N$, and $\mathbb{P}({\mathcal{M}_{N}}\lt \infty )=1$, we obtain
(29)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}={G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)\\ {} & \hspace{1em}\hspace{1em}+{G_{{\mathcal{M}_{N}}}}(s)\big(\kappa {s^{\kappa -1}}-{G_{{S_{N}}}}(s)\kappa (1-N){s^{\kappa (1-N)-1}}-{G^{\prime }_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big).\end{aligned}\](30)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big(\kappa -(i+j)\big)+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big(\kappa -(i+j)\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}(i\hspace{-0.1667em}+\hspace{-0.1667em}j)\big)\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big(\kappa -(i+j)\big)\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }{G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)+\kappa N-\mathbb{E}{S_{N}}.\end{aligned}\]To compute the limit in (30) there are two separate cases to examine: $\mathbb{E}{\mathcal{M}_{N}}\lt \infty $ and $\mathbb{E}{\mathcal{M}_{N}}=\infty $. If $\mathbb{E}{\mathcal{M}_{N}}\lt \infty $, then the limit in (30) is zero. However, this limit is zero even if $\mathbb{E}{\mathcal{M}_{N}}=\infty $. Indeed, if $\mathbb{E}{\mathcal{M}_{N}}=\infty $, then by using L’Hospital’s rule we get
see [19, Lem. 5]. Thus, the limit in (30) is zero, and the equality (11) in Theorem 1 follows.
\[\begin{aligned}{}& \underset{s\to {1^{-}}}{\lim }{G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)=\underset{s\to {1^{-}}}{\lim }\frac{{s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}}{1/{G^{\prime }_{{\mathcal{M}_{N}}}}(s)}\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }\frac{{({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}})^{\prime }}}{{(1/{G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{\prime }}}\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }\big(\kappa {s^{\kappa -1}}-{G^{\prime }_{{S_{N}}}}(s){s^{\kappa (1-N)}}-{G_{{S_{N}}}}(s)\kappa (1-N){s^{\kappa (1-N)-1}}\big)\frac{{({G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{2}}}{-{G^{\prime\prime }_{{\mathcal{M}_{N}}}}(s)}\\ {} & \hspace{1em}=(\kappa N-\mathbb{E}{S_{N}})\cdot 0=0,\end{aligned}\]
because
(31)
\[ \underset{s\to {1^{-}}}{\lim }\frac{{({G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{2}}}{{G^{\prime\prime }_{{\mathcal{M}_{N}}}}(s)}=0,\]It remains to prove the equalities in system (15). In short, every equality in system (15) is the corresponding equality from (27) expanded at $s=0$. Let us demonstrate the derivation of the first equality in (15) in detail and note that the remaining ones are derived analogously. We need to show that the first equality in (27)
implies (the first one in (15))
Equality (33) is implied by (32) because of the following equalities:
(32)
\[ {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)\]
\[ {m_{n}^{(1)}}{x_{0}^{(N)}}={m_{n-\kappa }^{(N)}}-{\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(N)}}-{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}},\hspace{1em}n=\kappa ,\hspace{0.1667em}\kappa +1,\dots ,\]
or, equivalently,
(33)
\[ {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}}={m_{n-\kappa }^{(N)}}-{\sum \limits_{i=0}^{n}}{m_{i}^{(1)}}{x_{n-i}^{(N)}},\hspace{1em}n=\kappa ,\hspace{0.1667em}\kappa +1,\dots \hspace{0.1667em}.\]
\[\begin{aligned}{}\frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\Bigg({\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)\Bigg){\bigg|_{s=0}}& ={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}},\\ {} \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big({s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)\big){\bigg|_{s=0}}& ={m_{n-\kappa }^{(N)}},\\ {} \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{\mathcal{M}_{1}}}}(s){G_{{X_{N}}}}(s)\big){\bigg|_{s=0}}& ={\sum \limits_{i=0}^{n}}{m_{i}^{(1)}}{x_{n-i}^{(N)}},\end{aligned}\]
when $n=\kappa ,\hspace{0.1667em}\kappa +1,\dots \hspace{0.1667em}$. The proof of Theorem 1 is finished. □Proof of Theorem 2.
Let us rewrite the system (27) as
and denote this system by $AB=C$. The determinant of the main matrix in (34) is
Thus, the main matrix in (34) is invertible for all s such that ${s^{\kappa N}}\ne {G_{{S_{N}}}}(s)$, and $B={A^{-1}}C$. Therefore, the previous observations and equality (18) imply
(34)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{s^{\kappa }}& -{G_{{X_{1}}}}(s)& 0& \dots & 0& 0\\ {} 0& {s^{\kappa }}& -{G_{{X_{2}}}}(s)& \dots & 0& 0\\ {} \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {} 0& 0& 0& \dots & {s^{\kappa }}& -{G_{{X_{N-1}}}}(s)\\ {} -{G_{{X_{N}}}}(s)& 0& 0& \dots & 0& {s^{\kappa }}\end{array}\right)\left(\begin{array}{c}{G_{{\mathcal{M}_{1}}}}(s)\\ {} {G_{{\mathcal{M}_{2}}}}(s)\\ {} \vdots \\ {} {G_{{\mathcal{M}_{N-1}}}}(s)\\ {} {G_{{\mathcal{M}_{N}}}}(s)\end{array}\right)\\ {} & \hspace{1em}=\left(\begin{array}{c}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}({s^{\kappa }}-{s^{i+j}})\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}({s^{\kappa }}-{s^{i+j}})\\ {} \vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}({s^{\kappa }}-{s^{i+j}})\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}({s^{\kappa }}-{s^{i+j}})\end{array}\right)\end{aligned}\]
\[ \Xi (s)=\frac{{G_{{\mathcal{M}_{1}}}}(s)}{1-s}=\frac{1}{{s^{\kappa N}}-{G_{{S_{N}}}}(s)}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\boldsymbol{M}_{11}}& {\boldsymbol{M}_{21}}& \dots & {\boldsymbol{M}_{N1}}\end{array}\right)\frac{C}{1-s},\]
where ${\boldsymbol{M}_{11}},\hspace{0.1667em}{\boldsymbol{M}_{21}},\dots ,{\boldsymbol{M}_{N1}}$ are the minors of A and C is the column vector of the right-hand side of (34). □Proof of Theorem 3.
We first show that $\mathbb{E}{S_{N}}\gt \kappa N$ implies $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. The recurrence (5) yields
where ${\mu _{i}}(u)$ for each $i\in \{1,\hspace{0.1667em}2,\dots ,\kappa (N-1)\}$ are coefficients consisting of products of the probability mass functions of the random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$. For instance, if $N=2$ and $\kappa =1$, then
We now change the order of summation in (36),
Clearly, the definition of the survival probability (4) implies that $\varphi (u)$ is a nondecreasing function, i.e. $\varphi (u)\leqslant \varphi (u+1)$ for all $u\in {\mathbb{N}_{0}}$. Thus, there exists a nonnegative limit $\varphi (\infty ):={\lim \nolimits_{u\to \infty }}\varphi (u)$ and $\varphi (\infty )=1$ if the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$ holds, see Lemma 1. We now let $v\to \infty $ in both sides of (37). For the first sum in (37) we obtain
and for the second
Indeed, recall that $\mathbb{E}X={\textstyle\sum _{i=0}^{\infty }}\mathbb{P}(X\gt i)$ for any nonnegative integer-valued random variable X. Then, the upper bound of (39) is
If $\mathbb{E}{S_{N}}\gt \kappa N$, then the left-hand side of (40) is nonpositive while its right-hand side is nonnegative; hence $\varphi (\infty )=0$ and consequently $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. Thus, survival is impossible if $\mathbb{E}{S_{N}}\gt \kappa N$.
(35)
\[\begin{aligned}{}& \varphi (u)\\ {} & =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa (N-1)-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-28.45274pt}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\cdots \mathbb{P}({X_{N}}={i_{N}})\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & ={\sum \limits_{i=1}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)\\ {} & \hspace{1em}-\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} u+\kappa (N-1)\leqslant {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-28.45274pt}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & \hspace{1em}\hspace{1em}\vdots \\ {} & \hspace{1em}-\sum \limits_{\substack{u+\kappa \leqslant {i_{1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}\leqslant u+\kappa N-1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{0.0pt}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & ={\sum \limits_{i=1}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{i=1}^{\kappa (N-1)}}{\mu _{i}}(u)\varphi (i),\end{aligned}\]
\[ \varphi (u)=\sum \limits_{\substack{{i_{1}}\leqslant u\\ {} {i_{1}}+{i_{2}}\leqslant u+1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\varphi (u+2-{i_{1}}-{i_{2}})={\sum \limits_{i=1}^{u+2}}{s_{u+2-i}^{(2)}}\varphi (i)-{x_{u+1}^{(1)}}{x_{0}^{(2)}}\varphi (1).\]
If ${\mu _{0}}(u):={s_{u+\kappa N}^{(N)}}$ and ${\mu _{j}}(u):=0$ when $j\gt \kappa (N-1)$, then the equality in (35) is
\[ \varphi (u)={\sum \limits_{i=0}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{i=0}^{\kappa N-1}}{\mu _{i}}(u)\varphi (i).\]
By summing up both sides of the last equality over u, which varies from 0 to some natural and sufficiently large v, we obtain
(36)
\[ {\sum \limits_{u=0}^{v}}\varphi (u)={\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{\kappa N-1}}{\mu _{i}}(u)\varphi (i).\]
\[ {\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{u+\kappa N}}(\cdot )={\sum \limits_{i=0}^{\kappa N-1}}{\sum \limits_{u=0}^{v}}(\cdot )+{\sum \limits_{i=\kappa N}^{v+\kappa N}}{\sum \limits_{u=i-\kappa N}^{v}}(\cdot ),\]
and obtain
\[\begin{aligned}{}& {\sum \limits_{u=0}^{v+\kappa N}}\varphi (u)-{\sum \limits_{u=v+1}^{v+\kappa N}}\varphi (u)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=0}^{v}}{s_{u+\kappa N-i}^{(N)}}+{\sum \limits_{i=\kappa N}^{v+\kappa N}}\varphi (i){\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}-{\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=0}^{v}}{\mu _{i}}(u).\end{aligned}\]
Subtracting ${\textstyle\sum _{i=0}^{v+\kappa N}}\varphi (i){\textstyle\sum _{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}$ from both sides of the last equation and rearranging, we get
\[\begin{aligned}{}& {\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\Bigg(1-{\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}\Bigg)-{\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{v}}{s_{u+\kappa N-i}^{(N)}}-{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg)-{\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}\end{aligned}\]
or
\[\begin{aligned}{}& {\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)-{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\Bigg(1-{\sum \limits_{u=0}^{v+\kappa N-i}}{s_{u}^{(N)}}\Bigg)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{\kappa N-i-1}}{s_{u}^{(N)}}+{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg),\end{aligned}\]
which implies
(37)
\[\begin{aligned}{}\hspace{-14.22636pt}& {\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)-{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg(\mathbb{P}({S_{N}}\leqslant \kappa N-i-1)+{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg).\end{aligned}\](38)
\[ \underset{v\to \infty }{\lim }{\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)=\underset{v\to \infty }{\lim }\big(\varphi (v+1)+\cdots +\varphi (v+\kappa N)\big)=\varphi (\infty )\kappa N,\](39)
\[ \underset{v\to \infty }{\lim }{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)=\varphi (\infty )\mathbb{E}{S_{N}}.\]
\[\begin{aligned}{}& \underset{v\to \infty }{\lim }{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\leqslant \underset{v\to \infty }{\lim }\varphi (v+\kappa N){\sum \limits_{i=0}^{v+\kappa N}}\mathbb{P}({S_{N}}\hspace{-0.1667em}\gt \hspace{-0.1667em}v\hspace{-0.1667em}+\hspace{-0.1667em}\kappa N-i)\\ {} & \hspace{1em}=\underset{v\to \infty }{\lim }\varphi (v+\kappa N){\sum \limits_{i=0}^{v+\kappa N}}\mathbb{P}({S_{N}}\gt i)=\varphi (\infty )\mathbb{E}{S_{N}},\end{aligned}\]
while the matching lower bound follows from the inequality
\[\begin{aligned}{}& {\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{M}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)+{\sum \limits_{i=M+1}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}\geqslant \underset{i\geqslant M+1}{\inf }\varphi (i){\sum \limits_{i=0}^{v+\kappa N-M-1}}\mathbb{P}({S_{N}}\gt i),\end{aligned}\]
where M is a fixed and sufficiently large natural number. Thus, letting $v\to \infty $, the equality in (37) becomes
(40)
\[ \varphi (\infty )(\kappa N-\mathbb{E}{S_{N}})={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg(\mathbb{P}({S_{N}}\leqslant \kappa N-i-1)+{\sum \limits_{u=0}^{\infty }}{\mu _{i}}(u)\Bigg).\]Let us now consider the case when $\mathbb{E}{S_{N}}=\kappa N$ and $\mathbb{P}({S_{N}}=\kappa N)\lt 1$. Then at least one of the probabilities ${s_{0}^{(N)}},\hspace{0.1667em}{s_{1}^{(N)}},\dots ,{s_{\kappa N-1}^{(N)}}$ is positive, because otherwise $\mathbb{E}{S_{N}}\gt \kappa N$. Then, from (40),
(41)
\[ {\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{\kappa N-i-1}}{s_{u}^{(N)}}+{\sum \limits_{u=0}^{\infty }}{\mu _{i}}(u)\Bigg)=0.\]If ${s_{0}^{(N)}}\gt 0$, then (41) implies $\varphi (0)=\varphi (1)=\cdots =\varphi (\kappa N-1)=0$, and recurrence (5) then shows that $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. If ${s_{0}^{(N)}}=0$ and ${s_{1}^{(N)}}\gt 0$, then (41) implies that $\varphi (0)=\varphi (1)=\cdots =\varphi (\kappa N-2)=0$ and, once again, recurrence (5) shows that $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. Arguing in the same way, we proceed up to the case ${s_{0}^{(N)}}={s_{1}^{(N)}}=\cdots ={s_{\kappa N-2}^{(N)}}=0,\hspace{0.1667em}{s_{\kappa N-1}^{(N)}}\gt 0$ and observe that in all of these cases (41) and recurrence (5) yield $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
Finally, let us consider the case when $\mathbb{P}({S_{N}}=\kappa N)=1$. If ${S_{N}}=\kappa N$ with probability one, the random variables ${X_{1}},\hspace{0.1667em}{X_{2}},\dots ,{X_{N}}$ are degenerate, meaning that ${X_{i}}\equiv {c_{i}}$ for all $i\in \{1,\hspace{0.1667em}2,\dots ,N\}$, where ${c_{i}}\in \{0,\hspace{0.1667em}1,\dots ,\kappa N\}$ and ${c_{1}}+{c_{2}}+\cdots +{c_{N}}=\kappa N$. Thus, the model (1) becomes completely deterministic. Moreover, ${W_{u}}(n)={W_{u}}(n+N)$ for all $n\in {\mathbb{N}_{0}}$, so it is sufficient to check whether the lowest value among ${W_{u}}(1),\dots ,{W_{u}}(N)$ is positive. □
7 Numerical examples
In this section, we illustrate the applicability of the theorems formulated in Section 3. All the necessary computations in this section are performed using Wolfram Mathematica [22]. Notice that some of the examples considered here also appear in [1, Sec. 4], where the ultimate time survival probability was obtained by computing the limits of certain recurrent sequences. Therefore, in some examples we check whether the obtained values of $\varphi (u)$ match those previously derived by different methods.
We say that a random variable X is distributed according to the shifted Poisson distribution $\mathcal{P}(\lambda ,\hspace{0.1667em}\xi )$ with parameters $\lambda \gt 0$ and $\xi \in {\mathbb{N}_{0}}$, if
\[ \mathbb{P}(X=k)=\frac{{\lambda ^{k-\xi }}}{(k-\xi )!}{e^{-\lambda }},\hspace{1em}k=\xi ,\hspace{0.1667em}\xi +1,\dots \hspace{0.1667em}.\]
One can check the following facts for the shifted Poisson distribution:
(42)
\[ \mathbb{E}X=\lambda +\xi ,\]
(43)
\[ {G_{X}}(s)=\mathbb{E}{s^{X}}={s^{\xi }}{e^{\lambda (s-1)}}.\]
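Both the mean $\lambda +\xi $ and the probability-generating function ${s^{\xi }}{e^{\lambda (s-1)}}$ are easy to confirm numerically; the sketch below (plain Python, truncating the infinite sums, with example parameters) checks them at one point:

```python
import math

lam, xi = 0.9, 1        # example parameters of the shifted Poisson P(lambda, xi)

def pmf(k):
    # P(X = k) = e^{-lam} * lam^(k - xi) / (k - xi)!, for k = xi, xi + 1, ...
    return math.exp(-lam) * lam ** (k - xi) / math.factorial(k - xi)

ks = range(xi, xi + 60)                 # the tail beyond 60 terms is negligible
mean = sum(k * pmf(k) for k in ks)      # should equal lam + xi
s = 0.5
pgf = sum(s ** k * pmf(k) for k in ks)  # should equal s^xi * e^{lam (s - 1)}
print(mean, pgf)
```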
Example 1.
Let $\kappa =2$ and consider the bi-seasonal ($N=2$) discrete time risk model (1) where ${X_{1}}\sim \mathcal{P}(1,\hspace{0.1667em}0)$ and ${X_{2}}\sim \mathcal{P}(2,\hspace{0.1667em}0)$. We set up the survival probability-generating function $\Xi (s)$ and compute $\varphi (u)$, when $u=0,1,\dots ,15$.
Let us observe that in the considered example the net profit condition is satisfied: $\mathbb{E}{S_{2}}=3\lt 4$. Solving the equation ${G_{{S_{2}}}}(s)={e^{3(s-1)}}={s^{4}}$ when $s\in {\overline{S}_{1}}(0)\setminus \{1\}$, we get ${\alpha _{1}}:=-0.3605,{\alpha _{2}}:=-0.1294+0.4087i,{\alpha _{3}}:=-0.1294-0.4087i$. Since all of the solutions ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\hspace{0.1667em}{\alpha _{3}}$ are simple and none of them equals 0, following the description beneath Theorem 1 in Section 3, we set up the matrices ${\boldsymbol{M}_{1}},{\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$:
\[\begin{array}{l}\displaystyle {\boldsymbol{M}_{1}}=\left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(2)}}{\alpha _{1}}+{x_{0}^{(2)}}({\alpha _{1}}+1)& {x_{0}^{(2)}}{\alpha _{1}}\\ {} {x_{1}^{(2)}}{\alpha _{2}}+{x_{0}^{(2)}}({\alpha _{2}}+1)& {x_{0}^{(2)}}{\alpha _{2}}\\ {} {x_{1}^{(2)}}{\alpha _{3}}+{x_{0}^{(2)}}({\alpha _{3}}+1)& {x_{0}^{(2)}}{\alpha _{3}}\\ {} {x_{1}^{(2)}}+2{x_{0}^{(2)}}& {x_{0}^{(2)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{M}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(1)}}{\alpha _{1}}+{x_{0}^{(1)}}({\alpha _{1}}+1)& {x_{0}^{(1)}}{\alpha _{1}}\\ {} {x_{1}^{(1)}}{\alpha _{2}}+{x_{0}^{(1)}}({\alpha _{2}}+1)& {x_{0}^{(1)}}{\alpha _{2}}\\ {} {x_{1}^{(1)}}{\alpha _{3}}+{x_{0}^{(1)}}({\alpha _{3}}+1)& {x_{0}^{(1)}}{\alpha _{3}}\\ {} {x_{1}^{(1)}}+2{x_{0}^{(1)}}& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{G}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c}\frac{{G_{{X_{2}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}\\ {} \frac{{G_{{X_{2}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}\\ {} \frac{{G_{{X_{2}}}}({\alpha _{3}})}{{\alpha _{3}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{3}})}{{\alpha _{3}^{2}}}\\ {} 1& 1\end{array}\right)\end{array}\]
and the system
\[ {\left(\begin{array}{c@{\hskip10.0pt}c}{\boldsymbol{M}_{1}}& {\boldsymbol{M}_{2}}\circ {\boldsymbol{G}_{2}}\end{array}\right)_{4\times 4}}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0\\ {} 0\\ {} 1\end{array}\right),\]
which implies ${m_{0}^{(1)}}=0.6501$, ${m_{1}^{(1)}}=0.1395$, ${m_{0}^{(2)}}=0.5083$, ${m_{1}^{(2)}}=0.1855$. Then, $\varphi (1)={m_{0}^{(1)}}=0.6501$ and $\varphi (2)={m_{0}^{(1)}}+{m_{1}^{(1)}}=0.7896$. We can then use the system (15) to find ${m_{2}^{(1)}},{m_{3}^{(1)}},\dots \hspace{0.1667em}$, and consequently $\varphi (3),\varphi (4),\dots $ due to equality (8). In the considered case, the system (15) is
\[ \left\{\begin{array}{l}{m_{n}^{(2)}}=\bigg({m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(1)}}{1_{\{n=2\}}}\bigg)/{x_{0}^{(1)}}\hspace{1em}\\ {} {m_{n}^{(1)}}=\bigg({m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(2)}}{1_{\{n=2\}}}\bigg)/{x_{0}^{(2)}}\hspace{1em}\end{array}\right.,\]
$n=2,\hspace{0.1667em}3,\dots \hspace{0.1667em}$. Having $\varphi (1)$, $\varphi (2)$, $\varphi (3)$ and $\varphi (4)$, we use recurrence (5) to find $\varphi (0)$:
\[\begin{aligned}{}\varphi (0)& =\sum \limits_{\substack{{i_{1}}\leqslant 1\\ {} {i_{1}}+{i_{2}}\leqslant 3}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (4-{i_{1}}-{i_{2}})\\ {} & ={x_{0}^{(1)}}{x_{0}^{(2)}}\varphi (4)+\big({x_{0}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{0}^{(2)}}\big)\varphi (3)+\big({x_{0}^{(1)}}{x_{2}^{(2)}}+{x_{1}^{(1)}}{x_{1}^{(2)}}\big)\varphi (2)\\ {} & \hspace{1em}+\big({x_{0}^{(1)}}{x_{3}^{(2)}}+{x_{1}^{(1)}}{x_{2}^{(2)}}\big)\varphi (1).\end{aligned}\]
Let us recall that the recurrence (5) can be used to compute $\varphi (u)$ when $u\geqslant 5$. We provide the obtained survival probabilities in Table 1.
The provided values of $\varphi (u)$ match the ones given in [1, Table 1], where they were obtained by a different method.
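The reported roots can also be verified directly: each should lie inside the unit circle and annihilate ${G_{{S_{2}}}}(s)-{s^{4}}$ up to the four-digit rounding of the printed roots. A minimal check:

```python
import cmath

# roots of e^{3(s-1)} = s^4 inside the unit circle, as reported above
roots = [-0.3605, -0.1294 + 0.4087j, -0.1294 - 0.4087j]

residuals = [abs(cmath.exp(3 * (a - 1)) - a ** 4) for a in roots]
assert all(abs(a) < 1 for a in roots)
print(residuals)   # all tiny, limited only by the rounding of the printed roots
```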
Table 1.
Survival probabilities for $\kappa =2$, $N=2$, ${X_{1}}\sim \mathcal{P}(1,0)$ and ${X_{2}}\sim \mathcal{P}(2,0)$
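A crude Monte Carlo simulation of the bi-seasonal model (a sketch assuming numpy; a finite horizon of 400 periods approximates the ultimate time probability well here, since the safety loading is substantial) reproduces the value $\varphi (1)=0.6501$ obtained above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T, kappa, u = 100_000, 400, 2, 1
lam = (1.0, 2.0)               # seasonal Poisson parameters: X1 ~ P(1,0), X2 ~ P(2,0)

w = np.full(n_paths, float(u))  # wealth W_u(n) of every simulated path
alive = np.ones(n_paths, dtype=bool)
for n in range(T):
    w += kappa - rng.poisson(lam[n % 2], n_paths)
    alive &= w >= 1             # ruin once the wealth drops to zero or below
est = alive.mean()
print(est)                      # should be close to phi(1) = 0.6501
```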
Based on Theorem 2, we now set up the survival probability-generating function $\Xi (s)$, i.e.
\[ \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big(\Xi (s)\big){\bigg|_{s=0}}=\varphi (n+1),\hspace{1em}n=0,1,\dots \hspace{0.1667em}.\]
So, having ${m_{0}^{(1)}}$, ${m_{1}^{(1)}}$, ${m_{0}^{(2)}}$, ${m_{1}^{(2)}}$ and omitting the elementary rearrangements, we get
Example 2.
Let us consider the model (1) when $\kappa =2$, ${X_{1}}\sim \mathcal{P}(1,\hspace{0.1667em}1)$ and ${X_{2}}\sim \mathcal{P}(9/10,\hspace{0.1667em}1)$. We find $\varphi (u)$ when $u=0,1,\dots ,50$ and set up the survival probability-generating function $\Xi (s)$.
According to (42) and (43), we check that the net profit condition is satisfied: $\mathbb{E}{S_{2}}=1+1+0.9+1=3.9\lt 4=\kappa N$. The probability-generating function of ${S_{2}}={X_{1}}+{X_{2}}$ is ${G_{{S_{2}}}}(s)={s^{2}}{e^{1.9(s-1)}}$ and the equation ${G_{{S_{2}}}}(s)={s^{4}}$ has one nonzero solution inside the unit circle: $\alpha =-0.2928$. Since ${x_{0}^{(1)}}={x_{0}^{(2)}}=0$, we use (11) and (13) to set up the system
(44)
\[ \left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(2)}}\alpha & \frac{{G_{{X_{2}}}}(\alpha )}{\alpha }{x_{1}^{(1)}}\\ {} {x_{1}^{(2)}}& {x_{1}^{(1)}}\end{array}\right)\times \left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{0}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0.1\end{array}\right).\]
It is easy to see that system (44), as in the previous example, can be expressed using matrices ${\boldsymbol{M}_{1}},{\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$:
\[ {\boldsymbol{M}_{1}}=\left(\begin{array}{c}{x_{1}^{(2)}}\alpha \\ {} {x_{1}^{(2)}}\end{array}\right),\hspace{0.1667em}{\boldsymbol{M}_{2}}=\left(\begin{array}{c}{x_{1}^{(1)}}\alpha \\ {} {x_{1}^{(1)}}\end{array}\right),\hspace{0.1667em}{\boldsymbol{G}_{2}}=\left(\begin{array}{c}\frac{{G_{{X_{2}}}}(\alpha )}{{\alpha ^{2}}}\\ {} 1\end{array}\right).\]
The system (44) implies ${m_{0}^{(1)}}=0.1270$, ${m_{0}^{(2)}}=0.1315$ and, consequently, $\varphi (1)={m_{0}^{(1)}}=0.1270$. To proceed with computing $\varphi (u)={\textstyle\sum _{i=0}^{u-1}}{m_{i}^{(1)}}$, $u\geqslant 2$, we use (15), which in this particular case is
\[ \left\{\begin{array}{l}{m_{n}^{(2)}}{x_{0}^{(1)}}={m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(1)}}{1_{\{n=2\}}}\hspace{1em}\\ {} {m_{n}^{(1)}}{x_{0}^{(2)}}={m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(2)}}{1_{\{n=2\}}}\hspace{1em}\end{array}\right.,\hspace{0.1667em}n=2,\hspace{0.1667em}3,\dots \]
or, equivalently,
\[ \left\{\begin{array}{l}{m_{n-1}^{(2)}}=\bigg({m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-2}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{m_{0}^{(2)}}{x_{1}^{(1)}}{1_{\{n=2\}}}\bigg)/{x_{1}^{(1)}}\hspace{1em}\\ {} {m_{n-1}^{(1)}}=\bigg({m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-2}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{m_{0}^{(1)}}{x_{1}^{(2)}}{1_{\{n=2\}}}\bigg)/{x_{1}^{(2)}}\hspace{1em}\end{array}\right.,\hspace{0.1667em}n=2,\hspace{0.1667em}3,\dots \hspace{0.1667em}.\]
Substituting $n=2,\hspace{0.1667em}3,\dots $ into the last two equations, we obtain ${m_{1}^{(1)}},{m_{2}^{(1)}},\dots \hspace{0.1667em}$. The survival probability $\varphi (0)$ is found using recurrence (5):
\[ \varphi (0)\hspace{-0.1667em}=\hspace{-0.1667em}\sum \limits_{\substack{{i_{1}}\leqslant 1\\ {} {i_{1}}+{i_{2}}\leqslant 3}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (4-{i_{1}}-{i_{2}})={x_{1}^{(1)}}{x_{1}^{(2)}}\varphi (2)+{x_{1}^{(1)}}{x_{2}^{(2)}}\varphi (1).\]
After completing all the necessary arithmetic, we get the survival probabilities provided in Table 2. Once again, the results in Table 2 match the ones presented in [1, Table 3], where the numbers were obtained differently, i.e. by computing limits of certain recurrent sequences.
Table 2.
Survival probabilities for $\kappa =2$, $N=2$, ${X_{1}}\sim \mathcal{P}(1,1)$ and ${X_{2}}\sim \mathcal{P}(9/10,1)$
u | 0 | 1 | 2 | 3 | 4 | 5 | 10 | 20 | 30 | 40 | 50 |
$\varphi (u)$ | 0.048 | 0.127 | 0.209 | 0.286 | 0.355 | 0.417 | 0.649 | 0.873 | 0.954 | 0.983 | 0.994 |
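Two quick consistency checks (a plain Python sketch): the recurrence for $\varphi (0)$ above, evaluated with the shifted Poisson probabilities and the tabulated $\varphi (1)$, $\varphi (2)$, reproduces $\varphi (0)=0.048$, and the reported root $\alpha =-0.2928$ indeed satisfies ${G_{{S_{2}}}}(\alpha )={\alpha ^{4}}$ up to rounding:

```python
import math

def x(lam, xi, k):
    # shifted Poisson probability P(X = k) for X ~ P(lam, xi)
    return math.exp(-lam) * lam ** (k - xi) / math.factorial(k - xi) if k >= xi else 0.0

phi1, phi2 = 0.127, 0.209   # phi(1), phi(2) taken from Table 2
phi0 = x(1, 1, 1) * x(0.9, 1, 1) * phi2 + x(1, 1, 1) * x(0.9, 1, 2) * phi1
print(round(phi0, 3))       # 0.048, as in Table 2

alpha = -0.2928             # the root reported above
residual = abs(alpha ** 2 * math.exp(1.9 * (alpha - 1)) - alpha ** 4)
print(residual)             # tiny, limited by the rounding of alpha
```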
Theorem 2 yields the following survival probability-generating function
Example 3.
Let us consider the bi-seasonal model (1) with $\kappa =3$ where claims are represented by two independent random variables ${X_{1}}$ and ${X_{2}}$, whose distributions are given in Table 3 and Table 4.
We find the survival probability $\varphi (u)$ for all $u\in {\mathbb{N}_{0}}$ and its generating function $\Xi (s)$.
It is easy to observe that $\mathbb{E}{S_{2}}=2.4\lt 6$. Thus, the net profit condition is valid. The probability-generating function of the sum ${X_{1}}+{X_{2}}$ is
Solving ${G_{{S_{2}}}}(s)={s^{6}}$, we obtain the following roots inside the unit circle:
\[ {\alpha _{1}}=-\frac{4}{11},\hspace{0.1667em}{\alpha _{2}}=-0.2250,\hspace{0.1667em}{\alpha _{3}}=-0.0154-0.7423i,\hspace{0.1667em}{\alpha _{4}}=-0.0154+0.7423i.\]
Note that the complex roots always occur in conjugate pairs due to ${G_{{S_{N}}}}(\overline{s})-{\overline{s}^{\kappa N}}=\overline{{G_{{S_{N}}}}(s)-{s^{\kappa N}}}$, where the overline denotes complex conjugation. According to Lemma 4, there must be one root of multiplicity two, and one may check that ${\alpha _{1}}$ is such a root. We then employ (14) to create the modified versions of ${\boldsymbol{M}_{1}}$, ${\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$. Let ${\tilde{\boldsymbol{M}}_{1}}$, ${\tilde{\boldsymbol{M}}_{2}}$ and ${\tilde{\boldsymbol{G}}_{2}}$ be
\[\begin{array}{l}\displaystyle {\tilde{\boldsymbol{M}}_{1}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\alpha _{1}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{1}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{1}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{1}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{1}^{2}}\\ {} {\alpha _{2}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{2}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{2}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{2}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{2}^{2}}\\ {} {\alpha _{3}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{3}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{3}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{3}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{3}^{2}}\\ {} {\alpha _{4}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{4}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{4}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{4}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{4}^{2}}\\ {} {F_{{X_{2}}}}(1)+2{\alpha _{1}}{F_{{X_{2}}}}(2)& {x_{0}^{(2)}}+2{\alpha _{1}}{F_{{X_{2}}}}(1)& 2{x_{0}^{(2)}}{\alpha _{1}}\\ {} {x_{2}^{(2)}}+2{x_{1}^{(2)}}+3{x_{0}^{(2)}}& {x_{1}^{(2)}}+2{x_{0}^{(2)}}& {x_{0}^{(2)}}\end{array}\right),\\ {} \displaystyle {\tilde{\boldsymbol{M}}_{2}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\alpha _{1}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{1}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{1}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{1}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{1}^{2}}\\ {} {\alpha _{2}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{2}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{2}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{2}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{2}^{2}}\\ {} {\alpha _{3}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{3}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{3}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{3}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{3}^{2}}\\ {} {\alpha _{4}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{4}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{4}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{4}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{4}^{2}}\\ {} {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}1}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}2}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}3}}\\ {} 
3{x_{0}^{(1)}}+2{x_{1}^{(1)}}+{x_{2}^{(1)}}& 2{x_{0}^{(1)}}+{x_{1}^{(1)}}& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}1}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}2}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}3}}\end{array}\right)=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\Big(\frac{{G_{{X_{2}}}}(s)}{{s^{2}}}\Big)^{\prime }}{\Big|_{s={\alpha _{1}}}}& {\Big(\frac{{G_{{X_{2}}}}(s)}{s}\Big)^{\prime }}{\Big|_{s={\alpha _{1}}}}& {G^{\prime }_{{X_{2}}}}(s){|_{s={\alpha _{1}}}}\end{array}\right)\\ {} \displaystyle \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\times \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(1)}}& 0& 0\\ {} {F_{{X_{1}}}}(1)& {x_{0}^{(1)}}& 0\\ {} {F_{{X_{1}}}}(2)& {F_{{X_{1}}}}(1)& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle {\tilde{\boldsymbol{G}}_{2}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}& {G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}& {G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}& {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}& {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}& {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}& {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}& {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}& {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}\\ {} 1& 1& 1\\ {} 1& 1& 1\end{array}\right).\end{array}\]
Then
\[ {\left(\begin{array}{c@{\hskip10.0pt}c}{\tilde{\boldsymbol{M}}_{1}}& {\tilde{\boldsymbol{M}}_{2}}\circ {\tilde{\boldsymbol{G}}_{2}}\end{array}\right)_{6\times 6}}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{2}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\\ {} {m_{2}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0\\ {} 0\\ {} 0\\ {} 0\\ {} 3.6\end{array}\right)\hspace{1em}\Rightarrow \hspace{1em}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{2}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\\ {} {m_{2}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0.9984\\ {} 0.0016\\ {} 0\\ {} 1\\ {} 0\\ {} 0\end{array}\right).\]
It follows that $\varphi (1)={m_{0}^{(1)}}=0.9984$, $\varphi (2)={m_{0}^{(1)}}+{m_{1}^{(1)}}=1$ and, consequently, $\varphi (u)=1$ for all $u\geqslant 2$. Therefore, by recurrence (5),
\[\begin{aligned}{}\varphi (0)& =\sum \limits_{\substack{{i_{1}}\leqslant 2\\ {} {i_{1}}+{i_{2}}\leqslant 5}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (6-{i_{1}}-{i_{2}})\\ {} & ={x_{0}^{(1)}}{x_{0}^{(2)}}\varphi (6)+\big({x_{0}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{0}^{(2)}}\big)\varphi (5)\\ {} & \hspace{1em}+\big({x_{0}^{(1)}}{x_{2}^{(2)}}+{x_{1}^{(1)}}{x_{1}^{(2)}}+{x_{2}^{(1)}}{x_{0}^{(2)}}\big)\varphi (4)\\ {} & \hspace{1em}+\big({x_{2}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{2}^{(2)}}\big)\varphi (3)+{x_{2}^{(1)}}{x_{2}^{(2)}}\varphi (2)=0.9728.\end{aligned}\]
The correctness of these results can be verified in the following way. If the initial surplus is $u=1$, ruin can occur only at the first moment of time and only if $1+3\cdot 1-{X_{1}}\leqslant 0$, i.e. ${X_{1}}=4$. Thus, $\varphi (1)=1-\mathbb{P}({X_{1}}=4)=1-0.0016=0.9984$. If the initial surplus is $u\geqslant 2$, then ruin never occurs. There are two reasons for that. First, at the first moment of time the insurer's wealth cannot drop below one. Moreover, every two periods the insurer earns 6 units of currency, and that is the maximum amount of claims that the insurer can suffer during two consecutive periods. The value of $\varphi (0)$ is also logical, as with no initial capital ruin can occur only if ${X_{1}}=3$ or ${X_{1}}=4$; thus $\varphi (0)=1-\mathbb{P}({X_{1}}=3)-\mathbb{P}({X_{1}}=4)=1-0.0256-0.0016=0.9728$.
The generating function of $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $ in the considered case is simply
One may verify that Theorem 2 produces the same result.
Example 4.
In the last example, we consider a ten-season model ($N=10$) with premium rate $\kappa =5$, and we assume the claims to be generated by independent random variables ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,\hspace{0.1667em}2,\dots ,10\}$, where $\mathcal{P}(\lambda ,\hspace{0.1667em}0)$ denotes the Poisson distribution with parameter λ. We compute both the finite time survival probability $\varphi (u,T)$ and the ultimate time survival probability $\varphi (u)$, and provide the ultimate time survival probability-generating function $\Xi (s)$.
Let us verify that the net profit condition is satisfied:
\[ \mathbb{E}{S_{10}}={\sum \limits_{k=1}^{10}}\bigg(\frac{k}{k+1}+4\bigg)=50-{\sum \limits_{k=1}^{10}}\frac{1}{k+1}=\frac{1330009}{27720}\lt 50=\kappa N.\]
We now apply Theorem 1. The equation
\[ {G_{{S_{10}}}}(s)={s^{50}}\]
has 49 simple roots inside the unit circle, depicted in Figure 2.
Fig. 2.
Roots of ${s^{50}}={G_{{S_{10}}}}(s)$, when ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,\hspace{0.1667em}2,\dots ,10\}$
Denoting these roots by ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{49}}$ we set up matrices ${\boldsymbol{M}_{1}}$, ${\boldsymbol{M}_{2}}$, $\dots \hspace{0.1667em}$, ${\boldsymbol{M}_{10}}$ and ${\boldsymbol{G}_{2}}$, ${\boldsymbol{G}_{3}}$, $\dots \hspace{0.1667em}$, ${\boldsymbol{G}_{10}}$:
\[\begin{array}{l}\displaystyle {\boldsymbol{M}_{1}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{10}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{10}}}}(j-1)& \dots & {x_{0}^{(10)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{10}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{10}}}}(j-1)& \dots & {x_{0}^{(10)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(10)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(10)}}(5-j-1)& \dots & {x_{0}^{(10)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{M}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{1}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{1}}}}(j-1)& \dots & {x_{0}^{(1)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{1}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{1}}}}(j-1)& \dots & {x_{0}^{(1)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(1)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(1)}}(5-j-1)& \dots & {x_{0}^{(1)}}\end{array}\right),\dots ,\\ {} \displaystyle {\boldsymbol{M}_{10}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{9}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{9}}}}(j-1)& \dots & {x_{0}^{(9)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{9}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{9}}}}(j-1)& \dots & {x_{0}^{(9)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(9)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(9)}}(5-j-1)& \dots & {x_{0}^{(9)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{G}_{2}}\hspace{-0.1667em}=\hspace{-0.1667em}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}}}({\alpha _{1}})}{{\alpha _{1}^{5}}}& \dots & \frac{{G_{{X_{10}}}}({\alpha _{1}})}{{\alpha _{1}^{5}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}}}({\alpha _{49}})}{{\alpha _{49}^{5}}}& \dots & \frac{{G_{{X_{10}}}}({\alpha _{49}})}{{\alpha _{49}^{5}}}\\ {} 1& \dots & 1\end{array}\right),{\boldsymbol{G}_{3}}\hspace{-0.1667em}=\hspace{-0.1667em}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{10}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{10}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{49}})}{{\alpha _{49}^{10}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{49}})}{{\alpha _{49}^{10}}}\\ {} 1& \dots & 1\end{array}\right),\dots ,\\ {} \displaystyle {\boldsymbol{G}_{10}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{1}})}{{\alpha _{1}^{45}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{1}})}{{\alpha _{1}^{45}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{49}})}{{\alpha _{49}^{45}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{49}})}{{\alpha _{49}^{45}}}\\ {} 1& \dots & 1\end{array}\right).\end{array}\]
Solving the system
(45)
\[ {\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\boldsymbol{M}_{1}}& {\boldsymbol{M}_{2}}\circ {\boldsymbol{G}_{2}}& \dots & {\boldsymbol{M}_{10}}\circ {\boldsymbol{G}_{10}}\end{array}\right)_{50\times 50}}{\left(\substack{{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} \vdots \\ {} {m_{4}^{(1)}}\\ {} \vdots \\ {} {m_{0}^{(10)}}\\ {} {m_{1}^{(10)}}\\ {} \vdots \\ {} {m_{4}^{(10)}}}\right)_{50\times 1}}={\left(\substack{0\\ {} 0\\ {} \vdots \\ {} 0\\ {} \frac{55991}{27720}}\right)_{50\times 1}},\]
we obtain ${m_{0}^{(1)}}=0.1821$, ${m_{1}^{(1)}}=0.0604$, ${m_{2}^{(1)}}=0.0583$, ${m_{3}^{(1)}}=0.0545$, ${m_{4}^{(1)}}=0.0504$. Therefore, using (8):
\[\begin{aligned}{}\varphi (1)& ={m_{0}^{(1)}}=0.1821,\\ {} \varphi (2)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}=0.2425,\\ {} \varphi (3)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}=0.3009,\\ {} \varphi (4)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}+{m_{3}^{(1)}}=0.3554,\\ {} \varphi (5)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}+{m_{3}^{(1)}}+{m_{4}^{(1)}}=0.4058.\end{aligned}\]
Employing system (15) we find the remaining probabilities ${m_{5}^{(1)}},{m_{6}^{(1)}},\dots \hspace{0.1667em}$:
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}{m_{n}^{(2)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(1)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(1)}}\\ {} \hspace{1em}& \hspace{-7.0pt}\hspace{2.5pt}\vdots \\ {} {m_{n}^{(10)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(9)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(10)}}{x_{n-i}^{(9)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(10)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(9)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(9)}}\\ {} {m_{n}^{(1)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(10)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(10)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(10)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(10)}}\end{array}\right.\]
$n=5,6,\dots $ . We substitute the obtained probabilities ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots $ into (8) and compute $\varphi (6),\varphi (7),\dots $ . Finally, $\varphi (0)$ can be found using (5):
\[ \varphi (0)=\sum \limits_{\substack{{i_{1}}\leqslant 4\\ {} {i_{1}}+{i_{2}}\leqslant 9\\ {} {i_{1}}+{i_{2}}+{i_{3}}\leqslant 14\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{10}}\leqslant 49}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\cdots \mathbb{P}({X_{10}}={i_{10}})\varphi \Bigg(50-{\sum \limits_{j=1}^{10}}{i_{j}}\Bigg).\]
The final results, including the finite time survival probabilities obtained via Theorem 4, rounded to three decimal places, are provided in Table 5.
Table 5.
Survival probabilities for $\kappa =5$, $N=10$, ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,2,\dots ,10\}$
T | $u=0$ | $u=1$ | $u=2$ | $u=3$ | $u=4$ | $u=5$ | $u=10$ | $u=20$ | $u=30$ |
1 | 0.532 | 0.703 | 0.831 | 0.913 | 0.960 | 0.983 | 1 | 1 | 1 |
2 | 0.424 | 0.587 | 0.727 | 0.831 | 0.902 | 0.946 | 0.999 | 1 | 1 |
3 | 0.368 | 0.520 | 0.657 | 0.767 | 0.849 | 0.906 | 0.995 | 1 | 1 |
4 | 0.332 | 0.474 | 0.606 | 0.717 | 0.804 | 0.869 | 0.988 | 1 | 1 |
5 | 0.306 | 0.440 | 0.567 | 0.677 | 0.766 | 0.834 | 0.979 | 1 | 1 |
10 | 0.235 | 0.343 | 0.450 | 0.548 | 0.635 | 0.708 | 0.921 | 0.998 | 1 |
20 | 0.200 | 0.294 | 0.389 | 0.478 | 0.558 | 0.629 | 0.863 | 0.990 | 1 |
30 | 0.179 | 0.264 | 0.350 | 0.432 | 0.507 | 0.575 | 0.814 | 0.979 | 0.999 |
∞ | 0.125 | 0.182 | 0.243 | 0.301 | 0.355 | 0.406 | 0.605 | 0.826 | 0.923 |
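The finite time rows of Table 5 are straightforward to cross-check by Monte Carlo simulation (a sketch assuming numpy); for instance, the $T=10$ row:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, kappa = 200_000, 10, 5
lams = [k / (k + 1) + 4 for k in range(1, 11)]   # Poisson rates of the ten seasons

def mc_phi(u):
    """Monte Carlo estimate of the finite time survival probability phi(u, T)."""
    w = np.full(n_paths, float(u))
    alive = np.ones(n_paths, dtype=bool)
    for n in range(T):
        w += kappa - rng.poisson(lams[n % 10], n_paths)
        alive &= w >= 1          # ruin once the wealth drops to zero or below
    return alive.mean()

p0, p5 = mc_phi(0), mc_phi(5)
print(p0, p5)    # close to 0.235 and 0.708 from the T = 10 row of Table 5
```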
Having ${m_{i}^{(j)}}$, $i=0,1,\dots ,4$, $j=1,\hspace{0.1667em}2,\dots ,10$, from system (45), the generating function of the survival probabilities $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $ is, for $s\in {S_{1}}(0)$ with ${e^{{a_{10}}(s-1)}}\ne {s^{50}}$,
\[\begin{array}{l}\displaystyle \Xi (s)=\frac{{u^{T}}(s)v(s)}{{e^{{a_{10}}(s-1)}}-{s^{50}}},\\ {} \displaystyle u(s)=\left(\begin{array}{c}{s^{45}}\\ {} {s^{40}}{e^{{a_{1}}(s-1)}}\\ {} {s^{35}}{e^{{a_{2}}(s-1)}}\\ {} \vdots \\ {} {s^{5}}{e^{{a_{8}}(s-1)}}\\ {} {e^{{a_{9}}(s-1)}}\end{array}\right),\hspace{1em}v(s)=\left(\begin{array}{c}{e^{-{\lambda _{1}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(2)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{1}^{l}}/l!\\ {} {e^{-{\lambda _{2}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(3)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{2}^{l}}/l!\\ {} \vdots \\ {} {e^{-{\lambda _{9}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(10)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{9}^{l}}/l!\\ {} {e^{-{\lambda _{10}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(1)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{10}^{l}}/l!\end{array}\right),\end{array}\]
where ${a_{n}}=4n+{\textstyle\sum _{k=0}^{n}}k/(k+1)$ and ${\lambda _{n}}=4+n/(n+1)$ when $n=1,\hspace{0.1667em}2,\dots ,10$.