We investigate an insurance model in which the amount of claims depends on the state of the insured person (healthy, ill, or dead) and the states form a Markov chain. A signed compound Poisson approximation is applied to the distribution of aggregate claims after n∈N periods. Accuracy of order O(n^{-1}) and O(n^{-1/2}) is obtained for the local and uniform norms, respectively. In a particular case, the estimates in total variation and the non-uniform estimates are shown to be at least of order O(n^{-1}). The characteristic function method is used. The results can be applied to estimate the probable loss of an insurer and to optimize the insurance premium.
This paper is motivated by an insurance model in which the insured is described by a random variable (rv) with three states (healthy, ill, dead), and the rvs form a Markov chain. We assume that the insurer pays one unit of money per period in the case of illness and d∈N units per period in the case of death. We are interested in the aggregate losses of the insurer after n∈N time periods. More precisely, let ξ0, ξ1, …, ξn, … be a non-stationary three-state {a1, a2, a3} Markov chain. State a1 corresponds to being healthy, state a2 corresponds to being ill, and state a3 is reached in the case of death. The insurer pays nothing for healthy policy holders, one unit of money for ill individuals, and d units of money (d∈N) in each period after death. We denote the distribution of Sn = f(ξ1) + ⋯ + f(ξn) (n∈N) by Fn, that is, P(Sn = m) = Fn{m} for m∈Z. Here f(a1) = 0, f(a2) = 1, f(a3) = d, d∈N. We analyze a slightly simplified model by assuming that the probability of a healthy person dying is equal to zero (i.e. we exclude the cases of sudden death). Even though this assumption diminishes the model's generality, it is quite reasonable, because usually a person is ill for at least one time period and dies only afterwards.
The matrix of transition probabilities P is defined in the following way
P = \begin{pmatrix} 1-\gamma & \gamma & 0 \\ 1-\alpha-\beta & \beta & \alpha \\ 0 & 0 & 1 \end{pmatrix}, \qquad \alpha, \beta, \gamma \in (0,1).
It is assumed that at the beginning the insured person is healthy. Hence, the initial distribution is given by
P(ξ0=a1)=π1=1,P(ξ0=a2)=π2=0,P(ξ0=a3)=π3=0.
Observe that our Markov chain contains one absorbing state (death).
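The chain just described is easy to simulate. The following sketch is illustrative only: the parameter values α = 0.3, β = 0.1, γ = 0.04 and d = 5 are hypothetical (they satisfy condition (1) below), and the function names are ours. It estimates the distribution of Sn by Monte Carlo; for instance, P(Sn = 0) must equal (1−γ)^n, the probability of remaining healthy for all n periods.

```python
import random

# Hypothetical parameter values (they satisfy condition (1) below).
ALPHA, BETA, GAMMA = 0.3, 0.1, 0.04  # P(ill->dead), P(ill->ill), P(healthy->ill)
D = 5                                # per-period payment after death

def simulate_S(n, rng):
    """One trajectory: xi_0 = healthy; returns S_n = f(xi_1) + ... + f(xi_n)."""
    state, total = "healthy", 0
    for _ in range(n):
        u = rng.random()
        if state == "healthy":
            state = "ill" if u < GAMMA else "healthy"
        elif state == "ill":
            if u < ALPHA:
                state = "dead"            # absorbing state
            elif u < ALPHA + BETA:
                state = "ill"
            else:
                state = "healthy"
        # payments: f(a1) = 0, f(a2) = 1, f(a3) = d
        total += {"healthy": 0, "ill": 1, "dead": D}[state]
    return total

rng = random.Random(1)
samples = [simulate_S(50, rng) for _ in range(20000)]
p0 = sum(s == 0 for s in samples) / len(samples)
print(p0)  # close to (1 - GAMMA)**50
```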
In this paper, we consider triangular arrays of rvs (the scheme of series), i.e. all transition probabilities α, β, γ can depend on n∈N. Arguably, in insurance models triangular arrays are more natural than the more frequently studied (and less general) scheme of sequences, in which it is assumed that the probabilities of becoming ill or dying do not change as time passes.
All results are obtained under the condition
0 < \beta \leqslant 0.15, \qquad 0 < \gamma \leqslant 0.05, \qquad \alpha \leqslant C_0 < 1, \qquad \alpha + \beta < 1.   (1)
Here C0∈(0,1) is any upper bound, strictly less than 1, for the possible values of α(n), n∈N, i.e. for the probability of an ill individual dying, over all time periods n∈N. Condition (1) is not very restrictive: β⩽0.15 means that the probability of remaining ill during the next time period does not exceed 15%, and γ⩽0.05 means that the probability of a healthy person becoming ill does not exceed 5%; that is, only chronic and epidemic illnesses are excluded.
We denote by C all positive absolute constants and by θ any complex number satisfying |θ|⩽1. The values of C and θ can vary from line to line, or even within the same line. Sometimes, as in (1), we supply constants with indices. Let Ik denote the distribution concentrated at an integer k∈Z, and set I = I0. Let MZ be the set of finite signed measures concentrated on Z. The Fourier transform and the analogue of the distribution function of M∈MZ are denoted by M̂(t) (t∈R) and M(x) := ∑_{j⩽x} M{j}, respectively. Similarly, Fn(x) := Fn{(−∞, x]}. For y∈R and j∈N = {1, 2, 3, …}, we set
\binom{y}{j} := \frac{1}{j!}\, y(y-1)\cdots(y-j+1), \qquad \binom{y}{0} := 1.
If N,M∈MZ, then products and powers of N and M are understood in the convolution sense, that is, for a set A⊆Z,
NM\{A\} = \sum_{k=-\infty}^{\infty} N\{A-k\}\,M\{k\}, \qquad M^0 = I.
The exponential of M is denoted by
e^M = \exp\{M\} := \sum_{k=0}^{\infty} \frac{1}{k!}\,M^k.
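These convolution operations are straightforward to realize numerically. The sketch below is illustrative only (measures are represented as dicts mapping integers to masses, a representation of our choosing): it computes convolution products and the truncated series for exp{M}, and checks the classical fact that exp{λ(I1 − I)} is the Poisson distribution with mean λ.

```python
import math
from collections import defaultdict

def conv(M, N):
    """Convolution of finite signed measures on Z, represented as dicts k -> mass."""
    out = defaultdict(float)
    for k1, m1 in M.items():
        for k2, m2 in N.items():
            out[k1 + k2] += m1 * m2
    return dict(out)

def exp_measure(M, terms=40):
    """exp{M} = sum_{k>=0} M^k / k!, truncated after `terms` convolution powers."""
    result = {0: 1.0}   # k = 0 term: the unit measure I = I_0
    power = {0: 1.0}
    for k in range(1, terms):
        power = conv(power, M)
        for j, m in power.items():
            result[j] = result.get(j, 0.0) + m / math.factorial(k)
    return result

# exp{lam*(I_1 - I)} is the Poisson(lam) distribution: e^{-lam} sum lam^k I_k / k!
lam = 2.0
P = exp_measure({1: lam, 0: -lam})
print(P[3])  # close to exp(-lam) * lam**3 / 6
```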
We define the local norm, the uniform (Kolmogorov) norm, and the total-variation norm of M respectively by
\|M\|_\infty := \sup_{k\in\mathbb{Z}}|M\{k\}|, \qquad |M|_K := \sup_{x\in\mathbb{R}}|M\{(-\infty, x]\}|, \qquad \|M\| := \sum_{j=-\infty}^{\infty}|M\{j\}|.
In the proofs, we apply the following well-known relations:
\widehat{MN}(t) = \hat M(t)\hat N(t), \qquad \|MN\| \leqslant \|M\|\,\|N\|, \qquad |MN|_K \leqslant \|M\|\,|N|_K, \qquad \|MN\|_\infty \leqslant \|M\|\,\|N\|_\infty, \qquad |\hat M(t)| \leqslant \|M\|, \qquad \hat I_a(t) = e^{ita}, \qquad \hat I(t) = 1.
Known results
The compound Poisson approximation is frequently used to approximate aggregate losses in risk models (see, for example, [5, 8, 9, 12, 14, 21]); however, in those models it is usually assumed that the rvs do not depend on the time period n∈N. The compound Poisson approximation to sums of Markov dependent rvs was investigated in [6]. Numerous papers have been devoted to the Markov binomial distribution; see [1, 3, 4, 7, 10, 18, 19] and the references therein. It seems, however, that the case of a Markov chain containing an absorbing state has not been considered so far. Our research is closely related to the paper [16], in which a non-stationary three-state symmetric Markov chain ξ0, ξ1, …, ξn, … was investigated with the matrix of transition probabilities
\begin{pmatrix} a & 1-2a & a \\ b & 1-2b & b \\ a & 1-2a & a \end{pmatrix}, \qquad a, b \in (0, 0.5).
Let S̃n = f̃(ξ1) + ⋯ + f̃(ξn) (n∈N), f̃(a1) = −1, f̃(a2) = 0, f̃(a3) = 1, and let the initial distribution be P(ξ0 = a1) = π1, P(ξ0 = a2) = π2, and P(ξ0 = a3) = π3. Denote the distribution of S̃n by F̃n, and let G̃ be the measure with Fourier transform
\hat{\tilde G}(t) = \left(\pi_1 + \frac{1-2a\cos t}{1-2a}\,\pi_2 + \pi_3\right)\frac{1-2(a-b)}{1-2(a-b)-2a(\cos t-1)}\,\exp\left\{\frac{2nb(1-2a)(\cos t-1)}{(1-2a+2b)(1-2a\cos t)}\right\}.
As shown in [16], if a,b⩽1/30, then
\|\tilde F_n - \tilde G\| \leqslant C\left(\min\left\{\frac{1}{n}, b\right\} + 0.2^n\,|a-b|\right).   (2)
The main part of the approximation G˜ is a compound Poisson distribution with a compounding symmetrized geometric distribution. The accuracy of approximation is at least O(n−1). However, due to the symmetry of distribution and possible negative values, it is difficult to find a compatible insurance model.
Measures used for approximation
For convenience, we present the Fourier transforms of all measures used in the construction of the approximations in a separate table (Table 1). Note that each measure is denoted by the same capital letter as its Fourier transform (for example, Ĥ(t) is the Fourier transform of H).
The measures can be easily found from their Fourier transforms using the formula
M\{k\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\,\hat M(t)\,dt \qquad \text{for all } k\in\mathbb{Z}.
For example,
\hat H(t) = \frac{(1-\beta)e^{it}}{1-\beta e^{it}}.
Since Iˆa(t)=eita, for all k∈Z we have
H\{k\} = \frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-ikt}\,\frac{(1-\beta)e^{it}}{1-\beta e^{it}}\,dt = \frac{1-\beta}{2\pi}\int_{-\pi}^{\pi}e^{-ikt}e^{it}\sum_{j=0}^{\infty}\big(\beta e^{it}\big)^j\,dt = (1-\beta)\sum_{j=0}^{\infty}\beta^j\,\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-ikt}e^{(j+1)it}\,dt = (1-\beta)\sum_{j=0}^{\infty}\beta^j\,I_{j+1}\{k\}.
The other measures can be calculated analogously using their Fourier transforms presented in Table 1.
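The inversion formula above can also be checked numerically. In the following sketch (β = 0.1 is an illustrative value; names are ours) Ĥ(t) is inverted with the rectangle rule, which is highly accurate for smooth periodic integrands, and the result is compared with the geometric weights (1−β)β^{k−1} implied by H = (1−β)∑_{j⩾0} β^j I_{j+1}.

```python
import cmath, math

BETA = 0.1  # illustrative value satisfying condition (1)

def H_hat(t):
    """Fourier transform of H."""
    z = cmath.exp(1j * t)
    return (1 - BETA) * z / (1 - BETA * z)

def invert(M_hat, k, nodes=4096):
    """M{k} = (1/(2*pi)) * integral of e^{-ikt} M_hat(t) over [-pi, pi] (rectangle rule)."""
    h = 2 * math.pi / nodes
    s = sum(cmath.exp(-1j * k * (-math.pi + j * h)) * M_hat(-math.pi + j * h)
            for j in range(nodes))
    return (s * h / (2 * math.pi)).real

for k in (1, 2, 5):
    print(k, invert(H_hat, k), (1 - BETA) * BETA ** (k - 1))  # the two numbers agree
```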
We analyze the scheme of series, in which the transition probabilities may differ from one time period to another, that is, they depend on n∈N: α = α(n), β = β(n), γ = γ(n). First we formulate a general approximation result for Fn in which the possible smallness of α and γ is taken into account.
Theorem 1. Let condition (1) hold. Then, for all n = 1, 2, …,
|F_n - (G^nV + E)|_K \leqslant C(d+1)\left(e^{-Cn\gamma\alpha}\sqrt{\frac{\gamma}{n}} + (\beta+4\gamma)^n\right),   (3)
\|F_n - (G^nV + E)\|_\infty \leqslant C(d+1)\left(\frac{e^{-Cn\gamma\alpha}}{n} + (\beta+4\gamma)^n\right).   (4)
Observe that, since β+4γ⩽0.35, the second term in (4) tends to zero exponentially.
Unlike (2), there are two components in our approximation: the first contains the n-fold convolution of a signed compound Poisson measure, and the second takes into account the probability of death (the absorbing state). The measures of approximation are chosen so as to ensure that the accuracy of approximation is at least as good as in the Berry–Esseen theorem.
Corollary 1. Let condition (1) hold. Then, for all n = 1, 2, …,
|F_n - (G^nV + E)|_K \leqslant \frac{C(d+1)}{\sqrt{n}}.
This accuracy is reached when αγ = O(n^{-1}). If α, γ ⩾ C1 > 0, the accuracy of approximation is exponentially sharp. That prompts a question: is it possible to simplify the structure of the approximation by imposing more restrictive assumptions? The answer is positive if α is uniformly separated from zero for all n.
Theorem 2. Let condition (1) hold and α ⩾ C2. Then, for all n = 1, 2, …,
|F_n - (G_1^nV_1 + E)|_K \leqslant C(d+1)\left(\gamma e^{-Cn\gamma} + (\beta+4\gamma)^n\right).   (5)
Observe that the accuracy of approximation in (5) is at least of order O(n−1). This accuracy is reached if γ=O(n−1).
If both probabilities are uniformly separated from zero, Fn is exponentially close to the measure E.
Theorem 3. Let condition (1) hold and α, γ ⩾ C2. Then, for all n = 1, 2, …,
\|F_n - E\| \leqslant C(d+1)\,e^{-Cn}.   (6)
Observe that, if the scheme of sequences is analyzed, the probabilities do not depend on n; hence, the conditions of Theorem 3 are satisfied whenever condition (1) holds. Note also that Theorem 3 uses the stronger total variation norm.
Theorem 4. Let condition (1) hold and α ⩾ C2. Then, for all n = 1, 2, …,
\|F_n - (G_1^nV_2 + E)\| \leqslant C(d+1)\left(\gamma e^{-Cn\gamma}(1+\beta/\gamma) + n(\beta+4\gamma)^n\right).   (7)
Corollary 2. Let condition (1) hold and α ⩾ C2. Then, for all n = 1, 2, …,
\|F_n - (G_1^nV_2 + E)\| \leqslant C(d+1)\,\frac{e^{-Cn\gamma}}{n}\left(1+\frac{\beta}{\gamma}\right).   (8)
The local estimates in Theorems 2, 3, and 4 have the same order as in (5), (6), and (7); hence, we do not formulate them separately.
In insurance models, tail probabilities are very important; see, for example, [11, 17, 20]. Therefore, we formulate some non-uniform estimates for the case when α is uniformly separated from zero.
Theorem 5. Let condition (1) hold and α ⩾ C2. Then, for any integer k ⩾ 1 and n∈N,
|F_n\{k\} - (G_1^nV_2 + E)\{k\}| \leqslant C(d+1)\,e^{-Cn\gamma}\,\frac{\beta+\gamma}{n(\beta+(k+1)\gamma)},   (9)
|F_n(k) - (G_1^nV_2 + E)(k)| \leqslant Cd^2\,\frac{e^{-Cn\gamma}}{n(1+k\gamma^2)}.   (10)
The non-uniform estimate (10) for distribution functions is quite inaccurate if γ is small. On the other hand, the local non-uniform estimate is at least of order O(n^{-1}k^{-1}) when β is of the same order as γ.
When γ is uniformly separated from zero and α is small, estimate (4) cannot be simplified.
Auxiliary results
We begin with inversion inequalities.
Lemma 1. Let M∈MZ. Then
|M|_K \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\hat M(t)|}{|e^{it}-1|}\,dt,   (11)
\|M\|_\infty \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}|\hat M(t)|\,dt.   (12)
If, in addition, \sum_{k\in\mathbb{Z}}|k|\,|M\{k\}| < \infty, then, for any a∈R and b > 0,
\|M\| \leqslant (1+b\pi)^{1/2}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}\left(|\hat M(t)|^2 + \frac{1}{b^2}\big|\big(e^{-ita}\hat M(t)\big)'\big|^2\right)dt\right)^{1/2},   (13)
|k-a|\,|M\{k\}| \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\big(\hat M(t)e^{-ita}\big)'\big|\,dt,   (14)
|k-a|\,|M(k)| \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|\left(\frac{\hat M(t)}{e^{-it}-1}\,e^{-ita}\right)'\right|dt.   (15)
Observe that (11) and (15) are trivial if the integrals on the right-hand sides are infinite. All inequalities are well known and can be found in [2], Sections 6.1 and 6.2; see also [13] and Lemma 3.3 in [15].
The characteristic function method is used for the analysis of the model. Therefore our next step is to obtain Fˆn(t).
Lemma 2. Let condition (1) hold. Then the characteristic function F̂n(t) can be expressed in the following way:
\hat F_n(t) = \hat\Lambda_1^n(t)\hat W_1(t) + \hat\Lambda_2^n(t)\hat W_2(t) + \hat\Lambda_3^n(t)\hat W_3(t).   (16)
The characteristic function Fˆn(t) can be written as follows, see [16]:
\hat F_n(t) = (\pi_1, \pi_2, \pi_3)\left(\hat\Lambda_1^n(t)\,\vec y_1\vec z_1^{\,T} + \hat\Lambda_2^n(t)\,\vec y_2\vec z_2^{\,T} + \hat\Lambda_3^n(t)\,\vec y_3\vec z_3^{\,T}\right)(1, 1, 1)^T.   (17)
Expression (16) is known as Perron's formula. A similar expression was used for the Markov binomial distribution; see, for example, [3]. Here Λ̂j(t) (j = 1, 2, 3) are the eigenvalues of the matrix
\tilde P(t) = \begin{pmatrix} 1-\gamma & \gamma e^{it} & 0 \\ 1-\alpha-\beta & \beta e^{it} & \alpha e^{dit} \\ 0 & 0 & e^{dit} \end{pmatrix}.
We find the eigenvalues by solving the following equation:
\big|\tilde P(t) - \hat\Lambda(t)I\big| = 0.
It is not difficult to prove that
\hat\Lambda_{1,2}^2(t) - \hat\Lambda_{1,2}(t)\big(1-\gamma+\beta e^{it}\big) + e^{it}\big(\beta-\gamma(1-\alpha)\big) = 0,   (18)
and
edit−Λˆ3(t)=0.
Hence,
\hat\Lambda_{1,2}(t) = \frac{1-\gamma+\beta e^{it} \pm \hat D^{1/2}(t)}{2}, \qquad \hat D(t) = \big(1-\gamma+\beta e^{it}\big)^2 - 4e^{it}\big(\beta-\gamma(1-\alpha)\big), \qquad \hat\Lambda_3(t) = e^{dit}.
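These closed forms can be sanity-checked numerically: for any fixed t, the three values above must be roots of |P̃(t) − Λ̂(t)I| = 0. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
import cmath

# Illustrative parameter values; beta and gamma satisfy condition (1).
ALPHA, BETA, GAMMA, D = 0.3, 0.1, 0.04, 5
t = 0.7
z, zd = cmath.exp(1j * t), cmath.exp(1j * D * t)

P_tilde = [[1 - GAMMA,        GAMMA * z, 0],
           [1 - ALPHA - BETA, BETA * z,  ALPHA * zd],
           [0,                0,         zd]]

def char_poly(lam):
    """det(P_tilde - lam * I) for the 3x3 matrix above (cofactor expansion)."""
    a = [[P_tilde[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

# Closed-form eigenvalues: the two roots of the quadratic, and e^{idt}.
D_hat = (1 - GAMMA + BETA * z) ** 2 - 4 * z * (BETA - GAMMA * (1 - ALPHA))
lam1 = (1 - GAMMA + BETA * z + cmath.sqrt(D_hat)) / 2
lam2 = (1 - GAMMA + BETA * z - cmath.sqrt(D_hat)) / 2
for lam in (lam1, lam2, zd):
    print(abs(char_poly(lam)))  # each value is ~0
```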
Eigenvectors y→j and z→j are obtained by solving the following system of equations:
\tilde P(t)\,\vec y_j = \hat\Lambda_j(t)\,\vec y_j, \qquad \vec z_j^{\,T}\tilde P(t) = \hat\Lambda_j(t)\,\vec z_j^{\,T}, \qquad \vec z_j^{\,T}\vec y_j = 1.   (19)
From the first equation of system (19) we get that yj,3=0, hence the other two equations are equivalent because of equation (18). Therefore,
\vec y_j^{\,T} = \left(y_{j,1},\ \frac{1-\alpha-\beta}{\hat\Lambda_j(t)-\beta e^{it}}\,y_{j,1},\ 0\right), \qquad j = 1, 2.   (20)
Similarly, from the second equation of system (19) we get
\vec z_j^{\,T} = \left(z_{j,1},\ \frac{\hat\Lambda_j(t)-(1-\gamma)}{1-\alpha-\beta}\,z_{j,1},\ \frac{\alpha e^{dit}\big(\hat\Lambda_j(t)-(1-\gamma)\big)}{\big(\hat\Lambda_j(t)-e^{dit}\big)(1-\alpha-\beta)}\,z_{j,1}\right), \qquad j = 1, 2.   (21)
The third equation of system (19) can be written as
\vec z_j^{\,T}\vec y_j = y_{j,1}z_{j,1} + y_{j,2}z_{j,2} + y_{j,3}z_{j,3} = y_{j,1}z_{j,1}\left(1 + \frac{\hat\Lambda_j(t)-(1-\gamma)}{\hat\Lambda_j(t)-\beta e^{it}}\right) = 1, \qquad 1 + \frac{\gamma e^{it}(1-\alpha-\beta)}{\big(\hat\Lambda_j(t)-\beta e^{it}\big)^2} = \frac{1}{y_{j,1}z_{j,1}}.   (22)
By assumption, (π1, π2, π3) = (1, 0, 0). Substituting (20), (21), and (22) into (17), we obtain
\hat W_{1,2}(t) = (1, 0, 0)\,\vec y_j\vec z_j^{\,T}\begin{pmatrix}1\\1\\1\end{pmatrix} = \frac{1 + \dfrac{\hat\Lambda_j(t)-(1-\gamma)}{1-\alpha-\beta}\left(1 + \dfrac{\alpha e^{dit}}{\hat\Lambda_j(t)-e^{dit}}\right)}{1 + \dfrac{\gamma e^{it}(1-\alpha-\beta)}{\big(\hat\Lambda_j(t)-\beta e^{it}\big)^2}}, \qquad j = 1, 2.   (23)
From equation (18) we get
\frac{\hat\Lambda_j(t)-(1-\gamma)}{1-\alpha-\beta} = \frac{\gamma e^{it}}{\hat\Lambda_j(t)-\beta e^{it}}.   (24)
Hence,
\hat W_{1,2}(t) = \frac{1 + \dfrac{\gamma e^{it}}{\hat\Lambda_{1,2}(t)-\beta e^{it}}\left(1 + \dfrac{\alpha e^{dit}}{\hat\Lambda_{1,2}(t)-e^{dit}}\right)}{1 + \dfrac{(1-\alpha-\beta)\gamma e^{it}}{\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big)^2}}.
Applying equation (18), we prove that the numerator of Wˆ1,2(t) is equal to
\frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat\Lambda_{1,2}(t)}{\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big)\big(\hat\Lambda_{1,2}(t)-e^{dit}\big)} + \frac{\big(e^{it}-1\big)\big[\gamma\hat\Lambda_{1,2}(t)-\big(\beta-\gamma(1-\alpha)\big)\big]}{\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big)\big(\hat\Lambda_{1,2}(t)-e^{dit}\big)}.
It is easy to check that
\big(1-\gamma-\beta e^{it}\big)^2 + 4\gamma e^{it}(1-\alpha-\beta) = \hat D(t).   (25)
Similarly
\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big)^2 = \frac{\big(1-\gamma-\beta e^{it}\big)^2 \pm 2\big(1-\gamma-\beta e^{it}\big)\hat D^{1/2}(t) + \hat D(t)}{4}.   (26)
Using (25) and (26), we obtain
\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big)^2 + (1-\alpha-\beta)\gamma e^{it} = \frac{\hat D^{1/2}(t)\big(\hat D^{1/2}(t) \pm \big(1-\gamma-\beta e^{it}\big)\big)}{2}.   (27)
Notice that
2\big(\hat\Lambda_{1,2}(t)-\beta e^{it}\big) = 1-\gamma-\beta e^{it} \pm \hat D^{1/2}(t).
Substituting (24), (26), and (27) into (23), we complete the proof for Λˆ1,2 and Wˆ1,2(t).
Similarly, system (19) is solved with Λ̂3(t) = e^{dit}. We get
\vec y_3^{\,T} = \left(y_{3,1},\ \frac{e^{dit}-(1-\gamma)}{\gamma e^{it}}\,y_{3,1},\ \frac{\big(e^{dit}-\beta e^{it}\big)y_{3,2} - (1-\alpha-\beta)y_{3,1}}{\alpha e^{dit}}\right),   (28)
\vec z_3^{\,T} = (0, 0, z_{3,3}).   (29)
Hence,
\frac{1}{y_{3,1}z_{3,3}} = \frac{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big) - \gamma(1-\alpha-\beta)}{\alpha\gamma e^{dit}}.   (30)
Substituting (28), (29), and (30) into (17), we get
\hat W_3(t) = (1, 0, 0)\,\vec y_3\vec z_3^{\,T}\begin{pmatrix}1\\1\\1\end{pmatrix} = y_{3,1}z_{3,3} = \frac{\alpha\gamma e^{dit}}{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big) - \gamma(1-\alpha-\beta)}.
□
It is not difficult to notice that |Wˆ3(t)| is equal to 1 at some points, for example, Wˆ3(0)=1, since
\hat W_3(0) = \frac{\alpha\gamma}{(1-\beta)\big(1-(1-\gamma)\big) - \gamma(1-\alpha-\beta)} = \frac{\alpha\gamma}{\alpha\gamma} = 1.
Therefore, one cannot expect Λ̂3ⁿ(t)Ŵ3(t) to be small, and we concentrate our research on the possible asymptotic behavior of the other components of F̂n(t). We begin with a short expansion of D̂(t).
Observe that Dˆ(t) can be written in the following way:
\hat D(t) = \big(1+\gamma-\beta e^{it}\big)^2\left(1 + \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right).   (31)
Lemma 3. Let condition (1) hold, |t| ⩽ π. Then
\hat D^{1/2}(t) = 1+\gamma-\beta e^{it} + 5.81\,\theta\gamma.   (32)
D̂^{1/2}(t) can be expanded and written as
\hat D^{1/2}(t) = \big(1+\gamma-\beta e^{it}\big)\sum_{j=0}^{\infty}\binom{1/2}{j}\left(\frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right)^j
= \big(1+\gamma-\beta e^{it}\big) + \frac{2\gamma\big((1-\alpha)e^{it}-1\big)}{1+\gamma-\beta e^{it}} + \frac{16\gamma^2\big((1-\alpha)e^{it}-1\big)^2}{\big(1+\gamma-\beta e^{it}\big)^3}\sum_{j=2}^{\infty}\binom{1/2}{j}\left(\frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right)^{j-2}
= \big(1+\gamma-\beta e^{it}\big) + \frac{2\gamma\big((1-\alpha)e^{it}-1\big)}{1+\gamma-\beta e^{it}} + \frac{2\theta\gamma^2\big|(1-\alpha)e^{it}-1\big|^2}{\big|1+\gamma-\beta e^{it}\big|^3}\sum_{j=0}^{\infty}\left|\frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right|^j.
Observe that
\left|\frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right| \leqslant \frac{8\cdot 0.05}{(0.85+0.05)^2} \leqslant 0.5, \qquad \frac{\theta\gamma^2\big|(1-\alpha)e^{it}-1\big|^2}{\big|1+\gamma-\beta e^{it}\big|^3}\sum_{j=0}^{\infty}\left|\frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2}\right|^j \leqslant 0.55\,\theta\gamma.
Therefore
\hat D^{1/2}(t) = 1+\gamma-\beta e^{it} + \frac{4\theta\gamma}{0.85} + 2\cdot 0.55\,\theta\gamma = 1+\gamma-\beta e^{it} + 5.81\,\theta\gamma.
□
Next we prove that Λˆ2(t) is always small.
Lemma 4. Let condition (1) hold, |t| ⩽ π. Then |Λ̂2(t)| ⩽ β + 4γ.
From Lemma 3 we get
|\hat\Lambda_2(t)| = \left|\frac{1-\gamma+\beta e^{it}-\hat D^{1/2}(t)}{2}\right| = \frac{1}{2}\left|1-\gamma+\beta e^{it} - \big(1+\gamma-\beta e^{it}+5.81\,\theta\gamma\big)\right| \leqslant \beta + 4\gamma.
□
Corollary 3. Let condition (1) hold, |t| ⩽ π. Then |Λ̂2(t)| ⩽ 0.35.
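The bound |Λ̂2(t)| ⩽ β + 4γ is easy to probe numerically. The sketch below evaluates Λ̂2(t) on a grid of t values at the extreme parameters β = 0.15, γ = 0.05 allowed by condition (1) (α = 0.3 is an illustrative choice) and confirms that the maximum of |Λ̂2(t)| stays below β + 4γ = 0.35.

```python
import cmath, math

def Lambda2_hat(t, alpha, beta, gamma):
    """The eigenvalue with the minus sign in front of the square root."""
    z = cmath.exp(1j * t)
    D_hat = (1 - gamma + beta * z) ** 2 - 4 * z * (beta - gamma * (1 - alpha))
    return (1 - gamma + beta * z - cmath.sqrt(D_hat)) / 2

# Extreme values of beta and gamma allowed by condition (1); alpha is illustrative.
alpha, beta, gamma = 0.3, 0.15, 0.05
worst = max(abs(Lambda2_hat(-math.pi + k * math.pi / 500, alpha, beta, gamma))
            for k in range(1001))
print(worst, beta + 4 * gamma)  # worst stays below beta + 4*gamma = 0.35
```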
The following estimate shows that Λ1 behaves similarly to the compound Poisson distribution.
Lemma 5. Let condition (1) hold, |t| ⩽ π. Then
|\hat\Lambda_1(t)| \leqslant 1 + 0.4(1-\alpha)\gamma\,\mathrm{Re}\big(\hat H(t)-1\big) - 0.2\,\alpha\gamma \leqslant \exp\big\{0.4(1-\alpha)\gamma\,\mathrm{Re}\big(\hat H(t)-1\big) - 0.2\,\alpha\gamma\big\}.
It is not difficult to check that
\frac{1}{1+\gamma-\beta e^{it}} = \frac{1-\beta}{1+\gamma-\beta}\cdot\frac{1}{1-\beta e^{it}} - \frac{\beta\gamma}{1+\gamma-\beta e^{it}}\cdot\frac{e^{it}-1}{1-\beta e^{it}}\cdot\frac{1}{1+\gamma-\beta}.   (33)
From (32) and (33) it follows that
|\hat\Lambda_1(t)| = \left|\frac{1-\gamma+\beta e^{it}+\hat D^{1/2}(t)}{2}\right| \leqslant \left|1 + \frac{\gamma(1-\beta)}{1+\gamma-\beta}\big(\hat\Psi(t)-1\big)\right| + \frac{\beta\gamma^2}{(1+\gamma-\beta)^2}\big|\hat\Psi(t)-1\big|\big|e^{it}-1\big| + \frac{2\gamma^2\big|\hat\Psi(t)-1\big|^2(1+\beta)^2}{(1+\gamma-\beta)^3}.   (34)
Notice that
\big|\hat\Psi(t)\big|^2 = \big(\mathrm{Re}\,\hat\Psi(t)\big)^2 + \big(\mathrm{Im}\,\hat\Psi(t)\big)^2 \leqslant \left(\frac{1-\alpha-\beta}{1-\beta}\right)^2 \leqslant 1, \qquad \big|\hat\Psi(t)-1\big|^2 \leqslant 2\big(1-\mathrm{Re}\,\hat\Psi(t)\big) - \frac{\alpha}{1-\beta}\left(2-\frac{\alpha}{1-\beta}\right).
For all 0⩽ν⩽1, we have
\big|1+\nu\big(\hat\Psi(t)-1\big)\big| = \big|(1-\nu)+\nu\,\mathrm{Re}\,\hat\Psi(t)+i\nu\,\mathrm{Im}\,\hat\Psi(t)\big| \leqslant 1+\nu(1-\nu)\big(\mathrm{Re}\,\hat\Psi(t)-1\big).   (36)
Let
\nu = \frac{\gamma(1-\beta)}{1+\gamma-\beta}.   (35)
Substituting (35) into (34) and applying inequality (36), we get
|\hat\Lambda_1(t)| \leqslant 1 + \nu(1-\nu)\big(\mathrm{Re}\,\hat\Psi(t)-1\big) + \frac{\beta\gamma^2}{(1+\gamma-\beta)^2}\big|\hat\Psi(t)-1\big|\big|e^{it}-1\big| + \frac{4\gamma^2(1+\beta)^2}{(1+\gamma-\beta)^3}\big(1-\mathrm{Re}\,\hat\Psi(t)\big) - \frac{2\gamma^2\alpha(1+\beta)^2}{(1-\beta)(1+\gamma-\beta)^3}\left(2-\frac{\alpha}{1-\beta}\right).
The quantity |Ψ̂(t)−1| can be estimated as
\big|\hat\Psi(t)-1\big| \leqslant \frac{2}{1-\beta},
and |eit−1| can be estimated as
\big|e^{it}-1\big| \leqslant \big|(1-\alpha)e^{it}-1\big| + \alpha = \big|\hat\Psi(t)-1\big|\,\big|1-\beta e^{it}\big| + \alpha \leqslant \big|\hat\Psi(t)-1\big|(1+\beta) + \alpha.
Then
|\hat\Lambda_1(t)| \leqslant 1 + \big(\mathrm{Re}\,\hat\Psi(t)-1\big)\frac{\gamma}{1+\gamma-\beta}\left((1-\beta)\left(1-\frac{\gamma(1-\beta)}{1+\gamma-\beta}\right) - \frac{2\gamma\beta(1+\beta)}{1+\gamma-\beta} - \frac{4\gamma(1+\beta)^2}{(1+\gamma-\beta)^2}\right) + \frac{2\alpha\gamma^2}{(1-\beta)(1+\gamma-\beta)}\left(\frac{\beta}{1+\gamma-\beta} - \frac{(1+\beta)^2}{(1+\gamma-\beta)^2}\left(2-\frac{\alpha}{1-\beta}\right)\right).
Notice that
\mathrm{Re}\,\hat\Psi(t)-1 = (1-\alpha)\,\mathrm{Re}\big(\hat H(t)-1\big) - \alpha\,\frac{1-\beta\cos t}{\big|1-\beta e^{it}\big|^2}.
Corollary 4. Let condition (1) hold, |t| ⩽ π. Then
|\hat\Lambda_1(t)| \leqslant 1 + C\gamma\big(\mathrm{Re}\,\hat H(t)-1-\alpha\big) \leqslant \exp\big\{C\gamma\big(\mathrm{Re}\,\hat H(t)-1-\alpha\big)\big\}.
Next we demonstrate that |Wˆ2(t)| is always small.
Lemma 6. Let condition (1) hold, |t| ⩽ π. Then |Ŵ2(t)| ⩽ 2(d+1)|e^{it}−1|.
From Lemma 3 we have
\big|\hat D^{1/2}(t)\big| \geqslant 1+\gamma-\beta-5.81\gamma \geqslant 1 - 4.81\cdot 0.05 - 0.15 \geqslant 0.6.
By applying Corollary 3, we get
\big|\hat\Lambda_2(t)-e^{dit}\big| \geqslant 1 - |\hat\Lambda_2(t)| \geqslant 1 - 0.35 = 0.65.
Hence,
|\hat W_2(t)| \leqslant \frac{(d+1)\big|e^{it}-1\big|\big(2\big|\beta-\gamma(1-\alpha)\big| + (1+\gamma)\big|\hat\Lambda_2(t)\big|\big)}{0.65\cdot 0.6} \leqslant \frac{(d+1)\big|e^{it}-1\big|\big(2\max\{\beta, \gamma(1-\alpha)\} + (1+\gamma)\cdot 0.35\big)}{0.39} \leqslant 2(d+1)\big|e^{it}-1\big|.
□
To approximate |Wˆ1(t)|, we need a longer expansion for Dˆ(t).
Lemma 7. Let condition (1) hold, |t| ⩽ π. Then
\hat D^{1/2}(t) = 2\hat A(t) - 1 + \gamma - \beta e^{it} + C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat H(t)\big)^2 + \alpha^4\big).
If also α ⩾ C2, then
\big(\hat D^{1/2}(t)\big)' = \big(2\hat\Delta_1(t) - 1 + \gamma - \beta e^{it}\big)' + C\theta\gamma^3.
The expansion of Dˆ(t) follows from equations (31) and (33). The second equation of this lemma is proved similarly. □
Corollary 5. Let condition (1) hold, |t| ⩽ π. Then
\hat\Lambda_1(t) = \hat A(t) + C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat H(t)\big)^2 + \alpha^4\big).
Corollary 6. Let condition (1) hold, α ⩾ C2, |t| ⩽ π. Then
\hat\Lambda_1(t) = 1 + \hat A_1(t)\gamma + \big(\hat A_2(t)+\hat A_4(t)\big)\gamma^2 + C\theta\gamma^3.
The following three lemmas are needed for the approximation of W1.
Lemma 8. Let condition (1) hold, |t| ⩽ π. Then
|\hat A(t)| \leqslant 1 + C\gamma\big(\mathrm{Re}\,\hat H(t)-1-\alpha\big).
If also α ⩾ C2, then there exists C such that
|\hat\Delta_1(t)| \leqslant 1 - C\gamma.
The proof is very similar to the proof of Lemma 5 and, therefore, is omitted. □
Lemma 9. Let condition (1) hold, |t| ⩽ π. Then
\big|\hat W_1(t)-\hat V(t)\big| \leqslant C(d+1)\gamma\big|e^{it}-1\big|.
From Corollary 4 and Lemma 8 it follows that
\big|\hat\Lambda_1(t)-e^{dit}\big| \geqslant C\gamma\big(1-\mathrm{Re}\,\hat H(t)+\alpha\big), \qquad \big|\hat A(t)-e^{dit}\big| \geqslant C\gamma\big(1-\mathrm{Re}\,\hat H(t)+\alpha\big).
Applying (38), (41), (42), Lemma 7 and Corollary 5, the result follows. □
Lemma 10. Let condition (1) hold, α ⩾ C2, |t| ⩽ π. Then
\big|\hat W_1(t)-\hat V_1(t)\big| \leqslant C(d+1)\gamma\big|e^{it}-1\big|.
Since α⩾C2,
\big|\hat\Lambda_1(t)-e^{dit}\big| \geqslant C\gamma\big(1-\mathrm{Re}\,\hat H(t)+\alpha\big) \geqslant C\gamma(0+C_2) \geqslant C\gamma.
From Corollary 6 it follows that
\big|\hat\Lambda_1(t)-\hat\Delta_1(t)\big| \leqslant C\gamma^3.
Also, from Lemma 8 it follows that
\big|\hat\Delta_1(t)-e^{dit}\big| \geqslant 1 - |\hat\Delta_1(t)| \geqslant 1-(1-C\gamma) = C\gamma.
Hence, it is easy to check that the inequality of the lemma is correct. □
Lemma 11. Let condition (1) hold. Then
\int_{-\pi}^{\pi}\big|\hat\Lambda_1(t)\big|^n\,\frac{\big|\hat W_1(t)-\hat V(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\sqrt{\frac{\gamma}{n}}\,e^{-Cn\gamma\alpha}, \qquad \int_{-\pi}^{\pi}\big|\hat\Lambda_1(t)\big|^n\,\big|\hat W_1(t)-\hat V(t)\big|\,dt \leqslant \frac{C(d+1)}{n}\,e^{-Cn\gamma\alpha}.
It is obvious that
\mathrm{Re}\,\hat H(t)-1 = \frac{(1+\beta)(\cos t-1)}{\big|1-\beta e^{it}\big|^2} \leqslant -2C\sin^2(t/2).   (46)
We will use the following simple inequality
\int_{-\pi}^{\pi}\big|\sin(t/2)\big|^k\exp\big\{-2\lambda\sin^2(t/2)\big\}\,dt \leqslant C(k)\,\lambda^{-(k+1)/2}.   (47)
By applying Lemma 5, Lemma 9, (46), and (47), we get
\int_{-\pi}^{\pi}\big|\hat\Lambda_1(t)\big|^n\,\frac{\big|\hat W_1(t)-\hat V(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant \int_{-\pi}^{\pi}C(d+1)\gamma\exp\big\{n\big(0.4(1-\alpha)\gamma\,\mathrm{Re}\big(\hat H(t)-1\big)-0.2\gamma\alpha\big)\big\}\,dt \leqslant \int_{-\pi}^{\pi}C(d+1)\gamma\exp\big\{Cn\gamma\,\mathrm{Re}\big(\hat H(t)-1\big)\big\}\,e^{-Cn\gamma\alpha}\,dt \leqslant C(d+1)\sqrt{\frac{\gamma}{n}}\,e^{-Cn\gamma\alpha}.
The second inequality of the lemma is proved similarly. □
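Inequality (47) can be illustrated numerically: for fixed k, the integral on its left-hand side multiplied by λ^{(k+1)/2} should remain bounded as λ grows. A small sketch for k = 1 (the grid size is chosen for accuracy, not efficiency):

```python
import math

def integral(k, lam, nodes=20000):
    """Left-hand side of (47), computed with the rectangle rule."""
    h = 2 * math.pi / nodes
    return h * sum(abs(math.sin(t / 2)) ** k * math.exp(-2 * lam * math.sin(t / 2) ** 2)
                   for t in (-math.pi + j * h for j in range(nodes)))

k = 1
for lam in (10.0, 40.0, 160.0):
    print(lam, integral(k, lam) * lam ** ((k + 1) / 2))  # stays bounded as lam grows
```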
Lemma 12. Let condition (1) hold and α ⩾ C2. Then
\int_{-\pi}^{\pi}\big|\hat\Lambda_1(t)\big|^n\,\frac{\big|\hat W_1(t)-\hat V_1(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\gamma\,e^{-Cn\gamma}.
From Lemma 5 and Lemma 10 it follows that
\int_{-\pi}^{\pi}\big|\hat\Lambda_1(t)\big|^n\,\frac{\big|\hat W_1(t)-\hat V_1(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant \int_{-\pi}^{\pi}C(d+1)\gamma\exp\{-0.2\,C_2\gamma n\}\,dt \leqslant C(d+1)\gamma\,e^{-Cn\gamma}.
□
Lemma 13. Let condition (1) hold. Then
\int_{-\pi}^{\pi}\big|\hat V(t)\big|\,\frac{\big|\hat\Lambda_1^n(t)-\hat G^n(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\gamma\sqrt{\frac{\gamma}{n}}\,e^{-Cn\gamma\alpha}, \qquad \int_{-\pi}^{\pi}\big|\hat V(t)\big|\,\big|\hat\Lambda_1^n(t)-\hat G^n(t)\big|\,dt \leqslant C(d+1)\frac{\gamma}{n}\,e^{-Cn\gamma\alpha}.
Notice that
\big|\hat V(t)\big| \leqslant \frac{C(d+1)\big|e^{it}-1\big|}{\gamma\big(1-\mathrm{Re}\,\hat H(t)+\alpha\big)}, \qquad \big|\hat\Lambda_1^n(t)-\hat G^n(t)\big| \leqslant \big|\hat\Lambda_1(t)-\hat G(t)\big|\cdot n\cdot\max\big\{\big|\hat\Lambda_1(t)\big|^{n-1}, \big|\hat G(t)\big|^{n-1}\big\}.
From Corollary 4 we have |Λ̂1(t)| ⩽ exp{Cγ(Re Ĥ(t) − 1 − α)}. Taking into account that |e^{a+bi}| = e^{a}, |Ĝ(t)| can be estimated as
\big|\hat G(t)\big| \leqslant \exp\big\{C\gamma\big(\mathrm{Re}\,\hat H(t)-1-\alpha\big)\big\}.   (50)
Using Corollary 5, we have that
\big|\hat\Lambda_1(t)-\hat G(t)\big| = \big|\exp\{\ln\hat\Lambda_1(t)\}-\exp\{\ln\hat G(t)\}\big| \leqslant C\big|\ln\hat\Lambda_1(t)-\ln\hat G(t)\big| = C\left|\big(\hat\Lambda_1(t)-1\big)-\frac{\big(\hat\Lambda_1(t)-1\big)^2}{2}+\frac{\big(\hat\Lambda_1(t)-1\big)^3}{3}+\frac{C\theta\big|\hat\Lambda_1(t)-1\big|^4}{4}-\ln\hat G(t)\right| = C\left|\big(\hat A(t)-1\big)-\frac{1}{2}\big(\hat A_1^2(t)\gamma^2+2\hat A_1(t)\big(\hat A_2(t)+\hat A_4(t)\big)\gamma^3\big)+\frac{1}{3}\hat A_1^3(t)\gamma^3+C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat H(t)\big)^2+\alpha^4\big)-\ln\hat G(t)\right| \leqslant C\gamma^4\big(\big(1-\mathrm{Re}\,\hat H(t)\big)^2+\alpha^4\big).   (51)
By applying (50), (51), and the inequality xe^{−x} ⩽ 1 for all x > 0, we can estimate the following integral:
\int_{-\pi}^{\pi}\big|\hat V(t)\big|\,\frac{\big|\hat\Lambda_1^n(t)-\hat G^n(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\int_{-\pi}^{\pi}n\gamma^3\big(\big(1-\mathrm{Re}\,\hat H(t)\big)+1\big)\exp\big\{nC\gamma\big(\mathrm{Re}\,\hat H(t)-1-\alpha\big)\big\}\,dt \leqslant C(d+1)\int_{-\pi}^{\pi}\gamma^2\exp\big\{-2Cn\gamma\sin^2(t/2)\big\}\,e^{-Cn\gamma\alpha}\,dt \leqslant C(d+1)\gamma\sqrt{\frac{\gamma}{n}}\,e^{-Cn\gamma\alpha}.
The second inequality of this lemma is proved similarly. □
Lemma 14. Let condition (1) hold and α ⩾ C2. Then
\int_{-\pi}^{\pi}\big|\hat V_1(t)\big|\,\frac{\big|\hat\Lambda_1^n(t)-\hat G_1^n(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\gamma\,e^{-Cn\gamma}.
Since α⩾C2,
\big|\hat V_1(t)\big| \leqslant \frac{C(d+1)\big|e^{it}-1\big|}{\gamma},   (53)
and
\big|\hat\Lambda_1^n(t)-\hat G_1^n(t)\big| \leqslant \big|\hat\Lambda_1(t)-\hat G_1(t)\big|\cdot n\cdot\exp\{-C\gamma(n-1)\}.   (54)
The quantity |Λ̂1(t)−Ĝ1(t)| is estimated by applying Corollary 6:
\big|\hat\Lambda_1(t)-\hat G_1(t)\big| \leqslant C\big|\ln\hat\Lambda_1(t)-\ln\hat G_1(t)\big| = C\left|\big(\hat\Lambda_1(t)-1\big)-\frac{\big(\hat\Lambda_1(t)-1\big)^2}{2}+\frac{C\theta\big|\hat\Lambda_1(t)-1\big|^3}{3}-\ln\hat G_1(t)\right| = C\left|\hat A_1(t)\gamma+\big(\hat A_2(t)+\hat A_4(t)\big)\gamma^2-\frac{1}{2}\hat A_1^2(t)\gamma^2+C\theta\gamma^3-\ln\hat G_1(t)\right| \leqslant C\gamma^3.   (55)
By applying (53), (55), and the inequality xe^{−x} ⩽ 1 for all x > 0, we can estimate the following integral:
\int_{-\pi}^{\pi}\big|\hat V_1(t)\big|\,\frac{\big|\hat\Lambda_1^n(t)-\hat G_1^n(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\int_{-\pi}^{\pi}n\gamma^2\exp\{-nC\gamma\}\,dt \leqslant C(d+1)\int_{-\pi}^{\pi}\gamma\exp\{-0.5\,nC\gamma\}\big(0.5\,nC\gamma\,e^{-0.5nC\gamma}\big)\frac{2}{C}\,dt \leqslant C(d+1)\gamma\,e^{-Cn\gamma}.
□
Lemma 15. Let condition (1) hold, α ⩾ C2, |t| ⩽ π. Then
|\hat W_1(t)| \leqslant C(d+1)\gamma, \qquad |\hat W_1'(t)| \leqslant C(d+1)(1+\beta/\gamma)\gamma, \qquad |\hat W_2(t)| \leqslant C(d+1), \qquad |\hat W_2'(t)| \leqslant C(d+1),
|\hat V_2(t)| \leqslant C(d+1)\gamma, \qquad |\hat V_2'(t)| \leqslant C(d+1)(1+\beta/\gamma)\gamma, \qquad |\hat W_1(t)-\hat V_2(t)| \leqslant C(d+1)\gamma, \qquad |\hat W_1'(t)-\hat V_2'(t)| \leqslant C(d+1)\gamma(1+\beta/\gamma),
|\hat\Lambda_1(t)| \leqslant e^{-C\gamma}, \qquad |\hat G_1(t)| \leqslant e^{-C\gamma}, \qquad |\hat\Lambda_1'(t)| \leqslant C\gamma, \qquad |\hat G_1'(t)| \leqslant C\gamma, \qquad |\hat\Lambda_2(t)| \leqslant \beta+4\gamma, \qquad |\hat\Lambda_2'(t)| \leqslant C(\beta+4\gamma),
|\hat\Lambda_1(t)-\hat G_1(t)| \leqslant C\gamma^3, \qquad \big|\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)'\big| \leqslant C\gamma^2 e^{-Cn\gamma}, \qquad \frac{|1-e^{dit}|}{|\hat\Lambda_1(t)-e^{dit}|} \leqslant C, \qquad \frac{|1-e^{dit}|}{|\hat\Delta_1(t)-e^{dit}|} \leqslant C.
All inequalities are based on the previously obtained estimates of |Λˆ1(t)|, |Λˆ2(t)|, |Wˆ2(t)|, |Gˆ1(t)|, and the expansion of Dˆ(t). The inequalities containing Vˆ2(t) are proved similarly to those of Vˆ1(t) (see Lemma 10). □
Proofs
Proof of Theorem 1. Applying inversion formula (11), Lemma 11, and Lemma 13, we prove
|F_n-(G^nV+E)|_K \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\big|\hat F_n(t)-\hat G^n(t)\hat V(t)-\hat E(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\hat\Lambda_1^n(t)\big|\,\frac{\big|\hat W_1(t)-\hat V(t)\big|}{\big|e^{it}-1\big|}\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\hat V(t)\big|\,\frac{\big|\hat\Lambda_1^n(t)-\hat G^n(t)\big|}{\big|e^{it}-1\big|}\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\big|\hat\Lambda_2^n(t)\hat W_2(t)\big|}{\big|e^{it}-1\big|}\,dt \leqslant C(d+1)\sqrt{\frac{\gamma}{n}}\,e^{-Cn\gamma\alpha} + C(d+1)(\beta+4\gamma)^n.
The local estimate is obtained analogously by applying inversion formula (12). □
Proof of Theorem 2. The proof is similar to the proof of Theorem 1. Lemma 12 and Lemma 14 are applied instead of Lemma 11 and Lemma 13, since α ⩾ C2. □
Proof of Theorem 3. Taking into account Corollary 3 and Lemma 15, we get
\big|\hat\Lambda_{1,2}^n\hat W_{1,2}\big| \leqslant C(d+1)e^{-Cn}, \qquad \big|\big(\hat\Lambda_{1,2}^n\hat W_{1,2}\big)'\big| \leqslant \big|\big(\hat\Lambda_{1,2}^n\big)'\big|\,\big|\hat W_{1,2}\big| + \big|\hat\Lambda_{1,2}^n\big|\,\big|\hat W_{1,2}'\big| \leqslant nC(d+1)e^{-C(n-1)} + C(d+1)e^{-Cn} \leqslant C(d+1)n\,e^{-Cn}.
From inversion formula (13) applied with a=0 and b=1 we get
\|F_n-E\| = \|\Lambda_1^nW_1+\Lambda_2^nW_2\| \leqslant \|\Lambda_1^nW_1\| + \|\Lambda_2^nW_2\| \leqslant (1+\pi)^{1/2}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}\big(\big|\hat\Lambda_1^n\hat W_1\big|^2+\big|\big(\hat\Lambda_1^n\hat W_1\big)'\big|^2\big)\,dt\right)^{1/2} + (1+\pi)^{1/2}\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}\big(\big|\hat\Lambda_2^n\hat W_2\big|^2+\big|\big(\hat\Lambda_2^n\hat W_2\big)'\big|^2\big)\,dt\right)^{1/2} \leqslant C(d+1)e^{-Cn}.
□
Proof of Theorem 4. From Lemma 15, we get
\big|\hat\Lambda_2^n(t)\hat W_2(t)\big| \leqslant C(d+1)(\beta+4\gamma)^n,
\big|\big(\hat\Lambda_2^n(t)\hat W_2(t)\big)'\big| \leqslant \big|\big(\hat\Lambda_2^n(t)\big)'\hat W_2(t)\big| + \big|\hat\Lambda_2^n(t)\hat W_2'(t)\big| \leqslant C(d+1)n(\beta+4\gamma)^n + C(d+1)(\beta+4\gamma)^n \leqslant C(d+1)n(\beta+4\gamma)^n,
\big|\hat G_1^n(t)\big(\hat W_1(t)-\hat V_2(t)\big)\big| \leqslant C(d+1)\gamma\,e^{-Cn\gamma},
\big|\big(\hat G_1^n(t)\big(\hat W_1(t)-\hat V_2(t)\big)\big)'\big| \leqslant \big|\big(\hat G_1^n(t)\big)'\big(\hat W_1(t)-\hat V_2(t)\big)\big| + \big|\hat G_1^n(t)\big(\hat W_1(t)-\hat V_2(t)\big)'\big| \leqslant C(d+1)n\gamma^2e^{-C(n-1)\gamma} + C(d+1)\gamma\,e^{-Cn\gamma}(1+\beta/\gamma) \leqslant C(d+1)\gamma\,e^{-Cn\gamma}(1+\beta/\gamma),
\big|\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)\hat W_1(t)\big| \leqslant n\big|\hat\Lambda_1(t)-\hat G_1(t)\big|\,e^{-C(n-1)\gamma}\,C(d+1)\gamma \leqslant C(d+1)\gamma\,e^{-Cn\gamma},
\big|\big(\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)\hat W_1(t)\big)'\big| \leqslant \big|\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)'\hat W_1(t)\big| + \big|\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)\hat W_1'(t)\big| \leqslant C(d+1)\gamma\,e^{-Cn\gamma}(1+\beta/\gamma).
By applying inversion formula (13) with a=0 and b=1, we prove
\|F_n-(G_1^nV_2+E)\| \leqslant C(d+1)\big(\gamma\,e^{-Cn\gamma}(1+\beta/\gamma) + n(\beta+4\gamma)^n\big).
□
Proof of Theorem 5. We use the inequalities obtained in the proof of Theorem 4 and inversion formula (14) with a = 0. We have
k\,\big|F_n\{k\}-(G_1^nV_2+E)\{k\}\big| \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\big(\hat W_1(t)\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)\big)'\big|\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\big(\hat G_1^n(t)\big(\hat W_1(t)-\hat V_2(t)\big)\big)'\big|\,dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\big|\big(\hat\Lambda_2^n(t)\hat W_2(t)\big)'\big|\,dt \leqslant C(d+1)\big(\gamma(1+\beta/\gamma)\,e^{-Cn\gamma} + n(\beta+4\gamma)^n\big).
Hence,
\frac{k}{1+\beta/\gamma}\,\big|F_n\{k\}-(G_1^nV_2+E)\{k\}\big| \leqslant \frac{C(d+1)\,e^{-Cn\gamma}}{n}
and
\big|F_n\{k\}-(G_1^nV_2+E)\{k\}\big| \leqslant \frac{C(d+1)\,e^{-Cn\gamma}}{n},
since |M{k}| ⩽ ‖M‖∞ ⩽ ‖M‖.
Combining these inequalities, we get
\big|F_n\{k\}-(G_1^nV_2+E)\{k\}\big| \leqslant \frac{C(d+1)\,e^{-Cn\gamma}}{n\big(1+k(1+\beta/\gamma)^{-1}\big)} = \frac{C(d+1)\,e^{-Cn\gamma}(\beta+\gamma)}{n\big(\beta+(k+1)\gamma\big)}.
In order to prove the second inequality of the theorem, we apply the inversion formula (15) with a=0:
k\,\big|F_n(k)-(G_1^nV_2+E)(k)\big| \leqslant \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|\left(\frac{\hat W_1(t)}{e^{-it}-1}\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)\right)'\right|dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|\left(\hat G_1^n(t)\left(\frac{\hat W_1(t)}{e^{-it}-1}-\frac{\hat V_2(t)}{e^{-it}-1}\right)\right)'\right|dt + \frac{1}{2\pi}\int_{-\pi}^{\pi}\left|\left(\frac{\hat\Lambda_2^n(t)\hat W_2(t)}{e^{-it}-1}\right)'\right|dt.
The summands can be estimated by using the inequalities from the proof of Theorem 4:
\left|\frac{\hat W_1(t)}{e^{-it}-1}\right|\big|\big(\hat\Lambda_1^n(t)-\hat G_1^n(t)\big)'\big| \leqslant C(d+1)\gamma^2 e^{-Cn\gamma}, \qquad \frac{e^{(d+1)it}-1}{e^{-it}-1} = \frac{(e^{it}-1)\big(1+e^{it}+\dots+e^{dit}\big)}{e^{-it}\big(1-e^{it}\big)} = -e^{it}\big(1+e^{it}+\dots+e^{dit}\big),
\left|\left(\frac{\hat W_1(t)}{e^{-it}-1}\right)'\right| \leqslant Cd^2\gamma^2, \qquad \left|\left(\frac{\hat W_2(t)}{e^{-it}-1}\right)'\right| \leqslant Cd^2, \qquad \left|\left(\frac{\hat W_1(t)}{e^{-it}-1}\right)'\right|\big|\hat\Lambda_1^n(t)-\hat G_1^n(t)\big| \leqslant Cn\gamma^3 e^{-Cn\gamma}\,d^2\gamma^2 \leqslant Cd^2 e^{-Cn\gamma},
\left|\big(\hat G_1^n(t)\big)'\,\frac{\hat W_1(t)-\hat V_2(t)}{e^{-it}-1}\right| \leqslant C(d+1)\gamma\,e^{-Cn\gamma}, \qquad \left|\hat G_1^n(t)\left(\frac{\hat W_1(t)-\hat V_2(t)}{e^{-it}-1}\right)'\right| \leqslant Cd^2\gamma^2\,e^{-Cn\gamma},
\left|\big(\hat\Lambda_2^n(t)\big)'\,\frac{\hat W_2(t)}{e^{-it}-1}\right| \leqslant C(d+1)e^{-Cn}, \qquad \left|\hat\Lambda_2^n(t)\left(\frac{\hat W_2(t)}{e^{-it}-1}\right)'\right| \leqslant Cd^2(\beta+4\gamma)^n.
Thus, we get
k\gamma^2\,\big|F_n(k)-(G_1^nV_2+E)(k)\big| \leqslant \frac{Cd^2\,e^{-Cn\gamma}}{n}
and
\big|F_n(k)-(G_1^nV_2+E)(k)\big| \leqslant \frac{C(d+1)\,e^{-Cn\gamma}}{n}.
Combining the above inequalities, we arrive at
\big|F_n(k)-(G_1^nV_2+E)(k)\big| \leqslant \frac{Cd^2\,e^{-Cn\gamma}}{n\big(1+k\gamma^2\big)}.
□
References
[1] Barbour, A.D., Lindvall, T.: Translated Poisson approximation for Markov chains. 19(3), 609–630 (2006). MR2280512. https://doi.org/10.1007/s10959-006-0047-9
[2] Čekanavičius, V.: Approximation Methods in Probability Theory. Universitext, Springer (2016). MR3467748. https://doi.org/10.1007/978-3-319-34072-2
[3] Čekanavičius, V., Roos, B.: Poisson type approximations for the Markov binomial distribution. 119, 190–207 (2009). MR2485024. https://doi.org/10.1016/j.spa.2008.01.008
[4] Čekanavičius, V., Vellaisamy, P.: Compound Poisson and signed compound Poisson approximations to the Markov binomial law. 16(4), 1114–1136 (2010). MR2759171. https://doi.org/10.3150/09-BEJ246
[5] De Pril, N., Dhaene, J.: Error bounds for compound Poisson approximations of the individual risk model. 22(2), 135–148 (1992). https://doi.org/10.2143/AST.22.2.2005111
[6] Erhardsson, T.: Compound Poisson approximation for Markov chains using Stein's method. 27(1), 565–596 (1999). MR1681149. https://doi.org/10.1214/aop/1022677272
[7] Gani, J.: On the probability generating function of the sum of Markov-Bernoulli random variables. (Special vol.) 19A, 321–326 (1982). MR0633201. https://doi.org/10.2307/3213571
[8] Gerber, H.U.: Error bounds for the compound Poisson approximation. 3, 191–194 (1984). MR0752200. https://doi.org/10.1016/0167-6687(84)90062-3
[9] Hipp, C.: Approximation of aggregate claims distributions by compound Poisson distribution. 4(4), 227–232 (1985). MR0810720. https://doi.org/10.1016/0167-6687(85)90032-0
[10] Hirano, K., Aki, S.: On number of success runs of specified length in a two-state Markov chain. 3, 313–320 (1993). MR1243389. https://doi.org/10.1239/aap/1029955143
[11] Leipus, R., Šiaulys, J.: On the random max-closure for heavy-tailed random variables. 57(2), 208–221 (2017). MR3654985. https://doi.org/10.1007/s10986-017-9355-2
[12] Pitts, S.M.: A functional approach to approximations for the individual risk model. 34, 379–397 (2004). MR2086451. https://doi.org/10.1017/S051503610001374X
[13] Presman, E.L.: Approximation in variation of the distribution of a sum of independent Bernoulli variables with a Poisson law. 30(2), 417–422 (1986). MR0792634. https://doi.org/10.1137/1130051
[14] Roos, B.: On variational bounds in the compound Poisson approximation of the individual risk model. 40, 403–414 (2007). MR2310979. https://doi.org/10.1016/j.insmatheco.2006.06.003
[15] Šliogere, J., Čekanavičius, V.: Two limit theorems for Markov binomial distribution. 55(3), 451–463 (2015). MR3379037. https://doi.org/10.1007/s10986-015-9291-y
[16] Šliogere, J., Čekanavičius, V.: Approximation of symmetric three-state Markov chain by compound Poisson law. 56(3), 417–438 (2016). MR3530227. https://doi.org/10.1007/s10986-016-9326-z
[17] Wang, K., Gao, M., Yang, Y., Chen, Y.: Asymptotics for the finite-time ruin probability in a discrete-time risk model with dependent insurance and financial risks. 58(1), 113–125 (2018). MR3779067. https://doi.org/10.1007/s10986-017-9378-8
[18] Xia, A., Zhang, M.: On approximation of Markov binomial distributions. 15, 1335–1350 (2009). MR2597595. https://doi.org/10.3150/09-BEJ194
[19] Yang, G., Miao, Y.: Moderate and large deviation estimate for the Markov-binomial distribution. 110, 737–747 (2010). MR2610590. https://doi.org/10.1007/s10440-009-9471-z
[20] Yang, Y., Wang, Y.: Tail behavior of the product of two dependent random variables with applications to risk theory. 16(1), 55–74 (2013). MR3020177. https://doi.org/10.1007/s10687-012-0153-2
[21] Zhang, H., Liu, Y., Li, B.: Notes on discrete compound Poisson model with applications to risk theory. 59, 325–336 (2014). MR3283233. https://doi.org/10.1016/j.insmatheco.2014.09.012