1 Introduction and preliminaries
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be independent identically distributed (i.i.d.) random variables (r.v.s). Denote the sequence of partial sums $\{{S_{n}},n\geqslant 0\}$ by
(1)
\[ {S_{0}}=0,\hspace{2.5pt}\hspace{2.5pt}{S_{n}}:={X_{1}}+\cdots +{X_{n}},\hspace{2.5pt}n\geqslant 1.\]
In this paper, we consider the randomly stopped sum
\[ {S_{\nu }}:={X_{1}}+\cdots +{X_{\nu }},\]
where ν is a counting r.v. taking values in ${\mathbb{N}_{0}}:=\{0,1,2,\dots \hspace{0.1667em}\}$. We assume that ν is nondegenerate at zero, i.e. $\mathbb{P}(\nu =0)\lt 1$, and that ν is independent of $\{X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}\}$. Denote by ${F_{X}}$, ${F_{\nu }}$ and ${F_{{S_{\nu }}}}$ the distributions of X, ν and ${S_{\nu }}$, respectively. In the case where the primary r.v.s are heavy-tailed and nonnegative, the standard result states that if ν has finite mean $\mathbb{E}\nu $ and its distribution tail is lighter than the tail of X, then
(2)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \underset{x\to \infty }{\sim }\mathbb{E}\nu \hspace{0.1667em}\overline{{F_{X}}}(x).\end{aligned}\]
For important contributions, see Stam [15], Daley et al. [4], Embrechts et al. [7], and Faÿ et al. [8]. In Section 4 of the last of these papers, the case of nonnegative regularly varying summands was examined in detail. Note that, generally, relationship (2) can be obtained under various conditions on the r.v.s X and ν (see, e.g., Daley et al. [4]): different heavy-tailed distribution classes, moment conditions on X and ν, and relationships between the distribution tails $\overline{{F_{X}}}$ and $\overline{{F_{\nu }}}$ (typically, $\overline{{F_{\nu }}}(x)=o(\overline{{F_{X}}}(x))$). Usually, when the conditions on ${F_{\nu }}$ are weakened, stronger conditions on ${F_{X}}$ are assumed. In the case of real-valued r.v.s, the conditions for (2) also depend on the sign of the mean $\mu =\mathbb{E}X$ if it exists. For instance, in the case of negative mean, relation (2) holds for all distributions in the class ${\mathcal{S}^{\ast }}$ (see the definition below), which contains most subexponential distributions with finite mean, see Denisov et al. [6, Theorem 1].
In this paper, we pose the problem under what ‘minimal’ moment conditions relation (2) holds in the case of a real-valued consistently varying distribution ${F_{X}}$ with zero mean. Recall that the class of consistently varying distributions (see the definition below) contains the class of regularly varying distributions.
Before formulating and discussing the main result of the paper, we introduce the related subclasses of heavy-tailed distributions, some notation, and known results. We say that $F=1-\overline{F}$ is a distribution on $\mathbb{R}:=(-\infty ,\infty )$ if $\overline{F}(x)\gt 0$ for all $x\in \mathbb{R}$. All limiting relations are assumed to hold as $x\to \infty $ unless stated otherwise. For two eventually positive functions $a(x)$ and $b(x)$, $a(x)\sim b(x)$ means that $\lim a(x)/b(x)=1$, and $a(x)\asymp b(x)$ means that $0\lt \liminf a(x)/b(x)\leqslant \limsup a(x)/b(x)\lt \infty $. We denote ${a^{+}}:=\max \{a,0\}$, ${a^{-}}:=-\min \{a,0\}$.
Next we introduce the heavy-tailed distribution subclasses which will be used in the paper.
• A distribution F on $\mathbb{R}$ is said to be regularly varying with index $\alpha \geqslant 0$, denoted $F\in \mathcal{R}(\alpha )$, if its tail satisfies
\[\begin{aligned}{}\lim \frac{\overline{F}(xy)}{\overline{F}(x)}& ={y^{-\alpha }}\hspace{2.5pt}\textit{for any}\hspace{2.5pt}y\gt 0.\end{aligned}\]
A distribution $F\in \mathcal{R}(0)$ is said to be slowly varying.
• A distribution F on $\mathbb{R}$ is said to be consistently varying, denoted $F\in \mathcal{C}$, if
\[ \underset{y\searrow 1}{\lim }\hspace{0.1667em}\underset{x\to \infty }{\liminf }\frac{\overline{F}(xy)}{\overline{F}(x)}=1.\]
• A distribution F on $\mathbb{R}$ is said to be dominatedly varying, denoted $F\in \mathcal{D}$, if
\[ \underset{x\to \infty }{\limsup }\frac{\overline{F}(xy)}{\overline{F}(x)}\lt \infty \hspace{2.5pt}\textit{for some (equivalently, all)}\hspace{2.5pt}y\in (0,1).\]
• A distribution F on $\mathbb{R}$ is said to be heavy-tailed, denoted $F\in \mathcal{H}$, if
\[ {\int _{-\infty }^{\infty }}{\mathrm{e}^{\lambda x}}\hspace{0.1667em}\mathrm{d}F(x)=\infty \hspace{2.5pt}\textit{for every}\hspace{2.5pt}\lambda \gt 0.\]
• A distribution F on $\mathbb{R}$ is said to be subexponential, denoted $F\in \mathcal{S}$, if $\overline{{F^{\ast 2}}}(x)\sim 2\overline{F}(x)$.
• A distribution F on $\mathbb{R}$ is said to belong to the class ${\mathcal{S}^{\ast }}$ if ${\textstyle\int _{0}^{\infty }}\overline{F}(u)\hspace{0.1667em}\mathrm{d}u\lt \infty $ and
\[ {\int _{0}^{x}}\overline{F}(x-u)\overline{F}(u)\hspace{0.1667em}\mathrm{d}u\sim 2\overline{F}(x){\int _{0}^{\infty }}\overline{F}(u)\hspace{0.1667em}\mathrm{d}u.\]
It holds that $\mathcal{C}\subset \mathcal{D}\subset \mathcal{H}$.
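As a simple illustration, the Pareto-type tail $\overline{F}(x)={(1+{x^{+}})^{-\alpha }}$ with $\alpha \gt 0$ belongs to $\mathcal{R}(\alpha )$, and hence to $\mathcal{C}$, since for every $y\gt 0$
\[ \frac{\overline{F}(xy)}{\overline{F}(x)}={\bigg(\frac{1+x}{1+xy}\bigg)^{\alpha }}\underset{x\to \infty }{\to }{y^{-\alpha }}.\]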
Next, for a distribution F on $\mathbb{R}$ denote
(4)
\[\begin{aligned}{}{\overline{F}_{\ast }}(y)& :=\underset{x\to \infty }{\liminf }\frac{\overline{F}(xy)}{\overline{F}(x)},\hspace{2.5pt}\hspace{2.5pt}{\overline{F}^{\ast }}(y):=\underset{x\to \infty }{\limsup }\frac{\overline{F}(xy)}{\overline{F}(x)},\hspace{2.5pt}y\gt 1,\end{aligned}\]
and introduce the upper and lower Matuszewska indices by the equalities
\[ {J_{F}^{+}}=-\underset{y\to \infty }{\lim }\frac{\log {\overline{F}_{\ast }}(y)}{\log y},\hspace{2.5pt}\hspace{2.5pt}{J_{F}^{-}}=-\underset{y\to \infty }{\lim }\frac{\log {\overline{F}^{\ast }}(y)}{\log y}.\]
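For instance, for the Pareto-type tail above one has ${\overline{F}_{\ast }}(y)={\overline{F}^{\ast }}(y)={y^{-\alpha }}$ for $y\gt 1$, so that
\[ {J_{F}^{+}}={J_{F}^{-}}=-\underset{y\to \infty }{\lim }\frac{\log {y^{-\alpha }}}{\log y}=\alpha .\]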
Clearly, $0\leqslant {J_{F}^{-}}\leqslant {J_{F}^{+}}\leqslant \infty $. It is well known that $F\in \mathcal{D}$ if and only if ${J_{F}^{+}}\lt \infty $. Note that $F\in \mathcal{S}$ implies
\[\begin{aligned}{}\overline{{F^{\ast n}}}(x)& \sim n\overline{F}(x)\hspace{2.5pt}\hspace{2.5pt}\text{for all}\hspace{2.5pt}\hspace{2.5pt}n\geqslant 2,\end{aligned}\]
see, e.g., Foss et al. [9, Corollary 3.20]. It holds that $\mathcal{C}\subset {\mathcal{S}^{\ast }}\subset \mathcal{S}$ for distributions with finite mean.
More details on the mentioned heavy-tailed classes can be found in the recent book [11].
First, we formulate some known results for the class $\mathcal{C}$.
Proposition 1.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the common distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. Then the relation
(5)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \sim \mathbb{E}\nu \hspace{0.1667em}\overline{{F_{X}}}(x)\end{aligned}\]
holds in each of the following cases:
(a) $\mathbb{E}{\nu ^{p+1}}\lt \infty $ for some $p\gt {J_{{F_{X}}}^{+}}$;
(b) $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$ and $\overline{{F_{\nu }}}(x)=o(\overline{{F_{X}}}(x))$;
(c) $\mu =\mathbb{E}X\in (-\infty ,0)$ and $\mathbb{E}\nu \lt \infty $.
Proof. In the case of nonnegative r.v.s, part (a) of the proposition can be found in Leipus and Šiaulys [10, Corollary 3]. We provide a short proof for the case of distributions on $\mathbb{R}$. Write
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& ={\sum \limits_{n=1}^{\infty }}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n),\hspace{2.5pt}x\geqslant 0.\end{aligned}\]
Since $\mathcal{C}\subset \mathcal{S}$, we have $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$ for any $n\geqslant 1$. In addition, as $\mathcal{C}\subset \mathcal{D}$, according to Theorem 3 in Daley et al. [4], for any $p\gt {J_{{F_{X}}}^{+}}$ there exists a finite positive constant C, independent of x and n, such that
(6)
\[\begin{aligned}{}\underset{x\in \mathbb{R}}{\sup }\frac{\overline{{F_{X}^{\ast n}}}(x)}{\overline{{F_{X}}}(x)}& \leqslant C{n^{p+1}}.\end{aligned}\]
Together with the condition $\mathbb{E}{\nu ^{p+1}}\lt \infty $, this implies (5) by the dominated convergence theorem. Part (b) can be found in Ng et al. [13, Theorem 2.3] or Denisov et al. [6, Corollary 3]. Part (c) follows from Denisov et al. [6, Theorem 1] and the relationship $\mathcal{C}\subset {\mathcal{S}^{\ast }}$ (in the case of regularly varying distributions, see Borovkov and Borovkov [2, Theorem 7.1.1]). □
We will focus our attention on the case where $\mathbb{E}X=0$ and show that in this case the result in part (b) can be improved by replacing the $o(\cdot )$-condition with an $O(\cdot )$-condition. Note that, in the case of zero mean and in a more general setup, Olvera-Cravioto [14, Theorem 2.1 (b)] obtained the following result.
Proposition 2.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. real-valued r.v.s with the common distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. Assume that ${J_{{F_{X}}}^{-}}\gt 0$, $\mathbb{E}|X{|^{r}}\lt \infty $ for some $r\gt 1$, $\mathbb{E}X=0$ and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$. Then (5) holds.
As noted by Olvera-Cravioto [14], the proof of this result follows the standard heavy-tailed techniques of Nagaev [12] and Borovkov [1] (see also Borovkov and Borovkov [2]), based on exponential bounds for sums of truncated r.v.s. Moreover, it was conjectured there that the requirement $\mathbb{E}|X{|^{r}}\lt \infty $, $r\gt 1$, might be weakened with a different proof technique.
In this paper we prove that the result of Proposition 2 indeed holds under the weaker condition $\mathbb{E}|X|\lt \infty $, by modifying the proof accordingly. Specifically, some ideas from Cline and Hsing [3], Tang [16] and Danilenko and Šiaulys [5] are used in the proof of the main result. It seems that an alternative proof of the main result could be constructed using the bounds in Theorem 1 of Tang and Yan [18].
2 Main results
Theorem 1.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, ${J_{{F_{X}}}^{-}}\gt 0$, and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, then (5) holds.
Observe that the conditions $\mathbb{E}|X|\lt \infty $ and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$ imply that $\mathbb{E}\nu \lt \infty $. The statement of the theorem follows from Propositions 3 and 4 below, in which the lower and upper asymptotic bounds, respectively, are obtained.
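Indeed, if $\overline{{F_{\nu }}}(x)\leqslant c\hspace{0.1667em}\overline{{F_{X}}}(x)$ for some $c\gt 0$ and all $x\geqslant {x_{0}}$ (such c and ${x_{0}}$ exist by the $O(\cdot )$-condition), then
\[ \mathbb{E}\nu ={\int _{0}^{\infty }}\overline{{F_{\nu }}}(x)\hspace{0.1667em}\mathrm{d}x\leqslant {x_{0}}+c{\int _{{x_{0}}}^{\infty }}\overline{{F_{X}}}(x)\hspace{0.1667em}\mathrm{d}x\leqslant {x_{0}}+c\hspace{0.1667em}\mathbb{E}{X^{+}}\lt \infty .\]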
Remark 1.
Note that, in the case of a dominatedly varying distribution ${F_{X}}$ with finite mean, the condition $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$ (both for $\mu \gt 0$ and $\mu \leqslant 0$) is sufficient for the relationship $\overline{{F_{{S_{\nu }}}}}(x)\asymp \overline{{F_{X}}}(x)$ (see, e.g., Leipus and Šiaulys [10], Yang and Gao [19]). Taking into account the closure of the class $\mathcal{D}$ under weak tail equivalence, this yields that the distribution of the random sum ${S_{\nu }}$ is in $\mathcal{D}$.
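For instance, a counting r.v. with tail weakly equivalent to $\overline{{F_{X}}}$, for which the $O(\cdot )$-condition holds but the $o(\cdot )$-condition fails, can be constructed as follows: set $\mathbb{P}(\nu \gt n)=\overline{{F_{X}}}(n)$, $n\in {\mathbb{N}_{0}}$, which defines a proper counting r.v. since $\overline{{F_{X}}}$ is nonincreasing and $\overline{{F_{X}}}(0)\leqslant 1$. Then, for $x\geqslant 1$,
\[ \overline{{F_{X}}}(x)\leqslant \overline{{F_{\nu }}}(x)=\overline{{F_{X}}}(\lfloor x\rfloor )\leqslant \overline{{F_{X}}}(x/2)\leqslant c\hspace{0.1667em}\overline{{F_{X}}}(x)\]
with some constant $c\gt 0$, because $x/2\leqslant \lfloor x\rfloor \leqslant x$ and ${F_{X}}\in \mathcal{D}$; hence $\overline{{F_{\nu }}}(x)\asymp \overline{{F_{X}}}(x)$.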
Proposition 3.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the common distribution ${F_{X}}\in \mathcal{S}$, and let ν be an independent counting r.v. with finite mean $\mathbb{E}\nu $. Then
\[ \underset{x\to \infty }{\liminf }\frac{\overline{{F_{{S_{\nu }}}}}(x)}{\overline{{F_{X}}}(x)}\geqslant \mathbb{E}\nu .\]
Proposition 4.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the common distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, ${J_{{F_{X}}}^{-}}\gt 0$, and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, then
\[ \underset{x\to \infty }{\limsup }\frac{\overline{{F_{{S_{\nu }}}}}(x)}{\overline{{F_{X}}}(x)}\leqslant \mathbb{E}\nu .\]
From the main theorem we obtain the following statement for regularly varying distributions. To the best of our knowledge, this is a new result.
Corollary 1.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the distribution ${F_{X}}\in \mathcal{R}(\alpha )$, $\alpha \geqslant 1$, and let ν be an independent counting r.v. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, then (5) holds.
Note that if ${F_{X}}\in \mathcal{R}(\alpha )$ with $\alpha \gt 1$, then $\mathbb{E}{X^{+}}\lt \infty $ is automatically satisfied, so the condition $\mathbb{E}|X|\lt \infty $ concerns only the left tail of ${F_{X}}$.
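For $\alpha =1$, however, the moment condition does not come for free: for instance, the tail $\overline{{F_{X}}}(x)={x^{-1}}$, $x\geqslant 1$, is regularly varying with index 1 and gives $\mathbb{E}{X^{+}}={\textstyle\int _{0}^{\infty }}\overline{{F_{X}}}(x)\hspace{0.1667em}\mathrm{d}x=\infty $, whereas the right tail
\[ \overline{{F_{X}}}(x)=\frac{1}{x{(\log x)^{2}}},\hspace{2.5pt}x\geqslant {x_{0}},\]
with a suitable ${x_{0}}\gt 1$, also belongs to $\mathcal{R}(1)$ but is integrable, so that $\mathbb{E}{X^{+}}\lt \infty $.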
3 Proof of Proposition 3
For $K\in \mathbb{N}$ and large x we have
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \geqslant \mathbb{P}({S_{\nu }}\gt x,\nu \leqslant K)={\sum \limits_{n=1}^{K}}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n).\end{aligned}\]
Since $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$ for each fixed n, we get that
\[ \underset{x\to \infty }{\liminf }\frac{\overline{{F_{{S_{\nu }}}}}(x)}{\overline{{F_{X}}}(x)}\geqslant {\sum \limits_{n=1}^{K}}n\hspace{0.1667em}\mathbb{P}(\nu =n)=\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}.\]
The assertion of the proposition now follows from the last estimate by letting K tend to infinity. □
4 Proof of Proposition 4
Let $K\in \mathbb{N}$ and $\delta \in (0,1)$ be temporarily fixed numbers. For sufficiently large x we have
(7)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& =\mathbb{P}({S_{\nu }}\gt x,\nu \leqslant K)+\mathbb{P}\big({S_{\nu }}\gt x,K\lt \nu \leqslant x{\delta ^{-1}}\big)\\ {} & \hspace{1em}+\mathbb{P}\big({S_{\nu }}\gt x,\nu \gt x{\delta ^{-1}}\big)\\ {} & ={\sum \limits_{n=1}^{K}}\mathbb{P}({S_{n}}\gt x)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\big({S_{n}}\gt x,{\cup _{k=1}^{n}}\big\{{X_{k}}\gt x(1-\delta )\big\}\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\big({S_{n}}\gt x,{\cap _{k=1}^{n}}\big\{{X_{k}}\le x(1-\delta )\big\}\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\mathbb{P}\big({S_{\nu }}\gt x,\nu \gt x{\delta ^{-1}}\big)\\ {} & \leqslant {\sum \limits_{n=1}^{K}}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n)+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}n\overline{{F_{X}}}\big(x(1-\delta )\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\Bigg({\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\gt x\Bigg)\mathbb{P}(\nu =n)+\overline{{F_{\nu }}}\big(x{\delta ^{-1}}\big)\\ {} & =:{\mathcal{J}_{1}}+{\mathcal{J}_{2}}+{\mathcal{J}_{3}}+{\mathcal{J}_{4}},\end{aligned}\]
where ${\widehat{X}_{k}}:=\min \{{X_{k}},x(1-\delta )\}$.
Since ${F_{X}}\in \mathcal{C}\subset \mathcal{S}$, it holds that $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$ for any fixed n. Therefore,
(8)
\[\begin{aligned}{}{\mathcal{J}_{1}}& \leqslant (1+\delta )\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}\end{aligned}\]
for sufficiently large $x\geqslant {x_{1}}(K,\delta )$. In addition, directly from the definitions of ${\mathcal{J}_{2}}$ and ${\mathcal{J}_{3}}$,
(9)
\[\begin{aligned}{}{\mathcal{J}_{2}}& \leqslant \overline{{F_{X}}}\big(x(1-\delta )\big)\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}n\hspace{0.1667em}\mathbb{P}(\nu =n)\leqslant \overline{{F_{X}}}\big(x(1-\delta )\big)\hspace{0.1667em}\mathbb{E}\nu {1_{\{\nu \gt K\}}}\end{aligned}\]
and
(10)
\[\begin{aligned}{}{\mathcal{J}_{3}}& \leqslant \mathbb{E}\nu \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n}.\end{aligned}\]
Using the bound in Lemma 1 (i) for the class $\mathcal{D}$ and the condition $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, we obtain
(11)
\[\begin{aligned}{}{\mathcal{J}_{4}}& =\frac{\overline{{F_{\nu }}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x{\delta ^{-1}})}\hspace{0.1667em}\frac{\overline{{F_{X}}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x)}\hspace{0.1667em}\overline{{F_{X}}}(x)\\ {} & \leqslant {c_{1}}\frac{\overline{{F_{X}}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x)}\hspace{0.1667em}\overline{{F_{X}}}(x)\leqslant {c_{2}}\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\hspace{0.1667em}\overline{{F_{X}}}(x)\end{aligned}\]
for large $x\geqslant {x_{2}}(\delta )$ with some positive constants ${c_{1}}$ and ${c_{2}}$. Substituting estimates (8)–(11) into (7), we obtain
\[\begin{aligned}{}\frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \leqslant \max \bigg\{\frac{{\mathcal{J}_{1}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}},\frac{{\mathcal{J}_{2}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \gt K\}}}}\bigg\}\\ {} & \hspace{1em}+\frac{{\mathcal{J}_{3}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu }+\frac{{\mathcal{J}_{4}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu }\\ {} & \leqslant \max \bigg\{1+\delta ,\frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}+\underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x)}{n\overline{{F_{X}}}(x)}\\ {} & \hspace{1em}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{\hspace{0.1667em}{J_{{F_{X}}}^{-}}/2}}\end{aligned}\]
for $x\geqslant \max \{{x_{1}}(K,\delta ),{x_{2}}(\delta )\}$. Therefore,
\[\begin{aligned}{}\limsup \frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \leqslant \max \bigg\{1+\delta ,\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}\\ {} & \hspace{1em}+\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\limsup \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\\ {} & =\max \bigg\{1+\delta ,\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\end{aligned}\]
according to Lemma 2 (note that $\limsup \overline{{F_{X}}}(x(1-\delta ))/\overline{{F_{X}}}(x)\lt \infty $ because ${F_{X}}\in \mathcal{D}$). Since ${F_{X}}\in \mathcal{C}$ and ${J_{{F_{X}}}^{-}}\gt 0$, the desired upper bound is then obtained by taking $\delta \searrow 0$. □
5 Auxiliary lemmas
The first auxiliary lemma can be found in Tang and Tsitsiashvili [17, Lemma 3.5].
Lemma 1.
Let $F\in \mathcal{D}$ be a distribution with lower and upper Matuszewska indices ${J_{F}^{-}}$ and ${J_{F}^{+}}$, respectively.
(i) If ${J_{F}^{-}}\gt 0$, then for any $0\leqslant {p_{1}}\lt {J_{F}^{-}}$ there exist positive constants ${C_{1}}={C_{1}}({p_{1}})$ and ${D_{1}}={D_{1}}({p_{1}})$ such that
(12)
\[\begin{aligned}{}\frac{\overline{F}(y)}{\overline{F}(x)}& \geqslant {C_{1}}{\bigg(\frac{x}{y}\bigg)^{{p_{1}}}}\hspace{2.5pt}\hspace{2.5pt}\textit{for all}\hspace{2.5pt}x\geqslant y\geqslant {D_{1}}.\end{aligned}\]
(ii) For any ${p_{2}}\gt {J_{F}^{+}}\geqslant 0$ there exist positive constants ${C_{2}}={C_{2}}({p_{2}})$ and ${D_{2}}={D_{2}}({p_{2}})$ such that
(13)
\[\begin{aligned}{}\frac{\overline{F}(y)}{\overline{F}(x)}& \leqslant {C_{2}}{\bigg(\frac{x}{y}\bigg)^{{p_{2}}}}\hspace{2.5pt}\hspace{2.5pt}\textit{for all}\hspace{2.5pt}x\geqslant y\geqslant {D_{2}}.\end{aligned}\]
(iii) For any $p\gt {J_{F}^{+}}$, it holds that ${x^{-p}}=o(\overline{F}(x))$.
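For instance, for the Pareto tail $\overline{F}(x)={x^{-\alpha }}$, $x\geqslant 1$, one has $\overline{F}(y)/\overline{F}(x)={(x/y)^{\alpha }}$ for $x\geqslant y\geqslant 1$, so that (12) and (13) hold with ${C_{1}}={C_{2}}=1$, ${D_{1}}={D_{2}}=1$ and any $0\leqslant {p_{1}}\leqslant \alpha \leqslant {p_{2}}$.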
The following lemma is crucial in the proof of Proposition 4.
Lemma 2.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. real-valued r.v.s with the dominatedly varying distribution ${F_{X}}\in \mathcal{D}$. If $\mathbb{E}|X|\lt \infty $ and $\mathbb{E}X=0$, then, for any $\delta \in (0,1)$,
\[\begin{aligned}{}\lim \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}& =0,\end{aligned}\]
where ${\widehat{X}_{k}}:=\min \{{X_{k}},x(1-\delta )\}$.
Proof. For any $\delta \in (0,1)$, set
\[ a=a(x,n):=\max \bigg\{\log \frac{1}{n\hspace{0.1667em}\overline{{F_{X}}}(x(1-\delta ))},1\bigg\},\hspace{2.5pt}x\in \mathbb{R},\hspace{2.5pt}n\in \mathbb{N},\]
and note that $a(x,n)=\log (1/(n\overline{{F_{X}}}(x(1-\delta ))))$ for large x ($x\geqslant {x_{3}}(\delta )$) and $n\leqslant x{\delta ^{-1}}$.
The assumption $\mathbb{E}|X|\lt \infty $ implies that $x\overline{{F_{X}}}(x(1-\delta ))\to 0$ as $x\to \infty $. Since $a(x,n)$ is nonincreasing in n, we get that for any $\delta \in (0,1)$
(14)
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\min }a(x,n)& \geqslant \log \frac{1}{x{\delta ^{-1}}\overline{{F_{X}}}(x(1-\delta ))}\to \infty .\end{aligned}\]
By the exponential Markov inequality, for any $h,x\gt 0$, we have
\[\begin{aligned}{}\mathbb{P}\Bigg({\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\gt x\Bigg)& \leqslant {\mathrm{e}^{-hx}}\mathbb{E}\exp \Bigg\{h{\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\Bigg\}\\ {} & ={\mathrm{e}^{-hx}}{\big(1+\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)\big)^{n}}.\end{aligned}\]
Thus, by the inequality $1+z\leqslant {\mathrm{e}^{z}}$, $z\in \mathbb{R}$,
(15)
\[ \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\leqslant \exp \big\{-hx+a+n\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)\big\}.\]
Split the expectation $\mathbb{E}({\mathrm{e}^{h{\widehat{X}_{1}}}}-1)$ as follows:
(16)
\[\begin{aligned}{}\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)& ={\mathcal{K}_{1}}+{\mathcal{K}_{2}}+{\mathcal{K}_{3}}+{\mathcal{K}_{4}},\end{aligned}\]
where
\[\begin{aligned}{}{\mathcal{K}_{1}}& :={\int _{(-\infty ,0]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{2}}& :={\int _{(0,x(1-\delta ){a^{-2}}]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{3}}& :={\int _{(x(1-\delta ){a^{-2}},x(1-\delta )]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{4}}& :=\big({\mathrm{e}^{hx(1-\delta )}}-1\big)\overline{{F_{X}}}\big(x(1-\delta )\big).\end{aligned}\]
The inequalities $|{\mathrm{e}^{z}}-1|\leqslant |z|$ and $|{\mathrm{e}^{z}}-z-1|\leqslant {z^{2}}/2$, $z\leqslant 0$, imply that
(17)
\[\begin{aligned}{}{\mathcal{K}_{1}}& =h\mathbb{E}X{1_{\{X\leqslant 0\}}}+\mathbb{E}\big({\mathrm{e}^{hX}}-hX-1\big){1_{\{X\leqslant 0\}}}\\ {} & =-h\mathbb{E}{X^{-}}+\mathbb{E}\big({\mathrm{e}^{hX}}-1\big){1_{\{X\leqslant -{h^{-1/4}}\}}}-h\mathbb{E}X{1_{\{X\leqslant -{h^{-1/4}}\}}}\\ {} & \hspace{1em}+\mathbb{E}\big({\mathrm{e}^{hX}}-hX-1\big){1_{\{-{h^{-1/4}}\lt X\leqslant 0\}}}\\ {} & \leqslant -h\mathbb{E}{X^{-}}+2h\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{3/2}}}{2}.\end{aligned}\]
The inequality ${\mathrm{e}^{z}}-1\leqslant z{\mathrm{e}^{z}}$, $z\geqslant 0$, implies that
(18)
\[\begin{aligned}{}{\mathcal{K}_{2}}& \leqslant h{\mathrm{e}^{hx(1-\delta ){a^{-2}}}}{\int _{(0,x(1-\delta ){a^{-2}}]}}u\hspace{0.1667em}\mathrm{d}{F_{X}}(u)\\ {} & \leqslant h{\mathrm{e}^{hx(1-\delta ){a^{-2}}}}\mathbb{E}{X^{+}}.\end{aligned}\]
In addition, observe that
(19)
\[\begin{aligned}{}{\mathcal{K}_{3}},{\mathcal{K}_{4}}& \leqslant {\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta ){a^{-2}}\big).\end{aligned}\]
Substituting estimates (17), (18), (19) into (15)–(16), we get
\[\begin{aligned}{}& \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}\leqslant \exp \big\{2n{\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta ){a^{-2}}\big)\big\}\\ {} & \hspace{2em}\times \exp \bigg\{-hx+a+nh\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}-\mathbb{E}{X^{-}}+{\mathrm{e}^{\frac{hx(1-\delta )}{{a^{2}}}}}\mathbb{E}{X^{+}}\bigg)\bigg\}.\end{aligned}\]
According to Lemma 1 (iii), ${(x(1-\delta ))^{p}}\hspace{0.1667em}\overline{{F_{X}}}(x(1-\delta ))\to \infty $ for any $p\gt {J_{{F_{X}}}^{+}}$. Hence, for large x ($x\geqslant {x_{4}}(\delta ,p)\gt {x_{3}}(\delta )$),
(20)
\[ \underset{1\leqslant n\leqslant x{\delta ^{-1}}}{\max }a(x,n)\leqslant \log \frac{{(x(1-\delta ))^{p}}}{\overline{{F_{X}}}(x(1-\delta )){(x(1-\delta ))^{p}}}\leqslant p\log \big(x(1-\delta )\big).\]
This relation implies that $x(1-\delta ){a^{-2}}(x,n)\to \infty $ uniformly in $n\leqslant x{\delta ^{-1}}$, and, since ${F_{X}}\in \mathcal{D}$, by Lemma 1 (ii) it holds that
(21)
\[\begin{aligned}{}\frac{\overline{{F_{X}}}(x(1-\delta ){a^{-2}})}{\overline{{F_{X}}}(x(1-\delta ))}& \leqslant {c_{3}}{a^{2p}}\end{aligned}\]
for any $p\gt {J_{{F_{X}}}^{+}}$, large x $(x\geqslant {x_{5}}(\delta ,p)\gt {x_{4}}(\delta ,p))$ and some positive constant ${c_{3}}={c_{3}}(\delta ,p)$. Therefore, by the condition $\mathbb{E}X=\mathbb{E}{X^{+}}-\mathbb{E}{X^{-}}=0$, we get
(22)
\[\begin{aligned}{}& \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}\leqslant \exp \big\{2{c_{3}}n{a^{2p}}{\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & \hspace{1em}\hspace{1em}\times \exp \bigg\{-hx+a+nh\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}+\big({\mathrm{e}^{\frac{hx(1-\delta )}{{a^{2}}}}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}\\ {} & =:{P_{1}}{P_{2}}\end{aligned}\]
for $h\gt 0$, $n\leqslant x{\delta ^{-1}}$ and large x ($x\geqslant {x_{5}}(\delta ,p)$). Now, for $x\gt 0$ set
\[\begin{aligned}{}h& =h(x,n):=\max \bigg\{\frac{a(x,n)-2p\log a(x,n)}{x(1-\delta )},\frac{1}{x(1-\delta )}\bigg\}.\end{aligned}\]
By (14), for large x ($x\geqslant {x_{6}}(\delta ,p)\gt {x_{5}}(\delta ,p)$) we have $a(x,n)-2p\log a(x,n)\geqslant 1$ for all $n\leqslant x{\delta ^{-1}}$, and hence
\[ h(x,n)=\frac{a(x,n)-2p\log a(x,n)}{x(1-\delta )}.\]
Hence, from (20) we obtain that, for $x\geqslant {x_{6}}(\delta ,p)$,
(23)
\[ \underset{n\leqslant x{\delta ^{-1}}}{\max }h(x,n)\leqslant \frac{p\log (x(1-\delta ))}{x(1-\delta )}.\]
With this choice of h, we obtain that, for large x $(x\geqslant {x_{6}}(\delta ,p))$ and any $n\leqslant x{\delta ^{-1}}$,
(24)
\[\begin{aligned}{}{P_{1}}& =\exp \big\{2{c_{3}}n{a^{2p}}{\mathrm{e}^{a-2p\log a}}\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & =\exp \big\{2{c_{3}}{\mathrm{e}^{a}}n\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & ={\mathrm{e}^{2{c_{3}}}}.\end{aligned}\]
For ${P_{2}}$, we have, for large x and $n\leqslant x{\delta ^{-1}}$,
(25)
\[\begin{aligned}{}{P_{2}}& =\exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+n\hspace{0.1667em}\frac{a-2p\log a}{x(1-\delta )}\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}\\ {} & \hspace{1em}+\big({\mathrm{e}^{(a-2p\log a){a^{-2}}}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}\\ {} & \leqslant \exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+\frac{a}{\delta (1-\delta )}\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}\\ {} & \hspace{1em}+\big({\mathrm{e}^{1/a}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}.\end{aligned}\]
Since, by (14) and (23), ${\min _{n\leqslant x{\delta ^{-1}}}}a(x,n)\to \infty $ and ${\max _{n\leqslant x{\delta ^{-1}}}}h(x,n)\to 0$, for large x ($x\geqslant {x_{7}}(\delta ,p)\gt {x_{6}}(\delta ,p))$ it holds that
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\max }\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}+\big({\mathrm{e}^{1/a}}-1\big)\mathbb{E}{X^{+}}\bigg)& \leqslant \frac{{\delta ^{2}}}{2}.\end{aligned}\]
Substituting this bound into (25), we obtain that, for large x,
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\max }{P_{2}}& \leqslant \underset{n\leqslant x{\delta ^{-1}}}{\max }\exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+\frac{a\delta }{2(1-\delta )}\bigg\}\\ {} & =\underset{n\leqslant x{\delta ^{-1}}}{\max }\exp \bigg\{-\frac{a\delta -4p\log a}{2(1-\delta )}\bigg\}\to 0.\end{aligned}\]
This, together with (22) and (24), implies the statement of the lemma. □