1 Introduction
Higher-order Galton–Watson processes with immigration having finite second moment (also called Generalized Integer-valued AutoRegressive (GINAR) processes) were introduced by Latour [14, equation (1.1)]. Pénisson and Jacob [16] used higher-order Galton–Watson processes (without immigration) for studying the decay phase of an epidemic, and, as an application, they investigated the Bovine Spongiform Encephalopathy epidemic in Great Britain after the 1988 feed ban law. As a continuation, Pénisson [15] introduced estimators of the so-called infection parameter in the growth and decay phases of an epidemic. Recently, Kashikar and Deshmukh [12, 13] and Kashikar [11] used second-order Galton–Watson processes (without immigration) for modeling the swine flu data for Pune, India, and La-Gloria, Mexico. Kashikar and Deshmukh [12] also studied their basic probabilistic properties, such as a formula for their probability generating function, the probability of extinction, long-run behavior and conditional least squares estimation of the offspring means.
Let ${\mathbb{Z}_{+}}$, $\mathbb{N}$, $\mathbb{R}$ and ${\mathbb{R}_{+}}$ denote the set of non-negative integers, positive integers, real numbers and non-negative real numbers, respectively. The natural basis of ${\mathbb{R}^{d}}$ will be denoted by $\{{\boldsymbol{e}_{1}},\dots ,{\boldsymbol{e}_{d}}\}$. For $x\in \mathbb{R}$, the integer part of x is denoted by $\lfloor x\rfloor $. Every random variable will be defined on a probability space $(\Omega ,\mathcal{A},\mathbb{P})$. Convergence in distribution and equality in distribution of random variables or stochastic processes are denoted by $\stackrel{\mathcal{D}}{\longrightarrow }$ and $\stackrel{\mathcal{D}}{=}$, respectively.
First, we recall the Galton–Watson process with immigration, which assumes that an individual can reproduce only once during its lifetime, at age 1, and then it dies immediately. The initial population size at time 0 will be denoted by ${X_{0}}$. For each $n\in \mathbb{N}$, the population consists of the offspring born at time n and the immigrants arriving at time n. For each $n,i\in \mathbb{N}$, the number of offspring produced at time n by the ${i^{\mathrm{th}}}$ individual of the ${(n-1)^{\mathrm{th}}}$ generation will be denoted by ${\xi _{n,i}}$. The number of immigrants in the ${n^{\mathrm{th}}}$ generation will be denoted by ${\varepsilon _{n}}$. Then, for the population size ${X_{n}}$ of the ${n^{\mathrm{th}}}$ generation, we have
(1)
\[ {X_{n}}={\sum \limits_{i=1}^{{X_{n-1}}}}{\xi _{n,i}}+{\varepsilon _{n}},\hspace{2em}n\in \mathbb{N},\]
where ${\textstyle\sum _{i=1}^{0}}:=0$. Here $\big\{{X_{0}},\hspace{0.1667em}{\xi _{n,i}},\hspace{0.1667em}{\varepsilon _{n}}:n,i\in \mathbb{N}\big\}$ are supposed to be independent non-negative integer-valued random variables, and $\{{\xi _{n,i}}:n,i\in \mathbb{N}\}$ and $\{{\varepsilon _{n}}:n\in \mathbb{N}\}$ are each supposed to consist of identically distributed random variables. If ${\varepsilon _{n}}=0$, $n\in \mathbb{N}$, then we say that ${({X_{n}})_{n\in {\mathbb{Z}_{+}}}}$ is a Galton–Watson process (without immigration).
Next, we introduce the second-order Galton–Watson branching model with immigration. In this model we suppose that an individual reproduces at age 1 and also at age 2, and then it dies immediately. For each $n\in \mathbb{N}$, the population again consists of the offspring born at time n and the immigrants arriving at time n. For each $n,i,j\in \mathbb{N}$, the number of offspring produced at time n by the ${i^{\mathrm{th}}}$ individual of the ${(n-1)^{\mathrm{th}}}$ generation and by the ${j^{\mathrm{th}}}$ individual of the ${(n-2)^{\mathrm{th}}}$ generation will be denoted by ${\xi _{n,i}}$ and ${\eta _{n,j}}$, respectively, and ${\varepsilon _{n}}$ denotes the number of immigrants in the ${n^{\mathrm{th}}}$ generation. Then, for the population size ${X_{n}}$ of the ${n^{\mathrm{th}}}$ generation, we have
(2)
\[ {X_{n}}={\sum \limits_{i=1}^{{X_{n-1}}}}{\xi _{n,i}}+{\sum \limits_{j=1}^{{X_{n-2}}}}{\eta _{n,j}}+{\varepsilon _{n}},\hspace{2em}n\in \mathbb{N},\]
where ${X_{-1}}$ and ${X_{0}}$ are non-negative integer-valued random variables (the initial population sizes). Here $\big\{{X_{-1}},{X_{0}},\hspace{0.1667em}{\xi _{n,i}},\hspace{0.1667em}{\eta _{n,j}},\hspace{0.1667em}{\varepsilon _{n}}:n,i,j\in \mathbb{N}\big\}$ are supposed to be non-negative integer-valued random variables such that $\big\{({X_{-1}},{X_{0}}),\hspace{0.1667em}{\xi _{n,i}},\hspace{0.1667em}{\eta _{n,j}},\hspace{0.1667em}{\varepsilon _{n}}:n,i,j\in \mathbb{N}\big\}$ are independent, and $\{{\xi _{n,i}}:n,i\in \mathbb{N}\}$, $\{{\eta _{n,j}}:n,j\in \mathbb{N}\}$ and $\{{\varepsilon _{n}}:n\in \mathbb{N}\}$ are each supposed to consist of identically distributed random variables. Note that the number of individuals alive at time $n\in {\mathbb{Z}_{+}}$ is ${X_{n}}+{X_{n-1}}$, which can be larger than the population size ${X_{n}}$ of the ${n^{\mathrm{th}}}$ generation, since the individuals of the population at time $n-1$ are still alive at time n, because they can reproduce also at age 2. The stochastic process ${({X_{n}})_{n\geqslant -1}}$ given by (2) is called a second-order Galton–Watson process with immigration or a Generalized Integer-valued AutoRegressive process of order 2 (GINAR(2) process), see, e.g., Latour [14]. In particular, if ${\xi _{1,1}}$ and ${\eta _{1,1}}$ are Bernoulli distributed random variables, then ${({X_{n}})_{n\geqslant -1}}$ is also called an Integer-valued AutoRegressive process of order 2 (INAR(2) process), see, e.g., Du and Li [8]. If ${\varepsilon _{1}}=0$, then we say that ${({X_{n}})_{n\geqslant -1}}$ is a second-order Galton–Watson process without immigration, introduced and studied by Kashikar and Deshmukh [12] as well.
The process given in (2) with the special choice ${\eta _{1,1}}=0$ gives back the process given in (1), which will be called a first-order Galton–Watson process with immigration to make a distinction.
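For illustration only, recursion (2) can be simulated directly. The following minimal Python sketch (not part of the original model specification) uses Bernoulli offspring laws, so it simulates the INAR(2) special case, together with a Poisson immigration law; all numerical parameter values are arbitrary illustrative choices.

import numpy as np

def simulate_gw2(n_steps, m_xi=0.3, m_eta=0.2, m_eps=1.0, x0=5, xm1=0, seed=0):
    # Simulates X_n = sum_{i<=X_{n-1}} xi_{n,i} + sum_{j<=X_{n-2}} eta_{n,j} + eps_n.
    # With Bernoulli offspring, each offspring sum is a Binomial random variable.
    rng = np.random.default_rng(seed)
    x_prev2, x_prev1 = xm1, x0                         # X_{n-2}, X_{n-1}
    path = [xm1, x0]                                   # X_{-1}, X_0
    for _ in range(n_steps):
        offspring_age1 = rng.binomial(x_prev1, m_xi)   # offspring of age-1 individuals
        offspring_age2 = rng.binomial(x_prev2, m_eta)  # offspring of age-2 individuals
        immigrants = rng.poisson(m_eps)                # eps_n
        x_new = offspring_age1 + offspring_age2 + immigrants
        path.append(x_new)
        x_prev2, x_prev1 = x_prev1, x_new
    return path

print(simulate_gw2(20))

Since m_xi + m_eta = 0.5 < 1 in this sketch, the simulated process is subcritical in the sense recalled in Section 2.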
For notational convenience, let ξ, η and ε be random variables such that $\xi \stackrel{\mathcal{D}}{=}{\xi _{1,1}}$, $\eta \stackrel{\mathcal{D}}{=}{\eta _{1,1}}$ and $\varepsilon \stackrel{\mathcal{D}}{=}{\varepsilon _{1}}$, and put ${m_{\xi }}:=\mathbb{E}(\xi )\in [0,\infty ]$, ${m_{\eta }}:=\mathbb{E}(\eta )\in [0,\infty ]$ and ${m_{\varepsilon }}:=\mathbb{E}(\varepsilon )\in [0,\infty ]$.
If ${({X_{n}})_{n\in {\mathbb{Z}_{+}}}}$ is a (first-order) Galton–Watson process with immigration such that ${m_{\xi }}\in (0,1)$, $\mathbb{P}(\varepsilon =0)<1$ and ${\textstyle\sum _{j=1}^{\infty }}\mathbb{P}(\varepsilon =j)\log (j)<\infty $, then the Markov process ${({X_{n}})_{n\in {\mathbb{Z}_{+}}}}$ admits a unique stationary distribution (see, e.g., Quine [17]), i.e., the distribution of the initial value ${X_{0}}$ can be uniquely chosen so that the process becomes strongly stationary. If ε is regularly varying with index $\alpha \in (0,\infty )$, i.e., $\mathbb{P}(\varepsilon >x)\in (0,\infty )$ for all $x\in (0,\infty )$, and
\[ \underset{x\to \infty }{\lim }\frac{\mathbb{P}(\varepsilon >qx)}{\mathbb{P}(\varepsilon >x)}={q^{-\alpha }}\hspace{2em}\text{for all}\hspace{5pt}q\in (0,\infty )\text{,}\]
then, by Lemma A.3, ${\textstyle\sum _{j=1}^{\infty }}\mathbb{P}(\varepsilon =j)\log (j)<\infty $. The content of Theorem 2.1.1 in Basrak et al. [4] is the following statement.
Theorem 1.
Let ${({X_{n}})_{n\in {\mathbb{Z}_{+}}}}$ be a (first-order) Galton–Watson process with immigration such that ${m_{\xi }}\in (0,1)$ and ε is regularly varying with index $\alpha \in (0,2)$. In case of $\alpha \in [1,2)$, assume additionally that $\mathbb{E}({\xi ^{2}})<\infty $. Let the distribution of the initial value ${X_{0}}$ be such that the process is strongly stationary. Then we have
\[ \mathbb{P}({X_{0}}>x)\sim {\sum \limits_{i=0}^{\infty }}{m_{\xi }^{i\alpha }}\mathbb{P}(\varepsilon >x)=\frac{1}{1-{m_{\xi }^{\alpha }}}\mathbb{P}(\varepsilon >x)\hspace{2em}\textit{as}\hspace{5pt}x\to \infty ,\]
and hence ${X_{0}}$ is also regularly varying with index α.
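As a simple illustration (an instantiation added here, not taken from [4]): if the immigration tail is asymptotically Pareto, i.e., $\mathbb{P}(\varepsilon >x)\sim c{x^{-\alpha }}$ as $x\to \infty $ with some $c\in (0,\infty )$, then Theorem 1 yields
\[ \mathbb{P}({X_{0}}>x)\sim \frac{c}{1-{m_{\xi }^{\alpha }}}{x^{-\alpha }}\hspace{2em}\text{as}\hspace{5pt}x\to \infty ,\]
so the stationary distribution inherits the Pareto-type tail with the constant inflated by the factor ${(1-{m_{\xi }^{\alpha }})^{-1}}$.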
Note that in case of $\alpha =1$ and ${m_{\varepsilon }}=\infty $, Basrak et al. [4, Theorem 2.1.1] additionally assume that ε is consistently varying (or, in other words, intermediate varying), but, eventually, this follows from the fact that ε is regularly varying. Basrak et al. [4, Remark 2.2.2] derived the result of Theorem 1 also for $\alpha \in [2,3)$ under the additional assumption $\mathbb{E}({\xi ^{3}})<\infty $ (an assumption not mentioned explicitly there), and they remarked that the same applies to all $\alpha \in [3,\infty )$ (possibly under an additional moment assumption $\mathbb{E}({\xi ^{\lfloor \alpha \rfloor +1}})<\infty $).
In Barczy et al. [3] we study regularly varying non-stationary (first-order) Galton–Watson processes with immigration.
If ${({X_{n}})_{n\geqslant -1}}$ is a second-order Galton–Watson process with immigration such that ${m_{\xi }},{m_{\eta }}\in (0,1)$ with ${m_{\xi }}+{m_{\eta }}<1$, $\mathbb{P}(\varepsilon =0)<1$ and ${\textstyle\sum _{j=1}^{\infty }}\mathbb{P}(\varepsilon =j)\log (j)<\infty $, then the distribution of the initial values $({X_{0}},{X_{-1}})$ can be uniquely chosen so that the process becomes strongly stationary, see Lemma 3.
The main result of the paper is the following analogue of Theorem 1.
Theorem 2.
Let ${({X_{n}})_{n\geqslant -1}}$ be a second-order Galton–Watson process with immigration such that ${m_{\xi }},{m_{\eta }}\in (0,1)$ with ${m_{\xi }}+{m_{\eta }}<1$, and ε is regularly varying with index $\alpha \in (0,2)$. In case of $\alpha \in [1,2)$, assume additionally that $\mathbb{E}({\xi ^{2}})<\infty $ and $\mathbb{E}({\eta ^{2}})<\infty $. Let the distribution of the initial values $({X_{0}},{X_{-1}})$ be such that the process is strongly stationary. Then we have
\[ \mathbb{P}({X_{0}}>x)\sim {\sum \limits_{i=0}^{\infty }}{m_{i}^{\alpha }}\mathbb{P}(\varepsilon >x)\hspace{2em}\textit{as}\hspace{5pt}x\to \infty \textit{,}\]
where ${m_{0}}:=1$, ${m_{k}}:=\frac{{\lambda _{+}^{k+1}}-{\lambda _{-}^{k+1}}}{{\lambda _{+}}-{\lambda _{-}}}$, $k\in \mathbb{N}$, and
(3)
\[ {\lambda _{+}}:=\frac{{m_{\xi }}+\sqrt{{m_{\xi }^{2}}+4{m_{\eta }}}}{2},\hspace{2em}{\lambda _{-}}:=\frac{{m_{\xi }}-\sqrt{{m_{\xi }^{2}}+4{m_{\eta }}}}{2}.\]
Consequently, ${X_{0}}$ is also regularly varying with index α.
Note that ${\lambda _{+}}$ and ${\lambda _{-}}$ are the eigenvalues of the offspring mean matrix, given in (8), of a corresponding 2-type Galton–Watson process with immigration. Note also that for all $k\in {\mathbb{Z}_{+}}$, we have ${m_{k}}=\mathbb{E}({V_{k,0}})$, where ${({V_{n,0}})_{n\geqslant -1}}$ is a second-order Galton–Watson process (without immigration) with the initial values ${V_{0,0}}=1$ and ${V_{-1,0}}=0$, and with the same offspring distributions as ${({X_{n}})_{n\geqslant -1}}$, see (9). Consequently, the series ${\textstyle\sum _{i=0}^{\infty }}{m_{i}^{\alpha }}$ appearing in Theorem 2 is convergent, since for each $i\in \mathbb{N}$, we have ${m_{i}}=\mathbb{E}({V_{i,0}})\leqslant {\lambda _{+}^{i}}<1$ by (10) and the assumption ${m_{\xi }}+{m_{\eta }}<1$.
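Since ${\lambda _{+}}$ and ${\lambda _{-}}$ are the roots of the characteristic equation ${\lambda ^{2}}={m_{\xi }}\lambda +{m_{\eta }}$, the sequence ${({m_{k}})_{k\in {\mathbb{Z}_{+}}}}$ can also be computed by the linear recursion
\[ {m_{k}}={m_{\xi }}{m_{k-1}}+{m_{\eta }}{m_{k-2}},\hspace{2em}k\geqslant 2,\hspace{2em}{m_{0}}=1,\hspace{1em}{m_{1}}={m_{\xi }}.\]
For instance, with the illustrative values ${m_{\xi }}=0.3$ and ${m_{\eta }}=0.2$ one gets ${m_{2}}=0.3\cdot 0.3+0.2\cdot 1=0.29$ and ${m_{3}}=0.3\cdot 0.29+0.2\cdot 0.3=0.147$, consistent with the bound ${m_{k}}\leqslant {\lambda _{+}^{k}}$, where ${\lambda _{+}}=(0.3+\sqrt{0.89})/2\approx 0.6217$.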
Our technique and result might be extended to p-th order Galton–Watson branching processes with immigration. More generally, one can pose an open problem, namely, under what conditions on the offspring and immigration distributions of a general p-type Galton–Watson branching process with immigration its unique (p-dimensional) stationary distribution is jointly regularly varying. We also note that there is a vast literature on the tail behavior of regularly varying time series (see, e.g., Hult and Samorodnitsky [10]); however, the available results do not seem to be applicable for describing the tail behavior of the stationary distribution of regularly varying branching processes with immigration. The link between GINAR and autoregressive processes is that their autocovariance functions are identical under finite second moment assumptions, but we do not see how knowing the tail behavior of a corresponding autoregressive process would imply anything about the tail behavior of a GINAR process. Further, in our situation the second moment is infinite, so the autocovariance function is not even defined.
Very recently, Bősze and Pap [5] have studied regularly varying non-stationary second-order Galton–Watson processes with immigration. They have found some sufficient conditions on the initial, the offspring and the immigration distributions of a non-stationary second-order Galton–Watson process with immigration under which the distribution of the process in question is regularly varying at any fixed time. The results in Bősze and Pap [5] can be considered as extensions of the results in Barczy et al. [3] on not necessarily stationary (first-order) Galton–Watson processes with immigration. Concerning the results in Bősze and Pap [5] and in the present paper, there is no overlap; for more details, see Remark 1.
The paper is organized as follows. In Section 2 we present preliminaries. First we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. Then we derive an explicit formula for the expectation of a second-order Galton–Watson process with immigration at time n and describe its asymptotic behavior as $n\to \infty $, and, assuming finiteness of the second moments of the offspring distributions, we give an estimate of the second moment of a second-order Galton–Watson process (without immigration). Next, we recall sufficient conditions for the existence of a unique stationary distribution for a 2-type Galton–Watson process with immigration, and a representation of this stationary distribution. Applying these results to the special 2-type Galton–Watson process with immigration belonging to the class of second-order Galton–Watson processes with immigration, we obtain sufficient conditions for the existence of a unique distribution of the initial values $({X_{0}},{X_{-1}})$ such that the process becomes strongly stationary, see Lemma 3. Section 3 is devoted to the proof of Theorem 2. In the course of the proof, sufficient conditions are given under which the distribution of a second-order Galton–Watson process (without immigration) ${({X_{n}})_{n\geqslant -1}}$ at a fixed time is regularly varying provided that ${X_{0}}$ is regularly varying and ${X_{-1}}=0$, see Proposition 1. In the Appendix we collect some results on regularly varying functions and distributions, to name a few of them: the convolution property, Karamata's theorem and Potter's bounds. Note that the ArXiv version [2] of this paper contains more details, proofs and appendices.
2 Preliminaries on second-order Galton–Watson processes with immigration
First, we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. Let ${({X_{n}})_{n\geqslant -1}}$ be a second-order Galton–Watson process with immigration given in (2), and let us introduce the random vectors
(4)
\[ {\boldsymbol{Y}_{n}}:=\left[\substack{{Y_{n,1}}\\ {} {Y_{n,2}}}\right]:=\left[\substack{{X_{n}}\\ {} {X_{n-1}}}\right],\hspace{2em}n\in {\mathbb{Z}_{+}}.\]
Then we have
(5)
\[ {\boldsymbol{Y}_{n}}={\sum \limits_{i=1}^{{Y_{n-1,1}}}}\left[\substack{{\xi _{n,i}}\\ {} 1}\right]+{\sum \limits_{j=1}^{{Y_{n-1,2}}}}\left[\substack{{\eta _{n,j}}\\ {} 0}\right]+\left[\substack{{\varepsilon _{n}}\\ {} 0}\right],\hspace{2em}n\in \mathbb{N},\]
hence ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ is a (special) 2-type Galton–Watson process with immigration and with initial vector ${\boldsymbol{Y}_{0}}={({X_{0}},{X_{-1}})^{\top }}$. In fact, the type 1 and 2 individuals are identified with individuals of age 0 and 1, respectively, and for each $n,i,j\in \mathbb{N}$, at time n, the ${i^{\mathrm{th}}}$ individual of type 1 of the ${(n-1)^{\mathrm{th}}}$ generation produces ${\xi _{n,i}}$ individuals of type 1 and exactly one individual of type 2, and the ${j^{\mathrm{th}}}$ individual of type 2 of the ${(n-1)^{\mathrm{th}}}$ generation produces ${\eta _{n,j}}$ individuals of type 1 and no individual of type 2.
The representation (5) works backwards as well, namely, let ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$ be a special 2-type Galton–Watson process with immigration given by
(6)
\[ {\boldsymbol{Y}_{k}}={\sum \limits_{j=1}^{{Y_{k-1,1}}}}\left[\substack{{\xi _{k,j,1,1}}\\ {} 1}\right]+{\sum \limits_{j=1}^{{Y_{k-1,2}}}}\left[\substack{{\xi _{k,j,2,1}}\\ {} 0}\right]+\left[\substack{{\varepsilon _{k,1}}\\ {} 0}\right],\hspace{2em}k\in \mathbb{N},\]
where ${\boldsymbol{Y}_{0}}$ is a 2-dimensional integer-valued random vector. Here, for each $k,j\in \mathbb{N}$ and $i\in \{1,2\}$, ${\xi _{k,j,i,1}}$ denotes the number of type 1 offspring in the ${k^{\mathrm{th}}}$ generation produced by the ${j^{\mathrm{th}}}$ individual of type i of the ${(k-1)^{\mathrm{th}}}$ generation, and ${\varepsilon _{k,1}}$ denotes the number of type 1 immigrants in the ${k^{\mathrm{th}}}$ generation. For the second coordinate process of ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$, we get ${Y_{k,2}}={Y_{k-1,1}}$, $k\in \mathbb{N}$, and substituting this into (6), the first coordinate process of ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$ satisfies
\[ {Y_{k,1}}={\sum \limits_{j=1}^{{Y_{k-1,1}}}}{\xi _{k,j,1,1}}+{\sum \limits_{j=1}^{{Y_{k-2,1}}}}{\xi _{k,j,2,1}}+{\varepsilon _{k,1}},\hspace{2em}k\geqslant 2.\]
Thus, the first coordinate process of ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$ given by (6) satisfies equation (2) with ${X_{n}}:={Y_{n,1}}$, ${\xi _{n,i}}:={\xi _{n,i,1,1}}$, ${\eta _{n,j}}:={\xi _{n,j,2,1}}$, ${\varepsilon _{n}}:={\varepsilon _{n,1}}$, $n,i,j\in \mathbb{N}$, and with the initial values ${X_{0}}:={Y_{0,1}}$ and ${X_{-1}}:={Y_{0,2}}$, i.e., it is a second-order Galton–Watson process with immigration.
Note that, for a second-order Galton–Watson process ${({X_{n}})_{n\geqslant -1}}$ (without immigration), the additive (or branching) property of a 2-type Galton–Watson process (without immigration) (see, e.g., Athreya and Ney [1, Chapter V, Section 1]), together with the law of total probability, for each $n\in \mathbb{N}$, implies
(7)
\[ {X_{n}}\stackrel{\mathcal{D}}{=}{\sum \limits_{i=1}^{{X_{0}}}}{\zeta _{i,0}^{(n)}}+{\sum \limits_{j=1}^{{X_{-1}}}}{\zeta _{j,-1}^{(n)}},\]
where $\big\{({X_{0}},{X_{-1}}),{\zeta _{i,0}^{(n)}},{\zeta _{j,-1}^{(n)}}:i,j\in \mathbb{N}\big\}$ are independent random variables such that $\{{\zeta _{i,0}^{(n)}}:i\in \mathbb{N}\}$ are independent copies of ${V_{n,0}}$ and $\{{\zeta _{j,-1}^{(n)}}:j\in \mathbb{N}\}$ are independent copies of ${V_{n,-1}}$, where ${({V_{k,0}})_{k\geqslant -1}}$ and ${({V_{k,-1}})_{k\geqslant -1}}$ are second-order Galton–Watson processes (without immigration) with initial values ${V_{0,0}}=1$, ${V_{-1,0}}=0$, ${V_{0,-1}}=0$ and ${V_{-1,-1}}=1$, and with the same offspring distributions as ${({X_{k}})_{k\geqslant -1}}$.
Moreover, if ${({X_{n}})_{n\geqslant -1}}$ is a second-order Galton–Watson process with immigration, then for each $n\in \mathbb{N}$, we have
\[ {X_{n}}={V_{0}^{(n)}}({X_{0}},{X_{-1}})+{\sum \limits_{i=1}^{n}}{V_{i}^{(n-i)}}({\varepsilon _{i}},0),\]
where $\big\{{V_{0}^{(n)}}({X_{0}},{X_{-1}}),{V_{i}^{(n-i)}}({\varepsilon _{i}},0):i\in \{1,\dots ,n\}\big\}$ are independent random variables such that ${V_{0}^{(n)}}({X_{0}},{X_{-1}})$ represents the number of newborns at time n resulting from the initial individuals ${X_{0}}$ at time 0 and ${X_{-1}}$ at time $-1$, and for each $i\in \{1,\dots ,n\}$, ${V_{i}^{(n-i)}}({\varepsilon _{i}},0)$ represents the number of newborns at time n resulting from the immigration ${\varepsilon _{i}}$ at time i, see the ArXiv version [2] of this paper.
Our next aim is to derive an explicit formula for the expectation of a subcritical second-order Galton–Watson process with immigration at time n and to describe its asymptotic behavior as $n\to \infty $.
Recall that ξ, η and ε are random variables such that $\xi \stackrel{\mathcal{D}}{=}{\xi _{1,1}}$, $\eta \stackrel{\mathcal{D}}{=}{\eta _{1,1}}$ and $\varepsilon \stackrel{\mathcal{D}}{=}{\varepsilon _{1}}$, and we put ${m_{\xi }}=\mathbb{E}(\xi )\in [0,\infty ]$, ${m_{\eta }}=\mathbb{E}(\eta )\in [0,\infty ]$ and ${m_{\varepsilon }}=\mathbb{E}(\varepsilon )\in [0,\infty ]$. If ${m_{\xi }}\in {\mathbb{R}_{+}}$, ${m_{\eta }}\in {\mathbb{R}_{+}}$, ${m_{\varepsilon }}\in {\mathbb{R}_{+}}$, $\mathbb{E}({X_{0}})\in {\mathbb{R}_{+}}$ and $\mathbb{E}({X_{-1}})\in {\mathbb{R}_{+}}$, then (2) implies
\[ \mathbb{E}({X_{n}}\hspace{0.1667em}|\hspace{0.1667em}{\mathcal{F}_{n-1}^{X}})={X_{n-1}}{m_{\xi }}+{X_{n-2}}{m_{\eta }}+{m_{\varepsilon }},\hspace{2em}n\in \mathbb{N},\]
where ${\mathcal{F}_{n}^{X}}:=\sigma ({X_{-1}},{X_{0}},\dots ,{X_{n}})$, $n\in {\mathbb{Z}_{+}}$. Consequently,
\[ \mathbb{E}({X_{n}})={m_{\xi }}\mathbb{E}({X_{n-1}})+{m_{\eta }}\mathbb{E}({X_{n-2}})+{m_{\varepsilon }},\hspace{2em}n\in \mathbb{N},\]
which can be written in the matrix form
\[ \left[\substack{\mathbb{E}({X_{n}})\\ {} \mathbb{E}({X_{n-1}})}\right]={\boldsymbol{M}_{\xi ,\eta }}\left[\substack{\mathbb{E}({X_{n-1}})\\ {} \mathbb{E}({X_{n-2}})}\right]+\left[\substack{{m_{\varepsilon }}\\ {} 0}\right],\hspace{2em}n\in \mathbb{N},\]
with
(8)
\[ {\boldsymbol{M}_{\xi ,\eta }}:=\left[\begin{array}{c@{\hskip10.0pt}c}{m_{\xi }}& {m_{\eta }}\\ {} 1& 0\end{array}\right].\]
Note that ${\boldsymbol{M}_{\xi ,\eta }}$ is the offspring mean matrix of the 2-type Galton–Watson process ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ given in (4). Thus, we conclude
\[ \left[\substack{\mathbb{E}({X_{n}})\\ {} \mathbb{E}({X_{n-1}})}\right]={\boldsymbol{M}_{\xi ,\eta }^{n}}\left[\substack{\mathbb{E}({X_{0}})\\ {} \mathbb{E}({X_{-1}})}\right]+{\sum \limits_{k=1}^{n}}{\boldsymbol{M}_{\xi ,\eta }^{n-k}}\left[\substack{{m_{\varepsilon }}\\ {} 0}\right],\hspace{2em}n\in \mathbb{N}.\]
Hence, the asymptotic behavior of the sequence ${(\mathbb{E}({X_{n}}))_{n\in \mathbb{N}}}$ depends on the asymptotic behavior of the powers ${({\boldsymbol{M}_{\xi ,\eta }^{n}})_{n\in \mathbb{N}}}$, which is related to the spectral radius ϱ of ${\boldsymbol{M}_{\xi ,\eta }}$. The matrix ${\boldsymbol{M}_{\xi ,\eta }}$ has eigenvalues ${\lambda _{+}}$ and ${\lambda _{-}}$ given in (3), satisfying ${\lambda _{+}}\in {\mathbb{R}_{+}}$ and ${\lambda _{-}}\in [-{\lambda _{+}},0]$, hence the spectral radius of ${\boldsymbol{M}_{\xi ,\eta }}$ is $\varrho ={\lambda _{+}}$. If ${({X_{n}})_{n\geqslant -1}}$ is a second-order Galton–Watson process with immigration such that ${m_{\xi }}\in {\mathbb{R}_{+}}$ and ${m_{\eta }}\in {\mathbb{R}_{+}}$, then ${({X_{n}})_{n\geqslant -1}}$ is called subcritical, critical or supercritical if $\varrho <1$, $\varrho =1$ or $\varrho >1$, respectively. It is easy to check that a second-order Galton–Watson process with immigration is subcritical, critical or supercritical if and only if ${m_{\xi }}+{m_{\eta }}<1$, ${m_{\xi }}+{m_{\eta }}=1$ or ${m_{\xi }}+{m_{\eta }}>1$, respectively.
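For completeness, the last equivalence can be checked directly (a short verification added here): if ${m_{\xi }}\geqslant 2$, then both $\varrho \geqslant 1$ and ${m_{\xi }}+{m_{\eta }}\geqslant 1$ hold trivially, while for ${m_{\xi }}<2$,
\[ {\lambda _{+}}<1\hspace{1em}\Longleftrightarrow \hspace{1em}\sqrt{{m_{\xi }^{2}}+4{m_{\eta }}}<2-{m_{\xi }}\hspace{1em}\Longleftrightarrow \hspace{1em}{m_{\xi }^{2}}+4{m_{\eta }}<4-4{m_{\xi }}+{m_{\xi }^{2}}\hspace{1em}\Longleftrightarrow \hspace{1em}{m_{\xi }}+{m_{\eta }}<1,\]
and the critical and supercritical cases follow in the same way with < replaced by = and >, respectively.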
Lemma 1.
Let ${({X_{n}})_{n\geqslant -1}}$ be a second-order Galton–Watson process with immigration such that ${m_{\xi }},{m_{\eta }}\in (0,1)$ with ${m_{\xi }}+{m_{\eta }}<1$, ${m_{\varepsilon }}\in {\mathbb{R}_{+}}$, $\mathbb{E}({X_{0}})\in {\mathbb{R}_{+}}$ and $\mathbb{E}({X_{-1}})\in {\mathbb{R}_{+}}$. Then, for all $n\in \mathbb{N}$, we have
(9)
\[ \begin{aligned}{}\mathbb{E}({X_{n}})& =\frac{{\lambda _{+}^{n+1}}-{\lambda _{-}^{n+1}}}{{\lambda _{+}}-{\lambda _{-}}}\mathbb{E}({X_{0}})+\frac{{\lambda _{+}^{n}}-{\lambda _{-}^{n}}}{{\lambda _{+}}-{\lambda _{-}}}{m_{\eta }}\mathbb{E}({X_{-1}})\\ {} & \phantom{=\hspace{0.2778em}}+\frac{1}{{\lambda _{+}}-{\lambda _{-}}}\bigg({\lambda _{+}}\frac{1-{\lambda _{+}^{n}}}{1-{\lambda _{+}}}-{\lambda _{-}}\frac{1-{\lambda _{-}^{n}}}{1-{\lambda _{-}}}\bigg){m_{\varepsilon }},\end{aligned}\]
and hence
\[ \mathbb{E}({X_{n}})=\frac{{m_{\varepsilon }}}{(1-{\lambda _{+}})(1-{\lambda _{-}})}+\operatorname{O}({\lambda _{+}^{n}})\hspace{2em}\textit{as}\hspace{5pt}n\to \infty \textit{.}\]
Further, in case of ${m_{\varepsilon }}=0$, i.e., when there is no immigration, we have the following more precise statements:
\[ \mathbb{E}({X_{n}})={m_{n}}\mathbb{E}({X_{0}})+{m_{n-1}}{m_{\eta }}\mathbb{E}({X_{-1}}),\hspace{2em}n\in \mathbb{N},\]
and
(10)
\[ \mathbb{E}({X_{n}})\leqslant {\varrho ^{n}}\big(\mathbb{E}({X_{0}})+\mathbb{E}({X_{-1}})\big),\hspace{2em}n\in \mathbb{N}.\]
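Since ${\lambda _{+}}+{\lambda _{-}}={m_{\xi }}$ and ${\lambda _{+}}{\lambda _{-}}=-{m_{\eta }}$, the limiting constant in Lemma 1 simplifies to
\[ \frac{{m_{\varepsilon }}}{(1-{\lambda _{+}})(1-{\lambda _{-}})}=\frac{{m_{\varepsilon }}}{1-{m_{\xi }}-{m_{\eta }}};\]
for instance, with the illustrative values ${m_{\xi }}=0.3$, ${m_{\eta }}=0.2$ and ${m_{\varepsilon }}=1$, we have $\mathbb{E}({X_{n}})\to 2$ as $n\to \infty $.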
The first moment of a subcritical second-order Galton–Watson process ${({X_{n}})_{n\geqslant -1}}$ (without immigration) can be estimated by (10). Next, we present an auxiliary lemma giving an estimate of the second moment of a subcritical second-order Galton–Watson process (without immigration).
Lemma 2.
Let ${({X_{n}})_{n\geqslant -1}}$ be a second-order Galton–Watson process (without immigration) such that ${m_{\xi }},{m_{\eta }}\in (0,1)$ with ${m_{\xi }}+{m_{\eta }}<1$, ${X_{0}}=1$, ${X_{-1}}=0$, $\mathbb{E}({\xi ^{2}})<\infty $ and $\mathbb{E}({\eta ^{2}})<\infty $. Then for all $n\in \mathbb{N}$,
\[ \mathbb{E}({X_{n}^{2}})\leqslant {c_{\mathrm{sub}}}{\varrho ^{n}}\]
with some constant ${c_{\mathrm{sub}}}\in (0,\infty )$ depending only on the offspring distributions.
The proofs of Lemmata 1 and 2 together with statements in the critical and supercritical cases can be found in the ArXiv version [2] of this paper.
Next, we recall 2-type Galton–Watson processes with immigration. For each $k,j\in {\mathbb{Z}_{+}}$ and $i,\ell \in \{1,2\}$, the number of individuals of type i born or arrived as immigrants in the ${k^{\mathrm{th}}}$ generation will be denoted by ${X_{k,i}}$, the number of type ℓ offsprings produced by the ${j^{\mathrm{th}}}$ individual who is of type i belonging to the ${(k-1)^{\mathrm{th}}}$ generation will be denoted by ${\xi _{k,j,i,\ell }}$, and the number of type i immigrants in the ${k^{\mathrm{th}}}$ generation will be denoted by ${\varepsilon _{k,i}}$. Then we have
\[ \left[\substack{{X_{k,1}}\\ {} {X_{k,2}}}\right]={\sum \limits_{j=1}^{{X_{k-1,1}}}}\left[\substack{{\xi _{k,j,1,1}}\\ {} {\xi _{k,j,1,2}}}\right]+{\sum \limits_{j=1}^{{X_{k-1,2}}}}\left[\substack{{\xi _{k,j,2,1}}\\ {} {\xi _{k,j,2,2}}}\right]+\left[\substack{{\varepsilon _{k,1}}\\ {} {\varepsilon _{k,2}}}\right],\hspace{2em}k\in \mathbb{N}.\]
Here $\big\{{\boldsymbol{X}_{0}},\hspace{0.1667em}{\boldsymbol{\xi }_{k,j,i}},\hspace{0.1667em}{\boldsymbol{\varepsilon }_{k}}:k,j\in \mathbb{N},\hspace{0.1667em}i\in \{1,2\}\big\}$ are supposed to be independent, and $\{{\boldsymbol{\xi }_{k,j,1}}:k,j\in \mathbb{N}\}$, $\{{\boldsymbol{\xi }_{k,j,2}}:k,j\in \mathbb{N}\}$ and $\{{\boldsymbol{\varepsilon }_{k}}:k\in \mathbb{N}\}$ are supposed to consist of identically distributed random vectors, where
\[ {\boldsymbol{X}_{0}}:=\left[\substack{{X_{0,1}}\\ {} {X_{0,2}}}\right],\hspace{2em}{\boldsymbol{\xi }_{k,j,i}}:=\left[\substack{{\xi _{k,j,i,1}}\\ {} {\xi _{k,j,i,2}}}\right],\hspace{2em}{\boldsymbol{\varepsilon }_{k}}:=\left[\substack{{\varepsilon _{k,1}}\\ {} {\varepsilon _{k,2}}}\right].\]
For notational convenience, let ${\boldsymbol{\xi }_{1}}$, ${\boldsymbol{\xi }_{2}}$ and $\boldsymbol{\varepsilon }$ be random vectors such that ${\boldsymbol{\xi }_{1}}\stackrel{\mathcal{D}}{=}{\boldsymbol{\xi }_{1,1,1}}$, ${\boldsymbol{\xi }_{2}}\stackrel{\mathcal{D}}{=}{\boldsymbol{\xi }_{1,1,2}}$ and $\boldsymbol{\varepsilon }\stackrel{\mathcal{D}}{=}{\boldsymbol{\varepsilon }_{1}}$, and put ${\boldsymbol{m}_{{\boldsymbol{\xi }_{1}}}}:=\mathbb{E}({\boldsymbol{\xi }_{1}})\in {[0,\infty ]^{2}}$, ${\boldsymbol{m}_{{\boldsymbol{\xi }_{2}}}}:=\mathbb{E}({\boldsymbol{\xi }_{2}})\in {[0,\infty ]^{2}}$, ${\boldsymbol{m}_{\boldsymbol{\varepsilon }}}:=\mathbb{E}(\boldsymbol{\varepsilon })\in {[0,\infty ]^{2}}$, and
\[ {\boldsymbol{M}_{\boldsymbol{\xi }}}:=\left[\begin{array}{c@{\hskip10.0pt}c}{\boldsymbol{m}_{{\boldsymbol{\xi }_{1}}}}& {\boldsymbol{m}_{{\boldsymbol{\xi }_{2}}}}\end{array}\right]\in {[0,\infty ]^{2\times 2}}.\]
We call ${\boldsymbol{M}_{\boldsymbol{\xi }}}$ the offspring mean matrix, and note that many authors define the offspring mean matrix as ${\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}$. If ${\boldsymbol{m}_{{\boldsymbol{\xi }_{1}}}}\in {\mathbb{R}_{+}^{2}}$, ${\boldsymbol{m}_{{\boldsymbol{\xi }_{2}}}}\in {\mathbb{R}_{+}^{2}}$, the spectral radius of ${\boldsymbol{M}_{\boldsymbol{\xi }}}$ is less than 1, ${\boldsymbol{M}_{\boldsymbol{\xi }}}$ is primitive, i.e., there exists $m\in \mathbb{N}$ such that ${\boldsymbol{M}_{\boldsymbol{\xi }}^{m}}\in {\mathbb{R}_{++}^{2\times 2}}$, $\mathbb{P}(\boldsymbol{\varepsilon }=\mathbf{0})<1$ and $\mathbb{E}({1_{\{\boldsymbol{\varepsilon }\ne \mathbf{0}\}}}\log ({({\boldsymbol{e}_{1}}+{\boldsymbol{e}_{2}})^{\top }}\boldsymbol{\varepsilon }))<\infty $, then, by the Theorem in Quine [17], there exists a unique stationary distribution $\boldsymbol{\pi }$ for ${({\boldsymbol{X}_{n}})_{n\in {\mathbb{Z}_{+}}}}$. As a consequence of formula (16) for the probability generating function of $\boldsymbol{\pi }$ in Quine [17], we have
\[ {\sum \limits_{i=0}^{n}}{\boldsymbol{V}_{i}^{(i)}}({\boldsymbol{\varepsilon }_{i}})\stackrel{\mathcal{D}}{\longrightarrow }\boldsymbol{\pi }\hspace{2em}\text{as}\hspace{5pt}n\to \infty \text{,}\]
where ${({\boldsymbol{V}_{k}^{(i)}}({\boldsymbol{\varepsilon }_{i}}))_{k\in {\mathbb{Z}_{+}}}}$, $i\in {\mathbb{Z}_{+}}$, are independent copies of a 2-type Galton–Watson process ${({\boldsymbol{V}_{k}}(\boldsymbol{\varepsilon }))_{k\in {\mathbb{Z}_{+}}}}$ (without immigration) with the initial vector ${\boldsymbol{V}_{0}}(\boldsymbol{\varepsilon })=\boldsymbol{\varepsilon }$ and with the same offspring distributions as ${({\boldsymbol{X}_{k}})_{k\in {\mathbb{Z}_{+}}}}$. Consequently, we have
(11)
\[ {\sum \limits_{i=0}^{\infty }}{\boldsymbol{V}_{i}^{(i)}}({\boldsymbol{\varepsilon }_{i}})\stackrel{\mathcal{D}}{=}\boldsymbol{\pi },\]
where the series ${\textstyle\sum _{i=0}^{\infty }}{\boldsymbol{V}_{i}^{(i)}}({\boldsymbol{\varepsilon }_{i}})$ converges with probability 1, see, e.g., Heyer [9, Theorem 3.1.6]. The above representation of the stationary distribution $\boldsymbol{\pi }$ for ${({\boldsymbol{X}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ can be interpreted in a way that we consider independent 2-type Galton–Watson processes without immigration such that the ${i^{\mathrm{th}}}$ one admits initial vector ${\boldsymbol{\varepsilon }_{i}}$, $i\in {\mathbb{Z}_{+}}$, evaluate the ${i^{\mathrm{th}}}$ 2-type Galton–Watson process at time point i, and then sum up all these random variables.
Next, we give sufficient conditions for the strong stationarity of a subcritical second-order Galton–Watson process with immigration.
Lemma 3.
If ${({X_{n}})_{n\geqslant -1}}$ is a second-order Galton–Watson process with immigration such that ${m_{\xi }},{m_{\eta }}\in (0,1)$ with ${m_{\xi }}+{m_{\eta }}<1$, $\mathbb{P}(\varepsilon =0)<1$ and ${\textstyle\sum _{j=1}^{\infty }}\mathbb{P}(\varepsilon =j)\log (j)<\infty $, then the distribution of the initial values $({X_{0}},{X_{-1}})$ can be uniquely chosen so that the process becomes strongly stationary, and we have a representation
(12)
\[ {X_{0}}\stackrel{\mathcal{D}}{=}{\sum \limits_{i=0}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}}),\]
where the series converges with probability 1 and ${({V_{k}^{(i)}}({\varepsilon _{i}}))_{k\geqslant -1}}$, $i\in {\mathbb{Z}_{+}}$, are independent copies of ${({V_{k}}(\varepsilon ))_{k\geqslant -1}}$, which is a second-order Galton–Watson process (without immigration) with the initial values ${V_{0}}(\varepsilon )=\varepsilon $ and ${V_{-1}}(\varepsilon )=0$, and with the same offspring distributions as ${({X_{k}})_{k\geqslant -1}}$. In fact, the distribution of $({X_{0}},{X_{-1}})$ is the unique stationary distribution of the corresponding special 2-type Galton–Watson process ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ with immigration given in (5).
Proof.
First we show that the process ${({X_{n}})_{n\geqslant -1}}$ is strongly stationary if and only if the distribution of the initial population sizes ${({X_{0}},{X_{-1}})^{\top }}$ coincides with the stationary distribution $\boldsymbol{\pi }$ of the Markov chain ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$. If ${({X_{0}},{X_{-1}})^{\top }}\stackrel{\mathcal{D}}{=}\boldsymbol{\pi }$, then ${\boldsymbol{Y}_{0}}\stackrel{\mathcal{D}}{=}\boldsymbol{\pi }$, thus ${({\boldsymbol{Y}_{k}})_{k\in {\mathbb{Z}_{+}}}}$ is strongly stationary, and hence for each $n,m\in {\mathbb{Z}_{+}}$, $({\boldsymbol{Y}_{0}},\dots ,{\boldsymbol{Y}_{n}})\stackrel{\mathcal{D}}{=}({\boldsymbol{Y}_{m}},\dots ,{\boldsymbol{Y}_{n+m}})$, yielding
\[\begin{aligned}{}& ({X_{0}},{X_{-1}},{X_{1}},{X_{0}},\dots ,{X_{n}},{X_{n-1}})\\ {} & \hspace{2em}\stackrel{\mathcal{D}}{=}({X_{m}},{X_{m-1}},{X_{m+1}},{X_{m}},\dots ,{X_{n+m}},{X_{n+m-1}}).\end{aligned}\]
In particular, $({X_{-1}},{X_{0}},{X_{1}},\dots ,{X_{n}})\stackrel{\mathcal{D}}{=}({X_{m-1}},{X_{m}},{X_{m+1}},\dots ,{X_{n+m}})$, hence ${({X_{n}})_{n\geqslant -1}}$ is strongly stationary. Since
\[ ({X_{m}},{X_{m-1}},{X_{m+1}},{X_{m}},\dots ,{X_{n+m}},{X_{n+m-1}})\]
is a continuous function of $({X_{m-1}},{X_{m}},{X_{m+1}},\dots ,{X_{n+m}})$, these considerations work backwards as well. Consequently, $\boldsymbol{\pi }$ is the unique stationary distribution of the second-order Markov chain ${({X_{n}})_{n\geqslant -1}}$.
The offspring mean matrix of ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ has the form
\[ \left[\begin{array}{c@{\hskip10.0pt}c}{m_{\xi }}& {m_{\eta }}\\ {} 1& 0\end{array}\right]={\boldsymbol{M}_{\xi ,\eta }},\]
the spectral radius of ${\boldsymbol{M}_{\xi ,\eta }}$ is $\varrho ={\lambda _{+}}$, which is less than 1 due to the assumption ${m_{\xi }}+{m_{\eta }}<1$, and ${\boldsymbol{M}_{\xi ,\eta }}$ is primitive, since
\[ {\boldsymbol{M}_{\xi ,\eta }^{2}}={\left[\begin{array}{c@{\hskip10.0pt}c}{m_{\xi }}& {m_{\eta }}\\ {} 1& 0\end{array}\right]^{2}}=\left[\begin{array}{c@{\hskip10.0pt}c}{m_{\xi }^{2}}+{m_{\eta }}& {m_{\xi }}{m_{\eta }}\\ {} {m_{\xi }}& {m_{\eta }}\end{array}\right]\in {(0,\infty )^{2\times 2}}.\]
Hence, as recalled earlier, there exists a unique stationary distribution $\boldsymbol{\pi }$ for ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$. Moreover, the stationary distribution $\boldsymbol{\pi }$ of ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ has the representation given in (11). Using the considerations for the backward representation, we have ${({\boldsymbol{e}_{1}^{\top }}{\boldsymbol{V}_{k}}(\boldsymbol{\varepsilon }))_{k\in {\mathbb{Z}_{+}}}}={({V_{k}}(\varepsilon ))_{k\in {\mathbb{Z}_{+}}}}$ and ${({\boldsymbol{e}_{2}^{\top }}{\boldsymbol{V}_{k}}(\boldsymbol{\varepsilon }))_{k\in {\mathbb{Z}_{+}}}}={({V_{k-1}}(\varepsilon ))_{k\in {\mathbb{Z}_{+}}}}$, where ${({V_{k}}(\varepsilon ))_{k\geqslant -1}}$ is a second-order Galton–Watson process (without immigration) with initial values ${V_{0}}(\varepsilon )=\varepsilon $ and ${V_{-1}}(\varepsilon )=0$, and with the same offspring distributions as ${({X_{k}})_{k\geqslant -1}}$. Consequently, the two marginals of the stationary distribution $\boldsymbol{\pi }$ coincide; their common distribution will be denoted by π. So, under the given conditions, ${({X_{n}})_{n\geqslant -1}}$ is strongly stationary if and only if the distribution of $({X_{0}},{X_{-1}})$ coincides with $\boldsymbol{\pi }$. In this case the distribution of ${X_{0}}$ is π, and it admits the representation (12). □
Note also that ${({X_{n}})_{n\geqslant -1}}$ is only a second-order Markov chain, but not a Markov chain.
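To illustrate the representation (12), one can approximate a draw from the stationary distribution by truncating the series at a large N and simulating each ${V_{i}^{(i)}}({\varepsilon _{i}})$ by running the no-immigration recursion for i steps from the initial values $({\varepsilon _{i}},0)$. The following rough Monte Carlo sketch, which is not part of the original paper, reuses the illustrative Bernoulli/Poisson distributional choices from the simulation snippet in the Introduction.

import numpy as np

def v_process(i, eps, m_xi=0.3, m_eta=0.2, rng=None):
    # V_i(eps): second-order GW process WITHOUT immigration after i steps,
    # started from V_0 = eps individuals at time 0 and V_{-1} = 0.
    x_prev2, x_prev1 = 0, eps
    for _ in range(i):
        x_new = rng.binomial(x_prev1, m_xi) + rng.binomial(x_prev2, m_eta)
        x_prev2, x_prev1 = x_prev1, x_new
    return x_prev1

def stationary_sample(n_trunc=60, m_eps=1.0, rng=None):
    # One approximate draw from pi via the series (12), truncated at n_trunc.
    return sum(v_process(i, rng.poisson(m_eps), rng=rng) for i in range(n_trunc))

rng = np.random.default_rng(1)
draws = [stationary_sample(rng=rng) for _ in range(10_000)]
print(np.mean(draws))  # close to m_eps / (1 - m_xi - m_eta) = 2 by Lemma 1

The sample mean stabilizes near 2, matching the limit of the expectations computed after Lemma 1.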
Remark 1.
Note that there is no overlap between the results in the recent paper of Bősze and Pap [5] on non-stationary second-order Galton–Watson processes with immigration and in the present paper. In [5] the authors always suppose that the initial values ${X_{0}}$ and ${X_{-1}}$ of a second-order Galton–Watson process with immigration ${({X_{n}})_{n\geqslant -1}}$ are independent, so in the results of [5] the distribution of $({X_{0}},{X_{-1}})$ cannot be chosen as the unique stationary distribution $\boldsymbol{\pi }$ of the special 2-type Galton–Watson process ${({\boldsymbol{Y}_{n}})_{n\in {\mathbb{Z}_{+}}}}$ with immigration given in (5), since the marginals of $\boldsymbol{\pi }$ are not independent in general. □
3 Proof of Theorem 2
For the proof of Theorem 2, we need an auxiliary result on the tail behavior of second-order Galton–Watson processes (without immigration) ${({X_{n}})_{n\geqslant -1}}$ such that ${X_{0}}$ is regularly varying and ${X_{-1}}=0$.
Proposition 1.
Let ${({X_{n}})_{n\geqslant -1}}$ be a second-order Galton–Watson process (without immigration) such that ${X_{0}}$ is regularly varying with index ${\beta _{0}}\in {\mathbb{R}_{+}}$, ${X_{-1}}=0$, ${m_{\xi }}\in (0,\infty )$ and ${m_{\eta }}\in {\mathbb{R}_{+}}$. In case of ${\beta _{0}}\in [1,\infty )$, assume additionally that there exists $r\in ({\beta _{0}},\infty )$ with $\mathbb{E}({\xi ^{r}})<\infty $ and $\mathbb{E}({\eta ^{r}})<\infty $. Then for all $n\in \mathbb{N}$,
\[ \mathbb{P}({X_{n}}>x)\sim {m_{n}^{{\beta _{0}}}}\mathbb{P}({X_{0}}>x)\hspace{2em}\textit{as}\hspace{5pt}x\to \infty \textit{,}\]
where ${m_{i}}$, $i\in {\mathbb{Z}_{+}}$, are given in Theorem 2, and hence, ${X_{n}}$ is also regularly varying with index ${\beta _{0}}$ for each $n\in \mathbb{N}$.
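For $n=1$, the statement can be seen directly (an illustrative special case): since ${X_{-1}}=0$ and there is no immigration, ${X_{1}}={\textstyle\sum _{i=1}^{{X_{0}}}}{\xi _{1,i}}$ and ${m_{1}}={m_{\xi }}$, so Proposition 1 reduces to the random-sum tail asymptotic
\[ \mathbb{P}\Bigg({\sum \limits_{i=1}^{{X_{0}}}}{\xi _{1,i}}>x\Bigg)\sim {m_{\xi }^{{\beta _{0}}}}\mathbb{P}({X_{0}}>x)\hspace{2em}\text{as}\hspace{5pt}x\to \infty ,\]
which is exactly the content of Proposition A.1 in this special case.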
Proof of Proposition 1.
Let us fix $n\in \mathbb{N}$. In view of the additive property (7), it is sufficient to prove
\[ \mathbb{P}\Bigg({\sum \limits_{i=1}^{{X_{0}}}}{\zeta _{i,0}^{(n)}}>x\Bigg)\sim {m_{n}^{{\beta _{0}}}}\mathbb{P}({X_{0}}>x)\hspace{2em}\text{as}\hspace{5pt}x\to \infty \text{.}\]
This follows from Proposition A.1, since $\mathbb{E}({\zeta _{1,0}^{(n)}})={m_{n}}\in (0,\infty )$, $n\in \mathbb{N}$, by (9). □
Proof of Theorem 2.
We will use the ideas of the proof of Theorem 2.1.1 in Basrak et al. [4] and the representation (12) of the distribution of ${X_{0}}$. Recall that ${({V_{k}^{(i)}}({\varepsilon _{i}}))_{k\geqslant -1}}$, $i\in {\mathbb{Z}_{+}}$, are independent copies of ${({V_{k}}(\varepsilon ))_{k\geqslant -1}}$, which is a second-order Galton–Watson process (without immigration) with the initial values ${V_{0}}(\varepsilon )=\varepsilon $ and ${V_{-1}}(\varepsilon )=0$, and with the same offspring distributions as ${({X_{k}})_{k\geqslant -1}}$. Due to the representation (7), for each $i\in {\mathbb{Z}_{+}}$, we have
\[ {V_{i}^{(i)}}({\varepsilon _{i}})\stackrel{\mathcal{D}}{=}{\sum \limits_{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}},\]
where $\big\{{\varepsilon _{i}},{\zeta _{j,0}^{(i)}}:j\in \mathbb{N}\big\}$ are independent random variables such that $\{{\zeta _{j,0}^{(i)}}:j\in \mathbb{N}\}$ are independent copies of ${V_{i,0}}$, where ${({V_{k,0}})_{k\geqslant -1}}$ is a second-order Galton–Watson process (without immigration) with the initial values ${V_{0,0}}=1$ and ${V_{-1,0}}=0$, and with the same offspring distributions as ${({X_{k}})_{k\geqslant -1}}$. For each $i\in {\mathbb{Z}_{+}}$, by Proposition 1, we obtain $\mathbb{P}({V_{i}^{(i)}}({\varepsilon _{i}})>x)\sim {m_{i}^{\alpha }}\mathbb{P}(\varepsilon >x)$ as $x\to \infty $, yielding that the random variables ${V_{i}^{(i)}}({\varepsilon _{i}})$, $i\in {\mathbb{Z}_{+}}$, are also regularly varying with index α. Since ${V_{i}^{(i)}}({\varepsilon _{i}})$, $i\in {\mathbb{Z}_{+}}$, are independent, for each $n\in {\mathbb{Z}_{+}}$, by Lemma A.5, we have
(13)
\[ \mathbb{P}\bigg({\sum \limits_{i=0}^{n}}{V_{i}^{(i)}}({\varepsilon _{i}})>x\bigg)\sim {\sum \limits_{i=0}^{n}}{m_{i}^{\alpha }}\mathbb{P}(\varepsilon >x)\hspace{2em}\text{as}\hspace{5pt}x\to \infty \text{,}\]
and hence the random variables ${\textstyle\sum _{i=0}^{n}}{V_{i}^{(i)}}({\varepsilon _{i}})$, $n\in {\mathbb{Z}_{+}}$, are also regularly varying with index α. For each $n\in \mathbb{N}$, using that ${V_{i}^{(i)}}({\varepsilon _{i}})$, $i\in {\mathbb{Z}_{+}}$, are non-negative, we have
\[\begin{aligned}{}\underset{x\to \infty }{\liminf }\frac{\mathbb{P}({X_{0}}>x)}{\mathbb{P}(\varepsilon >x)}& =\underset{x\to \infty }{\liminf }\frac{\mathbb{P}({\textstyle\textstyle\sum _{i=0}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>x)}{\mathbb{P}(\varepsilon >x)}\\ {} & \geqslant \underset{x\to \infty }{\liminf }\frac{\mathbb{P}({\textstyle\textstyle\sum _{i=0}^{n}}{V_{i}^{(i)}}({\varepsilon _{i}})>x)}{\mathbb{P}(\varepsilon >x)}={\sum \limits_{i=0}^{n}}{m_{i}^{\alpha }},\end{aligned}\]
hence, letting $n\to \infty $, we obtain
(14)
\[ \underset{x\to \infty }{\liminf }\frac{\mathbb{P}({X_{0}}>x)}{\mathbb{P}(\varepsilon >x)}\geqslant {\sum \limits_{i=0}^{\infty }}{m_{i}^{\alpha }}.\]
Moreover, for each $n\in \mathbb{N}$ and $q\in (0,1)$, we have
\[\begin{aligned}{}& \underset{x\to \infty }{\limsup }\frac{\mathbb{P}({X_{0}}>x)}{\mathbb{P}(\varepsilon >x)}=\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=0}^{n-1}}{V_{i}^{(i)}}({\varepsilon _{i}})+{\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>x\big)}{\mathbb{P}(\varepsilon >x)}\\ {} & \leqslant \underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=0}^{n-1}}{V_{i}^{(i)}}({\varepsilon _{i}})>(1-q)x\big)+\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>qx\big)}{\mathbb{P}(\varepsilon >x)}\\ {} & \leqslant {L_{1,n}}(q)+{L_{2,n}}(q)\end{aligned}\]
with
\[\begin{aligned}{}& {L_{1,n}}(q):=\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=0}^{n-1}}{V_{i}^{(i)}}({\varepsilon _{i}})>(1-q)x\big)}{\mathbb{P}(\varepsilon >x)},\\ {} & {L_{2,n}}(q):=\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>qx\big)}{\mathbb{P}(\varepsilon >x)}.\end{aligned}\]
Since ε is regularly varying with index α, by (13), we obtain
\[\begin{aligned}{}{L_{1,n}}(q)& =\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=0}^{n-1}}{V_{i}^{(i)}}({\varepsilon _{i}})>(1-q)x\big)}{\mathbb{P}(\varepsilon >(1-q)x)}\cdot \frac{\mathbb{P}(\varepsilon >(1-q)x)}{\mathbb{P}(\varepsilon >x)}\\ {} & ={(1-q)^{-\alpha }}{\sum \limits_{i=0}^{n-1}}{m_{i}^{\alpha }}\end{aligned}\]
and
\[\begin{aligned}{}{L_{2,n}}(q)& =\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>qx\big)}{\mathbb{P}(\varepsilon >qx)}\cdot \frac{\mathbb{P}(\varepsilon >qx)}{\mathbb{P}(\varepsilon >x)}\\ {} & ={q^{-\alpha }}\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>qx\big)}{\mathbb{P}(\varepsilon >qx)},\end{aligned}\]
and hence
\[\begin{aligned}{}\underset{n\to \infty }{\lim }{L_{1,n}}(q)& ={(1-q)^{-\alpha }}{\sum \limits_{i=0}^{\infty }}{m_{i}^{\alpha }},\\ {} \underset{n\to \infty }{\lim }{L_{2,n}}(q)& ={q^{-\alpha }}\underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>qx\big)}{\mathbb{P}(\varepsilon >qx)}.\end{aligned}\]
The aim of the following discussion is to show
(15)
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>x\big)}{\mathbb{P}(\varepsilon >x)}=0.\]
First, we consider the case $\alpha \in (0,1)$. For each $x\in (0,\infty )$, $n\in \mathbb{N}$ and $\delta \in (0,1)$, we have
\[\begin{aligned}{}& \mathbb{P}\Bigg({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>x\Bigg)\\ {} & =\mathbb{P}\Bigg(\sum \limits_{i\geqslant n}{V_{i}^{(i)}}({\varepsilon _{i}})>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{i}}{\varepsilon _{i}}>(1-\delta )x\Bigg)\\ {} & \phantom{=\hspace{0.2778em}}+\mathbb{P}\Bigg(\sum \limits_{i\geqslant n}{V_{i}^{(i)}}({\varepsilon _{i}})>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{i}}{\varepsilon _{i}}\leqslant (1-\delta )x\Bigg)\\ {} & =\mathbb{P}\Bigg(\sum \limits_{i\geqslant n}{V_{i}^{(i)}}({\varepsilon _{i}})>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{i}}{\varepsilon _{i}}>(1-\delta )x\Bigg)\\ {} & \hspace{1em}+\mathbb{P}\Bigg(\sum \limits_{i\geqslant n}{V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{i}}{\varepsilon _{i}}\leqslant (1-\delta )x\Bigg)\\ {} & \leqslant \mathbb{P}\bigg(\underset{i\geqslant n}{\sup }{\varrho ^{i}}{\varepsilon _{i}}>(1-\delta )x\bigg)+\mathbb{P}\Bigg(\sum \limits_{i\geqslant n}{V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}>x\Bigg)\\ {} & =:{P_{1,n}}(x,\delta )+{P_{2,n}}(x,\delta ),\end{aligned}\]
where $\varrho ={\lambda _{+}}$. By subadditivity of probability,
\[ {P_{1,n}}(x,\delta )\leqslant \sum \limits_{i\geqslant n}\mathbb{P}({\varrho ^{i}}{\varepsilon _{i}}>(1-\delta )x)=\sum \limits_{i\geqslant n}\mathbb{P}(\varepsilon >(1-\delta ){\varrho ^{-i}}x).\]
Using Potter's upper bound (see Lemma A.6), for $\delta \in (0,\frac{\alpha }{2})$, there exists ${x_{0}}\in (0,\infty )$ such that
(16)
\[ \frac{\mathbb{P}(\varepsilon >(1-\delta ){\varrho ^{-i}}x)}{\mathbb{P}(\varepsilon >x)}<(1+\delta ){[(1-\delta ){\varrho ^{-i}}]^{-\alpha +\delta }}<(1+\delta ){[(1-\delta ){\varrho ^{-i}}]^{-\frac{\alpha }{2}}}\]
if $x\in [{x_{0}},\infty )$ and $(1-\delta ){\varrho ^{-i}}\in [1,\infty )$, which holds for sufficiently large $i\in \mathbb{N}$ due to $\varrho \in (0,1)$. Consequently, if $\delta \in (0,\frac{\alpha }{2})$, then
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{{P_{1,n}}(x,\delta )}{\mathbb{P}(\varepsilon >x)}\leqslant \underset{n\to \infty }{\lim }\sum \limits_{i\geqslant n}(1+\delta ){[(1-\delta ){\varrho ^{-i}}]^{-\frac{\alpha }{2}}}=0,\]
since ${\varrho ^{\frac{\alpha }{2}}}<1$ (due to $\varrho \in (0,1)$) yields ${\textstyle\sum _{i=0}^{\infty }}{({\varrho ^{-i}})^{-\alpha /2}}<\infty $. Now we turn to prove that ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}\frac{{P_{2,n}}(x,\delta )}{\mathbb{P}(\varepsilon >x)}=0$. By Markov's inequality,
\[ {P_{2,n}}(x,\delta )\leqslant \frac{1}{x}\sum \limits_{i\geqslant n}\mathbb{E}\big({V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\big).\]
By the representation ${V_{i}^{(i)}}({\varepsilon _{i}})\stackrel{\mathcal{D}}{=}{\textstyle\sum _{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}$, we have
\[\begin{aligned}{}& \mathbb{E}\big({V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\big)=\mathbb{E}\Bigg({\sum \limits_{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}{1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\Bigg)\\ {} & \hspace{1em}=\mathbb{E}\Bigg[\mathbb{E}\Bigg({\sum \limits_{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}{1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\hspace{0.1667em}\Bigg|\hspace{0.1667em}{\varepsilon _{i}}\Bigg)\Bigg]=\mathbb{E}\Bigg({\sum \limits_{j=1}^{{\varepsilon _{i}}}}\mathbb{E}({\zeta _{1,0}^{(i)}}){1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\Bigg)\\ {} & \hspace{1em}=\mathbb{E}({\zeta _{1,0}^{(i)}})\mathbb{E}\big({\varepsilon _{i}}{1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\big),\end{aligned}\]
since $\{{\zeta _{j,0}^{(i)}}:j\in \mathbb{N}\}$ and ${\varepsilon _{i}}$ are independent. Moreover,
\[\begin{aligned}{}& \mathbb{E}\big({\varepsilon _{i}}{1_{\{{\varepsilon _{i}}\leqslant (1-\delta ){\varrho ^{-i}}x\}}}\big)=\mathbb{E}\big(\varepsilon {1_{\{\varepsilon \leqslant (1-\delta ){\varrho ^{-i}}x\}}}\big)={\int _{0}^{\infty }}\mathbb{P}\big(\varepsilon {1_{\{\varepsilon \leqslant (1-\delta ){\varrho ^{-i}}x\}}}>t\big)\hspace{0.1667em}\mathrm{d}t\\ {} & \hspace{1em}={\int _{0}^{(1-\delta ){\varrho ^{-i}}x}}\mathbb{P}(t<\varepsilon \leqslant (1-\delta ){\varrho ^{-i}}x)\hspace{0.1667em}\mathrm{d}t\leqslant {\int _{0}^{(1-\delta ){\varrho ^{-i}}x}}\mathbb{P}(\varepsilon >t)\hspace{0.1667em}\mathrm{d}t.\end{aligned}\]
By Karamata’s theorem (see Theorem A.1), we have
\[ \underset{y\to \infty }{\lim }\frac{{\textstyle\textstyle\int _{0}^{y}}\mathbb{P}(\varepsilon >t)\hspace{0.1667em}\mathrm{d}t}{y\mathbb{P}(\varepsilon >y)}=\frac{1}{1-\alpha },\]
thus there exists ${y_{0}}\in (0,\infty )$ such that
\[ {\int _{0}^{y}}\mathbb{P}(\varepsilon >t)\hspace{0.1667em}\mathrm{d}t\leqslant \frac{2y\mathbb{P}(\varepsilon >y)}{1-\alpha },\hspace{2em}y\in [{y_{0}},\infty ),\]
hence
\[ {\int _{0}^{(1-\delta ){\varrho ^{-i}}x}}\mathbb{P}(\varepsilon >t)\hspace{0.1667em}\mathrm{d}t\leqslant \frac{2(1-\delta ){\varrho ^{-i}}x\mathbb{P}(\varepsilon >(1-\delta ){\varrho ^{-i}}x)}{1-\alpha }\]
whenever $(1-\delta ){\varrho ^{-i}}x\in [{y_{0}},\infty )$, which holds for $i\geqslant n$ with sufficiently large $n\in \mathbb{N}$ and $x\in [{(1-\delta )^{-1}}{\varrho ^{n}}{y_{0}},\infty )$ due to $\varrho \in (0,1)$. Thus, for sufficiently large $n\in \mathbb{N}$ and $x\in [{(1-\delta )^{-1}}{\varrho ^{n}}{y_{0}},\infty )$, we obtain
\[\begin{aligned}{}\frac{{P_{2,n}}(x,\delta )}{\mathbb{P}(\varepsilon >x)}& \leqslant \frac{1}{x\mathbb{P}(\varepsilon >x)}\sum \limits_{i\geqslant n}\mathbb{E}({\zeta _{1,0}^{(i)}}){\int _{0}^{(1-\delta ){\varrho ^{-i}}x}}\mathbb{P}(\varepsilon >t)\hspace{0.1667em}\mathrm{d}t\\ {} & \leqslant \frac{2(1-\delta )}{1-\alpha }\sum \limits_{i\geqslant n}\frac{\mathbb{P}(\varepsilon >(1-\delta ){\varrho ^{-i}}x)}{\mathbb{P}(\varepsilon >x)},\end{aligned}\]
since $\mathbb{E}({\zeta _{1,0}^{(i)}})\leqslant {\varrho ^{i}}$, $i\in {\mathbb{Z}_{+}}$, by (10) and ${\zeta _{1,0}^{(0)}}=1$. Using (16), we get
\[ \frac{{P_{2,n}}(x,\delta )}{\mathbb{P}(\varepsilon >x)}\leqslant \frac{2(1-\delta )}{1-\alpha }\sum \limits_{i\geqslant n}(1+\delta ){[(1-\delta ){\varrho ^{-i}}]^{-\frac{\alpha }{2}}}\]
for $\delta \in (0,\frac{\alpha }{2})$, for sufficiently large $n\in \mathbb{N}$ and for all $x\in [\max ({x_{0}},{(1-\delta )^{-1}}{\varrho ^{n}}{y_{0}}),\infty )$. Hence for $\delta \in (0,\frac{\alpha }{2})$ we have
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{{P_{2,n}}(x,\delta )}{\mathbb{P}(\varepsilon >x)}\leqslant \underset{n\to \infty }{\lim }\frac{2(1-{\delta ^{2}})}{1-\alpha }\sum \limits_{i\geqslant n}{[(1-\delta ){\varrho ^{-i}}]^{-\frac{\alpha }{2}}}=0,\]
where the last step follows from the fact that the series ${\textstyle\sum _{i=0}^{\infty }}{({\varrho ^{i}})^{\frac{\alpha }{2}}}$ is convergent, since $\varrho \in (0,1)$.
Consequently, due to the fact that $\mathbb{P}({\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\hspace{0.1667em}>\hspace{0.1667em}x)\hspace{0.1667em}\leqslant \hspace{0.1667em}{P_{1,n}}(x,\delta )+{P_{2,n}}(x,\delta )$, $x\in (0,\infty )$, $n\in \mathbb{N}$, $\delta \in (0,1)$, we obtain (15), and we conclude ${\lim \nolimits_{n\to \infty }}{L_{2,n}}(q)=0$ for all $q\in (0,1)$. Thus we obtain
\[ \underset{x\to \infty }{\limsup }\frac{\mathbb{P}({X_{0}}>x)}{\mathbb{P}(\varepsilon >x)}\leqslant \underset{n\to \infty }{\lim }{L_{1,n}}(q)+\underset{n\to \infty }{\lim }{L_{2,n}}(q)={(1-q)^{-\alpha }}{\sum \limits_{i=0}^{\infty }}{m_{i}^{\alpha }}\]
for all $q\in (0,1)$. Letting $q\downarrow 0$, this yields
\[ \underset{x\to \infty }{\limsup }\frac{\mathbb{P}({X_{0}}>x)}{\mathbb{P}(\varepsilon >x)}\leqslant {\sum \limits_{i=0}^{\infty }}{m_{i}^{\alpha }}.\]
Taking into account (14), the proof of Theorem 2 is complete in case of $\alpha \in (0,1)$.
Next, we consider the case $\alpha \in [1,2)$. Note that (15) is equivalent to
\[\begin{aligned}{}& \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})>\sqrt{x}\big)}{\mathbb{P}(\varepsilon >\sqrt{x})}\\ {} & \hspace{2em}=\underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{\mathbb{P}\big({\big({\textstyle\textstyle\sum _{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\big)^{2}}>x\big)}{\mathbb{P}({\varepsilon ^{2}}>x)}=0.\end{aligned}\]
Repeating a similar argument as for $\alpha \in (0,1)$, we obtain
\[\begin{aligned}{}& \mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\right)^{2}}>x\Bigg)\\ {} & =\mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\right)^{2}}>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{2i}}{\varepsilon _{i}^{2}}>(1-\delta )x\Bigg)\\ {} & \phantom{\hspace{1em}}+\mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\right)^{2}}>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{2i}}{\varepsilon _{i}^{2}}\leqslant (1-\delta )x\Bigg)\\ {} & =\mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}})\right)^{2}}>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{2i}}{\varepsilon _{i}^{2}}>(1-\delta )x\Bigg)\\ {} & \hspace{1em}+\mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\right)^{2}}>x,\hspace{0.2778em}\underset{i\geqslant n}{\sup }{\varrho ^{2i}}{\varepsilon _{i}^{2}}\leqslant (1-\delta )x\Bigg)\\ {} & \leqslant \mathbb{P}\bigg(\underset{i\geqslant n}{\sup }{\varrho ^{2i}}{\varepsilon _{i}^{2}}>(1-\delta )x\bigg)+\mathbb{P}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\right)^{2}}>x\Bigg)\\ {} & =:{P_{1,n}}(x,\delta )+{P_{2,n}}(x,\delta )\end{aligned}\]
for each $x\in (0,\infty )$, $n\in \mathbb{N}$ and $\delta \in (0,1)$. By the subadditivity of probability,
\[ {P_{1,n}}(x,\delta )\leqslant {\sum \limits_{i=n}^{\infty }}\mathbb{P}({\varrho ^{2i}}{\varepsilon _{i}^{2}}>(1-\delta )x)={\sum \limits_{i=n}^{\infty }}\mathbb{P}({\varepsilon ^{2}}>(1-\delta ){\varrho ^{-2i}}x)\]
for each $x\in (0,\infty )$, $n\in \mathbb{N}$ and $\delta \in (0,1)$. Since ${\varepsilon ^{2}}$ is regularly varying with index $\frac{\alpha }{2}$ (see Lemma A.1), using Potter's upper bound (see Lemma A.6) for $\delta \in \big(0,\frac{\alpha }{4}\big)$, there exists ${x_{0}}\in (0,\infty )$ such that
(17)
\[ \frac{\mathbb{P}({\varepsilon ^{2}}>(1-\delta ){\varrho ^{-2i}}x)}{\mathbb{P}({\varepsilon ^{2}}>x)}<(1+\delta ){[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{2}+\delta }}<(1+\delta ){[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{4}}}\]
if $x\in [{x_{0}},\infty )$ and $(1-\delta ){\varrho ^{-2i}}\in [1,\infty )$, which holds for sufficiently large $i\in \mathbb{N}$ (due to $\varrho \in (0,1)$). Consequently, if $\delta \in (0,\frac{\alpha }{4})$, then
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }\frac{{P_{1,n}}(x,\delta )}{\mathbb{P}({\varepsilon ^{2}}>x)}\leqslant \underset{n\to \infty }{\lim }{\sum \limits_{i=n}^{\infty }}(1+\delta ){[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{4}}}=0,\]
since ${\varrho ^{\frac{\alpha }{2}}}<1$ (due to $\varrho \in (0,1)$). By Markov’s inequality, for $x\in (0,\infty )$, $n\in \mathbb{N}$ and $\delta \in (0,1)$, we have
\[\begin{aligned}{}& \frac{{P_{2,n}}(x,\delta )}{\mathbb{P}({\varepsilon ^{2}}>x)}\leqslant \frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}\mathbb{E}\Bigg({\left({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\right)^{2}}\Bigg)\\ {} & =\frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}\mathbb{E}\Bigg({\sum \limits_{i=n}^{\infty }}{V_{i}^{(i)}}{({\varepsilon _{i}})^{2}}{1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\Bigg)\\ {} & \hspace{1em}+\frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}\mathbb{E}\Bigg({\sum \limits_{i,j=n,\hspace{0.2778em}i\ne j}^{\infty }}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{V_{i}^{(i)}}({\varepsilon _{i}}){V_{j}^{(j)}}({\varepsilon _{j}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}{1_{\{{\varepsilon _{j}^{2}}\leqslant (1-\delta ){\varrho ^{-2j}}x\}}}\hspace{-0.1667em}\hspace{-0.1667em}\Bigg)\\ {} & =:{J_{2,1,n}}(x,\delta )+{J_{2,2,n}}(x,\delta )\end{aligned}\]
By Lemma 2, (9) and (10) with ${X_{0}}=1$ and ${X_{-1}}=0$, we have
\[\begin{aligned}{}\mathbb{E}({V_{i}^{(i)}}{(n)^{2}})& =\mathbb{E}\left({\left({\sum \limits_{j=1}^{n}}{\zeta _{j,0}^{(i)}}\right)^{2}}\right)={\sum \limits_{j=1}^{n}}\mathbb{E}\big({({\zeta _{j,0}^{(i)}})^{2}}\big)+{\sum \limits_{j,\ell =1,\hspace{0.1667em}j\ne \ell }^{n}}\mathbb{E}\big({\zeta _{j,0}^{(i)}}\big)\mathbb{E}\big({\zeta _{\ell ,0}^{(i)}}\big)\\ {} & \leqslant {c_{\mathrm{sub}}}{\sum \limits_{j=1}^{n}}{\varrho ^{i}}+{\sum \limits_{j,\ell =1,\hspace{0.1667em}j\ne \ell }^{n}}{\varrho ^{i}}{\varrho ^{i}}\leqslant {c_{\mathrm{sub}}}n{\varrho ^{i}}+({n^{2}}-n){\varrho ^{2i}}\\ {} & \leqslant {c_{\mathrm{sub}}}{\varrho ^{i}}n+{\varrho ^{2i}}{n^{2}}\end{aligned}\]
for $i,n\in \mathbb{N}$. Hence, using that $({\varepsilon _{i}},{V_{i}^{(i)}}({\varepsilon _{i}}))\stackrel{\mathcal{D}}{=}\big({\varepsilon _{i}},{\textstyle\sum _{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}\big)$ and that ${\varepsilon _{i}}$ and $\{{\zeta _{j,0}^{(i)}}:j\in \mathbb{N}\}$ are independent, we have
\[\begin{aligned}{}{J_{2,1,n}}(x,\delta )& ={\sum \limits_{i=n}^{\infty }}\frac{\mathbb{E}\big({V_{i}^{(i)}}{({\varepsilon _{i}})^{2}}{1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\big)}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & ={\sum \limits_{i=n}^{\infty }}\frac{\mathbb{E}\big({\big({\textstyle\textstyle\sum _{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}\big)^{2}}{1_{\{{\varepsilon _{i}}\leqslant {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\}}}\big)}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & ={\sum \limits_{i=n}^{\infty }}\frac{{\textstyle\sum _{0\leqslant \ell \leqslant {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}}}\mathbb{E}\Big({\big({\textstyle\textstyle\sum _{j=1}^{\ell }}{\zeta _{j,0}^{(i)}}\big)^{2}}\Big)\mathbb{P}({\varepsilon _{i}}=\ell )}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & \leqslant {\sum \limits_{i=n}^{\infty }}\frac{{\textstyle\sum _{0\leqslant \ell \leqslant {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}}}\left({c_{\mathrm{sub}}}{\varrho ^{i}}\ell +{\varrho ^{2i}}{\ell ^{2}}\right)\mathbb{P}(\varepsilon =\ell )}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & ={\sum \limits_{i=n}^{\infty }}{c_{\mathrm{sub}}}{\varrho ^{i}}\frac{\mathbb{E}(\varepsilon {1_{\{{\varepsilon ^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & \phantom{=\hspace{0.2778em}}+{\sum \limits_{i=n}^{\infty }}{\varrho ^{2i}}\frac{\mathbb{E}({\varepsilon ^{2}}{1_{\{{\varepsilon ^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & =:{J_{2,1,1,n}}(x,\delta )+{J_{2,1,2,n}}(x,\delta ).\end{aligned}\]
Since ${\varepsilon ^{2}}$ is regularly varying with index $\frac{\alpha }{2}\in [\frac{1}{2},1)$ (see Lemma A.1), by Karamata’s theorem (see Theorem A.1), we have
\[ \underset{y\to \infty }{\lim }\frac{{\textstyle\textstyle\int _{0}^{y}}\mathbb{P}({\varepsilon ^{2}}>t)\hspace{0.1667em}\mathrm{d}t}{y\mathbb{P}({\varepsilon ^{2}}>y)}=\frac{1}{1-\frac{\alpha }{2}},\]
thus there exists ${y_{0}}\in (0,\infty )$ such that
\[ {\int _{0}^{y}}\mathbb{P}({\varepsilon ^{2}}>t)\hspace{0.1667em}\mathrm{d}t\leqslant \frac{2y\mathbb{P}({\varepsilon ^{2}}>y)}{1-\frac{\alpha }{2}},\hspace{2em}y\in [{y_{0}},\infty ),\]
hence
\[\begin{aligned}{}\mathbb{E}({\varepsilon ^{2}}{1_{\{{\varepsilon ^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})& ={\int _{0}^{\infty }}\mathbb{P}({\varepsilon ^{2}}{1_{\{{\varepsilon ^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}>y)\hspace{0.1667em}\mathrm{d}y\\ {} & ={\int _{0}^{(1-\delta ){\varrho ^{-2i}}x}}\mathbb{P}(y<{\varepsilon ^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x)\hspace{0.1667em}\mathrm{d}y\\ {} & \leqslant {\int _{0}^{(1-\delta ){\varrho ^{-2i}}x}}\mathbb{P}({\varepsilon ^{2}}>y)\hspace{0.1667em}\mathrm{d}y\\ {} & \leqslant \frac{2(1-\delta ){\varrho ^{-2i}}x\mathbb{P}({\varepsilon ^{2}}>(1-\delta ){\varrho ^{-2i}}x)}{1-\frac{\alpha }{2}}\end{aligned}\]
whenever $(1-\delta ){\varrho ^{-2i}}x\in [{y_{0}},\infty )$; since $\varrho \in (0,1)$, this holds for every $i\geqslant n$ provided that $x\in [{(1-\delta )^{-1}}{\varrho ^{2n}}{y_{0}},\infty )$. Thus for $\delta \in (0,\frac{\alpha }{4})$, for sufficiently large $n\in \mathbb{N}$ (satisfying $(1-\delta ){\varrho ^{-2n}}\in (1,\infty )$ as well) and for all $x\in [\max ({x_{0}},{(1-\delta )^{-1}}{\varrho ^{2n}}{y_{0}}),\infty )$, using (17), we obtain
\[\begin{aligned}{}{J_{2,1,2,n}}(x,\delta )& \leqslant \frac{2(1-\delta )}{1-\frac{\alpha }{2}}{\sum \limits_{i=n}^{\infty }}\frac{\mathbb{P}({\varepsilon ^{2}}>(1-\delta ){\varrho ^{-2i}}x)}{\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & \leqslant \frac{2(1-\delta )}{1-\frac{\alpha }{2}}{\sum \limits_{i=n}^{\infty }}(1+\delta ){[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{4}}}\\ {} & =\frac{2(1-{\delta ^{2}})}{1-\frac{\alpha }{2}}{\sum \limits_{i=n}^{\infty }}{[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{4}}}.\end{aligned}\]
Hence for $\delta \in (0,\frac{\alpha }{4})$, we have
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }{J_{2,1,2,n}}(x,\delta )\leqslant \frac{2(1-{\delta ^{2}})}{1-\frac{\alpha }{2}}\underset{n\to \infty }{\lim }{\sum \limits_{i=n}^{\infty }}{[(1-\delta ){\varrho ^{-2i}}]^{-\frac{\alpha }{4}}}=\frac{2(1-{\delta ^{2}})}{{(1-\delta )^{\frac{\alpha }{4}}}\big(1-\frac{\alpha }{2}\big)}\underset{n\to \infty }{\lim }\frac{{\varrho ^{\frac{n\alpha }{2}}}}{1-{\varrho ^{\frac{\alpha }{2}}}}=0,\]
yielding ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,1,2,n}}(x,\delta )=0$ for $\delta \in (0,\frac{\alpha }{4})$. Further, if $\alpha \in (1,2)$, or $\alpha =1$ and ${m_{\varepsilon }}<\infty $, we have
\[ {J_{2,1,1,n}}(x,\delta )\leqslant {c_{\mathrm{sub}}}{\sum \limits_{i=n}^{\infty }}{\varrho ^{i}}\frac{{m_{\varepsilon }}}{x\mathbb{P}({\varepsilon ^{2}}>x)},\]
and hence, using that ${\lim \nolimits_{x\to \infty }}x\mathbb{P}({\varepsilon ^{2}}>x)=\infty $ (see Lemma A.2),
\[ \underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }{J_{2,1,1,n}}(x,\delta )\leqslant {c_{\mathrm{sub}}}{m_{\varepsilon }}\underset{n\to \infty }{\lim }\Bigg({\sum \limits_{i=n}^{\infty }}{\varrho ^{i}}\Bigg)\underset{x\to \infty }{\limsup }\frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}=0,\]
yielding ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,1,1,n}}(x,\delta )=0$ for $\delta \in (0,1)$. If $\alpha =1$ and ${m_{\varepsilon }}=\infty $, then we have
\[ {J_{2,1,1,n}}(x,\delta )={\sum \limits_{i=n}^{\infty }}{c_{\mathrm{sub}}}{\varrho ^{i}}\frac{\mathbb{E}\big(\varepsilon {1_{\{\varepsilon \leqslant {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\}}}\big)}{x\mathbb{P}({\varepsilon ^{2}}>x)}\]
for $x\in (0,\infty )$, $n\in \mathbb{N}$ and $\delta \in (0,1)$. Note that
\[\begin{aligned}{}\mathbb{E}(\varepsilon {1_{\{\varepsilon \leqslant y\}}})& \leqslant {\int _{0}^{\infty }}\mathbb{P}(\varepsilon {1_{\{\varepsilon \leqslant y\}}}>t)\hspace{0.1667em}\mathrm{d}t={\int _{0}^{y}}\mathbb{P}(t<\varepsilon \leqslant y)\hspace{0.1667em}\mathrm{d}t\\ {} & \leqslant {\int _{0}^{y}}\mathbb{P}(t<\varepsilon )\hspace{0.1667em}\mathrm{d}t=:\widetilde{L}(y)\end{aligned}\]
for $y\in {\mathbb{R}_{+}}$. Since $\alpha =1$, Proposition 1.5.9a in Bingham et al. [6] yields that $\widetilde{L}$ is a slowly varying function (at infinity). By Potter's bounds (see Lemma A.6), for every $\delta \in (0,\infty )$, there exists ${z_{0}}\in (0,\infty )$ such that
\[ \widetilde{L}(y)\leqslant (1+\delta ){\left(\frac{y}{z}\right)^{\delta }}\widetilde{L}(z)\]
for $z\geqslant {z_{0}}$ and $y\geqslant z$. Hence, for $x\geqslant {z_{0}^{2}}$, we have
\[ \mathbb{E}\big(\varepsilon {1_{\{\varepsilon \leqslant {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\}}}\big)\leqslant \widetilde{L}\big({(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\big)\leqslant \widetilde{L}({\varrho ^{-i}}{x^{\frac{1}{2}}})\leqslant (1+\delta ){\varrho ^{-i\delta }}\widetilde{L}({x^{\frac{1}{2}}})\]
for $i\geqslant n$, where we applied the Potter bound with $z={x^{\frac{1}{2}}}$ and $y={\varrho ^{-i}}{x^{\frac{1}{2}}}$, and also used that $\widetilde{L}$ is monotone increasing. Using this, we conclude that for every $\delta \in (0,1)$, there exists ${z_{0}}\in (0,\infty )$ such that for $x\geqslant {z_{0}^{2}}$, we have
\[ {J_{2,1,1,n}}(x,\delta )\leqslant (1+\delta ){c_{\mathrm{sub}}}\frac{\widetilde{L}({x^{\frac{1}{2}}})}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i=n}^{\infty }}{\varrho ^{(1-\delta )i}}.\]
Here, since $\varrho \in (0,1)$ and $\delta \in (0,1)$, we have ${\lim \nolimits_{n\to \infty }}{\textstyle\sum _{i=n}^{\infty }}{\varrho ^{(1-\delta )i}}=0$, and
\[ \frac{\widetilde{L}(\sqrt{x})}{x\mathbb{P}({\varepsilon ^{2}}>x)}=\frac{\widetilde{L}(\sqrt{x})}{{x^{1/4}}}\cdot \frac{1}{{x^{3/4}}\mathbb{P}(\varepsilon >\sqrt{x})}\to 0\hspace{2em}\text{as}\hspace{5pt}x\to \infty \text{,}\]
by Lemma A.2, since $\widetilde{L}$ is slowly varying and the function $(0,\infty )\ni x\mapsto \mathbb{P}(\varepsilon >\sqrt{x})$ is regularly varying with index $-1/2$. Hence ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,1,1,n}}(x,\delta )=0$ for $\delta \in (0,1)$ in the case $\alpha =1$ and ${m_{\varepsilon }}=\infty $ as well. Consequently, we have ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,1,n}}(x,\delta )=0$ for $\delta \in (0,\frac{\alpha }{4})$.
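For completeness, we spell out the regular variation argument used above. Writing $\ell (y):=y\mathbb{P}(\varepsilon >y)$ for $y\in (0,\infty )$ (a notation introduced only for this verification), the function $\ell $ is slowly varying, since $\alpha =1$, and hence
\[ {x^{\frac{3}{4}}}\mathbb{P}(\varepsilon >\sqrt{x})={x^{\frac{3}{4}}}{x^{-\frac{1}{2}}}\ell (\sqrt{x})={x^{\frac{1}{4}}}\ell (\sqrt{x})\to \infty \hspace{2em}\text{as}\hspace{5pt}x\to \infty ,\]
while $\widetilde{L}(\sqrt{x})/{x^{\frac{1}{4}}}\to 0$ as $x\to \infty $, since $x\mapsto \widetilde{L}(\sqrt{x})$ is slowly varying as well.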
Next, we prove that ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,2,n}}(x,\delta )=0$ for $\delta \in (0,1)$. Using that the random vectors $\{({\varepsilon _{i}},{V_{i}^{(i)}}({\varepsilon _{i}})):i\in \mathbb{N}\}$ are independent, we have
\[\begin{aligned}{}{J_{2,2,n}}(x,\delta )\leqslant \frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i,j=n,\hspace{0.1667em}i\ne j}^{\infty }}& \mathbb{E}({V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})\\ {} & \hspace{0.1667em}\times \mathbb{E}({V_{j}^{(j)}}({\varepsilon _{j}}){1_{\{{\varepsilon _{j}^{2}}\leqslant (1-\delta ){\varrho ^{-2j}}x\}}}).\end{aligned}\]
Here, using that $\left({\varepsilon _{i}},{V_{i}^{(i)}}({\varepsilon _{i}})\right)\stackrel{\mathcal{D}}{=}\big({\varepsilon _{i}},{\textstyle\sum _{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}\big)$, where ${\varepsilon _{i}}$ and $\{{\zeta _{j,0}^{(i)}}:j\in \mathbb{N}\}$ are independent, and (10) with ${X_{0}}=1$ and ${X_{-1}}=0$, we have
\[\begin{aligned}{}& \mathbb{E}({V_{i}^{(i)}}({\varepsilon _{i}}){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})=\mathbb{E}\left(\left({\sum \limits_{j=1}^{{\varepsilon _{i}}}}{\zeta _{j,0}^{(i)}}\right){1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}}\right)\\ {} & ={\sum \limits_{\ell =0}^{\lfloor {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\rfloor }}\mathbb{E}\left({\sum \limits_{j=1}^{\ell }}{\zeta _{j,0}^{(i)}}\right)\mathbb{P}({\varepsilon _{i}}=\ell )\leqslant {\sum \limits_{\ell =0}^{\lfloor {(1-\delta )^{\frac{1}{2}}}{\varrho ^{-i}}{x^{\frac{1}{2}}}\rfloor }}\ell {\varrho ^{i}}\mathbb{P}({\varepsilon _{i}}=\ell )\\ {} & ={\varrho ^{i}}\mathbb{E}({\varepsilon _{i}}{1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})\end{aligned}\]
for $x\in (0,\infty )$ and $\delta \in (0,1)$. If $\alpha \in (1,2)$, or $\alpha =1$ and ${m_{\varepsilon }}<\infty $, then
\[\begin{aligned}{}& {J_{2,2,n}}(x,\delta )\\ {} & \leqslant \frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i,j=n,\hspace{0.1667em}i\ne j}^{\infty }}{\varrho ^{i+j}}\mathbb{E}({\varepsilon _{i}}{1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})\mathbb{E}({\varepsilon _{j}}{1_{\{{\varepsilon _{j}^{2}}\leqslant (1-\delta ){\varrho ^{-2j}}x\}}})\\ {} & \leqslant \frac{{m_{\varepsilon }^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i,j=n,\hspace{0.1667em}i\ne j}^{\infty }}{\varrho ^{i+j}}\leqslant \frac{{m_{\varepsilon }^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\left({\sum \limits_{i=n}^{\infty }}{\varrho ^{i}}\right)^{2}}\end{aligned}\]
for $x\in (0,\infty )$ and $\delta \in (0,1)$, and then, by Lemma A.2,
\[\begin{aligned}{}\underset{n\to \infty }{\lim }\underset{x\to \infty }{\limsup }{J_{2,2,n}}(x,\delta )& \leqslant {m_{\varepsilon }^{2}}\underset{n\to \infty }{\lim }{\left({\sum \limits_{i=n}^{\infty }}{\varrho ^{i}}\right)^{2}}\underset{x\to \infty }{\limsup }\frac{1}{x\mathbb{P}({\varepsilon ^{2}}>x)}\\ {} & ={m_{\varepsilon }^{2}}\left(\underset{n\to \infty }{\lim }\frac{{\varrho ^{2n}}}{{(1-\varrho )^{2}}}\right)\cdot 0=0,\end{aligned}\]
yielding that ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,2,n}}(x,\delta )=0$. If $\alpha =1$ and ${m_{\varepsilon }}=\infty $, then we can apply the same argument as for ${J_{2,1,1,n}}(x,\delta )$: by the Potter bound above, $\mathbb{E}({\varepsilon _{i}}{1_{\{{\varepsilon _{i}^{2}}\leqslant (1-\delta ){\varrho ^{-2i}}x\}}})\leqslant (1+\delta ){\varrho ^{-i\delta }}\widetilde{L}({x^{\frac{1}{2}}})$ for $i\geqslant n$ and $x\geqslant {z_{0}^{2}}$, and hence
\[\begin{aligned}{}{J_{2,2,n}}(x,\delta )& \leqslant \frac{{(1+\delta )^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i,j=n,\hspace{0.1667em}i\ne j}^{\infty }}{\varrho ^{(1-\delta )(i+j)}}{(\widetilde{L}({x^{\frac{1}{2}}}))^{2}}\\ {} & \leqslant {(1+\delta )^{2}}\frac{{(\widetilde{L}({x^{\frac{1}{2}}}))^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\sum \limits_{i,j=n,\hspace{0.1667em}i\ne j}^{\infty }}{\varrho ^{(1-\delta )(i+j)}}\\ {} & ={(1+\delta )^{2}}\frac{{(\widetilde{L}({x^{\frac{1}{2}}}))^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}{\left({\sum \limits_{i=n}^{\infty }}{\varrho ^{(1-\delta )i}}\right)^{2}}\end{aligned}\]
for $x\geqslant {z_{0}^{2}}$ and $\delta \in (0,1)$, where
\[ \frac{{(\widetilde{L}({x^{\frac{1}{2}}}))^{2}}}{x\mathbb{P}({\varepsilon ^{2}}>x)}={\left(\frac{\widetilde{L}({x^{\frac{1}{2}}})}{{x^{\frac{1}{8}}}}\right)^{2}}\frac{1}{{x^{\frac{3}{4}}}\mathbb{P}(\varepsilon >\sqrt{x})}\to 0\hspace{2em}\text{as}\hspace{5pt}x\to \infty \text{,}\]
yielding that ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}{J_{2,2,n}}(x,\delta )=0$ for $\delta \in (0,1)$ in the case $\alpha =1$ and ${m_{\varepsilon }}=\infty $ as well. Consequently, ${\lim \nolimits_{n\to \infty }}{\limsup _{x\to \infty }}\frac{{P_{2,n}}(x,\delta )}{\mathbb{P}({\varepsilon ^{2}}>x)}=0$ for $\delta \in (0,\frac{\alpha }{4})$, yielding (15) in the case $\alpha \in [1,2)$ as well, and we conclude ${\lim \nolimits_{n\to \infty }}{L_{2,n}}(q)=0$ for all $q\in (0,1)$. The proof can be finished as in the case $\alpha \in (0,1)$. □
Remark 2.
The statement of Theorem 2 remains true in the case ${m_{\xi }}\in (0,1)$ and ${m_{\eta }}=0$, in which case we recover the corresponding statement for classical Galton–Watson processes; see Theorem 2.1.1 in Basrak et al. [4] or Theorem 1. Note, however, that this is not a special case of Theorem 2, since the mean matrix ${\boldsymbol{M}_{\xi ,\eta }}$ is then not primitive.
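For illustration, if ${\boldsymbol{M}_{\xi ,\eta }}$ is taken in the companion form $\big(\begin{smallmatrix}{m_{\xi }}& {m_{\eta }}\\ 1& 0\end{smallmatrix}\big)$, then ${m_{\eta }}=0$ yields
\[ {\boldsymbol{M}_{\xi ,\eta }}=\begin{pmatrix}{m_{\xi }}& 0\\ 1& 0\end{pmatrix},\hspace{2em}{\boldsymbol{M}_{\xi ,\eta }^{k}}=\begin{pmatrix}{m_{\xi }^{k}}& 0\\ {m_{\xi }^{k-1}}& 0\end{pmatrix},\hspace{1em}k\in \mathbb{N},\]
so the second column of every power of ${\boldsymbol{M}_{\xi ,\eta }}$ vanishes, and hence no power of ${\boldsymbol{M}_{\xi ,\eta }}$ can have all entries positive. □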