1 Introduction and main result
Let $(X_{k},\xi _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of a pair $(X,\xi )$ where X is a random process with paths in $D[0,\infty )$ and ξ is a positive random variable. We impose no conditions on the dependence structure of $(X,\xi )$. Hereafter $\mathbb{N}_{0}$ denotes the set of non-negative integers $\{0,1,2,\dots \}$.
Let $(S_{n})_{n\in \mathbb{N}_{0}}$ be a standard zero-delayed random walk:
(1)
\[S_{0}:=0,\hspace{1em}S_{n}:=\xi _{1}+\xi _{2}+\cdots +\xi _{n},\hspace{1em}n\in \mathbb{N},\]
and let $(\nu (t))_{t\in \mathbb{R}}$ be the corresponding first-passage time process for $(S_{n})_{n\in \mathbb{N}_{0}}$:
\[\nu (t):=\inf \{n\in \mathbb{N}_{0}:S_{n}>t\},\hspace{1em}t\in \mathbb{R}.\]
The random process with immigration $Y=(Y(u))_{u\in \mathbb{R}}$ is defined as the a.s. finite sum
\[Y(u):=\sum \limits_{k\ge 0}X_{k+1}(u-S_{k})\mathbb{1}_{\{S_{k}\le u\}}=\sum \limits_{k=0}^{\nu (u)-1}X_{k+1}(u-S_{k}),\hspace{1em}u\in \mathbb{R}.\]
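To get a feel for this definition, it can be simulated directly. The following minimal sketch (not from the paper) takes Pareto-distributed ξ, so that $\mathbb{P}\{\xi >t\}={t}^{-\alpha }$ for $t\ge 1$, and the toy response $X(t)=\mathbb{1}_{\{\eta >t\}}$ with an exponential η drawn independently of ξ; all of these distributional choices are illustrative assumptions (the paper allows arbitrary dependence between X and ξ):

```python
import random

def sample_Y(u, alpha=0.5, seed=0):
    """One sample of Y(u) = sum_{k >= 0} X_{k+1}(u - S_k) 1{S_k <= u}
    for the toy response X(t) = 1{eta > t}; all distributional choices
    here are illustrative assumptions, not taken from the paper."""
    rng = random.Random(seed)
    y, s = 0, 0.0
    while s <= u:                       # exactly nu(u) terms contribute
        eta = rng.expovariate(1.0)      # response datum eta_{k+1}
        if eta > u - s:                 # X_{k+1}(u - S_k) = 1{eta_{k+1} > u - S_k}
            y += 1
        s += rng.paretovariate(alpha)   # step xi_{k+1}: P{xi > t} = t^{-alpha}, t >= 1
    return y
```

Note that the loop runs for exactly $\nu (u)$ steps, matching the second expression for $Y(u)$ above.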
This family of random processes was introduced in [11] as a generalization of several known objects in applied probability, including branching processes with immigration (in case X is a branching process) and renewal shot noise processes (in case $X(t)=h(t)$ a.s. for some $h\in D[0,\infty )$). The process X is usually called a response process, or a response function if $X(t)=h(t)$ a.s. for some deterministic function h.

The problem of weak convergence of random processes with immigration was addressed in [11, 12, 16], where the authors give a more or less complete picture of the weak convergence of finite-dimensional distributions of $(Y(ut))_{u\ge 0}$ or $(Y(u+t))_{u\in \mathbb{R}}$, as $t\to \infty $. The case of renewal shot noise processes has received much attention in recent years, see [6, 9, 10, 14]. A comprehensive survey of the subject is given in Chapter 3 of the recent book [7].
A much more delicate question, that of weak convergence of Y in functional spaces, has, to the best of our knowledge, only been investigated either for particular response processes, or in the simple case when ξ is exponentially distributed. In the latter situation Y is called a Poisson shot noise process. In the list below η is a random variable which satisfies certain assumptions specified in the corresponding papers:
• if ξ has exponential distribution and either $X(t)=\mathbb{1}_{\{\eta >t\}}$ or $X(t)=t\wedge \eta $, functional limit theorems for Y were derived in [18];
• if $X(t)=\mathbb{1}_{\{\eta >t\}}$ and $\mathbb{E}\xi <\infty $, a functional limit theorem for Y was established in [8];
• if ξ has exponential distribution and $X(t)=\eta f(t)$ for some deterministic function f, limit theorems for Y were obtained in [15].
In this paper we treat the case where ξ is heavy-tailed; more precisely, we assume that
(2)
\[\mathbb{P}\{\xi >t\}={t}^{-\alpha }\ell _{\xi }(t),\hspace{1em}t>0,\]
for some $\ell _{\xi }$ slowly varying at infinity and some $\alpha \in (0,1)$. Assuming (2), we obtain a functional limit theorem for a quite general class of response processes. The class of such processes can be described by a common property: they do not “oscillate too much” around the mean $\mathbb{E}[X(t)]$, which itself varies regularly with index $\rho >-\alpha $. Let us briefly outline our approach, based on ideas borrowed from [11]. Put $h(t):=\mathbb{E}[X(t)]$ and write
(3)
\[Y(t)=\sum \limits_{k\ge 0}\big(X_{k+1}(t-S_{k})-h(t-S_{k})\big)\mathbb{1}_{\{S_{k}\le t\}}+\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}.\]
We investigate the two summands on the right-hand side separately. The second summand is a standard renewal shot noise process with response function h. Under condition (2) and assuming that
(4)
\[h(t)={t}^{\rho }\ell _{h}(t),\hspace{1em}t>0,\]
for some $\rho \in \mathbb{R}$ and a slowly varying function $\ell _{h}$, it was proved in [10, Theorem 2.9] and [14, Theorem 2.1] that
(5)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}h(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\stackrel{\mathrm{f}.\mathrm{d}.}{\Longrightarrow }\big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
where $J_{\alpha ,\rho }=(J_{\alpha ,\rho }(u))_{u\ge 0}$ is a so-called fractionally integrated inverse α-stable subordinator. The process $J_{\alpha ,\rho }$ is defined as the pathwise Lebesgue–Stieltjes integral
(6)
\[J_{\alpha ,\rho }(u)=\int _{[0,\hspace{0.1667em}u]}{(u-y)}^{\rho }\mathrm{d}{W_{\alpha }^{\gets }}(y),\hspace{1em}u\ge 0.\]
In this formula ${W_{\alpha }^{\gets }}(y):=\inf \{t\ge 0:W_{\alpha }(t)>y\}$, $y\ge 0$, is a generalized inverse of an α-stable subordinator $(W_{\alpha }(t))_{t\ge 0}$ with the Laplace exponent
\[-\log \mathbb{E}\big[{e}^{-sW_{\alpha }(1)}\big]=\Gamma (1-\alpha ){s}^{\alpha },\hspace{1em}s\ge 0.\]
It is also known that the convergence of finite-dimensional distributions in (5) can be strengthened to convergence in the Skorokhod space $D(0,\infty )$ endowed with the $J_{1}$-topology if $\rho >-\alpha $, see Theorem 2.1 in [14]. If $\rho \le -\alpha $, the process $(J_{\alpha ,\rho }(u))_{u\ge 0}$, being a.s. finite for every fixed $u\ge 0$, has a.s. locally unbounded trajectories, see Proposition 2.5 in [14].

Turning to the first summand in (3), we note that it is the a.s. limit of the martingale $(R(j,t),\mathcal{F}_{j})_{j\in \mathbb{N}}$, where $\mathcal{F}_{j}:=\sigma ((X_{k},\xi _{k}):1\le k\le j)$ and
\[R(j,t):=\sum \limits_{k=0}^{j-1}\big(X_{k+1}(t-S_{k})-h(t-S_{k})\big)\mathbb{1}_{\{S_{k}\le t\}},\hspace{1em}j\in \mathbb{N}.\]
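The pre-limit functional in (5) is easy to simulate, which gives a crude numerical approximation of $J_{\alpha ,\rho }$. Below is a sketch under illustrative assumptions not taken from the paper: $h(t)={t}^{\rho }$ exactly and Pareto-distributed ξ, so that $\mathbb{P}\{\xi >t\}={t}^{-\alpha }$ for $t\ge 1$:

```python
import random

def approx_J(u, alpha=0.5, rho=0.25, t=1e6, seed=1):
    """One sample of the pre-limit quantity in (5),
    (P{xi > t}/h(t)) * sum_k h(ut - S_k) 1{S_k <= ut},
    with h(t) = t**rho; for large t its law approximates that of J_{alpha,rho}(u)."""
    rng = random.Random(seed)
    s, acc = 0.0, 0.0
    while s <= u * t:
        acc += (u * t - s) ** rho       # h(ut - S_k)
        s += rng.paretovariate(alpha)   # step xi_{k+1}
    return t ** (-alpha) * acc / t ** rho
```

For $\rho >0$ the sample paths produced this way are nondecreasing in u, in agreement with the integral representation of $J_{\alpha ,\rho }$.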
Applying the martingale central limit theory it is possible to show that under appropriate assumptions (which are of no importance for this paper)
\[\bigg(\sqrt{\frac{\mathbb{P}\{\xi >t\}}{v(t)}}\sum \limits_{k\ge 0}\big(X_{k+1}(ut-S_{k})-h(ut-S_{k})\big)\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\stackrel{\mathrm{f}.\mathrm{d}.}{\Longrightarrow }\big(Z(u)\big)_{u>0},\]
as $t\to \infty $, for a non-trivial process Z, where $v(t):=\mathbb{E}[{(X(t)-h(t))}^{2}]$ is the variance of $X(t)$, see Proposition 2.2 in [11].

We are interested in situations when the second summand in (3) asymptotically dominates; more precisely, we are looking for conditions ensuring
(7)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,\hspace{0.1667em}T]}{\sup }\bigg|\sum \limits_{k\ge 0}\big(X_{k+1}(ut-S_{k})-h(ut-S_{k})\big)\mathbb{1}_{\{S_{k}\le ut\}}\bigg|\stackrel{\mathbb{P}}{\to }0,\hspace{1em}t\to \infty ,\]
for every fixed $T>0$. From what has been mentioned above it is clear that this can happen only if
(8)
\[\frac{v(t)\mathbb{P}\{\xi >t\}}{{h}^{2}(t)}\to 0,\hspace{1em}t\to \infty .\]
Restricting our attention to the case where v is regularly varying with index $\beta \in \mathbb{R}$, i.e.
(9)
\[v(t)={t}^{\beta }\ell _{v}(t),\hspace{1em}t>0,\]
for some $\ell _{v}$ slowly varying at infinity, we see that (8) holds if $\beta <\alpha +2\rho $ and fails if $\beta >\alpha +2\rho $. As long as we do not make any assumptions on distributional or path-wise properties of X, such as, e.g., monotonicity, self-similarity or independence of increments, it can hardly be expected that condition (8) alone is sufficient for (7). Nevertheless, we will show that (7) holds true under additional assumptions on the asymptotic behavior of the higher centered moments $\mathbb{E}[{(X(t)-h(t))}^{2l}]$, $l=1,2,\dots $, and an additional technical assumption. Denote by $(\widehat{X}(t))_{t\ge 0}$ the centered process $(X(t)-h(t))_{t\ge 0}$. Our first main result treats the case where the moments of the normalized process $(\widehat{X}(t)/\sqrt{v(t)})_{t\ge 0}$ are bounded uniformly in $t\ge 0$.

Theorem 1.
Assume that for all $t\ge 0$ and $l\in \mathbb{N}$ we have $\mathbb{E}[|X(t){|}^{l}]<\infty $. Further, assume that the following conditions are fulfilled:
(A4) there exist $\delta >0$ and $\varepsilon >0$ such that for every $l\in \mathbb{N}$ there is a constant $C_{l}>0$ with
(11)
\[\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]\le C_{l}{t}^{l(\rho -\varepsilon )},\hspace{1em}t\ge 0.\]
Then, as $t\to \infty $,
(12)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}X_{k+1}(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.

Our second main result is mainly applicable when the process X is almost surely bounded by a (deterministic) constant. We have the following theorem.
Theorem 2.
Assume that for all $t\ge 0$ and $l\in \mathbb{N}$ we have $\mathbb{E}|X(t){|}^{l}<\infty $ and that conditions (A1), (A2) of Theorem 1 are valid. Further, suppose that for every $l\in \mathbb{N}$ there exists a constant $C_{l}>0$ such that
(13)
\[\mathbb{E}\big[\widehat{X}{(t)}^{2l}\big]=\mathbb{E}\big[{\big(X(t)-h(t)\big)}^{2l}\big]\le C_{l}h(t),\hspace{1em}t\ge 0,\]
and that for some $\delta >0$ the function $t\mapsto \mathbb{E}[\sup _{y\in [0,\delta )}|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}{|}^{l}]$ is either directly Riemann integrable, or locally bounded and such that
(14)
\[\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]=O\big(\mathbb{P}\{\xi >t\}\big),\hspace{1em}t\to \infty .\]
Then (12) holds.

Obviously, our results are far from being optimal and leave a lot of space for improvements, yet they are applicable to several models given in the next section.
2 Applications
2.1 The number of busy servers in a $G/G/\infty $ queue
Consider a $G/G/\infty $ queue with customers arriving at times $0=S_{0}<S_{1}<S_{2}<\cdots \hspace{0.1667em}$. Upon arrival each customer is immediately served by one of infinitely many idle servers, the service time of the kth customer being $\eta _{k}$, a copy of a positive random variable η. Put $X(t):=\mathbb{1}_{\{\eta >t\}}$; then the random process with immigration
\[Y(u)=\sum \limits_{k\ge 0}\mathbb{1}_{\{S_{k}\le u<S_{k}+\eta _{k+1}\}},\hspace{1em}u\ge 0,\]
represents the number of busy servers at time $u\ge 0$. The process $(Y(u))_{u\ge 0}$ may also be interpreted as the difference between the numbers of visits to $[0,u]$ of the standard random walk $(S_{k})_{k\ge 0}$ and of the perturbed random walk $(S_{k}+\eta _{k+1})_{k\ge 0}$, see [2], or as the number of active sources in a communication network, see [17, 18]. An introduction to renewal theory for perturbed random walks can be found in [7].
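A short simulation illustrates this interpretation and verifies, pathwise, that the number of busy servers equals the difference of the two counting processes. Pareto inter-arrival times and exponential service times are purely illustrative choices, not requirements of the model:

```python
import random

def busy_servers(u, n=200, seed=2):
    """Number of busy servers at time u; also checks that it equals the number
    of visits of (S_k) to [0, u] minus the number of visits of the perturbed
    walk (S_k + eta_{k+1}) to [0, u].  Distributions are illustrative only."""
    rng = random.Random(seed)
    s, jobs = 0.0, []
    for _ in range(n):                  # first n customers (enough for moderate u)
        jobs.append((s, s + rng.expovariate(1.0)))   # (arrival, departure)
        s += rng.paretovariate(0.5)
    busy = sum(1 for a, d in jobs if a <= u < d)
    visits_walk = sum(1 for a, _ in jobs if a <= u)
    visits_perturbed = sum(1 for _, d in jobs if d <= u)
    assert busy == visits_walk - visits_perturbed    # 1{a<=u<d} = 1{a<=u} - 1{d<=u}
    return busy
```

The internal assertion encodes the elementary identity $\mathbb{1}_{\{a\le u<d\}}=\mathbb{1}_{\{a\le u\}}-\mathbb{1}_{\{d\le u\}}$, valid whenever $a<d$.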
Assume that (2) holds and
(15)
\[\mathbb{P}\{\eta >t\}={t}^{\rho }\ell _{\eta }(t),\hspace{1em}t>0,\]
for some $\rho \in (-\alpha ,0]$ and $\ell _{\eta }$ slowly varying at infinity. Note that
\[h(t)=\mathbb{E}[X(t)]=\mathbb{P}\{\eta >t\}.\]
Moreover, for every $l\in \mathbb{N}$ and every $\delta >0$,
\[\mathbb{E}\big[\widehat{X}{(t)}^{2l}\big]=\mathbb{P}\{\eta >t\}\mathbb{P}\{\eta \le t\}\big({\mathbb{P}}^{2l-1}\{\eta >t\}+{\mathbb{P}}^{2l-1}\{\eta \le t\}\big)\le h(t)\]
and
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}& \displaystyle \Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]\le {2}^{l-1}\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|\Big]\\{} & \displaystyle \le {2}^{l}\mathbb{P}\{\eta >t\}\mathbb{1}_{\{t\le \delta \}}+{2}^{l}\big(\mathbb{P}\{\eta >t-\delta \}-\mathbb{P}\{\eta >t\}\big)\mathbb{1}_{\{t>\delta \}}.\end{array}\]
The function on the right-hand side is directly Riemann integrable. Indeed, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \hspace{-5.69054pt}\sum \limits_{n\ge 1}\underset{\delta n\le y\le \delta (n+1)}{\sup }\big(\mathbb{P}\{\eta >y-\delta \}-\mathbb{P}\{\eta >y\}\big)\\{} & \displaystyle \le \sum \limits_{n\ge 1}\big(\mathbb{P}\big\{\eta >(n-1)\delta \big\}-\mathbb{P}\big\{\eta >(n+1)\delta \big\}\big)=\mathbb{P}\{\eta >0\}+\mathbb{P}\{\eta >\delta \}\le 2,\end{array}\]
and the claim follows from the remark after the definition of direct Riemann integrability given on p. 362 in [5].

From Theorem 2 we obtain the following result, complementing Theorem 1.2 in [8], which treats the case $\mathbb{E}\xi <\infty $.
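The telescoping bound above is easy to check numerically. A small sketch, taking exponentially distributed η as an illustrative choice:

```python
import math

def telescoping_sum(delta=0.5, n_terms=10000):
    """Partial sum of sum_{n>=1} (P{eta > (n-1)delta} - P{eta > (n+1)delta})
    for eta ~ Exp(1); it telescopes to P{eta > 0} + P{eta > delta} <= 2."""
    tail = lambda t: math.exp(-t) if t > 0 else 1.0   # P{eta > t}
    return sum(tail((n - 1) * delta) - tail((n + 1) * delta)
               for n in range(1, n_terms + 1))
```

For $\delta =0.5$ the partial sums converge to $1+{e}^{-1/2}\approx 1.61$, comfortably below the bound 2.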
Proposition 1.
Assume that $(\xi ,\eta )$ is a random vector with positive components such that (2) and (15) hold for $\alpha \in (0,1)$ and $\rho \in (-\alpha ,0]$, respectively. Let $(\xi _{k},\eta _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of $(\xi ,\eta )$ and $(S_{k})_{k\in \mathbb{N}_{0}}$ be a random walk defined by (1). Then
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{\mathbb{P}\{\eta >t\}}\sum \limits_{k\ge 0}\mathbb{1}_{\{S_{k}\le ut<S_{k}+\eta _{k+1}\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.
2.2 Shot noise processes with a random amplitude
Assume that $X(t)=\eta f(t)$, where η is a non-degenerate random variable and $f:[0,\infty )\to \mathbb{R}$ is a fixed càdlàg function. The corresponding random process with immigration
\[Y(u)=\sum \limits_{k\ge 0}\eta _{k+1}f(u-S_{k})\mathbb{1}_{\{S_{k}\le u\}},\hspace{1em}u\ge 0,\]
where $(\eta _{k})_{k\in \mathbb{N}}$ is a sequence of independent copies of η, may be interpreted as a renewal shot noise process in which the common response function f is scaled at a shot $S_{k}$ by a random factor $\eta _{k+1}$. In case where $(\xi _{k})_{k\in \mathbb{N}}$ have exponential distribution and are independent of $(\eta _{k})_{k\in \mathbb{N}}$ such processes were used in mathematical finance as a model of stock prices with long-range dependence in asset returns, see [15].
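Such a process is straightforward to simulate. The sketch below uses Pareto inter-arrival times, lognormal amplitudes and $f(t)={t}^{1/4}$; all three are assumptions made only for this example:

```python
import random

def amplitude_shot_noise(u, alpha=0.5, seed=3, f=lambda t: t ** 0.25):
    """One sample of Y(u) = sum_k eta_{k+1} f(u - S_k) 1{S_k <= u}:
    the response f is rescaled at each shot S_k by a random amplitude."""
    rng = random.Random(seed)
    s, total = 0.0, 0.0
    while s <= u:
        total += rng.lognormvariate(0.0, 1.0) * f(u - s)  # eta_{k+1} f(u - S_k)
        s += rng.paretovariate(alpha)                     # xi_{k+1}
    return total
```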
Note that if $\mathbb{E}|\eta {|}^{l}<\infty $ for all $l\in \mathbb{N}$, then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle h(t)=(\mathbb{E}\eta )f(t),\hspace{2em}v(t)=\mathrm{Var}(\eta ){f}^{2}(t),\\{} & \displaystyle \mathbb{E}\big[{\big(X(t)-h(t)\big)}^{2l}\big]=\mathbb{E}\big[{(\eta -\mathbb{E}\eta )}^{2l}\big]{f}^{2l}(t)\le C_{l}{v}^{l}(t),\hspace{1em}l\in \mathbb{N},\end{array}\]
for some $C_{l}>0$. Assume now that f varies regularly with index $\rho >-\alpha $ and additionally satisfies
(16)
\[\underset{y\in [0,\delta )}{\sup }\big|f(t)-f(t-y)\big|=O\big({t}^{\rho -\varepsilon }\big),\hspace{1em}t\to \infty ,\]
for some $\delta >0$ and $\varepsilon >0$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]& \displaystyle =\mathbb{E}|\eta -\mathbb{E}\eta {|}^{l}\underset{y\in [0,\delta )}{\sup }{\big|f(t)-f(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\\{} & \displaystyle =O\big({t}^{l(\rho -\varepsilon )}\big),\hspace{1em}t\to \infty .\end{array}\]
Hence, all assumptions of Theorem 1 hold (if $\mathbb{E}\eta <0$, Theorem 1 is applicable to the process $-X$), and we have the following result.

Proposition 2.
Assume that $\mathbb{E}|\eta {|}^{l}<\infty $ for all $l\in \mathbb{N}$, $\mathbb{E}\eta \ne 0$ and (2) holds. If $f:[0,\infty )\to \mathbb{R}$ satisfies
\[f(t)={t}^{\rho }\ell _{f}(t),\hspace{1em}t>0,\]
for some $\rho >-\alpha $ and $\ell _{f}$ slowly varying at infinity, and (16) holds, then
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{f(t)\mathbb{E}\eta }\sum \limits_{k\ge 0}\eta _{k+1}f(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.
This result complements the convergence of finite-dimensional distributions provided by Example 3.3 in [11].
Remark 2.
In general, condition (16) might not hold for a function f which is regularly varying with index $\rho \in \mathbb{R}$. Take, for example,
Then, f is regularly varying with index $\rho =0$, but for every $\delta >0$ and large $n\in \mathbb{N}$ we have
\[\underset{y\in [0,\delta )}{\sup }\big|f(2n)-f(2n-y)\big|\ge \underset{y\in [0,\delta \wedge 1)}{\sup }\big|f(2n)-f(2n-y)\big|\ge \frac{2}{\log (2n)}.\]
Hence, (16) does not hold. On the other hand, if f is differentiable with an eventually monotone derivative ${f^{\prime }}$, then (16) holds by the mean value theorem and Theorem 1.7.2 in [3].

3 Proof of Theorems 1 and 2
The proofs of Theorems 1 and 2 rely on the same ideas, so we will prove them simultaneously. Pick $\delta >0$ such that all assumptions of Theorem 1 or Theorem 2 hold. This $\delta >0$ remains fixed throughout the proof.
In view of assumptions (A1) and (A2) and the fact that h is càdlàg, we infer from Theorem 2.1 in [14] that
(17)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}h(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology. Note that in Theorem 2.1 of [14] h is assumed to be monotone (or eventually monotone). However, this assumption is redundant. The only places which have to be adjusted in the proofs are the two displays on p. 90, where $h(0)$ should be replaced by $\sup _{y\in [0,c]}h(y)$.

Hence, from (3) we see that it is enough to check, for every fixed $T>0$, that
(18)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\stackrel{\mathbb{P}}{\to }0,\hspace{1em}t\to \infty ,\]
where $\widetilde{Y}(t):=\sum _{k\ge 0}(X_{k+1}(t-S_{k})-h(t-S_{k}))\mathbb{1}_{\{S_{k}\le t\}}$ for $t\ge 0$. Moreover, it suffices to show that
(19)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\big|\widetilde{Y}(t)\big|\stackrel{a.s.}{\to }0,\hspace{1em}t\to \infty .\]
Indeed, for every fixed $s>0$,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\\{} & \displaystyle \hspace{1em}\le \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,s]}{\sup }\big|\widetilde{Y}(u)\big|+\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [s,Tt]}{\sup }\big|\widetilde{Y}(u)\big|\\{} & \displaystyle \hspace{1em}\le \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,s]}{\sup }\big|\widetilde{Y}(u)\big|+\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [s,Tt]}{\sup }\frac{h(u)}{\mathbb{P}\{\xi >u\}}\underset{u\in [s,Tt]}{\sup }\bigg|\frac{\mathbb{P}\{\xi >u\}}{h(u)}\widetilde{Y}(u)\bigg|.\end{array}\]
Since $t\mapsto h(t)/\mathbb{P}\{\xi >t\}$ is regularly varying with positive index $\rho +\alpha $,
\[\underset{u\in [s,Tt]}{\sup }\frac{h(u)}{\mathbb{P}\{\xi >u\}}\sim \frac{h(Tt)}{\mathbb{P}\{\xi >Tt\}}\sim {T}^{\rho +\alpha }\frac{h(t)}{\mathbb{P}\{\xi >t\}},\hspace{1em}t\to \infty .\]
Sending $t\to \infty $ we obtain, for every fixed $s>0$,
\[\underset{t\to \infty }{\limsup }\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\le {T}^{\rho +\alpha }\underset{u\in [s,\infty )}{\sup }\bigg|\frac{\mathbb{P}\{\xi >u\}}{h(u)}\widetilde{Y}(u)\bigg|.\]
Sending now $s\to \infty $ shows that (19) implies (18).

Let us first check that (19) holds along the arithmetic sequence $(n\delta )_{n\in \mathbb{N}}$. According to the Borel–Cantelli lemma and Markov’s inequality, it suffices to check that for some $l\in \mathbb{N}$
(20)
\[\sum \limits_{n=1}^{\infty }{\bigg(\frac{\mathbb{P}\{\xi >n\delta \}}{h(\delta n)}\bigg)}^{2l}\mathbb{E}\big[\widetilde{Y}{(\delta n)}^{2l}\big]<\infty .\]
To check (20) we apply the Burkholder–Davis–Gundy inequality, in the form given in Theorem 11.3.2 of [4], to obtain
(21)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]& \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}\mathbb{E}\big({\widehat{X}_{k+1}^{2}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}|\mathcal{F}_{k}\big)\bigg)}^{l}\bigg]\\{} & \displaystyle \hspace{1em}+K_{l}\mathbb{E}\Big[\underset{k\ge 0}{\sup }\big({\widehat{X}_{k+1}^{2l}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\big)\Big],\end{array}\]
for some constant $K_{l}>0$, where we recall the notation $\mathcal{F}_{k}=\sigma ((X_{j},\xi _{j}):1\le j\le k)$.

Proof of (20) under assumptions of Theorem 1.
Using assumption (A4) we infer from (21):
(22)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}& \displaystyle \big[\widetilde{Y}{(t)}^{2l}\big]\\{} & \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}{\widehat{X}_{k+1}^{2l}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]\\{} & \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}C_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg].\end{array}\]
If $\beta \ge 0$, then $t\mapsto {v}^{l}(t)$ varies regularly with non-negative index $l\beta $. Therefore, Lemma 1(i) yields
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=O\bigg(\frac{{v}^{l}(t)}{\mathbb{P}\{\xi >t\}}\bigg),\hspace{1em}t\to \infty .\]
If $\beta \in (-\alpha ,0)$, pick $l\in \mathbb{N}$ such that $l\beta <-\alpha $. Then ${v}^{l}(t)=O(\mathbb{P}\{\xi >t\})$, as $t\to \infty $, and Lemma 1(iii) yields
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=O(1),\hspace{1em}t\to \infty .\]
Hence, in any case
(23)
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=O\bigg(\frac{{v}^{l}(t)}{\mathbb{P}\{\xi >t\}}\bigg)+O(1),\hspace{1em}t\to \infty .\]
To bound the first summand in (22), apply Lemma 1(i) to obtain
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=O\bigg({\bigg(\frac{v(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Combining this estimate with (23), we see that (20) holds if we pick $l>{(2\rho +\alpha -\beta )}^{-1}$. This proves (20) under the assumptions of Theorem 1. □

Proof of (20) under assumptions of Theorem 2.
From (21) and using (13) we have
\[\mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]\le K_{l}{C_{1}^{l}}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}C_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg].\]
Lemma 1(i) gives us the estimate
\[\mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]=O\bigg({\bigg(\frac{h(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Therefore, (20) holds if we choose $l\in \mathbb{N}$ such that $l(\alpha +\rho )>1$. This proves (20) under the assumptions of Theorem 2. □

It remains to show that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{\mathbb{P}\{\xi >n\delta \}}{h(n\delta )}\underset{t\in [n\delta ,(n+1)\delta )}{\sup }\bigg|& \displaystyle \sum \limits_{k\ge 0}\big(\widehat{X}_{k+1}\big((n+1)\delta -S_{k}\big)\mathbb{1}_{\{S_{k}\le (n+1)\delta \}}\\{} & \displaystyle -\widehat{X}_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\big)\bigg|\stackrel{a.s.}{\to }0,\end{array}\]
as $n\to \infty $, which in turn is an obvious consequence of the regular variation of $t\mapsto \mathbb{P}\{\xi >t\}/h(t)$ and
(24)
\[\frac{\mathbb{P}\{\xi >n\}}{h(n)}\sum \limits_{k\ge 0}V_{k+1}(n\delta -S_{k})\mathbb{1}_{\{S_{k}\le n\delta \}}\stackrel{a.s.}{\to }0,\hspace{1em}n\to \infty ,\]
where $V_{k+1}(t):=\sup _{y\in [0,\delta )}|\widehat{X}_{k+1}(t)-\widehat{X}_{k+1}(t-y)\mathbb{1}_{\{y\le t\}}|$.

Proof of (24) under assumptions of Theorem 1.
Applying Lemma 2(i) with $b(t)={t}^{\rho -\varepsilon }$ and appropriate $\varepsilon >0$ we obtain from (A5) that
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}V_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=O\bigg({\bigg(\frac{{t}^{\rho -\varepsilon }}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Hence (24) holds in view of the Borel–Cantelli lemma and Markov’s inequality, since
\[\sum \limits_{n=1}^{\infty }\mathbb{P}\bigg\{\frac{\mathbb{P}\{\xi >n\}}{h(n)}\sum \limits_{k\ge 0}V_{k+1}(n\delta -S_{k})\mathbb{1}_{\{S_{k}\le n\delta \}}>\varepsilon \bigg\}\le \widehat{C}\sum \limits_{n=1}^{\infty }{\bigg(\frac{{n}^{\rho -\varepsilon }}{h(n)}\bigg)}^{l}<\infty ,\]
for all $l\in \mathbb{N}$ such that $\varepsilon l>1$ and some $\widehat{C}=\widehat{C}_{l}>0$. □

Proof of (24) under assumptions of Theorem 2.
If the function
\[t\mapsto \mathbb{E}\Big[{\Big(\underset{y\in [0,\delta )}{\sup }\big|\widehat{X}_{k+1}(t)-\widehat{X}_{k+1}(t-y)\mathbb{1}_{\{y\le t\}}\big|\Big)}^{l}\Big]\]
is directly Riemann integrable, then
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}V_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=o(1),\hspace{1em}t\to \infty \]
by Lemma 2(ii). Hence (24) holds by the same reasoning as above, after applying the Borel–Cantelli lemma. If (14) holds, then the last centered formula also holds with $O(1)$ on the right-hand side by Lemma 2(iii), whence (24). This finishes the proofs of Theorems 1 and 2. □