Modern Stochastics: Theory and Applications
A functional limit theorem for random processes with immigration in the case of heavy tails
Volume 4, Issue 2 (2017), pp. 93–108
Alexander Marynych   Glib Verovkin  

https://doi.org/10.15559/17-VMSTA76
Pub. online: 12 April 2017     Type: Research Article     Open Access

Received: 31 January 2017
Revised: 26 March 2017
Accepted: 27 March 2017
Published: 12 April 2017

Abstract

Let $(X_{k},\xi _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of a pair $(X,\xi )$ where X is a random process with paths in the Skorokhod space $D[0,\infty )$ and ξ is a positive random variable. The random process with immigration $(Y(u))_{u\in \mathbb{R}}$ is defined as the a.s. finite sum $Y(u)=\sum _{k\ge 0}X_{k+1}(u-\xi _{1}-\cdots -\xi _{k})\mathbb{1}_{\{\xi _{1}+\cdots +\xi _{k}\le u\}}$. We obtain a functional limit theorem for the process $(Y(ut))_{u\ge 0}$, as $t\to \infty $, when the law of ξ belongs to the domain of attraction of an α-stable law with $\alpha \in (0,1)$, and the process X oscillates moderately around its mean $\mathbb{E}[X(t)]$. In this situation the process $(Y(ut))_{u\ge 0}$, when scaled appropriately, converges weakly in the Skorokhod space $D(0,\infty )$ to a fractionally integrated inverse stable subordinator.

1 Introduction and main result

Let $(X_{k},\xi _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of a pair $(X,\xi )$ where X is a random process with paths in $D[0,\infty )$ and ξ is a positive random variable. We impose no conditions on the dependence structure of $(X,\xi )$. Hereafter $\mathbb{N}_{0}$ denotes the set of non-negative integers $\{0,1,2,\dots \}$.
Let $(S_{n})_{n\in \mathbb{N}_{0}}$ be a standard zero-delayed random walk:
(1)
\[S_{0}:=0,\hspace{2em}S_{n}:=\xi _{1}+\cdots +\xi _{n},\hspace{1em}n\in \mathbb{N},\]
and let $(\nu (t))_{t\in \mathbb{R}}$ be the corresponding first-passage time process for $(S_{n})_{n\in \mathbb{N}_{0}}$:
\[\nu (t):=\inf \{k\in \mathbb{N}_{0}:S_{k}>t\},\hspace{1em}t\in \mathbb{R}.\]
The random process with immigration $Y=(Y(u))_{u\in \mathbb{R}}$ is defined as a finite sum
\[Y(u):=\sum \limits_{k\ge 0}X_{k+1}(u-S_{k})\mathbb{1}_{\{S_{k}\le u\}}=\sum \limits_{k=0}^{\nu (u)-1}X_{k+1}(u-S_{k}),\hspace{1em}u\in \mathbb{R}.\]
This family of random processes was introduced in [11] as a generalization of several known objects in applied probability, including branching processes with immigration (in the case where X is a branching process) and renewal shot noise processes (in the case where $X(t)=h(t)$ a.s. for some $h\in D[0,\infty )$). The process X is usually called a response process, or a response function if $X(t)=h(t)$ a.s. for some deterministic function h.
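To make the definition concrete, here is a minimal simulation sketch (an editorial illustration, not part of the original paper). It evaluates one path of Y at a few time points, assuming, purely for illustration, Pareto inter-arrival times ξ (one heavy-tailed choice, matching the regime studied below) and the hypothetical response $X(t)=\eta {e}^{-t}$ with a lognormal amplitude η; any càdlàg response paired with a positive ξ could be plugged in instead.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA = 0.7  # hypothetical tail index of xi, in (0, 1)

def sample_pair():
    """One independent copy of (X, xi): a response function and an inter-arrival time."""
    xi = rng.pareto(ALPHA) + 1.0                    # P{xi > t} = t^{-ALPHA} for t >= 1
    eta = rng.lognormal()                           # random amplitude attached to X
    return (lambda t, eta=eta: eta * np.exp(-t)), xi

def Y(u, n_max=10**6):
    """Y(u) = sum_{k >= 0} X_{k+1}(u - S_k) 1{S_k <= u}, evaluated on one sample path."""
    total, S = 0.0, 0.0                             # S plays the role of S_k
    for _ in range(n_max):
        if S > u:                                   # only finitely many summands contribute
            break
        X, xi = sample_pair()
        total += X(u - S)                           # X_{k+1}(u - S_k)
        S += xi                                     # S_{k+1} = S_k + xi_{k+1}
    return total

print([Y(u) for u in (1.0, 10.0, 100.0)])
```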
The problem of weak convergence of random processes with immigration was addressed in [11, 12, 16], where the authors give a more or less complete picture of the weak convergence of the finite-dimensional distributions of $(Y(ut))_{u\ge 0}$ or $(Y(u+t))_{u\in \mathbb{R}}$, as $t\to \infty $. The case of renewal shot noise processes has received much attention in recent years; see [6, 9, 10, 14]. A comprehensive survey of the subject is given in Chapter 3 of the recent book [7].
A much more delicate question of weak convergence of Y in functional spaces, to the best of our knowledge, was only investigated either for particular response processes, or in the simple case when ξ is exponentially distributed. In the latter situation Y is called a Poisson shot noise process. In the list below η is a random variable which satisfies certain assumptions specified in the corresponding papers:
  • if ξ has exponential distribution and either $X(t)=\mathbb{1}_{\{\eta >t\}}$ or $X(t)=t\wedge \eta $, functional limit theorems for Y were derived in [18];
  • if $X(t)=\mathbb{1}_{\{\eta >t\}}$ and $\mathbb{E}\xi <\infty $, a functional limit theorem for Y was established in [8];
  • if $X(t)=\mathbb{1}_{\{\eta \le t\}}$, functional limit theorems for Y are given in [1];
  • if ξ has exponential distribution and $X(t)=\eta f(t)$ for some deterministic function f, limit theorems for Y were obtained in [15];
  • in [12, 16] sufficient conditions for weak convergence of $(Y(u+t))_{u\in \mathbb{R}}$ to a stationary process with immigration were found.
In this paper we treat the case where ξ is heavy-tailed; more precisely, we assume that
(2)
\[\mathbb{P}\{\xi >t\}\sim {t}^{-\alpha }\ell _{\xi }(t),\hspace{1em}t\to \infty ,\]
for some $\ell _{\xi }$ slowly varying at infinity and $\alpha \in (0,1)$. Assuming (2), we obtain a functional limit theorem for a quite general class of response processes. The processes in this class share a common property: they do not “oscillate too much” around the mean $\mathbb{E}[X(t)]$, which itself varies regularly with index $\rho >-\alpha $. Let us briefly outline our approach, which is based on ideas borrowed from [11]. Put $h(t):=\mathbb{E}[X(t)]$ and write${}^{1}$
(3)
\[Y(t)=\sum \limits_{k\ge 0}\big(X_{k+1}(t-S_{k})-h(t-S_{k})\big)\mathbb{1}_{\{S_{k}\le t\}}+\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}.\]
We investigate the two summands on the right-hand side separately. The second summand is a standard renewal shot noise process with response function h. Under condition (2) and assuming that
(4)
\[h(t)=\mathbb{E}\big[X(t)\big]\sim {t}^{\rho }\ell _{h}(t),\hspace{1em}t\to \infty ,\]
for some $\rho \in \mathbb{R}$ and a slowly varying function $\ell _{h}$, it was proved in [10, Theorem 2.9] and [14, Theorem 2.1] that
(5)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}h(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\stackrel{\mathrm{f}.\mathrm{d}.}{\Longrightarrow }\big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
where $J_{\alpha ,\rho }=(J_{\alpha ,\rho }(u))_{u\ge 0}$ is a so-called fractionally integrated inverse α-stable subordinator. The process $J_{\alpha ,\rho }$ is defined as a pathwise Lebesgue–Stieltjes integral
(6)
\[J_{\alpha ,\rho }(u)=\int _{[0,\hspace{0.1667em}u]}{(u-y)}^{\rho }\mathrm{d}{W_{\alpha }^{\gets }}(y),\hspace{1em}u\ge 0.\]
In this formula ${W_{\alpha }^{\gets }}(y):=\inf \{t\ge 0:W_{\alpha }(t)>y\}$, $y\ge 0$, is a generalized inverse of an α-stable subordinator $(W_{\alpha }(t))_{t\ge 0}$ with the Laplace exponent
\[-\log \mathbb{E}{e}^{-sW_{\alpha }(1)}=\varGamma (1-\alpha ){s}^{\alpha },\hspace{1em}s\ge 0.\]
It is also known that the convergence of finite-dimensional distributions in (5) can be strengthened to weak convergence in the Skorokhod space $D(0,\infty )$ endowed with the $J_{1}$-topology if $\rho >-\alpha $; see Theorem 2.1 in [14]. If $\rho \le -\alpha $, the process $(J_{\alpha ,\rho }(u))_{u\ge 0}$, being a.s. finite for every fixed $u\ge 0$, has a.s. locally unbounded trajectories; see Proposition 2.5 in [14].
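The convergence in (5) can also be probed by simulation. The sketch below (our illustration; all numerical choices are hypothetical) generates the scaled renewal shot noise on the left-hand side of (5) at $u=1$, assuming Pareto ξ with $\mathbb{P}\{\xi >t\}={t}^{-\alpha }$ for $t\ge 1$ and $h(t)={t}^{\rho }$, and compares the empirical mean with $\mathbb{E}[J_{\alpha ,\rho }(1)]=\varGamma (1+\rho )/(\varGamma (1-\alpha )\varGamma (1+\alpha +\rho ))$, which is the case $l=1$ of the moment formula (25) in the Appendix.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(1)
ALPHA, RHO, T = 0.6, 0.5, 1e4   # hypothetical parameters; T is the (large) scaling level

def scaled_shot_noise(u=1.0):
    """(P{xi > T}/h(T)) * sum_k h(u*T - S_k) 1{S_k <= u*T} for h(s) = s^RHO."""
    total, S = 0.0, 0.0
    while S <= u * T:
        total += (u * T - S) ** RHO
        S += rng.pareto(ALPHA) + 1.0                # P{xi > s} = s^{-ALPHA} for s >= 1
    return T ** (-ALPHA) * total / T ** RHO

samples = np.array([scaled_shot_noise() for _ in range(2000)])
limit_mean = gamma(1 + RHO) / (gamma(1 - ALPHA) * gamma(1 + ALPHA + RHO))
print(samples.mean(), limit_mean)                   # the two values should be close for large T
```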
Turning to the first summand in (3) we note that it is the a.s. limit of a martingale $(R(j,t),\mathcal{F}_{j})_{j\in \mathbb{N}}$, where $\mathcal{F}_{j}:=\sigma ((X_{k},\xi _{k}):1\le k\le j)$ and
\[R(j,t):=\sum \limits_{k=0}^{j-1}\big(X_{k+1}(t-S_{k})-h(t-S_{k})\big)\mathbb{1}_{\{S_{k}\le t\}},\hspace{1em}j\in \mathbb{N}.\]
Applying martingale central limit theory it is possible to show that under appropriate assumptions (which are of no importance for this paper)
\[\bigg(\sqrt{\frac{\mathbb{P}\{\xi >t\}}{v(t)}}\sum \limits_{k\ge 0}\big(X_{k+1}(ut-S_{k})-h(ut-S_{k})\big)\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\stackrel{\mathrm{f}.\mathrm{d}.}{\Longrightarrow }\big(Z(u)\big)_{u>0},\]
as $t\to \infty $, for a non-trivial process Z, where $v(t):=\mathbb{E}[{(X(t)-h(t))}^{2}]$ is the variance of X, see Proposition 2.2 in [11].
We are interested in situations where the second summand in (3) asymptotically dominates; more precisely, we are looking for conditions ensuring
(7)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,\hspace{0.1667em}T]}{\sup }\bigg|\sum \limits_{k\ge 0}\big(X_{k+1}(ut-S_{k})-h(ut-S_{k})\big)\mathbb{1}_{\{S_{k}\le ut\}}\bigg|\stackrel{\mathbb{P}}{\to }0,\hspace{1em}t\to \infty ,\]
for every fixed $T>0$. From what has been mentioned above it is clear that this can happen only if
(8)
\[\underset{t\to \infty }{\lim }\frac{\mathbb{P}\{\xi >t\}v(t)}{{h}^{2}(t)}=0.\]
Restricting our attention to the case where v is regularly varying with index $\beta \in \mathbb{R}$, i.e.
(9)
\[v(t)\sim {t}^{\beta }\ell _{v}(t),\hspace{1em}t\to \infty ,\]
we see that (8) holds if $\beta <\alpha +2\rho $ and fails if $\beta >\alpha +2\rho $ (indeed, $t\mapsto \mathbb{P}\{\xi >t\}v(t)/{h}^{2}(t)$ is then regularly varying with index $\beta -\alpha -2\rho $). As long as we do not make any assumptions on distributional or path-wise properties of X, such as monotonicity, self-similarity or independence of increments, it can hardly be expected that condition (8) alone is sufficient for (7). Nevertheless, we will show that (7) holds under additional assumptions on the asymptotic behavior of the higher centered moments $\mathbb{E}[{(X(t)-h(t))}^{2l}]$, $l=1,2,\dots $, and an additional technical assumption. Our first main result treats the case where the moments of the normalized process $([X(t)-h(t)]/\sqrt{v(t)})_{t\ge 0}$ are bounded uniformly in $t\ge 0$. Denote by $(\widehat{X}(t))_{t\ge 0}$ the centered process $(X(t)-h(t))_{t\ge 0}$.
Theorem 1.
Assume that for all $t\ge 0$ and $l\in \mathbb{N}$ we have $\mathbb{E}[|X(t){|}^{l}]<\infty $. Further, assume that the following conditions are fulfilled:
  • (A1) relation (2) holds for some $\alpha \in (0,1)$;
  • (A2) relation (4) holds for some $\rho >-\alpha $;
  • (A3) relation (9) holds for some $\beta \in (-\alpha ,\alpha +2\rho )$;
  • (A4) there exists $\delta >0$ such that for every $l\in \mathbb{N}$ the following two conditions hold:
    (10)
    \[\mathbb{E}\big[\widehat{X}{(t)}^{2l}\big]\le C_{l}{v}^{l}(t),\hspace{1em}t\ge 0,\]
    and
    (11)
    \[\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]\le C_{l}{t}^{l(\rho -\varepsilon )},\hspace{1em}t\ge 0,\]
    for some $C_{l}\in (0,\infty )$ and $\varepsilon >0$.
Then, as $t\to \infty $,
(12)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}X_{k+1}(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.
Our second main result is mainly applicable when the process X is almost surely bounded by some (deterministic) constant. We have the following theorem.
Theorem 2.
Assume that for all $t\ge 0$ and $l\in \mathbb{N}$ we have $\mathbb{E}|X(t){|}^{l}<\infty $ and conditions (A1), (A2) of Theorem 1 are valid. Further, suppose that for every $l\in \mathbb{N}$ there exists a constant $C_{l}>0$ such that
(13)
\[\mathbb{E}\big[\widehat{X}{(t)}^{2l}\big]=\mathbb{E}\big[{\big(X(t)-h(t)\big)}^{2l}\big]\le C_{l}h(t),\hspace{1em}t\ge 0,\]
and for some $\delta >0$ the function $t\mapsto \mathbb{E}[\sup _{y\in [0,\delta )}|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}{|}^{l}]$ is either directly Riemann integrable or locally bounded and
(14)
\[\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]=O\big(\mathbb{P}\{\xi >t\}\big),\hspace{1em}t\to \infty .\]
Then (12) holds.
Obviously, our results are far from optimal and leave much room for improvement, yet they are applicable to several models presented in the next section.

2 Applications

2.1 The number of busy servers in a $G/G/\infty $ queue

Consider a $G/G/\infty $ queue with customers arriving at the epochs $0=S_{0}<S_{1}<S_{2}<\cdots \hspace{0.1667em}$. Upon arrival, each customer is immediately served by one of infinitely many idle servers, and the service time of the kth customer is $\eta _{k}$, a copy of a positive random variable η. Put $X(t):=\mathbb{1}_{\{\eta >t\}}$; then the random process with immigration
\[Y(u)=\sum \limits_{k\ge 0}\mathbb{1}_{\{S_{k}\le u<S_{k}+\eta _{k+1}\}},\hspace{1em}u\ge 0,\]
represents the number of busy servers at time $u\ge 0$. The process $(Y(u))_{u\ge 0}$ may also be interpreted as the difference between the number of visits to $[0,t]$ of the standard random walk $(S_{k})_{k\ge 0}$ and the perturbed random walk $(S_{k}+\eta _{k+1})_{k\ge 1}$, see [2], or as the number of active sources in a communication network, see [17, 18]. An introduction to renewal theory for perturbed random walks can be found in [7].
Assume that (2) holds and
(15)
\[\mathbb{P}\{\eta >t\}\sim {t}^{\rho }\ell _{\eta }(t),\hspace{1em}t\to \infty ,\]
for some $\rho \in (-\alpha ,0]$ and $\ell _{\eta }$ slowly varying at infinity. Note that
\[h(t)=\mathbb{P}\{\eta >t\}\sim {t}^{\rho }\ell _{\eta }(t),\hspace{1em}t\to \infty .\]
Moreover, for every $l\in \mathbb{N}$ and every $\delta >0$,
\[\mathbb{E}\big[\widehat{X}{(t)}^{2l}\big]=\mathbb{P}\{\eta >t\}\mathbb{P}\{\eta \le t\}\big({\mathbb{P}}^{2l-1}\{\eta >t\}+{\mathbb{P}}^{2l-1}\{\eta \le t\}\big)\le h(t)\]
and
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}& \displaystyle \Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]\le {2}^{l-1}\mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|\Big]\\{} & \displaystyle \le {2}^{l}\mathbb{P}\{\eta >t\}\mathbb{1}_{\{t\le \delta \}}+{2}^{l}\big(\mathbb{P}\{\eta >t-\delta \}-\mathbb{P}\{\eta >t\}\big)\mathbb{1}_{\{t>\delta \}}.\end{array}\]
The function on the right-hand side is directly Riemann integrable. Indeed, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \hspace{-5.69054pt}\sum \limits_{n\ge 1}\underset{\delta n\le y\le \delta (n+1)}{\sup }\big(\mathbb{P}\{\eta >y-\delta \}-\mathbb{P}\{\eta >y\}\big)\\{} & \displaystyle \le \sum \limits_{n\ge 1}\big(\mathbb{P}\big\{\eta >(n-1)\delta \big\}-\mathbb{P}\big\{\eta >(n+1)\delta \big\}\big)=\mathbb{P}\{\eta >0\}+\mathbb{P}\{\eta >\delta \}\le 2,\end{array}\]
and the claim follows from the remark after the definition of direct Riemann integrability given on p. 362 in [5].
From Theorem 2 we obtain the following result, complementing Theorem 1.2 in [8] that treats the case $\mathbb{E}\xi <\infty $.
Proposition 1.
Assume that $(\xi ,\eta )$ is a random vector with positive components such that (2) and (15) hold for $\alpha \in (0,1)$ and $\rho \in (-\alpha ,0]$, respectively. Let $(\xi _{k},\eta _{k})_{k\in \mathbb{N}}$ be a sequence of independent copies of $(\xi ,\eta )$ and $(S_{k})_{k\in \mathbb{N}_{0}}$ be a random walk defined by (1). Then
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{\mathbb{P}\{\eta >t\}}\sum \limits_{k\ge 0}\mathbb{1}_{\{S_{k}\le ut<S_{k}+\eta _{k+1}\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.
Remark 1.
We do not assume independence of ξ and η.

2.2 Shot noise processes with a random amplitude

Assume that $X(t)=\eta f(t)$, where η is a non-degenerate random variable and $f:[0,\infty )\to \mathbb{R}$ is a fixed càdlàg function. The corresponding random process with immigration
\[Y(t)=\sum \limits_{k\ge 0}\eta _{k+1}f(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}},\hspace{1em}t\ge 0,\]
where $(\eta _{k})_{k\in \mathbb{N}}$ is a sequence of independent copies of η, may be interpreted as a renewal shot noise process in which the common response function f is scaled at the shot $S_{k}$ by a random factor $\eta _{k+1}$. In the case where the $(\xi _{k})_{k\in \mathbb{N}}$ have an exponential distribution and are independent of $(\eta _{k})_{k\in \mathbb{N}}$, such processes were used in mathematical finance as a model of stock prices with long-range dependence in asset returns, see [15].
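As a small illustration of this model (our sketch, not taken from [15]), here is one path of such a shot noise with exponential inter-arrival times, a lognormal amplitude η and the hypothetical response $f(t)={(1+t)}^{-1/2}$; note that Proposition 2 below instead requires the heavy-tailed ξ of (2).

```python
import numpy as np

rng = np.random.default_rng(3)

def amplitude_shot_noise(t, rate=1.0):
    """Y(t) = sum_k eta_{k+1} f(t - S_k) 1{S_k <= t} with f(s) = (1 + s)^{-1/2}."""
    total, S = 0.0, 0.0
    while S <= t:
        eta = rng.lognormal()                   # random factor scaling the shot at S
        total += eta * (1.0 + (t - S)) ** -0.5  # eta_{k+1} * f(t - S_k)
        S += rng.exponential(1.0 / rate)        # exponential xi: the Poisson case of [15]
    return total

print([amplitude_shot_noise(t) for t in (1.0, 10.0, 100.0)])
```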
Note that if $\mathbb{E}|\eta {|}^{l}<\infty $ for all $l\in \mathbb{N}$, then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle h(t)=(\mathbb{E}\eta )f(t),\hspace{2em}v(t)=\mathrm{Var}(\eta ){f}^{2}(t),\\{} & \displaystyle \mathbb{E}\big[{\big(X(t)-h(t)\big)}^{2l}\big]=\mathbb{E}\big[{(\eta -\mathbb{E}\eta )}^{2l}\big]{f}^{2l}(t)\le C_{l}{v}^{l}(t),\hspace{1em}l\in \mathbb{N},\end{array}\]
for some $C_{l}>0$. Assume now that f varies regularly with index $\rho >-\alpha $ and additionally satisfies
(16)
\[\underset{y\in [0,\delta )}{\sup }\big|f(t)-f(t-y)\big|=O\big({t}^{\rho -\varepsilon }\big),\hspace{1em}t\to \infty ,\]
for some $\delta >0$ and $\varepsilon >0$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\Big[\underset{y\in [0,\delta )}{\sup }{\big|\widehat{X}(t)-\widehat{X}(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\Big]& \displaystyle =\mathbb{E}|\eta -\mathbb{E}\eta {|}^{l}\underset{y\in [0,\delta )}{\sup }{\big|f(t)-f(t-y)\mathbb{1}_{\{y\le t\}}\big|}^{l}\\{} & \displaystyle =O\big({t}^{l(\rho -\varepsilon )}\big),\hspace{1em}t\to \infty .\end{array}\]
Hence, all assumptions of Theorem 1 hold (if $\mathbb{E}\eta <0$, Theorem 1 is applicable to the process $-X$) and we have the following result.
Proposition 2.
Assume that $\mathbb{E}|\eta {|}^{l}<\infty $ for all $l\in \mathbb{N}$, $\mathbb{E}\eta \ne 0$ and (2) holds. If $f:[0,\infty )\to \mathbb{R}$ satisfies
\[f(t)\sim {t}^{\rho }\ell _{f}(t),\hspace{1em}t\to \infty ,\]
for some $\rho >-\alpha $ and $\ell _{f}$ slowly varying at infinity, and (16) holds, then
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{f(t)\mathbb{E}\eta }\sum \limits_{k\ge 0}\eta _{k+1}f(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology.
This result complements the convergence of finite-dimensional distributions provided by Example 3.3 in [11].
Remark 2.
In general, condition (16) might not hold for a function f which is regularly varying with index $\rho \in \mathbb{R}$. Take, for example,
\[f(t)=1+\frac{{(-1)}^{[t]}}{\log [t]}\mathbb{1}_{\{t>1\}}.\]
Then, f is regularly varying with index $\rho =0$, but for every $\delta >0$ and large $n\in \mathbb{N}$ we have
\[\underset{y\in [0,\delta )}{\sup }\big|f(2n)-f(2n-y)\big|\ge \underset{y\in [0,\delta \wedge 1)}{\sup }\big|f(2n)-f(2n-y)\big|\ge \frac{2}{\log (2n)}.\]
Hence, (16) does not hold. On the other hand, if f is differentiable with an eventually monotone derivative ${f^{\prime }}$, then (16) holds by the mean value theorem for differentiable functions and Theorem 1.7.2 in [3].
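The failure of (16) for this f is also easy to check numerically. A small sketch (ours; it evaluates f only at the points needed, with $\delta =1/2$ and the single test point $y=1/4$):

```python
import numpy as np

def f(t):
    """The oscillating example from Remark 2, evaluated only for t >= 2 here."""
    n = int(np.floor(t))
    return 1.0 + (-1) ** n / np.log(n)

for n in (10, 100, 1000):
    osc = abs(f(2 * n) - f(2 * n - 0.25))   # lower bound for the sup over y in [0, 1/2)
    print(n, osc, 2 / np.log(2 * n))        # osc >= 2/log(2n), hence not O((2n)^{-eps})
```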

3 Proof of Theorems 1 and 2

The proofs of Theorems 1 and 2 rely on the same ideas, so we will prove them simultaneously. Pick $\delta >0$ such that all assumptions of Theorem 1 or Theorem 2 hold. This $\delta >0$ remains fixed throughout the proof.
In view of assumptions (A1) and (A2) and the fact that h is càdlàg we infer from Theorem 2.1 in [14] that
(17)
\[\bigg(\frac{\mathbb{P}\{\xi >t\}}{h(t)}\sum \limits_{k\ge 0}h(ut-S_{k})\mathbb{1}_{\{S_{k}\le ut\}}\bigg)_{u>0}\Rightarrow \big(J_{\alpha ,\rho }(u)\big)_{u>0},\hspace{1em}t\to \infty ,\]
weakly on $D(0,\infty )$ endowed with the $J_{1}$-topology. Note that in Theorem 2.1 of [14] h is assumed monotone (or eventually monotone). However, this assumption is redundant. The only places which have to be adjusted in the proofs are two displays on p. 90, where $h(0)$ should be replaced by $\sup _{y\in [0,c]}h(y)$.
Hence, from (3) we see that it is enough to check, for every fixed $T>0$, that
(18)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\stackrel{\mathbb{P}}{\to }0,\hspace{1em}t\to \infty ,\]
where $\widetilde{Y}(t):=\sum _{k\ge 0}(X_{k+1}(t-S_{k})-h(t-S_{k}))\mathbb{1}_{\{S_{k}\le t\}}$ for $t\ge 0$. Moreover, it suffices to show that
(19)
\[\frac{\mathbb{P}\{\xi >t\}}{h(t)}\big|\widetilde{Y}(t)\big|\stackrel{a.s.}{\to }0,\hspace{1em}t\to \infty .\]
Indeed, for every fixed $s>0$,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\\{} & \displaystyle \hspace{1em}\le \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,s]}{\sup }\big|\widetilde{Y}(u)\big|+\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [s,Tt]}{\sup }\big|\widetilde{Y}(u)\big|\\{} & \displaystyle \hspace{1em}\le \frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,s]}{\sup }\big|\widetilde{Y}(u)\big|+\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [s,Tt]}{\sup }\frac{h(u)}{\mathbb{P}\{\xi >u\}}\underset{u\in [s,Tt]}{\sup }\bigg|\frac{\mathbb{P}\{\xi >u\}}{h(u)}\widetilde{Y}(u)\bigg|.\end{array}\]
Since $t\mapsto h(t)/\mathbb{P}\{\xi >t\}$ is regularly varying with positive index $\rho +\alpha $,
\[\underset{u\in [s,Tt]}{\sup }\frac{h(u)}{\mathbb{P}\{\xi >u\}}\sim \frac{h(Tt)}{\mathbb{P}\{\xi >Tt\}}\sim {T}^{\rho +\alpha }\frac{h(t)}{\mathbb{P}\{\xi >t\}},\hspace{1em}t\to \infty .\]
Sending $t\to \infty $ we obtain, for every fixed $s>0$,
\[\underset{t\to \infty }{\limsup }\frac{\mathbb{P}\{\xi >t\}}{h(t)}\underset{u\in [0,T]}{\sup }\big|\widetilde{Y}(ut)\big|\le {T}^{\rho +\alpha }\underset{u\in [s,\infty )}{\sup }\bigg|\frac{\mathbb{P}\{\xi >u\}}{h(u)}\widetilde{Y}(u)\bigg|.\]
Sending now $s\to \infty $ shows that (19) implies (18). Let us first check that (19) holds along the arithmetic sequence $(n\delta )_{n\in \mathbb{N}}$. According to the Borel–Cantelli lemma and Markov’s inequality it suffices to check that for some $l\in \mathbb{N}$
(20)
\[\sum \limits_{n=1}^{\infty }{\bigg(\frac{\mathbb{P}\{\xi >n\delta \}}{h(\delta n)}\bigg)}^{2l}\mathbb{E}\big[\widetilde{Y}{(\delta n)}^{2l}\big]<\infty .\]
To check (20) we apply the Burkholder–Davis–Gundy inequality in the form given in Theorem 11.3.2 of [4] to obtain
(21)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]& \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}\mathbb{E}\big({\widehat{X}_{k+1}^{2}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}|\mathcal{F}_{k}\big)\bigg)}^{l}\bigg]\\{} & \displaystyle \hspace{1em}+K_{l}\mathbb{E}\Big[\underset{k\ge 0}{\sup }\big({\widehat{X}_{k+1}^{2l}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\big)\Big],\end{array}\]
for some constant $K_{l}>0$, where we recall the notation $\mathcal{F}_{k}=\sigma ((X_{j},\xi _{j}):1\le j\le k)$.
Proof of (20) under assumptions of Theorem 1.
Using assumption (A4) we infer from (21):
(22)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}& \displaystyle \big[\widetilde{Y}{(t)}^{2l}\big]\\{} & \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}{\widehat{X}_{k+1}^{2l}}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]\\{} & \displaystyle \le K_{l}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}C_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg].\end{array}\]
If $\beta \ge 0$, then $t\mapsto {v}^{l}(t)$ varies regularly with non-negative index $l\beta $. Therefore, Lemma 1(i) yields
\[\mathbb{E}\bigg(\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)=O\bigg(\frac{{v}^{l}(t)}{\mathbb{P}\{\xi >t\}}\bigg),\hspace{1em}t\to \infty .\]
If $\beta \in (-\alpha ,0)$, pick $l\in \mathbb{N}$ such that $l\beta <-\alpha $. Then ${v}^{l}(t)=O(\mathbb{P}\{\xi >t\})$, as $t\to \infty $, and Lemma 1(iii) yields
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=O(1),\hspace{1em}t\to \infty .\]
Hence, in any case
(23)
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}{v}^{l}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=O\bigg(\frac{{v}^{l}(t)}{\mathbb{P}\{\xi >t\}}\bigg)+O(1),\hspace{1em}t\to \infty .\]
To bound the first summand in (22) apply Lemma 1(i) to obtain
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}v(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=O\bigg({\bigg(\frac{v(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Combining this estimate with (23), we see that (20) holds if we pick $l>{(2\rho +\alpha -\beta )}^{-1}$. This proves (20) under assumptions of Theorem 1.  □
Proof of (20) under assumptions of Theorem 2.
From (21) and using (13) we have
\[\mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]\le K_{l}{C_{1}^{l}}\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]+K_{l}C_{l}\mathbb{E}\bigg[\sum \limits_{k\ge 0}h(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg].\]
Lemma 1(i) gives us the estimate
\[\mathbb{E}\big[\widetilde{Y}{(t)}^{2l}\big]=O\bigg({\bigg(\frac{h(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Therefore, (20) holds if we choose $l\in \mathbb{N}$ such that $l(\alpha +\rho )>1$. This proves (20) under the assumptions of Theorem 2.
It remains to show that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{\mathbb{P}\{\xi >n\delta \}}{h(n\delta )}\underset{t\in [n\delta ,(n+1)\delta )}{\sup }\bigg|& \displaystyle \sum \limits_{k\ge 0}\big(\widehat{X}_{k+1}\big((n+1)\delta -S_{k}\big)\mathbb{1}_{\{S_{k}\le (n+1)\delta \}}\\{} & \displaystyle -\widehat{X}_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\big)\bigg|\stackrel{a.s.}{\to }0,\end{array}\]
as $n\to \infty $, which in turn is an obvious consequence of regular variation of $t\mapsto \mathbb{P}\{\xi >t\}/h(t)$ and
(24)
\[\frac{\mathbb{P}\{\xi >n\}}{h(n)}\sum \limits_{k\ge 0}V_{k+1}(n\delta -S_{k})\mathbb{1}_{\{S_{k}\le n\delta \}}\stackrel{a.s.}{\to }0,\hspace{1em}n\to \infty ,\]
where $V_{k+1}(t):=\sup _{y\in [0,\delta )}|\widehat{X}_{k+1}(t)-\widehat{X}_{k+1}(t-y)\mathbb{1}_{\{y\le t\}}|$.  □
Proof of (24) under assumptions of Theorem 1.
Applying Lemma 2(i) with $b(t)={t}^{\rho -\varepsilon }$ and an appropriate $\varepsilon >0$, we obtain from (11) that
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}V_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=O\bigg({\bigg(\frac{{t}^{\rho -\varepsilon }}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Hence (24) holds in view of the Borel–Cantelli lemma and Markov’s inequality, since
\[\sum \limits_{n=1}^{\infty }\mathbb{P}\bigg\{\frac{\mathbb{P}\{\xi >n\}}{h(n)}\sum \limits_{k\ge 0}V_{k+1}(n\delta -S_{k})\mathbb{1}_{\{S_{k}\le n\delta \}}>\varepsilon \bigg\}\le \widehat{C}\sum \limits_{n=1}^{\infty }{\bigg(\frac{{n}^{\rho -\varepsilon }}{h(n)}\bigg)}^{l}<\infty ,\]
for all $l\in \mathbb{N}$ such that $\varepsilon l>1$ and some $\widehat{C}=\widehat{C}_{l}>0$.  □
Proof of (24) under assumptions of Theorem 2.
If the function
\[t\mapsto \mathbb{E}\Big[{\Big(\underset{y\in [0,\delta )}{\sup }\big|\widehat{X}_{k+1}(t)-\widehat{X}_{k+1}(t-y)\mathbb{1}_{\{y\le t\}}\big|\Big)}^{l}\Big]\]
is directly Riemann integrable, then
\[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}V_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=o(1),\hspace{1em}t\to \infty \]
by Lemma 2(ii). Hence (24) holds by the same reasoning as above after applying the Borel–Cantelli lemma. If (14) holds, then the last centered formula also holds with $O(1)$ in the right-hand side by Lemma 2(iii), whence (24). This finishes the proofs of Theorems 1 and 2.  □

Acknowledgments

The work of A. Marynych was supported by the Alexander von Humboldt Foundation. We thank two anonymous referees for careful reading, valuable comments and corrections of our numerous blunders.

A Appendix

A.1 Moment convergence for renewal shot noise process

Lemma 1.
Let $f:[0,\infty )\to \mathbb{R}$ be a locally bounded measurable function and suppose that relation (2) holds for some $\alpha \in (0,1)$.
  • (i) Assume that
    \[f(t)\sim {t}^{\rho }\ell _{f}(t),\hspace{1em}t\to \infty ,\]
    for some $\rho >-\alpha $ and $\ell _{f}$ slowly varying at infinity. Let $(J_{\alpha ,\rho }(u))_{u\ge 0}$ be a fractionally integrated inverse stable subordinator defined in (6) (and below). Then, for every $l\in \mathbb{N}$,
    (25)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{t\to \infty }{\lim }\mathbb{E}\bigg[{\bigg(\frac{\mathbb{P}\{\xi >t\}}{f(t)}\sum \limits_{k\ge 0}f(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]\\{} & \displaystyle \hspace{1em}=\mathbb{E}\big[{\big(J_{\alpha ,\rho }(1)\big)}^{l}\big]\\{} & \displaystyle \hspace{1em}=\frac{l!}{{(\varGamma (1-\alpha ))}^{l}}\prod \limits_{j=1}^{l}\frac{\varGamma (1+\rho +(j-1)(\alpha +\rho ))}{\varGamma (j(\alpha +\rho )+1)}.\end{array}\]
  • (ii) If f is directly Riemann integrable, then, for every $l\in \mathbb{N}$,
    \[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}f(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=o(1),\hspace{1em}t\to \infty .\]
  • (iii) If $f(t)=O(\mathbb{P}\{\xi >t\})$, as $t\to \infty $, then, for every $l\in \mathbb{N}$,
    \[\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}f(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]=O(1),\hspace{1em}t\to \infty .\]
Proof.
The formula for the moments of the fractionally integrated inverse stable subordinator (the second equality in (25)) is known; see, for example, (3.65) in [7] or (2.17) in [10].
Proof of (i).
In the case $\rho \in (-\alpha ,0]$, this result is just Lemma 5.3 in [10]. A perusal of the proof of the aforementioned lemma shows that the constraint $\rho \in (-\alpha ,0]$ can be replaced by $\rho >-\alpha $ without any modifications.  □
Proof of (ii).
If $l=1$ and the distribution of $S_{1}$ is non-lattice, the claim follows from the classical key renewal theorem. If $l=1$ and the distribution of $S_{1}$ is lattice, the claim still holds; see the penultimate centered formula on p. 94 in [13]. In particular, this means
(26)
\[0\le m_{1}(t):=\mathbb{E}\bigg[\sum \limits_{k\ge 0}\big|f(t-S_{k})\big|\mathbb{1}_{\{S_{k}\le t\}}\bigg]\le M_{1},\hspace{1em}t\ge 0,\]
for some constant $M_{1}>0$. Applying formula (5.19) in [10] we obtain
(27)
\[m_{l}(t):=\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 0}\big|f(t-S_{k})\big|\mathbb{1}_{\{S_{k}\le t\}}\bigg)}^{l}\bigg]={\int _{0}^{t}}r_{l}(t-y)\mathrm{d}U(y),\]
where $U(y)=\sum _{k\ge 0}\mathbb{P}\{S_{k}\le y\}$, $y\ge 0$, is the renewal function and
\[r_{l}(t)=\sum \limits_{j=0}^{l-1}v_{j}{\big|f(t)\big|}^{l-j}m_{j}(t),\]
for some real constants $v_{j}$. We proceed by induction. Assume that we know
\[m_{j}(t)\to 0,\hspace{1em}t\to \infty ,\hspace{1em}j=1,\dots ,l-1,\]
in particular,
\[0\le m_{j}(t)\le M_{j},\hspace{1em}t\ge 0,\hspace{1em}j=1,\dots ,l-1.\]
Then
\[\big|r_{l}(t)\big|\le \sum \limits_{j=0}^{l-1}M_{j}|v_{j}||f(t){|}^{l-j},\hspace{1em}t\ge 0,\]
and the right-hand side is directly Riemann integrable. By the same reasoning as in case $l=1$ we obtain
\[m_{l}(t)\to 0,\hspace{1em}t\to \infty ,\]
by the key renewal theorem.  □
Proof of (iii).
Again, let us consider the case $l=1$ first. Put $Z(t):=t-S_{\nu (t)-1}$ and note that
\[\mathbb{E}\bigg[\sum \limits_{k\ge 0}f(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}\bigg]=\mathbb{E}g\big(Z(t)\big),\]
where $g(t):=f(t)/\mathbb{P}\{\xi >t\}$. Since g is bounded, we have $\mathbb{E}g(Z(t))=O(1)$, as $t\to \infty $. For arbitrary $l\in \mathbb{N}$ the result follows from (26) and (27) by induction in the same vein as in the proof of part (ii).  □
In the next lemma we give an upper bound for the moments of the random process with immigration under assumption (2). Recall the notation $Y(t)=\sum _{k\ge 0}X_{k+1}(t-S_{k})\mathbb{1}_{\{S_{k}\le t\}}$.
Lemma 2.
Assume that (2) holds for some $\alpha \in (0,1)$.
  • (i) Suppose there exists a locally bounded measurable function $b:[0,\infty )\to [0,\infty )$ such that
    \[b(t)\sim {t}^{\beta }\ell _{b}(t),\hspace{1em}t\to \infty ,\]
    for some $\beta >-\alpha $ and $\ell _{b}$ slowly varying at infinity. If for every $l\in \mathbb{N}$
    \[\mathbb{E}\big[{\big|X(t)\big|}^{l}\big]\le {b}^{l}(t),\hspace{1em}t\ge 0,\]
    then for every $l\in \mathbb{N}$ we have
    (28)
    \[\mathbb{E}\big[{\big|Y(t)\big|}^{l}\big]=O\bigg({\bigg(\frac{b(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
  • (ii) Suppose that for every $l\in \mathbb{N}$ there exists a directly Riemann integrable function $b_{l}:[0,\infty )\to [0,\infty )$ such that
    \[\mathbb{E}\big[{\big|X(t)\big|}^{l}\big]\le b_{l}(t),\hspace{1em}t\ge 0.\]
    Then, for every $l\in \mathbb{N}$
    (29)
    \[\mathbb{E}\big[{\big|Y(t)\big|}^{l}\big]=o(1),\hspace{1em}t\to \infty .\]
  • (iii) Suppose that for every fixed $l\in \mathbb{N}$ we have
    \[\mathbb{E}\big[{\big|X(t)\big|}^{l}\big]=O\big(\mathbb{P}\{\xi >t\}\big),\hspace{1em}t\to \infty .\]
    Then, for every $l\in \mathbb{N}$
    (30)
    \[\mathbb{E}\big[{\big|Y(t)\big|}^{l}\big]=O(1),\hspace{1em}t\to \infty .\]
Proof.
Put $a_{l}(t):=\mathbb{E}[|X(t){|}^{l}]$ for $l\in \mathbb{N}$ and
\[Z(t):=\sum \limits_{k\ge 0}\big|X_{k+1}(t-S_{k})\big|\mathbb{1}_{\{S_{k}\le t\}},\hspace{1em}t\ge 0.\]
Clearly, $\mathbb{E}[|Y(t){|}^{l}]\le \mathbb{E}[Z{(t)}^{l}]$ for all $t\ge 0$ and $l\in \mathbb{N}$. We prove (28), (29) and (30) with $\mathbb{E}[Z{(t)}^{l}]$ replacing $\mathbb{E}[|Y(t){|}^{l}]$ on the left-hand sides. From the definition of the random process with immigration it follows that
\[Z(t)\stackrel{\mathrm{d}}{=}\big|X(t)\big|+\widehat{Z}(t-\xi )\mathbb{1}_{\{\xi \le t\}},\hspace{1em}t\ge 0,\]
where $\widehat{Z}(t)\stackrel{\mathrm{d}}{=}Z(t)$ for every fixed $t\ge 0$ and $\widehat{Z}(t)$ is independent of $(X,\xi )$ in the right-hand side. Taking expectations we obtain
(31)
\[\mathbb{E}\big[Z(t)\big]=a_{1}(t)+\mathbb{E}\big[Z(t-\xi )\mathbb{1}_{\{\xi \le t\}}\big],\hspace{1em}t\ge 0,\]
whilst, for $l\ge 2$, we have
(32)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\big[Z{(t)}^{l}\big]\\{} & \displaystyle \hspace{1em}=a_{l}(t)+\sum \limits_{j=1}^{l-1}\left(\genfrac{}{}{0.0pt}{}{l}{j}\right)\mathbb{E}\big[{\big|X(t)\big|}^{l-j}{\big(\widehat{Z}(t-\xi )\big)}^{j}\mathbb{1}_{\{\xi \le t\}}\big]+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big]\\{} & \displaystyle \hspace{1em}=a_{l}(t)+\sum \limits_{j=1}^{l-1}\left(\genfrac{}{}{0.0pt}{}{l}{j}\right){\int _{0}^{\infty }}{\int _{0}^{t}}{z}^{l-j}\mathbb{E}\big[Z{(t-y)}^{j}\big]\mathbb{P}\big\{\big|X(t)\big|\in \mathrm{d}z,\xi \in \mathrm{d}y\big\}\\{} & \displaystyle \hspace{2em}+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big]\\{} & \displaystyle \hspace{1em}\le a_{l}(t)+\sum \limits_{j=1}^{l-1}\left(\genfrac{}{}{0.0pt}{}{l}{j}\right)a_{l-j}(t)\underset{0\le y\le t}{\sup }\mathbb{E}\big[Z{(y)}^{j}\big]+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big].\end{array}\]
Case (i).
From Lemma 1(i) and formula (31) using the inequality $a_{1}(t)\le b(t)$, $t\ge 0$, we obtain
\[\mathbb{E}\big[Z(t)\big]=O\bigg(\frac{b(t)}{\mathbb{P}\{\xi >t\}}\bigg),\hspace{1em}t\to \infty .\]
Thus, (28) holds for $l=1$. We proceed by induction. Assume that for every $j=1,\dots ,l-1$ there exists $C_{j}>0$ such that
\[\mathbb{E}\big[Z{(t)}^{j}\big]\le C_{j}{\bigg(\frac{b(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{j},\hspace{1em}t\ge 0.\]
This implies
\[\underset{0\le y\le t}{\sup }\mathbb{E}\big[Z{(y)}^{j}\big]\le C_{j}\underset{0\le y\le t}{\sup }{\bigg(\frac{b(y)}{\mathbb{P}\{\xi >y\}}\bigg)}^{j}\sim C_{j}{\bigg(\frac{b(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{j},\]
where the last relation follows from the regular variation of $t\mapsto b(t)/\mathbb{P}\{\xi >t\}$ with positive index $\beta +\alpha $. Hence, from equation (32) and the inequalities $a_{j}(t)\le {b}^{j}(t)$, $t\ge 0$, $j=1,\dots ,l-1$, we deduce
\[\mathbb{E}\big[Z{(t)}^{l}\big]\le {C^{\prime }}\frac{{b}^{l}(t)}{{(\mathbb{P}\{\xi >t\})}^{l-1}}+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big],\hspace{1em}t\ge 0,\]
for some ${C^{\prime }}={C^{\prime }_{l}}>0$. Since $t\mapsto {C^{\prime }}{b}^{l}(t)/{(\mathbb{P}\{\xi >t\})}^{l-1}$ is regularly varying with index $l(\beta +\alpha )-\alpha >-\alpha $, Lemma 1(i) yields
\[\mathbb{E}\big[Z{(t)}^{l}\big]=O\bigg({\bigg(\frac{b(t)}{\mathbb{P}\{\xi >t\}}\bigg)}^{l}\bigg),\hspace{1em}t\to \infty .\]
Case (ii).
Arguing by induction as in the proof of case (i), we see from formulae (31) and (32) that
\[\mathbb{E}\big[Z{(t)}^{l}\big]\le {\widehat{b}^{\prime }_{l}}(t)+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big],\hspace{1em}t\ge 0,\]
for a directly Riemann integrable function ${\widehat{b}^{\prime }_{l}}$. The claim follows from the key renewal theorem.
Case (iii).
For $l=1$ the claim follows from Lemma 1(iii) and formula (31). Using the inductive argument once again, we obtain from (32) that
\[\mathbb{E}\big[Z{(t)}^{l}\big]\le {C^{\prime\prime }}\mathbb{P}\{\xi >t\}+\mathbb{E}\big[Z{(t-\xi )}^{l}\mathbb{1}_{\{\xi \le t\}}\big],\hspace{1em}t\ge 0,\]
for some ${C^{\prime\prime }}={C^{\prime\prime }_{l}}>0$ and the claim follows from Lemma 1(iii).
 □

Footnotes

1 In what follows we always assume that h exists and is a càdlàg function.

References

[1] 
Alsmeyer, G., Iksanov, A., Marynych, A.: Functional limit theorems for the number of occupied boxes in the Bernoulli sieve. Stoch. Process. Appl. 127(3), 995–1017 (2017). MR3605718. doi:10.1016/j.spa.2016.07.007
[2] 
Alsmeyer, G., Iksanov, A., Meiners, M.: Power and exponential moments of the number of visits and related quantities for perturbed random walks. J. Theor. Probab. 28(1), 1–40 (2015). MR3320959. doi:10.1007/s10959-012-0475-7
[3] 
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Encycl. Math. Appl., vol. 27. Cambridge University Press, Cambridge (1987), 491 p. MR0898871. doi:10.1017/CBO9780511721434
[4] 
Chow, Y.S., Teicher, H.: Probability Theory. Independence, Interchangeability, Martingales, 3rd edn. Springer Texts Statist., Springer (1997), 488 p.
[5] 
Feller, W.: An Introduction to Probability Theory and Its Applications. Vol. II. 2nd edn., John Wiley & Sons, Inc., New York, London, Sydney (1971), 669 p. MR0270403
[6] 
Iksanov, A.: Functional limit theorems for renewal shot noise processes with increasing response functions. Stoch. Process. Appl. 123(6), 1987–2010 (2013). MR3038496. doi:10.1016/j.spa.2013.01.019
[7] 
Iksanov, A.: Renewal Theory for Perturbed Random Walks and Similar Processes. Probab. Appl., Birkhäuser (2016), 250 p.
[8] 
Iksanov, A., Jedidi, W., Bouzeffour, F.: Functional limit theorems for the number of busy servers in a G/G/∞ queue. Preprint available at https://arxiv.org/pdf/1610.08662.pdf
[9] 
Iksanov, A., Kabluchko, Z., Marynych, A.: Weak convergence of renewal shot noise processes in the case of slowly varying normalization. Stat. Probab. Lett. 114, 67–77 (2016). MR3491974. doi:10.1016/j.spl.2016.03.015
[10] 
Iksanov, A., Marynych, A., Meiners, M.: Limit theorems for renewal shot noise processes with eventually decreasing response functions. Stoch. Process. Appl. 124(6), 2132–2170 (2014). MR3188351. doi:10.1016/j.spa.2014.02.007
[11] 
Iksanov, A., Marynych, A., Meiners, M.: Asymptotics of random processes with immigration I: Scaling limits. Bernoulli 23(2), 1233–1278 (2017). MR3606765. doi:10.3150/15-BEJ776
[12] 
Iksanov, A., Marynych, A., Meiners, M.: Asymptotics of random processes with immigration II: Convergence to stationarity. Bernoulli 23(2), 1279–1298 (2017)
[13] 
Iksanov, A.M., Marynych, A.V., Vatutin, V.A.: Weak convergence of finite-dimensional distributions of the number of empty boxes in the Bernoulli sieve. Theory Probab. Appl. 59(1), 87–113 (2015). MR3416065. doi:10.1137/S0040585X97986904
[14] 
Iksanov, A., Kabluchko, Z., Marynych, A., Shevchenko, G.: Fractionally integrated inverse stable subordinators. Stoch. Process. Appl. 127(1), 80–106 (2016)
[15] 
Klüppelberg, C., Kühn, C.: Fractional Brownian motion as a weak limit of Poisson shot noise processes—with applications to finance. Stoch. Process. Appl. 113(2), 333–351 (2004)
[16] 
Marynych, A.: A note on convergence to stationarity of random processes with immigration. Theory Stoch. Process. 20(36)(1), 84–100 (2016). MR3502397
[17] 
Mikosch, T., Resnick, S.: Activity rates with very heavy tails. Stoch. Process. Appl. 116(2), 131–155 (2006)
[18] 
Resnick, S., Rootzén, H.: Self-similar communication models and very heavy tails. Ann. Appl. Probab. 10(3), 753–778 (2000)
Copyright
© 2017 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Fractionally integrated inverse stable subordinators; random process with immigration; shot noise process

MSC2010
60F05 (primary) 60K05 (secondary)
