1 Introduction
In recent years, limit theorems and statistical inference for high frequency observations of stochastic processes have received a great deal of attention. The most prominent class of high frequency statistics consists of power variations, which have proved to be of immense importance for the analysis of the fine structure of an underlying stochastic process. The asymptotic theory for power variations and related statistics has been intensively studied in the setting of Itô semimartingales, fractional Brownian motion and Brownian semi-stationary processes, to name just a few; see for example [2–4, 7, 9] among many others.
In the recent works [5, 6] power variations of stationary increments Lévy moving average processes have been investigated in detail. These are continuous-time stochastic processes ${({X_{t}})}_{t\ge 0}$, defined on a probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, that are given by
(1.1)
\[ {X_{t}}={\int _{-\infty }^{t}}\big(g(t-s)-{g_{0}}(-s)\big)\hspace{0.1667em}\text{d}{L_{s}},\]
where $L={({L_{t}})}_{t\in \mathbb{R}}$ is a symmetric Lévy process on $\mathbb{R}$ with ${L_{0}}=0$ and without Gaussian component. Moreover, $g,{g_{0}}:\mathbb{R}\to \mathbb{R}$ are deterministic functions vanishing on $(-\infty ,0)$. The most prominent subclasses include Lévy moving average processes, which correspond to the setting ${g_{0}}=0$, and the linear fractional stable motion, which is obtained by taking $g(s)={g_{0}}(s)={s_{+}^{\alpha }}$ and letting L be a symmetric β-stable Lévy process with $\beta \in (0,2)$. The latter is a self-similar process with index $H=\alpha +1/\beta $; see [12].
We introduce the kth order increments ${\Delta _{i,k}^{n}}X$ of X, $k\in \mathbb{N}$, that are defined by
(1.2)
\[ {\Delta _{i,k}^{n}}X:={\sum \limits_{j=0}^{k}}{(-1)}^{j}\left(\genfrac{}{}{0.0pt}{}{k}{j}\right){X_{(i-j)/n}},\hspace{1em}i\ge k.\]
For example, we have that ${\Delta _{i,1}^{n}}X={X_{\frac{i}{n}}}-{X_{\frac{i-1}{n}}}$ and ${\Delta _{i,2}^{n}}X={X_{\frac{i}{n}}}-2{X_{\frac{i-1}{n}}}+{X_{\frac{i-2}{n}}}$. The main statistic of interest is the power variation computed on the basis of kth order increments:
(1.3)
\[ V{(X,p;k)_{n}}:={\sum \limits_{i=k}^{n}}|{\Delta _{i,k}^{n}}X{|}^{p}.\]
A variety of asymptotic results has been shown for the statistic $V{(X,p;k)_{n}}$ in [5, 6]. The mode of convergence and possible limits heavily depend on the interplay between the power p, the form of the kernel function g and the Blumenthal–Getoor index of L. We recall that the Blumenthal–Getoor index is defined via
(1.4)
\[ \beta :=\inf \Bigg\{r\ge 0:{\int _{-1}^{1}}|x{|}^{r}\hspace{0.1667em}\nu (\text{d}x)<\infty \Bigg\}\in [0,2],\]
where ν denotes the Lévy measure of L. It is well known that ${\sum _{s\in [0,1]}}|\Delta {L_{s}}{|}^{p}$ is finite when $p>\beta $, while it is infinite for $p<\beta $. Here $\Delta {L_{s}}={L_{s}}-{L_{s-}}$ where ${L_{s-}}={\lim _{u\uparrow s,u<s}}{L_{u}}$. To formulate the results of [5, 6], we introduce the following set of assumptions on g, ${g_{0}}$ and ν:
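For concreteness, the increments (1.2) and the power variation statistic can be computed directly from a discretely sampled path. The following is a minimal numerical sketch (our own illustration, not part of [5, 6]):

```python
from math import comb

def increments(x, k):
    # k-th order increments (1.2): sum_{j=0}^k (-1)^j C(k,j) x_{i-j}, for i = k,...,n
    n = len(x) - 1
    return [sum((-1) ** j * comb(k, j) * x[i - j] for j in range(k + 1))
            for i in range(k, n + 1)]

def power_variation(x, p, k):
    # V(X, p; k)_n = sum_{i=k}^n |Delta_{i,k}^n X|^p
    return sum(abs(d) ** p for d in increments(x, k))

# sanity check: second-order increments of a linear path vanish
path = [0.1 * i for i in range(11)]
assert all(abs(d) < 1e-12 for d in increments(path, 2))
```

Higher-order increments (larger k) annihilate polynomial trends of degree below k, which is why they are preferred for kernels with smoother behaviour.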
Assumption (A): The function $g:\mathbb{R}\to \mathbb{R}$ satisfies the condition
(1.5)
\[ g(t)\sim {c_{0}}{t}^{\alpha }\hspace{1em}\text{as}\hspace{2.5pt}t\downarrow 0\hspace{1em}\text{for some}\hspace{2.5pt}\alpha >0\hspace{2.5pt}\text{and}\hspace{2.5pt}{c_{0}}\ne 0,\]
where $g(t)\sim f(t)$ as $t\downarrow 0$ means that ${\lim _{t\downarrow 0}}g(t)/f(t)=1$. For some $w\in (0,2]$, ${\limsup _{t\to \infty }}\nu (x:|x|\ge t){t}^{w}<\infty $ and $g-{g_{0}}$ is a bounded function in ${L}^{w}({\mathbb{R}_{+}})$. Furthermore, g is k times continuously differentiable on $(0,\infty )$ and there exists a $\delta >0$ such that $|{g}^{(k)}(t)|\le K{t}^{\alpha -k}$ for all $t\in (0,\delta )$, $|{g}^{(k)}|$ is decreasing on $(\delta ,\infty )$ and ${g}^{(j)}\in {L}^{w}((\delta ,\infty ))$ for $j\in \{1,k\}$.
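As an illustration of Assumption (A), consider the hypothetical damped power kernel $g(t)={t}^{\alpha }{e}^{-t}$ (our own toy choice, not a kernel from the paper). A quick numerical sketch checks condition (1.5) and the derivative bound for $k=1$:

```python
import math

alpha = 0.5  # hypothetical exponent, our own choice

def g(t):
    # toy kernel g(t) = t^alpha * exp(-t), vanishing on (-infinity, 0]
    return t ** alpha * math.exp(-t) if t > 0 else 0.0

def dg(t):
    # g'(t) = exp(-t) * (alpha * t^(alpha - 1) - t^alpha) for t > 0
    return math.exp(-t) * (alpha * t ** (alpha - 1) - t ** alpha)

# condition (1.5): g(t) / t^alpha -> c0 = 1 as t -> 0
for t in (1e-2, 1e-4, 1e-6):
    print(t, g(t) / t ** alpha)

# derivative bound |g'(t)| <= K t^(alpha - 1) near zero (here K = alpha + 0.1)
assert all(abs(dg(t)) <= (alpha + 0.1) * t ** (alpha - 1) for t in (1e-2, 1e-4, 1e-6))
```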
Assumption (A-log): In addition to (A) suppose that
\[ {\int _{\delta }^{\infty }}|{g}^{(k)}(s){|}^{w}\big|\log \big(|{g}^{(k)}(s)|\big)\big|\hspace{0.1667em}\text{d}s<\infty .\]
Intuitively speaking, Assumption (A) says that ${g}^{(k)}$ may have a singularity at 0 when α is small, but it is smooth outside of 0. The theorem below has been proved in [5, 6]. We recall that a sequence of ${\mathbb{R}}^{d}$-valued random variables ${({Y_{n}})}_{n\ge 1}$ is said to converge stably in law to a random variable Y, defined on an extension of the original probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, whenever the joint convergence in distribution $({Y_{n}},Z)\stackrel{d}{\to }(Y,Z)$ holds for any $\mathcal{F}$-measurable random variable Z; in this case we use the notation ${Y_{n}}\stackrel{\mathcal{L}-s}{\to }Y$. We refer to [1, 11] for a detailed exposition of stable convergence.
Theorem 1.1 ([6, Theorem 1.1(i)] and [5, Theorem 1.2(i)]).
Suppose that Assumption (A) holds, the Blumenthal–Getoor index satisfies $\beta <2$ and $p>\beta $. If $w=1$ assume that (A-log) holds. Then we obtain the following cases:
-
(i) When $\alpha <k-1/p$, we have the stable convergence
(1.6)
\[ \begin{aligned}{}{n}^{\alpha p}V{(X,p;k)_{n}}& \stackrel{\mathcal{L}-s}{\to }|{c_{0}}{|}^{p}\sum \limits_{m:{T_{m}}\in [0,1]}|\Delta {L_{{T_{m}}}}{|}^{p}{V_{m}}\\{} \hspace{1em}\textit{with}\hspace{1em}{V_{m}}& ={\sum \limits_{l=0}^{\infty }}|{h_{k}}(l+{U_{m}}){|}^{p}.\end{aligned}\]Here ${h_{k}}(x):={\sum _{j=0}^{k}}{(-1)}^{j}\left(\genfrac{}{}{0.0pt}{}{k}{j}\right){(x-j)_{+}^{\alpha }}$ (cf. (2.3) below), ${({T_{m}})}_{m\ge 1}$ denote the jump times of L, and ${({U_{m}})}_{m\ge 1}$ are i.i.d. $\mathcal{U}(0,1)$-distributed random variables, defined on an extension of $(\varOmega ,\mathcal{F},\mathbb{P})$ and independent of $\mathcal{F}$. -
(ii) When $\alpha =k-1/p$ and additionally $1/p+1/w>1$, we have
(1.8)
\[ \begin{aligned}{}\frac{{n}^{\alpha p}}{\log (n)}V{(X,p;k)_{n}}& \stackrel{\mathbb{P}}{\longrightarrow }|{c_{0}}{q_{k,\alpha }}{|}^{p}\sum \limits_{s\in (0,1]}|\Delta {L_{s}}{|}^{p}\\{} \hspace{1em}\textit{with}\hspace{1em}{q_{k,\alpha }}& :={\prod \limits_{j=0}^{k-1}}(\alpha -j).\end{aligned}\]
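The series defining ${V_{m}}$ converges precisely because $|{h_{k}}(x)|$ decays like ${x}^{\alpha -k}$ for large x, so that $p(\alpha -k)<-1$ when $\alpha <k-1/p$. A small sketch with illustrative parameter values (our own, not from [5, 6]) shows the partial sums stabilising:

```python
from math import comb

def h_k(x, k, alpha):
    # h_k(x) = sum_{j=0}^k (-1)^j C(k,j) (x - j)_+^alpha
    return sum((-1) ** j * comb(k, j) * max(x - j, 0.0) ** alpha for j in range(k + 1))

# illustrative parameters with alpha < k - 1/p, so that p * (alpha - k) < -1
k, p, alpha, u = 2, 2.0, 0.4, 0.3
partial = [sum(abs(h_k(l + u, k, alpha)) ** p for l in range(N)) for N in (10, 100, 1000)]
print(partial)  # partial sums of the series defining V_m stabilise quickly
```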
We remark that the first order asymptotic theory of [6, Theorem 1.1] includes two more regimes: an ergodic type limit theorem in the setting $p<\beta $, $\alpha <k-1/\beta $, and convergence in probability to a random integral in the setting $p\ge 1$, $\alpha >k-1/\max \{p,\beta \}$. However, in this paper we concentrate on the results of Theorem 1.1, which are quite non-standard in the literature. More specifically, our aim is to extend the theory of Theorem 1.1 to kernels g that exhibit multiple singularities. We call a point $x\in {\mathbb{R}_{+}}$ a singularity point when the kth derivative ${g}^{(k)}$ of g explodes at x. Note that under Assumption (A) and the condition $\alpha \le k-1/p$ the function g has only one singularity point at $x=0$. In practical applications a singularity point $x\in {\mathbb{R}_{+}}$ leads to a strong feedback effect stemming from the past jumps around the time $t-x$. Such effects have been discussed in the context of turbulence modelling in [8].
We will show that the limits in Theorem 1.1(i) and (ii) will be affected by the presence of multiple singularity points. More precisely, we will see that the increments ${\Delta _{i,k}^{n}}X$ can be heavily influenced by the jumps of L that happened in the past, and the time delay is determined by the singularity points of g. The obtained result is similar in spirit to the work [8] that studied quadratic variation of Brownian semi-stationary processes under multiple singularities of the kernel g. Furthermore, we will prove that in general the stable convergence in Theorem 1.1(i) only holds along a subsequence.
2 Main results
We consider stationary increments Lévy moving average processes as defined in (1.1) and recall that the driving motion L is a pure jump Lévy process with Lévy measure ν. Now, we introduce the condition on the kernel function g:
Assumption (B): For some $w\in (0,2]$, ${\limsup _{t\to \infty }}\nu (x:|x|\ge t){t}^{w}<\infty $ and $g-{g_{0}}$ is a bounded function in ${L}^{w}({\mathbb{R}_{+}})$. Furthermore, there exist points $0={\theta _{0}}<{\theta _{1}}<\cdots <{\theta _{l}}$ such that the following properties hold:
-
(i) $g(t)\sim {c_{0}}{t}^{{\alpha _{0}}}$ as $t\downarrow 0$ for some ${\alpha _{0}}>0$ and ${c_{0}}\ne 0$.
-
(ii) $g(t)\sim {c_{z}}|t-{\theta _{z}}{|}^{{\alpha _{z}}}$ as $t\to {\theta _{z}}$ for some ${\alpha _{z}}>0$ and ${c_{z}}\ne 0$, and for all $z=1,\dots ,l$.
-
(iii) $g\in {C}^{k}({\mathbb{R}_{+}}\setminus \{{\theta _{0}},\dots ,{\theta _{l}}\})$.
-
(iv) There exist δ, $K>0$ such that $|{g}^{(k)}(t)|\le K|t-{\theta _{z}}{|}^{{\alpha _{z}}-k}$ for all $t\in ({\theta _{z}}-\delta ,{\theta _{z}}+\delta )\setminus \{{\theta _{z}}\}$, for any $z=0,\dots ,l$. Furthermore, there exists a ${\delta ^{\prime }}>0$ such that $|{g}^{(k)}|$ is decreasing on $({\theta _{l}}+{\delta ^{\prime }},\infty )$ and ${g}^{(j)}\in {L}^{w}(({\theta _{l}}+{\delta ^{\prime }},\infty ))$ for $j\in \{1,k\}$.
Let us give some remarks on Assumption (B). First of all, conditions (B)(i) and (B)(ii), which are direct extensions of (1.5), mean that for small powers ${\alpha _{z}}>0$ the points ${\theta _{z}}$ are singularities of g in the sense that ${g}^{(k)}({\theta _{z}})$ does not exist. On the other hand, condition (B)(iii) states that there exist no further singularities. The parameter w is by no means unique. It simultaneously describes the tail behaviour of the Lévy measure ν and the integrability of the function $|{g}^{(k)}|$, which exhibit a trade-off. When L is β-stable we always take $w=\beta $. Furthermore, Assumption (B) guarantees the existence of ${X_{t}}$ for all $t\ge 0$. Indeed, it follows from [10, Theorem 7] that the process X is well-defined if and only if for all $t\ge 0$,
(2.1)
\[ {\int _{-t}^{\infty }}{\int _{\mathbb{R}}}\big(|{f_{t}}(s)x{|}^{2}\wedge 1\big)\hspace{0.1667em}\nu (\text{d}x)\hspace{0.1667em}\text{d}s<\infty ,\]
where ${f_{t}}(s)=g(t+s)-{g_{0}}(s)$. By adding and subtracting g in ${f_{t}}$ it follows by Assumption (B) and the mean value theorem that ${f_{t}}$ is a bounded function in ${L}^{w}({\mathbb{R}_{+}})$. For all $\epsilon >0$, Assumption (B) implies that
\[ {\int _{\mathbb{R}}}\big(|yx{|}^{2}\wedge 1\big)\hspace{0.1667em}\nu (\text{d}x)\le K\big({\mathbf{1}_{\{|y|\le 1\}}}|y{|}^{w}+{\mathbf{1}_{\{|y|>1\}}}|y{|}^{\beta +\epsilon }\big),\]
which shows (2.1) since ${f_{t}}$ is a bounded function in ${L}^{w}({\mathbb{R}_{+}})$.
Remark 2.1 (Toy example).
Recall the following well-known results about the power variation of a pure jump Lévy process L:
\[ V{(L,p;k)_{n}}\stackrel{\mathbb{P}}{\longrightarrow }\sum \limits_{s\in [0,1]}|\Delta {L_{s}}{|}^{p}<\infty \]
for any $k\ge 1$ and any $p>\beta $. Let us now consider a simple stationary increments Lévy moving average process X with ${g_{0}}=0$ and $g(x)={\mathbf{1}_{[0,1]}}(x)$. In this case we may call the points ${\theta _{0}}=0$ and ${\theta _{1}}=1$ the singularities of g, although they do not precisely correspond to conditions (B)(i) and (B)(ii), and we observe that ${X_{t}}={L_{t}}-{L_{t-1}}$. Hence, we obtain the convergence in probability
\[ V{(X,p;k)_{n}}\stackrel{\mathbb{P}}{\longrightarrow }\sum \limits_{s\in [0,1]}|\Delta {L_{s}}{|}^{p}+\sum \limits_{s\in [-1,0]}|\Delta {L_{s}}{|}^{p}\]
for any $k\ge 1$ and any $p>\beta $. This result demonstrates that even in the simplest setting multiple singularities lead to a different limit.
It turns out that only the minimal powers among $\{{\alpha _{0}},\dots ,{\alpha _{l}}\}$ determine the asymptotic behaviour of the statistic $V{(X,p;k)_{n}}$. Thus, we define
(2.2)
\[ \alpha :=\min \{{\alpha _{0}},\dots ,{\alpha _{l}}\}\hspace{2em}\text{and}\hspace{2em}\mathcal{A}:=\{z:{\alpha _{z}}=\alpha \}.\]
Furthermore, we introduce the notation ${h_{k,0}}:={h_{k}}$ and
(2.3)
\[ {h_{k,z}}(x)={\sum \limits_{j=0}^{k}}{(-1)}^{j}\left(\genfrac{}{}{0.0pt}{}{k}{j}\right)|x-j{|}^{{\alpha _{z}}}\hspace{1em}\text{for}\hspace{2.5pt}z=1,\dots ,l.\]
In the main result below we consider a subsequence ${({n_{j}})}_{j\in \mathbb{N}}$ such that the following condition holds:
(2.4)
\[ \underset{j\to \infty }{\lim }\{{n_{j}}{\theta _{z}}\}={\eta _{z}}\in [0,1]\hspace{1em}\text{for all}\hspace{2.5pt}z\in \mathcal{A},\]
where $\{x\}$ denotes the fractional part of $x\in \mathbb{R}$. Obviously, such a subsequence always exists since $\{n{\theta _{z}}\}$ is a bounded sequence. Sometimes we will require a stronger condition, which is analogous to Assumption (A-log):
Assumption (B-log): Condition (B) holds and we have that
\[ {\int _{{\theta _{l}}+{\delta ^{\prime }}}^{\infty }}|{g}^{(k)}(s){|}^{w}\big|\log \big(|{g}^{(k)}(s)|\big)\big|\hspace{0.1667em}\text{d}s<\infty .\]
The main result of the paper is the following theorem.
Theorem 2.2.
Suppose that Assumption (B) holds, the Blumenthal–Getoor index satisfies $\beta <2$ and $p>\beta $. If $w=1$ assume that (B-log) holds. Recall the notations (2.2) and (2.3). Then we obtain the following cases:
-
(i) When ${\max _{0\le z\le l}}{\alpha _{z}}<k-1/p$ and condition (2.4) holds, we have the stable convergence
(2.5)
\[\begin{aligned}{}& {n_{j}^{\alpha p}}V{(X,p;k)_{{n_{j}}}}\stackrel{\mathcal{L}-s}{\to }\sum \limits_{z\in \mathcal{A}}|{c_{z}}{|}^{p}\sum \limits_{m:{T_{m}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}{V_{m}^{z}}\\{} & \textit{with}\hspace{1em}{V_{m}^{z}}=\sum \limits_{r\in \mathbb{Z}}{\big|{h_{k,z}}\big(r+1-\{{U_{m}}+{\eta _{z}}\}\big)\big|}^{p}.\end{aligned}\] -
(ii) Let $\alpha ={\alpha _{0}}=\cdots ={\alpha _{l}}=k-1/p$. Assume that the functions ${f_{z}}:{\mathbb{R}_{+}}\to \mathbb{R}$ defined by ${f_{z}}(x)=g(x)/|x-{\theta _{z}}{|}^{\alpha }$ are in ${C}^{k}(({\theta _{z}}-\delta ,{\theta _{z}}+\delta ))$ for all $\delta <{\max _{1\le j\le l}}({\theta _{j}}-{\theta _{j-1}})$. If $1/p+1/w>1$, then we have
(2.6)
\[ \frac{{n}^{\alpha p}}{\log (n)}V{(X,p;k)_{n}}\stackrel{\mathbb{P}}{\longrightarrow }|{q_{k,\alpha }}{|}^{p}{\sum \limits_{z=0}^{l}}|{c_{z}}{|}^{p}(1+{\mathbf{1}_{\{z\ge 1\}}})\sum \limits_{m:{T_{m}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p},\]
where ${q_{k,\alpha }}$ is defined in (1.8).
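The logarithmic normalisation in (2.6) reflects the fact that at the boundary $\alpha =k-1/p$ the pth powers of ${h_{k}}$ decay exactly like $1/u$, so their partial sums grow like $|{q_{k,\alpha }}{|}^{p}\log n$. A minimal numerical sketch (our own illustration, with $k=2$, $p=2$):

```python
from math import comb, log

k, p = 2, 2.0
alpha = k - 1.0 / p            # boundary case alpha = k - 1/p = 1.5
q = 1.0
for j in range(k):
    q *= alpha - j             # q_{k,alpha} = prod_{j=0}^{k-1} (alpha - j) = 0.75

def h(x):
    # h_{k,0}(x) = sum_{j=0}^k (-1)^j C(k,j) (x - j)_+^alpha
    return sum((-1) ** j * comb(k, j) * max(x - j, 0.0) ** alpha for j in range(k + 1))

def ratio(N):
    # at the boundary |h(u)|^p ~ |q|^p / u, so the sum grows like |q|^p * log N
    return sum(abs(h(u)) ** p for u in range(1, N)) / log(N)

print(ratio(10 ** 3), ratio(10 ** 5), abs(q) ** p)  # the ratio drifts towards |q|^p
```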
We remark that the stable convergence in Theorem 2.2(i) only holds along the subsequence ${({n_{j}})}_{j\ge 1}$, as can be seen from the form of the limit in (2.5), which depends on $({\eta _{z}})$. The original statistic ${n}^{\alpha p}V{(X,p;k)_{n}}$ is tight, but it does not converge except when ${\theta _{z}}\in \mathbb{N}$ for all $z\in \mathcal{A}$. On the other hand, in Theorem 2.2(ii) we do not need to consider a subsequence.
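A subsequence satisfying condition (2.4) can always be extracted by compactness; the following sketch (our own illustration, with the hypothetical choice $\theta =\sqrt{2}$) greedily selects indices forcing $\{{n_{j}}\theta \}\to 0$:

```python
import math

theta = math.sqrt(2)  # hypothetical irrational singularity location (our choice)

def frac(x):
    # fractional part {x} = x - floor(x)
    return x - math.floor(x)

# condition (2.4): pick a subsequence (n_j) along which {n * theta} converges;
# the shrinking threshold forces the limit eta = 0 here
subseq = []
for n in range(1, 50000):
    if frac(n * theta) < 1.0 / (len(subseq) + 2):
        subseq.append(n)

print(subseq[:8], [round(frac(n * theta), 4) for n in subseq[:8]])
```

Any other limit $\eta \in [0,1]$ can be targeted the same way by selecting indices with $\{n\theta \}$ close to η.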
Notice that the interval $[-{\theta _{z}},1-{\theta _{z}}]$, which appears in Theorem 2.2, is the set $[0,1]$ shifted by ${\theta _{z}}$ to the left. Given the discussion of Remark 2.1, such a shift in the limit is not really surprising. We recall that a similar phenomenon has been discovered in [8] in the context of Brownian semi-stationary processes. These are stochastic processes ${({Y_{t}})}_{t\ge 0}$ defined by
\[ {Y_{t}}={\int _{-\infty }^{t}}g(t-s){\sigma _{s}}\hspace{0.1667em}\text{d}{W_{s}},\]
where W is a two-sided Brownian motion and ${({\sigma _{t}})}_{t\in \mathbb{R}}$ is a càdlàg process. When the kernel function g satisfies conditions (B)(i) and (B)(ii) along with some further assumptions, which in particular ensure the existence of ${Y_{t}}$, the authors have shown the following convergence in probability (see [8, Theorem 3.2]):
(2.7)
\[ \frac{1}{n{\tau _{n}^{2}}}V{(Y,2;k)_{n}}\stackrel{\mathbb{P}}{\longrightarrow }\sum \limits_{z\in \mathcal{A}}{\pi _{z}}{\int _{-{\theta _{z}}}^{1-{\theta _{z}}}}{\sigma _{s}^{2}}\hspace{0.1667em}\text{d}s,\]
where ${\tau _{n}^{2}}=\mathbb{E}[{({\Delta _{k,k}^{n}}G)}^{2}]$ with ${G_{t}}={\int _{-\infty }^{t}}g(t-s)\hspace{0.1667em}\text{d}{W_{s}}$, and the probability weights ${({\pi _{z}})}_{z\in \mathcal{A}}$ are given by
\[ {\pi _{z}}=\frac{{c_{z}^{2}}\| {h_{k,z}}{\| _{{L}^{2}(\mathbb{R})}^{2}}}{{\sum _{{z^{\prime }}\in \mathcal{A}}}{c_{{z^{\prime }}}^{2}}\| {h_{k,{z^{\prime }}}}{\| _{{L}^{2}(\mathbb{R})}^{2}}}.\]
Hence, we observe the same shift phenomenon in the integration region as in Theorem 2.2.
3 Proofs
Throughout this section all positive constants are denoted by C although they may change from line to line. We will divide the proof of Theorem 2.2 into several steps. First, we will show the statements (2.5) and (2.6) for a compound Poisson process. In the second step we will decompose the jump measure of L into jumps that are bigger than ϵ and jumps that are smaller than ϵ. The big jumps form a compound Poisson process and hence the claim follows from the first step. Finally, we prove negligibility of small jumps when $\epsilon \to 0$.
We start with an important proposition.
Proposition 3.1.
Let $T=({T_{1}},\dots ,{T_{d}})$ be a stochastic vector with a density $v:{\mathbb{R}}^{d}\to {\mathbb{R}_{+}}$. Suppose there exists an open convex set $A\subseteq {\mathbb{R}}^{d}$ such that v is continuously differentiable on A and vanishes outside of A. Then, under condition (2.4), it holds that
where $\{x\}$ denotes the componentwise fractional part of the vector $x\in {\mathbb{R}}^{d}$ and $x+a$, $a\in \mathbb{R}$, is componentwise addition. Here $U=({U_{1}},\dots ,{U_{d}})$ consists of i.i.d. $\mathcal{U}(0,1)$-distributed random variables defined on an extension of the space $(\varOmega ,\mathcal{F},\mathbb{P})$ and being independent of $\mathcal{F}$.
(3.1)
\[ {\big(\{{n_{j}}T+{n_{j}}{\theta _{z}}\}\big)}_{z\in \mathcal{A}}\stackrel{\mathcal{L}-s}{\to }{\big(\{U+{\eta _{z}}\}\big)}_{z\in \mathcal{A}}\hspace{1em}\textit{as}\hspace{2.5pt}j\to \infty ,\]
Proof.
We first show the stable convergence
(3.2)
\[ \{nT\}\stackrel{\mathcal{L}-s}{\to }U\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
This statement has already been shown in [6, Lemma 4.1], but we demonstrate its proof for completeness. Let $f:{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}\to \mathbb{R}$ be a ${C}^{1}$-function, which vanishes outside some closed ball in $A\times {\mathbb{R}}^{d}$. We claim that there exists a finite constant $K>0$ such that for all $\rho >0$
(3.3)
\[ {D_{\rho }}:=\Bigg|{\int _{{\mathbb{R}}^{d}}}f\big(x,\{x/\rho \}\big)v(x)\hspace{0.1667em}\text{d}x-{\int _{{\mathbb{R}}^{d}}}\bigg({\int _{{[0,1]}^{d}}}f(x,u)\hspace{0.1667em}\text{d}u\bigg)v(x)\hspace{0.1667em}\text{d}x\Bigg|\le K\rho .\]
By (3.3) used for $\rho =1/n$ we obtain that
(3.4)
\[ \mathbb{E}\big[f\big(T,\{nT\}\big)\big]\longrightarrow \mathbb{E}\big[f(T,U)\big]\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
Moreover, due to [1, Proposition 2(D'')], (3.4) implies the stable convergence $\{nT\}\stackrel{\mathcal{L}-s}{\to }U$ as $n\to \infty $. Thus, we need to prove the inequality (3.3). Define $\phi (x,u):=f(x,u)v(x)$. Then it holds by substitution that
\[\begin{aligned}{}{\int _{{\mathbb{R}}^{d}}}f\big(x,\{x/\rho \}\big)v(x)\hspace{0.1667em}\text{d}x& =\sum \limits_{j\in {\mathbb{Z}}^{d}}{\int _{{(0,1]}^{d}}}{\rho }^{d}\phi (\rho j+\rho u,u)\hspace{0.1667em}\text{d}u\end{aligned}\]
and
\[\begin{aligned}{}{\int _{{\mathbb{R}}^{d}}}\bigg({\int _{{[0,1]}^{d}}}f(x,u)\hspace{0.1667em}\text{d}u\bigg)v(x)\hspace{0.1667em}\text{d}x& =\sum \limits_{j\in {\mathbb{Z}}^{d}}{\int _{{[0,1]}^{d}}}\bigg({\int _{(\rho j,\rho (j+1)]}}\phi (x,u)\hspace{0.1667em}\text{d}x\bigg)\hspace{0.1667em}\text{d}u.\end{aligned}\]
Hence, we conclude that
\[\begin{aligned}{}{D_{\rho }}& \le \sum \limits_{j\in {\mathbb{Z}}^{d}}{\int _{{(0,1]}^{d}}}\Bigg|{\int _{(\rho j,\rho (j+1)]}}\phi (x,u)\hspace{0.1667em}\text{d}x-{\rho }^{d}\phi (\rho j+\rho u,u)\Bigg|\hspace{0.1667em}\text{d}u\\{} & \le \sum \limits_{j\in {\mathbb{Z}}^{d}}{\int _{{(0,1]}^{d}}}{\int _{(\rho j,\rho (j+1)]}}\big|\phi (x,u)-\phi (\rho j+\rho u,u)\big|\hspace{0.1667em}\text{d}x\hspace{0.1667em}\text{d}u.\end{aligned}\]
Using that A is convex and open, we deduce by the mean value theorem that there exists a positive constant K and a compact set $B\subseteq {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ such that for all $j\in {\mathbb{Z}}^{d}$, $x\in (\rho j,\rho (j+1)]$ and $u\in {(0,1]}^{d}$ we have
\[ \big|\phi (x,u)-\phi (\rho j+\rho u,u)\big|\le K\rho \hspace{0.1667em}{\mathbf{1}_{B}}(x,u).\]
Thus, ${D_{\rho }}\le K\rho {\int _{{(0,1]}^{d}}}{\int _{{\mathbb{R}}^{d}}}{\mathbf{1}_{B}}(x,u)\hspace{0.1667em}\text{d}x\hspace{0.1667em}\text{d}u$, which shows (3.3) and hence (3.2).
Now, we are ready to prove the statement (3.1). By (3.2) and condition (2.4) we conclude that
\[ {\big(\{{n_{j}}T\},\{{n_{j}}{\theta _{z}}\}\big)}_{z\in \mathcal{A}}\stackrel{\mathcal{L}-s}{\to }{(U,{\eta _{z}})}_{z\in \mathcal{A}}\hspace{1em}\text{as}\hspace{2.5pt}j\to \infty .\]
Next, consider the map $f:{\mathbb{R}}^{d}\times {\mathbb{R}}^{{l^{\prime }}}\to {\mathbb{R}}^{d\times {l^{\prime }}}$, where ${l^{\prime }}$ denotes the cardinality of $\mathcal{A}$, given by
\[ f(x,{y_{1}},\dots ,{y_{{l^{\prime }}}})=\big(\{x+{y_{1}}\},\dots ,\{x+{y_{{l^{\prime }}}}\}\big).\]
This map is discontinuous exactly at those points $(x,{y_{1}},\dots ,{y_{{l^{\prime }}}})$ for which ${x_{j}}+{y_{i}}\in \mathbb{Z}$ for some $i\in \{1,\dots ,{l^{\prime }}\}$ and some $j\in \{1,\dots ,d\}$. Note that the probability of the limiting variable ${(U,{\eta _{z}})}_{z\in \mathcal{A}}$ lying in the latter set is 0. Hence, it follows from the continuous mapping theorem for stable convergence that
\[ f\big(\{{n_{j}}T\},{\big(\{{n_{j}}{\theta _{z}}\}\big)}_{z\in \mathcal{A}}\big)\stackrel{\mathcal{L}-s}{\to }f\big(U,{({\eta _{z}})}_{z\in \mathcal{A}}\big)={\big(\{U+{\eta _{z}}\}\big)}_{z\in \mathcal{A}}\]
as $j\to \infty $. Since $x=\{x\}+\lfloor x\rfloor $ we have the identity $\{x+y\}=\{\{x\}+\{y\}\}$ and the left-hand side becomes
\[ f\big(\{{n_{j}}T\},{\big(\{{n_{j}}{\theta _{z}}\}\big)}_{z\in \mathcal{A}}\big)={\big(\{{n_{j}}T+{n_{j}}{\theta _{z}}\}\big)}_{z\in \mathcal{A}},\]
which concludes the proof of Proposition 3.1. □
Now, we introduce the notation
(3.5)
\[ {g_{i,n}}(x)={\sum \limits_{j=0}^{k}}{(-1)}^{j}\left(\genfrac{}{}{0.0pt}{}{k}{j}\right)g\big((i-j)/n-x\big),\]
and observe the identity
\[ {\Delta _{i,k}^{n}}X={\int _{-\infty }^{i/n}}{g_{i,n}}(s)\hspace{0.1667em}\text{d}{L_{s}}.\]
The next lemma presents some estimates for the function ${g_{i,n}}$. Its proof is a straightforward consequence of Assumption (B) and the Taylor expansion.
Lemma 3.2.
Suppose that Assumption (B) holds and let $z=1,\dots ,l$. Then there exists an $N\in \mathbb{N}$ such that for all $n\ge N$ and $i\in \{k,\dots ,n\}$ the following hold:
-
(i) $|{g_{i,n}}(x)|\le C(|i/n-x-{\theta _{z}}{|}^{{\alpha _{z}}}+{n}^{-{\alpha _{z}}})$ for all $x\in [\frac{i-2k}{n}-{\theta _{z}},\frac{i+2k}{n}-{\theta _{z}}]$.
-
(ii) $|{g_{i,n}}(x)|\le C{n}^{-k}|(i-k)/n-x-{\theta _{z}}{|}^{{\alpha _{z}}-k}$ for all $x\in (\frac{i}{n}-\delta -{\theta _{z}},\frac{i-k}{n}-{\theta _{z}})$ if ${\alpha _{z}}-k<0$.
-
(iii) $|{g_{i,n}}(x)|\le C{n}^{-k}|(i-k)/n-x-{\theta _{z}}{|}^{{\alpha _{z}}-k}$ for all $x\in (\frac{i+k}{n}-{\theta _{z}},\frac{i-k}{n}+\delta -{\theta _{z}})$ if ${\alpha _{z}}-k<0$.
-
(iv) $|{h_{k,z}}(x)|\le |x-k{|}^{\alpha -k}$ for all $x\ge k+1$ and $|{h_{k,z}}(x)|\le |x+k{|}^{\alpha -k}$ for all $x\le -k-1$, if ${\alpha _{z}}-k<0$.
-
(v) For each $\varepsilon >0$ it holds that\[\begin{aligned}{}{n}^{k}|{g_{i,n}}(s)|{\mathbf{1}_{(-\infty ,\frac{i}{n}-\varepsilon -{\theta _{l}}]}}(s)& \le {C_{\varepsilon }}\big({\mathbf{1}_{[-{\theta _{l}}-{\delta ^{\prime }},1-{\theta _{l}}]}}(s)\\{} & \hspace{1em}+{\mathbf{1}_{(-\infty ,-{\theta _{l}}-{\delta ^{\prime }})}}(s)|{g}^{(k)}(-s)|\big).\end{aligned}\]
Furthermore, similar estimates hold for $z=0$ with obvious adjustments that account for the fact that g and ${h_{k,0}}$ both vanish on $(-\infty ,0)$.
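The decay estimate in Lemma 3.2(iv) is easy to probe numerically; a short sketch with the illustrative values $k=2$, ${\alpha _{z}}=0.5$ (our own choice):

```python
from math import comb

def h_kz(x, k, alpha):
    # h_{k,z}(x) = sum_{j=0}^k (-1)^j C(k,j) |x - j|^alpha, cf. (2.3)
    return sum((-1) ** j * comb(k, j) * abs(x - j) ** alpha for j in range(k + 1))

k, alpha = 2, 0.5   # illustrative values with alpha - k = -1.5 < 0
# Lemma 3.2(iv): |h_{k,z}(x)| <= |x -+ k|^(alpha - k) away from the support of the kernel
for x in (10.0, 20.0, 40.0):
    assert abs(h_kz(x, k, alpha)) <= abs(x - k) ** (alpha - k)
    assert abs(h_kz(-x, k, alpha)) <= abs(-x + k) ** (alpha - k)
print("decay bound holds at the sampled points")
```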
3.1 Proof of Theorem 2.2 in the compound Poisson case
In this subsection we assume that L is a compound Poisson process. Recall that ${({T_{m}})}_{m\ge 1}$ denotes the jump times of L. Let $\varepsilon >0$ and consider ${n_{j}}\in \mathbb{N}$ such that $\varepsilon {n_{j}}>4k$. Define the set
\[\begin{aligned}{}{\varOmega _{\varepsilon }}=\big\{& \omega \in \varOmega :\text{for all}\hspace{2.5pt}m\in \mathbb{N}\hspace{2.5pt}\text{with}\hspace{2.5pt}{T_{m}}(\omega )\in [-{\theta _{l}},1]\hspace{2.5pt}\text{it holds}\\{} & |{T_{m}}(\omega )-{T_{i}}(\omega )|>2\varepsilon ,{T_{m}}(\omega )+{\theta _{z}}-{\theta _{{z^{\prime }}}}\notin \big[{T_{i}}(\omega )-2\varepsilon ,{T_{i}}(\omega )+2\varepsilon \big]\\{} & \forall i\ne m\hspace{2.5pt}\forall z,{z^{\prime }}\in \{0,\dots ,l\}\hspace{2.5pt}\text{and}\hspace{2.5pt}\Delta {L_{s}}(\omega )=0\\{} & \text{for all}\hspace{2.5pt}s\in [-\varepsilon -{\theta _{z}},-{\theta _{z}}+\varepsilon ]\cup [1-\varepsilon -{\theta _{z}},1-{\theta _{z}}+\varepsilon ]\hspace{2.5pt}\forall z\in \{0,\dots ,l\}\big\}.\end{aligned}\]
Roughly speaking, on the set ${\varOmega _{\varepsilon }}$ the jump times in $[-{\theta _{l}},1]$ are well separated, their increments are outside a small neighbourhood of ${\theta _{z}}-{\theta _{{z^{\prime }}}}$, and there are no jumps around the fixed points $-{\theta _{z}}$ and $1-{\theta _{z}}$. In particular, it obviously holds that $\mathbb{P}({\varOmega _{\varepsilon }})\to 1$ as $\varepsilon \to 0$.
Throughout the proof we assume without loss of generality that $0\in \mathcal{A}$. Now, we introduce a decomposition, which is central for the proof. Recalling the definition of ${g_{i,n}}$ at (3.5), we observe the identity
where for $z=1,\dots ,l$
(3.6)
\[ {\Delta _{i,k}^{n}}X=\sum \limits_{z\in \mathcal{A}}{M_{i,n,\varepsilon ,z}}+\sum \limits_{z\in {\mathcal{A}}^{c}}{M_{i,n,\varepsilon ,z}}+{R_{i,n,\varepsilon }},\]
\[\begin{aligned}{}{M_{i,n,\varepsilon ,0}}& ={\int _{\frac{i}{n}-\varepsilon }^{\frac{i}{n}}}{g_{i,n}}(s)\hspace{0.1667em}\text{d}{L_{s}},\hspace{2em}{M_{i,n,\varepsilon ,z}}={\int _{\frac{i}{n}-{\theta _{z}}-\varepsilon }^{\frac{i}{n}-{\theta _{z}}+\frac{\lfloor n\varepsilon \rfloor }{n}}}{g_{i,n}}(s)\hspace{0.1667em}\text{d}{L_{s}}\\{} {R_{i,n,\varepsilon }}& ={\int _{-\infty }^{\frac{i}{n}-{\theta _{l}}-\varepsilon }}{g_{i,n}}(s)\hspace{0.1667em}\text{d}{L_{s}}+{\sum \limits_{z=1}^{l}}{\int _{\frac{i}{n}-{\theta _{z}}+\frac{\lfloor n\varepsilon \rfloor }{n}}^{\frac{i}{n}-{\theta _{z-1}}-\varepsilon }}{g_{i,n}}(s)\hspace{0.1667em}\text{d}{L_{s}}.\end{aligned}\]
It turns out that the first term ${\sum _{z\in \mathcal{A}}}{M_{i,n,\varepsilon ,z}}$ is dominating, while the other two are negligible.
3.1.1 Main terms in Theorem 2.2(i)
In this subsection we consider the dominating term in the decomposition (3.6). We want to prove that, on ${\varOmega _{\varepsilon }}$, as $j\to \infty $,
where the limit has been introduced in (2.5). Let us fix an index $z\in \mathcal{A}$. Then, on ${\varOmega _{\varepsilon }}$, for each jump time ${T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]$ there exists a unique random variable ${i_{m,z}}\in \mathbb{N}$ such that
\[ {T_{m}}\in \Big[\frac{{i_{m,z}}-1}{n}-{\theta _{z}},\hspace{0.2778em}\frac{{i_{m,z}}}{n}-{\theta _{z}}\Big).\]
We also observe the following implication, which follows directly from the definition of the set ${\varOmega _{\varepsilon }}$:
where ${v_{m}^{z}}$ are random variables taking values in $\{-2,-1,0\}$ that are measurable with respect to ${T_{m}}$. If $z=0$ then the sum above is one-sided, i.e. from $u=0$ to $\lfloor n\varepsilon \rfloor $, cf. [6, Eq. (4.2)]. Next, we observe the identity
for any $m\in \mathbb{N}$, $0\le r\le k$ and $z\in \mathcal{A}$. Since $f(x)\to 1$ as $x\to {\theta _{z}}$, we find that for any $d\in \mathbb{N}$
as $j\to \infty $, which is a key result of the proof. We now define a truncated version of ${V_{n,\varepsilon ,z}}$ introduced in (3.8):
where
as $d\to \infty $, where the second sum on the right-hand side is finite, since $|{h_{k,z}}(x)|\le C|x{|}^{\alpha -k}$ for large enough $|x|$ and all $z\in \mathcal{A}$, and $\alpha <k-1/p$. In view of (3.11) and (3.12), we are left to prove the convergence
(3.7)
\[ {n_{j}^{\alpha p}}{\sum \limits_{i=k}^{{n_{j}}}}{\Bigg|\sum \limits_{z\in \mathcal{A}}{M_{i,{n_{j}},\varepsilon ,z}}\Bigg|}^{p}\stackrel{\mathcal{L}-s}{\to }\sum \limits_{z\in \mathcal{A}}|{c_{z}}{|}^{p}\sum \limits_{m:{T_{m}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}{V_{m}^{z}},\]
\[ \text{On}\hspace{2.5pt}{\varOmega _{\varepsilon }},\hspace{2.5pt}\text{if}\hspace{2.5pt}{M_{i,n,\varepsilon ,z}}\ne 0\hspace{2.5pt}\text{for some}\hspace{2.5pt}z\in \mathcal{A}\hspace{0.2778em}\Longrightarrow \hspace{0.2778em}{M_{i,n,\varepsilon ,{z^{\prime }}}}=0\hspace{2.5pt}\text{for any}\hspace{2.5pt}{z^{\prime }}\ne z\hspace{2.5pt}\text{in}\hspace{2.5pt}\mathcal{A}.\]
Indeed, this is the consequence of the definition of the term ${M_{i,n,\varepsilon ,z}}$ and the statement
\[ {T_{m}}(\omega )+{\theta _{z}}-{\theta _{{z^{\prime }}}}\notin \big[{T_{{m^{\prime }}}}(\omega )-2\varepsilon ,{T_{{m^{\prime }}}}(\omega )+2\varepsilon \big]\hspace{2.5pt}\forall {m^{\prime }}\ne m\hspace{2.5pt}\forall z,{z^{\prime }}\in \{0,\dots ,l\},\]
which holds on ${\varOmega _{\varepsilon }}$. Hence, we conclude that
\[ {n}^{\alpha p}{\sum \limits_{i=k}^{n}}{\Bigg|\sum \limits_{z\in \mathcal{A}}{M_{i,n,\varepsilon ,z}}\Bigg|}^{p}={n}^{\alpha p}\sum \limits_{z\in \mathcal{A}}{\sum \limits_{i=k}^{n}}|{M_{i,n,\varepsilon ,z}}{|}^{p}\]
on ${\varOmega _{\varepsilon }}$, and we obtain the representation
(3.8)
\[\begin{aligned}{}& {n}^{\alpha p}{\sum \limits_{i=k}^{n}}|{M_{i,n,\varepsilon ,z}}{|}^{p}={V_{n,\varepsilon ,z}}\hspace{1em}\text{with}\\{} & {V_{n,\varepsilon ,z}}={n}^{\alpha p}\sum \limits_{m:{T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}{\sum \limits_{u=-\lfloor n\varepsilon \rfloor }^{\lfloor n\varepsilon \rfloor +{v_{m}^{z}}}}|{g_{{i_{m,z}}+u,n}}({T_{m}}){|}^{p},\end{aligned}\]
\[ \{n{T_{m}}+n{\theta _{z}}\}=n{T_{m}}+n{\theta _{z}}-\lfloor n{T_{m}}+n{\theta _{z}}\rfloor =n{T_{m}}+n{\theta _{z}}-({i_{m,z}}-1).\]
Due to Assumption (B), we can write $g(x)={c_{z}}|x-{\theta _{z}}{|}^{\alpha }f(x)$ with $f(x)\to 1$ as $x\to {\theta _{z}}$, for any $z\in \mathcal{A}$ (for ${\theta _{0}}=0$ we need to replace $|x{|}^{\alpha }$ by ${x_{+}^{\alpha }}$). This allows us to decompose
(3.9)
\[\begin{aligned}{}& {n}^{\alpha }g\bigg(\frac{{i_{m,z}}+u-r}{n}-{T_{m}}\bigg)\\{} & \hspace{1em}={c_{z}}{n}^{\alpha }\Big|\frac{{i_{m,z}}+u-r}{n}-{T_{m}}-{\theta _{z}}{\Big|}^{\alpha }f\bigg(\frac{{i_{m,z}}+u-r}{n}-{T_{m}}\bigg)\\{} & \hspace{1em}={c_{z}}|u-r+{i_{m,z}}-n{T_{m}}-n{\theta _{z}}{|}^{\alpha }f\bigg(\frac{u-r}{n}+{n}^{-1}({i_{m,z}}-n{T_{m}})\bigg)\\{} & \hspace{1em}={c_{z}}|u-r+1-\{n{T_{m}}+n{\theta _{z}}\}{|}^{\alpha }f\bigg(\frac{u-r}{n}+{n}^{-1}\big(n{\theta _{z}}+1-\{n{T_{m}}+n{\theta _{z}}\}\big)\bigg)\\{} & \hspace{1em}={c_{z}}|u-r+1-\{n{T_{m}}+n{\theta _{z}}\}{|}^{\alpha }f\bigg(\frac{u-r+1-\{n{T_{m}}+n{\theta _{z}}\}}{n}+{\theta _{z}}\bigg),\end{aligned}\]
\[\begin{aligned}{}& {\bigg({n_{j}^{\alpha }}g\bigg(\frac{{i_{m,z}}+u-r}{{n_{j}}}-{T_{m}}\bigg)\bigg)}_{|u|,m\le d,\hspace{0.1667em}0\le r\le k,\hspace{0.1667em}z\in \mathcal{A}}\\{} & \stackrel{\mathcal{L}-s}{\to }{\big({c_{z}}|u-r+1-\{{U_{m}}+{\eta _{z}}\}{|}^{\alpha }\big)}_{|u|,m\le d,\hspace{0.1667em}0\le r\le k,\hspace{0.1667em}z\in \mathcal{A}},\end{aligned}\]
which holds due to condition (2.4), the decomposition (3.9) and Proposition 3.1 (for ${\theta _{0}}=0$ we again need to replace $|x{|}^{\alpha }$ by ${x_{+}^{\alpha }}$). Hence, by the continuous mapping theorem for stable convergence we deduce that
(3.10)
\[ {\big({n_{j}^{\alpha }}{g_{{i_{m,z}}+u,{n_{j}}}}({T_{m}})\big)}_{|u|,m\le d,\hspace{0.1667em}z\in \mathcal{A}}\stackrel{\mathcal{L}-s}{\to }{\big({c_{z}}{h_{k,z}}\big(1+u-\{{U_{m}}+{\eta _{z}}\}\big)\big)}_{|u|,m\le d,\hspace{0.1667em}z\in \mathcal{A}}\]
\[ {V_{n,\varepsilon ,z,d}}:={n}^{\alpha p}\sum \limits_{\begin{array}{c}m\le d:\\{} {T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]\end{array}}|\Delta {L_{{T_{m}}}}{|}^{p}\Bigg({\sum \limits_{u=-\lfloor \varepsilon d\rfloor }^{\lfloor \varepsilon d\rfloor +{v_{m}^{z}}}}|{g_{{i_{m,z}}+u,n}}({T_{m}}){|}^{p}\Bigg).\]
From (3.10) and properties of stable convergence we conclude that
(3.11)
\[ {({V_{{n_{j}},\varepsilon ,z,d}})}_{z\in \mathcal{A}}\stackrel{\mathcal{L}-s}{\to }{({V_{\varepsilon ,z,d}})}_{z\in \mathcal{A}}\hspace{1em}\text{as}\hspace{2.5pt}j\to \infty ,\]
\[ {V_{\varepsilon ,z,d}}=|{c_{z}}{|}^{p}\sum \limits_{\begin{array}{c}m\le d:\\{} {T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]\end{array}}|\Delta {L_{{T_{m}}}}{|}^{p}\Bigg({\sum \limits_{u=-\lfloor \varepsilon d\rfloor }^{\lfloor \varepsilon d\rfloor +{v_{m}^{z}}}}{\big|{h_{k,z}}\big(1+u-\{{U_{m}}+{\eta _{z}}\}\big)\big|}^{p}\Bigg).\]
Applying a monotone convergence argument, we deduce the almost sure convergence
(3.12)
\[ {V_{\varepsilon ,z,d}}\uparrow {V_{z}}=|{c_{z}}{|}^{p}\sum \limits_{{T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}\bigg(\sum \limits_{u\in \mathbb{Z}}{\big|{h_{k,z}}\big(1+u-\{{U_{m}}+{\eta _{z}}\}\big)\big|}^{p}\bigg)\]
\[ \underset{d\to \infty }{\lim }\underset{n\to \infty }{\limsup }|{V_{n,\varepsilon ,z,d}}-{V_{n,\varepsilon ,z}}|=0\]
on ${\varOmega _{\varepsilon }}$. Set ${K_{d}}={\sum _{m>d:{T_{m}}\in (-{\theta _{z}},1-{\theta _{z}}]}}|\Delta {L_{{T_{m}}}}{|}^{p}$ and observe that ${K_{d}}\to 0$ as $d\to \infty $, since L is a compound Poisson process. Due to Lemma 3.2 we conclude that $|{n}^{\alpha }{g_{i,n}}(x)|\le C\min \{1,|i/n-x{|}^{\alpha -k}\}$ and thus
\[ |{V_{n,\varepsilon ,z,d}}-{V_{n,\varepsilon ,z}}|\le C\bigg({K_{d}}+\sum \limits_{|u|>\lfloor \varepsilon d\rfloor }|u{|}^{p(\alpha -k)}\bigg)\hspace{1em}\text{for all}\hspace{2.5pt}z\in \mathcal{A},\]
and the latter converges to 0 almost surely as $d\to \infty $, because $\alpha <k-1/p$. Consequently, we have shown (3.7). □
3.1.2 Main terms in Theorem 2.2(ii)
We start with a simple lemma.
Lemma 3.3. Let ${({a_{i}})}_{i\in \mathbb{N}}$ be a sequence of real numbers with $i{a_{i}}\to 1$ as $i\to \infty $, and let $c>0$. Then $\frac{1}{\log (n)}{\sum _{i=1}^{cn}}{a_{i}}\to 1$ as $n\to \infty $.
Proof.
Due to the assumption of the lemma, we have that ${({a_{i}})}_{i\in \mathbb{N}}$ is a bounded sequence and for each $\epsilon >0$ there exists an $N=N(\epsilon )$ with
\[ |{a_{i}}-{i}^{-1}|\le \epsilon {i}^{-1}\hspace{1em}\text{for all}\hspace{2.5pt}i\ge N.\]
It obviously holds that ${\lim _{n\to \infty }}{\sum _{i=1}^{cn}}{i}^{-1}/\log (n)=1$. On the other hand, we obtain that
\[ \underset{n\to \infty }{\limsup }\frac{1}{\log (n)}{\sum \limits_{i=N}^{cn}}|{a_{i}}-{i}^{-1}|\le \epsilon \underset{n\to \infty }{\limsup }\frac{1}{\log (n)}{\sum \limits_{i=1}^{cn}}{i}^{-1}=\epsilon .\]
Since $\epsilon >0$ is arbitrary, we conclude the statement of Lemma 3.3. □
Now, we will again use the decomposition (3.8), which holds on ${\varOmega _{\varepsilon }}$, and treat each term ${V_{n,\varepsilon ,z}}$ separately. We consider $z\ge 1$ and we will show that
(3.13)
\[ \frac{1}{\log (n)}{\sum \limits_{u=-\lfloor n\varepsilon \rfloor }^{\lfloor n\varepsilon \rfloor +{v_{m}^{z}}}}{\big|{n}^{\alpha }{g_{{i_{m,z}}+u,n}}({T_{m}})-{c_{z}}{h_{k,z}}\big(u+1-\{n{T_{m}}+n{\theta _{z}}\}\big)\big|}^{p}\to 0\]
as $n\to \infty $, for any $m\in \mathbb{N}$. Let us first consider the case $|u|\ge k$. Recall that we have assumed that ${f_{z}}(x)=g(x)/|x-{\theta _{z}}{|}^{\alpha }$ is in ${C}^{k}(({\theta _{z}}-\delta ,{\theta _{z}}+\delta ))$ for any $\delta <{\max _{1\le j\le l}}({\theta _{j}}-{\theta _{j-1}})$. Now, due to identity (3.9) and a Taylor expansion of order k, we obtain the bound (cf. [5, Eqs. (4.8) and (4.9)])
\[ {\sum \limits_{u=-\lfloor n\varepsilon \rfloor }^{\lfloor n\varepsilon \rfloor +{v_{m}^{z}}}}{\big|{n}^{\alpha }{g_{{i_{m,z}}+u,n}}({T_{m}})-{c_{z}}{h_{k,z}}\big(u+1-\{n{T_{m}}+n{\theta _{z}}\}\big)\big|}^{p}{\mathbf{1}_{\{|u|\ge k\}}}\le C,\]
for any $\varepsilon <{\max _{1\le j\le l}}({\theta _{j}}-{\theta _{j-1}})$. Since $|{n}^{\alpha }{g_{{i_{m,z}}+u,n}}({T_{m}})|$ is bounded for any $|u|<k$ due to Lemma 3.2, we deduce the convergence in (3.13). Next, for large enough $|u|$ we observe the bounds
\[ |{q_{k,\alpha }}{|}^{p}{a_{u}}\le {\big|{h_{k,z}}\big(u+1-\{n{T_{m}}+n{\theta _{z}}\}\big)\big|}^{p}\le |{q_{k,\alpha }}{|}^{p}{a_{u-k-1}}\hspace{1em}\text{where}\hspace{2.5pt}{a_{u}}=|u{|}^{-1}.\]
Hence, by Lemma 3.3, we conclude the convergence
(3.14)
\[ \frac{1}{\log (n)}{\sum \limits_{u=-\lfloor n\varepsilon \rfloor }^{\lfloor n\varepsilon \rfloor +{v_{m}^{z}}}}{\big|{h_{k,z}}\big(u+1-\{n{T_{m}}+n{\theta _{z}}\}\big)\big|}^{p}\to 2|{q_{k,\alpha }}{|}^{p}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
The same statement holds for $z=0$, but the limit becomes $|{q_{k,\alpha }}{|}^{p}$, since in this setting the sum is one-sided. We set $\| x{\| _{p}^{p}}={\sum _{i=1}^{m}}|{x_{i}}{|}^{p}$ for any $x\in {\mathbb{R}}^{m}$ and $p>0$, and recall that $\| x{\| _{p}}$ is a norm for $p\ge 1$. It holds that
(3.15)
\[ \begin{aligned}{}|\| x{\| _{p}^{p}}-\| y{\| _{p}^{p}}|& \le \| x-y{\| _{p}^{p}}\hspace{1em}\hspace{2.5pt}\text{when}\hspace{2.5pt}p\in (0,1],\\{} |\| x{\| _{p}}-\| y{\| _{p}}|& \le \| x-y{\| _{p}}\hspace{1em}\hspace{2.5pt}\text{when}\hspace{2.5pt}p>1.\end{aligned}\]
By (3.13), (3.14) and (3.15), and taking into account the definition of ${V_{n,\varepsilon ,z}}$ at (3.8), we readily deduce the convergence
\[ \frac{{V_{n,\varepsilon ,z}}}{\log (n)}\stackrel{\mathbb{P}}{\longrightarrow }|{q_{k,\alpha }}{c_{z}}{|}^{p}(1+{\mathbf{1}_{\{z\ge 1\}}})\sum \limits_{m:{T_{m}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}\]
as $n\to \infty $, and hence
\[ {n}^{\alpha p}{\sum \limits_{i=k}^{n}}{\Bigg|{\sum \limits_{z=0}^{l}}{M_{i,n,\varepsilon ,z}}\Bigg|}^{p}\stackrel{\mathbb{P}}{\longrightarrow }|{q_{k,\alpha }}{|}^{p}{\sum \limits_{z=0}^{l}}|{c_{z}}{|}^{p}(1+{\mathbf{1}_{\{z\ge 1\}}})\sum \limits_{m:{T_{m}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{m}}}}{|}^{p}\]
as $n\to \infty $, on ${\varOmega _{\varepsilon }}$. □
3.1.3 Negligible terms
Due to the inequalities at (3.15), it suffices to show that on ${\varOmega _{\varepsilon }}$
(3.16)
\[ {a_{n}}{\sum \limits_{i=k}^{n}}|{R_{i,n,\varepsilon }}{|}^{p}\stackrel{\mathbb{P}}{\longrightarrow }0\hspace{1em}\text{and}\hspace{1em}{a_{n}}{\sum \limits_{i=k}^{n}}|{M_{i,n,\varepsilon ,z}}{|}^{p}\stackrel{\mathbb{P}}{\longrightarrow }0\hspace{1em}\text{for}\hspace{2.5pt}z\in {\mathcal{A}}^{c},\]
as $n\to \infty $, where ${a_{n}}={n}^{\alpha p}$ in Theorem 2.2(i) and ${a_{n}}={n}^{\alpha p}/\log (n)$ in Theorem 2.2(ii); this will prove that these terms do not affect the limits in Theorem 2.2. At this stage we notice that outside the singularity points the kernel function g satisfies the same properties under Assumption (B) (resp. Assumption (B-log)) as under Assumption (A) (resp. Assumption (A-log)). Consequently, we can apply the estimates for the term ${R_{i,n,\varepsilon }}$ derived in [6, Eqs. (4.8) and (4.12)] and [5, Section 4] under conditions (A) and (A-log):
\[\begin{aligned}{}\underset{n\in \mathbb{N},i=k,\dots ,n}{\sup }{n}^{k}|{R_{i,n,\varepsilon }}|& <\infty \hspace{2.5pt}\text{almost surely if}\hspace{2.5pt}w\in (0,1],\\{} \underset{n\in \mathbb{N},i=k,\dots ,n}{\sup }\frac{{n}^{k}|{R_{i,n,\varepsilon }}|}{{(\log (n))}^{q}}& <\infty \hspace{2.5pt}\text{almost surely if}\hspace{2.5pt}w\in (1,2],\end{aligned}\]
where q is determined via $1/q+1/w=1$, since ${R_{i,n,\varepsilon }}$ is only affected by the function g outside the singularity points ${\theta _{z}}$. We readily conclude the first convergence at (3.16) in the setting of Theorem 2.2(i), because $\alpha <k-1/p$. It also holds in the setting of Theorem 2.2(ii), where for $w\in (1,2]$ we use the assumption that $1/p+1/w>1$. Now, we show the second statement of (3.16), which is only relevant in the setting of Theorem 2.2(i). Since ${\alpha _{z}}<k-1/p$ for all z, we can treat ${\sum _{i=k}^{n}}|{M_{i,n,\varepsilon ,z}}{|}^{p}$, $z\in {\mathcal{A}}^{c}$, with the same techniques as in the case $z\in \mathcal{A}$. Hence, proceeding as in Section 3.1.1, we conclude that on ${\varOmega _{\varepsilon }}$
\[ {n}^{\alpha p}{\sum \limits_{i=k}^{n}}|{M_{i,n,\varepsilon ,z}}{|}^{p}={O_{\mathbb{P}}}\big({n}^{p(\alpha -{\alpha _{z}})}\big)\hspace{1em}\text{for all}\hspace{2.5pt}z\in {\mathcal{A}}^{c},\]
where the notation ${Y_{n}}={O_{\mathbb{P}}}({a_{n}})$ means that the sequence ${a_{n}^{-1}}{Y_{n}}$ is tight. Since ${\alpha _{z}}>\alpha $ for all $z\in {\mathcal{A}}^{c}$, we obtain the second statement of (3.16). The results of Sections 3.1.1–3.1.3 and the fact that ${\varOmega _{\varepsilon }}\uparrow \varOmega $ as $\varepsilon \to 0$ imply the assertion of Theorem 2.2 in the compound Poisson case. □
3.2 Proof of Theorem 2.2 in the general case
Let now ${({L_{t}})}_{t\in \mathbb{R}}$ be a general symmetric pure jump Lévy process with Blumenthal–Getoor index β. We denote by N the corresponding Poisson random measure defined by $N(A):=\mathrm{\# }\{t\in \mathbb{R}:(t,\Delta {L_{t}})\in A\}$ for all measurable $A\subseteq \mathbb{R}\times (\mathbb{R}\setminus \{0\})$. Next, we introduce the process
\[ {X_{t}}(m)={\int _{(-\infty ,t]\times [-\frac{1}{m},\frac{1}{m}]}}x\big(g(t-s)-{g_{0}}(-s)\big)\hspace{0.1667em}N(\text{d}s,\text{d}x),\]
which only involves small jumps of L. We will prove that
(3.17)
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\mathbb{P}\big({a_{n}}V{\big(X(m),p;k\big)_{n}}>\epsilon \big)=0\hspace{1em}\text{for any}\hspace{2.5pt}\epsilon >0,\]
where ${a_{n}}={n}^{\alpha p}$ in Theorem 2.2(i) and ${a_{n}}={n}^{\alpha p}/\log (n)$ in Theorem 2.2(ii). First, due to Markov’s inequality and the stationary increments of ${X_{t}}(m)$, it follows that
\[ \mathbb{P}\big({a_{n}}V{\big(X(m),p;k\big)_{n}}>\epsilon \big)\le {\epsilon }^{-1}{a_{n}}{\sum \limits_{i=k}^{n}}\mathbb{E}\big[|{\Delta _{i,k}^{n}}X(m){|}^{p}\big]\le {\epsilon }^{-1}{b_{n}}\mathbb{E}\big[|{\Delta _{k,k}^{n}}X(m){|}^{p}\big],\]
where ${b_{n}}=n{a_{n}}$. Hence it is enough to prove that
(3.18)
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\mathbb{E}\big[|{Y_{n,m}}{|}^{p}\big]=0\hspace{1em}\text{where}\hspace{1em}{Y_{n,m}}={b_{n}^{1/p}}{\Delta _{k,k}^{n}}X(m).\]
Notice the representation
\[ {Y_{n,m}}={\int _{(-\infty ,\frac{k}{n}]\times [-\frac{1}{m},\frac{1}{m}]}}\big({b_{n}^{1/p}}{g_{k,n}}(s)\big)x\hspace{0.1667em}N(\text{d}s,\text{d}x).\]
Using this together with [10, Theorem 3.3], (3.18) will follow if
\[\begin{aligned}{}& \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }{\xi _{n,m}}=0\hspace{1em}\text{where}\hspace{1em}{\xi _{n,m}}={\int _{|x|\le \frac{1}{m}}}{\chi _{n}}(x)\hspace{0.1667em}\nu (\text{d}x)\hspace{1em}\text{and}\\{} & {\chi _{n}}(x)={\int _{-\infty }^{\frac{k}{n}}}\big(|{b_{n}^{1/p}}{g_{k,n}}(s)x{|}^{p}{\mathbf{1}_{\{|{b_{n}^{1/p}}{g_{k,n}}(s)x|\ge 1\}}}\\{} & \hspace{1em}\hspace{2.5pt}\hspace{2em}+|{b_{n}^{1/p}}{g_{k,n}}(s)x{|}^{2}{\mathbf{1}_{\{|{b_{n}^{1/p}}{g_{k,n}}(s)x|<1\}}}\big)\hspace{0.1667em}\text{d}s.\end{aligned}\]
Suppose there exists a constant $K\ge 0$ such that for all large $n\in \mathbb{N}$
(3.19)
\[ {\chi _{n}}(x)\le K\big(|x{|}^{p}+|x{|}^{2}\big)\hspace{1em}\text{for all}\hspace{2.5pt}x\in [-1,1];\]
then the dominated convergence theorem implies that
\[ \underset{m\to \infty }{\limsup }\Big[\underset{n\to \infty }{\limsup }{\xi _{n,m}}\Big]\le K\underset{m\to \infty }{\limsup }{\int _{|x|\le \frac{1}{m}}}\big(|x{|}^{p}+|x{|}^{2}\big)\hspace{0.1667em}\nu (\text{d}x)=0,\]
using the assumption that $p>\beta $. We only verify (3.19) in the setting of Theorem 2.2(i), as case (ii) is very similar; see [5]. In case (i) we have ${b_{n}^{1/p}}={n}^{\alpha +1/p}$. For short notation define ${\varPhi _{p}}:\mathbb{R}\to {\mathbb{R}_{+}}$ as the function
\[ {\varPhi _{p}}(y)=|y{|}^{2}{\mathbf{1}_{\{|y|\le 1\}}}+|y{|}^{p}{\mathbf{1}_{\{|y|>1\}}},\hspace{1em}y\in \mathbb{R}.\]
Note that ${\varPhi _{p}}$ is of modular growth, i.e. there exists a constant ${K_{p}}>0$ depending only on p such that ${\varPhi _{p}}(x+y)\le {K_{p}}({\varPhi _{p}}(x)+{\varPhi _{p}}(y))$ for any $x,y\in \mathbb{R}$. We consider the following decomposition
\[\begin{aligned}{}{\chi _{n}}(x)& ={\int _{\frac{k}{n}-\frac{1}{n}}^{\frac{k}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s+{\sum \limits_{z=1}^{l}}{\int _{\frac{k}{n}-{\theta _{z}}-\frac{1}{n}}^{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\sum \limits_{z=1}^{l}}{\int _{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}^{\frac{k}{n}-{\theta _{z-1}}-\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\int _{\frac{k}{n}-{\theta _{l}}-\delta }^{\frac{k}{n}-{\theta _{l}}-\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\int _{-\infty }^{\frac{k}{n}-{\theta _{l}}-\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & =:{I_{0}}(x)+{\sum \limits_{z=1}^{l}}{I_{1,z}}(x)+{\sum \limits_{z=1}^{l}}{I_{2,z}}(x)+{I_{3}}(x)+{I_{4}}(x).\end{aligned}\]
We treat the five types of terms separately.
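Before doing so, we record one way to justify the modular growth property of ${\varPhi _{p}}$; the constant ${K_{p}}={2}^{\max (2,p)}$ below is one admissible (non-optimal) choice:
\[ {\varPhi _{p}}(x+y)\le {\varPhi _{p}}\big(2\max (|x|,|y|)\big)\le {2}^{\max (2,p)}{\varPhi _{p}}\big(\max (|x|,|y|)\big)\le {2}^{\max (2,p)}\big({\varPhi _{p}}(x)+{\varPhi _{p}}(y)\big),\]
where the first inequality uses that ${\varPhi _{p}}$ is increasing in the absolute value of its argument, and the second follows by checking the three cases $2\max (|x|,|y|)\le 1$, $\max (|x|,|y|)\le 1<2\max (|x|,|y|)$ and $\max (|x|,|y|)>1$ separately.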
Estimation of ${I_{0}}$. By Lemma 3.2
\[ |{g_{k,n}}(s)|\le K|\frac{k}{n}-s{|}^{{\alpha _{0}}}\hspace{1em}\text{for all}\hspace{2.5pt}s\in \big[\frac{k}{n}-\frac{1}{n},\frac{k}{n}\big].\]
Since ${\varPhi _{p}}$ is increasing on ${\mathbb{R}_{+}}$ and $\alpha \le {\alpha _{0}}$ it follows that
\[ {I_{0}}(x)\le K{\int _{0}^{\frac{1}{n}}}{\varPhi _{p}}\big(x{n}^{\alpha +1/p}{s}^{{\alpha _{0}}}\big)\hspace{0.1667em}\text{d}s\le K{\int _{0}^{\frac{1}{n}}}{\varPhi _{p}}\big(x{n}^{\alpha +1/p}{s}^{\alpha }\big)\hspace{0.1667em}\text{d}s.\]
By elementary integration it follows that
\[\begin{aligned}{}& {\int _{0}^{\frac{1}{n}}}|x{n}^{\alpha +1/p}{s}^{\alpha }{|}^{2}{\mathbf{1}_{\{|x{n}^{\alpha +1/p}{s}^{\alpha }|\le 1\}}}\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}\le K\big({x}^{2}{\mathbf{1}_{\{|x|\le {n}^{-1/p}\}}}{n}^{2/p-1}+{\mathbf{1}_{\{|x|>{n}^{-1/p}\}}}|x{|}^{-1/\alpha }{n}^{-1-1/(\alpha p)}\big)\\{} & \hspace{1em}\le K\big({x}^{2}+|x{|}^{p}\big).\end{aligned}\]
The second term in ${\varPhi _{p}}$ is dealt with as follows:
\[ {\int _{0}^{\frac{1}{n}}}|x{n}^{\alpha +1/p}{s}^{\alpha }{|}^{p}{\mathbf{1}_{\{|x{n}^{\alpha +1/p}{s}^{\alpha }|>1\}}}\hspace{0.1667em}\text{d}s\le |x{|}^{p}{n}^{\alpha p+1}{\int _{0}^{\frac{1}{n}}}{s}^{\alpha p}\hspace{0.1667em}\text{d}s=\frac{|x{|}^{p}}{\alpha p+1}.\]
Combining the two estimates above it follows that ${I_{0}}(x)\le K(|x{|}^{2}+|x{|}^{p})$.
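For orientation, the elementary integration behind the first estimate can be organized around the threshold ${s_{\ast }}={(|x|{n}^{\alpha +1/p})}^{-1/\alpha }$ at which the indicator switches (in this sketch we assume $\alpha >0$):
\[ {\int _{0}^{\frac{1}{n}\wedge {s_{\ast }}}}{\big(|x|{n}^{\alpha +1/p}{s}^{\alpha }\big)}^{2}\hspace{0.1667em}\text{d}s=\frac{{x}^{2}{n}^{2(\alpha +1/p)}}{2\alpha +1}{\Big(\frac{1}{n}\wedge {s_{\ast }}\Big)}^{2\alpha +1},\]
which equals $K{x}^{2}{n}^{2/p-1}$ when $|x|\le {n}^{-1/p}$ (so that ${s_{\ast }}\ge 1/n$) and $K|x{|}^{-1/\alpha }{n}^{-1-1/(\alpha p)}$ when $|x|>{n}^{-1/p}$, recovering the two terms in the estimation of ${I_{0}}$.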
Estimation of ${I_{1,z}}$. Similarly to the case of ${I_{0}}$ we have, using arguments as in part (i) of Lemma 3.2, that
\[ |{g_{k,n}}(s)|\le K{\sum \limits_{j=0}^{k}}|\frac{k-j}{n}-s-{\theta _{z}}{|}^{{\alpha _{z}}}\hspace{1em}\text{for all}\hspace{2.5pt}s\in \big[\frac{k}{n}-{\theta _{z}}-\frac{1}{n},\frac{k}{n}-{\theta _{z}}+\frac{1}{n}\big].\]
Using the modular growth of ${\varPhi _{p}}$, together with the fact that ${\varPhi _{p}}$ is increasing on ${\mathbb{R}_{+}}$ and $|s{|}^{{\alpha _{z}}}\le |s{|}^{\alpha }$ for $|s|\le 1$ (recall $\alpha \le {\alpha _{z}}$), it follows that
\[\begin{aligned}{}& {\int _{\frac{k}{n}-{\theta _{z}}-\frac{1}{n}}^{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}ds\\{} & \hspace{1em}\le {K_{p}}{\sum \limits_{j=0}^{k}}{\int _{\frac{k}{n}-{\theta _{z}}-\frac{1}{n}}^{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}|\frac{k-j}{n}-s-{\theta _{z}}{|}^{{\alpha _{z}}}x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}={K_{p}}{\sum \limits_{j=0}^{k}}{\int _{-\frac{j}{n}-\frac{1}{n}}^{-\frac{j}{n}+\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}|s{|}^{{\alpha _{z}}}x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}\le {K_{p}}{\int _{-\frac{k+1}{n}}^{\frac{k+1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}|s{|}^{\alpha }x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}=2{K_{p}}{\int _{0}^{\frac{k+1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}|s{|}^{\alpha }x\big)\hspace{0.1667em}\text{d}s.\end{aligned}\]
As for ${I_{0}}$, we get ${I_{1,z}}(x)\le K(|x{|}^{2}+|x{|}^{p})$.
Estimation of ${I_{2,z}}$. We decompose ${I_{2,z}}$ into three terms corresponding to whether we are close to the singularity ${\theta _{z}}$ from the right or close to the singularity ${\theta _{z-1}}$ from the left or in between them, but bounded away from both. More specifically, we decompose as
\[\begin{aligned}{}{I_{2,z}}(x)& ={\int _{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}^{\frac{k}{n}-{\theta _{z}}+\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\int _{\frac{k}{n}-{\theta _{z}}+\delta }^{\frac{k}{n}-{\theta _{z-1}}-\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\int _{\frac{k}{n}-{\theta _{z-1}}-\delta }^{\frac{k}{n}-{\theta _{z-1}}-\frac{1}{n}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s=:{I_{2,z}^{l}}(x)+{I_{2,z}^{b}}(x)+{I_{2,z}^{r}}(x).\end{aligned}\]
First we note that arguments similar to Lemma 3.2(iii) imply that
\[ |{g_{k,n}}(s)|\le K{n}^{-k}|\frac{k}{n}-s-{\theta _{z}}{|}^{{\alpha _{z}}-k}\hspace{1em}\text{for all}\hspace{2.5pt}s\in \big[\frac{k}{n}-{\theta _{z}}+\frac{1}{n},\frac{k}{n}-{\theta _{z}}+\delta \big]\text{.}\]
Using again that ${\varPhi _{p}}$ is increasing on ${\mathbb{R}_{+}}$ it follows that
\[\begin{aligned}{}{I_{2,z}^{l}}(x)& \le K{\int _{\frac{k}{n}-{\theta _{z}}+\frac{1}{n}}^{\frac{k}{n}-{\theta _{z}}+\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p-k}|\frac{k}{n}-s-{\theta _{z}}{|}^{{\alpha _{z}}-k}x\big)\hspace{0.1667em}\text{d}s\\{} & \le K{\int _{\frac{1}{n}}^{\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p-k}|s{|}^{{\alpha _{z}}-k}x\big)\hspace{0.1667em}\text{d}s.\end{aligned}\]
If ${\alpha _{z}}=k-1/2$ then
\[\begin{aligned}{}{\int _{\frac{1}{n}}^{\delta }}|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}{|}^{2}{\mathbf{1}_{\{|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}|\le 1\}}}\hspace{0.1667em}\text{d}s& \le {x}^{2}{n}^{2(\alpha +1/p-k)}{\int _{\frac{1}{n}}^{\delta }}{s}^{-1}\hspace{0.1667em}\text{d}s\\{} & \le K{x}^{2},\end{aligned}\]
where we used that $\alpha <k-1/p$. For ${\alpha _{z}}\ne k-1/2$ we have that
\[\begin{aligned}{}& {\int _{\frac{1}{n}}^{\delta }}|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}{|}^{2}{\mathbf{1}_{\{|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}|\le 1\}}}\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}\le K\big(|x{|}^{2}{n}^{2(\alpha +1/p-k)}+|x{|}^{2}{n}^{2(\alpha -{\alpha _{z}})+2/p-1}{\mathbf{1}_{\{|x|\le {n}^{-1/p}\}}}\\{} & \hspace{2em}+|x{|}^{\frac{1}{k-{\alpha _{z}}}}{n}^{\frac{\alpha +1/p-k}{k-{\alpha _{z}}}}{\mathbf{1}_{\{|x|>{n}^{-1/p}\}}}\big)\\{} & \hspace{1em}\le K\big({x}^{2}+|x{|}^{p}\big),\end{aligned}\]
where we used that $\alpha \le {\alpha _{z}}<k-1/p$. Moreover,
\[ {\int _{\frac{1}{n}}^{\delta }}|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}{|}^{p}{\mathbf{1}_{\{|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}|>1\}}}\hspace{0.1667em}\text{d}s\le K|x{|}^{p}.\]
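To see the last estimate, note that $p({\alpha _{z}}-k)<-1$ because ${\alpha _{z}}<k-1/p$, so that
\[ {\int _{\frac{1}{n}}^{\delta }}|x{n}^{\alpha +1/p-k}{s}^{{\alpha _{z}}-k}{|}^{p}\hspace{0.1667em}\text{d}s\le K|x{|}^{p}{n}^{p(\alpha +1/p-k)}{n}^{-p({\alpha _{z}}-k)-1}=K|x{|}^{p}{n}^{p(\alpha -{\alpha _{z}})}\le K|x{|}^{p},\]
where the final inequality uses $\alpha \le {\alpha _{z}}$.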
The term ${I_{2,z}^{r}}$ is handled similarly. For the last term ${I_{2,z}^{b}}$ we note that, since we are bounded away from both ${\theta _{z-1}}$ and ${\theta _{z}}$, there exists a constant $K>0$ such that
\[ |{g_{k,n}}(s)|\le K{n}^{-k}\hspace{1em}\text{for all}\hspace{2.5pt}s\in \big[\frac{k}{n}-{\theta _{z}}+\delta ,\frac{k}{n}-{\theta _{z-1}}-\delta \big]\text{.}\]
This readily implies the bound ${I_{2,z}^{b}}(x)\le K({x}^{2}+|x{|}^{p})$.
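In detail, since the integration interval has length at most ${\theta _{z}}-{\theta _{z-1}}$ and ${\varPhi _{p}}(y)\le {y}^{2}+|y{|}^{p}$ for all y, the bound $|{g_{k,n}}(s)|\le K{n}^{-k}$ gives
\[ {I_{2,z}^{b}}(x)\le ({\theta _{z}}-{\theta _{z-1}}){\varPhi _{p}}\big(K{n}^{\alpha +1/p-k}|x|\big)\le K\big({x}^{2}+|x{|}^{p}\big),\]
where we used that ${n}^{\alpha +1/p-k}\le 1$ because $\alpha <k-1/p$.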
Estimation of ${I_{3}}$. Arguments as in Lemma 3.2 imply that
\[ |{g_{k,n}}(s)|\le K{n}^{-k}|\frac{k}{n}-s-{\theta _{l}}{|}^{{\alpha _{l}}-k}\hspace{1em}\text{for all}\hspace{2.5pt}s\in \big[\frac{k}{n}-{\theta _{l}}-\delta ,\frac{k}{n}-{\theta _{l}}-\frac{1}{n}\big]\text{.}\]
One may then proceed as for the term ${I_{2,z}^{l}}$ above to conclude that ${I_{3}}(x)\le K({x}^{2}+|x{|}^{p})$.
Estimation of ${I_{4}}$. First we decompose the integral into two sub-integrals:
\[\begin{aligned}{}{\int _{-\infty }^{\frac{k}{n}-{\theta _{l}}-\delta }}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s& ={\int _{-{\delta ^{\prime }}-{\theta _{l}}}^{\frac{k}{n}-\delta -{\theta _{l}}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}+{\int _{-\infty }^{-{\delta ^{\prime }}-{\theta _{l}}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s.\end{aligned}\]
In the first integral we are bounded away from ${\theta _{l}}$, hence $|{g_{k,n}}(s)|\le K{n}^{-k}$ for all s in the interval $[-{\delta ^{\prime }}-{\theta _{l}},\frac{k}{n}-\delta -{\theta _{l}}]$. For the latter integral note first that by Lemma 3.2(v)
\[ {\int _{-\infty }^{-{\delta ^{\prime }}-{\theta _{l}}}}{\varPhi _{p}}\big({n}^{\alpha +1/p}{g_{k,n}}(s)x\big)\hspace{0.1667em}\text{d}s\le {\int _{-\infty }^{-{\delta ^{\prime }}-{\theta _{l}}}}{\varPhi _{p}}\big({n}^{\alpha +1/p-k}|{g}^{(k)}(-s)|x\big)\hspace{0.1667em}\text{d}s.\]
Now
\[\begin{aligned}{}& {\int _{{\delta ^{\prime }}+{\theta _{l}}}^{\infty }}|x{n}^{\alpha +1/p-k}{g}^{(k)}(s){|}^{2}{\mathbf{1}_{\{|x{n}^{\alpha +1/p-k}{g}^{(k)}(s)|\le 1\}}}\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}\le |x{n}^{\alpha +1/p-k}{|}^{2}{\int _{{\delta ^{\prime }}+{\theta _{l}}}^{\infty }}|{g}^{(k)}(s){|}^{2}\hspace{0.1667em}\text{d}s.\end{aligned}\]
Since $|{g}^{(k)}|$ is decreasing on $({\theta _{l}}+{\delta ^{\prime }},\infty )$ and ${g}^{(k)}\in {L}^{w}(({\theta _{l}}+{\delta ^{\prime }},\infty ))$ for some $w\le 2$ it follows that the last integral is finite. Lastly, we find for $x\in [-1,1]$ that
\[\begin{aligned}{}& {\int _{{\theta _{l}}+{\delta ^{\prime }}}^{\infty }}|x{n}^{\alpha +1/p-k}{g}^{(k)}(s){|}^{p}{\mathbf{1}_{\{|x{n}^{\alpha +1/p-k}{g}^{(k)}(s)|>1\}}}\hspace{0.1667em}\text{d}s\\{} & \hspace{1em}\le |x{|}^{p}{n}^{p(\alpha +1/p-k)}{\int _{{\delta ^{\prime }}+{\theta _{l}}}^{\infty }}|{g}^{(k)}(s){|}^{p}{\mathbf{1}_{\{|{g}^{(k)}(s)|>1\}}}\hspace{0.1667em}\text{d}s.\end{aligned}\]
By our assumptions the last integral is finite: indeed, $|{g}^{(k)}|$ is decreasing on $({\theta _{l}}+{\delta ^{\prime }},\infty )$ and tends to 0 at infinity, so the set $\{s>{\theta _{l}}+{\delta ^{\prime }}:|{g}^{(k)}(s)|>1\}$ is a bounded interval on which $|{g}^{(k)}|$ is bounded. Since moreover ${n}^{p(\alpha +1/p-k)}\le 1$ as $\alpha <k-1/p$, we conclude that ${I_{4}}(x)\le K({x}^{2}+|x{|}^{p})$. Combining the estimates of ${I_{0}}$, ${I_{1,z}}$, ${I_{2,z}}$, ${I_{3}}$ and ${I_{4}}$ yields (3.19), which completes the proof of (3.17).
3.2.1 Negligibility of small jumps
Now, we note that ${X_{t}}-{X_{t}}(m)$ is the integral (1.1), where the integrator is the compound Poisson process corresponding to the big jumps of L. Hence, we obtain the results of Theorem 2.2 for the process $X-X(m)$ as in Section 3.1. More specifically, under the assumptions of Theorem 2.2(i) it holds that
\[ {n_{j}^{\alpha p}}V{\big(X-X(m),p;k\big)_{{n_{j}}}}\stackrel{\mathcal{L}-s}{\to }\sum \limits_{z\in \mathcal{A}}|{c_{z}}{|}^{p}\sum \limits_{r:{T_{r}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{r}}}}{|}^{p}{\mathbf{1}_{\{|\Delta {L_{{T_{r}}}}|>\frac{1}{m}\}}}{V_{r}^{z}}\]
where ${V_{r}^{z}}$ has been defined at (2.5). The term on the right-hand side converges to the limit of Theorem 2.2(i) as $m\to \infty $, since
\[ \sum \limits_{r:{T_{r}}\in [-{\theta _{z}},1-{\theta _{z}}]}|\Delta {L_{{T_{r}}}}{|}^{p}<\infty \hspace{1em}\text{for any}\hspace{2.5pt}p>\beta \text{.}\]
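This is the standard summability property associated with the Blumenthal–Getoor index, which is defined by
\[ \beta =\inf \bigg\{ r\ge 0:{\int _{|x|\le 1}}|x{|}^{r}\hspace{0.1667em}\nu (\text{d}x)<\infty \bigg\},\]
so that ${\sum _{s\in [0,t]}}|\Delta {L_{s}}{|}^{p}<\infty $ almost surely for every $p>\beta $ and $t>0$.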
Finally, using the decomposition $X=(X-X(m))+X(m)$ and letting first ${n_{j}}\to \infty $ and then $m\to \infty $, we deduce the statement of Theorem 2.2 by (3.17) and the inequalities (3.15). This completes the proof. □