1 Introduction
Consider a stationary, infinitely divisible, independently scattered random measure Λ whose Lévy density is denoted by ${v_{0}}$. For some (known) Λ-integrable function $f:{\mathbb{R}^{d}}\to \mathbb{R}$ with compact support, let
(1.1)
\[ X=\{X(t);\hspace{2.5pt}t\in {\mathbb{R}^{d}}\},\hspace{2em}X(t)={\int _{{\mathbb{R}^{d}}}}f(t-x)\Lambda (dx)\]
be the corresponding moving average random field. In our recent preprint [13], we proposed an estimator $\widehat{u{v_{0}}}$ for the function $\mathbb{R}\ni x\mapsto u(x){v_{0}}(x)=(u{v_{0}})(x)$, $u(x)=x$, based on low frequency observations ${(X(j\Delta ))_{j\in W}}$ of X, with $\Delta >0$ and W a finite subset of ${\mathbb{Z}^{d}}$.
A wide class of spatio-temporal processes with the spectral representation (1.1) is provided by the so-called ambit random fields, where a space-time Lévy process serves as integrator. Such processes are used, e.g., to model the growth rate of tumours, where the spatial component describes the angle between the center of the tumour cell and the nearest point at its boundary (cf. [3, 16]). Ambit fields cover quite a number of different processes and fields, including Ornstein–Uhlenbeck type and mixed moving average random fields (cf. [1, 2]). A further interesting application of (1.1) is given in [17], where the author uses infinitely divisible moving average random fields to model claims of natural disaster insurance within different postal code areas.
We point out that there is a large body of literature concerning estimation of the Lévy density ${v_{1}}$ (the Lévy measure, respectively) in the case when X is a Lévy process (cf. [5, 9, 10, 14, 19]). Moreover, in the recent paper [4] the authors provide an estimator for the Lévy density ${v_{0}}$ of the integrator Lévy process $\{{L_{s}}\}$ of a moving average process $X(t)={\textstyle\int _{\mathbb{R}}}f(t-s)d{L_{s}}$, $t\in \mathbb{R}$, which covers the case $d=1$ in (1.1). For a discussion of the differences between our approach and the method provided in [4], we refer to [13] and [18].
In this paper, we investigate asymptotic properties of the linear functional ${L^{2}}(\mathbb{R})\ni v\mapsto {\hat{\mathcal{L}}_{W}}v={\left\langle v,\widehat{u{v_{0}}}\right\rangle _{{L^{2}}(\mathbb{R})}}$ as the sample size $|W|$ tends to infinity. It is motivated by the paper of Nickl and Reiss [20], where the authors provide a Donsker type theorem for the Lévy measure of pure jump Lévy processes. Since our observations are m-dependent, the classical i.i.d. theory does not apply here. Instead, we combine results of Chen and Shao [8] for m-dependent random fields and ideas of Bulinski and Shashkin [7] with exponential inequalities for weakly dependent random fields (see e.g. [15, 11]) in order to prove our limit theorems.
It turns out that under certain regularity assumptions on $u{v_{0}}$, ${\hat{\mathcal{L}}_{W}}v$ is a mean consistent estimator for $\mathcal{L}v={\left\langle v,u{v_{0}}\right\rangle _{{L^{2}}(\mathbb{R})}}$ with a rate of convergence given by $\mathcal{O}(|W{|^{-1/2}})$, for any v that belongs to a subspace $\mathcal{U}$ of ${L^{1}}(\mathbb{R})\cap {L^{2}}(\mathbb{R})$. Moreover, we give conditions such that finite dimensional distributions of the process $\{|W{|^{1/2}}({\hat{\mathcal{L}}_{W}}-\mathcal{L})v;\hspace{2.5pt}v\in \mathcal{U}\}$ are asymptotically Gaussian as $|W|$ is regularly growing to infinity.
From a practical point of view, a naturally arising question is whether a proposed model for ${v_{0}}$ (or equivalently $u{v_{0}}$) is suitable. Knowledge of the asymptotic distribution of $|W{|^{1/2}}({\hat{\mathcal{L}}_{W}}-\mathcal{L})$ can be used to construct tests for different hypotheses, e.g., on regularity assumptions of the model for ${v_{0}}$. Indeed, the scalar product ${\left\langle \hspace{0.1667em}\cdot \hspace{0.1667em},\hspace{0.1667em}\cdot \hspace{0.1667em}\right\rangle _{{L^{2}}(\mathbb{R})}}$ naturally implies that the class $\mathcal{U}$ of test functions grows as $u{v_{0}}$ becomes more regular.
This paper is organized as follows. In Section 2, we give a brief overview of regularly growing sets and infinitely divisible moving average random fields. We further recall some notation and the most frequently used results from [13]. Section 3 is devoted to asymptotic properties of ${\hat{\mathcal{L}}_{W}}$. Here we discuss our regularity assumptions and state the main results of this paper (Theorems 3.7 and 3.12). Section 4 is dedicated to the proofs of our limit theorems. Some of the shorter proofs as well as external results that are frequently used in Section 3 are moved to the Appendix.
2 Preliminaries
2.1 Notation
Throughout this paper, we use the following notation.
By $\mathcal{B}({\mathbb{R}^{d}})$ we denote the Borel σ-field on the Euclidean space ${\mathbb{R}^{d}}$. The Lebesgue measure on ${\mathbb{R}^{d}}$ is denoted by ${\nu _{d}}$ and we shortly write ${\nu _{d}}(dx)=dx$ when we integrate w.r.t. ${\nu _{d}}$. For any measurable space $(M,\mathcal{M},\mu )$ we denote by ${L^{\alpha }}(M)$, $1\le \alpha <\infty $, the space of all $\mathcal{M}|\mathcal{B}(\mathbb{R})$-measurable functions $f:M\to \mathbb{R}$ with ${\textstyle\int _{M}}|f{|^{\alpha }}(x)\mu (dx)<\infty $. Equipped with the norm $||f|{|_{{L^{\alpha }}(M)}}={\left({\textstyle\int _{M}}|f{|^{\alpha }}(x)\mu (dx)\right)^{1/\alpha }}$, ${L^{\alpha }}(M)$ becomes a Banach space and, in the case $\alpha =2$, even a Hilbert space with scalar product ${\left\langle f,g\right\rangle _{{L^{\alpha }}(M)}}={\textstyle\int _{M}}f(x)g(x)\mu (dx)$, for any $f,g\in {L^{2}}(M)$. With ${L^{\infty }}(M)$ (i.e. if $\alpha =\infty $) we denote the space of all real-valued bounded functions on M. In the case $(M,\mathcal{M},\mu )=(\mathbb{R},\mathcal{B}(\mathbb{R}),{\nu _{1}})$ we denote by
\[ {H^{\delta }}(\mathbb{R})=\Big\{f\in {L^{2}}(\mathbb{R}):\hspace{2.5pt}{\int _{\mathbb{R}}}|{\mathcal{F}_{+}}f{|^{2}}(x){(1+{x^{2}})^{\delta }}dx<\infty \Big\}\]
the Sobolev space of order $\delta >0$ equipped with the Sobolev norm $||f|{|_{{H^{\delta }}(\mathbb{R})}}=||{\mathcal{F}_{+}}f(\cdot ){(1+{\cdot ^{2}})^{\delta /2}}|{|_{{L^{2}}(\mathbb{R})}}$, where ${\mathcal{F}_{+}}$ is the Fourier transform on ${L^{2}}(\mathbb{R})$. For $f\in {L^{1}}(\mathbb{R})$, ${\mathcal{F}_{+}}f$ is defined by ${\mathcal{F}_{+}}f(x)={\textstyle\int _{\mathbb{R}}}{e^{itx}}f(t)dt$, $x\in \mathbb{R}$. Throughout the rest of this paper $(\Omega ,\mathcal{A},\mathbb{P})$ denotes a probability space. Note that in this case ${L^{\alpha }}(\Omega )$ is the space of all random variables with finite α-th moment. For an arbitrary set A we introduce the notation $\text{card}(A)$ or briefly $|A|$ for its cardinality. Let $\operatorname{supp}(f)=\{x\in {\mathbb{R}^{d}}:f(x)\ne 0\}$ be the support set of a function $f:{\mathbb{R}^{d}}\to \mathbb{R}$. Denote by $\text{diam}(A)=\sup \{\| x-y{\| _{\infty }}:x,y\in A\}$ the diameter of a bounded set $A\subset {\mathbb{R}^{d}}$.
2.2 Regularly growing sets
In this section, we briefly recall some basic facts about regularly growing sets. For a more detailed investigation on this topic, see, e.g., [7].
Let $a=({a_{1}},\dots ,{a_{d}})\in {\mathbb{R}^{d}}$ be a vector with positive components. In the sequel, we shortly write $a>0$ in this situation. Moreover, let
\[ {\Pi _{0}}(a)=\{x\in {\mathbb{R}^{d}},\hspace{2.5pt}0<{x_{i}}\le {a_{i}},\hspace{2.5pt}i=1,\dots ,d\}\]
and define for any $j\in {\mathbb{Z}^{d}}$ the shifted block ${\Pi _{j}}(a)$ by
\[ {\Pi _{j}}(a)={\Pi _{0}}(a)+ja=\{x\in {\mathbb{R}^{d}},\hspace{2.5pt}{j_{i}}{a_{i}}<{x_{i}}\le ({j_{i}}+1){a_{i}},\hspace{2.5pt}i=1,\dots ,d\}.\]
Clearly, $\{{\Pi _{j}}(a),\hspace{2.5pt}j\in {\mathbb{Z}^{d}}\}$ forms a partition of ${\mathbb{R}^{d}}$. For any $U\subset {\mathbb{Z}^{d}}$, introduce the sets
\[\begin{aligned}{}{J_{-}}(U,a)=\{j\in {\mathbb{Z}^{d}},\hspace{2.5pt}{\Pi _{j}}(a)\subset U\},\hspace{2em}& {J_{+}}(U,a)=\{j\in {\mathbb{Z}^{d}},\hspace{2.5pt}{\Pi _{j}}(a)\cap U\ne \varnothing \}\\ {} {U^{-}}(a)=\bigcup \limits_{j\in {J_{-}}(U,a)}{\Pi _{j}}(a),\hspace{2em}& {U^{+}}(a)=\bigcup \limits_{j\in {J_{+}}(U,a)}{\Pi _{j}}(a).\end{aligned}\]
A sequence of sets ${U_{n}}\subset {\mathbb{R}^{d}}$ ($n\in \mathbb{N}$) tends to infinity in the Van Hove sense, or shortly is VH-growing, if for any $a>0$
\[ {\nu _{d}}({U_{n}^{-}})\to \infty \hspace{1em}\text{and}\hspace{1em}\frac{{\nu _{d}}({U_{n}^{-}})}{{\nu _{d}}({U_{n}^{+}})}\to 1\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
For a finite set $A\subset {\mathbb{Z}^{d}}$, define by $\partial A=\{j\in {\mathbb{Z}^{d}}\setminus A,\hspace{2.5pt}\operatorname{dist}(j,A)=1\}$ its boundary, where $\operatorname{dist}(j,A)=\inf \{\| j-x{\| _{\infty }},\hspace{2.5pt}x\in A\}$. A sequence of finite sets ${A_{n}}\subset {\mathbb{Z}^{d}}$ ($n\in \mathbb{N}$) is called regularly growing (to infinity) if
\[ |{A_{n}}|\to \infty \hspace{1em}\text{and}\hspace{1em}\frac{|\partial {A_{n}}|}{|{A_{n}}|}\to 0\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
Remark 2.1.
Regular growth of a family ${A_{n}}\subset {\mathbb{Z}^{d}}$ means that the number of points in the boundary of ${A_{n}}$ grows significantly slower than the number of its interior points.
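For a quick sanity check, the following minimal sketch (our own illustration, not from the paper) computes the ratio $|\partial {A_{n}}|/|{A_{n}}|$ for the discrete squares ${A_{n}}=\{0,\dots ,n-1\}^{2}\subset {\mathbb{Z}^{2}}$; the boundary is the surrounding frame of $4n+4$ points, so the ratio tends to 0.

```python
# Sketch (not from the paper): regular growth of the squares
# A_n = {0,...,n-1}^2 in Z^2 with the sup-norm boundary of Section 2.2.

def card(n: int) -> int:
    # |A_n|: number of points of the discrete square
    return n * n

def card_boundary(n: int) -> int:
    # |boundary of A_n|: points of Z^2 \ A_n at sup-distance exactly 1,
    # i.e. the frame {-1,...,n}^2 \ {0,...,n-1}^2, which has (n+2)^2 - n^2 points
    return (n + 2) ** 2 - n * n

ratios = [card_boundary(n) / card(n) for n in (10, 100, 1000)]
print(ratios)  # decreasing towards 0
```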
The following result that connects regularly and VH-growing sequences can be found in [7, p.174].
Lemma 2.2.
-
1. Let ${U_{n}}\subset {\mathbb{R}^{d}}$ ($n\in \mathbb{N}$) be VH-growing. Then ${V_{n}}={U_{n}}\cap {\mathbb{Z}^{d}}$ ($n\in \mathbb{N}$) is regularly growing to infinity.
-
2. If ${({U_{n}})_{n\in \mathbb{N}}}$ is a sequence of finite subsets of ${\mathbb{Z}^{d}}$, regularly growing to infinity, then ${V_{n}}={\cup _{j\in {U_{n}}}}[j,j+1)$ is VH-growing, where $[j,j+1)=\{x\in {\mathbb{R}^{d}}:\hspace{2.5pt}{j_{k}}\le {x_{k}}<{j_{k}}+1,\hspace{2.5pt}k=1,\dots ,d\}$.
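The Van Hove condition itself can be checked in the same spirit. The sketch below (our illustration, with an arbitrarily chosen block size a) computes ${\nu _{d}}({U_{n}^{-}}(a))/{\nu _{d}}({U_{n}^{+}}(a))$ for the cubes ${U_{n}}=(0,n]^{2}$, where the inner and outer block unions have volumes ${\textstyle\prod _{i}}\lfloor n/{a_{i}}\rfloor {a_{i}}$ and ${\textstyle\prod _{i}}\lceil n/{a_{i}}\rceil {a_{i}}$.

```python
import math

# Sketch (our illustration): for U_n = (0, n]^2 and block size a = (a_1, a_2),
# U_n^-(a) and U_n^+(a) are the unions of inner / covering blocks Pi_j(a).

def vh_ratio(n: float, a: tuple) -> float:
    inner = math.prod(math.floor(n / ai) * ai for ai in a)  # volume of U_n^-(a)
    outer = math.prod(math.ceil(n / ai) * ai for ai in a)   # volume of U_n^+(a)
    return inner / outer

a = (1.5, 2.0)  # an arbitrary block size in d = 2
print([round(vh_ratio(n, a), 4) for n in (10, 100, 1000)])  # tends to 1
```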
2.3 Infinitely divisible random measures
In what follows, denote by ${\mathcal{E}_{0}}({\mathbb{R}^{d}})$ the collection of all bounded Borel sets in ${\mathbb{R}^{d}}$.
Suppose that $\Lambda =\{\Lambda (A);\hspace{2.5pt}A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})\}$ is an infinitely divisible random measure on some probability space $(\Omega ,\mathcal{A},P)$, i.e. a random measure with the following properties:
-
(a) Let ${({E_{m}})_{m\in \mathbb{N}}}$ be a sequence of disjoint sets in ${\mathcal{E}_{0}}({\mathbb{R}^{d}})$. Then the sequence ${(\Lambda ({E_{m}}))_{m\in \mathbb{N}}}$ consists of independent random variables; if, in addition, ${\cup _{m=1}^{\infty }}{E_{m}}\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})$, then we have $\Lambda ({\cup _{m=1}^{\infty }}{E_{m}})={\textstyle\sum _{m=1}^{\infty }}\Lambda ({E_{m}})$ almost surely.
-
(b) The random variable $\Lambda (A)$ has an infinitely divisible distribution for any choice of $A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})$.
For every $A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})$, let ${\varphi _{\Lambda (A)}}$ denote the characteristic function of the random variable $\Lambda (A)$. Due to the infinite divisibility of $\Lambda (A)$, the characteristic function ${\varphi _{\Lambda (A)}}$ has a Lévy–Khintchin representation which can, in its most general form, be found in [21, p. 456]. Throughout the rest of the paper we make the additional assumption that the Lévy–Khintchin representation of $\Lambda (A)$ is of a special form, namely
\[ {\varphi _{\Lambda (A)}}(t)=\exp \left\{{\nu _{d}}(A)K(t)\right\},\hspace{1em}A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}}),\]
with
(2.1)
\[ K(t)=it{a_{0}}-\frac{1}{2}{t^{2}}{b_{0}}+\underset{\mathbb{R}}{\int }\left({e^{itx}}-1-itx{\mathbb{1}_{[-1,1]}}(x)\right){v_{0}}(x)dx,\]
where ${\nu _{d}}$ denotes the Lebesgue measure on ${\mathbb{R}^{d}}$, ${a_{0}}$ and ${b_{0}}$ are real numbers with $0\le {b_{0}}<\infty $ and ${v_{0}}:\mathbb{R}\to \mathbb{R}$ is a Lévy density, i.e. a measurable function which fulfills ${\textstyle\int _{\mathbb{R}}}\min \{1,{x^{2}}\}{v_{0}}(x)dx<\infty $. The triplet $({a_{0}},{b_{0}},{v_{0}})$ will be referred to as the Lévy characteristic of Λ. It uniquely determines the distribution of Λ. This particular structure of the characteristic functions ${\varphi _{\Lambda (A)}}$ means that the random measure Λ is stationary with the control measure $\lambda :\mathcal{B}({\mathbb{R}^{d}})\to [0,\infty )$ given by
\[ \lambda (A)={\nu _{d}}(A)\Big(|{a_{0}}|+{b_{0}}+{\int _{\mathbb{R}}}\min \{1,{x^{2}}\}{v_{0}}(x)dx\Big).\]
Now one can define the stochastic integral with respect to the infinitely divisible random measure Λ in the following way:
-
1. Let $f={\textstyle\sum _{j=1}^{n}}{x_{j}}{\mathbb{1}_{{A_{j}}}}$ be a real simple function on ${\mathbb{R}^{d}}$, where ${A_{j}}\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})$ are pairwise disjoint. Then for every $A\in \mathcal{B}({\mathbb{R}^{d}})$ we define
\[ {\int _{A}}f(x)\Lambda (dx)={\sum \limits_{j=1}^{n}}{x_{j}}\Lambda ({A_{j}}\cap A).\]
-
2. A measurable function $f:({\mathbb{R}^{d}},\mathcal{B}({\mathbb{R}^{d}}))\to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ is said to be Λ-integrable if there exists a sequence ${({f^{(m)}})_{m\in \mathbb{N}}}$ of simple functions as in (1) such that ${f^{(m)}}\to f$ holds λ-almost everywhere and such that, for each $A\in \mathcal{B}({\mathbb{R}^{d}})$, the sequence ${\left({\textstyle\int _{A}}{f^{(m)}}(x)\Lambda (dx)\right)_{m\in \mathbb{N}}}$ converges in probability as $m\to \infty $. In this case we set
\[ {\int _{A}}f(x)\Lambda (dx)=\underset{m\to \infty }{\lim }{\int _{A}}{f^{(m)}}(x)\Lambda (dx)\hspace{1em}\text{(limit in probability)}.\]
A useful characterization of the Λ-integrability of a function f is given in [21, Theorem 2.7]. Now let $f:{\mathbb{R}^{d}}\to \mathbb{R}$ be Λ-integrable; then the function $f(t-\cdot )$ is Λ-integrable for every $t\in {\mathbb{R}^{d}}$ as well. We define the moving average random field $X=\{X(t),\hspace{2.5pt}t\in {\mathbb{R}^{d}}\}$ by
(2.2)
\[ X(t)=\underset{{\mathbb{R}^{d}}}{\int }f(t-x)\Lambda (dx),\hspace{1em}t\in {\mathbb{R}^{d}}.\]
Recall that a random field is called infinitely divisible if its finite dimensional distributions are infinitely divisible. The random field X above is (strictly) stationary and infinitely divisible, and the characteristic function ${\varphi _{X(0)}}$ of $X(0)$ is given by
\[ {\varphi _{X(0)}}(u)=\exp \left({\int _{{\mathbb{R}^{d}}}}K(uf(s))\hspace{0.2222em}\mathrm{d}s\right),\]
where K is the function from (2.1). The argument ${\textstyle\int _{{\mathbb{R}^{d}}}}K(uf(s))\hspace{0.2222em}\mathrm{d}s$ in the above exponential function can be shown to have a similar structure as $K(t)$; more precisely, we have
(2.3)
\[ {\int _{{\mathbb{R}^{d}}}}K(uf(s))\hspace{0.2222em}\mathrm{d}s=iu{a_{1}}-\frac{1}{2}{u^{2}}{b_{1}}+\underset{\mathbb{R}}{\int }\left({e^{iux}}-1-iux{\mathbb{1}_{[-1,1]}}(x)\right){v_{1}}(x)\hspace{0.2222em}\mathrm{d}x,\]
where ${a_{1}}$ and ${b_{1}}$ are real numbers with ${b_{1}}\ge 0$ and the function ${v_{1}}$ is the Lévy density of $X(0)$. The triplet $({a_{1}},{b_{1}},{v_{1}})$ is again referred to as the Lévy characteristic (of $X(0)$) and determines the distribution of $X(0)$ uniquely. A simple computation shows that the triplet $({a_{1}},{b_{1}},{v_{1}})$ is given by the formulas
(2.4)
\[\begin{aligned}{}& {a_{1}}=\underset{{\mathbb{R}^{d}}}{\int }U(f(s))\hspace{0.2222em}\mathrm{d}s,\hspace{2em}{b_{1}}={b_{0}}\underset{{\mathbb{R}^{d}}}{\int }{f^{2}}(s)\hspace{0.2222em}\mathrm{d}s,\\ {} & {v_{1}}(x)=\underset{\operatorname{supp}(f)}{\int }\frac{1}{|f(s)|}{v_{0}}\left(\frac{x}{f(s)}\right)\hspace{0.2222em}\mathrm{d}s,\end{aligned}\]
where $\operatorname{supp}(f):=\{s\in {\mathbb{R}^{d}}:\hspace{2.5pt}f(s)\ne 0\}$ denotes the support of f and the function U is defined via
\[ U(u)=u\left({a_{0}}+{\int _{\mathbb{R}}}x\left[{\mathbb{1}_{[-1,1]}}(ux)-{\mathbb{1}_{[-1,1]}}(x)\right]{v_{0}}(x)\hspace{0.2222em}\mathrm{d}x\right).\]
Note that the Λ-integrability of f immediately implies that $f\in {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})$. Hence, all integrals above are finite. For details on the theory of infinitely divisible measures and fields we refer the interested reader to [21].
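To make the mixing formula for ${v_{1}}$ in (2.4) concrete, the following sketch (a toy computation of ours, not from the paper) evaluates it for a piecewise-constant kernel f and the Gamma Lévy density ${v_{0}}(x)={x^{-1}}{e^{-x}}{\mathbb{1}_{(0,\infty )}}(x)$, and compares the result with the closed form obtained by hand.

```python
import math

# Sketch (toy computation): v_1 from (2.4) for a piecewise-constant kernel f
# and v_0(x) = x^{-1} e^{-x} on (0, infinity).

def v0(x: float) -> float:
    return math.exp(-x) / x if x > 0 else 0.0

def v1(x: float, pieces) -> float:
    # pieces: (value, Lebesgue measure of {f = value}) pairs describing f
    return sum(length / abs(val) * v0(x / val) for val, length in pieces)

# f = 2 on (0, 1) and f = 3 on [1, 1.5); by hand, (2.4) gives
# v_1(x) = x^{-1} (e^{-x/2} + 0.5 e^{-x/3}) for x > 0.
pieces = [(2.0, 1.0), (3.0, 0.5)]

def v1_closed(x: float) -> float:
    return (math.exp(-x / 2) + 0.5 * math.exp(-x / 3)) / x

print(abs(v1(1.7, pieces) - v1_closed(1.7)) < 1e-12)  # True
```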
2.4 A plug-in estimation approach for ${v_{0}}$
Let the random field $X=\{X(t),\hspace{2.5pt}t\in {\mathbb{R}^{d}}\}$ be given as in Section 2.3 and define the function $u:\mathbb{R}\to \mathbb{R}$ by $u(x)=x$. Suppose further that an estimator $\widehat{u{v_{1}}}$ for $u{v_{1}}$ is given. In our recent preprint [13], we provided an estimation approach for $u{v_{0}}$ based on relation (2.4), which we briefly recall in this section. To this end, a number of notations are required.
Assume that f satisfies the integrability condition
(2.5)
\[ {\int _{\operatorname{supp}(f)}}|f(s){|^{1/2}}ds<\infty \]
and define the operator $\mathcal{G}:{L^{2}}(\mathbb{R})\to {L^{2}}(\mathbb{R})$ by
\[ \mathcal{G}v={\int _{\operatorname{supp}(f)}}\operatorname{sgn}(f(s))v\Big(\frac{\hspace{0.1667em}\cdot \hspace{0.1667em}}{f(s)}\Big)ds,\hspace{1em}v\in {L^{2}}(\mathbb{R}).\]
Moreover, define the isometry $\mathcal{M}:{L^{2}}(\mathbb{R})\to {L^{2}}({\mathbb{R}^{\times }},\frac{dx}{|x|})$ by $(\mathcal{M}v)(x)=|x{|^{1/2}}v(x)$, $x\in {\mathbb{R}^{\times }}$, and let the functions ${m_{f,\pm }}:{\mathbb{R}^{\times }}\to \mathbb{C}$ and ${\mu _{f}}:{\mathbb{R}^{\times }}\to \mathbb{C}$ be given by
\[\begin{aligned}{}{m_{f,+}}(x)& ={\int _{\operatorname{supp}(f)}}\operatorname{sgn}(f(s))|f(s){|^{1/2}}{e^{-ix\log |f(s)|}}ds,\\ {} {m_{f,-}}(x)& ={\int _{\operatorname{supp}(f)}}|f(s){|^{1/2}}{e^{-ix\log |f(s)|}}ds,\\ {} {\mu _{f}}(y)& =\left\{\begin{array}{l@{\hskip10.0pt}l}{m_{f,+}}(\log |y|)\hspace{1em}& \text{if}\hspace{2.5pt}y>0,\\ {} {m_{f,-}}(\log |y|)\hspace{1em}& \text{if}\hspace{2.5pt}y<0.\end{array}\right.\end{aligned}\]
Multiplying both sides in (2.4) by u leads to the equivalent relation
(2.6)
\[ u{v_{1}}=\mathcal{G}(u{v_{0}}).\]
Suppose $u{v_{1}}\in {L^{2}}(\mathbb{R})$ and assume that condition (${\mathbf{U}_{\beta }}$) is satisfied for some $\beta \ge 0$. Then the unique solution $u{v_{0}}\in {L^{2}}(\mathbb{R})$ to equation (2.6) is given by
\[ u{v_{0}}={\mathcal{G}^{-1}}u{v_{1}}={\mathcal{M}^{-1}}{\mathcal{F}_{\times }^{-1}}\Big(\frac{1}{{\mu _{f}}}{\mathcal{F}_{\times }}\mathcal{M}u{v_{1}}\Big),\]
cf. [13, Theorem 3.1]. Based on this relation, the paper [13] provides the estimator
(2.7)
\[ \widehat{u{v_{0}}}={\mathcal{M}^{-1}}{\mathcal{F}_{\times }^{-1}}\Big(\frac{1}{{\mu _{f,n}}}{\mathcal{F}_{\times }}\mathcal{M}\widehat{u{v_{1}}}\Big)=:{\mathcal{G}_{n}^{-1}}\widehat{u{v_{1}}}\]
for $u{v_{0}}$, where ${({a_{n}})_{n\in \mathbb{N}}}\subseteq (0,\infty )$ is an arbitrary sequence, depending on the sample size n, that tends to 0 as $n\to \infty $, and the mapping $\frac{1}{{\mu _{f,n}}}:\mathbb{R}\to \mathbb{C}$ is defined by $\frac{1}{{\mu _{f,n}}}:=\frac{1}{{\mu _{f}}}{\mathbb{1}_{\{|{\mu _{f}}|>{a_{n}}\}}}$. Here ${\mathcal{F}_{\times }}:{L^{2}}({\mathbb{R}^{\times }},\frac{dx}{|x|})\to {L^{2}}({\mathbb{R}^{\times }},\frac{dx}{|x|})$ denotes the Fourier transform on the multiplicative group ${\mathbb{R}^{\times }}$, defined by
\[ ({\mathcal{F}_{\times }}u)(y)={\int _{{\mathbb{R}^{\times }}}}u(x)\hspace{0.2778em}{e^{-i\log |x|\cdot \log |y|}}\cdot {e^{i\pi \delta (x)\delta (y)}}\hspace{0.1667em}\frac{\mathrm{d}x}{|x|},\]
for all $u\in {L^{1}}({\mathbb{R}^{\times }},\frac{dx}{|x|})\cap {L^{2}}({\mathbb{R}^{\times }},\frac{dx}{|x|})$, with $\delta :{\mathbb{R}^{\times }}\to \mathbb{R}$ given by $\delta (x)={\mathbb{1}_{(-\infty ,0)}}(x)$ (cf. [13, Section 2.2]). A more detailed introduction to harmonic analysis on locally compact abelian groups can be found, e.g., in [12].
Remark 2.3.
The linear operator ${\mathcal{G}_{n}^{-1}}:{L^{2}}(\mathbb{R})\to {L^{2}}(\mathbb{R})$ defined in (2.7) is bounded in the operator norm $\| {\mathcal{G}_{n}^{-1}}\| \le \frac{1}{{a_{n}}}$, whereas ${\mathcal{G}^{-1}}$ is unbounded in general.
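For a concrete feel for the symbol ${\mu _{f}}$, the function ${m_{f,-}}$ can be computed in closed form for the exponential kernel $f(s)={e^{-\lambda s}}{\mathbb{1}_{(0,\theta )}}(s)$ appearing again in Example 3.9: since $\log |f(s)|=-\lambda s$, the defining integral is elementary. The sketch below (our computation, with arbitrarily fixed λ, θ) compares a midpoint-rule quadrature with that antiderivative.

```python
import cmath

# Sketch (our computation): m_{f,-}(x) = int_0^theta e^{-lam*s/2} e^{i*x*lam*s} ds
# for f(s) = e^{-lam*s} on (0, theta), since log|f(s)| = -lam*s.

lam, theta = 1.0, 2.0

def m_minus_quad(x: float, steps: int = 100_000) -> complex:
    z = -lam / 2 + 1j * x * lam
    h = theta / steps
    return sum(cmath.exp(z * (k + 0.5) * h) * h for k in range(steps))  # midpoint rule

def m_minus_closed(x: float) -> complex:
    # antiderivative of e^{z s}: (e^{z theta} - 1) / z
    z = -lam / 2 + 1j * x * lam
    return (cmath.exp(z * theta) - 1) / z

print(abs(m_minus_quad(0.7) - m_minus_closed(0.7)) < 1e-8)  # True
```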
2.5 m-dependent random fields
A random field $X=\{X(t),\hspace{2.5pt}t\in T\}$, $T\subseteq {\mathbb{R}^{d}}$, defined on some probability space $(\Omega ,\mathcal{A},\mathbb{P})$ is called m-dependent if for some $m\in \mathbb{N}$ and any finite subsets U and V of T the random vectors ${(X(u))_{u\in U}}$ and ${(X(v))_{v\in V}}$ are independent whenever
\[ \| u-v{\| _{\infty }}>m\]
for all $u={({u_{1}},\dots ,{u_{d}})^{\top }}\in U$ and $v={({v_{1}},\dots ,{v_{d}})^{\top }}\in V$.
Lemma 2.4.
Let the random field X be given by (2.2) and suppose that f has compact support. Then X is m-dependent for any $m\in \mathbb{N}$ with $m>\text{diam}(\operatorname{supp}(f))$.
Proof.
Compactness of $\operatorname{supp}(f)$ implies that $\operatorname{supp}(f(t-\cdot ))$ and $\operatorname{supp}(f(s-\cdot ))$ are disjoint whenever $\| t-s{\| _{\infty }}>\text{diam}(\operatorname{supp}(f))$. Since further Λ is independently scattered and integration in (2.2) is done only on $\operatorname{supp}(f(t-\cdot ))$, $X(t)$ and $X(s)$ are independent for $\| t-s{\| _{\infty }}>\text{diam}(\operatorname{supp}(f))$. □
3 A linear functional for infinitely divisible moving averages
3.1 The setting
Let $\Lambda =\{\Lambda (A),\hspace{2.5pt}A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})\}$ be a stationary infinitely divisible random measure defined on some probability space $(\Omega ,\mathcal{A},\mathbb{P})$ with characteristic triplet $({a_{0}},0,{v_{0}})$, i.e. Λ is purely non-Gaussian. For a known Λ-integrable function $f:{\mathbb{R}^{d}}\to \mathbb{R}$ let $X=\{X(t)={\textstyle\int _{{\mathbb{R}^{d}}}}f(t-x)\Lambda (dx),\hspace{2.5pt}t\in {\mathbb{R}^{d}}\}$ be the infinitely divisible moving average random field defined in Section 2.3.
Fix $\Delta >0$ and suppose X is observed on a regular grid $\Delta {\mathbb{Z}^{d}}=\{j\Delta ,\hspace{2.5pt}j\in {\mathbb{Z}^{d}}\}$ with mesh size Δ, i.e. consider the random field Y given by
(3.1)
\[ Y={({Y_{j}})_{j\in {\mathbb{Z}^{d}}}},\hspace{1em}{Y_{j}}=X(j\Delta ),\hspace{2.5pt}j\in {\mathbb{Z}^{d}}.\]
For a finite subset $W\subset {\mathbb{Z}^{d}}$ let ${({Y_{j}})_{j\in W}}$ be a sample drawn from Y and denote by n the cardinality of W.
Throughout this paper, for any numbers a, $b\ge 0$, we use the notation $a\lesssim b$ if $a\le cb$ for some constant $c>0$.
Assumption 3.1.
Let the function $u:\mathbb{R}\to \mathbb{R}$ be given by $u(x)=x$. We make the following assumptions: for some $\tau >0$
-
1. $f\in {L^{2+\tau }}({\mathbb{R}^{d}})$ has compact support;
-
2. $u{v_{0}}\in {L^{1}}(\mathbb{R})\cap {L^{2}}(\mathbb{R})$ is bounded;
-
3. ${\textstyle\int _{\mathbb{R}}}|x{|^{1+\tau }}|(u{v_{0}})(x)|dx<\infty $;
-
4. $|{\textstyle\int _{\operatorname{supp}(f)}}f(s){\mathcal{F}_{+}}[u{v_{0}}](f(s)x)ds|\lesssim {(1+{x^{2}})^{-1/2}}$ for all $x\in \mathbb{R}$;
Suppose that $\widehat{u{v_{1}}}$ is an estimator for $u{v_{1}}$ (to be defined precisely in the next section) based on the sample ${({Y_{j}})_{j\in W}}$. Then, using the notation of Section 2.4, we introduce the linear functional
\[ {\hat{\mathcal{L}}_{W}}:{L^{2}}(\mathbb{R})\to \mathbb{R},\hspace{1em}{\hat{\mathcal{L}}_{W}}v:={\left\langle v,\widehat{u{v_{0}}}\right\rangle _{{L^{2}}(\mathbb{R})}}={\left\langle v,{\mathcal{G}_{n}^{-1}}\widehat{u{v_{1}}}\right\rangle _{{L^{2}}(\mathbb{R})}}.\]
The purpose of this paper is to investigate asymptotic properties of ${\hat{\mathcal{L}}_{W}}$ as the sample size $|W|=n$ tends to infinity.
3.2 An estimator for $u{v_{1}}$
In this section we introduce an estimator for the function $u{v_{1}}$. To this end, let ψ denote the characteristic function of $X(0)$. Then, by Assumption 3.1, (2), together with formula (2.3), we find that ψ can be rewritten as
(3.3)
\[ \psi (t)=\mathbb{E}{e^{it{Y_{0}}}}=\exp \Big(i\gamma t+{\int _{\mathbb{R}}}({e^{itx}}-1){v_{1}}(x)dx\Big),\hspace{1em}t\in \mathbb{R},\]
for some $\gamma \in \mathbb{R}$ and the Lévy density ${v_{1}}$ given in (2.4). We call γ the drift parameter, or shortly the drift, of X. As a consequence of representation (3.3), the random field X is purely non-Gaussian. It is subsequently assumed that the drift γ is known.
Taking derivatives in (3.3) leads to the identity
\[ -i\frac{{\psi ^{\prime }}(t)}{\psi (t)}=\gamma +{\mathcal{F}_{+}}[u{v_{1}}](t),\hspace{1em}t\in \mathbb{R}.\]
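This identity can be checked numerically in a concrete case (a toy example of ours, with $\gamma =0$): for the Gamma density ${v_{1}}(x)={x^{-1}}{e^{-x}}{\mathbb{1}_{(0,\infty )}}(x)$ one has $u{v_{1}}(x)={e^{-x}}{\mathbb{1}_{(0,\infty )}}(x)$, ${\mathcal{F}_{+}}[u{v_{1}}](t)=1/(1-it)$ and, by Frullani's integral, $\psi (t)=1/(1-it)$, so that indeed $-i{\psi ^{\prime }}(t)/\psi (t)=1/(1-it)={\mathcal{F}_{+}}[u{v_{1}}](t)$.

```python
import cmath, math

# Sketch (toy example, gamma = 0): for v_1(x) = x^{-1} e^{-x} on (0, infinity),
# both psi(t) = exp(int (e^{itx} - 1) v_1(x) dx) and F_+[u v_1](t) equal
# 1/(1 - it), hence -i psi'(t)/psi(t) = F_+[u v_1](t). Midpoint-rule quadrature.

def psi(t: float, upper: float = 60.0, steps: int = 60_000) -> complex:
    h = upper / steps
    acc = sum((cmath.exp(1j * t * x) - 1.0) * math.exp(-x) / x * h
              for x in ((k + 0.5) * h for k in range(steps)))
    return cmath.exp(acc)

def fourier_uv1(t: float, upper: float = 60.0, steps: int = 60_000) -> complex:
    h = upper / steps
    return sum(cmath.exp((1j * t - 1.0) * (k + 0.5) * h) * h for k in range(steps))

t = 0.9
print(abs(psi(t) - 1 / (1 - 1j * t)) < 1e-4,
      abs(fourier_uv1(t) - 1 / (1 - 1j * t)) < 1e-4)  # True True
```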
Neglecting γ for the moment, this relation suggests that a natural estimator $\widehat{{\mathcal{F}_{+}}[u{v_{1}}]}$ for ${\mathcal{F}_{+}}[u{v_{1}}]$ is given by
\[ \widehat{{\mathcal{F}_{+}}[u{v_{1}}]}(t)=\frac{\hat{\theta }(t)}{\tilde{\psi }(t)},\hspace{1em}t\in \mathbb{R},\]
with
\[ \frac{1}{\tilde{\psi }(t)}=\frac{1}{\hat{\psi }(t)}{\mathbb{1}_{\{|\hat{\psi }(t)|>{n^{-1/2}}\}}},\hspace{1em}t\in \mathbb{R},\]
and $\hat{\psi }(t)=\frac{1}{n}{\textstyle\sum _{j\in W}}{e^{it{Y_{j}}}}$, $\hat{\theta }(t)=\frac{1}{n}{\textstyle\sum _{j\in W}}{Y_{j}}{e^{it{Y_{j}}}}$ being the empirical counterparts of ψ and $\theta =-i{\psi ^{\prime }}$.
Now, consider for any $b>0$ a function ${K_{b}}:\mathbb{R}\to \mathbb{R}$ with the following properties:
Then, for any $b>0$, we define the estimator $\widehat{u{v_{1}}}$ for $u{v_{1}}$ by
(3.4)
\[ \widehat{u{v_{1}}}(t)={\mathcal{F}_{+}^{-1}}\Big[\widehat{{\mathcal{F}_{+}}[u{v_{1}}]}{\mathcal{F}_{+}}[{K_{b}}]\Big](t)=\frac{1}{2\pi }{\int _{\mathbb{R}}}{e^{-itx}}\frac{\hat{\theta }(x)}{\tilde{\psi }(x)}{\mathcal{F}_{+}}[{K_{b}}](x)dx,\hspace{1em}t\in \mathbb{R}.\]
Remark 3.2.
-
(a) If $\widehat{u{v_{1}}}$ is a consistent estimator for $u{v_{1}}$, it is reasonable to assume that $\gamma =0$ (cf. [18]). Indeed, for the asymptotic results below, the value of γ is irrelevant. Even if $\gamma \ne 0$, the functional ${\hat{\mathcal{L}}_{W}}$ estimates the intended quantity with $\widehat{u{v_{1}}}$ given in (3.4) (cf. Section 4.3).
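The construction of $\widehat{u{v_{1}}}$ can be sketched in code. The following schematic implementation uses choices that are entirely ours and not from the paper: a Gamma(2,1) sample in place of genuine field observations, the spectral cutoff ${\mathcal{F}_{+}}[{K_{b}}]={\mathbb{1}_{[-1/b,1/b]}}$ as kernel, and arbitrary tuning values.

```python
import cmath, math, random

# Schematic sketch of the estimator (3.4); sample, kernel and bandwidth are
# illustrative choices of ours. For Y_j ~ Gamma(2, 1) the Lévy density is
# v_1(x) = 2 x^{-1} e^{-x} on (0, infinity), i.e. u v_1(x) = 2 e^{-x}.

random.seed(1)
n = 500
Y = [random.gammavariate(2.0, 1.0) for _ in range(n)]

def psi_hat(t: float) -> complex:
    # empirical characteristic function
    return sum(cmath.exp(1j * t * y) for y in Y) / n

def theta_hat(t: float) -> complex:
    # empirical counterpart of -i psi'
    return sum(y * cmath.exp(1j * t * y) for y in Y) / n

def uv1_hat(t: float, b: float = 0.5, grid: int = 400) -> float:
    # inverse Fourier transform over [-1/b, 1/b] (cutoff kernel), with the
    # truncation 1{|psi_hat| > n^{-1/2}} from the definition of 1/psi_tilde
    h = 2 / b / grid
    acc = 0.0 + 0.0j
    for k in range(grid):
        x = -1 / b + (k + 0.5) * h
        p = psi_hat(x)
        if abs(p) > n ** (-0.5):
            acc += cmath.exp(-1j * t * x) * theta_hat(x) / p * h
    return (acc / (2 * math.pi)).real

print(uv1_hat(1.0))  # compare with u v_1(1) = 2/e ~ 0.736
```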
3.3 Discussion and examples
In order to explain Assumption 3.1, we prepend the following proposition, whose proof can be found in the Appendix.
Proposition 3.3.
Let the infinitely divisible moving average random field $X=\{X(t),t\in {\mathbb{R}^{d}}\}$ be given as above and suppose $u(x)=x$.
-
(a) Let Assumption 3.1, (1) and (2) be satisfied. Then $u{v_{1}}\in {L^{1}}(\mathbb{R})\cap {L^{2}}(\mathbb{R})$ is bounded. Moreover,
(3.5)
\[ {\mathcal{F}_{+}}[u{v_{1}}](x)={\int _{\operatorname{supp}(f)}}f(s){\mathcal{F}_{+}}[u{v_{0}}](f(s)x)ds,\hspace{1em}\textit{for all}\hspace{2.5pt}x\in \mathbb{R}.\] -
(b) Let Assumption 3.1, (1) and (3) hold true. Then ${\textstyle\int _{\mathbb{R}}}|x{|^{2+\tau }}|(u{v_{1}})(x)|dx<\infty $ (also in the case when $\tau =0$).
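Identity (3.5) can be verified numerically in the Gamma setting of Example 3.4 with the toy kernel $f=2\cdot {\mathbb{1}_{(0,1)}}$ (our choice): then $u{v_{0}}(x)={e^{-bx}}{\mathbb{1}_{(0,\infty )}}(x)$, $u{v_{1}}(x)={e^{-bx/2}}{\mathbb{1}_{(0,\infty )}}(x)$, and both sides of (3.5) equal $2/(b-2ix)$.

```python
import cmath

# Sketch (toy check of (3.5)): u v_0(x) = e^{-bx} on (0, infinity), f = 2 on
# (0, 1), so F_+[u v_0](y) = 1/(b - iy) and u v_1(x) = e^{-bx/2} on (0, infinity).

b = 1.0

def lhs(x: float, upper: float = 80.0, steps: int = 80_000) -> complex:
    # F_+[u v_1](x) by midpoint quadrature
    h = upper / steps
    return sum(cmath.exp((1j * x - b / 2) * (k + 0.5) * h) * h
               for k in range(steps))

def rhs(x: float) -> complex:
    # int_{supp f} f(s) F_+[u v_0](f(s) x) ds = 2 / (b - 2ix)
    return 2.0 / (b - 2j * x)

print(abs(lhs(0.8) - rhs(0.8)) < 1e-4)  # True
```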
The compact support property in Assumption 3.1, (1) ensures that the random field ${({Y_{j}})_{j\in {\mathbb{Z}^{d}}}}$ is m-dependent with $m>{\Delta ^{-1}}\text{diam}(\operatorname{supp}(f))$ (cf. Lemma 2.4). In particular, m increases when the grid size Δ of the sample is decreasing. Moreover, compact support of f together with $f\in {L^{2+\tau }}({\mathbb{R}^{d}})$ implies that $f\in {L^{q}}({\mathbb{R}^{d}})$ for all $0<q\le 2+\tau $. Consequently, f fulfills the integrability condition (2.5). In contrast, if f does not have compact support, the Λ-integrability only ensures $f\in {L^{2}}({\mathbb{R}^{d}})$.
Assumption 3.1, (3) is a moment assumption on Λ. More precisely, it is satisfied if and only if
\[ \mathbb{E}|\Lambda (A){|^{2+\tau }}<\infty \]
for all $A\in {\mathcal{E}_{0}}({\mathbb{R}^{d}})$, cf. [22]. By Proposition 3.3, (b), this assumption also implies $\mathbb{E}|X(0){|^{2+\tau }}<\infty $ in our setting.
As a consequence of Proposition 3.3, (a) and (c), Assumption 3.1, (4) ensures regularity of $u{v_{1}}$ whereas (5) yields the polynomial decay of ψ. It was shown in [13, Theorem 3.10] that ψ and $u{v_{1}}$ are connected via the relation
\[ |\psi (x)|=\exp \Big(-{\int _{0}^{x}}\text{Im}\big({\mathcal{F}_{+}}[u{v_{1}}](y)\big)dy\Big),\hspace{1em}x\in \mathbb{R};\]
hence, more regularity of $u{v_{1}}$ ensures slower decay rates for $|\psi (x)|$ as $x\to \pm \infty $. Further results on the polynomial decay of infinitely divisible characteristic functions as well as sufficient conditions for this property to hold can be found in [23].
Let us give some examples of Λ and f satisfying Assumption 3.1, (1)–(5).
Example 3.4 (Gamma random measure).
Fix $b>0$ and let, for any $x\in \mathbb{R}$, ${v_{0}}(x)={x^{-1}}{e^{-bx}}{\mathbb{1}_{(0,\infty )}}(x)$. Clearly, Assumption 3.1, (2) and (3) are satisfied for any $\tau >0$. The Fourier transform of $u{v_{0}}$ is given by ${\mathcal{F}_{+}}[u{v_{0}}](x)={(b-ix)^{-1}}$, $x\in \mathbb{R}$; hence
\[ {\int _{\operatorname{supp}(f)}}f(s){\mathcal{F}_{+}}[u{v_{0}}](f(s)x)ds={\int _{\operatorname{supp}(f)}}\frac{f(s)}{b-if(s)x}ds,\hspace{1em}x\in \mathbb{R}.\]
The latter identity shows that Assumption 3.1, (4) holds true for any integrable f with a compact support. Moreover, a simple calculation yields that for any $x\in \mathbb{R}$, Assumption 3.1, (5) becomes
(3.6)
\[ {\int _{\mathbb{R}}}{(1+{x^{2}})^{-1+\varepsilon }}\exp \Big({\int _{\operatorname{supp}(f)}}\log \big(1+\frac{{x^{2}}{f^{2}}(s)}{b}\big)ds\Big)dx<\infty .\]
This condition is fulfilled for any $\varepsilon <\frac{1}{2}-\alpha $ if f is bounded and $\alpha :={\nu _{d}}(\operatorname{supp}(f))<\frac{1}{2}$.
3.4 Consistency of ${\hat{\mathcal{L}}_{W}}$
In this section, we give an upper bound for the estimation error $\mathbb{E}|{\hat{\mathcal{L}}_{W}}v-\mathcal{L}v|$ that allows us to derive conditions under which ${\hat{\mathcal{L}}_{W}}$ is consistent for the linear functional $\mathcal{L}:{L^{2}}(\mathbb{R})\to \mathbb{R}$ given by
\[ \mathcal{L}v={\left\langle v,u{v_{0}}\right\rangle _{{L^{2}}(\mathbb{R})}},\hspace{1em}v\in {L^{2}}(\mathbb{R}).\]
With the notations from Section 2.4, the adjoint operator ${\mathcal{G}^{-1\ast }}:\text{Image}(\mathcal{G})\to {L^{2}}(\mathbb{R})$ of ${\mathcal{G}^{-1}}$ is given by
(3.7)
\[ {\mathcal{G}^{-1\ast }}v={\mathcal{M}^{-1}}{\mathcal{F}_{\times }^{-1}}\Big(\frac{1}{{\bar{\mu }_{f}}}{\mathcal{F}_{\times }}\mathcal{M}v\Big),\hspace{1em}v\in \text{Image}(\mathcal{G}),\]
where ${\bar{\mu }_{f}}$ denotes the complex conjugate of ${\mu _{f}}$. Moreover, the adjoint ${\mathcal{G}_{n}^{-1\ast }}:{L^{2}}(\mathbb{R})\to {L^{2}}(\mathbb{R})$ of ${\mathcal{G}_{n}^{-1}}$ is given by
\[ {\mathcal{G}_{n}^{-1\ast }}v={\mathcal{M}^{-1}}{\mathcal{F}_{\times }^{-1}}\Big(\frac{1}{{\bar{\mu }_{f,n}}}{\mathcal{F}_{\times }}\mathcal{M}v\Big),\hspace{1em}v\in {L^{2}}(\mathbb{R}),\]
with $\frac{1}{{\bar{\mu }_{f,n}}}=\frac{1}{{\bar{\mu }_{f}}}{\mathbb{1}_{\{|{\bar{\mu }_{f}}|>{a_{n}}\}}}$. Notice that ${\mathcal{G}_{n}^{-1\ast }}$ is a bounded operator whereas ${\mathcal{G}^{-1\ast }}$ is unbounded in general.
Remark 3.5.
Notice that ${\mathcal{G}_{n}^{-1\ast }}={\mathcal{G}^{-1\ast }}$ if ${a_{n}}=0$ for all $n\in \mathbb{N}$. Hence, in this case ${\mathcal{G}_{n}^{-1\ast }}\widehat{u{v_{1}}}$ is well-defined only if $\widehat{u{v_{1}}}$ is an element of $\text{Image}({\mathcal{G}^{\ast }})$, which is indeed a very restrictive assumption. For a detailed discussion we refer to [13].
With the previous notations we now derive an upper bound for $\mathbb{E}|{\hat{\mathcal{L}}_{W}}v-\mathcal{L}v|$. To this end, recall condition (${\mathbf{U}_{\beta }}$) from Section 2.4.
Lemma 3.6.
Let $\gamma =0$ and suppose Assumption 3.1, (1)–(3) hold true for some $\tau \ge 0$. Moreover, let condition $({\mathbf{U}_{\beta }})$ be satisfied for some $\beta \ge 0$ and assume that ${K_{b}}:\mathbb{R}\to \mathbb{R}$ is a function with properties (K1)–(K3). Then
(3.8)
\[\begin{aligned}{}\mathbb{E}|{\hat{\mathcal{L}}_{W}}v-\mathcal{L}v|\le & \frac{S}{\sqrt{\pi }}\mathbb{E}|{Y_{0}}|{\Big(\frac{n}{b}\Big)^{1/2}}\| \big({\mathcal{G}_{n}^{-1\ast }}-{\mathcal{G}^{-1\ast }}\big)v{\| _{{L^{2}}(\mathbb{R})}}\\ {} & +\frac{1}{2\pi }{\left\langle |{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]|,|{\mathcal{F}_{+}}[u{v_{1}}]||1-{\mathcal{F}_{+}}[{K_{b}}]|\right\rangle _{{L^{2}}(\mathbb{R})}}\\ {} & +\frac{c\cdot S}{2\pi \sqrt{n}}\Big(\sqrt{\mathbb{E}|{Y_{0}}{|^{2}}}+\| u{v_{1}}{\| _{{L^{1}}(\mathbb{R})}}\Big){\int _{\mathbb{R}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]|(x)}{|\psi (x)|}dx\end{aligned}\]
for any $v\in \text{Image}(\mathcal{G})$ such that ${\textstyle\int _{\mathbb{R}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}dx<\infty $, where $c>0$ is some constant and $S:={\sup _{b>0,\hspace{2.5pt}x\in \mathbb{R}}}|{\mathcal{F}_{+}}[{K_{b}}](x)|$.
Theorem 3.7.
Fix $\gamma \in \mathbb{R}$. Suppose that condition $({\mathbf{U}_{{\beta _{1}}}})$ is satisfied for some ${\beta _{1}}\ge 0$ and let $v\in {L^{2}}(\mathbb{R})$ be such that ${\mathcal{G}^{-1\ast }}v\in {H^{1}}(\mathbb{R})$, $\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }\in {L^{1}}(\mathbb{R})$ and
(3.9)
\[ \displaystyle (\mathcal{M}v)(\exp (\hspace{0.1667em}\cdot \hspace{0.1667em})),\hspace{2.5pt}(\mathcal{M}v)(-\exp (\hspace{0.1667em}\cdot \hspace{0.1667em}))\in {H^{{\beta _{2}}}}(\mathbb{R})\]
for some ${\beta _{2}}>{\beta _{1}}$. Moreover, let $a={a_{n}}$ and $b={b_{n}}$ be sequences with the properties
\[ {a_{n}}\to 0,\hspace{1em}{b_{n}}\to 0\hspace{1em}\textit{and}\hspace{1em}{a_{n}}=o\Big({\Big(\frac{n}{{b_{n}}}\Big)^{\frac{{\beta _{1}}}{2({\beta _{1}}-{\beta _{2}})}}}\Big),\hspace{1em}\textit{as}\hspace{2.5pt}n\to \infty ,\]
and assume that conditions (K1)–(K3) are fulfilled. Then, under Assumption 3.1, (1)–(4), $\mathbb{E}|{\hat{\mathcal{L}}_{W}}v-\mathcal{L}v|\to 0$ as $n\to \infty $ with the order of convergence given by
Remark 3.8.
-
(a) Notice that condition $({\mathbf{U}_{\beta }})$ ensures uniqueness of $u{v_{0}}\in {L^{2}}(\mathbb{R})$ as a solution of $\mathcal{G}u{v_{0}}=u{v_{1}}$. In Lemma 3.6, it can be replaced by the more general assumption ${m_{f,\pm }}\ne 0$ almost everywhere on $\mathbb{R}$. Moreover, condition (K3) can be replaced by ${\sup _{b>0,\hspace{2.5pt}x\in \mathbb{R}}}|{\mathcal{F}_{+}}[{K_{b}}](x)|<\infty $ in Lemma 3.6.
-
(c) The condition ${\mathcal{G}^{-1\ast }}v\in {H^{1}}(\mathbb{R})$ in Theorem 3.7 can be dropped if $\gamma =0$.
-
(d) Under the conditions of Theorem 3.7, the convergence rate of $\mathbb{E}|{\hat{\mathcal{L}}_{W}}v-\mathcal{L}v|\to 0$ is at least $\mathcal{O}({n^{-1/2}})$ as $n\to \infty $, provided that
We close this section with the following example, showing that the functions ${g_{t}}$ considered in [20, p. 3309] may belong to the range of ${\mathcal{G}^{-1\ast }}$.
Example 3.9.
Fix $t>0$ and let $v(x)=\frac{1}{x}{\mathbb{1}_{\mathbb{R}\setminus [-t,t]}}(x)$, $x\in \mathbb{R}$. Clearly, $v\in {L^{2}}(\mathbb{R})$ fulfills condition (3.9) for any ${\beta _{2}}>0$. For some fixed $\lambda ,\theta >0$, let $f(s)={e^{-\lambda s}}{\mathbb{1}_{(0,\theta )}}(s)$, $s\in \mathbb{R}$. Then a simple computation shows that $({\mathbf{U}_{{\beta _{1}}}})$ is satisfied with ${\beta _{1}}=1$. Moreover, for all $x\ne 0$,
\[ ({\mathcal{G}^{-1\ast }}v)(x)=\frac{1}{2x}\log \Big(\frac{|x|}{t}\Big){\mathbb{1}_{(t,t{e^{\lambda \theta }}]}}(|x|)+\frac{\lambda \theta }{2x}{\mathbb{1}_{(t{e^{\lambda \theta }},\infty )}}(|x|);\]
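As a small sanity check of the displayed piecewise expression, one can verify numerically that its two branches agree at the matching point $|x|=t{e^{\lambda \theta }}$, so the function is continuous there; the parameter values below are arbitrary illustrative choices, not part of the example.

```python
import math

def g(x, t=1.0, lam=0.5, theta=2.0):
    """Piecewise formula for (G^{-1*} v)(x) from Example 3.9 (toy parameters)."""
    ax = abs(x)
    if t < ax <= t * math.exp(lam * theta):
        return math.log(ax / t) / (2 * x)
    if ax > t * math.exp(lam * theta):
        return lam * theta / (2 * x)
    return 0.0

t, lam, theta = 1.0, 0.5, 2.0
edge = t * math.exp(lam * theta)        # the matching point |x| = t e^{lambda*theta}
left = math.log(edge / t) / (2 * edge)  # first branch evaluated at the edge
right = lam * theta / (2 * edge)        # second branch evaluated at the edge
assert abs(left - right) < 1e-12        # both branches give lambda*theta/(2x)
```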
hence, ${\mathcal{G}^{-1\ast }}v\in {H^{1}}(\mathbb{R})$. Since
\[ \Big\| \frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }{\Big\| _{{L^{1}}(\mathbb{R})}}\le \| {\mathcal{G}^{-1\ast }}v{\| _{{H^{1}}(\mathbb{R})}}\Big\| \frac{{(1+\hspace{0.1667em}\cdot {\hspace{0.1667em}^{2}})^{-\frac{1+\varepsilon }{2}}}}{\psi }{\Big\| _{{L^{2}}(\mathbb{R})}},\]
any random measure Λ satisfying Assumption 3.1, (5) yields $\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }\in {L^{1}}(\mathbb{R})$ (cf. Proposition 3.3, (c)).
3.5 A central limit theorem for ${\hat{\mathcal{L}}_{W}}$
Provided the assumptions of Theorem 3.7 are satisfied, the normalized error
\[ {\operatorname{err}_{W}}(v):=\sqrt{n}\hspace{2.5pt}\big({\hat{\mathcal{L}}_{W}}v-\mathcal{L}v\big)\]
is bounded in mean. In this section, we give conditions under which ${\operatorname{err}_{W}}(v)$ is asymptotically Gaussian. For this purpose, we introduce the following notation.
Definition 3.10.
Let Assumption 3.1 be satisfied and suppose that condition $({\mathbf{U}_{{\beta _{1}}}})$ is fulfilled for some ${\beta _{1}}>0$. A function $v\in {L^{2}}(\mathbb{R})$ is called admissible of index $(\xi ,{\beta _{2}})$ if
-
(i) ${\mathcal{G}^{-1\ast }}v\in {H^{\frac{3}{2}-\varepsilon }}(\mathbb{R})$,
-
(ii) $(\mathcal{M}v)(\exp (\hspace{0.1667em}\cdot \hspace{0.1667em}))$, $(\mathcal{M}v)(-\exp (\hspace{0.1667em}\cdot \hspace{0.1667em}))\in {H^{{\beta _{2}}}}(\mathbb{R})$ for some ${\beta _{2}}>{\beta _{1}}$ and
-
(iii) $|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\lesssim {(1+{x^{2}})^{-\xi /2}}$ for some $\xi >2(1-\varepsilon )-\Big(\frac{1}{2}-\varepsilon \Big)\frac{1+\tau }{2+\tau }$.
The linear subspace of all admissible functions of index $(\xi ,{\beta _{2}})$ is denoted by $\mathcal{U}(\xi ,{\beta _{2}})$.
Remark 3.11.
-
(a) The parameters ε and τ determine the size of $\mathcal{U}(\xi ,{\beta _{2}})$: larger values of ε and τ enlarge the set of admissible functions, while smaller values shrink it.
-
(c) Clearly, the lower bound for ξ in Definition 3.10, (iii) can be replaced by $\xi >\frac{7}{4}-\frac{3}{2}\varepsilon $. Nevertheless, since our purpose is to point out the influence of τ on the set of admissible functions, we do not use this simplification.
-
(d) It immediately follows from formula (3.7) that ${\mathcal{G}^{-1\ast }}v\in {H^{\delta }}(\mathbb{R})$ if and only if ${\mathcal{G}^{-1}}v\in {H^{\delta }}(\mathbb{R})$.
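As a quick arithmetic cross-check of Remark 3.11, (c): the lower bound for ξ from Definition 3.10, (iii) equals $\frac{7}{4}-\frac{3}{2}\varepsilon $ at $\tau =0$ and decreases towards $\frac{3}{2}-\varepsilon $ as $\tau \to \infty $, so larger τ admits more functions. A two-line sketch (the numerical value of ε is an arbitrary choice):

```python
# Lower bound for xi from Definition 3.10, (iii): 2(1-eps) - (1/2 - eps)*(1+tau)/(2+tau)
def xi_bound(eps, tau):
    return 2 * (1 - eps) - (0.5 - eps) * (1 + tau) / (2 + tau)

eps = 0.1
# At tau = 0 the bound reduces to 7/4 - (3/2)*eps, as stated in Remark 3.11, (c).
assert abs(xi_bound(eps, 0.0) - (7 / 4 - 1.5 * eps)) < 1e-12
# As tau grows, the bound decreases towards 3/2 - eps (here eps < 1/2).
assert abs(xi_bound(eps, 1e12) - (1.5 - eps)) < 1e-6
assert xi_bound(eps, 5.0) < xi_bound(eps, 0.0)
```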
For any $j\in W$ and any admissible function $v\in \mathcal{U}(\xi ,{\beta _{2}})$, introduce the random variables
\[\begin{aligned}{}{Z_{j,v}^{(1)}}& =\frac{1}{2\pi }{Y_{j}}{\mathcal{F}_{+}}\Big[\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](-\hspace{0.1667em}\cdot \hspace{0.1667em})}{\psi (\hspace{0.1667em}\cdot \hspace{0.1667em})}\Big]({Y_{j}})\hspace{1em}\text{and}\\ {} {Z_{j,v}^{(2)}}& =\frac{i}{2\pi }{\mathcal{F}_{+}}\Big[{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](-\hspace{0.1667em}\cdot \hspace{0.1667em}){\Big(\frac{1}{\psi }\Big)^{\prime }}\Big]({Y_{j}}).\end{aligned}\]
In the sequel, it is assumed that the random field Y introduced in (3.1) is observed on a sequence ${({W_{k}})_{k\in \mathbb{N}}}$ of regularly growing observation windows (cf. Section 2.2). To avoid cumbersome notation, we drop the index k and simply write W instead of ${W_{k}}$. Moreover, we denote by n $(=n(k))$ the cardinality of W. With the previous notation, we can now formulate the main result of this section.
Theorem 3.12.
Fix $m\in \mathbb{N}$, $m>{\Delta ^{-1}}\text{diam}(\operatorname{supp}(f))$. Let Assumption 3.1 be satisfied and suppose that conditions (K1)–(K3) are fulfilled. Moreover, for some $\eta >0$, let the sequences ${a_{n}}$ and ${b_{n}}$ be given by
\[ {a_{n}}=o\Big({\Big(\frac{n}{\sqrt{{b_{n}}}}\Big)^{\frac{{\beta _{1}}}{{\beta _{1}}-{\beta _{2}}}}}\Big)\hspace{1em}\textit{and}\hspace{1em}{b_{n}}\approx {n^{-\frac{1}{1-2\varepsilon }}}{(\log n)^{\eta +\frac{1}{1-2\varepsilon }}},\hspace{1em}\textit{as}\hspace{2.5pt}n\to \infty .\]
Then, as W is regularly growing to infinity,
\[ {\operatorname{err}_{W}}(v)=\sqrt{n}\hspace{2.5pt}\big({\hat{\mathcal{L}}_{W}}v-\mathcal{L}v\big)\stackrel{d}{\to }{N_{v}},\]
for any admissible function $v\in \mathcal{U}(\xi ,{\beta _{2}})$, where ${N_{v}}$ is a Gaussian random variable with zero expectation and variance given by
A proof of Theorem 3.12 can be found in Section 4.
Remark 3.13.
Unfortunately, we could not provide a rate for the convergence ${\operatorname{err}_{W}}(v)\stackrel{d}{\to }{N_{v}}$ in Theorem 3.12. To obtain such a rate, it would be sufficient to provide, e.g., ${L^{1}}(\Omega ,\mathbb{P})$-rates for the convergences ${\sup _{x}}|\hat{\psi }(x)-\psi (x)|\to 0$ and ${\sup _{x}}|\hat{\theta }(x)-\theta (x)|\to 0$ (as $|W|\to \infty $), which seems to be a hard problem in the dependent observations setting.
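To see what is at stake in such uniform rates, here is a minimal i.i.d. toy sketch (illustration only; the paper's difficulty is precisely the dependent case): for a standard normal sample, $\psi (x)={e^{-{x^{2}}/2}}$ is known in closed form, and the grid-approximated sup-distance ${\sup _{x}}|\hat{\psi }(x)-\psi (x)|$ visibly shrinks as the sample size grows. Grid range and sample sizes are arbitrary choices.

```python
import cmath, math, random

def ecf_sup_error(n, grid, rng):
    """sup over a grid of |psi_hat(x) - psi(x)| for an i.i.d. N(0,1) sample."""
    y = [rng.gauss(0.0, 1.0) for _ in range(n)]
    err = 0.0
    for x in grid:
        psi_hat = sum(cmath.exp(1j * x * yj) for yj in y) / n  # empirical char. function
        psi = math.exp(-x * x / 2)                             # char. function of N(0,1)
        err = max(err, abs(psi_hat - psi))
    return err

rng = random.Random(1)
grid = [k / 10 for k in range(-30, 31)]
e_small = ecf_sup_error(200, grid, rng)
e_large = ecf_sup_error(5000, grid, rng)
assert e_large < e_small   # the uniform error shrinks with the sample size
```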
Corollary 3.14.
Let the assumptions of Theorem 3.12 hold true. Then, as W is regularly growing to infinity,
\[ {({\operatorname{err}_{W}}({v_{1}}),\dots ,{\operatorname{err}_{W}}({v_{k}}))^{\top }}\stackrel{d}{\to }{N_{{v_{1}},\dots ,{v_{k}}}},\]
for any ${v_{1}}\in \mathcal{U}({\xi _{1}},{\beta _{2}^{(1)}}),\dots ,{v_{k}}\in \mathcal{U}({\xi _{k}},{\beta _{2}^{(k)}})$, where ${N_{{v_{1}},\dots ,{v_{k}}}}$ is a centered Gaussian random vector with covariance matrix ${({\Sigma _{s,t}})_{s,t=1,\dots ,k}}$ given by
Proof.
Suppose ${v_{1}}\in \mathcal{U}({\xi _{1}},{\beta _{2}^{(1)}}),\dots ,{v_{k}}\in \mathcal{U}({\xi _{k}},{\beta _{2}^{(k)}})$ and, for arbitrary numbers ${\lambda _{1}},\dots ,{\lambda _{k}}\in \mathbb{R}$, let $v={\textstyle\sum _{l=1}^{k}}{\lambda _{l}}{v_{l}}$. Then a simple calculation yields
\[ {\sum \limits_{l=1}^{k}}{\lambda _{l}}{\operatorname{err}_{W}}({v_{l}})=\sqrt{n}\hspace{2.5pt}({\hat{\mathcal{L}}_{W}}v-\mathcal{L}v).\]
Since $v\in \mathcal{U}({\min _{l}}{\xi _{l}},{\min _{l}}{\beta _{2}^{(l)}})$, by Theorem 3.12, $\sqrt{n}\hspace{2.5pt}({\hat{\mathcal{L}}_{W}}v-\mathcal{L}v)\stackrel{d}{\to }{N_{v}}$, where ${N_{v}}$ is a Gaussian random variable with zero expectation and variance given in (3.10). Now, let ${({T_{1}},\dots ,{T_{k}})^{\top }}$ be a zero mean Gaussian random vector with covariance given by ${({\Sigma _{s,t}})_{s,t=1,\dots ,k}}$. Using linearity of ${\mathcal{F}_{+}}$ and ${\mathcal{G}^{-1\ast }}$, a short computation shows that
hence, the assertion follows by the Cramér–Wold theorem (cf. [6]). □
4 Proof of Theorem 3.12
In order to prove Theorem 3.12, we adopt the strategy of the proof of [20, Theorem 2]. However, the main difficulty in our setting is that the observations ${({Y_{j}})_{j\in W}}$ are not independent; hence, the classical theory cannot be applied here. Instead, we use asymptotic results for partial sums of m-dependent random fields (cf. [8]) in combination with the theory developed by Bulinski and Shashkin in [7] for weakly dependent random fields.
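For intuition, the following self-contained simulation (my own toy construction, not the field Y of the paper) illustrates the mechanism behind such m-dependent CLTs: the 1-dependent sequence ${X_{j}}={\varepsilon _{j}}+{\varepsilon _{j+1}}$ built from i.i.d. standard normal ${\varepsilon _{j}}$ has $\operatorname{Var}({S_{n}}/\sqrt{n})=(4n-2)/n\to 4$, and the normalized partial sums remain approximately Gaussian despite the dependence.

```python
import random, statistics

rng = random.Random(7)

def partial_sum(n):
    """S_n = X_1 + ... + X_n for the 1-dependent sequence X_j = eps_j + eps_{j+1}."""
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]
    return sum(eps[j] + eps[j + 1] for j in range(n))

n, reps = 400, 1500
samples = [partial_sum(n) / n**0.5 for _ in range(reps)]
mean, var = statistics.fmean(samples), statistics.variance(samples)
# Var(S_n)/n = (2n + 2(n-1))/n -> 4; the empirical law is close to N(0, 4).
assert abs(mean) < 0.3
assert 3.0 < var < 5.0
```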
We start with the following lemma.
Lemma 4.1.
Let $\gamma =0$ and suppose that $v\in \mathcal{U}(\xi ,{\beta _{2}})$ is an admissible function. Then Assumption 3.1 implies:
-
1. $xP$ has a bounded Lebesgue density on $\mathbb{R}$, where P denotes the distribution of $X(0)$.
-
2. ${\Big(\frac{1}{\psi }\Big)^{\prime }}\in {L^{2}}(\mathbb{R})\cap {L^{\infty }}(\mathbb{R})$ and $\frac{1}{|\psi (x)|}\lesssim {(1+|x|)^{\frac{1}{2}-\varepsilon }}$ for all $x\in \mathbb{R}$.
-
3. ${\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]$, $\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }\in {L^{1}}(\mathbb{R})\cap {L^{2}}(\mathbb{R})$.
Proof.
-
1. Let $\mu (dx)=(u{v_{1}})(x)dx$. By Proposition 3.3, (a), $u{v_{1}}\in {L^{1}}(\mathbb{R})$; hence, μ defines a finite signed measure on $\mathbb{R}$. Since $\theta =\psi {\mathcal{F}_{+}}[u{v_{1}}]$, we conclude that\[ {\mathcal{F}_{+}}[xP](t)=\theta (t)={\mathcal{F}_{+}}[P](t){\mathcal{F}_{+}}[u{v_{1}}](t)={\mathcal{F}_{+}}[\mu \ast P](t),\]i.e. $xP(dx)=(\mu \ast P)(dx)$; thus, $xP$ has the density given by $\frac{d[xP]}{dx}={\textstyle\int _{\mathbb{R}}}(u{v_{1}})(x-y)P(dy)$ and consequently $\| \frac{d[xP]}{dx}{\| _{{L^{\infty }}(\mathbb{R})}}\le \| u{v_{1}}{\| _{{L^{\infty }}(\mathbb{R})}}$.
-
2. By Assumption 3.1, (4), (5), Proposition 3.3, (a), (c) and the Cauchy–Schwarz inequality, we obtain for any $x\in \mathbb{R}$,\[\begin{aligned}{}\frac{1}{|\psi (x)|}\le & 1+\Big|{\int _{0}^{x}}{\Big(\frac{1}{\psi }\Big)^{\prime }}(t)dt\Big|\le 1+{\int _{0}^{|x|}}\frac{|\theta (t)|}{|\psi (t){|^{2}}}dt\\ {} =& 1+{\int _{0}^{|x|}}\frac{|{\mathcal{F}_{+}}[u{v_{1}}](t)|}{|\psi (t)|}dt\\ {} \lesssim & 1+{\int _{0}^{|x|}}{(1+{t^{2}})^{-\frac{\varepsilon }{2}}}\frac{{(1+{t^{2}})^{-\frac{1-\varepsilon }{2}}}}{|\psi (t)|}dt\\ {} \le & 1+\Big\| \frac{{(1+\hspace{0.1667em}\cdot {\hspace{0.1667em}^{2}})^{-\frac{1-\varepsilon }{2}}}}{\psi }{\Big\| _{{L^{2}}(\mathbb{R})}}{\Big({\int _{0}^{|x|}}{(1+{t^{2}})^{-\varepsilon }}dt\Big)^{1/2}}\\ {} \lesssim & {(1+|x|)^{\frac{1}{2}-\varepsilon }}.\end{aligned}\]Further, we have for any $x\in \mathbb{R}$,\[ \left|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\right|=\frac{|{\mathcal{F}_{+}}[u{v_{1}}](x)|}{|\psi (x)|}\lesssim {(1+|x|)^{-\frac{1}{2}-\varepsilon }}.\]The last expression is bounded and square integrable, hence ${\Big(\frac{1}{\psi }\Big)^{\prime }}\in {L^{2}}(\mathbb{R})\cap {L^{\infty }}(\mathbb{R})$.
-
3. ${\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]\in {L^{1}}(\mathbb{R})\cap {L^{2}}(\mathbb{R})$ immediately follows from Definition 3.10, (i) (cf. Remark 3.11, (b)). Moreover, by Proposition 3.3, (a), we find that\[ {\int _{\mathbb{R}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}dx\le \| {\mathcal{G}^{-1\ast }}v{\| _{{H^{1-\varepsilon }}(\mathbb{R})}}\Big\| \frac{{(1+\hspace{0.1667em}\cdot {\hspace{0.1667em}^{2}})^{-\frac{1-\varepsilon }{2}}}}{\psi }{\Big\| _{{L^{2}}(\mathbb{R})}},\]where the latter is finite due to Definition 3.10, (i). The bound in part (2) finally yields\[\begin{aligned}{}{\int _{\mathbb{R}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x){|^{2}}}{|\psi (x){|^{2}}}dx\lesssim & {\int _{\mathbb{R}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x){|^{2}}{(1+|x{|^{2}})^{\frac{1}{2}-\varepsilon }}dx\\ {} =& \| {\mathcal{G}^{-1\ast }}v{\| _{{H^{\frac{1}{2}-\varepsilon }}(\mathbb{R})}^{2}}<\infty .\end{aligned}\] □
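For intuition on bounds like that in Lemma 4.1, (2), consider a toy characteristic function (my own illustrative choice, which does not satisfy the paper's assumptions): for the Exp(1) distribution, $\psi (x)=\frac{1}{1-ix}$, so $\frac{1}{|\psi (x)|}=\sqrt{1+{x^{2}}}$ grows only polynomially, and $\frac{1}{\psi }(x)=1-ix$ has the bounded derivative $-i$. A quick numerical check:

```python
import cmath, math

def psi(x):
    """Characteristic function of the Exp(1) distribution (toy example)."""
    return 1 / (1 - 1j * x)

for x in [0.0, 0.5, -2.0, 10.0, -40.0]:
    # 1/|psi(x)| equals sqrt(1 + x^2): polynomial growth, as in Lemma 4.1, (2).
    assert abs(1 / abs(psi(x)) - math.sqrt(1 + x * x)) < 1e-9
    # (1/psi)(x) = 1 - ix, so its derivative is the bounded constant -i.
    h = 1e-6
    deriv = ((1 - 1j * (x + h)) - (1 - 1j * (x - h))) / (2 * h)
    assert abs(deriv - (-1j)) < 1e-6
```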
In order to prove Theorem 3.12, consider the following decomposition that can be obtained by the isometry property of ${\mathcal{F}_{+}}$:
\[ {\operatorname{err}_{W}}(v)=\sqrt{n}\big({\hat{\mathcal{L}}_{W}}v-\mathcal{L}v\big)=\frac{1}{2\pi }\Big[{E_{1}}+{E_{2}}+{E_{3}}+{E_{4}}\Big]+{E_{5}},\]
with ${E_{1}},\dots ,{E_{5}}$ given by
\[\begin{aligned}{}{E_{1}}=& \sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{\frac{\hat{\theta }-\theta }{\psi }-i{\Big(\frac{1}{\psi }\Big)^{\prime }}(\hat{\psi }-\psi )\Big\}{\mathcal{F}_{+}}[{K_{b}}]{>_{{L^{2}}(\mathbb{R})}}\\ {} {E_{2}}=& \sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{{R_{n}}+\theta \frac{\psi -\hat{\psi }}{{\psi ^{2}}}{\mathbb{1}_{\{|\hat{\psi }|\le {n^{-1/2}}\}}}\Big\}{\mathcal{F}_{+}}[{K_{b}}]{>_{{L^{2}}(\mathbb{R})}}\\ {} {E_{3}}=& \sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\frac{\theta }{\psi }({\mathcal{F}_{+}}[{K_{b}}]-1){>_{{L^{2}}(\mathbb{R})}}\\ {} {E_{4}}=& \sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathcal{F}_{+}}[{K_{b}}]\Big\{\theta \frac{\hat{\psi }-\psi }{{\psi ^{2}}}-\frac{\hat{\theta }}{\psi }\Big\}{\mathbb{1}_{\{|\hat{\psi }|\le {n^{-1/2}}\}}}{>_{{L^{2}}(\mathbb{R})}}\\ {} {E_{5}}=& \sqrt{n}<({\mathcal{G}_{n}^{-1\ast }}-{\mathcal{G}^{-1\ast }})v,\widehat{u{v_{1}}}{>_{{L^{2}}(\mathbb{R})}},\end{aligned}\]
and ${R_{n}}=\big(1-\frac{\hat{\psi }}{\psi }\big)\big(\frac{\hat{\theta }}{\tilde{\psi }}-\frac{\theta }{\psi }\big)$. We call ${E_{1}}$ the main stochastic term and ${E_{2}}$ the remainder term. Subsequently, we give a step-by-step proof of Theorem 3.12 by considering each of the above terms ${E_{1}},\dots ,{E_{5}}$ separately.
We first show that the deterministic term ${E_{3}}$ tends to zero as the sample size n tends to infinity.
Lemma 4.2.
Suppose $\gamma =0$. Then, under the conditions of Theorem 3.12,
\[ {E_{3}}=\sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\frac{\theta }{\psi }({\mathcal{F}_{+}}[{K_{b}}]-1){>_{{L^{2}}(\mathbb{R})}}\to 0,\hspace{1em}\textit{as}\hspace{2.5pt}n\to \infty ,\]
for any admissible function $v\in \mathcal{U}(\xi ,{\beta _{2}})$.
Proof.
Taking into account that $\big|\frac{\theta }{\psi }\big|=|{\mathcal{F}_{+}}[u{v_{1}}]|$, Assumption 3.1, (4), together with Proposition 3.3, (a) and condition (K3) yield
\[\begin{aligned}{}|{E_{3}}|\le & \sqrt{n}{\int _{\mathbb{R}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)||{\mathcal{F}_{+}}[u{v_{1}}](x)||1-{\mathcal{F}_{+}}[{K_{{b_{n}}}}](x)|dx\\ {} \lesssim & {b_{n}}\sqrt{n}{\int _{\mathbb{R}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|dx,\end{aligned}\]
where the last integral is finite due to Lemma 4.1. Moreover, since ${b_{n}}=o({n^{-1/2}})$, the bound tends to 0 as $n\to \infty $. □
Next, we observe that ${E_{5}}$ is asymptotically negligible in mean.
Lemma 4.3.
Let the assumptions of Theorem 3.12 be satisfied. Then
\[ \mathbb{E}|{E_{5}}|={n^{1/2}}\mathbb{E}\Big|{\left\langle ({\mathcal{G}_{n}^{-1\ast }}-{\mathcal{G}^{-1\ast }})v,\widehat{u{v_{1}}}\right\rangle _{{L^{2}}(\mathbb{R})}}\Big|\to 0,\hspace{1em}\textit{as}\hspace{2.5pt}n\to \infty ,\]
for any $v\in \mathcal{U}(\xi ,{\beta _{2}})$.
Proof.
From the proofs of Lemma 3.6 and Theorem 3.7 we conclude that
\[ \sqrt{n}\hspace{2.5pt}\mathbb{E}\Big|{\left\langle ({\mathcal{G}_{n}^{-1\ast }}-{\mathcal{G}^{-1\ast }})v,\widehat{u{v_{1}}}\right\rangle _{{L^{2}}(\mathbb{R})}}\Big|\lesssim \sqrt{n}\hspace{2.5pt}{a_{n}^{\frac{{\beta _{2}}}{{\beta _{1}}}-1}}{\Big(\frac{n}{{b_{n}}}\Big)^{1/2}};\]
hence, $\mathbb{E}|{E_{5}}|\to 0$ as $n\to \infty $, since ${a_{n}}=o\Big({\Big(\frac{n}{\sqrt{{b_{n}}}}\Big)^{\frac{{\beta _{1}}}{{\beta _{1}}-{\beta _{2}}}}}\Big)$. □
Lemma 4.4.
Suppose $\gamma =0$ and let the assumptions of Theorem 3.12 be satisfied. Then
\[ \mathbb{E}|{E_{4}}|=\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big|<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathcal{F}_{+}}[{K_{b}}]\Big\{\theta \frac{\hat{\psi }-\psi }{{\psi ^{2}}}-\frac{\hat{\theta }}{\psi }\Big\}{\mathbb{1}_{\{|\hat{\psi }|\le {n^{-1/2}}\}}}{>_{{L^{2}}(\mathbb{R})}}\Big|\to 0,\]
as $n\to \infty $, for any admissible function $v\in \mathcal{U}(\xi ,{\beta _{2}})$.
Proof.
Since $\frac{\theta }{{\psi ^{2}}}=i{\Big(\frac{1}{\psi }\Big)^{\prime }}$, we obtain by conditions (K2) and (K3),
\[\begin{aligned}{}\mathbb{E}|{E_{4}}|\le & S{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big[|\hat{\psi }(x)-\psi (x)|{\mathbb{1}_{\{|\hat{\psi }(x)|\le {n^{-1/2}}\}}}\Big]dx\\ {} & +S{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big[|\hat{\theta }(x)|{\mathbb{1}_{\{|\hat{\psi }(x)|\le {n^{-1/2}}\}}}\Big]dx,\end{aligned}\]
with $S:={\sup _{x\in \mathbb{R},\hspace{2.5pt}b>0}}|{\mathcal{F}_{+}}[{K_{b}}](x)|$. In order to bound the summands on the right-hand side of the latter inequality, we start with the following observation: there exists ${n_{0}}\in \mathbb{N}$ such that for all $n\ge {n_{0}}$,
(4.1)
\[ x\in [-{b_{n}^{-1}},{b_{n}^{-1}}]\hspace{1em}\Rightarrow \hspace{1em}|\psi (x)|>2{n^{-1/2}}.\]
Indeed, by Lemma 4.1, (2), there is a constant $c>0$ such that $\frac{1}{|\psi (x)|}\le c{(1+|x|)^{\frac{1}{2}-\varepsilon }}$, for all $x\in \mathbb{R}$. Hence, if $|\psi (x)|\le 2{n^{-1/2}}$, then $|x|\ge {\Big(\frac{1}{2c}\Big)^{2/(1-2\varepsilon )}}{n^{1/(1-2\varepsilon )}}-1$. Since ${b_{n}^{-1}}=o({n^{1/(1-2\varepsilon )}})$ as $n\to \infty $, there exists ${n_{0}}\in \mathbb{N}$ such that ${b_{n}^{-1}}<{\Big(\frac{1}{2c}\Big)^{2/(1-2\varepsilon )}}{n^{1/(1-2\varepsilon )}}-1$ for all $n\ge {n_{0}}$, i.e. $x\notin [-{b_{n}^{-1}},{b_{n}^{-1}}]$ whenever $|\psi (x)|\le 2{n^{-1/2}}$. This shows (4.1).
In the sequel, we assume that $n\ge {n_{0}}$ and consider each summand in the above inequality separately:
-
1. Using the m-dependence of ${({Y_{j}})_{j\in {\mathbb{Z}^{d}}}}$, we conclude as in the first part of the proof of [18, Lemma 8.3] that inequality (4.2) holds for any $p\ge 1/2$ and all $x\in \mathbb{R}$ with $|\psi (x)|>2{n^{-1/2}}$. Taking $p=1/2$ in (4.2), by the Cauchy–Schwarz inequality, [18, Lemma 8.2] and Lemma 4.1, (2), (3), we find that\[\begin{aligned}{}& {\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big[|\hat{\psi }(x)-\psi (x)|{\mathbb{1}_{\{|\hat{\psi }(x)|\le {n^{-1/2}}\}}}\Big]dx\\ {} \lesssim & {\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|\hspace{2.5pt}\mathbb{P}{\Big(|\hat{\psi }(x)|\le {n^{-1/2}}\Big)^{1/2}}{\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}dx\\ {} \le & {n^{-1/4}}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x){|^{1/2}}}\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|{\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}dx\\ {} \le & {n^{-1/4}}\Big\| \frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }{\Big\| _{{L^{1}}(\mathbb{R})}}{\Big\| {\Big(\frac{1}{\psi }\Big)^{\prime }}\Big\| _{{L^{\infty }}(\mathbb{R})}},\end{aligned}\]for all $n\ge {n_{0}}$, where the last inequality uses the fact that $|\psi (x)|\le 1$. Hence, the first integral tends to zero as $n\to \infty $.
-
2. For the second integral, by the triangle inequality we observe that for any $n\ge {n_{0}}$,\[\begin{aligned}{}& {\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big[|\hat{\theta }(x)|{\mathbb{1}_{\{|\hat{\psi }(x)|\le {n^{-1/2}}\}}}\Big]dx\\ {} \le & {\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\Big({I_{1}}(x)+{I_{2}}(x)\Big)dx,\end{aligned}\]where\[ {I_{1}}(x)=\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\sqrt{n}\hspace{2.5pt}\mathbb{E}\Big[|\hat{\theta }(x)-\theta (x)|{\mathbb{1}_{\{|\hat{\psi }(x)|\le {n^{-1/2}}\}}}\Big]{\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}\]and\[ {I_{2}}(x)=|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\sqrt{n}\hspace{2.5pt}\frac{|\theta (x)|}{|\psi (x)|}\mathbb{P}\Big(|\hat{\psi }(x)|\le {n^{-1/2}}\Big){\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}.\]Applying Lemma A.2 with $q=1/2$ (cf. Appendix A.4), we find that\[\begin{aligned}{}{I_{1}}(x)\lesssim & \sqrt{\mathbb{E}|{Y_{0}}{|^{2}}}\hspace{2.5pt}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|},\hspace{1em}\hspace{2.5pt}\text{for all}\hspace{2.5pt}x\in \mathbb{R};\end{aligned}\]hence, by Lemma 4.1, (3) and the finite $(2+\tau )$-moment condition, ${I_{1}}$ is majorized by an integrable function. Moreover, applying the Cauchy–Schwarz inequality, (4.2) and again Lemma A.2 (with $q=1$) yields\[\begin{aligned}{}{I_{1}}(x)\lesssim & \frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\mathbb{P}{\Big(|\hat{\psi }(x)|\le {n^{-1/2}}\Big)^{1/2}}{\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}\\ {} \le & \frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\frac{{n^{-1/4}}}{|\psi (x){|^{1/2}}}\to 0,\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\end{aligned}\]for all $x\in \mathbb{R}$. By dominated convergence, ${\lim \nolimits_{n\to \infty }}{\textstyle\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}{I_{1}}(x)dx=0$ follows.
Further,\[\begin{aligned}{}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}{I_{2}}(x)dx\le & {n^{-1/2}}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\frac{|\theta (x)|}{|\psi (x){|^{2}}}{\mathbb{1}_{\{|\psi (x)|>2{n^{-1/2}}\}}}dx\\ {} \le & {n^{-1/2}}\Big\| \frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v]}{\psi }{\Big\| _{{L^{1}}(\mathbb{R})}}{\Big\| {\Big(\frac{1}{\psi }\Big)^{\prime }}\Big\| _{{L^{\infty }}(\mathbb{R})}},\end{aligned}\]by (4.2) (with $p=1$), Lemma 4.1, (2), (3) and the Cauchy–Schwarz inequality; hence, also ${\lim \nolimits_{n\to \infty }}{\textstyle\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}{I_{2}}(x)dx=0$.
All in all, this shows the assertion of the lemma. □
4.1 Main stochastic term
In this section we show the asymptotic normality of the main stochastic term. For this purpose, let ${P_{n}}:\mathcal{B}(\mathbb{R})\to [0,1]$ be the empirical measure given by
\[ {P_{n}}(B)=\frac{1}{n}\sum \limits_{j\in W}{\delta _{{Y_{j}}}}(B),\hspace{1em}B\in \mathcal{B}(\mathbb{R}),\]
where ${\delta _{x}}:\mathcal{B}(\mathbb{R})\to \{0,1\}$ denotes the Dirac measure concentrated in $x\in \mathbb{R}$. Further, for any $v\in \mathcal{U}(\xi ,{\beta _{2}})$, define the random fields ${({Z_{j,v,n}^{(k)}})_{j\in {\mathbb{Z}^{d}}}}$, $k=1,2$, by
(4.3)
\[ {Z_{j,v,n}^{(1)}}=\frac{1}{2\pi }{Y_{j}}{\mathcal{F}_{+}}\Big[\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](-\hspace{0.1667em}\cdot \hspace{0.1667em})}{\psi (\hspace{0.1667em}\cdot \hspace{0.1667em})}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big]({Y_{j}})\]
and
(4.4)
\[ {Z_{j,v,n}^{(2)}}=\frac{i}{2\pi }{\mathcal{F}_{+}}\Big[{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](-\hspace{0.1667em}\cdot \hspace{0.1667em}){\Big(\frac{1}{\psi }\Big)^{\prime }}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big]({Y_{j}}).\]
The following theorem is the main result of this section.
Theorem 4.5.
Let the assumptions of Theorem 3.12 be satisfied. Then, as W is regularly growing to infinity,
\[ \frac{1}{2\pi }{E_{1}}=\frac{\sqrt{n}}{2\pi }<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{\frac{\hat{\theta }-\theta }{\psi }-i{\Big(\frac{1}{\psi }\Big)^{\prime }}(\hat{\psi }-\psi )\Big\}{\mathcal{F}_{+}}[{K_{b}}]{>_{{L^{2}}(\mathbb{R})}}\stackrel{d}{\to }{N_{v}},\]
for any $v\in \mathcal{U}(\xi ,{\beta _{2}})$, where ${N_{v}}$ is a Gaussian random variable with zero expectation and variance ${\sigma ^{2}}$ given in (3.10).
In order to prove Theorem 4.5, we first show some auxiliary statements. We begin with the following representation for the main stochastic term.
Lemma 4.6.
Let $v\in \mathcal{U}(\xi ,{\beta _{2}})$. Then, under the assumptions of Theorem 3.12, the main stochastic term can be represented by
\[ \frac{1}{2\pi }{E_{1}}=\frac{1}{\sqrt{n}}\sum \limits_{j\in W}\Big({Z_{j,v,n}^{(1)}}-{Z_{j,v,n}^{(2)}}\Big),\]
with ${Z_{j,v,n}^{(k)}}$, $k=1,2$, given in (4.3) and (4.4).
Proof.
Since $\theta =-i{\psi ^{\prime }}$,
\[ i\psi {\Big(\frac{1}{\psi }\Big)^{\prime }}-\frac{\theta }{\psi }=i{\Big(\psi \frac{1}{\psi }\Big)^{\prime }}=0;\]
hence,
\[\begin{aligned}{}\frac{1}{2\pi }{E_{1}}=& \frac{\sqrt{n}}{2\pi }<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{\frac{\hat{\theta }}{\psi }-i{\Big(\frac{1}{\psi }\Big)^{\prime }}\hat{\psi }\Big\}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]{>_{{L^{2}}(\mathbb{R})}}\\ {} =& \frac{\sqrt{n}}{2\pi }{\int _{\mathbb{R}}}{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)\Big\{\frac{\hat{\theta }(-x)}{\psi (-x)}-i{\Big(\frac{1}{\psi }\Big)^{\prime }}(-x)\hat{\psi }(-x)\Big\}{\mathcal{F}_{+}}[{K_{{b_{n}}}}](-x)dx\\ {} =& \frac{\sqrt{n}}{2\pi }\Big[{\int _{\mathbb{R}}}{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)\frac{\hat{\theta }(-x)}{\psi (-x)}{\mathcal{F}_{+}}[{K_{{b_{n}}}}](-x)dx\\ {} & -i{\int _{\mathbb{R}}}{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x){\Big(\frac{1}{\psi }\Big)^{\prime }}(-x)\hat{\psi }(-x){\mathcal{F}_{+}}[{K_{{b_{n}}}}](-x)dx\Big].\end{aligned}\]
Now, taking into account that $\hat{\psi }(x)={\textstyle\int _{\mathbb{R}}}{e^{itx}}{P_{n}}(dt)$ and $\hat{\theta }(x)={\textstyle\int _{\mathbb{R}}}{e^{itx}}t{P_{n}}(dt)$, Fubini’s theorem yields the desired result. □
The following lemma justifies the asymptotic variance ${\sigma ^{2}}$ in Theorem 3.12.
Lemma 4.7.
Let the assumptions of Theorem 3.12 be satisfied and let ${v_{1}}\in \mathcal{U}({\xi _{1}},{\beta _{2}^{(1)}})$ and ${v_{2}}\in \mathcal{U}({\xi _{2}},{\beta _{2}^{(2)}})$ be given. Then, as W is regularly growing to infinity,
\[ \operatorname{Cov}\Big(|W{|^{-1/2}}\sum \limits_{j\in W}\Big({Z_{j,{v_{1}},n}^{(1)}}-{Z_{j,{v_{1}},n}^{(2)}}\Big),|W{|^{-1/2}}\sum \limits_{j\in W}\Big({Z_{j,{v_{2}},n}^{(1)}}-{Z_{j,{v_{2}},n}^{(2)}}\Big)\Big)\to {\sigma _{{v_{1}},{v_{2}}}},\]
with ${\sigma _{{v_{1}},{v_{2}}}}\in \mathbb{R}$ given by
Proof.
Let ${v_{1}}\in \mathcal{U}({\xi _{1}},{\beta _{2}^{(1)}})$, ${v_{2}}\in \mathcal{U}({\xi _{2}},{\beta _{2}^{(2)}})$ and define the functions ${g^{(k)}}$, ${g_{n}^{(k)}}:\mathbb{R}\to \mathbb{R}$, $k=1,2$, by
\[\begin{aligned}{}{g_{n}^{(k)}}(x)=& \frac{x}{2\pi }{\mathcal{F}_{+}}\Big[\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}{v_{k}}](-\hspace{0.1667em}\cdot \hspace{0.1667em})}{\psi }{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big](x)\\ {} & -\frac{i}{2\pi }{\mathcal{F}_{+}}\Big[{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}{v_{k}}](-\hspace{0.1667em}\cdot \hspace{0.1667em}){\Big(\frac{1}{\psi }\Big)^{\prime }}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big](x),\hspace{1em}x\in \mathbb{R},\end{aligned}\]
and
\[\begin{aligned}{}{g^{(k)}}(x)=& \frac{x}{2\pi }{\mathcal{F}_{+}}\Big[\frac{{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}{v_{k}}](-\hspace{0.1667em}\cdot \hspace{0.1667em})}{\psi }\Big](x)\\ {} & -\frac{i}{2\pi }{\mathcal{F}_{+}}\Big[{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}{v_{k}}](-\hspace{0.1667em}\cdot \hspace{0.1667em}){\Big(\frac{1}{\psi }\Big)^{\prime }}\Big](x),\hspace{1em}x\in \mathbb{R}.\end{aligned}\]
Then ${({g^{(k)}}({Y_{j}}))_{j\in {\mathbb{Z}^{d}}}}$ and ${({g_{n}^{(k)}}({Y_{j}}))_{j\in {\mathbb{Z}^{d}}}}$ fulfill properties (1)–(3) from Lemma A.4 (cf. Appendix A.5). Indeed, by Lemma 4.6, it follows that
\[\begin{aligned}{}\mathbb{E}[{g_{n}^{(k)}}({Y_{0}})]=& \mathbb{E}\Big[{Z_{0,{v_{k}},n}^{(1)}}-{Z_{0,{v_{k}},n}^{(2)}}\Big]=\frac{1}{n}\mathbb{E}\Big[\sum \limits_{j\in W}\Big({Z_{j,{v_{k}},n}^{(1)}}-{Z_{j,{v_{k}},n}^{(2)}}\Big)\Big]\\ {} =& \frac{1}{2\pi }\mathbb{E}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}{v_{k}}],\Big\{\frac{\hat{\theta }-\theta }{\psi }-i{\Big(\frac{1}{\psi }\Big)^{\prime }}(\hat{\psi }-\psi )\Big\}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]{>_{{L^{2}}(\mathbb{R})}}.\end{aligned}\]
Since $\mathbb{E}[\hat{\psi }(x)-\psi (x)]=\mathbb{E}[\hat{\theta }(x)-\theta (x)]=0$ for all $x\in \mathbb{R}$, we conclude by Fubini’s theorem that $\mathbb{E}[{g_{n}^{(k)}}({Y_{0}})]=\mathbb{E}[{g^{(k)}}({Y_{0}})]=0$, for $k=1,2$. Moreover, since the Fourier transform of an integrable function is bounded, the finite $(2+\tau )$-moment condition together with Lemma 4.1, (2), (3) and (K3) implies $\mathbb{E}|{g^{(k)}}({Y_{0}}){|^{2}}$, $\mathbb{E}|{g_{n}^{(k)}}({Y_{0}}){|^{2}}<\infty $, $k=1,2$. The same arguments in combination with dominated convergence yield
\[ \mathbb{E}\Big[{g_{n}^{(1)}}({Y_{0}}){g_{n}^{(2)}}({Y_{j}})\Big]\to \mathbb{E}\Big[{g^{(1)}}({Y_{0}}){g^{(2)}}({Y_{j}})\Big],\]
as $|W|\to \infty $. Hence, Lemma A.4 yields the assertion of the lemma. □
We can now give a proof of Theorem 4.5.
Proof of Theorem 4.5.
If ${\sigma _{v}^{2}}=0$, then Lemma 4.7 yields
\[ {\sigma _{v,n}^{2}}:=\mathbb{E}\Big[\Big(\frac{1}{\sqrt{n}}\sum \limits_{j\in W}{\Big({Z_{j,v,n}^{(1)}}-{Z_{j,v,n}^{(2)}}\Big)\Big)^{2}}\Big]\to {\sigma _{v}^{2}}=0,\]
as W is regularly growing to infinity; hence, ${n^{-1/2}}{\textstyle\sum _{j\in W}}\Big({Z_{j,v,n}^{(1)}}-{Z_{j,v,n}^{(2)}}\Big)\to 0$ in probability. Now, assume that ${\sigma _{v}^{2}}>0$ and choose ${n_{0}}\in \mathbb{N}$ such that ${\sigma _{v,n}^{2}}>0$ for all $n\ge {n_{0}}$ (which is indeed possible, since ${\sigma _{v,n}^{2}}\to {\sigma _{v}^{2}}>0$ as $n\to \infty $). For any $n\ge {n_{0}}$, let
\[ {X_{j,n}}:=\frac{1}{\sqrt{n}}\frac{{Z_{j,v,n}^{(1)}}-{Z_{j,v,n}^{(2)}}}{{\sigma _{v,n}}},\hspace{1em}j\in {\mathbb{Z}^{d}},\]
and denote by ${F_{n}}$ the distribution function of ${\textstyle\sum _{j\in W}}{X_{j,n}}$. In the proof of Lemma 4.7 we have seen that ${({X_{j,n}})_{j\in {\mathbb{Z}^{d}}}}$ is a centered m-dependent random field and $\mathbb{E}|{X_{j,n}}{|^{2+\tau }}\le c{n^{-1-\tau /2}}{\sigma _{v,n}^{-(2+\tau )}}$ for all $n\ge {n_{0}}$ and some constant $c>0$. Hence, applying [8, Theorem 2.6] with $p=2+\tau $ yields
\[ \underset{x\in \mathbb{R}}{\sup }|{F_{n}}(x)-\Phi (x)|\le 75c{(m+1)^{(1+\tau )d}}{\sigma _{v,n}^{-(2+\tau )}}{n^{-\tau /2}}\to 0,\]
as $n\to \infty $. This completes the proof. □
4.2 Remainder term
In this section, we show that the remainder term ${E_{2}}$ is stochastically negligible as the sample size n tends to infinity.
Theorem 4.8.
Let $\gamma =0$ and suppose that the assumptions of Theorem 3.12 are satisfied. Then, as $n\to \infty $,
\[ {E_{2}}=\sqrt{n}<{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{{R_{n}}+\theta \frac{\psi -\hat{\psi }}{{\psi ^{2}}}{\mathbb{1}_{\{|\hat{\psi }|\le {n^{-1/2}}\}}}\Big\}{\mathcal{F}_{+}}[{K_{b}}]{>_{{L^{2}}(\mathbb{R})}}\stackrel{\mathbb{P}}{\to }0,\]
for any $v\in \mathcal{U}(\xi ,{\beta _{2}})$.
In order to prove Theorem 4.8, some auxiliary statements are required. To this end, we introduce the following notation.
For any $t\in \mathbb{R}$, $j\in {\mathbb{Z}^{d}}$, let the centered random variables ${\xi _{j}^{(l)}}(t)$, ${\tilde{\xi }_{j}^{(l)}}(t)$, $l=1,2$, be given by
\[\begin{aligned}{}{\xi _{j}^{(1)}}(t)& =\cos (t{Y_{j}})-\mathbb{E}\Big[\cos (t{Y_{0}})\Big],\\ {} {\xi _{j}^{(2)}}(t)& =\sin (t{Y_{j}})-\mathbb{E}\Big[\sin (t{Y_{0}})\Big],\\ {} {\tilde{\xi }_{j}^{(1)}}(t)& ={Y_{j}}\cos (t{Y_{j}})-\mathbb{E}\Big[{Y_{0}}\cos (t{Y_{0}})\Big],\\ {} {\tilde{\xi }_{j}^{(2)}}(t)& ={Y_{j}}\sin (t{Y_{j}})-\mathbb{E}\Big[{Y_{0}}\sin (t{Y_{0}})\Big].\end{aligned}\]
Then $\hat{\psi }-\psi $ and $\hat{\theta }-\theta $ can be rewritten as
\[\begin{aligned}{}\hat{\psi }(t)-\psi (t)=& \frac{1}{n}\sum \limits_{j\in W}\Big({\xi _{j}^{(1)}}(t)+i{\xi _{j}^{(2)}}(t)\Big)\hspace{1em}\text{and}\\ {} \hat{\theta }(t)-\theta (t)=& \frac{1}{n}\sum \limits_{j\in W}\Big({\tilde{\xi }_{j}^{(1)}}(t)+i{\tilde{\xi }_{j}^{(2)}}(t)\Big).\end{aligned}\]
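The decomposition above is a purely algebraic identity. As a sanity check, here is a toy sketch with i.i.d. standard normal data, for which $\psi (t)={e^{-{t^{2}}/2}}$ and $\theta (t)=it{e^{-{t^{2}}/2}}$ are known in closed form (illustration only, not the dependent field of the paper):

```python
import cmath, math, random

rng = random.Random(3)
y = [rng.gauss(0.0, 1.0) for _ in range(500)]
n = len(y)

t = 0.7
psi = math.exp(-t * t / 2)              # E[e^{itY}] for Y ~ N(0,1)
theta = 1j * t * math.exp(-t * t / 2)   # E[Y e^{itY}] = -i * psi'(t)

# Empirical versions and the centered summands xi, xi~ from the decomposition.
psi_hat = sum(cmath.exp(1j * t * yj) for yj in y) / n
theta_hat = sum(yj * cmath.exp(1j * t * yj) for yj in y) / n
# E[cos(tY)] = psi, E[sin(tY)] = 0, E[Y cos(tY)] = 0, E[Y sin(tY)] = Im(theta).
xi = sum((math.cos(t * yj) - psi) + 1j * (math.sin(t * yj) - 0.0) for yj in y) / n
xi_t = sum((yj * math.cos(t * yj) - 0.0)
           + 1j * (yj * math.sin(t * yj) - theta.imag) for yj in y) / n

assert abs((psi_hat - psi) - xi) < 1e-12
assert abs((theta_hat - theta) - xi_t) < 1e-12
```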
In the sequel, we write ${\xi ^{(l)}}(t)$, ${\tilde{\xi }^{(l)}}(t)$ for the random fields ${({\xi _{j}^{(l)}}(t))_{j\in {\mathbb{Z}^{d}}}}$ and ${({\tilde{\xi }_{j}^{(l)}}(t))_{j\in {\mathbb{Z}^{d}}}}$, $l=1,2$, for short. Moreover, for any $K>0$, we define the random fields ${\bar{\xi }_{K}^{(l)}}(t)={({\bar{\xi }_{j,K}^{(l)}}(t))_{j\in {\mathbb{Z}^{d}}}}$, $l=1,2$, by
\[\begin{aligned}{}{\bar{\xi }_{j,K}^{(1)}}(t)& ={Y_{j}}\cos (t{Y_{j}}){\mathbb{1}_{[-K,K]}}({Y_{j}})-\mathbb{E}\Big[{Y_{0}}\cos (t{Y_{0}}){\mathbb{1}_{[-K,K]}}({Y_{0}})\Big],\\ {} {\bar{\xi }_{j,K}^{(2)}}(t)& ={Y_{j}}\sin (t{Y_{j}}){\mathbb{1}_{[-K,K]}}({Y_{j}})-\mathbb{E}\Big[{Y_{0}}\sin (t{Y_{0}}){\mathbb{1}_{[-K,K]}}({Y_{0}})\Big].\end{aligned}\]
For any finite subset $V\subset {\mathbb{Z}^{d}}$ and any random field $Y={({Y_{j}})_{j\in {\mathbb{Z}^{d}}}}$, let ${S_{V}}(Y):={\textstyle\sum _{j\in V}}{Y_{j}}$ denote the corresponding partial sum.
Lemma 4.9.
Let the assumptions of Theorem 3.12 be satisfied and suppose $K\ge 1$. Then
(4.5)
\[ \mathbb{P}(|{S_{W}}({\xi ^{(l)}}(t))|\ge x)\le 2\exp \left(-\frac{1}{8{(m+1)^{d}}}\frac{{x^{2}}}{x+2|W|}\right)\]
and
(4.6)
\[ \mathbb{P}(|{S_{W}}({\bar{\xi }_{K}^{(l)}}(t))|\ge x)\le 2\exp \left(-\frac{1}{8{(m+1)^{d}}{K^{2}}}\frac{{x^{2}}}{x+2|W|}\right),\]
for any $t\in \mathbb{R}$, $x\ge 0$ and $l=1,2$.
Proof.
Since $|{\xi _{j}^{(l)}}(t)|\le 2$ for all $t\in \mathbb{R}$, $j\in {\mathbb{Z}^{d}}$ and $l=1,2$, we have that
\[\begin{aligned}{}|\mathbb{E}[{\xi _{j}^{(l)}}{(t)^{p}}]|& \le \mathbb{E}[|{\xi _{j}^{(l)}}(t){|^{p-2}}{\xi _{j}^{(l)}}{(t)^{2}}]\le {2^{p-2}}\mathbb{E}[{\xi _{j}^{(l)}}{(t)^{2}}],\hspace{1em}p=3,4,\dots ;\end{aligned}\]
hence, Theorem A.1 (with $H=2$) implies (4.5). Next, we obtain
\[ |\mathbb{E}[{\bar{\xi }_{j,K}^{(l)}}{(t)^{p}}]|\le \mathbb{E}[|{\bar{\xi }_{j,K}^{(l)}}(t){|^{p-2}}|{\bar{\xi }_{j,K}^{(l)}}(t){|^{2}}]\le {(2K)^{p-2}}\mathbb{E}[{\bar{\xi }_{j,K}^{(l)}}{(t)^{2}}],\hspace{1em}p=3,4,\dots .\]
Taking into account that ${\textstyle\sum _{j\in W}}\mathbb{E}[{\bar{\xi }_{j,K}^{(l)}}{(t)^{2}}]\le 4{K^{2}}|W|$, Theorem A.1 (with $H=2K$) yields the bound in (4.6). □
Lemma 4.10.
Let the assumptions of Theorem 3.12 be satisfied and let $n=|W|$. Moreover, let ${\varepsilon _{n}}>0$ and ${K_{n}}\ge 1$, $n\in \mathbb{N}$, be sequences such that ${\varepsilon _{n}}\to 0$ and ${K_{n}}\to \infty $ as $n\to \infty $. Then, for any n with ${\varepsilon _{n}}<\min \{1,\frac{T}{4}\}$,
(4.7)
\[\begin{aligned}{}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)& \le {C_{1}}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{160{(m+1)^{d}}}\right\}\end{aligned}\]
and
(4.8)
\[ \begin{aligned}{}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\tilde{\xi }^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\le & {C_{2}}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{576{(m+1)^{d}}{K_{n}^{2}}}\right\}\\ {} & +\frac{{C_{3}}}{{\varepsilon _{n}}{K_{n}^{1+\tau }}},\end{aligned}\]
$l=1,2$, where ${C_{1}}=4(1+2\mathbb{E}|{Y_{0}}|)$, ${C_{2}}=4\sqrt{2}(1+2\mathbb{E}|{Y_{0}}{|^{2}})$ and ${C_{3}}=8\mathbb{E}|{Y_{0}}{|^{2+\tau }}$.
Proof.
We use the same idea as in the proof of [4, Theorem 2]: partition the interval $[-T,T]$ using the $2J$ equidistant points ${({t_{k}})_{k=1,\dots ,2J}}=\mathcal{D}$, where ${t_{k}}=-T+k\frac{T}{J}$, $k=1,\dots ,2J$. Then, for any $t\in [-T,T]$ and ${t_{k}}\in \mathcal{D}$ such that $|t-{t_{k}}|\le \frac{T}{J}$, we have for any $j\in {\mathbb{Z}^{d}}$ that
\[ |{\xi _{j}^{(l)}}(t)-{\xi _{j}^{(l)}}({t_{k}})|\le |t-{t_{k}}|(|{Y_{j}}|+\mathbb{E}|{Y_{0}}|)\le (|{Y_{j}}|+\mathbb{E}|{Y_{0}}|)\frac{T}{J},\hspace{1em}l=1,2.\]
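This Lipschitz bound can be confirmed numerically. The sketch below (illustrative only) takes the ${Y_{j}}$ i.i.d. standard normal, so that $\mathbb{E}[\cos (t{Y_{0}})]={e^{-{t^{2}}/2}}$ and $\mathbb{E}|{Y_{0}}|=\sqrt{2/\pi }$, and checks the inequality for ${\xi _{j}^{(1)}}$ at two nearby grid points.

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal(2000)
E_abs_Y = np.sqrt(2 / np.pi)        # E|Y_0| for N(0,1)

def xi1(t, y):
    # xi_j^{(1)}(t) = cos(t*Y_j) - E[cos(t*Y_0)], with E[cos(tY_0)] = e^{-t^2/2}
    return np.cos(t * y) - np.exp(-t**2 / 2)

t, tk = 1.31, 1.25                   # two nearby grid points
lhs = np.abs(xi1(t, Y) - xi1(tk, Y))
rhs = np.abs(t - tk) * (np.abs(Y) + E_abs_Y)
# |xi_j(t) - xi_j(t_k)| <= |t - t_k| (|Y_j| + E|Y_0|) holds pointwise.
assert np.all(lhs <= rhs + 1e-12)
```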
Hence, by Markov’s inequality and Lemma 4.9, for any $n\in \mathbb{N}$, we obtain that
\[\begin{aligned}{}& \mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)=\mathbb{P}\Big(\underset{{t_{k}}\in \mathcal{D}}{\sup }\underset{t:|t-{t_{k}}|\le \frac{T}{J}}{\sup }|{S_{W}}({\xi ^{(l)}}(t))|\ge n{\varepsilon _{n}}\Big)\\ {} \le & \mathbb{P}\Big(\underset{{t_{k}}\in \mathcal{D}}{\sup }|{S_{W}}({\xi ^{(l)}}({t_{k}}))|\ge \frac{n{\varepsilon _{n}}}{2}\Big)\\ {} & +\mathbb{P}\Big(\underset{{t_{k}}\in \mathcal{D}}{\sup }\underset{t:|t-{t_{k}}|\le \frac{T}{J}}{\sup }|{S_{W}}({\xi ^{(l)}}(t))-{S_{W}}({\xi ^{(l)}}({t_{k}}))|\ge \frac{n{\varepsilon _{n}}}{2}\Big)\\ {} \le & \sum \limits_{{t_{k}}\in \mathcal{D}}\mathbb{P}\Big(|{S_{W}}({\xi ^{(l)}}({t_{k}}))|\ge \frac{n{\varepsilon _{n}}}{2}\Big)+\mathbb{P}\Big(\sum \limits_{j\in W}(|{Y_{j}}|+\mathbb{E}|{Y_{0}}|)\frac{T}{J}\ge \frac{n{\varepsilon _{n}}}{2}\Big)\\ {} \le & 4J\exp \left\{-\frac{1}{16{(m+1)^{d}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+4}\right\}+\frac{4T}{J{\varepsilon _{n}}}\mathbb{E}|{Y_{0}}|,\end{aligned}\]
$l=1,2$. Now, let $n\in \mathbb{N}$ be such that ${\varepsilon _{n}}<\frac{T}{4}$ and choose
\[ J=\Bigg\lfloor {\left(\frac{T}{{\varepsilon _{n}}}\exp \left\{\frac{1}{16{(m+1)^{d}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+4}\right\}\right)^{1/2}}\Bigg\rfloor ,\]
where $\lfloor x\rfloor $ denotes the integer part of $x\in \mathbb{R}$. Then
\[\begin{aligned}{}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\le & {C_{1}}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{1}{32{(m+1)^{d}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+4}\right\}\end{aligned}\]
with ${C_{1}}=4(1+2\mathbb{E}|{Y_{0}}|)$. Applying the same arguments to ${\sup _{t\in [-T,T]}}|{S_{W}}({\bar{\xi }_{{K_{n}}}^{(l)}}(t))|$ yields
\[ \mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\bar{\xi }_{{K_{n}}}^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\le \tilde{C}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{1}{32{(m+1)^{d}}{K_{n}^{2}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+4}\right\},\]
whenever ${\varepsilon _{n}}<\frac{T}{4}$, where $\tilde{C}=4(1+2\mathbb{E}|{Y_{0}}{|^{2}})$. Combining Markov’s inequality, Hölder’s inequality and the finite $(2+\tau )$-moment property of ${Y_{0}}$ implies
\[\begin{aligned}{}& \mathbb{P}\Bigg(\underset{t\in [-T,T]}{\sup }{n^{-1}}\Big|\sum \limits_{j\in W}\Big({\tilde{\xi }_{j}^{(l)}}(t)-{\bar{\xi }_{j,{K_{n}}}^{(l)}}(t)\Big)\Big|\ge \frac{{\varepsilon _{n}}}{2}\Bigg)\\ {} \le & \mathbb{P}\Bigg(\sum \limits_{j\in W}(|{Y_{j}}|{\mathbb{1}_{({K_{n}},\infty )}}(|{Y_{j}}|)+\mathbb{E}|{Y_{0}}|{\mathbb{1}_{({K_{n}},\infty )}}(|{Y_{0}}|))\ge \frac{n{\varepsilon _{n}}}{2}\Bigg)\\ {} \le & \frac{4}{{\varepsilon _{n}}}{\left(\mathbb{E}|{Y_{0}}{|^{2+\tau }}\right)^{1/(2+\tau )}}\mathbb{P}{(|{Y_{0}}|>{K_{n}})^{(1+\tau )/(2+\tau )}}\\ {} \le & \frac{4}{{K_{n}^{1+\tau }}{\varepsilon _{n}}}\mathbb{E}|{Y_{0}}{|^{2+\tau }},\end{aligned}\]
$l=1,2$. All in all, we have for any n such that ${\varepsilon _{n}}<\frac{T}{2}$,
\[\begin{aligned}{}& \mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\tilde{\xi }^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\\ {} \le & \mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\bar{\xi }_{{K_{n}}}^{(l)}}(t))|\ge \frac{{\varepsilon _{n}}}{2}\Big)\\ {} & +\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\tilde{\xi }^{(l)}}(t)-{\bar{\xi }_{{K_{n}}}^{(l)}}(t))|\ge \frac{{\varepsilon _{n}}}{2}\Big)\\ {} \le & \sqrt{2}\tilde{C}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{1}{64{(m+1)^{d}}{K_{n}^{2}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+8}\right\}+\frac{8}{{K_{n}^{1+\tau }}{\varepsilon _{n}}}\mathbb{E}|{Y_{0}}{|^{2+\tau }}.\end{aligned}\]
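For the reader's convenience, the numerical constants 160 and 576 in (4.7) and (4.8) simply absorb the denominators: once ${\varepsilon _{n}}<1$, we have ${\varepsilon _{n}}+4<5$ and ${\varepsilon _{n}}+8<9$, whence
\[ \frac{1}{32{(m+1)^{d}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+4}\ge \frac{n{\varepsilon _{n}^{2}}}{160{(m+1)^{d}}}\hspace{1em}\text{and}\hspace{1em}\frac{1}{64{(m+1)^{d}}{K_{n}^{2}}}\frac{n{\varepsilon _{n}^{2}}}{{\varepsilon _{n}}+8}\ge \frac{n{\varepsilon _{n}^{2}}}{576{(m+1)^{d}}{K_{n}^{2}}}.\]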
Hence, it follows for any n with ${\varepsilon _{n}}<\min \{1,\frac{T}{4}\}$ that
\[\begin{aligned}{}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)& \le {C_{1}}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{160{(m+1)^{d}}}\right\}\end{aligned}\]
as well as
\[\begin{aligned}{}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\tilde{\xi }^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\le & {C_{2}}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{576{(m+1)^{d}}{K_{n}^{2}}}\right\}\\ {} & +\frac{{C_{3}}}{{K_{n}^{1+\tau }}{\varepsilon _{n}}},\end{aligned}\]
where ${C_{2}}=\sqrt{2}\tilde{C}$ and ${C_{3}}=8\mathbb{E}|{Y_{0}}{|^{2+\tau }}$. □
Theorem 4.11.
For some $\zeta >0$, suppose ${\varepsilon _{n}}\approx {n^{-\frac{1+\tau }{2(2+\tau )}}}\Big[\log {\Big({T^{\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)\Big]^{\zeta +\frac{1}{2}}}$ and ${K_{n}}={n^{\frac{1}{2(2+\tau )}}}$ in Lemma 4.10. Then, for n sufficiently large,
\[ \mathbb{P}\Big(\max \Big\{\underset{t\in [-T,T]}{\sup }|\hat{\psi }(t)-\psi (t)|,\underset{t\in [-T,T]}{\sup }|\hat{\theta }(t)-\theta (t)|\Big\}>{\varepsilon _{n}}\Big)\le \tilde{C}{y_{n}},\]
where
\[ {y_{n}}=\Big[\log {\Big({T^{\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)\Big]^{-\frac{\zeta }{2}-\frac{1}{4}}}\]
and $\tilde{C}>0$ is a constant (independent of T).
Proof.
By Lemma 4.10 it follows that
\[\begin{aligned}{}& \mathbb{P}\Big(\max \Big\{\underset{t\in [-T,T]}{\sup }|\hat{\psi }(t)-\psi (t)|,\underset{t\in [-T,T]}{\sup }|\hat{\theta }(t)-\theta (t)|\Big\}>{\varepsilon _{n}}\Big)\\ {} \le & {\sum \limits_{l=1}^{2}}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\\ {} & +{\sum \limits_{l=1}^{2}}\mathbb{P}\Big(\underset{t\in [-T,T]}{\sup }{n^{-1}}|{S_{W}}({\tilde{\xi }^{(l)}}(t))|\ge {\varepsilon _{n}}\Big)\\ {} \le & C\left(\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{576{(m+1)^{d}}{K_{n}^{2}}}\right\}+\frac{1}{{\varepsilon _{n}}{K_{n}^{1+\tau }}}\right),\end{aligned}\]
for some constant $C>0$. Let ${\varepsilon _{n}}={n^{-\frac{1+\tau }{2(2+\tau )}}}\Big[\log {\Big({T^{\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)\Big]^{\zeta +\frac{1}{2}}}$, without loss of generality. Then we observe that
\[ \frac{1}{{\varepsilon _{n}}{K_{n}^{1+\tau }}}=\Big[\log {\Big({T^{\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)\Big]^{-\zeta -\frac{1}{2}}}.\]
Moreover,
\[\begin{aligned}{}\sqrt{\frac{T}{{\varepsilon _{n}}}}\exp \left\{-\frac{n{\varepsilon _{n}^{2}}}{576{(m+1)^{d}}{K_{n}^{2}}}\right\}=& {\Big({T^{\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)^{1-\frac{{r_{n}}}{576{(m+1)^{d}}}}}{y_{n}},\end{aligned}\]
with ${r_{n}}={\Big[\log ({T^{1/2}}{n^{\frac{1+\tau }{4(2+\tau )}}})\Big]^{2\zeta }}$. Hence, the assertion of the theorem follows. □
Remark 4.12.
-
(a) Fix $T>0$. Then, provided the assumptions of Theorem 3.12 are satisfied, Theorem 4.11 states that\[ \max \Big\{\underset{t\in [-T,T]}{\sup }|\hat{\psi }(t)-\psi (t)|,\underset{t\in [-T,T]}{\sup }|\hat{\theta }(t)-\theta (t)|\Big\}={\mathcal{O}_{\mathbb{P}}}({\varepsilon _{n}}),\]as $n\to \infty $, where ${\mathcal{O}_{\mathbb{P}}}$ denotes the probabilistic order of convergence.
-
(b) The phrase “for n sufficiently large” in Theorem 4.11 is understood in the following sense: for any fixed m, there exists ${n_{0}}={n_{0}}(m)$ such that the bound holds for all $n\ge {n_{0}}$. Of course, the function $m\mapsto {n_{0}}(m)$ is increasing.
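To get a feeling for the rates in Theorem 4.11, one may evaluate ${\varepsilon _{n}}$ and ${y_{n}}$ numerically. The sketch below (the choices $\tau =0.5$, $\zeta =1$, $T=10$ are arbitrary and purely illustrative) confirms that both quantities decrease in n, with ${y_{n}}$ doing so only at a logarithmic rate.

```python
import numpy as np

tau, zeta, T = 0.5, 1.0, 10.0          # illustrative parameter choices
a = (1 + tau) / (2 * (2 + tau))        # polynomial rate exponent

def eps_n(n):
    # epsilon_n ~ n^{-a} [log(T^{1/2} n^{a/2})]^{zeta + 1/2}
    log_arg = np.log(np.sqrt(T) * n ** (a / 2))
    return n ** (-a) * log_arg ** (zeta + 0.5)

def y_n(n):
    # y_n = [log(T^{1/2} n^{a/2})]^{-zeta/2 - 1/4}
    log_arg = np.log(np.sqrt(T) * n ** (a / 2))
    return log_arg ** (-zeta / 2 - 0.25)

ns = np.array([10**3, 10**5, 10**7, 10**9], dtype=float)
eps_vals = eps_n(ns)
y_vals = y_n(ns)

# epsilon_n decays polynomially, y_n only logarithmically; both decrease.
assert np.all(np.diff(eps_vals) < 0)
assert np.all(np.diff(y_vals) < 0)
```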
The following corollary is an immediate consequence of Theorem 4.11.
Corollary 4.13.
Let the assumptions of Theorem 3.12 be satisfied. Then
\[ \underset{n\to \infty }{\lim }\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\hat{\psi }(t)-\psi (t)|\ge c{b_{n}^{\frac{1}{2}-\varepsilon }}\Big)=0\]
for any constant $c>0$.
Proof.
Fix $c>0$ and assume that ${b_{n}}={n^{-\frac{1}{1-2\varepsilon }}}{\Big(\log n\Big)^{\eta +\frac{1}{1-2\varepsilon }}}$, without loss of generality. Since ${b_{n}}\to 0$ as $n\to \infty $, there exists ${n_{0}}\in \mathbb{N}$ such that $c{b_{n}^{\frac{1}{2}-\varepsilon }}<\min \{1,\frac{1}{4{b_{n}}}\}$ for all $n\ge {n_{0}}$. Taking ${\varepsilon _{n}}=c{b_{n}^{\frac{1}{2}-\varepsilon }}$ and $T={b_{n}^{-1}}$ in Lemma 4.10, it follows that
\[\begin{aligned}{}& \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\hat{\psi }(t)-\psi (t)|\ge c{b_{n}^{\frac{1}{2}-\varepsilon }}\Big)\\ {} \le & \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(1)}}(t))|\ge c{b_{n}^{\frac{1}{2}-\varepsilon }}\Big)\\ {} & +\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }{n^{-1}}|{S_{W}}({\xi ^{(2)}}(t))|\ge c{b_{n}^{\frac{1}{2}-\varepsilon }}\Big)\\ {} \le & 2\tilde{C}{c^{-\frac{1}{2}}}{b_{n}^{\frac{2\varepsilon -3}{4}}}\exp \left\{-\frac{{c^{2}}n{b_{n}^{1-2\varepsilon }}}{160{(m+1)^{d}}}\right\}\end{aligned}\]
for all $n\ge {n_{0}}$ and some constant $\tilde{C}>0$. Hence,
\[\begin{aligned}{}& \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\hat{\psi }(t)-\psi (t)|>c{b_{n}^{\frac{1}{2}-\varepsilon }}\Big)\\ {} \le & \check{C}{n^{\frac{3-2\varepsilon }{4(1-2\varepsilon )}-\frac{{c^{2}}{(\log n)^{\eta (1-2\varepsilon )}}}{160{(m+1)^{d}}}}}{\Big(\log n\Big)^{-\frac{3-2\varepsilon }{4(1-2\varepsilon )}(1+\eta (1-2\varepsilon ))}}\to 0,\end{aligned}\]
as $n\to \infty $, where $\check{C}=2\tilde{C}{c^{-\frac{1}{2}}}$. □
Corollary 4.14.
Let $\gamma =0$ and suppose that the assumptions of Theorem 3.12 are satisfied. Moreover, let ${\kappa _{n}}=2{\Big(\log n\Big)^{\frac{1+\eta (1-2\varepsilon )}{2}}}$. Then,
\[ \underset{n\to \infty }{\lim }\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)|}{|\tilde{\psi }(t)|}\ge {\kappa _{n}}\Big)=0.\]
Proof.
By Lemma 4.1, (2), $\frac{1}{|\psi (x)|}\le c{(1+|x|)^{\frac{1}{2}-\varepsilon }}$ for some constant $c>0$; hence, there exists ${n_{0}}\in \mathbb{N}$ such that
(4.9)
\[ \underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\inf }|\psi (t)|\ge {c^{-1}}{(1+|{b_{n}^{-1}}|)^{\varepsilon -\frac{1}{2}}}\ge {c^{-1}}{b_{n}^{\frac{1}{2}-\varepsilon }}\]
for all $n\ge {n_{0}}$. We first show that the probabilities of the events
\[ {A_{n}}:=\Big\{|\hat{\psi }(t)|<{n^{-1/2}}\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]\Big\}\]
tend to 0 as $n\to \infty $: By (4.1), $t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]$ implies $|\psi (t)|>2{n^{-1/2}}$, for all $n\ge {n_{1}}$ and some ${n_{1}}\in \mathbb{N}$. Set $\tilde{n}=\max \{{n_{0}},\hspace{2.5pt}{n_{1}}\}$. Then
\[\begin{aligned}{}\mathbb{P}({A_{n}})\le & \mathbb{P}\Big(|\hat{\psi }(t)-\psi (t)|>|\psi (t)|-{n^{-1/2}}\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]\Big)\\ {} \le & \mathbb{P}\Big(|\hat{\psi }(t)-\psi (t)|>\frac{1}{2}|\psi (t)|\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]\Big)\\ {} \le & \mathbb{P}\Big(|\hat{\psi }(t)-\psi (t)|>\frac{1}{2c}{b_{n}^{\frac{1}{2}-\varepsilon }}\hspace{2.5pt}\text{for some}\hspace{2.5pt}t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]\Big),\end{aligned}\]
for all $n\ge \tilde{n}$, where the last inequality follows from (4.9). Hence, by Corollary 4.13, ${\lim \nolimits_{n\to \infty }}\mathbb{P}({A_{n}})=0$.

Recall that ${\kappa _{n}}=2{\Big(\log n\Big)^{\frac{1+\eta (1-2\varepsilon )}{2}}}$. Then we find that
\[\begin{aligned}{}& \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)|}{|\tilde{\psi }(t)|}\ge {\kappa _{n}}\Big)\\ {} =& \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)|}{|\hat{\psi }(t)|}\ge {\kappa _{n}},\hspace{2.5pt}\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\inf }|\hat{\psi }(t)|\ge {n^{-1/2}}\Big)\\ {} & +\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)|}{|\tilde{\psi }(t)|}\ge {\kappa _{n}},\hspace{2.5pt}\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\inf }|\hat{\psi }(t)|<{n^{-1/2}}\Big)\\ {} \le & \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)-\hat{\psi }(t)|}{|\hat{\psi }(t)|}\ge {\kappa _{n}}-1,\hspace{2.5pt}\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\inf }|\hat{\psi }(t)|\ge {n^{-1/2}}\Big)\\ {} & +\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\inf }|\hat{\psi }(t)|<{n^{-1/2}}\Big)\\ {} \le & \mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\psi (t)-\hat{\psi }(t)|\ge ({\kappa _{n}}-1){n^{-1/2}}\Big)+\mathbb{P}({A_{n}}),\end{aligned}\]
for all $n\ge \tilde{n}$. Taking into account that for large n, $({\kappa _{n}}-1){n^{-1/2}}=2{b_{n}^{\frac{1}{2}-\varepsilon }}-{n^{-1/2}}\ge {b_{n}^{\frac{1}{2}-\varepsilon }}$, the assertion follows by Corollary 4.13. □

Now we can give the proof of Theorem 4.8.
Proof of Theorem 4.8.
First of all, observe that
\[\begin{aligned}{}{R_{n}}+\theta \frac{\psi -\hat{\psi }}{{\psi ^{2}}}{\mathbb{1}_{|\hat{\psi }|\le {n^{-1/2}}}}=& \Big(1-\frac{\hat{\psi }}{\psi }\Big)\Big(\frac{\hat{\theta }-\theta }{\tilde{\psi }}+\theta \frac{\psi -\hat{\psi }}{\psi \tilde{\psi }}\Big)\\ {} =& \frac{\psi }{\tilde{\psi }}\Big(\frac{\psi -\hat{\psi }}{\psi }\Big)\Big(\frac{\hat{\theta }-\theta }{\psi }+\theta \frac{\psi -\hat{\psi }}{{\psi ^{2}}}\Big)\\ {} =& \frac{\psi }{\tilde{\psi }}\Big(\frac{\psi -\hat{\psi }}{\psi }\Big)\Big(\frac{\hat{\theta }-\theta }{\psi }+i{\Big(\frac{1}{\psi }\Big)^{\prime }}\Big(\psi -\hat{\psi }\Big)\Big).\end{aligned}\]
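The chain of equalities above is purely algebraic: since ${\psi ^{\prime }}=i\theta $, we have $i{(1/\psi )^{\prime }}=\theta /{\psi ^{2}}$, and the remaining steps only rearrange factors. This can be checked numerically for arbitrary complex values; in the sketch below the concrete numbers are of course arbitrary, and $\tilde{\psi }=\hat{\psi }$ corresponds to the indicator being equal to 1.

```python
# Numerical check of the algebraic identity above with arbitrary complex
# values, where i*(1/psi)'(t) has been replaced by theta/psi^2 (using
# psi' = i*theta) and psi_tilde = psi_hat (truncation indicator equal to 1).
psi, psi_hat = 0.8 - 0.3j, 0.7 - 0.25j
theta, theta_hat = 0.5 + 0.1j, 0.45 + 0.12j
psi_tilde = psi_hat

lhs = (1 - psi_hat / psi) * ((theta_hat - theta) / psi_tilde
                             + theta * (psi - psi_hat) / (psi * psi_tilde))
rhs = (psi / psi_tilde) * ((psi - psi_hat) / psi) * (
    (theta_hat - theta) / psi + theta * (psi - psi_hat) / psi**2)
assert abs(lhs - rhs) < 1e-12
```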
Now, fix $\tilde{\gamma }>0$ and let ${\kappa _{n}}=2{\Big(\log n\Big)^{\frac{1+\eta (1-2\varepsilon )}{2}}}$. Moreover, let
\[ {M_{n}}=\max \Big\{\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\hat{\psi }(t)-\psi (t)|,\hspace{2.5pt}\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\hat{\theta }(t)-\theta (t)|\Big\}.\]
By (K3), ${\sup _{x\in \mathbb{R},\hspace{2.5pt}n\in \mathbb{N}}}|{\mathcal{F}_{+}}[{K_{{b_{n}}}}](x)|\le 2$; hence
\[\begin{aligned}{}& \mathbb{P}\Big({E_{2}}\ge \tilde{\gamma }\Big)\\ {} =& \mathbb{P}\Big(\sqrt{n}\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\Big\{{R_{n}}+\theta \frac{\psi -\hat{\psi }}{{\psi ^{2}}}{\mathbb{1}_{\{|\hat{\psi }|\le {n^{-1/2}}\}}}\Big\}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big\rangle _{{L^{2}}({\mathbb{R}^{\times }})}\ge \tilde{\gamma }\Big)\\ {} \le & \mathbb{P}\Big({\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\frac{|\psi (x)|}{|\tilde{\psi }(x)|}\frac{|\hat{\psi }(x)-\psi (x)|}{|\psi (x)|}\\ {} & \times \Big(\frac{|\hat{\theta }(x)-\theta (x)|}{|\psi (x)|}+\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big||\psi (x)-\hat{\psi }(x)|\Big)dx\ge \frac{\tilde{\gamma }}{2}{n^{-\frac{1}{2}}}\Big)\\ {} \le & \mathbb{P}\Big({M_{n}}{b_{n}^{\frac{1}{2}-\varepsilon }}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\Big(\frac{1}{|\psi (x)|}+\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|\Big)dx\ge \frac{\tilde{\gamma }}{2}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}\Big)\\ {} & +\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }\frac{|\psi (t)|}{|\tilde{\psi }(t)|}>{\kappa _{n}}\Big)+\mathbb{P}\Big(\underset{t\in [-{b_{n}^{-1}},{b_{n}^{-1}}]}{\sup }|\psi (t)-\hat{\psi }(t)|>{b_{n}^{\frac{1}{2}-\varepsilon }}\Big).\end{aligned}\]
Since by Lemma 4.1, (3), ${\Big(\frac{1}{\psi }\Big)^{\prime }}\in {L^{\infty }}(\mathbb{R})$, there is a constant $\tilde{c}>0$ such that
\[\begin{aligned}{}& \mathbb{P}\Big({M_{n}}{b_{n}^{\frac{1}{2}-\varepsilon }}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}\Big(\frac{1}{|\psi (x)|}+\Big|{\Big(\frac{1}{\psi }\Big)^{\prime }}(x)\Big|\Big)dx\ge \frac{\tilde{\gamma }}{2}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}\Big)\\ {} \le & \mathbb{P}\Big({M_{n}}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x){|^{2}}}dx\ge \frac{\tilde{\gamma }}{2\tilde{c}}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}{b_{n}^{\varepsilon -\frac{1}{2}}}\Big).\end{aligned}\]
Moreover, by Definition 3.10, (iii), we have for some $\check{c}>0$,
\[\begin{aligned}{}& \mathbb{P}\Big({M_{n}}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x){|^{2}}}dx\ge \frac{\tilde{\gamma }}{2\tilde{c}}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}{b_{n}^{\varepsilon -\frac{1}{2}}}\Big)\\ {} \le & \mathbb{P}\Big({M_{n}}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}\frac{{(1+{x^{2}})^{-1+\varepsilon }}}{|\psi (x){|^{2}}}{(1+{x^{2}})^{1-\varepsilon -\xi /2}}dx\ge \frac{\tilde{\gamma }}{2\tilde{c}\check{c}}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}{b_{n}^{\varepsilon -\frac{1}{2}}}\Big)\\ {} \le & \mathbb{P}\Big({M_{n}}\ge \frac{\tilde{\gamma }}{2\tilde{c}\check{c}}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}{b_{n}^{\frac{3}{2}-\varepsilon -\xi }}\Big\| \frac{{(1+{\cdot ^{2}})^{-\frac{\varepsilon -1}{2}}}}{\psi }{\Big\| _{{L^{2}}(\mathbb{R})}^{-1}}\Big),\end{aligned}\]
where the last line follows because $\frac{{(1+{\cdot ^{2}})^{-(\varepsilon -1)/2}}}{\psi }\in {L^{2}}(\mathbb{R})$ by Proposition 3.3, (c). Now, for n sufficiently large, we obtain
\[\begin{aligned}{}{n^{-\frac{1}{2}}}{\kappa _{n}^{-1}}{b_{n}^{\frac{3}{2}-\varepsilon -\xi }}=& \frac{1}{2}{n^{\frac{\xi -1}{1-2\varepsilon }-1}}{\big(\log n\big)^{(1-\xi )\big(\eta +\frac{1}{1-2\varepsilon }\big)}}\\ {} >& {n^{-\frac{1+\tau }{2(2+\tau )}}}\Big[\log {\Big({b_{n}^{-\frac{1}{2}}}{n^{\frac{1+\tau }{4(2+\tau )}}}\Big)\Big]^{\eta +\frac{1}{2}}}.\end{aligned}\]
Indeed, by Definition 3.10, (iii), we have
\[ \frac{\xi -1}{1-2\varepsilon }-1>\frac{1}{1-2\varepsilon }-\frac{1+\tau }{2(2+\tau )}>-\frac{1+\tau }{2(2+\tau )}.\]
Hence, we conclude by Theorem 4.11, Corollary 4.13 and Corollary 4.14 that
\[ \mathbb{P}\Big({E_{2}}\ge \tilde{\gamma }\Big)\to 0,\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
for any $\tilde{\gamma }>0$. □

4.3 Neglecting the drift γ
It remains to show that the result of Theorem 3.12 still holds true if γ is assumed to be arbitrary. For this purpose, consider the sample ${({\tilde{Y}_{j}})_{j\in W}}$ given by ${\tilde{Y}_{j}}={Y_{j}}-\gamma $, $j\in W$. Moreover, let ${\psi _{\ast }}(t)=\mathbb{E}[{e^{it{\tilde{Y}_{0}}}}]$ be the characteristic function of ${\tilde{Y}_{0}}$ and write ${\hat{\psi }_{\ast }}$ for its empirical counterpart, i.e. ${\hat{\psi }_{\ast }}(t)=\frac{1}{n}{\textstyle\sum _{j\in W}}{e^{it{\tilde{Y}_{j}}}}$. Then, with the notation
\[ \frac{1}{{\tilde{\psi }_{\ast }}(t)}:=\frac{1}{{\hat{\psi }_{\ast }}(t)}{\mathbb{1}_{\{|{\hat{\psi }_{\ast }}(t)|>{n^{-1/2}}\}}}={e^{it\gamma }}\frac{1}{\tilde{\psi }(t)},\]
we have for any $t\in \mathbb{R}$,
(4.10)
\[ \displaystyle \frac{{\hat{\theta }_{\ast }}(t)}{{\tilde{\psi }_{\ast }}(t)}=\frac{\hat{\theta }(t)}{\tilde{\psi }(t)}-\gamma {\mathbb{1}_{\{|{\hat{\psi }_{\ast }}(t)|>{n^{-1/2}}\}}}\hspace{1em}\text{as well as}\hspace{1em}\frac{{\theta _{\ast }}(t)}{{\psi _{\ast }}(t)}=\frac{\theta (t)}{\psi (t)}-\gamma ,\]
where ${\theta _{\ast }}(t)=\mathbb{E}[{\tilde{Y}_{0}}{e^{it{\tilde{Y}_{0}}}}]$ and ${\hat{\theta }_{\ast }}(t)=\frac{1}{n}{\textstyle\sum _{j\in W}}{\tilde{Y}_{j}}{e^{it{\tilde{Y}_{j}}}}$. For any $v\in \mathcal{U}(\xi ,{\beta _{2}})$, consider the decomposition
\[\begin{aligned}{}\sqrt{n}({\hat{L}_{W}}v-\mathcal{L}v)=& \frac{\sqrt{n}}{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\frac{\hat{\theta }}{\tilde{\psi }}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-\frac{\theta }{\psi }\Big\rangle _{{L^{2}}(\mathbb{R})}\\ {} =& \frac{\sqrt{n}}{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\frac{{\hat{\theta }_{\ast }}}{{\tilde{\psi }_{\ast }}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-\frac{{\theta _{\ast }}}{{\psi _{\ast }}}\Big\rangle _{{L^{2}}(\mathbb{R})}\\ {} & +\frac{\sqrt{n}}{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\gamma {\mathbb{1}_{\{|{\hat{\psi }_{\ast }}|>{n^{-1/2}}\}}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-\gamma \Big\rangle _{{L^{2}}(\mathbb{R})}.\end{aligned}\]
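The relations in (4.10) follow from the elementary identities ${\hat{\psi }_{\ast }}(t)={e^{-it\gamma }}\hat{\psi }(t)$ and ${\hat{\theta }_{\ast }}(t)={e^{-it\gamma }}(\hat{\theta }(t)-\gamma \hat{\psi }(t))$; in particular $|{\hat{\psi }_{\ast }}|=|\hat{\psi }|$, so the truncation indicators coincide. A short numerical sanity check on simulated data with an arbitrary drift γ (all concrete values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, gamma = 500, 0.7, 2.0
Y = rng.standard_normal(n) + gamma   # sample with drift gamma
Y_shift = Y - gamma                  # tilde{Y}_j = Y_j - gamma

psi_hat = np.mean(np.exp(1j * t * Y))
theta_hat = np.mean(Y * np.exp(1j * t * Y))
psi_hat_star = np.mean(np.exp(1j * t * Y_shift))
theta_hat_star = np.mean(Y_shift * np.exp(1j * t * Y_shift))

# psi_hat_star(t) = e^{-it*gamma} psi_hat(t), so the moduli coincide ...
assert abs(psi_hat_star - np.exp(-1j * t * gamma) * psi_hat) < 1e-12
assert abs(abs(psi_hat_star) - abs(psi_hat)) < 1e-12
# ... and, off the truncation event, the first identity in (4.10):
#   theta_hat_star / psi_hat_star = theta_hat / psi_hat - gamma.
assert abs(theta_hat_star / psi_hat_star - (theta_hat / psi_hat - gamma)) < 1e-10
```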
As W is regularly growing to infinity, the first summand on the right-hand side of the last equation tends to a Gaussian random variable since ${\psi _{\ast }}$ is an infinitely divisible characteristic function without drift component. For the second summand, we find that
\[\begin{aligned}{}& \frac{\sqrt{n}}{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],\gamma {\mathbb{1}_{\{|{\hat{\psi }_{\ast }}|>{n^{-1/2}}\}}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-\gamma \Big\rangle _{{L^{2}}(\mathbb{R})}\\ {} =& \frac{\sqrt{n}\gamma }{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-1\Big\rangle _{{L^{2}}(\mathbb{R})}\\ {} & -\frac{\sqrt{n}\gamma }{2\pi }\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathbb{1}_{\{|{\hat{\psi }_{\ast }}|\le {n^{-1/2}}\}}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big\rangle _{{L^{2}}(\mathbb{R})}.\end{aligned}\]
Hence, by (K3) and Definition 3.10, (iii), we obtain
\[\begin{aligned}{}& \sqrt{n}\mathbb{E}\Big|\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathcal{F}_{+}}[{K_{{b_{n}}}}]-1\Big\rangle _{{L^{2}}(\mathbb{R})}\Big|\\ {} \le & {\int _{\mathbb{R}}}\sqrt{n}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)||1-{\mathcal{F}_{+}}[{K_{{b_{n}}}}](x)|dx\lesssim {b_{n}}\sqrt{n}\| {\mathcal{G}^{-1\ast }}v{\| _{{H^{1}}(\mathbb{R})}},\end{aligned}\]
where the last term tends to zero as $n\to \infty $, since ${b_{n}}=o({n^{-1/2}})$. Moreover,
\[\begin{aligned}{}& \sqrt{n}\mathbb{E}\Big|\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathbb{1}_{\{|{\hat{\psi }_{\ast }}|\le {n^{-1/2}}\}}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big\rangle _{{L^{2}}(\mathbb{R})}\Big|\\ {} \le & 2\sqrt{n}{\int _{-{b_{n}^{-1}}}^{{b_{n}^{-1}}}}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\mathbb{P}\Big(|{\hat{\psi }_{\ast }}(x)|\le {n^{-\frac{1}{2}}}\Big)dx.\end{aligned}\]
Taking into account that $|\psi (x)|=|{\psi _{\ast }}(x)|$, relation (4.2) with $p=1/2$ yields
\[ \sqrt{n}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\mathbb{P}\Big(|{\hat{\psi }_{\ast }}(x)|\le {n^{-\frac{1}{2}}}\Big){\mathbb{1}_{[-{b_{n}^{-1}},{b_{n}^{-1}}]}}(x)\le \frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x)|}.\]
Applying again (4.2) with $p=1$ implies
\[ \sqrt{n}|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|\mathbb{P}\Big(|{\hat{\psi }_{\ast }}(x)|\le {n^{-\frac{1}{2}}}\Big){\mathbb{1}_{[-{b_{n}^{-1}},{b_{n}^{-1}}]}}(x)\le {n^{-\frac{1}{2}}}\frac{|{\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v](x)|}{|\psi (x){|^{2}}}\to 0,\]
as $n\to \infty $; thus, by dominated convergence, we have
\[ \sqrt{n}\mathbb{E}\Big|\Big\langle {\mathcal{F}_{+}}[{\mathcal{G}^{-1\ast }}v],{\mathbb{1}_{\{|{\hat{\psi }_{\ast }}|\le {n^{-1/2}}\}}}{\mathcal{F}_{+}}[{K_{{b_{n}}}}]\Big\rangle _{{L^{2}}(\mathbb{R})}\Big|\to 0,\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
All in all, this shows that Theorem 3.12 holds for any fixed $\gamma \in \mathbb{R}$.