Given a low-frequency sample of the infinitely divisible moving average random field {∫Rd f(t−x)Λ(dx), t∈Rd}, in [13] we proposed an estimator uv0ˆ for the function R∋x↦u(x)v0(x)=(uv0)(x), with u(x)=x and v0 being the Lévy density of the integrator random measure Λ. In this paper, we study asymptotic properties of the linear functional L2(R)∋v↦⟨v,uv0ˆ⟩L2(R) when the (known) kernel function f has compact support. We provide conditions that ensure consistency (in mean) and prove a central limit theorem for this functional.
Keywords: infinitely divisible random measure; stationary random field; Lévy process; moving average; Lévy density; Fourier transform; central limit theorem

Introduction
Consider a stationary infinitely divisible independently scattered random measure Λ whose Lévy density is denoted by v0. For some (known) Λ-integrable function f:Rd→R with compact support, let
X={X(t);t∈Rd},X(t)=∫Rdf(t−x)Λ(dx)
be the corresponding moving average random field. In our recent preprint [13], we proposed an estimator uv0ˆ for the function R∋x↦u(x)v0(x)=(uv0)(x), u(x)=x, based on low frequency observations (X(jΔ))j∈W of X, with Δ>0 and W a finite subset of Zd.
A wide class of spatio-temporal processes with the spectral representation (1.1) is provided by the so-called ambit random fields, where a space-time Lévy process serves as integrator. Such processes are, e.g., used to model the growth rate of tumours, where the spatial component describes the angle between the center of the tumour cell and the nearest point at its boundary (cf. [3, 16]). Ambit fields cover quite a number of different processes and fields including Ornstein–Uhlenbeck type and mixed moving average random fields (cf. [1, 2]). A further interesting application of (1.1) is given in [17], where the author uses infinitely divisible moving average random fields in order to model claims of natural disaster insurance within different postal code areas.
We point out that there is a large body of literature concerning estimation of the Lévy density v1 (or the Lévy measure, respectively) in the case when X is a Lévy process (cf. [5, 9, 10, 14, 19]). Moreover, in the recent paper [4] the authors provide an estimator for the Lévy density v0 of the integrator Lévy process {Ls} of a moving average process X(t)=∫Rf(t−s)dLs, t∈R, which covers the case d=1 in (1.1). For a discussion on the differences between our approach and the method provided in [4], we refer to [13] and [18].
In this paper, we investigate asymptotic properties of the linear functional L2(R)∋v↦LˆWv=⟨v,uv0ˆ⟩L2(R) as the sample size |W| tends to infinity. It is motivated by the paper of Nickl and Reiss [20], where the authors provide a Donsker-type theorem for the Lévy measure of pure jump Lévy processes. Since our observations are m-dependent, the classical i.i.d. theory does not apply here. Instead, we combine results of Chen and Shao [8] for m-dependent random fields and ideas of Bulinski and Shashkin [7] with exponential inequalities for weakly dependent random fields (see e.g. [15, 11]) in order to prove our limit theorems.
It turns out that under certain regularity assumptions on uv0, LˆWv is a mean consistent estimator for Lv=⟨v,uv0⟩L2(R) with a rate of convergence of order O(|W|−1/2), for any v that belongs to a subspace U of L1(R)∩L2(R). Moreover, we give conditions such that the finite dimensional distributions of the process {|W|1/2(LˆW−L)v; v∈U} are asymptotically Gaussian as W grows regularly to infinity.
From a practical point of view, a naturally arising question is whether a proposed model for v0 (or, equivalently, for uv0) is suitable. Knowledge of the asymptotic distribution of |W|1/2(LˆW−L) can be used to construct tests for different hypotheses, e.g., on regularity assumptions of the model for v0. Indeed, the scalar product ⟨·,·⟩L2(R) naturally implies that the class U of test functions grows as uv0 becomes more regular.
This paper is organized as follows. In Section 2, we give a brief overview of regularly growing sets and infinitely divisible moving average random fields. We further recall some notation and the most frequently used results from [13]. Section 3 is devoted to asymptotic properties of LˆW. Here we discuss our regularity assumptions and state the main results of this paper (Theorems 3.7 and 3.12). Section 4 is dedicated to the proofs of our limit theorems. Some of the shorter proofs, as well as external results that will frequently be used in Section 3, are deferred to the Appendix.
Preliminaries

Notation
Throughout this paper, we use the following notation.
By B(Rd) we denote the Borel σ-field on the Euclidean space Rd. The Lebesgue measure on Rd is denoted by νd and we shortly write νd(dx)=dx when we integrate w.r.t. νd. For any measure space (M,M,μ) we denote by Lα(M), 1≤α<∞, the space of all M|B(R)-measurable functions f:M→R with ∫M|f(x)|^α μ(dx)<∞. Equipped with the norm ‖f‖Lα(M)=(∫M|f(x)|^α μ(dx))^{1/α}, Lα(M) becomes a Banach space and, in the case α=2, even a Hilbert space with scalar product ⟨f,g⟩L2(M)=∫M f(x)g(x)μ(dx), for any f,g∈L2(M). By L∞(M) (i.e. if α=∞) we denote the space of all real-valued bounded functions on M. In the case (M,M,μ)=(R,B(R),ν1) we denote by
Hδ(R)={f∈L2(R): ∫R|F+f(x)|²(1+x²)^δ dx<∞}
the Sobolev space of order δ>0, equipped with the Sobolev norm ‖f‖Hδ(R)=‖F+f(·)(1+·²)^{δ/2}‖L2(R), where F+ is the Fourier transform on L2(R). For f∈L1(R), F+f is defined by F+f(x)=∫R e^{itx}f(t)dt, x∈R. Throughout the rest of this paper, (Ω,A,P) denotes a probability space. Note that in this case Lα(Ω) is the space of all random variables with finite α-th moment. For an arbitrary set A we write card(A) or briefly |A| for its cardinality. Let supp(f)={x∈Rd: f(x)≠0} be the support set of a function f:Rd→R. Denote by diam(A)=sup{‖x−y‖∞: x,y∈A} the diameter of a bounded set A⊂Rd.
Regularly growing sets
In this section, we briefly recall some basic facts about regularly growing sets. For a more detailed investigation on this topic, see, e.g., [7].
Let a=(a1,…,ad)∈Rd be a vector with positive components. In the sequel, we shortly write a>0 in this situation. Moreover, let
Π0(a)={x∈Rd: 0<xi≤ai, i=1,…,d}
and define for any j∈Zd the shifted block Πj(a) by
Πj(a)=Π0(a)+ja={x∈Rd: jiai<xi≤(ji+1)ai, i=1,…,d},
where ja=(j1a1,…,jdad).
Clearly, {Πj(a), j∈Zd} forms a partition of Rd. For any U⊂Rd, introduce the sets
J−(U,a)={j∈Zd: Πj(a)⊂U}, J+(U,a)={j∈Zd: Πj(a)∩U≠∅},
U−(a)=⋃j∈J−(U,a)Πj(a), U+(a)=⋃j∈J+(U,a)Πj(a).
A sequence of sets Un⊂Rd (n∈N) tends to infinity in the Van Hove sense, or shortly is VH-growing, if for any a>0
νd(Un−(a))→∞ and νd(Un−(a))/νd(Un+(a))→1 as n→∞.
For a finite set A⊂Zd, define its boundary by ∂A={j∈Zd∖A: dist(j,A)=1}, where dist(j,A)=inf{‖j−x‖∞: x∈A}.
A sequence of finite sets An⊂Zd (n∈N) is called regularly growing (to infinity) if
|An|→∞ and |∂An|/|An|→0 as n→∞.
Regular growth of a family An⊂Zd means that the number of points in the boundary of An grows significantly slower than the number of its interior points.
The following result that connects regularly and VH-growing sequences can be found in [7, p.174].
Let Un⊂Rd (n∈N) be VH-growing. Then Vn=Un∩Zd (n∈N) is regularly growing to infinity.
If (Un)n∈N is a sequence of finite subsets of Zd, regularly growing to infinity, then Vn=⋃j∈Un[j,j+1) is VH-growing, where [j,j+1)={x∈Rd: jk≤xk<jk+1, k=1,…,d}.
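As an illustration (not part of the paper's argument), the cubes Wn={0,…,n−1}^d⊂Zd are regularly growing: |∂Wn|=(n+2)^d−n^d, so |∂Wn|/|Wn|→0. A brute-force check in Python:

```python
import itertools

def boundary(A, d):
    """∂A: lattice points outside A at Chebyshev distance exactly 1 from A."""
    A = set(A)
    nbrs = set()
    for j in A:
        for off in itertools.product((-1, 0, 1), repeat=d):
            nbrs.add(tuple(c + o for c, o in zip(j, off)))
    return nbrs - A

def boundary_ratio(n, d=2):
    """|∂W_n| / |W_n| for the cube W_n = {0, ..., n-1}^d."""
    W = list(itertools.product(range(n), repeat=d))
    return len(boundary(W, d)) / len(W)

# for cubes the ratio equals ((n+2)^d - n^d) / n^d, which decays like 2d/n
print(boundary_ratio(10), boundary_ratio(40))
```

The function names are ours; the computation mirrors the definition of ∂A above with dist taken in the ‖·‖∞ metric.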
Infinitely divisible random measures
In what follows, denote by E0(Rd) the collection of all bounded Borel sets in Rd.
Suppose that Λ={Λ(A);A∈E0(Rd)} is an infinitely divisible random measure on some probability space (Ω,A,P), i.e. a random measure with the following properties:
Let (Em)m∈N be a sequence of disjoint sets in E0(Rd). Then the sequence (Λ(Em))m∈N consists of independent random variables; if, in addition, ∪m=1∞Em∈E0(Rd), then we have Λ(∪m=1∞Em)=∑m=1∞Λ(Em) almost surely.
The random variable Λ(A) has an infinitely divisible distribution for any choice of A∈E0(Rd).
For every A∈E0(Rd), let φΛ(A) denote the characteristic function of the random variable Λ(A). Due to the infinite divisibility of the random variable Λ(A), the characteristic function φΛ(A) has a Lévy–Khintchin representation which can, in its most general form, be found in [21, p. 456]. Throughout the rest of the paper we make the additional assumption that the Lévy–Khintchin representation of Λ(A) is of a special form, namely
φΛ(A)(t)=exp{νd(A)K(t)}, A∈E0(Rd),
with
K(t)=ita0−(1/2)t²b0+∫R(e^{itx}−1−itx1[−1,1](x))v0(x)dx,
where νd denotes the Lebesgue measure on Rd, a0 and b0 are real numbers with 0≤b0<∞ and v0:R→R is a Lévy density, i.e. a measurable function which fulfills ∫Rmin{1,x2}v0(x)dx<∞. The triplet (a0,b0,v0) will be referred to as the Lévy characteristic of Λ. It uniquely determines the distribution of Λ. This particular structure of the characteristic functions φΛ(A) means that the random measure Λ is stationary with the control measure λ:B(R)→[0,∞) given by
λ(A)=νd(A)(|a0|+b0+∫R min{1,x²}v0(x)dx) for all A∈E0(Rd).
Now one can define the stochastic integral with respect to the infinitely divisible random measure Λ in the following way:
Let f=∑j=1nxj1Aj be a real simple function on Rd, where Aj∈E0(Rd) are pairwise disjoint. Then for every A∈B(Rd) we define
∫Af(x)Λ(dx)=∑j=1nxjΛ(A∩Aj).
A measurable function f:(Rd,B(Rd))→(R,B(R)) is said to be Λ-integrable if there exists a sequence (f(m))m∈N of simple functions as in (1) such that f(m)→f holds λ-almost everywhere and such that, for each A∈B(Rd), the sequence ∫Af(m)(x)Λ(dx)m∈N converges in probability as m→∞. In this case we set
∫Af(x)Λ(dx)=P‐limm→∞∫Af(m)(x)Λ(dx).
A useful characterization of the Λ-integrability of a function f is given in [21, Theorem 2.7]. Now let f:Rd→R be Λ-integrable; then the function f(t−·) is Λ-integrable for every t∈Rd as well. We define the moving average random field X={X(t),t∈Rd} by
X(t)=∫Rdf(t−x)Λ(dx),t∈Rd.
Recall that a random field is called infinitely divisible if its finite dimensional distributions are infinitely divisible. The random field X above is (strictly) stationary and infinitely divisible and the characteristic function φX(0) of X(0) is given by
φX(0)(u)=exp{∫Rd K(uf(s))ds},
where K is the function from (2.1). The argument ∫RdK(uf(s))ds in the above exponential function can be shown to have a similar structure as K(t); more precisely, we have
∫Rd K(uf(s))ds=iua1−(1/2)u²b1+∫R(e^{iux}−1−iux1[−1,1](x))v1(x)dx,
where a1 and b1 are real numbers with b1≥0 and the function v1 is the Lévy density of X(0). The triplet (a1,b1,v1) is again referred to as the Lévy characteristic (of X(0)) and determines the distribution of X(0) uniquely. A simple computation shows that the triplet (a1,b1,v1) is given by the formulas
a1=∫Rd U(f(s))ds, b1=b0∫Rd f²(s)ds, v1(x)=∫supp(f) (1/|f(s)|)·v0(x/f(s)) ds,
where supp(f):={s∈Rd:f(s)≠0} denotes the support of f and the function U is defined via
U(u)=ua0+∫R ux(1[−1,1](ux)−1[−1,1](x))v0(x)dx.
Note that the Λ-integrability of f immediately implies that f∈L1(Rd)∩L2(Rd). Hence, all integrals above are finite.
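To see how these formulas transform the triplet in the simplest case (a worked example of our own), take the indicator kernel f=\mathbb{1}_{[0,1]^d}, so that |f(s)|=1 on a support of Lebesgue measure 1:

```latex
a_1=\int_{[0,1]^d}U(1)\,ds=a_0,\qquad
b_1=b_0\int_{[0,1]^d}f^2(s)\,ds=b_0,\qquad
v_1(x)=\int_{[0,1]^d}\frac{1}{|f(s)|}\,v_0\!\left(\frac{x}{f(s)}\right)ds=v_0(x),
```

since U(1)=a_0 (for u=1 the two indicator terms inside U cancel). Hence X(0) inherits the Lévy characteristic of Λ unchanged. A scaled indicator f=c·1_A instead yields v1(x)=νd(A)·|c|^{−1}·v0(x/c).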
For details on the theory of infinitely divisible measures and fields we refer the interested reader to [21].
A plug-in estimation approach for v0
Let the random field X={X(t), t∈Rd} be given as in Section 2.3 and define the function u:R→R by u(x)=x. Suppose further that an estimator uv1ˆ for uv1 is given. In our recent preprint [13], we provided an estimation approach for uv0 based on relation (2.4), which we briefly recall in this section. To this end, a number of notations are required.
Assume that f satisfies the integrability condition
∫supp(f)|f(s)|^{1/2}ds<∞,
and define the operator G:L2(R)→L2(R) by
Gv=∫supp(f) sgn(f(s))·v(·/f(s)) ds, v∈L2(R).
Moreover, define the isometry M:L2(R)→L2(R×, dx/|x|) by
(Mv)(x)=|x|1/2v(x),v∈L2(R),
and let the functions mf,±:R×→C and μf:R×→C be given by
mf,+(x)=∫supp(f) sgn(f(s))|f(s)|^{1/2} e^{−ix log|f(s)|} ds,
mf,−(x)=∫supp(f) |f(s)|^{1/2} e^{−ix log|f(s)|} ds,
μf(y)=mf,+(log|y|) if y>0, and μf(y)=mf,−(log|y|) if y<0.
Multiplying both sides in (2.4) by u leads to the equivalent relation
uv1=Guv0.
Suppose uv1∈L2(R) and assume that, for some β≥0,
(Uβ): |mf,±(x)|≳1/(1+|x|^β) for all x∈R.
Then the unique solution uv0∈L2(R) to equation (2.6) is given by
uv0=G−1uv1=M−1F×−1((1/μf)·F×Muv1),
cf. [13, Theorem 3.1]. Based on this relation, the paper [13] provides the estimator
uv0ˆ=M−1F×−1((1/μf,n)·F×Muv1ˆ)=:Gn−1uv1ˆ
for uv0, where (an)n∈N⊆(0,∞) is an arbitrary sequence, depending on the sample size n, that tends to 0 as n→∞, and the mapping 1/μf,n:R→C is defined by 1/μf,n:=(1/μf)·1{|μf|>an}. Here F×:L2(R×,dx/|x|)→L2(R×,dx/|x|) denotes the Fourier transform on the multiplicative group R×, which is defined by
(F×u)(y)=∫R× u(x) e^{−i log|x|·log|y|}·e^{iπδ(x)δ(y)} dx/|x|,
for all u∈L1(R×,dx/|x|)∩L2(R×,dx/|x|), with δ:R×→R given by δ(x)=1(−∞,0)(x) (cf. [13, Section 2.2]). A more detailed introduction to harmonic analysis on locally compact abelian groups can be found, e.g., in [12].
The linear operator Gn−1:L2(R)→L2(R) defined in (2.7) is bounded with operator norm ‖Gn−1‖≤1/an, whereas G−1 is unbounded in general.
m-dependent random fields
A random field X={X(t),t∈T}, T⊆Rd, defined on some probability space (Ω,A,P) is called m-dependent if for some m∈N and any finite subsets U and V of T the random vectors (X(u))u∈U and (X(v))v∈V are independent whenever
‖u−v‖∞=max{|ui−vi|, i=1,…,d}>m,
for all u=(u1,…,ud)⊤∈U and v=(v1,…,vd)⊤∈V.
Let the random field X be given in (2.2) and suppose that f has compact support. Then X is m-dependent with m>diam(supp(f)).
Compactness of supp(f) implies that supp(f(t−·)) and supp(f(s−·)) are disjoint whenever ‖t−s‖∞>diam(supp(f)). Since, further, Λ is independently scattered and the integration in (2.2) is effectively over supp(f(t−·)) only, X(t) and X(s) are independent for ‖t−s‖∞>diam(supp(f)). □
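The lemma can be illustrated numerically. The following sketch (our own, with d=1 and i.i.d. Gaussian noise standing in for the increments of Λ on unit cells) shows that sample covariances of the discrete moving average vanish beyond the lag diam(supp(f))=2:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(200_002)      # i.i.d. noise standing in for Λ on unit cells
f = np.array([1.0, 1.0, 1.0])           # kernel supported on 3 points: diam(supp f) = 2
X = np.convolve(eps, f, mode="valid")   # X_j = sum_k f_k * eps_{j-k}

def lag_cov(X, h):
    """Sample covariance of X_0 and X_h."""
    return np.cov(X[:-h], X[h:])[0, 1]

# lags 1 and 2 overlap the kernel support (theoretical covariances 2 and 1);
# lag 3 exceeds diam(supp f), so X_0 and X_3 share no noise terms (covariance 0)
print(round(lag_cov(X, 1), 2), round(lag_cov(X, 3), 2))
```

Of course, vanishing covariance is weaker than the independence asserted in the lemma; in the Gaussian stand-in the two coincide.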
A linear functional for infinitely divisible moving averages

The setting
Let Λ={Λ(A),A∈E0(Rd)} be a stationary infinitely divisible random measure defined on some probability space (Ω,A,P) with characteristic triplet (a0,0,v0), i.e. Λ is purely non-Gaussian. For a known Λ-integrable function f:Rd→R let X={X(t)=∫Rdf(t−x)Λ(dx),t∈Rd} be the infinitely divisible moving average random field defined in Section 2.3.
Fix Δ>0 and suppose X is observed on a regular grid ΔZd={jΔ, j∈Zd} with mesh size Δ, i.e. consider the random field Y given by
Y=(Yj)j∈Zd,Yj=X(jΔ),j∈Zd.
For a finite subset W⊂Zd let (Yj)j∈W be a sample drawn from Y and denote by n the cardinality of W.
Throughout this paper, for any numbers a,b≥0, we use the notation a≲b if a≤cb for some constant c>0.
Let the function u:R→R be given by u(x)=x. We make the following assumptions: for some τ>0,
f∈L2+τ(Rd) has compact support;
uv0∈L1(R)∩L2(R) is bounded;
∫R|x|1+τ|(uv0)(x)|dx<∞;
|∫supp(f)f(s)F+[uv0](f(s)x)ds|≲(1+x2)−1/2 for all x∈R;
∃ε>0 such that the function
R∋x↦exp(∫supp(f)∫_0^{f(s)x} Im(F+[uv0](y))dy ds)
is contained in H−1+ε(R).
Suppose that uv1ˆ is an estimator for uv1 (which we will precisely define in the next section) based on the sample (Yj)j∈W. Then, using the notation in Section 2.4, we introduce the linear functional
LˆW:L2(R)→R, LˆWv:=⟨v,uv0ˆ⟩L2(R)=⟨v,Gn−1uv1ˆ⟩L2(R).
The purpose of this paper is to investigate asymptotic properties of LˆW as the sample size |W|=n tends to infinity.
An estimator for uv1
In this section we introduce an estimator for the function uv1. To this end, let ψ denote the characteristic function of X(0). Then, by Assumption 3.1, (2), together with formula (2.3), we find that ψ can be rewritten as
ψ(t)=E e^{itY0}=exp(iγt+∫R(e^{itx}−1)v1(x)dx), t∈R,
for some γ∈R and the Lévy density v1 given in (2.4). We call γ the drift parameter, or shortly the drift, of X. As a consequence of representation (3.3), the random field X is purely non-Gaussian. It is subsequently assumed that the drift γ is known.
Taking derivatives in (3.3) leads to the identity
−iψ′(t)/ψ(t)=γ+F+[uv1](t), t∈R.
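This identity follows by differentiating (3.3), a step we spell out for convenience; the interchange of d/dt and the integral is justified since uv1∈L1(R) (cf. Proposition 3.3, (a)):

```latex
\psi'(t)=\psi(t)\Big(i\gamma+\int_{\mathbb{R}}ix\,e^{itx}\,v_1(x)\,dx\Big)
\quad\Longrightarrow\quad
-i\,\frac{\psi'(t)}{\psi(t)}=\gamma+\int_{\mathbb{R}}e^{itx}\,x\,v_1(x)\,dx
=\gamma+\mathcal{F}_+[uv_1](t).
```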
Neglecting γ for the moment, this relation suggests that a natural estimator F+[uv1]ˆ for F+[uv1] is given by
F+[uv1]ˆ(t)=θˆ(t)ψ˜(t),t∈R,
with
ψ˜(t)=(1/ψˆ(t))·1{|ψˆ(t)|>n−1/2}, t∈R,
and ψˆ(t)=(1/n)∑j∈W e^{itYj}, θˆ(t)=(1/n)∑j∈W Yj e^{itYj} being the empirical counterparts of ψ and θ=−iψ′.
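The empirical quantities above are plain sample averages. As a quick numerical illustration (our own sketch; a Gaussian sample is used merely because ψ and θ are then available in closed form, not because it fits the purely non-Gaussian model):

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal(100_000)   # stand-in sample; for N(0,1), psi(t) = exp(-t^2/2)

def psi_hat(t):
    """Empirical characteristic function: average of exp(i t Y_j)."""
    return np.mean(np.exp(1j * t * Y))

def theta_hat(t):
    """Empirical counterpart of theta = -i psi': average of Y_j exp(i t Y_j)."""
    return np.mean(Y * np.exp(1j * t * Y))

# for N(0,1): theta(t) = -i psi'(t) = i t exp(-t^2/2)
print(abs(psi_hat(1.0) - np.exp(-0.5)), abs(theta_hat(1.0) - 1j * np.exp(-0.5)))
```

Both errors are of the familiar O(n^{−1/2}) Monte Carlo order, matching the truncation level used in ψ˜.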
Now, consider for any b>0 a function Kb:R→R with the following properties:
Kb∈L2(R);
supp(F+[Kb])⊆[−b−1,b−1];
|1−F+[Kb](x)|≲min{1,b|x|} for all x∈R.
Then, for any b>0, we define the estimator uv1ˆ for uv1 by
uv1ˆ(t)=F+−1[F+[uv1]ˆ·F+[Kb]](t)=(1/(2π))∫R e^{−itx} θˆ(x)ψ˜(x)F+[Kb](x)dx, t∈R.
If uv1ˆ is a consistent estimator for uv1, it is reasonable to assume that γ=0 (cf. [18]). Indeed, for the asymptotic results below, the value of γ is irrelevant. Even if γ≠0, the functional LˆW estimates the intended quantity with uv1ˆ given in (3.4) (cf. Section 4.3).
Choosing Kb(x)=sin(b−1x)/(πx) yields the estimator uv1ˆ that we introduced in [18] and [13], originally designed by Comte and Genon-Catalot [10] in the case when X is a pure jump Lévy process.
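For this choice, (K1)–(K3) can be checked directly from the classical Fourier pair of the sinc function (with the convention F+f(x)=∫R e^{itx}f(t)dt used above):

```latex
\mathcal{F}_+[K_b](x)=\int_{\mathbb{R}}e^{itx}\,\frac{\sin(b^{-1}t)}{\pi t}\,dt
=\mathbb{1}_{[-b^{-1},\,b^{-1}]}(x)\quad\text{for a.e. }x\in\mathbb{R},
```

so Kb∈L2(R) (K1), supp(F+[Kb])⊆[−b−1,b−1] (K2), and |1−F+[Kb](x)|=1{|x|>b−1}≤min{1,b|x|} (K3).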
Discussion and examples
In order to explain Assumption 3.1, we first state the following proposition, whose proof can be found in the Appendix.
Let the infinitely divisible moving average random field X={X(t), t∈Rd} be given as above and suppose u(x)=x.
Let Assumption 3.1, (1) and (2) be satisfied. Then uv1∈L1(R)∩L2(R) is bounded. Moreover, F+[uv1](x)=∫supp(f) f(s)F+[uv0](f(s)x)ds for all x∈R, that is, the expression in Assumption 3.1, (4) is valid.
Let Assumption 3.1, (1) and (3) hold true. Then ∫R|x|^{2+τ}|(uv1)(x)|dx<∞ (also in the case τ=0).
Assumption 3.1, (5) is satisfied if and only if the function R∋x↦(1+x²)^{−1/2+ε}·(1/ψ(x)), with ψ given in (3.3), belongs to L2(R) for some ε>0.
The compact support property in Assumption 3.1, (1) ensures that the random field (Yj)j∈Zd is m-dependent with m>Δ−1 diam(supp(f)) (cf. Lemma 2.4). In particular, m increases when the grid size Δ of the sample decreases. Moreover, compact support of f together with f∈L2+τ(Rd) implies that f∈Lq(Rd) for all 0<q≤2+τ. Consequently, f fulfills the integrability condition (2.5). In contrast, if f does not have compact support, the Λ-integrability only ensures f∈L2(Rd).
Assumption 3.1, (3) is a moment assumption on Λ. More precisely, it is satisfied if and only if
E|Λ(A)|2+τ<∞
for all A∈E0(Rd), cf. [22]. By Proposition 3.3, (b), this assumption also implies E|X(0)|2+τ<∞ in our setting.
As a consequence of Proposition 3.3, (a) and (c), Assumption 3.1, (4) ensures regularity of uv1 whereas (5) yields the polynomial decay of ψ. It was shown in [13, Theorem 3.10] that ψ and uv1 are connected via the relation
|ψ(x)|=exp(−∫_0^x Im(F+[uv1](y))dy), x∈R;
hence, more regularity of uv1 ensures slower decay rates for |ψ(x)| as x→±∞. Further results on the polynomial decay of infinitely divisible characteristic functions as well as sufficient conditions for this property to hold can be found in [23].
Let us give some examples of Λ and f satisfying Assumption 3.1, (1)–(5). (Gamma random measure).
Fix b>0 and let, for any x∈R, v0(x)=x−1e−bx1(0,∞)(x). Clearly, Assumption 3.1, (2) and (3) are satisfied for any τ>0. The Fourier transform of uv0 is given by F+[uv0](x)=(b−ix)−1, x∈R; hence
∫supp(f) f(s)F+[uv0](f(s)x)ds=∫supp(f) f(s)/(b−if(s)x) ds, x∈R.
The latter identity shows that Assumption 3.1, (4) holds true for any integrable f with compact support. Moreover, a simple calculation yields that Assumption 3.1, (5) becomes
∫R(1+x²)^{−1+ε} exp(∫supp(f) log(1+x²f²(s)/b²)ds) dx<∞.
This condition is fulfilled for any ε<1/2−α if
α:=∫supp(f) max{1, f²(s)/b²} ds<1/2.
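For instance (an illustration of our own), for the indicator kernel f=\mathbb{1}_{(0,\theta)} one has f²(s)=1 on the support, so

```latex
\alpha=\int_{0}^{\theta}\max\Big\{1,\tfrac{1}{b^{2}}\Big\}\,ds=\theta\,\max\{1,b^{-2}\},
```

and the condition α<1/2 reduces to θ<1/2 when b≥1, and to θ<b²/2 when b<1.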
Consistency of LˆW
In this section, we give an upper bound for the estimation error E|LˆWv−Lv| that allows us to derive conditions under which LˆW is consistent for the linear functional L:L2(R)→R given by
Lv=⟨v,uv0⟩L2(R), v∈L2(R).
With the notation from Section 2.4, the adjoint operator G−1∗:Image(G)→L2(R) of G−1 is given by
G−1∗v=M−1F×−1((1/μ̄f)·F×Mv), v∈Image(G),
where μ̄f denotes the complex conjugate of μf. Moreover, the adjoint Gn−1∗:L2(R)→L2(R) of Gn−1 writes as
Gn−1∗v=M−1F×−1((1/μ̄f,n)·F×Mv), v∈L2(R),
with 1/μ̄f,n=(1/μ̄f)·1{|μ̄f|>an}. Notice that Gn−1∗ is a bounded operator, whereas G−1∗ is unbounded in general.
Notice that Gn−1∗=G−1∗ if an=0 for all n∈N. In this case, Gn−1∗uv1ˆ is well-defined only if uv1ˆ is an element of Image(G∗), which is indeed a very restrictive assumption. For a detailed discussion we refer to [13].
With the previous notation, we now derive an upper bound for E|LˆWv−Lv|. To this end, recall condition (Uβ) from Section 2.4.
Let γ=0 and suppose Assumption 3.1, (1)–(3) hold true for some τ≥0. Moreover, let condition (Uβ) be satisfied for some β≥0 and assume that Kb:R→R is a function with properties (K1)–(K3). Then
E|LˆWv−Lv| ≤ (S/π)·(E|Y0|/(nb)^{1/2})·‖(Gn−1∗−G−1∗)v‖L2(R) + (1/(2π))·⟨|F+[G−1∗v]|, |F+[uv1]|·|1−F+[Kb]|⟩L2(R) + (c·S/(2πn))·(E|Y0|²+‖uv1‖L1(R))·∫R |F+[G−1∗v](x)|/|ψ(x)| dx
for any v∈Image(G) such that ∫R|F+[G−1∗v](x)|/|ψ(x)|dx<∞, where c>0 is some constant and S:=sup_{b>0, x∈R}|F+[Kb](x)|.
A proof of Lemma 3.6, as well as of Theorem 3.7 below, can be found in the Appendix.
Fix γ∈R. Suppose that condition (Uβ1) is satisfied for some β1≥0 and let v∈L2(R) be such that G−1∗v∈H1(R), F+[G−1∗v]/ψ∈L1(R) and (Mv)(exp(·)), (Mv)(−exp(·))∈Hβ2(R) for some β2>β1. Moreover, let a=an and b=bn be sequences with the properties an→0, bn→0 and an=o((nbn)^{β1/(2(β1−β2))}) as n→∞, and assume that conditions (K1)–(K3) are fulfilled. Then, under Assumption 3.1, (1)–(4), E|LˆWv−Lv|→0 as n→∞, with the order of convergence given by
E|LˆWv−Lv|=O(an^{β2/β1−1}/(nbn)^{1/2}+bn+1/√n).
Notice that condition (Uβ) ensures uniqueness of uv0∈L2(R) as a solution of Guv0=uv1. In Lemma 3.6, it can be replaced by the more (indeed most) general assumption mf,±≠0 almost everywhere on R. Moreover, condition (K3) can be replaced by sup_{b>0, x∈R}|F+[Kb](x)|<∞ in Lemma 3.6.
In order to deduce the convergence rate in Theorem 3.7 explicitly, condition (3.9) is essential. Moreover, it ensures that the function v belongs to the range of G (cf. [13, Theorem 3.1]); hence the expression G−1∗v is well-defined.
The condition G−1∗v∈H1(R) in Theorem 3.7 can be dropped if γ=0.
Under the conditions of Theorem 3.7, the convergence rate of E|LˆWv−Lv|→0 is at least O(n−1/2) as n→∞, provided that
an=o((nbn)^{β1/(β1−β2)}) and bn=O(n^{−1/2}), as n→∞.
We close this section with the following example, showing that the functions gt considered in [20, p. 3309] may belong to the range of G−1∗.
Fix t>0 and let v(x)=(1/x)·1R∖[−t,t](x), x∈R. Apparently, v∈L2(R) fulfills condition (3.9) for any β2>0. Let, for some fixed λ,θ>0, f(s)=e^{−λs}1(0,θ)(s), s∈R. Then a simple computation yields that (Uβ1) is satisfied with β1=1. Moreover, for all x≠0,
(G−1∗v)(x)=(1/(2x))·log(|x|/t)·1(t, te^{λθ}](|x|)+(λθ/(2x))·1(te^{λθ},∞)(|x|);
hence, G−1∗v∈H1(R). Since
‖F+[G−1∗v]/ψ‖L1(R) ≤ ‖G−1∗v‖H1(R)·‖(1+·²)^{(−1+ε)/2}/ψ‖L2(R),
any random measure Λ satisfying Assumption 3.1, (5) yields F+[G−1∗v]/ψ∈L1(R) (cf. Proposition 3.3, (c)).
A central limit theorem for LˆW
Provided the assumptions of Theorem 3.7 are satisfied,
errW(v):=√n·(LˆWv−Lv)
is bounded in mean. In this section, we give conditions under which errW(v) is asymptotically Gaussian. For this purpose, introduce the following notation.
Let Assumption 3.1 be satisfied and suppose that condition (Uβ1) is fulfilled for some β1>0. A function v∈L2(R) is called admissible of index (ξ,β2) if
G−1∗v∈H^{3/2−ε}(R),
(Mv)(exp(·)), (Mv)(−exp(·))∈Hβ2(R) for some β2>β1, and
|F+[G−1∗v](x)|≲(1+x²)^{−ξ/2} for some ξ>2(1−ε)−(1/2−ε)·(1+τ)/(2+τ).
The linear subspace of all admissible functions of index (ξ,β2) is denoted by U(ξ,β2).
The parameters ε and τ describe the size of U(ξ,β2). In particular, for larger values of ε and τ, the set of admissible functions increases, and vice versa.
Assumption 3.1, (5) implies ε<1/2 (otherwise 1/|ψ(x)|→0 as |x|→∞, which is impossible since |ψ(x)|≤1); hence Definition 3.10, (i) yields that F+[G−1∗v]∈L1(R).
Clearly, the lower bound for ξ in Definition 3.10, (iii) can be replaced by ξ>7/4−(3/2)ε. Nevertheless, since our purpose is to point out the influence of τ on the set of admissible functions, we do not use this simplification.
It immediately follows from formula (3.7) that G−1∗v∈Hδ(R) if and only if G−1v∈Hδ(R).
For any j∈W and any admissible function v∈U(ξ,β2), introduce the random variables
Zj,v(1)=(1/(2π))·Yj·F+[F+[G−1∗v](−·)/ψ(·)](Yj) and Zj,v(2)=(i/(2π))·F+[F+[G−1∗v](−·)·(1/ψ)′](Yj).
In the sequel, it is assumed that the random field Y introduced in (3.1) is observed on a sequence (Wk)k∈N of regularly growing observation windows (cf. Section 2.2). To avoid cumbersome notation, we drop the index k and shortly write W instead of Wk. Moreover, we denote by n(=n(k)) the cardinality of W.
With the previous notation, we now can formulate the main result of this section.
Fix m∈N, m>Δ−1 diam(supp(f)). Let Assumption 3.1 be satisfied and suppose that conditions (K1)–(K3) are fulfilled. Moreover, let, for some η>0, the sequences an and bn be given by
an=o((nbn)^{β1/(β1−β2)}) and bn≈n^{−1/(1−2ε)}·(log n)^{(η+1)/(1−2ε)}, as n→∞.
Then, as W is regularly growing to infinity,
errW(v)→d Nv,
for any admissible function v∈U(ξ,β2), where Nv is a Gaussian random variable with zero expectation and variance given by
σv²=∑_{j∈Zd: ‖j‖∞≤m} E[(Zj,v(1)−Zj,v(2))(Z0,v(1)−Z0,v(2))].
A proof of Theorem 3.12 can be found in Section 4.
Unfortunately, we could not provide a rate for the convergence errW(v)→d Nv in Theorem 3.12. To obtain one, it would be sufficient to provide, e.g., L1(Ω,P)-rates for the convergences sup_x|ψˆ(x)−ψ(x)|→0 and sup_x|θˆ(x)−θ(x)|→0 (as |W|→∞), which seems to be a hard problem in the dependent observations setting.
Let the assumptions of Theorem 3.12 hold true. Then, as W is regularly growing to infinity,
(errW(v1),…,errW(vk))⊤→d Nv1,…,vk,
for any v1∈U(ξ1,β2(1)),…,vk∈U(ξk,β2(k)), where Nv1,…,vk is a centered Gaussian random vector with covariance matrix (Σs,t)s,t=1,…,k given by
Σs,t=∑_{j∈Zd: ‖j‖∞≤m} E[(Zj,vs(1)−Zj,vs(2))(Z0,vt(1)−Z0,vt(2))].
Suppose v1∈U(ξ1,β2(1)),…,vk∈U(ξk,β2(k)) and, for arbitrary numbers λ1,…,λk∈R, let v=∑l=1kλlvl. Then a simple calculation yields
∑_{l=1}^k λl·errW(vl)=√n·(LˆWv−Lv).
Since v∈U(min_l ξl, min_l β2(l)), by Theorem 3.12, √n·(LˆWv−Lv)→d Nv, where Nv is a Gaussian random variable with zero expectation and variance given in (3.10). Now, let (T1,…,Tk)⊤ be a zero mean Gaussian random vector with covariance matrix (Σs,t)s,t=1,…,k. Using the linearity of F+ and G−1∗, a short computation shows that
Nv=d∑l=1kλlTl;
hence, the assertion follows by the Cramér–Wold theorem (cf. [6]). □
Proof of Theorem 3.12
In order to prove Theorem 3.12, we adopt the strategy of the proof of [20, Theorem 2]. Nevertheless, the main difficulty in our setting is that the observations (Yj)j∈W are not independent; hence the classical theory cannot be applied here. Instead, we use asymptotic results for partial sums of m-dependent random fields (cf. [8]) in combination with the theory developed by Bulinski and Shashkin in [7] for weakly dependent random fields.
We start with the following lemma.
Let γ=0 and suppose that v∈U(ξ,β2) is an admissible function. Then Assumption 3.1 implies:
xP has a bounded Lebesgue density on R, where P denotes the distribution of X(0).
Let μ(dx)=(uv1)(x)dx. By Proposition 3.3, (a), uv1∈L1(R); hence, μ defines a finite signed measure on R. Since θ=ψF+[uv1], we conclude that
F+[xP](t)=θ(t)=F+[P](t)F+[uv1](t)=F+[μ∗P](t),
i.e. xP(dx)=(μ∗P)(dx); thus, xP has the density d[xP]/dx=∫R(uv1)(x−y)P(dy) and consequently ‖d[xP]/dx‖L∞(R)≤‖uv1‖L∞(R).
By Assumption 3.1, (4), (5), Proposition 3.3, (a), (c) and the Cauchy–Schwarz inequality, we obtain for any x∈R,
1/|ψ(x)|=|1+∫_0^x(1/ψ)′(t)dt| ≤ 1+∫_0^x |θ(t)|/|ψ(t)|² dt = 1+∫_0^x |F+[uv1](t)|/|ψ(t)| dt ≲ 1+∫_0^x (1+t²)^{−ε/2}·(1+t²)^{(−1+ε)/2}/|ψ(t)| dt ≤ 1+‖(1+·²)^{(−1+ε)/2}/ψ‖L2(R)·(∫_0^x(1+t²)^{−ε}dt)^{1/2} ≲ (1+|x|)^{1/2−ε}.
Further, we have for any x∈R,
|(1/ψ)′(x)|=|F+[uv1](x)|/|ψ(x)| ≲ (1+|x|)^{−1/2−ε}.
The last expression is bounded and square integrable; hence (1/ψ)′∈L2(R)∩L∞(R).
F+[G−1∗v]∈L1(R)∩L2(R) immediately follows from Definition 3.10, (i) (cf. Remark 3.11, (b)). Moreover, by Proposition 3.3, (a), we find that
∫R |F+[G−1∗v](x)|/|ψ(x)| dx ≤ ‖G−1∗v‖H^{1−ε}(R)·‖(1+·²)^{(−1+ε)/2}/ψ‖L2(R),
where the latter is finite due to Definition 3.10, (i). The bound in part (2) finally yields
∫R |F+[G−1∗v](x)|²/|ψ(x)|² dx ≲ ∫R |F+[G−1∗v](x)|²(1+x²)^{1/2−ε}dx = ‖G−1∗v‖²_{H^{1/2−ε}(R)}<∞.
□
In order to prove Theorem 3.12, consider the following decomposition that can be obtained by the isometry property of F+:
errW(v)=√n·(LˆWv−Lv)=(1/(2π))[E1+E2+E3+E4]+E5,
with E1,…,E5 given by
E1=√n·⟨F+[G−1∗v], {(θˆ−θ)/ψ − i(1/ψ)′·(ψˆ−ψ)}·F+[Kb]⟩L2(R),
E2=√n·⟨F+[G−1∗v], {Rn + θ·((ψ−ψˆ)/ψ²)·1{|ψˆ|≤n−1/2}}·F+[Kb]⟩L2(R),
E3=√n·⟨F+[G−1∗v], (θ/ψ)·(F+[Kb]−1)⟩L2(R),
E4=√n·⟨F+[G−1∗v], F+[Kb]·{θ·((ψˆ−ψ)/ψ²) − θˆ/ψ}·1{|ψˆ|≤n−1/2}⟩L2(R),
E5=√n·⟨(Gn−1∗−G−1∗)v, uv1ˆ⟩L2(R),
and Rn=(1−ψˆ/ψ)(θˆψ˜−θ/ψ). We call E1 the main stochastic term and E2 the remainder term.
Subsequently, we give a step-by-step proof of Theorem 3.12 by considering each of the above terms E1,…,E5 separately.
We first show that the deterministic term E3 tends to zero as the sample size n tends to infinity.
Suppose γ=0. Then, under the conditions of Theorem 3.12,
E3=√n·⟨F+[G−1∗v], (θ/ψ)·(F+[Kb]−1)⟩L2(R)→0, as n→∞,
for any admissible function v∈U(ξ,β2).
Taking into account that |θ/ψ|=|F+[uv1]|, Assumption 3.1, (4), together with Proposition 3.3, (a) and condition (K3), yields
|E3| ≤ √n·∫R|F+[G−1∗v](x)|·|F+[uv1](x)|·|1−F+[Kbn](x)|dx ≲ bn·√n·∫R|F+[G−1∗v](x)|dx,
where the right-hand side is finite due to Lemma 4.1. Moreover, since bn=o(n−1/2), it tends to 0 as n→∞. □
Next, we observe that E5 is asymptotically negligible in mean.
Let the assumptions of Theorem 3.12 be satisfied. Then
E|E5|=√n·E|⟨(Gn−1∗−G−1∗)v, uv1ˆ⟩L2(R)|→0, as n→∞,
for any v∈U(ξ,β2).
From the proofs of Lemma 3.6 and Theorem 3.7 we conclude that
√n·E|⟨(Gn−1∗−G−1∗)v, uv1ˆ⟩L2(R)| ≲ √n·an^{β2/β1−1}/(nbn)^{1/2};
hence, E|E5|→0 as n→∞, since an=o((nbn)^{β1/(β1−β2)}). □
Suppose γ=0 and let the assumptions of Theorem 3.12 be satisfied. Then
E|E4|=√n·E|⟨F+[G−1∗v], F+[Kb]·{θ·((ψˆ−ψ)/ψ²) − θˆ/ψ}·1{|ψˆ|≤n−1/2}⟩L2(R)|→0,
as n→∞, for any admissible function v∈U(ξ,β2).
Since θ/ψ²=i·(1/ψ)′, we obtain by conditions (K2) and (K3),
E|E4| ≤ S·∫_{−bn−1}^{bn−1}|F+[G−1∗v](x)|·|(1/ψ)′(x)|·√n·E[|ψˆ(x)−ψ(x)|·1{|ψˆ(x)|≤n−1/2}]dx + S·∫_{−bn−1}^{bn−1}(|F+[G−1∗v](x)|/|ψ(x)|)·√n·E[|θˆ(x)|·1{|ψˆ(x)|≤n−1/2}]dx,
with S:=sup_{x∈R, b>0}|F+[Kb](x)|. In order to bound the summands on the right-hand side of the latter inequality, we start with the following observation: there exists n0∈N such that for all n≥n0,
x∈[−bn−1, bn−1] ⟹ |ψ(x)|>2n−1/2.
Indeed, by Lemma 4.1, (2), there is a constant c>0 such that 1/|ψ(x)|≤c(1+|x|)^{1/2−ε} for all x∈R. Hence, if |ψ(x)|≤2n−1/2, then |x|≥(1/(2c))^{2/(1−2ε)}·n^{1/(1−2ε)}−1. Since bn−1=o(n^{1/(1−2ε)}) as n→∞, there exists n0∈N such that bn−1<(1/(2c))^{2/(1−2ε)}·n^{1/(1−2ε)}−1 for all n≥n0, i.e. x∉[−bn−1, bn−1]. This shows (4.1).
In the sequel, we assume that n≥n0 and consider each summand in the above inequality separately:
Using the m-dependence of (Yj)j∈Zd, we conclude as in the first part of the proof of [18, Lemma 8.3] that
P(|ψˆ(x)|≤n−1/2) ≲ n^{−p}/|ψ(x)|^{2p},
for any p≥1/2 and all x∈R with |ψ(x)|>2n−1/2. Taking p=1/2 in (4.2), by the Cauchy–Schwarz inequality, [18, Lemma 8.2] and Lemma 4.1, (2), (3), we find that
∫_{−bn−1}^{bn−1}|F+[G−1∗v](x)|·|(1/ψ)′(x)|·√n·E[|ψˆ(x)−ψ(x)|·1{|ψˆ(x)|≤n−1/2}]dx
≲ ∫_{−bn−1}^{bn−1}|F+[G−1∗v](x)|·|(1/ψ)′(x)|·P(|ψˆ(x)|≤n−1/2)^{1/2}·1{|ψ(x)|>2n−1/2}dx
≤ n^{−1/4}·∫_{−bn−1}^{bn−1}(|F+[G−1∗v](x)|/|ψ(x)|^{1/2})·|(1/ψ)′(x)|·1{|ψ(x)|>2n−1/2}dx
≤ n^{−1/4}·‖F+[G−1∗v]/ψ‖L1(R)·‖(1/ψ)′‖L∞(R),
for all n≥n0, where the last inequality uses the fact that |ψ(x)|≤1. Hence, the first integral tends to zero as n→∞.
For the second integral, by the triangle inequality we observe that, for any $n\ge n_0$,
\[\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,\sqrt{n}\,\mathbb{E}\big[|\hat\theta(x)|\,\mathbb{1}_{\{|\hat\psi(x)|\le n^{-1/2}\}}\big]\,dx\le\int_{-b_n^{-1}}^{b_n^{-1}}\big(I_1(x)+I_2(x)\big)\,dx,\]
where
\[I_1(x)=\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,\sqrt{n}\,\mathbb{E}\big[|\hat\theta(x)-\theta(x)|\,\mathbb{1}_{\{|\hat\psi(x)|\le n^{-1/2}\}}\big]\,\mathbb{1}_{\{|\psi(x)|>2n^{-1/2}\}}\]
and
\[I_2(x)=\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,\sqrt{n}\,|\theta(x)|}{|\psi(x)|}\,\mathbb{P}\big(|\hat\psi(x)|\le n^{-1/2}\big)\,\mathbb{1}_{\{|\psi(x)|>2n^{-1/2}\}}.\]
Applying Lemma A.2 with $q=1/2$ (cf. Appendix A.4), we find that
\[I_1(x)\lesssim \mathbb{E}|Y_0|^2\,\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|},\quad\text{for all }x\in\mathbb{R};\]
hence, by Lemma 4.1, (3) and the finite $(2+\tau)$-moment condition, $I_1$ is majorized by an integrable function. Moreover, applying the Cauchy–Schwarz inequality, (4.2) and again Lemma A.2 (with $q=1$) yields
\begin{aligned}I_1(x)&\lesssim\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,\mathbb{P}\big(|\hat\psi(x)|\le n^{-1/2}\big)^{1/2}\,\mathbb{1}_{\{|\psi(x)|>2n^{-1/2}\}}\\
&\le\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,\frac{n^{-1/4}}{|\psi(x)|^{1/2}}\to 0,\quad\text{as }n\to\infty,\end{aligned}
for all $x\in\mathbb{R}$. By dominated convergence, $\lim_{n\to\infty}\int_{-b_n^{-1}}^{b_n^{-1}}I_1(x)\,dx=0$ follows. Further,
\begin{aligned}\int_{-b_n^{-1}}^{b_n^{-1}}I_2(x)\,dx&\le n^{-1/2}\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,\frac{|\theta(x)|}{|\psi(x)|^2}\,\mathbb{1}_{\{|\psi(x)|>2n^{-1/2}\}}\,dx\\
&\le n^{-1/2}\,\Big\|\frac{\mathcal{F}_+[\mathcal{G}^{-1*}v]}{\psi}\Big\|_{L^1(\mathbb{R})}\,\Big\|\Big(\frac{1}{\psi}\Big)'\Big\|_{L^\infty(\mathbb{R})},\end{aligned}
by (4.2) (with $p=1$), Lemma 4.1, (2), (3) and the Cauchy–Schwarz inequality; hence, also $\lim_{n\to\infty}\int_{-b_n^{-1}}^{b_n^{-1}}I_2(x)\,dx=0$.
All in all, this shows the assertion of the lemma. □
Main stochastic term
In this section we show the asymptotic normality of the main stochastic term. For this purpose, let $P_n:\mathcal{B}(\mathbb{R})\to[0,1]$ be the empirical measure given by
\[P_n=\frac{1}{n}\sum_{j\in W}\delta_{Y_j},\]
where $\delta_x:\mathcal{B}(\mathbb{R})\to\{0,1\}$ denotes the Dirac measure concentrated at $x\in\mathbb{R}$. Further, for any $v\in U(\xi,\beta_2)$, define the random fields $(Z_{j,v,n}^{(k)})_{j\in\mathbb{Z}^d}$, $k=1,2$, by
\[Z_{j,v,n}^{(1)}=\frac{1}{2\pi}\,Y_j\,\mathcal{F}_+\Big[\frac{\mathcal{F}_+[\mathcal{G}^{-1*}v](-\cdot)}{\psi}\,\mathcal{F}_+[K_{b_n}]\Big](Y_j)\tag{4.3}\]
and
\[Z_{j,v,n}^{(2)}=\frac{i}{2\pi}\,\mathcal{F}_+\Big[\mathcal{F}_+[\mathcal{G}^{-1*}v](-\cdot)\,\Big(\frac{1}{\psi}\Big)'\,\mathcal{F}_+[K_{b_n}]\Big](Y_j).\tag{4.4}\]
The following theorem is the main result of this section.
Let the assumptions of Theorem 3.12 be satisfied. Then, as $W$ is regularly growing to infinity,
\[\frac{1}{2\pi}E_1=\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\Big\{\frac{\hat\theta-\theta}{\psi}-i\Big(\frac{1}{\psi}\Big)'(\hat\psi-\psi)\Big\}\mathcal{F}_+[K_b]\Big\rangle_{L^2(\mathbb{R})}\stackrel{d}{\to}N_v,\]
for any $v\in U(\xi,\beta_2)$, where $N_v$ is a Gaussian random variable with zero expectation and variance $\sigma_v^2$ given in (3.10).
In order to prove Theorem 4.5, we first show some auxiliary statements. We begin with the following representation for the main stochastic term.
Let $v\in U(\xi,\beta_2)$. Then, under the assumptions of Theorem 3.12, the main stochastic term can be represented as
\[\frac{1}{2\pi}E_1=\frac{1}{\sqrt{n}}\sum_{j\in W}\big(Z_{j,v,n}^{(1)}-Z_{j,v,n}^{(2)}\big),\]
with $Z_{j,v,n}^{(k)}$, $k=1,2$, given in (4.3) and (4.4).
Since $\theta=-i\psi'$,
\[\frac{\theta}{\psi}-i\,\psi\Big(\frac{1}{\psi}\Big)'=-i\Big(\frac{\psi'}{\psi}+\psi\Big(\frac{1}{\psi}\Big)'\Big)=-i\Big(\psi\cdot\frac{1}{\psi}\Big)'=0;\]
hence,
\begin{aligned}\frac{1}{2\pi}E_1&=\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\Big\{\frac{\hat\theta}{\psi}-i\Big(\frac{1}{\psi}\Big)'\hat\psi\Big\}\mathcal{F}_+[K_{b_n}]\Big\rangle_{L^2(\mathbb{R})}\\
&=\frac{\sqrt{n}}{2\pi}\int_{\mathbb{R}}\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\Big\{\frac{\hat\theta(-x)}{\psi(-x)}-i\Big(\frac{1}{\psi}\Big)'(-x)\,\hat\psi(-x)\Big\}\mathcal{F}_+[K_{b_n}](-x)\,dx\\
&=\frac{\sqrt{n}}{2\pi}\Big[\int_{\mathbb{R}}\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\,\frac{\hat\theta(-x)}{\psi(-x)}\,\mathcal{F}_+[K_{b_n}](-x)\,dx\\
&\hspace{3em}-i\int_{\mathbb{R}}\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\,\Big(\frac{1}{\psi}\Big)'(-x)\,\hat\psi(-x)\,\mathcal{F}_+[K_{b_n}](-x)\,dx\Big].\end{aligned}
Now, taking into account that $\hat\psi(x)=\int_{\mathbb{R}}e^{itx}\,P_n(dt)$ and $\hat\theta(x)=\int_{\mathbb{R}}e^{itx}\,t\,P_n(dt)$, Fubini's theorem yields the desired result. □
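The estimators $\hat\psi$ and $\hat\theta$ appearing here are plain empirical averages over the sample. The following sketch (names hypothetical; for simplicity an i.i.d. standard normal stand-in for the observations $(Y_j)$, i.e. $m=0$) illustrates them and checks the defining identities $\hat\psi(0)=1$ and $\hat\theta(0)=\frac{1}{n}\sum_{j\in W}Y_j$:

```python
import numpy as np

def emp_psi(y, t):
    # empirical characteristic function: psi_hat(t) = (1/n) * sum_j exp(i t Y_j)
    return np.mean(np.exp(1j * t * y))

def emp_theta(y, t):
    # empirical counterpart of theta(t) = E[Y_0 exp(i t Y_0)]
    return np.mean(y * np.exp(1j * t * y))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)          # hypothetical stand-in for the sample (Y_j)

print(abs(emp_psi(y, 0.0)))        # = 1, since exp(0) = 1
print(emp_theta(y, 0.0).real)      # = sample mean of the Y_j
```

Note that $|\hat\psi(t)|\le 1$ for every $t$, which is the reason the truncation at level $n^{-1/2}$ below only ever removes small values.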
The following lemma justifies the asymptotic variance σ2 in Theorem 3.12.
Let the assumptions of Theorem 3.12 be satisfied and let $v_1\in U(\xi_1,\beta_2^{(1)})$ and $v_2\in U(\xi_2,\beta_2^{(2)})$. Then, as $W$ is regularly growing to infinity,
\[\mathrm{Cov}\Big(|W|^{-1/2}\sum_{j\in W}\big(Z_{j,v_1,n}^{(1)}-Z_{j,v_1,n}^{(2)}\big),\ |W|^{-1/2}\sum_{j\in W}\big(Z_{j,v_2,n}^{(1)}-Z_{j,v_2,n}^{(2)}\big)\Big)\to\sigma_{v_1,v_2},\]
with $\sigma_{v_1,v_2}\in\mathbb{R}$ given by
\[\sigma_{v_1,v_2}=\sum_{t\in\mathbb{Z}^d:\,\|t\|_\infty\le m}\mathbb{E}\big[\big(Z_{t,v_1}^{(1)}-Z_{t,v_1}^{(2)}\big)\big(Z_{0,v_2}^{(1)}-Z_{0,v_2}^{(2)}\big)\big].\]
Let $v_1\in U(\xi_1,\beta_2^{(1)})$, $v_2\in U(\xi_2,\beta_2^{(2)})$ and define the functions $g^{(k)},g_n^{(k)}:\mathbb{R}\to\mathbb{R}$, $k=1,2$, by
\[g_n^{(k)}(x)=\frac{x}{2\pi}\,\mathcal{F}_+\Big[\frac{\mathcal{F}_+[\mathcal{G}^{-1*}v_k](-\cdot)}{\psi}\,\mathcal{F}_+[K_{b_n}]\Big](x)-\frac{i}{2\pi}\,\mathcal{F}_+\Big[\mathcal{F}_+[\mathcal{G}^{-1*}v_k](-\cdot)\Big(\frac{1}{\psi}\Big)'\,\mathcal{F}_+[K_{b_n}]\Big](x),\quad x\in\mathbb{R},\]
and
\[g^{(k)}(x)=\frac{x}{2\pi}\,\mathcal{F}_+\Big[\frac{\mathcal{F}_+[\mathcal{G}^{-1*}v_k](-\cdot)}{\psi}\Big](x)-\frac{i}{2\pi}\,\mathcal{F}_+\Big[\mathcal{F}_+[\mathcal{G}^{-1*}v_k](-\cdot)\Big(\frac{1}{\psi}\Big)'\Big](x),\quad x\in\mathbb{R}.\]
Then $(g^{(k)}(Y_j))_{j\in\mathbb{Z}^d}$ and $(g_n^{(k)}(Y_j))_{j\in\mathbb{Z}^d}$ fulfill properties (1)–(3) from Lemma A.4 (cf. Appendix A.5). Indeed, by Lemma 4.6, it follows that
\begin{aligned}\mathbb{E}\big[g_n^{(k)}(Y_0)\big]&=\mathbb{E}\big[Z_{0,v_k,n}^{(1)}-Z_{0,v_k,n}^{(2)}\big]=\frac{1}{n}\,\mathbb{E}\Big[\sum_{j\in W}\big(Z_{j,v_k,n}^{(1)}-Z_{j,v_k,n}^{(2)}\big)\Big]\\
&=\frac{1}{2\pi}\,\mathbb{E}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v_k],\Big\{\frac{\hat\theta-\theta}{\psi}-i\Big(\frac{1}{\psi}\Big)'(\hat\psi-\psi)\Big\}\mathcal{F}_+[K_{b_n}]\Big\rangle_{L^2(\mathbb{R})}.\end{aligned}
Since $\mathbb{E}[\hat\psi(x)-\psi(x)]=\mathbb{E}[\hat\theta(x)-\theta(x)]=0$ for all $x\in\mathbb{R}$, we conclude by Fubini's theorem that $\mathbb{E}[g_n^{(k)}(Y_0)]=\mathbb{E}[g^{(k)}(Y_0)]=0$ for $k=1,2$. Moreover, since the Fourier transform of an integrable function is bounded, the finite $(2+\tau)$-moment condition together with Lemma 4.1, (2), (3) and (K3) implies $\mathbb{E}|g^{(k)}(Y_0)|^2<\infty$ and $\mathbb{E}|g_n^{(k)}(Y_0)|^2<\infty$, $k=1,2$. The same arguments in combination with dominated convergence yield
\[\mathbb{E}\big[g_n^{(1)}(Y_0)\,g_n^{(2)}(Y_j)\big]\to\mathbb{E}\big[g^{(1)}(Y_0)\,g^{(2)}(Y_j)\big],\]
as $|W|\to\infty$. Hence, Lemma A.4 yields the assertion of the lemma. □
We can now give the proof of Theorem 4.5.
If $\sigma_v^2=0$, then Lemma 4.7 yields
\[\sigma_{v,n}^2:=\mathbb{E}\Big[\Big(\frac{1}{\sqrt{n}}\sum_{j\in W}\big(Z_{j,v,n}^{(1)}-Z_{j,v,n}^{(2)}\big)\Big)^2\Big]\to\sigma_v^2=0,\]
as $W$ is regularly growing to infinity; hence, $n^{-1/2}\sum_{j\in W}\big(Z_{j,v,n}^{(1)}-Z_{j,v,n}^{(2)}\big)\to 0$ in probability.
Now, assume that $\sigma_v^2>0$ and choose $n_0\in\mathbb{N}$ such that $\sigma_{v,n}^2>0$ for all $n\ge n_0$ (which is indeed possible, since $\sigma_{v,n}^2\to\sigma_v^2>0$ as $n\to\infty$). For any $n\ge n_0$, let
\[X_{j,n}:=\frac{1}{\sqrt{n}}\,\frac{Z_{j,v,n}^{(1)}-Z_{j,v,n}^{(2)}}{\sigma_{v,n}},\quad j\in\mathbb{Z}^d,\]
and denote by $F_n$ the distribution function of $\sum_{j\in W}X_{j,n}$. In the proof of Lemma 4.7 we have seen that $(X_{j,n})_{j\in\mathbb{Z}^d}$ is a centered $m$-dependent random field with $\mathbb{E}|X_{j,n}|^{2+\tau}\le c\,n^{-1-\tau/2}\,\sigma_{v,n}^{-(2+\tau)}$ for any $n\in\mathbb{N}$ and some constant $c>0$. Hence, applying [8, Theorem 2.6] with $p=2+\tau$ yields
\[\sup_{x\in\mathbb{R}}|F_n(x)-\phi(x)|\le 75\,c\,(m+1)^{(1+\tau)d}\,\sigma_{v,n}^{-(2+\tau)}\,n^{-\tau/2}\to 0,\]
as $n\to\infty$. This completes the proof. □
Remainder term
In this section, we show that the remainder term $E_2$ is stochastically negligible as the sample size $n$ tends to infinity.
Let $\gamma=0$ and suppose that the assumptions of Theorem 3.12 are satisfied. Then, as $n\to\infty$,
\[E_2=\sqrt{n}\,\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\Big\{R_n+\frac{\theta(\psi-\hat\psi)}{\psi^2}\,\mathbb{1}_{\{|\hat\psi|\le n^{-1/2}\}}\Big\}\mathcal{F}_+[K_b]\Big\rangle_{L^2(\mathbb{R})}\stackrel{\mathbb{P}}{\to}0,\]
for any $v\in U(\xi,\beta_2)$.
In order to prove Theorem 4.8, some auxiliary statements are required. Therefore, we introduce the following notation.
For any $t\in\mathbb{R}$, $j\in\mathbb{Z}^d$, let the centered random variables $\xi_j^{(l)}(t)$, $\tilde\xi_j^{(l)}(t)$, $l=1,2$, be given by
\begin{aligned}\xi_j^{(1)}(t)&=\cos(tY_j)-\mathbb{E}[\cos(tY_0)], &\xi_j^{(2)}(t)&=\sin(tY_j)-\mathbb{E}[\sin(tY_0)],\\
\tilde\xi_j^{(1)}(t)&=Y_j\cos(tY_j)-\mathbb{E}[Y_0\cos(tY_0)], &\tilde\xi_j^{(2)}(t)&=Y_j\sin(tY_j)-\mathbb{E}[Y_0\sin(tY_0)].\end{aligned}
Then $\hat\psi-\psi$ and $\hat\theta-\theta$ can be rewritten as
\[\hat\psi(t)-\psi(t)=\frac{1}{n}\sum_{j\in W}\big(\xi_j^{(1)}(t)+i\,\xi_j^{(2)}(t)\big)\quad\text{and}\quad\hat\theta(t)-\theta(t)=\frac{1}{n}\sum_{j\in W}\big(\tilde\xi_j^{(1)}(t)+i\,\tilde\xi_j^{(2)}(t)\big).\]
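This real/imaginary decomposition is elementary and can be checked numerically. The sketch below (all names hypothetical) uses an i.i.d. $N(0,1)$ stand-in sample, for which $\psi(t)=e^{-t^2/2}$ and $\mathbb{E}[\sin(tY_0)]=0$:

```python
import numpy as np

rng = np.random.default_rng(3)
y, t = rng.normal(size=300), 0.6
Ecos, Esin = np.exp(-t**2 / 2), 0.0    # E[cos(t Y_0)], E[sin(t Y_0)] for Y_0 ~ N(0,1)

xi1 = np.cos(t * y) - Ecos             # xi_j^(1)(t)
xi2 = np.sin(t * y) - Esin             # xi_j^(2)(t)

# psi_hat(t) - psi(t) should equal the mean of xi1 + i*xi2
psi_hat_minus_psi = np.mean(np.exp(1j * t * y)) - np.exp(-t**2 / 2)
print(np.isclose(psi_hat_minus_psi, np.mean(xi1 + 1j * xi2)))   # True
```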
In the sequel, we write $\xi^{(l)}(t)$ and $\tilde\xi^{(l)}(t)$ for the random fields $(\xi_j^{(l)}(t))_{j\in\mathbb{Z}^d}$ and $(\tilde\xi_j^{(l)}(t))_{j\in\mathbb{Z}^d}$, $l=1,2$, for short. Moreover, for any $K>0$, we define the random fields $\bar\xi_K^{(l)}(t)=(\bar\xi_{j,K}^{(l)}(t))_{j\in\mathbb{Z}^d}$, $l=1,2$, by
\begin{aligned}\bar\xi_{j,K}^{(1)}(t)&=Y_j\cos(tY_j)\,\mathbb{1}_{[-K,K]}(Y_j)-\mathbb{E}\big[Y_0\cos(tY_0)\,\mathbb{1}_{[-K,K]}(Y_0)\big],\\
\bar\xi_{j,K}^{(2)}(t)&=Y_j\sin(tY_j)\,\mathbb{1}_{[-K,K]}(Y_j)-\mathbb{E}\big[Y_0\sin(tY_0)\,\mathbb{1}_{[-K,K]}(Y_0)\big].\end{aligned}
For any finite subset $V\subset\mathbb{Z}^d$ and any random field $Y=(Y_j)_{j\in\mathbb{Z}^d}$, let
\[S_V(Y)=\sum_{j\in V}Y_j.\]
Let the assumptions of Theorem 3.12 be satisfied and suppose $K\ge 1$. Then
\[\mathbb{P}\big(|S_W(\xi^{(l)}(t))|\ge x\big)\le 2\exp\Big\{-\frac{1}{8(m+1)^d}\,\frac{x^2}{x+2|W|}\Big\}\tag{4.5}\]
and
\[\mathbb{P}\big(|S_W(\bar\xi_K^{(l)}(t))|\ge x\big)\le 2\exp\Big\{-\frac{1}{8(m+1)^dK^2}\,\frac{x^2}{x+2|W|}\Big\},\tag{4.6}\]
for any $t\in\mathbb{R}$, $x\ge 0$ and $l=1,2$.
Since $|\xi_j^{(l)}(t)|\le 2$ for all $t\in\mathbb{R}$, $j\in\mathbb{Z}^d$ and $l=1,2$, we have that
\[\big|\mathbb{E}\big[\xi_j^{(l)}(t)^p\big]\big|\le\mathbb{E}\big[|\xi_j^{(l)}(t)|^{p-2}\,\xi_j^{(l)}(t)^2\big]\le 2^{p-2}\,\mathbb{E}\big[\xi_j^{(l)}(t)^2\big],\quad p=3,4,\dots;\]
hence, Theorem A.1 (with $H=2$) implies (4.5). Next, we obtain
\[\big|\mathbb{E}\big[\bar\xi_{j,K}^{(l)}(t)^p\big]\big|\le\mathbb{E}\big[|\bar\xi_{j,K}^{(l)}(t)|^{p-2}\,\bar\xi_{j,K}^{(l)}(t)^2\big]\le(2K)^{p-2}\,\mathbb{E}\big[\bar\xi_{j,K}^{(l)}(t)^2\big],\quad p=3,4,\dots.\]
Taking into account that $\sum_{j\in W}\mathbb{E}\big[\bar\xi_{j,K}^{(l)}(t)^2\big]\le 4K^2|W|$, Theorem A.1 (with $H=2K$) yields the bound in (4.6). □
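In the independent case $m=0$, $d=1$, the tail bound (4.5) can be illustrated by simulation. The following sketch (all parameter choices hypothetical, standard normal stand-in sample) compares the empirical tail probability of $|S_W(\xi^{(1)}(t))|$ with the right-hand side of (4.5):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, x, reps = 200, 1.0, 60.0, 2000

def S_W(y, t):
    # S_W(xi^(1)(t)) with xi_j^(1)(t) = cos(t Y_j) - E[cos(t Y_0)];
    # for Y_0 ~ N(0,1) we have E[cos(t Y_0)] = exp(-t^2 / 2)
    return np.sum(np.cos(t * y) - np.exp(-t**2 / 2))

tails = [abs(S_W(rng.normal(size=n), t)) >= x for _ in range(reps)]
empirical = np.mean(tails)
bound = 2 * np.exp(-(x**2) / (8 * (x + 2 * n)))   # (4.5) with (m+1)^d = 1
print(empirical, bound)   # the empirical tail should lie below the bound
```

For this choice of $x$ the bound is far from tight (the actual tail is essentially zero), which is typical for Bernstein-type inequalities away from the moderate-deviation regime.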
Let the assumptions of Theorem 3.12 be satisfied and let $n=|W|$. Moreover, for any $n$, suppose numbers $\varepsilon_n>0$, $K_n\ge 1$ such that $\varepsilon_n\to 0$ and $K_n\to\infty$ as $n\to\infty$. Then, for any $n$ with $\varepsilon_n<\min\{1,\frac{T}{4}\}$,
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\xi^{(l)}(t))|\ge\varepsilon_n\Big)\le C_1\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{160(m+1)^d}\Big\}\]
and
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\tilde\xi^{(l)}(t))|\ge\varepsilon_n\Big)\le C_2\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{576(m+1)^dK_n^2}\Big\}+\frac{C_3}{\varepsilon_nK_n^{1+\tau}},\]
$l=1,2$, where $C_1=4(1+2\mathbb{E}|Y_0|)$, $C_2=8(1+2\mathbb{E}|Y_0|^2)$ and $C_3=8\,\mathbb{E}|Y_0|^{2+\tau}$.
We use the same idea as in the proof of [4, Theorem 2]: divide the interval $[-T,T]$ by $2J$ equidistant points $(t_k)_{k=1,\dots,2J}=:D$, where $t_k=-T+k\frac{T}{J}$, $k=1,\dots,2J$. Then, for any $t\in[-T,T]$ with $|t-t_k|\le\frac{T}{J}$, we have for any $j\in\mathbb{Z}^d$ that
\[\big|\xi_j^{(l)}(t)-\xi_j^{(l)}(t_k)\big|\le|t-t_k|\,\big(|Y_j|+\mathbb{E}|Y_0|\big)\le\big(|Y_j|+\mathbb{E}|Y_0|\big)\frac{T}{J},\quad l=1,2.\]
Hence, by Markov's inequality and Lemma 4.9, for any $n\in\mathbb{N}$, we obtain that
\begin{aligned}&\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\xi^{(l)}(t))|\ge\varepsilon_n\Big)=\mathbb{P}\Big(\sup_{t_k\in D}\ \sup_{t:|t-t_k|\le T/J}|S_W(\xi^{(l)}(t))|\ge n\varepsilon_n\Big)\\
&\le\mathbb{P}\Big(\sup_{t_k\in D}|S_W(\xi^{(l)}(t_k))|\ge\frac{n\varepsilon_n}{2}\Big)+\mathbb{P}\Big(\sup_{t_k\in D}\ \sup_{t:|t-t_k|\le T/J}|S_W(\xi^{(l)}(t))-S_W(\xi^{(l)}(t_k))|\ge\frac{n\varepsilon_n}{2}\Big)\\
&\le\sum_{t_k\in D}\mathbb{P}\Big(|S_W(\xi^{(l)}(t_k))|\ge\frac{n\varepsilon_n}{2}\Big)+\mathbb{P}\Big(\sum_{j\in W}\big(|Y_j|+\mathbb{E}|Y_0|\big)\frac{T}{J}\ge\frac{n\varepsilon_n}{2}\Big)\\
&\le 4J\exp\Big\{-\frac{1}{16(m+1)^d}\,\frac{n\varepsilon_n^2}{\varepsilon_n+4}\Big\}+\frac{4T}{J\varepsilon_n}\,\mathbb{E}|Y_0|,\end{aligned}
$l=1,2$. Now, let $n\in\mathbb{N}$ be such that $\varepsilon_n<\frac{T}{4}$ and choose
\[J=\Big\lfloor\Big(\frac{T}{\varepsilon_n}\exp\Big\{\frac{1}{16(m+1)^d}\,\frac{n\varepsilon_n^2}{\varepsilon_n+4}\Big\}\Big)^{1/2}\Big\rfloor,\]
where $\lfloor x\rfloor$ denotes the integer part of $x\in\mathbb{R}$. Then
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\xi^{(l)}(t))|\ge\varepsilon_n\Big)\le C_1\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{1}{32(m+1)^d}\,\frac{n\varepsilon_n^2}{\varepsilon_n+4}\Big\}\]
with $C_1=4(1+2\mathbb{E}|Y_0|)$. Applying the same arguments to $\sup_{t\in[-T,T]}|S_W(\bar\xi_{K_n}^{(l)}(t))|$ yields
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\bar\xi_{K_n}^{(l)}(t))|\ge\varepsilon_n\Big)\le\tilde C\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{1}{32(m+1)^dK_n^2}\,\frac{n\varepsilon_n^2}{\varepsilon_n+4}\Big\},\]
whenever $\varepsilon_n<\frac{T}{4}$, where $\tilde C=4(1+2\mathbb{E}|Y_0|^2)$. Combining Markov's inequality, Hölder's inequality and the finite $(2+\tau)$-moment property of $Y_0$ implies
\begin{aligned}&\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}\Big|\sum_{j\in W}\big(\tilde\xi_j^{(l)}(t)-\bar\xi_{j,K_n}^{(l)}(t)\big)\Big|\ge\frac{\varepsilon_n}{2}\Big)\\
&\le\mathbb{P}\Big(\sum_{j\in W}\big(|Y_j|\,\mathbb{1}_{(K_n,\infty)}(|Y_j|)+\mathbb{E}\big[|Y_0|\,\mathbb{1}_{(K_n,\infty)}(|Y_0|)\big]\big)\ge\frac{n\varepsilon_n}{2}\Big)\\
&\le\frac{4}{\varepsilon_n}\big(\mathbb{E}|Y_0|^{2+\tau}\big)^{1/(2+\tau)}\,\mathbb{P}\big(|Y_0|>K_n\big)^{(1+\tau)/(2+\tau)}\\
&\le\frac{4}{K_n^{1+\tau}\varepsilon_n}\,\mathbb{E}|Y_0|^{2+\tau},\end{aligned}
$l=1,2$. All in all, we have for any $n$ such that $\varepsilon_n<\frac{T}{2}$,
\begin{aligned}\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\tilde\xi^{(l)}(t))|\ge\varepsilon_n\Big)&\le\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\bar\xi_{K_n}^{(l)}(t))|\ge\frac{\varepsilon_n}{2}\Big)\\
&\quad+\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\tilde\xi^{(l)}(t)-\bar\xi_{K_n}^{(l)}(t))|\ge\frac{\varepsilon_n}{2}\Big)\\
&\le 2\tilde C\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{1}{64(m+1)^dK_n^2}\,\frac{n\varepsilon_n^2}{\varepsilon_n+8}\Big\}+\frac{8}{K_n^{1+\tau}\varepsilon_n}\,\mathbb{E}|Y_0|^{2+\tau}.\end{aligned}
Hence, it follows for any $n$ with $\varepsilon_n<\min\{1,\frac{T}{4}\}$ that
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\xi^{(l)}(t))|\ge\varepsilon_n\Big)\le C_1\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{160(m+1)^d}\Big\}\]
as well as
\[\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\tilde\xi^{(l)}(t))|\ge\varepsilon_n\Big)\le C_2\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{576(m+1)^dK_n^2}\Big\}+\frac{C_3}{K_n^{1+\tau}\varepsilon_n},\]
where $C_2=2\tilde C$ and $C_3=8\,\mathbb{E}|Y_0|^{2+\tau}$. □
For some $\zeta>0$, suppose $\varepsilon_n\asymp n^{-\frac{1+\tau}{2(2+\tau)}}\big[\log\big(T^{\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\big)\big]^{\zeta+\frac{1}{2}}$ and $K_n=n^{\frac{1}{2(2+\tau)}}$ in Lemma 4.10. Then, for $n$ sufficiently large,
\[\mathbb{P}\Big(\max\Big\{\sup_{t\in[-T,T]}|\hat\psi(t)-\psi(t)|,\ \sup_{t\in[-T,T]}|\hat\theta(t)-\theta(t)|\Big\}>\varepsilon_n\Big)\le\tilde C\,y_n,\]
where $y_n=\big[\log\big(T^{\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\big)\big]^{-\frac{\zeta}{2}-\frac{1}{4}}$ and $\tilde C>0$ is a constant (independent of $T$).
By Lemma 4.10 it follows that
\begin{aligned}&\mathbb{P}\Big(\max\Big\{\sup_{t\in[-T,T]}|\hat\psi(t)-\psi(t)|,\ \sup_{t\in[-T,T]}|\hat\theta(t)-\theta(t)|\Big\}>\varepsilon_n\Big)\\
&\le\sum_{l=1}^{2}\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\xi^{(l)}(t))|\ge\varepsilon_n\Big)+\sum_{l=1}^{2}\mathbb{P}\Big(\sup_{t\in[-T,T]}n^{-1}|S_W(\tilde\xi^{(l)}(t))|\ge\varepsilon_n\Big)\\
&\le C\left(\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{576(m+1)^dK_n^2}\Big\}+\frac{1}{\varepsilon_nK_n^{1+\tau}}\right),\end{aligned}
for some constant $C>0$. Without loss of generality, let $\varepsilon_n=n^{-\frac{1+\tau}{2(2+\tau)}}\big[\log\big(T^{\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\big)\big]^{\zeta+\frac{1}{2}}$. Then we observe that
\[\frac{1}{\varepsilon_nK_n^{1+\tau}}=\Big[\log\Big(T^{\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\Big)\Big]^{-\zeta-\frac{1}{2}}.\]
Moreover,
\[\sqrt{\frac{T}{\varepsilon_n}}\exp\Big\{-\frac{n\varepsilon_n^2}{576(m+1)^dK_n^2}\Big\}=\Big(T^{\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\Big)^{1-\frac{r_n}{576(m+1)^d}}\,y_n,\]
with $r_n=\big[\log\big(T^{1/2}n^{\frac{1+\tau}{4(2+\tau)}}\big)\big]^{2\zeta}$. Hence, the assertion of the theorem follows. □
Fix $T>0$. Then, provided the assumptions of Theorem 3.12 are satisfied, Theorem 4.11 states that
\[\max\Big\{\sup_{t\in[-T,T]}|\hat\psi(t)-\psi(t)|,\ \sup_{t\in[-T,T]}|\hat\theta(t)-\theta(t)|\Big\}=O_{\mathbb{P}}(\varepsilon_n),\]
as $n\to\infty$, where $O_{\mathbb{P}}$ denotes the probabilistic order of convergence.
"For $n$ sufficiently large" in Theorem 4.11 is understood in the following sense: for any fixed $m$, there exists $n_0=n_0(m)$ such that the bound holds for all $n\ge n_0$. Of course, the function $m\mapsto n_0(m)$ is increasing.
The following corollary is an immediate consequence of Theorem 4.11.
Let the assumptions of Theorem 3.12 be satisfied. Then
\[\lim_{n\to\infty}\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}|\hat\psi(t)-\psi(t)|\ge c\,b_n^{\frac{1}{2}-\varepsilon}\Big)=0\]
for any constant $c>0$.
Fix $c>0$ and assume, without loss of generality, that $b_n=n^{-\frac{1}{1-2\varepsilon}}(\log n)^{\eta+\frac{1}{1-2\varepsilon}}$. Since $b_n\to 0$ as $n\to\infty$, there exists $n_0\in\mathbb{N}$ such that $c\,b_n^{\frac{1}{2}-\varepsilon}<\min\{1,\frac{1}{4}b_n^{-1}\}$ for all $n\ge n_0$. Taking $\varepsilon_n=c\,b_n^{\frac{1}{2}-\varepsilon}$ and $T=b_n^{-1}$ in Lemma 4.10, it follows that
\begin{aligned}&\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}|\hat\psi(t)-\psi(t)|\ge c\,b_n^{\frac{1}{2}-\varepsilon}\Big)\\
&\le\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}n^{-1}|S_W(\xi^{(1)}(t))|\ge c\,b_n^{\frac{1}{2}-\varepsilon}\Big)+\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}n^{-1}|S_W(\xi^{(2)}(t))|\ge c\,b_n^{\frac{1}{2}-\varepsilon}\Big)\\
&\le 2\tilde C\,c^{-\frac{1}{2}}\,b_n^{\frac{2\varepsilon-3}{4}}\exp\Big\{-\frac{c^2\,n\,b_n^{1-2\varepsilon}}{160(m+1)^d}\Big\}\end{aligned}
for all $n\ge n_0$ and some constant $\tilde C>0$. Hence,
\begin{aligned}&\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}|\hat\psi(t)-\psi(t)|\ge c\,b_n^{\frac{1}{2}-\varepsilon}\Big)\\
&\le\check C\,n^{\frac{3-2\varepsilon}{4(1-2\varepsilon)}-\frac{c^2(\log n)^{\eta(1-2\varepsilon)}}{160(m+1)^d}}\,\big(\log n\big)^{-\frac{3-2\varepsilon}{4(1-2\varepsilon)}(1+\eta(1-2\varepsilon))}\to 0,\end{aligned}
as $n\to\infty$, where $\check C=2\tilde C\,c^{-\frac{1}{2}}$. □
Let $\gamma=0$ and suppose that the assumptions of Theorem 3.12 are satisfied. Moreover, let $\kappa_n=2(\log n)^{\frac{1+\eta(1-2\varepsilon)}{2}}$. Then
\[\lim_{n\to\infty}\mathbb{P}\Big(\frac{|\psi(t)|}{|\tilde\psi(t)|}\ge\kappa_n\ \text{for some}\ t\in[-b_n^{-1},b_n^{-1}]\Big)=0.\]
By Lemma 4.1, (2), $\frac{1}{|\psi(x)|}\le c(1+|x|)^{\frac{1}{2}-\varepsilon}$ for some constant $c>0$; hence, there exists $n_0\in\mathbb{N}$ such that
\[\inf_{t\in[-b_n^{-1},b_n^{-1}]}|\psi(t)|\ge c^{-1}\big(1+b_n^{-1}\big)^{\varepsilon-\frac{1}{2}}\ge c^{-1}b_n^{\frac{1}{2}-\varepsilon}\tag{4.9}\]
for all $n\ge n_0$. We first show that the probabilities of the events
\[A_n:=\big\{|\hat\psi(t)|<n^{-1/2}\ \text{for some}\ t\in[-b_n^{-1},b_n^{-1}]\big\}\]
tend to $0$ as $n\to\infty$: by (4.1), $t\in[-b_n^{-1},b_n^{-1}]$ implies $|\psi(t)|>2n^{-1/2}$, for all $n\ge n_1$ and some $n_1\in\mathbb{N}$. Set $\tilde n=\max\{n_0,n_1\}$. Then
\begin{aligned}\mathbb{P}(A_n)&\le\mathbb{P}\big(|\hat\psi(t)-\psi(t)|>|\psi(t)|-n^{-1/2}\ \text{for some}\ t\in[-b_n^{-1},b_n^{-1}]\big)\\
&\le\mathbb{P}\big(|\hat\psi(t)-\psi(t)|>\tfrac{1}{2}|\psi(t)|\ \text{for some}\ t\in[-b_n^{-1},b_n^{-1}]\big)\\
&\le\mathbb{P}\big(|\hat\psi(t)-\psi(t)|>\tfrac{1}{2c}\,b_n^{\frac{1}{2}-\varepsilon}\ \text{for some}\ t\in[-b_n^{-1},b_n^{-1}]\big),\end{aligned}
for all $n\ge\tilde n$, where the last inequality follows from (4.9). Hence, by Corollary 4.13, $\lim_{n\to\infty}\mathbb{P}(A_n)=0$.
Recall that $\kappa_n=2(\log n)^{\frac{1+\eta(1-2\varepsilon)}{2}}$. Then we find that
\begin{aligned}\mathbb{P}\Big(\sup_{t}\frac{|\psi(t)|}{|\tilde\psi(t)|}\ge\kappa_n\Big)&=\mathbb{P}\Big(\sup_{t}\frac{|\psi(t)|}{|\hat\psi(t)|}\ge\kappa_n,\ \inf_{t}|\hat\psi(t)|\ge n^{-1/2}\Big)+\mathbb{P}\Big(\sup_{t}\frac{|\psi(t)|}{|\tilde\psi(t)|}\ge\kappa_n,\ \inf_{t}|\hat\psi(t)|<n^{-1/2}\Big)\\
&\le\mathbb{P}\Big(\sup_{t}\frac{|\psi(t)-\hat\psi(t)|}{|\hat\psi(t)|}\ge\kappa_n-1,\ \inf_{t}|\hat\psi(t)|\ge n^{-1/2}\Big)+\mathbb{P}\Big(\inf_{t}|\hat\psi(t)|<n^{-1/2}\Big)\\
&\le\mathbb{P}\Big(\sup_{t}|\psi(t)-\hat\psi(t)|\ge(\kappa_n-1)\,n^{-1/2}\Big)+\mathbb{P}(A_n),\end{aligned}
for all $n\ge\tilde n$, where all suprema and infima are taken over $t\in[-b_n^{-1},b_n^{-1}]$. Taking into account that, for $n$ large enough, $(\kappa_n-1)\,n^{-1/2}=2b_n^{\frac{1}{2}-\varepsilon}-n^{-1/2}\ge b_n^{\frac{1}{2}-\varepsilon}$, the assertion follows by Corollary 4.13. □
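The truncated reciprocal $1/\tilde\psi$ controlled above, which cuts the empirical characteristic function off below the level $n^{-1/2}$, can be sketched in a few lines (function and variable names hypothetical):

```python
import numpy as np

def trunc_recip(psi_hat, n):
    # 1 / psi_tilde: reciprocal of the empirical characteristic function,
    # set to 0 wherever |psi_hat| falls below the threshold n^(-1/2)
    psi_hat = np.asarray(psi_hat, dtype=complex)
    keep = np.abs(psi_hat) > n ** -0.5
    out = np.zeros_like(psi_hat)
    out[keep] = 1.0 / psi_hat[keep]
    return out

n = 10_000                                      # threshold n^(-1/2) = 0.01
vals = np.array([1.0, 0.5j, 0.02, 0.001])       # hypothetical psi_hat values
print(trunc_recip(vals, n))                     # last entry is truncated to 0
```

The point of the corollaries above is precisely that, on $[-b_n^{-1},b_n^{-1}]$, this truncation is eventually inactive with high probability, so $1/\tilde\psi$ behaves like $1/\psi$.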
Now we can give the proof for Theorem 4.8.
First of all, observe that
\begin{aligned}R_n+\frac{\theta(\psi-\hat\psi)}{\psi^2}\,\mathbb{1}_{\{|\hat\psi|\le n^{-1/2}\}}&=\Big(1-\frac{\hat\psi}{\psi}\Big)\Big(\frac{\hat\theta-\theta}{\tilde\psi}+\frac{\theta(\psi-\hat\psi)}{\psi\tilde\psi}\Big)\\
&=\frac{\psi}{\tilde\psi}\cdot\frac{\psi-\hat\psi}{\psi}\Big(\frac{\hat\theta-\theta}{\psi}+\frac{\theta(\psi-\hat\psi)}{\psi^2}\Big)\\
&=\frac{\psi}{\tilde\psi}\cdot\frac{\psi-\hat\psi}{\psi}\Big(\frac{\hat\theta-\theta}{\psi}+i\Big(\frac{1}{\psi}\Big)'(\psi-\hat\psi)\Big).\end{aligned}
Now, fix $\tilde\gamma>0$ and let $\kappa_n=2(\log n)^{\frac{1+\eta(1-2\varepsilon)}{2}}$. Moreover, let
\[M_n=\max\Big\{\sup_{t\in[-b_n^{-1},b_n^{-1}]}|\hat\psi(t)-\psi(t)|,\ \sup_{t\in[-b_n^{-1},b_n^{-1}]}|\hat\theta(t)-\theta(t)|\Big\}.\]
By (K3), $\sup_{x\in\mathbb{R},\,n\in\mathbb{N}}|\mathcal{F}_+[K_{b_n}](x)|\le 2$; hence,
\begin{aligned}\mathbb{P}(E_2\ge\tilde\gamma)&=\mathbb{P}\Big(\sqrt{n}\,\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\Big\{R_n+\frac{\theta(\psi-\hat\psi)}{\psi^2}\,\mathbb{1}_{\{|\hat\psi|\le n^{-1/2}\}}\Big\}\mathcal{F}_+[K_b]\Big\rangle_{L^2(\mathbb{R}^\times)}\ge\tilde\gamma\Big)\\
&\le\mathbb{P}\Big(\int_{-b_n^{-1}}^{b_n^{-1}}|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,\frac{|\psi(x)|}{|\tilde\psi(x)|}\,\frac{|\hat\psi(x)-\psi(x)|}{|\psi(x)|}\\
&\hspace{4em}\times\Big(\frac{|\hat\theta(x)-\theta(x)|}{|\psi(x)|}+\Big|\Big(\frac{1}{\psi}\Big)'(x)\Big|\,|\psi(x)-\hat\psi(x)|\Big)\,dx\ge\frac{\tilde\gamma}{2}\,n^{-\frac{1}{2}}\Big)\\
&\le\mathbb{P}\Big(M_n\,b_n^{\frac{1}{2}-\varepsilon}\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\Big(\frac{1}{|\psi(x)|}+\Big|\Big(\frac{1}{\psi}\Big)'(x)\Big|\Big)\,dx\ge\frac{\tilde\gamma}{2}\,n^{-\frac{1}{2}}\kappa_n^{-1}\Big)\\
&\quad+\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}\frac{|\psi(t)|}{|\tilde\psi(t)|}>\kappa_n\Big)+\mathbb{P}\Big(\sup_{t\in[-b_n^{-1},b_n^{-1}]}|\psi(t)-\hat\psi(t)|>b_n^{\frac{1}{2}-\varepsilon}\Big).\end{aligned}
Since, by Lemma 4.1, (3), $(1/\psi)'\in L^\infty(\mathbb{R})$, there is a constant $\tilde c>0$ such that
\begin{aligned}&\mathbb{P}\Big(M_n\,b_n^{\frac{1}{2}-\varepsilon}\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\Big(\frac{1}{|\psi(x)|}+\Big|\Big(\frac{1}{\psi}\Big)'(x)\Big|\Big)\,dx\ge\frac{\tilde\gamma}{2}\,n^{-\frac{1}{2}}\kappa_n^{-1}\Big)\\
&\le\mathbb{P}\Big(M_n\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|^2}\,dx\ge\frac{\tilde\gamma}{2\tilde c}\,n^{-\frac{1}{2}}\kappa_n^{-1}b_n^{\varepsilon-\frac{1}{2}}\Big).\end{aligned}
Moreover, by Definition 3.10, (iii), we have, for some $\check c>0$,
\begin{aligned}&\mathbb{P}\Big(M_n\int_{-b_n^{-1}}^{b_n^{-1}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|^2}\,dx\ge\frac{\tilde\gamma}{2\tilde c}\,n^{-\frac{1}{2}}\kappa_n^{-1}b_n^{\varepsilon-\frac{1}{2}}\Big)\\
&\le\mathbb{P}\Big(M_n\int_{-b_n^{-1}}^{b_n^{-1}}\frac{(1+x^2)^{-1+\varepsilon}}{|\psi(x)|^2}\,(1+x^2)^{1-\varepsilon-\xi/2}\,dx\ge\frac{\tilde\gamma}{2\tilde c\check c}\,n^{-\frac{1}{2}}\kappa_n^{-1}b_n^{\varepsilon-\frac{1}{2}}\Big)\\
&\le\mathbb{P}\Big(M_n\ge\frac{\tilde\gamma}{2\tilde c\check c}\,n^{-\frac{1}{2}}\kappa_n^{-1}b_n^{\frac{3}{2}-\varepsilon-\xi}\,\big\|(1+\cdot^2)^{-\frac{\varepsilon-1}{2}}\psi\big\|_{L^2(\mathbb{R})}^{-1}\Big),\end{aligned}
where the last line follows because $(1+\cdot^2)^{-(\varepsilon-1)/2}\psi\in L^2(\mathbb{R})$ by Proposition 3.3, (c). Now, for $n$ sufficiently large, we obtain
\[n^{-\frac{1}{2}}\kappa_n^{-1}b_n^{\frac{3}{2}-\varepsilon-\xi}=\frac{1}{2}\,n^{\frac{\xi-1}{1-2\varepsilon}-1}\,(\log n)^{(1-\xi)\big(\eta+\frac{1}{1-2\varepsilon}\big)}>n^{-\frac{1+\tau}{2(2+\tau)}}\Big[\log\Big(b_n^{-\frac{1}{2}}n^{\frac{1+\tau}{4(2+\tau)}}\Big)\Big]^{\eta+\frac{1}{2}}.\]
Indeed, by Definition 3.10, (iii), we have
\[\frac{\xi-1}{1-2\varepsilon}-1>\frac{1}{1-2\varepsilon}-\frac{1+\tau}{2(2+\tau)}>-\frac{1+\tau}{2(2+\tau)}.\]
Hence, we conclude by Theorem 4.11, Corollary 4.13 and Corollary 4.14 that
\[\mathbb{P}(E_2\ge\tilde\gamma)\to 0,\quad\text{as }n\to\infty,\]
for any $\tilde\gamma>0$. □
Neglecting the drift γ
It remains to show that the result of Theorem 3.12 still holds true if $\gamma$ is assumed to be arbitrary. For this purpose, consider the sample $(\tilde Y_j)_{j\in W}$ given by $\tilde Y_j=Y_j-\gamma$, $j\in W$. Moreover, let $\psi_*(t)=\mathbb{E}[e^{it\tilde Y_0}]$ be the characteristic function of $\tilde Y_0$ and write $\hat\psi_*$ for its empirical counterpart, i.e., $\hat\psi_*(t)=\frac{1}{n}\sum_{j\in W}e^{it\tilde Y_j}$. Then, with the notation
\[\frac{1}{\tilde\psi_*(t)}:=\frac{1}{\hat\psi_*(t)}\,\mathbb{1}_{\{|\hat\psi_*(t)|>n^{-1/2}\}}=e^{it\gamma}\,\frac{1}{\tilde\psi(t)},\]
we have, for any $t\in\mathbb{R}$,
\[\frac{\hat\theta_*(t)}{\tilde\psi_*(t)}=\frac{\hat\theta(t)}{\tilde\psi(t)}-\gamma\,\mathbb{1}_{\{|\hat\psi_*(t)|>n^{-1/2}\}}\quad\text{as well as}\quad\frac{\theta_*(t)}{\psi_*(t)}=\frac{\theta(t)}{\psi(t)}-\gamma,\]
where $\theta_*(t)=\mathbb{E}[\tilde Y_0e^{it\tilde Y_0}]$ and $\hat\theta_*(t)=\frac{1}{n}\sum_{j\in W}\tilde Y_je^{it\tilde Y_j}$. For any $v\in U(\xi,\beta_2)$, consider the decomposition
\begin{aligned}\sqrt{n}\,\big(\hat L_Wv-Lv\big)&=\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\frac{\hat\theta}{\tilde\psi}\,\mathcal{F}_+[K_{b_n}]-\frac{\theta}{\psi}\Big\rangle_{L^2(\mathbb{R})}\\
&=\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\frac{\hat\theta_*}{\tilde\psi_*}\,\mathcal{F}_+[K_{b_n}]-\frac{\theta_*}{\psi_*}\Big\rangle_{L^2(\mathbb{R})}\\
&\quad+\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\gamma\,\mathbb{1}_{\{|\hat\psi_*|>n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]-\gamma\Big\rangle_{L^2(\mathbb{R})}.\end{aligned}
As $W$ is regularly growing to infinity, the first summand on the right-hand side of the last equation tends to a Gaussian random variable, since $\psi_*$ is an infinitely divisible characteristic function without drift component. For the second summand, we find that
\begin{aligned}&\frac{\sqrt{n}}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\gamma\,\mathbb{1}_{\{|\hat\psi_*|>n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]-\gamma\Big\rangle_{L^2(\mathbb{R})}\\
&=\frac{\sqrt{n}\,\gamma}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathcal{F}_+[K_{b_n}]-1\Big\rangle_{L^2(\mathbb{R})}-\frac{\sqrt{n}\,\gamma}{2\pi}\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathbb{1}_{\{|\hat\psi_*|\le n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]\Big\rangle_{L^2(\mathbb{R})}.\end{aligned}
Hence, by (K3) and Definition 3.10, (iii), we obtain
\begin{aligned}\sqrt{n}\,\mathbb{E}\Big|\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathcal{F}_+[K_{b_n}]-1\Big\rangle_{L^2(\mathbb{R})}\Big|&\le\int_{\mathbb{R}}\sqrt{n}\,|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,|1-\mathcal{F}_+[K_{b_n}](x)|\,dx\\
&\lesssim b_n\sqrt{n}\,\|\mathcal{G}^{-1*}v\|_{H^1(\mathbb{R})},\end{aligned}
where the last term tends to zero as $n\to\infty$, since $b_n=o(n^{-1/2})$. Moreover,
\[\sqrt{n}\,\mathbb{E}\Big|\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathbb{1}_{\{|\hat\psi_*|\le n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]\Big\rangle_{L^2(\mathbb{R})}\Big|\le 2\sqrt{n}\int_{-b_n^{-1}}^{b_n^{-1}}|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,\mathbb{P}\big(|\hat\psi_*(x)|\le n^{-\frac{1}{2}}\big)\,dx.\]
Taking into account that $|\hat\psi(x)|=|\hat\psi_*(x)|$, relation (4.2) with $p=1/2$ yields
\[\sqrt{n}\,|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,\mathbb{P}\big(|\hat\psi_*(x)|\le n^{-\frac{1}{2}}\big)\,\mathbb{1}_{[-b_n^{-1},b_n^{-1}]}(x)\lesssim\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}.\]
Applying again (4.2) with $p=1$ implies
\[\sqrt{n}\,|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,\mathbb{P}\big(|\hat\psi_*(x)|\le n^{-\frac{1}{2}}\big)\,\mathbb{1}_{[-b_n^{-1},b_n^{-1}]}(x)\lesssim n^{-\frac{1}{2}}\,\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|^2}\to 0,\]
as $n\to\infty$; thus, by dominated convergence, we have
\[\sqrt{n}\,\mathbb{E}\Big|\Big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathbb{1}_{\{|\hat\psi_*|\le n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]\Big\rangle_{L^2(\mathbb{R})}\Big|\to 0,\quad\text{as }n\to\infty.\]
All in all, this shows that Theorem 3.12 holds for any fixed $\gamma\in\mathbb{R}$.
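The key identity behind this drift reduction, $\hat\psi_*(t)=e^{-it\gamma}\hat\psi(t)$ and hence $|\hat\psi_*|=|\hat\psi|$, is easy to confirm numerically. In the sketch below the sample distribution and the drift value are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(size=500)      # hypothetical stand-in sample (Y_j)
gamma, t = 1.3, 0.8                # hypothetical drift and frequency

psi_hat      = np.mean(np.exp(1j * t * y))
psi_hat_star = np.mean(np.exp(1j * t * (y - gamma)))   # sample Y~_j = Y_j - gamma

# psi_hat_star(t) = exp(-i t gamma) * psi_hat(t), hence equal moduli
print(np.isclose(psi_hat_star, np.exp(-1j * t * gamma) * psi_hat))   # True
print(np.isclose(abs(psi_hat_star), abs(psi_hat)))                   # True
```

In particular, the truncation event $\{|\hat\psi_*(t)|\le n^{-1/2}\}$ is exactly the event $\{|\hat\psi(t)|\le n^{-1/2}\}$, which is what the proof exploits.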
Appendix
Proof of Lemma 3.3
Minkowski's integral inequality together with formula (2.4) yields
\[\|uv_1\|_{L^k(\mathbb{R})}\le\|uv_0\|_{L^k(\mathbb{R})}\int_{\mathrm{supp}(f)}|f(s)|^{1/k}\,ds,\quad k=1,2.\]
The right-hand side of the last inequality is finite by Assumption 3.1, (1) and (2); hence $uv_1\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$. In particular, $\mathbb{R}\ni x\mapsto\mathcal{F}_+[uv_1](x)=\int_{\mathbb{R}}e^{itx}(uv_1)(t)\,dt$ is well defined. Using again formula (2.4) together with Fubini's theorem and a simple integral substitution yields (3.5).
The triangle inequality followed by a simple integral substitution shows that
\[\int_{\mathbb{R}}|x|^{1+\tau}\,|(uv_1)(x)|\,dx\le\|f\|_{L^{2+\tau}(\mathbb{R})}\int_{\mathbb{R}}|x|^{1+\tau}\,|(uv_0)(x)|\,dx<\infty.\]
The proof of Theorem 3.10 in [13] shows that $|\psi(x)|$ coincides with the inverse of the right-hand side of (3.2). This shows part (c). □
Proof of Lemma 3.6
Let $v\in\mathrm{Image}(\mathcal{G})$ be such that $\int_{\mathbb{R}}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,dx<\infty$. In order to prove the upper bound in Theorem 3.6, decompose $\mathbb{E}|\hat L_Wv-Lv|$ as follows:
\[\mathbb{E}|\hat L_Wv-Lv|\le\underbrace{\mathbb{E}\big|\big\langle(\mathcal{G}_n^{-1*}-\mathcal{G}^{-1*})v,\,\widehat{uv_1}\big\rangle_{L^2(\mathbb{R})}\big|}_{(\mathrm{I})}+\underbrace{\mathbb{E}\big|\big\langle\mathcal{G}^{-1*}v,\,\widehat{uv_1}-uv_1\big\rangle_{L^2(\mathbb{R})}\big|}_{(\mathrm{II})}.\]
We estimate parts (I) and (II) separately. Using the isometry property of $\mathcal{F}_+$, we obtain
\[(\mathrm{I})\le\frac{1}{2\pi}\int_{\mathbb{R}}\big|\mathcal{F}_+\big[(\mathcal{G}_n^{-1*}-\mathcal{G}^{-1*})v\big](x)\big|\,\mathbb{E}\big|\mathcal{F}_+[\widehat{uv_1}](x)\big|\,dx.\]
Furthermore, since $\widehat{uv_1}=\mathcal{F}_+^{-1}\big[\frac{\hat\theta}{\tilde\psi}\,\mathcal{F}_+[K_b]\big]$, stationarity of $Y$ yields, for any $x\in\mathbb{R}$,
\[\mathbb{E}\big|\mathcal{F}_+[\widehat{uv_1}](x)\big|=\big|\mathcal{F}_+[K_b](x)\big|\,\mathbb{E}\Big|\sum_{t\in W}\frac{Y_te^{ixY_t}}{n\,\hat\psi(x)}\,\mathbb{1}_{\{|\hat\psi(x)|>n^{-1/2}\}}\Big|\le n^{1/2}\,\big|\mathcal{F}_+[K_b](x)\big|\,\mathbb{E}|Y_0|.\]
Hence, by the Cauchy–Schwarz inequality we obtain that
\[(\mathrm{I})\le\frac{n^{1/2}\,\mathbb{E}|Y_0|}{2\pi}\,\big\|\mathcal{F}_+\big[(\mathcal{G}_n^{-1*}-\mathcal{G}^{-1*})v\big]\big\|_{L^2(\mathbb{R})}\,\big\|\mathcal{F}_+[K_b]\big\|_{L^2(\mathbb{R})}\le\frac{S\,\mathbb{E}|Y_0|}{\pi}\,\Big(\frac{n}{b}\Big)^{1/2}\big\|(\mathcal{G}_n^{-1*}-\mathcal{G}^{-1*})v\big\|_{L^2(\mathbb{R})},\]
where the last line follows from (K2) and again by applying the isometry property of $\mathcal{F}_+$. For the second part, we find that
\begin{aligned}(\mathrm{II})&=\frac{1}{2\pi}\int_{\mathbb{R}}\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,\mathbb{E}\Big|\frac{\hat\theta(x)}{\tilde\psi(x)}\,\mathcal{F}_+[K_b](x)-\frac{\theta(x)}{\psi(x)}\Big|\,dx\\
&\le\underbrace{\frac{1}{2\pi}\int_{\mathbb{R}}\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,\mathbb{E}\Big|\frac{\hat\theta(x)}{\tilde\psi(x)}-\frac{\theta(x)}{\psi(x)}\Big|\,\big|\mathcal{F}_+[K_b](x)\big|\,dx}_{(\mathrm{III})}\\
&\quad+\frac{1}{2\pi}\big\langle|\mathcal{F}_+[\mathcal{G}^{-1*}v]|,\,|\mathcal{F}_+[uv_1]|\,|1-\mathcal{F}_+[K_b]|\big\rangle_{L^2(\mathbb{R})},\end{aligned}
where the identity $|\mathcal{F}_+[uv_1](x)|=\big|\frac{\theta(x)}{\psi(x)}\big|$ was used in the last line. Hence, it remains to bound expression (III). Indeed, applying the triangle inequality followed by the Cauchy–Schwarz inequality and the bounds in [18, Lemmas 8.1 and 8.3] yields
\begin{aligned}(\mathrm{III})\le{}&\int_{\mathbb{R}}\big|\mathcal{F}_+[K_b](x)\big|\,\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,\mathbb{E}\Big[|\hat\theta(x)-\theta(x)|\,\Big|\frac{1}{\tilde\psi(x)}-\frac{1}{\psi(x)}\Big|\Big]\,dx\\
&+\int_{\mathbb{R}}\big|\mathcal{F}_+[K_b](x)\big|\,\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,|\theta(x)|\,\mathbb{E}\Big|\frac{1}{\tilde\psi(x)}-\frac{1}{\psi(x)}\Big|\,dx\\
&+\int_{\mathbb{R}}\big|\mathcal{F}_+[K_b](x)\big|\,\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,\frac{\mathbb{E}|\hat\theta(x)-\theta(x)|}{|\psi(x)|}\,dx\\
\le{}&S\int_{\mathbb{R}}\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,\sqrt{\mathbb{E}|\hat\theta(x)-\theta(x)|^2}\,\sqrt{\mathbb{E}\Big|\frac{1}{\tilde\psi(x)}-\frac{1}{\psi(x)}\Big|^2}\,dx\\
&+S\int_{\mathbb{R}}\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|\,|\psi(x)|\,\big|\mathcal{F}_+[uv_1](x)\big|\,\sqrt{\mathbb{E}\Big|\frac{1}{\tilde\psi(x)}-\frac{1}{\psi(x)}\Big|^2}\,dx\\
&+S\int_{\mathbb{R}}\frac{\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|}{|\psi(x)|}\,\sqrt{\mathbb{E}|\hat\theta(x)-\theta(x)|^2}\,dx\\
\le{}&\frac{S\,c_1}{n^{1/2}}\,\mathbb{E}|Y_0|^2\int_{\mathbb{R}}\frac{\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|}{|\psi(x)|}\,dx+\frac{c_2}{n^{1/2}}\int_{\mathbb{R}}\frac{\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|}{|\psi(x)|}\,\big|\mathcal{F}_+[uv_1](x)\big|\,dx\\
&+\frac{c_3}{n^{1/2}}\,\mathbb{E}|Y_0|^2\int_{\mathbb{R}}\frac{\big|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)\big|}{|\psi(x)|}\,dx,\end{aligned}
with constants $c_1,c_2,c_3>0$. Hence, by integrability of $uv_1$ it follows that
\[(\mathrm{III})\le\frac{c\,S}{\sqrt{n}}\,\Big(\mathbb{E}|Y_0|^2+\|uv_1\|_{L^1(\mathbb{R}^\times)}\Big)\int_{\mathbb{R}^\times}\frac{|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|}{|\psi(x)|}\,dx\]
for some constant $c>0$. This finishes the proof. □
Proof of Theorem 3.7
Using Assumption 3.1, (4), (K3) and $\mathcal{F}_+[\mathcal{G}^{-1*}v]\in L^1(\mathbb{R})$, we find that
\[\big\langle|\mathcal{F}_+[\mathcal{G}^{-1*}v]|,\,|\mathcal{F}_+[uv_1]|\,|1-\mathcal{F}_+[K_b]|\big\rangle_{L^2(\mathbb{R})}\lesssim\min\{1,b_n\}\int_{\mathbb{R}}|\mathcal{F}_+[\mathcal{G}^{-1*}v](x)|\,dx=O(b_n),\quad\text{as }n\to\infty.\]
Moreover, applying the same arguments as in the proof of [13, Corollary 3.7], we observe that
\[\big\|(\mathcal{G}_n^{-1*}-\mathcal{G}^{-1*})v\big\|_{L^2(\mathbb{R})}\lesssim a_n^{\frac{\beta_2}{\beta_1}-1}.\]
Hence, if $\gamma=0$, the assertions of the theorem follow immediately from the upper bound in Lemma 3.6. Otherwise, if $\gamma\neq 0$, consider the sample $(\tilde Y_j)_{j\in W}$ defined in Section 4.3. Following the computations there, one finds that on the right-hand side of (3.8) the additional term
\[\frac{\gamma}{2\pi}\,\mathbb{E}\Big|\big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathcal{F}_+[K_{b_n}]-1\big\rangle_{L^2(\mathbb{R})}-\big\langle\mathcal{F}_+[\mathcal{G}^{-1*}v],\,\mathbb{1}_{\{|\hat\psi_*|\le n^{-1/2}\}}\,\mathcal{F}_+[K_{b_n}]\big\rangle_{L^2(\mathbb{R})}\Big|\]
arises. Using $\mathcal{G}^{-1*}v\in H^1(\mathbb{R})$, $\mathcal{F}_+[\mathcal{G}^{-1*}v]\in L^1(\mathbb{R})$ and (K3), the latter expression can be estimated from above by
\[\frac{\gamma}{2\pi}\Big(b_n\,\|\mathcal{G}^{-1*}v\|_{H^1(\mathbb{R})}+\frac{S}{\sqrt{n}}\,\Big\|\frac{\mathcal{F}_+[\mathcal{G}^{-1*}v]}{\psi}\Big\|_{L^1(\mathbb{R})}\Big).\]
This completes the proof. □
Moment inequalities for m-dependent random fields
In this section, we collect some moment inequalities that are helpful for the proofs in Section 3.
We start with the following Bernstein-type inequality, which is due to [15, p. 316].
Let $(X_j)_{j\in V}$, $V\subset\mathbb{Z}^d$, be a centered $m$-dependent random field satisfying $0<\mathbb{E}X_j^2<\infty$ and, for some $H>0$,
\[\big|\mathbb{E}X_j^p\big|\le\frac{p!}{2}\,H^{p-2}\,\mathbb{E}X_j^2,\quad p\ge 3,\ j\in V.\]
Then
\[\mathbb{P}\big(S_V(X)\ge xB_V\big)\le\begin{cases}\exp\Big\{-\dfrac{x^2}{4(m+1)^d\rho_V}\Big\},&0\le x\le\rho_VB_V/H,\\[6pt]\exp\Big\{-\dfrac{xB_V}{4(m+1)^dH}\Big\},&x\ge\rho_VB_V/H,\end{cases}\]
where $S_V(X)=\sum_{j\in V}X_j$, $B_V^2=\mathbb{E}S_V(X)^2$ and $\rho_V=\sum_{j\in V}\mathbb{E}X_j^2/B_V^2$.
The following lemma generalizes Lemma 8.1 in [18]. It can easily be proven using the same arguments as there.
Let $(Y_j)_{j\in\mathbb{Z}^d}$ be a stationary $m$-dependent random field satisfying $\mathbb{E}|Y_0|^{2q}<\infty$. Furthermore, let $W\subset\mathbb{Z}^d$ be a finite subset, $n=\mathrm{card}(W)$, and let $\hat\theta(u)=\frac{1}{n}\sum_{j\in W}Y_je^{iuY_j}$ and $\theta(u)=\mathbb{E}\big[Y_0e^{iuY_0}\big]$. Then
\[\mathbb{E}\big|\hat\theta(u)-\theta(u)\big|^{2q}\le\frac{C}{n^q}\,\mathbb{E}|Y_0|^{2q},\]
where $C>0$ is a constant.
Clearly, applying the Cauchy–Schwarz inequality, Lemma A.2 also yields a bound in the case $q=1/2$.
Asymptotic covariance of m-dependent random field
Let the sequence $(B_n)_{n\in\mathbb{N}}$ be regularly growing to infinity. Moreover, let $(X_j)_{j\in\mathbb{Z}^d}$ be a stationary $m$-dependent random field and suppose there are measurable functions $g^{(1)},g_n^{(1)},g^{(2)},g_n^{(2)}:\mathbb{R}\to\mathbb{R}$, $n\in\mathbb{N}$, with the following properties:
(1) $\mathbb{E}[g_n^{(k)}(X_0)]=0$ for all $n\in\mathbb{N}$, $k=1,2$;
(2) $\mathbb{E}[g^{(k)}(X_0)^2]<\infty$ and $\mathbb{E}[g_n^{(k)}(X_0)^2]<\infty$, $k=1,2$, $n\in\mathbb{N}$;
(3) $\lim_{n\to\infty}\mathbb{E}[g_n^{(1)}(X_0)\,g_n^{(2)}(X_k)]=\mathbb{E}[g^{(1)}(X_0)\,g^{(2)}(X_k)]=:\sigma_k$ for any $k\in\mathbb{Z}^d$.
We observe that
\begin{aligned}&\mathrm{Cov}\Big(|B_n|^{-1/2}\sum_{j\in B_n}g_n^{(1)}(X_j),\ |B_n|^{-1/2}\sum_{k\in B_n}g_n^{(2)}(X_k)\Big)\\
&=\underbrace{\frac{1}{|B_n|}\sum_{j\in B_n}\sum_{k\in B_n}\Big(\mathbb{E}\big[g_n^{(1)}(X_j)\,g_n^{(2)}(X_k)\big]-\mathbb{E}\big[g^{(1)}(X_j)\,g^{(2)}(X_k)\big]\Big)}_{=:y_n}+\underbrace{\frac{1}{|B_n|}\sum_{j\in B_n}\sum_{k\in B_n}\mathbb{E}\big[g^{(1)}(X_j)\,g^{(2)}(X_k)\big]}_{=:z_n}.\end{aligned}
Since $(X_j)_{j\in\mathbb{Z}^d}$ is $m$-dependent and stationary, and since $(B_n)_{n\in\mathbb{N}}$ is regularly growing to infinity, the same computation as in the proof of Theorem 1.8 in [7, p. 175] shows that
\[\lim_{n\to\infty}z_n=\sum_{t\in\mathbb{Z}^d:\,\|t\|_\infty\le m}\sigma_t.\]
It remains to show that $\lim_{n\to\infty}y_n=0$. Indeed, the $m$-dependence and property (1) yield
\begin{aligned}|y_n|&\le\frac{1}{|B_n|}\sum_{j\in B_n}\sum_{k\in\mathbb{Z}^d}\big|\mathbb{E}\big[g_n^{(1)}(X_0)\,g_n^{(2)}(X_{k-j})\big]-\mathbb{E}\big[g^{(1)}(X_0)\,g^{(2)}(X_{k-j})\big]\big|\\
&\le\sum_{k\in\mathbb{Z}^d:\,\|k\|_\infty\le m}\big|\mathbb{E}\big[g_n^{(1)}(X_0)\,g_n^{(2)}(X_k)\big]-\mathbb{E}\big[g^{(1)}(X_0)\,g^{(2)}(X_k)\big]\big|\to 0,\quad\text{as }n\to\infty.\end{aligned}
□
Acknowledgement
I would like to thank Evgeny Spodarev and Alexander Bulinski for fruitful discussions on the subject of this paper.
References
[1] Barndorff-Nielsen, O.E.: Stationary infinitely divisible processes. 25(3), 294–322 (2011). MR2832888. https://doi.org/10.1214/11-BJPS140
[2] Barndorff-Nielsen, O.E., Schmiegel, J.: Lévy-based tempo-spatial modelling; with applications to turbulence. 59(1), 63–90 (2004)
[3] Barndorff-Nielsen, O.E., Schmiegel, J.: Ambit processes; with applications to turbulence and tumour growth. In: The Abel Symposium 2005, 93–124 (2007). MR2397785. https://doi.org/10.1007/978-3-540-70847-6_5
[4] Belomestny, D., Panov, V., Woerner, J.: Low frequency estimation of continuous-time moving average Lévy processes. To appear in Bernoulli. arXiv:1607.00896v1 (2017). MR3920361. https://doi.org/10.3150/17-bej1008
[5] Belomestny, D., Comte, F., Genon-Catalot, V., Masuda, H., Reiß, M.: Springer (2010). MR3364253
[6] Billingsley, P.: Wiley, New Jersey (2012). MR2893652
[7] Bulinski, A., Shashkin, A.: World Scientific Publishing, Singapore (2007). MR2375106. https://doi.org/10.1142/9789812709417
[8] Chen, L.H.Y., Shao, Q.: Normal approximation under local dependence. 32(3A), 1985–2028 (2004). MR2073183. https://doi.org/10.1214/009117904000000450
[9] Comte, F., Genon-Catalot, V.: Nonparametric estimation for pure jump Lévy processes based on high frequency data. 119(12), 4088–4123 (2009). MR2565560. https://doi.org/10.1016/j.spa.2009.09.013
[10] Comte, F., Genon-Catalot, V.: Nonparametric adaptive estimation for pure jump Lévy processes. 46(3), 595–617 (2010). MR2682259. https://doi.org/10.1214/09-AIHP323
[11] Dedecker, J.: Exponential inequalities and functional central limit theorems for random fields. 5(1), 77–104 (2001). MR1875665. https://doi.org/10.1051/ps:2001103
[12] Deitmar, A., Echterhoff, S.: Springer (2009). MR2457798
[13] Glück, J., Roth, S., Spodarev, E.: A solution of a linear integral equation with the application to statistics of infinitely divisible moving averages. Preprint. arXiv:1807.02003 (2018)
[14] Gugushvili, S.: Nonparametric inference for discretely sampled Lévy processes. 48, 282–307 (2012). MR2919207. https://doi.org/10.1214/11-AIHP433
[15] Heinrich, L.: Some bounds of cumulants of m-dependent random fields. 149(1), 303–317 (1990). MR1124812. https://doi.org/10.1002/mana.19901490123
[16] Jónsdóttir, K.Y., Schmiegel, J., Jensen, E.B.V.: Lévy-based growth models. 14(1), 62–90 (2008). MR2401654. https://doi.org/10.3150/07-BEJ6130
[17] Karcher, W.: On infinitely divisible random fields with an application in insurance. PhD thesis, Ulm University (2012)
[18] Karcher, W., Roth, S., Spodarev, E., Walk, C.: An inverse problem for infinitely divisible moving average random fields. (2018). MR3959289. https://doi.org/10.1007/s11203-018-9188-6
[19] Neumann, M.H., Reiß, M.: Nonparametric estimation for Lévy processes from low-frequency observations. 15(1), 223–248 (2009). MR2546805. https://doi.org/10.3150/08-BEJ148
[20] Nickl, R., Reiß, M.: A Donsker theorem for Lévy measures. 263(10), 3306–3332 (2012). MR2973342. https://doi.org/10.1016/j.jfa.2012.08.012
[21] Rajput, B.S., Rosinski, J.: Spectral representations of infinitely divisible processes. 82, 451–487 (1989). MR1001524. https://doi.org/10.1007/BF00339998
[22] Sato, K.I.: Cambridge University Press, Cambridge (1999). MR1739520
[23] Trabs, M.: On infinitely divisible distributions with polynomially decaying characteristic functions. 94, 56–62 (2014). MR3257361. https://doi.org/10.1016/j.spl.2014.07.002