Convergence rates in the central limit theorem for weighted sums of Bernoulli random fields
Volume 6, Issue 2 (2019), pp. 251–267
Davide Giraudo  

https://doi.org/10.15559/18-VMSTA121
Pub. online: 21 December 2018      Type: Research Article      Open Access

Received
20 December 2017
Revised
19 October 2018
Accepted
20 October 2018
Published
21 December 2018

Abstract

Moment inequalities for a class of functionals of i.i.d. random fields are proved. Rates in the central limit theorem for weighted sums of such random fields are then derived via an approximation by m-dependent random fields.

1 Introduction and main results

1.1 Goal of the paper

In its simplest form, the central limit theorem states that if ${({X_{i}})}_{i\geqslant 1}$ is an independent identically distributed (i.i.d.) sequence of centered random variables having variance one, then the sequence ${({n^{-1/2}}{\sum _{i=1}^{n}}{X_{i}})}_{n\geqslant 1}$ converges in distribution to a standard normal random variable. If ${X_{1}}$ has a finite moment of order three, Berry [2] and Esseen [12] gave the following convergence rate:
(1)
\[ \underset{t\in \mathbb{R}}{\sup }\Bigg|\mathbb{P}\Bigg\{{n^{-1/2}}{\sum \limits_{i=1}^{n}}{X_{i}}\leqslant t\Bigg\}-\mathbb{P}\{N\leqslant t\}\Bigg|\leqslant C\mathbb{E}\big[|{X_{1}}{|^{3}}\big]{n^{-1/2}},\]
where C is a numerical constant and N has the standard normal distribution. The question of extending the previous result to a larger class of sequences has received a lot of attention. When ${X_{i}}$ can be represented as a function of an i.i.d. sequence, optimal convergence rates are given in [13].
In this paper, we will focus on random fields, that is, collections of random variables indexed by ${\mathbb{Z}^{d}}$, and more precisely on Bernoulli random fields, which are defined as follows.
Definition 1.
Let $d\geqslant 1$ be an integer. The random field ${({X_{\boldsymbol{n}}})}_{\boldsymbol{n}\in {\mathbb{Z}^{d}}}$ is said to be Bernoulli if there exist an i.i.d. random field ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and a measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{n}}}=f({({\varepsilon _{\boldsymbol{n}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ for each $\boldsymbol{n}\in {\mathbb{Z}^{d}}$.
We are interested in the asymptotic behavior of the sequence ${({S_{n}})}_{n\geqslant 1}$ defined by
(2)
\[ {S_{n}}:=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}},\]
where ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$. Under appropriate conditions on the dependence of the random field ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and on the sequence of weights ${({b_{n}})}_{n\geqslant 1}$, which will be specified later, the sequence ${({S_{n}}/\| {b_{n}}{\| _{2}})}_{n\geqslant 1}$ converges in law to a normal distribution [15]. The goal of this paper is to provide Berry–Esseen type bounds in order to give convergence rates in this central limit theorem.
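For instance, taking ${b_{n,\boldsymbol{i}}}=1$ if $\boldsymbol{i}\in {\{1,\dots ,n\}^{d}}$ and ${b_{n,\boldsymbol{i}}}=0$ otherwise (the set-indexed situation considered in Corollary 2 below), one recovers the usual partial sums with the classical normalization, since in this case
\[ {S_{n}}=\sum \limits_{\boldsymbol{i}\in {\{1,\dots ,n\}^{d}}}{X_{\boldsymbol{i}}}\hspace{1em}\hspace{2.5pt}\text{and}\hspace{2.5pt}\hspace{1em}\| {b_{n}}{\| _{2}}={n^{d/2}}.\]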
This type of question has been addressed for the so-called $\operatorname{BL}(\theta )$-dependent random fields [5], martingale differences random fields [19], positively and negatively dependent random fields [4, 20] and mixing random fields [1, 6].
In order to establish results of this kind, we need several ingredients. First, we need convergence rates for m-dependent random fields. Second, a Bernoulli random field can be decomposed as the sum of an m-dependent random field and a remainder. The contribution of the remainder is controlled by a moment inequality in the spirit of Rosenthal’s inequality [24]. One of the main applications of such an inequality is the estimation of convergence rates in the central limit theorem for random fields that can be expressed as functionals of an i.i.d. random field. The method consists of approximating the considered random field by an m-dependent one and controlling the approximation with the help of the established moment inequality. In the one-dimensional case, probability and moment inequalities have been established in [16] for maxima of partial sums of Bernoulli sequences. The techniques used therein make it possible to derive results for weighted sums of such sequences.
The paper is organized as follows. In Subsection 1.2, we give the material which is necessary to understand the moment inequality stated in Theorem 1. We then give the results on convergence rates in Subsection 1.3 (for weighted sums, sums on subsets of ${\mathbb{Z}^{d}}$ and in a regression model) and compare the obtained results in the case of linear random fields with some existing ones. Section 2 is devoted to the proofs.

1.2 Background

The following version of Rosenthal’s inequality is due to Johnson, Schechtman and Zinn [14]: if ${({Y_{i}})_{i=1}^{n}}$ are independent centered random variables with a finite moment of order $p\geqslant 2$, then
(3)
\[ {\Bigg\| {\sum \limits_{i=1}^{n}}{Y_{i}}\Bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\Bigg({\Bigg({\sum \limits_{i=1}^{n}}\| {Y_{i}}{\| _{2}^{2}}\Bigg)^{1/2}}+{\Bigg({\sum \limits_{i=1}^{n}}\| {Y_{i}}{\| _{p}^{p}}\Bigg)^{1/p}}\Bigg),\]
where $\| Y{\| _{q}}:={(\mathbb{E}[|Y{|^{q}}])^{1/q}}$ for $q\geqslant 1$.
It was first established, without an explicit constant, in Theorem 3 of [24].
Various extensions of Rosenthal-type inequalities have been obtained under mixing conditions [25, 22] or projective conditions [21, 23, 17]. We are interested in extensions of (3) to the setting of dependent random fields.
Throughout the paper, we shall use the following notations.
  • (N.1) For a positive integer d, the set $\{1,\dots ,d\}$ is denoted by $[d]$.
  • (N.2) The coordinatewise order is denoted by ≼, that is, for $\boldsymbol{i}={({i_{q}})_{q=1}^{d}}\in {\mathbb{Z}^{d}}$ and $\boldsymbol{j}={({j_{q}})_{q=1}^{d}}\in {\mathbb{Z}^{d}}$, $\boldsymbol{i}\preccurlyeq \boldsymbol{j}$ means that ${i_{k}}\leqslant {j_{k}}$ for any $k\in [d]$.
  • (N.3) For $q\in [d]$, ${\boldsymbol{e}_{\boldsymbol{q}}}$ denotes the element of ${\mathbb{Z}^{d}}$ whose qth coordinate is 1 and all the others are zero. Moreover, we write $\mathbf{0}=(0,\dots ,0)$ and $\mathbf{1}=(1,\dots ,1)$.
  • (N.4) For $\boldsymbol{n}={({n_{k}})_{k=1}^{d}}\in {\mathbb{N}^{d}}$, we write the product ${\prod _{k=1}^{d}}{n_{k}}$ as $|\boldsymbol{n}|$.
  • (N.5) The cardinality of a set I is denoted by $|I|$.
  • (N.6) For a real number x, we denote by $[x]$ the unique integer such that $[x]\leqslant x\mathrm{<}[x]+1$.
  • (N.7) We write Φ for the cumulative distribution function of the standard normal law.
  • (N.8) If Λ is a subset of ${\mathbb{Z}^{d}}$ and $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, then $\varLambda -\boldsymbol{k}$ is defined as $\{\boldsymbol{l}-\boldsymbol{k},\boldsymbol{l}\in \varLambda \}$.
  • (N.9) For $q\geqslant 1$, we denote by ${\ell ^{q}}({\mathbb{Z}^{d}})$ the space of sequences $\boldsymbol{a}:={({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ such that $\| \boldsymbol{a}{\| _{{\ell ^{q}}}}:={({\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|{a_{\boldsymbol{i}}}{|^{q}})^{1/q}}\mathrm{<}+\infty $.
  • (N.10) For $\boldsymbol{i}={({i_{q}})_{q=1}^{d}}$, the quantity $\| \boldsymbol{i}{\| _{\infty }}$ is defined as ${\max _{1\leqslant q\leqslant d}}|{i_{q}}|$.
Let ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a random field. The sum ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{Y_{\boldsymbol{i}}}$ is understood as the ${\mathbb{L}^{1}}$-limit of the sequence ${({S_{k}})}_{k\geqslant 1}$ where ${S_{k}}={\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}},\| \boldsymbol{i}{\| _{\infty }}\leqslant k}}{Y_{\boldsymbol{i}}}$.
Following [27] we define the physical dependence measure.
Definition 2.
Let ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}:={(f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a Bernoulli random field, $p\geqslant 1$ and ${({\varepsilon ^{\prime }_{\boldsymbol{u}}})_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}}$ be an i.i.d. random field which is independent of the i.i.d. random field ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$ and has the same distribution as ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$. For $\boldsymbol{i}\in {\mathbb{Z}^{d}}$, we introduce the physical dependence measure
(4)
\[ {\delta _{\boldsymbol{i},p}}:={\big\| {X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{\ast }}\big\| }_{p},\]
where ${X_{\boldsymbol{i}}^{\ast }}=f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{j}}^{\ast }})_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}})$ and ${\varepsilon _{\boldsymbol{u}}^{\ast }}={\varepsilon _{\boldsymbol{u}}}$ if $\boldsymbol{u}\ne \mathbf{0}$, ${\varepsilon _{\mathbf{0}}^{\ast }}={\varepsilon ^{\prime }_{\mathbf{0}}}$.
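For instance, for a linear random field ${X_{\boldsymbol{n}}}={\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{a_{\boldsymbol{i}}}{\varepsilon _{\boldsymbol{n}-\boldsymbol{i}}}$ with square summable coefficients and ${\varepsilon _{\mathbf{0}}}\in {\mathbb{L}^{p}}$, a direct computation gives ${X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{\ast }}={a_{\boldsymbol{i}}}({\varepsilon _{\mathbf{0}}}-{\varepsilon ^{\prime }_{\mathbf{0}}})$, hence
\[ {\delta _{\boldsymbol{i},p}}=|{a_{\boldsymbol{i}}}|{\big\| {\varepsilon _{\mathbf{0}}}-{\varepsilon ^{\prime }_{\mathbf{0}}}\big\| }_{p};\]
in particular ${\delta _{\boldsymbol{i},p}}\leqslant K{\delta _{\boldsymbol{i},2}}$ with $K=\| {\varepsilon _{\mathbf{0}}}-{\varepsilon ^{\prime }_{\mathbf{0}}}{\| _{p}}/\| {\varepsilon _{\mathbf{0}}}-{\varepsilon ^{\prime }_{\mathbf{0}}}{\| _{2}}$, a fact used below when discussing linear processes.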
In [11, 3], various examples of Bernoulli random fields are given, for which the physical dependence measure is either computed or estimated. Proposition 1 of [11] also gives the following moment inequality: if Γ is a finite subset of ${\mathbb{Z}^{d}}$, ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in \varGamma }$ is a family of real numbers and $p\geqslant 2$, then for any Bernoulli random field ${({X_{\boldsymbol{n}}})}_{\boldsymbol{n}\in {\mathbb{Z}^{d}}}$,
(5)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant {\bigg(2p\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\cdot \sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}{\delta _{\boldsymbol{j},p}}.\]
This was used in [11, 3] in order to establish functional central limit theorems. Truquet [26] also obtained an inequality in this spirit. If ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is an i.i.d. centered random field, (5) would give
(6)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\| {X_{\mathbf{1}}}{\| _{p}},\]
while Rosenthal’s inequality (3) would give
(7)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\| {X_{\mathbf{1}}}{\| _{2}}+C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}\| {X_{\mathbf{1}}}{\| _{p}},\]
which is a better result in this context.
In the case of linear processes, the inequality ${\delta _{\boldsymbol{j},p}}\leqslant K{\delta _{\boldsymbol{j},2}}$ holds for a constant K which does not depend on $\boldsymbol{j}$. However, there are processes for which such an inequality does not hold.
Example 1.
We give an example of a random field for which there is no constant K such that ${\delta _{\boldsymbol{j},p}}\leqslant K{\delta _{\boldsymbol{j},2}}$ holds for all $\boldsymbol{j}\in {\mathbb{Z}^{d}}$. Let $p\geqslant 2$, let ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an i.i.d. random field and, for each $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, let ${f_{\boldsymbol{k}}}:\mathbb{R}\to \mathbb{R}$ be a function such that the random variable ${Z_{\boldsymbol{k}}}:={f_{\boldsymbol{k}}}({\varepsilon _{\mathbf{0}}})$ is centered and has a finite moment of order p, and ${\sum _{\boldsymbol{k}\in {\mathbb{Z}^{d}}}}\| {Z_{\boldsymbol{k}}}{\| _{2}^{2}}\mathrm{<}+\infty $. Define ${X_{\boldsymbol{n}}}:={\lim \nolimits_{N\to +\infty }}{\sum _{-N\mathbf{1}\preccurlyeq \boldsymbol{k}\preccurlyeq N\mathbf{1}}}{f_{\boldsymbol{k}}}({\varepsilon _{\boldsymbol{n}-\boldsymbol{k}}})$, where the limit is taken in ${\mathbb{L}^{2}}$. Then ${X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{\ast }}={f_{\boldsymbol{i}}}({\varepsilon _{\mathbf{0}}})-{f_{\boldsymbol{i}}}({\varepsilon ^{\prime }_{\mathbf{0}}})$, hence ${\delta _{\boldsymbol{i},2}}$ is of order $\| {Z_{\boldsymbol{i}}}{\| _{2}}$ while ${\delta _{\boldsymbol{i},p}}$ is of order $\| {Z_{\boldsymbol{i}}}{\| _{p}}$.
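Indeed, writing ${Z^{\prime }_{\boldsymbol{i}}}:={f_{\boldsymbol{i}}}({\varepsilon ^{\prime }_{\mathbf{0}}})$ for the independent copy of ${Z_{\boldsymbol{i}}}$, one can check that
\[ {\delta _{\boldsymbol{i},2}}={\big\| {Z_{\boldsymbol{i}}}-{Z^{\prime }_{\boldsymbol{i}}}\big\| }_{2}=\sqrt{2}\| {Z_{\boldsymbol{i}}}{\| _{2}}\hspace{1em}\hspace{2.5pt}\text{and}\hspace{2.5pt}\hspace{1em}\| {Z_{\boldsymbol{i}}}{\| _{p}}\leqslant {\delta _{\boldsymbol{i},p}}\leqslant 2\| {Z_{\boldsymbol{i}}}{\| _{p}},\]
the lower bound following from Jensen's inequality applied conditionally on ${\varepsilon _{\mathbf{0}}}$; it then suffices to choose the functions ${f_{\boldsymbol{k}}}$ in such a way that the ratios $\| {Z_{\boldsymbol{k}}}{\| _{p}}/\| {Z_{\boldsymbol{k}}}{\| _{2}}$ are unbounded.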
Consequently, having the ${\ell ^{p}}$-norm instead of the ${\ell ^{2}}$-norm of the ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in \varGamma }$ is more suitable.

1.3 Main results

We now give a Rosenthal-like inequality for weighted sums of Bernoulli random fields in terms of the physical dependence measure.
Theorem 1.
Let ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an i.i.d. random field. For any measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{j}}}:=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ has a finite moment of order $p\geqslant 2$ and is centered, and any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$,
(8)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d(1-1/p)}}\| {X_{\mathbf{0},j}}{\| _{p}},\end{array}\]
where for $j\geqslant 1$,
(9)
\[ {X_{\mathbf{0},j}}=\mathbb{E}\big[{X_{\mathbf{0}}}\mid \sigma \big\{{\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j\big\}\big]-\mathbb{E}\big[{X_{\mathbf{0}}}\mid \sigma \big\{{\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1\big\}\big]\]
and ${X_{\mathbf{0},0}}=\mathbb{E}[{X_{\mathbf{0}}}\mid \sigma \{{\varepsilon _{\mathbf{0}}}\}]$.
We can formulate a version of inequality (8) where the right-hand side is expressed in terms of the coefficients of the physical dependence measure. The obtained result is not directly comparable to (5) because of the presence of the ${\ell ^{p}}$-norm of the coefficients.
Corollary 1.
Let $\{{\varepsilon _{\boldsymbol{i}}},\boldsymbol{i}\in {\mathbb{Z}^{d}}\}$ be an i.i.d. random field. Then for any measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{j}}}:=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ has a finite moment of order $p\geqslant 2$ and is centered, and any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$,
(10)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \sqrt{2}\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d/2}}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},2}^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\sqrt{2}\frac{14.5p}{\log p}\sqrt{p-1}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d(1-1/p)}}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},p}^{2}}\bigg)^{1/2}}.\end{array}\]
Let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ be a centered square integrable Bernoulli random field and for any positive integer n, let ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$. We are interested in the asymptotic behavior of the sequence ${({S_{n}})}_{n\geqslant 1}$ defined by
(11)
\[ {S_{n}}:=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}}.\]
For $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, let ${\tau _{\boldsymbol{k}}}:{\ell ^{2}}({\mathbb{Z}^{d}})\to {\ell ^{2}}({\mathbb{Z}^{d}})$ denote the map defined by
\[ {\tau _{\boldsymbol{k}}}\big({({x_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\big):={({x_{\boldsymbol{i}+\boldsymbol{k}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}.\]
In [15], Corollary 2.6 gives the following result: under a Hannan type condition on the random field ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and under the condition on the weights that for any $q\in [d]$,
(12)
\[ \underset{n\to +\infty }{\lim }\frac{1}{\| {b_{n}}{\| _{{\ell ^{2}}}}}{\big\| {\tau _{{\boldsymbol{e}_{\boldsymbol{q}}}}}({b_{n}})-{b_{n}}\big\| }_{{\ell ^{2}}}=0,\]
the series ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ converges, and the sequence ${({S_{n}}/\| {b_{n}}{\| _{{\ell ^{2}}}})}_{n\geqslant 1}$ converges in distribution to a centered normal distribution with variance ${\sigma ^{2}}$, where
(13)
\[ \sigma :={\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})\bigg)^{1/2}}.\]
The argument relies on an approximation by an m-dependent random field.
The purpose of the next theorem is to give general rates of convergence. In order to measure them, we define
(14)
\[ {\varDelta _{n}}:=\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{S_{n}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\leqslant t\bigg\}-\varPhi (t/\sigma )\bigg|.\]
The following quantity will also play an important role in the estimation of convergence rates:
(15)
\[ {\varepsilon _{n}}:=\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg|\frac{{\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}}{b_{n,\boldsymbol{i}+\boldsymbol{j}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}-1\bigg|.\]
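For instance, if ${b_{n,\boldsymbol{i}}}=1$ for $\boldsymbol{i}\in {\varLambda _{n}}$ and 0 otherwise, then ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}}{b_{n,\boldsymbol{i}+\boldsymbol{j}}}=|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|$ and $\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}=|{\varLambda _{n}}|$, so that
\[ {\varepsilon _{n}}=\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg|\frac{|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|}{|{\varLambda _{n}}|}-1\bigg|,\]
which is the quantity appearing in Corollary 2 below.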
Theorem 2.
Let $p\mathrm{>}2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}={(f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order p and for any positive integer n, let ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$ such that for any $n\geqslant 1$, the set $\{\boldsymbol{k}\in {\mathbb{Z}^{d}},{b_{n,\boldsymbol{k}}}\ne 0\}$ is finite and nonempty, ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ and (12) holds for any $q\in [d]$. Assume that for some positive α and β, the following series are convergent:
(16)
\[ {C_{2}}(\alpha ):={\sum \limits_{i=0}^{+\infty }}{(i+1)^{d/2+\alpha }}\| {X_{\mathbf{0},i}}{\| _{2}}\hspace{1em}\hspace{2.5pt}\textit{and}\hspace{2.5pt}\hspace{1em}{C_{p}}(\beta ):={\sum \limits_{i=0}^{+\infty }}{(i+1)^{d(1-1/p)+\beta }}\| {X_{\mathbf{0},i}}{\| _{p}}.\]
Let ${S_{n}}$ be defined by (11).
Assume that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that σ given by (13) is positive. Let $\gamma \mathrm{>}0$ and let
(17)
\[ {n_{0}}:=\inf \big\{N\geqslant 1\mid \forall n\geqslant N,\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){\big({\big[\| {b_{n}}{\| _{{\ell ^{2}}}}\big]^{\gamma }}\big)^{-\alpha }}\geqslant \sigma /2\big\}.\]
Then for each $n\geqslant {n_{0}}$,
(18)
\[\begin{array}{cc}& \displaystyle {\varDelta _{n}}\leqslant 150{\big(20{\big(\big[\| {b_{n}}{\| _{{\ell ^{2}}}}\big]+1\big)^{\gamma }}+21\big)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)^{{p^{\prime }}}}{(\sigma /2)^{-{p^{\prime }}}}\\ {} & \displaystyle +\bigg(2\frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+80{(\log 2)^{-1}}\frac{\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}}{{\sigma ^{2}}}{C_{2}}{(\alpha )^{2}}\bigg){(2\pi e)^{-1/2}}\\ {} & \displaystyle +{\bigg(\frac{14.5p}{\sigma \log p}{4^{d/2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}{C_{2}}(\alpha )\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{p}}}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\frac{14.5p}{\log p}{4^{d(1-1/p)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \beta }}{C_{p}}(\beta )\bigg)^{\frac{p}{p+1}}}.\end{array}\]
In particular, there exists a constant κ such that for all $n\geqslant {n_{0}}$,
(19)
\[\begin{array}{cc}& \displaystyle {\varDelta _{n}}\leqslant \kappa \big(\| {b_{n}}{\| _{{\ell ^{2}}}^{\gamma ({p^{\prime }}-1)d-{p^{\prime }}}}\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}^{{p^{\prime }}}}+|{\varepsilon _{n}}|\big)\\ {} & \displaystyle +\kappa \big(\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha \frac{p}{p+1}}}+\| {b_{n}}{\| _{{\ell ^{p}}}^{\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\frac{p}{p+1}(\gamma \beta +1)}}\big).\end{array}\]
Remark 1.
If (12) holds, ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ and the family ${({\delta _{\boldsymbol{i},2}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is summable, then the sequence ${({\varepsilon _{n}})}_{n\geqslant 1}$ converges to 0; hence ${n_{0}}$ is well defined. However, it is not clear to us whether the finiteness of ${C_{2}}(\alpha )$ combined with (12) and ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ implies that ${\sum _{\boldsymbol{j}\in {\mathbb{Z}^{d}}}}|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]|$ is finite. Nevertheless, we can show an analogous result in terms of the coefficients ${\delta _{\boldsymbol{i},p}}$ with the following changes in the statement of Theorem 2:
  • 1. the definition of ${C_{2}}(\alpha )$ should be replaced with
    (20)
    \[ {C_{2}}(\alpha ):=\sqrt{2}{\sum \limits_{j=0}^{+\infty }}{(j+1)^{d/2+\alpha }}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},2}^{2}}\bigg)^{1/2}};\]
  • 2. the definition of ${C_{p}}(\beta )$ should be replaced with
    (21)
    \[ {C_{p}}(\beta ):=\sqrt{2(p-1)}{\sum \limits_{j=0}^{+\infty }}{(j+1)^{d(1-1/p)+\beta }}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},p}^{2}}\bigg)^{1/2}}.\]
In this case, the convergence of ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ holds (cf. Proposition 2 in [11]).
Recall the notation (N.8). Let ${({\varLambda _{n}})}_{n\geqslant 1}$ be a sequence of subsets of ${\mathbb{Z}^{d}}$. The choice ${b_{n,\boldsymbol{j}}}=1$ if $\boldsymbol{j}\in {\varLambda _{n}}$ and 0 otherwise yields the following corollary for set-indexed partial sums.
Corollary 2.
Let ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order $p\mathrm{>}2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({\varLambda _{n}})}_{n\geqslant 1}$ be a sequence of subsets of ${\mathbb{Z}^{d}}$ such that $|{\varLambda _{n}}|\to +\infty $ and, for any $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, ${\lim \nolimits_{n\to +\infty }}|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{k})|/|{\varLambda _{n}}|=1$. Assume that the series defined in (16) are convergent for some positive α and β, that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that σ defined by (13) is positive. Let $\gamma \mathrm{>}0$ and ${n_{0}}$ be defined by (17). There exists a constant κ such that for any $n\geqslant {n_{0}}$,
(22)
\[\begin{array}{cc}& \displaystyle \underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}{X_{\boldsymbol{i}}}}{|{\varLambda _{n}}{|^{1/2}}}\leqslant t\bigg\}-\varPhi (t/\sigma )\bigg|\\ {} & \displaystyle \leqslant \kappa \bigg(|{\varLambda _{n}}{|^{q}}+\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg|\frac{|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|}{|{\varLambda _{n}}|}-1\bigg|\bigg),\end{array}\]
where
(23)
\[ q:=\max \bigg\{\frac{\gamma ({p^{\prime }}-1)d-{p^{\prime }}}{2}+1;-\gamma \alpha \frac{p}{2(p+1)};\frac{2-p-p\gamma \beta }{2(p+1)}\bigg\}.\]
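The exponents in (23) can be read off from (19): for the indicator weights above, $\| {b_{n}}{\| _{{\ell ^{q}}}}=|{\varLambda _{n}}{|^{1/q}}$ for every $q\geqslant 2$, so that, for example, the first term in (19) becomes
\[ \| {b_{n}}{\| _{{\ell ^{2}}}^{\gamma ({p^{\prime }}-1)d-{p^{\prime }}}}\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}^{{p^{\prime }}}}=|{\varLambda _{n}}{|^{\frac{\gamma ({p^{\prime }}-1)d-{p^{\prime }}}{2}+1}},\]
which gives the first argument of the maximum in (23); the two remaining exponents are obtained in the same way.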
We consider now the following regression model:
(24)
\[ {Y_{\boldsymbol{i}}}=g\bigg(\frac{\boldsymbol{i}}{n}\bigg)+{X_{\boldsymbol{i}}},\hspace{1em}\boldsymbol{i}\in {\varLambda _{n}}:={\{1,\dots ,n\}^{d}},\]
where $g:{[0,1]^{d}}\to \mathbb{R}$ is an unknown smooth function and ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is a zero mean stationary Bernoulli random field. Let K be a probability kernel defined on ${\mathbb{R}^{d}}$ and let ${({h_{n}})}_{n\geqslant 1}$ be a sequence of positive numbers which converges to zero and which satisfies
(25)
\[ \underset{n\to +\infty }{\lim }n{h_{n}}=+\infty \hspace{1em}\hspace{2.5pt}\text{and}\hspace{2.5pt}\hspace{1em}\underset{n\to +\infty }{\lim }n{h_{n}^{d+1}}=0.\]
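For instance, the bandwidths ${h_{n}}={n^{-a}}$ with $1/(d+1)\mathrm{<}a\mathrm{<}1$ satisfy (25), since then $n{h_{n}}={n^{1-a}}\to +\infty $ and $n{h_{n}^{d+1}}={n^{1-a(d+1)}}\to 0$.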
We estimate the function g by the kernel estimator ${g_{n}}$ defined by
(26)
\[ {g_{n}}(\boldsymbol{x})=\frac{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}{Y_{\boldsymbol{i}}}K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})}{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})},\hspace{1em}\boldsymbol{x}\in {[0,1]^{d}}.\]
We make the following assumptions on the regression function g and the probability kernel K:
  • (A) The probability kernel K fulfills ${\int _{{\mathbb{R}^{d}}}}K(\boldsymbol{u})\mathrm{d}\boldsymbol{u}=1$, is symmetric, non-negative, supported by ${[-1,1]^{d}}$. Moreover, there exist positive constants r, c and C such that for any $\boldsymbol{x},\boldsymbol{y}\in {[-1,1]^{d}}$, $|K(\boldsymbol{x})-K(\boldsymbol{y})|\leqslant r\| \boldsymbol{x}-\boldsymbol{y}{\| _{\infty }}$ and $c\leqslant K(\boldsymbol{x})\leqslant C$.
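For instance, in dimension $d=1$, the kernel $K(u)=\frac{3}{16}(3-{u^{2}})$ for $u\in [-1,1]$ and $K(u)=0$ otherwise satisfies (A): it is symmetric, integrates to 1, satisfies $3/8\leqslant K(u)\leqslant 9/16$ on $[-1,1]$ and is Lipschitz there.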
We measure the rate of convergence of ${({(n{h_{n}})^{d/2}}({g_{n}}(\boldsymbol{x})-\mathbb{E}[{g_{n}}(\boldsymbol{x})]))}_{n\geqslant 1}$ to a normal distribution by the use of the quantity
(27)
\[ \widetilde{{\varDelta _{n}}}:=\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\big\{{(n{h_{n}})^{d/2}}\big({g_{n}}(\boldsymbol{x})-\mathbb{E}\big[{g_{n}}(\boldsymbol{x})\big]\big)\leqslant t\big\}-\varPhi \bigg(\frac{t}{\sigma \| K{\| _{2}}}\bigg)\bigg|.\]
Two other quantities will be involved, namely,
(28)
\[ {A_{n}}:={(n{h_{n}})^{d/2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}{K^{2}}\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\bigg)^{1/2}}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{-1}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\bigg)^{-1}}\]
and
(29)
\[ {\varepsilon _{n}}:=\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})}\frac{K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})K(\frac{\boldsymbol{x}-(\boldsymbol{i}-\boldsymbol{j})/n}{{h_{n}}})}{{\sum _{\boldsymbol{k}\in {\varLambda _{n}}}}{K^{2}}(\frac{\boldsymbol{x}-\boldsymbol{k}/n}{{h_{n}}})}-1\bigg).\]
Theorem 3.
Let $p\mathrm{>}2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}={(f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order p. Assume that for some positive α and β, the following series are convergent:
(30)
\[ {C_{2}}(\alpha ):={\sum \limits_{i=0}^{+\infty }}{(i+1)^{d/2+\alpha }}\| {X_{\mathbf{0},i}}{\| _{2}}\hspace{1em}\hspace{2.5pt}\textit{and}\hspace{2.5pt}\hspace{1em}{C_{p}}(\beta ):={\sum \limits_{i=0}^{+\infty }}{(i+1)^{d(1-1/p)+\beta }}\| {X_{\mathbf{0},i}}{\| _{p}}.\]
Let ${g_{n}}(\boldsymbol{x})$ be defined by (26), ${({h_{n}})}_{n\geqslant 1}$ be a sequence which converges to 0 and satisfies (25).
Assume that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that σ given by (13) is positive. Let ${n_{1}}\in \mathbb{N}$ be such that for each $n\geqslant {n_{1}}$,
(31)
\[ \frac{1}{2}\leqslant {(n{h_{n}})^{-d}}\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\leqslant \frac{3}{2}\]
and
(32)
\[ \frac{1}{2}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{2}}\leqslant {(n{h_{n}})^{-d}}\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}{K^{2}}\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\leqslant \frac{3}{2}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{2}}.\]
Let ${n_{0}}$ be the smallest integer such that for all $n\geqslant {n_{0}}$,
(33)
\[ \sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){\bigg({\bigg[{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K{\bigg(\frac{1}{{h_{n}}}\bigg(\boldsymbol{x}-\frac{\boldsymbol{i}}{n}\bigg)\bigg)^{2}}\bigg)^{1/2}}\bigg]^{\gamma }}\bigg)^{-\alpha }}\geqslant \sigma /2.\]
Then there exists a constant κ such that for each $n\geqslant \max \{{n_{0}},{n_{1}}\}$,
(34)
\[\begin{array}{cc}& \displaystyle \widetilde{{\varDelta _{n}}}\leqslant \kappa |{A_{n}}-1{|^{\frac{p}{p+1}}}+|{\varepsilon _{n}}|+\kappa {(n{h_{n}})^{\frac{d}{2}(\gamma ({p^{\prime }}-1)d-{p^{\prime }}+2)}}\\ {} & \displaystyle +{(n{h_{n}})^{-\frac{d}{2}\gamma \alpha \frac{p}{p+1}}}+{(n{h_{n}})^{\frac{2d-p(\gamma \beta +1)}{2(p+1)}}}.\end{array}\]
Lemma 1 in [10] shows that under (25), the sequence ${({A_{n}})}_{n\geqslant 1}$ goes to 1 as n goes to infinity and that the integer ${n_{1}}$ is well defined.
We now consider the case of linear random fields in dimension 2, that is,
(35)
\[ {X_{{j_{1}},{j_{2}}}}=\sum \limits_{{i_{1}},{i_{2}}\in \mathbb{Z}}{a_{{i_{1}},{i_{2}}}}{\varepsilon _{{j_{1}}-{i_{1}},{j_{2}}-{i_{2}}}},\]
where ${({a_{{i_{1}},{i_{2}}}})}_{({i_{1}},{i_{2}})\in {\mathbb{Z}^{2}}}\in {\ell ^{1}}({\mathbb{Z}^{2}})$, ${({\varepsilon _{{u_{1}},{u_{2}}}})}_{({u_{1}},{u_{2}})\in {\mathbb{Z}^{2}}}$ is a centered i.i.d. random field and ${\varepsilon _{0,0}}$ has a finite variance. We will focus on the case where the weights are of the form ${b_{n,{i_{1}},{i_{2}}}}=1$ if $1\leqslant {i_{1}},{i_{2}}\leqslant n$ and ${b_{n,{i_{1}},{i_{2}}}}=0$ otherwise.
Mielkaitis and Paulauskas [18] established the following convergence rate. Denoting
(36)
\[ {\varDelta ^{\prime }_{n}}:=\underset{r\geqslant 0}{\sup }\Bigg|\mathbb{P}\Bigg\{\Bigg|\frac{1}{n}{\sum \limits_{{i_{1}},{i_{2}}=1}^{n}}{X_{{i_{1}},{i_{2}}}}\Bigg|\leqslant r\Bigg\}-\mathbb{P}\big\{|N|\leqslant r\big\}\Bigg|\]
and assuming that $\mathbb{E}[|{\varepsilon _{0,0}}{|^{2+\delta }}]$ is finite and
(37)
\[ \sum \limits_{{k_{1}},{k_{2}}\in \mathbb{Z}}{\big(|{k_{1}}|+1\big)^{2}}{\big(|{k_{2}}|+1\big)^{2}}{a_{{k_{1}},{k_{2}}}^{2}}\mathrm{<}+\infty ,\]
the following estimate holds for ${\varDelta ^{\prime }_{n}}$:
(38)
\[ {\varDelta ^{\prime }_{n}}=O\big({n^{-r}}\big),\hspace{1em}r:=\frac{1}{2}\min \bigg\{\delta ,1-\frac{1}{3+\delta }\bigg\}.\]
In the context of Corollary 2, the condition on the coefficients reads as follows:
(39)
\[ {\sum \limits_{i=0}^{+\infty }}\big({i^{1+\alpha }}+{i^{2-2/p+\beta }}\big){\bigg(\sum \limits_{({j_{1}},{j_{2}}):\| ({j_{1}},{j_{2}}){\| _{\infty }}=i}{a_{{j_{1}},{j_{2}}}^{2}}\bigg)^{1/2}}\mathrm{<}+\infty ,\]
where $p=2+\delta $. Let us compare (37) with (39). Let $s:=\max \{1+\alpha ,2-2/p+\beta \}$. When $s\geqslant 2$, (39) implies (37). However, this implication does not hold if $s\mathrm{<}3/2$. Indeed, let $r\in (s+1,5/2)$ and define ${a_{{k_{1}},{k_{2}}}}:={k_{1}^{-r}}$ if ${k_{1}}={k_{2}}\geqslant 1$ and ${a_{{k_{1}},{k_{2}}}}:=0$ otherwise. Then (39) holds whereas (37) does not.
Let us discuss the convergence rates in the following example. Let ${a_{{k_{1}},{k_{2}}}}:={2^{-|{k_{1}}|-|{k_{2}}|}}$ and let $p=2+\delta $, where $\delta \in (0,1]$. In our context,
(40)
\[ \bigg|\frac{|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|}{|{\varLambda _{n}}|}-1\bigg|\leqslant \frac{{n^{2}}-(n-{j_{1}})(n-{j_{2}})}{{n^{2}}}\leqslant \frac{{j_{1}}+{j_{2}}}{n},\]
hence the convergence of ${\sum _{{j_{1}},{j_{2}}\in \mathbb{Z}}}|\operatorname{Cov}({X_{0,0}},{X_{{j_{1}},{j_{2}}}})|(|{j_{1}}|+|{j_{2}}|)$ guarantees that ${\varepsilon _{n}}$ in Corollary 2 is of order $1/n$. Moreover, since (39) holds for all α and β, the choice of γ makes it possible to reach rates of the form ${n^{-\delta +{r_{0}}}}$ for any fixed ${r_{0}}\mathrm{>}0$. In particular, when $\delta =1$, one can reach rates of the form ${n^{-1+{r_{0}}}}$ for any fixed ${r_{0}}\mathrm{>}0$. In comparison, under the same assumptions, the result of [18] gives ${n^{-3/8}}$.
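Let us also check that (39) indeed holds here for every α and β: the ${\ell ^{\infty }}$-sphere $\{({j_{1}},{j_{2}}):\| ({j_{1}},{j_{2}}){\| _{\infty }}=i\}$ contains at most $8i$ points and each of them contributes at most ${4^{-i}}$ to ${\sum _{\| ({j_{1}},{j_{2}}){\| _{\infty }}=i}}{a_{{j_{1}},{j_{2}}}^{2}}$, so this sum is at most $8i\hspace{0.1667em}{4^{-i}}$ and the series in (39) converges for any choice of α and β, the geometric decay dominating any polynomial factor.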

2 Proofs

2.1 Proof of Theorem 1

We define for $j\geqslant 1$ and $\boldsymbol{i}\in {\mathbb{Z}^{d}}$,
(41)
\[ {X_{\boldsymbol{i},j}}=\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}-\boldsymbol{i}{\| _{\infty }}\leqslant j\big)\big]-\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}-\boldsymbol{i}{\| _{\infty }}\leqslant j-1\big)\big].\]
In this way, by the martingale convergence theorem,
(42)
\[ {X_{\boldsymbol{i}}}-\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]=\underset{N\to +\infty }{\lim }{\sum \limits_{j=1}^{N}}{X_{\boldsymbol{i},j}},\]
hence
(43)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant {\sum \limits_{j=1}^{+\infty }}{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}+{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]\bigg\| }_{p}.\]
Let us fix $j\geqslant 1$. We divide ${\mathbb{Z}^{d}}$ into blocks. For $\boldsymbol{v}\in {\mathbb{Z}^{d}}$, we define
(44)
\[ {A_{\boldsymbol{v}}}:={\prod \limits_{q=1}^{d}}\big(\big[(2j+2){v_{q}},(2j+2)({v_{q}}+1)-1\big]\cap \mathbb{Z}\big),\]
and if K is a subset of $[d]$, we define
(45)
\[ {E_{K}}:=\big\{\boldsymbol{v}\in {\mathbb{Z}^{d}},{v_{q}}\hspace{2.5pt}\text{is even if and only if}\hspace{2.5pt}q\in K\big\}.\]
Therefore, the following inequality holds:
(46)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \sum \limits_{K\subset [d]}{\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}.\]
Observe that the random variable ${\sum _{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}$ is measurable with respect to the σ-algebra generated by the ${\varepsilon _{\boldsymbol{u}}}$, where $\boldsymbol{u}$ satisfies $(2j+2){v_{q}}-(j+1)\leqslant {u_{q}}\leqslant j+1+(2j+2)({v_{q}}+1)-1$ for all $q\in [d]$. Since the family $\{{\varepsilon _{\boldsymbol{u}}},\boldsymbol{u}\in {\mathbb{Z}^{d}}\}$ is independent, the family $\{{\sum _{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}},\boldsymbol{v}\in {E_{K}}\}$ is independent for each fixed $K\subset [d]$. Using inequality (3), it thus follows that
(47)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg\| \sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| _{2}^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg\| \sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| _{p}^{p}}\bigg)^{1/p}}.\end{array}\]
By stationarity, one can see that $\| {X_{\boldsymbol{i},j}}{\| _{q}}=\| {X_{\mathbf{0},j}}{\| _{q}}$ for $q\in \{2,p\}$, hence the triangle inequality yields
(48)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{p}}\bigg)^{1/p}}.\end{array}\]
By Jensen’s inequality, for $q\in \{2,p\}$,
(49)
\[ {\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{q}}\leqslant |{A_{\boldsymbol{v}}}{|^{q-1}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}{|^{q}}\leqslant {(2j+2)^{d(q-1)}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}{|^{q}}\]
and using ${\sum _{i=1}^{N}}{x_{i}^{1/q}}\leqslant {N^{\frac{q-1}{q}}}{({\sum _{i=1}^{N}}{x_{i}})^{1/q}}$, it follows that
(50)
\[\begin{array}{cc}& \displaystyle \sum \limits_{K\subset [d]}{\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{i}^{2}}\bigg)^{1/2}}{(4j+4)^{d/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{i}}{|^{p}}\bigg)^{1/p}}{(4j+4)^{d(1-1/p)}}.\end{array}\]
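In the last step we used that the sets ${E_{K}}$, $K\subset [d]$, partition ${\mathbb{Z}^{d}}$ and that the blocks ${A_{\boldsymbol{v}}}$, $\boldsymbol{v}\in {\mathbb{Z}^{d}}$, are pairwise disjoint; more explicitly, combining the stated inequality (with $N={2^{d}}$) and (49), for $q\in \{2,p\}$,
\[ \sum \limits_{K\subset [d]}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{q}}\bigg)^{1/q}}\leqslant {2^{d(1-1/q)}}{\bigg({(2j+2)^{d(q-1)}}\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{q}}\bigg)^{1/q}}={(4j+4)^{d(1-1/q)}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{q}}\bigg)^{1/q}}.\]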
Combining (43), (46) and (50), we derive that
(51)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\sum \limits_{j=1}^{+\infty }}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{i}^{2}}\bigg)^{1/2}}{(4j+4)^{d/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\sum \limits_{j=1}^{+\infty }}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{i}}{|^{p}}\bigg)^{1/p}}{(4j+4)^{d(1-1/p)}}+{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]\bigg\| }_{p}.\end{array}\]
In order to control the last term, we use inequality (3) and bound $\| \mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]{\| _{q}}$ by $\| {X_{\mathbf{0},0}}{\| _{q}}$ for $q\in \{2,p\}$. This ends the proof of Theorem 1.
Proof of Corollary 1.
The following lemma gives a control of the ${\mathbb{L}^{q}}$-norm of ${X_{\mathbf{0},j}}$ in terms of the physical dependence measure.
Lemma 1.
For $q\in \{2,p\}$ and $j\in \mathbb{N}$, the following inequality holds
(52)
\[ \| {X_{\mathbf{0},j}}{\| _{q}}\leqslant {\bigg(2(q-1)\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}},\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},q}^{2}}\bigg)^{1/2}}.\]
Proof.
Let j be fixed. Let us write the set of elements of ${\mathbb{Z}^{d}}$ whose norm $\| \cdot {\| _{\infty }}$ is equal to j as $\{{\boldsymbol{v}_{\boldsymbol{s}}},1\leqslant s\leqslant {N_{j}}\}$, where ${N_{j}}\in \mathbb{N}$. We also assume that ${\boldsymbol{v}_{\boldsymbol{s}}}-{\boldsymbol{v}_{\boldsymbol{s}\boldsymbol{-}\mathbf{1}}}\in \{{\boldsymbol{e}_{\boldsymbol{k}}},1\leqslant k\leqslant d\}$ for all $s\in \{2,\dots ,{N_{j}}\}$.
Denote
(53)
\[ {\mathcal{F}_{s}}:=\sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1,{\varepsilon _{{\boldsymbol{v}_{\boldsymbol{t}}}}},1\leqslant t\leqslant s\big),\]
and ${\mathcal{F}_{0}}:=\sigma ({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1)$. Then ${X_{\mathbf{0},j}}={\sum _{s=1}^{{N_{j}}}}\big(\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}]\big)$, from which it follows, by Theorem 2.1 in [23], that
(54)
\[ \| {X_{\mathbf{0},j}}{\| _{q}^{2}}\leqslant (q-1){\sum \limits_{s=1}^{{N_{j}}}}{\big\| \mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}]\big\| _{q}^{2}}.\]
Arguments similar to those in the proof of Theorem 1 (i) in [27] then give the bound $\| \mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}]{\| _{q}}\leqslant {\delta _{{\boldsymbol{v}_{\boldsymbol{s}}},q}}+{\delta _{{\boldsymbol{v}_{\boldsymbol{s}\boldsymbol{-}\mathbf{1}}},q}}$. This ends the proof of Lemma 1.  □
Now, Corollary 1 follows from an application of Lemma 1 with $q=2$ and $q=p$ respectively.  □

2.2 Proof of Theorem 2

Denote for a random variable Z the quantity
(55)
\[ \delta (Z):=\underset{t\in \mathbb{R}}{\sup }\big|\mathbb{P}\{Z\leqslant t\}-\varPhi (t)\big|.\]
We say that a random field ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is m-dependent if the collections of random variables $({Y_{\boldsymbol{i}}},\boldsymbol{i}\in A)$ and $({Y_{\boldsymbol{i}}},\boldsymbol{i}\in B)$ are independent whenever
\[ \inf \big\{\| \boldsymbol{a}-\boldsymbol{b}{\| _{\infty }},\boldsymbol{a}\in A,\boldsymbol{b}\in B\big\}\mathrm{>}m.\]
The proof of Theorem 2 will use the following tools.
  • (T.1) By Theorem 2.6 in [7], if I is a finite subset of ${\mathbb{Z}^{d}}$, ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in I}$ an m-dependent centered random field such that $\mathbb{E}[|{Y_{\boldsymbol{i}}}{|^{p}}]\mathrm{<}+\infty $ for each $\boldsymbol{i}\in I$ and some $p\in (2,3]$ and $\operatorname{Var}({\sum _{\boldsymbol{i}\in I}}{Y_{\boldsymbol{i}}})=1$, then
    (56)
    \[ \delta \bigg(\sum \limits_{\boldsymbol{i}\in I}{Y_{\boldsymbol{i}}}\bigg)\leqslant 75{(10m+1)^{(p-1)d}}\sum \limits_{\boldsymbol{i}\in I}\mathbb{E}\big[|{Y_{\boldsymbol{i}}}{|^{p}}\big].\]
  • (T.2) By Lemma 1 in [8], for any two random variables Z and ${Z^{\prime }}$ and $p\geqslant 1$,
    (57)
    \[ \delta \big(Z+{Z^{\prime }}\big)\leqslant 2\delta (Z)+{\big\| {Z^{\prime }}\big\| _{p}^{\frac{p}{p+1}}}.\]
Let ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$ be an i.i.d. random field and let $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ be a measurable function such that for each $\boldsymbol{i}\in {\mathbb{Z}^{d}}$, ${X_{\boldsymbol{i}}}=f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}})$. Let $\gamma \mathrm{>}0$ and let ${n_{0}}$ be defined by (17).
Let $m:={([\| {b_{n}}{\| _{{\ell ^{2}}}}]+1)^{\gamma }}$ and let us define
(58)
\[ {X_{\boldsymbol{i}}^{(m)}}:=\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma ({\varepsilon _{\boldsymbol{u}}},\boldsymbol{i}-m\mathbf{1}\preccurlyeq \boldsymbol{u}\preccurlyeq \boldsymbol{i}+m\mathbf{1})\big].\]
Since the random variables ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$ are independent, the following properties hold.
  • (P.1) The random field ${({X_{\boldsymbol{i}}^{(m)}})_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}$ is $(2m+1)$-dependent.
  • (P.2) The random variables ${X_{\boldsymbol{i}}^{(m)}}$, $\boldsymbol{i}\in {\mathbb{Z}^{d}}$, are identically distributed and $\| {X_{\boldsymbol{i}}^{(m)}}{\| _{{p^{\prime }}}}\leqslant \| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}}$.
  • (P.3) For any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$ and $q\geqslant 2$, the following inequality holds:
    (59)
    \[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\big({X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{(m)}}\big)\bigg\| }_{q}\leqslant \frac{14.5q}{\log q}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\sum \limits_{j\geqslant m}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\\ {} & \displaystyle +\frac{14.5q}{\log q}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{q}}\bigg)^{1/q}}\sum \limits_{j\geqslant m}{(4j+4)^{d(1-1/q)}}\| {X_{\mathbf{0},j}}{\| _{q}}.\end{array}\]
    In order to prove (59), we follow the proof of Theorem 1 and start from the decomposition ${X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{(m)}}={\lim \nolimits_{N\to +\infty }}{\sum _{j=m}^{N}}{X_{\boldsymbol{i},j}}$ instead of (42).
Define ${S_{n}^{(m)}}:={\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}^{(m)}}$. An application of (T.2) to $Z:={S_{n}^{(m)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{\sigma ^{-1}}$ and ${Z^{\prime }}:=({S_{n}}-{S_{n}^{(m)}})\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{\sigma ^{-1}}$ yields
(60)
\[ {\varDelta _{n}}\leqslant 2\delta \bigg(\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)+{\sigma ^{-\frac{p}{p+1}}}\frac{1}{\| {b_{n}}{\| _{{\ell ^{2}}}^{\frac{p}{p+1}}}}{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| _{p}^{\frac{p}{p+1}}}.\]
Moreover,
(61)
\[\begin{aligned}{}\delta \bigg(\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)& =\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\leqslant t\bigg\}-\varPhi (t)\bigg|\end{aligned}\]
(62)
\[\begin{aligned}{}& =\underset{u\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{S_{n}^{(m)}}}{\| {S_{n}^{(m)}}{\| _{2}}}\leqslant u\bigg\}-\varPhi \bigg(u\frac{\| {S_{n}^{(m)}}{\| _{2}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)\bigg|\end{aligned}\]
(63)
\[\begin{aligned}{}& \leqslant \delta \bigg(\frac{{S_{n}^{(m)}}}{\| {S_{n}^{(m)}}{\| _{2}}}\bigg)+\underset{u\in \mathbb{R}}{\sup }\bigg|\varPhi \bigg(u\frac{\| {S_{n}^{(m)}}{\| _{2}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)-\varPhi (u)\bigg|,\end{aligned}\]
hence, by (P.1) and (T.1) applied with ${Y_{\boldsymbol{i}}}:={b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}^{(m)}}/\| {S_{n}^{(m)}}{\| _{2}}$, ${p^{\prime }}$ instead of p and $2m+1$ instead of m, we derive that
(64)
\[ {\varDelta _{n}}\leqslant (I)+(II)+(III)\]
where
(65)
\[ (I):=150{(20m+21)^{({p^{\prime }}-1)d}}\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{b_{n,\boldsymbol{i}}}{|^{{p^{\prime }}}}{\big\| {X_{i}^{(m)}}\big\| _{{p^{\prime }}}^{{p^{\prime }}}}{\big\| {S_{n}^{(m)}}\big\| _{2}^{-{p^{\prime }}}},\]
(66)
\[ (II):=2\underset{u\in \mathbb{R}}{\sup }\bigg|\varPhi \bigg(u\frac{\| {S_{n}^{(m)}}{\| _{2}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)-\varPhi (u)\bigg|\hspace{1em}\hspace{2.5pt}\text{and}\hspace{2.5pt}\]
(67)
\[ (III):={\sigma ^{-\frac{p}{p+1}}}\frac{1}{\| {b_{n}}{\| _{{\ell ^{2}}}^{\frac{p}{p+1}}}}{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| _{p}^{\frac{p}{p+1}}}.\]
By (P.2) and the reverse triangle inequality, the term $(I)$ can be bounded as
(68)
\[ (I)\leqslant 150{(20m+21)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}^{{p^{\prime }}}}{\big(\| {S_{n}}{\| _{2}}-{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| }_{2}\big)^{-{p^{\prime }}}}\]
and by (P.3) with $q=2$, we obtain that
(69)
\[ {\big(\| {S_{n}}{\| _{2}}-{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| }_{2}\big)^{-{p^{\prime }}}}\leqslant {\big(\| {S_{n}}{\| _{2}}-29{(\log 2)^{-1}}{m^{-\alpha }}\| {b_{n}}{\| _{{\ell ^{2}}}}{C_{2}}(\alpha )\big)^{-{p^{\prime }}}}.\]
By (15), we have
(70)
\[ \frac{\| {S_{n}}{\| _{2}^{2}}}{\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}={\sigma ^{2}}+{\varepsilon _{n}},\]
and we eventually get
\[\begin{array}{c}\displaystyle (I)\leqslant 150{(20m+21)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)^{{p^{\prime }}}}\\ {} \displaystyle \cdot {\big(\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{m^{-\alpha }}{C_{2}}(\alpha )\big)^{-{p^{\prime }}}}.\end{array}\]
Since $n\geqslant {n_{0}}$, we derive, in view of (17),
(71)
\[ (I)\leqslant 150{(20m+21)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)^{{p^{\prime }}}}{(\sigma /2)^{-{p^{\prime }}}}.\]
In order to bound $(II)$, we argue as in [28] (p. 456). Performing computations similar to those in [9] (p. 272), we obtain that
(72)
\[ (II)\leqslant {(2\pi e)^{-1/2}}{\Big(\underset{k\geqslant 1}{\inf }{a_{k}}\Big)^{-1}}\big|{a_{n}^{2}}-1\big|,\]
where ${a_{n}}:=\| {S_{n}^{(m)}}{\| _{2}}{\sigma ^{-1}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}$. Observe that for any n, by (P.3),
(73)
\[ {a_{n}}\geqslant \frac{\| {S_{n}}{\| _{2}}-\| {S_{n}}-{S_{n}^{(m)}}{\| _{2}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\geqslant \frac{\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){m^{-\alpha }}}{\sigma }\]
and using again (P.3) combined with Theorem 1 for $p=q=2$,
(74)
\[\begin{aligned}{}\big|{a_{n}^{2}}-1\big|& =\bigg|\frac{\| {S_{n}^{(m)}}{\| _{2}^{2}}}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}-1\bigg|\end{aligned}\]
(75)
\[\begin{aligned}{}& \leqslant \bigg|\frac{\| {S_{n}}{\| _{2}^{2}}}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}-1\bigg|+\frac{|\| {S_{n}^{(m)}}{\| _{2}^{2}}-\| {S_{n}}{\| _{2}^{2}}|}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}\end{aligned}\]
(76)
\[\begin{aligned}{}& \leqslant \frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+\frac{|\| {S_{n}^{(m)}}{\| _{2}}-\| {S_{n}}{\| _{2}}|(\| {S_{n}^{(m)}}{\| _{2}}+\| {S_{n}}{\| _{2}})}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}\end{aligned}\]
(77)
\[\begin{aligned}{}& \leqslant \frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+\frac{\| {S_{n}^{(m)}}-{S_{n}}{\| _{2}}(\| {S_{n}^{(m)}}{\| _{2}}+\| {S_{n}}{\| _{2}})}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}\end{aligned}\]
(78)
\[\begin{aligned}{}& \leqslant \frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+40{(\log 2)^{-1}}\frac{{m^{-\alpha }}}{{\sigma ^{2}}}{C_{2}}{(\alpha )^{2}}.\end{aligned}\]
This leads to the estimate
(79)
\[ (II)\leqslant \frac{{(2\pi e)^{-1/2}}}{\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){m^{-\alpha }}}\bigg(\frac{|{\varepsilon _{n}}|}{\sigma }+40{(\log 2)^{-1}}\frac{{m^{-\alpha }}}{\sigma }{C_{2}}{(\alpha )^{2}}\bigg),\]
and since $n\geqslant {n_{0}}$, we derive, in view of (17),
(80)
\[ (II)\leqslant \bigg(2\frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+80{(\log 2)^{-1}}\frac{\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}}{{\sigma ^{2}}}{C_{2}}{(\alpha )^{2}}\bigg){(2\pi e)^{-1/2}}.\]
The estimate of $(III)$ relies on (P.3):
(81)
\[\begin{array}{cc}& \displaystyle (III)\leqslant {\sigma ^{-\frac{p}{p+1}}}{\bigg(\frac{14.5p}{\log p}\sum \limits_{j\geqslant m}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\sigma ^{-\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{p}}}^{\frac{p}{p+1}}}{\bigg(\frac{14.5p}{\log p}\sum \limits_{j\geqslant m}{(4j+4)^{d(1-1/p)}}\| {X_{\mathbf{0},j}}{\| _{p}}\bigg)^{\frac{p}{p+1}}}\end{array}\]
hence
(82)
\[\begin{array}{cc}& \displaystyle (III)\leqslant {\bigg(\frac{14.5p}{\sigma \log p}{4^{d/2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}{C_{2}}(\alpha )\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{p}}}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\frac{14.5p}{\log p}{4^{d(1-1/p)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \beta }}{C_{p}}(\beta )\bigg)^{\frac{p}{p+1}}}.\end{array}\]
The combination of (64), (71), (80) and (82) gives (18).

2.3 Proof of Theorem 3

Since the random variables ${X_{\boldsymbol{i}}}$ are centered, we derive by definition of ${g_{n}}(\boldsymbol{x})$ that
(83)
\[ {(n{h_{n}})^{d/2}}\big({g_{n}}(\boldsymbol{x})-\mathbb{E}\big[{g_{n}}(\boldsymbol{x})\big]\big)={(n{h_{n}})^{d/2}}\frac{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}{X_{\boldsymbol{i}}}K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})}{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})}.\]
We define
(84)
\[ {b_{n,\boldsymbol{i}}}=K\bigg(\frac{1}{{h_{n}}}\bigg(\boldsymbol{x}-\frac{\boldsymbol{i}}{n}\bigg)\bigg),\hspace{1em}\boldsymbol{i}\in {\varLambda _{n}},\]
and ${b_{n,\boldsymbol{i}}}=0$ otherwise. Denote ${b_{n}}={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and $\| {b_{n}}{\| _{{\ell ^{2}}}}:={({\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}^{2}})^{1/2}}$. In this way, by (83) and (28),
(85)
\[ \frac{1}{\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}}\sigma }{(n{h_{n}})^{d/2}}\big({g_{n}}(\boldsymbol{x})-\mathbb{E}\big[{g_{n}}(\boldsymbol{x})\big]\big)=\frac{1}{\sigma }\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{A_{n}}.\]
Applying (T.2) to
(86)
\[ Z=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}\hspace{1em}\hspace{2.5pt}\text{and}\hspace{2.5pt}\hspace{1em}{Z^{\prime }}=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{\sigma ^{-1}}({A_{n}}-1)\]
and using Theorem 1, we derive that
(87)
\[ \widetilde{{\varDelta _{n}}}\leqslant {c_{p}}{\varDelta ^{\prime }_{n}}+{c_{p}}{\big({\sigma ^{-1}}{C_{2}}(\alpha )+{C_{p}}(\beta )\big)^{\frac{p}{p+1}}}|{A_{n}}-1{|^{\frac{p}{p+1}}},\]
where
(88)
\[ {\varDelta ^{\prime }_{n}}=\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\{Z\leqslant t\}-\varPhi \bigg(\frac{t}{\sigma }\bigg)\bigg|.\]
We then use Theorem 2 to handle ${\varDelta ^{\prime }_{n}}$ (which is allowed by assumption (A)). Using the boundedness of K, we control the ${\ell ^{p}}$ and ${\ell ^{{p^{\prime }}}}$ norms of ${b_{n}}$ by a constant times the ${\ell ^{2}}$-norm. This ends the proof of Theorem 3.

Acknowledgments

The author would like to thank the referees for many suggestions which improved the presentation of the paper.

References

[1] 
Bentkus, V., Sunklodas, J.K.: On normal approximations to strongly mixing random fields. Publ. Math. (Debr.) 70(3-4), 253–270 (2007). MR2310650
[2] 
Berry, A.C.: The accuracy of the Gaussian approximation to the sum of independent variates. Trans. Am. Math. Soc. 49, 122–136 (1941). MR0003498. https://doi.org/10.2307/1990053
[3] 
Biermé, H., Durieu, O.: Invariance principles for self-similar set-indexed random fields. Trans. Am. Math. Soc. 366(11), 5963–5989 (2014). MR3256190. https://doi.org/10.1090/S0002-9947-2014-06135-7
[4] 
Bulinski, A.V.: On the convergence rates in the CLT for positively and negatively dependent random fields. In: Probability Theory and Mathematical Statistics (St. Petersburg, 1993), pp. 3–14. Gordon and Breach, Amsterdam (1996). MR1661688
[5] 
Bulinski, A., Kryzhanovskaya, N.: Convergence rate in CLT for vector-valued random fields with self-normalization. Probab. Math. Stat. 26(2), 261–281 (2006). MR2325308
[6] 
Bulinskii, A.V., Doukhan, P.: Vitesse de convergence dans le théorème de limite centrale pour des champs mélangeants satisfaisant des hypothèses de moment faibles. C. R. Acad. Sci. Paris Sér. I Math. 311(12), 801–805 (1990). MR1082637
[7] 
Chen, L.H.Y., Shao, Q.-M.: Normal approximation under local dependence. Ann. Probab. 32(3A), 1985–2028 (2004). MR2073183. https://doi.org/10.1214/009117904000000450
[8] 
El Machkouri, M., Ouchti, L.: Exact convergence rates in the central limit theorem for a class of martingales. Bernoulli 13(4), 981–999 (2007). MR2364223. https://doi.org/10.3150/07-BEJ6116
[9] 
El Machkouri, M.: Kernel density estimation for stationary random fields. ALEA Lat. Am. J. Probab. Math. Stat. 11(1), 259–279 (2014). MR3225977
[10] 
El Machkouri, M., Stoica, R.: Asymptotic normality of kernel estimates in a regression model for random fields. J. Nonparametr. Stat. 22(8), 955–971 (2010). MR2738877. https://doi.org/10.1080/10485250903505893
[11] 
El Machkouri, M., Volný, D., Wu, W.B.: A central limit theorem for stationary random fields. Stoch. Process. Appl. 123(1), 1–14 (2013). MR2988107. https://doi.org/10.1016/j.spa.2012.08.014
[12] 
Esseen, C.-G.: On the Liapounoff limit of error in the theory of probability. Ark. Mat. Astron. Fys. 28A(9), 19 (1942). MR0011909
[13] 
Jirak, M.: Berry-Esseen theorems under weak dependence. Ann. Probab. 44(3), 2024–2063 (2016). MR3502600. https://doi.org/10.1214/15-AOP1017
[14] 
Johnson, W.B., Schechtman, G., Zinn, J.: Best constants in moment inequalities for linear combinations of independent and exchangeable random variables. Ann. Probab. 13(1), 234–253 (1985). MR0770640
[15] 
Klicnarová, J., Volný, D., Wang, Y.: Limit theorems for weighted Bernoulli random fields under Hannan’s condition. Stoch. Process. Appl. 126(6), 1819–1838 (2016). MR3483738. https://doi.org/10.1016/j.spa.2015.12.006
[16] 
Liu, W., Xiao, H., Wu, W.B.: Probability and moment inequalities under dependence. Stat. Sin. 23(3), 1257–1272 (2013). MR3114713
[17] 
Merlevède, F., Peligrad, M.: Rosenthal-type inequalities for the maximum of partial sums of stationary processes and examples. Ann. Probab. 41(2), 914–960 (2013). MR3077530. https://doi.org/10.1214/11-AOP694
[18] 
Mielkaitis, E., Paulauskas, V.: Rates of convergence in the CLT for linear random fields. Lith. Math. J. 51(2), 233–250 (2011). MR2805741. https://doi.org/10.1007/s10986-011-9122-8
[19] 
Nakhapetyan, B.S., Petrosyan, A.N.: On the rate of convergence in the central limit theorem for martingale-difference random fields. Izv. Nats. Akad. Nauk Armenii Mat. 39(2), 59–68 (2004). MR2167826
[20] 
Pavlenko, D.A.: An estimate for the rate of convergence in the central limit theorem for a class of random fields. Vestnik Moskov. Univ. Ser. I Mat. Mekh. 3, 85–87 (1993). MR1355526
[21] 
Peligrad, M., Utev, S., Wu, W.B.: A maximal ${\mathbb{L}_{p}}$-inequality for stationary sequences and its applications. Proc. Am. Math. Soc. 135(2), 541–550 (2007). MR2255301. https://doi.org/10.1090/S0002-9939-06-08488-7
[22] 
Rio, E.: Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants. Mathématiques & Applications (Berlin) [Mathematics & Applications], vol. 31, p. 169. Springer, Berlin (2000). MR2117923
[23] 
Rio, E.: Moment inequalities for sums of dependent random variables under projective conditions. J. Theor. Probab. 22(1), 146–163 (2009). MR2472010. https://doi.org/10.1007/s10959-008-0155-9
[24] 
Rosenthal, H.P.: On the subspaces of ${L^{p}}$ $(p\mathrm{>}2)$ spanned by sequences of independent random variables. Isr. J. Math. 8, 273–303 (1970). MR0271721. https://doi.org/10.1007/BF02771562
[25] 
Shao, Q.M.: Maximal inequalities for partial sums of ρ-mixing sequences. Ann. Probab. 23(2), 948–965 (1995). MR1334179
[26] 
Truquet, L.: A moment inequality of the Marcinkiewicz-Zygmund type for some weakly dependent random fields. Stat. Probab. Lett. 80(21–22), 1673–1679 (2010). MR2684016. https://doi.org/10.1016/j.spl.2010.07.011
[27] 
Wu, W.B.: Nonlinear system theory: another look at dependence. Proc. Natl. Acad. Sci. USA 102(40), 14150–14154 (2005). MR2172215. https://doi.org/10.1073/pnas.0506715102
[28] 
Yang, W., Wang, X., Li, X., Hu, S.: Berry-Esséen bound of sample quantiles for ϕ-mixing random variables. J. Math. Anal. Appl. 388(1), 451–462 (2012). MR2869760. https://doi.org/10.1016/j.jmaa.2011.10.058