1 Introduction and main results
1.1 Goal of the paper
In its simplest form, the central limit theorem states that if ${({X_{i}})}_{i\geqslant 1}$ is an independent identically distributed (i.i.d.) sequence of centered random variables having variance one, then the sequence ${({n^{-1/2}}{\sum _{i=1}^{n}}{X_{i}})}_{n\geqslant 1}$ converges in distribution to a standard normal random variable. If ${X_{1}}$ has a finite moment of order three, Berry [2] and Esseen [12] gave the following convergence rate:
(1)
\[ \underset{t\in \mathbb{R}}{\sup }\Bigg|\mathbb{P}\Bigg\{{n^{-1/2}}{\sum \limits_{i=1}^{n}}{X_{i}}\leqslant t\Bigg\}-\mathbb{P}\{N\leqslant t\}\Bigg|\leqslant C\mathbb{E}\big[|{X_{1}}{|^{3}}\big]{n^{-1/2}},\]
where C is a numerical constant and N has the standard normal distribution. The question of extending this result to larger classes of sequences has received a lot of attention. When ${X_{i}}$ can be represented as a function of an i.i.d. sequence, optimal convergence rates are given in [13].
In this paper, we focus on random fields, that is, collections of random variables indexed by ${\mathbb{Z}^{d}}$, and more precisely on Bernoulli random fields, which are defined as follows.
Definition 1.
Let $d\geqslant 1$ be an integer. The random field ${({X_{\boldsymbol{n}}})}_{\boldsymbol{n}\in {\mathbb{Z}^{d}}}$ is said to be Bernoulli if there exist an i.i.d. random field ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and a measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{n}}}=f({({\varepsilon _{\boldsymbol{n}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ for each $\boldsymbol{n}\in {\mathbb{Z}^{d}}$.
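As a concrete numerical illustration of Definition 1 (ours, not taken from the paper), one can take $d=2$ and a function f depending on finitely many coordinates, here a $3\times 3$ moving average:

```python
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. innovations; in Definition 1 the field (eps_i) extends over all
# of Z^2, we only materialize a finite patch of it.
eps = rng.standard_normal((40, 40))

# f depends on finitely many coordinates: X_n = f((eps_{n-i})_i) with
# f((x_i)_i) = (1/9) * sum_{|i_1|, |i_2| <= 1} x_i.
def X(n1, n2):
    window = eps[n1 - 1:n1 + 2, n2 - 1:n2 + 2]
    return float(window.mean())

# Every X(n1, n2) is the same function f applied to translated
# innovations, so the field is strictly stationary by construction.
value = X(10, 10)
```

Any measurable f of the translated innovations would do; the moving average is merely the simplest choice with finite dependence range.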
We are interested in the asymptotic behavior of the sequence ${({S_{n}})}_{n\geqslant 1}$ defined by
(2)
\[ {S_{n}}:=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}},\]
where ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$. Under appropriate conditions on the dependence of the random field ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and on the sequence of weights ${({b_{n}})}_{n\geqslant 1}$, to be specified later, the sequence ${({S_{n}}/\| {b_{n}}{\| _{2}})}_{n\geqslant 1}$ converges in law to a normal distribution [15]. The goal of this paper is to provide Berry–Esseen type bounds which give convergence rates in this central limit theorem.
This type of question has been addressed for the so-called $\operatorname{BL}(\theta )$-dependent random fields [5], martingale difference random fields [19], positively and negatively dependent random fields [4, 20] and mixing random fields [1, 6].
In order to establish results of this kind, we need several ingredients. First, we need convergence rates for m-dependent random fields. Second, a Bernoulli random field can be decomposed as the sum of an m-dependent random field and a remainder; the contribution of the remainder is controlled by a moment inequality in the spirit of Rosenthal’s inequality [24]. One of the main applications of such an inequality is the estimation of convergence rates in the central limit theorem for random fields that can be expressed as functionals of an i.i.d. random field. The method consists in approximating the considered random field by an m-dependent one and controlling the approximation error with the help of the established moment inequality. In the one-dimensional case, probability and moment inequalities have been established in [16] for maxima of partial sums of Bernoulli sequences, and the techniques used therein make it possible to derive results for weighted sums of such sequences.
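For a linear field, the m-dependent approximation and the size of its remainder can be made fully explicit. The following sketch (our illustration, with a hypothetical geometric coefficient sequence) computes the ${\mathbb{L}^{2}}$-norm of the remainder:

```python
import numpy as np

# For the one-dimensional linear field X_0 = sum_{j>=0} a_j eps_{-j}
# with a_j = 2^{-j} and i.i.d. standard Gaussian innovations,
# E[X_0 | eps_u, |u| <= m] keeps exactly the terms j <= m, so the
# remainder X_0 - E[X_0 | eps_u, |u| <= m] has L^2-norm
# (sum_{j>m} a_j^2)^{1/2} = 2^{-m} / sqrt(3).
def remainder_l2(m, cutoff=200):
    j = np.arange(m + 1, cutoff)
    return float(np.sqrt(np.sum(4.0 ** -j)))  # a_j^2 = 4^{-j}

# The remainder decays geometrically in m.
norms = [remainder_l2(m) for m in range(6)]
```

This is the quantity the moment inequality has to control when the field is replaced by its m-dependent approximation.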
The paper is organized as follows. In Subsection 1.2, we give the material which is necessary to understand the moment inequality stated in Theorem 1. We then give the results on convergence rates in Subsection 1.3 (for weighted sums, sums on subsets of ${\mathbb{Z}^{d}}$ and in a regression model) and compare the obtained results in the case of linear random fields with some existing ones. Section 2 is devoted to the proofs.
1.2 Background
The following version of Rosenthal’s inequality is due to Johnson, Schechtman and Zinn [14]: if ${({Y_{i}})_{i=1}^{n}}$ are independent centered random variables with a finite moment of order $p\geqslant 2$, then
(3)
\[ {\Bigg\| {\sum \limits_{i=1}^{n}}{Y_{i}}\Bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\Bigg({\Bigg({\sum \limits_{i=1}^{n}}\| {Y_{i}}{\| _{2}^{2}}\Bigg)^{1/2}}+{\Bigg({\sum \limits_{i=1}^{n}}\| {Y_{i}}{\| _{p}^{p}}\Bigg)^{1/p}}\Bigg),\]
where $\| Y{\| _{q}}:={(\mathbb{E}[|Y{|^{q}}])^{1/q}}$ for $q\geqslant 1$. This inequality was first established, without an explicit constant, in Theorem 3 of [24].
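As a sanity check of (3) (ours, not from the paper), both sides can be evaluated in closed form for standard Gaussian summands and $p=4$, using $\mathbb{E}|Z{|^{4}}=3{\sigma ^{4}}$ for $Z\sim \mathcal{N}(0,{\sigma ^{2}})$:

```python
import math

# Left-hand side of (3) for Y_1, ..., Y_n i.i.d. N(0, 1) and p = 4:
# the sum is N(0, n), and E|N(0, n)|^4 = 3 n^2.
def lhs(n):
    return (3.0 * n * n) ** 0.25

# Right-hand side of (3): constant 14.5 p / log p times the two terms.
def rhs(n, p=4):
    c = 14.5 * p / math.log(p)
    term2 = n ** 0.5              # (sum_i ||Y_i||_2^2)^{1/2} = sqrt(n)
    termp = (3.0 * n) ** (1 / p)  # (sum_i E|Y_i|^4)^{1/4}
    return c * (term2 + termp)

checks = [lhs(n) <= rhs(n) for n in (1, 10, 100, 10_000)]
```

Both sides grow like $\sqrt{n}$, so the inequality captures the correct order of magnitude, with room to spare coming from the explicit constant.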
Various extensions of Rosenthal-type inequalities have been obtained under mixing conditions [25, 22] or projective conditions [21, 23, 17]. We are interested in extensions of (3) to the setting of dependent random fields.
Throughout the paper, we shall use the following notations.
-
(N.1) For a positive integer d, the set $\{1,\dots ,d\}$ is denoted by $[d]$.
-
(N.2) The coordinatewise order is denoted by ≼, that is, for $\boldsymbol{i}={({i_{q}})_{q=1}^{d}}\in {\mathbb{Z}^{d}}$ and $\boldsymbol{j}={({j_{q}})_{q=1}^{d}}\in {\mathbb{Z}^{d}}$, $\boldsymbol{i}\preccurlyeq \boldsymbol{j}$ means that ${i_{k}}\leqslant {j_{k}}$ for any $k\in [d]$.
-
(N.3) For $q\in [d]$, ${\boldsymbol{e}_{\boldsymbol{q}}}$ denotes the element of ${\mathbb{Z}^{d}}$ whose qth coordinate is 1 and all the others are zero. Moreover, we write $\mathbf{0}=(0,\dots ,0)$ and $\mathbf{1}=(1,\dots ,1)$.
-
(N.4) For $\boldsymbol{n}={({n_{k}})_{k=1}^{d}}\in {\mathbb{N}^{d}}$, we write the product ${\prod _{k=1}^{d}}{n_{k}}$ as $|\boldsymbol{n}|$.
-
(N.5) The cardinality of a set I is denoted by $|I|$.
-
(N.6) For a real number x, we denote by $[x]$ the unique integer such that $[x]\leqslant x\mathrm{<}[x]+1$.
-
(N.7) We write Φ for the cumulative distribution function of the standard normal law.
-
(N.8) If Λ is a subset of ${\mathbb{Z}^{d}}$ and $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, then $\varLambda -\boldsymbol{k}$ is defined as $\{\boldsymbol{l}-\boldsymbol{k},\boldsymbol{l}\in \varLambda \}$.
-
(N.9) For $q\geqslant 1$, we denote by ${\ell ^{q}}({\mathbb{Z}^{d}})$ the space of sequences $\boldsymbol{a}:={({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ such that $\| \boldsymbol{a}{\| _{{\ell ^{q}}}}:={({\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|{a_{\boldsymbol{i}}}{|^{q}})^{1/q}}\mathrm{<}+\infty $.
-
(N.10) For $\boldsymbol{i}={({i_{q}})_{q=1}^{d}}$, the quantity $\| \boldsymbol{i}{\| _{\infty }}$ is defined as ${\max _{1\leqslant q\leqslant d}}|{i_{q}}|$.
Let ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a random field. The sum ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{Y_{\boldsymbol{i}}}$ is understood as the ${\mathbb{L}^{1}}$-limit of the sequence ${({S_{k}})}_{k\geqslant 1}$ where ${S_{k}}={\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}},\| \boldsymbol{i}{\| _{\infty }}\leqslant k}}{Y_{\boldsymbol{i}}}$.
Following [27] we define the physical dependence measure.
Definition 2.
Let ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}:={(f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a Bernoulli random field, $p\geqslant 1$ and ${({\varepsilon ^{\prime }_{\boldsymbol{u}}})_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}}$ be an i.i.d. random field which is independent of the i.i.d. random field ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$ and has the same distribution as ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$. For $\boldsymbol{i}\in {\mathbb{Z}^{d}}$, we introduce the physical dependence measure
where ${X_{\boldsymbol{i}}^{\ast }}=f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{j}}^{\ast }})_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}})$ and ${\varepsilon _{\boldsymbol{u}}^{\ast }}={\varepsilon _{\boldsymbol{u}}}$ if $\boldsymbol{u}\ne \mathbf{0}$, ${\varepsilon _{\mathbf{0}}^{\ast }}={\varepsilon ^{\prime }_{\mathbf{0}}}$.
(4)
\[ {\delta _{\boldsymbol{i},p}}:={\big\| {X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{\ast }}\big\| }_{p},\]In [11, 3], various examples of Bernoulli random fields are given, for which the physical dependence measure is either computed or estimated. Proposition 1 of [11] also gives the following moment inequality: if Γ is a finite subset of ${\mathbb{Z}^{d}}$, ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in \varGamma }$ is a family of real numbers and $p\geqslant 2$, then for any Bernoulli random field ${({X_{\boldsymbol{n}}})}_{\boldsymbol{n}\in {\mathbb{Z}^{d}}}$,
(5)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant {\bigg(2p\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\cdot \sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}{\delta _{\boldsymbol{j},p}}.\]
This was used in [11, 3] in order to establish functional central limit theorems. Truquet [26] also obtained an inequality in this spirit. If ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is an i.i.d. centered random field, (5) would give
(6)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\| {X_{\mathbf{1}}}{\| _{p}},\]
while Rosenthal’s inequality (3) would give
(7)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\| {X_{\mathbf{1}}}{\| _{2}}+C{\bigg(\sum \limits_{\boldsymbol{i}\in \varGamma }|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}\| {X_{\mathbf{1}}}{\| _{p}},\]
which is a better result in this context. In the case of linear processes, the inequality ${\delta _{\boldsymbol{j},p}}\leqslant K{\delta _{\boldsymbol{j},2}}$ holds for a constant K which does not depend on $\boldsymbol{j}$. However, there are processes for which such an inequality does not hold.
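To quantify how much (7) can improve on (6), here is a back-of-the-envelope comparison (ours, with hypothetical values for the norms and the constants dropped), for flat weights ${a_{\boldsymbol{i}}}=1$ on n sites:

```python
import math

# With a_i = 1 on n sites, (6) scales like sqrt(n) * ||X||_p while (7)
# scales like sqrt(n) * ||X||_2 + n^{1/p} * ||X||_p; when ||X||_p is
# much larger than ||X||_2, the second bound is far smaller.
def bound_6(n, norm_p):
    return math.sqrt(n) * norm_p

def bound_7(n, norm_2, norm_p, p):
    return math.sqrt(n) * norm_2 + n ** (1.0 / p) * norm_p

n, p = 10_000, 4
ratio = bound_7(n, 1.0, 50.0, p) / bound_6(n, 50.0)
```

With these hypothetical values the two-term bound is roughly eight times smaller, and the gap widens as n grows.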
Example 1.
We give an example of a random field for which there is no constant K such that ${\delta _{\boldsymbol{j},p}}\leqslant K{\delta _{\boldsymbol{j},2}}$ holds for all $\boldsymbol{j}\in {\mathbb{Z}^{d}}$. Let $p\geqslant 2$, let ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an i.i.d. random field and, for each $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, let ${f_{\boldsymbol{k}}}:\mathbb{R}\to \mathbb{R}$ be a function such that the random variable ${Z_{\boldsymbol{k}}}:={f_{\boldsymbol{k}}}({\varepsilon _{\mathbf{0}}})$ is centered, has a finite moment of order p, and ${\sum _{\boldsymbol{k}\in {\mathbb{Z}^{d}}}}\| {Z_{\boldsymbol{k}}}{\| _{2}^{2}}\mathrm{<}+\infty $. Define ${X_{\boldsymbol{n}}}:={\lim \nolimits_{N\to +\infty }}{\sum _{-N\mathbf{1}\preccurlyeq \boldsymbol{k}\preccurlyeq N\mathbf{1}}}{f_{\boldsymbol{k}}}({\varepsilon _{\boldsymbol{n}-\boldsymbol{k}}})$, where the limit is taken in ${\mathbb{L}^{2}}$. Then ${X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{\ast }}={f_{\boldsymbol{i}}}({\varepsilon _{\mathbf{0}}})-{f_{\boldsymbol{i}}}({\varepsilon ^{\prime }_{\mathbf{0}}})$, hence ${\delta _{\boldsymbol{i},2}}$ is of order $\| {Z_{\boldsymbol{i}}}{\| _{2}}$ while ${\delta _{\boldsymbol{i},p}}$ is of order $\| {Z_{\boldsymbol{i}}}{\| _{p}}$.
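To make Example 1 fully explicit, here is one admissible choice (our illustration; the sequences ${({a_{\boldsymbol{k}}})}$ and ${({t_{\boldsymbol{k}}})}$ are hypothetical): take ${\varepsilon _{\mathbf{0}}}$ uniform on $[0,1]$ and
\[ {f_{\boldsymbol{k}}}(x):={a_{\boldsymbol{k}}}\big({\mathbf{1}_{\{x\leqslant {t_{\boldsymbol{k}}}\}}}-{t_{\boldsymbol{k}}}\big),\hspace{1em}{t_{\boldsymbol{k}}}\to 0.\]
Then $\mathbb{E}[|{Z_{\boldsymbol{k}}}{|^{q}}]={a_{\boldsymbol{k}}^{q}}({t_{\boldsymbol{k}}}{(1-{t_{\boldsymbol{k}}})^{q}}+(1-{t_{\boldsymbol{k}}}){t_{\boldsymbol{k}}^{q}})\sim {a_{\boldsymbol{k}}^{q}}{t_{\boldsymbol{k}}}$ as ${t_{\boldsymbol{k}}}\to 0$, so $\| {Z_{\boldsymbol{k}}}{\| _{p}}/\| {Z_{\boldsymbol{k}}}{\| _{2}}\sim {t_{\boldsymbol{k}}^{1/p-1/2}}\to +\infty $ and no constant K as above can exist; choosing ${a_{\boldsymbol{k}}}$ so that ${\sum _{\boldsymbol{k}}}{a_{\boldsymbol{k}}^{2}}{t_{\boldsymbol{k}}}\mathrm{<}+\infty $ keeps ${\sum _{\boldsymbol{k}}}\| {Z_{\boldsymbol{k}}}{\| _{2}^{2}}$ finite.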
Consequently, a bound involving the ${\ell ^{p}}$-norm of the coefficients ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in \varGamma }$, rather than only their ${\ell ^{2}}$-norm, is more suitable.
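As a numerical companion to Definition 2 (our sketch, for a hypothetical one-dimensional linear field): resampling ${\varepsilon _{\mathbf{0}}}$ changes only one term, so ${\delta _{\boldsymbol{i},p}}$ is known in closed form and can be checked by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional linear field X_i = sum_{j>=0} a_j eps_{i-j}, a_j = 2^{-j}.
# Replacing eps_0 by an independent copy eps'_0 changes X_i by
# a_i * (eps_0 - eps'_0), so delta_{i,p} = |a_i| * ||eps_0 - eps'_0||_p;
# for standard Gaussian innovations and p = 2 this is 2^{-i} * sqrt(2).
def delta_mc(i, p, n_samples=400_000):
    e = rng.standard_normal(n_samples)        # eps_0
    e_prime = rng.standard_normal(n_samples)  # eps'_0
    diff = 2.0 ** -i * (e - e_prime)          # X_i - X_i^*
    return float(np.mean(np.abs(diff) ** p) ** (1.0 / p))

estimate = delta_mc(3, 2)
exact = 2.0 ** -3 * np.sqrt(2.0)
```

For this field the coefficients decay geometrically in i, which is the summability the moment inequalities below exploit.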
1.3 Main results
We now give a Rosenthal-like inequality for weighted sums of Bernoulli random fields in terms of the physical dependence measure.
Theorem 1.
Let ${({\varepsilon _{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an i.i.d. random field. For any measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{j}}}:=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ has a finite moment of order $p\geqslant 2$ and is centered, and any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$,
(8)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d(1-1/p)}}\| {X_{\mathbf{0},j}}{\| _{p}},\end{array}\]
where for $j\geqslant 1$,
(9)
\[ {X_{\mathbf{0},j}}=\mathbb{E}\big[{X_{\mathbf{0}}}\mid \sigma \big\{{\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j\big\}\big]-\mathbb{E}\big[{X_{\mathbf{0}}}\mid \sigma \big\{{\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1\big\}\big]\]
and ${X_{\mathbf{0},0}}=\mathbb{E}[{X_{\mathbf{0}}}\mid \sigma \{{\varepsilon _{\mathbf{0}}}\}]$.
We can formulate a version of inequality (8) where the right-hand side is expressed in terms of the coefficients of the physical dependence measure. The obtained result is not directly comparable to (5) because of the presence of the ${\ell ^{p}}$-norm of the coefficients.
Corollary 1.
Let $\{{\varepsilon _{\boldsymbol{i}}},\boldsymbol{i}\in {\mathbb{Z}^{d}}\}$ be an i.i.d. set of random variables. Then for any measurable function $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ such that ${X_{\boldsymbol{j}}}:=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ has a finite moment of order $p\geqslant 2$ and is centered, and any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$,
(10)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \sqrt{2}\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d/2}}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},2}^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\sqrt{2}\frac{14.5p}{\log p}\sqrt{p-1}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}{\sum \limits_{j=0}^{+\infty }}{(4j+4)^{d(1-1/p)}}{\bigg(\sum \limits_{\| \boldsymbol{i}{\| _{\infty }}=j}{\delta _{\boldsymbol{i},p}^{2}}\bigg)^{1/2}}.\end{array}\]Let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}=f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}})$ be a centered square integrable Bernoulli random field and for any positive integer n, let ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$. We are interested in the asymptotic behavior of the sequence ${({S_{n}})}_{n\geqslant 1}$ defined by
(11)
\[ {S_{n}}:=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}}.\]
For $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, let ${\tau _{\boldsymbol{k}}}:{\ell ^{2}}({\mathbb{Z}^{d}})\to {\ell ^{2}}({\mathbb{Z}^{d}})$ denote the shift map defined by ${\tau _{\boldsymbol{k}}}({({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}):={({a_{\boldsymbol{i}+\boldsymbol{k}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$.
In [15], Corollary 2.6 gives the following result: under a Hannan type condition on the random field ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and under the condition on the weights that for any $q\in [d]$,
(12)
\[ \underset{n\to +\infty }{\lim }\frac{1}{\| {b_{n}}{\| _{{\ell ^{2}}}}}{\big\| {\tau _{{\boldsymbol{e}_{\boldsymbol{q}}}}}({b_{n}})-{b_{n}}\big\| }_{{\ell ^{2}}}=0,\]
the series ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ converges, and the sequence ${({S_{n}}/\| {b_{n}}{\| _{{\ell ^{2}}}})}_{n\geqslant 1}$ converges in distribution to a centered normal distribution with variance ${\sigma ^{2}}$, where
(13)
\[ {\sigma ^{2}}=\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}}).\]
The argument relies on an approximation by an m-dependent random field.
The purpose of the next theorem is to give general rates of convergence. In order to measure them, we define
The following quantity will also play an important role in the estimation of convergence rates:
Theorem 2.
Let $p\mathrm{>}2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}={(f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order p and for any positive integer n, let ${b_{n}}:={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be an element of ${\ell ^{2}}({\mathbb{Z}^{d}})$ such that for any $n\geqslant 1$, the set $\{\boldsymbol{k}\in {\mathbb{Z}^{d}},{b_{n,\boldsymbol{k}}}\ne 0\}$ is finite and nonempty, ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ and (12) holds for any $q\in [d]$. Assume that for some positive α and β, the following series are convergent:
Let ${S_{n}}$ be defined by (11).
Assume that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that σ given by (13) is positive. Let $\gamma \mathrm{>}0$ and let
(17)
\[ {n_{0}}:=\inf \big\{N\geqslant 1\mid \forall n\geqslant N,\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){\big({\big[\| {b_{n}}{\| _{{\ell ^{2}}}}\big]^{\gamma }}\big)^{-\alpha }}\geqslant \sigma /2\big\}.\]
Then for each $n\geqslant {n_{0}}$,
(18)
\[\begin{array}{cc}& \displaystyle \widetilde{{\varDelta _{n}}}\leqslant 150{\big(29{\big(\big[\| {b_{n}}{\| _{{\ell ^{2}}}}\big]+21\big)^{\gamma }}+21\big)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)^{{p^{\prime }}}}{(\sigma /2)^{-{p^{\prime }}}}\\ {} & \displaystyle +\bigg(2\frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+80{(\log 2)^{-1}}\frac{\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}}{{\sigma ^{2}}}{C_{2}}{(\alpha )^{2}}\bigg){(2\pi e)^{-1/2}}\\ {} & \displaystyle +{\bigg(\frac{14.5p}{\sigma \log p}{4^{d/2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}{C_{2}}(\alpha )\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{p}}}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\frac{14.5p}{\log p}{4^{d(1-1/p)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \beta }}{C_{p}}(\beta )\bigg)^{\frac{p}{p+1}}}.\end{array}\]
In particular, there exists a constant κ such that for all $n\geqslant {n_{0}}$,
(19)
\[\begin{array}{cc}& \displaystyle {\varDelta _{n}}\leqslant \kappa \big(\| {b_{n}}{\| _{{\ell ^{2}}}^{\gamma ({p^{\prime }}-1)d-{p^{\prime }}}}\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}^{{p^{\prime }}}}+|{\varepsilon _{n}}|\big)\\ {} & \displaystyle +\kappa \big(\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha \frac{p}{p+1}}}+\| {b_{n}}{\| _{{\ell ^{p}}}^{\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\frac{p}{p+1}(\gamma \beta +1)}}\big).\end{array}\]
Remark 1.
If (12) holds, ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ and the family ${({\delta _{\boldsymbol{i},2}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is summable, then the sequence ${({\varepsilon _{n}})}_{n\geqslant 1}$ converges to 0, hence ${n_{0}}$ is well defined. However, it is not clear to us whether the finiteness of ${C_{2}}(\alpha )$ combined with (12) and ${\lim \nolimits_{n\to +\infty }}\| {b_{n}}{\| _{{\ell ^{2}}}}=+\infty $ implies that ${\sum _{\boldsymbol{j}\in {\mathbb{Z}^{d}}}}|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]|$ is finite. Nevertheless, we can show an analogous result in terms of the coefficients ${\delta _{\boldsymbol{i},p}}$ with the following changes in the statement of Theorem 2:
In this case, the convergence of ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ holds (cf. Proposition 2 in [11]).
Recall the notation (N.8). Let ${({\varLambda _{n}})}_{n\geqslant 1}$ be a sequence of subsets of ${\mathbb{Z}^{d}}$. The choice ${b_{n,\boldsymbol{j}}}=1$ if $\boldsymbol{j}\in {\varLambda _{n}}$ and 0 otherwise yields the following corollary for set-indexed partial sums.
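For the boxes used below, the overlap condition $|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{k})|/|{\varLambda _{n}}|\to 1$ can be checked directly; a small numerical sketch (ours, for $d=2$):

```python
# For the boxes Lambda_n = {1,...,n}^2, the translate Lambda_n - k
# overlaps Lambda_n in a rectangle, so the ratio equals
# (n - |k_1|)(n - |k_2|) / n^2, which tends to 1 for fixed k.
def overlap_ratio(n, k1, k2):
    return max(n - abs(k1), 0) * max(n - abs(k2), 0) / n ** 2

ratios = [overlap_ratio(n, 3, 5) for n in (10, 100, 1000)]
```

This is a Folner-type condition: it holds for any sequence of boxes whose side lengths all tend to infinity, not only cubes.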
Corollary 2.
Let ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order $p\geqslant 2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({\varLambda _{n}})}_{n\geqslant 1}$ be a sequence of subsets of ${\mathbb{Z}^{d}}$ such that $|{\varLambda _{n}}|\to +\infty $ and for any $\boldsymbol{k}\in {\mathbb{Z}^{d}}$, ${\lim \nolimits_{n\to +\infty }}|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{k})|/|{\varLambda _{n}}|=1$. Assume that the series defined in (16) are convergent for some positive α and β, that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that σ defined by (13) is positive. Let $\gamma \mathrm{>}0$ and ${n_{0}}$ be defined by (17). There exists a constant κ such that for any $n\geqslant {n_{0}}$,
(22)
\[\begin{array}{cc}& \displaystyle \underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{\sum _{\boldsymbol{i}\in {\varLambda _{n}}}}{X_{\boldsymbol{i}}}}{|{\varLambda _{n}}{|^{1/2}}}\leqslant t\bigg\}-\varPhi (t/\sigma )\bigg|\\ {} & \displaystyle \leqslant \kappa \bigg(|{\varLambda _{n}}{|^{q}}+\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg|\frac{|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|}{|{\varLambda _{n}}|}-1\bigg|\bigg),\end{array}\]
where
We consider now the following regression model:
(24)
\[ {Y_{\boldsymbol{i}}}=g\bigg(\frac{\boldsymbol{i}}{n}\bigg)+{X_{\boldsymbol{i}}},\hspace{1em}\boldsymbol{i}\in {\varLambda _{n}}:={\{1,\dots ,n\}^{d}},\]
where $g:{[0,1]^{d}}\to \mathbb{R}$ is an unknown smooth function and ${({X_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is a zero mean stationary Bernoulli random field. Let K be a probability kernel defined on ${\mathbb{R}^{d}}$ and let ${({h_{n}})}_{n\geqslant 1}$ be a sequence of positive numbers which converges to zero and which satisfies
We estimate the function g by the kernel estimator ${g_{n}}$ defined by
We make the following assumptions on the regression function g and the probability kernel K:
-
(A) The probability kernel K satisfies ${\int _{{\mathbb{R}^{d}}}}K(\boldsymbol{u})\mathrm{d}\boldsymbol{u}=1$, is symmetric, non-negative and supported on ${[-1,1]^{d}}$. Moreover, there exist positive constants r, c and C such that for any $\boldsymbol{x},\boldsymbol{y}\in {[-1,1]^{d}}$, $|K(\boldsymbol{x})-K(\boldsymbol{y})|\leqslant r\| \boldsymbol{x}-\boldsymbol{y}{\| _{\infty }}$ and $c\leqslant K(\boldsymbol{x})\leqslant C$.
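The defining formula (26) for ${g_{n}}$ is not reproduced above; assuming it has the standard kernel-weighted-average form (our assumption, illustrated with a uniform kernel, which satisfies (A)), a one-dimensional sketch:

```python
import numpy as np

# Kernel-weighted average over the design points i/n, i in Lambda_n (d = 1):
#   g_n(x) = sum_i Y_i K((x - i/n)/h_n) / sum_i K((x - i/n)/h_n).
def g_hat(x, Y, h, K):
    n = len(Y)
    grid = np.arange(1, n + 1) / n
    w = K((x - grid) / h)
    return float(np.sum(w * Y) / np.sum(w))

# Uniform kernel on [-1, 1]: symmetric, integrates to 1, and bounded
# between positive constants on its support, as required by (A).
K = lambda u: np.where(np.abs(u) <= 1.0, 0.5, 0.0)

rng = np.random.default_rng(2)
n, h = 500, 0.1
g = lambda t: np.sin(2 * np.pi * t)  # hypothetical regression function
Y = g(np.arange(1, n + 1) / n) + 0.1 * rng.standard_normal(n)
est = g_hat(0.25, Y, h, K)           # g(0.25) = 1
```

The bandwidth h plays the role of ${h_{n}}$: roughly $(2nh)$ design points fall in each local average, which is the effective sample size behind the ${(n{h_{n}})^{d/2}}$ normalization below.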
We measure the rate of convergence of ${({(n{h_{n}})^{d/2}}({g_{n}}(\boldsymbol{x})-\mathbb{E}[{g_{n}}(\boldsymbol{x})]))}_{n\geqslant 1}$ to a normal distribution by the use of the quantity
(27)
\[ \widetilde{{\varDelta _{n}}}:=\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\big\{{(n{h_{n}})^{d/2}}\big({g_{n}}(\boldsymbol{x})-\mathbb{E}\big[{g_{n}}(\boldsymbol{x})\big]\big)\leqslant t\big\}-\varPhi \bigg(\frac{t}{\sigma \| K{\| _{2}}}\bigg)\bigg|.\]
Two other quantities will be involved, namely,
(28)
\[ {A_{n}}:={(n{h_{n}})^{d/2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}{K^{2}}\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\bigg)^{1/2}}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{-1}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\bigg)^{-1/2}}\]
and
(29)
\[ {\varepsilon _{n}}:=\sum \limits_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}\big|\mathbb{E}[{X_{\mathbf{0}}}{X_{\boldsymbol{j}}}]\big|\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})}\frac{K(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}})K(\frac{\boldsymbol{x}-(\boldsymbol{i}-\boldsymbol{j})/n}{{h_{n}}})}{{\sum _{\boldsymbol{k}\in {\varLambda _{n}}}}{K^{2}}(\frac{\boldsymbol{x}-\boldsymbol{k}/n}{{h_{n}}})}-1\bigg).\]
Theorem 3.
Let $p\mathrm{>}2$, ${p^{\prime }}:=\min \{p,3\}$ and let ${({X_{\boldsymbol{j}}})}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}={(f({({\varepsilon _{\boldsymbol{j}-\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}))}_{\boldsymbol{j}\in {\mathbb{Z}^{d}}}$ be a centered Bernoulli random field with a finite moment of order p. Assume that for some positive α and β, the following series are convergent:
Let ${g_{n}}(\boldsymbol{x})$ be defined by (26) and let ${({h_{n}})}_{n\geqslant 1}$ be a sequence which converges to 0 and satisfies (25).
Assume that ${\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}|\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{i}}})|$ is finite and that ${\sigma ^{2}}:={\sum _{\boldsymbol{j}\in {\mathbb{Z}^{d}}}}\operatorname{Cov}({X_{\mathbf{0}}},{X_{\boldsymbol{j}}})\mathrm{>}0$. Let ${n_{1}}\in \mathbb{N}$ be such that for each $n\geqslant {n_{1}}$,
(31)
\[ \frac{1}{2}\leqslant {(n{h_{n}})^{-d}}\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\leqslant \frac{3}{2}\]
and
(32)
\[ \frac{1}{2}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{2}}\leqslant {(n{h_{n}})^{-d}}\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}{K^{2}}\bigg(\frac{\boldsymbol{x}-\boldsymbol{i}/n}{{h_{n}}}\bigg)\leqslant \frac{3}{2}\| K{\| _{{\mathbb{L}^{2}}({\mathbb{R}^{d}})}^{2}}.\]
Let ${n_{0}}$ be the smallest integer such that for all $n\geqslant {n_{0}}$,
(33)
\[ \sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){\bigg({\bigg[{\bigg(\sum \limits_{\boldsymbol{i}\in {\varLambda _{n}}}K{\bigg(\frac{1}{{h_{n}}}\bigg(\boldsymbol{x}-\frac{\boldsymbol{i}}{n}\bigg)\bigg)^{2}}\bigg)^{1/2}}\bigg]^{\gamma }}\bigg)^{-\alpha }}\geqslant \sigma /2.\]
Then there exists a constant κ such that for each $n\geqslant \max \{{n_{0}},{n_{1}}\}$,
(34)
\[\begin{array}{cc}& \displaystyle {\varDelta _{n}}\leqslant \kappa |{A_{n}}-1{|^{\frac{p}{p+1}}}+|{\varepsilon _{n}}|+\kappa {(n{h_{n}})^{\frac{d}{2}(\gamma ({p^{\prime }}-1)d-{p^{\prime }}+2)}}\\ {} & \displaystyle +{(n{h_{n}})^{-\frac{d}{2}\gamma \alpha \frac{p}{p+1}}}+{(n{h_{n}})^{\frac{2d-p(\gamma \beta +1)}{2(p+1)}}}.\end{array}\]Lemma 1 in [10] shows that under (25), the sequence ${({A_{n}})}_{n\geqslant 1}$ goes to 1 as n goes to infinity and that the integer ${n_{1}}$ is well defined.
We now consider the case of linear random fields in dimension 2, that is,
(35)
\[ {X_{{j_{1}},{j_{2}}}}=\sum \limits_{{i_{1}},{i_{2}}\in \mathbb{Z}}{a_{{i_{1}},{i_{2}}}}{\varepsilon _{{j_{1}}-{i_{1}},{j_{2}}-{i_{2}}}},\]
where ${({a_{{i_{1}},{i_{2}}}})}_{{i_{1}},{i_{2}}\in \mathbb{Z}}\in {\ell ^{1}}({\mathbb{Z}^{2}})$ and ${({\varepsilon _{{u_{1}},{u_{2}}}})}_{{u_{1}},{u_{2}}\in \mathbb{Z}}$ is an i.i.d. centered random field such that ${\varepsilon _{0,0}}$ has a finite variance. We will focus on the case where the weights are of the form ${b_{n,{i_{1}},{i_{2}}}}=1$ if $1\leqslant {i_{1}},{i_{2}}\leqslant n$ and ${b_{n,{i_{1}},{i_{2}}}}=0$ otherwise.
Mielkaitis and Paulauskas [18] established the following convergence rate. Denoting
(36)
\[ {\varDelta ^{\prime }_{n}}:=\underset{r\geqslant 0}{\sup }\Bigg|\mathbb{P}\Bigg\{\Bigg|\frac{1}{n}{\sum \limits_{{i_{1}},{i_{2}}=1}^{n}}{X_{{i_{1}},{i_{2}}}}\Bigg|\leqslant r\Bigg\}-\mathbb{P}\big\{|N|\leqslant r\big\}\Bigg|\]
and assuming that $\mathbb{E}[|{\varepsilon _{0,0}}{|^{2+\delta }}]$ is finite and
(37)
\[ \sum \limits_{{k_{1}},{k_{2}}\in \mathbb{Z}}{\big(|{k_{1}}|+1\big)^{2}}{\big(|{k_{2}}|+1\big)^{2}}{a_{{k_{1}},{k_{2}}}^{2}}\mathrm{<}+\infty ,\]
the following estimate holds for ${\varDelta ^{\prime }_{n}}$:
(38)
\[ {\varDelta ^{\prime }_{n}}=O\big({n^{-r}}\big),\hspace{1em}r:=\frac{1}{2}\min \bigg\{\delta ,1-\frac{1}{3+\delta }\bigg\}.\]
In the context of Corollary 2, the condition on the coefficients reads as follows:
(39)
\[ {\sum \limits_{i=0}^{+\infty }}\big({i^{1+\alpha }}+{i^{2-2/p+\beta }}\big){\bigg(\sum \limits_{({j_{1}},{j_{2}}):\| ({j_{1}},{j_{2}}){\| _{\infty }}=i}{a_{{j_{1}},{j_{2}}}^{2}}\bigg)^{1/2}}\mathrm{<}+\infty ,\]
where $p=2+\delta $. Let us compare (37) with (39). Let $s:=\max \{1+\alpha ,2-2/p+\beta \}$. When $s\geqslant 2$, (39) implies (37). However, this implication does not hold if $s\mathrm{<}3/2$. Indeed, let $r\in (s+1,5/2)$ and define ${a_{{k_{1}},{k_{2}}}}:={k_{1}^{-r}}$ if ${k_{1}}={k_{2}}\geqslant 1$ and ${a_{{k_{1}},{k_{2}}}}:=0$ otherwise. Then (39) holds whereas (37) does not.
Let us discuss the convergence rates in the following example. Let ${a_{{k_{1}},{k_{2}}}}:={2^{-|{k_{1}}|-|{k_{2}}|}}$ and let $p=2+\delta $, where $\delta \in (0,1]$. In our context,
(40)
\[ \bigg|\frac{|{\varLambda _{n}}\cap ({\varLambda _{n}}-\boldsymbol{j})|}{|{\varLambda _{n}}|}-1\bigg|\leqslant \frac{{n^{2}}-(n-{j_{1}})(n-{j_{2}})}{{n^{2}}}\leqslant \frac{{j_{1}}+{j_{2}}}{n},\]
hence the convergence of ${\sum _{{j_{1}},{j_{2}}\in \mathbb{Z}}}|\operatorname{Cov}({X_{0,0}},{X_{{j_{1}},{j_{2}}}})|({j_{1}}+{j_{2}})$ guarantees that ${\varepsilon _{n}}$ in Corollary 2 is of order $1/n$. Moreover, since (39) holds for all α and β, the choice of γ allows one to reach rates of the form ${n^{-\delta +{r_{0}}}}$ for any fixed ${r_{0}}$. In particular, when $\delta =1$, one can reach rates of the form ${n^{-1+{r_{0}}}}$ for any fixed ${r_{0}}$. In comparison, under the same assumptions, the result of [18] gives ${n^{-3/8}}$.
2 Proofs
2.1 Proof of Theorem 1
We define for $j\geqslant 1$ and $\boldsymbol{i}\in {\mathbb{Z}^{d}}$,
(41)
\[ {X_{\boldsymbol{i},j}}=\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}-\boldsymbol{i}{\| _{\infty }}\leqslant j\big)\big]-\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}-\boldsymbol{i}{\| _{\infty }}\leqslant j-1\big)\big].\]
In this way, by the martingale convergence theorem,
(42)
\[ {X_{\boldsymbol{i}}}-\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]=\underset{N\to +\infty }{\lim }{\sum \limits_{j=1}^{N}}{X_{\boldsymbol{i},j}},\]
hence
(43)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant {\sum \limits_{j=1}^{+\infty }}{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}+{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]\bigg\| }_{p}.\]
Let us fix $j\geqslant 1$. We divide ${\mathbb{Z}^{d}}$ into blocks. For $\boldsymbol{v}\in {\mathbb{Z}^{d}}$, we define
(44)
\[ {A_{\boldsymbol{v}}}:={\prod \limits_{q=1}^{d}}\big(\big[(2j+2){v_{q}},(2j+2)({v_{q}}+1)-1\big]\cap \mathbb{Z}\big),\]
and if K is a subset of $[d]$, we define
(45)
\[ {E_{K}}:=\big\{\boldsymbol{v}\in {\mathbb{Z}^{d}},{v_{q}}\hspace{2.5pt}\text{is even if and only if}\hspace{2.5pt}q\in K\big\}.\]
Therefore, the following inequality holds:
(46)
\[ {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \sum \limits_{K\subset [d]}{\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}.\]
Observe that the random variable ${\sum _{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}$ is measurable with respect to the σ-algebra generated by ${\varepsilon _{\boldsymbol{u}}}$, where $\boldsymbol{u}$ satisfies $(2j+2){v_{q}}-(j+1)\leqslant {u_{q}}\leqslant j+1+(2j+2)({v_{q}}+1)-1$ for all $q\in [d]$. Since the family $\{{\varepsilon _{\boldsymbol{u}}},\boldsymbol{u}\in {\mathbb{Z}^{d}}\}$ is independent, the family $\{{\sum _{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}},\boldsymbol{v}\in {E_{K}}\}$ is independent for each fixed $K\subset [d]$. Using inequality (3), it thus follows that
By stationarity, one can see that $\| {X_{\boldsymbol{i},j}}{\| _{q}}=\| {X_{\mathbf{0},j}}{\| _{q}}$ for $q\in \{2,p\}$, hence the triangle inequality yields
By Jensen’s inequality, for $q\in \{2,p\}$,
and using ${\sum _{i=1}^{N}}{x_{i}^{1/q}}\leqslant {N^{\frac{q-1}{q}}}{({\sum _{i=1}^{N}}{x_{i}})^{1/q}}$, it follows that
(47)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg\| \sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| _{2}^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg\| \sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| _{p}^{p}}\bigg)^{1/p}}.\end{array}\](48)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{2}}\bigg)^{1/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{v}\in {E_{K}}}{\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{p}}\bigg)^{1/p}}.\end{array}\](49)
\[ {\bigg(\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}|\bigg)^{q}}\leqslant |{A_{\boldsymbol{v}}}{|^{q-1}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}{|^{q}}\leqslant {(2j+2)^{d(q-1)}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}|{a_{\boldsymbol{i}}}{|^{q}}\](50)
\[\begin{array}{cc}& \displaystyle \sum \limits_{K\subset [d]}{\bigg\| \sum \limits_{\boldsymbol{v}\in {E_{K}}}\sum \limits_{\boldsymbol{i}\in {A_{\boldsymbol{v}}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i},j}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{i}^{2}}\bigg)^{1/2}}{(4j+4)^{d/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{i}}{|^{p}}\bigg)^{1/p}}{(4j+4)^{d(1-1/p)}}.\end{array}\]Combining (43), (46) and (50), we derive that
(51)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}{X_{\boldsymbol{i}}}\bigg\| }_{p}\leqslant \frac{14.5p}{\log p}{\sum \limits_{j=1}^{+\infty }}\| {X_{\mathbf{0},j}}{\| _{2}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}{(4j+4)^{d/2}}\\ {} & \displaystyle +\frac{14.5p}{\log p}{\sum \limits_{j=1}^{+\infty }}\| {X_{\mathbf{0},j}}{\| _{p}}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{p}}\bigg)^{1/p}}{(4j+4)^{d(1-1/p)}}+{\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]\bigg\| }_{p}.\end{array}\]
In order to control the last term, we use inequality (3) and bound $\| \mathbb{E}[{X_{\boldsymbol{i}}}\mid {\varepsilon _{\boldsymbol{i}}}]{\| _{q}}$ by $\| {X_{\mathbf{0},0}}{\| _{q}}$ for $q\in \{1,2\}$. This ends the proof of Theorem 1. □
Proof of Corollary 1.
The following lemma gives a control of the ${\mathbb{L}^{q}}$-norm of ${X_{\mathbf{0},j}}$ in terms of the physical dependence measure.
Proof.
Let j be fixed. Let us write the set of elements of ${\mathbb{Z}^{d}}$ whose infinite norm is equal to j as $\{{\boldsymbol{v}_{\boldsymbol{s}}},1\leqslant s\leqslant {N_{j}}\}$, where ${N_{j}}\in \mathbb{N}$. We also assume that ${\boldsymbol{v}_{\boldsymbol{s}}}-{\boldsymbol{v}_{\boldsymbol{s}\boldsymbol{-}\mathbf{1}}}\in \{\pm {\boldsymbol{e}_{\boldsymbol{k}}},1\leqslant k\leqslant d\}$ for all $s\in \{2,\dots ,{N_{j}}\}$.
Denote
(53)
\[ {\mathcal{F}_{s}}:=\sigma \big({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1,{\varepsilon _{{\boldsymbol{v}_{\boldsymbol{t}}}}},1\leqslant t\leqslant s\big),\]
and ${\mathcal{F}_{0}}:=\sigma ({\varepsilon _{\boldsymbol{u}}},\| \boldsymbol{u}{\| _{\infty }}\leqslant j-1)$. Then ${X_{\mathbf{0},j}}={\sum _{s=1}^{{N_{j}}}}(\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}])$, from which it follows, by Theorem 2.1 in [23], that
(54)
\[ \| {X_{\mathbf{0},j}}{\| _{q}^{2}}\leqslant (q-1){\sum \limits_{s=1}^{{N_{j}}}}{\big\| \mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}]\big\| _{q}^{2}}.\]
Then arguments similar to those in the proof of Theorem 1 (i) in [27] give the bound $\| \mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s}}]-\mathbb{E}[{X_{\mathbf{0}}}\mid {\mathcal{F}_{s-1}}]{\| _{q}}\leqslant {\delta _{{\boldsymbol{v}_{\boldsymbol{s}}},q}}+{\delta _{{\boldsymbol{v}_{\boldsymbol{s}\boldsymbol{-}\mathbf{1}}},q}}$. This ends the proof of Lemma 1. □
2.2 Proof of Theorem 2
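Before the proof, a small numerical illustration (our own toy example, not part of the argument) of the normal-approximation distance $\delta (Z):={\sup _{t\in \mathbb{R}}}|\mathbb{P}\{Z\leqslant t\}-\varPhi (t)|$ that is controlled throughout this subsection: for a standardized sum of n independent Rademacher ($\pm 1$) variables, δ can be computed exactly from binomial probabilities, and the ${n^{-1/2}}$ decay predicted by the Berry–Esseen bound (1) is visible.

```python
# Exact computation of delta(Z) = sup_t |P{Z <= t} - Phi(t)| for the
# standardized sum of n Rademacher (+-1) signs; the example and the
# sample sizes are our own illustrative choices, not taken from the paper.
import math

def phi(t):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def delta_rademacher(n):
    # S_n takes the value 2k - n with probability C(n, k) / 2^n; the sup
    # over t is attained at the atoms of the distribution of S_n, so it
    # suffices to compare the two CDFs just before and at each atom.
    sup, cdf = 0.0, 0.0
    for k in range(n + 1):
        x = (2 * k - n) / math.sqrt(n)      # standardized atom
        sup = max(sup, abs(cdf - phi(x)))   # left limit of the CDF at x
        cdf += math.comb(n, k) / 2.0 ** n
        sup = max(sup, abs(cdf - phi(x)))   # value of the CDF at x
    return sup

d16, d64 = delta_rademacher(16), delta_rademacher(64)
print(d16, d64)  # roughly 0.098 and 0.050
```

Quadrupling n roughly halves δ, in line with the ${n^{-1/2}}$ rate in (1).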
Denote for a random variable Z the quantity
\[ \delta (Z):=\underset{t\in \mathbb{R}}{\sup }\big|\mathbb{P}\{Z\leqslant t\}-\varPhi (t)\big|,\]
where Φ denotes the standard normal distribution function. We say that a random field ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ is m-dependent if the collections of random variables $({Y_{\boldsymbol{i}}},\boldsymbol{i}\in A)$ and $({Y_{\boldsymbol{i}}},\boldsymbol{i}\in B)$ are independent whenever ${\min _{\boldsymbol{i}\in A,\boldsymbol{j}\in B}}\| \boldsymbol{i}-\boldsymbol{j}{\| _{\infty }}\geqslant m$.
The proof of Theorem 2 will use the following tools.
-
(T.1) By Theorem 2.6 in [7], if I is a finite subset of ${\mathbb{Z}^{d}}$, ${({Y_{\boldsymbol{i}}})}_{\boldsymbol{i}\in I}$ an m-dependent centered random field such that $\mathbb{E}[|{Y_{\boldsymbol{i}}}{|^{p}}]\mathrm{<}+\infty $ for each $\boldsymbol{i}\in I$ and some $p\in (2,3]$ and $\operatorname{Var}({\sum _{\boldsymbol{i}\in I}}{Y_{\boldsymbol{i}}})=1$, then
\[ \underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\sum \limits_{\boldsymbol{i}\in I}{Y_{\boldsymbol{i}}}\leqslant t\bigg\}-\varPhi (t)\bigg|\leqslant 75{(10m+11)^{(p-1)d}}\sum \limits_{\boldsymbol{i}\in I}\mathbb{E}\big[|{Y_{\boldsymbol{i}}}{|^{p}}\big].\]
-
(T.2) For all random variables Z and ${Z^{\prime }}$ such that $\mathbb{E}[|{Z^{\prime }}{|^{p}}]\mathrm{<}+\infty $,
\[ \delta \big(Z+{Z^{\prime }}\big)\leqslant 2\delta (Z)+{\big\| {Z^{\prime }}\big\| _{p}^{\frac{p}{p+1}}}.\]
Let ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}}$ be an i.i.d. random field and let $f:{\mathbb{R}^{{\mathbb{Z}^{d}}}}\to \mathbb{R}$ be a measurable function such that for each $\boldsymbol{i}\in {\mathbb{Z}^{d}}$, ${X_{\boldsymbol{i}}}=f({({\varepsilon _{\boldsymbol{i}-\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}})$. Let $\gamma \mathrm{>}0$ and let ${n_{0}}$ be defined by (17).
Let $m:={([\| {b_{n}}{\| _{{\ell ^{2}}}}]+1)^{\gamma }}$ and let us define
(58)
\[ {X_{\boldsymbol{i}}^{(m)}}:=\mathbb{E}\big[{X_{\boldsymbol{i}}}\mid \sigma ({\varepsilon _{\boldsymbol{u}}},\boldsymbol{i}-m\mathbf{1}\preccurlyeq \boldsymbol{u}\preccurlyeq \boldsymbol{i}+m\mathbf{1})\big].\]
Since the random field ${({\varepsilon _{\boldsymbol{u}}})}_{\boldsymbol{u}\in {\mathbb{Z}^{d}}}$ is independent, the following properties hold.
-
(P.1) The random field ${({X_{\boldsymbol{i}}^{(m)}})_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}$ is $(2m+1)$-dependent.
-
(P.2) The random field ${({X_{\boldsymbol{i}}^{(m)}})_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}$ is identically distributed and $\| {X_{\boldsymbol{i}}^{(m)}}{\| _{{p^{\prime }}}}\leqslant \| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}}$.
-
(P.3) For any ${({a_{\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}\in {\ell ^{2}}({\mathbb{Z}^{d}})$ and $q\geqslant 2$, the following inequality holds:
(59)
\[\begin{array}{cc}& \displaystyle {\bigg\| \sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}}\big({X_{\boldsymbol{i}}}-{X_{\boldsymbol{i}}^{(m)}}\big)\bigg\| }_{q}\leqslant \frac{14.5q}{\log q}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}{a_{\boldsymbol{i}}^{2}}\bigg)^{1/2}}\sum \limits_{j\geqslant m}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\\ {} & \displaystyle +\frac{14.5q}{\log q}{\bigg(\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{a_{\boldsymbol{i}}}{|^{q}}\bigg)^{1/q}}\sum \limits_{j\geqslant m}{(4j+4)^{d(1-1/q)}}\| {X_{\mathbf{0},j}}{\| _{q}}.\end{array}\]
Define ${S_{n}^{(m)}}:={\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}^{(m)}}$. An application of (T.2) to $Z:={S_{n}^{(m)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{\sigma ^{-1}}$ and ${Z^{\prime }}:=({S_{n}}-{S_{n}^{(m)}})\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}{\sigma ^{-1}}$ yields
(60)
\[ {\varDelta _{n}}\leqslant 2\delta \bigg(\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)+{\sigma ^{-\frac{p}{p+1}}}\frac{1}{\| {b_{n}}{\| _{{\ell ^{2}}}^{\frac{p}{p+1}}}}{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| _{p}^{\frac{p}{p+1}}}.\]
Moreover,
(61)
\[\begin{aligned}{}\delta \bigg(\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)& =\underset{t\in \mathbb{R}}{\sup }\bigg|\mathbb{P}\bigg\{\frac{{S_{n}^{(m)}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\leqslant t\bigg\}-\varPhi (t)\bigg|\end{aligned}\]
hence, by (P.1) and (T.1) applied with ${Y_{\boldsymbol{i}}}:={b_{n,\boldsymbol{i}}}{X_{\boldsymbol{i}}^{(m)}}/\| {S_{n}^{(m)}}{\| _{2}}$, ${p^{\prime }}$ instead of p and $2m+1$ instead of m, we derive that
(64)
\[ {\varDelta _{n}}\leqslant (I)+(II)+(III),\]
where
(65)
\[ (I):=150{(20m+21)^{({p^{\prime }}-1)d}}\sum \limits_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}|{b_{n,\boldsymbol{i}}}{|^{{p^{\prime }}}}{\big\| {X_{\boldsymbol{i}}^{(m)}}\big\| _{{p^{\prime }}}^{{p^{\prime }}}}{\big\| {S_{n}^{(m)}}\big\| _{2}^{-{p^{\prime }}}},\]
By (P.2) and the reverse triangle inequality, the term $(I)$ can be bounded as
(68)
\[ (I)\leqslant 150{(20m+21)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}^{{p^{\prime }}}}{\big(\| {S_{n}}{\| _{2}}-{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| }_{2}\big)^{-{p^{\prime }}}}\]
and by (P.3) with $q=2$, we obtain that
(69)
\[ {\big(\| {S_{n}}{\| _{2}}-{\big\| {S_{n}}-{S_{n}^{(m)}}\big\| }_{2}\big)^{-{p^{\prime }}}}\leqslant {\big(\| {S_{n}}{\| _{2}}-29{(\log 2)^{-1}}{m^{-\alpha }}\| {b_{n}}{\| _{{\ell ^{2}}}}{C_{2}}(\alpha )\big)^{-{p^{\prime }}}}.\]
By (15), we have
(70)
\[ \frac{\| {S_{n}}{\| _{2}^{2}}}{\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}={\sigma ^{2}}+{\varepsilon _{n}},\]
and we eventually get
\[\begin{array}{c}\displaystyle (I)\leqslant 150{(20m+21)^{({p^{\prime }}-1)d}}\| {X_{\mathbf{0}}}{\| _{{p^{\prime }}}^{{p^{\prime }}}}{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{{p^{\prime }}}}}}}{\| {b_{n}}{\| _{{\ell ^{2}}}}}\bigg)^{{p^{\prime }}}}\\ {} \displaystyle \cdot {\big(\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{m^{-\alpha }}{C_{2}}(\alpha )\big)^{-{p^{\prime }}}}.\end{array}\]
Since $n\geqslant {n_{0}}$, we derive, in view of (17),
In order to bound $(II)$, we argue as in [28] (p. 456). Doing computations similar to those in [9] (p. 272), we obtain that
(72)
\[ (II)\leqslant {(2\pi e)^{-1/2}}{\Big(\underset{k\geqslant 1}{\inf }{a_{k}}\Big)^{-1}}\big|{a_{n}^{2}}-1\big|,\]
where ${a_{n}}:=\| {S_{n}^{(m)}}{\| _{2}}{\sigma ^{-1}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-1}}$. Observe that for any n, by (P.3),
(73)
\[ {a_{n}}\geqslant \frac{\| {S_{n}}{\| _{2}}-\| {S_{n}}-{S_{n}^{(m)}}{\| _{2}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\geqslant \frac{\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){m^{-\alpha }}}{\sigma }\]
and using again (P.3) combined with Theorem 1 for $p=q=2$,
(74)
\[\begin{aligned}{}\big|{a_{n}^{2}}-1\big|& =\bigg|\frac{\| {S_{n}^{(m)}}{\| _{2}^{2}}}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}-1\bigg|\end{aligned}\]
(75)
\[\begin{aligned}{}& \leqslant \bigg|\frac{\| {S_{n}}{\| _{2}^{2}}}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}-1\bigg|+\frac{|\| {S_{n}^{(m)}}{\| _{2}^{2}}-\| {S_{n}}{\| _{2}^{2}}|}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}\end{aligned}\]
(76)
\[\begin{aligned}{}& \leqslant \frac{|{\varepsilon _{n}}|}{{\sigma ^{2}}}+\frac{|\| {S_{n}^{(m)}}{\| _{2}}-\| {S_{n}}{\| _{2}}|(\| {S_{n}^{(m)}}{\| _{2}}+\| {S_{n}}{\| _{2}})}{{\sigma ^{2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{2}}}\end{aligned}\]
This leads to the estimate
(79)
\[ (II)\leqslant \frac{{(2\pi e)^{-1/2}}}{\sqrt{{\sigma ^{2}}+{\varepsilon _{n}}}-29{(\log 2)^{-1}}{C_{2}}(\alpha ){m^{-\alpha }}}\bigg(\frac{|{\varepsilon _{n}}|}{\sigma }+40{(\log 2)^{-1}}\frac{{m^{-\alpha }}}{\sigma }{C_{2}}{(\alpha )^{2}}\bigg),\]
and since $n\geqslant {n_{0}}$, we derive, in view of (17),
The estimate of $(III)$ relies on (P.3):
(81)
\[\begin{array}{cc}& \displaystyle (III)\leqslant {\sigma ^{-\frac{p}{p+1}}}{\bigg(\frac{14.5p}{\log p}\sum \limits_{j\geqslant m}{(4j+4)^{d/2}}\| {X_{\mathbf{0},j}}{\| _{2}}\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\sigma ^{-\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\frac{p}{p+1}}}\| {b_{n}}{\| _{{\ell ^{p}}}^{\frac{p}{p+1}}}{\bigg(\frac{14.5p}{\log p}\sum \limits_{j\geqslant m}{(4j+4)^{d(1-1/p)}}\| {X_{\mathbf{0},j}}{\| _{p}}\bigg)^{\frac{p}{p+1}}}\end{array}\]
hence
(82)
\[\begin{array}{cc}& \displaystyle (III)\leqslant {\bigg(\frac{14.5p}{\sigma \log p}{4^{d/2}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \alpha }}{C_{2}}(\alpha )\bigg)^{\frac{p}{p+1}}}\\ {} & \displaystyle +{\bigg(\frac{\| {b_{n}}{\| _{{\ell ^{p}}}}}{\sigma \| {b_{n}}{\| _{{\ell ^{2}}}}}\frac{14.5p}{\log p}{4^{d(1-1/p)}}\| {b_{n}}{\| _{{\ell ^{2}}}^{-\gamma \beta }}{C_{p}}(\beta )\bigg)^{\frac{p}{p+1}}}.\end{array}\]
The combination of (64), (71), (80) and (82) gives (18).
2.3 Proof of Theorem 3
Since the random variables ${X_{\boldsymbol{i}}}$ are centered, we derive by definition of ${g_{n}}(\boldsymbol{x})$ that
We define
and ${b_{n,\boldsymbol{i}}}=0$ otherwise. Denote ${b_{n}}={({b_{n,\boldsymbol{i}}})}_{\boldsymbol{i}\in {\mathbb{Z}^{d}}}$ and $\| {b_{n}}{\| _{{\ell ^{2}}}}:={({\sum _{\boldsymbol{i}\in {\mathbb{Z}^{d}}}}{b_{n,\boldsymbol{i}}^{2}})^{1/2}}$. In this way, by (83) and (28),
Applying (T.2) to
and using Theorem 1, we derive that
where
We then use Theorem 2 to handle ${\varDelta ^{\prime }_{n}}$ (which is allowed, by (A)). Using the boundedness of K, we control the ${\ell ^{p}}$ and ${\ell ^{{p^{\prime }}}}$ norms of ${b_{n}}$ by a constant times the ${\ell ^{2}}$-norm. This ends the proof of Theorem 3. □
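As a closing sanity check (ours, not part of the paper's argument), two elementary counting facts used in Section 2 can be verified by brute force: the blocks ${A_{\boldsymbol{v}}}$ defined in (45) have cardinality ${(2j+2)^{d}}$, as used before (50), and the number ${N_{j}}$ of points of ${\mathbb{Z}^{d}}$ with sup-norm exactly j (the enumeration used in the proof of Lemma 1) equals ${(2j+1)^{d}}-{(2j-1)^{d}}$; this closed form is our own remark, the proof only using that ${N_{j}}$ is finite.

```python
# Brute-force verification (our own sanity check) of two counting facts
# used in Section 2 of the paper.
from itertools import product

def block_size(j, d):
    # Cardinality of A_v = prod_{q=1}^{d} ([(2j+2)v_q, (2j+2)(v_q+1)-1] ∩ Z);
    # each factor contains 2j+2 consecutive integers, independently of v.
    side = range(0, 2 * j + 2)  # the block with v = 0
    return sum(1 for _ in product(side, repeat=d))

def sphere_size(j, d):
    # Number N_j of points u in Z^d with ||u||_inf == j.
    pts = product(range(-j, j + 1), repeat=d)
    return sum(1 for u in pts if max(abs(c) for c in u) == j)

for d in (1, 2, 3):
    for j in (1, 2, 3):
        assert block_size(j, d) == (2 * j + 2) ** d
        assert sphere_size(j, d) == (2 * j + 1) ** d - (2 * j - 1) ** d
print("ok")  # all checks pass
```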