Modern Stochastics: Theory and Applications
Consistency of LSE for the many-dimensional symmetric textured surface parameters
Volume 10, Issue 3 (2023), pp. 267–285
Oleksandr Dykyi, Alexander Ivanov

https://doi.org/10.15559/23-VMSTA225
Pub. online: 16 March 2023      Type: Research Article      Open Access

Received
8 December 2022
Revised
27 February 2023
Accepted
2 March 2023
Published
16 March 2023

Abstract

A multivariate trigonometric regression model is considered. Strong consistency of the least squares estimator for the amplitudes and angular frequencies is obtained for such a multivariate model under the assumption that the random noise is a homogeneous, or homogeneous and isotropic, Gaussian (in particular, strongly dependent) random field on ${\mathbb{R}^{M}},M\ge 3$.

1 Introduction

We consider here a trigonometric regression model on ${\mathbb{R}_{+}^{M}}={\left[0,\infty \right)^{M}}$, $M\ge 3$, in which the arguments of the cosine and sine functions are linear forms whose unknown coefficients are the angular frequencies of a sum of multivariate harmonic oscillations. Moreover, the noise is a homogeneous, or homogeneous and isotropic [9, 3], Gaussian random field satisfying certain conditions on its covariance function.
In the case $M=2$ several discrete modifications of such a regression model were studied in numerous works on signal and image processing due to their applications to texture analysis. A number of useful references to publications in this area can be found in the papers [5, 4, 6]. Besides, in [5, 4] strong consistency and asymptotic normality of the least squares estimate (LSE) for the amplitudes and frequencies of a sinusoidal model on ${\mathbb{R}_{+}^{2}}$ with the Gaussian random field described above are proved. In turn, in [6] the asymptotic normality of the consistent LSE for the same trigonometric model on ${\mathbb{R}_{+}^{M}}$, $M\ge 3$, as in the present article, is obtained.
It is important to mention that in the paper [1] a multivariate harmonic oscillation observed discretely against the background of a homogeneous random field having spectral densities of all orders is considered. For this model some asymptotic results on LSE and periodogram estimate of unknown amplitudes and frequencies are formulated.
Note also that from the mathematical point of view a setting of the estimation problem for $M\ge 2$ in trigonometric models is a natural generalization of the problem of detection of hidden periodicities $(M=1)$, see, for example, [2, 7] and references therein.
The text below related to the definition of the LSE to some extent repeats that of the paper [6].
Let $\langle \varphi ,t\rangle ={\textstyle\sum \limits_{l=1}^{M}}{\varphi _{l}}{t_{l}}$, $\| t\| =\sqrt{\langle t,t\rangle }$ for vectors $\varphi =\left({\varphi _{1}},\dots ,{\varphi _{M}}\right)$, $t=\left({t_{1}},\dots ,{t_{M}}\right)$, $M\ge 3$.
Consider the regression model
(1)
\[ X(t)=g\left(t,{\theta ^{0}}\right)+\varepsilon \left(t\right),\hspace{1em}t\in {\mathbb{R}_{+}^{M}},\]
where
(2)
\[ g(t,{\theta ^{0}})={\sum \limits_{k=1}^{N}}\left({A_{k}^{0}}\cos \langle {\varphi _{k}^{0}},t\rangle +{B_{k}^{0}}\sin \langle {\varphi _{k}^{0}},t\rangle \right),\]
(3)
\[ \begin{aligned}{}{\varphi _{k}^{0}}& =\left({\varphi _{1k}^{0}},\dots ,{\varphi _{Mk}^{0}}\right),\hspace{1em}k=\overline{1,N},\\ {} {\theta ^{0}}& =\left({A_{1}^{0}},{B_{1}^{0}},{\varphi _{11}^{0}},\dots ,{\varphi _{M1}^{0}},\dots ,{A_{N}^{0}},{B_{N}^{0}},{\varphi _{1N}^{0}},\dots ,{\varphi _{MN}^{0}}\right),\end{aligned}\]
${\left({A_{k}^{0}}\right)^{2}}+{\left({B_{k}^{0}}\right)^{2}}>0$, $k=\overline{1,N}$; $\varepsilon =\left\{\varepsilon \left(t\right),t\in {\mathbb{R}^{M}}\right\}$ is a random noise defined on a probability space $\left(\Omega ,\mathcal{F},P\right)$ and satisfying the following assumption.
A. ε is a sample continuous homogeneous Gaussian random field with zero mean and covariance function $B\left(t\right)=\mathbb{E}\varepsilon \left(t\right)\varepsilon \left(0\right)$, $t\in {\mathbb{R}^{M}}$, satisfying one of the following conditions:
  • (i) ε is an isotropic field and $B\left(t\right)=\widetilde{B}\left(\| t\| \right)=L\left(\| t\| \right)/\| t{\| ^{\alpha }}$, $\alpha \in \left(0,M-\left[\frac{M}{2}\right]\right)$, where L is a nondecreasing function slowly varying at infinity;
  • (ii) $B\left(\cdot \right)\in {L_{1}}\left({\mathbb{R}^{M}}\right)$.
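For orientation, the regression function (2) is easy to evaluate numerically. The following minimal Python sketch computes $g(t,\theta)$ for $M=3$; the parameter values are arbitrary illustrative choices, not taken from the paper.

```python
import math

# A minimal sketch of the regression function g(t, theta) from (2) for
# M = 3; parameters are laid out as in (3): amplitude pairs (A_k, B_k)
# and frequency vectors phi_k (all values here are illustrative).
def g(t, amplitudes, frequencies):
    total = 0.0
    for (A, B), phi in zip(amplitudes, frequencies):
        inner = sum(p * x for p, x in zip(phi, t))  # <phi_k, t>
        total += A * math.cos(inner) + B * math.sin(inner)
    return total

amps = [(1.0, 0.5)]            # N = 1 harmonic
freqs = [(0.3, 0.7, 1.1)]      # its angular frequencies in R^3
print(g((0.0, 0.0, 0.0), amps, freqs))  # at t = 0 the value is A_1 = 1.0
```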
To prove the consistency of LSE of the parameters (3) it is reasonable to modify the standard definition of LSE using parametric sets that allow a good enough distinction between angular frequencies [8, 1, 5].
Consider the space ${\mathbb{R}^{MN}}=\underset{M\hspace{0.2778em}\mathrm{times}}{\underbrace{{\mathbb{R}^{N}}\times \cdots \times {\mathbb{R}^{N}}}}$, and in each space ${\mathbb{R}^{N}}$ for some fixed numbers $0\le {\underline{\varphi }_{l}}<{\overline{\varphi }_{l}}<\infty $ define the sets
(4)
\[ \begin{aligned}{}& {\Lambda _{l}}=\left\{{\varphi _{l}}=\left({\varphi _{l1}},\dots ,{\varphi _{lN}}\right)\in {\mathbb{R}^{N}}:0\le {\underline{\varphi }_{l}}<{\varphi _{lk}}<{\overline{\varphi }_{l}}<\infty ,k=\overline{1,N}\right\},\\ {} & \hspace{1em}l=\overline{1,M},\end{aligned}\]
containing all the corresponding true values of the frequencies in (3) for fixed l.
Introduce the functional
(5)
\[ {Q_{T}}\left(\theta \right)={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\left[X\left(t\right)-g\left(t,\theta \right)\right]^{2}}dt,\hspace{0.2778em}dt={\prod \limits_{l=1}^{M}}d{t_{l}}.\]
According to the standard definition, a random vector
(6)
\[ {\theta _{T}}=\left({A_{1T}},{B_{1T}},{\varphi _{11,T}},\dots ,{\varphi _{M1,T}},\dots ,{A_{NT}},{B_{NT}},{\varphi _{1N,T}},\dots ,{\varphi _{MN,T}}\right)\]
is called the LSE of ${\theta ^{0}}$ if it minimizes (5) on the parametric set ${\Theta ^{c}}\subset {\mathbb{R}^{\left(M+2\right)N}}$. In ${\Theta ^{c}}$ the amplitudes ${A_{k}}$, ${B_{k}}$, $k=\overline{1,N}$, can take any values, while the frequencies ${\varphi _{l}}$, $l=\overline{1,M}$, take values in the closed set
\[ {\Lambda ^{c}}={\prod \limits_{l=1}^{M}}{\Lambda _{l}^{c}}.\]
To prove the strong consistency of ${\theta _{T}}$ it is necessary to ensure the convergence to zero almost surely (a.s.), as $T\to \infty $, of the fractions (see [2, 7, 5])
(7)
\[ \frac{\sin T\left({\varphi _{lk,T}}-{\varphi _{lj,T}}\right)}{T\left({\varphi _{lk,T}}-{\varphi _{lj,T}}\right)},\hspace{0.2778em}\frac{\sin T\left({\varphi _{lk,T}}-{\varphi _{lj}^{0}}\right)}{T\left({\varphi _{lk,T}}-{\varphi _{lj}^{0}}\right)},\hspace{1em}k\ne j;\]
(8)
\[ \frac{\sin T{\varphi _{lk,T}}}{T{\varphi _{lk,T}}},\hspace{1em}l=\overline{1,M},\hspace{0.2778em}k,j=\overline{1,N}.\]
However, the use of the previous definition of the LSE does not make it possible to control the behavior of the fractions (7), (8), as $T\to \infty $.
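The difficulty is visible numerically: $\sin(Tx)/(Tx)$ vanishes only when $T|x|\to \infty$, so frequency gaps shrinking as fast as $1/T$ are never separated. A small Python illustration (the numbers are arbitrary):

```python
import math

def dirichlet_fraction(T, x):
    """sin(T x) / (T x), the factor appearing in (7)-(8); equals 1 at x = 0."""
    return 1.0 if x == 0 else math.sin(T * x) / (T * x)

# For a fixed frequency gap x the fraction vanishes as T grows ...
print(abs(dirichlet_fraction(1e6, 0.5)))    # at most 2e-6
# ... but if the gap shrinks like 1/T it does not vanish at all:
print(dirichlet_fraction(1e6, 1.0 / 1e6))   # sin(1) ~ 0.84
```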
A.M. Walker [8] modified the definition of the LSE of the frequencies in the classical formulation of the problem of detecting hidden periodicities so that the terms (7), (8) tend to zero, as $T\to \infty $. In our setting the point of a similar modification is that the LSE (6) is defined as an absolute minimum point of (5) on a parametric set that depends on T and separates the frequencies properly, as $T\to \infty $.
Following D.R. Brillinger [1], consider monotonically nondecreasing families of open sets ${\Lambda _{lT}}\subset {\Lambda _{l}}$, $l=\overline{1,M}$, $T>{T_{0}}>0$, such that $\textstyle\bigcup \limits_{T>{T_{0}}}{\Lambda _{lT}}={\widetilde{\Lambda }_{l}}$, ${\left({\widetilde{\Lambda }_{l}}\right)^{c}}={\Lambda _{l}^{c}}$, and satisfying the following conditions.
B. For $l=\overline{1,M}$ and $k,{k^{\prime }}=\overline{1,N}$:
  • 1. ${\varphi _{l}^{0}}=\left({\varphi _{l1}^{0}},\dots ,{\varphi _{lN}^{0}}\right)\in {\Lambda _{lT}}$, $T>{T_{0}}$ (do not confuse with ${\varphi _{k}^{0}}$ in (2));
  • 2. $\underset{T\to \infty }{\lim }\underset{{\varphi _{l}}\in {\Lambda _{lT}}}{\inf }T\left|{\varphi _{lk}}-{\varphi _{l{k^{\prime }}}}\right|=\infty $, $k\ne {k^{\prime }}$;
  • 3. $\underset{T\to \infty }{\lim }\underset{{\varphi _{l}}\in {\Lambda _{lT}}}{\inf }T{\varphi _{lk}}=\infty $.
The purpose of conditions 2 and 3 is to cover the cases of close true frequencies and of true frequencies close to zero.
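One hypothetical family of sets satisfying conditions 2 and 3 (an illustration only, not the construction used in the paper) keeps every frequency at distance at least $\delta_T = T^{-1/2}$ from zero and from every other frequency, so that $T\delta_T = \sqrt{T}\to \infty$:

```python
# A hypothetical family Lambda_{lT} (illustrative, not from the paper):
# each frequency stays at least delta_T = T^(-1/2) away from zero and
# from every other frequency, so T * delta_T = sqrt(T) -> infinity.
def in_lambda_T(phi, T, lo=0.0, hi=10.0):
    delta = T ** (-0.5)
    if any(not (lo + delta < p < hi) for p in phi):
        return False
    return all(abs(a - b) > delta
               for i, a in enumerate(phi) for b in phi[i + 1:])

print(in_lambda_T((1.0, 2.0, 3.0), T=100.0))  # True: all gaps exceed 0.1
print(in_lambda_T((1.0, 1.05), T=100.0))      # False: gap 0.05 < 0.1
```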
Definition 1.
Any random vector (6) such that it is an absolute minimum point of (5) on the parametric set ${\Theta _{T}^{c}}\subset {\mathbb{R}^{\left(M+2\right)N}}$, where amplitudes ${A_{k}}$, ${B_{k}}$, $k=\overline{1,N}$, can take any values and angular frequencies take values in the set
\[ {\Lambda _{T}^{c}}={\prod \limits_{l=1}^{M}}{\Lambda _{lT}^{c}},\hspace{1em}T>{T_{0}}>0,\]
is called LSE in the Walker–Brillinger sense.
It is precisely this estimate that we study in the present paper.
Obviously, additional prior information about frequencies in the trigonometric model (1) can clarify the description of parametric sets containing true frequencies. The corresponding example is given in [6].

2 LLN for finite Fourier transform of random noise

In this section a strong LLN, uniform in the frequencies, is obtained for the finite Fourier transform of the random field ε from the model (1).
Theorem 1.
Under condition A
(9)
\[ {\xi _{T}}=\underset{\varphi \in {\mathbb{R}^{M}}}{\sup }\left|{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt\right|\to 0\hspace{1em}a.s.,\hspace{0.2778em}as\hspace{0.2778em}T\to \infty .\]
Proof.
Denote by ${\eta _{T}}\left(\varphi \right)$ the expression under the supremum sign in (9). Then
\[\begin{aligned}{}{\eta _{T}^{2}}\left(\varphi \right)=& {T^{-2M}}\underset{{\left[0,T\right]^{2M}}}{\int }\exp \{-i\left<\varphi ,t-s\right>\}\varepsilon \left(t\right)\varepsilon \left(s\right)dtds\\ {} =& {T^{-2M}}\underset{{\left[0,T\right]^{2M-2}}}{\int }\exp \left\{-i{\sum \limits_{l=2}^{M}}{\varphi _{l}}\left({t_{l}}-{s_{l}}\right)\right\}\\ {} & \times \left(\underset{{\left[0,T\right]^{2}}}{\int }{e^{-i{\varphi _{1}}\left({t_{1}}-{s_{1}}\right)}}\varepsilon \left(t\right)\varepsilon \left(s\right)d{t_{1}}d{s_{1}}\right)d{t_{2}}...d{t_{M}}d{s_{2}}...d{s_{M}}.\end{aligned}\]
Transform the inner integral in this expression:
\[\begin{aligned}{}& \underset{{\left[0,T\right]^{2}}}{\int }{e^{-i{\varphi _{1}}\left({t_{1}}-{s_{1}}\right)}}\varepsilon \left(t\right)\varepsilon \left(s\right)d{t_{1}}d{s_{1}}\\ {} & \hspace{1em}={\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T-{u_{1}}}{\int }}}{e^{-i{\varphi _{1}}{u_{1}}}}\varepsilon \left({v_{1}}+{u_{1}},{t_{2}},\dots ,{t_{M}}\right)\varepsilon \left({v_{1}},{s_{2}},\dots ,{s_{M}}\right)d{v_{1}}d{u_{1}}\\ {} & \hspace{2em}+{\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T-{u_{1}}}{\int }}}{e^{i{\varphi _{1}}{u_{1}}}}\varepsilon \left({v_{1}},{t_{2}},\dots ,{t_{M}}\right)\varepsilon \left({v_{1}}+{u_{1}},{s_{2}},\dots ,{s_{M}}\right)d{v_{1}}d{u_{1}}.\end{aligned}\]
Thus
(10)
\[ \begin{aligned}{}& {\eta _{T}^{2}}\left(\varphi \right)={T^{-2M}}{\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T-{u_{1}}}{\int }}}{e^{-i{\varphi _{1}}{u_{1}}}}\Bigg(\underset{{\left[0,T\right]^{2M-4}}}{\int }\exp \left\{-i{\sum \limits_{l=3}^{M}}{\varphi _{l}}\left({t_{l}}-{s_{l}}\right)\right\}\\ {} & \hspace{1em}\times \left(\underset{{\left[0,T\right]^{2}}}{\int }{e^{-i{\varphi _{2}}\left({t_{2}}-{s_{2}}\right)}}\varepsilon \left({v_{1}}+{u_{1}},{t_{2}},\dots ,{t_{M}}\right)\varepsilon \left({v_{1}},{s_{2}},\dots ,{s_{M}}\right)d{t_{2}}d{s_{2}}\right)\\ {} & \hspace{2em}\hspace{1em}\times d{t_{3}}...d{t_{M}}d{s_{3}}...d{s_{M}}\Bigg)d{v_{1}}d{u_{1}}\\ {} & \hspace{1em}+{T^{-2M}}{\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T-{u_{1}}}{\int }}}{e^{i{\varphi _{1}}{u_{1}}}}\Bigg(\underset{{\left[0,T\right]^{2M-4}}}{\int }\exp \left\{-i{\sum \limits_{l=3}^{M}}{\varphi _{l}}\left({t_{l}}-{s_{l}}\right)\right\}\\ {} & \hspace{1em}\times \left(\underset{{\left[0,T\right]^{2}}}{\int }{e^{-i{\varphi _{2}}\left({t_{2}}-{s_{2}}\right)}}\varepsilon \left({v_{1}},{t_{2}},\dots ,{t_{M}}\right)\varepsilon \left({v_{1}}+{u_{1}},{s_{2}},\dots ,{s_{M}}\right)d{t_{2}}d{s_{2}}\right)\\ {} & \hspace{2em}\hspace{1em}\times d{t_{3}}...d{t_{M}}d{s_{3}}...d{s_{M}}\Bigg)d{v_{1}}d{u_{1}}={I_{1}}+{I_{0}}.\end{aligned}\]
In (10) and below, a subscript “1” on a summand I indicates that, after the changes of variables, the argument ${t_{l}}$ in the entry $\varepsilon \left({t_{1}},\dots ,{t_{M}}\right)$ is replaced by the sum ${v_{l}}+{u_{l}}$, while a subscript “0” indicates that ${t_{l}}$ is replaced by the variable ${v_{l}}$.
In ${I_{1}}$ and ${I_{0}}$ we make similar changes of variables in double integrals over the variables ${t_{2}}$, ${s_{2}}$. Then we get ${I_{1}}={I_{11}}+{I_{10}}$, ${I_{0}}={I_{01}}+{I_{00}}$, ${\eta _{T}^{2}}\left(\varphi \right)={I_{11}}+{I_{10}}+{I_{01}}+{I_{00}}$.
In the expressions obtained, the change of variables ${t_{3}}$, ${s_{3}}$ will lead to the sums of integrals ${I_{11}}={I_{111}}+{I_{110}}$, ${I_{10}}={I_{101}}+{I_{100}}$, ${I_{01}}={I_{011}}+{I_{010}}$, $\hspace{0.2778em}{I_{00}}={I_{001}}+{I_{000}}$, ${\eta _{T}^{2}}\left(\varphi \right)={I_{111}}+{I_{110}}+{I_{101}}+{I_{100}}+{I_{011}}+{I_{010}}+{I_{001}}+{I_{000}}$.
Continuing to apply the same changes of variables $\left({t_{4}},\hspace{0.2778em}{s_{4}}\right)$, ..., $\left({t_{M}},\hspace{0.2778em}{s_{M}}\right)$, we get the expression of ${\eta _{T}^{2}}\left(\varphi \right)$ as a sum of ${2^{M}}$ integrals. An explicit representation of this ordered sum requires some efforts.
Denote by ${b_{j}}$, $j=\overline{1,{2^{M}}}$, binary sets of length M which are indices of the specified sum terms, i.e.
(11)
\[ {\eta _{T}^{2}}\left(\varphi \right)={\sum \limits_{j=1}^{{2^{M}}}}{I_{{b_{j}}}}.\]
Generally speaking, the terms in the sum (11) are written in a definite order using the following inductive rule. For $M=1,2,3$ these sums are written above. If we have a sum ${\textstyle\sum \limits_{j=1}^{{2^{{M^{\prime }}}}}}{I_{{b^{\prime }_{j}}}}$ for some ${M^{\prime }}$, then to obtain the similar sum for ${M^{\prime }}+1$ we replace each term ${I_{{b^{\prime }_{j}}}}$ with the sum of two terms ${I_{{b^{\prime }_{j}}1}}+{I_{{b^{\prime }_{j}}0}}$, $j=\overline{1,{2^{{M^{\prime }}}}}$. Ultimately, the terms in the sum (11) are arranged in descending lexicographic order.
We will say that the binary set $\overline{b}=\left({\overline{\beta }_{1}},\dots ,{\overline{\beta }_{M}}\right)$, ${\overline{\beta }_{l}}=1-{\beta _{l}}$, is the opposite of the binary set $b=\left({\beta _{1}},\dots ,{\beta _{M}}\right)$. The ordered collection of ${2^{M}}$ binary sets ${b_{j}}$ in (11) has the property that binary sets equidistant from the beginning and end of this sum are opposite:
(12)
\[ {b_{{2^{M}}-\left(j-1\right)}}={\overline{b}_{j}},\hspace{1em}j=\overline{1,{2^{M-1}}}.\]
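The inductive rule generating (11) and the opposite-pairs property (12) can be checked mechanically; the sketch below reproduces the ordering for small M:

```python
# Reproduce the inductive ordering of the binary index sets b_j in (11):
# passing from M' to M'+1 replaces each term I_{b'} by I_{b'1} + I_{b'0}.
def ordered_binary_sets(M):
    sets = [()]
    for _ in range(M):
        sets = [b + (bit,) for b in sets for bit in (1, 0)]
    return sets

b = ordered_binary_sets(3)
print(b[0], b[-1])  # (1, 1, 1) and (0, 0, 0)
# The terms come out in descending lexicographic order, and sets
# equidistant from the two ends of the sum are opposite, as in (12).
assert b == sorted(b, reverse=True)
assert all(b[len(b) - 1 - j] == tuple(1 - x for x in b[j])
           for j in range(len(b) // 2))
```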
Note that the frequency ${\varphi _{1}}$ is included in the 1st term of the sum (10) with the sign “−”, and this sign corresponds to the sum ${v_{1}}+{u_{1}}$ in the 1st factor depending on t, of the product of the random field ε values. In the 2nd term of (10) the frequency ${\varphi _{1}}$ enters with the sign “+”, and this sign corresponds to the variable ${v_{1}}$ in the 1st factor of the product of the ε values. However, the sum ${v_{1}}+{u_{1}}$ corresponds to the index “1” in the 1st term (10), and the variable ${v_{1}}$ corresponds to the index “0” in the 2nd term of (10). Further changes of variables do not change these rules, and $2M$-fold integrals ${I_{{b_{j}}}}$ in the sum (11) include the exponents $\exp \left\{i\left<{c_{j}}\varphi ,u\right>\right\}$, where the vector ${c_{j}}=\left({\gamma _{{j_{1}}}},\dots ,{\gamma _{{j_{M}}}}\right)$ is obtained from the binary set ${b_{j}}$ by replacing the coordinates “0” with “1” and the coordinates “1” with “$-1$”, ${c_{j}}\varphi :=\left({\gamma _{{j_{1}}}}{\varphi _{1}},\dots ,{\gamma _{{j_{M}}}}{\varphi _{M}}\right)$. Due to (12),
(13)
\[ {c_{{2^{M}}-\left(j-1\right)}}=-{c_{j}},\hspace{1em}j=\overline{1,{2^{M-1}}}.\]
It follows from (13) and the previous considerations that integrals equidistant from the beginning and end of the sum (11) are complex conjugates, so that (11) turns into the sum
(14)
\[ {\eta _{T}^{2}}\left(\varphi \right)={\sum \limits_{j=1}^{{2^{M-1}}}}{I_{{c_{j}}}},\]
(15)
\[ \begin{aligned}{}{I_{{c_{j}}}}& =2{T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }\cos \left<{c_{j}}\varphi ,u\right>\underset{{\textstyle\prod _{T}}\left(u\right)}{\int }\varepsilon \left(v+{b_{j}}u\right)\varepsilon \left(v+{\overline{b}_{j}}u\right)dvdu,\\ {} & \hspace{2em}j=\overline{1,{2^{M-1}}},\hspace{0.2778em}{b_{j}}u:=\left({\beta _{{j_{1}}}}{u_{1}},\dots ,{\beta _{{j_{M}}}}{u_{M}}\right),\\ {} & \hspace{2em}{\prod _{T}}\left(u\right)=\left[0,T-{u_{1}}\right]\times \cdots \times \left[0,T-{u_{M}}\right].\end{aligned}\]
From (14) and (15) we get
(16)
\[ \begin{aligned}{}\mathbb{E}{\xi _{T}^{2}}& =\mathbb{E}\underset{\varphi \in {\mathbb{R}^{M}}}{\sup }{\eta _{T}^{2}}\left(\varphi \right)\\ {} & \le 2{\sum \limits_{j=1}^{{2^{M-1}}}}{T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }\mathbb{E}\left|\hspace{0.1667em}\underset{{\textstyle\prod _{T}}\left(u\right)}{\int }\varepsilon \left(v+{b_{j}}u\right)\varepsilon \left(v+{\overline{b}_{j}}u\right)dv\right|du\\ {} & \le 2{\sum \limits_{j=1}^{{2^{M-1}}}}{T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j}^{1/2}}\left(u\right)du,\end{aligned}\]
where
(17)
\[ \begin{aligned}{}{\Psi _{j}}\left(u\right)& =\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }\mathbb{E}\varepsilon \left(v+{b_{j}}u\right)\varepsilon \left(v+{\overline{b}_{j}}u\right)\varepsilon \left(w+{b_{j}}u\right)\varepsilon \left(w+{\overline{b}_{j}}u\right)dvdw,\\ {} & \hspace{1em}j=\overline{1,{2^{M-1}}}.\end{aligned}\]
To deal with the integrals (17) we use Isserlis’ theorem
\[\begin{aligned}{}& \mathbb{E}\varepsilon \left(v+{b_{j}}u\right)\varepsilon \left(v+{\overline{b}_{j}}u\right)\varepsilon \left(w+{b_{j}}u\right)\varepsilon \left(w+{\overline{b}_{j}}u\right)\\ {} & \hspace{1em}={B^{2}}\left(\left({b_{j}}-{\overline{b}_{j}}\right)u\right)+{B^{2}}\left(v-w\right)\\ {} & \hspace{2em}+B\left(v-w+\left({b_{j}}-{\overline{b}_{j}}\right)u\right)B\left(v-w+\left({\overline{b}_{j}}-{b_{j}}\right)u\right).\end{aligned}\]
Then
(18)
\[ \begin{aligned}{}{\Psi _{j}}\left(u\right)& ={\left(T-{u_{1}}\right)^{2}}{\left(T-{u_{2}}\right)^{2}}\dots {\left(T-{u_{M}}\right)^{2}}{B^{2}}\left(\left({b_{j}}-{\overline{b}_{j}}\right)u\right)\\ {} & \hspace{1em}+\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }{B^{2}}\left(v-w\right)dvdw\\ {} & \hspace{1em}+\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }B\left(v-w+\left({b_{j}}-{\overline{b}_{j}}\right)u\right)B\left(v-w+\left({\overline{b}_{j}}-{b_{j}}\right)u\right)dvdw\\ {} & ={\Psi _{j1}}\left(u\right)+{\Psi _{j2}}\left(u\right)+{\Psi _{j3}}\left(u\right),\hspace{1em}j=\overline{1,{2^{M-1}}}.\end{aligned}\]
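The Isserlis expansion used in (18) can be sanity-checked by Monte Carlo for any centred 4-dimensional Gaussian vector; the construction below from two independent N(0, 1) variables is purely illustrative (in the paper the four factors are values of the field ε):

```python
import random

# Monte Carlo sanity check of Isserlis' theorem for a centred
# 4-dimensional Gaussian vector (x1, x2, x3, x4):
#   E[x1 x2 x3 x4] = B12*B34 + B13*B24 + B14*B23.
random.seed(0)

def draw():
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, 0.5 * z1 + z2, z2, z1 + z2  # x1, x2, x3, x4

# pairwise covariances computed by hand from the construction above
B12, B13, B14 = 0.5, 0.0, 1.0
B23, B24, B34 = 1.0, 1.5, 1.0
isserlis = B12 * B34 + B13 * B24 + B14 * B23  # = 1.5

n = 200_000
mc = sum(x1 * x2 * x3 * x4 for x1, x2, x3, x4 in (draw() for _ in range(n))) / n
print(isserlis, round(mc, 2))
```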
Let us first estimate the term ${\Psi _{j3}}\left(u\right)$ using the notation $B\left(v-w+\left({b_{j}}-{\overline{b}_{j}}\right)u\right)B\left(v-w+\left({\overline{b}_{j}}-{b_{j}}\right)u\right)={R_{u}^{\left(j\right)}}\left(v-w\right)$:
\[\begin{aligned}{}{\Psi _{j3}}\left(u\right)& =\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }{R_{u}^{\left(j\right)}}\left(v-w\right)dvdw\\ {} & ={\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right){\underset{-\left(T-{u_{1}}\right)}{\overset{\left(T-{u_{1}}\right)}{\int }}}\hspace{-0.1667em}\cdots {\underset{-\left(T-{u_{M}}\right)}{\overset{\left(T-{u_{M}}\right)}{\int }}}{\prod \limits_{l=1}^{M}}\left(1-\frac{\left|{t_{l}}\right|}{T-{u_{l}}}\right){R_{u}^{\left(j\right)}}\left(t\right)dt\\ {} & \le {\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right)\underset{{\left[-T,T\right]^{M}}}{\int }{R_{u}^{\left(j\right)}}\left(t\right)dt={T^{M}}{\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right)\underset{{\left[-1,1\right]^{M}}}{\int }{R_{u}^{\left(j\right)}}\left(Tt\right)dt,\end{aligned}\]
(19)
\[ \begin{aligned}{}& 2{T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j3}^{1/2}}\left(u\right)du=2{T^{-M}}\underset{{\left[0,1\right]^{M}}}{\int }{\Psi _{j3}^{1/2}}\left(Tu\right)du\\ {} & \hspace{1em}\le 2\underset{{\left[0,1\right]^{M}}}{\int }\sqrt{{\prod \limits_{l=1}^{M}}\left(1-{u_{l}}\right)\underset{{\left[-1,1\right]^{M}}}{\int }{R_{Tu}^{\left(j\right)}}\left(Tt\right)dt}du.\end{aligned}\]
Consider the integral under the root sign in (19) if condition $\textbf{A}(i)$ is met.
(20)
\[ \begin{aligned}{}& \underset{{\left[-1,1\right]^{M}}}{\int }{R_{Tu}^{\left(j\right)}}\left(Tt\right)dt\\ {} & \hspace{1em}=\underset{{\left[-1,1\right]^{M}}}{\int }\widetilde{B}\left(T\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u\| \right)\widetilde{B}\left(T\| t+\left({\overline{b}_{j}}-{b_{j}}\right)u\| \right)dt\\ {} & \hspace{1em}=\underset{{\left[-1,1\right]^{M}}}{\int }\frac{L\left(T\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u\| \right)}{{T^{\alpha }}\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u{\| ^{\alpha }}}\cdot \frac{L\left(T\| t+\left({\overline{b}_{j}}-{b_{j}}\right)u\| \right)}{{T^{\alpha }}\| t+\left({\overline{b}_{j}}-{b_{j}}\right)u{\| ^{\alpha }}}dt.\end{aligned}\]
If L is a monotone nondecreasing function, as assumed in condition $\textbf{A}(i)$, then the numerators in (20) can always be bounded as follows.
(21)
\[ L\left(T\| t\pm \left({b_{j}}-{\overline{b}_{j}}\right)u\| \right)\le L\left(2\sqrt{M}T\right)<\left(1+\delta \right)L\left(T\right)\]
for any $\delta >0$ and all $T>T\left(\delta \right)$.
The denominators in (20) need to be estimated more carefully. Let $b=({\beta _{1}},\dots ,{\beta _{M}})$ be an arbitrary binary set. Using the notation
(22)
\[ {\underset{-{\overline{b}_{i}}}{\overset{{b_{i}}}{\int }}}={\underset{-{\overline{\beta }_{i1}}}{\overset{{\beta _{i1}}}{\int }}}{\underset{-{\overline{\beta }_{i2}}}{\overset{{\beta _{i2}}}{\int }}}\hspace{-0.1667em}\cdots {\underset{-{\overline{\beta }_{iM}}}{\overset{{\beta _{iM}}}{\int }}},\hspace{1em}i=\overline{1,{2^{M}}},\]
we get for any fixed j
(23)
\[ \underset{{\left[-1,1\right]^{M}}}{\int }{R_{Tu}^{\left(j\right)}}\left(Tt\right)dt={\sum \limits_{i=1}^{{2^{M}}}}{\underset{-{\overline{b}_{i}}}{\overset{{b_{i}}}{\int }}}{R_{Tu}^{\left(j\right)}}\left(Tt\right)dt.\]
In the sum (23), for any fixed i put the M-fold integral ${\underset{-{\overline{b}_{i}}}{\overset{{b_{i}}}{\textstyle\int }}}$ in correspondence with a set ${q_{i}}$ of pluses and minuses of length M according to the following rule: the sign “+” in ${q_{i}}$ corresponds to an integral over [0,1], and the sign “−” to an integral over [−1,0]. Consider another set ${q_{j}}$ consisting of the signs of ${b_{j}}-{\overline{b}_{j}}$ in the denominator of the 1st fraction in (20) under the integral sign of the ith term in (23). Similarly we obtain the 3rd set ${\overline{q}_{j}}$ consisting of the signs of the opposite set ${\overline{b}_{j}}-{b_{j}}$ in the denominator of the 2nd fraction of the ith term in (23).
Denote by ${d_{ij}}$ the number of coincidences of signs in the sets ${q_{i}}$, ${q_{j}}$ and by ${\overline{d}_{ij}}$ the number of coincidences in ${q_{i}}$ and ${\overline{q}_{j}}$. If ${d_{ij}}\ge M-\left[\frac{M}{2}\right]$, then in the ith term of (23) we bound the 2nd fraction on the right-hand side of (20) by $\widetilde{B}\left(0\right)$, and the integral of $\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u{\| ^{-\alpha }}$ related to the 1st fraction in (20) is estimated below. On the other hand, if ${d_{ij}} < M-\left[\frac{M}{2}\right]$, then ${\overline{d}_{ij}}\ge M-\left[\frac{M}{2}\right]$, since ${d_{ij}}+{\overline{d}_{ij}}=M$. In this case we similarly bound the 1st fraction by $\widetilde{B}\left(0\right)$ and estimate the integral of the 2nd denominator separately. Thus $m=\max \left({d_{ij}},{\overline{d}_{ij}}\right)\ge M-\left[\frac{M}{2}\right]$.
Coordinates ${t_{r}}$, $r=\overline{1,M}$, of the vector t can be positive or negative according to which integral, ${\underset{0}{\overset{1}{\textstyle\int }}}$ or ${\underset{-1}{\overset{0}{\textstyle\int }}}$, corresponds to the variable ${t_{r}}$. All the coordinates of the vector u are positive, however coordinates of the vectors ${b_{j}}-{\overline{b}_{j}}$ and ${\overline{b}_{j}}-{b_{j}}$ take values $+1$ or $-1$.
Let $m={d_{ij}}$. The case $m={\overline{d}_{ij}}$ can be considered similarly. In the norm $\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u\| $ the squares ${\left({t_{l}}+\left({\beta _{{j_{l}}}}-{\overline{\beta }_{{j_{l}}}}\right){u_{l}}\right)^{2}}$ where the signs of ${t_{l}}$ and $\left({\beta _{{j_{l}}}}-{\overline{\beta }_{{j_{l}}}}\right){u_{l}}$ differ are estimated from below by zeros. Denote by ${l_{k}}$, $k=\overline{1,m}$, the indices of the coordinates of the vectors t and $\left({b_{j}}-{\overline{b}_{j}}\right)u$ with coinciding signs. Then ${\Big({t_{{l_{k}}}}+\left({\beta _{{j_{{l_{k}}}}}}-{\overline{\beta }_{{j_{{l_{k}}}}}}\right){u_{{l_{k}}}}\Big)^{2}}\ge {t_{{l_{k}}}^{2}}$, $k=\overline{1,m}$, $\| t+\left({b_{j}}-{\overline{b}_{j}}\right)u\| \ge {\left({\textstyle\sum _{k=1}^{m}}{t_{{l_{k}}}^{2}}\right)^{\frac{1}{2}}}$, and, respectively,
(24)
\[ \| t+\left({b_{j}}-{\overline{b}_{j}}\right)u{\| ^{-\alpha }}\le {\left({\sum \limits_{k=1}^{m}}{t_{{l_{k}}}^{2}}\right)^{-\frac{\alpha }{2}}}.\]
Then for any $\delta >0$ and $T>T\left(\delta \right)>0$ taking into account (21), (22) and (24) we obtain
(25)
\[ \begin{aligned}{}& {\underset{-{\overline{b}_{i}}}{\overset{{b_{i}}}{\int }}}{R_{Tu}^{\left(j\right)}}\left(Tt\right)dt\\ {} & \hspace{1em}\le \left(1+\delta \right)\widetilde{B}\left(0\right)\widetilde{B}\left(T\right){\underset{-{\overline{\beta }_{i{l_{1}}}}}{\overset{{\beta _{i{l_{1}}}}}{\int }}}\hspace{-0.1667em}\cdots {\underset{-{\overline{\beta }_{i{l_{m}}}}}{\overset{{\beta _{i{l_{m}}}}}{\int }}}{\left({\sum \limits_{k=1}^{m}}{t_{{l_{k}}}^{2}}\right)^{-\frac{\alpha }{2}}}d{t_{{l_{1}}}}\dots d{t_{{l_{m}}}}.\end{aligned}\]
Denote by ${\left\| t\right\| _{m}}={\left({\textstyle\sum \limits_{k=1}^{m}}{t_{{l_{k}}}^{2}}\right)^{1/2}}$ the norm in ${\mathbb{R}^{m}}$, by ${c_{m}}=\left[-{\overline{\beta }_{i{l_{1}}}},{\beta _{i{l_{1}}}}\right]\times \cdots \times \left[-{\overline{\beta }_{i{l_{m}}}},{\beta _{i{l_{m}}}}\right]$ the unit cube in ${\mathbb{R}^{m}}$, and by ${V_{m}}\left(\sqrt{m}\right)$ the ball of radius $\sqrt{m}$ centered at the origin, $I\left(\alpha ;m\right)=\underset{{c_{m}}}{\textstyle\int }{\left\| t\right\| _{m}^{-\alpha }}dt$.
Then for $\alpha < M-\left[\frac{M}{2}\right]$
(26)
\[ I\left(\alpha ;m\right)\le \underset{{V_{m}}\left(\sqrt{m}\right)}{\int }{\left\| t\right\| ^{-\alpha }}dt=2\frac{{\pi ^{\frac{m}{2}}}}{\Gamma \left(\frac{m}{2}\right)}\left(\frac{{m^{\frac{m-\alpha }{2}}}}{m-\alpha }\right),\]
and due to (19)–(26) for any $j=\overline{1,{2^{M-1}}}$
(27)
\[ {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j3}^{1/2}}\left(u\right)du=O\left({\widetilde{B}^{1/2}}\left(T\right)\right),\hspace{1em}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
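The closed form in (26) follows by passing to spherical coordinates: the integral equals the surface area $2\pi^{m/2}/\Gamma(m/2)$ of the unit sphere times $\int_0^{\sqrt{m}} r^{m-1-\alpha}\,dr$. A quick numerical check of this identity (with arbitrarily chosen m and α):

```python
import math

# Check the closed form (26): in spherical coordinates the integral of
# ||t||^{-alpha} over the ball V_m(sqrt(m)) is the surface area of the
# unit sphere, 2*pi^(m/2) / Gamma(m/2), times the integral of
# r^(m-1-alpha) over 0 <= r <= sqrt(m).
def ball_integral_closed_form(m, alpha):
    return (2.0 * math.pi ** (m / 2) / math.gamma(m / 2)
            * m ** ((m - alpha) / 2) / (m - alpha))

def ball_integral_quadrature(m, alpha, steps=200_000):
    surface = 2.0 * math.pi ** (m / 2) / math.gamma(m / 2)
    h = math.sqrt(m) / steps
    # midpoint rule for the radial factor r^(m-1-alpha)
    radial = sum(((k + 0.5) * h) ** (m - 1 - alpha) for k in range(steps)) * h
    return surface * radial

m, alpha = 2, 0.7  # an arbitrary choice with alpha < m
print(abs(ball_integral_closed_form(m, alpha) - ball_integral_quadrature(m, alpha)))
```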
It follows from (18) that the ${\Psi _{j2}}\left(u\right)$ are the same for all j. Using similar but much simpler reasoning, we arrive at the inequality
(28)
\[ \begin{aligned}{}& {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j2}^{1/2}}\left(u\right)du={T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }\sqrt{\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }{B^{2}}\left(v-w\right)dvdw}du\\ {} & \hspace{1em}\le {\left(\frac{2}{3}\right)^{M}}{\widetilde{B}^{1/2}}\left(0\right){\left(\hspace{0.1667em}\underset{{\left[-1,1\right]^{M}}}{\int }B\left(Tt\right)dt\right)^{1/2}}\\ {} & \hspace{1em}=O\left({\widetilde{B}^{1/2}}\left(T\right)\right),\hspace{1em}\mathrm{as}\hspace{2.5pt}T\to \infty .\end{aligned}\]
And finally
(29)
\[ \begin{aligned}{}& {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j1}^{1/2}}\left(u\right)du={T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right)\widetilde{B}\left(\left\| u\right\| \right)du\\ {} & \hspace{1em}\le {T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\widetilde{B}\left(\left\| u\right\| \right)du=\underset{{\left[0,1\right]^{M}}}{\int }\widetilde{B}\left(T\left\| u\right\| \right)du=O\left(\widetilde{B}\left(T\right)\right),\\ {} & \hspace{2em}\mathrm{as}\hspace{2.5pt}T\to \infty .\end{aligned}\]
From (16)–(18), (27)–(29) we conclude that under condition $\textbf{A}(i)$
(30)
\[ \mathbb{E}{\xi _{T}^{2}}=O\left({\widetilde{B}^{1/2}}\left(T\right)\right),\hspace{1em}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Let condition $\textbf{A}(ii)$ be satisfied now. Using the same notation we get for the even function ${R_{u}^{\left(j\right)}}\left(\cdot \right)$ that
\[ {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\left|{\Psi _{j3}}\left(u\right)\right|^{1/2}}du\le {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }\sqrt{\underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }\left|{R_{u}^{\left(j\right)}}\left(v-w\right)\right|dvdw}du,\]
\[\begin{aligned}{}& \underset{{\textstyle\textstyle\prod _{T}^{2}}\left(u\right)}{\int }\left|{R_{u}^{\left(j\right)}}\left(v-w\right)\right|dvdw\le {\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right){\underset{-\left(T-{u_{1}}\right)}{\overset{\left(T-{u_{1}}\right)}{\int }}}\hspace{-0.1667em}\cdots {\underset{-\left(T-{u_{M}}\right)}{\overset{\left(T-{u_{M}}\right)}{\int }}}\left|{R_{u}^{\left(j\right)}}\left(t\right)\right|dt\\ {} & \hspace{1em}\le B\left(0\right){\left\| B\right\| _{1}}{\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right),\hspace{1em}{\left\| B\right\| _{1}}={\int _{{\mathbb{R}^{M}}}}\left|B\left(t\right)\right|dt,\end{aligned}\]
\[ {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\left|{\Psi _{j3}}\left(u\right)\right|^{1/2}}du\le {\left(\frac{2}{3}\right)^{M}}{B^{1/2}}\left(0\right){\left\| B\right\| _{1}^{1/2}}{T^{-\frac{M}{2}}}.\]
For the same integral of ${\Psi _{j2}}\left(u\right)$ we obtain the same bound. And finally
\[ {T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\Psi _{j1}^{1/2}}\left(u\right)du={T^{-2M}}\underset{{\left[0,T\right]^{M}}}{\int }{\prod \limits_{l=1}^{M}}\left(T-{u_{l}}\right)\left|B\left(u\right)\right|du\le {\left\| B\right\| _{1}}{T^{-M}}.\]
Therefore under the condition $\textbf{A}(ii)$
(31)
\[ \mathbb{E}{\xi _{T}^{2}}=O\left({T^{-\frac{M}{2}}}\right),\hspace{1em}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
We show now that under condition $\textbf{A}(i)$ the bound (30) implies assertion (9) of the theorem. Obtaining (9) from (31) under condition $\textbf{A}(ii)$ is similar. Consider a sequence ${T_{n}}={n^{\beta }}$, $n\ge 1$, where the number $\beta >0$ is such that $\frac{1}{2}\alpha \beta >1$. Then
\[ {\sum \limits_{n=1}^{\infty }}\mathbb{E}{\xi _{{T_{n}}}^{2}}<\infty ,\]
and ${\xi _{{T_{n}}}}\to 0$ a.s., as $n\to \infty $.
Consider also the sequence of random variables
\[\begin{aligned}{}{\zeta _{n}}& =\underset{{T_{n}}\le T<{T_{n+1}}}{\sup }\left|{\xi _{T}}-{\xi _{{T_{n}}}}\right|\\ {} & \le \underset{{T_{n}}\le T<{T_{n+1}}}{\sup }\underset{\varphi \in {\mathbb{R}^{M}}}{\sup }\left|{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt-{T_{n}^{-M}}\underset{{\left[0,{T_{n}}\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt\right|.\end{aligned}\]
We get successively
\[\begin{aligned}{}& \left|{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt-{T_{n}^{-M}}\underset{{\left[0,{T_{n}}\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt\right|\\ {} & \hspace{1em}=\left|{T^{-M}}\underset{{\left[0,{T_{n}}\right]^{M}}\textstyle\bigcup \left({\left[0,T\right]^{M}}\setminus {\left[0,{T_{n}}\right]^{M}}\right)}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt-{T_{n}^{-M}}\underset{{\left[0,{T_{n}}\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt\right|\\ {} & \hspace{1em}\le \left|{T^{-M}}-{T_{n}^{-M}}\right|\left|\underset{{\left[0,{T_{n}}\right]^{M}}}{\int }{e^{-i\left<\varphi ,t\right>}}\varepsilon \left(t\right)dt\right|+{T^{-M}}\underset{{\left[0,T\right]^{M}}\setminus {\left[0,{T_{n}}\right]^{M}}}{\int }\left|\varepsilon \left(t\right)\right|dt\\ {} & \hspace{1em}={J_{1}}+{J_{2}},\end{aligned}\]
and
\[ \underset{{T_{n}}\le T<{T_{n+1}}}{\max }{J_{1}}=\left({\left(\frac{{T_{n+1}}}{{T_{n}}}\right)^{M}}-1\right){\xi _{{T_{n}}}}\to 0\hspace{1em}\mathrm{a}.\mathrm{s}.,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}n\to \infty .\]
On the other hand,
\[ \underset{{T_{n}}\le T<{T_{n+1}}}{\sup }{J_{2}}\le {T_{n}^{-M}}\underset{{\left[0,{T_{n+1}}\right]^{M}}\setminus {\left[0,{T_{n}}\right]^{M}}}{\int }\left|\varepsilon \left(t\right)\right|dt.\]
To rewrite the last integral conveniently, let us first consider the simple case $M=2$:
\[ {\underset{0}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n+1}}}{\int }}}={\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}.\]
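This decomposition is easy to confirm numerically. The sketch below is illustrative only (the bilinear integrand and the values of $T_n$, $T_{n+1}$ are arbitrary choices; the midpoint rule integrates bilinear functions exactly): the four rectangle integrals sum to the integral over the whole square.

```python
def riemann_2d(f, ax, bx, ay, by, n=200):
    """Midpoint Riemann sum of f over the rectangle [ax, bx] x [ay, by]."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    return hx * hy * sum(f(ax + (i + 0.5) * hx, ay + (j + 0.5) * hy)
                         for i in range(n) for j in range(n))

# Bilinear test integrand: the midpoint rule integrates it exactly.
f = lambda t1, t2: (2.0 * t1 + 1.0) * (t2 + 3.0)
Tn, Tn1 = 1.0, 1.5          # stand-ins for T_n and T_{n+1}

whole = riemann_2d(f, 0.0, Tn1, 0.0, Tn1)
parts = (riemann_2d(f, 0.0, Tn, 0.0, Tn)        # [0,Tn] x [0,Tn]
         + riemann_2d(f, Tn, Tn1, 0.0, Tn)      # [Tn,Tn1] x [0,Tn]
         + riemann_2d(f, 0.0, Tn, Tn, Tn1)      # [0,Tn] x [Tn,Tn1]
         + riemann_2d(f, Tn, Tn1, Tn, Tn1))     # [Tn,Tn1] x [Tn,Tn1]

assert abs(whole - parts) < 1e-8                 # the four pieces tile the square
```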
To obtain a similar representation for $M=3$, consider a formal “multiplication” of integrals, which is in fact an application of the distributivity of the direct product over the union of sets:
(32)
\[ \begin{aligned}{}{\underset{0}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n+1}}}{\int }}}& =\left({\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}\right)\left({\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}\right)\\ {} & ={\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}\\ {} & \hspace{1em}+{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}+{\underset{0}{\overset{{T_{n}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}+{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}{\underset{{T_{n}}}{\overset{{T_{n+1}}}{\int }}}.\end{aligned}\]
Note that the sum (32) contains ${2^{3}}$ threefold integrals, whose lower limits of integration form the complete set of words of length 3 in the symbols 0 and ${T_{n}}$.
By induction (the argument simply repeats the calculation (32)) it can be proved that for any $M\in \mathbb{N}$
(33)
\[ \begin{aligned}{}& {T_{n}^{-M}}\underset{{\left[0,{T_{n+1}}\right]^{M}}\setminus {\left[0,{T_{n}}\right]^{M}}}{\int }\left|\varepsilon \left(t\right)\right|dt\\ {} & \hspace{1em}={T_{n}^{-M}}{\sum \limits_{k=1}^{M}}{\sum \limits_{i=1}^{{C_{M}^{k}}}}\underset{\left\{{e_{ik}}\left(n\right)\right\}}{\int }\left|\varepsilon \left({t_{1}},\dots ,{t_{M}}\right)\right|d{t_{1}}\dots d{t_{M}},\end{aligned}\]
where, in this noncommutative analogue of the binomial formula, $\underset{\left\{{e_{ik}}\left(n\right)\right\}}{\textstyle\int }$ denotes an M-fold integral containing $k\ge 1$ factors ${\underset{{T_{n}}}{\overset{{T_{n+1}}}{\textstyle\int }}}$ and $M-k$ factors ${\underset{0}{\overset{{T_{n}}}{\textstyle\int }}}$, and ${e_{ik}}\left(n\right)$ denotes the corresponding word of length M in the symbols 0 and ${T_{n}}$.
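The bookkeeping behind (33) can be checked directly: the boxes indexed by the nonzero words ${e_{ik}}\left(n\right)$ tile the shell ${\left[0,{T_{n+1}}\right]^{M}}\setminus {\left[0,{T_{n}}\right]^{M}}$, and grouping them by the number k of long factors recovers the counts ${C_{M}^{k}}$. A short illustrative computation (the values of M, $T_n$, $T_{n+1}$ are arbitrary):

```python
from itertools import product
from math import comb

Tn, Tn1, M = 2.0, 3.0, 4    # arbitrary test values
d = Tn1 - Tn

# Enumerate all words of length M in the symbols {0, Tn}: the lower limits
# of integration.  The all-zero word is the inner cube [0,Tn]^M; the other
# 2^M - 1 boxes tile the shell [0,Tn1]^M \ [0,Tn]^M.
shell_volume = 0.0
for word in product((0.0, Tn), repeat=M):
    if any(lo == Tn for lo in word):           # skip the inner cube
        vol = 1.0
        for lo in word:                         # each factor is [0,Tn] or [Tn,Tn1]
            vol *= d if lo == Tn else Tn
        shell_volume += vol

assert abs(shell_volume - (Tn1**M - Tn**M)) < 1e-9

# Grouping the boxes by the number k of "long" factors gives C(M, k):
assert sum(comb(M, k) for k in range(1, M + 1)) == 2**M - 1
```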
We will show that each term in (33), denoted by ${J_{ik}}\left(n\right)$, vanishes a.s., as $n\to \infty $. For this purpose consider
\[\begin{aligned}{}\mathbb{E}{J_{ik}^{2}}\left(n\right)& ={T_{n}^{-2M}}\underset{\left\{{e_{ik}}\left(n\right)\right\}}{\int }\underset{\left\{{e_{ik}}\left(n\right)\right\}}{\int }\mathbb{E}\Big|\varepsilon \left({t_{1}^{\left(1\right)}},\dots ,{t_{M}^{\left(1\right)}}\right)\\ {} & \hspace{1em}\times \varepsilon \left({t_{1}^{\left(2\right)}},\dots ,{t_{M}^{\left(2\right)}}\right)\Big|d{t_{1}^{\left(1\right)}}\dots d{t_{M}^{\left(1\right)}}d{t_{1}^{\left(2\right)}}\dots d{t_{M}^{\left(2\right)}}\\ {} & \le B\left(0\right){T_{n}^{-2M}}{\left({T_{n+1}}-{T_{n}}\right)^{2k}}{T_{n}^{2M-2k}}\\ {} & =B\left(0\right){\left(\frac{{T_{n+1}}}{{T_{n}}}-1\right)^{2k}}=O\left({n^{-2k}}\right),\hspace{1em}\mathrm{as}\hspace{2.5pt}n\to \infty .\end{aligned}\]
Since $k\ge 1$, it follows that
\[ {\sum \limits_{n=1}^{\infty }}\mathbb{E}{J_{ik}^{2}}\left(n\right)<\infty ,\hspace{1em}i=\overline{1,{C_{M}^{k}}},\hspace{2.5pt}k=\overline{1,M},\]
that is, $\underset{{T_{n}}\le T<{T_{n+1}}}{\sup }{J_{2}}\to 0$ a.s., as $n\to \infty $. Theorem 1 is proved.  □

3 Strong consistency of LSE

In this section we prove a theorem on the strong consistency of the LSE ${\theta _{T}}$ in the trigonometric model (1)–(3).
Theorem 2.
Let conditions $\boldsymbol{A}$ and $\boldsymbol{B}$ be satisfied. Then the LSE ${\theta _{T}}$ is a strongly consistent estimate of the parameter ${\theta ^{0}}$ in the sense that ${A_{kT}}\to {A_{k}^{0}}$, ${B_{kT}}\to {B_{k}^{0}}$, $T\left({\varphi _{lk,T}}-{\varphi _{lk}^{0}}\right)\to 0$, a.s., as $T\to \infty $, $l=\overline{1,M}$, $k=\overline{1,N}$.
Proof.
Consider a system of linear equations for ${A_{kT}}$, ${B_{kT}},$ $k=\overline{1,N}$, which is a subsystem of the system of normal equations for finding ${\theta _{T}}$:
\[ \frac{\partial {Q_{T}}\left(\theta \right)}{\partial {A_{p}}}{\bigg|_{\theta ={\theta _{T}}}}=\frac{\partial {Q_{T}}\left(\theta \right)}{\partial {B_{p}}}{\bigg|_{\theta ={\theta _{T}}}}=0,\hspace{1em}p=\overline{1,N},\]
and write it in the form
(34)
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}{\textstyle\sum \limits_{k=1}^{N}}{a_{kp}^{\left(1\right)}}{A_{kT}}+{\textstyle\sum \limits_{k=1}^{N}}{b_{kp}^{\left(1\right)}}{B_{kT}}={c_{p}^{\left(1\right)}},\hspace{1em}& p=\overline{1,N},\\ {} {\textstyle\sum \limits_{k=1}^{N}}{a_{kp}^{\left(2\right)}}{A_{kT}}+{\textstyle\sum \limits_{k=1}^{N}}{b_{kp}^{\left(2\right)}}{B_{kT}}={c_{p}^{\left(2\right)}},\hspace{1em}& p=\overline{1,N}.\end{array}\right.\]
We introduce the following notation.
(35)
\[ \begin{array}{r}\displaystyle \cos \left({\sum \limits_{l=1}^{M}}{\varphi _{lk,T}}{t_{l}}\right)={\cos _{k}}\left(t\right),\hspace{2em}\sin \left({\sum \limits_{l=1}^{M}}{\varphi _{lk,T}}{t_{l}}\right)={\sin _{k}}\left(t\right),\\ {} \displaystyle \cos \left({\sum \limits_{l=1}^{M}}{\varphi _{lk}^{0}}{t_{l}}\right)={\cos _{k}^{0}}\left(t\right),\hspace{2em}\sin \left({\sum \limits_{l=1}^{M}}{\varphi _{lk}^{0}}{t_{l}}\right)={\sin _{k}^{0}}\left(t\right).\end{array}\]
Then the coefficients of system (34) can be written as
(36)
\[ \begin{aligned}{}& {a_{kp}^{\left(1\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\cos _{k}}\left(t\right){\cos _{p}}\left(t\right)dt,\hspace{2.5pt}{a_{kp}^{\left(2\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\cos _{k}}\left(t\right){\sin _{p}}\left(t\right)dt,\\ {} & {b_{kp}^{\left(1\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\sin _{k}}\left(t\right){\cos _{p}}\left(t\right)dt,\hspace{2.5pt}{b_{kp}^{\left(2\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\sin _{k}}\left(t\right){\sin _{p}}\left(t\right)dt,\\ {} & {c_{p}^{\left(1\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }X\left(t\right){\cos _{p}}\left(t\right)dt,\hspace{2.5pt}{c_{p}^{\left(2\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }X\left(t\right){\sin _{p}}\left(t\right)dt.\end{aligned}\]
We also denote by ${o_{T}}\left(1\right)$, $T>0$, (generally speaking, different) stochastic processes converging to zero a.s., as $T\to \infty $. Using condition $\textbf{B}$ we find
(37)
\[ \begin{aligned}{}& {a_{kp}^{\left(1\right)}}={o_{T}}\left(1\right),\hspace{0.2778em}k\ne p,\hspace{0.2778em}{a_{pp}^{\left(1\right)}}=\frac{1}{2}+{o_{T}}\left(1\right);\hspace{0.2778em}{a_{kp}^{\left(2\right)}}={o_{T}}\left(1\right),\hspace{0.2778em}k,p=\overline{1,N};\\ {} & {b_{kp}^{\left(1\right)}}={a_{pk}^{\left(2\right)}}={o_{T}}\left(1\right);\hspace{0.2778em}{b_{kp}^{\left(2\right)}}={o_{T}}\left(1\right),\hspace{0.2778em}k\ne p,\hspace{0.2778em}{b_{pp}^{\left(2\right)}}=\frac{1}{2}+{o_{T}}\left(1\right),\\ {} & \hspace{2em}k,p=\overline{1,N}.\end{aligned}\]
On the other hand,
(38)
\[ {c_{p}^{\left(1\right)}}={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\varepsilon \left(t\right){\cos _{p}}\left(t\right)dt+{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }g\left(t,{\theta ^{0}}\right){\cos _{p}}\left(t\right)dt.\]
The first term of this sum is ${o_{T}}\left(1\right)$ by Theorem 1. The second term requires a more detailed study.
For fixed p,
(39)
\[ \begin{aligned}{}& {T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }g\left(t,{\theta ^{0}}\right){\cos _{p}}\left(t\right)dt={\sum \limits_{k=1}^{N}}{A_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\cos _{k}^{0}}\left(t\right){\cos _{p}}\left(t\right)dt\\ {} & \hspace{1em}+{\sum \limits_{k=1}^{N}}{B_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\sin _{k}^{0}}\left(t\right){\cos _{p}}\left(t\right)dt={\sum \nolimits_{1}}+{\sum \nolimits_{2}}.\end{aligned}\]
Obviously,
(40)
\[ \begin{aligned}{}& {\sum \nolimits_{1}}=\frac{1}{2}{\sum \limits_{k=1}^{N}}{A_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\cos \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lp,T}}+{\varphi _{lk}^{0}}\right){t_{l}}\right)dt\\ {} & \hspace{1em}+\frac{1}{2}{\sum \limits_{k=1}^{N}}{A_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\cos \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lp,T}}-{\varphi _{lk}^{0}}\right){t_{l}}\right)dt={\sum \nolimits_{11}}+{\sum \nolimits_{12}}.\end{aligned}\]
By condition $\textbf{B}$, ${\textstyle\sum _{11}}={o_{T}}\left(1\right)$, and
(41)
\[ {\sum \nolimits_{12}}=\frac{1}{2}{A_{p}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\cos \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right){t_{l}}\right)dt+{o_{T}}\left(1\right).\]
Similarly,
(42)
\[ \begin{aligned}{}& {\sum \nolimits_{2}}=\frac{1}{2}{\sum \limits_{k=1}^{N}}{B_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\sin \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lk}^{0}}+{\varphi _{lp,T}}\right){t_{l}}\right)dt\\ {} & \hspace{1em}-\frac{1}{2}{\sum \limits_{k=1}^{N}}{B_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\sin \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lp,T}}-{\varphi _{lk}^{0}}\right){t_{l}}\right)dt={\sum \nolimits_{21}}+{\sum \nolimits_{22}}.\end{aligned}\]
And again, due to condition $\textbf{B}$, ${\textstyle\sum _{21}}={o_{T}}\left(1\right)$,
(43)
\[ {\sum \nolimits_{22}}=-\frac{1}{2}{B_{p}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\sin \left({\sum \limits_{l=1}^{M}}\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right){t_{l}}\right)dt+{o_{T}}\left(1\right).\]
Set
(44)
\[ \begin{aligned}{}& {x_{lp}}=\frac{\sin T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)}{T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)},\hspace{1em}{y_{lp}}=\frac{1-\cos T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)}{T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)},\\ {} & \hspace{2em}l=\overline{1,M},\hspace{2.5pt}p=\overline{1,N},\end{aligned}\]
and note that the integrals in (41) and (43) are homogeneous polynomials in the variables ${x_{lp}}$, ${y_{lp}}$, $l=\overline{1,M}$, for which we will use the notation $\left(r\ge 3\right)$
(45)
\[ \begin{array}{r}\displaystyle {T^{-r}}\underset{{\left[0,T\right]^{r}}}{\int }\cos \left({\sum \limits_{l=1}^{r}}\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right){t_{l}}\right)d{t_{1}}\dots d{t_{r}}={C_{r}}\left({x_{lp}},{y_{lp}}\right)={C_{rp}},\\ {} \displaystyle {T^{-r}}\underset{{\left[0,T\right]^{r}}}{\int }\sin \left({\sum \limits_{l=1}^{r}}\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right){t_{l}}\right)d{t_{1}}\dots d{t_{r}}={S_{r}}\left({x_{lp}},{y_{lp}}\right)={S_{rp}}.\end{array}\]
Then
(46)
\[ \begin{aligned}{}& {\sum \nolimits_{12}}=\frac{1}{2}{A_{p}^{0}}{C_{Mp}}+{o_{T}}\left(1\right),\hspace{2em}{\sum \nolimits_{22}}=-\frac{1}{2}{B_{p}^{0}}{S_{Mp}}+{o_{T}}\left(1\right),\\ {} & {c_{p}^{\left(1\right)}}=\frac{1}{2}{A_{p}^{0}}{C_{Mp}}-\frac{1}{2}{B_{p}^{0}}{S_{Mp}}+{o_{T}}\left(1\right).\end{aligned}\]
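Behind formulas (45)–(46) is the elementary factorization ${T^{-1}}{\textstyle\int _{0}^{T}}{e^{iut}}dt=x+iy$ with x, y as in (44), so that ${C_{Mp}}+i{S_{Mp}}={\textstyle\prod _{l=1}^{M}}\left({x_{lp}}+i{y_{lp}}\right)$ (this is the recursion (59)). The sketch below is a numerical sanity check of the two-dimensional case with arbitrary test frequencies, not part of the paper's argument:

```python
import math

T = 2.0
phi1, phi2 = 0.9, 1.7     # stand-ins for the offsets phi_{lp,T} - phi_{lp}^0

def xy(p):
    """x_lp and y_lp from (44), with u = T * p."""
    u = T * p
    return math.sin(u) / u, (1.0 - math.cos(u)) / u

x1, y1 = xy(phi1)
x2, y2 = xy(phi2)

def mean2(f, n=400):
    """Midpoint approximation of T^{-2} * int_{[0,T]^2} f(t1, t2) dt."""
    h = T / n
    s = sum(f(h * (i + 0.5), h * (j + 0.5))
            for i in range(n) for j in range(n))
    return s * h * h / T**2

C2 = mean2(lambda t1, t2: math.cos(phi1 * t1 + phi2 * t2))
S2 = mean2(lambda t1, t2: math.sin(phi1 * t1 + phi2 * t2))

# The r = 2 case of the product structure behind (59):
assert abs(C2 - (x1 * x2 - y1 * y2)) < 1e-4
assert abs(S2 - (y1 * x2 + x1 * y2)) < 1e-4
```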
Besides, similarly to ${c_{p}^{\left(1\right)}}$,
(47)
\[ \begin{aligned}{}{c_{p}^{\left(2\right)}}& ={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }X\left(t\right){\sin _{p}}\left(t\right)dt\\ {} & ={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }g\left(t,{\theta ^{0}}\right){\sin _{p}}\left(t\right)dt+{o_{T}}\left(1\right)\\ {} & ={\sum \limits_{k=1}^{N}}{A_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\cos \left({\sum \limits_{l=1}^{M}}{\varphi _{lk}^{0}}{t_{l}}\right)\sin \left({\sum \limits_{l=1}^{M}}{\varphi _{lp,T}}{t_{l}}\right)dt\\ {} & \hspace{1em}+{\sum \limits_{k=1}^{N}}{B_{k}^{0}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\sin \left({\sum \limits_{l=1}^{M}}{\varphi _{lk}^{0}}{t_{l}}\right)\sin \left({\sum \limits_{l=1}^{M}}{\varphi _{lp,T}}{t_{l}}\right)dt\\ {} & =\frac{1}{2}{A_{p}^{0}}{S_{Mp}}+\frac{1}{2}{B_{p}^{0}}{C_{Mp}}+{o_{T}}\left(1\right).\end{aligned}\]
From (34), (39), (40), (46) and (47) we get for $p=\overline{1,N}$
(48)
\[ {A_{pT}}={A_{p}^{0}}{C_{Mp}}-{B_{p}^{0}}{S_{Mp}}+{o_{T}}\left(1\right),\hspace{1em}{B_{pT}}={A_{p}^{0}}{S_{Mp}}+{B_{p}^{0}}{C_{Mp}}+{o_{T}}\left(1\right).\]
From (45) and (48) it follows
(49)
\[ \left|{A_{pT}}\right|,\left|{B_{pT}}\right|\le \left|{A_{p}^{0}}\right|+\left|{B_{p}^{0}}\right|+{o_{T}}\left(1\right),\hspace{1em}p=\overline{1,N}.\]
Using the function
\[ {\Phi _{T}}\left({\theta _{1}},{\theta _{2}}\right)={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{\left(g\left(t,{\theta _{1}}\right)-g\left(t,{\theta _{2}}\right)\right)^{2}}dt,\]
from the definition of ${\theta _{T}}$ we derive
(50)
\[ \begin{aligned}{}& {Q_{T}}\left({\theta _{T}}\right)-{Q_{T}}\left({\theta ^{0}}\right)\\ {} & \hspace{1em}={\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)+2{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\varepsilon \left(t\right)\left(g\left(t,{\theta ^{0}}\right)-g\left(t,{\theta _{T}}\right)\right)dt\le 0\hspace{1em}\mathrm{a}.\mathrm{s}.\end{aligned}\]
By Theorem 1 and (49)
(51)
\[ {T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\varepsilon \left(t\right)\left(g\left(t,{\theta ^{0}}\right)-g\left(t,{\theta _{T}}\right)\right)dt\to 0\hspace{1em}\mathrm{a}.\mathrm{s}.,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Using (50) and (51) we arrive at the convergence
(52)
\[ {\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)\to 0\hspace{1em}\mathrm{a}.\mathrm{s}.,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Consider the value ${\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)$ in more detail.
(53)
\[ \begin{aligned}{}{\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)& ={T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{g^{2}}\left(t,{\theta _{T}}\right)dt+{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }{g^{2}}\left(t,{\theta ^{0}}\right)dt\\ {} & \hspace{1em}-2{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }g\left(t,{\theta _{T}}\right)g\left(t,{\theta ^{0}}\right)dt={\Phi _{1}}+{\Phi _{2}}+{\Phi _{3}}.\end{aligned}\]
Taking into account condition $\textbf{B}$ and (49) we obtain
(54)
\[ {\Phi _{1}}=\frac{1}{2}{\sum \limits_{p=1}^{N}}\left({A_{pT}^{2}}+{B_{pT}^{2}}\right)+{o_{T}}\left(1\right),\]
(55)
\[ {\Phi _{2}}=\frac{1}{2}{\sum \limits_{p=1}^{N}}\left({\left({A_{p}^{0}}\right)^{2}}+{\left({B_{p}^{0}}\right)^{2}}\right)+{o_{T}}\left(1\right),\]
(56)
\[ \begin{aligned}{}{\Phi _{3}}& =-2{\sum \limits_{p=1}^{N}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\left({A_{pT}}{A_{p}^{0}}{\cos _{p}}\left(t\right){\cos _{p}^{0}}\left(t\right)+{A_{pT}}{B_{p}^{0}}{\cos _{p}}\left(t\right){\sin _{p}^{0}}\left(t\right)\right)dt\\ {} & \hspace{1em}-2{\sum \limits_{p=1}^{N}}{T^{-M}}\underset{{\left[0,T\right]^{M}}}{\int }\left({B_{pT}}{A_{p}^{0}}{\sin _{p}}\left(t\right){\cos _{p}^{0}}\left(t\right)+{B_{pT}}{B_{p}^{0}}{\sin _{p}}\left(t\right){\sin _{p}^{0}}\left(t\right)\right)dt\\ {} & =-{\sum \limits_{p=1}^{N}}{A_{pT}}{A_{p}^{0}}{C_{Mp}}+{\sum \limits_{p=1}^{N}}{A_{pT}}{B_{p}^{0}}{S_{Mp}}-{\sum \limits_{p=1}^{N}}{B_{pT}}{A_{p}^{0}}{S_{Mp}}-{\sum \limits_{p=1}^{N}}{B_{pT}}{B_{p}^{0}}{C_{Mp}}+{o_{T}}\left(1\right).\end{aligned}\]
Now substitute formulas (48) into (54) and (56). After collecting like terms we get
(57)
\[ \begin{aligned}{}& {\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)=\frac{1}{2}{\sum \limits_{p=1}^{N}}{\left({A_{p}^{0}}{C_{Mp}}-{B_{p}^{0}}{S_{Mp}}\right)^{2}}+\frac{1}{2}{\sum \limits_{p=1}^{N}}{\left({A_{p}^{0}}{S_{Mp}}+{B_{p}^{0}}{C_{Mp}}\right)^{2}}\\ {} & \hspace{2em}+\frac{1}{2}{\sum \limits_{p=1}^{N}}\left({\left({A_{p}^{0}}\right)^{2}}+{\left({B_{p}^{0}}\right)^{2}}\right)-{\sum \limits_{p=1}^{N}}\left({A_{p}^{0}}{C_{Mp}}-{B_{p}^{0}}{S_{Mp}}\right){A_{p}^{0}}{C_{Mp}}\\ {} & \hspace{2em}+{\sum \limits_{p=1}^{N}}\left({A_{p}^{0}}{C_{Mp}}-{B_{p}^{0}}{S_{Mp}}\right){B_{p}^{0}}{S_{Mp}}-{\sum \limits_{p=1}^{N}}\left({A_{p}^{0}}{S_{Mp}}+{B_{p}^{0}}{C_{Mp}}\right){A_{p}^{0}}{S_{Mp}}\\ {} & \hspace{2em}-{\sum \limits_{p=1}^{N}}\left({A_{p}^{0}}{S_{Mp}}+{B_{p}^{0}}{C_{Mp}}\right){B_{p}^{0}}{C_{Mp}}+{o_{T}}\left(1\right)\\ {} & \hspace{1em}=\frac{1}{2}{\sum \limits_{p=1}^{N}}\left({\left({A_{p}^{0}}\right)^{2}}+{\left({B_{p}^{0}}\right)^{2}}\right)\left(1-{C_{Mp}^{2}}-{S_{Mp}^{2}}\right)+{o_{T}}\left(1\right).\end{aligned}\]
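The collection of like terms leading to the last line of (57) is elementary but easy to get wrong; it can be double-checked by brute force on random test values (an illustration only, not part of the proof):

```python
import random

random.seed(1)
for _ in range(100):
    # Random stand-ins for A_p^0, B_p^0, C_Mp, S_Mp (a single index p).
    A0, B0, C, S = (random.uniform(-2.0, 2.0) for _ in range(4))
    # The seven sums appearing in (57), before simplification:
    lhs = (0.5 * (A0 * C - B0 * S) ** 2 + 0.5 * (A0 * S + B0 * C) ** 2
           + 0.5 * (A0 ** 2 + B0 ** 2)
           - (A0 * C - B0 * S) * A0 * C
           + (A0 * C - B0 * S) * B0 * S
           - (A0 * S + B0 * C) * A0 * S
           - (A0 * S + B0 * C) * B0 * C)
    # The simplified last line of (57):
    rhs = 0.5 * (A0 ** 2 + B0 ** 2) * (1.0 - C ** 2 - S ** 2)
    assert abs(lhs - rhs) < 1e-9
```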
The last sum can be written in a more convenient form.
Lemma 1.
For any $M\ge 3$ and $p=\overline{1,N}$
(58)
\[ {C_{Mp}^{2}}\left({x_{lp}},{y_{lp}}\right)+{S_{Mp}^{2}}\left({x_{lp}},{y_{lp}}\right)={\prod \limits_{l=1}^{M}}\left({x_{lp}^{2}}+{y_{lp}^{2}}\right).\]
Proof.
For $M=3$ it is easy to calculate that
\[ {C_{3p}}={x_{1p}}{x_{2p}}{x_{3p}}-{y_{1p}}{y_{2p}}{x_{3p}}-{y_{1p}}{x_{2p}}{y_{3p}}-{x_{1p}}{y_{2p}}{y_{3p}},\]
\[ {S_{3p}}={y_{1p}}{x_{2p}}{x_{3p}}+{x_{1p}}{y_{2p}}{x_{3p}}+{x_{1p}}{x_{2p}}{y_{3p}}-{y_{1p}}{y_{2p}}{y_{3p}},\]
and
\[ {C_{3p}^{2}}+{S_{3p}^{2}}=\left({x_{1p}^{2}}+{y_{1p}^{2}}\right)\left({x_{2p}^{2}}+{y_{2p}^{2}}\right)\left({x_{3p}^{2}}+{y_{3p}^{2}}\right).\]
Assume that (58) holds for some $M\ge 3$. We show that the analogous identity holds for $M+1$ as well. Using the obvious recursion formulas
(59)
\[ \begin{array}{r}\displaystyle {C_{M+1,p}}={C_{Mp}}{x_{M+1,p}}-{S_{Mp}}{y_{M+1,p}},\\ {} \displaystyle {S_{M+1,p}}={S_{Mp}}{x_{M+1,p}}+{C_{Mp}}{y_{M+1,p}},\end{array}\]
we find
\[\begin{aligned}{}& {C_{M+1,p}^{2}}+{S_{M+1,p}^{2}}\\ {} & \hspace{1em}={\left({C_{Mp}}{x_{M+1,p}}-{S_{Mp}}{y_{M+1,p}}\right)^{2}}+{\left({S_{Mp}}{x_{M+1,p}}+{C_{Mp}}{y_{M+1,p}}\right)^{2}}\\ {} & \hspace{1em}=\left({C_{Mp}^{2}}+{S_{Mp}^{2}}\right)\left({x_{M+1,p}^{2}}+{y_{M+1,p}^{2}}\right)={\prod \limits_{l=1}^{M+1}}\left({x_{lp}^{2}}+{y_{lp}^{2}}\right).\end{aligned}\]
 □
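A compact way to see Lemma 1: by (59), ${C_{Mp}}+i{S_{Mp}}={\textstyle\prod _{l=1}^{M}}\left({x_{lp}}+i{y_{lp}}\right)$, so (58) is just the multiplicativity of the squared modulus, ${\left|zw\right|^{2}}={\left|z\right|^{2}}{\left|w\right|^{2}}$. A quick check on random test values (an illustration, not part of the original text):

```python
import random

random.seed(7)
M = 5
xs = [random.uniform(-1.0, 1.0) for _ in range(M)]
ys = [random.uniform(-1.0, 1.0) for _ in range(M)]

# Build C_M and S_M through the recursion (59), starting from C_1 = x_1, S_1 = y_1.
C, S = xs[0], ys[0]
for x, y in zip(xs[1:], ys[1:]):
    C, S = C * x - S * y, S * x + C * y

# Identity (58): C_M^2 + S_M^2 factors as a product, because C_M + i S_M
# is the product of the complex numbers x_l + i y_l.
prod = 1.0
for x, y in zip(xs, ys):
    prod *= x * x + y * y

assert abs(C * C + S * S - prod) < 1e-12
```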
Note that according to the formulas (44)
(60)
\[ {x_{lp}^{2}}+{y_{lp}^{2}}={\left(\frac{\sin \left(T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)/2\right)}{T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)/2}\right)^{2}},\hspace{1em}l=\overline{1,M},\hspace{0.2778em}p=\overline{1,N}.\]
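Identity (60) amounts to ${x_{lp}^{2}}+{y_{lp}^{2}}=\left(2-2\cos u\right)/{u^{2}}$ combined with the half-angle formula $1-\cos u=2{\sin ^{2}}\left(u/2\right)$, where $u=T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)$. A direct check on a few illustrative values of u:

```python
import math

for u in (0.3, 1.0, 2.5, -4.2):     # u stands for T * (phi_{lp,T} - phi_{lp}^0)
    x = math.sin(u) / u
    y = (1.0 - math.cos(u)) / u
    half = math.sin(u / 2.0) / (u / 2.0)
    # (60): x^2 + y^2 = (2 - 2 cos u) / u^2 = (sin(u/2) / (u/2))^2
    assert abs(x * x + y * y - half * half) < 1e-12
```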
As follows from (57), Lemma 1, and (60)
(61)
\[ \begin{aligned}{}{\Phi _{T}}\left({\theta _{T}},{\theta ^{0}}\right)& =\frac{1}{2}{\sum \limits_{p=1}^{N}}\left({\left({A_{p}^{0}}\right)^{2}}+{\left({B_{p}^{0}}\right)^{2}}\right)\\ {} & \hspace{1em}\times \left(1-{\prod \limits_{l=1}^{M}}{\left(\frac{\sin \left(T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)/2\right)}{T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)/2}\right)^{2}}\right)+{o_{T}}\left(1\right).\end{aligned}\]
Together with (52) this means that
(62)
\[ T\left({\varphi _{lp,T}}-{\varphi _{lp}^{0}}\right)\to 0\hspace{1em}\mathrm{a}.\mathrm{s}.\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{0.2778em}l=\overline{1,M},\hspace{0.2778em}p=\overline{1,N}.\]
From (62) and (44) it also follows that
(63)
\[ {x_{lp}}\to 1,\hspace{0.2778em}{y_{lp}}\to 0\hspace{1em}\mathrm{a}.\mathrm{s}.,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{0.2778em}l=\overline{1,M},\hspace{0.2778em}p=\overline{1,N}.\]
In turn, from (63) and the recursion formulas (59) it follows that ${C_{Mp}}\to 1$, ${S_{Mp}}\to 0$ a.s., as $T\to \infty $, $p=\overline{1,N}$.
Finally, from (48) we get
\[ {A_{pT}}\to {A_{p}^{0}},\hspace{0.2778em}{B_{pT}}\to {B_{p}^{0}},\hspace{1em}\mathrm{a}.\mathrm{s}.,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{0.2778em}p=\overline{1,N},\]
and Theorem 2 is proved.  □

References

[1] 
Brillinger, D.R.: Regression for randomly sampled spatial series: the trigonometric case. J. Appl. Probab. 23A, 275–289 (1986). MR0803178. https://doi.org/10.1017/s0021900200117139
[2] 
Ivanov, A.V.: Consistency of the least squares estimator of the amplitudes and angular frequencies of a sum of harmonic oscillations in models with long-range dependence. Theory Probab. Math. Stat. 80, 61–69 (2010). MR2541952. https://doi.org/10.1090/S0094-9000-2010-00794-0
[3] 
Ivanov, A.V., Leonenko, N.N.: Statistical Analysis of Random Fields. Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht (1989). With a preface by A.V. Skorohod; translated from the Russian by A.I. Kochubinskii. MR1009786. https://doi.org/10.1007/978-94-009-1183-3
[4] 
Ivanov, A.V., Lymar, O.V.: The asymptotic normality for the least squares estimator of parameters in a two dimensional sinusoidal model of observations. Theory Probab. Math. Stat. 100, 107–131 (2020). MR3992995. https://doi.org/10.1090/tpms/1100
[5] 
Ivanov, A.V., Malyar, O.V.: Consistency of the least squares estimators of parameters in the texture surface sinusoidal model. Theory Probab. Math. Stat. 97, 73–84 (2018). MR3746000. https://doi.org/10.1090/tpms/1049
[6] 
Ivanov, A.V., Savych, I.M.: On the least squares estimator asymptotic normality of the multivariate symmetric textured surface parameters. Theory Probab. Math. Stat. 105, 151–169 (2021). MR4421369. https://doi.org/10.1090/tpms/1161
[7] 
Ivanov, A.V., Leonenko, N.N., Ruiz-Medina, M.D., Zhurakovsky, B.M.: Estimation of harmonic component in regression with cyclically dependent errors. Statistics 49, 156–186 (2015). MR3304373. https://doi.org/10.1080/02331888.2013.864656
[8] 
Walker, A.M.: On the estimation of a harmonic component in a time series with stationary dependent residuals. Adv. Appl. Probab. 5, 217–241 (1973). MR0336943. https://doi.org/10.2307/1426034
[9] 
Yadrenko, M.I.: Spectral Theory of Random Fields. Translations Series in Mathematics and Engineering. Springer (1983). MR0697386
Copyright
© 2023 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Multivariate trigonometric model; homogeneous and isotropic strongly dependent Gaussian random field; least squares estimate in the Walker–Brillinger sense; strong consistency

MSC2010
62J02, 62J99
