1 Introduction
The theory of large deviations in mathematical statistics and in the statistics of stochastic processes deals with the asymptotic behaviour of the tails of the distributions of parametric and nonparametric statistical estimators. For parametric estimators we refer to the monograph of Ibragimov and Has’minskii [6], where an exponential rate of convergence of the probabilities of large deviations of the maximum likelihood estimator was obtained. This result gave rise to a large number of publications on large deviations of statistical estimators.
In what follows we deal with least squares estimators (l.s.e.’s) of the parameters of a nonlinear regression model. In the paper of Ivanov [8] a power rate of decrease was established for the probabilities of large deviations of the l.s.e. of a scalar parameter in a nonlinear regression model with i.i.d. observation errors possessing moments of finite order. Prakasa Rao [13] obtained a similar result with an exponential rate of decrease for Gaussian nonlinear regression.
Sieders and Dzhaparidze [15] proved a general result (Theorem 2.1 there) on the probabilities of large deviations of M-estimators based on data of arbitrary structure, which generalizes the result of [6] mentioned above, and applied it to the l.s.e. of the parameters of nonlinear regression with pre-Gaussian and sub-Gaussian i.i.d. observation errors (Theorems 3.1 and 3.2 in [15]). Further results in this direction were obtained by Ivanov [9].
The results on probabilities of large deviations of an l.s.e. in a nonlinear regression model with correlated observations can be found in the works of Ivanov and Leonenko [11], Prakasa Rao [14], Hu [4], Yang and Hu [16], Huang et al. [5].
Upper exponential bounds for probabilities of large deviations of an l.s.e. for a parameter of the nonlinear regression in discrete-time models with a jointly strictly sub-Gaussian (j.s.s.-G.) random noise were obtained in Ivanov [10]. In the present paper we extend some results of [10] to continuous-time observation models.
Consider the regression model
(1)
\[ X(t)=a(t,\hspace{0.1667em}\theta )+\varepsilon (t),\hspace{1em}t\in [0,T],\]
where $a(t,\hspace{0.1667em}\tau )$, $(t,\hspace{0.1667em}\tau )\in {\mathbb{R}_{+}}\times {\varTheta }^{c}$, is a continuous function, a true parameter value $\theta ={({\theta _{1}},\dots ,{\theta _{q}})}^{\prime }$ belongs to an open bounded convex set $\varTheta \subset {\mathbb{R}}^{q}$ and a random noise $\varepsilon =\{\varepsilon (t),\hspace{2.5pt}t\in \mathbb{R}\}$ satisfies the following condition.
N1. ε is a mean-square and almost sure (a.s.) continuous stochastic process defined on the probability space $(\varOmega ,\hspace{2.5pt}\mathfrak{F},\hspace{2.5pt}P)$, $E\varepsilon (t)=0$, $t\in \mathbb{R}$.
We shall write $\int ={\int _{0}^{T}}$.
Definition 1.
Any random vector ${\theta _{T}}={({\theta _{1T}},\hspace{0.1667em}\dots ,\hspace{0.1667em}{\theta _{qT}})}^{\prime }\in {\varTheta }^{c}$ having the property
\[ {Q_{T}}({\theta _{T}})=\underset{\tau \in {\varTheta }^{c}}{\inf }{Q_{T}}(\tau ),\hspace{0.2222em}{Q_{T}}(\tau )=\int {\big[X(t)-a(t,\hspace{0.1667em}\tau )\big]}^{2}dt,\]
is said to be the l.s.e. for the unknown parameter θ, obtained from the observations $\{X(t),\hspace{2.5pt}t\in [0,T]\}$.

Under the assumptions introduced above there exists at least one such random vector ${\theta _{T}}$ [12].
In the asymptotic theory of nonlinear regression, in the problem of normal approximation of the distribution of the l.s.e., the difference ${\theta _{T}}-\theta $ is normalized by the diagonal matrix [11]
\[ {d_{T}}(\theta )=\operatorname{diag}\big({d_{iT}}(\theta ),\hspace{2.5pt}i=\overline{1,q}\big),\hspace{2em}{d_{iT}^{2}}(\theta )=\int \hspace{0.2778em}{\bigg(\frac{\partial }{\partial {\theta _{i}}}a(t,\hspace{0.1667em}\theta )\bigg)}^{2}dt.\]
Further it is supposed that the function $a(t,\hspace{0.1667em}\cdot )\in {C}^{1}(\varTheta )$ for any $t\ge 0$.

The paper is organized in the following way. In Section 2 an upper exponential bound is obtained for large deviations of ${d_{T}}(\theta )({\theta _{T}}-\theta )$ in the regression model (1) with a j.s.s.-G. random noise ε. In Section 3 the results of Section 2 are applied to a stationary j.s.s.-G. noise ε. Section 4 contains examples of regression functions a and noise ε satisfying the conditions of our theorems.
2 Large deviations in models with a jointly strictly sub-Gaussian noise
Definition 2.
A random vector $\xi ={({\xi _{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{\xi _{n}})}^{\prime }\in {\mathbb{R}}^{n}$ is called strictly sub-Gaussian (s.s.-G.) if for any $\varDelta ={({\varDelta _{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{\varDelta _{n}})}^{\prime }\in {\mathbb{R}}^{n}$
\[ E\exp \big\{\langle \xi ,\varDelta \rangle \big\}\le \exp \bigg\{\frac{1}{2}\langle B\varDelta ,\varDelta \rangle \bigg\},\]
where $\langle \xi ,\varDelta \rangle ={\sum _{i=1}^{n}}\hspace{0.2778em}{\xi _{i}}{\varDelta _{i}}$, $B={(B(i,j))_{i,j=1}^{n}}$ is the covariance matrix of ξ, that is $B(i,j)=E{\xi _{i}}{\xi _{j}}$, $i,j=\overline{1,n}$, $\langle B\varDelta ,\varDelta \rangle ={\sum _{i,j=1}^{n}}\hspace{0.2778em}B(i,j){\varDelta _{i}}{\varDelta _{j}}$.

Taking $n=1$ in Definition 2, we obtain the definition of an s.s.-G. random variable (r.v.) ξ.
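For instance, a centered Gaussian vector satisfies Definition 2 with equality, so Gaussian random vectors (and hence Gaussian processes, see Definition 3 below) are strictly sub-Gaussian:
\[ E\exp \big\{\langle \xi ,\varDelta \rangle \big\}=\exp \bigg\{\frac{1}{2}\langle B\varDelta ,\varDelta \rangle \bigg\},\hspace{1em}\xi \sim N(0,B).\]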
Definition 3.
A stochastic process $\xi =\{\xi (t),\hspace{2.5pt}t\in \mathbb{R}\}$ is said to be jointly strictly sub-Gaussian (j.s.s.-G.) if for any $n\ge 1$ and any ${t_{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{t_{n}}\in \mathbb{R}$ the random vector ${\xi _{n}}={(\xi ({t_{1}}),\hspace{2.5pt}\dots ,\hspace{2.5pt}\xi ({t_{n}}))}^{\prime }$ is s.s.-G.
These definitions and more detailed information on sub-Gaussian r.v.’s, vectors, and stochastic processes can be found in the monograph [1] by Buldygin and Kozachenko.
Concerning the random noise ε in the model (1) we introduce the following assumption.
N2(i) ε is a j.s.s.-G. stochastic process with the covariance function $B(t,s)=E\varepsilon (t)\varepsilon (s)$, $t,s\in \mathbb{R}$.
(ii) For any $T>0$ and any $\varDelta (\cdot )\in {L_{2}}([0,T])$
(2)
\[ {\langle B\varDelta ,\varDelta \rangle _{T}}=\int \int \hspace{0.2778em}B(t,s)\varDelta (t)\varDelta (s)dtds\le {d_{0}}\| \varDelta {\| _{T}^{2}}\]
for some constant ${d_{0}}>0$, where $\| \varDelta {\| _{T}}={(\int \hspace{0.2778em}{\varDelta }^{2}(t)dt)}^{\frac{1}{2}}$.

For a fixed T the exact bound in (2) is $\| B{\| _{T}}$, where $\| B{\| _{T}}$ is the norm of the self-adjoint positive semidefinite integral operator B with kernel $B(t,s)$ in ${L_{2}}([0,T])$. Note that $\| B{\| _{T}}$ is a nondecreasing function of $T>0$, so the limit ${\lim \nolimits_{T\to \infty }}\| B{\| _{T}}\le {d_{0}}$ exists if (2) is fulfilled.
Example 2.1.
Assume the covariance function $B(t,s)$ is such that
\[ \text{1)}\hspace{1em}{b_{1}^{2}}={\underset{0}{\overset{\infty }{\int }}}{\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}{B}^{2}(t,s)dtds<\infty \hspace{2em}\text{or 2)}\hspace{1em}{b_{2}}=\underset{t\in {\mathbb{R}_{+}}}{\sup }\hspace{0.2778em}{\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}\big|B(t,s)\big|ds<\infty .\]
Then, using condition 1), the Cauchy–Schwarz inequality, and the Fubini theorem, we get
\[\begin{aligned}{\langle B\varDelta ,\varDelta \rangle _{T}}& =\int \bigg(\int \hspace{0.2778em}B(t,s)\varDelta (s)ds\bigg)\varDelta (t)dt\\{} & \le {\bigg(\int {\bigg(\int \hspace{0.2778em}B(t,s)\varDelta (s)ds\bigg)}^{2}dt\bigg)}^{\frac{1}{2}}\| \varDelta {\| _{T}}\le {b_{1}}\| \varDelta {\| _{T}^{2}},\end{aligned}\]
and we can take ${d_{0}}={b_{1}}$. On the other hand, under condition 2),
\[ {\langle B\varDelta ,\varDelta \rangle _{T}}\le \int \int \hspace{0.2778em}\big|B(t,s)\big|{\varDelta }^{2}(t)dtds\le {b_{2}}\| \varDelta {\| _{T}^{2}},\]
and we can take ${d_{0}}={b_{2}}$.

Let $\varDelta (t),\hspace{2.5pt}t\in {\mathbb{R}_{+}}$, be a continuous function. Then condition N1 implies the existence of the integral
(3)
\[ I(T)=\int \hspace{0.2778em}\varDelta (t)\varepsilon (t)dt,\]
determined for almost all paths of the process $\varepsilon (t),\hspace{2.5pt}t\in [0,T]$, as a Riemann integral. Consider partitions $r(n)$
\[ 0={t_{0}^{(n)}}<{t_{1}^{(n)}}<\cdots <{t_{n}^{(n)}}=T\]
of the interval $[0,T]$ such that ${\max _{1\le k\le n}}({t_{k}^{(n)}}-{t_{k-1}^{(n)}})\to 0$, as $n\to \infty $, and the corresponding integral sums
\[ {I_{n}}(T)={\sum \limits_{k=1}^{n}}\hspace{0.1667em}{u_{k}^{(n)}}\varepsilon \big({t_{k}^{(n)}}\big),\hspace{1em}{u_{k}^{(n)}}=\varDelta \big({t_{k}^{(n)}}\big)\big({t_{k}^{(n)}}-{t_{k-1}^{(n)}}\big).\]
Then
(4)
\[ {I_{n}}(T)\to I(T)\hspace{1em}\text{a.s., as}\hspace{2.5pt}n\to \infty .\]
It is obvious also that
(5)
\[ E{I_{n}^{2}}(T)\to E{I}^{2}(T),\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]

Lemma 1.
If conditions N1 and N2(i) are fulfilled, then for any $\lambda \in \mathbb{R}$
\[ E\exp \big\{\lambda I(T)\big\}\le \exp \bigg\{\frac{1}{2}{\lambda }^{2}E{I}^{2}(T)\bigg\}.\]
Proof.
Since by N2(i) the process $\varepsilon (t)$, $t\in [0,T]$, is j.s.s.-G., Definitions 2 and 3 imply that for any $n\ge 1$, any ${t_{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{t_{n}}\in [0,T]$, ${u_{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{u_{n}}\in \mathbb{R}$, and $\lambda \in \mathbb{R}$,
\[ E\exp \Bigg\{\lambda {\sum \limits_{k=1}^{n}}\hspace{0.1667em}{u_{k}}\varepsilon ({t_{k}})\Bigg\}\le \exp \Bigg\{\frac{1}{2}{\lambda }^{2}{\sum \limits_{i,j=1}^{n}}\hspace{0.1667em}B({t_{i}},\hspace{0.1667em}{t_{j}}){u_{i}}{u_{j}}\Bigg\}.\]
Taking ${t_{k}}={t_{k}^{(n)}}$, ${u_{k}}={u_{k}^{(n)}}$, $k=\overline{1,n}$, we obtain
\[ E\exp \big\{\lambda {I_{n}}(T)\big\}\le \exp \bigg\{\frac{1}{2}{\lambda }^{2}E{I_{n}^{2}}(T)\bigg\}.\]
Due to (4) and (5) by the Fatou lemma (see, for example, [3])
\[\begin{aligned}E\exp \big\{\lambda I(T)\big\}& =E\underset{n\to \infty }{\lim }\hspace{0.1667em}\exp \big\{\lambda {I_{n}}(T)\big\}\le \underset{n\to \infty }{\liminf }\hspace{0.1667em}E\exp \big\{\lambda {I_{n}}(T)\big\}\\{} & \le \underset{n\to \infty }{\lim }\hspace{0.1667em}\exp \bigg\{\frac{1}{2}{\lambda }^{2}E{I_{n}^{2}}(T)\bigg\}=\exp \bigg\{\frac{1}{2}{\lambda }^{2}E{I}^{2}(T)\bigg\}.\end{aligned}\]
□

Note also that, by the Fubini theorem, $E{I}^{2}(T)={\langle B\varDelta ,\varDelta \rangle _{T}}$. The following statement on the exponential bound for the distribution tails of the integrals (3) plays an important role in the subsequent proofs.
Lemma 2.
If conditions N1 and N2 are fulfilled, then for any $x>0$
(6)
\[ P\big\{I(T)\ge x\big\}\le \exp \bigg\{-\frac{{x}^{2}}{2{d_{0}}\| \varDelta {\| _{T}^{2}}}\bigg\},\hspace{2em}P\big\{I(T)\le -x\big\}\le \exp \bigg\{-\frac{{x}^{2}}{2{d_{0}}\| \varDelta {\| _{T}^{2}}}\bigg\},\hspace{2em}P\big\{\big|I(T)\big|\ge x\big\}\le 2\exp \bigg\{-\frac{{x}^{2}}{2{d_{0}}\| \varDelta {\| _{T}^{2}}}\bigg\}.\]
Proof.
The proof is obvious (see, for example, [1]). For any $x>0$, $\lambda >0$, by the Chebyshev–Markov inequality, Lemma 1, and (2),
(7)
\[ P\big\{I(T)\ge x\big\}\le \exp \{-\lambda x\}\exp \bigg\{\frac{1}{2}{\lambda }^{2}{\langle B\varDelta ,\varDelta \rangle _{T}}\bigg\}\le \exp \bigg\{\frac{1}{2}{\lambda }^{2}{d_{0}}\| \varDelta {\| _{T}^{2}}-\lambda x\bigg\}.\]
Minimization of the right-hand side of (7) in λ (the minimum is attained at $\lambda =x{({d_{0}}\| \varDelta {\| _{T}^{2}})}^{-1}$) gives the first inequality in (6). The proof of the second inequality is similar. The third inequality follows from the previous ones. □

We need some notation to formulate conditions on the regression function $a(t,\hspace{0.1667em}\theta )$ using the approach of the paper [15] (see also [9, 11]). Write
\[ {U_{T}}(\theta )={d_{T}}(\theta )\big({\varTheta }^{c}-\theta \big),\hspace{2em}{\varGamma _{T,\theta ,R}}={U_{T}}(\theta )\cap \{u\hspace{0.1667em}:\hspace{0.1667em}R\le \| u\| \le R+1\},\]
$u={({u_{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{u_{q}})}^{\prime }\in {\mathbb{R}}^{q}$. Denote by G the family of all functions $g={g_{T}}(R)$, $T>0$, $R>0$, having the following properties:
1) for fixed T, ${g_{T}}(R)\uparrow \infty $, as $R\to \infty $;
2) for any $r>0$,
\[ {R}^{r}\exp \big\{-{g_{T}}(R)\big\}\to 0,\hspace{1em}\text{as}\hspace{2.5pt}R,\hspace{0.1667em}T\to \infty .\]
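For instance (this is the family that appears in the proof of Theorem 2 below), the functions ${g_{T}}(R)=c{R}^{2}$, $c>0$, which do not depend on T, belong to G:
\[ c{R}^{2}\uparrow \infty \hspace{1em}\text{as}\hspace{2.5pt}R\to \infty ,\hspace{2em}{R}^{r}\exp \big\{-c{R}^{2}\big\}\to 0\hspace{1em}\text{as}\hspace{2.5pt}R,T\to \infty ,\hspace{2.5pt}\text{for any}\hspace{2.5pt}r>0.\]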
Denote by $\gamma (R)$ polynomials in R (possibly different in different formulas) whose coefficients do not depend on $T,\hspace{2.5pt}\theta ,\hspace{2.5pt}u,\hspace{2.5pt}v$. Set also
\[\begin{aligned}\varDelta (t,\hspace{0.1667em}u)& =a\big(t,\hspace{0.1667em}\theta +{d_{T}^{-1}}(\theta )u\big)-a(t,\hspace{0.1667em}\theta ),\hspace{1em}t\in [0,T],\\{} {\varPhi _{T}}(u,v)& =\int {\big(\varDelta (t,\hspace{0.1667em}u)-\varDelta (t,\hspace{0.1667em}v)\big)}^{2}dt,\hspace{1em}u,v\in {U_{T}}(\theta ).\end{aligned}\]
Assume the existence of a function $g\in G$, constants $\delta \in (0,\frac{1}{2})$, $\varkappa >0$, $\rho \in (0,1]$ and polynomials $\gamma (R)$ such that for sufficiently large $T,\hspace{2.5pt}R$ (we will write $T>{T_{0}}$, $R>{R_{0}}$) the following conditions are fulfilled.
R1(i) For any $u,v\in {\varGamma _{T,\theta ,R}}$ such that $\| u-v\| \le \varkappa $
(8)
\[ {\varPhi _{T}}(u,v)\le \gamma (R)\| u-v{\| }^{2\rho };\]
(ii) for any $u\in {\varGamma _{T,\theta ,R}}$ ${\varPhi _{T}}(u,0)\le \gamma (R)$.
R2. For any $u\in {\varGamma _{T,\theta ,R}}$
(9)
\[ {\varPhi _{T}}(u,0)\ge 2{d_{0}}{\delta }^{-2}{g_{T}}(R).\]
Theorem 1.
If conditions N1, N2, R1 and R2 are fulfilled, then there exist constants ${B_{0}}$, ${b_{0}}>0$ such that for $T>{T_{0}}$, $R>{R_{0}}$
(10)
\[ P\big\{\big\| {d_{T}}(\theta )({\theta _{T}}-\theta )\big\| \ge R\big\}\le {B_{0}}\exp \big\{-{b_{0}}{g_{T}}(R)\big\},\]
where for any $\beta >0$ the constant ${B_{0}}$ can be chosen so that

Proof.
Set
\[ I(T,\hspace{0.1667em}u)=\int \hspace{0.2778em}\varDelta (t,\hspace{0.1667em}u)\varepsilon (t)dt,\hspace{2em}{\zeta _{T}}(u)=I(T,\hspace{0.1667em}u)-\frac{1}{2}{\varPhi _{T}}(u,0).\]
To prove the theorem it is sufficient to check the fulfilment of assumptions (M1) and (M2) of Theorem 2.1 in [15], reformulated in a manner similar to that used in the proof of Theorem 3.1, ibid.:
(M1) for any $m>0$ and $u,v\in {\varGamma _{T,\theta ,R}}$ such that $\| u-v\| \le \varkappa $,
(12)
\[ E{\big|{\zeta _{T}}(u)-{\zeta _{T}}(v)\big|}^{m}\le \gamma (R)\| u-v{\| }^{\rho m};\]
(M2) for any $u\in {\varGamma _{T,\theta ,R}}$
(13)
\[ P\bigg\{{\zeta _{T}}(u)-{\zeta _{T}}(0)\ge -\bigg(\frac{1}{2}-\delta \bigg){\varPhi _{T}}(u,0)\bigg\}\le \exp \big\{-{g_{T}}(R)\big\}.\]
From the first inequality in (6) of Lemma 2 for $\varDelta (t)=\varDelta (t,\hspace{0.1667em}u)$, $x=\delta {\varPhi _{T}}(u,0)$, condition R2, taking into account that ${\zeta _{T}}(0)=0$ in our particular case, we obtain
\[\begin{aligned}P\bigg\{{\zeta _{T}}(u)-{\zeta _{T}}(0)\ge -\bigg(\frac{1}{2}-\delta \bigg){\varPhi _{T}}(u,0)\bigg\}& =P\big\{I(T,\hspace{0.1667em}u)\ge \delta {\varPhi _{T}}(u,0)\big\}\\{} & \le \exp \big\{-{\delta }^{2}{(2{d_{0}})}^{-1}{\varPhi _{T}}(u,0)\big\}\\{} & \le \exp \big\{-{g_{T}}(R)\big\},\end{aligned}\]
i.e. (13) is true.

On the other hand,
(14)
\[ E{\big|{\zeta _{T}}(u)-{\zeta _{T}}(v)\big|}^{m}\le \max \big(1,{2}^{m-1}\big)\cdot \big(E{\big|I(T,\hspace{0.1667em}u)-I(T,\hspace{0.1667em}v)\big|}^{m}+{2}^{-m}{\big|{\varPhi _{T}}(u,0)-{\varPhi _{T}}(v,0)\big|}^{m}\big),\]
and according to R1 we obtain the bound
(15)
\[\begin{aligned}\big|{\varPhi _{T}}(u,0)-{\varPhi _{T}}(v,0)\big|& \le \int \hspace{0.2778em}\big|\varDelta (t,\hspace{0.1667em}u)-\varDelta (t,\hspace{0.1667em}v)\big|\cdot \big|\varDelta (t,\hspace{0.1667em}u)+\varDelta (t,\hspace{0.1667em}v)\big|dt\\{} & \le {\varPhi _{T}^{\frac{1}{2}}}(u,v)\cdot \big({\varPhi _{T}^{\frac{1}{2}}}(u,0)+{\varPhi _{T}^{\frac{1}{2}}}(v,0)\big)\\{} & \le 2{\big(\gamma (R)\big)}^{\frac{1}{2}}\| u-v{\| }^{\rho }{\big(\gamma (R)\big)}^{\frac{1}{2}}\\{} & \le \big(\gamma (R)+\gamma (R)\big)\| u-v{\| }^{\rho }=\gamma (R)\| u-v{\| }^{\rho }\end{aligned}\]
(the polynomials $\gamma (R)$ are different in the last two lines).

By the formula for the moments of a nonnegative r.v. (see, for example, [2] and compare with [4]) and the third inequality of Lemma 2 applied to $\varDelta (t)=\varDelta (t,\hspace{0.1667em}u)-\varDelta (t,\hspace{0.1667em}v)$, $t\in [0,T]$, it holds
(16)
\[ \begin{aligned}E{\big|I(T;u,v)\big|}^{m}& =m{\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}{x}^{m-1}P\big\{\big|I(T;u,v)\big|\ge x\big\}dx\\{} & \le 2m{\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}{x}^{m-1}\exp \bigg\{-\frac{{x}^{2}}{2{d_{0}}{\varPhi _{T}}(u,v)}\bigg\}dx\\{} & =\sqrt{2\pi }m{d_{0}^{\frac{m}{2}}}{\varPhi _{T}^{\frac{m}{2}}}(u,v)E|Z{|}^{m-1},\end{aligned}\]
where $I(T;u,v)=I(T,\hspace{0.1667em}u)-I(T,\hspace{0.1667em}v)$, Z is a standard Gaussian r.v., and
(17)
\[ E|Z{|}^{m-1}={\pi }^{-\frac{1}{2}}{2}^{\frac{m-1}{2}}\varGamma \bigg(\frac{m}{2}\bigg),\hspace{1em}m>0.\]
Relations (16), (17), and (8) lead to the bound
(18)
\[ E{\big|I(T;u,v)\big|}^{m}\le {2}^{\frac{m}{2}}m\varGamma \bigg(\frac{m}{2}\bigg){d_{0}^{\frac{m}{2}}}{\varPhi _{T}^{\frac{m}{2}}}(u,v)\le \gamma (R)\| u-v{\| }^{\rho m}.\]
From (14), (15), and (18) it follows (12). □

Suppose there exist a diagonal matrix ${s_{T}}=\operatorname{diag}({s_{iT}},\hspace{2.5pt}i=\overline{1,q})$ with elements that do not depend on $\tau \in \varTheta $, and constants $0<{\underline{c}_{i}}<{\overline{c}_{i}}<\infty $, $i=\overline{1,q}$, such that uniformly in $\tau \in \varTheta $ for $T>{T_{0}}$
(19)
\[ {\underline{c}_{i}}<{s_{iT}^{-1}}{d_{iT}}(\tau )<{\overline{c}_{i}},\hspace{1em}i=\overline{1,q}.\]
Then instead of the matrix ${d_{T}}(\theta )$ (at least in the framework of the topic of this paper) it is possible to consider, without loss of generality, the normalizing matrix ${s_{T}}$.

The next condition is more restrictive than R1 and R2, however it is simpler due to requirement (19).
R3. There exist numbers $0<{c_{0}}<{c_{1}}<\infty $ such that for any $u,v\in {U_{T}}(\theta )={s_{T}}({\varTheta }^{c}-\theta )$ and $T>{T_{0}}$,
(20)
\[ {c_{0}}\| u-v{\| }^{2}\le {\varPhi _{T}}(u,v)\le {c_{1}}\| u-v{\| }^{2}.\]
It goes without saying that in the expression for ${\varPhi _{T}}(u,v)$ in (20) we use the matrix ${s_{T}^{-1}}$ instead of ${d_{T}^{-1}}(\theta )$.
A condition of the type (20) has been introduced in [8] and used in [13, 15, 4] and other works. The next theorem generalizes Theorem 3.2 from [15].
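As a simple illustration of condition (20) (it is not used in what follows), consider the linear regression function $a(t,\hspace{0.1667em}\tau )=\langle \tau ,\hspace{0.1667em}y(t)\rangle $ with ${s_{T}}={T}^{\frac{1}{2}}{\mathbb{I}_{q}}$, where ${\mathbb{I}_{q}}$ is the identity matrix of order q, and with regressors for which the matrices ${J_{T}}={({T}^{-1}{\int _{0}^{T}}\hspace{0.1667em}{y_{i}}(t){y_{j}}(t)dt)_{i,j=1}^{q}}$ converge, as $T\to \infty $, to a positive definite matrix J. Then
\[ {\varPhi _{T}}(u,v)=\int \hspace{0.2778em}{\big({T}^{-\frac{1}{2}}\big\langle y(t),\hspace{0.1667em}u-v\big\rangle \big)}^{2}dt=\big\langle {J_{T}}(u-v),\hspace{0.1667em}u-v\big\rangle ,\]
and for $T>{T_{0}}$ condition (20) is fulfilled with any ${c_{0}}<{\lambda _{\min }}(J)$ and ${c_{1}}>{\lambda _{\max }}(J)$.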
Theorem 2.
If conditions N1, N2, (19) and R3 are fulfilled, then there exist constants B, $b>0$ such that for $T>{T_{0}}$, $R>{R_{0}}$
\[ P\big\{\big\| {s_{T}}({\theta _{T}}-\theta )\big\| \ge R\big\}\le B\exp \big\{-b{R}^{2}\big\},\]
where for any $\beta >0$ the constant b can be chosen so that
(22)
\[ b\ge \frac{{c_{0}}}{8{d_{0}}(1+q)}-\beta .\]
Proof.
We will show that R3 implies conditions R1 and R2. Inequality (8) of condition R1(i) follows from the right-hand side of inequality (20) if we take $\rho =1$, $\gamma (R)={c_{1}}$. The inequality of condition R1(ii) also follows from the right-hand side of (20) if we take $v=0$ and $\gamma (R)={c_{1}}{(R+1)}^{2}$, since $\| u\| \le R+1$ for $u\in {\varGamma _{T,\theta ,R}}$.
To check the fulfilment of condition R2 we should rewrite the left-hand side of (20) for $v=0$:
\[ {\varPhi _{T}}(u,0)\ge {c_{0}}\| u{\| }^{2}\ge 2{d_{0}}{\delta }^{-2}\bigg(\frac{1}{2}{\delta }^{2}{d_{0}^{-1}}{c_{0}}{R}^{2}\bigg),\]
i.e., in the inequality (9) one can take ${g_{T}}(R)=\frac{1}{2}{\delta }^{2}{d_{0}^{-1}}{c_{0}}{R}^{2}$. In this case for the exponent in (10) we have
(21)
\[ {b_{0}}{g_{T}}(R)=\frac{1}{2}{\delta }^{2}{b_{0}}{d_{0}^{-1}}{c_{0}}{R}^{2}=b{R}^{2}.\]
Now, since $\rho =1$ in (11), for any $\beta >0$ in (21) we can take
\[ b={b_{\delta }}=\frac{1}{2}{\delta }^{2}{b_{0}}{d_{0}^{-1}}{c_{0}}\ge \frac{{\delta }^{2}{c_{0}}}{2{d_{0}}(1+q)}-\beta .\]
By R3 and N1, N2, inequality (9) with ${g_{T}}(R)=\frac{1}{2}{\delta }^{2}{d_{0}^{-1}}{c_{0}}{R}^{2}$ holds for any $\delta \in (0,\frac{1}{2})$. We get inequality (22) as $\delta \to \frac{1}{2}$. □

3 Large deviations in the case of a stationary jointly strictly sub-Gaussian noise
We impose an additional restriction on the noise process ε.
N3. The stochastic process ε is stationary with the covariance function $B(t)=E\varepsilon (0)\varepsilon (t),\hspace{2.5pt}t\in \mathbb{R}$, and a bounded spectral density $f(\lambda ),\hspace{2.5pt}\lambda \in \mathbb{R}$:
\[ B(t)={\int _{\mathbb{R}}}\hspace{0.1667em}{e}^{i\lambda t}f(\lambda )d\lambda ,\hspace{2em}{f_{0}}=\underset{\lambda \in \mathbb{R}}{\sup }\hspace{0.1667em}f(\lambda )<\infty .\]
Under assumption N3 the following corollaries of the theorems proved in Section 2 are true.
Corollary 1.
If conditions N1, N2(i), N3, R1 and R2 are fulfilled, then the statement of Theorem 1 is true with ${d_{0}}=2\pi {f_{0}}$.
Proof.
We just need to show that condition N2(ii) is fulfilled. Indeed, by the Plancherel identity,
\[ {\langle B\varDelta ,\varDelta \rangle _{T}}={\int _{\mathbb{R}}}\hspace{0.1667em}f(\lambda ){\bigg|{\underset{0}{\overset{T}{\int }}}{e}^{i\lambda t}\varDelta (t)dt\bigg|}^{2}d\lambda \le {f_{0}}{\int _{\mathbb{R}}}\hspace{0.1667em}{\bigg|{\underset{0}{\overset{T}{\int }}}{e}^{i\lambda t}\varDelta (t)dt\bigg|}^{2}d\lambda =2\pi {f_{0}}\| \varDelta {\| _{T}^{2}},\]
that is, (2) holds with ${d_{0}}=2\pi {f_{0}}$.
□
Our next assumption is a particularization of the requirements N2 and N3.
N4(i). The random noise ε is of the form
(24)
\[ \varepsilon (t)={\underset{-\infty }{\overset{t}{\int }}}\hspace{0.2778em}\psi (t-s)d\xi (s)={\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}\psi (s)d\xi (t-s),\]
where $\xi =\{\xi (t),\hspace{2.5pt}t\in \mathbb{R}\}$ is a mean-square continuous j.s.s.-G. stochastic process with orthogonal increments, $E\xi (t)=0$, $E{(\xi (t)-\xi (s))}^{2}=t-s$, $t>s$, and $\psi (t)=0$ as $t<0$, ${\int _{0}^{\infty }}\hspace{0.1667em}{\psi }^{2}(t)dt<\infty $.

The stochastic integral in (24) is understood as a mean-square Stieltjes integral [3]. The process ξ is an integrated white noise, and ε can be considered as a stationary process at the output of a physically realizable filter with the covariance function (see ibid.)
and the spectral density expressed in terms of the transfer function $h(i\lambda )={\int _{0}^{\infty }}\hspace{0.1667em}{e}^{-i\lambda t}\psi (t)dt$ of the filter.
N4(ii). ${f_{0}}={\sup _{\lambda \in \mathbb{R}}}|h(i\lambda ){|}^{2}<\infty $.
Obviously N4(ii) holds, if ${\int _{0}^{\infty }}\hspace{0.1667em}|\psi (t)|dt<\infty $.
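A standard illustration (stated here for convenience): for the exponential impulse response $\psi (t)={e}^{-\alpha t}$, $t\ge 0$, with an arbitrary $\alpha >0$,
\[ h(i\lambda )={\underset{0}{\overset{\infty }{\int }}}\hspace{0.2778em}{e}^{-(\alpha +i\lambda )t}dt=\frac{1}{\alpha +i\lambda },\hspace{2em}{f_{0}}=\underset{\lambda \in \mathbb{R}}{\sup }\hspace{0.1667em}\frac{1}{{\alpha }^{2}+{\lambda }^{2}}={\alpha }^{-2}<\infty ,\]
and ${\int _{0}^{\infty }}\hspace{0.1667em}|\psi (t)|dt={\alpha }^{-1}<\infty $, so both N4(ii) and the sufficient condition just mentioned hold.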
Lemma 3.
If condition N4(i) is fulfilled, then $\varepsilon =\{\varepsilon (t),\hspace{2.5pt}t\in \mathbb{R}\}$ is a stationary j.s.s.-G. stochastic process.

Proof.
Let $n\ge 1$ be a fixed number and ${t_{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{t_{n}}$, ${\varDelta _{1}},\hspace{2.5pt}\dots ,\hspace{2.5pt}{\varDelta _{n}}$ be arbitrary real numbers. It is necessary to prove that
(25)
\[ E\exp \Bigg\{{\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}\varepsilon ({t_{k}})\Bigg\}\le \exp \Bigg\{\frac{1}{2}{\sum \limits_{i,j=1}^{n}}\hspace{0.2778em}B({t_{i}}-{t_{j}}){\varDelta _{i}}{\varDelta _{j}}\Bigg\}.\]
Formula (24) can be rewritten in the form
\[ \varepsilon (t)={\underset{-\infty }{\overset{\infty }{\int }}}\hspace{0.2778em}\psi (t-s)d\xi (s),\hspace{1em}\psi (t)=0,\hspace{2.5pt}\text{as}\hspace{2.5pt}t<0.\]
Denote ${\psi _{k}}(s)=\psi ({t_{k}}-s)$, $k=\overline{1,n}$. Then
\[ \varepsilon ({t_{k}})={\underset{-\infty }{\overset{\infty }{\int }}}\hspace{0.2778em}{\psi _{k}}(s)d\xi (s),\hspace{1em}k=\overline{1,n}.\]
Let a sequence of simple functions
\[ {\psi _{k}^{(m)}}(s)={\sum \limits_{l=1}^{r(k,m)}}\hspace{0.2778em}{c_{kl}^{(m)}}{\chi _{{\pi _{kl}^{(m)}}}}(s),\hspace{1em}m\ge 1,\]
where ${\pi _{kl}^{(m)}}=[{\alpha _{kl}^{(m)}},\hspace{0.1667em}{\beta _{kl}^{(m)}})$, $k=\overline{1,n}$, $l=\overline{1,r(k,m)}$, ${\chi _{A}}(s)$ is the indicator of a set A, approximate the function ${\psi _{k}}(s)$ in ${L_{2}}(\mathbb{R})$:
\[ {\underset{-\infty }{\overset{\infty }{\int }}}\hspace{0.2778em}{\big|{\psi _{k}}(s)-{\psi _{k}^{(m)}}(s)\big|}^{2}ds\to 0,\hspace{1em}\text{as}\hspace{2.5pt}m\to \infty .\]
Then the sequences of random variables
\[ {\varepsilon _{k}^{(m)}}={\sum \limits_{l=1}^{r(k,m)}}\hspace{0.2778em}{c_{kl}^{(m)}}\big(\xi \big({\beta _{kl}^{(m)}}\big)-\xi \big({\alpha _{kl}^{(m)}}\big)\big)={\underset{-\infty }{\overset{\infty }{\int }}}\hspace{0.2778em}{\psi _{k}^{(m)}}(s)d\xi (s)\]
mean-square converge to $\varepsilon ({t_{k}})$ in ${L_{2}}(\varOmega )$:
(26)
\[ E{\big({\varepsilon _{k}^{(m)}}-\varepsilon ({t_{k}})\big)}^{2}\to 0,\hspace{1em}\text{as}\hspace{2.5pt}m\to \infty ,\hspace{2.5pt}k=\overline{1,n}.\]
For any fixed m, the random vector with coordinates ${\varepsilon _{k}^{(m)}}$, $k=\overline{1,n}$, is s.s.-G. Indeed,
\[\begin{aligned}{\sum \limits_{k=1}^{n}}\hspace{0.1667em}{\varDelta _{k}}{\varepsilon _{k}^{(m)}}& ={\sum \limits_{k=1}^{n}}\hspace{0.1667em}{\varDelta _{k}}{\sum \limits_{l=1}^{r(k,m)}}\hspace{0.2778em}{c_{kl}^{(m)}}\big(\xi \big({\beta _{kl}^{(m)}}\big)-\xi \big({\alpha _{kl}^{(m)}}\big)\big)\\{} & ={\sum \limits_{{k^{\prime }}=1}^{{n^{\prime }}(m)}}\hspace{0.1667em}{u_{{k^{\prime }}}^{(m)}}\xi \big({\eta _{{k^{\prime }}}^{(m)}}\big),\end{aligned}\]
where ${u_{{k^{\prime }}}^{(m)}}$ are real numbers and ${\eta _{{k^{\prime }}}^{(m)}}$ are different real numbers. By condition N4(i) the random vector with coordinates $\xi ({\eta _{{k^{\prime }}}^{(m)}})$, ${k^{\prime }}=\overline{1,{n^{\prime }}(m)}$, is s.s.-G., and therefore,
(27)
\[ \begin{aligned}E\exp \Bigg\{{\sum \limits_{k=1}^{n}}\hspace{0.1667em}{\varDelta _{k}}{\varepsilon _{k}^{(m)}}\Bigg\}& =E\exp \Bigg\{{\sum \limits_{{k^{\prime }}=1}^{{n^{\prime }}(m)}}\hspace{0.1667em}{u_{{k^{\prime }}}^{(m)}}\xi \big({\eta _{{k^{\prime }}}^{(m)}}\big)\Bigg\}\\{} & \le \exp \Bigg\{\frac{1}{2}E{\Bigg({\sum \limits_{{k^{\prime }}=1}^{{n^{\prime }}(m)}}\hspace{0.1667em}{u_{{k^{\prime }}}^{(m)}}\xi \big({\eta _{{k^{\prime }}}^{(m)}}\big)\Bigg)}^{2}\Bigg\}\\{} & =\exp \Bigg\{\frac{1}{2}E{\Bigg({\sum \limits_{k=1}^{n}}\hspace{0.1667em}{\varDelta _{k}}{\varepsilon _{k}^{(m)}}\Bigg)}^{2}\Bigg\}.\end{aligned}\]
From (26) it follows that ${\varepsilon _{k}^{(m)}}\stackrel{P}{\to }\varepsilon ({t_{k}})$, $k=\overline{1,n}$, as $m\to \infty $, and thus there exists a subsequence of indexes ${m^{\prime }}\to \infty $, independent of k, such that ${\varepsilon _{k}^{({m^{\prime }})}}\to \varepsilon ({t_{k}})$ a.s., $k=\overline{1,n}$, as ${m^{\prime }}\to \infty $. Finally, by the Fatou lemma and (27)
\[\begin{aligned}E\exp \Bigg\{{\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}\varepsilon ({t_{k}})\Bigg\}& =E\underset{{m^{\prime }}\to \infty }{\lim }\hspace{0.2778em}\exp \Bigg\{{\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}{\varepsilon _{k}^{({m^{\prime }})}}\Bigg\}\\{} & \le \underset{{m^{\prime }}\to \infty }{\liminf }\hspace{0.2778em}E\exp \Bigg\{{\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}{\varepsilon _{k}^{({m^{\prime }})}}\Bigg\}\\{} & \le \underset{{m^{\prime }}\to \infty }{\lim }\hspace{0.2778em}\exp \Bigg\{\frac{1}{2}E{\Bigg({\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}{\varepsilon _{k}^{({m^{\prime }})}}\Bigg)}^{2}\Bigg\}\\{} & =\exp \Bigg\{\frac{1}{2}E{\Bigg({\sum \limits_{k=1}^{n}}\hspace{0.2778em}{\varDelta _{k}}\varepsilon ({t_{k}})\Bigg)}^{2}\Bigg\}\\{} & =\exp \Bigg\{\frac{1}{2}{\sum \limits_{i,j=1}^{n}}\hspace{0.2778em}B({t_{i}}-{t_{j}}){\varDelta _{i}}{\varDelta _{j}}\Bigg\}.\end{aligned}\]
So, we have obtained (25). □

Corollary 3.
If conditions N1, N4(i), N4(ii), R1 and R2 are fulfilled, then the conclusion of Theorem 1 is true with ${d_{0}}=2\pi {f_{0}}$.
Assume
For $\nu =0$ we arrive at a quite strong result on the weak consistency of l.s.e. Similarly, in the conditions of Corollary 5 the following result on probabilities of moderate deviations for l.s.e. holds: for any $h>0$, $T>{T_{0}}$
Obviously, Gaussian stochastic processes ε are j.s.s.-G. ones, and all the previous results are valid for them.
4 Two examples
In this section, we consider an example of a regression function satisfying the condition R3 and an example of the j.s.s.-G. process ξ from expression (24) in condition N4(i).
Example 4.1.
Suppose
\[ a(t,\hspace{0.1667em}\tau )=\exp \big\{\big\langle \tau ,\hspace{0.1667em}y(t)\big\rangle \big\},\]
where $\langle \tau ,\hspace{0.1667em}y(t)\rangle ={\sum _{i=1}^{q}}\hspace{0.1667em}{\tau _{i}}{y_{i}}(t)$, and the regressors $y(t)={({y_{1}}(t),\hspace{0.1667em}\dots ,\hspace{0.1667em}{y_{q}}(t))}^{\prime }$, $t\ge 0$, take values in a compact set $Y\subset {\mathbb{R}}^{q}$.

Let us assume
(30)
\[ {J_{T}}={\bigg({T}^{-1}{\underset{0}{\overset{T}{\int }}}{y_{i}}(t){y_{j}}(t)dt\bigg)_{i,j=1}^{q}}\hspace{2.5pt}\to \hspace{2.5pt}J={({J_{ij}})_{i,j=1}^{q}},\hspace{1em}\text{as}\hspace{2.5pt}T\to \infty ,\]
where J is a positive definite matrix. In this case the regression function $a(t,\hspace{0.1667em}\tau )$ satisfies condition R3. Indeed, let
\[ H=\underset{y\in Y,\hspace{2.5pt}\tau \in {\varTheta }^{c}}{\max }\hspace{0.1667em}\exp \big\{\langle y,\hspace{0.1667em}\tau \rangle \big\},\hspace{2em}L=\underset{y\in Y,\hspace{2.5pt}\tau \in {\varTheta }^{c}}{\min }\hspace{0.1667em}\exp \big\{\langle y,\hspace{0.1667em}\tau \rangle \big\}.\]
Then for any $\delta >0$ and $T>{T_{0}}$
\[ {L}^{2}({J_{ii}}-\delta )<{T}^{-1}{d_{iT}^{2}}(\theta )<{H}^{2}({J_{ii}}+\delta ),\hspace{1em}i=\overline{1,q},\]
and according to (19) we can take ${s_{T}}={T}^{\frac{1}{2}}{\mathbb{I}_{q}}$ with the identity matrix ${\mathbb{I}_{q}}$ of order q.

For a fixed t
\[\begin{aligned}& \exp \big\{\big\langle y(t),\theta +{T}^{-\frac{1}{2}}u\big\rangle \big\}-\exp \big\{\big\langle y(t),\theta +{T}^{-\frac{1}{2}}v\big\rangle \big\}\\{} & \hspace{1em}={T}^{-\frac{1}{2}}{\sum \limits_{i=1}^{q}}\hspace{0.1667em}{y_{i}}(t)\exp \big\{\big\langle y(t),\theta +{T}^{-\frac{1}{2}}\big(u+\eta (v-u)\big)\big\rangle \big\}({u_{i}}-{v_{i}}),\hspace{2.5pt}\eta \in (0,1),\end{aligned}\]
and therefore for any $\delta >0$ and $T>{T_{0}}$
\[ {\varPhi _{T}}(u,v)\le {H}^{2}\bigg({T}^{-1}\int \hspace{0.2778em}{\big\| y(t)\big\| }^{2}dt\bigg)\| u-v{\| }^{2}<{H}^{2}(\operatorname{Tr}J+\delta )\| u-v{\| }^{2}.\]
So we obtain the right-hand side of (20) with any constant ${c_{1}}>{H}^{2}\operatorname{Tr}J$.

On the other hand, for a fixed t
\[\begin{aligned}& {\big(\varDelta (t,u)-\varDelta (t,v)\big)}^{2}\\{} & \hspace{1em}={\big(\exp \big\{\big\langle y(t),\theta +{T}^{-\frac{1}{2}}u\big\rangle \big\}-\exp \big\{\big\langle y(t),\theta +{T}^{-\frac{1}{2}}v\big\rangle \big\}\big)}^{2}\\{} & \hspace{1em}=\exp \big\{2\big\langle y(t),\theta +{T}^{-\frac{1}{2}}v\big\rangle \big\}{\big(\exp \big\{\big\langle y(t),{T}^{-\frac{1}{2}}(u-v)\big\rangle \big\}-1\big)}^{2}.\end{aligned}\]
Since ${({e}^{x}-1)}^{2}\ge {x}^{2}$, $x\ge 0$, and ${({e}^{x}-1)}^{2}\ge {e}^{2x}{x}^{2}$, $x<0$, it holds
\[ {\big(\exp \big\{\big\langle y(t),{T}^{-\frac{1}{2}}(u-v)\big\rangle \big\}-1\big)}^{2}\ge {L_{t}}{T}^{-1}{\big\langle y(t),u-v\big\rangle }^{2},\]
with ${L_{t}}=\min \{1,\hspace{0.1667em}\exp \{2\langle y(t),{T}^{-\frac{1}{2}}(u-v)\rangle \}\}$, and
\[\begin{aligned}& {\big(\varDelta (t,u)-\varDelta (t,v)\big)}^{2}\\{} & \hspace{1em}\ge \min \big\{\exp \big\{2\big\langle y(t),\theta +{T}^{-\frac{1}{2}}v\big\rangle \big\},\hspace{0.1667em}\exp \big\{2\big\langle y(t),\theta +{T}^{-\frac{1}{2}}u\big\rangle \big\}\big\}{T}^{-1}{\big\langle y(t),u-v\big\rangle }^{2}\\{} & \hspace{1em}>{L}^{2}{T}^{-1}{\big\langle y(t),u-v\big\rangle }^{2}.\end{aligned}\]
Thus for any $\delta >0$ and $T>{T_{0}}$
\[ {\varPhi _{T}}(u,v)\ge {L}^{2}\big\langle {J_{T}}(u-v),\hspace{0.1667em}u-v\big\rangle >{L}^{2}\big({\lambda _{\min }}(J)-\delta \big)\| u-v{\| }^{2},\]
and we have obtained the left-hand side of (20) with any constant ${c_{0}}<{L}^{2}{\lambda _{\min }}(J)$, where ${\lambda _{\min }}(J)$ is the least eigenvalue of the positive definite matrix J.

The next fact is a reformulation of Corollary 4 for the regression function $a(t,\hspace{0.1667em}\tau )$ of our example.
Corollary 4′.
Under conditions N1, N4(i), N4(ii) and (30) there exist constants B, $b>0$ such that for $T>{T_{0}}$, $R>{R_{0}}$
\[ P\big\{\big\| {T}^{\frac{1}{2}}({\theta _{T}}-\theta )\big\| \ge R\big\}\le B\exp \big\{-b{R}^{2}\big\}.\]
Moreover for any $\beta >0$ the constant B can be chosen so that
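For example (a simple illustrative case), for $q=1$ and the bounded regressor $y(t)=\sin t$ condition (30) is fulfilled:
\[ {J_{T}}={T}^{-1}{\underset{0}{\overset{T}{\int }}}\hspace{0.1667em}{\sin }^{2}t\hspace{0.1667em}dt=\frac{1}{2}-\frac{\sin 2T}{4T}\hspace{2.5pt}\to \hspace{2.5pt}J=\frac{1}{2}>0,\hspace{1em}\text{as}\hspace{2.5pt}T\to \infty .\]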
Example 4.2.
Here we offer an example of a j.s.s.-G. stochastic process ξ with orthogonal increments in formula (24), constructed via the Itô–Nisio series (see [7] and references therein).
Consider any orthonormal basis $\{{\varphi _{k}},\hspace{0.1667em}k\ge 1\}$ in ${L_{2}}({\mathbb{R}_{+}})$ and a sequence $\{{Z_{k}},k\ge 1\}$ of independent $N(0,1)$ r.v.’s. Then
\[ {w_{0}}(t)={\sum \limits_{k=1}^{\infty }}\hspace{0.1667em}{Z_{k}}{\underset{0}{\overset{t}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du,\hspace{1em}t\ge 0,\]
is a standard Wiener process with covariances $E{w_{0}}(t){w_{0}}(s)=\min \{t,s\}$. We also need an analogue of the Wiener process on the whole real line $\mathbb{R}$. Let $\{{w_{1}}(t),\hspace{0.1667em}t\ge 0\}$, $\{{w_{2}}(t),\hspace{0.1667em}t\ge 0\}$ be two independent Wiener processes of the following form:
\[ {w_{i}}(t)={\sum \limits_{k=1}^{\infty }}\hspace{0.1667em}{Z_{ik}}{\underset{0}{\overset{t}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du,\hspace{1em}t\ge 0,\hspace{2.5pt}i=1,2,\]
and $\{{Z_{ik}},\hspace{0.1667em}k\ge 1,\hspace{0.1667em}i=1,2\}$ be independent $N(0,1)$ r.v.’s. Then the required Wiener process on $\mathbb{R}$ can be defined as $w(t)={w_{1}}(t)$, $t\ge 0$, and $w(t)={w_{2}}(|t|)$, $t<0$. For any real ${t_{1}}<{t_{2}}\le {t_{3}}<{t_{4}}$
\[ E\big(w({t_{2}})-w({t_{1}})\big)\big(w({t_{4}})-w({t_{3}})\big)=0,\]
i.e. the increments are orthogonal. On the other hand, for any $t>s$
\[ E{\big(w(t)-w(s)\big)}^{2}=t-s.\]
Let $\{{\xi _{ik}},\hspace{0.1667em}k\ge 1,\hspace{0.1667em}i=1,2\}$ be i.i.d. s.s.-G. r.v.’s (and non-Gaussian!) with unit variance. Some examples of s.s.-G. r.v.’s can be found in [1]. For example, the symmetric Bernoulli r.v. taking the values ±1 with probability 1/2 and the r.v. uniformly distributed on $[-\sqrt{3},\hspace{0.1667em}\sqrt{3}]$ are s.s.-G. and have unit variances.
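For these two distributions the strict sub-Gaussianity can be checked directly by comparing the Taylor coefficients of the exponential moments (ξ below denotes the corresponding r.v.):
\[ E{e}^{\lambda \xi }=\cosh \lambda ={\sum \limits_{k=0}^{\infty }}\hspace{0.1667em}\frac{{\lambda }^{2k}}{(2k)!}\le {\sum \limits_{k=0}^{\infty }}\hspace{0.1667em}\frac{{\lambda }^{2k}}{{2}^{k}k!}={e}^{{\lambda }^{2}/2},\hspace{2em}E{e}^{\lambda \xi }=\frac{\sinh (\sqrt{3}\lambda )}{\sqrt{3}\lambda }={\sum \limits_{k=0}^{\infty }}\hspace{0.1667em}\frac{{3}^{k}{\lambda }^{2k}}{(2k+1)!}\le {e}^{{\lambda }^{2}/2},\]
since ${2}^{k}k!\le (2k)!$ and ${6}^{k}k!\le (2k+1)!$ for all $k\ge 0$.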
Let us introduce the stochastic processes
\[ {\xi _{i}}(t)={\sum \limits_{k=1}^{\infty }}\hspace{0.1667em}{\xi _{ik}}{\underset{0}{\overset{t}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du,\hspace{1em}t\ge 0,\hspace{2.5pt}i=1,2,\]
$\xi (t)={\xi _{1}}(t)$, $t\ge 0$, and $\xi (t)={\xi _{2}}(|t|)$, $t<0$. Then $\xi =\{\xi (t),\hspace{0.1667em}t\in \mathbb{R}\}$ is a process with orthogonal increments and is not a Gaussian one.

However, it is a j.s.s.-G. process. To prove this statement consider arbitrary numbers ${t_{1}}<\cdots <{t_{n}}$, where the first m numbers, $0\le m\le n$, are negative and the remaining $n-m$ numbers are nonnegative. Let $\varDelta ={({\varDelta _{1}},\hspace{0.1667em}\dots ,\hspace{0.1667em}{\varDelta _{n}})}^{\prime }\in {\mathbb{R}}^{n}$ be any vector. Then
\[\begin{aligned}{\varSigma _{2}}& ={\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\xi _{2k}}\Bigg({\sum \limits_{i=1}^{m}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{|{t_{i}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)\to {\sum \limits_{i=1}^{m}}\hspace{0.1667em}{\varDelta _{i}}{\xi _{2}}\big(|{t_{i}}|\big)\hspace{2.5pt}\text{a.s.},\hspace{1em}\text{as}\hspace{2.5pt}N\to \infty ,\\{} {\varSigma _{1}}& ={\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\xi _{1k}}\Bigg({\sum \limits_{i=m+1}^{n}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{{t_{i}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)\to {\sum \limits_{i=m+1}^{n}}\hspace{0.1667em}{\varDelta _{i}}{\xi _{1}}({t_{i}})\hspace{2.5pt}\text{a.s.},\hspace{1em}\text{as}\hspace{2.5pt}N\to \infty .\end{aligned}\]
By the Fatou lemma
\[\begin{aligned}E\exp \Bigg\{{\sum \limits_{i=1}^{n}}\hspace{0.1667em}{\varDelta _{i}}\xi ({t_{i}})\Bigg\}& =E\underset{N\to \infty }{\lim }\hspace{0.1667em}\exp \{{\varSigma _{2}}+{\varSigma _{1}}\}\\{} & \le \underset{N\to \infty }{\liminf }\hspace{0.1667em}{\prod \limits_{k=1}^{N}}\hspace{0.1667em}E\exp \Bigg\{{\xi _{2k}}\Bigg({\sum \limits_{i=1}^{m}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{|{t_{i}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)\Bigg\}\cdot \\{} & \hspace{2em}\cdot {\prod \limits_{k=1}^{N}}\hspace{0.1667em}E\exp \Bigg\{{\xi _{1k}}\Bigg({\sum \limits_{i=m+1}^{n}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{{t_{i}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)\Bigg\}\le \\{} & \hspace{1em}\le \underset{N\to \infty }{\lim }\hspace{0.1667em}\exp \Bigg\{\frac{1}{2}\Bigg({\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\Bigg({\sum \limits_{i=1}^{m}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{|{t_{i}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)}^{2}\\{} & \hspace{2em}+{\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\Bigg({\sum \limits_{i=m+1}^{n}}\hspace{0.1667em}{\varDelta _{i}}{\underset{0}{\overset{{t_{i}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg)}^{2}\Bigg)\Bigg\}\\{} & \hspace{1em}=\underset{N\to \infty }{\lim }\hspace{0.1667em}\exp \Bigg\{\frac{1}{2}\Bigg({\sum \limits_{i,j=1}^{m}}\hspace{0.1667em}\Bigg({\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\underset{0}{\overset{|{t_{i}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du{\underset{0}{\overset{|{t_{j}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg){\varDelta _{i}}{\varDelta _{j}}\\{} & \hspace{2em}+{\sum \limits_{i,j=m+1}^{n}}\hspace{0.1667em}\Bigg({\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\underset{0}{\overset{{t_{i}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du{\underset{0}{\overset{{t_{j}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\Bigg){\varDelta _{i}}{\varDelta _{j}}\Bigg)\Bigg\}.\end{aligned}\]
By Parseval’s identity
\[\begin{aligned}& \underset{N\to \infty }{\lim }\hspace{0.1667em}{\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\underset{0}{\overset{|{t_{i}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du{\underset{0}{\overset{|{t_{j}}|}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du\\{} & \hspace{1em}={\sum \limits_{k=1}^{\infty }}\hspace{0.1667em}{\underset{0}{\overset{\infty }{\int }}}\hspace{0.1667em}{\chi _{[0,\hspace{0.1667em}|{t_{i}}|]}}(u){\varphi _{k}}(u)du{\underset{0}{\overset{\infty }{\int }}}\hspace{0.1667em}{\chi _{[0,\hspace{0.1667em}|{t_{j}}|]}}(u){\varphi _{k}}(u)du\\{} & \hspace{1em}={\underset{0}{\overset{\infty }{\int }}}\hspace{0.1667em}{\chi _{[0,\hspace{0.1667em}|{t_{i}}|]}}(u){\chi _{[0,\hspace{0.1667em}|{t_{j}}|]}}(u)du=\min \big\{|{t_{i}}|,\hspace{0.1667em}|{t_{j}}|\big\}.\end{aligned}\]
Similarly
\[ \underset{N\to \infty }{\lim }\hspace{0.1667em}{\sum \limits_{k=1}^{N}}\hspace{0.1667em}{\underset{0}{\overset{{t_{i}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du{\underset{0}{\overset{{t_{j}}}{\int }}}\hspace{0.1667em}{\varphi _{k}}(u)du=\min \{{t_{i}},\hspace{0.1667em}{t_{j}}\}.\]
It means that
\[ E\exp \Bigg\{{\sum \limits_{i=1}^{n}}\hspace{0.1667em}{\varDelta _{i}}\xi ({t_{i}})\Bigg\}\le \exp \bigg\{\frac{1}{2}\langle B\varDelta ,\hspace{0.1667em}\varDelta \rangle \bigg\}\]
with
\[ B=\operatorname{diag}({B_{2}},\hspace{0.1667em}{B_{1}}),\]
where ${B_{2}}={(\min \{|{t_{i}}|,\hspace{0.1667em}|{t_{j}}|\})_{i,j=1}^{m}}$, ${B_{1}}={(\min \{{t_{i}},\hspace{0.1667em}{t_{j}}\})_{i,j=m+1}^{n}}$, and the process ξ is j.s.s.-G.
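Note (this check is included for completeness) that B is exactly the covariance matrix of the vector ${(\xi ({t_{1}}),\hspace{0.1667em}\dots ,\hspace{0.1667em}\xi ({t_{n}}))}^{\prime }$ required in Definition 2: the same Parseval computation gives
\[ E\xi ({t_{i}})\xi ({t_{j}})=\begin{cases}\min \{|{t_{i}}|,\hspace{0.1667em}|{t_{j}}|\},& {t_{i}},{t_{j}}<0,\\ \min \{{t_{i}},\hspace{0.1667em}{t_{j}}\},& {t_{i}},{t_{j}}\ge 0,\\ 0,& \text{otherwise},\end{cases}\]
the zero in the last case being due to the independence of the sequences $\{{\xi _{1k}}\}$ and $\{{\xi _{2k}}\}$.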