Modern Stochastics: Theory and Applications


A criterion for testing hypotheses about the covariance function of a stationary Gaussian stochastic process
Volume 1, Issue 2 (2014), pp. 139–149
Yuriy Kozachenko, Viktor Troshki

https://doi.org/10.15559/15-VMSTA17
Pub. online: 29 January 2015      Type: Research Article      Open Access

Received
21 November 2014
Revised
18 January 2015
Accepted
19 January 2015
Published
29 January 2015

Abstract

We consider a measurable stationary Gaussian stochastic process. A criterion for testing hypotheses about the covariance function of such a process using estimates for its norm in the space $L_{p}(\mathbb{T})$, $p\ge 1$, is constructed.

1 Introduction

We construct a criterion for testing the hypothesis that the covariance function of a measurable real-valued stationary Gaussian stochastic process $X(t)$ equals $\rho (\tau )$. We shall use the correlogram
\[\hat{\rho }(\tau )=\frac{1}{T}\underset{0}{\overset{T}{\int }}X(t+\tau )X(t)dt,\hspace{1em}0\le \tau \le T,\]
as an estimator of the function $\rho (\tau )$.
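When the process is observed on a discrete grid, the correlogram can be approximated by a Riemann sum. The following sketch is only an illustration (the function name, the grid step h, and the choice of lags are ours and not part of the paper); it assumes the sampled path covers an interval long enough for all required shifts.

import numpy as np

def correlogram(x, h, T, taus):
    """Riemann-sum approximation of rho_hat(tau) = (1/T) int_0^T X(t+tau) X(t) dt.

    x    : samples of X on the grid 0, h, 2h, ..., covering [0, T + max(taus)]
    h    : grid step
    T    : integration horizon
    taus : lags, assumed to be multiples of h
    """
    n_T = int(round(T / h))              # number of grid points in [0, T)
    est = np.empty(len(taus))
    for i, tau in enumerate(taus):
        k = int(round(tau / h))          # lag expressed in grid steps
        est[i] = np.sum(x[k:k + n_T] * x[:n_T]) * h / T
    return est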
Many papers have been devoted to the estimation of the covariance function with given accuracy in the uniform metric, in particular, the papers [2, 4, 6, 11, 12] and the book [13]. We also note that similar estimates for Gaussian stochastic processes were obtained in the books [7] and [1]. The main properties of the correlograms of stationary Gaussian stochastic processes were studied by Buldygin and Kozachenko [3].
The definition of a square Gaussian random vector was introduced by Kozachenko and Moklyachuk [10]. Applications of the theory of square Gaussian random variables and stochastic processes in mathematical statistics were considered in the paper [9] and in the book [3]. In the papers [5] and [8], Kozachenko and Fedoryanich constructed a criterion for testing hypotheses about the covariance function of a Gaussian stationary process with given accuracy and reliability in $L_{2}(\mathbb{T})$.
Our goal is to estimate the covariance function $\rho (\tau )$ of a Gaussian stochastic process with given accuracy and reliability in $L_{p}(\mathbb{T})$, $p\ge 1$. Also, we obtain an estimate for the norm of square Gaussian stochastic processes in the space $L_{p}(\mathbb{T})$. We use this estimate to construct a criterion for testing hypotheses about the covariance function of a Gaussian stochastic process.
The article is organized as follows. In Section 2, we give necessary information about the square Gaussian random variables. In Section 3, we obtain an estimate for the norm of square Gaussian stochastic processes in the space $L_{p}(\mathbb{T})$. In Section 4, we propose a criterion for testing a hypothesis about the covariance function of a stationary Gaussian stochastic process based on the estimate obtained in Section 3.

2 Some information about the square Gaussian random variables and processes

Definition 1 ([3]).
Let $\mathbb{T}$ be a parametric set, and let $\varXi =\{\xi _{t}:t\in \mathbb{T}\}$ be a family of Gaussian random variables such that $\mathbf{E}\xi _{t}=0$. The space $\mathit{SG}_{\varXi }(\varOmega )$ is called a space of square Gaussian random variables if any $\zeta \in \mathit{SG}_{\varXi }(\varOmega )$ can be represented as
\[\zeta ={\bar{\xi }}^{T}A\bar{\xi }-\mathbb{E}{\bar{\xi }}^{T}A\bar{\xi },\]
where $\bar{\xi }={(\xi _{1},\dots ,\xi _{N})}^{T}$ with $\xi _{k}\in \varXi $, $k=1,\dots ,N$, and A is an arbitrary matrix with real-valued entries, or if $\zeta \in \mathit{SG}_{\varXi }(\varOmega )$ has the representation
\[\zeta =\underset{n\to \infty }{\lim }\big({\bar{\xi }_{n}^{T}}A\bar{\xi }_{n}-\mathbb{E}{\bar{\xi }_{n}^{T}}A\bar{\xi }_{n}\big).\]
Theorem 1 ([3]).
Assume that $\zeta \in \mathit{SG}_{\varXi }(\varOmega )$ and $\operatorname{Var}\zeta >0$. Then the following inequality holds for $|s|<1$:
(1)
\[\mathbf{E}\exp \bigg\{\frac{s}{\sqrt{2}}\bigg(\frac{\zeta }{\sqrt{\operatorname{Var}\zeta }}\bigg)\bigg\}\le \frac{1}{\sqrt{1-|s|}}\exp \bigg\{-\frac{|s|}{2}\bigg\}=L_{0}(s).\]
Definition 2 ([3]).
A stochastic process Y is called a square Gaussian stochastic process if for each $t\in \mathbb{T}$, the random variable $Y(t)$ belongs to the space $\mathit{SG}_{\varXi }(\varOmega )$.

3 An estimate for the $L_{p}(\mathbb{T})$ norm of a square Gaussian stochastic process

In the following theorem, we obtain an estimate for the norm of square Gaussian stochastic processes in the space $L_{p}(\mathbb{T})$. We shall use this result for constructing a criterion for testing hypotheses about the covariance function of a Gaussian stochastic process.
Theorem 2.
Let $\{\mathbb{T},\mathfrak{A},\mu \}$ be a measurable space, where $\mathbb{T}$ is a parametric set, and let $Y=\{Y(t),t\in \mathbb{T}\}$ be a square Gaussian stochastic process. Suppose that Y is a measurable process. Further, let the Lebesgue integral $\int _{\mathbb{T}}{(\mathbf{E}{Y}^{2}(t))}^{\frac{p}{2}}d\mu (t)$ be well defined for $p\ge 1$. Then the integral $\int _{\mathbb{T}}{|Y(t)|}^{p}d\mu (t)$ exists with probability 1, and
(2)
\[P\bigg\{\underset{\mathbb{T}}{\int }{\big|Y(t)\big|}^{p}d\mu (t)>\varepsilon \bigg\}\le 2\sqrt{1+\frac{{\varepsilon }^{1/p}\sqrt{2}}{{C_{p}^{\frac{1}{p}}}}}\exp \bigg\{-\frac{{\varepsilon }^{\frac{1}{p}}}{\sqrt{2}{C_{p}^{\frac{1}{p}}}}\bigg\}\]
for all $\varepsilon \ge {(\frac{p}{\sqrt{2}}+\sqrt{(\frac{p}{2}+1)p})}^{p}C_{p}$, where $C_{p}=\int _{\mathbb{T}}{(\mathbf{E}{Y}^{2}(t))}^{\frac{p}{2}}d\mu (t)$.
Proof.
Since $\max _{x>0}{x}^{\alpha }{e}^{-x}={\alpha }^{\alpha }{e}^{-\alpha }$ for every $\alpha >0$, we have ${x}^{\alpha }{e}^{-x}\le {\alpha }^{\alpha }{e}^{-\alpha }$ for all $x>0$.
If ζ is a random variable from the space $\mathit{SG}_{\varXi }(\varOmega )$ and $x=\frac{s}{\sqrt{2}}\cdot \frac{|\zeta |}{\sqrt{\mathbf{E}{\zeta }^{2}}}$, where $s>0$, then
\[\mathbf{E}{\bigg(\frac{s}{\sqrt{2}}\frac{|\zeta |}{\sqrt{\mathbf{E}{\zeta }^{2}}}\bigg)}^{\alpha }\le {\alpha }^{\alpha }{e}^{-\alpha }\cdot \mathbf{E}\exp \bigg\{\frac{s}{\sqrt{2}}\frac{|\zeta |}{\sqrt{\mathbf{E}{\zeta }^{2}}}\bigg\}\]
and
\[\mathbf{E}|\zeta {|}^{\alpha }\le {\bigg(\frac{\sqrt{2\mathbf{E}{\zeta }^{2}}}{s}\bigg)}^{\alpha }{\alpha }^{\alpha }{e}^{-\alpha }\mathbf{E}\exp \bigg\{\frac{s}{\sqrt{2}}\frac{|\zeta |}{\sqrt{\mathbf{E}{\zeta }^{2}}}\bigg\}.\]
From inequality (1) for $0<s<1$ we get that
(3)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}|\zeta {|}^{\alpha }& \displaystyle \le {\bigg(\frac{\sqrt{2\mathbf{E}{\zeta }^{2}}}{s}\bigg)}^{\alpha }{\alpha }^{\alpha }{e}^{-\alpha }\bigg(\mathbf{E}\exp \bigg\{\frac{s}{\sqrt{2}}\frac{\zeta }{\sqrt{\mathbf{E}{\zeta }^{2}}}\bigg\}+\mathbf{E}\exp \bigg\{-\frac{s}{\sqrt{2}}\frac{\zeta }{\sqrt{\mathbf{E}{\zeta }^{2}}}\bigg\}\bigg)\\{} & \displaystyle \le \frac{2}{\sqrt{1-s}}{\bigg(\frac{\sqrt{2\mathbf{E}{\zeta }^{2}}}{s}\bigg)}^{\alpha }{\alpha }^{\alpha }{e}^{-\alpha }\exp \bigg\{-\frac{s}{\sqrt{2}}\bigg\}\\{} & \displaystyle =2L_{0}(s){\bigg(\frac{\sqrt{2\mathbf{E}{\zeta }^{2}}}{s}\bigg)}^{\alpha }{\alpha }^{\alpha }{e}^{-\alpha }.\end{array}\]
Let $Y(t)$, $t\in \mathbb{T}$, be a measurable square Gaussian stochastic process. Using the Chebyshev inequality, we derive that, for all $l\ge 1$,
\[P\bigg\{\underset{\mathbb{T}}{\int }{\big|Y(t)\big|}^{p}d\mu (t)>\varepsilon \bigg\}\le \frac{\mathbf{E}{(\int _{\mathbb{T}}|Y(t){|}^{p}d\mu (t))}^{l}}{{\varepsilon }^{l}}.\]
Then from the generalized Minkowski inequality together with inequality (3) for $l>1$ we obtain that
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\bigg(\mathbf{E}{\bigg(\underset{\mathbb{T}}{\int }{\big|Y(t)\big|}^{p}d\mu (t)\bigg)}^{l}\bigg)}^{\frac{1}{l}}\le \underset{\mathbb{T}}{\int }{\big(\mathbf{E}{\big|Y(t)\big|}^{pl}\big)}^{\frac{1}{l}}d\mu (t)\\{} & \displaystyle \le \underset{\mathbb{T}}{\int }{\big(2L_{0}(s){\big(2\mathbf{E}{Y}^{2}(t)\big)}^{\frac{pl}{2}}{(pl)}^{pl}{s}^{-pl}\exp \{-pl\}\big)}^{\frac{1}{l}}d\mu (t)\\{} & \displaystyle ={\big(2L_{0}(s)\big)}^{\frac{1}{l}}\underset{\mathbb{T}}{\int }{\big(2\mathbf{E}{Y}^{2}(t)\big)}^{\frac{p}{2}}{s}^{-p}{(pl)}^{p}\exp \{-p\}d\mu (t)\\{} & \displaystyle ={\big(2L_{0}(s)\big)}^{\frac{1}{l}}{2}^{\frac{p}{2}}{s}^{-p}{(pl)}^{p}\exp \{-p\}\underset{\mathbb{T}}{\int }{\big(\mathbf{E}{Y}^{2}(t)\big)}^{\frac{p}{2}}d\mu (t).\end{array}\]
Assuming that $C_{p}=\int _{\mathbb{T}}{(\mathbf{E}{Y}^{2}(t))}^{\frac{p}{2}}d\mu (t)$, we deduce that
\[\mathbf{E}{\bigg(\underset{\mathbb{T}}{\int }{\big|Y(t)\big|}^{p}d\mu (t)\bigg)}^{l}\le 2L_{0}(s){2}^{\frac{pl}{2}}{(lp)}^{pl}\exp \{-pl\}{C_{p}^{l}}{s}^{-pl}.\]
Hence,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\bigg\{\underset{\mathbb{T}}{\int }{\big|Y(t)\big|}^{p}d\mu (t)>\varepsilon \bigg\}& \displaystyle \le 2\cdot {\big({2}^{\frac{p}{2}}\big)}^{l}L_{0}(s){\big({p}^{p}\big)}^{l}{\big(\exp \{-p\}\big)}^{l}{C_{p}^{l}}{\big({s}^{-p}\big)}^{l}\cdot \frac{{({l}^{p})}^{l}}{{\varepsilon }^{l}}\\{} & \displaystyle =2L_{0}(s){a}^{l}{\big({l}^{p}\big)}^{l},\end{array}\]
where $a=\frac{{2}^{\frac{p}{2}}{p}^{p}C_{p}}{{e}^{p}{s}^{p}\varepsilon }$, that is, ${a}^{\frac{1}{p}}=\frac{{2}^{\frac{1}{2}}p{C_{p}^{\frac{1}{p}}}}{es{\varepsilon }^{\frac{1}{p}}}$. Let us find the minimum of the function $\psi (l)={a}^{l}{({l}^{p})}^{l}$. We can easily check that it reaches its minimum at the point ${l}^{\ast }=\frac{1}{e{a}^{\frac{1}{p}}}$.
Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle 2L_{0}(s)\psi \big({l}^{\ast }\big)& \displaystyle =2L_{0}(s){a}^{\frac{1}{e{a}^{\frac{1}{p}}}}\cdot {\bigg(\frac{1}{e{a}^{\frac{1}{p}}}\bigg)}^{p\cdot \frac{1}{e{a}^{\frac{1}{p}}}}=2L_{0}(s){a}^{\frac{1}{e{a}^{\frac{1}{p}}}}\cdot {a}^{-\frac{1}{e{a}^{\frac{1}{p}}}}\cdot {e}^{-\frac{p}{e{a}^{\frac{1}{p}}}}\\{} & \displaystyle =2L_{0}(s)\exp \bigg\{-\frac{pes{\varepsilon }^{\frac{1}{p}}}{{2}^{\frac{1}{2}}pe{C_{p}^{\frac{1}{p}}}}\bigg\}=2L_{0}(s)\exp \bigg\{-\frac{s{\varepsilon }^{\frac{1}{p}}}{{2}^{\frac{1}{2}}{C_{p}^{\frac{1}{p}}}}\bigg\}\\{} & \displaystyle =\frac{2}{\sqrt{1-s}}\exp \bigg\{-s\bigg(\frac{1}{2}+\frac{{\varepsilon }^{1/p}}{{2}^{\frac{1}{2}}{C_{p}^{\frac{1}{p}}}}\bigg)\bigg\}.\end{array}\]
In turn, minimizing the function $\theta (s)=\frac{2}{\sqrt{1-s}}\exp \{-s(\frac{1}{2}+\frac{{\varepsilon }^{1/p}}{{2}^{\frac{1}{2}}{C_{p}^{\frac{1}{p}}}})\}$ in s, we deduce ${s}^{\ast }=1-\frac{1}{1+\frac{\sqrt{2}{\varepsilon }^{1/p}}{{C_{p}^{1/p}}}}$. Thus,
\[\theta \big({s}^{\ast }\big)=2\sqrt{1+\frac{{\varepsilon }^{1/p}\sqrt{2}}{{C_{p}^{\frac{1}{p}}}}}\exp \bigg\{-\frac{{\varepsilon }^{\frac{1}{p}}}{\sqrt{2}{C_{p}^{\frac{1}{p}}}}\bigg\}.\]
Inequality (2) holds provided that ${l}^{\ast }\ge 1$, that is, if $\frac{1}{e{a}^{\frac{1}{p}}}=\frac{s{\varepsilon }^{1/p}}{\sqrt{2}p{C_{p}^{1/p}}}\ge 1$. Substituting the value of ${s}^{\ast }$ into this expression, we obtain the inequality ${\varepsilon }^{2/p}\ge p{C_{p}^{1/p}}({C_{p}^{1/p}}+\sqrt{2}{\varepsilon }^{1/p})$. Solving this inequality with respect to $\varepsilon >0$, we deduce that inequality (2) holds for $\varepsilon \ge {(\frac{p}{\sqrt{2}}+\sqrt{(\frac{p}{2}+1)p})}^{p}C_{p}$. The theorem is proved.  □
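For later use, the right-hand side of (2) and the admissible range of ε are easy to evaluate numerically. A minimal sketch (the function names are ours):

import numpy as np

def eps_min(C_p, p):
    """Smallest epsilon for which the bound (2) is stated to hold."""
    return (p / np.sqrt(2) + np.sqrt((p / 2 + 1) * p)) ** p * C_p

def tail_bound(eps, C_p, p):
    """Right-hand side of inequality (2)."""
    r = eps ** (1 / p) / C_p ** (1 / p)
    return 2 * np.sqrt(1 + np.sqrt(2) * r) * np.exp(-r / np.sqrt(2))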

4 The construction of a criterion for testing hypotheses about the covariance function of a stationary Gaussian stochastic process

Consider a measurable stationary Gaussian stochastic process X defined for all $t\in \mathbb{R}$. Without any loss of generality, we can assume that $X=\{X(t),\hspace{2.5pt}t\in \mathbb{T}=[0,T+A]\}$, where $0<T<\infty $, $0<A<\infty $, and $\mathbf{E}X(t)=0$. The covariance function $\rho (\tau )=\mathbf{E}X(t+\tau )X(t)$ of this process is defined for any $\tau \in \mathbb{R}$ and is an even function. Let $\rho (\tau )$ be continuous on $\mathbb{T}$.
Theorem 3.
Let the correlogram
(4)
\[\hat{\rho }(\tau )=\frac{1}{T}\underset{0}{\overset{T}{\int }}X(t+\tau )X(t)dt,\hspace{1em}0\le \tau \le A,\]
be an estimator of the covariance function $\rho (\tau )$. Then the following inequality holds for all $\varepsilon \ge {(\frac{p}{\sqrt{2}}+\sqrt{(\frac{p}{2}+1)p})}^{p}C_{p}$:
\[P\bigg\{\underset{0}{\overset{A}{\int }}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{p}d\tau >\varepsilon \bigg\}\le 2\sqrt{1+\frac{{\varepsilon }^{1/p}\sqrt{2}}{{C_{p}^{\frac{1}{p}}}}}\exp \bigg\{-\frac{{\varepsilon }^{\frac{1}{p}}}{\sqrt{2}{C_{p}^{\frac{1}{p}}}}\bigg\},\]
where $C_{p}={\int _{0}^{A}}{(\frac{2}{{T}^{2}}{\int _{0}^{T}}(T-u)({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau ))du)}^{\frac{p}{2}}d\tau $ and $0<A<\infty $.
Remark 1.
Since the sample paths of the process $X(t)$ are continuous with probability one on the set $\mathbb{T}$, the integral in (4) can be understood as a Riemann integral.
Proof.
Since $\mathbf{E}\hat{\rho }(\tau )=\rho (\tau )$, we have
\[\mathbf{E}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{2}=\mathbf{E}{\big(\hat{\rho }(\tau )\big)}^{2}-{\rho }^{2}(\tau ).\]
From the Isserlis equality for jointly Gaussian random variables it follows that
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbf{E}{\big(\hat{\rho }(\tau )\big)}^{2}-{\rho }^{2}(\tau )=\mathbf{E}\bigg(\frac{1}{{T}^{2}}\underset{0}{\overset{T}{\int }}\underset{0}{\overset{T}{\int }}X(t+\tau )X(t)X(s+\tau )X(s)dtds\bigg)-{\rho }^{2}(\tau )\\{} & \displaystyle \hspace{1em}=\frac{1}{{T}^{2}}\underset{0}{\overset{T}{\int }}\underset{0}{\overset{T}{\int }}\big(\mathbf{E}X(t+\tau )X(t)\mathbf{E}X(s+\tau )X(s)+\mathbf{E}X(t+\tau )X(s+\tau )\\{} & \displaystyle \hspace{2em}\times \mathbf{E}X(t)X(s)+\mathbf{E}X(t+\tau )X(s)\mathbf{E}X(s+\tau )X(t)\big)dtds-{\rho }^{2}(\tau )\\{} & \displaystyle \hspace{1em}=\frac{1}{{T}^{2}}\underset{0}{\overset{T}{\int }}\underset{0}{\overset{T}{\int }}\big({\rho }^{2}(\tau )+{\rho }^{2}(t-s)+\rho (t-s+\tau )\rho (t-s-\tau )\big)dtds-{\rho }^{2}(\tau )\\{} & \displaystyle \hspace{1em}=\frac{1}{{T}^{2}}\underset{0}{\overset{T}{\int }}\underset{0}{\overset{T}{\int }}\big({\rho }^{2}(t-s)+\rho (t-s+\tau )\rho (t-s-\tau )\big)dtds\\{} & \displaystyle \hspace{1em}=\frac{2}{{T}^{2}}\underset{0}{\overset{T}{\int }}(T-u)\big({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau )\big)du.\end{array}\]
We have obtained that
(5)
\[\mathbf{E}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{2}=\frac{2}{{T}^{2}}\underset{0}{\overset{T}{\int }}(T-u)\big({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau )\big)du.\]
Since $\hat{\rho }(\tau )-\rho (\tau )$ is a square Gaussian stochastic process (see Lemma 3.1, Chapter 6 in [3]), it follows from Theorem 2 that
\[P\bigg\{\underset{0}{\overset{A}{\int }}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{p}d\tau >\varepsilon \bigg\}\le 2\sqrt{1+\frac{{\varepsilon }^{1/p}\sqrt{2}}{{C_{p}^{\frac{1}{p}}}}}\exp \bigg\{-\frac{{\varepsilon }^{\frac{1}{p}}}{\sqrt{2}{C_{p}^{\frac{1}{p}}}}\bigg\}.\]
Applying Eq. (5), we get
\[C_{p}=\underset{0}{\overset{A}{\int }}{\bigg(\frac{2}{{T}^{2}}\underset{0}{\overset{T}{\int }}(T-u)\big({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau )\big)du\bigg)}^{\frac{p}{2}}d\tau .\]
The theorem is proved.  □
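In practice, the constant $C_{p}$ of Theorem 3 can be computed by numerical integration once the hypothesized covariance function ρ is specified. A sketch using nested quadrature (the function name is ours, and the integration tolerances are left at the SciPy defaults):

import numpy as np
from scipy.integrate import quad

def C_p_value(rho, T, A, p):
    """C_p from Theorem 3 for a hypothesized covariance function rho."""
    def psi(tau):
        # psi(T, tau) = (2/T^2) int_0^T (T - u)(rho(u)^2 + rho(u + tau) rho(u - tau)) du
        inner, _ = quad(lambda u: (T - u) * (rho(u) ** 2 + rho(u + tau) * rho(u - tau)),
                        0.0, T)
        return 2.0 * inner / T ** 2
    value, _ = quad(lambda tau: psi(tau) ** (p / 2), 0.0, A)
    return value

For instance, for the covariance of Example 1 below one would pass rho = lambda t: B * np.exp(-a * abs(t)).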
Denote
\[g(\varepsilon )=2\sqrt{1+\frac{{\varepsilon }^{1/p}\sqrt{2}}{{C_{p}^{\frac{1}{p}}}}}\exp \bigg\{-\frac{{\varepsilon }^{\frac{1}{p}}}{\sqrt{2}{C_{p}^{\frac{1}{p}}}}\bigg\}.\]
From Theorem 3 it follows that if $\varepsilon \ge z_{p}=C_{p}{(\frac{p}{\sqrt{2}}+\sqrt{(\frac{p}{2}+1)p})}^{p}$, then
\[P\bigg\{\underset{0}{\overset{A}{\int }}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{p}d\tau >\varepsilon \bigg\}\le g(\varepsilon ).\]
Let $\varepsilon _{\delta }$ be a solution of the equation $g(\varepsilon )=\delta $, $0<\delta <1$. Put $S_{\delta }=\max \{\varepsilon _{\delta },z_{p}\}$. It is obvious that $g(S_{\delta })\le \delta $ and
(6)
\[P\bigg\{\underset{0}{\overset{A}{\int }}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{p}d\tau >S_{\delta }\bigg\}\le \delta .\]
Let $\mathbb{H}$ be the hypothesis that the covariance function of a measurable real-valued stationary Gaussian stochastic process $X(t)$ equals $\rho (\tau )$ for $0\le \tau \le A$. From Theorem 3 and (6) it follows that to test the hypothesis $\mathbb{H}$, we can use the following criterion.
Criterion 1.
For a given significance level δ, the hypothesis $\mathbb{H}$ is accepted if
\[\underset{0}{\overset{A}{\int }}{\big(\hat{\rho }(\tau )-\rho (\tau )\big)}^{p}d\tau <S_{\delta };\]
otherwise, the hypothesis is rejected.
Remark 2.
The equation $g(\varepsilon )=\delta $ has a unique solution for any $\delta \in (0,1)$ since $g(\varepsilon )$ is continuous and strictly decreasing to zero. We can find the solution of the equation using numerical methods.
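A minimal sketch of such a numerical solution, assuming SciPy's Brent root-finder (the helper names and the bracketing strategy are ours); it returns the threshold $S_{\delta }=\max \{\varepsilon _{\delta },z_{p}\}$ used in Criterion 1:

import numpy as np
from scipy.optimize import brentq

def threshold_S(C_p, p, delta):
    """Threshold S_delta of Criterion 1 for given C_p, p and 0 < delta < 1."""
    def g(eps):
        r = eps ** (1 / p) / C_p ** (1 / p)
        return 2 * np.sqrt(1 + np.sqrt(2) * r) * np.exp(-r / np.sqrt(2))

    z_p = (p / np.sqrt(2) + np.sqrt((p / 2 + 1) * p)) ** p * C_p
    hi = max(z_p, 1.0)
    while g(hi) > delta:                 # g is decreasing: expand until g(hi) <= delta
        hi *= 2.0
    eps_delta = brentq(lambda e: g(e) - delta, 0.0, hi)   # g(0) = 2 > delta
    return max(eps_delta, z_p)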
Remark 3.
We can easily see that Criterion 1 can be used if $C_{p}\to 0$ as $T\to \infty $.
The next theorem contains assumptions under which $C_{p}\to 0$ as $T\to \infty $.
Theorem 4.
Let $\rho (\tau )$ be the covariance function of a centered stationary random process, and let $\rho (\tau )$ be continuous. If $\rho (T)\to 0$ as $T\to \infty $, then $C_{p}\to 0$ as $T\to \infty $, where $C_{p}={\int _{0}^{A}}{(\psi (T,\tau ))}^{p/2}d\tau $ and
\[\psi (T,\tau )=\frac{2}{{T}^{2}}\underset{0}{\overset{T}{\int }}(T-u)\big({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau )\big)du,\hspace{1em}A>0,\hspace{2.5pt}T>0.\]
Proof.
We have $\psi (T,\tau )\le \frac{2}{T}{\int _{0}^{T}}({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau ))du\le 4{\rho }^{2}(0)$. Now it suffices to prove that $\psi (T,\tau )\to 0$ as $T\to \infty $. By L'Hôpital's rule,
\[\underset{T\to \infty }{\lim }\frac{2}{T}\underset{0}{\overset{T}{\int }}\big({\rho }^{2}(u)+\rho (u+\tau )\rho (u-\tau )\big)du=2\underset{T\to \infty }{\lim }\big({\rho }^{2}(T)+\rho (T+\tau )\rho (T-\tau )\big)=0,\]
and hence $\psi (T,\tau )\to 0$ as $T\to \infty $ for every $\tau $.
Application of Lebesgue’s dominated convergence theorem completes the proof.  □
We now give examples in which we find estimates for $C_{p}$.
Example 1.
Let $\mathbb{H}$ be the hypothesis that the covariance function of a centered measurable stationary Gaussian stochastic process equals $\rho (\tau )=B\exp \{-a|\tau |\}$, where $B>0$ and $a>0$.
To test the hypothesis $\mathbb{H}$, we can use Criterion 1 by taking the correlogram $\hat{\rho }(\tau )$ defined in (4) as an estimator of the function $\rho (\tau )$. Let $0<A<\infty $. We shall find the value of the expression
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I& \displaystyle =\underset{0}{\overset{T}{\int }}(T-u)\big({e}^{-2au}+{e}^{-a|u+\tau |}{e}^{-a|u-\tau |}\big)du\\{} & \displaystyle =\underset{0}{\overset{T}{\int }}T{e}^{-2au}du+T\underset{0}{\overset{T}{\int }}{e}^{-a|u+\tau |}{e}^{-a|u-\tau |}du-\underset{0}{\overset{T}{\int }}u{e}^{-2au}du\\{} & \displaystyle \hspace{1em}-\underset{0}{\overset{T}{\int }}u{e}^{-a|u+\tau |}{e}^{-a|u-\tau |}du\\{} & \displaystyle =I_{1}+I_{2}-I_{3}-I_{4}.\end{array}\]
Now let us calculate the summands:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I_{1}& \displaystyle =T\underset{0}{\overset{T}{\int }}{e}^{-2au}du=\frac{T}{2a}\big(1-{e}^{-2aT}\big),\\{} \displaystyle I_{2}& \displaystyle =T\underset{0}{\overset{T}{\int }}{e}^{-a|u+\tau |}{e}^{-a|u-\tau |}du\\{} & \displaystyle =T\bigg(\underset{0}{\overset{\tau }{\int }}{e}^{-a(u+\tau )}{e}^{a(u-\tau )}du+\underset{\tau }{\overset{T}{\int }}{e}^{-a(u+\tau )}{e}^{-a(u-\tau )}du\bigg)\\{} & \displaystyle =T\bigg(\underset{0}{\overset{\tau }{\int }}{e}^{-2a\tau }du+\underset{\tau }{\overset{T}{\int }}{e}^{-2au}du\bigg)\\{} & \displaystyle =T\bigg(\tau {e}^{-2a\tau }-\frac{1}{2a}{e}^{-2aT}+\frac{1}{2a}{e}^{-2a\tau }\bigg),\\{} \displaystyle I_{3}& \displaystyle =\underset{0}{\overset{T}{\int }}u{e}^{-2au}du=-\frac{T}{2a}{e}^{-2aT}+\frac{1}{2a}\underset{0}{\overset{T}{\int }}{e}^{-2au}du\\{} & \displaystyle =-\frac{T}{2a}{e}^{-2aT}-\frac{1}{4{a}^{2}}{e}^{-2aT}+\frac{1}{4{a}^{2}},\\{} \displaystyle I_{4}& \displaystyle =\underset{0}{\overset{T}{\int }}u{e}^{-a|u+\tau |}{e}^{-a|u-\tau |}du\\{} & \displaystyle =\underset{0}{\overset{\tau }{\int }}u{e}^{-a(u+\tau )}{e}^{a(u-\tau )}du+\underset{\tau }{\overset{T}{\int }}u{e}^{-a(u+\tau )}{e}^{-a(u-\tau )}du\\{} & \displaystyle =\underset{0}{\overset{\tau }{\int }}u{e}^{-2a\tau }du+\underset{\tau }{\overset{T}{\int }}u{e}^{-2au}du\\{} & \displaystyle =\frac{{\tau }^{2}}{2}{e}^{-2a\tau }-\frac{T}{2a}{e}^{-2aT}+\frac{\tau }{2a}{e}^{-2a\tau }-\frac{1}{4{a}^{2}}{e}^{-2aT}+\frac{1}{4{a}^{2}}{e}^{-2a\tau }.\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I& \displaystyle =\bigg(T\tau +\frac{T}{2a}-\frac{{\tau }^{2}}{2}-\frac{\tau }{2a}-\frac{1}{4{a}^{2}}\bigg){e}^{-2a\tau }+\frac{1}{2{a}^{2}}{e}^{-2aT}+\frac{T}{2a}-\frac{1}{4{a}^{2}}\\{} & \displaystyle \le \bigg(T\tau +\frac{T}{2a}\bigg){e}^{-2a\tau }+\frac{T}{2a}+\frac{1}{2{a}^{2}}{e}^{-2aT}.\end{array}\]
Thus, we obtain
\[\begin{array}{r@{\hskip0pt}l}\displaystyle C_{p}& \displaystyle \le {\bigg(\frac{2{B}^{2}}{{T}^{2}}\bigg)}^{\frac{p}{2}}\underset{0}{\overset{A}{\int }}{\bigg(\bigg(T\tau +\frac{T}{2a}\bigg){e}^{-2a\tau }+\frac{T}{2a}+\frac{1}{2{a}^{2}}{e}^{-2aT}\bigg)}^{p/2}d\tau \\{} & \displaystyle ={\big(2{B}^{2}\big)}^{\frac{p}{2}}\frac{{T}^{p/2}}{{T}^{p}}I_{5}={\big(2{B}^{2}\big)}^{\frac{p}{2}}\frac{1}{{T}^{p/2}}I_{5},\end{array}\]
where $I_{5}={\int _{0}^{A}}{((\tau +\frac{1}{2a}){e}^{-2a\tau }+\frac{1}{2a}+\frac{1}{2{a}^{2}T}{e}^{-2aT})}^{p/2}d\tau $.
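For concreteness, the resulting upper bound ${(2{B}^{2})}^{p/2}{T}^{-p/2}I_{5}$ on $C_{p}$ can be evaluated as follows (a sketch under the notation of Example 1; the function name is ours):

import numpy as np
from scipy.integrate import quad

def C_p_bound_exponential(B, a, T, A, p):
    """Upper bound on C_p from Example 1, rho(tau) = B exp(-a |tau|)."""
    def integrand(tau):
        inner = ((tau + 1 / (2 * a)) * np.exp(-2 * a * tau)
                 + 1 / (2 * a) + np.exp(-2 * a * T) / (2 * a ** 2 * T))
        return inner ** (p / 2)
    I5, _ = quad(integrand, 0.0, A)
    return (2 * B ** 2) ** (p / 2) * T ** (-p / 2) * I5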
Example 2.
Let $\mathbb{H}$ be the hypothesis that the covariance function of a centered measurable stationary Gaussian stochastic process equals $\rho (\tau )=B\exp \{-a|\tau {|}^{2}\}$, where $B>0$ and $a>0$.
As in the previous example, to test the hypothesis $\mathbb{H}$, we can use Criterion 1 by taking $\hat{\rho }(\tau )$ defined in (4) as the estimator of the function $\rho (\tau )$. Let $0<A<\infty $. Let us find the value of the expression
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I& \displaystyle =\underset{0}{\overset{T}{\int }}(T-u)\big({e}^{-2a{u}^{2}}+{e}^{-a|u+\tau {|}^{2}}{e}^{-a|u-\tau {|}^{2}}\big)du\\{} & \displaystyle =\underset{0}{\overset{T}{\int }}T{e}^{-2a{u}^{2}}du+T\underset{0}{\overset{T}{\int }}{e}^{-a|u+\tau {|}^{2}}{e}^{-a|u-\tau {|}^{2}}du-\underset{0}{\overset{T}{\int }}u{e}^{-2a{u}^{2}}du\\{} & \displaystyle \hspace{1em}-\underset{0}{\overset{T}{\int }}u{e}^{-a|u+\tau {|}^{2}}{e}^{-a|u-\tau {|}^{2}}du\\{} & \displaystyle =I_{1}+I_{2}-I_{3}-I_{4}.\end{array}\]
Now let us calculate the summands:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I_{1}& \displaystyle =T\underset{0}{\overset{T}{\int }}{e}^{-2a{u}^{2}}du\le T\underset{0}{\overset{\infty }{\int }}{e}^{-2a{u}^{2}}du=\frac{\sqrt{\pi }T}{2\sqrt{2a}},\\{} \displaystyle I_{2}& \displaystyle =T\underset{0}{\overset{T}{\int }}{e}^{-a|u+\tau {|}^{2}}{e}^{-a|u-\tau {|}^{2}}du=T{e}^{-2a{\tau }^{2}}\underset{0}{\overset{T}{\int }}{e}^{-2a{u}^{2}}du\le \frac{\sqrt{\pi }T}{2\sqrt{2a}}{e}^{-2a{\tau }^{2}},\\{} \displaystyle I_{3}& \displaystyle =\underset{0}{\overset{T}{\int }}u{e}^{-2a{u}^{2}}du=-\frac{1}{4a}\underset{0}{\overset{T}{\int }}{e}^{-2a{u}^{2}}d\big(-2a{u}^{2}\big)=-\frac{1}{4a}\big({e}^{-2a{T}^{2}}-1\big),\\{} \displaystyle I_{4}& \displaystyle =\underset{0}{\overset{T}{\int }}u{e}^{-2a({u}^{2}+{\tau }^{2})}du={e}^{-2a{\tau }^{2}}\underset{0}{\overset{T}{\int }}u{e}^{-2a{u}^{2}}du=-\frac{1}{4a}{e}^{-2a{\tau }^{2}}\big({e}^{-2a{T}^{2}}-1\big).\end{array}\]
Hence,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I& \displaystyle \le \frac{\sqrt{\pi }T}{2\sqrt{2a}}+\frac{\sqrt{\pi }T}{2\sqrt{2a}}{e}^{-2a{\tau }^{2}}+\frac{1}{4a}\big({e}^{-2a{T}^{2}}-1\big)+\frac{1}{4a}{e}^{-2a{\tau }^{2}}\big({e}^{-2a{T}^{2}}-1\big)\\{} & \displaystyle \le T\bigg(\frac{\sqrt{\pi }}{2\sqrt{2a}}+\frac{\sqrt{\pi }}{2\sqrt{2a}}{e}^{-2a{\tau }^{2}}\bigg).\end{array}\]
Thus, we obtain
\[C_{p}\le {\bigg(\frac{2{B}^{2}}{{T}^{2}}\bigg)}^{\frac{p}{2}}\underset{0}{\overset{A}{\int }}{\bigg(T\bigg(\frac{\sqrt{\pi }}{2\sqrt{2a}}+\frac{\sqrt{\pi }}{2\sqrt{2a}}{e}^{-2a{\tau }^{2}}\bigg)\bigg)}^{p/2}d\tau ={\big(2{B}^{2}\big)}^{\frac{p}{2}}\frac{1}{{T}^{p/2}}I_{6},\]
where $I_{6}={\int _{0}^{A}}{(\frac{\sqrt{\pi }}{2\sqrt{2a}}+\frac{\sqrt{\pi }}{2\sqrt{2a}}{e}^{-2a{\tau }^{2}})}^{p/2}d\tau $.
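To see how the pieces fit together, the following self-contained sketch simulates a stationary Gaussian process with the exponential covariance of Example 1 (via an exact Ornstein–Uhlenbeck-type recursion), computes the correlogram (4), and applies Criterion 1 with the threshold $S_{\delta }$ obtained from Theorem 3. All numerical values (B, a, T, A, p, δ, the grid step) are illustrative choices of ours, not the paper's.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# Hypothesized covariance (Example 1) and illustrative parameters.
B, a = 1.0, 0.5
rho = lambda t: B * np.exp(-a * np.abs(t))
T, A, p, delta, h = 200.0, 5.0, 2, 0.05, 0.01

# Exact simulation of a stationary Gaussian path on [0, T + A] with
# covariance B exp(-a |t - s|), i.e. the hypothesis H is true here.
n = int(round((T + A) / h)) + 1
x = np.empty(n)
x[0] = rng.normal(scale=np.sqrt(B))
phi = np.exp(-a * h)
for k in range(1, n):
    x[k] = phi * x[k - 1] + np.sqrt(B * (1 - phi ** 2)) * rng.normal()

# Correlogram (4) and the test statistic int_0^A (rho_hat - rho)^p dtau.
n_T = int(round(T / h))
taus = np.arange(0.0, A + h / 2, h)
rho_hat = np.array([np.sum(x[k:k + n_T] * x[:n_T]) * h / T
                    for k in range(len(taus))])
statistic = np.sum((rho_hat - rho(taus)) ** p) * h

# C_p from Theorem 3 by nested quadrature, then the threshold S_delta.
def psi(tau):
    inner, _ = quad(lambda u: (T - u) * (rho(u) ** 2 + rho(u + tau) * rho(u - tau)),
                    0.0, T)
    return 2.0 * inner / T ** 2

C_p, _ = quad(lambda tau: psi(tau) ** (p / 2), 0.0, A)

def g(eps):
    r = eps ** (1 / p) / C_p ** (1 / p)
    return 2 * np.sqrt(1 + np.sqrt(2) * r) * np.exp(-r / np.sqrt(2))

z_p = (p / np.sqrt(2) + np.sqrt((p / 2 + 1) * p)) ** p * C_p
hi = max(z_p, 1.0)
while g(hi) > delta:
    hi *= 2.0
S_delta = max(brentq(lambda e: g(e) - delta, 0.0, hi), z_p)

print(f"statistic = {statistic:.5f}, S_delta = {S_delta:.5f}, "
      f"accept H: {statistic < S_delta}")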

Acknowledgments

The authors express their gratitude to the referee and Professor Yu. Mishura for valuable comments that helped to improve the paper.

References

[1] 
Bendat, J., Piersol, A.: Applications of Correlation and Spectral Analysis. John Wiley & Sons, New York (1980)
[2] 
Buldygin, V.: The properties of the empirical correlogram of a Gaussian process with square integrable spectral density. Ukr. Math. J. 47(7), 876–889 (1995) (in Russian). MR1367943. doi:10.1007/BF01084897
[3] 
Buldygin, V., Kozachenko, Yu.: Metric Characterization of Random Variables and Random Processes. Am. Math. Soc., Providence, RI (2000). MR1743716
[4] 
Buldygin, V., Zayats, V.: On the asymptotic normality of estimates of the correlation functions of stationary Gaussian processes in spaces of continuous functions. Ukr. Math. J. 47(11), 1485–1497 (1995) (in Russian). MR1369560. doi:10.1007/BF01057918
[5] 
Fedoryanych, T.: One estimate of the correlation function for Gaussian stochastic process. Bull. T. Shevchenko Nat. Univ. Kyiv 2, 72–76 (2004)
[6] 
Ivanov, A.: A limit theorem for the evaluation of the correlation function. Theory Probab. Math. Stat. 19, 76–81 (1978). MR0503644
[7] 
Jenkins, G., Watts, D.: Spectral Analysis and Its Applications. Holden Day, Merrifield (1971). MR0230438
[8] 
Kozachenko, Yu., Fedoryanych, T.: A criterion for testing hypotheses about the covariance function of a Gaussian stationary process. Theory Probab. Math. Stat. 69, 85–94 (2005)
[9] 
Kozachenko, Yu., Kozachenko, L.: A test for a hypothesis on the correlation function of Gaussian random process. J. Math. Sci. 77(5), 3437–3444 (1995). MR1378447. doi:10.1007/BF02367991
[10] 
Kozachenko, Yu., Moklyachuk, O.: Pre-Gaussian random vectors and their application. Theory Probab. Math. Stat. 50, 87–96 (1994). MR1445521
[11] 
Kozachenko, Yu., Oleshko, T.: Analytic properties of certain classes of pre-Gaussian stochastic processes. Theory Probab. Math. Stat. 48, 37–51 (1993)
[12] 
Kozachenko, Yu., Stadnik, A.: On the convergence of some functionals of Gaussian vectors in Orlicz spaces. Theory Probab. Math. Stat. 44, 80–87 (1991). MR1168431
[13] 
Leonenko, N., Ivanov, A.: Statistical Analysis of Random Fields. Kluwer, Dordrecht (1989). MR1009786. doi:10.1007/978-94-009-1183-3

Copyright
© 2014 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Square Gaussian stochastic process, criterion for testing hypotheses, correlogram

MSC2010
60G10, 62M07
