Modern Stochastics: Theory and Applications

Parameter estimation for fractional mixed fractional Brownian motion based on discrete observations
Volume 11, Issue 1 (2024), pp. 1–29
Kostiantyn Ralchenko, Mykyta Yakovliev

https://doi.org/10.15559/23-VMSTA234
Pub. online: 5 December 2023 · Type: Research Article · Open Access

Received: 3 August 2023
Revised: 2 November 2023
Accepted: 2 November 2023
Published: 5 December 2023

Abstract

The object of investigation is the mixed fractional Brownian motion of the form ${X_{t}}=\kappa {B_{t}^{{H_{1}}}}+\sigma {B_{t}^{{H_{2}}}}$, driven by two independent fractional Brownian motions ${B^{{H_{1}}}}$ and ${B^{{H_{2}}}}$ with Hurst parameters ${H_{1}}\lt {H_{2}}$. Strongly consistent estimators of the unknown model parameters ${({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$ are constructed from equidistant observations of a trajectory. Joint asymptotic normality of these estimators is proved for $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$.

1 Introduction

The paper is devoted to the statistical identification of the following stochastic process:
(1)
\[ {X_{t}}=\kappa {B_{t}^{{H_{1}}}}+\sigma {B_{t}^{{H_{2}}}},\hspace{1em}t\ge 0,\]
where ${B^{{H_{1}}}}$ and ${B^{{H_{2}}}}$ are two independent fractional Brownian motions with Hurst indices ${H_{1}}$ and ${H_{2}}$ respectively. The process (1) is known in the literature as fractional mixed fractional Brownian motion [16]. We aim to estimate all four unknown parameters ${H_{1}}$, ${H_{2}}$, ${\kappa ^{2}}$, ${\sigma ^{2}}$ based on equidistant observations $\{{X_{kh}},k=0,1,2,\dots \hspace{0.1667em}\}$, $h\gt 0$. In order to get an identifiable model, we assume that $0\lt {H_{1}}\lt {H_{2}}\lt 1$, $\sigma \gt 0$, and $\kappa \gt 0$ throughout the article.
Fractional Brownian motion is widely used for modelling real-world processes that evolve over time and exhibit self-similarity and long- or short-range dependence, see, e.g., [4, 5, 11, 24–26, 32]. Mixed models like (1), containing fractional Brownian motions with different Hurst indices, are important from the perspective of practical applications, in particular to economics and financial mathematics: the motion with the lower Hurst index usually corresponds to a component responsible for changes over a short period of time, while the motion with the larger one corresponds to a long-term component. The particular case of the process (1) with ${H_{1}}=1/2$ is known as mixed fractional Brownian motion. It was first proposed by Cheridito [3] for better financial modeling; its properties can be found in [33]. Practical applications of this model already exist [9, 27, 30, 32]. The model (1) with arbitrary ${H_{1}}$ and ${H_{2}}$ was considered in [8, 16]. Its application to the pricing of credit default swaps was studied in [10]. A more general model, containing several fractional Brownian motions with different Hurst indices, can be found in [28].
Various methods for parameter estimation in the mixed Brownian–fractional Brownian model (i.e., in the case ${H_{1}}=1/2$) were proposed in [29] (the maximum likelihood method), [31] (Powell’s optimization algorithm), and [6] (approach based on power variations). The general case of ${H_{1}},{H_{2}}\in (0,1)$ is less studied. We can mention only the papers [18, 19], where the following mixed model with a trend was considered:
(2)
\[ {X_{t}}=\theta t+\kappa {B_{t}^{{H_{1}}}}+\sigma {B_{t}^{{H_{2}}}},\hspace{1em}t\ge 0.\]
The authors applied the maximum likelihood approach to the estimation of the unknown drift parameter θ in model (2) based on continuous observations. They constructed an estimator that involves the solution to an integral equation with a weak polar kernel. An approach to parameter estimation based on the numerical solution of this equation was recently proposed in [21]. Estimation of the drift parameter from discrete observations was discussed in [20].
The parameter estimation for model (2) with ${H_{1}}=1/2$ was studied in [2, 7, 14, 17]. The simultaneous estimation of all four parameters of this model was carried out in [7] by the maximum likelihood method, and in [14], where the approach of [6] was generalized and, moreover, ergodic-type estimators were proposed. Ref. [2] considered the estimation of the drift parameter θ assuming that H and σ are known and $\kappa =1$. The drift parameter estimation for a more general model driven by a stationary Gaussian process was discussed in [17].
The goal of the present paper is to construct strongly consistent estimators and to study their joint asymptotic normality. We extend the approach of [14], based on the ergodic theorem and the generalized method of moments. It is worth mentioning that asymptotic normality does not hold for ${H_{2}}\gt \frac{3}{4}$, which is typical for models with fractional Brownian motion, see, e.g., [22]. In that case the estimators remain strongly consistent but have a non-Gaussian asymptotic distribution.
The paper is organized as follows. In Section 2 we construct strongly consistent estimators with the use of four statistics based on various increments of the process X. Section 3 is devoted to asymptotic normality. First, we prove the asymptotic normality of the constructed statistics and evaluate their asymptotic covariance matrix. Thereafter, we obtain the main result on the asymptotic normality of our estimators by applying the delta method. The theoretical results are illustrated by a numerical study, which is presented in Section 4. All technical proofs together with some auxiliary results are collected in Section 5.
We use the following notation. The symbol cov stands for the covariance of two random variables and for the covariance matrix of a random vector. In the paper, all the vectors are column ones. The superscript ⊤ denotes transposition. Weak convergence in distribution is denoted by $\stackrel{\text{d}}{\to }$.

2 Construction of strongly consistent estimators

Let $h\gt 0$ be fixed. Assume that the process ${X_{t}}$ defined in (1) is observed at points ${t_{k}}=kh$, $k=0,1,2,\dots \hspace{0.1667em}$. Let us denote the increment
\[ \Delta {X_{k}}:={X_{(k+1)h}}-{X_{kh}}=\kappa \Delta {B_{k}^{{H_{1}}}}+\sigma \Delta {B_{k}^{{H_{2}}}}.\]
The estimation procedure will be based on the following result.
Lemma 1.
The Gaussian sequence $\Delta {X_{k}}$, $k=0,1,\dots \hspace{0.1667em}$, is stationary and ergodic.
This lemma allows us to apply the ergodic theorem for the construction of estimators. Namely, if $g:{\mathbb{R}^{l+1}}\to \mathbb{R}$ is a Borel function such that $\operatorname{\mathsf{E}}|g(\Delta {X_{0}},\Delta {X_{1}},\dots ,\Delta {X_{l}})|\lt \infty $, then
(3)
\[ \frac{1}{N}{\sum \limits_{k=0}^{N-1}}g(\Delta {X_{k}},\Delta {X_{k+1}},\dots ,\Delta {X_{k+l}})\to \operatorname{\mathsf{E}}g(\Delta {X_{0}},\Delta {X_{1}},\dots ,\Delta {X_{l}})\]
a.s., as $N\to \infty $. The main idea is to obtain four different convergences by choosing different functions g, and then to construct the estimators by solving the corresponding system of four equations. To this end, we introduce the following four statistics:
(4)
\[ \begin{array}{r@{\hskip0pt}l@{\hskip10pt}r@{\hskip0pt}l}\displaystyle {\xi _{N}}& \displaystyle :=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({X_{(k+1)h}}-{X_{kh}}\right)^{2}},& \displaystyle {\eta _{N}}& \displaystyle :=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({X_{(k+2)h}}-{X_{kh}}\right)^{2}},\\ {} \displaystyle {\zeta _{N}}& \displaystyle :=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({X_{(k+4)h}}-{X_{kh}}\right)^{2}},& \displaystyle {\phi _{N}}& \displaystyle :=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({X_{(k+8)h}}-{X_{kh}}\right)^{2}}.\end{array}\]
Denote also $\vartheta ={({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$.
Lemma 2.
The following a.s. convergences hold as $N\to \infty $:
  • (i) ${\xi _{N}}\to \operatorname{\mathsf{E}}{\xi _{0}}={\kappa ^{2}}{h^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}=:{f_{1}}(\vartheta )$,
  • (ii) ${\eta _{N}}\to \operatorname{\mathsf{E}}{\eta _{0}}={\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}}=:{f_{2}}(\vartheta )$,
  • (iii) ${\zeta _{N}}\to \operatorname{\mathsf{E}}{\zeta _{0}}={\kappa ^{2}}{h^{2{H_{1}}}}{2^{4{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{4{H_{2}}}}=:{f_{3}}(\vartheta )$,
  • (iv) ${\phi _{N}}\to \operatorname{\mathsf{E}}{\phi _{0}}={\kappa ^{2}}{h^{2{H_{1}}}}{2^{6{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{6{H_{2}}}}=:{f_{4}}(\vartheta )$.
Let us introduce the notation
(5)
\[ f(\vartheta ):={\left({f_{1}}(\vartheta ),{f_{2}}(\vartheta ),{f_{3}}(\vartheta ),{f_{4}}(\vartheta )\right)^{\top }},\]
(6)
\[ {\tau _{N}}={({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }},N\in \mathbb{N},\hspace{1em}{\tau _{0}}={(\operatorname{\mathsf{E}}{\xi _{0}},\operatorname{\mathsf{E}}{\eta _{0}},\operatorname{\mathsf{E}}{\zeta _{0}},\operatorname{\mathsf{E}}{\phi _{0}})^{\top }}.\]
Then $f(\vartheta )={\tau _{0}}$. Therefore, in order to obtain estimators for the parameters ${({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$, we need to construct a function q such that $q({\tau _{0}})={f^{-1}}({\tau _{0}})$; the required estimators are then defined as ${({\widehat{H}_{N}^{(1)}},{\widehat{H}_{N}^{(2)}},{\hat{\kappa }_{N}^{2}},{\hat{\sigma }_{N}^{2}})^{\top }}:=q({\tau _{N}})$. In other words, in order to construct strongly consistent estimators, we need to solve the estimating equation
(7)
\[ f(\vartheta )={\tau _{N}}\]
with respect to the unknown parameters. The solution to (7) can be found in explicit form; see Subsection 5.3 for details. This leads to the following result.
Theorem 1.
Let $0\lt {H_{1}}\lt {H_{2}}\lt 1$. The statistic
(8)
\[ {\widehat{\vartheta }_{N}}:={({\widehat{H}_{N}^{(1)}},{\widehat{H}_{N}^{(2)}},{\hat{\kappa }_{N}^{2}},{\hat{\sigma }_{N}^{2}})^{\top }},\]
where
(9)
\[ {\widehat{H}_{N}^{(1)}}=\frac{1}{2}{\log _{2+}}\left(\frac{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}+{d_{N}}}{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}\right),\]
(10)
\[ {\widehat{H}_{N}^{(2)}}=\frac{1}{2}{\log _{2+}}\left(\frac{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}-{d_{N}}}{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}\right),\]
(11)
\[ {\hat{\kappa }_{N}^{2}}={\left(\frac{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}+{d_{N}}}\right)^{{\log _{2}}h}}\frac{{\xi _{N}}{d_{N}}+2{\eta _{N}^{3}}-3{\xi _{N}}{\eta _{N}}{\zeta _{N}}+{\xi _{N}^{2}}{\phi _{N}}}{2{d_{N}}},\]
(12)
\[ {\hat{\sigma }_{N}^{2}}={\left(\frac{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}-{d_{N}}}\right)^{{\log _{2}}h}}\frac{{\xi _{N}}{d_{N}}-2{\eta _{N}^{3}}+3{\xi _{N}}{\eta _{N}}{\zeta _{N}}-{\xi _{N}^{2}}{\phi _{N}}}{2{d_{N}}}\]
with
(13)
\[ {d_{N}}:={\big({\xi _{N}^{2}}{\phi _{N}^{2}}-6{\xi _{N}}{\eta _{N}}{\zeta _{N}}{\phi _{N}}-3{\eta _{N}^{2}}{\zeta _{N}^{2}}+4{\eta _{N}^{3}}{\phi _{N}}+4{\xi _{N}}{\zeta _{N}^{3}}\big)_{+}^{1/2}}\]
is a strongly consistent estimator of $\vartheta ={({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$.
Here
\[ {\log _{2+}}x=\left\{\begin{array}{l@{\hskip10.0pt}l}{\log _{2}}x,\hspace{1em}& \textit{if}\hspace{2.5pt}x\gt 0,\\ {} 0,\hspace{1em}& \textit{if}\hspace{2.5pt}x\le 0,\end{array}\right.\hspace{2em}{(x)_{+}^{1/2}}=\left\{\begin{array}{l@{\hskip10.0pt}l}\sqrt{x},\hspace{1em}& \textit{if}\hspace{2.5pt}x\gt 0,\\ {} 0,\hspace{1em}& \textit{if}\hspace{2.5pt}x\le 0.\end{array}\right.\]
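For implementation purposes, here is a minimal Python sketch (assuming numpy; the function name is ours, not from the paper) that computes the statistics (4) from an observed trajectory and evaluates the closed-form estimators (9)–(12). When ${d_{N}}=0$, the variance estimators degenerate (division by zero), which corresponds to the NaN/Inf entries reported in the simulation tables of Section 4.

```python
import numpy as np

def hurst_mixed_estimators(X, h=1.0):
    """Evaluate the closed-form estimators (9)-(12) of Theorem 1.

    X -- equidistant observations X_0, X_h, X_{2h}, ... (1-D array).
    Returns (H1_hat, H2_hat, kappa2_hat, sigma2_hat).
    """
    X = np.asarray(X, dtype=float)
    N = len(X) - 8  # so that X_{(k+8)h} exists for k = 0, ..., N-1
    xi   = np.mean((X[1:N + 1] - X[:N]) ** 2)   # statistic xi_N from (4)
    eta  = np.mean((X[2:N + 2] - X[:N]) ** 2)   # eta_N
    zeta = np.mean((X[4:N + 4] - X[:N]) ** 2)   # zeta_N
    phi  = np.mean((X[8:N + 8] - X[:N]) ** 2)   # phi_N

    # d_N from (13), with the truncation (.)_+^{1/2}
    D = (xi**2 * phi**2 - 6*xi*eta*zeta*phi - 3*eta**2 * zeta**2
         + 4*eta**3 * phi + 4*xi * zeta**3)
    d = np.sqrt(max(D, 0.0))  # d == 0 makes the variance estimators blow up

    denom = 2 * (eta**2 - xi*zeta)
    x = (eta*zeta - xi*phi + d) / denom  # estimates 2^{2 H_1}
    y = (eta*zeta - xi*phi - d) / denom  # estimates 2^{2 H_2}

    log2p = lambda t: np.log2(t) if t > 0 else 0.0  # log_{2+} of Theorem 1
    H1, H2 = 0.5 * log2p(x), 0.5 * log2p(y)

    kappa2 = (1 / x) ** np.log2(h) * (xi*d + 2*eta**3 - 3*xi*eta*zeta + xi**2 * phi) / (2 * d)
    sigma2 = (1 / y) ** np.log2(h) * (xi*d - 2*eta**3 + 3*xi*eta*zeta - xi**2 * phi) / (2 * d)
    return H1, H2, kappa2, sigma2
```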
Remark 1.
The constructed estimators can be generalized to the mixed model with trend (2) by using the statistic $\frac{{X_{Nh}}}{N}=\frac{1}{N}{\textstyle\sum _{k=0}^{N-1}}({X_{(k+1)h}}-{X_{kh}})$ to define an estimator of θ. Subtracting $\frac{{X_{Nh}}}{N}$ from the previously constructed statistics (4), we obtain a system of equations similar to (7).

3 Asymptotic normality

We start by studying the asymptotic properties of the vector ${\tau _{N}}={({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }}$. The previously defined stationary Gaussian sequence $\Delta {X_{k}}$ has the autocovariance function
(14)
\[ \tilde{\rho }(i)=\operatorname{\mathbf{cov}}(\Delta {X_{0}},\Delta {X_{i}})={\kappa ^{2}}{h^{2{H_{1}}}}\rho (i,{H_{1}})+{\sigma ^{2}}{h^{2{H_{2}}}}\rho (i,{H_{2}}),\hspace{1em}i\in \mathbb{Z},\]
where
(15)
\[ \rho (i,H):=\frac{1}{2}\left(|i+1{|^{2H}}-2|i{|^{2H}}+|i-1{|^{2H}}\right)\]
denotes the autocovariance function of the stationary sequence $\{{B_{k+1}^{H}}-{B_{k}^{H}},k\ge 0\}$, which is known as fractional Gaussian noise.
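The autocovariances (15) and (14) can be coded directly for later numerical use; a small Python sketch (the function names are ours), reused by the examples below:

```python
import numpy as np

def rho(i, H):
    """Autocovariance (15) of fractional Gaussian noise at integer lag i."""
    i = np.abs(i)
    return 0.5 * ((i + 1) ** (2 * H) - 2 * i ** (2 * H) + np.abs(i - 1) ** (2 * H))

def rho_tilde(i, H1, H2, kappa2, sigma2, h=1.0):
    """Autocovariance (14) of the increments Delta X_k of the mixed process."""
    return kappa2 * h ** (2 * H1) * rho(i, H1) + sigma2 * h ** (2 * H2) * rho(i, H2)
```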
Theorem 2.
Let $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$. The vector ${\tau _{N}}$ defined by (6) is asymptotically normal, namely,
\[ \sqrt{N}({\tau _{N}}-{\tau _{0}})=\sqrt{N}\left(\begin{array}{c}{\xi _{N}}-\operatorname{\mathsf{E}}{\xi _{0}}\\ {} {\eta _{N}}-\operatorname{\mathsf{E}}{\eta _{0}}\\ {} {\zeta _{N}}-\operatorname{\mathsf{E}}{\zeta _{0}}\\ {} {\phi _{N}}-\operatorname{\mathsf{E}}{\phi _{0}}\end{array}\right)\stackrel{\text{d}}{\to }\mathcal{N}(\vec{0},\widetilde{\Sigma })\]
with the asymptotic covariance matrix $\widetilde{\Sigma }$, which can be presented explicitly as
\[ \widetilde{\Sigma }=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\widetilde{\Sigma }_{11}}& {\widetilde{\Sigma }_{12}}& {\widetilde{\Sigma }_{13}}& {\widetilde{\Sigma }_{14}}\\ {} {\widetilde{\Sigma }_{12}}& {\widetilde{\Sigma }_{22}}& {\widetilde{\Sigma }_{23}}& {\widetilde{\Sigma }_{24}}\\ {} {\widetilde{\Sigma }_{13}}& {\widetilde{\Sigma }_{23}}& {\widetilde{\Sigma }_{33}}& {\widetilde{\Sigma }_{34}}\\ {} {\widetilde{\Sigma }_{14}}& {\widetilde{\Sigma }_{24}}& {\widetilde{\Sigma }_{34}}& {\widetilde{\Sigma }_{44}}\end{array}\right),\]
where
\[\begin{aligned}{}{\widetilde{\Sigma }_{11}}& =2{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }{(i)^{2}},\hspace{1em}{\widetilde{\Sigma }_{12}}={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\Big(4\tilde{\rho }(i)+4\tilde{\rho }(i+1)\Big),\\ {} {\widetilde{\Sigma }_{22}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(12\tilde{\rho }(i)+16\tilde{\rho }(i+1)+4\tilde{\rho }(i+2)\Big),\\ {} {\widetilde{\Sigma }_{13}}& ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\Big(8\tilde{\rho }(i)+12\tilde{\rho }(i+1)+8\tilde{\rho }(i+2)+4\tilde{\rho }(i+3)\Big),\\ {} {\widetilde{\Sigma }_{23}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(28\tilde{\rho }(i)+48\tilde{\rho }(i+1)+32\tilde{\rho }(i+2)+16\tilde{\rho }(i+3)+4\tilde{\rho }(i+4)\Big),\\ {} {\widetilde{\Sigma }_{33}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(88\tilde{\rho }(i)+160\tilde{\rho }(i+1)+124\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)\\ {} & \hspace{1em}+40\tilde{\rho }(i+4)+16\tilde{\rho }(i+5)+4\tilde{\rho }(i+6)\Big),\\ {} {\widetilde{\Sigma }_{14}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(16\tilde{\rho }(i)+28\tilde{\rho }(i+1)+24\tilde{\rho }(i+2)+20\tilde{\rho }(i+3)\\ {} & \hspace{1em}+16\tilde{\rho }(i+4)+12\tilde{\rho }(i+5)+8\tilde{\rho }(i+6)+4\tilde{\rho }(i+7)\Big),\\ {} {\widetilde{\Sigma }_{24}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(60\tilde{\rho }(i)+112\tilde{\rho }(i+1)+96\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)\\ {} & \hspace{1em}+64\tilde{\rho }(i+4)+48\tilde{\rho }(i+5)+32\tilde{\rho }(i+6)+16\tilde{\rho }(i+7)+4\tilde{\rho }(i+8)\Big),\\ {} {\widetilde{\Sigma }_{34}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(216\tilde{\rho }(i)+416\tilde{\rho }(i+1)+376\tilde{\rho }(i+2)+320\tilde{\rho }(i+3)\\ {} & \hspace{1em}+256\tilde{\rho }(i+4)+192\tilde{\rho }(i+5)+132\tilde{\rho }(i+6)+80\tilde{\rho }(i+7)\\ {} & \hspace{1em}+40\tilde{\rho }(i+8)+16\tilde{\rho }(i+9)+4\tilde{\rho }(i+10)\Big),\\ {} {\widetilde{\Sigma }_{44}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(688\tilde{\rho }(i)+1344\tilde{\rho }(i+1)+1260\tilde{\rho }(i+2)+1136\tilde{\rho }(i+3)\\ {} & \hspace{1em}+984\tilde{\rho }(i+4)+816\tilde{\rho }(i+5)+644\tilde{\rho }(i+6)+480\tilde{\rho }(i+7)\\ {} & \hspace{1em}+336\tilde{\rho }(i+8)+224\tilde{\rho }(i+9)+140\tilde{\rho }(i+10)+80\tilde{\rho }(i+11)\\ {} & \hspace{1em}+40\tilde{\rho }(i+12)+16\tilde{\rho }(i+13)+4\tilde{\rho }(i+14)\Big).\end{aligned}\]
Remark 2.
The condition ${H_{2}}\lt \frac{3}{4}$ is essential, since otherwise the series in Theorem 2 do not converge. Moreover, for ${H_{2}}\gt \frac{3}{4}$, the asymptotic normality of ${\tau _{N}}$ does not hold; using the results of [15], it is possible to establish the convergence of the suitably normalized vector ${\tau _{N}}-{\tau _{0}}$ to a special non-Gaussian distribution.
Now we are ready to provide the main result on asymptotic normality of our estimator (8).
Theorem 3.
Let $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$. The estimator ${\hat{\vartheta }_{N}}$ is asymptotically normal, namely,
\[ \sqrt{N}\left({\hat{\vartheta }_{N}}-\vartheta \right)=\sqrt{N}\left(\begin{array}{c}{\widehat{H}_{N}^{(1)}}-{H_{1}}\\ {} {\widehat{H}_{N}^{(2)}}-{H_{2}}\\ {} {\hat{\kappa }_{N}^{2}}-{\kappa ^{2}}\\ {} {\hat{\sigma }_{N}^{2}}-{\sigma ^{2}}\end{array}\right)\stackrel{\text{d}}{\to }\mathcal{N}\left(\vec{0},{\Sigma ^{0}}\right)\]
with the asymptotic covariance matrix ${\Sigma ^{0}}$ that can be found by the formula
\[ {\Sigma ^{0}}={({f^{\prime }}(\vartheta ))^{-1}}\widetilde{\Sigma }{\left({f^{\prime }}(\vartheta )\right)^{-\top }},\]
where $\widetilde{\Sigma }$ is defined in Theorem 2 and
\[ {f^{\prime }}(\vartheta )=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}2{\kappa ^{2}}{h^{2{H_{1}}}}\log h& 2{\sigma ^{2}}{h^{2{H_{2}}}}\log h& {h^{2{H_{1}}}}& {h^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(2h)^{2{H_{1}}}}\log (2h)& 2{\sigma ^{2}}{(2h)^{2{H_{2}}}}\log (2h)& {(2h)^{2{H_{1}}}}& {(2h)^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(4h)^{2{H_{1}}}}\log (4h)& 2{\sigma ^{2}}{(4h)^{2{H_{2}}}}\log (4h)& {(4h)^{2{H_{1}}}}& {(4h)^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(8h)^{2{H_{1}}}}\log (8h)& 2{\sigma ^{2}}{(8h)^{2{H_{2}}}}\log (8h)& {(8h)^{2{H_{1}}}}& {(8h)^{2{H_{2}}}}\end{array}\right).\]
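The delta-method step translates directly into code. A sketch (assuming numpy, and an approximation of $\widetilde{\Sigma }$ obtained, e.g., by truncating the series of Theorem 2 as described in Section 4) of the Jacobian ${f^{\prime }}(\vartheta )$ and of ${\Sigma ^{0}}$:

```python
import numpy as np

def f_prime(H1, H2, kappa2, sigma2, h=1.0):
    """Jacobian f'(theta) from Theorem 3; rows correspond to steps h, 2h, 4h, 8h."""
    rows = []
    for s in (h, 2 * h, 4 * h, 8 * h):
        rows.append([2 * kappa2 * s ** (2 * H1) * np.log(s),
                     2 * sigma2 * s ** (2 * H2) * np.log(s),
                     s ** (2 * H1),
                     s ** (2 * H2)])
    return np.array(rows)

def sigma0(Sigma_tilde, H1, H2, kappa2, sigma2, h=1.0):
    """Asymptotic covariance Sigma^0 = (f')^{-1} Sigma~ (f')^{-T}."""
    J_inv = np.linalg.inv(f_prime(H1, H2, kappa2, sigma2, h))
    return J_inv @ Sigma_tilde @ J_inv.T
```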

4 Simulation study

In this section, we present the results of numerical simulations assessing the performance of the estimators. For each generated trajectory, we estimate the asymptotic covariance matrices defined in Theorems 2 and 3 by plugging in the values of the estimators (9)–(12). For each set of parameters with $0\lt {H_{1}}\lt {H_{2}}\lt 1$, we generate 1000 trajectories of the process X and calculate the empirical means and empirical standard deviations of the estimates, as well as the percentage of iterations in which the constant ${d_{N}}$ defined in (13) equals zero. Additionally, for $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$, we calculate the average approximate theoretical standard deviation (ASD), that is, the square root of the estimated asymptotic variance divided by N ($\sqrt{{({\hat{\Sigma }_{N}^{0}})_{ii}}/N}$), and the coverage probability (CP) for $\alpha =5\% $, based on the estimator of the asymptotic covariance matrix. We compute these statistics only for ${H_{1}}\lt {H_{2}}\lt \frac{3}{4}$, excluding the interval $(\frac{3}{4},1)$, where the series (17) does not converge by Lemma 3, so that the asymptotic covariance matrix and the corresponding statistics (ASD and CP) are undefined. Moreover, when any of the estimates ${\widehat{H}_{N}}$ falls outside the interval $(0,\frac{3}{4})$, or the calculated constant ${d_{N}}$ defined in (13) is zero (e.g., when the discriminant D defined in the proof of Theorem 1 by (30) is nonpositive), ASD and CP are also considered undefined.
To approximate the limiting covariances numerically, each series of the form (17) is split into the sum of two convergent one-sided series
\[ {\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )={\sum \limits_{i=0}^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )+{\sum \limits_{i=0}^{+\infty }}\tilde{\rho }(i+1-\alpha )\tilde{\rho }(i+1-\beta ),\]
where $\alpha ,\beta \in \mathbb{Z}$.
Each one-sided series ${\textstyle\sum _{i=0}^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )$ is then approximated with precision $\delta ={10^{-3}}$ by the partial sum ${S_{m}}={\textstyle\sum _{i=0}^{m}}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )$, where m is the smallest number such that $|{S_{m}}-{S_{m-1}}|\lt {10^{-3}}$. The obtained approximations are used to compute the asymptotic covariance matrix defined in Theorem 2.
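A sketch of this approximation scheme in Python, reusing rho_tilde from Section 3 (the function names and the example parameter values are ours):

```python
from functools import partial

def one_sided_sum(rho_t, alpha, beta, delta=1e-3, max_terms=10**8):
    """Partial sum S_m of sum_{i>=0} rho_t(i+alpha) rho_t(i+beta),
    stopped at the smallest m with |S_m - S_{m-1}| < delta."""
    S = rho_t(alpha) * rho_t(beta)  # S_0
    for i in range(1, max_terms):
        term = rho_t(i + alpha) * rho_t(i + beta)
        S += term
        if abs(term) < delta:  # |S_m - S_{m-1}| = |term|
            break
    return S

def two_sided_series(rho_t, alpha, beta, delta=1e-3):
    """Series (17), split into the two one-sided sums as in the text."""
    return (one_sided_sum(rho_t, alpha, beta, delta)
            + one_sided_sum(rho_t, 1 - alpha, 1 - beta, delta))

# Example: sum_i rho~(i) rho~(i+1) for H1 = 0.1, H2 = 0.5, kappa = sigma = 1.
# This converges quickly; H2 close to 3/4 needs millions of terms (cf. Table 8).
rt = partial(rho_tilde, H1=0.1, H2=0.5, kappa2=1.0, sigma2=1.0)
print(two_sided_series(rt, alpha=0, beta=1))
```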
For ${\widehat{H}_{N}^{(1)}}$, ${\widehat{H}_{N}^{(2)}}$, ${\hat{\kappa }_{N}^{2}}$, ${\hat{\sigma }_{N}^{2}}$, ${\hat{\Sigma }_{N}^{0}}$ and the fixed time step $h=1$, we vary the horizon $T=h{2^{n}}$ for $n\in \{8,10,12,14,16,18,20\}$, so that the sample size is $N={2^{n}}$. All simulations use $\sigma =1$ and $\kappa =1$.
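As an illustration of the whole pipeline, the following sketch generates one trajectory of (1) and applies the estimators; it reuses rho and hurst_mixed_estimators from the sketches above. Cholesky factorisation is used for simplicity and is only practical for moderate N; circulant-embedding methods are the usual choice for the longer horizons in the tables.

```python
import numpy as np

def fbm_increments(n, H, h=1.0, rng=None):
    """Sample n increments of fBm on a grid of step h via Cholesky
    factorisation of the fGn covariance matrix (O(n^2) memory)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.arange(n)
    cov = h ** (2 * H) * rho(np.abs(idx[:, None] - idx[None, :]), H)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # small jitter for stability
    return L @ rng.standard_normal(n)

n = 2 ** 10
dX = 1.0 * fbm_increments(n, 0.1) + 1.0 * fbm_increments(n, 0.7)  # kappa = sigma = 1
X = np.concatenate([[0.0], np.cumsum(dX)])
print(hurst_mixed_estimators(X, h=1.0))
```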
Table 1.
The estimator ${\widehat{H}_{N}^{(1)}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean 0.0722 0.0434 −0.0230 −0.1009 −0.0844 0.0482 0.0842
S.dev. 0.5041 0.4727 0.5063 0.5160 0.4459 0.1629 0.0791
ASD 0.7648 0.7039 1.1629 0.3871 0.2011 0.1239 0.0812
CP% 100.00 100.00 100.00 92.18 75.61 75.58 87.91
0.1 0.5 Mean 0.0184 −0.0455 −0.0009 0.0674 0.0953 0.0988 0.0991
S.dev. 0.5618 0.5945 0.3733 0.1710 0.0824 0.0414 0.0205
ASD 1.1951 0.5894 0.2696 0.1457 0.0829 0.0439 0.0222
CP% 100.00 100.00 99.30 89.82 93.76 96.85 96.50
0.1 0.7 Mean −0.0408 −0.0071 0.0773 0.0945 0.1002 0.0998 0.1005
S.dev. 0.5931 0.3828 0.1513 0.0715 0.0342 0.0179 0.0090
ASD 0.7428 0.3223 0.1516 0.0742 0.0375 0.0188 0.0094
CP% 100.00 100.00 100.00 99.30 97.09 95.90 95.40
0.1 0.9 Mean 0.0131 0.0725 0.0886 0.0956 0.0972 0.0986 0.0993
S.dev. 0.3842 0.1554 0.0707 0.0350 0.0187 0.0113 0.0075
0.3 0.5 Mean 0.2096 0.1797 0.1161 0.0665 0.1731 0.2658 0.2901
S.dev. 0.5181 0.5738 0.4782 0.5254 0.3445 0.1209 0.0564
ASD 1.1101 1.0150 0.7507 0.4497 0.2337 0.1124 0.0570
CP% 100.00 100.00 100.00 100.00 92.01 87.11 90.80
0.3 0.7 Mean 0.1748 0.1524 0.2330 0.2764 0.2931 0.2989 0.3001
S.dev. 0.5138 0.4661 0.2630 0.1165 0.0563 0.0263 0.0137
ASD 1.4395 0.5732 0.2682 0.1248 0.0566 0.0277 0.0138
CP% 100.00 100.00 100.00 100.00 98.32 96.30 95.60
0.3 0.9 Mean 0.1863 0.2636 0.2863 0.2934 0.2969 0.2983 0.2988
S.dev. 0.4268 0.1859 0.0824 0.0423 0.0211 0.0124 0.0078
0.5 0.7 Mean 0.4213 0.3300 0.2774 0.3367 0.4563 0.4859 0.4974
S.dev. 0.6302 0.6258 0.5085 0.3908 0.1461 0.0763 0.0367
ASD 2.0105 1.2367 0.8386 0.4502 0.1964 0.0830 0.0366
CP% 100.00 100.00 100.00 100.00 99.84 99.09 98.09
0.5 0.9 Mean 0.3188 0.3701 0.4577 0.4878 0.4943 0.4975 0.4986
S.dev. 0.5170 0.4018 0.1477 0.0627 0.0299 0.0162 0.0095
0.7 0.9 Mean 0.5487 0.4918 0.5172 0.5854 0.6706 0.6898 0.6957
S.dev. 0.6543 0.6735 0.4569 0.2908 0.0914 0.0396 0.0193
Table 2.
The estimator ${\widehat{H}_{N}^{(2)}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean 0.2296 0.2845 0.3834 0.5157 0.4571 0.3634 0.3182
S.dev. 0.5205 0.6239 0.5643 0.5944 0.3578 0.1543 0.0594
ASD 2.2888 2.0419 1.4188 0.8483 0.4334 0.1937 0.0791
CP% 100.00 100.00 100.00 100.00 100.00 100.00 99.88
0.1 0.5 Mean 0.5437 0.6554 0.6072 0.5302 0.5093 0.5024 0.5003
S.dev. 0.6078 0.5234 0.2668 0.0985 0.0411 0.0193 0.0094
ASD 0.9381 0.5332 0.2632 0.1158 0.0481 0.0218 0.0107
CP% 100.00 100.00 100.00 100.00 99.66 98.37 96.80
0.1 0.7 Mean 0.7912 0.7303 0.7069 0.7018 0.7006 0.7000 0.7001
S.dev. 0.4123 0.1341 0.0554 0.0254 0.0126 0.0066 0.0032
ASD 0.3193 0.1257 0.0557 0.0265 0.0139 0.0064 0.0032
CP% 100.00 100.00 100.00 99.77 95.98 94.50 94.70
0.1 0.9 Mean 0.8818 0.8825 0.8882 0.8930 0.8959 0.8979 0.8991
S.dev. 0.1599 0.0601 0.0381 0.0272 0.0201 0.0151 0.0121
0.3 0.5 Mean 0.4179 0.4999 0.5838 0.6819 0.5997 0.5329 0.5097
S.dev. 0.6199 0.6409 0.5791 0.4784 0.2349 0.0921 0.0418
ASD 0.9471 0.7822 0.6198 0.3870 0.1737 0.0861 0.0412
CP% 100.00 100.00 100.00 100.00 97.81 90.36 90.90
0.3 0.7 Mean 0.7636 0.8383 0.7481 0.7094 0.7023 0.7007 0.7004
S.dev. 0.5831 0.4365 0.1412 0.0574 0.0271 0.0128 0.0066
ASD 0.4241 0.1793 0.0914 0.0466 0.0256 0.0130 0.0065
CP% 100.00 100.00 99.38 90.74 93.60 95.30 94.00
0.3 0.9 Mean 0.9267 0.8926 0.8898 0.8936 0.8954 0.8975 0.8981
S.dev. 0.2765 0.0827 0.0440 0.0301 0.0213 0.0166 0.0130
0.5 0.7 Mean 0.5823 0.6867 0.8179 0.8324 0.7494 0.7181 0.7056
S.dev. 0.6097 0.6288 0.5302 0.3556 0.1229 0.0581 0.0265
ASD 0.5033 0.2277 0.1997 0.1055 0.0672 0.0422 0.0245
CP% 100.00 100.00 99.54 82.13 78.62 85.38 92.58
0.5 0.9 Mean 0.8978 0.9350 0.8980 0.8936 0.8939 0.8965 0.8978
S.dev. 0.5267 0.2147 0.0793 0.0413 0.0274 0.0198 0.0150
0.7 0.9 Mean 0.7878 0.8862 0.9672 0.9309 0.9021 0.8963 0.8965
S.dev. 0.6431 0.5628 0.4037 0.1334 0.0578 0.0316 0.0213
Table 3.
The estimator ${\hat{\kappa }_{N}^{2}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean NaN NaN NaN Inf 1.0591 1.0732 1.0460
S.dev. NaN NaN NaN NaN 0.7876 0.6474 0.4604
ASD 4.3622 4.9405 5.0730 2.8475 1.5443 0.9266 0.6165
CP% 100.00 100.00 93.08 80.66 70.73 74.27 89.53
0.1 0.5 Mean NaN Inf 1.1104 1.0594 1.0340 1.0098 1.0006
S.dev. NaN NaN 0.6161 0.4314 0.2528 0.1333 0.0672
ASD 4.4678 2.4422 1.0822 0.5453 0.2928 0.1484 0.0743
CP% 100.00 100.00 99.30 92.92 96.15 97.97 96.20
0.1 0.7 Mean Inf 1.0400 1.0183 1.0054 1.0035 1.0004 1.0010
S.dev. NaN 0.4440 0.2538 0.1260 0.0623 0.0330 0.0164
ASD 1.8115 0.6744 0.2911 0.1395 0.0693 0.0346 0.0173
CP% 100.00 100.00 100.00 100.00 97.79 95.70 95.70
0.1 0.9 Mean 1.0054 0.9872 0.9878 0.9922 0.9950 0.9972 0.9987
S.dev. 0.3366 0.1663 0.0823 0.0437 0.0264 0.0177 0.0132
0.3 0.5 Mean NaN NaN Inf 1.0451 1.0379 1.0466 1.0232
S.dev. NaN NaN NaN 0.8444 0.7341 0.5658 0.3768
ASD 4.8428 4.2202 3.2605 2.1396 1.0325 0.6043 0.3862
CP% 100.00 100.00 100.00 96.51 83.23 77.25 84.00
0.3 0.7 Mean NaN 1.1084 1.0696 1.0255 1.0106 1.0026 1.0017
S.dev. NaN NaN 0.5169 0.3305 0.1797 0.0877 0.0455
ASD 3.5137 1.3194 0.6535 0.3286 0.1770 0.0917 0.0461
CP% 100.00 100.00 99.79 90.74 94.96 95.70 94.60
0.3 0.9 Mean Inf 0.9970 0.9867 0.9887 0.9923 0.9955 0.9968
S.dev. NaN 0.2908 0.1463 0.0802 0.0436 0.0285 0.0201
0.5 0.7 Mean NaN NaN Inf 1.0557 1.0674 1.0502 1.0270
S.dev. NaN NaN NaN 0.7737 0.6163 0.4567 0.2703
ASD 4.6051 2.2181 2.1464 1.2319 0.7646 0.4713 0.2722
CP% 100.00 100.00 92.17 77.05 78.14 86.29 93.96
0.5 0.9 Mean NaN 1.0008 0.9750 0.9800 0.9814 0.9905 0.9940
S.dev. NaN 0.5558 0.3666 0.1997 0.1057 0.0621 0.0402
0.7 0.9 Mean NaN NaN −Inf 0.9524 0.9568 0.9572 0.9690
S.dev. NaN NaN NaN 0.6566 0.4605 0.2777 0.1643
Table 4.
The estimator ${\hat{\sigma }_{N}^{2}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean NaN NaN NaN Inf 0.9764 0.9499 0.9781
S.dev. NaN NaN NaN NaN 0.7882 0.6480 0.4606
ASD 4.3572 4.9387 5.0724 2.8472 1.5442 0.9265 0.6165
CP% 100.00 100.00 93.08 80.25 70.95 74.27 89.53
0.1 0.5 Mean NaN −Inf 0.8916 0.9408 0.9656 0.9902 0.9995
S.dev. NaN NaN 0.6168 0.4320 0.2527 0.1334 0.0672
ASD 4.4647 2.4412 1.0818 0.5452 0.2928 0.1484 0.0748
CP% 100.00 100.00 99.54 93.07 96.15 97.87 96.40
0.1 0.7 Mean −Inf 0.9609 0.9795 0.9941 0.9959 0.9994 0.9989
S.dev. NaN 0.4474 0.2513 0.1245 0.0617 0.0326 0.0162
ASD 1.8071 0.6716 0.2897 0.1388 0.0690 0.0344 0.0172
CP% 100.00 100.00 100.00 100.00 97.49 95.90 96.20
0.1 0.9 Mean 1.0170 1.0253 1.0273 1.0157 1.0099 1.0064 1.0067
S.dev. 0.5639 0.3708 0.2810 0.2096 0.1452 0.1039 0.0784
0.3 0.5 Mean NaN NaN −Inf 0.9554 0.9623 0.9536 0.9769
S.dev. NaN NaN NaN 0.8435 0.7340 0.5662 0.3768
ASD 4.8407 4.2199 3.2605 2.1396 1.0325 0.6043 0.3862
CP% 100.00 100.00 100.00 96.80 83.23 77.03 84.00
0.3 0.7 Mean NaN −Inf 0.9327 0.9886 0.9978 0.9984 0.9982
S.dev. NaN NaN 0.5156 0.3295 0.1787 0.0872 0.0453
ASD 3.5101 1.3180 0.6527 0.3283 0.1768 0.0915 0.0460
CP% 100.00 100.00 100.00 90.61 95.07 95.50 94.80
0.3 0.9 Mean −Inf 1.0177 1.0233 1.0235 1.0092 1.0092 1.0044
S.dev. NaN 0.4173 0.2704 0.1891 0.1256 0.1035 0.0803
0.5 0.7 Mean NaN NaN −Inf 0.9450 0.9323 0.9497 0.9730
S.dev. NaN NaN NaN 0.7735 0.6158 0.4568 0.2702
ASD 4.5998 2.2161 2.1461 1.2318 0.7645 0.4712 0.2721
CP% 100.00 100.00 91.71 77.05 78.30 86.42 94.06
0.5 0.9 Mean NaN 1.0025 1.0274 1.0277 1.0211 1.0122 1.0075
S.dev. NaN 0.6212 0.4051 0.2374 0.1476 0.1002 0.0726
0.7 0.9 Mean NaN NaN Inf 1.0399 1.0375 1.0414 1.0288
S.dev. NaN NaN NaN 0.6359 0.4416 0.2525 0.1337
The estimation results provided in Tables 1–4 indicate that the constructed estimators perform better the larger the difference between ${H_{1}}$ and ${H_{2}}$ is. The impact is especially significant for the estimators of the variances ${\kappa ^{2}}$ and ${\sigma ^{2}}$, since the denominators in (11) and (12) tend to 0 as the difference between the two Hurst parameters tends to zero. Also, the smaller this difference, the higher the percentage of iterations with the constant ${d_{N}}$ equal to zero, reported in Table 5. Additionally, we observe that the estimators perform better for higher values of ${H_{1}}$ and ${H_{2}}$. This may also be explained by the difference between ${H_{1}}$ and ${H_{2}}$, since most of the quantities in the numerators and denominators of the constructed estimators depend strongly on this difference, as shown in the proof of Theorem 1. Higher values of ${H_{1}}$ and ${H_{2}}$ may result in a more accurate estimate of the difference ${\widehat{H}_{N}^{(2)}}-{\widehat{H}_{N}^{(1)}}$ in terms of the ratio between the estimated value and its deviation, and therefore in a lower percentage of iterations with a negative estimated difference.
Table 5.
The percentage (%) of iterations with constant ${d_{N}}$ being equal to zero with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 2.9 1.3 0.3 0 0 0 0
0.1 0.5 1.1 0.1 0 0 0 0 0
0.1 0.7 0.2 0 0 0 0 0 0
0.1 0.9 0 0 0 0 0 0 0
0.3 0.5 3.8 1.5 0.1 0 0 0 0
0.3 0.7 0.9 0 0 0 0 0 0
0.3 0.9 0.1 0 0 0 0 0 0
0.5 0.7 3.5 2.2 0.1 0 0 0 0
0.5 0.9 0.6 0 0 0 0 0 0
0.7 0.9 4.3 1.1 0.4 0 0 0 0
Table 6.
The coverage probability (%) for estimators ${({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 ${\xi _{N}}$ 94.29 92.19 95.38 94.24 94.68 92.25 93.14
${\eta _{N}}$ 100.0 98.44 99.23 99.59 98.67 99.27 98.95
${\zeta _{N}}$ 100.0 100.0 100.0 100.0 100.0 100.0 99.88
${\phi _{N}}$ 100.0 100.0 100.0 100.0 100.0 100.0 100.0
0.1 0.5 ${\xi _{N}}$ 95.06 94.55 94.20 94.25 93.54 94.41 95.10
${\eta _{N}}$ 98.77 97.52 97.45 99.12 97.73 98.27 98.50
${\zeta _{N}}$ 97.53 100.0 98.84 99.41 98.75 99.19 99.20
${\phi _{N}}$ 100.0 100.0 99.54 99.26 99.55 99.19 99.40
0.1 0.7 ${\xi _{N}}$ 95.83 95.11 96.33 95.24 94.48 94.00 94.40
${\eta _{N}}$ 93.75 95.49 97.10 94.66 94.68 94.10 93.70
${\zeta _{N}}$ 90.62 95.11 93.82 91.88 91.77 90.00 91.10
${\phi _{N}}$ 88.54 92.48 91.31 89.91 90.26 87.50 88.60
0.3 0.5 ${\xi _{N}}$ 95.00 95.28 95.21 95.64 94.36 95.77 94.80
${\eta _{N}}$ 95.00 96.23 97.01 96.51 97.34 97.29 96.50
${\zeta _{N}}$ 96.67 98.11 98.20 97.67 99.06 98.70 97.80
${\phi _{N}}$ 100.0 99.06 97.01 98.84 98.90 98.81 97.90
0.3 0.7 ${\xi _{N}}$ 96.15 94.21 94.24 92.70 93.07 94.00 93.70
${\eta _{N}}$ 98.72 93.82 94.65 92.18 91.71 91.80 91.40
${\zeta _{N}}$ 96.15 93.05 91.98 90.61 90.35 89.40 89.00
${\phi _{N}}$ 93.59 93.44 89.51 87.22 88.88 86.90 85.70
0.5 0.7 ${\xi _{N}}$ 89.36 96.15 95.85 91.06 94.18 92.56 94.27
${\eta _{N}}$ 87.23 97.69 92.17 87.68 91.04 89.16 90.88
${\zeta _{N}}$ 89.36 96.92 87.56 84.30 88.52 86.81 87.91
${\phi _{N}}$ 91.49 97.69 85.25 82.37 86.95 84.60 86.00
Table 7.
The percentage (%) of iterations with undefined asymptotic covariance matrix ${\hat{\Sigma }_{N}^{0}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ | N: ${2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 96.50 93.60 87.00 75.70 54.90 31.60 14.00
0.1 0.5 91.90 79.80 56.90 32.20 11.80 1.60 0
0.1 0.7 90.40 73.40 48.20 13.80 0.40 0 0
0.3 0.5 94.00 89.40 83.30 65.60 36.20 7.70 0
0.3 0.7 92.20 74.10 51.40 23.30 4.70 0 0
0.5 0.7 95.30 87.00 78.30 58.60 36.40 23.40 5.70
We observe that the larger the difference between the true values of ${H_{1}}$ and ${H_{2}}$, the closer the ASD is to the empirical standard deviation of the estimator and the closer the coverage probability is to the theoretical $95\% $. Similar behaviour is observed for our estimates of the asymptotic variance of (4) constructed in Theorem 2, as shown by the CP estimates in Table 6. This table also shows that for higher values of ${H_{1}}$ and ${H_{2}}$, the accuracy of (4) decreases as the increment step grows (e.g., ${\phi _{N}}$ is less accurate than ${\zeta _{N}}$, which is less accurate than ${\eta _{N}}$, which is less accurate than ${\xi _{N}}$). Unfortunately, the percentage of iterations with an undefined asymptotic covariance matrix ${\hat{\Sigma }_{N}^{0}}$, caused by the estimates ${\widehat{H}_{N}^{(1)}}$, ${\widehat{H}_{N}^{(2)}}$ violating the conditions of Theorem 3, is high, although it decreases for larger samples, as shown in Table 7. Also, the larger the difference between the true ${H_{1}}$ and ${H_{2}}$, the lower the percentage of iterations with undefined ${\hat{\Sigma }_{N}^{0}}$.
Additionally, in the course of these simulations we observed that the closer the parameters ${H_{1}}$ and ${H_{2}}$ are to 0.75, the slower the convergence in the numerical approximation of the limiting covariance. To study this effect, we have conducted simulations for the series
(16)
\[ {\sum \limits_{i=0}^{+\infty }}\tilde{\rho }{(i)^{2}}.\]
We estimate (16) by finding the smallest number $m\in \mathbb{N}$ such that for ${S_{m}}\hspace{-0.1667em}=\hspace{-0.1667em}{\textstyle\sum _{i=0}^{m}}\tilde{\rho }{(i)^{2}}$ the inequality $|{S_{m}}-{S_{m-1}}|\le \delta $ holds for some predefined δ.
This estimation was conducted for $\delta \in \{{10^{-3}},{10^{-4}},{10^{-5}},{10^{-6}},{10^{-7}},{10^{-8}},{10^{-9}}\}$ with $\sigma =1$, $\kappa =1$, ${H_{1}}=0.1$ and ${H_{2}}\in \{0.3,0.5,0.7\}$. The corresponding results are presented in Table 8. As one can see, in order to estimate ${S_{m}}$ with precision $\delta ={10^{-9}}$ for ${H_{2}}=0.3$, only a few hundred terms are needed, while for ${H_{2}}=0.7$ tens of millions of terms are required. Moreover, lowering the precision from ${10^{-4}}$ to ${10^{-3}}$ has almost no impact on the estimated value ${S_{m}}$ for low ${H_{2}}$, but for high ${H_{2}}$ it can change the estimate by roughly a factor of three.
Table 8.
The convergence of the series (16) with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ and ${H_{1}}=0.1$ ($h=1$)
${H_{2}}$ | δ: ${10^{-3}}$ ${10^{-4}}$ ${10^{-5}}$ ${10^{-6}}$ ${10^{-7}}$ ${10^{-8}}$ ${10^{-9}}$
0.3 m 4 8 16 35 76 168 377
${S_{m}}$ 4.45362 4.45427 4.45445 4.45451 4.45452 4.45452 4.45453
0.5 m 3 4 7 12 22 42 78
${S_{m}}$ 4.18198 4.18203 4.18206 4.18207 4.18208 4.18208 4.18208
0.7 m $5.9\cdot {10^{6}}$ $4.9\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$
${S_{m}}$ 9981.86 35818.4 45796.6 45796.6 45796.6 45796.6 45796.6

5 Proofs

5.1 Auxiliary properties of covariance functions

The following statement provides some important properties of sequences $\tilde{\rho }(i)$ and $\rho (i,H)$, $i\in \mathbb{Z}$, defined by (14) and (15), respectively.
Lemma 3.
  • 1. For any $H\in (0,1)$, $\rho (i,H)\sim H(2H-1){|i|^{2H-2}}$ as $\left|i\right|\to \infty $.
  • 2. For any $H\in (0,\frac{3}{4})$, ${\textstyle\sum _{i=-\infty }^{\infty }}{\rho ^{2}}(i,H)\lt \infty $.
  • 3. For any $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$ and for all $\alpha ,\beta \in \mathbb{Z}$,
    (17)
    \[ {\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )\lt \infty .\]
Proof.
The first two statements are well known; see, e.g., [22]. In order to prove the third one, we observe that
\[\begin{aligned}{}{\sum \limits_{i=-\infty }^{\infty }}{\tilde{\rho }^{2}}(i)& \le 2{\kappa ^{4}}{h^{4{H_{1}}}}{\sum \limits_{i=-\infty }^{\infty }}{\rho ^{2}}(i,{H_{1}})+2{\sigma ^{4}}{h^{4{H_{2}}}}{\sum \limits_{i=-\infty }^{\infty }}{\rho ^{2}}(i,{H_{2}})\lt \infty ,\end{aligned}\]
by the second statement. Then, using the Cauchy–Schwarz inequality, we get for all $\alpha ,\beta \in \mathbb{Z}$,
\[ {\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )\le \sqrt{{\sum \limits_{i=-\infty }^{+\infty }}{\tilde{\rho }^{2}}(i+\alpha )\cdot {\sum \limits_{i=-\infty }^{+\infty }}{\tilde{\rho }^{2}}(i+\beta )}={\sum \limits_{i=-\infty }^{+\infty }}{\tilde{\rho }^{2}}(i)\lt \infty .\]
 □

5.2 Proofs of ergodic results

Proof of Lemma 1.
Since ${B^{{H_{1}}}}$ and ${B^{{H_{2}}}}$ are independent centered Gaussian processes with stationary increments, we see that $\{\Delta {X_{k}},k\ge 0\}$ is a stationary Gaussian sequence with $\operatorname{\mathsf{E}}\Delta {X_{k}}=0$. Therefore, in order to establish the ergodicity of $\Delta {X_{k}}$, it suffices to prove that its autocovariance function $\tilde{\rho }(k)$ vanishes at infinity. In turn, this fact follows immediately from the first statement of Lemma 3:
\[\begin{aligned}{}\tilde{\rho }(k)& ={\kappa ^{2}}{h^{2{H_{1}}}}\rho (k,{H_{1}})+{\sigma ^{2}}{h^{2{H_{2}}}}\rho (k,{H_{2}})\\ {} & \sim {\kappa ^{2}}{h^{2{H_{1}}}}{H_{1}}(2{H_{1}}-1){k^{(2{H_{1}}-2)}}+{\sigma ^{2}}{h^{2{H_{2}}}}{H_{2}}(2{H_{2}}-1){k^{(2{H_{2}}-2)}}\to 0,\end{aligned}\]
as $k\to \infty $.  □
Proof of Lemma 2.
$(i)$ Taking $g({x_{0}})={x_{0}^{2}}$ in (3), we obtain that
\[\begin{aligned}{}{\xi _{N}}\to \operatorname{\mathsf{E}}{X_{h}^{2}}& =\operatorname{\mathsf{E}}{\left(\kappa {B_{h}^{{H_{1}}}}+\sigma {B_{h}^{{H_{2}}}}\right)^{2}}={\kappa ^{2}}\operatorname{\mathsf{E}}{({B_{h}^{{H_{1}}}})^{2}}+{\sigma ^{2}}\operatorname{\mathsf{E}}{({B_{h}^{{H_{2}}}})^{2}}\\ {} & ={\kappa ^{2}}{h^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}.\end{aligned}\]
$(\textit{ii})$ Applying $g({x_{0}},{x_{1}})={({x_{0}}+{x_{1}})^{2}}$ to (3), we arrive at
\[\begin{aligned}{}{\eta _{N}}\to \operatorname{\mathsf{E}}{X_{2h}^{2}}& =\operatorname{\mathsf{E}}{\left(\kappa {B_{2h}^{{H_{1}}}}+\sigma {B_{2h}^{{H_{2}}}}\right)^{2}}={\kappa ^{2}}\operatorname{\mathsf{E}}{({B_{2h}^{{H_{1}}}})^{2}}+{\sigma ^{2}}\operatorname{\mathsf{E}}{({B_{2h}^{{H_{2}}}})^{2}}\\ {} & ={\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}}.\end{aligned}\]
Statements $(\textit{iii})$ and $(\textit{iv})$ can be proved similarly; we need to choose $g({x_{0}},{x_{1}},{x_{2}},{x_{3}})={({x_{0}}+{x_{1}}+{x_{2}}+{x_{3}})^{2}}$ and $g({x_{0}},\dots ,{x_{7}})={({\textstyle\sum _{i=0}^{7}}{x_{i}})^{2}}$ for (3), respectively.  □

5.3 Construction of estimators. Proof of Theorem 1

Let us briefly explain how to solve the estimating equation (7). Denote $u={\kappa ^{2}}{h^{2{H_{1}}}}$, $v={\sigma ^{2}}{h^{2{H_{2}}}}$, $x={2^{2{H_{1}}}}$, $y={2^{2{H_{2}}}}$, $\xi ={\xi _{N}}$, $\eta ={\eta _{N}}$, $\zeta ={\zeta _{N}}$, $\phi ={\phi _{N}}$. Then (7) takes the form
(18)
\[\begin{aligned}{}\xi & =u+v,\end{aligned}\]
(19)
\[\begin{aligned}{}\eta & =ux+vy,\end{aligned}\]
(20)
\[\begin{aligned}{}\zeta & =u{x^{2}}+v{y^{2}},\end{aligned}\]
(21)
\[\begin{aligned}{}\phi & =u{x^{3}}+v{y^{3}}.\end{aligned}\]
Using equations (18) and (19) we can express the variables u and v in terms of x and y as follows:
(22)
\[ v=\frac{\eta -\xi x}{y-x},\hspace{1em}u=\frac{\eta -\xi y}{x-y}.\]
Combining (22) with (20) we get
(23)
\[\begin{aligned}{}\zeta & ={x^{2}}\cdot \frac{\eta -\xi y}{x-y}+{y^{2}}\cdot \frac{\eta -\xi x}{y-x}=\frac{1}{y-x}\left(-\eta {x^{2}}+\xi y{x^{2}}+\eta {y^{2}}-\xi x{y^{2}}\right)\\ {} & =\frac{1}{y-x}\left(-\xi (y-x)xy+\eta ({y^{2}}-{x^{2}})\right)=-\xi xy+\eta (y+x).\end{aligned}\]
Similarly, from (21) and (22) we derive
(24)
\[\begin{aligned}{}\phi & ={x^{3}}\cdot \frac{\eta -\xi y}{x-y}+{y^{3}}\cdot \frac{\eta -\xi x}{y-x}=\frac{1}{y-x}\left(-\xi ({y^{2}}-{x^{2}})xy+\eta ({y^{3}}-{x^{3}})\right)\\ {} & =-\xi (y+x)xy+\eta ({y^{2}}+xy+{x^{2}}).\end{aligned}\]
Multiplying (23) by $x+y$ results in
(25)
\[ \zeta (x+y)=-\xi (x+y)xy+\eta ({y^{2}}+2xy+{x^{2}}).\]
Finally, by subtracting (24) from (25) we get $\zeta (x+y)-\phi =\eta xy$, whence
(26)
\[ x=\frac{\phi -\zeta y}{\zeta -\eta y}.\]
At the same time, we can express x via y using (23):
(27)
\[ x=\frac{\zeta -\eta y}{\eta -\xi y}.\]
From equations (26) and (27) we obtain
\[ \frac{\phi -\zeta y}{\zeta -\eta y}=\frac{\zeta -\eta y}{\eta -\xi y},\]
whence we get the following quadratic equation for y
(28)
\[ {y^{2}}\left({\eta ^{2}}-\xi \zeta \right)-y\left(\eta \zeta -\xi \phi \right)+\left({\zeta ^{2}}-\eta \phi \right)=0.\]
It is easy to see that x satisfies the same quadratic equation, in view of symmetry. Therefore, x and y ($y\gt x$) are two roots of (28), which are given by
(29)
\[ \frac{\eta \zeta -\xi \phi \pm \sqrt{D}}{2({\eta ^{2}}-\xi \zeta )},\]
where
(30)
\[ \begin{aligned}{}D& ={(\eta \zeta -\xi \phi )^{2}}-4({\eta ^{2}}-\xi \zeta )({\zeta ^{2}}-\eta \phi )\\ {} & ={\xi ^{2}}{\phi ^{2}}-6\xi \eta \zeta \phi -3{\eta ^{2}}{\zeta ^{2}}+4{\eta ^{3}}\phi +4\xi {\zeta ^{3}}.\end{aligned}\]
It is not hard to derive the following relations from equations (18)–(21):
\[\begin{aligned}{}\eta \zeta -\xi \phi & =(ux+vy)(u{x^{2}}+v{y^{2}})-(u+v)(u{x^{3}}+v{y^{3}})=-uv{(y-x)^{2}}(y+x);\\ {} {\eta ^{2}}-\xi \zeta & ={(ux+vy)^{2}}-(u+v)(u{x^{2}}+v{y^{2}})=-uv{(y-x)^{2}};\\ {} {\zeta ^{2}}-\eta \phi & ={(u{x^{2}}+v{y^{2}})^{2}}-(ux+vy)(u{x^{3}}+v{y^{3}})=-uvxy{(y-x)^{2}}.\end{aligned}\]
Thus
\[\begin{aligned}{}D& ={(\eta \zeta -\xi \phi )^{2}}-4({\eta ^{2}}-\xi \zeta )({\zeta ^{2}}-\eta \phi )\\ {} & ={u^{2}}{v^{2}}{(y-x)^{4}}{(y+x)^{2}}-4{u^{2}}{v^{2}}xy{(y-x)^{4}}={u^{2}}{v^{2}}{(y-x)^{6}}\gt 0.\end{aligned}\]
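These polynomial identities are easy to verify symbolically; a short sympy check (assuming sympy is available):

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y', positive=True)
xi, eta = u + v, u*x + v*y
zeta, phi = u*x**2 + v*y**2, u*x**3 + v*y**3

# the three relations derived from (18)-(21)
assert sp.expand(eta*zeta - xi*phi + u*v*(y - x)**2*(y + x)) == 0
assert sp.expand(eta**2 - xi*zeta + u*v*(y - x)**2) == 0
assert sp.expand(zeta**2 - eta*phi + u*v*x*y*(y - x)**2) == 0

# the discriminant (30) equals u^2 v^2 (y - x)^6
D = (eta*zeta - xi*phi)**2 - 4*(eta**2 - xi*zeta)*(zeta**2 - eta*phi)
assert sp.expand(D - u**2 * v**2 * (y - x)**6) == 0
```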
Therefore, the discriminant is well defined and the corresponding equation has two different roots. From equation (29)
\[ \frac{\eta \zeta -\xi \phi \pm \sqrt{D}}{2({\eta ^{2}}-\xi \zeta )}\hspace{-0.1667em}=\hspace{-0.1667em}\frac{-uv{(y-x)^{2}}(y+x)\pm uv{(y-x)^{3}}}{-2uv{(y-x)^{2}}}\hspace{-0.1667em}=\hspace{-0.1667em}\frac{-1}{2}\cdot (-y-x\pm (y-x)),\]
which provides
\[ x=\frac{\eta \zeta -\xi \phi +\sqrt{D}}{2({\eta ^{2}}-\xi \zeta )},\hspace{1em}y=\frac{\eta \zeta -\xi \phi -\sqrt{D}}{2({\eta ^{2}}-\xi \zeta )}.\]
Recall that $x={2^{2{H_{1}}}}$ and $y={2^{2{H_{2}}}}$, so we get
(31)
\[ {H_{1}}=\frac{1}{2}{\log _{2}}\left(\frac{\eta \zeta -\xi \phi +\sqrt{D}}{2({\eta ^{2}}-\xi \zeta )}\right),\]
(32)
\[ {H_{2}}=\frac{1}{2}{\log _{2}}\left(\frac{\eta \zeta -\xi \phi -\sqrt{D}}{2({\eta ^{2}}-\xi \zeta )}\right).\]
Also, from equation (22) we obtain
\[\begin{aligned}{}v& =\frac{\eta -\xi x}{y-x}=\frac{2{\eta ^{3}}-2\xi \eta \zeta -\xi \eta \zeta +{\xi ^{2}}\phi -\xi \sqrt{D}}{-2\sqrt{D}}=\frac{\xi \sqrt{D}-2{\eta ^{3}}+3\xi \eta \zeta -{\xi ^{2}}\phi }{2\sqrt{D}}.\end{aligned}\]
Taking into account that ${\sigma ^{2}}{h^{2{H_{2}}}}=v$ and ${a^{\log b}}={b^{\log a}}$ we get
(33)
\[ {\sigma ^{2}}={\left(\frac{2({\eta ^{2}}-\xi \zeta )}{\eta \zeta -\xi \phi -\sqrt{D}}\right)^{{\log _{2}}h}}\left(\frac{\xi \sqrt{D}-2{\eta ^{3}}+3\xi \eta \zeta -{\xi ^{2}}\phi }{2\sqrt{D}}\right).\]
And similarly, we get
(34)
\[ {\kappa ^{2}}={\left(\frac{2({\eta ^{2}}-\xi \zeta )}{\eta \zeta -\xi \phi +\sqrt{D}}\right)^{{\log _{2}}h}}\left(\frac{\xi \sqrt{D}+2{\eta ^{3}}-3\xi \eta \zeta +{\xi ^{2}}\phi }{2\sqrt{D}}\right).\]
Equations (31)–(34) provide the inverse function $q={f^{-1}}$, where f is defined by (5): for $f(\vartheta )={\tau _{0}}$ we get $q({\tau _{0}})=\vartheta $. Therefore, substituting the strongly consistent estimator ${\tau _{N}}$ for ${\tau _{0}}$ in equations (31)–(34), we obtain the strongly consistent estimator ${\hat{\vartheta }_{N}}={({\widehat{H}_{N}^{(1)}},{\widehat{H}_{N}^{(2)}},{\hat{\kappa }_{N}^{2}},{\hat{\sigma }_{N}^{2}})^{\top }}$ of $\vartheta ={({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$ defined by (9)–(12).  □
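A quick numerical sanity check of the inversion (the parameter values are arbitrary, chosen for illustration): building ${\tau _{0}}=f(\vartheta )$ from Lemma 2 and feeding it into (29) recovers the Hurst parameters exactly.

```python
import numpy as np

H1, H2, kappa2, sigma2, h = 0.2, 0.6, 1.5, 0.7, 1.0
# tau_0 = (E xi_0, E eta_0, E zeta_0, E phi_0) from Lemma 2 (steps h, 2h, 4h, 8h)
xi, eta, zeta, phi = [kappa2 * (s * h) ** (2 * H1) + sigma2 * (s * h) ** (2 * H2)
                      for s in (1, 2, 4, 8)]
D = xi**2*phi**2 - 6*xi*eta*zeta*phi - 3*eta**2*zeta**2 + 4*eta**3*phi + 4*xi*zeta**3
d = np.sqrt(D)  # D = u^2 v^2 (y - x)^6 > 0
x = (eta*zeta - xi*phi + d) / (2 * (eta**2 - xi*zeta))  # = 2^{2 H1}
y = (eta*zeta - xi*phi - d) / (2 * (eta**2 - xi*zeta))  # = 2^{2 H2}
print(0.5 * np.log2(x), 0.5 * np.log2(y))  # 0.2, 0.6 up to rounding
```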

5.4 Proof of Theorem 2

The proof consists of two parts: in the first part we compute the asymptotic covariance matrix $\widetilde{\Sigma }$, while the second part contains the proof of asymptotic normality.
Part 1: Identification of the asymptotic covariance matrix.
First, we find the explicit form of the covariance matrix $\widetilde{\Sigma }$ by evaluating the limits, as $N\to \infty $, of the following variances and covariances: $N\operatorname{\mathbf{Var}}({\xi _{N}})$, $N\operatorname{\mathbf{Var}}({\eta _{N}})$, $N\operatorname{\mathbf{Var}}({\zeta _{N}})$, $N\operatorname{\mathbf{Var}}({\phi _{N}})$, $N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})$, $N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})$, $N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})$, $N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})$, $N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})$, $N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})$.
1.1. Evaluation of convergence limit for $N\operatorname{\mathbf{Var}}({\xi _{N}})$. Using Isserlis’ theorem and the stationarity of the sequence $\left\{\Delta {X_{k}}\right\}$, we can write
\[ \operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},{(\Delta {X_{j}})^{2}}\right)=2\operatorname{\mathbf{cov}}{\left((\Delta {X_{k}}),(\Delta {X_{j}})\right)^{2}}=2\tilde{\rho }{(k-j)^{2}}.\]
Therefore
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\xi _{N}})& =\frac{1}{N}\operatorname{\mathbf{Var}}\left({\sum \limits_{k=0}^{N-1}}{\left(\Delta {X_{k}}\right)^{2}}\right)=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},{(\Delta {X_{j}})^{2}}\right)\\ {} & =\frac{2}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }{(k-j)^{2}}.\end{aligned}\]
Further, by rearranging sums, we get
(35)
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\xi _{N}})& =\frac{2}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{i=k-N+1}^{k}}\tilde{\rho }{(i)^{2}}=\frac{2}{N}{\sum \limits_{i=-N+1}^{0}}{\sum \limits_{k=0}^{N-1+i}}\tilde{\rho }{(i)^{2}}+\frac{2}{N}{\sum \limits_{i=1}^{N-1}}{\sum \limits_{k=i}^{N-1}}\tilde{\rho }{(i)^{2}}\\ {} & =2{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }{(i)^{2}}\to 2{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }{(i)^{2}},\hspace{1em}\text{as}\hspace{2.5pt}N\to \infty .\end{aligned}\]
where the passage to the limit is justified by the dominated convergence theorem, due to Lemma 3.
1.2. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})$. Write
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{(\Delta {X_{j}}+\Delta {X_{j+1}})^{2}}\right)\\ {} & =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}& \operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\operatorname{\mathsf{E}}{(\Delta {X_{k}})^{2}}\Delta {X_{j+a}}\Delta {X_{j+b}}-\operatorname{\mathsf{E}}{(\Delta {X_{k}})^{2}}\operatorname{\mathsf{E}}\Delta {X_{j+a}}\Delta {X_{j+b}}\\ {} & =\operatorname{\mathsf{E}}\Delta {X_{k}}\Delta {X_{j+a}}\operatorname{\mathsf{E}}\Delta {X_{k}}\Delta {X_{j+b}}+\operatorname{\mathsf{E}}\Delta {X_{k}}\Delta {X_{j+b}}\operatorname{\mathsf{E}}\Delta {X_{k}}\Delta {X_{j+a}}\\ {} & =2\operatorname{\mathbf{cov}}(\Delta {X_{k}},\Delta {X_{j+a}})\operatorname{\mathbf{cov}}(\Delta {X_{k}},\Delta {X_{j+b}})=2\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k).\end{aligned}\]
Therefore arguing as in (35), we obtain
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})& ={\sum \limits_{a,b=0}^{1}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)\\ {} & ={\sum \limits_{a,b=0}^{1}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & =2{\sum \limits_{a,b=0}^{1}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & \to 2{\sum \limits_{a,b=0}^{1}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)(4\tilde{\rho }(i)+4\tilde{\rho }(i+1)),\end{aligned}\]
as $N\to \infty $, where the last series converges according to Lemma 3.
1.3. Evaluation of convergence limit for $N\operatorname{\mathbf{Var}}({\eta _{N}})$. We have
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\eta _{N}})& =\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{c,d=0}^{1}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & ={\sum \limits_{a,b,c,d=0}^{1}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}& \operatorname{\mathbf{cov}}(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}})\\ {} & =\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\Delta {X_{j+c}}\Delta {X_{j+d}}-\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\operatorname{\mathsf{E}}\Delta {X_{j+c}}\Delta {X_{j+d}}\\ {} & =\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\operatorname{\mathsf{E}}\Delta {X_{j+c}}\Delta {X_{j+d}}+\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{j+c}}\operatorname{\mathsf{E}}\Delta {X_{k+b}}\Delta {X_{j+d}}\\ {} & \hspace{1em}+\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{j+d}}\operatorname{\mathsf{E}}\Delta {X_{j+c}}\Delta {X_{k+b}}-\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\operatorname{\mathsf{E}}\Delta {X_{j+c}}\Delta {X_{j+d}}\\ {} & =\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{j+c}}\operatorname{\mathsf{E}}\Delta {X_{k+b}}\Delta {X_{j+d}}+\operatorname{\mathsf{E}}\Delta {X_{k+a}}\Delta {X_{j+d}}\operatorname{\mathsf{E}}\Delta {X_{j+c}}\Delta {X_{k+b}}\\ {} & =\operatorname{\mathbf{cov}}(\Delta {X_{k+a}},\Delta {X_{j+c}})\operatorname{\mathbf{cov}}(\Delta {X_{k+b}},\Delta {X_{j+d}})\\ {} & \hspace{1em}+\operatorname{\mathbf{cov}}(\Delta {X_{k+a}},\Delta {X_{j+d}})\operatorname{\mathbf{cov}}(\Delta {X_{k+b}},\Delta {X_{j+c}})\\ {} & =\tilde{\rho }(k-j+c-a)\tilde{\rho }(k-j+d-b)+\tilde{\rho }(k-j+c-b)\tilde{\rho }(k-j+d-a)\\ {} & ={\sum \limits_{\gamma =0}^{1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\tilde{\rho }(k-j+d-b+\gamma (b-a)).\end{aligned}\]
Therefore
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\eta _{N}})& ={\sum \limits_{a,b,c,d,\gamma =0}^{1}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{a,b,c,d,\gamma =0}^{1}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to {\sum \limits_{a,b,c,d,\gamma =0}^{1}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,c,d,\gamma =0}^{1}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[ \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\eta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(12\tilde{\rho }(i)+16\tilde{\rho }(i+1)+4\tilde{\rho }(i+2)\Big).\]
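The combinatorial simplifications here and below can be checked by brute-force enumeration of the offsets $d-c+a-b+2\gamma (b-a)$; a small Python sketch (the helper name is ours), which reproduces the coefficients of ${\widetilde{\Sigma }_{22}}$ and, with other window lengths, the remaining entries of $\widetilde{\Sigma }$:

```python
from collections import Counter

def lag_coefficients(I, J):
    """Coefficients of sum_i rho~(i) rho~(i+k) in the limiting covariance
    of I-step and J-step quadratic variations; lag -k is folded onto k,
    since the two-sided sum is shift-symmetric."""
    c = Counter()
    for a in range(I):
        for b in range(I):
            for cc in range(J):
                for d in range(J):
                    for g in (0, 1):
                        c[abs(d - cc + a - b + 2 * g * (b - a))] += 1
    return dict(sorted(c.items()))

print(lag_coefficients(2, 2))  # {0: 12, 1: 16, 2: 4}: matches lim N Var(eta_N)
print(lag_coefficients(2, 4))  # {0: 28, 1: 48, 2: 32, 3: 16, 4: 4}: cov(eta_N, zeta_N)
```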
1.4. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})$.
We rewrite $N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})$ in the form
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})& ={\sum \limits_{a,b=0}^{3}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)\\ {} & ={\sum \limits_{a,b=0}^{3}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & =2{\sum \limits_{a,b=0}^{3}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & \to 2{\sum \limits_{a,b=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(8\tilde{\rho }(i)+12\tilde{\rho }(i+1)+8\tilde{\rho }(i+2)+4\tilde{\rho }(i+3)\right),\end{aligned}\]
as $N\to \infty $, where the last series converges according to Lemma 3.
1.5. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{3}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})& ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to {\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(28\tilde{\rho }(i)+48\tilde{\rho }(i+1)\\ {} \displaystyle +32\tilde{\rho }(i+2)+16\tilde{\rho }(i+3)+4\tilde{\rho }(i+4)\Big).\end{array}\]
1.6. Evaluation of convergence limit for $N\operatorname{\mathbf{Var}}({\zeta _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{3}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\zeta _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\zeta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(88\tilde{\rho }(i)+160\tilde{\rho }(i+1)\\ {} \displaystyle +124\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)+40\tilde{\rho }(i+4)+16\tilde{\rho }(i+5)+4\tilde{\rho }(i+6)\Big).\end{array}\]
1.7. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{7}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})& ={\sum \limits_{a,b=0}^{7}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)\\ {} & ={\sum \limits_{a,b=0}^{7}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & =2{\sum \limits_{a,b=0}^{7}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & \to 2{\sum \limits_{a,b=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(16\tilde{\rho }(i)+28\tilde{\rho }(i+1)+24\tilde{\rho }(i+2)\\ {} \displaystyle +20\tilde{\rho }(i+3)+16\tilde{\rho }(i+4)+12\tilde{\rho }(i+5)+8\tilde{\rho }(i+6)+4\tilde{\rho }(i+7)\Big).\end{array}\]
1.8. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})& ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(60\tilde{\rho }(i)+112\tilde{\rho }(i+1)+96\tilde{\rho }(i+2)\\ {} \displaystyle +80\tilde{\rho }(i+3)+64\tilde{\rho }(i+4)+48\tilde{\rho }(i+5)+32\tilde{\rho }(i+6)+16\tilde{\rho }(i+7)+4\tilde{\rho }(i+8)\Big).\end{array}\]
1.9. Evaluation of convergence limit for $N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(216\tilde{\rho }(i)+416\tilde{\rho }(i+1)+376\tilde{\rho }(i+2)\\ {} \displaystyle +320\tilde{\rho }(i+3)+256\tilde{\rho }(i+4)+192\tilde{\rho }(i+5)+132\tilde{\rho }(i+6)\\ {} \displaystyle +80\tilde{\rho }(i+7)+40\tilde{\rho }(i+8)+16\tilde{\rho }(i+9)+4\tilde{\rho }(i+10)\Big).\end{array}\]
1.10. Evaluation of convergence limit for $N\operatorname{\mathbf{Var}}({\phi _{N}})$. We have:
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{7}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}\]
By Isserlis’ theorem,
\[\begin{aligned}{}N\operatorname{\mathbf{Var}}({\phi _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}\]
as $N\to \infty $. After simplifications, we arrive at
\[\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(688\tilde{\rho }(i)+1344\tilde{\rho }(i+1)+1260\tilde{\rho }(i+2)\\ {} \displaystyle +1136\tilde{\rho }(i+3)+984\tilde{\rho }(i+4)+816\tilde{\rho }(i+5)+644\tilde{\rho }(i+6)+480\tilde{\rho }(i+7)\\ {} \displaystyle +336\tilde{\rho }(i+8)+224\tilde{\rho }(i+9)+140\tilde{\rho }(i+10)+80\tilde{\rho }(i+11)+40\tilde{\rho }(i+12)\\ {} \displaystyle +16\tilde{\rho }(i+13)+4\tilde{\rho }(i+14)\Big).\end{array}\]
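The coefficient patterns in the limits of Parts 1.1–1.10 arise from a purely combinatorial step: after Isserlis' theorem, each limit is a sum of products $\tilde{\rho }(i)\tilde{\rho }(i+s)$, and the coefficients merely count how often each shift s occurs once negative shifts are folded onto positive ones via ${\textstyle\sum _{i}}\tilde{\rho }(i)\tilde{\rho }(i-s)={\textstyle\sum _{i}}\tilde{\rho }(i)\tilde{\rho }(i+s)$. The following Python sketch is an editorial verification aid, not part of the proof: it reproduces all the coefficient vectors by brute-force counting, with the window sets encoding how many consecutive increments enter each of the statistics ${\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}}$.

from collections import Counter

# windows of consecutive increments entering each statistic
W = {"xi": range(1), "eta": range(2), "zeta": range(4), "phi": range(8)}

def limit_coefficients(W1, W2):
    # Coefficients c_s in  lim N cov = sum_i rho~(i) sum_{s>=0} c_s rho~(i+s):
    # expand by Isserlis' theorem, re-index so the first factor is rho~(i),
    # and fold the resulting shift onto its absolute value.
    cnt = Counter()
    for a in W1:
        for b in W1:
            for c in W2:
                for d in W2:
                    # cov(dX_{k+a} dX_{k+b}, dX_{j+c} dX_{j+d})
                    #   = rho~(i+c-a) rho~(i+d-b) + rho~(i+d-a) rho~(i+c-b)
                    for p, q in ((c - a, d - b), (d - a, c - b)):
                        cnt[abs(q - p)] += 1
    return [cnt[s] for s in range(max(cnt) + 1)]

print(limit_coefficients(W["xi"], W["zeta"]))    # [8, 12, 8, 4]
print(limit_coefficients(W["eta"], W["zeta"]))   # [28, 48, 32, 16, 4]
print(limit_coefficients(W["zeta"], W["zeta"]))  # [88, 160, 124, 80, 40, 16, 4]
print(limit_coefficients(W["xi"], W["phi"]))     # [16, 28, 24, 20, 16, 12, 8, 4]
print(limit_coefficients(W["eta"], W["phi"]))    # [60, 112, 96, 80, 64, 48, 32, 16, 4]
print(limit_coefficients(W["zeta"], W["phi"]))   # [216, 416, 376, ..., 16, 4]
print(limit_coefficients(W["phi"], W["phi"]))    # [688, 1344, 1260, ..., 16, 4]

Running this reproduces, in particular, every coefficient vector appearing in the matrix $\widetilde{\Sigma }$ of Theorem 2.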
Part 2: Proof of asymptotic normality.
Let us define ${Y_{k}}={({Y_{k}^{(1)}},{Y_{k}^{(2)}},{Y_{k}^{(3)}},{Y_{k}^{(4)}})^{\top }}$ by
(36)
\[ \begin{array}{c}\displaystyle {Y_{k}^{(1)}}:=\Delta {X_{k}},\hspace{1em}{Y_{k}^{(2)}}:=\Delta {X_{k+1}},\hspace{1em}{Y_{k}^{(3)}}:=\Delta {X_{k+2}}+\Delta {X_{k+3}},\\ {} \displaystyle {Y_{k}^{(4)}}:=\Delta {X_{k+4}}+\Delta {X_{k+5}}+\Delta {X_{k+6}}+\Delta {X_{k+7}}.\end{array}\]
Then
\[\begin{array}{l}\displaystyle {\xi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}\right)^{2}},\hspace{2em}{\eta _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}\right)^{2}},\\ {} \displaystyle {\zeta _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}+{Y_{k}^{(3)}}\right)^{2}},\\ {} \displaystyle {\phi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}+{Y_{k}^{(3)}}+{Y_{k}^{(4)}}\right)^{2}}.\end{array}\]
We shall prove the convergence of the vector ${({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }}$ with the help of the Cramér–Wold device. Let the parameters $\alpha ,\beta ,\gamma ,\lambda \in \mathbb{R}$ be such that ${\alpha ^{2}}+{\beta ^{2}}+{\gamma ^{2}}+{\lambda ^{2}}\ne 0$. We introduce the function
\[\begin{array}{l}\displaystyle f(y)=\alpha {y_{1}^{2}}+\beta {({y_{1}}+{y_{2}})^{2}}+\gamma {({y_{1}}+{y_{2}}+{y_{3}})^{2}}+\lambda {({y_{1}}+{y_{2}}+{y_{3}}+{y_{4}})^{2}},\\ {} \displaystyle y={({y_{1}},{y_{2}},{y_{3}},{y_{4}})^{\top }}\in {\mathbb{R}^{4}},\end{array}\]
so that
\[ \alpha {\xi _{N}}+\beta {\eta _{N}}+\gamma {\zeta _{N}}+\lambda {\phi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}f({Y_{k}}).\]
Thus, we need to prove that the sequence
(37)
\[\begin{array}{cc}& \displaystyle \sqrt{N}\Big(\alpha ({\xi _{N}}-\operatorname{\mathsf{E}}{\xi _{0}})+\beta ({\eta _{N}}-\operatorname{\mathsf{E}}{\eta _{0}})+\gamma ({\zeta _{N}}-\operatorname{\mathsf{E}}{\zeta _{0}})+\lambda ({\phi _{N}}-\operatorname{\mathsf{E}}{\phi _{0}})\Big)\\ {} & \displaystyle =\frac{1}{\sqrt{N}}{\sum \limits_{k=0}^{N-1}}\Big(f({Y_{k}})-\operatorname{\mathsf{E}}f({Y_{k}})\Big)\end{array}\]
converges to a normal distribution. This fact can be established by application of the Breuer–Major theorem for stationary vectors [1, Theorem 4]. In order to apply this theorem, it suffices to verify the following condition:
(38)
\[ \sum \limits_{j\in \mathbb{Z}}{\left|{r^{(p,l)}}(j)\right|^{q}}\lt \infty ,\hspace{1em}\text{for all}\hspace{2.5pt}p,l\in \{1,2,3,4\},\]
where
\[ {r^{(p,l)}}(k):=\operatorname{\mathbf{cov}}\left({Y_{1}^{(p)}},{Y_{1+k}^{(l)}}\right),\hspace{1em}k\in \mathbb{Z},\]
and q is the Hermite rank of f with respect to ${Y_{1}}$ (see footnote 2).
It is not hard to see that this Hermite rank satisfies $q\ge 2$. Indeed, $f({Y_{1}})$ is a second-order polynomial in the zero-mean jointly normal random variables ${Y_{1}^{(1)}}$, ${Y_{1}^{(2)}}$, ${Y_{1}^{(3)}}$, ${Y_{1}^{(4)}}$ containing only terms of the second order. Hence the expected value of the product of $f({Y_{1}})$ and any of ${Y_{1}^{(t)}}$, $t\in \{1,2,3,4\}$, is a sum of terms of the form $\operatorname{\mathsf{E}}{G_{1}}{G_{2}}{G_{3}}$, where ${({G_{1}},{G_{2}},{G_{3}})^{\top }}$ is a zero-mean multivariate normal random vector, and every such term vanishes by Isserlis' theorem. Therefore, the expected value of the product of $f({Y_{1}})$ and any first-order polynomial of ${Y_{1}}$ (a linear combination of ${Y_{1}^{(1)}}$, ${Y_{1}^{(2)}}$, ${Y_{1}^{(3)}}$, ${Y_{1}^{(4)}}$) equals zero, and so the Hermite rank q of the function f with respect to ${Y_{1}}$ cannot be equal to 1.
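This orthogonality is easy to confirm numerically. In the following Monte Carlo sketch the covariance matrix of ${Y_{1}}$ and the Cramér–Wold weights are arbitrary illustrative choices, not quantities taken from the model.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = A @ A.T + np.eye(4)                 # an arbitrary covariance matrix for Y_1
Y = rng.multivariate_normal(np.zeros(4), C, size=10**6)

alpha, beta, gamma, lam = 1.0, -2.0, 0.5, 3.0
f = (alpha * Y[:, 0]**2 + beta * Y[:, :2].sum(1)**2
     + gamma * Y[:, :3].sum(1)**2 + lam * Y.sum(1)**2)

# E[(f(Y_1) - E f(Y_1)) Y_1^{(t)}] = 0 for every t, so the Hermite rank is >= 2
for t in range(4):
    print(np.mean((f - f.mean()) * Y[:, t]))    # each ~ 0 up to Monte Carlo error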
Since $q\ge 2$, we easily see that in order to verify the condition (38), it suffices to prove that
(39)
\[ \sum \limits_{j\in \mathbb{Z}}{\left|{r^{(p,l)}}(j)\right|^{2}}\lt \infty ,\hspace{1em}\text{for all}\hspace{2.5pt}p,l\in \{1,2,3,4\}.\]
In turn, (39) follows from Lemma 1, since using (36) and the definition of $\tilde{\rho }$, we can represent ${r^{(p,l)}}$ via $\tilde{\rho }$ as follows:
\[\begin{array}{l}\displaystyle {r^{(1,1)}}(k)={r^{(2,2)}}(k)=\tilde{\rho }(k),\hspace{1em}{r^{(1,2)}}(k)=\tilde{\rho }(k+1),\hspace{1em}{r^{(2,1)}}(k)=\tilde{\rho }(k-1),\\ {} \displaystyle {r^{(1,3)}}(k)=\tilde{\rho }(k+2)+\tilde{\rho }(k+3),\hspace{1em}{r^{(3,1)}}(k)=\tilde{\rho }(k-2)+\tilde{\rho }(k-3),\\ {} \displaystyle {r^{(2,3)}}(k)=\tilde{\rho }(k+1)+\tilde{\rho }(k+2),\hspace{1em}{r^{(3,2)}}(k)=\tilde{\rho }(k-1)+\tilde{\rho }(k-2),\\ {} \displaystyle {r^{(3,3)}}(k)=\tilde{\rho }(k+1)+2\tilde{\rho }(k)+\tilde{\rho }(k-1),\\ {} \displaystyle {r^{(1,4)}}(k)=\tilde{\rho }(k+4)+\tilde{\rho }(k+5)+\tilde{\rho }(k+6)+\tilde{\rho }(k+7),\\ {} \displaystyle {r^{(4,1)}}(k)=\tilde{\rho }(k-4)+\tilde{\rho }(k-5)+\tilde{\rho }(k-6)+\tilde{\rho }(k-7),\\ {} \displaystyle {r^{(2,4)}}(k)=\tilde{\rho }(k+3)+\tilde{\rho }(k+4)+\tilde{\rho }(k+5)+\tilde{\rho }(k+6),\\ {} \displaystyle {r^{(4,2)}}(k)=\tilde{\rho }(k-3)+\tilde{\rho }(k-4)+\tilde{\rho }(k-5)+\tilde{\rho }(k-6),\\ {} \displaystyle {r^{(3,4)}}(k)=\tilde{\rho }(k+1)+2\tilde{\rho }(k+2)+2\tilde{\rho }(k+3)+2\tilde{\rho }(k+4)+\tilde{\rho }(k+5),\\ {} \displaystyle {r^{(4,3)}}(k)=\tilde{\rho }(k-1)+2\tilde{\rho }(k-2)+2\tilde{\rho }(k-3)+2\tilde{\rho }(k-4)+\tilde{\rho }(k-5),\\ {} \displaystyle {r^{(4,4)}}(k)=\tilde{\rho }(k-3)+2\tilde{\rho }(k-2)+3\tilde{\rho }(k-1)+4\tilde{\rho }(k)\\ {} \displaystyle \hspace{2em}+3\tilde{\rho }(k+1)+2\tilde{\rho }(k+2)+\tilde{\rho }(k+3).\end{array}\]
Indeed, for example, the series
\[\begin{aligned}{}& {\sum \limits_{k=-\infty }^{+\infty }}{\left({r^{(1,3)}}(k)\right)^{2}}={\sum \limits_{k=-\infty }^{+\infty }}{(\tilde{\rho }(k+2)+\tilde{\rho }(k+3))^{2}}\\ {} & ={\sum \limits_{k=-\infty }^{+\infty }}\tilde{\rho }{(k+2)^{2}}+2{\sum \limits_{k=-\infty }^{+\infty }}\tilde{\rho }(k+2)\tilde{\rho }(k+3)+{\sum \limits_{k=-\infty }^{+\infty }}\tilde{\rho }{(k+3)^{2}},\end{aligned}\]
converges as a finite sum of convergent series, by Lemma 1. The remaining series in (39) are finite by the same argument.
Therefore, the assumptions of the Breuer–Major theorem for stationary vectors are satisfied, whence the desired weak convergence of (37) to a zero-mean normal distribution follows.  □
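The representations of ${r^{(p,l)}}$ above can also be verified mechanically. The sketch below assumes that $\tilde{\rho }$ has the standard mixed-fBm increment form $\tilde{\rho }(i)={\kappa ^{2}}{h^{2{H_{1}}}}{\rho _{{H_{1}}}}(i)+{\sigma ^{2}}{h^{2{H_{2}}}}{\rho _{{H_{2}}}}(i)$, where ${\rho _{H}}(i)=\frac{1}{2}(|i+1{|^{2H}}-2|i{|^{2H}}+|i-1{|^{2H}})$ is the autocovariance of unit-step fBm increments; the parameter values are arbitrary, and the check is an editorial aid.

import numpy as np

def rho_fbm(i, H):
    # autocovariance of unit-step increments of a standard fBm
    i = abs(i)
    return 0.5 * ((i + 1)**(2*H) - 2 * i**(2*H) + abs(i - 1)**(2*H))

def rho_tilde(i, H1=0.2, H2=0.6, kappa2=0.7, sigma2=1.3, h=1.0):
    # assumed increment autocovariance of X at step h
    return kappa2 * h**(2*H1) * rho_fbm(i, H1) + sigma2 * h**(2*H2) * rho_fbm(i, H2)

blocks = [(0,), (1,), (2, 3), (4, 5, 6, 7)]   # increment offsets of Y^(1..4), see (36)

def r(p, l, k):
    # r^(p,l)(k) = cov(Y_1^(p), Y_{1+k}^(l)), expanded increment by increment
    return sum(rho_tilde(k + v - u) for u in blocks[p - 1] for v in blocks[l - 1])

# spot check, e.g. the representation of r^(3,4)(k)
for k in range(-5, 6):
    expected = (rho_tilde(k + 1) + 2*rho_tilde(k + 2) + 2*rho_tilde(k + 3)
                + 2*rho_tilde(k + 4) + rho_tilde(k + 5))
    assert abs(r(3, 4, k) - expected) < 1e-12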

5.5 Proof of Theorem 3

In order to prove Theorem 3, we apply the delta method (see, e.g., [13, Theorem B.6]) to the function g constructed in Theorem 1 and to the sequence ${\tau _{N}}$, which is asymptotically normal by Theorem 2, in combination with the inverse function theorem. To apply the delta method we need the derivative matrix ${g^{\prime }}$ together with a proof that it is nonsingular. Since g has a complicated structure, we instead compute the derivative matrix ${f^{\prime }}$ of the function f defined in (5) and apply the inverse function theorem at the point ϑ: g is the inverse function of f, hence ${g^{\prime }}={({f^{\prime }})^{-1}}$. Therefore, it suffices to find the matrix ${f^{\prime }}(\vartheta )$ and to show that it is nonsingular.
We start by evaluating the partial derivatives of the function ${f_{1}}$. Since
\[ {f_{1}}({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})={\kappa ^{2}}{h^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}},\]
we see that
\[\begin{array}{l}\displaystyle \frac{\partial {f_{1}}}{\partial {H_{1}}}=2\log h{\kappa ^{2}}{h^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{1}}}{\partial {H_{2}}}=2\log h{\sigma ^{2}}{h^{2{H_{2}}}},\\ {} \displaystyle \frac{\partial {f_{1}}}{\partial {\kappa ^{2}}}={h^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{1}}}{\partial {\sigma ^{2}}}={h^{2{H_{2}}}}.\end{array}\]
For the function
\[\begin{aligned}{}{f_{2}}({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})& ={\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}}\\ {} & ={\kappa ^{2}}{e^{2{H_{1}}\log (2h)}}+{\sigma ^{2}}{e^{2{H_{2}}\log (2h)}},\end{aligned}\]
we get
\[\begin{array}{l}\displaystyle \frac{\partial {f_{2}}}{\partial {H_{1}}}=2\log (2h){\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{2}}}{\partial {H_{2}}}=2\log (2h){\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}},\\ {} \displaystyle \frac{\partial {f_{2}}}{\partial {\kappa ^{2}}}={h^{2{H_{1}}}}{2^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{2}}}{\partial {\sigma ^{2}}}={h^{2{H_{2}}}}{2^{2{H_{2}}}}.\end{array}\]
Similarly, one can evaluate partial derivatives for the functions ${f_{3}}$ and ${f_{4}}$. We arrive at the following derivative matrix:
\[ {f^{\prime }}(\vartheta )=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}2\log (h){\kappa ^{2}}{h^{2{H_{1}}}}& 2\log (h){\sigma ^{2}}{h^{2{H_{2}}}}& {h^{2{H_{1}}}}& {h^{2{H_{2}}}}\\ {} 2\log (2h){\kappa ^{2}}{(2h)^{2{H_{1}}}}& 2\log (2h){\sigma ^{2}}{(2h)^{2{H_{2}}}}& {(2h)^{2{H_{1}}}}& {(2h)^{2{H_{2}}}}\\ {} 2\log (4h){\kappa ^{2}}{(4h)^{2{H_{1}}}}& 2\log (4h){\sigma ^{2}}{(4h)^{2{H_{2}}}}& {(4h)^{2{H_{1}}}}& {(4h)^{2{H_{2}}}}\\ {} 2\log (8h){\kappa ^{2}}{(8h)^{2{H_{1}}}}& 2\log (8h){\sigma ^{2}}{(8h)^{2{H_{2}}}}& {(8h)^{2{H_{1}}}}& {(8h)^{2{H_{2}}}}\end{array}\right).\]
Next, we compute the determinant of this matrix in order to verify that it is nonsingular. We have
\[\begin{aligned}{}\det ({f^{\prime }}(\vartheta ))& =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\log (h)& \log (h)& 1& 1\\ {} \log (2h){2^{2{H_{1}}}}& \log (2h){2^{2{H_{2}}}}& {2^{2{H_{1}}}}& {2^{2{H_{2}}}}\\ {} \log (4h){4^{2{H_{1}}}}& \log (4h){4^{2{H_{2}}}}& {4^{2{H_{1}}}}& {4^{2{H_{2}}}}\\ {} \log (8h){8^{2{H_{1}}}}& \log (8h){8^{2{H_{2}}}}& {8^{2{H_{1}}}}& {8^{2{H_{2}}}}\end{array}\right|\\ {} & =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{(\log 2)^{2}}\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}0& 0& 1& 1\\ {} {2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}& {2^{6{H_{2}}}}\end{array}\right|\\ {} & =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{(\log 2)^{2}}\left(\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{2}}}}\end{array}\right|\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\left.\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}\end{array}\right|\right).\end{aligned}\]
Considering that
\[ \left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{2}}}}\end{array}\right|=-{2^{2{H_{1}}}}{2^{10{H_{2}}}}+4\cdot {2^{4{H_{1}}}}{2^{8{H_{2}}}}-3\cdot {2^{6{H_{1}}}}{2^{6{H_{2}}}},\]
and
\[ \left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}\end{array}\right|={2^{10{H_{1}}}}{2^{2{H_{2}}}}+3\cdot {2^{6{H_{1}}}}{2^{6{H_{2}}}}-4\cdot {2^{8{H_{1}}}}{2^{4{H_{2}}}},\]
we get
\[ \det ({f^{\prime }}(\vartheta ))=-4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{2^{2{H_{1}}+2{H_{2}}}}{(\log 2)^{2}}{\left({2^{2{H_{2}}}}-{2^{2{H_{1}}}}\right)^{4}}\ne 0,\]
since ${H_{1}}\lt {H_{2}}$.
Therefore, the derivative matrix ${f^{\prime }}(\vartheta )$ is nonsingular, and by the inverse function theorem the derivative matrix of the inverse function, ${g^{\prime }}({\tau _{0}})={({f^{\prime }}(\vartheta ))^{-1}}$, is nonsingular as well. Thus the delta method applies; it yields the statement of the theorem and provides the formula for the asymptotic covariance matrix ${\Sigma ^{0}}$ in terms of the inverse derivative matrix ${({f^{\prime }}(\vartheta ))^{-1}}$.  □
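As a numerical cross-check of the determinant formula, one can compare it with a finite-difference Jacobian of f. The parameter values below are hypothetical, and the check is an editorial aid rather than part of the proof.

import numpy as np

h = 1.5
H1, H2, kappa2, sigma2 = 0.2, 0.6, 0.7, 1.3   # illustrative parameter values

def f(theta):
    H1, H2, kappa2, sigma2 = theta
    # f_m = kappa^2 (2^m h)^{2 H1} + sigma^2 (2^m h)^{2 H2},  m = 0, 1, 2, 3
    return np.array([kappa2 * (2**m * h)**(2*H1) + sigma2 * (2**m * h)**(2*H2)
                     for m in range(4)])

theta = np.array([H1, H2, kappa2, sigma2])
eps = 1e-6
J = np.column_stack([(f(theta + eps*e) - f(theta - eps*e)) / (2*eps)
                     for e in np.eye(4)])     # Jacobian by central differences

closed_form = (-4 * kappa2 * sigma2 * h**(4*H1 + 4*H2) * 2**(2*H1 + 2*H2)
               * np.log(2)**2 * (2**(2*H2) - 2**(2*H1))**4)
print(np.linalg.det(J), closed_form)          # agree up to discretization error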

Footnotes

1 Isserlis’ theorem [12]: if $({X_{1}},{X_{2}},{X_{3}},{X_{4}})$ is a zero-mean multivariate normal random vector, then $\operatorname{\mathsf{E}}({X_{1}}{X_{2}}{X_{3}}{X_{4}})=\operatorname{\mathsf{E}}{X_{1}}{X_{2}}\operatorname{\mathsf{E}}{X_{3}}{X_{4}}+\operatorname{\mathsf{E}}{X_{1}}{X_{3}}\operatorname{\mathsf{E}}{X_{2}}{X_{4}}+\operatorname{\mathsf{E}}{X_{1}}{X_{4}}\operatorname{\mathsf{E}}{X_{2}}{X_{3}}$. In particular, $\operatorname{\mathbf{cov}}({X_{1}^{2}},{X_{2}^{2}})=2\operatorname{\mathbf{cov}}{({X_{1}},{X_{2}})^{2}}$.
2 The function $f:{\mathbb{R}^{d}}\to \mathbb{R}$ is said to have Hermite rank equal to q with respect to a random vector X if (a) $\operatorname{\mathsf{E}}[(f(X)-\operatorname{\mathsf{E}}[f(X)]){p_{m}}(X)]=0$ for every polynomial ${p_{m}}$ (on ${\mathbb{R}^{d}}$) of degree $m\le q-1$; and (b) there exists a polynomial ${p_{q}}$ of degree q such that $\operatorname{\mathsf{E}}[(f(X)-\operatorname{\mathsf{E}}[f(X)]){p_{q}}(X)]\ne 0$, see [23].

References

[1] Arcones, M.A.: Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Ann. Probab. 22(4), 2242–2274 (1994). MR1331224
[2] Cai, C., Chigansky, P., Kleptsyna, M.: Mixed Gaussian processes: a filtering approach. Ann. Probab. 44(4), 3032–3075 (2016). MR3531685. https://doi.org/10.1214/15-AOP1041
[3] Cheridito, P.: Mixed fractional Brownian motion. Bernoulli 7(6), 913–934 (2001). MR1873835. https://doi.org/10.2307/3318626
[4] Dai, Q., Singleton, K.J.: Specification analysis of affine term structure models. J. Finance 55, 1943–1978 (2000)
[5] Ding, Z., Granger, C.W.J., Engle, R.F.: A long memory property of stock market returns and a new model. J. Empir. Finance 1(1), 83–106 (1993)
[6] Dozzi, M., Mishura, Y., Shevchenko, G.: Asymptotic behavior of mixed power variations and statistical estimation in mixed models. Stat. Inference Stoch. Process. 18(2), 151–175 (2015). MR3348583. https://doi.org/10.1007/s11203-014-9106-5
[7] Dufitinema, J., Pynnönen, S., Sottinen, T.: Maximum likelihood estimators from discrete data modeled by mixed fractional Brownian motion with application to the Nordic stock markets. Commun. Stat., Simul. Comput. 51(9), 5264–5287 (2022). MR4491681. https://doi.org/10.1080/03610918.2020.1764581
[8] El-Nouty, C.: The fractional mixed fractional Brownian motion. Stat. Probab. Lett. 65(2), 111–120 (2003). MR2017255
[9] Filatova, D.: Mixed fractional Brownian motion: some related questions for computer network traffic modeling. In: 2008 International Conference on Signals and Electronic Systems, pp. 393–396 (2008)
[10] He, X., Chen, W.: The pricing of credit default swaps under a generalized mixed fractional Brownian motion. Physica A 404, 26–33 (2014). MR3188756
[11] Henry, O.T.: Long memory in stock returns: some international evidence. Appl. Financ. Econ. 12(10), 725–729 (2002)
[12] Isserlis, L.: On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika 12(1/2), 134–139 (1918)
[13] Kubilius, K., Mishura, Y., Ralchenko, K.: Parameter Estimation in Fractional Diffusion Models. Bocconi & Springer Series, vol. 8, p. 390. Bocconi University Press & Springer (2017). MR3752152. https://doi.org/10.1007/978-3-319-71030-3
[14] Kukush, A., Lohvinenko, S., Mishura, Y., Ralchenko, K.: Two approaches to consistent estimation of parameters of mixed fractional Brownian motion with trend. Stat. Inference Stoch. Process. 25(1), 159–187 (2022). MR4419677. https://doi.org/10.1007/s11203-021-09252-6
[15] Major, P.: Non-central limit theorem for non-linear functionals of vector valued Gaussian stationary random fields. arXiv:1901.04086 [math.PR] (2019)
[16] Miao, Y., Ren, W., Ren, Z.: On the fractional mixed fractional Brownian motion. Appl. Math. Sci. (Ruse) 2(33–36), 1729–1738 (2008). MR2443901
[17] Mishura, Y., Ralchenko, K., Shklyar, S.: Maximum likelihood drift estimation for Gaussian process with stationary increments. Austrian J. Stat. 46(3–4), 67–78 (2017)
[18] Mishura, Y.: Maximum likelihood drift estimation for the mixing of two fractional Brownian motions. In: Trends in Mathematics, pp. 263–280. Springer (2016). MR3708386. https://doi.org/10.1007/978-3-319-07245-6_14
[19] Mishura, Y., Voronov, I.: Construction of maximum likelihood estimator in the mixed fractional–fractional Brownian motion model with double long-range dependence. Mod. Stoch. Theory Appl. 2(2), 147–164 (2015). MR3389587. https://doi.org/10.15559/15-vmsta28
[20] Mishura, Y., Ralchenko, K., Shklyar, S.: Parameter estimation for Gaussian processes with application to the model with two independent fractional Brownian motions. In: Stochastic Processes and Applications. Springer Proc. Math. Stat., vol. 271, pp. 123–146. Springer (2018). MR3903341
[21] Mishura, Y., Ralchenko, K., Zhelezniak, H.: Numerical approach to the drift parameter estimation in the model with two fractional Brownian motions. Commun. Stat., Simul. Comput. 1–15 (2022)
[22] Nourdin, I.: Selected Aspects of Fractional Brownian Motion. Springer (2012). MR3076266. https://doi.org/10.1007/978-88-470-2823-4
[23] Nourdin, I., Peccati, G., Podolskij, M.: Quantitative Breuer–Major theorems. Stoch. Process. Appl. 121(4), 793–812 (2011). MR2770907
[24] Paxson, V., Floyd, S.: Wide area traffic: the failure of Poisson modeling. IEEE/ACM Trans. Netw. 3(3), 226–244 (1995)
[25] Rostek, S., Schöbel, R.: A note on the use of fractional Brownian motion for financial modeling. Econ. Model. 30, 30–35 (2013)
[26] Sadique, S., Silvapulle, P.: Long-term memory in stock market returns: international evidence. Int. J. Financ. Econ. 6(1), 59–67 (2001)
[27] Sun, L.: Pricing currency options in the mixed fractional Brownian motion. Physica A 392(16), 3441–3458 (2013). MR3069168
[28] Thäle, C.: Further remarks on mixed fractional Brownian motion. Appl. Math. Sci. 38, 1885–1901 (2009)
[29] Xiao, W.-L., Zhang, W.-G., Zhang, X.-L.: Maximum-likelihood estimators in the mixed fractional Brownian motion. Statistics 45(1), 73–85 (2011). MR2772157
[30] Xiao, W.-L., Zhang, W.-G., Zhang, X., Zhang, X.: Pricing model for equity warrants in a mixed fractional Brownian environment and its algorithm. Physica A 391(24), 6418–6431 (2012). MR2972795
[31] Zhang, P., Sun, Q., Xiao, W.-L.: Parameter identification in mixed Brownian–fractional Brownian motions using Powell's optimization algorithm. Econ. Model. 40, 314–319 (2014)
[32] Zhang, W.-G., Xiao, W.-L., He, C.-X.: Equity warrants pricing model under fractional Brownian motion and an empirical study. Expert Syst. Appl. 36(2), 3056–3065 (2009)
[33] Zili, M.: On the mixed fractional Brownian motion. J. Appl. Math. Stoch. Anal. 2006, Art. ID 32435 (2006), 9 pp. MR2253522. https://doi.org/10.1155/JAMSA/2006/32435

Copyright
© 2024 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Fractional Brownian motion, mixed model, strong consistency, ergodic theorem, asymptotic normality

MSC2010
60G22, 62F10, 62F12

Table 1.
The estimator ${\widehat{H}_{N}^{(1)}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean 0.0722 0.0434 −0.0230 −0.1009 −0.0844 0.0482 0.0842
S.dev. 0.5041 0.4727 0.5063 0.5160 0.4459 0.1629 0.0791
ASD 0.7648 0.7039 1.1629 0.3871 0.2011 0.1239 0.0812
CP% 100.00 100.00 100.00 92.18 75.61 75.58 87.91
0.1 0.5 Mean 0.0184 −0.0455 −0.0009 0.0674 0.0953 0.0988 0.0991
S.dev. 0.5618 0.5945 0.3733 0.1710 0.0824 0.0414 0.0205
ASD 1.1951 0.5894 0.2696 0.1457 0.0829 0.0439 0.0222
CP% 100.00 100.00 99.30 89.82 93.76 96.85 96.50
0.1 0.7 Mean −0.0408 −0.0071 0.0773 0.0945 0.1002 0.0998 0.1005
S.dev. 0.5931 0.3828 0.1513 0.0715 0.0342 0.0179 0.0090
ASD 0.7428 0.3223 0.1516 0.0742 0.0375 0.0188 0.0094
CP% 100.00 100.00 100.00 99.30 97.09 95.90 95.40
0.1 0.9 Mean 0.0131 0.0725 0.0886 0.0956 0.0972 0.0986 0.0993
S.dev. 0.3842 0.1554 0.0707 0.0350 0.0187 0.0113 0.0075
0.3 0.5 Mean 0.2096 0.1797 0.1161 0.0665 0.1731 0.2658 0.2901
S.dev. 0.5181 0.5738 0.4782 0.5254 0.3445 0.1209 0.0564
ASD 1.1101 1.0150 0.7507 0.4497 0.2337 0.1124 0.0570
CP% 100.00 100.00 100.00 100.00 92.01 87.11 90.80
0.3 0.7 Mean 0.1748 0.1524 0.2330 0.2764 0.2931 0.2989 0.3001
S.dev. 0.5138 0.4661 0.2630 0.1165 0.0563 0.0263 0.0137
ASD 1.4395 0.5732 0.2682 0.1248 0.0566 0.0277 0.0138
CP% 100.00 100.00 100.00 100.00 98.32 96.30 95.60
0.3 0.9 Mean 0.1863 0.2636 0.2863 0.2934 0.2969 0.2983 0.2988
S.dev. 0.4268 0.1859 0.0824 0.0423 0.0211 0.0124 0.0078
0.5 0.7 Mean 0.4213 0.3300 0.2774 0.3367 0.4563 0.4859 0.4974
S.dev. 0.6302 0.6258 0.5085 0.3908 0.1461 0.0763 0.0367
ASD 2.0105 1.2367 0.8386 0.4502 0.1964 0.0830 0.0366
CP% 100.00 100.00 100.00 100.00 99.84 99.09 98.09
0.5 0.9 Mean 0.3188 0.3701 0.4577 0.4878 0.4943 0.4975 0.4986
S.dev. 0.5170 0.4018 0.1477 0.0627 0.0299 0.0162 0.0095
0.7 0.9 Mean 0.5487 0.4918 0.5172 0.5854 0.6706 0.6898 0.6957
S.dev. 0.6543 0.6735 0.4569 0.2908 0.0914 0.0396 0.0193
Table 2.
The estimator ${\widehat{H}_{N}^{(2)}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean 0.2296 0.2845 0.3834 0.5157 0.4571 0.3634 0.3182
S.dev. 0.5205 0.6239 0.5643 0.5944 0.3578 0.1543 0.0594
ASD 2.2888 2.0419 1.4188 0.8483 0.4334 0.1937 0.0791
CP% 100.00 100.00 100.00 100.00 100.00 100.00 99.88
0.1 0.5 Mean 0.5437 0.6554 0.6072 0.5302 0.5093 0.5024 0.5003
S.dev. 0.6078 0.5234 0.2668 0.0985 0.0411 0.0193 0.0094
ASD 0.9381 0.5332 0.2632 0.1158 0.0481 0.0218 0.0107
CP% 100.00 100.00 100.00 100.00 99.66 98.37 96.80
0.1 0.7 Mean 0.7912 0.7303 0.7069 0.7018 0.7006 0.7000 0.7001
S.dev. 0.4123 0.1341 0.0554 0.0254 0.0126 0.0066 0.0032
ASD 0.3193 0.1257 0.0557 0.0265 0.0139 0.0064 0.0032
CP% 100.00 100.00 100.00 99.77 95.98 94.50 94.70
0.1 0.9 Mean 0.8818 0.8825 0.8882 0.8930 0.8959 0.8979 0.8991
S.dev. 0.1599 0.0601 0.0381 0.0272 0.0201 0.0151 0.0121
0.3 0.5 Mean 0.4179 0.4999 0.5838 0.6819 0.5997 0.5329 0.5097
S.dev. 0.6199 0.6409 0.5791 0.4784 0.2349 0.0921 0.0418
ASD 0.9471 0.7822 0.6198 0.3870 0.1737 0.0861 0.0412
CP% 100.00 100.00 100.00 100.00 97.81 90.36 90.90
0.3 0.7 Mean 0.7636 0.8383 0.7481 0.7094 0.7023 0.7007 0.7004
S.dev. 0.5831 0.4365 0.1412 0.0574 0.0271 0.0128 0.0066
ASD 0.4241 0.1793 0.0914 0.0466 0.0256 0.0130 0.0065
CP% 100.00 100.00 99.38 90.74 93.60 95.30 94.00
0.3 0.9 Mean 0.9267 0.8926 0.8898 0.8936 0.8954 0.8975 0.8981
S.dev. 0.2765 0.0827 0.0440 0.0301 0.0213 0.0166 0.0130
0.5 0.7 Mean 0.5823 0.6867 0.8179 0.8324 0.7494 0.7181 0.7056
S.dev. 0.6097 0.6288 0.5302 0.3556 0.1229 0.0581 0.0265
ASD 0.5033 0.2277 0.1997 0.1055 0.0672 0.0422 0.0245
CP% 100.00 100.00 99.54 82.13 78.62 85.38 92.58
0.5 0.9 Mean 0.8978 0.9350 0.8980 0.8936 0.8939 0.8965 0.8978
S.dev. 0.5267 0.2147 0.0793 0.0413 0.0274 0.0198 0.0150
0.7 0.9 Mean 0.7878 0.8862 0.9672 0.9309 0.9021 0.8963 0.8965
S.dev. 0.6431 0.5628 0.4037 0.1334 0.0578 0.0316 0.0213
Table 3.
The estimator ${\hat{\kappa }_{N}^{2}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean NaN NaN NaN Inf 1.0591 1.0732 1.0460
S.dev. NaN NaN NaN NaN 0.7876 0.6474 0.4604
ASD 4.3622 4.9405 5.0730 2.8475 1.5443 0.9266 0.6165
CP% 100.00 100.00 93.08 80.66 70.73 74.27 89.53
0.1 0.5 Mean NaN Inf 1.1104 1.0594 1.0340 1.0098 1.0006
S.dev. NaN NaN 0.6161 0.4314 0.2528 0.1333 0.0672
ASD 4.4678 2.4422 1.0822 0.5453 0.2928 0.1484 0.0743
CP% 100.00 100.00 99.30 92.92 96.15 97.97 96.20
0.1 0.7 Mean Inf 1.0400 1.0183 1.0054 1.0035 1.0004 1.0010
S.dev. NaN 0.4440 0.2538 0.1260 0.0623 0.0330 0.0164
ASD 1.8115 0.6744 0.2911 0.1395 0.0693 0.0346 0.0173
CP% 100.00 100.00 100.00 100.00 97.79 95.70 95.70
0.1 0.9 Mean 1.0054 0.9872 0.9878 0.9922 0.9950 0.9972 0.9987
S.dev. 0.3366 0.1663 0.0823 0.0437 0.0264 0.0177 0.0132
0.3 0.5 Mean NaN NaN Inf 1.0451 1.0379 1.0466 1.0232
S.dev. NaN NaN NaN 0.8444 0.7341 0.5658 0.3768
ASD 4.8428 4.2202 3.2605 2.1396 1.0325 0.6043 0.3862
CP% 100.00 100.00 100.00 96.51 83.23 77.25 84.00
0.3 0.7 Mean NaN 1.1084 1.0696 1.0255 1.0106 1.0026 1.0017
S.dev. NaN NaN 0.5169 0.3305 0.1797 0.0877 0.0455
ASD 3.5137 1.3194 0.6535 0.3286 0.1770 0.0917 0.0461
CP% 100.00 100.00 99.79 90.74 94.96 95.70 94.60
0.3 0.9 Mean Inf 0.9970 0.9867 0.9887 0.9923 0.9955 0.9968
S.dev. NaN 0.2908 0.1463 0.0802 0.0436 0.0285 0.0201
0.5 0.7 Mean NaN NaN Inf 1.0557 1.0674 1.0502 1.0270
S.dev. NaN NaN NaN 0.7737 0.6163 0.4567 0.2703
ASD 4.6051 2.2181 2.1464 1.2319 0.7646 0.4713 0.2722
CP% 100.00 100.00 92.17 77.05 78.14 86.29 93.96
0.5 0.9 Mean NaN 1.0008 0.9750 0.9800 0.9814 0.9905 0.9940
S.dev. NaN 0.5558 0.3666 0.1997 0.1057 0.0621 0.0402
0.7 0.9 Mean NaN NaN −Inf 0.9524 0.9568 0.9572 0.9690
S.dev. NaN NaN NaN 0.6566 0.4605 0.2777 0.1643
Table 4.
The estimator ${\hat{\sigma }_{N}^{2}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 Mean NaN NaN NaN Inf 0.9764 0.9499 0.9781
S.dev. NaN NaN NaN NaN 0.7882 0.6480 0.4606
ASD 4.3572 4.9387 5.0724 2.8472 1.5442 0.9265 0.6165
CP% 100.00 100.00 93.08 80.25 70.95 74.27 89.53
0.1 0.5 Mean NaN −Inf 0.8916 0.9408 0.9656 0.9902 0.9995
S.dev. NaN NaN 0.6168 0.4320 0.2527 0.1334 0.0672
ASD 4.4647 2.4412 1.0818 0.5452 0.2928 0.1484 0.0748
CP% 100.00 100.00 99.54 93.07 96.15 97.87 96.40
0.1 0.7 Mean −Inf 0.9609 0.9795 0.9941 0.9959 0.9994 0.9989
S.dev. NaN 0.4474 0.2513 0.1245 0.0617 0.0326 0.0162
ASD 1.8071 0.6716 0.2897 0.1388 0.0690 0.0344 0.0172
CP% 100.00 100.00 100.00 100.00 97.49 95.90 96.20
0.1 0.9 Mean 1.0170 1.0253 1.0273 1.0157 1.0099 1.0064 1.0067
S.dev. 0.5639 0.3708 0.2810 0.2096 0.1452 0.1039 0.0784
0.3 0.5 Mean NaN NaN −Inf 0.9554 0.9623 0.9536 0.9769
S.dev. NaN NaN NaN 0.8435 0.7340 0.5662 0.3768
ASD 4.8407 4.2199 3.2605 2.1396 1.0325 0.6043 0.3862
CP% 100.00 100.00 100.00 96.80 83.23 77.03 84.00
0.3 0.7 Mean NaN −Inf 0.9327 0.9886 0.9978 0.9984 0.9982
S.dev. NaN NaN 0.5156 0.3295 0.1787 0.0872 0.0453
ASD 3.5101 1.3180 0.6527 0.3283 0.1768 0.0915 0.0460
CP% 100.00 100.00 100.00 90.61 95.07 95.50 94.80
0.3 0.9 Mean −Inf 1.0177 1.0233 1.0235 1.0092 1.0092 1.0044
S.dev. NaN 0.4173 0.2704 0.1891 0.1256 0.1035 0.0803
0.5 0.7 Mean NaN NaN −Inf 0.9450 0.9323 0.9497 0.9730
S.dev. NaN NaN NaN 0.7735 0.6158 0.4568 0.2702
ASD 4.5998 2.2161 2.1461 1.2318 0.7645 0.4712 0.2721
CP% 100.00 100.00 91.71 77.05 78.30 86.42 94.06
0.5 0.9 Mean NaN 1.0025 1.0274 1.0277 1.0211 1.0122 1.0075
S.dev. NaN 0.6212 0.4051 0.2374 0.1476 0.1002 0.0726
0.7 0.9 Mean NaN NaN Inf 1.0399 1.0375 1.0414 1.0288
S.dev. NaN NaN NaN 0.6359 0.4416 0.2525 0.1337
Table 5.
The percentage (%) of iterations with constant ${d_{N}}$ being equal to zero with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 2.9 1.3 0.3 0 0 0 0
0.1 0.5 1.1 0.1 0 0 0 0 0
0.1 0.7 0.2 0 0 0 0 0 0
0.1 0.9 0 0 0 0 0 0 0
0.3 0.5 3.8 1.5 0.1 0 0 0 0
0.3 0.7 0.9 0 0 0 0 0 0
0.3 0.9 0.1 0 0 0 0 0 0
0.5 0.7 3.5 2.2 0.1 0 0 0 0
0.5 0.9 0.6 0 0 0 0 0 0
0.7 0.9 4.3 1.1 0.4 0 0 0 0
Table 6.
The coverage probability (%) for estimators ${({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 ${\xi _{N}}$ 94.29 92.19 95.38 94.24 94.68 92.25 93.14
${\eta _{N}}$ 100.0 98.44 99.23 99.59 98.67 99.27 98.95
${\zeta _{N}}$ 100.0 100.0 100.0 100.0 100.0 100.0 99.88
${\phi _{N}}$ 100.0 100.0 100.0 100.0 100.0 100.0 100.0
0.1 0.5 ${\xi _{N}}$ 95.06 94.55 94.20 94.25 93.54 94.41 95.10
${\eta _{N}}$ 98.77 97.52 97.45 99.12 97.73 98.27 98.50
${\zeta _{N}}$ 97.53 100.0 98.84 99.41 98.75 99.19 99.20
${\phi _{N}}$ 100.0 100.0 99.54 99.26 99.55 99.19 99.40
0.1 0.7 ${\xi _{N}}$ 95.83 95.11 96.33 95.24 94.48 94.00 94.40
${\eta _{N}}$ 93.75 95.49 97.10 94.66 94.68 94.10 93.70
${\zeta _{N}}$ 90.62 95.11 93.82 91.88 91.77 90.00 91.10
${\phi _{N}}$ 88.54 92.48 91.31 89.91 90.26 87.50 88.60
0.3 0.5 ${\xi _{N}}$ 95.00 95.28 95.21 95.64 94.36 95.77 94.80
${\eta _{N}}$ 95.00 96.23 97.01 96.51 97.34 97.29 96.50
${\zeta _{N}}$ 96.67 98.11 98.20 97.67 99.06 98.70 97.80
${\phi _{N}}$ 100.0 99.06 97.01 98.84 98.90 98.81 97.90
0.3 0.7 ${\xi _{N}}$ 96.15 94.21 94.24 92.70 93.07 94.00 93.70
${\eta _{N}}$ 98.72 93.82 94.65 92.18 91.71 91.80 91.40
${\zeta _{N}}$ 96.15 93.05 91.98 90.61 90.35 89.40 89.00
${\phi _{N}}$ 93.59 93.44 89.51 87.22 88.88 86.90 85.70
0.5 0.7 ${\xi _{N}}$ 89.36 96.15 95.85 91.06 94.18 92.56 94.27
${\eta _{N}}$ 87.23 97.69 92.17 87.68 91.04 89.16 90.88
${\zeta _{N}}$ 89.36 96.92 87.56 84.30 88.52 86.81 87.91
${\phi _{N}}$ 91.49 97.69 85.25 82.37 86.95 84.60 86.00
Table 7.
The percentage (%) of iterations with undefined asymptotic covariance matrix ${\hat{\Sigma }_{N}^{0}}$ with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ ($h=1$)
${H_{1}}$ ${H_{2}}$ $N={2^{8}}$ ${2^{10}}$ ${2^{12}}$ ${2^{14}}$ ${2^{16}}$ ${2^{18}}$ ${2^{20}}$
0.1 0.3 96.50 93.60 87.00 75.70 54.90 31.60 14.00
0.1 0.5 91.90 79.80 56.90 32.20 11.80 1.60 0
0.1 0.7 90.40 73.40 48.20 13.80 0.40 0 0
0.3 0.5 94.00 89.40 83.30 65.60 36.20 7.70 0
0.3 0.7 92.20 74.10 51.40 23.30 4.70 0 0
0.5 0.7 95.30 87.00 78.30 58.60 36.40 23.40 5.70
Table 8.
The convergence of the series (16) with ${\sigma ^{2}}=1$, ${\kappa ^{2}}=1$ and ${H_{1}}=0.1$ ($h=1$)
${H_{2}}$ ${10^{-3}}$ ${10^{-4}}$ ${10^{-5}}$ ${10^{-6}}$ ${10^{-7}}$ ${10^{-8}}$ ${10^{-9}}$
0.3 m 4 8 16 35 76 168 377
${S_{m}}$ 4.45362 4.45427 4.45445 4.45451 4.45452 4.45452 4.45453
0.5 m 3 4 7 12 22 42 78
${S_{m}}$ 4.18198 4.18203 4.18206 4.18207 4.18208 4.18208 4.18208
0.7 m $5.9\cdot {10^{6}}$ $4.9\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$ $7.2\cdot {10^{7}}$
${S_{m}}$ 9981.86 35818.4 45796.6 45796.6 45796.6 45796.6 45796.6
Theorem 1.
Let $0\lt {H_{1}}\lt {H_{2}}\lt 1$. The statistic
(8)
\[ {\widehat{\vartheta }_{N}}:={({\widehat{H}_{N}^{(1)}},{\widehat{H}_{N}^{(2)}},{\hat{\kappa }_{N}^{2}},{\hat{\sigma }_{N}^{2}})^{\top }},\]
where
(9)
\[ {\widehat{H}_{N}^{(1)}}=\frac{1}{2}{\log _{2+}}\left(\frac{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}+{d_{N}}}{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}\right),\]
(10)
\[ {\widehat{H}_{N}^{(2)}}=\frac{1}{2}{\log _{2+}}\left(\frac{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}-{d_{N}}}{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}\right),\]
(11)
\[ {\hat{\kappa }_{N}^{2}}={\left(\frac{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}+{d_{N}}}\right)^{{\log _{2}}h}}\frac{{\xi _{N}}{d_{N}}+2{\eta _{N}^{3}}-3{\xi _{N}}{\eta _{N}}{\zeta _{N}}+{\xi _{N}^{2}}{\phi _{N}}}{2{d_{N}}},\]
(12)
\[ {\hat{\sigma }_{N}^{2}}={\left(\frac{2({\eta _{N}^{2}}-{\xi _{N}}{\zeta _{N}})}{{\eta _{N}}{\zeta _{N}}-{\xi _{N}}{\phi _{N}}-{d_{N}}}\right)^{{\log _{2}}h}}\frac{{\xi _{N}}{d_{N}}-2{\eta _{N}^{3}}+3{\xi _{N}}{\eta _{N}}{\zeta _{N}}-{\xi _{N}^{2}}{\phi _{N}}}{2{d_{N}}}\]
with
(13)
\[ {d_{N}}:={\big({\xi _{N}^{2}}{\phi _{N}^{2}}-6{\xi _{N}}{\eta _{N}}{\zeta _{N}}{\phi _{N}}-3{\eta _{N}^{2}}{\zeta _{N}^{2}}+4{\eta _{N}^{3}}{\phi _{N}}+4{\xi _{N}}{\zeta _{N}^{3}}\big)_{+}^{1/2}}\]
is a strongly consistent estimator of $\vartheta ={({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})^{\top }}$.
Here
\[ {\log _{2+}}x=\left\{\begin{array}{l@{\hskip10.0pt}l}{\log _{2}}x,\hspace{1em}& \textit{if}\hspace{2.5pt}x\gt 0,\\ {} 0,\hspace{1em}& \textit{if}\hspace{2.5pt}x\le 0,\end{array}\right.\hspace{2em}{(x)_{+}^{1/2}}=\left\{\begin{array}{l@{\hskip10.0pt}l}\sqrt{x},\hspace{1em}& \textit{if}\hspace{2.5pt}x\gt 0,\\ {} 0,\hspace{1em}& \textit{if}\hspace{2.5pt}x\le 0.\end{array}\right.\]
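For concreteness, the following is a minimal Python transcription of the estimator of Theorem 1 from a discretely observed trajectory. Aligning the four power-variation statistics on a common N and returning NaN in the degenerate case ${d_{N}}=0$ (cf. Table 5) are implementation choices; the input array is assumed to hold the observations ${X_{0}},{X_{h}},{X_{2h}},\dots $

import numpy as np

def log2_plus(x):
    # log_{2+} from Theorem 1: log2(x) for x > 0 and 0 otherwise
    return np.log2(x) if x > 0 else 0.0

def estimate_mixed_fbm(X, h=1.0):
    # Estimators (9)-(12) of (H1, H2, kappa^2, sigma^2) from equidistant
    # observations X[0] = X_0, X[1] = X_h, X[2] = X_{2h}, ...
    dX = np.diff(np.asarray(X, dtype=float))
    N = len(dX) - 7                      # all windows of length 8 must fit

    def stat(w):
        # (1/N) sum_k (dX_k + ... + dX_{k+w-1})^2,  k = 0, ..., N-1
        s = np.convolve(dX, np.ones(w))[w - 1:w - 1 + N]
        return float(np.mean(s**2))

    xi, eta, zeta, phi = stat(1), stat(2), stat(4), stat(8)

    d2 = (xi**2 * phi**2 - 6*xi*eta*zeta*phi - 3*eta**2 * zeta**2
          + 4*eta**3 * phi + 4*xi * zeta**3)
    d = np.sqrt(d2) if d2 > 0 else 0.0   # d_N from (13), with the (.)_+ cutoff
    if d == 0.0:                         # degenerate sample, cf. Table 5
        return (np.nan,) * 4

    denom = 2 * (eta**2 - xi*zeta)
    H1 = 0.5 * log2_plus((eta*zeta - xi*phi + d) / denom)
    H2 = 0.5 * log2_plus((eta*zeta - xi*phi - d) / denom)
    k2 = ((denom / (eta*zeta - xi*phi + d))**np.log2(h)
          * (xi*d + 2*eta**3 - 3*xi*eta*zeta + xi**2 * phi) / (2*d))
    s2 = ((denom / (eta*zeta - xi*phi - d))**np.log2(h)
          * (xi*d - 2*eta**3 + 3*xi*eta*zeta - xi**2 * phi) / (2*d))
    return H1, H2, k2, s2

With $h=1$ the two power factors in (11)–(12) equal one, which is the setting used throughout the simulation study (Tables 1–8).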
Theorem 2.
Let $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$. The vector ${\tau _{N}}$ defined by (6) is asymptotically normal, namely,
\[ \sqrt{N}({\tau _{N}}-{\tau _{0}})=\sqrt{N}\left(\begin{array}{c}{\xi _{N}}-\operatorname{\mathsf{E}}{\xi _{0}}\\ {} {\eta _{N}}-\operatorname{\mathsf{E}}{\eta _{0}}\\ {} {\zeta _{N}}-\operatorname{\mathsf{E}}{\zeta _{0}}\\ {} {\phi _{N}}-\operatorname{\mathsf{E}}{\phi _{0}}\end{array}\right)\stackrel{\text{d}}{\to }\mathcal{N}(\vec{0},\widetilde{\Sigma })\]
with the asymptotic covariance matrix $\widetilde{\Sigma }$, which can be presented explicitly as
\[ \widetilde{\Sigma }=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\widetilde{\Sigma }_{11}}& {\widetilde{\Sigma }_{12}}& {\widetilde{\Sigma }_{13}}& {\widetilde{\Sigma }_{14}}\\ {} {\widetilde{\Sigma }_{12}}& {\widetilde{\Sigma }_{22}}& {\widetilde{\Sigma }_{23}}& {\widetilde{\Sigma }_{24}}\\ {} {\widetilde{\Sigma }_{13}}& {\widetilde{\Sigma }_{23}}& {\widetilde{\Sigma }_{33}}& {\widetilde{\Sigma }_{34}}\\ {} {\widetilde{\Sigma }_{14}}& {\widetilde{\Sigma }_{24}}& {\widetilde{\Sigma }_{34}}& {\widetilde{\Sigma }_{44}}\end{array}\right),\]
where
\[\begin{aligned}{}{\widetilde{\Sigma }_{11}}& =2{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }{(i)^{2}},\hspace{1em}{\widetilde{\Sigma }_{12}}={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\Big(4\tilde{\rho }(i)+4\tilde{\rho }(i+1)\Big),\\ {} {\widetilde{\Sigma }_{22}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(12\tilde{\rho }(i)+16\tilde{\rho }(i+1)+4\tilde{\rho }(i+2)\Big),\\ {} {\widetilde{\Sigma }_{13}}& ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\Big(8\tilde{\rho }(i)+12\tilde{\rho }(i+1)+8\tilde{\rho }(i+2)+4\tilde{\rho }(i+3)\Big),\\ {} {\widetilde{\Sigma }_{23}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(28\tilde{\rho }(i)+48\tilde{\rho }(i+1)+32\tilde{\rho }(i+2)+16\tilde{\rho }(i+3)+4\tilde{\rho }(i+4)\Big),\\ {} {\widetilde{\Sigma }_{33}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(88\tilde{\rho }(i)+160\tilde{\rho }(i+1)+124\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)\\ {} & \hspace{1em}+40\tilde{\rho }(i+4)+16\tilde{\rho }(i+5)+4\tilde{\rho }(i+6)\Big),\\ {} {\widetilde{\Sigma }_{14}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(16\tilde{\rho }(i)+28\tilde{\rho }(i+1)+24\tilde{\rho }(i+2)+20\tilde{\rho }(i+3)\\ {} & \hspace{1em}+16\tilde{\rho }(i+4)+12\tilde{\rho }(i+5)+8\tilde{\rho }(i+6)+4\tilde{\rho }(i+7)\Big),\\ {} {\widetilde{\Sigma }_{24}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(60\tilde{\rho }(i)+112\tilde{\rho }(i+1)+96\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)\\ {} & \hspace{1em}+64\tilde{\rho }(i+4)+48\tilde{\rho }(i+5)+32\tilde{\rho }(i+6)+16\tilde{\rho }(i+7)+4\tilde{\rho }(i+8)\Big),\\ {} {\widetilde{\Sigma }_{34}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(216\tilde{\rho }(i)+416\tilde{\rho }(i+1)+376\tilde{\rho }(i+2)+320\tilde{\rho }(i+3)\\ {} & \hspace{1em}+256\tilde{\rho }(i+4)+192\tilde{\rho }(i+5)+132\tilde{\rho }(i+6)+80\tilde{\rho }(i+7)\\ {} & \hspace{1em}+40\tilde{\rho }(i+8)+16\tilde{\rho }(i+9)+4\tilde{\rho }(i+10)\Big),\\ {} {\widetilde{\Sigma }_{44}}& ={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(688\tilde{\rho }(i)+1344\tilde{\rho }(i+1)+1260\tilde{\rho }(i+2)+1136\tilde{\rho }(i+3)\\ {} & \hspace{1em}+984\tilde{\rho }(i+4)+816\tilde{\rho }(i+5)+644\tilde{\rho }(i+6)+480\tilde{\rho }(i+7)\\ {} & \hspace{1em}+336\tilde{\rho }(i+8)+224\tilde{\rho }(i+9)+140\tilde{\rho }(i+10)+80\tilde{\rho }(i+11)\\ {} & \hspace{1em}+40\tilde{\rho }(i+12)+16\tilde{\rho }(i+13)+4\tilde{\rho }(i+14)\Big).\end{aligned}\]
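In practice the entries of $\widetilde{\Sigma }$ can be approximated by truncating these series, which converge for ${H_{2}}\lt \frac{3}{4}$. The sketch below assumes the same fBm form of $\tilde{\rho }$ as in the verification after the proof of Theorem 2; the coefficient vectors are read off from the expressions above, and the truncation level M is a tuning choice.

import numpy as np

COEF = {  # coefficient vectors (c_0, c_1, ...) read off from Theorem 2
    (1, 1): [2], (1, 2): [4, 4], (2, 2): [12, 16, 4],
    (1, 3): [8, 12, 8, 4], (2, 3): [28, 48, 32, 16, 4],
    (3, 3): [88, 160, 124, 80, 40, 16, 4],
    (1, 4): [16, 28, 24, 20, 16, 12, 8, 4],
    (2, 4): [60, 112, 96, 80, 64, 48, 32, 16, 4],
    (3, 4): [216, 416, 376, 320, 256, 192, 132, 80, 40, 16, 4],
    (4, 4): [688, 1344, 1260, 1136, 984, 816, 644, 480, 336, 224,
             140, 80, 40, 16, 4],
}

def rho_tilde(i, H1=0.2, H2=0.6, kappa2=0.7, sigma2=1.3, h=1.0):
    # assumed increment autocovariance of X at step h (vectorized in i)
    i = np.abs(np.asarray(i, dtype=float))
    g = lambda H: 0.5 * ((i + 1)**(2*H) - 2*i**(2*H) + np.abs(i - 1)**(2*H))
    return kappa2 * h**(2*H1) * g(H1) + sigma2 * h**(2*H2) * g(H2)

def sigma_tilde(M=10**5, **kw):
    # truncated evaluation: Sigma~_{pq} ~ sum_{|i|<=M} rho~(i) sum_s c_s rho~(i+s)
    i = np.arange(-M, M + 1)
    r = rho_tilde(i, **kw)
    S = np.zeros((4, 4))
    for (p, q), coef in COEF.items():
        S[p-1, q-1] = S[q-1, p-1] = sum(
            c * float(np.sum(r * rho_tilde(i + s, **kw)))
            for s, c in enumerate(coef))
    return S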
Theorem 3.
Let $0\lt {H_{1}}\lt {H_{2}}\lt \frac{3}{4}$. The estimator ${\hat{\vartheta }_{N}}$ is asymptotically normal, namely,
\[ \sqrt{N}\left({\hat{\vartheta }_{N}}-\vartheta \right)=\sqrt{N}\left(\begin{array}{c}{\widehat{H}_{N}^{(1)}}-{H_{1}}\\ {} {\widehat{H}_{N}^{(2)}}-{H_{2}}\\ {} {\hat{\kappa }_{N}^{2}}-{\kappa ^{2}}\\ {} {\hat{\sigma }_{N}^{2}}-{\sigma ^{2}}\end{array}\right)\stackrel{\text{d}}{\to }\mathcal{N}\left(\vec{0},{\Sigma ^{0}}\right)\]
with the asymptotic covariance matrix ${\Sigma ^{0}}$ that can be found by the formula
\[ {\Sigma ^{0}}={({f^{\prime }}(\vartheta ))^{-1}}\widetilde{\Sigma }{\left({f^{\prime }}(\vartheta )\right)^{-\top }},\]
where $\widetilde{\Sigma }$ is defined in Theorem 2 and
\[ {f^{\prime }}(\vartheta )=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}2{\kappa ^{2}}{h^{2{H_{1}}}}\log h& 2{\sigma ^{2}}{h^{2{H_{2}}}}\log h& {h^{2{H_{1}}}}& {h^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(2h)^{2{H_{1}}}}\log (2h)& 2{\sigma ^{2}}{(2h)^{2{H_{2}}}}\log (2h)& {(2h)^{2{H_{1}}}}& {(2h)^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(4h)^{2{H_{1}}}}\log (4h)& 2{\sigma ^{2}}{(4h)^{2{H_{2}}}}\log (4h)& {(4h)^{2{H_{1}}}}& {(4h)^{2{H_{2}}}}\\ {} 2{\kappa ^{2}}{(8h)^{2{H_{1}}}}\log (8h)& 2{\sigma ^{2}}{(8h)^{2{H_{2}}}}\log (8h)& {(8h)^{2{H_{1}}}}& {(8h)^{2{H_{2}}}}\end{array}\right).\]
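Given numerical values of $\widetilde{\Sigma }$ (for instance from the truncation sketch after Theorem 2) and of the parameters, ${\Sigma ^{0}}$ is then a direct matrix computation; a minimal sketch:

import numpy as np

def f_prime(H1, H2, kappa2, sigma2, h=1.0):
    # the derivative matrix f'(theta) from Theorem 3; row m corresponds to 2^m h
    rows = []
    for m in range(4):
        u = 2**m * h
        rows.append([2 * kappa2 * u**(2*H1) * np.log(u),
                     2 * sigma2 * u**(2*H2) * np.log(u),
                     u**(2*H1), u**(2*H2)])
    return np.array(rows)

def sigma0(Sigma_t, H1, H2, kappa2, sigma2, h=1.0):
    # Sigma^0 = f'(theta)^{-1} Sigma~ f'(theta)^{-T}
    Jinv = np.linalg.inv(f_prime(H1, H2, kappa2, sigma2, h))
    return Jinv @ Sigma_t @ Jinv.T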
