1 Introduction
The paper is devoted to the statistical identification of the following stochastic process:
\[
{X_{t}}=\kappa {B_{t}^{{H_{1}}}}+\sigma {B_{t}^{{H_{2}}}},\qquad t\ge 0,\hspace{2em}(1)
\]
where BH1 and BH2 are two independent fractional Brownian motions with Hurst indices H1 and H2 respectively. The process (1) is known in the literature as fractional mixed fractional Brownian motion [16]. We aim to estimate all four unknown parameters H1, H2, κ2, σ2 based on equidistant observations {Xkh,k=0,1,2,…}, h>0. In order to get an identifiable model, we assume that 0<H1<H2<1, σ>0, and κ>0 throughout the article.
Fractional Brownian motion is widely applied to modelling real-world processes that evolve over time and exhibit self-similarity together with long- or short-range dependence, see, e.g., [4, 5, 11, 24–26, 32]. Mixed models like (1), containing fractional Brownian motions with different Hurst indices, are important from the perspective of practical applications, in particular, in economics and financial mathematics; the motion with the lower Hurst index usually corresponds to a component responsible for changes over a short period of time, while the motion with the larger one corresponds to a long-term component. The particular case of the process (1) with H1=1/2 is known as mixed fractional Brownian motion. It was first proposed by Cheridito [3] for better financial modeling; its properties can be found in [33]. Practical applications of this model already exist [9, 27, 30, 32]. The model (1) with arbitrary H1 and H2 was considered in [8, 16]. Its application to the pricing of credit default swaps was studied in [10]. A more general model, containing several fractional Brownian motions with different Hurst indices, can be found in [28].
Various methods for parameter estimation in the mixed Brownian–fractional Brownian model (i.e., in the case H1=1/2) were proposed in [29] (the maximum likelihood method), [31] (Powell’s optimization algorithm), and [6] (an approach based on power variations). The general case of H1,H2∈(0,1) is less studied. We can mention only the papers [18, 19], where the following mixed model with a trend was considered:
\[
{X_{t}}=\theta t+\kappa {B_{t}^{{H_{1}}}}+\sigma {B_{t}^{{H_{2}}}}.\hspace{2em}(2)
\]
The authors applied the maximum likelihood approach for the estimation of the unknown drift parameter θ in model (2) based on continuous observations. They constructed an estimator that involves a solution to an integral equation with weak polar kernel. An approach to parameter estimation based on the numerical solution of this equation was recently proposed in [21]. Estimation of the drift parameter from discrete observations was discussed in [20].
In Refs. [2, 7, 14, 17] the parameter estimation for model (2) with H1=1/2 was studied. The simultaneous estimation of all four parameters of this model was studied in [7] by the maximum likelihood method, and in [14], where the approach of [6] was generalized and, moreover, ergodic-type estimators were proposed. Ref. [2] considered the estimation of the drift parameter θ assuming that H and σ are known and κ=1. The drift parameter estimation for a more general model driven by a stationary Gaussian process was discussed in [17].
The goal of the present paper is to construct strongly consistent estimators and to study their joint asymptotic normality. We extend the approach of [14], based on the ergodic theorem and a generalized method of moments. It is worth mentioning that asymptotic normality does not hold for H2 > 3/4; this is typical for models with fractional Brownian motion, see, e.g., [22]. In that case the estimators remain strongly consistent but have a non-Gaussian asymptotic distribution.
The paper is organized as follows. In Section 2 we construct strongly consistent estimators with the use of four statistics based on various increments of the process X. Section 3 is devoted to asymptotic normality. First, we prove the asymptotic normality of the constructed statistics and evaluate their asymptotic covariance matrix. Thereafter, we obtain the main result on the asymptotic normality of our estimators by applying the delta method. The theoretical results are illustrated by a numerical study presented in Section 4. All technical proofs together with some auxiliary results are collected in Section 5.
We use the following notation. The symbol cov stands for the covariance of two random variables and for the covariance matrix of a random vector. In the paper, all vectors are column ones. The superscript ⊤ denotes transposition. Weak convergence in distribution is denoted by $\xrightarrow{d}$.
2 Construction of strongly consistent estimators
Let h>0 be fixed. Assume that the process Xt defined in (1) is observed at the points tk=kh, k=0,1,2,…. Let us denote the increments
\[
\Delta {X_{k}}:={X_{(k+1)h}}-{X_{kh}},\qquad k=0,1,2,\dots .
\]
The estimation procedure will be based on the following result.
This lemma allows us to apply the ergodic theorem for the construction of estimators. Namely, if $g:{\mathbb{R}^{l+1}}\to \mathbb{R}$ is a Borel function such that $\operatorname{\mathbf{E}}|g(\Delta {X_{0}},\Delta {X_{1}},\dots ,\Delta {X_{l}})|<\infty $, then
\[
\frac{1}{N}{\sum \limits_{k=0}^{N-1}}g(\Delta {X_{k}},\Delta {X_{k+1}},\dots ,\Delta {X_{k+l}})\to \operatorname{\mathbf{E}}g(\Delta {X_{0}},\Delta {X_{1}},\dots ,\Delta {X_{l}})
\]
a.s., as N→∞. The main idea is to obtain four different convergences by choosing different functions g, and then to construct the estimators by solving the corresponding system of four equations. To this end, we introduce the following four statistics.
Denote also ϑ=(H1,H2,κ2,σ2)⊤.
\begin{aligned}{}{\xi _{N}}&:=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{({X_{(k+1)h}}-{X_{kh}})^{2}},\hspace{2em}&{\eta _{N}}&:=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{({X_{(k+2)h}}-{X_{kh}})^{2}},\\ {}{\zeta _{N}}&:=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{({X_{(k+4)h}}-{X_{kh}})^{2}},\hspace{2em}&{\phi _{N}}&:=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{({X_{(k+8)h}}-{X_{kh}})^{2}}.\end{aligned}\hspace{2em}(4)
Let us introduce the notation
\[
f(\vartheta ):={\left({\kappa ^{2}}{h^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}},\;{\kappa ^{2}}{(2h)^{2{H_{1}}}}+{\sigma ^{2}}{(2h)^{2{H_{2}}}},\;{\kappa ^{2}}{(4h)^{2{H_{1}}}}+{\sigma ^{2}}{(4h)^{2{H_{2}}}},\;{\kappa ^{2}}{(8h)^{2{H_{1}}}}+{\sigma ^{2}}{(8h)^{2{H_{2}}}}\right)^{\top }},\hspace{2em}(5)
\]
\[
{\tau _{N}}:={({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }},\qquad {\tau _{0}}:={\left(\operatorname{\mathbf{E}}{\xi _{N}},\operatorname{\mathbf{E}}{\eta _{N}},\operatorname{\mathbf{E}}{\zeta _{N}},\operatorname{\mathbf{E}}{\phi _{N}}\right)^{\top }}.\hspace{2em}(6)
\]
Then f(ϑ)=τ0. Therefore, in order to obtain estimators for the parameters (H1,H2,κ2,σ2)⊤, we need to construct the function q=f−1, so that q(τ0)=ϑ; the required estimators are then defined as (ˆH(1)N,ˆH(2)N,ˆκ2N,ˆσ2N)⊤:=q(τN). In other words, in order to construct strongly consistent estimators, we need to solve the estimating equation
\[
f(\vartheta )={\tau _{N}}\hspace{2em}(7)
\]
with respect to the unknown parameters. The solution to (7) can be found in explicit form; see Subsection 5.3 for details. This leads to the following result.
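For illustration, the four statistics in (4) can be computed directly from a sampled path; the following Python sketch is ours (the function name and array layout are illustrative conventions, not part of the paper):

```python
import numpy as np

def path_statistics(X, N):
    """Compute (xi_N, eta_N, zeta_N, phi_N) of (4) from a sampled path
    X = (X_0, X_h, X_{2h}, ...); X must contain at least N + 8 points."""
    X = np.asarray(X, dtype=float)
    stats = []
    for lag in (1, 2, 4, 8):
        incr = X[lag:lag + N] - X[:N]   # X_{(k+lag)h} - X_{kh}, k = 0..N-1
        stats.append(np.mean(incr ** 2))
    return tuple(stats)
```

As a sanity check, for the deterministic path X_k = k every lag-l increment equals l, so the four statistics are (1, 4, 16, 64).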
3 Asymptotic normality
We will start with a study of the asymptotic properties of the vector τN=(ξN,ηN,ζN,ϕN)⊤. The previously defined stationary Gaussian sequence ΔXk has the autocovariance function
\[
\tilde{\rho }(i)={\kappa ^{2}}{h^{2{H_{1}}}}\rho (i,{H_{1}})+{\sigma ^{2}}{h^{2{H_{2}}}}\rho (i,{H_{2}}),\hspace{2em}(14)
\]
where
\[
\rho (i,H)=\tfrac{1}{2}\left(|i+1{|^{2H}}-2|i{|^{2H}}+|i-1{|^{2H}}\right)\hspace{2em}(15)
\]
denotes the autocovariance function of the stationary sequence $\{{B_{k+1}^{H}}-{B_{k}^{H}},\,k\ge 0\}$, which is known as fractional Gaussian noise.
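For later numerical work, both autocovariances admit a direct implementation; the sketch below uses our own function names (formula (15) is the standard fractional Gaussian noise autocovariance, and the two independent fBm components contribute additively to the covariance of ΔXk):

```python
def rho(i, H):
    """Autocovariance of fractional Gaussian noise at lag i, i.e.
    0.5 * (|i+1|^{2H} - 2|i|^{2H} + |i-1|^{2H})."""
    i = abs(i)
    return 0.5 * ((i + 1) ** (2 * H) - 2 * i ** (2 * H) + abs(i - 1) ** (2 * H))

def rho_tilde(i, H1, H2, kappa2, sigma2, h=1.0):
    """Autocovariance of the increment sequence Delta X_k of the mixed model."""
    return (kappa2 * h ** (2 * H1) * rho(i, H1)
            + sigma2 * h ** (2 * H2) * rho(i, H2))
```

For H = 1/2 formula (15) gives rho(i, 0.5) = 0 for i ≠ 0, recovering the uncorrelated Brownian increments.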
Theorem 2.
Let 0 < H1 < H2 < 3/4. The vector τN defined by (6) is asymptotically normal, namely,
\[
\sqrt{N}\left({\tau _{N}}-{\tau _{0}}\right)\xrightarrow{d}\mathcal{N}(0,\tilde{\Sigma })
\]
with the asymptotic covariance matrix $\tilde{\Sigma }={({\tilde{\Sigma }_{mn}})_{m,n=1}^{4}}$, ${\tilde{\Sigma }_{mn}}={\tilde{\Sigma }_{nm}}$, whose entries can be presented explicitly as follows:
\begin{aligned}{}{\tilde{\Sigma }_{11}}&=2{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }{(i)^{2}},\\ {}{\tilde{\Sigma }_{12}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(4\tilde{\rho }(i)+4\tilde{\rho }(i+1)\right),\\ {}{\tilde{\Sigma }_{22}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(12\tilde{\rho }(i)+16\tilde{\rho }(i+1)+4\tilde{\rho }(i+2)\right),\\ {}{\tilde{\Sigma }_{13}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(8\tilde{\rho }(i)+12\tilde{\rho }(i+1)+8\tilde{\rho }(i+2)+4\tilde{\rho }(i+3)\right),\\ {}{\tilde{\Sigma }_{23}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(28\tilde{\rho }(i)+48\tilde{\rho }(i+1)+32\tilde{\rho }(i+2)+16\tilde{\rho }(i+3)+4\tilde{\rho }(i+4)\right),\\ {}{\tilde{\Sigma }_{33}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(88\tilde{\rho }(i)+160\tilde{\rho }(i+1)+124\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)+40\tilde{\rho }(i+4)+16\tilde{\rho }(i+5)+4\tilde{\rho }(i+6)\right),\\ {}{\tilde{\Sigma }_{14}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(16\tilde{\rho }(i)+28\tilde{\rho }(i+1)+24\tilde{\rho }(i+2)+20\tilde{\rho }(i+3)+16\tilde{\rho }(i+4)+12\tilde{\rho }(i+5)+8\tilde{\rho }(i+6)+4\tilde{\rho }(i+7)\right),\\ {}{\tilde{\Sigma }_{24}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(60\tilde{\rho }(i)+112\tilde{\rho }(i+1)+96\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)+64\tilde{\rho }(i+4)+48\tilde{\rho }(i+5)+32\tilde{\rho }(i+6)+16\tilde{\rho }(i+7)+4\tilde{\rho }(i+8)\right),\\ {}{\tilde{\Sigma }_{34}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(216\tilde{\rho }(i)+416\tilde{\rho }(i+1)+376\tilde{\rho }(i+2)+320\tilde{\rho }(i+3)+256\tilde{\rho }(i+4)+192\tilde{\rho }(i+5)+132\tilde{\rho }(i+6)+80\tilde{\rho }(i+7)+40\tilde{\rho }(i+8)+16\tilde{\rho }(i+9)+4\tilde{\rho }(i+10)\right),\\ {}{\tilde{\Sigma }_{44}}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\big(688\tilde{\rho }(i)+1344\tilde{\rho }(i+1)+1260\tilde{\rho }(i+2)+1136\tilde{\rho }(i+3)+984\tilde{\rho }(i+4)+816\tilde{\rho }(i+5)+644\tilde{\rho }(i+6)+480\tilde{\rho }(i+7)\\ {}&\hspace{2em}+336\tilde{\rho }(i+8)+224\tilde{\rho }(i+9)+140\tilde{\rho }(i+10)+80\tilde{\rho }(i+11)+40\tilde{\rho }(i+12)+16\tilde{\rho }(i+13)+4\tilde{\rho }(i+14)\big).\end{aligned}
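Each entry above has the common form $\sum_i \tilde\rho(i)\sum_j c_j \tilde\rho(i+j)$ with a finite coefficient list $c$. A hedged Python sketch (our own helper, assuming κ = σ = h = 1, so that ρ̃(i) = ρ(i, H1) + ρ(i, H2)) that approximates such an entry by plain truncation:

```python
def rho(i, H):
    # fractional Gaussian noise autocovariance at lag i
    i = abs(i)
    return 0.5 * ((i + 1) ** (2 * H) - 2 * i ** (2 * H) + abs(i - 1) ** (2 * H))

def sigma_entry(coeffs, H1, H2, m=5000):
    """Truncated value of sum_i rho~(i) * sum_j coeffs[j] * rho~(i + j),
    e.g. coeffs = [2] for Sigma~_11 or [12, 16, 4] for Sigma~_22."""
    rho_t = lambda i: rho(i, H1) + rho(i, H2)
    return sum(rho_t(i) * sum(c * rho_t(i + j) for j, c in enumerate(coeffs))
               for i in range(-m, m + 1))
```

In the degenerate case H1 = H2 = 1/2 the autocovariance ρ̃ vanishes off lag zero, so Σ̃11 and Σ̃12 reduce to 2·2² = 8 and 4·2² = 16, which the truncated sums reproduce exactly.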
Remark 2.
The condition H2 < 3/4 is essential since otherwise the series in Theorem 2 do not converge. Moreover, for H2 > 3/4, the asymptotic normality of τN does not hold; using the results of [15], it is possible to establish the convergence of the suitably normalized vector τN − τ0 to a certain non-Gaussian distribution.
4 Simulation study
In this section, we study the performance of the estimators in numerical simulations. For each generated trajectory, we estimate the asymptotic covariance matrices defined in Theorem 2 and Theorem 3 by plugging in the values of the estimators (9)–(12). For each set of parameters with 0 < H1 < H2 < 1, we generate 1000 trajectories of the process X and calculate the empirical means and empirical standard deviations of the estimates, as well as the percentage of iterations in which the constant dN defined in (13) equals zero. Additionally, for 0 < H1 < H2 < 3/4, we calculate the average approximate theoretical standard deviation (ASD), i.e., the square root of the estimated asymptotic variance divided by N, namely $\sqrt{{(\hat{\Sigma }_{N}^{0})_{ii}}/N}$, and the coverage probability (CP) for α = 5%, based on the estimator of the asymptotic covariance matrix. We compute these statistics only for H1 < H2 < 3/4, excluding the interval (3/4, 1), where the series (17) does not converge by Lemma 3; there the asymptotic covariance matrix and the corresponding statistics (ASD and CP) are undefined. Moreover, when one of the estimates ˆHN falls outside the interval (0, 3/4), or the constant dN defined in (13) equals zero (e.g., when the discriminant D defined in the proof of Theorem 1 by (30) is nonpositive), the ASD and CP are also considered undefined.
To approximate the limiting covariances numerically, series like (17) are split into the sum of two convergent series
where α, β ∈ ℤ. Each series ${\sum _{i=0}^{+\infty }}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )$ is then approximated with precision δ = 10^{-3} by the partial sum ${S_{m}}={\sum _{i=0}^{m}}\tilde{\rho }(i+\alpha )\tilde{\rho }(i+\beta )$, where m is the smallest number such that $|{S_{m}}-{S_{m-1}}|<{10^{-3}}$. The obtained approximations are used to compute the asymptotic covariance matrix defined in Theorem 2.
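The stopping rule just described can be sketched as follows (our own code, assuming κ = σ = h = 1, so that ρ̃(i) = ρ(i, H1) + ρ(i, H2)):

```python
def rho(i, H):
    # fractional Gaussian noise autocovariance at lag i
    i = abs(i)
    return 0.5 * ((i + 1) ** (2 * H) - 2 * i ** (2 * H) + abs(i - 1) ** (2 * H))

def truncated_sum(H1, H2, alpha=0, beta=0, delta=1e-3, max_iter=10 ** 8):
    """Partial sums S_m of sum_{i>=0} rho~(i+alpha) rho~(i+beta), stopped at
    the smallest m with |S_m - S_{m-1}| < delta."""
    rho_t = lambda i: rho(i, H1) + rho(i, H2)
    s = rho_t(alpha) * rho_t(beta)  # S_0
    for m in range(1, max_iter):
        term = rho_t(m + alpha) * rho_t(m + beta)
        s += term
        if abs(term) < delta:       # |S_m - S_{m-1}| = |term|
            return m, s
    raise RuntimeError("series did not stabilize within max_iter terms")
```

With δ = 10^{-3}, H1 = 0.1 and H2 = 0.3 this reproduces the first cell of Table 8 (m = 4, S_m ≈ 4.4536).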
For ˆH(1)N, ˆH(2)N, ˆκ2N, ˆσ2N, ˆΣ0N and the fixed time step h = 1 we vary the horizon T = h·2^n for n ∈ {8, 10, 12, 14, 16, 18, 20}, so that the sample size is N = 2^n. For all simulations, the values σ = 1 and κ = 1 were used.
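Trajectory generation can be sketched via the exact Cholesky method (an illustrative implementation with our own function names, not necessarily the one used for the paper's simulations; its O(n³) cost makes it suitable only for moderate n, while the larger sample sizes above would call for a fast method such as circulant embedding):

```python
import numpy as np

def fgn_cov(n, H):
    """n x n covariance matrix of fractional Gaussian noise with Hurst index H."""
    i = np.arange(n)
    lags = np.abs(i[:, None] - i[None, :]).astype(float)
    return 0.5 * ((lags + 1) ** (2 * H) - 2 * lags ** (2 * H)
                  + np.abs(lags - 1) ** (2 * H))

def simulate_X(n, H1, H2, kappa=1.0, sigma=1.0, h=1.0, rng=None):
    """Sample X_{kh}, k = 0..n, of model (1) as kappa*B^{H1} + sigma*B^{H2}."""
    rng = np.random.default_rng(rng)

    def fbm(H):
        # Cholesky factor of the fGn covariance; tiny jitter for stability
        L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
        incr = h ** H * (L @ rng.standard_normal(n))
        return np.concatenate([[0.0], np.cumsum(incr)])

    return kappa * fbm(H1) + sigma * fbm(H2)
```

The two fBm components are drawn independently, matching the independence assumption on B^{H1} and B^{H2}.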
Table 1.
The estimator ˆH(1)N with σ2=1, κ2=1 (h=1)
H1 | H2 | | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 |
0.1 | 0.3 | Mean | 0.0722 | 0.0434 | −0.0230 | −0.1009 | −0.0844 | 0.0482 | 0.0842 |
S.dev. | 0.5041 | 0.4727 | 0.5063 | 0.5160 | 0.4459 | 0.1629 | 0.0791 | ||
ASD | 0.7648 | 0.7039 | 1.1629 | 0.3871 | 0.2011 | 0.1239 | 0.0812 | ||
CP% | 100.00 | 100.00 | 100.00 | 92.18 | 75.61 | 75.58 | 87.91 | ||
0.1 | 0.5 | Mean | 0.0184 | −0.0455 | −0.0009 | 0.0674 | 0.0953 | 0.0988 | 0.0991 |
S.dev. | 0.5618 | 0.5945 | 0.3733 | 0.1710 | 0.0824 | 0.0414 | 0.0205 | ||
ASD | 1.1951 | 0.5894 | 0.2696 | 0.1457 | 0.0829 | 0.0439 | 0.0222 | ||
CP% | 100.00 | 100.00 | 99.30 | 89.82 | 93.76 | 96.85 | 96.50 | ||
0.1 | 0.7 | Mean | −0.0408 | −0.0071 | 0.0773 | 0.0945 | 0.1002 | 0.0998 | 0.1005 |
S.dev. | 0.5931 | 0.3828 | 0.1513 | 0.0715 | 0.0342 | 0.0179 | 0.0090 | ||
ASD | 0.7428 | 0.3223 | 0.1516 | 0.0742 | 0.0375 | 0.0188 | 0.0094 | ||
CP% | 100.00 | 100.00 | 100.00 | 99.30 | 97.09 | 95.90 | 95.40 | ||
0.1 | 0.9 | Mean | 0.0131 | 0.0725 | 0.0886 | 0.0956 | 0.0972 | 0.0986 | 0.0993 |
S.dev. | 0.3842 | 0.1554 | 0.0707 | 0.0350 | 0.0187 | 0.0113 | 0.0075 | ||
0.3 | 0.5 | Mean | 0.2096 | 0.1797 | 0.1161 | 0.0665 | 0.1731 | 0.2658 | 0.2901 |
S.dev. | 0.5181 | 0.5738 | 0.4782 | 0.5254 | 0.3445 | 0.1209 | 0.0564 | ||
ASD | 1.1101 | 1.0150 | 0.7507 | 0.4497 | 0.2337 | 0.1124 | 0.0570 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 92.01 | 87.11 | 90.80 | ||
0.3 | 0.7 | Mean | 0.1748 | 0.1524 | 0.2330 | 0.2764 | 0.2931 | 0.2989 | 0.3001 |
S.dev. | 0.5138 | 0.4661 | 0.2630 | 0.1165 | 0.0563 | 0.0263 | 0.0137 | ||
ASD | 1.4395 | 0.5732 | 0.2682 | 0.1248 | 0.0566 | 0.0277 | 0.0138 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 98.32 | 96.30 | 95.60 | ||
0.3 | 0.9 | Mean | 0.1863 | 0.2636 | 0.2863 | 0.2934 | 0.2969 | 0.2983 | 0.2988 |
S.dev. | 0.4268 | 0.1859 | 0.0824 | 0.0423 | 0.0211 | 0.0124 | 0.0078 | ||
0.5 | 0.7 | Mean | 0.4213 | 0.3300 | 0.2774 | 0.3367 | 0.4563 | 0.4859 | 0.4974 |
S.dev. | 0.6302 | 0.6258 | 0.5085 | 0.3908 | 0.1461 | 0.0763 | 0.0367 | ||
ASD | 2.0105 | 1.2367 | 0.8386 | 0.4502 | 0.1964 | 0.0830 | 0.0366 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 99.84 | 99.09 | 98.09 | ||
0.5 | 0.9 | Mean | 0.3188 | 0.3701 | 0.4577 | 0.4878 | 0.4943 | 0.4975 | 0.4986 |
S.dev. | 0.5170 | 0.4018 | 0.1477 | 0.0627 | 0.0299 | 0.0162 | 0.0095 | ||
0.7 | 0.9 | Mean | 0.5487 | 0.4918 | 0.5172 | 0.5854 | 0.6706 | 0.6898 | 0.6957 |
S.dev. | 0.6543 | 0.6735 | 0.4569 | 0.2908 | 0.0914 | 0.0396 | 0.0193 |
Table 2.
The estimator ˆH(2)N with σ2=1, κ2=1 (h=1)
H1 | H2 | | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 |
0.1 | 0.3 | Mean | 0.2296 | 0.2845 | 0.3834 | 0.5157 | 0.4571 | 0.3634 | 0.3182 |
S.dev. | 0.5205 | 0.6239 | 0.5643 | 0.5944 | 0.3578 | 0.1543 | 0.0594 | ||
ASD | 2.2888 | 2.0419 | 1.4188 | 0.8483 | 0.4334 | 0.1937 | 0.0791 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 99.88 | ||
0.1 | 0.5 | Mean | 0.5437 | 0.6554 | 0.6072 | 0.5302 | 0.5093 | 0.5024 | 0.5003 |
S.dev. | 0.6078 | 0.5234 | 0.2668 | 0.0985 | 0.0411 | 0.0193 | 0.0094 | ||
ASD | 0.9381 | 0.5332 | 0.2632 | 0.1158 | 0.0481 | 0.0218 | 0.0107 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 99.66 | 98.37 | 96.80 | ||
0.1 | 0.7 | Mean | 0.7912 | 0.7303 | 0.7069 | 0.7018 | 0.7006 | 0.7000 | 0.7001 |
S.dev. | 0.4123 | 0.1341 | 0.0554 | 0.0254 | 0.0126 | 0.0066 | 0.0032 | ||
ASD | 0.3193 | 0.1257 | 0.0557 | 0.0265 | 0.0139 | 0.0064 | 0.0032 | ||
CP% | 100.00 | 100.00 | 100.00 | 99.77 | 95.98 | 94.50 | 94.70 | ||
0.1 | 0.9 | Mean | 0.8818 | 0.8825 | 0.8882 | 0.8930 | 0.8959 | 0.8979 | 0.8991 |
S.dev. | 0.1599 | 0.0601 | 0.0381 | 0.0272 | 0.0201 | 0.0151 | 0.0121 | ||
0.3 | 0.5 | Mean | 0.4179 | 0.4999 | 0.5838 | 0.6819 | 0.5997 | 0.5329 | 0.5097 |
S.dev. | 0.6199 | 0.6409 | 0.5791 | 0.4784 | 0.2349 | 0.0921 | 0.0418 | ||
ASD | 0.9471 | 0.7822 | 0.6198 | 0.3870 | 0.1737 | 0.0861 | 0.0412 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 97.81 | 90.36 | 90.90 | ||
0.3 | 0.7 | Mean | 0.7636 | 0.8383 | 0.7481 | 0.7094 | 0.7023 | 0.7007 | 0.7004 |
S.dev. | 0.5831 | 0.4365 | 0.1412 | 0.0574 | 0.0271 | 0.0128 | 0.0066 | ||
ASD | 0.4241 | 0.1793 | 0.0914 | 0.0466 | 0.0256 | 0.0130 | 0.0065 | ||
CP% | 100.00 | 100.00 | 99.38 | 90.74 | 93.60 | 95.30 | 94.00 | ||
0.3 | 0.9 | Mean | 0.9267 | 0.8926 | 0.8898 | 0.8936 | 0.8954 | 0.8975 | 0.8981 |
S.dev. | 0.2765 | 0.0827 | 0.0440 | 0.0301 | 0.0213 | 0.0166 | 0.0130 | ||
0.5 | 0.7 | Mean | 0.5823 | 0.6867 | 0.8179 | 0.8324 | 0.7494 | 0.7181 | 0.7056 |
S.dev. | 0.6097 | 0.6288 | 0.5302 | 0.3556 | 0.1229 | 0.0581 | 0.0265 | ||
ASD | 0.5033 | 0.2277 | 0.1997 | 0.1055 | 0.0672 | 0.0422 | 0.0245 | ||
CP% | 100.00 | 100.00 | 99.54 | 82.13 | 78.62 | 85.38 | 92.58 | ||
0.5 | 0.9 | Mean | 0.8978 | 0.9350 | 0.8980 | 0.8936 | 0.8939 | 0.8965 | 0.8978 |
S.dev. | 0.5267 | 0.2147 | 0.0793 | 0.0413 | 0.0274 | 0.0198 | 0.0150 | ||
0.7 | 0.9 | Mean | 0.7878 | 0.8862 | 0.9672 | 0.9309 | 0.9021 | 0.8963 | 0.8965 |
S.dev. | 0.6431 | 0.5628 | 0.4037 | 0.1334 | 0.0578 | 0.0316 | 0.0213 |
Table 3.
The estimator ˆκ2N with σ2=1, κ2=1 (h=1)
H1 | H2 | | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 |
0.1 | 0.3 | Mean | NaN | NaN | NaN | Inf | 1.0591 | 1.0732 | 1.0460 |
S.dev. | NaN | NaN | NaN | NaN | 0.7876 | 0.6474 | 0.4604 | ||
ASD | 4.3622 | 4.9405 | 5.0730 | 2.8475 | 1.5443 | 0.9266 | 0.6165 | ||
CP% | 100.00 | 100.00 | 93.08 | 80.66 | 70.73 | 74.27 | 89.53 | ||
0.1 | 0.5 | Mean | NaN | Inf | 1.1104 | 1.0594 | 1.0340 | 1.0098 | 1.0006 |
S.dev. | NaN | NaN | 0.6161 | 0.4314 | 0.2528 | 0.1333 | 0.0672 | ||
ASD | 4.4678 | 2.4422 | 1.0822 | 0.5453 | 0.2928 | 0.1484 | 0.0743 | ||
CP% | 100.00 | 100.00 | 99.30 | 92.92 | 96.15 | 97.97 | 96.20 | ||
0.1 | 0.7 | Mean | Inf | 1.0400 | 1.0183 | 1.0054 | 1.0035 | 1.0004 | 1.0010 |
S.dev. | NaN | 0.4440 | 0.2538 | 0.1260 | 0.0623 | 0.0330 | 0.0164 | ||
ASD | 1.8115 | 0.6744 | 0.2911 | 0.1395 | 0.0693 | 0.0346 | 0.0173 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 97.79 | 95.70 | 95.70 | ||
0.1 | 0.9 | Mean | 1.0054 | 0.9872 | 0.9878 | 0.9922 | 0.9950 | 0.9972 | 0.9987 |
S.dev. | 0.3366 | 0.1663 | 0.0823 | 0.0437 | 0.0264 | 0.0177 | 0.0132 | ||
0.3 | 0.5 | Mean | NaN | NaN | Inf | 1.0451 | 1.0379 | 1.0466 | 1.0232 |
S.dev. | NaN | NaN | NaN | 0.8444 | 0.7341 | 0.5658 | 0.3768 | ||
ASD | 4.8428 | 4.2202 | 3.2605 | 2.1396 | 1.0325 | 0.6043 | 0.3862 | ||
CP% | 100.00 | 100.00 | 100.00 | 96.51 | 83.23 | 77.25 | 84.00 | ||
0.3 | 0.7 | Mean | NaN | 1.1084 | 1.0696 | 1.0255 | 1.0106 | 1.0026 | 1.0017 |
S.dev. | NaN | NaN | 0.5169 | 0.3305 | 0.1797 | 0.0877 | 0.0455 | ||
ASD | 3.5137 | 1.3194 | 0.6535 | 0.3286 | 0.1770 | 0.0917 | 0.0461 | ||
CP% | 100.00 | 100.00 | 99.79 | 90.74 | 94.96 | 95.70 | 94.60 | ||
0.3 | 0.9 | Mean | Inf | 0.9970 | 0.9867 | 0.9887 | 0.9923 | 0.9955 | 0.9968 |
S.dev. | NaN | 0.2908 | 0.1463 | 0.0802 | 0.0436 | 0.0285 | 0.0201 | ||
0.5 | 0.7 | Mean | NaN | NaN | Inf | 1.0557 | 1.0674 | 1.0502 | 1.0270 |
S.dev. | NaN | NaN | NaN | 0.7737 | 0.6163 | 0.4567 | 0.2703 | ||
ASD | 4.6051 | 2.2181 | 2.1464 | 1.2319 | 0.7646 | 0.4713 | 0.2722 | ||
CP% | 100.00 | 100.00 | 92.17 | 77.05 | 78.14 | 86.29 | 93.96 | ||
0.5 | 0.9 | Mean | NaN | 1.0008 | 0.9750 | 0.9800 | 0.9814 | 0.9905 | 0.9940 |
S.dev. | NaN | 0.5558 | 0.3666 | 0.1997 | 0.1057 | 0.0621 | 0.0402 | ||
0.7 | 0.9 | Mean | NaN | NaN | −Inf | 0.9524 | 0.9568 | 0.9572 | 0.9690 |
S.dev. | NaN | NaN | NaN | 0.6566 | 0.4605 | 0.2777 | 0.1643 |
Table 4.
The estimator ˆσ2N with σ2=1, κ2=1 (h=1)
H1 | H2 | | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 |
0.1 | 0.3 | Mean | NaN | NaN | NaN | Inf | 0.9764 | 0.9499 | 0.9781 |
S.dev. | NaN | NaN | NaN | NaN | 0.7882 | 0.6480 | 0.4606 | ||
ASD | 4.3572 | 4.9387 | 5.0724 | 2.8472 | 1.5442 | 0.9265 | 0.6165 | ||
CP% | 100.00 | 100.00 | 93.08 | 80.25 | 70.95 | 74.27 | 89.53 | ||
0.1 | 0.5 | Mean | NaN | −Inf | 0.8916 | 0.9408 | 0.9656 | 0.9902 | 0.9995 |
S.dev. | NaN | NaN | 0.6168 | 0.4320 | 0.2527 | 0.1334 | 0.0672 | ||
ASD | 4.4647 | 2.4412 | 1.0818 | 0.5452 | 0.2928 | 0.1484 | 0.0748 | ||
CP% | 100.00 | 100.00 | 99.54 | 93.07 | 96.15 | 97.87 | 96.40 | ||
0.1 | 0.7 | Mean | −Inf | 0.9609 | 0.9795 | 0.9941 | 0.9959 | 0.9994 | 0.9989 |
S.dev. | NaN | 0.4474 | 0.2513 | 0.1245 | 0.0617 | 0.0326 | 0.0162 | ||
ASD | 1.8071 | 0.6716 | 0.2897 | 0.1388 | 0.0690 | 0.0344 | 0.0172 | ||
CP% | 100.00 | 100.00 | 100.00 | 100.00 | 97.49 | 95.90 | 96.20 | ||
0.1 | 0.9 | Mean | 1.0170 | 1.0253 | 1.0273 | 1.0157 | 1.0099 | 1.0064 | 1.0067 |
S.dev. | 0.5639 | 0.3708 | 0.2810 | 0.2096 | 0.1452 | 0.1039 | 0.0784 | ||
0.3 | 0.5 | Mean | NaN | NaN | −Inf | 0.9554 | 0.9623 | 0.9536 | 0.9769 |
S.dev. | NaN | NaN | NaN | 0.8435 | 0.7340 | 0.5662 | 0.3768 | ||
ASD | 4.8407 | 4.2199 | 3.2605 | 2.1396 | 1.0325 | 0.6043 | 0.3862 | ||
CP% | 100.00 | 100.00 | 100.00 | 96.80 | 83.23 | 77.03 | 84.00 | ||
0.3 | 0.7 | Mean | NaN | −Inf | 0.9327 | 0.9886 | 0.9978 | 0.9984 | 0.9982 |
S.dev. | NaN | NaN | 0.5156 | 0.3295 | 0.1787 | 0.0872 | 0.0453 | ||
ASD | 3.5101 | 1.3180 | 0.6527 | 0.3283 | 0.1768 | 0.0915 | 0.0460 | ||
CP% | 100.00 | 100.00 | 100.00 | 90.61 | 95.07 | 95.50 | 94.80 | ||
0.3 | 0.9 | Mean | −Inf | 1.0177 | 1.0233 | 1.0235 | 1.0092 | 1.0092 | 1.0044 |
S.dev. | NaN | 0.4173 | 0.2704 | 0.1891 | 0.1256 | 0.1035 | 0.0803 | ||
0.5 | 0.7 | Mean | NaN | NaN | −Inf | 0.9450 | 0.9323 | 0.9497 | 0.9730 |
S.dev. | NaN | NaN | NaN | 0.7735 | 0.6158 | 0.4568 | 0.2702 | ||
ASD | 4.5998 | 2.2161 | 2.1461 | 1.2318 | 0.7645 | 0.4712 | 0.2721 | ||
CP% | 100.00 | 100.00 | 91.71 | 77.05 | 78.30 | 86.42 | 94.06 | ||
0.5 | 0.9 | Mean | NaN | 1.0025 | 1.0274 | 1.0277 | 1.0211 | 1.0122 | 1.0075 |
S.dev. | NaN | 0.6212 | 0.4051 | 0.2374 | 0.1476 | 0.1002 | 0.0726 | ||
0.7 | 0.9 | Mean | NaN | NaN | Inf | 1.0399 | 1.0375 | 1.0414 | 1.0288 |
S.dev. | NaN | NaN | NaN | 0.6359 | 0.4416 | 0.2525 | 0.1337 |
The estimation results in Tables 1–4 indicate that the constructed estimators perform better when the difference between H1 and H2 is larger. The impact is especially significant for the estimators of the variances κ2 and σ2, since the denominators in (11) and (12) tend to 0 as the difference between the two Hurst parameters tends to zero. Also, the smaller this difference is, the higher the percentage of iterations with the constant dN equal to zero, which is reported in Table 5. Additionally, we observe that the estimators perform better for higher values of H1 and H2. This may also be attributed to the difference between H1 and H2, since most of the values in the numerators and denominators of the constructed estimators depend strongly on this difference, as was shown in the proof of Theorem 1. Higher values of H1 and H2 may yield a more accurate estimate of the difference ˆH(2)N − ˆH(1)N in terms of the ratio between the estimated value and its deviation, and therefore a lower percentage of iterations with a negative estimated difference.
Table 5.
The percentage (%) of iterations with constant dN being equal to zero with σ2=1, κ2=1 (h=1)
H1 | H2 | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20
0.1 | 0.3 | 2.9 | 1.3 | 0.3 | 0 | 0 | 0 | 0 |
0.1 | 0.5 | 1.1 | 0.1 | 0 | 0 | 0 | 0 | 0 |
0.1 | 0.7 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 |
0.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
0.3 | 0.5 | 3.8 | 1.5 | 0.1 | 0 | 0 | 0 | 0 |
0.3 | 0.7 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 |
0.3 | 0.9 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 |
0.5 | 0.7 | 3.5 | 2.2 | 0.1 | 0 | 0 | 0 | 0 |
0.5 | 0.9 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 |
0.7 | 0.9 | 4.3 | 1.1 | 0.4 | 0 | 0 | 0 | 0 |
Table 6.
The coverage probability (%) for estimators (ξN,ηN,ζN,ϕN)⊤ with σ2=1, κ2=1 (h=1)
H1 | H2 | | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20 |
0.1 | 0.3 | ξN | 94.29 | 92.19 | 95.38 | 94.24 | 94.68 | 92.25 | 93.14 |
ηN | 100.0 | 98.44 | 99.23 | 99.59 | 98.67 | 99.27 | 98.95 | ||
ζN | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 99.88 | ||
ϕN | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | ||
0.1 | 0.5 | ξN | 95.06 | 94.55 | 94.20 | 94.25 | 93.54 | 94.41 | 95.10 |
ηN | 98.77 | 97.52 | 97.45 | 99.12 | 97.73 | 98.27 | 98.50 | ||
ζN | 97.53 | 100.0 | 98.84 | 99.41 | 98.75 | 99.19 | 99.20 | ||
ϕN | 100.0 | 100.0 | 99.54 | 99.26 | 99.55 | 99.19 | 99.40 | ||
0.1 | 0.7 | ξN | 95.83 | 95.11 | 96.33 | 95.24 | 94.48 | 94.00 | 94.40 |
ηN | 93.75 | 95.49 | 97.10 | 94.66 | 94.68 | 94.10 | 93.70 | ||
ζN | 90.62 | 95.11 | 93.82 | 91.88 | 91.77 | 90.00 | 91.10 | ||
ϕN | 88.54 | 92.48 | 91.31 | 89.91 | 90.26 | 87.50 | 88.60 | ||
0.3 | 0.5 | ξN | 95.00 | 95.28 | 95.21 | 95.64 | 94.36 | 95.77 | 94.80 |
ηN | 95.00 | 96.23 | 97.01 | 96.51 | 97.34 | 97.29 | 96.50 | ||
ζN | 96.67 | 98.11 | 98.20 | 97.67 | 99.06 | 98.70 | 97.80 | ||
ϕN | 100.0 | 99.06 | 97.01 | 98.84 | 98.90 | 98.81 | 97.90 | ||
0.3 | 0.7 | ξN | 96.15 | 94.21 | 94.24 | 92.70 | 93.07 | 94.00 | 93.70 |
ηN | 98.72 | 93.82 | 94.65 | 92.18 | 91.71 | 91.80 | 91.40 | ||
ζN | 96.15 | 93.05 | 91.98 | 90.61 | 90.35 | 89.40 | 89.00 | ||
ϕN | 93.59 | 93.44 | 89.51 | 87.22 | 88.88 | 86.90 | 85.70 | ||
0.5 | 0.7 | ξN | 89.36 | 96.15 | 95.85 | 91.06 | 94.18 | 92.56 | 94.27 |
ηN | 87.23 | 97.69 | 92.17 | 87.68 | 91.04 | 89.16 | 90.88 | ||
ζN | 89.36 | 96.92 | 87.56 | 84.30 | 88.52 | 86.81 | 87.91 | ||
ϕN | 91.49 | 97.69 | 85.25 | 82.37 | 86.95 | 84.60 | 86.00 |
Table 7.
The percentage (%) of iterations with undefined asymptotic covariance matrix ˆΣ0N with σ2=1, κ2=1 (h=1)
H1 | H2 | N = 2^8 | 2^10 | 2^12 | 2^14 | 2^16 | 2^18 | 2^20
0.1 | 0.3 | 96.50 | 93.60 | 87.00 | 75.70 | 54.90 | 31.60 | 14.00 |
0.1 | 0.5 | 91.90 | 79.80 | 56.90 | 32.20 | 11.80 | 1.60 | 0 |
0.1 | 0.7 | 90.40 | 73.40 | 48.20 | 13.80 | 0.40 | 0 | 0 |
0.3 | 0.5 | 94.00 | 89.40 | 83.30 | 65.60 | 36.20 | 7.70 | 0 |
0.3 | 0.7 | 92.20 | 74.10 | 51.40 | 23.30 | 4.70 | 0 | 0 |
0.5 | 0.7 | 95.30 | 87.00 | 78.30 | 58.60 | 36.40 | 23.40 | 5.70 |
We observe that the larger the difference between the true values of H1 and H2, the closer the ASD is to the empirical standard deviation of the estimator and the closer the coverage probability is to the theoretical 95%. Similar behavior is observed for our estimates of the asymptotic variance of the statistics (4) constructed in Theorem 2, as the CP estimates in Table 6 show. This table also shows that for higher values of H1 and H2, the accuracy of (4) decreases as the increment step grows (e.g., ϕN is less accurate than ζN, which is less accurate than ηN, which is less accurate than ξN). Unfortunately, the percentage of iterations with an undefined asymptotic covariance matrix ˆΣ0N, caused by the estimates ˆH(1)N, ˆH(2)N violating the conditions of Theorem 3, is high, although it decreases for bigger samples, as shown in Table 7. Also, the percentage of iterations with undefined ˆΣ0N is lower when the difference between the true H1 and H2 is larger.
Additionally, in the course of these simulations we observed that the closer the parameters H1 and H2 are to 3/4, the slower the convergence in the numerical approximation of the limiting covariance. To study this effect, we have conducted simulations for the series
\[
{\sum \limits_{i=0}^{+\infty }}\tilde{\rho }{(i)^{2}}.\hspace{2em}(16)
\]
We estimate (16) by finding the smallest number m ∈ ℕ such that for ${S_{m}}={\sum _{i=0}^{m}}\tilde{\rho }{(i)^{2}}$ the inequality $|{S_{m}}-{S_{m-1}}|\le \delta $ holds for some predefined δ.
This estimation was conducted for δ ∈ {10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}, 10^{-9}} with σ = 1, κ = 1, H1 = 0.1 and H2 ∈ {0.3, 0.5, 0.7}. The corresponding results are presented in Table 8. As one can see, in order to estimate Sm with the precision δ = 10^{-9} for H2 = 0.3, only a few hundred terms are needed, while for H2 = 0.7, dozens of millions of terms are required. Moreover, lowering the precision from 10^{-4} to 10^{-3} has almost no impact on the estimated value Sm for low H2, but for high H2 it can result in a roughly threefold difference.
Table 8.
The smallest truncation point m and the partial sum Sm for the series (16) with σ2=1, κ2=1, H1=0.1 (h=1)
H2 | | δ = 10^-3 | 10^-4 | 10^-5 | 10^-6 | 10^-7 | 10^-8 | 10^-9 |
0.3 | m | 4 | 8 | 16 | 35 | 76 | 168 | 377 |
Sm | 4.45362 | 4.45427 | 4.45445 | 4.45451 | 4.45452 | 4.45452 | 4.45453 | |
0.5 | m | 3 | 4 | 7 | 12 | 22 | 42 | 78 |
Sm | 4.18198 | 4.18203 | 4.18206 | 4.18207 | 4.18208 | 4.18208 | 4.18208 | |
0.7 | m | 5.9⋅106 | 4.9⋅107 | 7.2⋅107 | 7.2⋅107 | 7.2⋅107 | 7.2⋅107 | 7.2⋅107 |
Sm | 9981.86 | 35818.4 | 45796.6 | 45796.6 | 45796.6 | 45796.6 | 45796.6 |
5 Proofs
5.1 Auxiliary properties of covariance functions
The following statement provides some important properties of sequences ˜ρ(i) and ρ(i,H), i∈Z, defined by (14) and (15), respectively.
Proof.
The first two statements are well known, see, e.g., [22]. In order to prove the third one, we observe that
by the second statement. Then, using the Cauchy–Schwarz inequality, we get for all α,β∈Z,
□
5.2 Proofs of ergodic results
Proof of Lemma 1.
Since BH1 and BH2 are independent centered Gaussian processes with stationary increments, we see that {ΔXk,k≥0} is a stationary Gaussian sequence with EΔXk=0. Therefore, in order to establish the ergodicity of ΔXk, it suffices to prove that its autocovariance function ˜ρ(k) vanishes at infinity. In turn, this fact follows immediately from the first statement of Lemma 3:
as k→∞. □
5.3 Construction of estimators. Proof of Theorem 1
Let us briefly explain how to solve the estimating equation (7). Denote u=κ2h2H1, v=σ2h2H2, x=22H1, y=22H2, ξ=ξN, η=ηN, ζ=ζN, ϕ=ϕN. Then (7) takes the form
\begin{aligned}{}\xi &=u+v,\hspace{2em}&(18)\\ {}\eta &=ux+vy,\hspace{2em}&(19)\\ {}\zeta &=u{x^{2}}+v{y^{2}},\hspace{2em}&(20)\\ {}\phi &=u{x^{3}}+v{y^{3}}.\hspace{2em}&(21)\end{aligned}
Using equations (18) and (19) we can express the variables u and v in terms of x and y as follows:
\[
u=\frac{\xi y-\eta }{y-x},\qquad v=\frac{\eta -\xi x}{y-x}.\hspace{2em}(22)
\]
Combining (22) with (20) we get
\[
\zeta =\eta (x+y)-\xi xy.\hspace{2em}(23)
\]
Similarly, from (21) and (22) we derive
\[
\phi =\eta ({x^{2}}+xy+{y^{2}})-\xi xy(x+y).\hspace{2em}(24)
\]
Multiplying (23) by x+y results in
\[
\zeta (x+y)=\eta {(x+y)^{2}}-\xi xy(x+y).\hspace{2em}(25)
\]
Finally, by subtracting (24) from (25) we get ζ(x+y)−ϕ=ηxy, whence
\[
xy=\frac{\zeta (x+y)-\phi }{\eta }.\hspace{2em}(26)
\]
At the same time, we can express x via y using (23):
\[
x=\frac{\zeta -\eta y}{\eta -\xi y}.\hspace{2em}(27)
\]
From equations (26) and (27) we obtain
whence we get the following quadratic equation for y:
\[
({\eta ^{2}}-\xi \zeta ){y^{2}}-(\eta \zeta -\xi \phi )y+({\zeta ^{2}}-\eta \phi )=0.\hspace{2em}(28)
\]
It is easy to see that x satisfies the same quadratic equation, in view of symmetry. Therefore, x and y (y>x) are the two roots of (28), which are given by
\[
x,y=\frac{\eta \zeta -\xi \phi \pm \sqrt{D}}{2({\eta ^{2}}-\xi \zeta )},\hspace{2em}(29)
\]
where
\[
D={(\eta \zeta -\xi \phi )^{2}}-4({\eta ^{2}}-\xi \zeta )({\zeta ^{2}}-\eta \phi ).\hspace{2em}(30)
\]
It is not hard to derive the following relations from equations (18)–(21):
\begin{aligned}{}\eta \zeta -\xi \phi &=(ux+vy)(u{x^{2}}+v{y^{2}})-(u+v)(u{x^{3}}+v{y^{3}})=-uv{(y-x)^{2}}(y+x),\\ {}{\eta ^{2}}-\xi \zeta &={(ux+vy)^{2}}-(u+v)(u{x^{2}}+v{y^{2}})=-uv{(y-x)^{2}},\\ {}{\zeta ^{2}}-\eta \phi &={(u{x^{2}}+v{y^{2}})^{2}}-(ux+vy)(u{x^{3}}+v{y^{3}})=-uvxy{(y-x)^{2}}.\end{aligned}
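These three identities are easy to verify numerically; the following sketch (our own check, not part of the proof) evaluates both sides for given u, v and x < y:

```python
def check_identities(u, v, x, y):
    """Largest absolute discrepancy across the three identities relating
    (xi, eta, zeta, phi) built from u, v, x, y to their factored forms."""
    xi, eta = u + v, u * x + v * y
    zeta, phi = u * x ** 2 + v * y ** 2, u * x ** 3 + v * y ** 3
    lhs = (eta * zeta - xi * phi, eta ** 2 - xi * zeta, zeta ** 2 - eta * phi)
    rhs = (-u * v * (y - x) ** 2 * (y + x),
           -u * v * (y - x) ** 2,
           -u * v * x * y * (y - x) ** 2)
    return max(abs(a - b) for a, b in zip(lhs, rhs))
```

The discrepancy stays at floating-point rounding level for any positive inputs, as the identities are exact.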
Thus
\[
D={u^{2}}{v^{2}}{(y-x)^{4}}{(y+x)^{2}}-4{u^{2}}{v^{2}}{(y-x)^{4}}xy={u^{2}}{v^{2}}{(y-x)^{6}}>0.
\]
Therefore, the discriminant is well defined and the corresponding equation has two different roots. From equation (29)
which provides
Recall that x=22H1 and y=22H2, so we get
\[
{H_{1}}=\tfrac{1}{2}{\log _{2}}x,\qquad {H_{2}}=\tfrac{1}{2}{\log _{2}}y.
\]
Also, from equation (22) we obtain
Taking into account that σ2h2H2=v and ${a^{\log b}}={b^{\log a}}$, we get
And similarly, we get
Equations (31)–(34) provide the inverse function q=f−1, where f is defined by (5), so that q(τ0)=ϑ whenever f(ϑ)=τ0. Therefore, substituting the strongly consistent estimator τN for τ0 in equations (31)–(34), we obtain the strongly consistent estimator ˆϑN=(ˆH(1)N,ˆH(2)N,ˆκ2N,ˆσ2N)⊤ of ϑ=(H1,H2,κ2,σ2)⊤ defined by (9)–(12). □
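Assembling the steps of this proof, the map q can be sketched in code (an illustrative reimplementation built from (22) and (28)–(30); the function name and the handling of degenerate cases are our own choices):

```python
import math

def estimate(xi, eta, zeta, phi, h=1.0):
    """Solve the system (18)-(21) explicitly and return
    (H1, H2, kappa^2, sigma^2), or None in the degenerate cases."""
    A = eta ** 2 - xi * zeta            # coefficient of y^2 in (28)
    B = eta * zeta - xi * phi
    C = zeta ** 2 - eta * phi
    D = B ** 2 - 4 * A * C              # discriminant (30)
    if A == 0 or D <= 0:
        return None
    r1 = (B + math.sqrt(D)) / (2 * A)
    r2 = (B - math.sqrt(D)) / (2 * A)
    x, y = min(r1, r2), max(r1, r2)     # x = 2^{2 H1} < y = 2^{2 H2}
    if x <= 0:
        return None
    H1, H2 = math.log2(x) / 2, math.log2(y) / 2
    u = (xi * y - eta) / (y - x)        # u = kappa^2 h^{2 H1}, from (22)
    v = (eta - xi * x) / (y - x)        # v = sigma^2 h^{2 H2}
    return H1, H2, u / h ** (2 * H1), v / h ** (2 * H2)
```

Feeding in exact population values of (ξ, η, ζ, ϕ) recovers the underlying parameters, which is a convenient correctness check for any implementation of (9)–(12).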
5.4 Proof of Theorem 2
The proof consists of two parts: in the first part we compute the asymptotic covariance matrix ˜Σ, while the second part contains the proof of asymptotic normality.
Part 1: Identification of the asymptotic covariance matrix.
Firstly, we will find the explicit form of covariance matrix ˜Σ by evaluating convergence limits as N→∞ for the following variances and covariances: NVar(ξN), NVar(ηN), NVar(ζN), NVar(ϕN), Ncov(ξN,ηN), Ncov(ξN,ζN), Ncov(ξN,ϕN), Ncov(ηN,ζN), Ncov(ηN,ϕN), Ncov(ζN,ϕN).
1.1 Evaluation of convergence limit for NVar(ξN). Using Isserlis’ theorem and the stationarity of the sequence {ΔXk}, we can write
Therefore
Further, by rearranging the sums, we get
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\xi _{N}})&=\frac{2}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{i=k-N+1}^{k}}\tilde{\rho }{(i)^{2}}=\frac{2}{N}{\sum \limits_{i=-N+1}^{0}}{\sum \limits_{k=0}^{N-1+i}}\tilde{\rho }{(i)^{2}}+\frac{2}{N}{\sum \limits_{i=1}^{N-1}}{\sum \limits_{k=i}^{N-1}}\tilde{\rho }{(i)^{2}}\\ {}&=2{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }{(i)^{2}}\to 2{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }{(i)^{2}},\hspace{1em}\text{as}\hspace{2.5pt}N\to \infty ,\hspace{2em}(35)\end{aligned}
where the passage to the limit can be justified by the dominated convergence theorem due to Lemma 1.
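The sum rearrangement above (for each lag i there are exactly N − |i| pairs producing it) can be checked numerically; a small sketch with our own helper names:

```python
def double_sum(r, N):
    """Left-hand side: (2/N) * sum_{k=0}^{N-1} sum_{i=k-N+1}^{k} r(i)^2."""
    return 2.0 / N * sum(r(i) ** 2 for k in range(N)
                         for i in range(k - N + 1, k + 1))

def cesaro_sum(r, N):
    """Right-hand side: 2 * sum_{|i| < N} (1 - |i|/N) * r(i)^2."""
    return 2.0 * sum((1 - abs(i) / N) * r(i) ** 2 for i in range(-N + 1, N))
```

For any sequence r and any N the two expressions agree, which is exactly the identity used before passing to the limit.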
1.2. Evaluation of convergence limit for Ncov(ξN,ηN). Write
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})&=\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{(\Delta {X_{j}}+\Delta {X_{j+1}})^{2}}\right)\\ {}&=\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {}&=\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right)&=\operatorname{\mathbf{E}}{(\Delta {X_{k}})^{2}}\Delta {X_{j+a}}\Delta {X_{j+b}}-\operatorname{\mathbf{E}}{(\Delta {X_{k}})^{2}}\operatorname{\mathbf{E}}\Delta {X_{j+a}}\Delta {X_{j+b}}\\ {}&=2\operatorname{\mathbf{E}}\Delta {X_{k}}\Delta {X_{j+a}}\operatorname{\mathbf{E}}\Delta {X_{k}}\Delta {X_{j+b}}\\ {}&=2\operatorname{\mathbf{cov}}(\Delta {X_{k}},\Delta {X_{j+a}})\operatorname{\mathbf{cov}}(\Delta {X_{k}},\Delta {X_{j+b}})\\ {}&=2\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k).\end{aligned}
Therefore arguing as in (35), we obtain
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\eta _{N}})&={\sum \limits_{a,b=0}^{1}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)={\sum \limits_{a,b=0}^{1}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {}&=2{\sum \limits_{a,b=0}^{1}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {}&\to 2{\sum \limits_{a,b=0}^{1}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a)={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(4\tilde{\rho }(i)+4\tilde{\rho }(i+1)\right),\end{aligned}
as N→∞, where the last series converges according to Lemma 3.
1.3. Evaluation of convergence limit for NVar(ηN). We have
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\eta _{N}})&=\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{1}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {}&=\frac{1}{N}{\sum \limits_{a,b,c,d=0}^{1}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right)&=\operatorname{\mathbf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\Delta {X_{j+c}}\Delta {X_{j+d}}-\operatorname{\mathbf{E}}\Delta {X_{k+a}}\Delta {X_{k+b}}\operatorname{\mathbf{E}}\Delta {X_{j+c}}\Delta {X_{j+d}}\\ {}&=\operatorname{\mathbf{E}}\Delta {X_{k+a}}\Delta {X_{j+c}}\operatorname{\mathbf{E}}\Delta {X_{k+b}}\Delta {X_{j+d}}+\operatorname{\mathbf{E}}\Delta {X_{k+a}}\Delta {X_{j+d}}\operatorname{\mathbf{E}}\Delta {X_{j+c}}\Delta {X_{k+b}}\\ {}&=\operatorname{\mathbf{cov}}(\Delta {X_{k+a}},\Delta {X_{j+c}})\operatorname{\mathbf{cov}}(\Delta {X_{k+b}},\Delta {X_{j+d}})\\ {}&\hspace{1em}+\operatorname{\mathbf{cov}}(\Delta {X_{k+a}},\Delta {X_{j+d}})\operatorname{\mathbf{cov}}(\Delta {X_{k+b}},\Delta {X_{j+c}})\\ {}&=\tilde{\rho }(k-j+c-a)\tilde{\rho }(k-j+d-b)+\tilde{\rho }(k-j+c-b)\tilde{\rho }(k-j+d-a)\\ {}&={\sum \limits_{\gamma =0}^{1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\tilde{\rho }(k-j+d-b+\gamma (b-a)).\end{aligned}
Therefore
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\eta _{N}})&={\sum \limits_{a,b,c,d,\gamma =0}^{1}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {}&={\sum \limits_{a,b,c,d,\gamma =0}^{1}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\tilde{\rho }(i+d-b+\gamma (b-a))\\ {}&\to {\sum \limits_{a,b,c,d,\gamma =0}^{1}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\tilde{\rho }(i+d-b+\gamma (b-a))\\ {}&={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,c,d,\gamma =0}^{1}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N→∞. After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\eta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(12\tilde{\rho }(i)+16\tilde{\rho }(i+1)+4\tilde{\rho }(i+2)\Big).\end{array}
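The "simplification" step amounts to counting how often each shift d − c + a − b + 2γ(b − a) occurs and folding ±s together, since Σᵢ ρ̃(i)ρ̃(i−s) = Σᵢ ρ̃(i)ρ̃(i+s). The following enumeration is our own sketch (the function name is ours); the statistics with lags 1, 2, 4, 8 correspond to index ranges p, q ∈ {0, 1, 3, 7}:

```python
from collections import Counter

def shift_coefficients(p, q):
    """Multiplicity of each |shift| d - c + a - b + 2*gamma*(b - a) over
    a, b in {0..p}, c, d in {0..q}, gamma in {0, 1}."""
    counts = Counter()
    for a in range(p + 1):
        for b in range(p + 1):
            for c in range(q + 1):
                for d in range(q + 1):
                    for gamma in (0, 1):
                        s = d - c + (a - b) + 2 * gamma * (b - a)
                        counts[abs(s)] += 1
    return dict(sorted(counts.items()))
```

For example, shift_coefficients(1, 1) yields the coefficients 12, 16, 4 of the limit just displayed, and shift_coefficients(1, 3) yields 28, 48, 32, 16, 4, matching Σ̃23 in Theorem 2.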
1.4. Evaluation of convergence limit for N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}}).
We rewrite N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}}) in the form
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\zeta _{N}})& ={\sum \limits_{a,b=0}^{3}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)\\ {} & ={\sum \limits_{a,b=0}^{3}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & =2{\sum \limits_{a,b=0}^{3}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & \to 2{\sum \limits_{a,b=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\left(8\tilde{\rho }(i)+12\tilde{\rho }(i+1)+8\tilde{\rho }(i+2)+4\tilde{\rho }(i+3)\right),\end{aligned}
as N\to \infty , where the last series converges according to Lemma 3.
1.5. Evaluation of convergence limit for N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{3}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})& ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to {\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{3}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\eta _{N}},{\zeta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(28\tilde{\rho }(i)+48\tilde{\rho }(i+1)\\ {} \displaystyle +32\tilde{\rho }(i+2)+16\tilde{\rho }(i+3)+4\tilde{\rho }(i+4)\Big).\end{array}
1.6. Evaluation of convergence limit for N\operatorname{\mathbf{Var}}({\zeta _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\zeta _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{3}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\zeta _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{3}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\zeta _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(88\tilde{\rho }(i)+160\tilde{\rho }(i+1)\\ {} \displaystyle +124\tilde{\rho }(i+2)+80\tilde{\rho }(i+3)+40\tilde{\rho }(i+4)+16\tilde{\rho }(i+5)+4\tilde{\rho }(i+6)\Big).\end{array}
1.7. Evaluation of convergence limit for N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{(\Delta {X_{k}})^{2}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{a,b=0}^{7}}\Delta {X_{j+a}}\Delta {X_{j+b}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left({(\Delta {X_{k}})^{2}},\Delta {X_{j+a}}\Delta {X_{j+b}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})& ={\sum \limits_{a,b=0}^{7}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{k=0}^{N-1}}\tilde{\rho }(j+a-k)\tilde{\rho }(j+b-k)\\ {} & ={\sum \limits_{a,b=0}^{7}}\frac{2}{N}{\sum \limits_{j=0}^{N-1}}{\sum \limits_{i=j-N+1}^{j}}\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & =2{\sum \limits_{a,b=0}^{7}}{\sum \limits_{i=-N+1}^{N-1}}\left(1-\frac{\left|i\right|}{N}\right)\tilde{\rho }(i)\tilde{\rho }(i+b-a)\\ {} & \to 2{\sum \limits_{a,b=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i)\tilde{\rho }(i+b-a),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\xi _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(16\tilde{\rho }(i)+28\tilde{\rho }(i+1)+24\tilde{\rho }(i+2)\\ {} \displaystyle +20\tilde{\rho }(i+3)+16\tilde{\rho }(i+4)+12\tilde{\rho }(i+5)+8\tilde{\rho }(i+6)+4\tilde{\rho }(i+7)\Big).\end{array}
1.8. Evaluation of convergence limit for N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{1}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})& ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{a,b,\gamma =0}^{1}}{\sum \limits_{c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\eta _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(60\tilde{\rho }(i)+112\tilde{\rho }(i+1)+96\tilde{\rho }(i+2)\\ {} \displaystyle +80\tilde{\rho }(i+3)+64\tilde{\rho }(i+4)+48\tilde{\rho }(i+5)+32\tilde{\rho }(i+6)+16\tilde{\rho }(i+7)+4\tilde{\rho }(i+8)\Big).\end{array}
1.9. Evaluation of convergence limit for N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{3}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b=0}^{3}}{\sum \limits_{c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{cov}}({\zeta _{N}},{\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(216\tilde{\rho }(i)+416\tilde{\rho }(i+1)+376\tilde{\rho }(i+2)\\ {} \displaystyle +320\tilde{\rho }(i+3)+256\tilde{\rho }(i+4)+192\tilde{\rho }(i+5)+132\tilde{\rho }(i+6)\\ {} \displaystyle +80\tilde{\rho }(i+7)+40\tilde{\rho }(i+8)+16\tilde{\rho }(i+9)+4\tilde{\rho }(i+10)\Big).\end{array}
1.10. Evaluation of convergence limit for N\operatorname{\mathbf{Var}}({\phi _{N}}). We have:
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\phi _{N}})& =\frac{1}{N}\operatorname{\mathbf{cov}}\left({\sum \limits_{k=0}^{N-1}}{\sum \limits_{a,b=0}^{7}}\Delta {X_{k+a}}\Delta {X_{k+b}},{\sum \limits_{j=0}^{N-1}}{\sum \limits_{c,d=0}^{7}}\Delta {X_{j+c}}\Delta {X_{j+d}}\right)\\ {} & =\frac{1}{N}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\operatorname{\mathbf{cov}}\left(\Delta {X_{k+a}}\Delta {X_{k+b}},\Delta {X_{j+c}}\Delta {X_{j+d}}\right).\end{aligned}
By Isserlis’ theorem,
\begin{aligned}{}N\operatorname{\mathbf{Var}}({\phi _{N}})& ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\sum \limits_{j=0}^{N-1}}\tilde{\rho }(k-j+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(k-j+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{i=-(N-1)}^{N-1}}\left(1-\frac{|i|}{N}\right)\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & \to \hspace{-0.1667em}\hspace{-0.1667em}{\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}{\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i+c-a+\gamma (a-b))\\ {} & \hspace{1em}\times \tilde{\rho }(i+d-b+\gamma (b-a))\\ {} & ={\sum \limits_{i=-\infty }^{\infty }}\tilde{\rho }(i){\sum \limits_{\gamma =0}^{1}}{\sum \limits_{a,b,c,d=0}^{7}}\tilde{\rho }(i+d-c+a-b+2\gamma (b-a)),\end{aligned}
as N\to \infty . After simplifications, we arrive at
\begin{array}{c}\displaystyle \underset{N\to \infty }{\lim }N\operatorname{\mathbf{Var}}({\phi _{N}})={\sum \limits_{i=-\infty }^{+\infty }}\tilde{\rho }(i)\Big(688\tilde{\rho }(i)+1344\tilde{\rho }(i+1)+1260\tilde{\rho }(i+2)\\ {} \displaystyle +1136\tilde{\rho }(i+3)+984\tilde{\rho }(i+4)+816\tilde{\rho }(i+5)+644\tilde{\rho }(i+6)+480\tilde{\rho }(i+7)\\ {} \displaystyle +336\tilde{\rho }(i+8)+224\tilde{\rho }(i+9)+140\tilde{\rho }(i+10)+80\tilde{\rho }(i+11)+40\tilde{\rho }(i+12)\\ {} \displaystyle +16\tilde{\rho }(i+13)+4\tilde{\rho }(i+14)\Big).\end{array}
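All the limits in Sections 1.4–1.10 reduce to the same finite bookkeeping: counting the offsets d-c+a-b+2\gamma (b-a) and folding them by \tilde{\rho }(-i)=\tilde{\rho }(i). The Python sketch below (illustrative only, not part of the derivation) recomputes every displayed coefficient list from this rule; cases with the first summation window of length 1 recover the factor 2 of Isserlis' theorem through the two values of γ.

```python
from collections import Counter

def limit_coeffs(m1, m2):
    """Coefficients of rho(i) * rho(i + k), k = 0, 1, ..., in the limit of
    N * cov between the window-m1 and window-m2 quadratic-variation statistics:
    count the offsets d - c + a - b + 2*gamma*(b - a) over a, b in {0..m1-1},
    c, d in {0..m2-1}, gamma in {0, 1}, folded by rho(-i) = rho(i)."""
    folded = Counter()
    for a in range(m1):
        for b in range(m1):
            for c in range(m2):
                for d in range(m2):
                    for g in (0, 1):
                        folded[abs(d - c + a - b + 2 * g * (b - a))] += 1
    return [folded[k] for k in range(max(folded) + 1)]

print(limit_coeffs(2, 4))  # N cov(eta_N, zeta_N): [28, 48, 32, 16, 4]
print(limit_coeffs(4, 4))  # N Var(zeta_N):        [88, 160, 124, 80, 40, 16, 4]
print(limit_coeffs(1, 8))  # N cov(xi_N, phi_N):   [16, 28, 24, 20, 16, 12, 8, 4]
print(limit_coeffs(2, 8))  # N cov(eta_N, phi_N)
print(limit_coeffs(4, 8))  # N cov(zeta_N, phi_N)
print(limit_coeffs(8, 8))  # N Var(phi_N)
```

Each list matches the corresponding displayed limit term by term.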
Part 2: Proof of asymptotic normality.
Let us define {Y_{k}}={({Y_{k}^{(1)}},{Y_{k}^{(2)}},{Y_{k}^{(3)}},{Y_{k}^{(4)}})^{\top }} by
(36)
\begin{array}{c}\displaystyle {Y_{k}^{(1)}}:=\Delta {X_{k}},\hspace{1em}{Y_{k}^{(2)}}:=\Delta {X_{k+1}},\hspace{1em}{Y_{k}^{(3)}}:=\Delta {X_{k+2}}+\Delta {X_{k+3}},\\ {} \displaystyle {Y_{k}^{(4)}}:=\Delta {X_{k+4}}+\Delta {X_{k+5}}+\Delta {X_{k+6}}+\Delta {X_{k+7}}.\end{array}
Then
\begin{array}{l}\displaystyle {\xi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}\right)^{2}},\hspace{2em}{\eta _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}\right)^{2}},\\ {} \displaystyle {\zeta _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}+{Y_{k}^{(3)}}\right)^{2}},\\ {} \displaystyle {\phi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}{\left({Y_{k}^{(1)}}+{Y_{k}^{(2)}}+{Y_{k}^{(3)}}+{Y_{k}^{(4)}}\right)^{2}}.\end{array}
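These representations are purely algebraic, so they can be sanity-checked on arbitrary numbers. The toy Python sketch below (with hypothetical i.i.d. Gaussian placeholders for the increments, not the actual process) verifies that {\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}} built from the components (36) coincide with the quadratic variations of the increments over windows of lengths 1, 2, 4, 8.

```python
import random

random.seed(1)
N = 50
# placeholder increments Delta X_k, enough for the window of length 8
dX = [random.gauss(0.0, 1.0) for _ in range(N + 7)]

# components Y_k^{(1)}, ..., Y_k^{(4)} as in (36)
Y = [(dX[k], dX[k + 1], dX[k + 2] + dX[k + 3],
      dX[k + 4] + dX[k + 5] + dX[k + 6] + dX[k + 7]) for k in range(N)]

xi = sum(y[0] ** 2 for y in Y) / N
eta = sum((y[0] + y[1]) ** 2 for y in Y) / N
zeta = sum((y[0] + y[1] + y[2]) ** 2 for y in Y) / N
phi = sum(sum(y) ** 2 for y in Y) / N

# the same statistics written directly as window-m quadratic variations
qv = lambda m: sum(sum(dX[k:k + m]) ** 2 for k in range(N)) / N
ok = all(abs(u - qv(m)) < 1e-9 for u, m in [(xi, 1), (eta, 2), (zeta, 4), (phi, 8)])
print(ok)  # True
```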
We shall prove the convergence of the vector {({\xi _{N}},{\eta _{N}},{\zeta _{N}},{\phi _{N}})^{\top }} with the help of the Cramér–Wold device. Let the parameters α, β, γ, \lambda \in \mathbb{R} be such that {\alpha ^{2}}+{\beta ^{2}}+{\gamma ^{2}}+{\lambda ^{2}}\ne 0. We introduce the function
\begin{array}{l}\displaystyle f(y)=\alpha {y_{1}^{2}}+\beta {({y_{1}}+{y_{2}})^{2}}+\gamma {({y_{1}}+{y_{2}}+{y_{3}})^{2}}+\lambda {({y_{1}}+{y_{2}}+{y_{3}}+{y_{4}})^{2}},\\ {} \displaystyle y={({y_{1}},{y_{2}},{y_{3}},{y_{4}})^{\top }}\in {\mathbb{R}^{4}},\end{array}
so that
\alpha {\xi _{N}}+\beta {\eta _{N}}+\gamma {\zeta _{N}}+\lambda {\phi _{N}}=\frac{1}{N}{\sum \limits_{k=0}^{N-1}}f({Y_{k}}).
Thus, we need to prove that the sequence
(37)
\begin{array}{cc}& \displaystyle \sqrt{N}\Big(\alpha ({\xi _{N}}-\operatorname{\mathsf{E}}{\xi _{0}})+\beta ({\eta _{N}}-\operatorname{\mathsf{E}}{\eta _{0}})+\gamma ({\zeta _{N}}-\operatorname{\mathsf{E}}{\zeta _{0}})+\lambda ({\phi _{N}}-\operatorname{\mathsf{E}}{\phi _{0}})\Big)\\ {} & \displaystyle =\frac{1}{\sqrt{N}}{\sum \limits_{k=0}^{N-1}}\Big(f({Y_{k}})-\operatorname{\mathsf{E}}f({Y_{k}})\Big)\end{array}
converges to a normal distribution. This fact can be established by application of the Breuer–Major theorem for stationary vectors [1, Theorem 4]. In order to apply this theorem, it suffices to verify the following condition:
(38)
\sum \limits_{j\in \mathbb{Z}}{\left|{r^{(p,l)}}(j)\right|^{q}}\lt \infty ,\hspace{1em}\text{for all}\hspace{2.5pt}p,l\in \{1,2,3,4\},
where
{r^{(p,l)}}(k):=\operatorname{\mathbf{cov}}\left({Y_{1}^{(p)}},{Y_{1+k}^{(l)}}\right),\hspace{1em}k\in \mathbb{Z},
and q is the Hermite rank of f with respect to {Y_{1}}. It is not hard to see that this Hermite rank satisfies q\ge 2. Indeed, note that f({Y_{1}}) is a second-order polynomial of the zero-mean normally distributed random variables {Y_{1}^{(1)}}, {Y_{1}^{(2)}}, {Y_{1}^{(3)}}, {Y_{1}^{(4)}} in which only terms of the second order are present. Therefore, by Isserlis’ theorem, the expected value of the product of f({Y_{1}}) and any of {Y_{1}^{(t)}}, t\in \{1,2,3,4\}, equals zero: all the resulting terms have the form \operatorname{\mathsf{E}}{G_{1}}{G_{2}}{G_{3}}, where {({G_{1}},{G_{2}},{G_{3}})^{\top }} is a zero-mean multivariate normal random vector, and according to Isserlis’ theorem \operatorname{\mathsf{E}}{G_{1}}{G_{2}}{G_{3}}=0. Consequently, the expected value of the product of f({Y_{1}}) and any first-order polynomial of {Y_{1}} (a linear combination of {Y_{1}^{(1)}}, {Y_{1}^{(2)}}, {Y_{1}^{(3)}}, {Y_{1}^{(4)}}) equals zero, and therefore the Hermite rank q of the function f with respect to {Y_{1}} cannot be equal to 1.
Since q\ge 2, we easily see that in order to verify the condition (38), it suffices to prove that
(39)
\sum \limits_{j\in \mathbb{Z}}{\left({r^{(p,l)}}(j)\right)^{2}}\lt \infty ,\hspace{1em}\text{for all}\hspace{2.5pt}p,l\in \{1,2,3,4\}.
In turn, (39) follows from Lemma 1, since using (36) and the definition of \tilde{\rho }, we can represent {r^{(p,l)}} via \tilde{\rho } as follows:
\begin{array}{l}\displaystyle {r^{(1,1)}}(k)={r^{(2,2)}}(k)=\tilde{\rho }(k),\hspace{1em}{r^{(1,2)}}(k)=\tilde{\rho }(k+1),\hspace{1em}{r^{(2,1)}}(k)=\tilde{\rho }(k-1),\\ {} \displaystyle {r^{(1,3)}}(k)=\tilde{\rho }(k+2)+\tilde{\rho }(k+3),\hspace{1em}{r^{(3,1)}}(k)=\tilde{\rho }(k-2)+\tilde{\rho }(k-3),\\ {} \displaystyle {r^{(2,3)}}(k)=\tilde{\rho }(k+1)+\tilde{\rho }(k+2),\hspace{1em}{r^{(3,2)}}(k)=\tilde{\rho }(k-1)+\tilde{\rho }(k-2),\\ {} \displaystyle {r^{(3,3)}}(k)=\tilde{\rho }(k+1)+2\tilde{\rho }(k)+\tilde{\rho }(k-1),\\ {} \displaystyle {r^{(1,4)}}(k)=\tilde{\rho }(k+4)+\tilde{\rho }(k+5)+\tilde{\rho }(k+6)+\tilde{\rho }(k+7),\\ {} \displaystyle {r^{(4,1)}}(k)=\tilde{\rho }(k-4)+\tilde{\rho }(k-5)+\tilde{\rho }(k-6)+\tilde{\rho }(k-7),\\ {} \displaystyle {r^{(2,4)}}(k)=\tilde{\rho }(k+3)+\tilde{\rho }(k+4)+\tilde{\rho }(k+5)+\tilde{\rho }(k+6),\\ {} \displaystyle {r^{(4,2)}}(k)=\tilde{\rho }(k-3)+\tilde{\rho }(k-4)+\tilde{\rho }(k-5)+\tilde{\rho }(k-6),\\ {} \displaystyle {r^{(3,4)}}(k)=\tilde{\rho }(k+1)+2\tilde{\rho }(k+2)+2\tilde{\rho }(k+3)+2\tilde{\rho }(k+4)+\tilde{\rho }(k+5),\\ {} \displaystyle {r^{(4,3)}}(k)=\tilde{\rho }(k-1)+2\tilde{\rho }(k-2)+2\tilde{\rho }(k-3)+2\tilde{\rho }(k-4)+\tilde{\rho }(k-5),\\ {} \displaystyle {r^{(4,4)}}(k)=\tilde{\rho }(k-3)+2\tilde{\rho }(k-2)+3\tilde{\rho }(k-1)+4\tilde{\rho }(k)\\ {} \displaystyle \hspace{2em}+3\tilde{\rho }(k+1)+2\tilde{\rho }(k+2)+\tilde{\rho }(k+3).\end{array}
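This table is mechanical: each component {Y^{(p)}} is a sum of increments at fixed offsets, so {r^{(p,l)}}(k) is a sum of \tilde{\rho }(k+s) over the pairwise differences of offsets. A small Python sketch (illustrative only) recovers the multiset of shifts for any pair (p,l) from the offsets in (36):

```python
from collections import Counter

# increment offsets of Y^{(1)}, ..., Y^{(4)} from (36)
comp = [[0], [1], [2, 3], [4, 5, 6, 7]]

def lags(p, l):
    """Multiset of shifts s with r^{(p,l)}(k) = sum_s rho(k + s), 1-based p, l."""
    return Counter(v - u for u in comp[p - 1] for v in comp[l - 1])

print(lags(1, 3))  # shifts {2, 3}: rho(k+2) + rho(k+3)
print(lags(3, 4))  # shifts with multiplicities 1, 2, 2, 2, 1 at 1, ..., 5
```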
Indeed, for example, the series
\begin{aligned}{}& {\sum \limits_{k=-\infty }^{+\infty }}{\left({r^{(1,3)}}(k)\right)^{2}}={\sum \limits_{k=-\infty }^{+\infty }}{\left(\tilde{\rho }(k+2)+\tilde{\rho }(k+3)\right)^{2}}\\ {} & ={\sum \limits_{k=-\infty }^{+\infty }}{\left(\tilde{\rho }(k+2)\right)^{2}}+2{\sum \limits_{k=-\infty }^{+\infty }}\tilde{\rho }(k+2)\tilde{\rho }(k+3)+{\sum \limits_{k=-\infty }^{+\infty }}{\left(\tilde{\rho }(k+3)\right)^{2}},\end{aligned}
converges as a finite sum of series that converge according to Lemma 1. Similarly, one checks that the other series in (39) converge. Therefore, the assumptions of the Breuer–Major theorem for stationary vectors are satisfied, whence the desired weak convergence of (37) to a zero-mean normal distribution follows. □
5.5 Proof of Theorem 3
In order to prove Theorem 3, we apply the delta method (see, e.g., [13, Theorem B.6]) to the function g constructed in Theorem 1 and the sequence {\tau _{N}}, which is asymptotically normal by Theorem 1, combined with the inverse function theorem. In order to apply the delta method, we need to calculate the derivative matrix {g^{\prime }} and prove that it is nonsingular. Given the complicated structure of this function, we instead calculate the derivative matrix {f^{\prime }} of the function f defined in (5) and apply the inverse function theorem at the point ϑ: since g is the inverse function of f, we have {g^{\prime }}={({f^{\prime }})^{-1}}. Therefore, it remains to find the matrix {f^{\prime }}(\vartheta ) and to show that it is nonsingular.
We start by evaluating the partial derivatives of the function {f_{1}}. Since
{f_{1}}({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})={\kappa ^{2}}{h^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}},
we see that
\begin{array}{l}\displaystyle \frac{\partial {f_{1}}}{\partial {H_{1}}}=2\log h{\kappa ^{2}}{h^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{1}}}{\partial {H_{2}}}=2\log h{\sigma ^{2}}{h^{2{H_{2}}}},\\ {} \displaystyle \frac{\partial {f_{1}}}{\partial {\kappa ^{2}}}={h^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{1}}}{\partial {\sigma ^{2}}}={h^{2{H_{2}}}}.\end{array}
For the function
\begin{aligned}{}{f_{2}}({H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}})& ={\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}}+{\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}}\\ {} & ={\kappa ^{2}}{e^{2{H_{1}}\log (2h)}}+{\sigma ^{2}}{e^{2{H_{2}}\log (2h)}},\end{aligned}
we get
\begin{array}{l}\displaystyle \frac{\partial {f_{2}}}{\partial {H_{1}}}=2\log (2h){\kappa ^{2}}{h^{2{H_{1}}}}{2^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{2}}}{\partial {H_{2}}}=2\log (2h){\sigma ^{2}}{h^{2{H_{2}}}}{2^{2{H_{2}}}},\\ {} \displaystyle \frac{\partial {f_{2}}}{\partial {\kappa ^{2}}}={h^{2{H_{1}}}}{2^{2{H_{1}}}},\hspace{2em}\frac{\partial {f_{2}}}{\partial {\sigma ^{2}}}={h^{2{H_{2}}}}{2^{2{H_{2}}}}.\end{array}
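These closed-form partials can be cross-checked numerically against central finite differences; the Python sketch below uses illustrative parameter values (the point and the step h are assumptions for the check, not taken from the paper):

```python
import math

h = 0.1  # illustrative sampling step

def f1(H1, H2, k2, s2):
    return k2 * h ** (2 * H1) + s2 * h ** (2 * H2)

def f2(H1, H2, k2, s2):
    return k2 * (2 * h) ** (2 * H1) + s2 * (2 * h) ** (2 * H2)

# closed-form gradients stated in the text
def grad_f1(H1, H2, k2, s2):
    return (2 * math.log(h) * k2 * h ** (2 * H1),
            2 * math.log(h) * s2 * h ** (2 * H2),
            h ** (2 * H1), h ** (2 * H2))

def grad_f2(H1, H2, k2, s2):
    return (2 * math.log(2 * h) * k2 * (2 * h) ** (2 * H1),
            2 * math.log(2 * h) * s2 * (2 * h) ** (2 * H2),
            (2 * h) ** (2 * H1), (2 * h) ** (2 * H2))

def num_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of f at the 4-vector x."""
    g = []
    for i in range(4):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(*xp) - f(*xm)) / (2 * eps))
    return g

x = (0.3, 0.7, 1.5, 2.0)  # illustrative (H1, H2, kappa^2, sigma^2)
err1 = max(abs(a - b) for a, b in zip(grad_f1(*x), num_grad(f1, x)))
err2 = max(abs(a - b) for a, b in zip(grad_f2(*x), num_grad(f2, x)))
print(err1, err2)  # both errors are tiny
```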
Similarly, one can evaluate partial derivatives for the functions {f_{3}} and {f_{4}}. We arrive at the following derivative matrix:
{f^{\prime }}(\vartheta )=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}2\log (h){\kappa ^{2}}{h^{2{H_{1}}}}& 2\log (h){\sigma ^{2}}{h^{2{H_{2}}}}& {h^{2{H_{1}}}}& {h^{2{H_{2}}}}\\ {} 2\log (2h){\kappa ^{2}}{(2h)^{2{H_{1}}}}& 2\log (2h){\sigma ^{2}}{(2h)^{2{H_{2}}}}& {(2h)^{2{H_{1}}}}& {(2h)^{2{H_{2}}}}\\ {} 2\log (4h){\kappa ^{2}}{(4h)^{2{H_{1}}}}& 2\log (4h){\sigma ^{2}}{(4h)^{2{H_{2}}}}& {(4h)^{2{H_{1}}}}& {(4h)^{2{H_{2}}}}\\ {} 2\log (8h){\kappa ^{2}}{(8h)^{2{H_{1}}}}& 2\log (8h){\sigma ^{2}}{(8h)^{2{H_{2}}}}& {(8h)^{2{H_{1}}}}& {(8h)^{2{H_{2}}}}\end{array}\right).
Now we compute its determinant in order to apply the delta method. We have
\begin{aligned}{}\det ({f^{\prime }}(\vartheta ))& =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\log (h)& \log (h)& 1& 1\\ {} \log (2h){2^{2{H_{1}}}}& \log (2h){2^{2{H_{2}}}}& {2^{2{H_{1}}}}& {2^{2{H_{2}}}}\\ {} \log (4h){4^{2{H_{1}}}}& \log (4h){4^{2{H_{2}}}}& {4^{2{H_{1}}}}& {4^{2{H_{2}}}}\\ {} \log (8h){8^{2{H_{1}}}}& \log (8h){8^{2{H_{2}}}}& {8^{2{H_{1}}}}& {8^{2{H_{2}}}}\end{array}\right|\\ {} & =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{(\log 2)^{2}}\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}0& 0& 1& 1\\ {} {2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}& {2^{6{H_{2}}}}\end{array}\right|\\ {} & =4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{(\log 2)^{2}}\left(\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{2}}}}\end{array}\right|\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\left.\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}\end{array}\right|\right).\end{aligned}
Considering that
\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{2}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{2}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{2}}}}\end{array}\right|=-{2^{2{H_{1}}}}{2^{10{H_{2}}}}+4\cdot {2^{4{H_{1}}}}{2^{8{H_{2}}}}-3\cdot {2^{6{H_{1}}}}{2^{6{H_{2}}}},
and
\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{2^{2{H_{1}}}}& {2^{2{H_{2}}}}& {2^{2{H_{1}}}}\\ {} 2\cdot {2^{4{H_{1}}}}& 2\cdot {2^{4{H_{2}}}}& {2^{4{H_{1}}}}\\ {} 3\cdot {2^{6{H_{1}}}}& 3\cdot {2^{6{H_{2}}}}& {2^{6{H_{1}}}}\end{array}\right|={2^{10{H_{1}}}}{2^{2{H_{2}}}}+3\cdot {2^{6{H_{1}}}}{2^{6{H_{2}}}}-4\cdot {2^{8{H_{1}}}}{2^{4{H_{2}}}},
we get
\det ({f^{\prime }}(\vartheta ))=-4{\kappa ^{2}}{\sigma ^{2}}{h^{4{H_{1}}+4{H_{2}}}}{2^{2{H_{1}}+2{H_{2}}}}{(\log 2)^{2}}{\left({2^{2{H_{2}}}}-{2^{2{H_{1}}}}\right)^{4}}\ne 0,
since {H_{1}}\lt {H_{2}}.
Therefore, the derivative matrix is nonsingular, and the corresponding derivative matrix of the inverse function {g^{\prime }}({\tau _{0}})={({f^{\prime }}(\vartheta ))^{-1}} is nonsingular by the inverse function theorem. Thus, the delta method can be applied, which implies the statement of the theorem and provides the formula for the asymptotic covariance matrix {\Sigma ^{0}} defined using the inverse derivative matrix {f^{\prime }}(\vartheta ). □
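The determinant computation can be checked numerically at an arbitrary admissible parameter point. The Python sketch below (illustrative values for {H_{1}},{H_{2}},{\kappa ^{2}},{\sigma ^{2}},h, assumed for the check only) compares a direct Leibniz-expansion determinant of {f^{\prime }}(\vartheta ) with the closed-form value -4{\kappa ^{2}}{\sigma ^{2}}{h^{4({H_{1}}+{H_{2}})}}{2^{2({H_{1}}+{H_{2}})}}{(\log 2)^{2}}{({2^{2{H_{2}}}}-{2^{2{H_{1}}}})^{4}}.

```python
import math
from itertools import permutations

def det4(M):
    """Determinant of a 4x4 matrix by the Leibniz formula (no external deps)."""
    total = 0.0
    for p in permutations(range(4)):
        sign = 1
        for i in range(4):          # sign = parity of the permutation
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    sign = -sign
        term = 1.0
        for i in range(4):
            term *= M[i][p[i]]
        total += sign * term
    return total

H1, H2, k2, s2, h = 0.3, 0.7, 1.5, 2.0, 0.1  # illustrative parameter point

# rows of f'(theta) correspond to the steps h, 2h, 4h, 8h
M = [[2 * math.log(m * h) * k2 * (m * h) ** (2 * H1),
      2 * math.log(m * h) * s2 * (m * h) ** (2 * H2),
      (m * h) ** (2 * H1), (m * h) ** (2 * H2)] for m in (1, 2, 4, 8)]

closed = (-4 * k2 * s2 * h ** (4 * H1 + 4 * H2) * 2 ** (2 * H1 + 2 * H2)
          * math.log(2) ** 2 * (2 ** (2 * H2) - 2 ** (2 * H1)) ** 4)
rel_err = abs(det4(M) - closed) / abs(closed)
print(rel_err)  # agreement to floating-point accuracy
```

In particular the value is nonzero whenever {H_{1}}\ne {H_{2}}, in line with the nonsingularity claim.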