1 Introduction
The paper is devoted to parameter estimation in a stochastic heat equation of the following form:
The right-hand side of (1) is a mixed fractional noise. It consists of two independent stochastic processes, namely, a fractional Brownian motion ${B^{H}}=\{{B_{x}^{H}},x\in \mathbb{R}\}$ with Hurst parameter $H\in (0,1)$ and a Wiener process $W=\{{W_{x}},x\in \mathbb{R}\}$; σ and κ are positive coefficients. The solution to the problem (1)–(2) is understood in the mild sense; the precise definition is given in Section 2.
The mixed fractional Brownian motion was first introduced by P. Cheridito [8] in order to model financial markets that are simultaneously arbitrage-free and have a memory. The properties of this process were studied in [36]. We refer to the book [28] for a detailed presentation of the modern theory in this area. More involved mixed fractional models described by stochastic differential equations have been the subject of numerous publications [13, 35, 33, 17] that appeared during the last decades. Such equations can be used to model processes on financial markets, where two principal random noises influence the prices. The first source of randomness is the stock exchange itself with thousands of agents. The noise coming from this source can be assumed white and is best modeled by a Wiener process. The second one has a financial and economic background and can be modeled by a fractional Brownian motion ${B^{H}}$. Stochastic partial differential equations with such noises can be used, in particular, for the modeling of instantaneous forward rates, where the space variable corresponds to time until maturity [9, 22]. Such equations also arise in geophysics, especially in physical oceanography [30] and in geostatistics [34]. For example, in models for sea surface temperature, noise terms can represent various heat fluxes and ocean processes [29].
Existence, uniqueness and properties of solutions for stochastic differential equations with mixed noise were studied in various papers [12, 16, 17, 25–27]. A stochastic heat equation with white and fractional noises was investigated in [24]. Several approaches to parameter identification in simple linear mixed fractional models for various observation schemes were proposed in [6, 14, 23, 20, 21]. The problem of drift parameter estimation in a mixed stochastic differential equation of a general form was studied in [19]. The statistical problems for the mixed fractional Vasicek model were investigated in the recent papers [7, 31].
Similarly to our previous papers, the solution $u(t,x)$ is observed at equidistant spatial points for several fixed time instants. On the one hand, there are many practical cases where the solution is observed at some discrete space points, such as the temperature of a heated body or the velocity of a turbulent flow. In many cases measurements with a high spatial resolution are available, but the time series are short. For example, this is the case for satellite observations of sea surface temperature, see [30]. For this reason, it is suitable to assume that $u(t,x)$ is observed at a large number of space points ${x_{k}}$ and only a few time instants ${t_{i}}$. On the other hand, observing the solution at three time points is enough to construct estimators for the unknown parameters σ, κ and H. Nevertheless, the additional information obtained by observing the solution at more time instants can be consolidated by taking a (weighted) average of the estimators, similarly to [5] or [9].
The present paper is devoted to the problem of estimating the unknown parameters H, σ, κ in the equation (1) from discrete observations of its solution $u(t,x)$. The results of this paper are an extension of our previous works [2, 3], where the problems of estimating H and σ were studied for the equation (1) driven by the fractional Brownian motion only (that is, for $\kappa =0$). The diffusion parameter estimator for an SPDE with white noise and its properties were considered in [4]. Similarly to the mentioned articles, in the present paper we start with proving stationarity and ergodicity of the solution $u(t,x)$ as a function of the spatial variable x by analyzing the behavior of the covariance function. Based on these results, we construct a strongly consistent estimator of H (assuming that the parameters σ and κ are unknown). The asymptotic normality of this estimator is proved for any $H\in (0,\frac{1}{2})\cup (\frac{1}{2},\frac{3}{4})$. Then we consider the problem of estimating the pair of parameters $(\sigma ,\kappa )$ when the value of H is known. We prove the strong consistency of the estimator and investigate its asymptotic normality.
The paper is organized as follows. In Section 2 we introduce the definition of a mild solution to SPDE (1) and present its properties. Furthermore, we prove a limit theorem for it, which is needed for establishing the properties of the statistical estimators. The statistical problems are investigated in Section 3. In Subsection 3.1 we construct an estimator of the Hurst index H and prove its strong consistency and asymptotic normality. Subsection 3.2 is devoted to the estimators of the parameters σ and κ and their asymptotic properties. Numerical results are presented and discussed in Section 4.
2 Preliminaries
Assume that ${B^{H}}=\left\{{B_{x}^{H}},x\in \mathbb{R}\right\}$ is a two-sided fractional Brownian motion with the Hurst index $H\in (0,1)$, while $W=\left\{{W_{x}},x\in \mathbb{R}\right\}$ is a Wiener process, independent of ${B^{H}}$. Let G be Green’s function of the heat equation, that is
\[ G(t,x)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{\sqrt{2\pi t}}\exp \left\{-\frac{{x^{2}}}{2t}\right\},\hspace{1em}& \hspace{2.5pt}\text{if}\hspace{2.5pt}t>0\text{,}\\ {} {\delta _{0}}(x),\hspace{1em}& \hspace{2.5pt}\text{if}\hspace{2.5pt}t=0.\end{array}\right.\]
Similarly to [2–4] (see also [10] and the references cited therein), we define a solution to SPDE (1) in a mild sense as follows.

Definition 1.
The random field $\left\{u(t,x),t\ge 0,x\in \mathbb{R}\right\}$ defined by
(3)
\[ u(t,x)=\sigma {\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{\mathbb{R}}}G(t-s,x-y)\hspace{0.1667em}d{B_{y}^{H}}\hspace{0.1667em}ds+\kappa {\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{\mathbb{R}}}G(t-s,x-y)\hspace{0.1667em}d{W_{y}}\hspace{0.1667em}ds\]
is called a solution to SPDE (1)–(2).

Remark 1.
As shown in [2], both stochastic integrals in (3) exist as pathwise Riemann–Stieltjes integrals. This fact follows from the Hölder regularity of the integrands and integrators. Namely, Green’s function is obviously Lipschitz continuous, while sample paths of the fractional Brownian motion are Hölder continuous up to order H. Such regularity guarantees the existence of the first integral in (3). The second integral is also well defined, since the integrand is square integrable, see [4, Theorem 2.1].
The next proposition summarizes basic properties of the solution $u(t,x)$. These properties, especially stationarity and ergodicity, enable us to construct and investigate statistical estimators for H, κ and σ.
Proposition 1.
Let $u=\left\{u(t,x),t\in [0,T],x\in \mathbb{R}\right\}$ be a solution to (1) defined by (3). Then the following properties hold.
1. For all $t,s\in [0,T]$ and $x,z\in \mathbb{R}$,
(4)
\[\begin{aligned}{}\operatorname{cov}\big(u(t,z),u(s,x+z)\big)& =\operatorname{cov}\big(u(t,0),u(s,x)\big)\\ {} & =\frac{1}{\sqrt{2\pi }}{\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{0}^{s}}{(q+r)^{-\frac{3}{2}}}{\int _{\mathbb{R}}}\left({\sigma ^{2}}H{\left|y\right|^{2H-1}}+\frac{{\kappa ^{2}}}{2}\right)\\ {} & \hspace{1em}\times (\operatorname{sign}y)(y-x)\exp \left\{-\frac{{(y-x)^{2}}}{2(q+r)}\right\}\hspace{0.1667em}dy\hspace{0.1667em}dq\hspace{0.1667em}dr.\end{aligned}\]
4. For all $t,s\in [0,T]$ and $x\in \mathbb{R}$,
(8)
\[\begin{aligned}{}\operatorname{cov}\big(u(t,x),u(s,x)\big)=& \frac{{\sigma ^{2}}{2^{H}}\Gamma (H+\frac{1}{2})\left({(t+s)^{H+1}}-{t^{H+1}}-{s^{H+1}}\right)}{\sqrt{\pi }(H+1)}\\ {} & +\frac{{\kappa ^{2}}{2^{\frac{3}{2}}}\left({(t+s)^{\frac{3}{2}}}-{t^{\frac{3}{2}}}-{s^{\frac{3}{2}}}\right)}{3\sqrt{\pi }}.\end{aligned}\] -
5. For a fixed $t>0$, the random process $\left\{u(t,x),x\in \mathbb{R}\right\}$ is ergodic.
Proof.
The proposition follows from the corresponding results for the equation with pure fractional noise studied in [2, 3]. Indeed, all the statements are based on the properties of the covariance function of the solution. However, since ${B^{H}}$ and W are independent, we see that this covariance function can be represented as
(9)
\[ \operatorname{cov}\big(u(t,x),u(s,z)\big)={\sigma ^{2}}\operatorname{cov}\big({u_{b}}(t,x),{u_{b}}(s,z)\big)+{\kappa ^{2}}\operatorname{cov}\big({u_{w}}(t,x),{u_{w}}(s,z)\big),\]
where
\[\begin{aligned}{}{u_{b}}(t,x)& ={\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{\mathbb{R}}}G(t-s,x-y)\hspace{0.1667em}d{B_{y}^{H}}\hspace{0.1667em}ds,\\ {} {u_{w}}(t,x)& ={\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{\mathbb{R}}}G(t-s,x-y)\hspace{0.1667em}d{W_{y}}\hspace{0.1667em}ds.\end{aligned}\]
Then combining the equality (9) with statements of [3, Prop. 2.2], we immediately obtain formulas (4), (7) and (8). The equality (5) follows from (9) and [2, Prop. 3]. Finally, the last statement of the proposition holds, because the solution $\left\{u(t,x),x\in \mathbb{R}\right\}$ is a stationary Gaussian process, whose covariance function vanishes as $x\to \infty $, according to (8). Hence, the process $u(t,\cdot )$ is ergodic.  □

Let us fix some $\delta >0$ and consider the following sequence:
(10)
\[ {V_{N}}(t)=\frac{1}{N}{\sum \limits_{k=1}^{N}}u{(t,k\delta )^{2}},\hspace{1em}t>0,\hspace{0.2778em}N\in \mathbb{N}.\]
The sequence (10) will serve as a statistic for parameter estimation problems in Section 3. We introduce the following notation in addition to (6):
(11)
\[\begin{array}{l}\displaystyle \mu (t):=\mathsf{E}{V_{N}}(t)=\mathsf{E}\big[u{(t,0)^{2}}\big]={\sigma ^{2}}{v_{t}}(H)+{\kappa ^{2}}{v_{t}}\left(\frac{1}{2}\right),\\ {} \displaystyle {\rho _{ts}^{H}}(k)=\operatorname{cov}\big(u(t,k\delta ),u(s,0)\big),\hspace{1em}{r_{ts}}(H)=2{\sum \limits_{k=-\infty }^{\infty }}{\rho _{ts}^{H}}{(k)^{2}},\hspace{1em}t,s>0.\end{array}\]
The next theorem describes the asymptotic behavior of the stochastic process ${V_{N}}$. It gives a law of large numbers and a central limit theorem for its finite-dimensional distributions $({V_{N}}({t_{1}}),\dots ,{V_{N}}({t_{n}}))$ as $N\to \infty $. This result is crucial for the construction of the estimators and for establishing their asymptotic properties.
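For illustration, the statistic (10) is straightforward to evaluate from an array of observations. The following minimal Python sketch (the array name and shape are our own assumptions, not part of the model) computes ${V_{N}}({t_{i}})$ for several time instants at once.

```python
import numpy as np

def V_N(u_obs):
    """Compute the statistics V_N(t_i) from (10).

    u_obs : array of shape (n_times, N) with entries u(t_i, k*delta),
            k = 1, ..., N (one row per time instant t_i).
    Returns an array of length n_times with V_N(t_i) = (1/N) * sum_k u(t_i, k*delta)**2.
    """
    u_obs = np.asarray(u_obs, dtype=float)
    return np.mean(u_obs ** 2, axis=1)
```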
Theorem 1.
Let $H\in (0,1)$.
1. For any $t>0$,
(12)
\[ {V_{N}}(t)\to \mu (t)\hspace{1em}\textit{a.s. as}\hspace{2.5pt}N\to \infty .\]
2. If additionally $H\in (0,\frac{3}{4})$, then for any $n\ge 1$ and any distinct positive ${t_{1}},\dots ,{t_{n}}$,
(13)
\[ \sqrt{N}\left(\begin{array}{c}{V_{N}}({t_{1}})-\mu ({t_{1}})\\ {} \vdots \\ {} {V_{N}}({t_{n}})-\mu ({t_{n}})\end{array}\right)\xrightarrow{d}\mathcal{N}(\mathbf{0},R)\hspace{1em}\textit{as}\hspace{2.5pt}N\to \infty ,\]
where R is the $n\times n$ matrix with the entries ${R_{ij}}={r_{{t_{i}}{t_{j}}}}(H)$ defined in (11).
Proof.
1. The ergodic theorem implies that for any $t>0$
\[ \frac{1}{N}{\sum \limits_{k=1}^{N}}u{(t,k\delta )^{2}}\to \mathsf{E}\big[u{(t,0)^{2}}\big]\hspace{1em}\text{a. s. as}\hspace{2.5pt}N\to \infty ,\]
which is equivalent to (12).

2. In order to prove the convergence (13), we shall apply the Cramér–Wold theorem. In other words, we need to show that for all ${\alpha _{1}},\dots ,{\alpha _{n}}\in \mathbb{R}$, the convergence
(14)
\[ \sqrt{N}\left[{\sum \limits_{i=1}^{n}}{\alpha _{i}}\left({V_{N}}({t_{i}})-\mu ({t_{i}})\right)\right]\xrightarrow{d}\mathcal{N}\left(0,{s^{2}}\right)\]
holds with the asymptotic variance
(15)
\[ {s^{2}}={\sum \limits_{i=1}^{n}}{\alpha _{i}^{2}}{r_{{t_{i}}{t_{i}}}}(H)+2\sum \limits_{1\le i < j\le n}{\alpha _{i}}{\alpha _{j}}{r_{{t_{i}}{t_{j}}}}(H).\]
Representing ${V_{N}}$ as the sum (10) and using (11), we rewrite (14) in the form
(16)
\[ \frac{1}{\sqrt{N}}{\sum \limits_{k=1}^{N}}{\sum \limits_{i=1}^{n}}\left[{\alpha _{i}}\left(u{({t_{i}},k\delta )^{2}}-\mathsf{E}u{({t_{i}},k\delta )^{2}}\right)\right]\xrightarrow{d}\mathcal{N}(0,{s^{2}}).\]
Further, since $\left(\begin{array}{c}u({t_{1}},k\delta )\\ {} \vdots \\ {} u({t_{n}},k\delta )\end{array}\right),k\in \mathbb{N}$, is a multivariate stationary Gaussian sequence, the convergence (16) can be established by application of the multivariate Breuer–Major theorem [1, Theorem 4]. In order to verify the conditions of this theorem, note that the function $F({x_{1}},\dots ,{x_{n}})={\textstyle\sum _{i=1}^{n}}{\alpha _{i}}{x_{i}^{2}}$ has Hermite rank 2, therefore we need to check the convergence of the series:
\[ {\sum \limits_{k=-\infty }^{\infty }}{\rho _{{t_{i}}{t_{j}}}^{H}}{(k)^{2}}<\infty ,\hspace{1em}i,j=1,\dots ,n.\]
It follows immediately from the upper bound (7) that these series converge if and only if $H\in (0,\frac{3}{4})$. Thus, the assumptions of [1, Theorem 4] are satisfied, whence the convergence (16) holds with the following asymptotic variance:
\[\begin{aligned}{}{s^{2}}& =\operatorname{Var}\Big(F\big(u({t_{1}},0),\dots ,u({t_{n}},0)\big)\Big)\\ {} & \hspace{1em}+2{\sum \limits_{k=1}^{\infty }}\operatorname{cov}\Big(F\big(u({t_{1}},0),\dots ,u({t_{n}},0)\big),F\big(u({t_{1}},k\delta ),\dots ,u({t_{n}},k\delta )\big)\Big).\end{aligned}\]
Now we must only check that the asymptotic variance ${s^{2}}$ satisfies (15). By the definition of the function F, we have
\[\begin{aligned}{}{s^{2}}& =\operatorname{Var}\left({\sum \limits_{i=1}^{n}}{\alpha _{i}}u{({t_{i}},0)^{2}}\right)+2{\sum \limits_{k=1}^{\infty }}\operatorname{cov}\left({\sum \limits_{i=1}^{n}}{\alpha _{i}}u{({t_{i}},0)^{2}},{\sum \limits_{j=1}^{n}}{\alpha _{j}}u{({t_{j}},k\delta )^{2}}\right)\\ {} & ={\sum \limits_{i=1}^{n}}{\alpha _{i}^{2}}\operatorname{Var}\big(u{({t_{i}},0)^{2}}\big)+2\sum \limits_{1\le i < j\le n}{\alpha _{i}}{\alpha _{j}}\operatorname{cov}\left(u{({t_{i}},0)^{2}},u{({t_{j}},0)^{2}}\right)\\ {} & +2{\sum \limits_{k=1}^{\infty }}{\sum \limits_{i=1}^{n}}{\alpha _{i}^{2}}\operatorname{cov}\left(u{({t_{i}},0)^{2}},u{({t_{i}},k\delta )^{2}}\right)\\ {} & +2{\sum \limits_{k=1}^{\infty }}\sum \limits_{1\le i < j\le n}{\alpha _{i}}{\alpha _{j}}\Big(\operatorname{cov}\left(u{({t_{i}},0)^{2}},u{({t_{j}},k\delta )^{2}}\right)+\operatorname{cov}\left(u{({t_{j}},0)^{2}},u{({t_{i}},k\delta )^{2}}\right)\Big).\end{aligned}\]
Now we can use the following well-known fact: if ξ and η are jointly Gaussian zero-mean random variables, then $\operatorname{cov}({\xi ^{2}},{\eta ^{2}})=2\operatorname{cov}{(\xi ,\eta )^{2}}$, in particular, $\operatorname{Var}({\xi ^{2}})=2\operatorname{Var}{(\xi )^{2}}$ (this is a corollary of the Isserlis theorem [18]). Then we get
\[\begin{aligned}{}{s^{2}}& =2{\sum \limits_{i=1}^{n}}{\alpha _{i}^{2}}{\rho _{{t_{i}}{t_{i}}}^{H}}{(0)^{2}}+4\sum \limits_{1\le i < j\le n}{\alpha _{i}}{\alpha _{j}}{\rho _{{t_{i}}{t_{j}}}^{H}}{(0)^{2}}+4{\sum \limits_{k=1}^{\infty }}{\sum \limits_{i=1}^{n}}{\alpha _{i}^{2}}{\rho _{{t_{i}}{t_{i}}}^{H}}{(k)^{2}}\\ {} & \hspace{1em}+4{\sum \limits_{k=1}^{\infty }}\sum \limits_{1\le i < j\le n}{\alpha _{i}}{\alpha _{j}}\left({\rho _{{t_{i}}{t_{j}}}^{H}}{(k)^{2}}+{\rho _{{t_{j}}{t_{i}}}^{H}}{(k)^{2}}\right).\end{aligned}\]
Taking into account the equality ${\rho _{ts}^{H}}(k)={\rho _{st}^{H}}(-k)$, we may rewrite this expression in the more compact form:
\[ {s^{2}}=2{\sum \limits_{i=1}^{n}}\left({\alpha _{i}^{2}}{\sum \limits_{k=-\infty }^{+\infty }}{\rho _{{t_{i}}{t_{i}}}^{H}}{(k)^{2}}\right)+4\sum \limits_{1\le i < j\le n}\left({\alpha _{i}}{\alpha _{j}}{\sum \limits_{k=-\infty }^{+\infty }}{\rho _{{t_{i}}{t_{j}}}^{H}}{(k)^{2}}\right).\]
Thus the equality (15) is verified. This completes the proof of Theorem 1.  □

3 Parameter estimation
Let us consider the following statistical problem. It is supposed that for fixed ${t_{1}},\dots ,{t_{n}}$ and a fixed step $\delta >0$, the random field u given by (3) is observed at the points ${x_{k}}=k\delta $, $k=1,\dots ,N$. So the observations have the form $u({t_{i}},{x_{k}})$, $i=1,\dots ,n$, $k=1,\dots ,N$.
Our aim is to estimate the unknown parameters H, σ and κ. We shall do this in two steps. We start by constructing a strongly consistent estimator of H that does not depend on κ and σ, and we establish its asymptotic normality. Then, assuming that H is known, we shall estimate the parameters σ and κ.
In what follows we assume that $H\ne \frac{1}{2}$, because otherwise the model is non-identifiable. The parameters σ and κ are assumed to be positive.
3.1 Estimation of H
In order to estimate H without knowledge of σ and κ, it suffices to observe $u({t_{i}},{x_{k}})$ only at three time instants ${t_{1}}<{t_{2}}<{t_{3}}$. If we write (12) at these points and replace convergences with equalities, we get the following system of equations:
(17)
\[ \left\{\begin{array}{l}{V_{N}}({t_{1}})={\sigma ^{2}}{c_{H}}{t_{1}^{H+1}}+{\kappa ^{2}}{c_{\frac{1}{2}}}{t_{1}^{3/2}},\hspace{1em}\\ {} {V_{N}}({t_{2}})={\sigma ^{2}}{c_{H}}{t_{2}^{H+1}}+{\kappa ^{2}}{c_{\frac{1}{2}}}{t_{2}^{3/2}},\hspace{1em}\\ {} {V_{N}}({t_{3}})={\sigma ^{2}}{c_{H}}{t_{3}^{H+1}}+{\kappa ^{2}}{c_{\frac{1}{2}}}{t_{3}^{3/2}}.\hspace{1em}\end{array}\right.\]
Excluding the unknown parameter κ from the system, we obtain
(18)
\[ \left\{\begin{array}{l}{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{t_{1}^{-3/2}}{V_{N}}({t_{1}})={\sigma ^{2}}{c_{H}}\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right),\hspace{1em}\\ {} {t_{3}^{-3/2}}{V_{N}}({t_{3}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})={\sigma ^{2}}{c_{H}}\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right).\hspace{1em}\end{array}\right.\]
Then excluding σ we arrive at the following estimating equation for H:
(19)
\[ \frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}=\frac{{t_{3}^{-3/2}}{V_{N}}({t_{3}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})}{{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{t_{1}^{-3/2}}{V_{N}}({t_{1}})}.\]
The solution of (19) (if it exists) can be viewed as an estimator of H.

Note that the left-hand side of (19) is indeterminate for $H=1/2$. However, it is easy to see by l’Hôpital’s rule that there exists the limit
\[ \underset{H\to \frac{1}{2}}{\lim }\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}=\underset{H\to \frac{1}{2}}{\lim }\frac{{t_{3}^{H-\frac{1}{2}}}\log {t_{3}}-{t_{2}^{H-\frac{1}{2}}}\log {t_{2}}}{{t_{2}^{H-\frac{1}{2}}}\log {t_{2}}-{t_{1}^{H-\frac{1}{2}}}\log {t_{1}}}=\frac{\log {t_{3}}-\log {t_{2}}}{\log {t_{2}}-\log {t_{1}}}.\]
Therefore, one may define by continuity
(20)
\[ f(H):=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{t_{3}^{H-1/2}}-{t_{2}^{H-1/2}}}{{t_{2}^{H-1/2}}-{t_{1}^{H-1/2}}},\hspace{1em}& \text{if}\hspace{2.5pt}H\ne \frac{1}{2},\\ {} \frac{\log {t_{3}}-\log {t_{2}}}{\log {t_{2}}-\log {t_{1}}},\hspace{1em}& \text{if}\hspace{2.5pt}H=\frac{1}{2}.\end{array}\right.\]
Then the estimator of H is defined as
(21)
\[ {\widehat{H}_{N}}={f^{(-1)}}\left(\frac{{t_{3}^{-3/2}}{V_{N}}({t_{3}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})}{{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{t_{1}^{-3/2}}{V_{N}}({t_{1}})}\right),\]
where ${f^{(-1)}}$ denotes the inverse function of f. In order to prove its existence, we need to establish that $f:\mathbb{R}\to (0,\infty )$ is a one-to-one function. This is true, since f is always a strictly increasing function (see Fig. 1), as shown in the following lemma.

Lemma 1.
For any $0<{t_{1}}<{t_{2}}<{t_{3}}$, the function $f:\mathbb{R}\to (0,\infty )$ defined by (20) is strictly increasing with respect to H.
Proof.
We prove the statement for the case $H\in (\frac{1}{2},\infty )$. The interval $(-\infty ,\frac{1}{2})$ is considered similarly. The derivative of f with respect to H is equal to
(22)
\[\begin{aligned}{}{f^{\prime }}(H)=& \frac{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)\left({t_{3}^{H-\frac{1}{2}}}\log {t_{3}}-{t_{2}^{H-\frac{1}{2}}}\log {t_{2}}\right)}{{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)^{2}}}\\ {} & -\frac{\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)\left({t_{2}^{H-\frac{1}{2}}}\log {t_{2}}-{t_{1}^{H-\frac{1}{2}}}\log {t_{1}}\right)}{{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)^{2}}}.\end{aligned}\]
Therefore, it suffices to prove the inequality
(23)
\[\begin{aligned}{}& \left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)\left({t_{3}^{H-\frac{1}{2}}}\log {t_{3}}-{t_{2}^{H-\frac{1}{2}}}\log {t_{2}}\right)\\ {} & \hspace{1em}>\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)\left({t_{2}^{H-\frac{1}{2}}}\log {t_{2}}-{t_{1}^{H-\frac{1}{2}}}\log {t_{1}}\right).\end{aligned}\]
In order to establish (23), observe that the function $h(x)=x\log x$, $x>0$, is strictly convex (indeed, its second derivative ${h^{\prime\prime }}(x)=1/x>0$). This means that for any $\alpha \in (0,1)$, $x>0$ and $y>0$ with $x\ne y$,
(24)
\[ \alpha h(x)+(1-\alpha )h(y)>h\big(\alpha x+(1-\alpha )y\big).\]
Let us take
\[ x={t_{3}^{H-\frac{1}{2}}}>0,\hspace{1em}y={t_{1}^{H-\frac{1}{2}}}>0,\hspace{1em}\text{and}\hspace{1em}\alpha =\frac{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}{{t_{3}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}\in (0,1).\]
Then
\[ 1-\alpha =\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{3}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}},\hspace{1em}\alpha x+(1-\alpha )y={t_{2}^{H-\frac{1}{2}}},\]
and (24) becomes
\[\begin{aligned}{}& \frac{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}{{t_{3}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}\hspace{0.1667em}(H-\frac{1}{2}){t_{3}^{H-\frac{1}{2}}}\log {t_{3}}+\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{3}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}\hspace{0.1667em}(H-\frac{1}{2}){t_{1}^{H-\frac{1}{2}}}\log {t_{1}}\\ {} & \hspace{1em}>(H-\frac{1}{2}){t_{2}^{H-\frac{1}{2}}}\log {t_{2}},\end{aligned}\]
which is equivalent to (23).  □

The above lemma yields that the estimator ${\widehat{H}_{N}}$ is well defined at least for sufficiently large N (when the right-hand side of the estimating equation (19) becomes positive). The asymptotic properties of ${\widehat{H}_{N}}$ are summarized in the following theorem, which is the first main result of the paper.
Theorem 2.
1. For any $H\in (0,\frac{1}{2})\cup (\frac{1}{2},1)$, ${\widehat{H}_{N}}$ is a strongly consistent estimator of the parameter H as $N\to \infty $.
2. For $H\in (0,\frac{1}{2})\cup (\frac{1}{2},\frac{3}{4})$, the estimator ${\widehat{H}_{N}}$ is asymptotically normal:
\[ \sqrt{N}\left({\widehat{H}_{N}}-H\right)\xrightarrow{d}\mathcal{N}(0,{\varsigma ^{2}})\hspace{1em}\textit{as}\hspace{2.5pt}N\to \infty ,\]
where
\[\begin{array}{l}\displaystyle {\varsigma ^{2}}=\frac{1}{{D^{2}}{\sigma ^{4}}{c_{H}^{2}}}{\sum \limits_{i,j=1}^{3}}{r_{{t_{i}}{t_{j}}}}(H){L_{i}}{L_{j}},\\ {} \displaystyle {L_{1}}=\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{1}^{3/2}}},\hspace{1em}{L_{2}}=\frac{{t_{1}^{H-\frac{1}{2}}}-{t_{3}^{H-\frac{1}{2}}}}{{t_{2}^{3/2}}},\hspace{1em}{L_{3}}=\frac{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}{{t_{3}^{3/2}}},\\ {} \displaystyle \begin{aligned}{}D& =\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)\left({t_{3}^{H-\frac{1}{2}}}\log {t_{3}}-{t_{2}^{H-\frac{1}{2}}}\log {t_{2}}\right)\\ {} & \hspace{1em}-\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)\left({t_{2}^{H-\frac{1}{2}}}\log {t_{2}}-{t_{1}^{H-\frac{1}{2}}}\log {t_{1}}\right).\end{aligned}\end{array}\]
Proof.
1. The strong consistency follows from the construction of the estimator. Indeed, (12) implies that
(25)
\[ \frac{{t_{3}^{-3/2}}{V_{N}}({t_{3}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})}{{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{t_{1}^{-3/2}}{V_{N}}({t_{1}})}\to f(H)\hspace{1em}\text{a. s., as}\hspace{2.5pt}N\to \infty .\]
Then the convergence ${\widehat{H}_{N}}\to H$ a. s. as $N\to \infty $ follows from (21) and (25) due to the continuity of ${f^{(-1)}}$.

2. By taking expectations in the equalities (18), we get the following relations
(26)
\[ \begin{array}{c}\displaystyle {t_{2}^{-3/2}}\mu ({t_{2}})-{t_{1}^{-3/2}}\mu ({t_{1}})={\sigma ^{2}}{c_{H}}\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right),\\ {} \displaystyle {t_{3}^{-3/2}}\mu ({t_{3}})-{t_{2}^{-3/2}}\mu ({t_{2}})={\sigma ^{2}}{c_{H}}\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right),\end{array}\]
whence
(27)
\[ \frac{{t_{3}^{-3/2}}\mu ({t_{3}})-{t_{2}^{-3/2}}\mu ({t_{2}})}{{t_{2}^{-3/2}}\mu ({t_{2}})-{t_{1}^{-3/2}}\mu ({t_{1}})}=\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}=f(H),\]
or
\[ H={f^{(-1)}}\left(\frac{{t_{3}^{-3/2}}\mu ({t_{3}})-{t_{2}^{-3/2}}\mu ({t_{2}})}{{t_{2}^{-3/2}}\mu ({t_{2}})-{t_{1}^{-3/2}}\mu ({t_{1}})}\right).\]
Therefore,
\[ \sqrt{N}\left({\widehat{H}_{N}}-H\right)=\sqrt{N}\Big(g\big({V_{N}}({t_{1}}),{V_{N}}({t_{2}}),{V_{N}}({t_{3}})\big)-g\big(\mu ({t_{1}}),\mu ({t_{2}}),\mu ({t_{3}})\big)\Big),\]
where
\[ g({x_{1}},{x_{2}},{x_{3}})={f^{(-1)}}\left(\frac{{t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}\right).\]
In order to derive the desired asymptotic normality from the convergence (13), we shall apply the delta method. Denoting
(28)
\[ h(x):={\left({f^{(-1)}}\right)^{\prime }}(x)=\frac{1}{{f^{\prime }}\left({f^{(-1)}}(x)\right)},\]
we see that the partial derivatives of g equal
(29)
\[\begin{aligned}{}{g^{\prime }_{1}}({x_{1}},{x_{2}},{x_{3}})& =h\left(\frac{{t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}\right)\cdot \frac{{t_{1}^{-3/2}}\left({t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}\right)}{{\left({t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}\right)^{2}}},\\ {} {g^{\prime }_{2}}({x_{1}},{x_{2}},{x_{3}})& =-h\left(\frac{{t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}\right)\cdot \frac{{t_{2}^{-3/2}}\left({t_{3}^{-3/2}}{x_{3}}-{t_{1}^{-3/2}}{x_{1}}\right)}{{\left({t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}\right)^{2}}},\\ {} {g^{\prime }_{3}}({x_{1}},{x_{2}},{x_{3}})& =h\left(\frac{{t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}\right)\cdot \frac{{t_{3}^{-3/2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}.\end{aligned}\]
By the delta method, we derive from (13) the desired convergence with
\[ {\varsigma ^{2}}={\sum \limits_{i,j=1}^{3}}{r_{{t_{i}}{t_{j}}}}(H){g^{\prime }_{i}}{g^{\prime }_{j}}(\mu ({t_{1}}),\mu ({t_{2}}),\mu ({t_{3}})).\]
It remains to prove that ${g^{\prime }_{i}}(\mu ({t_{1}}),\mu ({t_{2}}),\mu ({t_{3}}))={L_{i}}/(D{\sigma ^{2}}{c_{H}})$, $i=1,2,3$. Let us consider in detail the proof of this equality for $i=1$, the cases $i=2$ and $i=3$ are considered similarly. Using successively (27), (28) and (22), we get
\[ h\left(\frac{{t_{3}^{-3/2}}{x_{3}}-{t_{2}^{-3/2}}{x_{2}}}{{t_{2}^{-3/2}}{x_{2}}-{t_{1}^{-3/2}}{x_{1}}}\right){\Bigg|_{{x_{i}}=\mu ({t_{i}})}}=h(f(H))=\frac{1}{{f^{\prime }}(H)}=\frac{{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)^{2}}}{D}.\]
After inserting this expression into (29) and taking into account the relations (26), we obtain
\[\begin{aligned}{}{g^{\prime }_{1}}(\mu ({t_{1}}),\mu ({t_{2}}),\mu ({t_{3}}))& =\frac{{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)^{2}}}{D}\cdot \frac{{t_{1}^{-3/2}}\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}{{\sigma ^{2}}{c_{H}}{\left({t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}\right)^{2}}}\\ {} & =\frac{{t_{1}^{-3/2}}\left({t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}{D{\sigma ^{2}}{c_{H}}}=\frac{{L_{1}}}{D{\sigma ^{2}}{c_{H}}}.\end{aligned}\]
Note also that the above representation yields ${g^{\prime }_{1}}({x_{1}},{x_{2}},{x_{3}})\ne 0$ in a neighborhood of the point $(\mu ({t_{1}}),\mu ({t_{2}}),\mu ({t_{3}}))$, which is necessary for applying the delta method. The derivatives ${g^{\prime }_{2}}$ and ${g^{\prime }_{3}}$ are considered similarly.  □

The estimator of H was obtained as a solution to some exponential equation. However, it would be more convenient for applications and modeling to have an explicit form of the estimator. It turns out that in some particular cases it is possible to solve the estimating equation explicitly. Let us consider such an example in more detail.
Assume that ${t_{1}}=h>0$, ${t_{2}}=2h$, ${t_{3}}=4h$. Substituting these values in the definition of f, we get
\[ f(H)=\frac{{t_{3}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}}{{t_{2}^{H-\frac{1}{2}}}-{t_{1}^{H-\frac{1}{2}}}}=\frac{{4^{H-\frac{1}{2}}}{h^{H-\frac{1}{2}}}-{2^{H-\frac{1}{2}}}{h^{H-\frac{1}{2}}}}{{2^{H-\frac{1}{2}}}{h^{H-\frac{1}{2}}}-{h^{H-\frac{1}{2}}}}={2^{H-\frac{1}{2}}}.\]
Therefore, ${f^{(-1)}}(x)=\frac{1}{2}+{\log _{2}}x$, $x>0$; consequently, (21) becomes
(30)
\[ {\widehat{H}_{N}}=\frac{1}{2}+{\log _{2}^{+}}\frac{{t_{3}^{-3/2}}{V_{N}}({t_{3}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})}{{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{t_{1}^{-3/2}}{V_{N}}({t_{1}})}.\]
In this case
\[ {L_{1}}=\frac{{4^{H-\frac{1}{2}}}-{2^{H-\frac{1}{2}}}}{{h^{2-H}}},\hspace{1em}{L_{2}}=-\frac{{4^{H-\frac{1}{2}}}-1}{{2^{\frac{3}{2}}}{h^{2-H}}},\hspace{1em}{L_{3}}=\frac{{2^{H-\frac{1}{2}}}-1}{8{h^{2-H}}},\]
and
\[\begin{aligned}{}D& ={h^{2H-1}}\left({2^{H-\frac{1}{2}}}-1\right)\left({4^{H-\frac{1}{2}}}\log (4h)-{2^{H-\frac{1}{2}}}\log (2h)\right)\\ {} & \hspace{1em}-{h^{2H-1}}\left({4^{H-\frac{1}{2}}}-{2^{H-\frac{1}{2}}}\right)\left({2^{H-\frac{1}{2}}}\log (2h)-\log h\right)\\ {} & ={h^{2H-1}}\hspace{-0.1667em}\left({4^{H-\frac{1}{2}}}-{2^{H-\frac{1}{2}}}\right)\hspace{-0.1667em}\left({2^{H-\frac{1}{2}}}\log (4h)-\log (2h)-{2^{H-\frac{1}{2}}}\log (2h)+\log h\right)\\ {} & ={h^{2H-1}}{2^{H-\frac{1}{2}}}\left({2^{H-\frac{1}{2}}}-1\right)\left({2^{H-\frac{1}{2}}}\log 2-\log 2\right)\\ {} & ={h^{2H-1}}{2^{H-\frac{1}{2}}}{\left({2^{H-\frac{1}{2}}}-1\right)^{2}}\log 2.\end{aligned}\]
Denoting ${\ell _{i}}={D^{-1}}{L_{i}}{h^{1+H}}\log 2$, we arrive at the following result.

Corollary 1.
Let ${t_{1}}=h>0$, ${t_{2}}=2h$, ${t_{3}}=4h$. Then the estimator ${\widehat{H}_{N}}$ can be written in the explicit form (30). In this case Theorem 2 holds with
\[ {\varsigma ^{2}}=\frac{1}{{\sigma ^{4}}{c_{H}^{2}}{h^{2+2H}}{(\log 2)^{2}}}{\sum \limits_{i,j=1}^{3}}{r_{{t_{i}}{t_{j}}}}(H){\ell _{i}}{\ell _{j}},\]
where ${\ell _{1}}=\frac{1}{{2^{H-\frac{1}{2}}}-1}$, ${\ell _{2}}=-\frac{{2^{H-\frac{1}{2}}}+1}{{2^{H+1}}\left({2^{H-\frac{1}{2}}}-1\right)}$, ${\ell _{3}}=\frac{1}{{2^{H+\frac{5}{2}}}\left({2^{H-\frac{1}{2}}}-1\right)}$.
Remark 3.
Evidently, the explicit form of the estimator can be obtained also in a slightly more general case, when ${t_{1}}=h$, ${t_{2}}=ah$, ${t_{3}}={a^{2}}h$ with some $a>0$. This leads to the estimator of the form (30) with ${\log _{a}^{+}}$ instead of ${\log _{2}^{+}}$.
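For instance, with the geometric time points of Corollary 1 and Remark 3, the estimator takes a closed form. The short Python sketch below is our own illustration; in particular, reading ${\log _{a}^{+}}$ as the logarithm applied only when its argument is positive is an assumption, since the footnote defining this notation is not reproduced here.

```python
import numpy as np

def estimate_H_explicit(V1, V2, V3, h, a=2.0):
    """Closed-form estimator (30) for t1 = h, t2 = a*h, t3 = a**2*h (a = 2 in Corollary 1).

    V1, V2, V3 are V_N(t1), V_N(t2), V_N(t3) from (10).
    Returns None when the ratio is non-positive (log_a^+ is then undefined
    under our reading of the truncated logarithm).
    """
    t1, t2, t3 = h, a * h, a**2 * h
    ratio = (t3**-1.5 * V3 - t2**-1.5 * V2) / (t2**-1.5 * V2 - t1**-1.5 * V1)
    if ratio <= 0:
        return None
    return 0.5 + np.log(ratio) / np.log(a)
```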
3.2 Estimation of σ and κ
Now we assume that the Hurst index H is known and investigate the estimation of the coefficients σ and κ. From the first two equations of (17), one can derive the following estimators:
(31)
\[ {\widehat{\sigma }_{N}^{2}}=\frac{{t_{1}^{-3/2}}{V_{N}}({t_{1}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})}{{c_{H}}\left({t_{1}^{H-1/2}}-{t_{2}^{H-1/2}}\right)},\hspace{1em}{\widehat{\kappa }_{N}^{2}}=\frac{{t_{1}^{-1-H}}{V_{N}}({t_{1}})-{t_{2}^{-1-H}}{V_{N}}({t_{2}})}{{c_{\frac{1}{2}}}\left({t_{1}^{1/2-H}}-{t_{2}^{1/2-H}}\right)}.\]
Now we are ready to formulate and prove the second main result of the paper.

Theorem 3.
1. For any $H\in (0,\frac{1}{2})\cup (\frac{1}{2},1)$, $({\widehat{\sigma }_{N}^{2}},{\widehat{\kappa }_{N}^{2}})$ is a strongly consistent estimator of the parameter $({\sigma ^{2}},{\kappa ^{2}})$ as $N\to \infty $.
2. For $H\in (0,\frac{1}{2})\cup (\frac{1}{2},\frac{3}{4})$, the estimator $({\widehat{\sigma }_{N}^{2}},{\widehat{\kappa }_{N}^{2}})$ is asymptotically normal:
(32)
\[ \sqrt{N}\left(\begin{array}{c}{\widehat{\sigma }_{N}^{2}}-{\sigma ^{2}}\\ {} {\widehat{\kappa }_{N}^{2}}-{\kappa ^{2}}\end{array}\right)\xrightarrow{d}\mathcal{N}(0,\Sigma )\hspace{1em}\textit{as}\hspace{2.5pt}N\to \infty ,\]
where the asymptotic covariance matrix Σ consists of the following elements:
\[\begin{array}{l}\displaystyle {\Sigma _{11}}=\frac{{t_{1}^{-3}}({r_{{t_{1}}{t_{1}}}}(H)+{r_{{t_{1}}{t_{2}}}}(H))+{t_{2}^{-3}}({r_{{t_{1}}{t_{2}}}}(H)+{r_{{t_{2}}{t_{2}}}}(H))}{{c_{H}^{2}}\left({t_{1}^{2H-1}}-2{({t_{1}}{t_{2}})^{H-\frac{1}{2}}}+{t_{2}^{2H-1}}\right)},\\ {} \displaystyle {\Sigma _{12}}={\Sigma _{21}}=\frac{{t_{1}^{-\frac{5}{2}-H}}({r_{{t_{1}}{t_{1}}}}(H)+{r_{{t_{1}}{t_{2}}}}(H))+{t_{2}^{-\frac{5}{2}-H}}({r_{{t_{1}}{t_{2}}}}(H)+{r_{{t_{2}}{t_{2}}}}(H))}{{c_{H}}{c_{\frac{1}{2}}}\left(2-{t_{1}^{H-\frac{1}{2}}}{t_{2}^{\frac{1}{2}-H}}-{t_{1}^{\frac{1}{2}-H}}{t_{2}^{H-\frac{1}{2}}}\right)},\\ {} \displaystyle {\Sigma _{22}}=\frac{{t_{1}^{-2-H}}({r_{{t_{1}}{t_{1}}}}(H)+{r_{{t_{1}}{t_{2}}}}(H))+{t_{2}^{-2-H}}({r_{{t_{1}}{t_{2}}}}(H)+{r_{{t_{2}}{t_{2}}}}(H))}{{c_{\frac{1}{2}}^{2}}\left({t_{1}^{1-2H}}-2{({t_{1}}{t_{2}})^{\frac{1}{2}-H}}+{t_{2}^{1-2H}}\right)}.\end{array}\]
Proof.
Using the definition (31) of the estimator ${\widehat{\sigma }_{N}^{2}}$, we rewrite the error ${\widehat{\sigma }_{N}^{2}}-{\sigma ^{2}}$ in the following form
\[\begin{aligned}{}{\widehat{\sigma }_{N}^{2}}-{\sigma ^{2}}& =\frac{{t_{1}^{-3/2}}{V_{N}}({t_{1}})-{t_{2}^{-3/2}}{V_{N}}({t_{2}})-{\sigma ^{2}}{c_{H}}{t_{1}^{H-\frac{1}{2}}}+{\sigma ^{2}}{c_{H}}{t_{2}^{H-\frac{1}{2}}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}\\ {} & =\frac{1}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}\bigg({t_{1}^{-3/2}}\left({V_{N}}({t_{1}})-{\sigma ^{2}}{c_{H}}{t_{1}^{H+1}}-{\kappa ^{2}}{c_{\frac{1}{2}}}{t_{1}^{3/2}}\right)\\ {} & \hspace{1em}-{t_{2}^{-3/2}}\left({V_{N}}({t_{2}})-{\sigma ^{2}}{c_{H}}{t_{2}^{H+1}}-{\kappa ^{2}}{c_{\frac{1}{2}}}{t_{2}^{3/2}}\right)\bigg)\\ {} & =\frac{{t_{1}^{-3/2}}\left({V_{N}}({t_{1}})-\mu ({t_{1}})\right)-{t_{2}^{-3/2}}\left({V_{N}}({t_{2}})-\mu ({t_{2}})\right)}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)},\end{aligned}\]
where the last equality follows from (11). Similarly, one can represent ${\widehat{\kappa }_{N}^{2}}-{\kappa ^{2}}$ as follows:
\[ {\widehat{\kappa }_{N}^{2}}-{\kappa ^{2}}=\frac{{t_{1}^{-1-H}}\left({V_{N}}({t_{1}})-\mu ({t_{1}})\right)-{t_{2}^{-1-H}}\left({V_{N}}({t_{2}})-\mu ({t_{2}})\right)}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}.\]
Hence, we see that the random vector in the left-hand side of (32) is a linear transformation of the left-hand side of (13) (for $n=2$), namely
(33)
\[ \begin{aligned}{}& \sqrt{N}\left(\begin{array}{c}{\widehat{\sigma }_{N}^{2}}-{\sigma ^{2}}\\ {} {\widehat{\kappa }_{N}^{2}}-{\kappa ^{2}}\end{array}\right)\\ {} & \hspace{0.2222em}\hspace{0.2222em}=\left(\begin{array}{c@{\hskip10.0pt}c}\frac{{t_{1}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}& -\frac{{t_{2}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}\\ {} \frac{{t_{1}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}& -\frac{{t_{2}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}\end{array}\right)\left(\begin{array}{c}\sqrt{N}\left({V_{N}}({t_{1}})-\mu ({t_{1}})\right)\\ {} \sqrt{N}\left({V_{N}}({t_{2}})-\mu ({t_{2}})\right)\end{array}\right).\end{aligned}\]
Therefore, taking into account the convergence (13), we conclude that (33) weakly converges to a bivariate normal distribution with the following covariance matrix:
\[\begin{aligned}{}\Sigma =& \left(\begin{array}{c@{\hskip10.0pt}c}\frac{{t_{1}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}& -\frac{{t_{2}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}\\ {} \frac{{t_{1}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}& -\frac{{t_{2}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}\end{array}\right)\left(\begin{array}{c@{\hskip10.0pt}c}{r_{{t_{1}}{t_{1}}}}(H)& {r_{{t_{1}}{t_{2}}}}(H)\\ {} {r_{{t_{1}}{t_{2}}}}(H)& {r_{{t_{2}}{t_{2}}}}(H)\end{array}\right)\\ {} & \times \left(\begin{array}{c@{\hskip10.0pt}c}\frac{{t_{1}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}& \frac{{t_{1}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}\\ {} \hspace{2em}-\frac{{t_{2}^{-3/2}}}{{c_{H}}\left({t_{1}^{H-\frac{1}{2}}}-{t_{2}^{H-\frac{1}{2}}}\right)}& -\frac{{t_{2}^{-1-H}}}{{c_{\frac{1}{2}}}\left({t_{1}^{\frac{1}{2}-H}}-{t_{2}^{\frac{1}{2}-H}}\right)}\end{array}\right)=\left(\begin{array}{c@{\hskip10.0pt}c}{\Sigma _{11}}& {\Sigma _{12}}\\ {} {\Sigma _{12}}& {\Sigma _{22}}\end{array}\right).\end{aligned}\]
This completes the proof.  □

3.3 Maximum likelihood estimation and Fisher information
In this subsection we analyze the efficiency of the estimator $({\widehat{\sigma }_{N}^{2}},{\widehat{\kappa }_{N}^{2}})$ by comparing it to the maximum likelihood estimator (MLE). Note that the MLE is hard to compute; however, it is possible to identify the corresponding Fisher information matrix. For the construction of the MLE we use the same observations as in the previous subsection, namely let the observation vector be
\[ {\mathbf{X}_{N}}={\big(u({t_{1}},\delta ),u({t_{1}},2\delta ),\dots ,u({t_{1}},N\delta ),u({t_{2}},\delta ),u({t_{2}},2\delta ),\dots ,u({t_{2}},N\delta )\big)^{\top }}.\]
Obviously, ${\mathbf{X}_{N}}$ has a $2N$-dimensional Gaussian distribution with probability density
\[ f({\mathbf{X}_{N}},\theta )={(2\pi )^{-N}}{\left(\det {\Gamma _{N}}\right)^{-\frac{1}{2}}}\exp \left\{-\frac{1}{2}{\mathbf{X}_{N}^{\top }}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}\right\},\]
where ${\Gamma _{N}}$ is the covariance matrix of ${\mathbf{X}_{N}}$, that is,
\[ {\Gamma _{N}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\rho _{{t_{1}}{t_{1}}}^{H}}(0)& \cdots & {\rho _{{t_{1}}{t_{1}}}^{H}}(N-1)& {\rho _{{t_{2}}{t_{1}}}^{H}}(0)& \cdots & {\rho _{{t_{2}}{t_{1}}}^{H}}(N-1)\\ {} \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ {} {\rho _{{t_{1}}{t_{1}}}^{H}}(N-1)& \cdots & {\rho _{{t_{1}}{t_{1}}}^{H}}(0)& {\rho _{{t_{2}}{t_{1}}}^{H}}(N-1)& \cdots & {\rho _{{t_{2}}{t_{1}}}^{H}}(0)\\ {} {\rho _{{t_{2}}{t_{1}}}^{H}}(0)& \cdots & {\rho _{{t_{2}}{t_{1}}}^{H}}(N-1)& {\rho _{{t_{2}}{t_{2}}}^{H}}(0)& \cdots & {\rho _{{t_{2}}{t_{2}}}^{H}}(N-1)\\ {} \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ {} {\rho _{{t_{2}}{t_{1}}}^{H}}(N-1)& \cdots & {\rho _{{t_{2}}{t_{1}}}^{H}}(0)& {\rho _{{t_{2}}{t_{2}}}^{H}}(N-1)& \cdots & {\rho _{{t_{2}}{t_{2}}}^{H}}(0)\end{array}\right).\]
Due to (9), this matrix can be decomposed as ${\Gamma _{N}}={\sigma ^{2}}{\Gamma _{N}^{b}}+{\kappa ^{2}}{\Gamma _{N}^{w}}$, where ${\Gamma _{N}^{b}}$ and ${\Gamma _{N}^{w}}$ are the covariance matrices of
\[ {\big({u_{b}}({t_{1}},\delta ),{u_{b}}({t_{1}},2\delta ),\dots ,{u_{b}}({t_{1}},N\delta ),{u_{b}}({t_{2}},\delta ),{u_{b}}({t_{2}},2\delta ),\dots ,{u_{b}}({t_{2}},N\delta )\big)^{\top }}\]
and
\[ {\big({u_{w}}({t_{1}},\delta ),{u_{w}}({t_{1}},2\delta ),\dots ,{u_{w}}({t_{1}},N\delta ),{u_{w}}({t_{2}},\delta ),{u_{w}}({t_{2}},2\delta ),\dots ,{u_{w}}({t_{2}},N\delta )\big)^{\top }},\]
respectively. The log-likelihood function is
\[ \ell ({\mathbf{X}_{N}},\theta )=-N\log (2\pi )-\frac{1}{2}\log (\det {\Gamma _{N}})-\frac{1}{2}{\mathbf{X}_{N}^{\top }}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}.\]
Then, the MLE of $\theta =({\sigma ^{2}},{\kappa ^{2}})$ is obtained as the solution to the following system of equations:
(34)
\[ \frac{\partial \ell ({\mathbf{X}_{N}},\theta )}{\partial {\sigma ^{2}}}=-\frac{1}{2}\operatorname{tr}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)+\frac{1}{2}{\mathbf{X}_{N}^{\top }}{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}=0,\]
(35)
\[ \frac{\partial \ell ({\mathbf{X}_{N}},\theta )}{\partial {\kappa ^{2}}}=-\frac{1}{2}\operatorname{tr}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)+\frac{1}{2}{\mathbf{X}_{N}^{\top }}{\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}=0\]
(here and in what follows we use the differentiation formulas of a matrix with respect to a given parameter; see, e. g., [32] for more details). The maximum likelihood estimator ${\hat{\theta }_{N}^{mle}}$ of θ can hardly be written in the explicit form, since the estimating equations involve the inverse matrix ${\Gamma _{N}^{-1}}$, which depends nonlinearly on ${\sigma ^{2}}$ and ${\kappa ^{2}}$. However, using the general theory of maximum likelihood estimation for dependent observations [11], it is possible to establish the asymptotic normality in the form
(36)
\[ {\left({T_{N}}(\theta )\right)^{\frac{1}{2}}}\left({\hat{\theta }_{N}^{mle}}-\theta \right)\xrightarrow{d}\mathcal{N}(\mathbf{0},{I_{2}})\hspace{1em}\text{as}\hspace{2.5pt}N\to \infty ,\]
where ${T_{N}}(\theta )$ is the Fisher information matrix and ${I_{2}}$ is the $2\times 2$ identity matrix. The rigorous proof of (36) as well as a careful analysis of the asymptotic behavior of ${T_{N}}(\theta )$ requires an additional investigation. To the best of our knowledge, even for the much simpler model of the mixed fractional Brownian motion, this problem has not been completely solved yet; see the recent paper [15] for details. Therefore, here we restrict ourselves to the identification of the matrix ${T_{N}}(\theta )$.

Lemma 2.
The Fisher information matrix ${T_{N}}(\theta )$ has the form
\[ {T_{N}}(\theta )=\left(\begin{array}{c@{\hskip10.0pt}c}\frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}\right)& \frac{1}{2}\operatorname{tr}\left(\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)\right)\\ {} \frac{1}{2}\operatorname{tr}\left(\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)\right)& \frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)^{2}}\right)\end{array}\right).\]
Proof.
In order to identify ${T_{N}}(\theta )$, let us calculate the second derivatives. Note that
\[\begin{aligned}{}\frac{\partial }{\partial {\sigma ^{2}}}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)& =\left(\frac{\partial }{\partial {\sigma ^{2}}}{\Gamma _{N}^{-1}}\right){\Gamma _{N}^{b}}=\left(-{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}{\Gamma _{N}^{-1}}\right){\Gamma _{N}^{b}}=-{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}},\\ {} \frac{\partial }{\partial {\sigma ^{2}}}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}{\Gamma _{N}^{-1}}\right)& =\left(\frac{\partial }{\partial {\sigma ^{2}}}{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right){\Gamma _{N}^{-1}}+{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\left(\frac{\partial }{\partial {\sigma ^{2}}}{\Gamma _{N}^{-1}}\right)\\ {} & =-{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}{\Gamma _{N}^{-1}}-{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}{\Gamma _{N}^{-1}}\right)\\ {} & =-2{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}{\Gamma _{N}^{-1}}.\end{aligned}\]
Hence,
\[ \frac{{\partial ^{2}}\ell ({\mathbf{X}_{N}},\theta )}{\partial {({\sigma ^{2}})^{2}}}=\frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}\right)-{\mathbf{X}_{N}^{\top }}{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}.\]
Taking expectations, we obtain that the corresponding element of the Fisher information matrix equals
\[\begin{aligned}{}-\mathsf{E}\left[\frac{{\partial ^{2}}\ell ({\mathbf{X}_{N}},\theta )}{\partial {({\sigma ^{2}})^{2}}}\right]& =-\frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}\right)+\mathsf{E}\left[{\mathbf{X}_{N}^{\top }}{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}\right]\\ {} & =\frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)^{2}}\right),\end{aligned}\]
since for any matrix $A={({a_{ij}})_{i,j=1,\dots ,2N}}$, we have the equality $\mathsf{E}\left[{\mathbf{X}_{N}^{\top }}A{\mathbf{X}_{N}}\right]={\textstyle\sum _{i,j}}{a_{ij}}\mathsf{E}\left[{X_{i}}{X_{j}}\right]=\operatorname{tr}\left(A{\Gamma _{N}}\right)$. Arguing as above, one can write the derivatives
\[\begin{aligned}{}\frac{{\partial ^{2}}\ell ({\mathbf{X}_{N}},\theta )}{\partial {({\kappa ^{2}})^{2}}}& =\frac{1}{2}\operatorname{tr}\left({\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)^{2}}\right)-{\mathbf{X}_{N}^{\top }}{\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}\right)^{2}}{\Gamma _{N}^{-1}}{\mathbf{X}_{N}}\\ {} \frac{{\partial ^{2}}\ell ({\mathbf{X}_{N}},\theta )}{\partial {\sigma ^{2}}\partial {\kappa ^{2}}}& =\frac{1}{2}\operatorname{tr}\left({\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right)\\ {} & \hspace{1em}-\frac{1}{2}{\mathbf{X}_{N}^{\top }}{\Gamma _{N}^{-1}}\left({\Gamma _{N}^{b}}{\Gamma _{N}^{-1}}{\Gamma _{N}^{w}}+{\Gamma _{N}^{w}}{\Gamma _{N}^{-1}}{\Gamma _{N}^{b}}\right){\Gamma _{N}^{-1}}{\mathbf{X}_{N}},\end{aligned}\]
and calculate their expectations, identifying the other elements of ${T_{N}}(\theta )$.  □

Remark 4.
1. Similarly to the previous subsection, in the case $H=\frac{1}{2}$ it is impossible to estimate both parameters ${\sigma ^{2}}$ and ${\kappa ^{2}}$ simultaneously; only the estimation of the sum ${\sigma ^{2}}+{\kappa ^{2}}$ is possible. In this case ${\Gamma _{N}^{b}}={\Gamma _{N}^{w}}$, therefore the estimating equations (34) and (35) coincide.
2. The results of this subsection are valid for any other observation vector of the form $\mathbf{X}=(u({t_{i}},{x_{k}}),i=1,\dots ,M,k=1,\dots ,N)$ and its covariance matrix Γ (with the decomposition $\Gamma ={\sigma ^{2}}{\Gamma ^{b}}+{\kappa ^{2}}{\Gamma ^{w}}$).
3. A similar approach can be applied to the case when H is unknown, that is, to the problem of estimating all three parameters ${\sigma ^{2}}$, ${\kappa ^{2}}$ and H.
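As a numerical complement to Lemma 2, the matrix ${T_{N}}(\theta )$ can be evaluated directly once the covariance matrices ${\Gamma _{N}^{b}}$ and ${\Gamma _{N}^{w}}$ are available. The following Python sketch is our own illustration; computing ${\Gamma _{N}^{b}}$ and ${\Gamma _{N}^{w}}$ themselves (via the covariance formulas of Proposition 1) is assumed to be done elsewhere.

```python
import numpy as np

def fisher_information(sigma2, kappa2, Gamma_b, Gamma_w):
    """Fisher information matrix T_N(theta) from Lemma 2.

    Gamma_b, Gamma_w : covariance matrices of the fractional and Wiener parts
                       of the observation vector X_N (shape (2N, 2N) each).
    """
    Gamma = sigma2 * Gamma_b + kappa2 * Gamma_w     # Gamma_N = sigma^2 Gamma_N^b + kappa^2 Gamma_N^w
    Mb = np.linalg.solve(Gamma, Gamma_b)            # Gamma_N^{-1} Gamma_N^b
    Mw = np.linalg.solve(Gamma, Gamma_w)            # Gamma_N^{-1} Gamma_N^w
    return 0.5 * np.array([
        [np.trace(Mb @ Mb), np.trace(Mb @ Mw)],
        [np.trace(Mb @ Mw), np.trace(Mw @ Mw)],
    ])
```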
4 Simulations
Let us illustrate the theoretical properties of the estimators by some numerical results. We consider the model with the coefficients $\sigma =\kappa =1$ for various values of H. For each value of the Hurst index H, we simulate 50 sample paths of the solution $u(t,x)$ to the equation (1). The trajectories of the solution are generated by discretization of the formula (3).
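One possible way to generate such trajectories is to truncate the spatial integrals in (3) to a bounded interval and to replace both integrals by Riemann sums over a space–time grid, with the fBm increments produced by a Cholesky factorization of the fractional Gaussian noise covariance. The sketch below is only one of many discretization schemes and is our own illustration; the truncation margin, grid sizes and the Cholesky-based generator are assumptions, not the procedure used by the authors, and the Cholesky step limits it to moderate grids.

```python
import numpy as np

def fgn_increments(n, H, dy, rng):
    """Increments of fBm on a grid with step dy via Cholesky of the fGn covariance."""
    k = np.arange(n)
    c = 0.5 * dy**(2 * H) * (np.abs(k + 1)**(2 * H) + np.abs(k - 1)**(2 * H) - 2 * np.abs(k)**(2 * H))
    cov = c[np.abs(k[:, None] - k[None, :])]            # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))     # small jitter for numerical stability
    return L @ rng.standard_normal(n)

def heat_kernel(t, x):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def simulate_u(times, x_obs, H, sigma=1.0, kappa=1.0, pad=10.0, dy=0.1, n_s=200, rng=None):
    """Riemann-sum approximation of the mild solution (3) at the points (t_i, x_k)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.arange(min(x_obs) - pad, max(x_obs) + pad, dy) + dy / 2   # midpoints of spatial cells
    dB = fgn_increments(len(y), H, dy, rng)                          # fBm increments over the cells
    dW = rng.standard_normal(len(y)) * np.sqrt(dy)                   # Wiener increments over the cells
    u = np.zeros((len(times), len(x_obs)))
    for i, t in enumerate(times):
        s = (np.arange(n_s) + 0.5) * t / n_s                         # midpoint rule in time, avoids s = t
        ds = t / n_s
        for k, x in enumerate(x_obs):
            w = heat_kernel((t - s)[:, None], x - y[None, :]).sum(axis=0) * ds   # int_0^t G(t-s, x-y) ds
            u[i, k] = sigma * (w @ dB) + kappa * (w @ dW)
    return u
```

For the setting of this section one could call, e.g., `simulate_u([0.25, 0.5, 1.0], np.arange(1, 257) * 1.0, H=0.7)`; much larger values of N would require a more scalable fGn generator than the plain Cholesky factorization used in this sketch.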
We choose ${t_{1}}=0.25$, ${t_{2}}=0.5$, ${t_{3}}=1$ as the observation time instants, so that the conditions of Corollary 1 are satisfied and the estimator of H can be computed by the explicit formula (30). For each ${t_{i}}$ we observe $u({t_{i}},k\delta )$, $k=1,\dots ,N$, with the step $\delta =1$.
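To make the estimation step reproducible, the sketch below combines the explicit formula (30) with the estimators (31). The constant ${c_{H}}$ is not restated in this extract; the expression used here, ${c_{H}}={2^{H}}\Gamma (H+\frac{1}{2})({2^{H+1}}-2)/(\sqrt{\pi }(H+1))$, is inferred from (8) with $t=s$ and should be checked against (6). All function names are our own.

```python
import numpy as np
from math import gamma, sqrt, pi

def c(H):
    """Constant c_H in mu(t) = sigma^2 c_H t^(H+1) + kappa^2 c_{1/2} t^(3/2), inferred from (8)."""
    return 2**H * gamma(H + 0.5) * (2**(H + 1) - 2) / (sqrt(pi) * (H + 1))

def estimate_H_hat(V1, V2, V3, t1=0.25, t2=0.5, t3=1.0):
    """Explicit estimator (30) for t2 = 2*t1, t3 = 4*t1; returns nan if the ratio is non-positive."""
    ratio = (t3**-1.5 * V3 - t2**-1.5 * V2) / (t2**-1.5 * V2 - t1**-1.5 * V1)
    return 0.5 + np.log2(ratio) if ratio > 0 else np.nan

def estimate_sigma2_kappa2(V1, V2, H, t1=0.25, t2=0.5):
    """Estimators (31), with the Hurst index H assumed known; c(0.5) plays the role of c_{1/2}."""
    sigma2 = (t1**-1.5 * V1 - t2**-1.5 * V2) / (c(H) * (t1**(H - 0.5) - t2**(H - 0.5)))
    kappa2 = (t1**(-1 - H) * V1 - t2**(-1 - H) * V2) / (c(0.5) * (t1**(0.5 - H) - t2**(0.5 - H)))
    return sigma2, kappa2

# One Monte Carlo replication (u has rows u(t_i, k*delta), e.g. produced by simulate_u above):
# V1, V2, V3 = np.mean(u**2, axis=1)
# H_hat = estimate_H_hat(V1, V2, V3)
# sigma2_hat, kappa2_hat = estimate_sigma2_kappa2(V1, V2, H_true)
```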
Table 1.
Means and standard deviations of the estimator ${\widehat{H}_{N}}$
H | | $N={2^{8}}$ | ${2^{9}}$ | ${2^{10}}$ | ${2^{11}}$ | ${2^{12}}$ |
$H=0.1$ | Mean | −0.0121 | −0.0299 | 0.0353 | 0.0527 | 0.0682 |
Std. dev. | 0.5237 | 0.5552 | 0.2061 | 0.1242 | 0.0752 | |
$H=0.2$ | Mean | 0.1616 | 0.1415 | 0.1910 | 0.1961 | 0.1799 |
Std. dev. | 0.3419 | 0.2243 | 0.1540 | 0.0897 | 0.0692 | |
$H=0.3$ | Mean | 0.1543 | 0.2845 | 0.2685 | 0.2955 | 0.2999 |
Std. dev. | 0.4451 | 0.4177 | 0.1930 | 0.1314 | 0.0781 | |
$H=0.4$ | Mean | 0.2254 | 0.2725 | 0.3129 | 0.2854 | 0.3313 |
Std. dev. | 0.8089 | 0.8803 | 0.4661 | 0.1608 | 0.1299 | |
$H=0.6$ | Mean | 0.4563 | 0.3495 | 0.5266 | 0.5618 | 0.5775 |
Std. dev. | 0.7108 | 0.9010 | 0.2926 | 0.1992 | 0.1108 | |
$H=0.7$ | Mean | 0.6384 | 0.7024 | 0.7160 | 0.7151 | 0.6980 |
Std. dev. | 0.3391 | 0.1512 | 0.1065 | 0.0848 | 0.0511 | |
$H=0.8$ | Mean | 0.8160 | 0.8042 | 0.8073 | 0.8022 | 0.8074 |
Std. dev. | 0.1929 | 0.0922 | 0.0644 | 0.0467 | 0.0333 | |
$H=0.9$ | Mean | 0.8583 | 0.8722 | 0.8815 | 0.8939 | 0.8958 |
Std. dev. | 0.1216 | 0.0772 | 0.0671 | 0.0474 | 0.0334 |
Table 2.
Means and standard deviations of the estimator ${\widehat{\sigma }_{N}^{2}}$ for $\sigma =1$, $\kappa =1$
H | | $N={2^{8}}$ | ${2^{9}}$ | ${2^{10}}$ | ${2^{11}}$ | ${2^{12}}$ |
$H=0.1$ | Mean | 1.0084 | 1.0511 | 1.0571 | 1.0405 | 1.0440 |
Std. dev. | 0.3971 | 0.2933 | 0.1829 | 0.1355 | 0.0909 | |
$H=0.2$ | Mean | 1.0963 | 1.0478 | 1.0407 | 1.0183 | 1.0170 |
Std. dev. | 0.3742 | 0.2370 | 0.1888 | 0.1261 | 0.1014 | |
$H=0.3$ | Mean | 1.0107 | 1.0437 | 0.9616 | 0.9939 | 1.0032 |
Std. dev. | 0.4530 | 0.3618 | 0.2454 | 0.1673 | 0.1216 | |
$H=0.4$ | Mean | 0.8117 | 1.0042 | 1.0566 | 1.0964 | 1.0660 |
Std. dev. | 0.9423 | 0.7003 | 0.4789 | 0.3048 | 0.2167 | |
$H=0.6$ | Mean | 1.0905 | 1.1579 | 1.0793 | 1.0667 | 1.0673 |
Std. dev. | 0.6755 | 0.5750 | 0.3875 | 0.2967 | 0.2154 | |
$H=0.7$ | Mean | 0.9536 | 1.0545 | 1.0653 | 1.0106 | 0.9889 |
Std. dev. | 0.4650 | 0.3498 | 0.2526 | 0.1649 | 0.1202 | |
$H=0.8$ | Mean | 1.0171 | 1.0216 | 1.0287 | 1.0252 | 1.0095 |
Std. dev. | 0.4619 | 0.2805 | 0.2465 | 0.1720 | 0.1368 | |
$H=0.9$ | Mean | 1.1814 | 1.1236 | 1.0738 | 1.0391 | 1.0308 |
Std. dev. | 0.8814 | 0.7882 | 0.6851 | 0.4922 | 0.3681 |
Table 3.
Means and standard deviations of the estimator ${\widehat{\kappa }_{N}^{2}}$ for $\sigma =1$, $\kappa =1$
H | | $N={2^{8}}$ | ${2^{9}}$ | ${2^{10}}$ | ${2^{11}}$ | ${2^{12}}$ |
$H=0.1$ | Mean | 1.0472 | 1.0133 | 1.0005 | 0.9953 | 0.9886 |
Std. dev. | 0.1664 | 0.1483 | 0.0963 | 0.0606 | 0.0442 | |
$H=0.2$ | Mean | 0.9608 | 0.9720 | 0.9732 | 0.9876 | 0.9859 |
Std. dev. | 0.2430 | 0.1712 | 0.1338 | 0.0834 | 0.0720 | |
$H=0.3$ | Mean | 0.9899 | 0.9638 | 1.0355 | 1.0075 | 1.0032 |
Std. dev. | 0.4197 | 0.3056 | 0.2059 | 0.1472 | 0.1017 | |
$H=0.4$ | Mean | 1.2052 | 1.0076 | 0.9417 | 0.9092 | 0.9365 |
Std. dev. | 0.8747 | 0.6671 | 0.4480 | 0.2988 | 0.2188 | |
$H=0.6$ | Mean | 0.8934 | 0.8285 | 0.9070 | 0.9258 | 0.9279 |
Std. dev. | 0.6869 | 0.5600 | 0.3733 | 0.2869 | 0.2137 | |
$H=0.7$ | Mean | 1.0483 | 0.9526 | 0.9519 | 0.9863 | 1.0040 |
Std. dev. | 0.4351 | 0.3250 | 0.2228 | 0.1448 | 0.1078 | |
$H=0.8$ | Mean | 0.9996 | 0.9938 | 0.9905 | 0.9848 | 0.9974 |
Std. dev. | 0.3113 | 0.1941 | 0.1429 | 0.1031 | 0.0795 | |
$H=0.9$ | Mean | 1.0017 | 0.9877 | 0.9767 | 0.9870 | 0.9945 |
Std. dev. | 0.2985 | 0.2311 | 0.2072 | 0.1480 | 0.1084 |
Table 1 reports the means and standard deviations of ${\widehat{H}_{N}}$ for various H and N. We see that the estimates converge to the true value of the parameter H. However, the convergence is much slower compared to the estimation of H in the pure fractional case (i. e. $\kappa =0$), which was considered in [3]. The results become poorer when H approaches $1/2$, or when H is close to zero. It is worth mentioning that the best performance of ${\widehat{H}_{N}}$ is observed for large values of H (0.8 and 0.9), for which the asymptotic normality does not hold.
The means and standard deviations of the estimates ${\widehat{\sigma }_{N}^{2}}$ and ${\widehat{\kappa }_{N}^{2}}$ are reported in Tables 2–3. Here we also clearly see that the estimators converge to the true values of the parameters; however, the results for both estimators become worse when H is close to 1/2. We observe that, unlike ${\widehat{H}_{N}}$, the estimator ${\widehat{\sigma }_{N}^{2}}$ converges slowly for $H=0.9$, demonstrating better results for small H. Note that a similar situation for the coefficient of the fractional Brownian motion takes place in the pure fractional case; see [2].