Modern Stochastics: Theory and Applications (VMSTA), ISSN 2351-6054 (print), 2351-6046 (online). VTeX, Mokslininkų g. 2A, 08412 Vilnius, Lithuania. Article VMSTA221, DOI 10.15559/23-VMSTA221. Research Article.

Parameter estimation in mixed fractional stochastic heat equation

Diana Avetisian (diana.avetisian2017@gmail.com, ORCID 0000-0001-7208-3130), Kostiantyn Ralchenko (kostiantynralchenko@knu.ua, corresponding author)
Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Volodymyrska 64, 01601 Kyiv, Ukraine
Sydney Mathematical Research Institute, The University of Sydney, Sydney NSW 2006, Australia

2023. © 2023 The Author(s). Published by VTeX. Open access article under the CC BY license.

The paper is devoted to a stochastic heat equation with a mixed fractional Brownian noise. We investigate the covariance structure, stationarity, upper bounds and asymptotic behavior of the solution. Based on discrete-time observations of the solution, we construct a strongly consistent estimator of the Hurst index H and prove its asymptotic normality for H < 3/4. Then, assuming the parameter H to be known, we study the joint estimation of the coefficients of the Wiener process and of the fractional Brownian motion. The quality of the estimators is illustrated by simulation experiments.

Keywords: stochastic partial differential equation; mixed fractional Brownian motion; Hurst index estimation; strong consistency; asymptotic normality.
MSC: 60G22, 60H15, 62F10, 62F12.
KR was supported by the Sydney Mathematical Research Institute under the Ukrainian Visitors Program.
Introduction

The paper is devoted to parameter estimation in a stochastic heat equation of the following form:
$$\frac{\partial u}{\partial t}(t,x)-\frac{1}{2}\,\frac{\partial^2 u}{\partial x^2}(t,x)=\sigma\dot B^H_x+\kappa\dot W_x,\quad t>0,\ x\in\mathbb{R},\tag{1}$$
$$u(0,x)=0,\quad x\in\mathbb{R}.\tag{2}$$
The right-hand side of (1) is a mixed fractional noise. It consists of two independent stochastic processes, namely a fractional Brownian motion $B^H=\{B^H_x,\ x\in\mathbb{R}\}$ with Hurst parameter $H\in(0,1)$ and a Wiener process $W=\{W_x,\ x\in\mathbb{R}\}$; σ and κ are positive coefficients. The solution to the problem (1)–(2) is understood in the mild sense; the precise definition is given in Section 2.

The mixed fractional Brownian motion was first introduced by P. Cheridito  in order to model financial markets that are simultaneously arbitrage-free and exhibit memory. The properties of this process were studied in . We refer to the book  for a detailed presentation of the modern theory in this area. More involved mixed fractional models described by stochastic differential equations have been the subject of numerous publications  that appeared during the last decades. Such equations can be used to model processes on financial markets, where two principal random noises influence the prices. The first source of randomness is the stock exchange itself with its thousands of agents; the noise coming from this source can be assumed to be white and is best modeled by a Wiener process. The second one has a financial and economic background and can be modeled by a fractional Brownian motion $B^H$. Stochastic partial differential equations with such noises can be used, in particular, for the modeling of instantaneous forward rates, where the space variable corresponds to time until maturity . Such equations also arise in geophysics, especially in physical oceanography  and in geostatistics . For example, in models for sea surface temperature, noise terms can represent various heat fluxes and ocean processes .

Existence, uniqueness and properties of solutions for stochastic differential equations with mixed noise were studied in various papers . A stochastic heat equation with white and fractional noises was investigated in . Several approaches to parameter identification in simple linear mixed fractional models for various observation schemes were proposed in . The problem of drift parameter estimation in a mixed stochastic differential equation of a general form was studied in . The statistical problems for the mixed fractional Vasicek model were investigated in the recent papers .

Similarly to our previous papers , the solution u(t, x) is observed at equidistant spatial points for several fixed time instants. On the one hand, there are many practical cases where the solution is observed at discrete space points, such as the temperature of a heated body or the velocity of a turbulent flow. In many cases measurements with a high space resolution are available, but the time series are short; for example, this is the case for satellite observations of sea surface temperature, see . For this reason, it is natural to assume that u(t, x) is observed at a large number of space points $x_k$ and only a few different time instants $t_i$. On the other hand, observing the solution at three time points is already enough to construct estimators of the unknown parameters σ, κ and H. Nevertheless, the additional information from observing the solution discretely in time can be incorporated by taking a (weighted) average of the estimators, similarly to  or .

The present paper is devoted to the problem of estimating the unknown parameters H, σ, κ in the equation (1) from discrete observations of its solution u(t, x). The results of this paper are an extension of our previous works , where the problems of estimating H and σ were studied for the equation (1) driven by the fractional Brownian motion only (that is, for κ = 0). The diffusion parameter estimator for an SPDE with white noise and its properties were considered in . Similarly to the mentioned articles, in the present paper we start by proving stationarity and ergodicity of the solution u(t, x) as a function of the spatial variable x, by analyzing the behavior of the covariance function. Based on these results, we construct a strongly consistent estimator of H (assuming that the parameters σ and κ are unknown). The asymptotic normality of this estimator is proved for any $H\in(0,\frac12)\cup(\frac12,\frac34)$. Then we consider the problem of estimating the pair of parameters (σ, κ) when the value of H is known. We prove strong consistency of the estimator and investigate its asymptotic normality.

The paper is organized as follows. In Section 2 we introduce the definition of a mild solution to the SPDE (1) and present its properties. Furthermore, we prove a limit theorem for it, which is needed for establishing the properties of the statistical estimators. The statistical problems are investigated in Section 3. In Subsection 3.1 we construct an estimator of the Hurst index H and prove its strong consistency and asymptotic normality. Subsection 3.2 is devoted to the estimators of the parameters σ and κ and their asymptotic properties. Numerical results are presented and discussed in Section 4.

Preliminaries

Assume that $B^H=\{B^H_x,\ x\in\mathbb{R}\}$ is a two-sided fractional Brownian motion with Hurst index $H\in(0,1)$, while $W=\{W_x,\ x\in\mathbb{R}\}$ is a Wiener process independent of $B^H$. Let G be the Green's function of the heat equation, that is,
$$G(t,x)=\begin{cases}\dfrac{1}{\sqrt{2\pi t}}\exp\left\{-\dfrac{x^2}{2t}\right\},&\text{if } t>0,\\[2mm] \delta_0(x),&\text{if } t=0.\end{cases}$$
Similarly to  (see also  and the references cited therein), we define a solution to SPDE (1) in the mild sense as follows.

The random field $\{u(t,x),\ t\ge 0,\ x\in\mathbb{R}\}$ defined by
$$u(t,x)=\sigma\int_0^t\int_{\mathbb{R}}G(t-s,x-y)\,dB^H_y\,ds+\kappa\int_0^t\int_{\mathbb{R}}G(t-s,x-y)\,dW_y\,ds\tag{3}$$
is called a solution to SPDE (1)–(2).

As shown in , both stochastic integrals in (3) exist as pathwise Riemann–Stieltjes integrals. This fact follows from the Hölder regularity of the integrands and integrators. Namely, the Green's function is obviously Lipschitz continuous, while the sample paths of the fractional Brownian motion are Hölder continuous up to order H. Such regularity guarantees the existence of the first integral in (3). The second integral is also well defined, since the integrand is square integrable, see [4, Theorem 2.1].

The next proposition summarizes basic properties of the solution u ( t , x ). These properties, especially stationarity and ergodicity, enable us to construct and investigate statistical estimators for H, κ and σ.

Let $u=\{u(t,x),\ t\in[0,T],\ x\in\mathbb{R}\}$ be a solution to (1) defined by (3). Then the following properties hold.

1. For all $t,s\in[0,T]$ and $x,z\in\mathbb{R}$,
$$\operatorname{cov}(u(t,z),u(s,x+z))=\operatorname{cov}(u(t,0),u(s,x))=\frac{1}{2\sqrt{2\pi}}\int_0^t\int_0^s(q+r)^{-3/2}\int_{\mathbb{R}}\left(\sigma^2H|y|^{2H-1}+\frac{\kappa^2}{2}\right)\operatorname{sign}(y)\,(y-x)\exp\left\{-\frac{(y-x)^2}{2(q+r)}\right\}dy\,dq\,dr.\tag{4}$$
Consequently, for fixed distinct points $t_1,\dots,t_n\in[0,T]$, the stochastic process $\{(u(t_1,x),\dots,u(t_n,x)),\ x\in\mathbb{R}\}$ is a multivariate stationary Gaussian process.

2. The variance of u(t, x) is given by
$$\mathbf{E}\left[u(t,x)^2\right]=\sigma^2v_t(H)+\kappa^2v_t\!\left(\tfrac12\right),\quad t>0,\ x\in\mathbb{R},\tag{5}$$
where
$$v_t(H)=c_H\,t^{H+1},\qquad c_H=\frac{2^{H+1}\left(2^H-1\right)\Gamma\!\left(H+\frac12\right)}{\sqrt{\pi}\,(H+1)},\tag{6}$$
and Γ denotes the gamma function.

3. For all $t,s\in[0,T]$ and $x>0$, the covariance function admits the following upper bound:
$$\left|\operatorname{cov}(u(t,0),u(s,x))\right|\le C_H\sqrt{ts}\left(\sigma^2x^{2H-2}+\kappa^2x^{-1}\right),\tag{7}$$
where $C_H$ is a positive constant depending only on H.
4. For all $t,s\in[0,T]$ and $x\in\mathbb{R}$,
$$\operatorname{cov}(u(t,x),u(s,x))=\sigma^2\,\frac{2^H\Gamma\!\left(H+\frac12\right)\left((t+s)^{H+1}-t^{H+1}-s^{H+1}\right)}{\sqrt{\pi}\,(H+1)}+\kappa^2\,\frac{2^{3/2}\left((t+s)^{3/2}-t^{3/2}-s^{3/2}\right)}{3\sqrt{\pi}}.\tag{8}$$
5. For a fixed $t>0$, the random process $\{u(t,x),\ x\in\mathbb{R}\}$ is ergodic.

Proof. The proposition follows from the corresponding results for the equation with pure fractional noise studied in . Indeed, all the statements are based on the properties of the covariance function of the solution. Since $B^H$ and W are independent, this covariance function can be represented as
$$\operatorname{cov}(u(t,x),u(s,z))=\sigma^2\operatorname{cov}(u_b(t,x),u_b(s,z))+\kappa^2\operatorname{cov}(u_w(t,x),u_w(s,z)),\tag{9}$$
where
$$u_b(t,x)=\int_0^t\int_{\mathbb{R}}G(t-s,x-y)\,dB^H_y\,ds,\qquad u_w(t,x)=\int_0^t\int_{\mathbb{R}}G(t-s,x-y)\,dW_y\,ds.$$
Then, combining the equality (9) with the statements of [3, Prop. 2.2], we immediately obtain formulas (4), (7) and (8). The equality (5) follows from (9) and [2, Prop. 3]. Finally, the last statement of the proposition holds because $\{u(t,x),\ x\in\mathbb{R}\}$ is a stationary Gaussian process whose covariance function vanishes as $x\to\infty$, according to (7). Hence, the process $u(t,\cdot)$ is ergodic.  □
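The formulas (5), (6) and (8) are mutually consistent: setting s = t in the same-point covariance (8) must recover the variance (5). A small numerical check of this identity (a sketch; the helper names `c_H`, `variance`, `cov_same_point` are ours, not notation from the paper):

```python
import math

def c_H(H):
    """Constant c_H from formula (6)."""
    return 2**(H + 1) * (2**H - 1) * math.gamma(H + 0.5) / (math.sqrt(math.pi) * (H + 1))

def v_t(t, H):
    """v_t(H) = c_H * t^{H+1} from formula (6)."""
    return c_H(H) * t**(H + 1)

def variance(t, H, sigma, kappa):
    """E[u(t,x)^2] from formula (5)."""
    return sigma**2 * v_t(t, H) + kappa**2 * v_t(t, 0.5)

def cov_same_point(t, s, H, sigma, kappa):
    """cov(u(t,x), u(s,x)) from formula (8)."""
    fbm = 2**H * math.gamma(H + 0.5) * ((t + s)**(H + 1) - t**(H + 1) - s**(H + 1)) \
          / (math.sqrt(math.pi) * (H + 1))
    bm = 2**1.5 * ((t + s)**1.5 - t**1.5 - s**1.5) / (3 * math.sqrt(math.pi))
    return sigma**2 * fbm + kappa**2 * bm
```

In particular, the κ²-term of (8) at s = t reduces to $c_{1/2}t^{3/2}=\frac{8-2^{5/2}}{3\sqrt{\pi}}\,t^{3/2}$, as one can verify directly.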

Let us fix some δ > 0 and consider the following sequence:
$$V_N(t)=\frac{1}{N}\sum_{k=1}^{N}u(t,k\delta)^2,\quad t>0,\ N\in\mathbb{N}.\tag{10}$$
The sequence (10) will serve as a statistic for the parameter estimation problems in Section 3. We introduce the following notation in addition to (6):
$$\mu(t):=\mathbf{E}V_N(t)=\mathbf{E}\left[u(t,0)^2\right]=\sigma^2v_t(H)+\kappa^2v_t\!\left(\tfrac12\right),\tag{11}$$
$$\rho^H_{ts}(k)=\operatorname{cov}(u(t,k\delta),u(s,0)),\qquad r_{ts}(H)=2\sum_{k=-\infty}^{\infty}\rho^H_{ts}(k)^2,\quad t,s>0.$$
The next theorem describes the asymptotic behavior of the stochastic process $V_N$. It gives a law of large numbers and a central limit theorem for its finite-dimensional distributions $(V_N(t_1),\dots,V_N(t_n))$ as $N\to\infty$. This result is crucial for the construction of the estimators and for establishing their asymptotic properties.

Theorem 1. 1. Let $H\in(0,1)$. For any $t>0$,
$$V_N(t)\to\mu(t)\quad\text{a.s., as } N\to\infty.\tag{12}$$

2. If additionally $H\in\left(0,\frac34\right)$, then for any $n\ge1$ and any distinct positive $t_1,\dots,t_n$,
$$\sqrt{N}\begin{pmatrix}V_N(t_1)-\mu(t_1)\\ \vdots\\ V_N(t_n)-\mu(t_n)\end{pmatrix}\xrightarrow{d}\mathcal{N}(0,R)\quad\text{as } N\to\infty,\tag{13}$$
where $\mathcal{N}(0,R)$ is the multivariate normal distribution with zero mean and the covariance matrix $R=\left(r_{t_it_j}(H)\right)_{i,j=1}^{n}$.

Proof. 1. The ergodic theorem implies that for any $t>0$,
$$\frac{1}{N}\sum_{k=1}^{N}u(t,k\delta)^2\to\mathbf{E}\left[u(t,0)^2\right]\quad\text{a.s. as } N\to\infty,$$
which is equivalent to (12).
2. In order to prove the convergence (13), we apply the Cramér–Wold theorem. In other words, we need to show that for all $\alpha_1,\dots,\alpha_n\in\mathbb{R}$, the convergence
$$\sqrt{N}\sum_{i=1}^{n}\alpha_i\left(V_N(t_i)-\mu(t_i)\right)\xrightarrow{d}\mathcal{N}\left(0,s^2\right)\tag{14}$$
holds with the asymptotic variance
$$s^2=\sum_{i=1}^{n}\alpha_i^2\,r_{t_it_i}(H)+2\sum_{1\le i<j\le n}\alpha_i\alpha_j\,r_{t_it_j}(H).\tag{15}$$
Representing $V_N$ as the sum (10) and using (11), we rewrite (14) in the form
$$\frac{1}{\sqrt{N}}\sum_{k=1}^{N}\sum_{i=1}^{n}\alpha_i\left(u(t_i,k\delta)^2-\mathbf{E}u(t_i,k\delta)^2\right)\xrightarrow{d}\mathcal{N}(0,s^2).\tag{16}$$
Further, since $\{(u(t_1,k\delta),\dots,u(t_n,k\delta)),\ k\in\mathbb{N}\}$ is a multivariate stationary Gaussian sequence, the convergence (16) can be established by an application of the multivariate Breuer–Major theorem [1, Theorem 4]. In order to verify the conditions of this theorem, note that the function $F(x_1,\dots,x_n)=\sum_{i=1}^{n}\alpha_ix_i^2$ has Hermite rank 2; therefore we need to check the convergence of the series
$$\sum_{k=-\infty}^{\infty}\rho^H_{t_it_j}(k)^2<\infty,\quad i,j=1,\dots,n.$$
It follows immediately from the upper bound (7) that these series converge if and only if $H\in\left(0,\frac34\right)$. Thus, the assumptions of [1, Theorem 4] are satisfied, whence the convergence (16) holds with the following asymptotic variance:
$$s^2=\operatorname{Var}\left(F(u(t_1,0),\dots,u(t_n,0))\right)+2\sum_{k=1}^{\infty}\operatorname{cov}\left(F(u(t_1,0),\dots,u(t_n,0)),F(u(t_1,k\delta),\dots,u(t_n,k\delta))\right).$$
It remains to check that this asymptotic variance $s^2$ satisfies (15). By the definition of the function F, we have
$$s^2=\operatorname{Var}\left(\sum_{i=1}^{n}\alpha_iu(t_i,0)^2\right)+2\sum_{k=1}^{\infty}\operatorname{cov}\left(\sum_{i=1}^{n}\alpha_iu(t_i,0)^2,\sum_{j=1}^{n}\alpha_ju(t_j,k\delta)^2\right)$$
$$=\sum_{i=1}^{n}\alpha_i^2\operatorname{Var}\left(u(t_i,0)^2\right)+2\sum_{1\le i<j\le n}\alpha_i\alpha_j\operatorname{cov}\left(u(t_i,0)^2,u(t_j,0)^2\right)+2\sum_{k=1}^{\infty}\sum_{i=1}^{n}\alpha_i^2\operatorname{cov}\left(u(t_i,0)^2,u(t_i,k\delta)^2\right)$$
$$\quad+2\sum_{k=1}^{\infty}\sum_{1\le i<j\le n}\alpha_i\alpha_j\left(\operatorname{cov}\left(u(t_i,0)^2,u(t_j,k\delta)^2\right)+\operatorname{cov}\left(u(t_j,0)^2,u(t_i,k\delta)^2\right)\right).$$
Now we use the following well-known fact: if ξ and η are zero-mean Gaussian random variables, then $\operatorname{cov}(\xi^2,\eta^2)=2\operatorname{cov}(\xi,\eta)^2$; in particular, $\operatorname{Var}(\xi^2)=2\operatorname{Var}(\xi)^2$ (this is a corollary of the Isserlis theorem ). Then we get
$$s^2=2\sum_{i=1}^{n}\alpha_i^2\,\rho^H_{t_it_i}(0)^2+4\sum_{1\le i<j\le n}\alpha_i\alpha_j\,\rho^H_{t_it_j}(0)^2+4\sum_{k=1}^{\infty}\sum_{i=1}^{n}\alpha_i^2\,\rho^H_{t_it_i}(k)^2+4\sum_{k=1}^{\infty}\sum_{1\le i<j\le n}\alpha_i\alpha_j\left(\rho^H_{t_it_j}(k)^2+\rho^H_{t_jt_i}(k)^2\right).$$
Taking into account the equality $\rho^H_{ts}(k)=\rho^H_{st}(k)$, we may rewrite this expression in the more compact form
$$s^2=2\sum_{i=1}^{n}\alpha_i^2\sum_{k=-\infty}^{+\infty}\rho^H_{t_it_i}(k)^2+4\sum_{1\le i<j\le n}\alpha_i\alpha_j\sum_{k=-\infty}^{+\infty}\rho^H_{t_it_j}(k)^2.$$
Thus the equality (15) is verified. This completes the proof of Theorem 1.  □

Parameter estimation

Let us consider the following statistical problem. Suppose that for fixed $t_1,\dots,t_n$ and a fixed step δ > 0, the random field u given by (3) is observed at the points $x_k=k\delta$, $k=1,\dots,N$. So the observations have the form
$$u(t_i,k\delta),\quad i=1,\dots,n,\ k=1,\dots,N.$$
Our aim is to estimate the unknown parameters H, σ and κ. We shall do this in two steps. We start with the construction of a strongly consistent estimator of H that does not depend on κ and σ, and establish its asymptotic normality. Then, assuming that H is known, we estimate the parameters σ and κ.
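All the estimators below are built from the statistic $V_N(t_i)$ of (10), which is simply a row-wise mean of squares over the observed spatial points. A minimal helper (the array layout `u_obs[i, k] = u(t_i, k*delta)` is our assumption, not notation from the paper):

```python
import numpy as np

def V_N(u_obs):
    """V_N(t_i) from (10) for each row of u_obs, where u_obs[i, k] = u(t_i, k*delta)."""
    u_obs = np.asarray(u_obs, dtype=float)
    return np.mean(u_obs**2, axis=1)
```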

In what follows we assume that $H\ne\frac12$, because otherwise the model is non-identifiable. The parameters σ and κ are assumed to be positive.

Estimation of H

In order to estimate H without knowledge of σ and κ, it suffices to observe $u(t_i,x_k)$ at only three time instants $t_1<t_2<t_3$. If we write (12) at these points and replace convergences with equalities, we get the following system of equations:
$$V_N(t_1)=\sigma^2c_Ht_1^{H+1}+\kappa^2c_{1/2}t_1^{3/2},\quad V_N(t_2)=\sigma^2c_Ht_2^{H+1}+\kappa^2c_{1/2}t_2^{3/2},\quad V_N(t_3)=\sigma^2c_Ht_3^{H+1}+\kappa^2c_{1/2}t_3^{3/2}.\tag{17}$$
Eliminating the unknown parameter κ from the system, we obtain
$$t_2^{-3/2}V_N(t_2)-t_1^{-3/2}V_N(t_1)=\sigma^2c_H\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right),\qquad t_3^{-3/2}V_N(t_3)-t_2^{-3/2}V_N(t_2)=\sigma^2c_H\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right).\tag{18}$$
Then, eliminating σ, we arrive at the following estimating equation for H:
$$\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_2^{H-\frac12}-t_1^{H-\frac12}}=\frac{t_3^{-3/2}V_N(t_3)-t_2^{-3/2}V_N(t_2)}{t_2^{-3/2}V_N(t_2)-t_1^{-3/2}V_N(t_1)}.\tag{19}$$
The solution of (19), if it exists, can be viewed as an estimator of H.

Note that the left-hand side of (19) is indeterminate for H = 1/2. However, it is easy to see by l'Hôpital's rule that there exists the limit
$$\lim_{H\to\frac12}\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_2^{H-\frac12}-t_1^{H-\frac12}}=\lim_{H\to\frac12}\frac{t_3^{H-\frac12}\log t_3-t_2^{H-\frac12}\log t_2}{t_2^{H-\frac12}\log t_2-t_1^{H-\frac12}\log t_1}=\frac{\log t_3-\log t_2}{\log t_2-\log t_1}.$$
Therefore, one may define by continuity
$$f(H):=\begin{cases}\dfrac{t_3^{H-1/2}-t_2^{H-1/2}}{t_2^{H-1/2}-t_1^{H-1/2}},&\text{if } H\ne\frac12,\\[3mm] \dfrac{\log t_3-\log t_2}{\log t_2-\log t_1},&\text{if } H=\frac12.\end{cases}\tag{20}$$
Then the estimator of H is defined as
$$\widehat H_N=f^{(-1)}\left(\frac{t_3^{-3/2}V_N(t_3)-t_2^{-3/2}V_N(t_2)}{t_2^{-3/2}V_N(t_2)-t_1^{-3/2}V_N(t_1)}\right),\tag{21}$$
where $f^{(-1)}$ denotes the inverse function of f. In order to prove its existence, we need to establish that $f:\mathbb{R}\to(0,\infty)$ is a one-to-one function. This is true, since f is a strictly increasing function (see Fig. 1), as shown in the following lemma.

Fig. 1. The function f(H) for t₁ = 1, t₂ = 2, t₃ = 3.

Lemma 1. For any $0<t_1<t_2<t_3$, the function $f:\mathbb{R}\to(0,\infty)$ defined by (20) is strictly increasing with respect to H.

Proof. We prove the statement for the case $H\in\left(\frac12,\infty\right)$; the interval $\left(-\infty,\frac12\right)$ is treated similarly. The derivative of f with respect to H equals
$$f'(H)=\frac{\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)\left(t_3^{H-\frac12}\log t_3-t_2^{H-\frac12}\log t_2\right)-\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)\left(t_2^{H-\frac12}\log t_2-t_1^{H-\frac12}\log t_1\right)}{\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)^2}.\tag{22}$$
Therefore, it suffices to prove the inequality
$$\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)\left(t_3^{H-\frac12}\log t_3-t_2^{H-\frac12}\log t_2\right)>\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)\left(t_2^{H-\frac12}\log t_2-t_1^{H-\frac12}\log t_1\right).\tag{23}$$
In order to establish (23), observe that the function $h(x)=x\log x$, $x>0$, is strictly convex (indeed, its second derivative $h''(x)=1/x>0$). This means that for any $\alpha\in(0,1)$, $x>0$ and $y>0$ with $x\ne y$,
$$\alpha h(x)+(1-\alpha)h(y)>h\left(\alpha x+(1-\alpha)y\right).\tag{24}$$
Let us take
$$x=t_3^{H-\frac12}>0,\qquad y=t_1^{H-\frac12}>0,\qquad \alpha=\frac{t_2^{H-\frac12}-t_1^{H-\frac12}}{t_3^{H-\frac12}-t_1^{H-\frac12}}\in(0,1).$$
Then
$$1-\alpha=\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_3^{H-\frac12}-t_1^{H-\frac12}},\qquad \alpha x+(1-\alpha)y=t_2^{H-\frac12},$$
and, since $h\big(t_i^{H-\frac12}\big)=\left(H-\frac12\right)t_i^{H-\frac12}\log t_i$, (24) becomes
$$\frac{t_2^{H-\frac12}-t_1^{H-\frac12}}{t_3^{H-\frac12}-t_1^{H-\frac12}}\left(H-\tfrac12\right)t_3^{H-\frac12}\log t_3+\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_3^{H-\frac12}-t_1^{H-\frac12}}\left(H-\tfrac12\right)t_1^{H-\frac12}\log t_1>\left(H-\tfrac12\right)t_2^{H-\frac12}\log t_2,$$
which is equivalent to (23).  □
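Since f is strictly increasing by Lemma 1, the inverse $f^{(-1)}$ needed in (21) can be evaluated numerically, for instance by bisection. A sketch (the search interval and iteration count are illustrative choices of ours):

```python
import math

def f(H, t1, t2, t3):
    """Function f from (20), extended by continuity at H = 1/2."""
    a = H - 0.5
    if abs(a) < 1e-12:
        return (math.log(t3) - math.log(t2)) / (math.log(t2) - math.log(t1))
    return (t3**a - t2**a) / (t2**a - t1**a)

def f_inv(y, t1, t2, t3, lo=-10.0, hi=10.0):
    """f^{(-1)} via bisection; valid because f is strictly increasing (Lemma 1)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid, t1, t2, t3) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The estimator (21) is then `f_inv(ratio, t1, t2, t3)`, where `ratio` is the right-hand side of (19) computed from the statistics $V_N(t_i)$.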

The above lemma implies that the estimator $\widehat H_N$ is well defined at least for sufficiently large N (when the right-hand side of the estimating equation (19) becomes positive). The asymptotic properties of $\widehat H_N$ are summarized in the following theorem, which is the first main result of the paper.

Theorem 2. 1. For any $H\in\left(0,\frac12\right)\cup\left(\frac12,1\right)$, $\widehat H_N$ is a strongly consistent estimator of the parameter H as $N\to\infty$.

2. For $H\in\left(0,\frac12\right)\cup\left(\frac12,\frac34\right)$, the estimator $\widehat H_N$ is asymptotically normal:
$$\sqrt{N}\left(\widehat H_N-H\right)\xrightarrow{d}\mathcal{N}\left(0,\varsigma^2\right)\quad\text{as } N\to\infty,$$
where
$$\varsigma^2=\frac{1}{D^2\sigma^4c_H^2}\sum_{i,j=1}^{3}r_{t_it_j}(H)\,L_iL_j,$$
$$L_1=\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)t_1^{-3/2},\qquad L_2=\left(t_1^{H-\frac12}-t_3^{H-\frac12}\right)t_2^{-3/2},\qquad L_3=\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)t_3^{-3/2},$$
$$D=\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)\left(t_3^{H-\frac12}\log t_3-t_2^{H-\frac12}\log t_2\right)-\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)\left(t_2^{H-\frac12}\log t_2-t_1^{H-\frac12}\log t_1\right).$$

Remark. The inequality (23) from the proof of Lemma 1 implies that D > 0 for all $H\ne1/2$.

Proof. 1. The strong consistency follows from the construction of the estimator. Indeed, (12) implies that
$$\frac{t_3^{-3/2}V_N(t_3)-t_2^{-3/2}V_N(t_2)}{t_2^{-3/2}V_N(t_2)-t_1^{-3/2}V_N(t_1)}\to f(H)\quad\text{a.s., as } N\to\infty.\tag{25}$$
Then the convergence $\widehat H_N\to H$ a.s. as $N\to\infty$ follows from (21) and (25) due to the continuity of $f^{(-1)}$.
2. Taking expectations in the equalities (18), we get the relations
$$t_2^{-3/2}\mu(t_2)-t_1^{-3/2}\mu(t_1)=\sigma^2c_H\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right),\qquad t_3^{-3/2}\mu(t_3)-t_2^{-3/2}\mu(t_2)=\sigma^2c_H\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right),\tag{26}$$
whence
$$\frac{t_3^{-3/2}\mu(t_3)-t_2^{-3/2}\mu(t_2)}{t_2^{-3/2}\mu(t_2)-t_1^{-3/2}\mu(t_1)}=\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_2^{H-\frac12}-t_1^{H-\frac12}}=f(H),\quad\text{or}\quad H=f^{(-1)}\left(\frac{t_3^{-3/2}\mu(t_3)-t_2^{-3/2}\mu(t_2)}{t_2^{-3/2}\mu(t_2)-t_1^{-3/2}\mu(t_1)}\right).\tag{27}$$
Therefore,
$$\sqrt{N}\left(\widehat H_N-H\right)=\sqrt{N}\left(g(V_N(t_1),V_N(t_2),V_N(t_3))-g(\mu(t_1),\mu(t_2),\mu(t_3))\right),$$
where
$$g(x_1,x_2,x_3)=f^{(-1)}\left(\frac{t_3^{-3/2}x_3-t_2^{-3/2}x_2}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}\right).$$
In order to derive the desired asymptotic normality from the convergence (13), we apply the delta method. Denoting
$$h(x)=\frac{d}{dx}f^{(-1)}(x)=\frac{1}{f'\left(f^{(-1)}(x)\right)},\tag{28}$$
we see that the partial derivatives of g equal
$$g'_1(x_1,x_2,x_3)=h\left(\frac{t_3^{-3/2}x_3-t_2^{-3/2}x_2}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}\right)\cdot\frac{t_1^{-3/2}\left(t_3^{-3/2}x_3-t_2^{-3/2}x_2\right)}{\left(t_2^{-3/2}x_2-t_1^{-3/2}x_1\right)^2},\tag{29}$$
$$g'_2(x_1,x_2,x_3)=-h\left(\frac{t_3^{-3/2}x_3-t_2^{-3/2}x_2}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}\right)\cdot\frac{t_2^{-3/2}\left(t_3^{-3/2}x_3-t_1^{-3/2}x_1\right)}{\left(t_2^{-3/2}x_2-t_1^{-3/2}x_1\right)^2},$$
$$g'_3(x_1,x_2,x_3)=h\left(\frac{t_3^{-3/2}x_3-t_2^{-3/2}x_2}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}\right)\cdot\frac{t_3^{-3/2}}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}.$$
By the delta method, we derive from (13) the desired convergence with
$$\varsigma^2=\sum_{i,j=1}^{3}r_{t_it_j}(H)\,g'_ig'_j\left(\mu(t_1),\mu(t_2),\mu(t_3)\right).$$
It remains to prove that $g'_i(\mu(t_1),\mu(t_2),\mu(t_3))=L_i/(D\sigma^2c_H)$, $i=1,2,3$.
Let us consider in detail the proof of this equality for i = 1; the cases i = 2 and i = 3 are treated similarly. Using successively (27), (28) and (22), we get
$$h\left(\frac{t_3^{-3/2}x_3-t_2^{-3/2}x_2}{t_2^{-3/2}x_2-t_1^{-3/2}x_1}\right)\bigg|_{x_i=\mu(t_i)}=h(f(H))=\frac{1}{f'(H)}=\frac{\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)^2}{D}.$$
Inserting this expression into (29) and taking into account the relations (26), we obtain
$$g'_1\left(\mu(t_1),\mu(t_2),\mu(t_3)\right)=\frac{\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)^2}{D}\cdot\frac{t_1^{-3/2}\,\sigma^2c_H\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)}{\sigma^4c_H^2\left(t_2^{H-\frac12}-t_1^{H-\frac12}\right)^2}=\frac{t_1^{-3/2}\left(t_3^{H-\frac12}-t_2^{H-\frac12}\right)}{D\sigma^2c_H}=\frac{L_1}{D\sigma^2c_H}.$$
Note also that the above representation yields $g'_1(x_1,x_2,x_3)\ne0$ in a neighborhood of the point $(\mu(t_1),\mu(t_2),\mu(t_3))$, which is necessary for applying the delta method. The derivatives $g'_2$ and $g'_3$ are treated similarly.  □

The estimator of H was obtained as a solution to an exponential equation. However, for applications and modeling it would be more convenient to have an explicit form of the estimator. It turns out that in some particular cases the estimating equation can be solved explicitly. Let us consider such an example in more detail. Assume that $t_1=h>0$, $t_2=2h$, $t_3=4h$. Substituting these values into the definition of f, we get
$$f(H)=\frac{t_3^{H-\frac12}-t_2^{H-\frac12}}{t_2^{H-\frac12}-t_1^{H-\frac12}}=\frac{4^{H-\frac12}h^{H-\frac12}-2^{H-\frac12}h^{H-\frac12}}{2^{H-\frac12}h^{H-\frac12}-h^{H-\frac12}}=2^{H-\frac12}.$$
Therefore $f^{(-1)}(x)=\frac12+\log_2x$, $x>0$, and consequently (21) becomes¹

¹ We use $\log_2^+$ rather than $\log_2$ so that the right-hand side of (30) is always well defined. The function $\log_2^+x$ is defined as $\log_2x$ if $x>0$ and 0 if $x\le0$. Note that the expression under $\log_2^+$ in (30) eventually becomes positive, since it converges to $2^{H-1/2}$.

$$\widehat H_N=\frac12+\log_2^+\left(\frac{t_3^{-3/2}V_N(t_3)-t_2^{-3/2}V_N(t_2)}{t_2^{-3/2}V_N(t_2)-t_1^{-3/2}V_N(t_1)}\right).\tag{30}$$
In this case
$$L_1=\left(4^{H-\frac12}-2^{H-\frac12}\right)h^{H-2},\qquad L_2=\left(1-4^{H-\frac12}\right)2^{-3/2}h^{H-2},\qquad L_3=\left(2^{H-\frac12}-1\right)\tfrac18h^{H-2},$$
and
$$D=h^{2H-1}\left[\left(2^{H-\frac12}-1\right)\left(4^{H-\frac12}\log(4h)-2^{H-\frac12}\log(2h)\right)-\left(4^{H-\frac12}-2^{H-\frac12}\right)\left(2^{H-\frac12}\log(2h)-\log h\right)\right]$$
$$=h^{2H-1}\,2^{H-\frac12}\left(2^{H-\frac12}-1\right)\left[2^{H-\frac12}\left(\log(4h)-\log(2h)\right)-\left(\log(2h)-\log h\right)\right]$$
$$=h^{2H-1}\,2^{H-\frac12}\left(2^{H-\frac12}-1\right)\left(2^{H-\frac12}-1\right)\log2=h^{2H-1}\,2^{H-\frac12}\left(2^{H-\frac12}-1\right)^2\log2.$$
Denoting $\ell_i=D^{-1}L_ih^{1+H}\log2$, we arrive at the following result.

Corollary 1. Let $t_1=h>0$, $t_2=2h$, $t_3=4h$. Then the estimator $\widehat H_N$ can be written in the explicit form (30). In this case Theorem 2 holds with
$$\varsigma^2=\frac{1}{\sigma^4c_H^2h^{2+2H}(\log2)^2}\sum_{i,j=1}^{3}r_{t_it_j}(H)\,\ell_i\ell_j,$$
where
$$\ell_1=\frac{1}{2^{H-\frac12}-1},\qquad \ell_2=-\frac{2^{H-\frac12}+1}{2^{H+1}\left(2^{H-\frac12}-1\right)},\qquad \ell_3=\frac{1}{2^{H+\frac52}\left(2^{H-\frac12}-1\right)}.$$
Evidently, the explicit form of the estimator can also be obtained in a slightly more general case, when $t_1=h$, $t_2=ah$, $t_3=a^2h$ with some $a>0$. This leads to an estimator of the form (30) with $\log_a^+$ instead of $\log_2^+$.
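For the geometric grid $t_1=h$, $t_2=2h$, $t_3=4h$, the explicit estimator (30) can be sketched as follows (helper names are ours). Fed with the population values μ(tᵢ) of (11) instead of the statistics $V_N(t_i)$, it returns H exactly, which is a useful sanity check:

```python
import math

def c_H(H):
    """Constant c_H from formula (6)."""
    return 2**(H + 1) * (2**H - 1) * math.gamma(H + 0.5) / (math.sqrt(math.pi) * (H + 1))

def log2_plus(x):
    """log_2^+ : equals log_2(x) for x > 0 and 0 otherwise (see the footnote to (30))."""
    return math.log2(x) if x > 0 else 0.0

def H_hat(V1, V2, V3, h):
    """Explicit estimator (30) for t1 = h, t2 = 2h, t3 = 4h."""
    t1, t2, t3 = h, 2.0 * h, 4.0 * h
    num = t3**-1.5 * V3 - t2**-1.5 * V2
    den = t2**-1.5 * V2 - t1**-1.5 * V1
    return 0.5 + log2_plus(num / den)
```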

Estimation of σ and κ

Now we assume that the Hurst index H is known and investigate the estimation of the coefficients σ and κ. From the first two equations of (17), one can derive the following estimators:
$$\widehat\sigma_N^2=\frac{t_1^{-3/2}V_N(t_1)-t_2^{-3/2}V_N(t_2)}{c_H\left(t_1^{H-1/2}-t_2^{H-1/2}\right)},\qquad \widehat\kappa_N^2=\frac{t_1^{-1-H}V_N(t_1)-t_2^{-1-H}V_N(t_2)}{c_{1/2}\left(t_1^{1/2-H}-t_2^{1/2-H}\right)}.\tag{31}$$
Now we are ready to formulate and prove the second main result of the paper.

Theorem 3. 1. For any $H\in\left(0,\frac12\right)\cup\left(\frac12,1\right)$, $\left(\widehat\sigma_N^2,\widehat\kappa_N^2\right)$ is a strongly consistent estimator of the parameter $(\sigma^2,\kappa^2)$ as $N\to\infty$.

2. For $H\in\left(0,\frac12\right)\cup\left(\frac12,\frac34\right)$, the estimator $\left(\widehat\sigma_N^2,\widehat\kappa_N^2\right)$ is asymptotically normal:
$$\sqrt{N}\begin{pmatrix}\widehat\sigma_N^2-\sigma^2\\ \widehat\kappa_N^2-\kappa^2\end{pmatrix}\xrightarrow{d}\mathcal{N}(0,\Sigma)\quad\text{as } N\to\infty,\tag{32}$$
where the asymptotic covariance matrix Σ consists of the following elements:
$$\Sigma_{11}=\frac{t_1^{-3}\,r_{t_1t_1}(H)-2(t_1t_2)^{-3/2}\,r_{t_1t_2}(H)+t_2^{-3}\,r_{t_2t_2}(H)}{c_H^2\left(t_1^{2H-1}-2(t_1t_2)^{H-\frac12}+t_2^{2H-1}\right)},$$
$$\Sigma_{12}=\Sigma_{21}=\frac{t_1^{-\frac52-H}\,r_{t_1t_1}(H)-\left(t_1^{-3/2}t_2^{-1-H}+t_1^{-1-H}t_2^{-3/2}\right)r_{t_1t_2}(H)+t_2^{-\frac52-H}\,r_{t_2t_2}(H)}{c_Hc_{1/2}\left(2-t_1^{H-\frac12}t_2^{\frac12-H}-t_1^{\frac12-H}t_2^{H-\frac12}\right)},$$
$$\Sigma_{22}=\frac{t_1^{-2-2H}\,r_{t_1t_1}(H)-2(t_1t_2)^{-1-H}\,r_{t_1t_2}(H)+t_2^{-2-2H}\,r_{t_2t_2}(H)}{c_{1/2}^2\left(t_1^{1-2H}-2(t_1t_2)^{\frac12-H}+t_2^{1-2H}\right)}.$$

Proof. Using the definition (31) of the estimator $\widehat\sigma_N^2$, we rewrite the error $\widehat\sigma_N^2-\sigma^2$ in the following form:
$$\widehat\sigma_N^2-\sigma^2=\frac{t_1^{-3/2}V_N(t_1)-t_2^{-3/2}V_N(t_2)-\sigma^2c_Ht_1^{H-\frac12}+\sigma^2c_Ht_2^{H-\frac12}}{c_H\left(t_1^{H-\frac12}-t_2^{H-\frac12}\right)}$$
$$=\frac{1}{c_H\left(t_1^{H-\frac12}-t_2^{H-\frac12}\right)}\left(t_1^{-3/2}\left(V_N(t_1)-\sigma^2c_Ht_1^{H+1}-\kappa^2c_{1/2}t_1^{3/2}\right)-t_2^{-3/2}\left(V_N(t_2)-\sigma^2c_Ht_2^{H+1}-\kappa^2c_{1/2}t_2^{3/2}\right)\right)$$
$$=\frac{t_1^{-3/2}\left(V_N(t_1)-\mu(t_1)\right)-t_2^{-3/2}\left(V_N(t_2)-\mu(t_2)\right)}{c_H\left(t_1^{H-\frac12}-t_2^{H-\frac12}\right)},$$
where the last equality follows from (11). Similarly, one can represent $\widehat\kappa_N^2-\kappa^2$ as follows:
$$\widehat\kappa_N^2-\kappa^2=\frac{t_1^{-1-H}\left(V_N(t_1)-\mu(t_1)\right)-t_2^{-1-H}\left(V_N(t_2)-\mu(t_2)\right)}{c_{1/2}\left(t_1^{\frac12-H}-t_2^{\frac12-H}\right)}.$$
Hence, the random vector in the left-hand side of (32) is a linear transformation of the left-hand side of (13) (for n = 2), namely
$$\sqrt{N}\begin{pmatrix}\widehat\sigma_N^2-\sigma^2\\ \widehat\kappa_N^2-\kappa^2\end{pmatrix}=M\begin{pmatrix}\sqrt{N}\left(V_N(t_1)-\mu(t_1)\right)\\ \sqrt{N}\left(V_N(t_2)-\mu(t_2)\right)\end{pmatrix},\qquad M=\begin{pmatrix}\dfrac{t_1^{-3/2}}{c_H\left(t_1^{H-\frac12}-t_2^{H-\frac12}\right)}&\dfrac{-t_2^{-3/2}}{c_H\left(t_1^{H-\frac12}-t_2^{H-\frac12}\right)}\\[4mm] \dfrac{t_1^{-1-H}}{c_{1/2}\left(t_1^{\frac12-H}-t_2^{\frac12-H}\right)}&\dfrac{-t_2^{-1-H}}{c_{1/2}\left(t_1^{\frac12-H}-t_2^{\frac12-H}\right)}\end{pmatrix}.\tag{33}$$
Therefore, taking into account the convergence (13), we conclude that (33) converges in distribution to a bivariate normal law with the covariance matrix
$$\Sigma=M\begin{pmatrix}r_{t_1t_1}(H)&r_{t_1t_2}(H)\\ r_{t_1t_2}(H)&r_{t_2t_2}(H)\end{pmatrix}M^\top=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\ \Sigma_{12}&\Sigma_{22}\end{pmatrix}.$$
This completes the proof.  □

Maximum likelihood estimation and Fisher information

In this subsection we analyze the efficiency of the estimator $\left(\widehat\sigma_N^2,\widehat\kappa_N^2\right)$ by comparing it to the maximum likelihood estimator (MLE). The MLE is hard to compute; however, it is possible to identify the corresponding Fisher information matrix. For the construction of the MLE we use the same observations as in the previous subsection, namely the observation vector
$$X_N=\left(u(t_1,\delta),u(t_1,2\delta),\dots,u(t_1,N\delta),u(t_2,\delta),u(t_2,2\delta),\dots,u(t_2,N\delta)\right)^\top.$$
Obviously, $X_N$ has a 2N-dimensional Gaussian distribution with probability density
$$f(X_N,\theta)=(2\pi)^{-N}\left(\det\Gamma_N\right)^{-\frac12}\exp\left\{-\frac12X_N^\top\Gamma_N^{-1}X_N\right\},$$
where $\Gamma_N$ is the covariance matrix of $X_N$, that is, the block matrix
$$\Gamma_N=\begin{pmatrix}\left(\rho^H_{t_1t_1}(|i-j|)\right)_{i,j=1}^{N}&\left(\rho^H_{t_2t_1}(|i-j|)\right)_{i,j=1}^{N}\\[2mm] \left(\rho^H_{t_2t_1}(|i-j|)\right)_{i,j=1}^{N}&\left(\rho^H_{t_2t_2}(|i-j|)\right)_{i,j=1}^{N}\end{pmatrix}.$$
Due to (9), this matrix can be decomposed as $\Gamma_N=\sigma^2\Gamma_N^b+\kappa^2\Gamma_N^w$, where $\Gamma_N^b$ and $\Gamma_N^w$ are the covariance matrices of
$$\left(u_b(t_1,\delta),u_b(t_1,2\delta),\dots,u_b(t_1,N\delta),u_b(t_2,\delta),u_b(t_2,2\delta),\dots,u_b(t_2,N\delta)\right)$$
and
$$\left(u_w(t_1,\delta),u_w(t_1,2\delta),\dots,u_w(t_1,N\delta),u_w(t_2,\delta),u_w(t_2,2\delta),\dots,u_w(t_2,N\delta)\right),$$
respectively. The log-likelihood function is
$$\ell(X_N,\theta)=-N\log(2\pi)-\frac12\log\left(\det\Gamma_N\right)-\frac12X_N^\top\Gamma_N^{-1}X_N.$$
Then the MLE of $\theta=(\sigma^2,\kappa^2)$ is obtained as a solution to the following system of equations:
$$\frac{\partial\ell(X_N,\theta)}{\partial\sigma^2}=-\frac12\operatorname{tr}\left(\Gamma_N^{-1}\Gamma_N^b\right)+\frac12X_N^\top\Gamma_N^{-1}\Gamma_N^b\Gamma_N^{-1}X_N=0,\tag{34}$$
$$\frac{\partial\ell(X_N,\theta)}{\partial\kappa^2}=-\frac12\operatorname{tr}\left(\Gamma_N^{-1}\Gamma_N^w\right)+\frac12X_N^\top\Gamma_N^{-1}\Gamma_N^w\Gamma_N^{-1}X_N=0\tag{35}$$
(here and in what follows we use the standard differentiation formulas for matrix-valued functions of a parameter,²

² For square matrix-valued functions X and Y of a scalar parameter: $(XY)'=X'Y+XY'$, $\left(X^{-1}\right)'=-X^{-1}X'X^{-1}$, $(\ln(\det X))'=\operatorname{tr}\left(X^{-1}X'\right)$, $(\operatorname{tr}(X))'=\operatorname{tr}(X')$.

see, e. g.,  for more details).

The maximum likelihood estimator $\widehat\theta_N^{\,mle}$ of θ can hardly be written in explicit form, since the estimating equations involve the inverse matrix $\Gamma_N^{-1}$, which depends nonlinearly on σ² and κ². However, using the general theory of maximum likelihood estimation for dependent observations , it is possible to establish the asymptotic normality in the form
$$T_N(\theta)^{\frac12}\left(\widehat\theta_N^{\,mle}-\theta\right)\xrightarrow{d}\mathcal{N}(0,I_2)\quad\text{as } N\to\infty,\tag{36}$$
where $T_N(\theta)$ is the Fisher information matrix and $I_2$ is the 2 × 2 identity matrix. A rigorous proof of (36), as well as a careful analysis of the asymptotic behavior of $T_N(\theta)$, requires additional investigation. To the best of our knowledge, even for the much simpler model of the mixed fractional Brownian motion this problem has not been completely solved yet; see the recent paper  for details. Therefore, here we restrict ourselves to the identification of the matrix $T_N(\theta)$.

The Fisher information matrix $T_N(\theta)$ has the form
$$T_N(\theta)=\begin{pmatrix}\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\right)&\frac12\operatorname{tr}\left(\Gamma_N^{-1}\Gamma_N^b\Gamma_N^{-1}\Gamma_N^w\right)\\[2mm] \frac12\operatorname{tr}\left(\Gamma_N^{-1}\Gamma_N^b\Gamma_N^{-1}\Gamma_N^w\right)&\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^w\right)^2\right)\end{pmatrix}.$$

Proof. In order to identify $T_N(\theta)$, let us calculate the second derivatives. Note that
$$\frac{\partial}{\partial\sigma^2}\left(\Gamma_N^{-1}\Gamma_N^b\right)=\left(\frac{\partial}{\partial\sigma^2}\Gamma_N^{-1}\right)\Gamma_N^b=-\Gamma_N^{-1}\Gamma_N^b\Gamma_N^{-1}\Gamma_N^b=-\left(\Gamma_N^{-1}\Gamma_N^b\right)^2,$$
$$\frac{\partial}{\partial\sigma^2}\left(\Gamma_N^{-1}\Gamma_N^b\Gamma_N^{-1}\right)=\left(\frac{\partial}{\partial\sigma^2}\Gamma_N^{-1}\right)\Gamma_N^b\Gamma_N^{-1}+\Gamma_N^{-1}\Gamma_N^b\frac{\partial}{\partial\sigma^2}\Gamma_N^{-1}=-2\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\Gamma_N^{-1}.$$
Hence,
$$\frac{\partial^2\ell(X_N,\theta)}{\partial(\sigma^2)^2}=\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\right)-X_N^\top\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\Gamma_N^{-1}X_N.$$
Taking expectations, we obtain that the corresponding element of the Fisher information matrix equals
$$-\mathbf{E}\left[\frac{\partial^2\ell(X_N,\theta)}{\partial(\sigma^2)^2}\right]=-\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\right)+\mathbf{E}\left[X_N^\top\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\Gamma_N^{-1}X_N\right]=\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^b\right)^2\right),$$
since for any matrix $A=(a_{ij})_{i,j=1}^{2N}$ we have the equality $\mathbf{E}\left[X_N^\top AX_N\right]=\sum_{i,j}a_{ij}\mathbf{E}X_iX_j=\operatorname{tr}(A\Gamma_N)$. Arguing as above, one can write the derivatives
$$\frac{\partial^2\ell(X_N,\theta)}{\partial(\kappa^2)^2}=\frac12\operatorname{tr}\left(\left(\Gamma_N^{-1}\Gamma_N^w\right)^2\right)-X_N^\top\left(\Gamma_N^{-1}\Gamma_N^w\right)^2\Gamma_N^{-1}X_N,$$
$$\frac{\partial^2\ell(X_N,\theta)}{\partial\sigma^2\partial\kappa^2}=\frac12\operatorname{tr}\left(\Gamma_N^{-1}\Gamma_N^w\Gamma_N^{-1}\Gamma_N^b\right)-\frac12X_N^\top\Gamma_N^{-1}\left(\Gamma_N^b\Gamma_N^{-1}\Gamma_N^w+\Gamma_N^w\Gamma_N^{-1}\Gamma_N^b\right)\Gamma_N^{-1}X_N,$$
and calculate their expectations, identifying the remaining elements of $T_N(\theta)$.  □
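The trace formulas above can be verified numerically: $T_N(\theta)$ is the Hessian of $\theta\mapsto\frac12\log\det\Gamma_N(\theta)+\frac12\operatorname{tr}\left(\Gamma_N(\theta)^{-1}\Gamma_N(\theta_0)\right)$ at $\theta=\theta_0$ (the expected negative log-likelihood up to a constant). A sketch with small synthetic positive definite matrices standing in for $\Gamma_N^b$ and $\Gamma_N^w$ (the matrices and parameter values are illustrative, not from the paper):

```python
import numpy as np

def fisher_info(Gb, Gw, sigma2, kappa2):
    """Fisher information T_N(theta) for Gamma_N = sigma2*Gb + kappa2*Gw."""
    G = sigma2 * Gb + kappa2 * Gw
    Mb = np.linalg.solve(G, Gb)   # Gamma^{-1} Gamma^b
    Mw = np.linalg.solve(G, Gw)   # Gamma^{-1} Gamma^w
    return 0.5 * np.array([[np.trace(Mb @ Mb), np.trace(Mb @ Mw)],
                           [np.trace(Mb @ Mw), np.trace(Mw @ Mw)]])
```

The finite-difference check below confirms that each entry of `fisher_info` matches the corresponding second derivative of the expected negative log-likelihood.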

1. Similarly to the previous subsection, in the case $H=\frac12$ it is impossible to estimate both parameters σ² and κ² simultaneously; only estimation of the sum σ² + κ² is possible. In this case $\Gamma_N^b=\Gamma_N^w$, therefore the estimating equations (34) and (35) coincide.

2. The results of this subsection remain valid for any other observation vector of the form $X=\left(u(t_i,x_k),\ i=1,\dots,M,\ k=1,\dots,N\right)$ with covariance matrix Γ (with the decomposition $\Gamma=\sigma^2\Gamma^b+\kappa^2\Gamma^w$).

3. A similar approach can be applied to the case when H is unknown, that is, to the problem of estimating all three parameters σ², κ² and H.

Simulations

Let us illustrate the theoretical properties of the estimators by some numerical results. We consider the model with the coefficients σ = κ = 1 for various values of H. For each value of the Hurst index H, we simulate 50 sample paths of the solution u(t, x) to the equation (1). The trajectories of the solution are generated by discretizing the formula (3).
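The paper's trajectories come from discretizing the formula (3), which is not reproduced in this section. As a related illustration, the driving mixed process σB^H + κW itself can be sampled exactly in law on a finite grid by a Cholesky factorization of its covariance matrix (the covariances of the two independent components simply add). All grid parameters below are hypothetical:

```python
import numpy as np

def mixed_paths(n_paths, N, delta, H, sigma, kappa, rng):
    """Sample sigma*B^H + kappa*W at x = delta, 2*delta, ..., N*delta (exact in law)."""
    x = delta * np.arange(1, N + 1)
    s, t = np.meshgrid(x, x, indexing="ij")
    # Independent components, so the covariances add:
    cov = (sigma ** 2 * 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(s - t) ** (2 * H))
           + kappa ** 2 * np.minimum(s, t))
    L = np.linalg.cholesky(cov)                    # cov = L @ L.T
    return (L @ rng.standard_normal((N, n_paths))).T

rng = np.random.default_rng(2023)
paths = mixed_paths(n_paths=2000, N=64, delta=1.0, H=0.7, sigma=1.0, kappa=1.0, rng=rng)
# The sample variance at x = 1 should be close to sigma^2 * 1^{2H} + kappa^2 * 1 = 2
```

Cholesky sampling is exact but costs O(N³); for long grids, circulant-embedding methods for fBm increments are the usual alternative.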

We choose t₁ = 0.25, t₂ = 0.5, t₃ = 1 as the moments of observation, so that the conditions of Corollary 1 are satisfied and the estimator of H can be computed by the explicit formula (30). For each t_i we observe u(t_i, kδ), k = 1, …, N, with the step δ = 1.

Table 1. Means and standard deviations of the estimator Ĥ_N

            N          2^8       2^9       2^10      2^11      2^12
  H = 0.1   Mean      −0.0121   −0.0299    0.0353    0.0527    0.0682
            Std. dev.  0.5237    0.5552    0.2061    0.1242    0.0752
  H = 0.2   Mean       0.1616    0.1415    0.1910    0.1961    0.1799
            Std. dev.  0.3419    0.2243    0.1540    0.0897    0.0692
  H = 0.3   Mean       0.1543    0.2845    0.2685    0.2955    0.2999
            Std. dev.  0.4451    0.4177    0.1930    0.1314    0.0781
  H = 0.4   Mean       0.2254    0.2725    0.3129    0.2854    0.3313
            Std. dev.  0.8089    0.8803    0.4661    0.1608    0.1299
  H = 0.6   Mean       0.4563    0.3495    0.5266    0.5618    0.5775
            Std. dev.  0.7108    0.9010    0.2926    0.1992    0.1108
  H = 0.7   Mean       0.6384    0.7024    0.7160    0.7151    0.6980
            Std. dev.  0.3391    0.1512    0.1065    0.0848    0.0511
  H = 0.8   Mean       0.8160    0.8042    0.8073    0.8022    0.8074
            Std. dev.  0.1929    0.0922    0.0644    0.0467    0.0333
  H = 0.9   Mean       0.8583    0.8722    0.8815    0.8939    0.8958
            Std. dev.  0.1216    0.0772    0.0671    0.0474    0.0334

Table 2. Means and standard deviations of the estimator σ̂²_N for σ = 1, κ = 1

            N          2^8       2^9       2^10      2^11      2^12
  H = 0.1   Mean       1.0084    1.0511    1.0571    1.0405    1.0440
            Std. dev.  0.3971    0.2933    0.1829    0.1355    0.0909
  H = 0.2   Mean       1.0963    1.0478    1.0407    1.0183    1.0170
            Std. dev.  0.3742    0.2370    0.1888    0.1261    0.1014
  H = 0.3   Mean       1.0107    1.0437    0.9616    0.9939    1.0032
            Std. dev.  0.4530    0.3618    0.2454    0.1673    0.1216
  H = 0.4   Mean       0.8117    1.0042    1.0566    1.0964    1.0660
            Std. dev.  0.9423    0.7003    0.4789    0.3048    0.2167
  H = 0.6   Mean       1.0905    1.1579    1.0793    1.0667    1.0673
            Std. dev.  0.6755    0.5750    0.3875    0.2967    0.2154
  H = 0.7   Mean       0.9536    1.0545    1.0653    1.0106    0.9889
            Std. dev.  0.4650    0.3498    0.2526    0.1649    0.1202
  H = 0.8   Mean       1.0171    1.0216    1.0287    1.0252    1.0095
            Std. dev.  0.4619    0.2805    0.2465    0.1720    0.1368
  H = 0.9   Mean       1.1814    1.1236    1.0738    1.0391    1.0308
            Std. dev.  0.8814    0.7882    0.6851    0.4922    0.3681

Table 3. Means and standard deviations of the estimator κ̂²_N for σ = 1, κ = 1

            N          2^8       2^9       2^10      2^11      2^12
  H = 0.1   Mean       1.0472    1.0133    1.0005    0.9953    0.9886
            Std. dev.  0.1664    0.1483    0.0963    0.0606    0.0442
  H = 0.2   Mean       0.9608    0.9720    0.9732    0.9876    0.9859
            Std. dev.  0.2430    0.1712    0.1338    0.0834    0.0720
  H = 0.3   Mean       0.9899    0.9638    1.0355    1.0075    1.0032
            Std. dev.  0.4197    0.3056    0.2059    0.1472    0.1017
  H = 0.4   Mean       1.2052    1.0076    0.9417    0.9092    0.9365
            Std. dev.  0.8747    0.6671    0.4480    0.2988    0.2188
  H = 0.6   Mean       0.8934    0.8285    0.9070    0.9258    0.9279
            Std. dev.  0.6869    0.5600    0.3733    0.2869    0.2137
  H = 0.7   Mean       1.0483    0.9526    0.9519    0.9863    1.0040
            Std. dev.  0.4351    0.3250    0.2228    0.1448    0.1078
  H = 0.8   Mean       0.9996    0.9938    0.9905    0.9848    0.9974
            Std. dev.  0.3113    0.1941    0.1429    0.1031    0.0795
  H = 0.9   Mean       1.0017    0.9877    0.9767    0.9870    0.9945
            Std. dev.  0.2985    0.2311    0.2072    0.1480    0.1084

Table 1 reports the means and standard deviations of Ĥ_N for various H and N. We see that the estimates converge to the true value of the parameter H. However, the convergence is much slower than for the estimation of H in the pure fractional case (i.e., κ = 0), which was considered in . The results become poorer when H approaches 1/2 or when H is close to zero. It is worth mentioning that the best performance of Ĥ_N is observed for large values of H (0.8 and 0.9), for which the asymptotic normality does not hold.

The means and standard deviations of the estimates σ̂²_N and κ̂²_N are reported in Tables 2–3. Here we also clearly see that the estimators converge to the true values of the parameters; however, the results for both estimators become worse when H is close to 1/2. We observe that, unlike Ĥ_N, the estimator σ̂²_N converges slowly for H = 0.9, demonstrating better results for small H. Note that a similar situation for the coefficient at the fractional Brownian motion takes place in the pure fractional case, see .

References

Arcones, M.A.: Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Ann. Probab. 22(4), 2242–2274 (1994). https://doi.org/10.1214/aop/1176988503
Avetisian, D., Ralchenko, K.: Ergodic properties of the solution to a fractional stochastic heat equation, with an application to diffusion parameter estimation. Mod. Stoch. Theory Appl. 7(3), 339–356 (2020). https://doi.org/10.15559/20-VMSTA162
Avetisian, D.A., Ralchenko, K.V.: Estimation of the Hurst and diffusion parameters in fractional stochastic heat equation. Theory Probab. Math. Stat. 104, 61–76 (2021). https://doi.org/10.1090/tpms/1145
Avetisian, D.A., Shevchenko, G.M.: Estimation of diffusion parameter for stochastic heat equation with white noise. Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics & Mathematics 3, 9–16 (2018). https://doi.org/10.17721/1812-5409.2018/3.1
Bibinger, M., Trabs, M.: Volatility estimation for stochastic PDEs using high-frequency observations. Stoch. Process. Appl. 130(5), 3005–3052 (2020). MR4080737. https://doi.org/10.1016/j.spa.2019.09.002
Cai, C., Chigansky, P., Kleptsyna, M.: Mixed Gaussian processes: A filtering approach. Ann. Probab. 44(4), 3032–3075 (2016). MR3531685. https://doi.org/10.1214/15-AOP1041
Cai, C.H., Huang, Y.Z., Sun, L., Xiao, W.L.: Maximum likelihood estimation for mixed fractional Vasicek processes. Fractal Fract. 6(1), 44 (2022). https://doi.org/10.3390/fractalfract6010044
Cheridito, P.: Mixed fractional Brownian motion. Bernoulli 7(6), 913–934 (2001). MR1873835. https://doi.org/10.2307/3318626
Cialenco, I., Huang, Y.: A note on parameter estimation for discretely sampled SPDEs. Stoch. Dyn. 20(3), 2050016 (2020). MR4101083. https://doi.org/10.1142/S0219493720500161
Cialenco, I., Kim, H.-J.: Parameter estimation for discretely sampled stochastic heat equation driven by space-only noise. Stoch. Process. Appl. 143, 1–30 (2022). MR4332773. https://doi.org/10.1016/j.spa.2021.09.012
Crowder, M.J.: Maximum likelihood estimation for dependent observations. J. R. Stat. Soc. B 38(1), 45–53 (1976). MR0403035
Da Silva, J.L., Erraoui, M.: Mixed stochastic differential equations: Existence and uniqueness result. J. Theor. Probab. 31(2), 1119–1141 (2018). MR3803926. https://doi.org/10.1007/s10959-016-0738-9
Ditlevsen, S., De Gaetano, A.: Mixed effects in stochastic differential equation models. REVSTAT 3(2), 137–153 (2005). MR2259358
Dozzi, M., Mishura, Y., Shevchenko, G.: Asymptotic behavior of mixed power variations and statistical estimation in mixed models. Stat. Inference Stoch. Process. 18(2), 151–175 (2015). MR3348583. https://doi.org/10.1007/s11203-014-9106-5
Dufitinema, J., Pynnönen, S., Sottinen, T.: Maximum likelihood estimators from discrete data modeled by mixed fractional Brownian motion with application to the Nordic stock markets. Commun. Stat., Simul. Comput. 51(9), 5264–5287 (2022). MR4491681. https://doi.org/10.1080/03610918.2020.1764581
Guerra, J., Nualart, D.: Stochastic differential equations driven by fractional Brownian motion and standard Brownian motion. Stoch. Anal. Appl. 26(5), 1053–1075 (2008). MR2440915. https://doi.org/10.1080/07362990802286483
Han, M., Xu, Y., Pei, B.: Mixed stochastic differential equations: averaging principle result. Appl. Math. Lett. 112, 106705 (2021). MR4145479. https://doi.org/10.1016/j.aml.2020.106705
Isserlis, L.: On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika 12(1/2), 134–139 (1918). https://doi.org/10.1093/biomet/12.1-2.134
Kozachenko, Y., Melnikov, A., Mishura, Y.: On drift parameter estimation in models with fractional Brownian motion. Statistics 49(1), 35–62 (2015). MR3304366. https://doi.org/10.1080/02331888.2014.907294
Kubilius, K., Mishura, Y., Ralchenko, K.: Parameter Estimation in Fractional Diffusion Models. Bocconi & Springer Series, vol. 8, p. 390. Bocconi University Press, Milan; Springer, Cham (2017). MR3752152. https://doi.org/10.1007/978-3-319-71030-3
Kukush, A., Lohvinenko, S., Mishura, Y., Ralchenko, K.: Two approaches to consistent estimation of parameters of mixed fractional Brownian motion with trend. Stat. Inference Stoch. Process. 25(1), 159–187 (2022). MR4419677. https://doi.org/10.1007/s11203-021-09252-6
Markussen, B.: Likelihood inference for a discretely observed stochastic partial differential equation. Bernoulli 9(5), 745–762 (2003). MR2047684. https://doi.org/10.3150/bj/1066418876
Mishura, Y., Ralchenko, K., Shklyar, S.: Maximum likelihood drift estimation for Gaussian process with stationary increments. Austrian Journal of Statistics 46(3–4), 67–78 (2017). https://doi.org/10.17713/ajs.v46i3-4.672
Mishura, Y., Ralchenko, K., Shevchenko, G.: Existence and uniqueness of mild solution to stochastic heat equation with white and fractional noises. Theory Probab. Math. Stat. 98, 149–170 (2019). https://doi.org/10.1090/tpms/1068
Mishura, Y.S., Shevchenko, G.M.: Rate of convergence of Euler approximations of solution to mixed stochastic differential equation involving Brownian motion and fractional Brownian motion. Random Oper. Stoch. Equ. 19(4), 387–406 (2011). MR2871847. https://doi.org/10.1515/ROSE.2011.021
Mishura, Y.S., Shevchenko, G.M.: Existence and uniqueness of the solution of stochastic differential equation involving Wiener process and fractional Brownian motion with Hurst index H > 1/2. Commun. Stat., Theory Methods 40(19–20), 3492–3508 (2011). MR2860753. https://doi.org/10.1080/03610926.2011.581174
Mishura, Y., Shevchenko, G.: Mixed stochastic differential equations with long-range dependence: Existence, uniqueness and convergence of solutions. Comput. Math. Appl. 64(10), 3217–3227 (2012). MR2989350. https://doi.org/10.1016/j.camwa.2012.03.061
Mishura, Y., Zili, M.: Stochastic Analysis of Mixed Fractional Gaussian Processes, p. 194. ISTE Press, London; Elsevier Ltd, Oxford (2018). MR3793191
Patrizio, C.R., Thompson, D.W.: Understanding the role of ocean dynamics in midlatitude sea surface temperature variability using a simple stochastic climate model. J. Climate 35(11), 3313–3333 (2022). https://doi.org/10.1175/JCLI-D-21-0184.1
Piterbarg, L., Rozovskii, B.: Maximum likelihood estimators in the equations of physical oceanography. In: Stochastic Modelling in Physical Oceanography. Progr. Probab., vol. 39, pp. 397–421. Birkhäuser Boston, Boston, MA (1996)
Prakasa Rao, B.L.S.: Maximum likelihood estimation in the mixed fractional Vasicek model. Journal of the Indian Society for Probability and Statistics 22 (2021). https://doi.org/10.1007/s41096-020-00094-8
Schott, J.R.: Matrix Analysis for Statistics, 2nd edn. Wiley Series in Probability and Statistics. Wiley-Interscience, John Wiley & Sons, Hoboken, NJ (2005)
Shevchenko, G.: Mixed stochastic delay differential equations. Theory Probab. Math. Stat. 89, 181–195 (2014). MR3235184. https://doi.org/10.1090/S0094-9000-2015-00944-3
Vergara, R.C.: Development of geostatistical models using stochastic partial differential equations. PhD thesis, MINES ParisTech (2018). http://cg.ensmp.fr/bibliotheque/public/CARRIZO_These_02513.pdf
Wiqvist, S., Golightly, A., McLean, A.T., Picchini, U.: Efficient inference for stochastic differential equation mixed-effects models using correlated particle pseudo-marginal algorithms. Comput. Stat. Data Anal. 157, 107151 (2021). MR4192029. https://doi.org/10.1016/j.csda.2020.107151
Zili, M.: On the mixed fractional Brownian motion. J. Appl. Math. Stoch. Anal., Art. ID 32435 (2006). MR2253522. https://doi.org/10.1155/JAMSA/2006/32435