1 Introduction and preliminaries
The first attempt to formulate the long-term memory of time series was made in hydrology, when Hurst (1951) and his colleagues were studying the fluctuations of the reservoir of the Nile river over a long period of time (see [16]). Later, after the work of Mandelbrot (1968) in [24], it was clarified that this behavior of time series is due to the presence of long-range dependent noise, called fractional Brownian motion (fBm) ${B^{H}}$. As for the sources of this noise in hydrology, it accumulates from factors such as waterfalls, glacier melting, riverbed shape and material, slope and direction, width and depth, local temperature, etc. Moreover, a river (especially a large one like the Nile) is a combination of many sub-rivers, and each sub-river (or even a large river) is itself fed by many sources such as streams, mountain glaciers, underground water reservoirs, and rainfall in general. Now, one may ask how many such sources there are. The answer of nature, in practice, is: infinitely many!
So, suppose source i contributes the noise ${B^{{H_{i}}}}$ with effect weight ${\sigma _{i}}$ to the river reservoir; then the total noise of the river can be written as the linear combination
On the other hand, concerning the dynamics of a particle in a liquid, Langevin (1908) in [21] modeled the particle's velocity U with an equation that was later revised by Doob (1942) [9] as
where $\lambda \gt 0$ is the mean reversion parameter and M is a noise caused by the fluctuating force that the molecules of the surrounding medium impose on the particle. If ${U_{0}}=\xi $, then the unique solution of this equation is
\[ {U_{t}}={\mathrm{e}^{-\lambda t}}\xi +{\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}{M_{s}}.\]
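In the Brownian case $M=W$ this solution is the classical Ornstein–Uhlenbeck process, and the formula can be checked by simulation. The following sketch (our illustration; the parameter values are assumptions, not from the original sources) uses the exact Gaussian transition of the OU process and verifies that the stationary variance is $1/(2\lambda)$:

```python
import numpy as np

# Exact-in-distribution simulation of the Langevin solution
# U_t = e^{-lam t} xi + int_0^t e^{-lam (t-s)} dW_s
# for the Brownian case M = W (classical Ornstein-Uhlenbeck process).
rng = np.random.default_rng(0)
lam, dt, n_steps, n_paths = 2.0, 0.01, 1000, 20000

decay = np.exp(-lam * dt)
# conditional variance of U_{t+dt} given U_t is (1 - e^{-2 lam dt}) / (2 lam)
step_sd = np.sqrt((1.0 - decay**2) / (2.0 * lam))

U = np.zeros(n_paths)          # start from xi = 0
for _ in range(n_steps):
    U = decay * U + step_sd * rng.standard_normal(n_paths)

# at t = 10 the process is essentially stationary: Var = 1/(2 lam) = 0.25
print(U.var())
```

The recursion is exact in distribution for the Brownian driver; for an fBm driver the increments of the stochastic integral are no longer independent, so this shortcut does not apply.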
This solution was first given for the cases where M is a semimartingale; later, Cheridito et al. (2003) in [8] confirmed it for the case where the noise process is an fBm, $M={B^{H}}$. Now, suppose the liquid is not perfectly homogeneous, so that the local surrounding molecules (particles) impose different forces according to their different sizes, weights, densities, or dynamic patterns. Hence, if molecule (particle) i imposes the noise force ${B^{{H_{i}}}}$ with effect weight ${\sigma _{i}}$ on the free particle, then the Langevin equation takes the form
Fig. 1.
Free particle movement in a liquid bombarded by multiple molecules (particles) imposing multi-mixed fBm noises
In this article, our aim is to develop the analysis and some properties of the stochastic processes in equations (1) and (2). To do this, we first review some mathematical concepts. The fractional Brownian motion (fBm) ${B^{H}}$, with parameter $H\in (0,1)$ called the Hurst index, is the unique (up to a multiplicative constant) centered H-self-similar stationary-increment Gaussian process. The fBm was first studied in [19]. The name fractional Brownian motion comes from the influential article [24]. For further information on the fBm, see the monographs [6, 25]. The covariance of the fBm with the Hurst index H is given by
For $H=1/2$ this process is well known as the Brownian motion (BM) or the Wiener process: ${B^{\frac{1}{2}}}=W$. As a stationary-increment process, the fBm has the spectral representation
\[ {r_{H}}(t,s)={\int _{\mathbb{R}}}\frac{({\mathrm{e}^{\mathrm{i}sx}}-1)({\mathrm{e}^{\mathrm{i}tx}}-1)}{{x^{2}}}{f_{H}}(x)\hspace{0.1667em}\mathrm{d}x,\]
where
Here Γ is the complete gamma function
\[ \Gamma (\alpha )={\int _{0}^{\infty }}{t^{\alpha -1}}{\mathrm{e}^{-t}}\hspace{0.1667em}\mathrm{d}t,\]
see [28].
Let
\[ {\varrho _{H}}(\delta ;t)=\mathbb{E}\left[({B_{\delta }^{H}}-{B_{0}^{H}})({B_{t+\delta }^{H}}-{B_{t}^{H}})\right]\]
be the incremental autocovariance (with lag δ) of the fBm. For $t\to \infty $ we have the power decay
This means that the increments of the fBm, called the fractional Gaussian noise (fGn), are positively correlated and long-range dependent for $H\gt \frac{1}{2}$, and negatively correlated and short-range dependent for $H\lt \frac{1}{2}$. In the Bm case ${B^{\frac{1}{2}}}=W$ we have independent increments, i.e. no dependence:
The fBm has almost surely Hölder continuous paths with any order $H-\varepsilon $ for any $\varepsilon \gt 0$. This follows, e.g., from Theorem 1 of [2].
In addition to Hölder continuity, the p-variation serves as a measure of path regularity. For a process X, $p\in [1,\infty )$, and the partitions ${\pi _{n}}:=\{{t_{k}}=\frac{k}{n}T:k=0,1,\dots ,n\}$, if
\[ {V_{T}^{p}}(X):=\underset{n\to \infty }{\lim }{\sum \limits_{k=1}^{n}}|{X_{{t_{k}}}}-{X_{{t_{k-1}}}}{|^{p}}\lt \infty \hspace{1em}\text{(limit in probability)},\]
then X is said to have equidistant p-variation on $[0,T]$, and its p-variation on $[0,T]$ is ${V_{T}^{p}}(X)$. For the fBm ${B^{H}}$ the p-variation is
\[ {V_{T}^{p}}({B^{H}})=\left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\infty & ;& pH\lt 1\\ {} T{\mu _{p}}& ;& pH=1\\ {} 0& ;& pH\gt 1\end{array}\right.\]
where ${\mu _{p}}$ is the pth absolute moment of a standard Gaussian random variable, see [10, 11].
While the fBm has been proposed as a model for financial time series, modeling with it makes arbitrage possible, see [4]. To eliminate this problem, a generalization called the mixed fractional Brownian motion (mfBm) was introduced in [7]. This is the mixture model
where $a,b\in \mathbb{R}$ and B is a standard Brownian motion (Bm) independent of the fBm ${B^{H}}$. If $H\gt 1/2$, the path roughness of the mfBm is governed by the Bm part and its long-range dependence by the fBm part. Hence, e.g., in the pricing of financial derivatives the corresponding mixed Black–Scholes model yields the same option prices as the standard Brownian model, see [5].
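The p-variation trichotomy stated above can be illustrated numerically in the simplest case $H=1/2$, $p=2$, where ${V_{T}^{2}}(W)=T{\mu _{2}}=T$ since ${\mu _{2}}=\mathbb{E}[N(0,1)^{2}]=1$ (a sketch with illustrative grid size and horizon of our choosing):

```python
import numpy as np

# Sanity check of the p-variation formula in the Brownian case H = 1/2, p = 2:
# V_T^2(W) = T * mu_2 = T, since mu_2 = E[N(0,1)^2] = 1.
rng = np.random.default_rng(1)
T, n = 2.0, 200_000
increments = rng.standard_normal(n) * np.sqrt(T / n)   # W_{t_k} - W_{t_{k-1}}
qv = np.sum(increments**2)                             # sum over the partition pi_n
print(qv)   # close to T = 2
```

For $pH\lt 1$ the analogous sum blows up as the mesh refines, and for $pH\gt 1$ it vanishes, in line with the trichotomy.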
A natural generalization of the mfBm is to consider two (or n) independent fBm mixtures, see [23]. In this paper, we study an independent infinite-mixture generalization that we call the multi-mixed fractional Brownian motion (mmfBm) with parameters ${\sigma _{k}}$, ${H_{k}}$, $k\in \mathbb{N}$:
where ${B^{{H_{k}}}}$’s are independent fBm’s with Hurst indices ${H_{k}}\in (0,1)$, and ${\sigma _{k}}$’s are positive volatility constants satisfying ${\textstyle\sum _{k=1}^{\infty }}{\sigma _{k}^{2}}\lt \infty $. This study extends the work of [30].
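A finite truncation of the mmfBm series can be sampled exactly on a grid via the Cholesky factor of each fBm covariance. The following sketch (the weights, Hurst indices, and grid are illustrative assumptions) checks the variance identity $\mathbb{E}[{M_{t}^{2}}]={\textstyle\sum _{k}}{\sigma _{k}^{2}}{t^{2{H_{k}}}}$ empirically:

```python
import numpy as np

# Simulate a truncated mmfBm M_t = sum_k sigma_k B^{H_k}_t by exact
# (Cholesky) sampling of each independent fBm on a fixed grid.
def fbm_paths(H, t, n_paths, rng):
    # fBm covariance r_H(t,s) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))   # tiny jitter
    return rng.standard_normal((n_paths, len(t))) @ L.T

rng = np.random.default_rng(2)
t = np.linspace(0.02, 1.0, 50)                  # avoid t = 0 (degenerate row)
sigmas, hursts = [1.0, 0.5, 0.25], [0.8, 0.5, 0.2]   # example parameters

M = sum(s * fbm_paths(H, t, 4000, rng) for s, H in zip(sigmas, hursts))

# variance at the terminal time: E[M_t^2] = sum_k sigma_k^2 t^{2 H_k}
var_exact = sum(s**2 * t[-1]**(2 * H) for s, H in zip(sigmas, hursts))
print(M[:, -1].var(), var_exact)
```

Cholesky sampling costs $O({n^{3}})$ in the grid size; for long grids, circulant-embedding methods are the usual faster alternative.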
The fractional Ornstein–Uhlenbeck process (fOU) ${U^{\lambda ,H}}$, with parameters $\lambda \gt 0$ and $H\in (0,1)$, is the stationary solution of the Langevin equation
which is given by
and ${\gamma _{0}}(x)=1$, ${\Gamma _{0}}(x)=0$. The functions ${\gamma _{\alpha }}$ and ${\Gamma _{\alpha }}$ are related to the incomplete Gamma functions and they can be calculated, e.g., by approximating the integrals with sums. The autocovariance function of the fOU process can be written as
See Proposition 4.
\[ {U_{t}^{\lambda ,H}}={\int _{-\infty }^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}{B_{s}^{H}},\]
where ${({B_{s}^{H}})_{s\le 0}}$ is an independent copy of the fBm ${({B_{s}^{H}})_{s\ge 0}}$, see [8]. Note that the Langevin equation and its solution can be understood via integration by parts. As a stationary process, the fOU admits the spectral density
where ${f_{H}}$ is the spectral density of the driving fBm (3), see [3]. Denote, for $\alpha \in (-1,0)\cup (0,1)$, (7)
\[ {\rho _{\lambda ,H}}(t)=\frac{\Gamma (1+2H)}{4}\frac{{\mathrm{e}^{-\lambda t}}}{{\lambda ^{2H}}}\bigg\{1+{\gamma _{2H-1}}(\lambda t)+{\mathrm{e}^{2\lambda t}}{\Gamma _{2H-1}}(\lambda t)\bigg\}.\]
A stationary process X with the autocovariance function satisfying
where $0\ne c\in \mathbb{R}$, and “∼” means the ratio of left and right sides tends to 1, is called long-range dependent (having long memory) if $0\lt \alpha \le 1$, and short-range dependent (having short memory) if $\alpha \gt 1$, see [18].
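The special functions ${\gamma _{\alpha }}$ and ${\Gamma _{\alpha }}$ appearing in (7) can indeed be evaluated by direct quadrature. The sketch below assumes the normalizations read off from the proof of Lemma 1, namely ${\gamma _{\alpha }}(x)=\frac{1}{\Gamma (\alpha )}{\textstyle\int _{0}^{x}}{u^{\alpha -1}}{\mathrm{e}^{u}}\hspace{0.1667em}\mathrm{d}u$ and ${\Gamma _{\alpha }}(x)=\frac{1}{\Gamma (\alpha )}{\textstyle\int _{x}^{\infty }}{u^{\alpha -1}}{\mathrm{e}^{-u}}\hspace{0.1667em}\mathrm{d}u$, valid as written for $\alpha \in (0,1)$ (negative α requires the extended definitions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

# Normalizations below are read off from the proof of Lemma 1 (an assumption):
#   gamma_a(x) = (1/Gamma(a)) int_0^x u^{a-1} e^{+u} du     (note e^{+u})
#   Gamma_a(x) = (1/Gamma(a)) int_x^inf u^{a-1} e^{-u} du
# valid as written for a in (0,1); negative a needs the extended definitions.
def gamma_lower_exp(a, x):
    return quad(lambda u: u**(a - 1) * np.exp(u), 0.0, x)[0] / gamma(a)

def gamma_upper(a, x):
    return quad(lambda u: u**(a - 1) * np.exp(-u), x, np.inf)[0] / gamma(a)

def rho_fou(lam, H, t):
    # autocovariance (7) of the fOU process, using gamma_0 = 1, Gamma_0 = 0
    a = 2 * H - 1
    g, G = (1.0, 0.0) if a == 0 else (gamma_lower_exp(a, lam * t),
                                      gamma_upper(a, lam * t))
    return (gamma(1 + 2 * H) / 4) * np.exp(-lam * t) / lam**(2 * H) \
        * (1 + g + np.exp(2 * lam * t) * G)

# consistency checks: Gamma_a is the regularized upper incomplete gamma,
# and for H = 1/2 formula (7) reduces to the classical OU covariance
print(gamma_upper(0.6, 1.5), gammaincc(0.6, 1.5))
print(rho_fou(2.0, 0.5, 1.0), np.exp(-2.0) / 4.0)
```

For $H=1/2$ the braces in (7) collapse to 2 and the formula reduces to ${\mathrm{e}^{-\lambda t}}/2\lambda $, matching the classical Ornstein–Uhlenbeck covariance.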
For $H=\frac{1}{2}$ we recover the well-known Bm case
For $t\to \infty $ we have the power decay
\[ {\rho _{\lambda ,H}}(t)=\frac{1}{2}{\sum \limits_{n=1}^{N}}{\lambda ^{-2n}}\left({\prod \limits_{j=0}^{2n-1}}(2H-j)\right){t^{2H-2n}}+O({t^{2H-2N-2}}),\]
for $N=1,2,\dots $, i.e., the fOU process with $H\gt \frac{1}{2}$ is long-range dependent, and for $H\le \frac{1}{2}$ it is short-range dependent, see [8].
The Hölder continuity and p-variation of the fOU are the same as those of the driving fBm.
In this paper we study the multi-mixed fractional Ornstein–Uhlenbeck process (mmfOU) with parameters $\lambda \gt 0$ and ${\sigma _{k}}$, ${H_{k}}$, $k\in \mathbb{N}$, that is defined naturally as the stationary solution of Langevin equation with mmfBm as the driving noise:
with
where ${({M_{s}})_{s\le 0}}$ is an independent copy of the mmfBm. This study develops the work of [17].
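Given any discretized driving path, the stochastic integral in the solution can be computed through the integration-by-parts identity ${\textstyle\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}{M_{s}}={M_{t}}-\lambda {\textstyle\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}{M_{s}}\hspace{0.1667em}\mathrm{d}s$. A minimal sketch (the grid, parameters, and the choice $\xi =0$, i.e. the solution started from 0 rather than the stationary one, are our assumptions):

```python
import numpy as np

# Given a sampled driving path (M_{t_k}), build the Ornstein-Uhlenbeck
# functional via the integration-by-parts form
#   int_0^t e^{-lam (t-s)} dM_s = M_t - lam * int_0^t e^{-lam (t-s)} M_s ds,
# approximating the Riemann integral on the grid.  The stationary initial
# term xi is set to 0 here, so this is the solution started from U_0 = 0.
def ou_from_noise(M, t, lam):
    dt = t[1] - t[0]
    U = np.empty_like(M)
    for i, ti in enumerate(t):
        kernel = np.exp(-lam * (ti - t[: i + 1]))
        U[i] = M[i] - lam * np.sum(kernel * M[: i + 1]) * dt
    return U

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 2001)
# Brownian example path; any sampled mmfBm path can be plugged in instead
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(2000))]) * np.sqrt(t[1])
U = ou_from_noise(W, t, lam=1.0)
print(U[-1])
```

With the deterministic path ${M_{t}}=t$ the routine reproduces ${U_{t}}={\textstyle\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}s=(1-{\mathrm{e}^{-\lambda t}})/\lambda $ up to the Riemann-sum error, which gives a simple correctness check.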
The rest of the paper is organized as follows. In Section 2 we define the multi-mixed fractional Brownian motions (mmfBm) and the associated multi-mixed fractional Ornstein–Uhlenbeck (mmfOU) processes, prove their existence in ${L^{2}}(\Omega \times [0,T])$, and provide their basic properties. The long-range dependence of these processes is studied in Section 3. In Section 4 we analyze the Hölder continuity of the mmfBm and mmfOU processes. The p-variations of these processes are calculated in Section 5. In Section 6 we show that the mmfBm and mmfOU processes have the conditional full support property. Finally, in Section 7 some simulated paths of these processes are given.
2 Definitions and basic properties
Definition 1.
Let ${\sigma _{k}}$, $k\in \mathbb{N}$, satisfy
and let ${H_{k}}$, $k\in \mathbb{N}$, satisfy
The multi-mixed fractional Brownian motion (mmfBm) is
where ${B^{{H_{k}}}}$, $k\in \mathbb{N}$, are independent fBm’s.
(9)
\[\begin{aligned}{}{H_{k}}& \ne {H_{l}}\hspace{1em}\text{for}\hspace{2.5pt}k\ne l,\\ {} {H_{\inf }}& =\underset{k\in \mathbb{N}}{\inf }{H_{k}}\gt 0,\\ {} {H_{\sup }}& =\underset{k\in \mathbb{N}}{\sup }{H_{k}}\lt 1.\end{aligned}\]
The following proposition shows the existence of the mmfBm.
Proposition 1.
The mmfBm M exists as a random function taking values in ${L^{2}}(\Omega \times [0,T])$ for all $T\gt 0$.
Proof.
Let ${M^{n}}={\textstyle\sum _{k=1}^{n}}{\sigma _{k}}{B^{{H_{k}}}}$. Clearly ${M^{n}}$ takes values in ${L^{2}}(\Omega \times [0,T])$. Let $n,m\in \mathbb{N}$ with $n\gt m$. Then
\[\begin{aligned}{}{\| {M^{n}}-{M^{m}}\| _{{L^{2}}(\Omega \times [0,T])}^{2}}& ={\int _{0}^{T}}\mathbb{E}\left[{({M_{t}^{n}}-{M_{t}^{m}})^{2}}\right]\hspace{0.1667em}\mathrm{d}t\\ {} & ={\int _{0}^{T}}\mathbb{E}\left[{\left({\sum \limits_{k=m+1}^{n}}{\sigma _{k}}{B_{t}^{{H_{k}}}}\right)^{2}}\right]\hspace{0.1667em}\mathrm{d}t\\ {} & ={\sum \limits_{k=m+1}^{n}}{\int _{0}^{T}}{\sigma _{k}^{2}}\mathbb{E}\left[{({B_{t}^{{H_{k}}}})^{2}}\right]\hspace{0.1667em}\mathrm{d}t\\ {} & ={\sum \limits_{k=m+1}^{n}}{\int _{0}^{T}}{\sigma _{k}^{2}}{t^{2{H_{k}}}}\hspace{0.1667em}\mathrm{d}t\\ {} & ={\sum \limits_{k=m+1}^{n}}{\sigma _{k}^{2}}\frac{{T^{1+2{H_{k}}}}}{1+2{H_{k}}}\\ {} & \le {\sum \limits_{k=m+1}^{n}}{\sigma _{k}^{2}}\hspace{0.1667em}\max \left\{1,{T^{3}}\right\},\end{aligned}\]
which shows that ${({M^{n}})_{n\in \mathbb{N}}}$ is a Cauchy sequence. Thus ${M^{n}}\to M$ in ${L^{2}}(\Omega \times [0,T])$, showing the existence. □
In the same way we see that the mmfBm ${({M_{t}})_{t\ge 0}}$ exists in the sense that ${M_{t}^{n}}\to {M_{t}}$ in ${L^{2}}(\Omega )$ for all $t\ge 0$.
The following is now obvious.
Proposition 2.
The mmfBm has stationary increments, its covariance function is
and it admits the spectral density
Proposition 3.
On ${L^{2}}(\Omega \times [0,T])$, the mmfOU can be represented as the integral
\[ {U_{t}}={\mathrm{e}^{-\lambda t}}\xi +{\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}{M_{s}},\]
where the integral is understood in the integration by parts sense, and
where ${({M_{s}})_{s\le 0}}$ is an independent copy of the mmfBm ${({M_{s}})_{s\ge 0}}$.
Proof.
Let ${M^{n}}={\textstyle\sum _{k=1}^{n}}{\sigma _{k}}{B^{{H_{k}}}}$. Then, the stationary solution of the Langevin equation
is given by
\[ {U_{t}^{n}}={\mathrm{e}^{-\lambda t}}{\xi _{n}}+{\int _{0}^{t}}{\mathrm{e}^{-\lambda (t-s)}}\hspace{0.1667em}\mathrm{d}{M_{s}^{n}},\]
where
\[ {\xi _{n}}={\int _{-\infty }^{0}}{\mathrm{e}^{\lambda s}}\hspace{0.1667em}\mathrm{d}{M_{s}^{n}}.\]
Then, with integration by parts
\[\begin{aligned}{}& {\int _{0}^{t}}{\mathrm{e}^{\lambda s}}\hspace{0.1667em}\mathrm{d}{M_{s}^{n}}={\mathrm{e}^{\lambda t}}{M_{t}^{n}}-\lambda {\int _{0}^{t}}{\mathrm{e}^{\lambda s}}{M_{s}^{n}}\hspace{0.1667em}\mathrm{d}s\\ {} & \hspace{1em}\to {\mathrm{e}^{\lambda t}}{M_{t}}-\lambda {\int _{0}^{t}}{\mathrm{e}^{\lambda s}}{M_{s}}\hspace{0.1667em}\mathrm{d}s={\int _{0}^{t}}{\mathrm{e}^{\lambda s}}\hspace{0.1667em}\mathrm{d}{M_{s}},\end{aligned}\]
because ${M^{n}}\to M$ in ${L^{2}}(\Omega \times [0,T])$. With the same arguments ${\xi _{n}}\to \xi $ in ${L^{2}}(\Omega )$. This yields ${U^{n}}\to U$ in ${L^{2}}(\Omega \times [0,T])$. □
Lemma 1.
For $0\ne p\in (-1,1)$, $\lambda \gt 0$, $t\gt 0$,
where ${\gamma _{-p}}$ and ${\Gamma _{-p}}$ are given by (5) and (6).
(13)
\[ {\int _{-\infty }^{\infty }}{\mathrm{e}^{\mathrm{i}tx}}\frac{|x{|^{p}}}{{\lambda ^{2}}+{x^{2}}}\hspace{0.1667em}\mathrm{d}x=\frac{\pi {\mathrm{e}^{-\lambda t}}}{2\cos (\frac{p\pi }{2}){\lambda ^{1-p}}}\left\{1+{\gamma _{-p}}(\lambda t)+{\mathrm{e}^{2\lambda t}}{\Gamma _{-p}}(\lambda t)\right\},\]
Proof.
Recall that for the Fourier transform
Moreover, we have
The first formula (15) is valid for $\lambda \gt 0$. The second formula (16) is valid for $-1\lt \alpha \lt 0$. For $-2\lt \alpha \lt -1$, the function $|t{|^{\alpha }}$ produces singular terms at the origin. Nevertheless, it admits a unique meromorphic extension as a tempered distribution, also denoted $|t{|^{\alpha }}$, which is homogeneous on the whole real line $\mathbb{R}$ including the origin (see [13]). Using that extension, formula (16) remains valid for all $-1\ne \alpha \in (-2,0)$. So, using $f(t)={e^{-\lambda |t|}}$ and $g(t)=|t{|^{\alpha }}$ in (14) we obtain
\[ \mathcal{F}(f)(x)=\frac{1}{\sqrt{2\pi }}{\int _{-\infty }^{\infty }}{\mathrm{e}^{-\mathrm{i}tx}}f(t)\hspace{0.1667em}\mathrm{d}t\]
we have the convolution theorem
(14)
\[ {\int _{-\infty }^{\infty }}{\mathrm{e}^{\mathrm{i}tx}}\mathcal{F}(f)(x)\mathcal{F}(g)(x)\hspace{0.1667em}\mathrm{d}x={\int _{-\infty }^{\infty }}f(t-\xi )g(\xi )\hspace{0.1667em}\mathrm{d}\xi .\]
\[\begin{aligned}{}& \frac{2}{\pi }\cdot \Gamma (\alpha +1)\cos \left(\frac{(\alpha +1)\pi }{2}\right)\lambda {\int _{-\infty }^{\infty }}{\mathrm{e}^{itx}}\frac{|x{|^{-(\alpha +1)}}}{{\lambda ^{2}}+{x^{2}}}\hspace{0.1667em}\mathrm{d}x\\ {} & \hspace{1em}={\int _{-\infty }^{\infty }}|\xi {|^{\alpha }}{\mathrm{e}^{-\lambda |t-\xi |}}\hspace{0.1667em}\mathrm{d}\xi \\ {} & \hspace{1em}={\int _{-\infty }^{0}}{(-\xi )^{\alpha }}{\mathrm{e}^{-\lambda (t-\xi )}}\hspace{0.1667em}\mathrm{d}\xi \\ {} & \hspace{2em}+{\int _{0}^{t}}{\xi ^{\alpha }}{\mathrm{e}^{-\lambda (t-\xi )}}\hspace{0.1667em}\mathrm{d}\xi \\ {} & \hspace{2em}+{\int _{t}^{\infty }}{\xi ^{\alpha }}{\mathrm{e}^{-\lambda (\xi -t)}}\hspace{0.1667em}\mathrm{d}\xi \\ {} & \hspace{1em}=\frac{{\mathrm{e}^{-\lambda t}}}{{\lambda ^{(\alpha +1)}}}{\int _{0}^{\infty }}{u^{\alpha }}{\mathrm{e}^{-u}}\hspace{0.1667em}\mathrm{d}u\\ {} & \hspace{2em}+\frac{{\mathrm{e}^{-\lambda t}}}{{\lambda ^{(\alpha +1)}}}{\int _{0}^{\lambda t}}{u^{\alpha }}{\mathrm{e}^{u}}\hspace{0.1667em}\mathrm{d}u\\ {} & \hspace{2em}+\frac{{\mathrm{e}^{\lambda t}}}{{\lambda ^{(\alpha +1)}}}{\int _{\lambda t}^{\infty }}{u^{\alpha }}{\mathrm{e}^{-u}}\hspace{0.1667em}\mathrm{d}u\\ {} & \hspace{1em}=\frac{{\mathrm{e}^{-\lambda t}}\Gamma (\alpha +1)}{{\lambda ^{(\alpha +1)}}}\left\{1+{\gamma _{(\alpha +1)}}(\lambda t)+{\mathrm{e}^{2\lambda t}}{\Gamma _{(\alpha +1)}}(\lambda t)\right\}.\end{aligned}\]
Now, choosing $p=-(\alpha +1)$ proves (13). □
Proposition 5.
The covariance function of the mmfOU is
and it admits the spectral density
(18)
\[\begin{aligned}{}{\rho _{\lambda }}(t)& =\mathbb{E}[{U_{s}}{U_{s+t}}]\\ {} & ={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\frac{\Gamma (1+2{H_{k}}){\mathrm{e}^{-\lambda t}}}{4{\lambda ^{2{H_{k}}}}}\bigg\{1+{\gamma _{2{H_{k}}-1}}(\lambda t)+{\mathrm{e}^{2\lambda t}}{\Gamma _{2{H_{k}}-1}}(\lambda t)\bigg\},\end{aligned}\]
Proof.
Let ${U^{n}}$ be like in the proof of Proposition 3, then
\[ {f_{\lambda ,n}}(x)={\sum \limits_{k=1}^{n}}{\sigma _{k}^{2}}\frac{\sin (\pi {H_{k}})\Gamma (1+2{H_{k}})}{2\pi }\frac{|x{|^{1-2{H_{k}}}}}{{x^{2}}+{\lambda ^{2}}},\]
and ${f_{\lambda ,n}}(x)\to {f_{\lambda }}(x)$ because ${U^{n}}\to U$ in ${L^{2}}(\Omega \times [0,T])$. This proves (19). Similarly, (18) follows by Proposition 4. □
Remark 1.
Proposition 4 represents the covariance function ${\rho _{\lambda ,H}}(t)$ in a form involving special functions. However, these special functions are usually not convenient for numerical computations. For example, in [3], Lemma B.1, the following representation was used for $H\gt \frac{1}{2}$:
\[\begin{aligned}{}{\rho _{\lambda ,H}}(t)& =H\Gamma (2H)\frac{{e^{-\lambda t}}}{{\lambda ^{2H}}}\left\{\frac{1+{e^{2\lambda t}}}{2}-\frac{\lambda }{\Gamma (2H-1)}{I_{\lambda ,H}}(t)\right\},\\ {} {I_{\lambda ,H}}(t)& ={\int _{0}^{t}}{\int _{0}^{\lambda v}}{e^{2\lambda v}}{e^{-s}}{s^{2H-2}}\hspace{0.1667em}\mathrm{d}s\mathrm{d}v.\end{aligned}\]
The double integral above seems reasonable enough, but it yields slow numerical computation in practice. This can be improved by calculating the inner integral as follows:
\[\begin{aligned}{}{I_{\lambda ,H}}(t)& ={\int _{0}^{\lambda t}}{\int _{s/\lambda }^{t}}{e^{2\lambda v}}{e^{-s}}{s^{2H-2}}\hspace{0.1667em}\mathrm{d}s\mathrm{d}v\\ {} & =\frac{1}{2\lambda }{\int _{0}^{\lambda t}}{s^{2H-2}}({e^{2\lambda t-s}}-{e^{s}})\hspace{0.1667em}\mathrm{d}s\\ {} & =\frac{{e^{\lambda t}}}{\lambda }{\int _{0}^{\lambda t}}{s^{2H-2}}\sinh (\lambda t-s)\hspace{0.1667em}\mathrm{d}s.\end{aligned}\]
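The equality of the two forms of ${I_{\lambda ,H}}$ is easy to check numerically; the sketch below (one parameter choice of ours, with $H\gt 1/2$) compares the original double integral with the single sinh-integral:

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Compare the double-integral form of I_{lam,H}(t) with the single
# sinh-integral form for one (assumed) parameter choice with H > 1/2.
lam, H, t = 1.0, 0.7, 1.0

# I = int_0^t int_0^{lam v} e^{2 lam v} e^{-s} s^{2H-2} ds dv
I_double = dblquad(lambda s, v: np.exp(2 * lam * v) * np.exp(-s) * s**(2 * H - 2),
                   0.0, t, lambda v: 0.0, lambda v: lam * v)[0]

# I = (e^{lam t} / lam) int_0^{lam t} s^{2H-2} sinh(lam t - s) ds
I_single = np.exp(lam * t) / lam * quad(
    lambda s: s**(2 * H - 2) * np.sinh(lam * t - s), 0.0, lam * t)[0]

print(I_double, I_single)
```

Beyond agreement, the one-dimensional form is the one worth using in practice: adaptive quadrature handles the integrable singularity ${s^{2H-2}}$ at the origin far more cheaply in one variable than in two.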
Consequently,
\[ {\rho _{\lambda ,H}}(t)=H\Gamma (2H)\frac{{\mathrm{e}^{-\lambda t}}}{{\lambda ^{2H}}}\left\{\frac{1+{\mathrm{e}^{2\lambda t}}}{2}-\frac{{\mathrm{e}^{\lambda t}}}{\Gamma (2H-1)}{\int _{0}^{\lambda t}}{s^{2H-2}}\sinh (\lambda t-s)\hspace{0.1667em}\mathrm{d}s\right\}.\]
For the case $H\lt 1/2$ we use the following extended version of Lemma 5.1 in [15], valid for $\alpha \gt -1$. The proof is similar.
Theorem 1.
Proof.
For $H=1/2$, the right-hand side of (21) is ${\mathrm{e}^{-\lambda t}}/2\lambda $, which equals the autocovariance of the classical Ornstein–Uhlenbeck process driven by the standard Brownian motion. For $H\gt 1/2$, we obtain (21) from (20) via integration by parts. To prove it for $H\lt 1/2$, we apply the same approach as in the proof of Lemma B.1 in [3]:
\[\begin{aligned}{}{\rho _{\lambda ,H}}(t)=& \hspace{2.5pt}\mathbb{E}[{U_{t}^{\lambda ,H}}{U_{0}^{\lambda ,H}}]\\ {} =& \hspace{2.5pt}\mathbb{E}\left[{\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{B_{u}^{H}}{\int _{-\infty }^{t}}{\mathrm{e}^{-\lambda (t-v)}}\hspace{0.1667em}\mathrm{d}{B_{v}^{H}}\right]\\ {} =& \hspace{2.5pt}{\mathrm{e}^{-\lambda t}}\bigg\{\mathbb{V}ar({U_{0}^{\lambda ,H}})+\mathbb{E}\bigg[{\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{B_{u}^{H}}{\int _{0}^{t}}{\mathrm{e}^{\lambda v}}\hspace{0.1667em}\mathrm{d}{B_{v}^{H}}\bigg]\bigg\}.\end{aligned}\]
To obtain the term $\mathbb{V}ar({U_{0}^{\lambda ,H}})$ in a closed form, [3] referred to Lemma 5.2 in [15]; however, that form was only obtained for $H\ge 1/2$, so we need to extend the result to $H\lt 1/2$.
Since
On the other hand, as in Lemma 2.1 in [8] and the proof of Lemma B.1 in [3], using formula
where ${\gamma _{\ell }}$ is the lower incomplete gamma function, for $H\lt 1/2$ we have
Using (22) and (23), with similar arguments as we did for (20), we obtain (21). □
\[ {U_{0}^{\lambda ,H}}={\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{B_{u}^{H}}=-\lambda {\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}{B_{u}^{H}}\hspace{0.1667em}\mathrm{d}u,\]
we have
\[\begin{aligned}{}\mathbb{V}ar({U_{0}^{\lambda ,H}})& =\mathbb{V}ar\left[-\lambda {\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}{B_{u}^{H}}\hspace{0.1667em}\mathrm{d}u\right]\\ {} & ={\lambda ^{2}}\hspace{0.1667em}\mathbb{V}ar\left[{\int _{0}^{\infty }}{\mathrm{e}^{-\lambda u}}{B_{u}^{H}}\hspace{0.1667em}\mathrm{d}u\right]\\ {} & ={\lambda ^{2}}\mathbb{E}\left[{\left({\int _{0}^{\infty }}{\mathrm{e}^{-\lambda u}}{B_{u}^{H}}\hspace{0.1667em}\mathrm{d}u\right)^{2}}\right]\\ {} & ={\lambda ^{2}}\mathbb{E}\left[{\int _{0}^{\infty }}{\int _{0}^{\infty }}{\mathrm{e}^{-\lambda (u+v)}}{B_{u}^{H}}{B_{v}^{H}}\hspace{0.1667em}\mathrm{d}u\mathrm{d}v\right]\\ {} & =\frac{{\lambda ^{2}}}{2}{\int _{0}^{\infty }}{\int _{0}^{\infty }}{\mathrm{e}^{-\lambda (u+v)}}\cdot \Big\{{u^{2H}}+{v^{2H}}-|u-v{|^{2H}}\Big\}\hspace{0.1667em}\mathrm{d}u\mathrm{d}v\\ {} & =\frac{{\lambda ^{2}}}{2}\bigg\{2\left({\int _{0}^{\infty }}{\mathrm{e}^{-\lambda u}}\hspace{0.1667em}\mathrm{d}u\right)\left({\int _{0}^{\infty }}{\mathrm{e}^{-\lambda v}}{v^{2H}}\hspace{0.1667em}\mathrm{d}v\right)\\ {} & \hspace{1em}-{\int _{0}^{\infty }}{\int _{0}^{\infty }}{\mathrm{e}^{-\lambda (u+v)}}|u-v{|^{2H}}\hspace{0.1667em}\mathrm{d}u\mathrm{d}v\bigg\}.\end{aligned}\]
Now choosing $x=\lambda u$, $y=\lambda v$ by Lemma 2 we have
(22)
\[\begin{aligned}{}\mathbb{V}ar({U_{0}^{\lambda ,H}})& =\frac{{\lambda ^{-2H}}}{2}\bigg\{2{\int _{0}^{\infty }}{\mathrm{e}^{-y}}{y^{2H}}\hspace{0.1667em}\mathrm{d}y\\ {} & \hspace{1em}-{\int _{0}^{\infty }}{\int _{0}^{\infty }}{\mathrm{e}^{-(x+y)}}|x-y{|^{2H}}\hspace{0.1667em}\mathrm{d}x\mathrm{d}y\bigg\}\\ {} & =\frac{{\lambda ^{-2H}}}{2}\Big[2\Gamma (2H+1)-\Gamma (2H+1)\Big]\\ {} & ={\lambda ^{-2H}}H\Gamma (2H).\end{aligned}\](23)
\[\begin{aligned}{}& \mathbb{E}\left[{\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{B_{u}^{H}}{\int _{0}^{t}}{\mathrm{e}^{\lambda v}}\hspace{0.1667em}\mathrm{d}{B_{v}^{H}}\right]\\ {} & \hspace{1em}=H(2H-1){\int _{-\infty }^{0}}{\int _{0}^{t}}{\mathrm{e}^{-\lambda (u+v)}}|u-v{|^{2H-2}}\hspace{0.1667em}\mathrm{d}u\mathrm{d}v\\ {} & \hspace{1em}=\mathbb{V}ar({U_{0}^{\lambda ,H}})\bigg\{\frac{{\mathrm{e}^{2\lambda t}}-1}{2}\\ {} & \hspace{2em}-\frac{\lambda }{\Gamma (2H-1)}{\int _{0}^{t}}{\mathrm{e}^{2\lambda v}}{\int _{0}^{\lambda v}}{\mathrm{e}^{-s}}{s^{2H-2}}\hspace{0.1667em}\mathrm{d}s\mathrm{d}v\bigg\}\\ {} & \hspace{1em}=\mathbb{V}ar({U_{0}^{\lambda ,H}})\bigg\{\frac{{\mathrm{e}^{2\lambda t}}-1}{2}\\ {} & \hspace{2em}-\frac{\lambda }{\Gamma (2H-1)}{\int _{0}^{t}}{\mathrm{e}^{2\lambda v}}{\gamma _{\ell }}(2H-1,\lambda v)\hspace{0.1667em}\mathrm{d}v\bigg\}\\ {} & \hspace{1em}=\mathbb{V}ar({U_{0}^{\lambda ,H}})\bigg\{\frac{{\mathrm{e}^{2\lambda t}}-1}{2}\\ {} & \hspace{2em}-\frac{\lambda }{\Gamma (2H)}{\int _{0}^{t}}{\mathrm{e}^{2\lambda v}}{\gamma _{\ell }}(2H,\lambda v)\hspace{0.1667em}\mathrm{d}v\\ {} & \hspace{2em}-\frac{{\lambda ^{2H}}}{\Gamma (2H)}{\int _{0}^{t}}{\mathrm{e}^{\lambda v}}{v^{2H-1}}\hspace{0.1667em}\mathrm{d}v\bigg\}\\ {} & \hspace{1em}=\mathbb{V}ar({U_{0}^{\lambda ,H}})\bigg\{\frac{{\mathrm{e}^{2\lambda t}}-1}{2}\\ {} & \hspace{2em}-\frac{\lambda }{\Gamma (2H)}{\int _{0}^{t}}{\mathrm{e}^{2\lambda v}}{\int _{0}^{\lambda v}}{\mathrm{e}^{-s}}{s^{2H-1}}\hspace{0.1667em}\mathrm{d}s\mathrm{d}v\\ {} & \hspace{2em}-\frac{{\lambda ^{2H}}}{\Gamma (2H)}{\int _{0}^{t}}{\mathrm{e}^{\lambda v}}{v^{2H-1}}\hspace{0.1667em}\mathrm{d}v\bigg\}.\end{aligned}\]
3 Long-range dependence
The increments of the fBm form a well-known stationary process that is long-range dependent (LRD) if $H\gt 1/2$, see [18]. Motivated by this, we consider the LRD of the increments of the mmfBm
\[ {\Delta _{\delta }}{M_{t}}={\sum \limits_{k=1}^{\infty }}{\sigma _{k}}{\Delta _{\delta }}{B_{t}^{{H_{k}}}},\]
with covariance function
\[ \varrho (\delta ;t)=\mathbb{E}\big[{\Delta _{\delta }}{M_{s+t}}{\Delta _{\delta }}{M_{s}}\big],\]
where $\delta \gt 0$ is the lag and ${\Delta _{\delta }}{x_{t}}={x_{t+\delta }}-{x_{t}}$ for a process x.
Proof.
By using the generalized binomial theorem,
Since
the series (25) is uniformly convergent. So we have
(25)
\[\begin{aligned}{}\varrho (\delta ;t)& =\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\Big\{{(t+\delta )^{2{H_{k}}}}+{(t-\delta )^{2{H_{k}}}}-2{t^{2{H_{k}}}}\Big\}\\ {} & =\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{t^{2{H_{k}}}}\Bigg\{{\Big(1+\frac{\delta }{t}\Big)^{2{H_{k}}}}+{\Big(1-\frac{\delta }{t}\Big)^{2{H_{k}}}}-2\Bigg\}\\ {} & =\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{t^{2{H_{k}}}}\Bigg\{{\sum \limits_{r=0}^{\infty }}\left(\genfrac{}{}{0.0pt}{}{2{H_{k}}}{r}\right){\Big(\frac{\delta }{t}\Big)^{r}}+{\sum \limits_{r=0}^{\infty }}\left(\genfrac{}{}{0.0pt}{}{2{H_{k}}}{r}\right){(-1)^{r}}{\Big(\frac{\delta }{t}\Big)^{r}}-2\Bigg\}\\ {} & \sim {\delta ^{2}}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{H_{k}}(2{H_{k}}-1){t^{2{H_{k}}-2}}.\end{aligned}\]
\[ \underset{t\to \infty }{\lim }{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{H_{k}}(2{H_{k}}-1){t^{2{H_{k}}-2}}={\sum \limits_{k=1}^{\infty }}\underset{t\to \infty }{\lim }{\sigma _{k}^{2}}{H_{k}}(2{H_{k}}-1){t^{2{H_{k}}-2}}.\]
This yields (24). □
To investigate LRD for the mmfOU process, we first need some lemmas.
The following theorem shows that similar to the mmfBm increment process, the long-range dependence of the mmfOU is governed by the long-range dependence of the largest Hurst index in the driving mmfBm.
Theorem 3.
For $t\to \infty $ and each $N=1,2,\dots ,$
So the mmfOU process U is LRD if and only if ${H_{k}}\gt 1/2$ for some $k\ge 1$.
(26)
\[ {\rho _{\lambda }}(t)=\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sum \limits_{n=1}^{N}}{\sigma _{k}^{2}}{\lambda ^{-2n}}\left({\prod \limits_{j=0}^{2n-1}}(2{H_{k}}-j)\right){t^{2{H_{k}}-2n}}+O({t^{2{H_{\sup }}-2N-2}}).\]
Proof.
By the proof of Lemma 2.2 and Theorem 2.3 in [8]
Now, for $t\in [1,\infty )$,
(27)
\[\begin{aligned}{}{\rho _{\lambda }}(t)& =\mathbb{E}\left[{\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{M_{u}}{\int _{-\infty }^{t}}{\mathrm{e}^{-\lambda (t-v)}}\hspace{0.1667em}\mathrm{d}{M_{v}}\right]\\ {} & ={\mathrm{e}^{-\lambda t}}\mathbb{E}\left[{\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\hspace{0.1667em}\mathrm{d}{M_{u}}{\int _{-\infty }^{1/\lambda }}{\mathrm{e}^{\lambda v}}\hspace{0.1667em}\mathrm{d}{M_{v}}\right]\\ {} & \hspace{1em}+{\mathrm{e}^{-\lambda t}}{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{H_{i}}(2{H_{i}}-1)\\ {} & \hspace{1em}\times {\int _{-\infty }^{0}}{\mathrm{e}^{\lambda u}}\left({\int _{1/\lambda }^{t}}{\mathrm{e}^{\lambda v}}{(v-u)^{2{H_{i}}-2}}\hspace{0.1667em}\mathrm{d}v\right)\mathrm{d}u\\ {} & =O({\mathrm{e}^{-\lambda t}})\\ {} & \hspace{1em}+\frac{1}{2}{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}\frac{{H_{i}}(2{H_{i}}-1)}{{\lambda ^{2{H_{i}}}}}\Bigg\{{\mathrm{e}^{-\lambda t}}{\int _{1}^{\lambda t}}{\mathrm{e}^{y}}{y^{2{H_{i}}-2}}\hspace{0.1667em}\mathrm{d}y\\ {} & \hspace{1em}+{\mathrm{e}^{\lambda t}}{\int _{\lambda t}^{\infty }}{\mathrm{e}^{-y}}{y^{2{H_{i}}-2}}\hspace{0.1667em}\mathrm{d}y\Bigg\}\\ {} & \le O({\mathrm{e}^{-\lambda t}})\\ {} & \hspace{1em}+\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sum \limits_{n=1}^{N}}{\sigma _{k}^{2}}{\lambda ^{-2n}}\left({\prod \limits_{j=0}^{2n-1}}(2{H_{k}}-j)\right){t^{2{H_{k}}-2n}}\\ {} & \hspace{1em}+\frac{1}{2}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\frac{\Big|{H_{k}}(2{H_{k}}-1)\cdots (2{H_{k}}-2-2N)\Big|}{{\lambda ^{2{H_{k}}}}}\\ {} & \hspace{1em}\times \left[{\mathrm{e}^{-\frac{\lambda t}{2}}}+(1+{2^{2{H_{k}}-2N-3}}){(\lambda t)^{2{H_{k}}-2N-3}}\right].\end{aligned}\]
\[\begin{aligned}{}& \frac{\Big|{H_{k}}(2{H_{k}}-1)\cdots (2{H_{k}}-2-2N)\Big|}{{\lambda ^{2{H_{k}}}}}\hspace{0.1667em}{\mathrm{e}^{-\frac{\lambda t}{2}}}\lt {\Lambda _{N}}\\ {} & \frac{\Big|{H_{k}}(2{H_{k}}-1)\cdots (2{H_{k}}-2-2N)\Big|}{{\lambda ^{2{H_{k}}}}}\hspace{0.1667em}(1+{2^{2{H_{k}}-2N-3}}){(\lambda t)^{2{H_{k}}-2N-3}}\lt {\Pi _{N}},\end{aligned}\]
where
\[\begin{aligned}{}{\Lambda _{N}}& ={H_{\sup }}\frac{\max \Big(|2{H_{\inf }}-1|,|2{H_{\sup }}-1|\Big)}{\max \Big({\lambda ^{2{H_{\inf }}}},{\lambda ^{2{H_{\sup }}}}\Big)}\Big|(2{H_{\inf }}-2)\cdots (2{H_{\inf }}-2-2N)\Big|,\\ {} {\Pi _{N}}& ={H_{\sup }}\frac{\max \Big(|2{H_{\inf }}-1|,|2{H_{\sup }}-1|\Big)}{{\lambda ^{2N+3}}}\Big|(2{H_{\inf }}-2)\cdots (2{H_{\inf }}-2-2N)\Big|\\ {} & \hspace{1em}\times (1+{2^{2{H_{\sup }}-2N-3}}).\end{aligned}\]
So, as ${\textstyle\sum _{k=1}^{\infty }}{\sigma _{k}^{2}}\lt \infty $, the series in the right-hand side of the inequality (27) is uniformly convergent on $t\in [1,\infty )$. Hence
\[\begin{aligned}{}& \underset{t\to \infty }{\lim }{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\frac{\Big|{H_{k}}(2{H_{k}}-1)\cdots (2{H_{k}}-2-2N)\Big|}{{\lambda ^{2{H_{k}}}}}\\ {} & \hspace{2em}\times \left[{\mathrm{e}^{-\frac{\lambda t}{2}}}+(1+{2^{2{H_{k}}-2N-3}}){(\lambda t)^{2{H_{k}}-2N-3}}\right]\\ {} & \hspace{1em}={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\frac{\Big|{H_{k}}(2{H_{k}}-1)\cdots (2{H_{k}}-2-2N)\Big|}{{\lambda ^{2{H_{k}}}}}\\ {} & \hspace{2em}\times \underset{t\to \infty }{\lim }\left[{\mathrm{e}^{-\frac{\lambda t}{2}}}+(1+{2^{2{H_{k}}-2N-3}}){(\lambda t)^{2{H_{k}}-2N-3}}\right].\end{aligned}\]
This proves (26). □
4 Continuity
Definition 3.
Let $X=({X_{t}})$ be a continuous stochastic process defined on a probability space $(\Omega ,\mathcal{F},\mathbb{P})$. If
the process X is called Hölder continuous with index H, and H is its Hölder index.
Proof.
For $\epsilon \gt 0$ and $|t-s|\lt 1$, the mmfBm satisfies
\[ \mathbb{E}\Big[{({M_{t}}-{M_{s}})^{2}}\Big]={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}|t-s{|^{2{H_{k}}}}\le \bigg({\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)|t-s{|^{2{H_{\inf }}-\epsilon }}={C_{0}}|t-s{|^{2{H_{\inf }}-\epsilon }},\]
where ${C_{0}}:={\textstyle\sum _{k=1}^{\infty }}{\sigma _{k}^{2}}\gt 0$. Thus, the claim follows from Theorem 1 of [2]. On the other hand, for some $j\ge 1$ we have ${H_{\inf }}\le {H_{j}}\lt {H_{\inf }}+\epsilon $, and so the fBm ${B^{{H_{j}}}}$ is not $({H_{\inf }}+\epsilon )$-Hölder continuous. Hence the process $M={\sigma _{j}}{B^{{H_{j}}}}+{\textstyle\sum _{k\ne j}}{\sigma _{k}}{B^{{H_{k}}}}$ is not $({H_{\inf }}+\epsilon )$-Hölder continuous. This proves the claim for the mmfBm.
For the mmfOU, we apply Corollary 2 of [2], which states that the process ${U_{t}}$ is Hölder continuous of any order $0\lt a\lt {H_{\inf }}$ if and only if for each $0\lt \epsilon \lt 2{H_{\inf }}$ there is some $0\lt \delta \lt 1$ such that
This is equivalent to
Also, we have
if and only if $0\lt {H_{\mathrm{inf}}}\le {H_{\mathrm{sup}}}\lt 1$ and ${\textstyle\sum _{k=1}^{\infty }}{\sigma _{k}^{2}}\lt \infty $. Now, (29) and (30) yield (28). Moreover, for some $j\ge 1$ we have ${H_{\inf }}\le {H_{j}}\lt {H_{\inf }}+\epsilon $ and so the fOU ${U^{{H_{j}}}}$ is not $({H_{\inf }}+\epsilon )$-Hölder continuous. Hence the process $U={\sigma _{j}}{U^{{H_{j}}}}+{\textstyle\sum _{k\ne j}}{\sigma _{k}}{U^{{H_{k}}}}$ is not $({H_{\inf }}+\epsilon )$-Hölder continuous. This proves the claim for mmfOU. □
(28)
\[ {\int _{0}^{\infty }}(1-\cos (sx)){f_{\lambda }}(x)dx\lt {C_{\epsilon }}{s^{2{H_{\inf }}-\epsilon }},\hspace{1em}s\in (0,\delta ).\]
\[ {\int _{0}^{\infty }}\frac{(1-\cos (sx))}{{s^{2{H_{\inf }}-\epsilon }}}{f_{\lambda }}(x)dx\lt {C_{\epsilon }},\hspace{1em}s\in (0,\delta ).\]
To show this, for $s\lt 1$ we have
\[\begin{aligned}{}& {\int _{0}^{\infty }}\frac{(1-\cos (sx))}{{s^{2{H_{k}}-\epsilon }}}{f_{\lambda ,{H_{k}}}}(x)dx\\ {} & \hspace{1em}={s^{\epsilon }}{c_{{H_{k}}}}{\int _{0}^{\infty }}(1-\cos (sx))\frac{x\cdot {(sx)^{-2{H_{k}}}}}{{\lambda ^{2}}+{x^{2}}}dx\\ {} & \hspace{1em}={s^{\epsilon }}{c_{{H_{k}}}}{\int _{0}^{\infty }}(1-\cos u)\frac{{u^{1-2{H_{k}}}}}{{s^{2}}{\lambda ^{2}}+{u^{2}}}du\hspace{1em}(u=sx)\\ {} & \hspace{1em}\le {c_{{H_{k}}}}{\int _{0}^{\infty }}(1-\cos u)\frac{{u^{1-2{H_{k}}}}}{{s^{2}}{\lambda ^{2}}+{u^{2}}}du\hspace{1em}(0\lt s\lt 1)\\ {} & \hspace{1em}\le {c_{{H_{k}}}}\Big\{{\int _{0}^{\epsilon }}(1-\cos u)\frac{{u^{1-2{H_{k}}}}}{{u^{2}}}du+{\int _{\epsilon }^{\infty }}\frac{{u^{1-2{H_{k}}}}}{{u^{2}}}du\Big\}\\ {} & \hspace{1em}={c_{{H_{k}}}}\Big\{{\int _{0}^{\epsilon }}\frac{2{\sin ^{2}}(\frac{u}{2})}{{u^{2}}}{u^{1-2{H_{k}}}}du+{\int _{\epsilon }^{\infty }}{u^{-1-2{H_{k}}}}du\Big\}\\ {} & \hspace{1em}\le {c_{{H_{k}}}}\Big\{{\int _{0}^{\epsilon }}\frac{1}{2}{u^{1-2{H_{k}}}}du+{\int _{\epsilon }^{\infty }}{u^{-1-2{H_{k}}}}du\Big\}\\ {} & \hspace{1em}={c_{{H_{k}}}}\Big\{\frac{{\epsilon ^{2-2{H_{k}}}}}{4(1-{H_{k}})}+\frac{{\epsilon ^{-2{H_{k}}}}}{2{H_{k}}}\Big\}=:{C_{\epsilon ,{H_{k}}}}\lt \infty .\end{aligned}\]
Therefore,
(29)
\[ {\int _{0}^{\infty }}(1-\cos (sx)){f_{\lambda ,{H_{k}}}}(x)dx\le {C_{\epsilon ,{H_{k}}}}{s^{2{H_{k}}-\epsilon }}\le {C_{\epsilon ,{H_{k}}}}{s^{2{H_{\inf }}-\epsilon }}.\](30)
\[\begin{aligned}{}{\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{C_{\epsilon ,{H_{k}}}}& ={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\hspace{0.1667em}\frac{\sin (\pi {H_{k}})\Gamma (1+2{H_{k}})}{2\pi }\Big\{\frac{{\epsilon ^{2-2{H_{k}}}}}{4(1-{H_{k}})}+\frac{{\epsilon ^{-2{H_{k}}}}}{2{H_{k}}}\Big\}\\ {} & \le \frac{\Gamma (3)}{2\pi }\Big\{\frac{{\epsilon ^{2-2{H_{\mathrm{sup}}}}}}{4(1-{H_{\mathrm{sup}}})}+\frac{{\epsilon ^{-2{H_{\mathrm{inf}}}}}}{2{H_{\mathrm{inf}}}}\Big\}\Big({\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\Big)=:{C_{\epsilon }}\lt \infty ,\end{aligned}\]5 Variation
Recall that the p-variation of fBm with $H\in (0,1)$ on the time-interval $[0,T]$ is given in Definition 3.4 of [29] as
\[ {V_{T}^{p}}({B^{H}})=\underset{|{\pi _{n}}|\to 0}{\lim }\sum \limits_{{t_{k}}\in {\pi _{n}}}|\Delta {B_{{t_{k}}}^{H}}{|^{p}}=\left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\infty & ;& pH\lt 1\\ {} T{\mu _{p}}& ;& pH=1\\ {} 0& ;& pH\gt 1\end{array}\right.\]
where ${\pi _{n}}={\{{t_{k}}=\frac{kT}{n}\}_{k=0}^{n}}$ is the equidistant partition of $[0,T]$, ${\mu _{p}}$ is the pth absolute moment of the standard Gaussian distribution, and the limit is taken in probability. By the same argument, it is easy to check that for the mixed fractional Brownian motion (mfBm) $Y=aB+b{B^{H}}$ the p-variation is
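The trichotomy above is easy to see numerically in the simplest case $H=1/2$, where the fBm is a standard Brownian motion and can be simulated exactly by cumulating independent Gaussian increments. The following minimal sketch (with illustrative parameters $T=1$ and $n={10^{5}}$ steps) checks that the quadratic variation ($p=2$, so $pH=1$) is close to $T{\mu _{2}}=1$, while the $p=3$ sum vanishes and the $p=1$ sum blows up:

```python
import numpy as np

# p-variation sums of Brownian motion (fBm with H = 1/2) on [0, 1].
rng = np.random.default_rng(0)
n = 100_000
dB = rng.standard_normal(n) * np.sqrt(1.0 / n)  # i.i.d. increments

qv = np.sum(np.abs(dB) ** 2)  # p = 2, pH = 1: converges to T * mu_2 = 1
v3 = np.sum(np.abs(dB) ** 3)  # p = 3, pH > 1: tends to 0
v1 = np.sum(np.abs(dB))       # p = 1, pH < 1: tends to infinity
print(qv, v3, v1)             # qv close to 1, v3 small, v1 large
```

Here the Monte Carlo fluctuation of the quadratic variation around 1 is of order $\sqrt{2/n}$, so the match is tight already for moderate $n$.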
\[ {V_{T}^{p}}(Y)=\underset{|{\pi _{n}}|\to 0}{\lim }\sum \limits_{{t_{k}}\in {\pi _{n}}}|\Delta {Y_{{t_{k}}}}{|^{p}}=\left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\infty & ;& p\min (1/2,H)\lt 1\\ {} T{a^{p}}{\mu _{p}}& ;& H\gt 1/2,\hspace{0.1667em}p/2=1\\ {} T{({a^{2}}+{b^{2}})^{p/2}}{\mu _{p}}& ;& H=1/2,\hspace{0.1667em}p/2=1\\ {} T{b^{p}}{\mu _{p}}& ;& H\lt 1/2,\hspace{0.1667em}pH=1\\ {} 0& ;& p\min (1/2,H)\gt 1\end{array}\right.\]
where $a,b\gt 0$, B is the standard Brownian motion, and ${B^{H}}$ is a standard fBm independent of B. Now, for the p-variation of the mmfBm we have the next theorem.
Theorem 5.
For $p\gt 0$, the p-variations of the mmfBm M and the mmfOU U on the time-interval $[0,T]$ are equal and
(31)
\[ {V_{T}^{p}}(M)={V_{T}^{p}}(U)=\left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\infty & ;& p{H_{\inf }}\lt 1\\ {} T{\Big({\textstyle\sum _{i:{H_{i}}={H_{\inf }}}}{\sigma _{i}^{2}}\Big)^{p/2}}{\mu _{p}}& ;& p{H_{\inf }}=1\\ {} 0& ;& p{H_{\inf }}\gt 1.\end{array}\right.\]
Proof.
For the mmfBm M, we have
\[\begin{aligned}{}{v_{{\pi _{n}}}^{p}}(M)& :=\sum \limits_{{t_{k}}\in {\pi _{n}}}|\Delta {M_{{t_{k}}}}{|^{p}}\\ {} & =\sum \limits_{{t_{k}}\in {\pi _{n}}}{\left|{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{(\Delta {t_{k}})^{2{H_{i}}}}\right|^{p/2}}{\left|\frac{\Delta {M_{{t_{k}}}}}{{[{\textstyle\textstyle\sum _{i=1}^{\infty }}{\sigma _{i}^{2}}{(\Delta {t_{k}})^{2{H_{i}}}}]^{1/2}}}\right|^{p}}\\ {} & \stackrel{d}{=}{\left({\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{T^{2{H_{i}}}}{n^{2/p-2{H_{i}}}}\right)^{p/2}}\cdot \frac{1}{n}{\sum \limits_{k=1}^{n}}|{Z_{k}}{|^{p}}\end{aligned}\]
as $|{\pi _{n}}|\to 0$, or equivalently $n\to \infty $. Here the ${Z_{k}}$ are standard Gaussian random variables, and so by the proof of Lemma 3.7 in [29]
\[ \frac{1}{n}{\sum \limits_{k=1}^{n}}|{Z_{k}}{|^{p}}\stackrel{\mathbb{P}}{\longrightarrow }{\mu _{p}}\]
as $n\to \infty $, where ${\mu _{p}}$ is the pth absolute moment of the standard Gaussian distribution. Now, if $p{H_{\inf }}\lt 1$ then ${H_{\inf }}\lt 1/p$, so there exists some $j\ge 1$ with ${H_{j}}\lt 1/p$, that is, $2/p-2{H_{j}}\gt 0$. Therefore
\[ {v_{{\pi _{n}}}^{p}}(M)\ge {\left({\sigma _{j}^{2}}{T^{2{H_{j}}}}{n^{2/p-2{H_{j}}}}\right)^{p/2}}\cdot \frac{1}{n}{\sum \limits_{k=1}^{n}}|{Z_{k}}{|^{p}}\to \infty .\]
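The behavior of the scaling factor $S(n)={\textstyle\sum _{i}}{\sigma _{i}^{2}}{T^{2{H_{i}}}}{n^{2/p-2{H_{i}}}}$ driving this dichotomy is easy to check numerically. A toy sketch, with assumed parameters $H=(0.5,0.7)$, $\sigma =(1.0,0.5)$ and $T=1$: for $p{H_{\inf }}\lt 1$ the factor diverges, for $p{H_{\inf }}=1$ it converges to the contribution of the components with ${H_{i}}={H_{\inf }}$, and for $p{H_{\inf }}\gt 1$ it vanishes.

```python
# Scaling factor S(n) = sum_i sigma_i^2 * T^(2 H_i) * n^(2/p - 2 H_i)
# for a two-component example with H_inf = 0.5.
H = [0.5, 0.7]
sigma = [1.0, 0.5]
T = 1.0

def S(n, p):
    return sum(s**2 * T**(2*h) * n**(2.0/p - 2*h) for s, h in zip(sigma, H))

print(S(10**6, 1.0))  # p*H_inf < 1: large
print(S(10**8, 2.0))  # p*H_inf = 1: close to sigma_1^2 = 1
print(S(10**8, 3.0))  # p*H_inf > 1: close to 0
```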
On the other hand, if $p{H_{\inf }}\ge 1$, then $2/p-2{H_{i}}\le 2/p-2{H_{\inf }}\le 0$, and so for $x\in [1,\infty )$
\[ {\sigma _{i}^{2}}{T^{2{H_{i}}}}{x^{2/p-2{H_{i}}}}\le {\sigma _{i}^{2}}\max (1,{T^{2}}),\]
and because ${\textstyle\sum _{i=1}^{\infty }}{\sigma _{i}^{2}}\lt \infty $, the series ${\textstyle\sum _{i=1}^{\infty }}{\sigma _{i}^{2}}{T^{2{H_{i}}}}{x^{2/p-2{H_{i}}}}$ converges uniformly on $[1,\infty )$ by the Weierstrass M-test. So for $p{H_{\inf }}\ge 1$,
\[ \underset{n\to \infty }{\lim }{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{T^{2{H_{i}}}}{n^{2/p-2{H_{i}}}}={\sum \limits_{i=1}^{\infty }}\underset{n\to \infty }{\lim }{\sigma _{i}^{2}}{T^{2{H_{i}}}}{n^{2/p-2{H_{i}}}}.\]
This yields that the values stated in (31) are correct for the p-variation of M. For the mmfOU U, as it is stationary, we have
\[\begin{aligned}{}{v_{{\pi _{n}}}^{p}}(U)& :=\sum \limits_{{t_{k}}\in {\pi _{n}}}|\Delta {U_{{t_{k}}}}{|^{p}}\\ {} & \stackrel{d}{=}{\sum \limits_{k=1}^{n}}{\Big(\mathbb{V}ar[\Delta {U_{{t_{1}}}}]\Big)^{p/2}}|{Z_{k}}{|^{p}}\\ {} & =n{\Big(\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]\Big)^{p/2}}\cdot \frac{1}{n}{\sum \limits_{k=1}^{n}}|{Z_{k}}{|^{p}}.\end{aligned}\]
As $\frac{1}{n}{\textstyle\sum _{k=1}^{n}}|{Z_{k}}{|^{p}}\to {\mu _{p}}$ as $n\to \infty $, the problem reduces to the limit
\[ \underset{n\to \infty }{\lim }n{\Big(\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]\Big)^{p/2}}.\]
To find it, again because U is stationary, and using the proof of Theorem 1 we have
\[\begin{aligned}{}\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]& =\mathbb{V}ar\hspace{0.1667em}{U_{\frac{T}{n}}}+\mathbb{V}ar\hspace{0.1667em}{U_{0}}-2\hspace{0.1667em}\mathrm{C}ov\Big({U_{\frac{T}{n}}},{U_{0}}\Big)\\ {} & =2\hspace{0.1667em}\mathbb{V}ar\hspace{0.1667em}{U_{0}}-2\hspace{0.1667em}\mathrm{C}ov\Big({U_{\frac{T}{n}}},{U_{0}}\Big)\\ {} & =2{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{\lambda ^{-2{H_{i}}}}{H_{i}}\Gamma (2{H_{i}})\\ {} & \hspace{1em}-2{\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}\frac{\Gamma (2{H_{i}}+1)}{2{\lambda ^{2{H_{i}}}}}\bigg\{\cosh \Big(\frac{\lambda T}{n}\Big)\\ {} & \hspace{1em}-\frac{1}{\Gamma (2{H_{i}})}{\int _{0}^{\frac{\lambda T}{n}}}{s^{2{H_{i}}-1}}\cosh \Big(\frac{\lambda T}{n}-s\Big)\hspace{0.1667em}\mathrm{d}s\bigg\}\\ {} & ={\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}\frac{\Gamma (2{H_{i}}+1)}{{\lambda ^{2{H_{i}}}}}\bigg\{1-\cosh \Big(\frac{\lambda T}{n}\Big)\\ {} & \hspace{1em}+\frac{1}{\Gamma (2{H_{i}})}{\int _{0}^{\frac{\lambda T}{n}}}{s^{2{H_{i}}-1}}\cosh \Big(\frac{\lambda T}{n}-s\Big)\hspace{0.1667em}\mathrm{d}s\bigg\}.\end{aligned}\]
For large n, the final series on the right-hand side above is uniformly convergent, so the limit $\underset{n\to \infty }{\lim }$ and the sum ${\textstyle\sum _{i=1}^{\infty }}$ can be interchanged. This yields
\[\begin{aligned}{}& \underset{n\to \infty }{\lim }n{\Big(\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]\Big)^{p/2}}\\ {} & \hspace{1em}=\underset{n\to \infty }{\lim }{\Big({n^{2/p}}\hspace{0.1667em}\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]\Big)^{p/2}}\\ {} & \hspace{1em}=\Bigg({\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}\frac{\Gamma (2{H_{i}}+1)}{{\lambda ^{2{H_{i}}}}}\cdot \underset{n\to \infty }{\lim }{n^{2/p}}\bigg\{1-\cosh \Big(\frac{\lambda T}{n}\Big)\\ {} & \hspace{1em}\hspace{1em}+\frac{1}{\Gamma (2{H_{i}})}{\int _{0}^{\frac{\lambda T}{n}}}{s^{2{H_{i}}-1}}\cosh \Big(\frac{\lambda T}{n}-s\Big)\hspace{0.1667em}\mathrm{d}s\bigg\}{\Bigg)^{p/2}}.\end{aligned}\]
Now for $t\to 0$, by the Taylor expansion
\[ 1-\cosh t=-\frac{{t^{2}}}{2}+O({t^{4}}),\]
and via integration by parts
\[ {\int _{0}^{t}}{s^{2{H_{i}}-1}}\cosh (t-s)\hspace{0.1667em}\mathrm{d}s=\frac{{t^{2{H_{i}}}}}{2{H_{i}}}+\frac{1}{2{H_{i}}}{\int _{0}^{t}}{s^{2{H_{i}}}}\sinh (t-s)\hspace{0.1667em}\mathrm{d}s.\]
Again for $t\to 0$, by the Taylor expansion,
\[ {\int _{0}^{t}}{s^{2{H_{i}}}}\sinh (t-s)\hspace{0.1667em}\mathrm{d}s\le {\int _{0}^{t}}{t^{2{H_{i}}}}\sinh t\hspace{0.1667em}\mathrm{d}s={t^{2{H_{i}}+1}}\sinh t={\sum \limits_{r=1}^{\infty }}\frac{{t^{2r+2{H_{i}}}}}{(2r-1)!}.\]
These yield for $t\to 0$
\[ 1-\cosh t+\frac{1}{\Gamma (2{H_{i}})}{\int _{0}^{t}}{s^{2{H_{i}}-1}}\cosh (t-s)\hspace{0.1667em}\mathrm{d}s\sim \frac{{t^{2{H_{i}}}}}{\Gamma (2{H_{i}}+1)}.\]
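This asymptotic relation can be sanity-checked numerically. In the sketch below (with illustrative values $H=0.3$, $t={10^{-3}}$), the integrable singularity ${s^{2H-1}}$ is removed by the substitution $u={s^{2H}}$, which gives ${\int _{0}^{t}}{s^{2H-1}}\cosh (t-s)\hspace{0.1667em}\mathrm{d}s=\frac{1}{2H}{\int _{0}^{{t^{2H}}}}\cosh (t-{u^{1/(2H)}})\hspace{0.1667em}\mathrm{d}u$, so a plain trapezoidal rule applies to a smooth integrand:

```python
import math
import numpy as np

def lhs(t, H):
    """1 - cosh t + (1/Gamma(2H)) * int_0^t s^(2H-1) cosh(t-s) ds,
    computed via the substitution u = s^(2H) and the trapezoidal rule."""
    u = np.linspace(0.0, t ** (2 * H), 200_001)
    f = np.cosh(t - u ** (1.0 / (2 * H)))            # smooth in u
    du = u[1] - u[0]
    integral = (np.sum(f) - 0.5 * (f[0] + f[-1])) * du / (2 * H)
    return 1.0 - math.cosh(t) + integral / math.gamma(2 * H)

H, t = 0.3, 1e-3
ratio = lhs(t, H) / (t ** (2 * H) / math.gamma(2 * H + 1))
print(ratio)  # approaches 1 as t -> 0
```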
Therefore
\[ \underset{n\to \infty }{\lim }n{\Big(\mathbb{V}ar[{U_{\frac{T}{n}}}-{U_{0}}]\Big)^{p/2}}={\bigg({\sum \limits_{i=1}^{\infty }}{\sigma _{i}^{2}}{T^{2{H_{i}}}}\underset{n\to \infty }{\lim }{n^{2/p-2{H_{i}}}}\bigg)^{p/2}},\]
this proves (31). □
6 Conditional full support
As explained in [5], in mathematical finance models one of the required features is the so-called Conditional Full Support (CFS) property, which rules out simple kinds of arbitrage. It means that, given the information up to any time $\tau \in [0,T]$, the process is free enough to go anywhere after time τ with positive probability. This motivates us to study the CFS property of the mmfBm and mmfOU processes, but first we restate the precise definition of CFS from [12].
Definition 4.
Let $X={({X_{t}})_{0\le t\le T}}$ be a continuous stochastic process defined on a probability space $(\Omega ,\mathcal{F},\mathbb{P})$, and let $({\mathcal{F}_{t}})$ be its natural filtration. The process X is said to have CFS if, for all $t\in [0,T]$, the conditional law of ${({X_{u}})_{t\le u\le T}}$ given ${\mathcal{F}_{t}}$ almost surely has support ${C_{{X_{t}}}}[t,T]$, where ${C_{x}}[t,T]$ is the space of continuous functions f on $[t,T]$ satisfying $f(t)=x$. Equivalently, this means that, for all $t\in [0,T]$, $f\in {C_{0}}[t,T]$, and $\varepsilon \gt 0$,
\[ \mathbb{P}\Big(\underset{t\le u\le T}{\sup }|{X_{u}}-{X_{t}}-f(u)|\lt \varepsilon \hspace{0.2778em}\Big|\hspace{0.2778em}{\mathcal{F}_{t}}\Big)\gt 0\]
almost surely.
Proof.
It is easy to check that
\[ f(x)={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{f_{{H_{k}}}}(x)\ge \left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\frac{{\varepsilon _{H}}\Gamma (1)}{2\pi }\bigg({\displaystyle \sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)|x{|^{1-2{H_{\inf }}}}& :& |x|\le 1\\ {} \frac{{\varepsilon _{H}}\Gamma (1)}{2\pi }\bigg({\displaystyle \sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)|x{|^{1-2{H_{\sup }}}}& :& |x|\ge 1\end{array}\right.=:h(x),\]
where
\[ {\varepsilon _{H}}:=\inf {\Big\{\sin (\pi {H_{k}})\Big\}_{k\ge 1}}=\inf \Big\{\sin (\pi {H_{\inf }}),\sin (\pi {H_{\sup }})\Big\}.\]
Since $0\lt {H_{\inf }}\le {H_{\sup }}\lt 1$, ${\varepsilon _{H}}\gt 0$. Thus $h(x)\gt 0$ for $x\ne 0$. Therefore, for any ${x_{0}}\gt 1$ we have
\[\begin{aligned}{}{\int _{{x_{0}}}^{\infty }}\frac{\log f(x)}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x& \ge {\int _{{x_{0}}}^{\infty }}\frac{\log h(x)}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\\ {} & =\log \left\{\frac{{\varepsilon _{H}}}{2\pi }\bigg({\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)\right\}{\int _{{x_{0}}}^{\infty }}\frac{\mathrm{d}x}{{x^{2}}}\\ {} & \hspace{1em}+(1-2{H_{\sup }}){\int _{{x_{0}}}^{\infty }}\frac{\log x}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\gt -\infty ,\end{aligned}\]
and by Theorem 2.1 of [12] this proves that M has conditional full support. For the mmfOU it is easy to check that
\[ {f_{\lambda }}(x)={\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}{f_{\lambda ,{H_{k}}}}(x)\ge \left\{\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l}\frac{{\varepsilon _{H}}\Gamma (1)}{2\pi }\bigg({\displaystyle \sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)\displaystyle \frac{|x{|^{1-2{H_{\inf }}}}}{{\lambda ^{2}}+{x^{2}}}& :& |x|\le 1\\ {} \frac{{\varepsilon _{H}}\Gamma (1)}{2\pi }\bigg({\displaystyle \sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)\displaystyle \frac{|x{|^{1-2{H_{\sup }}}}}{{\lambda ^{2}}+{x^{2}}}& :& |x|\ge 1\end{array}\right.=:h(x),\]
where
\[ {\varepsilon _{H}}:=\inf {\Big\{\sin (\pi {H_{k}})\Big\}_{k\ge 1}}=\inf \Big\{\sin (\pi {H_{\inf }}),\sin (\pi {H_{\sup }})\Big\}.\]
Since $0\lt {H_{\inf }}\le {H_{\sup }}\lt 1$, we have ${\varepsilon _{H}}\gt 0$. Consequently, $h(x)\gt 0$ for $x\ne 0$. Therefore, for any ${x_{0}}\gt 1$ we have that
\[\begin{aligned}{}{\int _{{x_{0}}}^{\infty }}\frac{\log {f_{\lambda }}(x)}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x& \ge {\int _{{x_{0}}}^{\infty }}\frac{\log h(x)}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\\ {} & =\log \left\{\frac{{\varepsilon _{H}}}{2\pi }\bigg({\sum \limits_{k=1}^{\infty }}{\sigma _{k}^{2}}\bigg)\right\}{\int _{{x_{0}}}^{\infty }}\frac{\mathrm{d}x}{{x^{2}}}\\ {} & \hspace{1em}+(1-2{H_{\sup }}){\int _{{x_{0}}}^{\infty }}\frac{\log x}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\\ {} & \hspace{1em}-{\int _{{x_{0}}}^{\infty }}\frac{\log ({\lambda ^{2}}+{x^{2}})}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\gt -\infty .\end{aligned}\]
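For completeness, the integrals appearing in the two lower bounds above are elementary; for ${x_{0}}\gt 1$,
\[ {\int _{{x_{0}}}^{\infty }}\frac{\mathrm{d}x}{{x^{2}}}=\frac{1}{{x_{0}}},\hspace{1em}{\int _{{x_{0}}}^{\infty }}\frac{\log x}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x=\frac{1+\log {x_{0}}}{{x_{0}}},\]
and, since $\log ({\lambda ^{2}}+{x^{2}})\le \log ({\lambda ^{2}}+1)+2\log x$ for $x\ge 1$,
\[ {\int _{{x_{0}}}^{\infty }}\frac{\log ({\lambda ^{2}}+{x^{2}})}{{x^{2}}}\hspace{0.1667em}\mathrm{d}x\le \frac{\log ({\lambda ^{2}}+1)+2(1+\log {x_{0}})}{{x_{0}}}\lt \infty .\]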
The claim follows now from Theorem 2.1 of [12]. □
7 Sample paths
Here we present some simulated sample paths of the mmfOU and its related mmfBm under different constraints on the Hurst exponents. Clearly, the range of the Hurst exponents characterizes the roughness of the sample paths. In each replication, the mmfOU is simulated on $N=1000$ equidistant points ${t_{k}}=k/(N-1)$ of the time interval $[0,1]$, with $n=10$ equidistant Hurst exponents ${H_{i}}={H_{\inf }}+(i-1)({H_{\sup }}-{H_{\inf }})/(n-1)$ on the Hurst interval $[{H_{\inf }},{H_{\sup }}]$. The coefficients ${\sigma _{i}}={i^{-1}},i{!^{-1}},{\mathrm{e}^{-i}}$ are used and indicated in each figure. In all paths, $\lambda =1$.
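A minimal sketch of how such replications can be produced is given below. It is one possible implementation, not necessarily the one used for the figures: each fractional Gaussian noise is simulated via a Cholesky factorization of its increment covariance, each fOU component is obtained by an Euler discretization of the Langevin equation started from ${U_{0}}=0$ (rather than from the stationary initial law), and a smaller grid ($N=256$, $[{H_{\inf }},{H_{\sup }}]=[0.6,0.8]$, ${\sigma _{i}}={i^{-1}}$) keeps the $O({N^{3}})$ factorization cheap.

```python
import numpy as np

def fgn(H, N, dt, rng):
    """Fractional Gaussian noise: N increments of fBm with Hurst H on a grid
    of step dt, via Cholesky factorization of the increment covariance."""
    k = np.arange(N)
    g = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
               - 2 * np.abs(k) ** (2 * H)) * dt ** (2 * H)
    cov = g[np.abs(k[:, None] - k[None, :])]  # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(N)

def mmfou(H_inf, H_sup, n, sigmas, lam, N, T, rng):
    """Euler scheme for U = sum_i sigma_i U^{H_i}, each fOU solving
    dU^{H_i} = -lam U^{H_i} dt + dB^{H_i}, started (non-stationarily) at 0."""
    dt = T / (N - 1)
    Hs = np.linspace(H_inf, H_sup, n)
    U = np.zeros(N)
    for H, s in zip(Hs, sigmas):
        dB = fgn(H, N - 1, dt, rng)
        u = np.zeros(N)
        for k in range(N - 1):
            u[k + 1] = u[k] - lam * u[k] * dt + dB[k]
        U += s * u
    return U

rng = np.random.default_rng(1)
n = 10
path = mmfou(0.6, 0.8, n, [1.0 / i for i in range(1, n + 1)],
             lam=1.0, N=256, T=1.0, rng=rng)
```

The Cholesky approach is exact for each fGn but scales cubically in the number of grid points; circulant-embedding (Davies–Harte) methods are the usual choice for finer grids.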