Modern Stochastics: Theory and Applications
Large deviations for drift parameter estimator of mixed fractional Ornstein–Uhlenbeck process
Volume 3, Issue 2 (2016), pp. 107–117
Dmytro Marushkevych  

https://doi.org/10.15559/16-VMSTA54
Pub. online: 7 June 2016 · Type: Research Article · Open Access

Received: 5 May 2016 · Revised: 16 May 2016 · Accepted: 16 May 2016 · Published: 7 June 2016

Abstract

We investigate the large deviation properties of the maximum likelihood drift parameter estimator for the Ornstein–Uhlenbeck process driven by a mixed fractional Brownian motion.

1 Introduction

Our purpose is to establish the large deviation principle for the maximum likelihood estimator of the drift parameter of the Ornstein–Uhlenbeck process driven by a mixed fractional Brownian motion:
(1)
\[dX_{t}=-\vartheta X_{t}dt+d\widetilde{B}_{t},\hspace{1em}t\in [0,T],\hspace{2.5pt}T>0,\]
where the initial state $X_{0}=0$, and the drift parameter ϑ is strictly positive. The process $\widetilde{B}$ is a mixed fractional Brownian motion
\[\widetilde{B}_{t}=B_{t}+{B_{t}^{H}},\hspace{1em}t\in [0,T],\]
where $B=(B_{t})$ is a Brownian motion, and ${B}^{H}=({B_{t}^{H}})$ is an independent fractional Brownian motion with Hurst exponent $H\in (0,1]$, that is, the centered Gaussian process with covariance function
\[R(s,t)=\mathbb{E}{B_{t}^{H}}{B_{s}^{H}}=\frac{1}{2}\big({t}^{2H}+{s}^{2H}-|t-s{|}^{2H}\big),\hspace{1em}s,t\in [0,T].\]
It is important to note that the parameter H is assumed to be known. The problem of estimating the Hurst parameter is not considered in this work but is of great interest for future research.
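For illustration only (this simulation is not used in the sequel), the following minimal sketch shows one way a trajectory of the model (1) could be generated on a uniform grid: the fractional component is sampled via a Cholesky factorization of the covariance $R(s,t)$, an independent Brownian component is added, and X is advanced by an Euler scheme. The use of NumPy, the parameter values, and the grid size are illustrative assumptions.

```python
import numpy as np

def simulate_mixed_fou(theta=1.0, H=0.7, T=10.0, n=1000, seed=0):
    """Illustrative Euler sketch of dX = -theta*X dt + d(B + B^H) on a uniform grid."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(dt, T, n)  # grid points t_1, ..., t_n

    # Fractional Brownian motion via Cholesky of R(s,t) = (t^2H + s^2H - |t-s|^2H)/2
    R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    R += 1e-12 * np.eye(n)  # small jitter as a numerical safeguard
    BH = np.linalg.cholesky(R) @ rng.standard_normal(n)

    # Independent standard Brownian motion on the same grid
    B = np.cumsum(rng.standard_normal(n)) * np.sqrt(dt)

    # Increments of the mixed fractional Brownian motion B + B^H (starting from 0)
    dB_tilde = np.diff(np.concatenate(([0.0], B + BH)))

    X = np.zeros(n + 1)
    for k in range(n):
        X[k + 1] = X[k] - theta * X[k] * dt + dB_tilde[k]
    return X
```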
The next two sections describe the maximum likelihood estimation procedure for the mixed fractional Ornstein–Uhlenbeck process and the basic concepts of large deviation theory.
Our main results and their proofs are given in Section 4, whereas Section 5 contains auxiliary results.

2 Maximum likelihood estimation procedure

Interest in the mixed fractional Brownian motion was triggered by Cheridito [5]. Later, the mixed fractional Brownian motion and related models were comprehensively studied by Mishura [12]. Finally, the results of the recent works of Cai, Chigansky, and Kleptsyna [4] and Chigansky and Kleptsyna [6] concerning a new canonical representation of the mixed fractional Brownian motion are of great value for the purposes of this paper.
An interesting change in properties of a mixed fractional Brownian motion $\widetilde{B}$ occurs depending on the value of H. In particular, it was shown (see [5]) that $\widetilde{B}$ is a semimartingale in its own filtration if and only if either $H=\frac{1}{2}$ or $H\in (\frac{3}{4},1]$.
The main contribution of paper [4] is a novel approach to the analysis of mixed fractional Brownian motion based on the filtering theory of Gaussian processes. The core of this method is a new canonical representation of $\widetilde{B}$.
In fact, there is an integral transformation that turns the mixed fractional Brownian motion into a martingale. In particular (see [4]), let $g(s,t)$ be the solution of the following integro-differential equation
(2)
\[g(s,t)+H\frac{d}{ds}{\int _{0}^{t}}g(r,t)|s-r{|}^{2H-1}\text{sign}(s-r)dr=1,\hspace{1em}0<s<t\le T.\]
Then the process
(3)
\[M_{t}={\int _{0}^{t}}g(s,t)d\widetilde{B}_{s},\hspace{1em}t\in [0,T],\]
is a Gaussian martingale with quadratic variation
\[\langle M\rangle _{t}={\int _{0}^{t}}g(s,t)ds,\hspace{1em}t\in [0,T].\]
Moreover, the natural filtration of the martingale M coincides with that of the mixed fractional Brownian motion $\widetilde{B}$.
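As a simple sanity check, which is easily verified by hand (and is not stated in this form in [4]), consider the case $H=\frac{1}{2}$. Then $|s-r{|}^{2H-1}=1$, so $\frac{d}{ds}{\int _{0}^{t}}g(r,t)\text{sign}(s-r)dr=2g(s,t)$, and equation (2) reduces to $2g(s,t)=1$, that is, $g\equiv \frac{1}{2}$. Consequently,
\[M_{t}=\frac{1}{2}\widetilde{B}_{t},\hspace{1em}\langle M\rangle _{t}=\frac{t}{2},\]
which agrees with the fact that for $H=\frac{1}{2}$ the process $\widetilde{B}$ is the sum of two independent Brownian motions, so that $\mathbb{E}{M_{t}^{2}}=\frac{1}{4}\mathbb{E}{\widetilde{B}_{t}^{2}}=\frac{t}{2}$.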
In addition to what has just been mentioned concerning the mixed fractional Brownian motion, an auxiliary semimartingale, appropriate for the purposes of statistical analysis, can also be associated with the Ornstein–Uhlenbeck process X defined by (1). In particular, for the martingale M defined by (3), the sample paths of the process X are smooth enough in the sense that the following process is well defined:
(4)
\[Q_{t}=\frac{d}{d\langle M\rangle _{t}}{\int _{0}^{t}}g(s,t)X_{s}ds.\]
We also define the process $Z=(Z_{t},t\in [0,T])$ by
(5)
\[Z_{t}={\int _{0}^{t}}g(s,t)dX_{s}.\]
One of the most important results of [4] is that the process Z is the fundamental semimartingale associated to X in the following sense.
Theorem 1.
Let $g(s,t)$ be the solution of (2), and the process Z be defined by (5). Then the following assertions hold:
  • 1. Z is a semimartingale with the decomposition
    (6)
    \[Z_{t}=-\vartheta {\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}+M_{t},\]
    where $M_{t}$ is the martingale defined by (3).
  • 2. X admits the representation
    (7)
    \[X_{t}={\int _{0}^{t}}\widehat{g}(s,t)dZ_{s},\]
    where
    \[\widehat{g}(s,t)=1-\frac{d}{d\langle M\rangle _{s}}{\int _{0}^{t}}g(r,s)dr.\]
  • 3. The natural filtrations $(\mathcal{X}_{t})$ and $(\mathcal{Z}_{t})$ of X and Z, respectively, coincide.
In addition, it was shown by Chigansky and Kleptsyna [6] that the process Q admits the following representation:
(8)
\[Q_{t}={\int _{0}^{t}}\psi (s,t)dZ_{s}=\frac{1}{2}\psi (t,t)Z_{t}+\frac{1}{2}{\int _{0}^{t}}\psi (s,s)dZ_{s},\hspace{1em}t\in [0,T],\]
with
\[\psi (s,t)=\frac{1}{2}\bigg(\frac{dt}{d\langle M\rangle _{t}}+\frac{ds}{d\langle M\rangle _{s}}\bigg).\]
The specific structure of the process Q allows us to determine the likelihood function for (1), which according to Corollary 2.9 in [4] equals
\[L_{T}(\vartheta ,X)=\frac{d{\mu }^{X}}{d{\mu }^{\widetilde{B}}}(X)=\exp \Bigg(-\vartheta {\int _{0}^{T}}Q_{t}dZ_{t}-\frac{1}{2}{\vartheta }^{2}{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg),\]
where ${\mu }^{X}$ and ${\mu }^{\widetilde{B}}$ are the probability measures induced by the processes X and $\widetilde{B}$, respectively. Thus, the score function for (1), that is, the derivative of the log-likelihood function based on the observations over the interval $[0,T]$, is given by
\[\varSigma _{T}(\vartheta )=-{\int _{0}^{T}}Q_{t}dZ_{t}-\vartheta {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t},\]
and setting $\varSigma _{T}(\vartheta )=0$ yields the maximum likelihood estimator of the drift parameter ϑ. Moreover, according to Theorem 2.9 in [6], which is also presented below as Theorem 2, the maximum likelihood estimator is asymptotically normal.
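Continuing the sanity check for $H=\frac{1}{2}$ (again easily verified by hand), we have $\langle M\rangle _{t}=\frac{t}{2}$, hence $\psi (t,t)\equiv 2$, and (4) and (5) give $Q_{t}=X_{t}$ and $Z_{t}=\frac{1}{2}X_{t}$. The likelihood above then becomes
\[L_{T}(\vartheta ,X)=\exp \Bigg(-\frac{\vartheta }{2}{\int _{0}^{T}}X_{t}dX_{t}-\frac{{\vartheta }^{2}}{4}{\int _{0}^{T}}{X_{t}^{2}}dt\Bigg),\]
which is the classical Girsanov likelihood for the equation $dX_{t}=-\vartheta X_{t}dt+\sqrt{2}\hspace{0.1667em}dW_{t}$ driven by the Brownian motion $W=\widetilde{B}/\sqrt{2}$.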
Theorem 2.
Let $g(s,t)$ be the solution of (2), and let the processes Q and Z be defined by (4) and (5), respectively. The maximum likelihood estimator of ϑ is given by
(9)
\[\widehat{\vartheta }_{T}(X)=-\frac{{\textstyle\int _{0}^{T}}Q_{t}dZ_{t}}{{\textstyle\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}}.\]
Since $\vartheta >0$, this estimator is asymptotically normal at the usual rate:
\[\sqrt{T}\big(\widehat{\vartheta }_{T}(X)-\vartheta \big)\underset{T\to \infty }{\overset{d}{\to }}N(0,2\vartheta ).\]
We will develop this result by proving the large deviation principle for the maximum-likelihood estimator (9).
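To make the estimator concrete, here is a minimal numerical sketch (assuming NumPy and hypothetical sampled paths) of how $\widehat{\vartheta }_{T}$ in (9) could be approximated once discretized values of Q, Z, and $\langle M\rangle $ on a common time grid are available; the stochastic and Stieltjes integrals are replaced by left-point sums.

```python
import numpy as np

def mle_drift(Q, Z, qvM):
    """Sketch of the estimator (9) from sampled paths on a common grid.

    Q, Z, qvM are hypothetical arrays with the values of Q_t, Z_t and <M>_t
    at the grid points; the integrals in (9) are replaced by left-point sums.
    """
    dZ = np.diff(Z)
    dqv = np.diff(qvM)
    num = np.sum(Q[:-1] * dZ)        # approximates  int_0^T Q_t dZ_t
    den = np.sum(Q[:-1] ** 2 * dqv)  # approximates  int_0^T Q_t^2 d<M>_t
    return -num / den
```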

3 Large deviation principle

The large deviation principle characterizes the limiting behavior of a family of random variables (or of the corresponding probability measures) in terms of a rate function.
A rate function I is a lower semicontinuous function $I:\mathbb{R}\to [0,+\infty ]$ such that, for all $\alpha \in [0,+\infty )$, the level sets $\{x:I(x)\le \alpha \}$ are closed subsets of $\mathbb{R}$. Moreover, I is called a good rate function if its level sets are compact.
We say that a family of real random variables $(Z_{T})_{T>0}$ satisfies the large deviation principle with rate function $I:\mathbb{R}\to [0,+\infty ]$ if for any Borel set $\varGamma \subset \mathbb{R}$,
\[-\underset{x\in {\varGamma }^{o}}{\inf }I(x)\le \underset{T\to \infty }{\liminf }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )\le \underset{T\to \infty }{\limsup }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )\le -\underset{x\in \overline{\varGamma }}{\inf }I(x),\]
where ${\varGamma }^{o}$ and $\overline{\varGamma }$ denote the interior and closure of Γ, respectively. Note that a family of random variables can have at most one rate function associated with its large deviation principle (for the proof, we refer the reader to the book by Dembo and Zeitouni [7]). Moreover, it is obvious that if $(Z_{T})_{T>0}$ satisfies the large deviation principle and a Borel set $\varGamma \subset \mathbb{R}$ is such that
\[\underset{x\in {\varGamma }^{o}}{\inf }I(x)=\underset{x\in \overline{\varGamma }}{\inf }I(x),\]
then
\[\underset{T\to \infty }{\lim }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )=-\underset{x\in \varGamma }{\inf }I(x).\]
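As a standard illustration of these notions (it is not needed in what follows), take $Z_{T}=\frac{1}{T}W_{T}$, where W is a Brownian motion, so that $Z_{T}\sim N(0,\frac{1}{T})$. Then $(Z_{T})_{T>0}$ satisfies the large deviation principle with the good rate function $I(x)=\frac{{x}^{2}}{2}$, since, by the standard Gaussian tail asymptotics,
\[\frac{1}{T}\log \mathbb{P}(Z_{T}\ge x)\to -\frac{{x}^{2}}{2},\hspace{1em}T\to \infty ,\hspace{2.5pt}x>0.\]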
We shall prove the large deviation principle for the family of maximum likelihood estimators (9) by an approach similar to that of [1] and [3] for the Ornstein–Uhlenbeck process and the fractional Ornstein–Uhlenbeck process, respectively.
The main tool for proving the large deviation principle for the drift parameter estimator of the mixed fractional Ornstein–Uhlenbeck process (1) is the normalized cumulant generating function of an arbitrary linear combination of ${\int _{0}^{T}}Q_{t}dZ_{t}$ and ${\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}$,
(10)
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\big],\]
where, for any $(a,b)\in {\mathbb{R}}^{2}$,
\[\mathcal{Z}_{T}(a,b)=a{\int _{0}^{T}}Q_{t}dZ_{t}+b{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}.\]
Note that, for some $(a,b)\in {\mathbb{R}}^{2}$, the expectation in (10) may be infinite. In fact, in order to establish a large deviation principle for $\widehat{\vartheta }_{T}$ it suffices to find the limit of $\mathcal{L}_{T}(a,b)$ as $T\to \infty $ and apply the following lemma, which is a consequence of the Gärtner–Ellis theorem (Theorem 2.3.6 in [7]).
Lemma 1.
For a family of maximum likelihood estimators $(\vartheta _{T})_{T>0}$, let the function $\mathcal{L}_{T}(a,b)$ be defined by (10), and, for each fixed value of x, let $\Delta _{x}$ denote the set of $a\in \mathbb{R}$ for which $\lim _{T\to \infty }\mathcal{L}_{T}(a,-xa)$ exists and is finite. If $\Delta _{x}$ is not empty for each value of x, then $(\vartheta _{T})_{T>0}$ satisfies the large deviation principle with a good rate function
(11)
\[I(x)=-\underset{a\in \Delta _{x}}{\inf }\underset{T\to \infty }{\lim }\mathcal{L}_{T}(a,-xa).\]

4 Main results

Theorem 3.
The maximum likelihood estimator $\widehat{\vartheta }_{T}$ defined by (9) satisfies the large deviation principle with the good rate function
\[I(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}-\frac{{(x+\vartheta )}^{2}}{4x}\hspace{1em}& \textit{if}\hspace{2.5pt}x<-\frac{\vartheta }{3},\\{} 2x+\vartheta \hspace{1em}& \textit{if}\hspace{2.5pt}x\ge -\frac{\vartheta }{3}.\end{array}\right.\]
Proof.
As mentioned in the previous section, in order to establish the large deviation principle for $\widehat{\vartheta }_{T}$ and determine the corresponding good rate function, we need to find the limit
(12)
\[\mathcal{L}(a,b)=\underset{T\to \infty }{\lim }\mathcal{L}_{T}(a,b)\]
and determine the set of $(a,b)\in {\mathbb{R}}^{2}$ for which this limit is finite.
For arbitrary $\varphi \in \mathbb{R}$, consider the Doléans exponential of $(\varphi +\vartheta ){\int _{0}^{t}}Q_{s}dM_{s}$,
\[\varLambda _{\varphi }(t)=\exp \Bigg((\varphi +\vartheta ){\int _{0}^{t}}Q_{s}dM_{s}-\frac{{(\varphi +\vartheta )}^{2}}{2}{\int _{0}^{t}}{Q_{s}^{2}}d\langle M\rangle _{s}\Bigg).\]
Note that $(\frac{1}{\sqrt{\psi (t,t)}}Q_{t})_{t\ge 0}$ is a Gaussian process whose mean and variance functions are bounded on $[0,T]$. Thus, $\varLambda _{\varphi }$ satisfies the conditions of Girsanov’s theorem in accordance with Example 3 of paragraph 2 of Section 6 in [11], so we can apply the usual change of measure and consider the new probability measure $\mathbb{P}_{\varphi }$ defined by the local density
\[\frac{d\mathbb{P}_{\varphi }}{d\mathbb{P}}=\varLambda _{\varphi }(T)=\exp \Bigg((\varphi +\vartheta ){\int _{0}^{T}}Q_{t}dM_{t}-\frac{{(\varphi +\vartheta )}^{2}}{2}{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\]
Observe that, due to (6), $\varLambda _{\varphi }(T)$ can be rewritten in terms of the fundamental semimartingale
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{d\mathbb{P}_{\varphi }}{d\mathbb{P}}& \displaystyle =\varLambda _{\varphi }(T)\\{} & \displaystyle =\exp \Bigg(\hspace{-0.1667em}(\varphi +\vartheta )\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}Q_{t}dZ_{t}\hspace{0.1667em}+\hspace{0.1667em}(\varphi +\vartheta )\vartheta \hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}{Q_{t}^{2}}d\langle M\rangle _{t}\hspace{0.1667em}-\hspace{0.1667em}\frac{{(\varphi +\vartheta )}^{2}}{2}\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}{Q_{t}^{2}}d\langle M\rangle _{t}\hspace{-0.1667em}\Bigg)\\{} & \displaystyle =\exp \Bigg(\hspace{-0.1667em}(\varphi +\vartheta )\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}Q_{t}dZ_{t}-\frac{{\varphi }^{2}-{\vartheta }^{2}}{2}\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\end{array}\]
Consequently, we can rewrite $\mathcal{L}_{T}(a,b)$ as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathcal{L}_{T}(a,b)& \displaystyle =\frac{1}{T}\log \mathbb{E}\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\big]\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}_{\varphi }\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\varLambda _{\varphi }{(T)}^{-1}\big]\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg(\hspace{-0.1667em}(a\hspace{0.1667em}-\hspace{0.1667em}\varphi \hspace{0.1667em}-\hspace{0.1667em}\vartheta )\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}Q_{t}dZ_{t}\hspace{0.1667em}+\hspace{0.1667em}\frac{1}{2}\big(2b\hspace{0.1667em}-\hspace{0.1667em}{\vartheta }^{2}\hspace{0.1667em}+\hspace{0.1667em}{\varphi }^{2}\big)\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}{Q_{t}^{2}}d\langle M\rangle _{t}\hspace{-0.1667em}\Bigg).\end{array}\]
Since φ is an arbitrary real number, we may choose $\varphi =a-\vartheta $. Then
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg(\frac{1}{2}\big(2b-{\vartheta }^{2}+{(a-\vartheta )}^{2}\big){\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\]
or, denoting $\mu =-\frac{1}{2}(2b-{\vartheta }^{2}+{(a-\vartheta )}^{2})$,
(13)
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\]
As mentioned before, the expectation in (13) can be infinite for some combinations of μ and φ. Our purpose is to determine the set of $(\varphi ,\mu )\in {\mathbb{R}}^{2}$ for which this expectation and the limit (12) are finite. By Girsanov’s theorem, under $\mathbb{P}_{\varphi }$, the process
(14)
\[M_{t}-(\varphi +\vartheta ){\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}=Z_{t}-\varphi {\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}\]
has the same distribution as M under $\mathbb{P}$. Consequently, applying the inverse integral transformation (7) to (14), we get that, under $\mathbb{P}_{\varphi }$, the process $X_{t}-\varphi {\int _{0}^{t}}X_{s}ds$ is a mixed fractional Brownian motion.
In other words, under the new probability measure $\mathbb{P}_{\varphi }$, the process X is a mixed fractional Ornstein–Uhlenbeck process with drift parameter $-\varphi $. Consequently, in order to find the limit of (13) as $T\to \infty $, we can apply Lemma 2, which is presented in Section 5. Thus, we have the equality
\[\mathcal{L}(a,b)=-\frac{\varphi }{2}-\sqrt{\frac{{\varphi }^{2}}{4}+\frac{\mu }{2}}=-\frac{1}{2}\big(a-\vartheta +\sqrt{{\vartheta }^{2}-2b}\big),\]
and the convergence (12) holds for $\mu >-\frac{{\varphi }^{2}}{2}$, that is, for ${\vartheta }^{2}-2b>0$.
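For the reader’s convenience, the second equality in the last display follows by substituting $\varphi =a-\vartheta $ and the definition of μ:
\[\frac{{\varphi }^{2}}{4}+\frac{\mu }{2}=\frac{{(a-\vartheta )}^{2}}{4}-\frac{2b-{\vartheta }^{2}+{(a-\vartheta )}^{2}}{4}=\frac{{\vartheta }^{2}-2b}{4}.\]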
For $x\in \mathbb{R}$, denote the function
\[L_{x}(a)=\mathcal{L}(a,-xa)=-\frac{1}{2}\big(a-\vartheta +\sqrt{{\vartheta }^{2}+2xa}\big)\]
defined on the set
\[\Delta _{x}=\big\{a\in \mathbb{R}|{\vartheta }^{2}+2xa>0\big\}.\]
Then, according to (11), the rate function $I(x)$ for the maximum likelihood estimator $\widehat{\vartheta }_{T}$ can be found as
\[I(x)=-\underset{a\in \Delta _{x}}{\inf }L_{x}(a).\]
Consequently, a straightforward calculation of this infimum finishes the proof of the theorem.  □
Remark 1.
Observe that the rate function $I(x)$ does not depend on the parameter H. Hence, $\widehat{\vartheta }_{T}$ satisfies the same large deviation principle as those established by Florens-Landais and Pham [8] for the standard Ornstein–Uhlenbeck process and by Bercu, Coutin, and Savy [3] for the fractional Ornstein–Uhlenbeck process (see also [2, 9]).

5 Auxiliary results

The following lemma plays a key role in the proof of Theorem 3.
Lemma 2.
For a mixed fractional Ornstein–Uhlenbeck process X with drift parameter ϑ, we have the following limit:
(15)
\[\mathcal{K}_{T}(\mu )=\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\to \frac{\vartheta }{2}-\sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}},\hspace{1em}T\to \infty ,\]
for all $\mu >-\frac{{\vartheta }^{2}}{2}$.
Proof.
We shall prove the lemma using an approach similar to that in [6]. Denote $V_{t}={\int _{0}^{t}}\psi (s,s)dZ_{s}$. Then, according to (8), we can rewrite
\[\begin{array}{r@{\hskip0pt}l}\displaystyle dZ_{t}& \displaystyle =-\vartheta Q_{t}d\langle M\rangle _{t}+dM_{t}=-\frac{\vartheta }{2}\psi (t,t)Z_{t}d\langle M\rangle _{t}-\frac{\vartheta }{2}V_{t}d\langle M\rangle _{t}+dM_{t}\\{} & \displaystyle =-\frac{\vartheta }{2}Z_{t}dt-\frac{\vartheta }{2}V_{t}\frac{1}{\psi (t,t)}dt+\frac{1}{\sqrt{\psi (t,t)}}dW_{t},\end{array}\]
where W is a Brownian motion; here we used that $\psi (t,t)d\langle M\rangle _{t}=dt$ and that $dM_{t}=\frac{1}{\sqrt{\psi (t,t)}}dW_{t}$ for some Brownian motion W. Consequently, we get
\[dV_{t}=\psi (t,t)dZ_{t}=-\frac{\vartheta }{2}\psi (t,t)Z_{t}dt-\frac{\vartheta }{2}V_{t}dt+\sqrt{\psi (t,t)}dW_{t}.\]
The Gaussian process $\zeta _{t}={(Z_{t},V_{t})}^{T}$ solves the linear Itô stochastic differential equation
\[d\zeta _{t}=-\frac{\vartheta }{2}A(t)\zeta _{t}dt+b(t)dW_{t},\]
where
\[A(t)=\left(\begin{array}{c@{\hskip10.0pt}c}1& \frac{1}{\psi (t,t)}\\{} \psi (t,t)& 1\end{array}\right)\hspace{1em}\text{and}\hspace{1em}b(t)=\left(\begin{array}{c}\frac{1}{\sqrt{\psi (t,t)}}\\{} \sqrt{\psi (t,t)}\end{array}\right).\]
Moreover, $\mathcal{K}_{T}(\mu )$ in (15) can be rewritten as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathcal{K}_{T}(\mu )& \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\big(\psi (t,t)Z_{t}+V_{t}\big)}^{2}d\langle M\rangle _{t}\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\bigg(\sqrt{\psi (t,t)}Z_{t}+\frac{1}{\sqrt{\psi (t,t)}}V_{t}\bigg)}^{2}dt\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\zeta _{t}^{T}}R(t)\zeta _{t}dt\Bigg),\end{array}\]
where
\[R(t)=\left(\begin{array}{c@{\hskip10.0pt}c}\psi (t,t)& 1\\{} 1& \frac{1}{\psi (t,t)}\end{array}\right).\]
By the Cameron–Martin-type formula from Section 4.1 of [10],
\[\mathcal{K}_{T}(\mu )=-\frac{\mu }{4T}{\int _{0}^{T}}\text{tr}\big(\varGamma (t)R(t)\big)dt,\]
where $\varGamma (t)$ is the solution of the equation
(16)
\[\dot{\varGamma }(t)=-\frac{\vartheta }{2}A(t)\varGamma (t)-\frac{\vartheta }{2}\varGamma (t){A}^{T}(t)-\frac{\mu }{2}\varGamma (t)R(t)\varGamma (t)+B(t)\]
with $B(t)=b(t){b}^{T}(t)$ and initial condition $\varGamma (0)=0$.
We seek the solution of (16) in the form $\varGamma (t)={\varPsi _{1}^{-1}}(t)\varPsi _{2}(t)$, where $\varPsi _{1}(t)$ and $\varPsi _{2}(t)$ are the solutions of the following system of equations:
(17)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \dot{\varPsi }_{1}(t)& \displaystyle =\frac{\vartheta }{2}\varPsi _{1}(t)A(t)+\frac{\mu }{2}\varPsi _{2}(t)R(t),\\{} \displaystyle \dot{\varPsi }_{2}(t)& \displaystyle =\varPsi _{1}(t)B(t)-\frac{\vartheta }{2}\varPsi _{2}(t){A}^{T}(t),\end{array}\]
with initial conditions $\varPsi _{1}(0)=I$ and $\varPsi _{2}(0)=0$. From the first equation of (17) we get
\[{\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)=\frac{\vartheta }{2}A(t)+\frac{\mu }{2}\varGamma (t)R(t),\]
and since $\text{tr }A(t)=2$, it follows that
\[\frac{\mu }{2}\text{tr}\big(\varGamma (t)R(t)\big)=\text{tr}\big({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)\big)-\vartheta .\]
Since $\dot{\varPsi }_{1}(t)=\varPsi _{1}(t)({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t))$, by Liouville’s formula we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle -\frac{\mu }{4T}{\int _{0}^{T}}\text{tr}\big(\varGamma (t)R(t)\big)dt& \displaystyle =-\frac{1}{2T}{\int _{0}^{T}}\text{tr }\big({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)\big)dt+\frac{\vartheta }{2}\\{} & \displaystyle =-\frac{1}{2T}\log \det \varPsi _{1}(T)+\frac{\vartheta }{2}.\end{array}\]
In order to calculate $\lim _{T\to \infty }\frac{1}{T}\log \det \varPsi _{1}(T)$, define the matrix
\[J=\left(\begin{array}{c@{\hskip10.0pt}c}0& 1\\{} 1& 0\end{array}\right)\]
and note that $A{(t)}^{T}=JA(t)J$, $R(t)=JA(t)$, and $B(t)=A(t)J$. Setting $\widetilde{\varPsi }_{2}(t)=\varPsi _{2}(t)J$, from (17) we obtain the following equation system:
(18)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \dot{\varPsi }_{1}(t)& \displaystyle =\frac{\vartheta }{2}\varPsi _{1}(t)A(t)+\frac{\mu }{2}\widetilde{\varPsi }_{2}(t)A(t),\\{} \displaystyle \dot{\widetilde{\varPsi }}_{2}(t)& \displaystyle =\varPsi _{1}(t)A(t)-\frac{\vartheta }{2}\widetilde{\varPsi }_{2}(t)A(t),\end{array}\]
with initial conditions $\varPsi _{1}(0)=I$ and $\widetilde{\varPsi }_{2}(0)=0$. When $\frac{{\vartheta }^{2}}{2}+\mu >0$, the coefficient matrix of system (18)
\[\left(\begin{array}{c@{\hskip10.0pt}c}\frac{\vartheta }{2}& \frac{\mu }{2}\\{} 1& -\frac{\vartheta }{2}\end{array}\right)\]
has two real eigenvalues $\pm \lambda $ with $\lambda =\sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}}$ and eigenvectors
\[{v}^{\pm }=\left(\begin{array}{c}\frac{\vartheta }{2}\pm \lambda \\{} 1\end{array}\right).\]
Denote ${a}^{\pm }=\frac{\vartheta }{2}\pm \lambda =\frac{\vartheta }{2}\pm \sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}}$. Diagonalizing system (18), we get
\[\varPsi _{1}(t)={a}^{+}\varUpsilon _{1}(t)+{a}^{-}\varUpsilon _{2}(t),\]
where $\varUpsilon _{1}(t)$ and $\varUpsilon _{2}(t)$ are the solutions of the equations
(19)
\[\begin{array}{r}\displaystyle \dot{\varUpsilon }_{1}(t)=\lambda \varUpsilon _{1}(t)A(t),\\{} \displaystyle \dot{\varUpsilon }_{2}(t)=-\lambda \varUpsilon _{2}(t)A(t),\end{array}\]
with initial conditions $\varUpsilon _{1}(0)=\frac{1}{2\lambda }I$ and $\varUpsilon _{2}(0)=-\frac{1}{2\lambda }I$. Denote by $M(t)={\varUpsilon _{2}^{-1}}(t)\varUpsilon _{1}(t)$ (not to be confused with the martingale M) the matrix function that solves the equation
(20)
\[\dot{M}(t)=\lambda \big(A(t)M(t)+M(t)A(t)\big)\]
subject to initial condition $M(0)=-I$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{1}{T}\log \det \varPsi _{1}(T)& \displaystyle =\frac{1}{T}\log \det \big({a}^{+}\varUpsilon _{1}(T)+{a}^{-}\varUpsilon _{2}(T)\big)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \det \bigg(I+\frac{{a}^{+}}{{a}^{-}}M(T)\bigg)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)\\{} & \displaystyle \hspace{1em}+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)+\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)\bigg)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)\bigg)\\{} & \displaystyle \hspace{1em}+\frac{1}{T}\log \bigg(1+\frac{\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)}{1+{(\frac{{a}^{+}}{{a}^{-}})}^{2}\det M(T)}\bigg).\end{array}\]
Here the third equality uses the identity $\det (I+cM)=1+c\hspace{0.1667em}\text{tr}\hspace{0.1667em}M+{c}^{2}\det M$, valid for any $2\times 2$ matrix M. Applying Liouville’s formula to (19), we get
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)\bigg)\\{} & \displaystyle \hspace{1em}=\frac{1}{T}\log \bigg({\bigg(\frac{{a}^{-}}{2\lambda }\bigg)}^{2}\exp (-2\lambda T)\bigg)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\exp (4\lambda T)\bigg)\to 2\lambda \end{array}\]
as $T\to \infty $. Thus, in order to prove that limit (15) holds, we should show that
(21)
\[\frac{1}{T}\log \bigg(1+\frac{\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)}{1+{(\frac{{a}^{+}}{{a}^{-}})}^{2}\exp (4\lambda T)}\bigg)\to 0,\hspace{1em}T\to \infty .\]
Given (20), by Theorem 3 in [13] we have
\[|\text{tr }M(T)|\le 2\sqrt{2}\exp (2\lambda T).\]
Thus, the limit (21) holds, and hence $\frac{1}{T}\log \det \varPsi _{1}(T)\to 2\lambda $ as $T\to \infty $. Together with the identity $\mathcal{K}_{T}(\mu )=-\frac{1}{2T}\log \det \varPsi _{1}(T)+\frac{\vartheta }{2}$ obtained above, this yields $\mathcal{K}_{T}(\mu )\to \frac{\vartheta }{2}-\lambda $, which is exactly (15) and finishes the proof of the lemma.  □
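As a purely illustrative numerical check of the limit (15) (it is not part of the proof), one can integrate the Riccati equation (16) in the special case $H=\frac{1}{2}$ discussed in Section 2, where $\langle M\rangle _{t}=\frac{t}{2}$ and hence $\psi (t,t)\equiv 2$, so that all coefficient matrices are constant. The following sketch assumes NumPy and SciPy; the chosen values of ϑ, μ, and T are illustrative, and the agreement with the right-hand side of (15) is only approximate at any finite T.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative check of (15)-(16) for H = 1/2, where psi(t,t) = 2 is constant.
theta, mu, T = 1.0, 0.3, 200.0
psi = 2.0
A = np.array([[1.0, 1.0 / psi], [psi, 1.0]])
b = np.array([[1.0 / np.sqrt(psi)], [np.sqrt(psi)]])
B = b @ b.T
R = np.array([[psi, 1.0], [1.0, 1.0 / psi]])

def rhs(t, y):
    # y packs Gamma (flattened) and the running integral of tr(Gamma R)
    G = y[:4].reshape(2, 2)
    dG = -0.5 * theta * (A @ G + G @ A.T) - 0.5 * mu * (G @ R @ G) + B
    return np.concatenate([dG.ravel(), [np.trace(G @ R)]])

sol = solve_ivp(rhs, (0.0, T), np.zeros(5), rtol=1e-8, atol=1e-10)
K_T = -mu / (4.0 * T) * sol.y[4, -1]                   # Cameron-Martin formula for K_T(mu)
limit = theta / 2 - np.sqrt(theta ** 2 / 4 + mu / 2)   # right-hand side of (15)
print("K_T(mu) at T =", T, ":", K_T)
print("limit in (15):", limit)
```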

Acknowledgments

My deepest gratitude goes to my advisor Marina Kleptsyna, who suggested the problem presented in this paper and whose advice and help were very important. I would also like to thank the reviewers, whose remarks and suggestions allowed me to significantly improve this article.

References

[1] Bercu, B., Rouault, A.: Sharp large deviations for the Ornstein–Uhlenbeck process. Theory Probab. Appl. 46(1), 1–19 (2002). MR1968706. doi:10.1137/S0040585X97978737
[2] Bercu, B., Gamboa, F., Rouault, A.: Large deviations for quadratic forms of stationary Gaussian processes. Stoch. Process. Appl. 71(1), 75–90 (1997). MR1480640. doi:10.1016/S0304-4149(97)00071-9
[3] Bercu, B., Coutin, L., Savy, N.: Sharp large deviations for the fractional Ornstein–Uhlenbeck process. Theory Probab. Appl. 55(4), 575–610 (2010). MR2859161. doi:10.1137/S0040585X97985108
[4] Cai, C., Chigansky, P., Kleptsyna, M.: Mixed Gaussian processes: A filtering approach. Ann. Probab. (2015, to appear). arXiv:1208.6253
[5] Cheridito, P.: Mixed fractional Brownian motion. Bernoulli 7(6), 913–934 (2001). MR1873835. doi:10.2307/3318626
[6] Chigansky, P., Kleptsyna, M.: Spectral asymptotics of the fractional Brownian motion covariance operator. arXiv:1507.04194 (2015)
[7] Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, 2nd edn. Applications of Mathematics (New York), vol. 38. Springer, New York (1998). MR1619036. doi:10.1007/978-1-4612-5320-4
[8] Florens-Landais, D., Pham, H.: Large deviations in estimation of an Ornstein–Uhlenbeck model. J. Appl. Probab. 36(1), 60–77 (1999). MR1699608. doi:10.1239/jap/1032374229
[9] Gamboa, F., Rouault, A., Zani, M.: A functional large deviations principle for quadratic forms of Gaussian stationary processes. Stat. Probab. Lett. 43(3), 299–308 (1999). MR1708097. doi:10.1016/S0167-7152(98)00270-3
[10] Kleptsyna, M.L., Le Breton, A.: Optimal linear filtering of general multidimensional Gaussian processes and its application to Laplace transforms of quadratic functionals. J. Appl. Math. Stoch. Anal. 14(3), 215–226 (2001). MR1853078. doi:10.1155/S104895330100017X
[11] Liptser, R., Shiryaev, A.: Statistics of Random Processes. Nauka, Moscow (1974)
[12] Mishura, Y.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, Berlin Heidelberg (2008). MR2378138. doi:10.1007/978-3-540-75873-0
[13] Vidyasagar, M.: Nonlinear Systems Analysis, 2nd edn. SIAM, Philadelphia, PA (2002). MR1946479. doi:10.1137/1.9780898719185


Copyright
© 2016 The Author(s). Published by VTeX. Open access article under the CC BY license.

Keywords
Large deviations, Ornstein–Uhlenbeck process, mixed fractional Brownian motion, maximum likelihood estimator

MSC2010
60G15, 62F12, 60G22


