1 Introduction
Our purpose is to establish a large deviation principle for the maximum likelihood estimator of the drift parameter of the Ornstein–Uhlenbeck process driven by a mixed fractional Brownian motion:
(1)
\[dX_{t}=-\vartheta X_{t}dt+d\widetilde{B}_{t},\hspace{1em}t\in [0,T],\]
where the initial state $X_{0}=0$, and the drift parameter ϑ is strictly positive. The process $\widetilde{B}$ is a mixed fractional Brownian motion
\[\widetilde{B}_{t}=B_{t}+{B_{t}^{H}},\hspace{1em}t\ge 0,\]
where $B=(B_{t})$ is a Brownian motion, and ${B}^{H}=({B_{t}^{H}})$ is an independent fractional Brownian motion with Hurst exponent $H\in (0,1]$, that is, a centered Gaussian process with covariance function
\[\mathbb{E}{B_{t}^{H}}{B_{s}^{H}}=\frac{1}{2}\big({t}^{2H}+{s}^{2H}-|t-s{|}^{2H}\big).\]
It is important to notice that the parameter H is assumed to be known. The problem of estimating the Hurst parameter is not considered in this work but is of great interest for future research.
The following two sections describe the maximum likelihood estimation procedure for the mixed fractional Ornstein–Uhlenbeck process and the basic concepts of large deviation theory.
2 Maximum likelihood estimation procedure
Interest in the mixed fractional Brownian motion was triggered by Cheridito [5]. Later, the mixed fractional Brownian motion and related models were comprehensively studied by Mishura [12]. Finally, the results of the recent works of Cai, Kleptsyna, and Chigansky [4] and Chigansky and Kleptsyna [6] concerning a new canonical representation of the mixed fractional Brownian motion are of great value for the purposes of this paper.
An interesting change in properties of a mixed fractional Brownian motion $\widetilde{B}$ occurs depending on the value of H. In particular, it was shown (see [5]) that $\widetilde{B}$ is a semimartingale in its own filtration if and only if either $H=\frac{1}{2}$ or $H\in (\frac{3}{4},1]$.
The main contribution of paper [4] is a novel approach to the analysis of mixed fractional Brownian motion based on the filtering theory of Gaussian processes. The core of this method is a new canonical representation of $\widetilde{B}$.
In fact, there is an integral transformation that turns the mixed fractional Brownian motion into a martingale. In particular (see [4]), let $g(s,t)$ be the solution of the following integro-differential equation
(2)
\[g(s,t)+H\frac{d}{ds}{\int _{0}^{t}}g(r,t)|s-r{|}^{2H-1}\operatorname{sign}(s-r)dr=1,\hspace{1em}0<s<t\le T.\]
Then the process
(3)
\[M_{t}={\int _{0}^{t}}g(s,t)d\widetilde{B}_{s},\hspace{1em}t\in [0,T],\]
is a Gaussian martingale with quadratic variation
\[\langle M\rangle _{t}={\int _{0}^{t}}g(s,t)ds.\]
Moreover, the natural filtration of the martingale M coincides with that of the mixed fractional Brownian motion $\widetilde{B}$.
In addition to what has just been mentioned concerning the mixed fractional Brownian motion, an auxiliary semimartingale, appropriate for the purposes of statistical analysis, can also be associated with the corresponding Ornstein–Uhlenbeck process X defined by (1). In particular, for the martingale M defined by (3), the sample paths of the process X are smooth enough in the sense that the following process is well defined:
(4)
\[Q_{t}=\frac{d}{d\langle M\rangle _{t}}{\int _{0}^{t}}g(s,t)X_{s}ds,\hspace{1em}t\in [0,T].\]
We also define the process $Z=(Z_{t},t\in [0,T])$ by
(5)
\[Z_{t}={\int _{0}^{t}}g(s,t)dX_{s}.\]
One of the most important results of [4] is that the process Z is the fundamental semimartingale associated to X in the following sense.
Theorem 1.
Let $g(s,t)$ be the solution of (2), and the process Z be defined by (5). Then the following assertions hold:
1. Z is a semimartingale with the decomposition
(6)
\[Z_{t}=-\vartheta {\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}+M_{t},\]
where M is the Gaussian martingale defined by (3);
2. X admits the inverse representation
(7)
\[X_{t}={\int _{0}^{t}}\hat{g}(s,t)dZ_{s}\]
for a suitable deterministic kernel $\hat{g}(s,t)$;
3. the natural filtrations of X and Z coincide.
In addition, it was shown by Chigansky and Kleptsyna [6] that the process Q admits the following representation:
(8)
\[Q_{t}=\frac{1}{2}\bigg(\psi (t,t)Z_{t}+{\int _{0}^{t}}\psi (s,s)dZ_{s}\bigg),\]
where the deterministic function ψ is linked to the quadratic variation of M by $\psi (t,t)d\langle M\rangle _{t}=dt$ (this relation is used repeatedly below).
The specific structure of the process Q allows us to determine the likelihood function for (1), which according to Corollary 2.9 in [4] equals
\[L_{T}(\vartheta ,X)=\frac{d{\mu }^{X}}{d{\mu }^{\widetilde{B}}}(X)=\exp \Bigg(-\vartheta {\int _{0}^{T}}Q_{t}dZ_{t}-\frac{1}{2}{\vartheta }^{2}{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg),\]
where ${\mu }^{X}$ and ${\mu }^{\widetilde{B}}$ are the probability measures induced by the processes X and $\widetilde{B}$, respectively. Thus, the score function for (1), that is, the derivative of the log-likelihood function based on observations over the interval $[0,T]$, is given by
\[\varSigma _{T}(\vartheta )=-{\int _{0}^{T}}Q_{t}dZ_{t}-\vartheta {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t},\]
which allows us to determine the maximum likelihood estimator of the drift parameter ϑ. Moreover, according to Theorem 2.9 in [6], stated below, the maximum likelihood estimator is asymptotically normal.

Theorem 2.
Let $g(s,t)$ be the solution of (2), and let the processes Q and Z be defined by (4) and (5), respectively. Then the maximum likelihood estimator of ϑ is given by
(9)
\[\widehat{\vartheta }_{T}(X)=-\frac{{\textstyle\int _{0}^{T}}Q_{t}dZ_{t}}{{\textstyle\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}}.\]
Since $\vartheta >0$, this estimator is asymptotically normal at the usual rate $\sqrt{T}$.

We will develop this result by proving the large deviation principle for the maximum likelihood estimator (9).
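Although the present paper is purely theoretical, the consistency of the estimator (9) is easy to illustrate numerically. The sketch below is an illustration only, under two assumptions not argued here: it takes the special case $H=\frac{1}{2}$, where $\widetilde{B}=B+{B}^{1/2}$ is (in law) $\sqrt{2}$ times a Brownian motion, and it uses the classical reduced form $\widehat{\vartheta }_{T}=-{\int _{0}^{T}}X_{t}dX_{t}/{\int _{0}^{T}}{X_{t}^{2}}dt$ of the estimator together with a simple Euler–Maruyama discretization.

```python
import numpy as np

# Illustrative sketch (assumption: H = 1/2, so dBtilde is sqrt(2) x Brownian
# increments, and the MLE reduces to -int X dX / int X^2 dt).
rng = np.random.default_rng(42)
theta = 1.0              # true drift parameter (vartheta > 0)
T, n = 100.0, 20_000     # time horizon and number of Euler steps
dt = T / n

estimates = []
for _ in range(5):
    dB = np.sqrt(2.0 * dt) * rng.standard_normal(n)  # increments of Btilde
    X = np.empty(n + 1)
    X[0] = 0.0                                       # initial state X_0 = 0
    for i in range(n):                               # Euler-Maruyama step for
        X[i + 1] = X[i] - theta * X[i] * dt + dB[i]  # dX = -theta X dt + dBtilde
    dX = np.diff(X)
    estimates.append(-np.dot(X[:-1], dX) / (np.dot(X[:-1], X[:-1]) * dt))

theta_hat = float(np.mean(estimates))
print(theta_hat)  # close to theta = 1; fluctuations are of order sqrt(2*theta/T)
```

With $T=100$ the estimates concentrate around the true value, in line with the exponential concentration that the large deviation principle proved below quantifies.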
3 Large deviation principle
The large deviation principle characterizes the limiting behavior of a family of random variables (or of the corresponding probability measures) in terms of a rate function.
A rate function I is a lower semicontinuous function $I:\mathbb{R}\to [0,+\infty ]$, that is, a function such that, for all $\alpha \in [0,+\infty )$, the level sets $\{x:I(x)\le \alpha \}$ are closed subsets of $\mathbb{R}$. Moreover, I is called a good rate function if its level sets are compact.
We say that a family of real random variables $(Z_{T})_{T>0}$ satisfies the large deviation principle with rate function $I:\mathbb{R}\to [0,+\infty ]$ if for any Borel set $\varGamma \subset \mathbb{R}$,
\[-\underset{x\in {\varGamma }^{o}}{\inf }I(x)\le \underset{T\to \infty }{\liminf }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )\le \underset{T\to \infty }{\limsup }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )\le -\underset{x\in \overline{\varGamma }}{\inf }I(x),\]
where ${\varGamma }^{o}$ and $\overline{\varGamma }$ denote the interior and closure of Γ, respectively. Note that a family of random variables can have at most one rate function associated with its large deviation principle (for the proof, we refer the reader to the book by Dembo and Zeitouni [7]). Moreover, it is obvious that if $(Z_{T})_{T>0}$ satisfies the large deviation principle and a Borel set $\varGamma \subset \mathbb{R}$ is such that
\[\underset{x\in {\varGamma }^{o}}{\inf }I(x)=\underset{x\in \overline{\varGamma }}{\inf }I(x),\]
then
\[\underset{T\to \infty }{\lim }\frac{1}{T}\log \mathbb{P}(Z_{T}\in \varGamma )=-\underset{x\in \varGamma }{\inf }I(x).\]
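As a textbook illustration of the definition (not related to the model of this paper), take $Z_{T}=W_{T}/T$ for a standard Brownian motion W; then $Z_{T}\sim \mathcal{N}(0,1/T)$, and the standard Gaussian tail asymptotics give

\[\frac{1}{T}\log \mathbb{P}(Z_{T}\ge a)=\frac{1}{T}\log \mathbb{P}\big(\mathcal{N}(0,1)\ge a\sqrt{T}\big)\underset{T\to \infty }{\longrightarrow }-\frac{{a}^{2}}{2},\hspace{1em}a>0,\]

so the family $(Z_{T})_{T>0}$ satisfies the large deviation principle with the good rate function $I(x)={x}^{2}/2$.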
We shall prove the large deviation principle for the family of maximum likelihood estimators (9) by an approach similar to that of [1] and [3] for the Ornstein–Uhlenbeck process and the fractional Ornstein–Uhlenbeck process, respectively.
In order to prove the large deviation principle for the drift parameter estimator of the mixed fractional Ornstein–Uhlenbeck process (1), the main tool is the normalized cumulant generating function of an arbitrary linear combination of ${\int _{0}^{T}}Q_{t}dZ_{t}$ and ${\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}$:
(10)
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\big],\]
where, for any $(a,b)\in {\mathbb{R}}^{2}$,
\[\mathcal{Z}_{T}(a,b)=a{\int _{0}^{T}}Q_{t}dZ_{t}+b{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}.\]

Note that, for some $(a,b)\in {\mathbb{R}}^{2}$, the expectation in (10) may be infinite. In fact, in order to establish a large deviation principle for $\widehat{\vartheta }_{T}$, it suffices to find the limit of $\mathcal{L}_{T}(a,b)$ as $T\to \infty $ and apply the following lemma, which is a consequence of the Gärtner–Ellis theorem (Theorem 2.3.6 in [7]).
Lemma 1.
For a family of maximum likelihood estimators $(\vartheta _{T})_{T>0}$, let the function $\mathcal{L}_{T}(a,b)$ be defined by (10), and, for each fixed value of x, let $\Delta _{x}$ denote the set of $a\in \mathbb{R}$ for which $\lim _{T\to \infty }\mathcal{L}_{T}(a,-xa)$ exists and is finite. If $\Delta _{x}$ is not empty for each value of x, then $(\vartheta _{T})_{T>0}$ satisfies the large deviation principle with a good rate function
4 Main results
Theorem 3.
The maximum likelihood estimator $\widehat{\vartheta }_{T}$ defined by (9) satisfies the large deviation principle with the good rate function
Proof.
As mentioned in the previous section, in order to establish the large deviation principle for $\widehat{\vartheta }_{T}$ and determine the corresponding good rate function, it is necessary to find the limit
(12)
\[\mathcal{L}(a,b)=\underset{T\to \infty }{\lim }\mathcal{L}_{T}(a,b)\]
and to determine the set of $(a,b)\in {\mathbb{R}}^{2}$ for which this limit is finite.
For arbitrary $\varphi \in \mathbb{R}$, consider the Doléans exponential of $(\varphi +\vartheta ){\int _{0}^{t}}Q_{s}dM_{s}$,
\[\varLambda _{\varphi }(t)=\exp \Bigg((\varphi +\vartheta ){\int _{0}^{t}}Q_{s}dM_{s}-\frac{{(\varphi +\vartheta )}^{2}}{2}{\int _{0}^{t}}{Q_{s}^{2}}d\langle M\rangle _{s}\Bigg).\]
Note that $(\frac{1}{\sqrt{\psi (t,t)}}Q_{t})_{t\ge 0}$ is a Gaussian process whose mean and variance functions are bounded on $[0,T]$. Thus, $\varLambda _{\varphi }$ satisfies the conditions of Girsanov’s theorem in accordance with Example 3 of paragraph 2 of Section 6 in [11], and we can apply the usual change of measure and consider the new probability $\mathbb{P}_{\varphi }$ defined by the local density
\[\frac{d\mathbb{P}_{\varphi }}{d\mathbb{P}}=\varLambda _{\varphi }(T).\]
Observe that, due to (6), $\varLambda _{\varphi }(T)$ can be rewritten in terms of the fundamental semimartingale:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varLambda _{\varphi }(T)& \displaystyle =\exp \Bigg((\varphi +\vartheta ){\int _{0}^{T}}Q_{t}dZ_{t}+(\varphi +\vartheta )\vartheta {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}-\frac{{(\varphi +\vartheta )}^{2}}{2}{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\\{} & \displaystyle =\exp \Bigg((\varphi +\vartheta ){\int _{0}^{T}}Q_{t}dZ_{t}-\frac{{\varphi }^{2}-{\vartheta }^{2}}{2}{\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\end{array}\]
Consequently, we can rewrite $\mathcal{L}_{T}(a,b)$ as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathcal{L}_{T}(a,b)& \displaystyle =\frac{1}{T}\log \mathbb{E}\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\big]\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}_{\varphi }\big[\exp \big(\mathcal{Z}_{T}(a,b)\big)\varLambda _{\varphi }{(T)}^{-1}\big]\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg((a-\varphi -\vartheta ){\int _{0}^{T}}Q_{t}dZ_{t}+\frac{1}{2}\big(2b-{\vartheta }^{2}+{\varphi }^{2}\big){\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\end{array}\]
Since φ is arbitrary, we can choose $\varphi =a-\vartheta $. Then
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg(\frac{1}{2}\big(2b-{\vartheta }^{2}+{(a-\vartheta )}^{2}\big){\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\]
or, denoting $\mu =-\frac{1}{2}(2b-{\vartheta }^{2}+{(a-\vartheta )}^{2})$,
(13)
\[\mathcal{L}_{T}(a,b)=\frac{1}{T}\log \mathbb{E}_{\varphi }\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg).\]
As mentioned before, the expectation in (13) can be infinite for some combinations of μ and φ. Our purpose is to determine the set of $(\varphi ,\mu )\in {\mathbb{R}}^{2}$ for which this expectation and limit (12) are finite. According to Girsanov’s theorem, under $\mathbb{P}_{\varphi }$, the process
(14)
\[M_{t}-(\varphi +\vartheta ){\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}=Z_{t}-\varphi {\int _{0}^{t}}Q_{s}d\langle M\rangle _{s}\]
has the same distribution as M under $\mathbb{P}$. Consequently, applying the inverse integral transformation (7) to (14), we get that, under $\mathbb{P}_{\varphi }$, the process $X_{t}-\varphi {\int _{0}^{t}}X_{s}ds$ is a mixed fractional Brownian motion. In other words, under $\mathbb{P}_{\varphi }$, the process X is a mixed fractional Ornstein–Uhlenbeck process with drift parameter $-\varphi $. Consequently, in order to find the limit of (13) as $T\to \infty $, we can apply Lemma 2, which is presented in Section 5. Thus, we have the equality
\[\mathcal{L}(a,b)=-\frac{\varphi }{2}-\sqrt{\frac{{\varphi }^{2}}{4}+\frac{\mu }{2}}=-\frac{1}{2}\big(a-\vartheta +\sqrt{{\vartheta }^{2}-2b}\big),\]
and convergence (12) holds for $\mu >-\frac{{\varphi }^{2}}{2}$; since $\mu +\frac{{\varphi }^{2}}{2}=\frac{{\vartheta }^{2}}{2}-b$, this condition is equivalent to ${\vartheta }^{2}-2b>0$.

For $x\in \mathbb{R}$, denote the function
defined on the set
Then, according to (11), the rate function $I(x)$ for the maximum likelihood estimator $\widehat{\vartheta }_{T}$ can be found as
Consequently, straightforward calculations of this infimum finish the proof of the theorem. □
Remark 1.
Observe that the rate function $I(x)$ does not depend on the parameter H. Hence, $\widehat{\vartheta }_{T}$ satisfies the same large deviation principle as those established by Florens-Landais and Pham [8] for the standard Ornstein–Uhlenbeck process and by Bercu, Coutin, and Savy [3] for the fractional Ornstein–Uhlenbeck process (see also [2, 9]).
5 Auxiliary results
The following lemma plays a key role in the proof of Theorem 3.
Lemma 2.
For a mixed fractional Ornstein–Uhlenbeck process X with drift parameter ϑ, we have the following limit:
(15)
\[\mathcal{K}_{T}(\mu )=\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\to \frac{\vartheta }{2}-\sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}},\hspace{1em}T\to \infty ,\]
for all $\mu >-\frac{{\vartheta }^{2}}{2}$.

Proof.
We shall prove the lemma using an approach similar to that in [6]. Denote $V_{t}={\int _{0}^{t}}\psi (s,s)dZ_{s}$. Then, according to (8), we can rewrite
\[\begin{array}{r@{\hskip0pt}l}\displaystyle dZ_{t}& \displaystyle =-\vartheta Q_{t}d\langle M\rangle _{t}+dM_{t}=-\frac{\vartheta }{2}\psi (t,t)Z_{t}d\langle M\rangle _{t}-\frac{\vartheta }{2}V_{t}d\langle M\rangle _{t}+dM_{t}\\{} & \displaystyle =-\frac{\vartheta }{2}Z_{t}dt-\frac{\vartheta }{2}V_{t}\frac{1}{\psi (t,t)}dt+\frac{1}{\sqrt{\psi (t,t)}}dW_{t},\end{array}\]
where $W_{t}$ is a Brownian motion. Consequently, we get
\[dV_{t}=\psi (t,t)dZ_{t}=-\frac{\vartheta }{2}\psi (t,t)Z_{t}dt-\frac{\vartheta }{2}V_{t}dt+\sqrt{\psi (t,t)}dW_{t}.\]
The Gaussian vector $\zeta _{t}={(Z_{t},V_{t})}^{T}$ is thus a solution of the linear Itô stochastic differential equation
\[d\zeta _{t}=-\frac{\vartheta }{2}A(t)\zeta _{t}dt+b(t)dW_{t},\]
where
\[A(t)=\left(\begin{array}{c@{\hskip10.0pt}c}1& \frac{1}{\psi (t,t)}\\{} \psi (t,t)& 1\end{array}\right)\hspace{1em}\text{and}\hspace{1em}b(t)=\left(\begin{array}{c}\frac{1}{\sqrt{\psi (t,t)}}\\{} \sqrt{\psi (t,t)}\end{array}\right);\]
below we also write $B(t)=b(t){b}^{T}(t)$.
Moreover, $\mathcal{K}_{T}(\mu )$ in (15) can be rewritten as
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathcal{K}_{T}(\mu )& \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\mu {\int _{0}^{T}}{Q_{t}^{2}}d\langle M\rangle _{t}\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\big(\psi (t,t)Z_{t}+V_{t}\big)}^{2}d\langle M\rangle _{t}\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\bigg(\sqrt{\psi (t,t)}Z_{t}+\frac{1}{\sqrt{\psi (t,t)}}V_{t}\bigg)}^{2}dt\Bigg)\\{} & \displaystyle =\frac{1}{T}\log \mathbb{E}\exp \Bigg(-\frac{\mu }{4}{\int _{0}^{T}}{\zeta _{t}^{T}}R(t)\zeta _{t}dt\Bigg),\end{array}\]
where
\[R(t)=\left(\begin{array}{c@{\hskip10.0pt}c}\psi (t,t)& 1\\{} 1& \frac{1}{\psi (t,t)}\end{array}\right).\]
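The second and third equalities in the preceding chain can be traced explicitly: representation (8) gives $2Q_{t}=\psi (t,t)Z_{t}+V_{t}$, the relation $\psi (t,t)d\langle M\rangle _{t}=dt$ (already used in rewriting $dZ_{t}$ above) converts $d\langle M\rangle _{t}$ into $dt$, and

\[{\big(\psi (t,t)Z_{t}+V_{t}\big)}^{2}\frac{1}{\psi (t,t)}=\psi (t,t){Z_{t}^{2}}+2Z_{t}V_{t}+\frac{{V_{t}^{2}}}{\psi (t,t)}={\zeta _{t}^{T}}R(t)\zeta _{t},\]

which is exactly the quadratic form defined by the matrix $R(t)$.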
By the Cameron–Martin-type formula from Section 4.1 of [10],
\[\mathcal{K}_{T}(\mu )=-\frac{\mu }{4T}{\int _{0}^{T}}\text{tr}\big(\varGamma (t)R(t)\big)dt,\]
where $\varGamma (t)$ is the solution of the equation
(16)
\[\dot{\varGamma }(t)=-\frac{\vartheta }{2}A(t)\varGamma (t)-\frac{\vartheta }{2}\varGamma (t){A}^{T}(t)-\frac{\mu }{2}\varGamma (t)R(t)\varGamma (t)+B(t)\]
with $B(t)=b(t){b}^{T}(t)$ and initial condition $\varGamma (0)=0$.

We shall seek a solution of (16) in the form $\varGamma (t)={\varPsi _{1}^{-1}}(t)\varPsi _{2}(t)$, where $\varPsi _{1}(t)$ and $\varPsi _{2}(t)$ are the solutions of the following equation system:
(17)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \dot{\varPsi }_{1}(t)& \displaystyle =\frac{\vartheta }{2}\varPsi _{1}(t)A(t)+\frac{\mu }{2}\varPsi _{2}(t)R(t),\\{} \displaystyle \dot{\varPsi }_{2}(t)& \displaystyle =\varPsi _{1}(t)B(t)-\frac{\vartheta }{2}\varPsi _{2}(t){A}^{T}(t),\end{array}\]
with initial conditions $\varPsi _{1}(0)=I$ and $\varPsi _{2}(0)=0$. From the first equation of (17) we get
\[{\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)=\frac{\vartheta }{2}A(t)+\frac{\mu }{2}\varGamma (t)R(t),\]
and since $\text{tr }A(t)=2$, it follows that
\[\frac{\mu }{2}\text{tr}\big(\varGamma (t)R(t)\big)=\text{tr}\big({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)\big)-\vartheta .\]
Since $\dot{\varPsi }_{1}(t)=\varPsi _{1}(t)({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t))$, by Liouville’s formula we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle -\frac{\mu }{4T}{\int _{0}^{T}}\text{tr}\big(\varGamma (t)R(t)\big)dt& \displaystyle =-\frac{1}{2T}{\int _{0}^{T}}\text{tr }\big({\varPsi _{1}^{-1}}(t)\dot{\varPsi }_{1}(t)\big)dt+\frac{\vartheta }{2}\\{} & \displaystyle =-\frac{1}{2T}\log \det \varPsi _{1}(T)+\frac{\vartheta }{2}.\end{array}\]
In order to calculate $\lim _{T\to \infty }\frac{1}{T}\log \det \varPsi _{1}(T)$, define the matrix
\[J=\left(\begin{array}{c@{\hskip10.0pt}c}0& 1\\{} 1& 0\end{array}\right)\]
and note that $A{(t)}^{T}=JA(t)J$, $R(t)=JA(t)$, and $B(t)=A(t)J$. Setting $\widetilde{\varPsi }_{2}(t)=\varPsi _{2}(t)J$, from (17) we obtain the following equation system:
(18)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \dot{\varPsi }_{1}(t)& \displaystyle =\frac{\vartheta }{2}\varPsi _{1}(t)A(t)+\frac{\mu }{2}\widetilde{\varPsi }_{2}(t)A(t),\\{} \displaystyle \dot{\widetilde{\varPsi }}_{2}(t)& \displaystyle =\varPsi _{1}(t)A(t)-\frac{\vartheta }{2}\widetilde{\varPsi }_{2}(t)A(t),\end{array}\]
with initial conditions $\varPsi _{1}(0)=I$ and $\widetilde{\varPsi }_{2}(0)=0$. When $\frac{{\vartheta }^{2}}{2}+\mu >0$, the coefficient matrix of system (18)
\[\left(\begin{array}{c@{\hskip10.0pt}c}\frac{\vartheta }{2}& \frac{\mu }{2}\\{} 1& -\frac{\vartheta }{2}\end{array}\right)\]
has two real eigenvalues $\pm \lambda $ with $\lambda =\sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}}$ and eigenvectors
\[{v}^{\pm }=\left(\begin{array}{c}\frac{\vartheta }{2}\pm \lambda \\{} 1\end{array}\right).\]
Denote ${a}^{\pm }=\frac{\vartheta }{2}\pm \lambda =\frac{\vartheta }{2}\pm \sqrt{\frac{{\vartheta }^{2}}{4}+\frac{\mu }{2}}$. Diagonalizing system (18), we get
\[\varPsi _{1}(t)={a}^{+}\varUpsilon _{1}(t)+{a}^{-}\varUpsilon _{2}(t),\hspace{1em}\widetilde{\varPsi }_{2}(t)=\varUpsilon _{1}(t)+\varUpsilon _{2}(t),\]
where $\varUpsilon _{1}(t)$ and $\varUpsilon _{2}(t)$ are the solutions of the equations
(19)
\[\begin{array}{r}\displaystyle \dot{\varUpsilon }_{1}(t)=\lambda \varUpsilon _{1}(t)A(t),\\{} \displaystyle \dot{\varUpsilon }_{2}(t)=-\lambda \varUpsilon _{2}(t)A(t),\end{array}\]
with initial conditions $\varUpsilon _{1}(0)=\frac{1}{2\lambda }I$ and $\varUpsilon _{2}(0)=-\frac{1}{2\lambda }I$. Denote the matrix $M(T)={\varUpsilon _{2}^{-1}}(T)\varUpsilon _{1}(T)$, which is the solution of the equation
(20)
\[\dot{M}(t)=\lambda \big(A(t)M(t)+M(t)A(t)\big)\]
subject to the initial condition $M(0)=-I$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{1}{T}\log \det \varPsi _{1}(T)& \displaystyle =\frac{1}{T}\log \det \big({a}^{+}\varUpsilon _{1}(T)+{a}^{-}\varUpsilon _{2}(T)\big)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \det \bigg(I+\frac{{a}^{+}}{{a}^{-}}M(T)\bigg)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)\\{} & \displaystyle \hspace{1em}+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)+\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)\bigg)\\{} & \displaystyle =\frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)\bigg)\\{} & \displaystyle \hspace{1em}+\frac{1}{T}\log \bigg(1+\frac{\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)}{1+{(\frac{{a}^{+}}{{a}^{-}})}^{2}\det M(T)}\bigg).\end{array}\]
Applying Liouville’s formula to (19), we get
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{T}\log \det \big({a}^{-}\varUpsilon _{2}(T)\big)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\det M(T)\bigg)\\{} & \displaystyle \hspace{1em}=\frac{1}{T}\log \bigg({\bigg(\frac{{a}^{-}}{2\lambda }\bigg)}^{2}\exp (-2\lambda T)\bigg)+\frac{1}{T}\log \bigg(1+{\bigg(\frac{{a}^{+}}{{a}^{-}}\bigg)}^{2}\exp (4\lambda T)\bigg)\to 2\lambda \end{array}\]
as $T\to \infty $. Thus, in order to prove that limit (15) holds, we should show that
(21)
\[\underset{T\to \infty }{\lim }\frac{1}{T}\log \Bigg(1+\frac{\frac{{a}^{+}}{{a}^{-}}\text{tr }M(T)}{1+{(\frac{{a}^{+}}{{a}^{-}})}^{2}\det M(T)}\Bigg)=0.\]
Given (20), by Theorem 3 in [13] we obtain the required asymptotic estimate of $\text{tr }M(T)$. Thus, limit (21) holds, which finishes the proof of the lemma. □
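For completeness, the growth rate $\det M(T)={e}^{4\lambda T}$ used in the last limit can be verified by Jacobi’s formula, assuming, as in the derivation above, that M solves $\dot{M}(t)=\lambda (A(t)M(t)+M(t)A(t))$ with $M(0)=-I$:

\[\frac{d}{dt}\log \det M(t)=\text{tr}\big(M{(t)}^{-1}\dot{M}(t)\big)=\lambda \hspace{0.1667em}\text{tr}\big(M{(t)}^{-1}A(t)M(t)+A(t)\big)=2\lambda \hspace{0.1667em}\text{tr }A(t)=4\lambda ,\]

so that $\det M(T)=\det M(0)\hspace{0.1667em}{e}^{4\lambda T}={e}^{4\lambda T}$, since $\text{tr }A(t)=2$ and, in dimension 2, $\det (-I)=1$.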