1 Introduction
In this paper we study large deviation principles for conditionally continuous Gaussian processes, and we then derive estimates of the level crossing probability for such processes. Large deviations theory is concerned with the study of probabilities of very “rare” events. Although such events have very small probability, they can be of great importance: they may represent an atypical situation (i.e. a deviation from the average behavior) with disastrous consequences: an insurance company or a bank may go bankrupt; a statistical estimator may give wrong information; a physical or chemical system may show an atypical configuration. The aim of this paper is to extend the theory of large deviations for Gaussian processes to a wider class of random processes, the conditionally Gaussian processes. Such processes have been introduced in applications in finance, optimization and control problems; see, for instance, [12, 16, 14] and [1]. More precisely, Doucet et al. in [12] considered modelling the behavior of latent variables in neural networks by Gaussian processes with random parameters; Lototsky in [16] studied stochastic parabolic equations with random coefficients; Gulisashvili in [14] studied the large deviation principle for some particular stochastic volatility models where the log-price is, conditionally, a Gaussian process; in [1] probabilities of large extremes of conditionally Gaussian processes were considered, in particular sub-Gaussian processes, i.e. Gaussian processes with a random variance. Let $(Y,Z)$ be a random element on the probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, where $Z={({Z_{t}})}_{t\in [0,1]}$ is a process taking values in $\mathbb{R}$ and Y is an arbitrary random element (a process or a random variable). We say that Z is a conditionally Gaussian process if the conditional distribution of the process $Z|Y$ is (almost surely) Gaussian.
The theory of large deviations for Gaussian processes and for conditioned Gaussian processes is already well developed. See, for instance, Section 3.4 in [11] (and the references therein) for Gaussian processes, [7] and [13] for particular conditioned Gaussian processes. The extension of this theory is possible thanks to the results obtained by Chaganty in [8].
We consider a family of processes ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$ on a probability space $(\varOmega ,\mathcal{F},\mathbb{P})$. ${({Y^{n}})_{n\in \mathbb{N}}}$ is a family of processes taking values in a measurable space $({E_{1}},{\mathcal{E}_{1}})$ that satisfies a large deviation principle (LDP for short) and ${({Z^{n}})_{n\in \mathbb{N}}}$ is a family of processes taking values in $({E_{2}},{\mathcal{E}_{2}})$ such that for every $n\in \mathbb{N}$, ${Z^{n}}|{Y^{n}}$ is a Gaussian process ($\mathbb{P}$-a.s.). We want to find a LDP for the family ${({Z^{n}})_{n\in \mathbb{N}}}$.
A possible application of LDPs is computing the estimates of level crossing probability (ruin problem). We will give the asymptotic behavior (in terms of large deviations) of the probability
where φ is a suitable function. We will consider the following families of conditionally Gaussian processes.
1) The class of Gaussian processes with random variance and random mean, i.e. the processes of the type ${({Z_{t}})}_{t\in [0,1]}={({Y_{1}}{X_{t}}+{Y_{2}})}_{t\in [0,1]}$, where X is a centered continuous Gaussian process with covariance function k and $Y=({Y_{1}},{Y_{2}})$ is a random element independent of X. Notice that $Z|Y$ is Gaussian with conditional mean
\[ \mathbb{E}\big[{Z_{t}}|Y=({y_{1}},{y_{2}})\big]={y_{2}}\]
and conditional covariance
\[ \mathrm{Cov}\big({Z_{t}},{Z_{s}}|Y=({y_{1}},{y_{2}})\big)={y_{1}^{2}}\hspace{0.1667em}k(t,s).\]
2) The class of Ornstein–Uhlenbeck type processes with random diffusion coefficients. More precisely ${({Z_{t}})}_{t\in [0,1]}$ is the solution of the following stochastic differential equation:
\[ \left\{\begin{array}{l}d{Z_{t}}=({a_{0}}+{a_{1}}{Z_{t}})\hspace{2.5pt}dt+Yd{W_{t}},\hspace{1em}0<t\le 1,\hspace{1em}\\ {} {Z_{0}}=x,\hspace{1em}\end{array}\right.\]
where $x,{a_{0}},{a_{1}}\in \mathbb{R}$ and Y is a random element independent of the Brownian motion ${({W_{t}})}_{t\in [0,1]}$.

The paper is organized as follows. In Section 2 we recall some basic facts on large deviations theory for continuous Gaussian processes. In Section 3 we introduce the conditionally Gaussian processes and the Chaganty theory. In Sections 4 and 5 we study the theoretical problem and give the main results. Finally, in Section 6 we investigate the ruin problem for such processes.
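Although the paper works at the level of large deviation principles, the model in 2) is straightforward to simulate. The following sketch uses an Euler–Maruyama discretization of the equation above, with an illustrative lognormal law for the random coefficient Y (the text leaves the law of Y unspecified):

```python
import math
import random

def simulate_ou_random_diffusion(x=0.0, a0=0.0, a1=-1.0, n_steps=1000, seed=0):
    """Euler-Maruyama sketch of dZ_t = (a0 + a1 Z_t) dt + Y dW_t on [0, 1],
    where Y is a random coefficient drawn once, independently of W.
    The lognormal law of Y is an illustrative choice, not from the paper."""
    rng = random.Random(seed)
    y = math.exp(rng.gauss(0.0, 0.5))  # random diffusion coefficient Y > 0
    dt = 1.0 / n_steps
    z = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment, Var = dt
        z.append(z[-1] + (a0 + a1 * z[-1]) * dt + y * dw)
    return y, z
```

Conditionally on the drawn value of Y, the simulated path is (a discretization of) a Gaussian process, which is exactly the structure exploited in the sequel.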
2 Large deviations for continuous Gaussian processes
We briefly recall the main facts on large deviation principles and reproducing kernel Hilbert spaces for Gaussian processes that we are going to use. For a detailed development of this very wide theory we refer, for example, to the following classical references: Chapter II in Azencott [2], Section 3.4 in Deuschel and Stroock [11], and Chapter 4 (in particular Sections 4.1 and 4.5) in Dembo and Zeitouni [10] for large deviation principles; Chapter 4 (in particular Section 4.3) in [15] and Chapter 2 (in particular Sections 2.2 and 2.3) in [5] for reproducing kernel Hilbert spaces. Without loss of generality, we can consider centered Gaussian processes.
2.1 Reproducing kernel Hilbert space
An important tool to handle continuous Gaussian processes is the associated reproducing kernel Hilbert space (RKHS).
Let $U={({U_{t}})}_{t\in [0,1]}$ be a continuous, centered Gaussian process on a probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, with covariance function k. From now on, we will denote by $\mathcal{C}([0,1])$ the set of continuous functions on $[0,1]$ endowed with the topology induced by the sup-norm, $(\mathcal{C}([0,1]),\| \cdot {\| _{\infty }})$. Moreover, we will denote by $\mathcal{M}[0,1]$ its dual, i.e. the set of signed Borel measures on $[0,1]$. The action of $\mathcal{M}[0,1]$ on $\mathcal{C}([0,1])$ is given by
\[ \langle \lambda ,h\rangle ={\int _{0}^{1}}h(t)\hspace{0.1667em}d\lambda (t),\hspace{1em}\lambda \in \mathcal{M}[0,1],\hspace{2.5pt}h\in \mathcal{C}\big([0,1]\big).\]
Define now
Since ${L^{2}}$-limits of Gaussian random variables are still Gaussian, we have that H is a closed subspace of ${L^{2}}(\varOmega ,\mathcal{F},\mathbb{P})$ consisting of real Gaussian random variables. Moreover, it becomes a Hilbert space when endowed with the inner product
Consider the set
\[ \mathcal{L}=\Bigg\{x\in \mathcal{C}\big([0,1]\big)\hspace{0.2778em}\Big|\hspace{0.2778em}x(t)={\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (s),\hspace{0.1667em}\lambda \in \mathcal{M}[0,1]\Bigg\}.\]
The RKHS relative to the kernel k can be constructed as the completion of the set $\mathcal{L}$ with respect to a suitable norm. Consider the set of (real) Gaussian random variables
\[ \varGamma =\Bigg\{Y\hspace{0.2778em}|\hspace{0.2778em}Y=\langle \lambda ,U\rangle ={\int _{0}^{1}}{U_{t}}\hspace{0.1667em}d\lambda (t),\hspace{0.1667em}\lambda \in \mathcal{M}[0,1]\Bigg\}\subset {L^{2}}(\varOmega ,\mathcal{F},\mathbb{P}).\]
We have that, for ${Y_{1}},{Y_{2}}\in \varGamma $, say ${Y_{i}}=\langle {\lambda _{i}},U\rangle ,\hspace{0.1667em}i=1,2$,
(1)
\[\begin{aligned}{}{\langle {Y_{1}},{Y_{2}}\rangle }_{{L^{2}}(\varOmega ,\mathcal{F},\mathbb{P})}& =\mathrm{Cov}\Bigg({\int _{0}^{1}}{U_{t}}\hspace{0.1667em}d{\lambda _{1}}(t),{\int _{0}^{1}}{U_{t}}\hspace{0.1667em}d{\lambda _{2}}(t)\Bigg)\\ {} & ={\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d{\lambda _{1}}(t)d{\lambda _{2}}(s).\end{aligned}\]

Remark 1.
We remark that, since any signed Borel measure λ can be weakly approximated by a linear combination of Dirac deltas, the Hilbert space H above is nothing but the Hilbert space generated by the Gaussian process U, namely
\[\begin{aligned}{}H& ={\overline{\mathit{sp}\big\{{U_{t}},\hspace{0.1667em}t\in [0,1]\big\}}^{\| .{\| _{{L^{2}}(\varOmega ,\mathcal{F},\mathbb{P})}}}}\\ {} & ={\overline{\Bigg\{{\textstyle\sum _{j=1}^{n}}{a_{j}}\hspace{0.1667em}{U_{{t_{j}}}}\hspace{0.2778em}|\hspace{0.2778em}n\in \mathbb{N},{a_{j}}\in \mathbb{R},{t_{j}}\in [0,1]\Bigg\}}^{\| .{\| _{{L^{2}}(\varOmega ,\mathcal{F},\mathbb{P})}}}}.\end{aligned}\]
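To make formula (1) concrete: choosing ${\lambda _{i}}={\delta _{{t_{i}}}}$ gives ${\langle {U_{{t_{1}}}},{U_{{t_{2}}}}\rangle }_{{L^{2}}}=k({t_{1}},{t_{2}})$. The following Monte Carlo sketch checks this, taking U to be a Brownian motion with $k(t,s)=t\wedge s$ (an illustrative choice, not imposed by the text):

```python
import math
import random

def empirical_inner_product(t1=0.3, t2=0.7, n_paths=200_000, seed=1):
    """Check formula (1) for lambda_i = Dirac delta at t_i and U a Brownian
    motion (illustrative example with k(t, s) = min(t, s)): the empirical
    L2 inner product of U_{t1} and U_{t2} should approximate min(t1, t2).
    Assumes t1 < t2."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        u1 = rng.gauss(0.0, math.sqrt(t1))            # U_{t1} ~ N(0, t1)
        u2 = u1 + rng.gauss(0.0, math.sqrt(t2 - t1))  # independent increment
        acc += u1 * u2
    return acc / n_paths  # should be close to min(t1, t2)
```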
Consider now the following mapping:
(2)
\[ \mathcal{S}:H\to \mathcal{C}\big([0,1]\big),\hspace{1em}{(\mathcal{S}Y)_{t}}=\mathbb{E}[Y{U_{t}}],\hspace{1em}t\in [0,1].\]
Definition 1.
Let $U={({U_{t}})}_{t\in [0,1]}$ be a continuous Gaussian process. We define the reproducing kernel Hilbert space relative to the Gaussian process U as
\[ \mathcal{H}=\mathcal{S}(H)=\big\{h\in \mathcal{C}\big([0,1]\big)\hspace{0.2778em}|\hspace{0.2778em}h(t)={(\mathcal{S}Y)_{t}},\hspace{0.1667em}Y\in H\big\},\]
with an inner product defined as
\[ {\langle {h_{1}},{h_{2}}\rangle }_{\mathcal{H}}={\big\langle {\mathcal{S}^{-1}}{h_{1}},{\mathcal{S}^{-1}}{h_{2}}\big\rangle }_{H}={\big\langle {\mathcal{S}^{-1}}{h_{1}},{\mathcal{S}^{-1}}{h_{2}}\big\rangle }_{{L^{2}}(\varOmega ,\mathcal{F},\mathbb{P})},\hspace{1em}{h_{1}},{h_{2}}\in \mathcal{H}.\]
Then, we have
The map $\mathcal{S}$ defined in (2) is referred to as the Loève isometry. Since the covariance function identifies a Gaussian process up to its mean, we can equivalently speak of the RKHS associated with the process or with its covariance function.
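As a standard worked example (classical, not specific to this paper): when U is a Brownian motion, so that $k(t,s)=t\wedge s$, the RKHS is the Cameron–Martin space:

```latex
% RKHS of Brownian motion: k(t,s) = t \wedge s (Cameron--Martin space).
\mathcal{H}=\Big\{h\in \mathcal{C}\big([0,1]\big)\ \Big|\ h(t)={\int_{0}^{t}}\dot{h}(s)\,ds,\ \dot{h}\in L^{2}\big([0,1]\big)\Big\},
\qquad
\|h\|_{\mathcal{H}}^{2}={\int_{0}^{1}}\dot{h}(s)^{2}\,ds .
```

With this norm the reproducing property reads $\langle h,k(t,\cdot )\rangle _{\mathcal{H}}={\int _{0}^{t}}\dot{h}(s)\,ds=h(t)$.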
2.2 Large deviations
Definition 2.
(LDP) Let E be a topological space, $\mathcal{B}(E)$ the Borel σ-algebra and ${({\mu _{n}})}_{n\in \mathbb{N}}$ a family of probability measures on $\mathcal{B}(E)$; let $\gamma \hspace{0.1667em}:\mathbb{N}\to {\mathbb{R}^{+}}$ be a function, such that $\gamma (n)\to +\infty $ as $n\to +\infty $. We say that the family of probability measures ${({\mu _{n}})}_{n\in \mathbb{N}}$ satisfies a large deviation principle (LDP) on E with the rate function I and the speed $\gamma (n)$ if, for any open set Θ,
and for any closed set Γ
A rate function is a lower semicontinuous mapping $I:E\to [0,+\infty ]$. A rate function I is said to be good if the level sets $\{I\le a\}$ are compact for every $a\ge 0$.
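As a toy numerical illustration of Definition 2 (a standard scalar example, not from this paper): for ${\mu _{n}}=N(0,1/n)$ on $\mathbb{R}$, the LDP holds with speed $\gamma (n)=n$ and good rate function $I(x)={x^{2}}/2$, and $\frac{1}{n}\log {\mu _{n}}([a,+\infty ))$ is already close to $-{a^{2}}/2$ for moderate n:

```python
import math

def gaussian_tail_rate(a=0.5, n=2000):
    """For mu_n = N(0, 1/n), compare (1/n) * log mu_n([a, +infty)) with the
    LDP rate -a^2/2 (speed gamma(n) = n).  Uses the exact Gaussian tail
    via the complementary error function; no sampling involved."""
    tail = 0.5 * math.erfc(a * math.sqrt(n) / math.sqrt(2.0))  # P(N(0,1/n) >= a)
    return math.log(tail) / n  # should be close to -a*a/2
```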
Definition 3.
(WLDP) Let E be a topological space, $\mathcal{B}(E)$ the Borel σ-algebra and ${({\mu _{n}})}_{n\in \mathbb{N}}$ a family of probability measures on $\mathcal{B}(E)$; let $\gamma \hspace{0.1667em}:\mathbb{N}\to {\mathbb{R}^{+}}$ be a function such that $\gamma (n)\to +\infty $ as $n\to +\infty $. We say that the family of probability measures ${({\mu _{n}})}_{n\in \mathbb{N}}$ satisfies a weak large deviation principle (WLDP) on E with the rate function I and the speed $\gamma (n)$ if the lower bound holds for every open set and the upper bound (3) is only required to hold for compact sets.
Remark 2.
We say that a family of continuous processes ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ satisfies a LDP if the associated family of laws satisfies a LDP on $\mathcal{C}([0,1])$.
The following remarkable theorem (Proposition 1.5 in [2]) gives an explicit expression of the Cramér transform ${\varLambda ^{\ast }}$ of a continuous centered Gaussian process ${({U_{t}})}_{t\in [0,1]}$ with covariance function k. Let us recall that
\[ \varLambda (\lambda )=\log \mathbb{E}\big[\exp \big(\langle U,\lambda \rangle \big)\big]=\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (t)d\lambda (s),\]
for $\lambda \in \mathcal{M}[0,1]$.

Theorem 1.
Let ${({U_{t}})}_{t\in [0,1]}$ be a continuous, centered Gaussian process with covariance function k. Let ${\varLambda ^{\ast }}$ denote the Cramér transform of Λ, that is,
\[\begin{aligned}{}{\varLambda ^{\ast }}(x)& =\underset{\lambda \in \mathcal{M}[0,1]}{\sup }\big(\langle \lambda ,x\rangle -\varLambda (\lambda )\big)\\ {} & =\underset{\lambda \in \mathcal{M}[0,1]}{\sup }\Bigg(\langle \lambda ,x\rangle -\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (t)d\lambda (s)\Bigg).\end{aligned}\]
Then,
(4)
\[ {\varLambda ^{\ast }}(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| x{\| _{\mathcal{H}}^{2}},\hspace{1em}& x\in \mathcal{H},\\ {} +\infty \hspace{1em}& \textit{otherwise},\end{array}\right.\]
where $\mathcal{H}$ and $\| \cdot {\| _{\mathcal{H}}}$ denote, respectively, the reproducing kernel Hilbert space and the related norm associated with the covariance function k.

In order to state a large deviation principle for a family of Gaussian processes, we need the following definition.
Definition 4.
A family of continuous processes ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ is exponentially tight at the speed $\gamma (n)$ if, for every $R>0$, there exists a compact set ${K_{R}}$ such that
If the means and the covariance functions of an exponentially tight family of Gaussian processes have a good limit behavior, then the family satisfies a large deviation principle, as stated in the following theorem, which is a consequence of the classic abstract Gärtner–Ellis theorem (Baldi's theorem; see Theorem 4.5.20 and Corollary 4.6.14 in [10]) and of Theorem 1.
Theorem 2.
Let ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ be an exponentially tight family of continuous Gaussian processes with respect to the speed function $\gamma (n)$. Suppose that, for any $\lambda \in \mathcal{M}[0,1]$,
(6)
\[ \underset{n\to +\infty }{\lim }\mathbb{E}\big[\big\langle \lambda ,{X^{n}}\big\rangle \big]=0\]
and the limit
(7)
\[ \varLambda (\lambda )=\underset{n\to +\infty }{\lim }\gamma (n)\mathrm{Var}\big(\big\langle \lambda ,{X^{n}}\big\rangle \big)={\int _{0}^{1}}{\int _{0}^{1}}\bar{k}(t,s)\hspace{0.1667em}d\lambda (t)d\lambda (s)\]
exists for some continuous, symmetric, positive definite function $\bar{k}$, which is the covariance function of a continuous Gaussian process. Then ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ satisfies a large deviation principle on $\mathcal{C}([0,1])$, with the speed $\gamma (n)$ and the good rate function
(8)
\[ I(h)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| h{\| _{\bar{\mathcal{H}}}^{2}},\hspace{1em}& h\in \bar{\mathcal{H}},\\ {} +\infty \hspace{1em}& \textit{otherwise},\end{array}\right.\]
where $\bar{\mathcal{H}}$ and $\| \cdot {\| _{\bar{\mathcal{H}}}}$ respectively denote the reproducing kernel Hilbert space and the related norm associated with the covariance function $\bar{k}$.

A useful result which can help in investigating the exponential tightness of a family of continuous centered Gaussian processes is the following proposition (Proposition 2.1 in [17]); the required property follows from Hölder continuity of the mean and the covariance function.
Proposition 1.
Let ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ be a family of continuous Gaussian processes with ${X_{0}^{n}}=0$ for all $n\in \mathbb{N}$. Denote ${m^{n}}(t)=\mathbb{E}[{X_{t}^{n}}]$ and ${k^{n}}(t,s)=\mathrm{Cov}({X_{t}^{n}},{X_{s}^{n}})$. Suppose there exist constants $\alpha ,{M_{1}},{M_{2}}>0$ such that, for every $n\in \mathbb{N}$,
\[ \underset{s,t\in [0,1],\hspace{0.1667em}s\ne t}{\sup }\frac{|{m^{n}}(t)-{m^{n}}(s)|}{|t-s{|^{\alpha }}}\le {M_{1}}\]
and
(9)
\[ \underset{s,t\in [0,1],\hspace{0.1667em}s\ne t}{\sup }\gamma (n)\frac{|{k^{n}}(t,t)+{k^{n}}(s,s)-2{k^{n}}(s,t)|}{|t-s{|^{2\alpha }}}\le {M_{2}}.\]
Then the family ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ is exponentially tight with respect to the speed function $\gamma (n)$.

3 Conditionally Gaussian processes
In this section we introduce conditionally Gaussian processes and Chaganty's theorem, which allows us to find a LDP for families of such processes. We also recall, for the sake of completeness, some results about conditional distributions in Polish spaces. We refer to Section 3.1 in [6] and Section 4.3 in [3].
Let Y and Z be two random variables, defined on the same probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, with values in the measurable spaces $({E_{1}},{\mathcal{E}_{1}})$ and $({E_{2}},{\mathcal{E}_{2}})$ respectively; let us denote by ${\mu _{1}},{\mu _{2}}$ the (marginal) laws of Y and Z respectively, and by μ the joint distribution of $(Y,Z)$ on $(E,\mathcal{E})=({E_{1}}\times {E_{2}},{\mathcal{E}_{1}}\times {\mathcal{E}_{2}})$. A family of probabilities ${({\mu _{2}}(dz|y))_{y\in {E_{1}}}}$ on $({E_{2}},{\mathcal{E}_{2}})$ is a regular version of the conditional law of Z given Y if
In this case we have
In this section we will use the notation $(E,\mathcal{B})$ to indicate a Polish space (i.e. a complete separable metric space) with its Borel σ-field, and we say that a sequence ${({x_{n}})}_{n\in \mathbb{N}}\subset E$ converges to $x\in E$, written ${x_{n}}\to x$, if ${d_{E}}({x_{n}},x)\to 0$ as $n\to \infty $, where ${d_{E}}$ denotes the metric on E. Regular conditional probabilities do not always exist, but they do in many cases. The following result, which immediately follows from Corollary 3.2.1 in [6], shows that in Polish spaces the regular version of the conditional probability is well defined.
Proposition 2.
Let $({E_{1}},{\mathcal{B}_{1}})$ and $({E_{2}},{\mathcal{B}_{2}})$ be two Polish spaces endowed with their Borel σ-fields, and let μ be a probability measure on $(E,\mathcal{B})=({E_{1}}\times {E_{2}},{\mathcal{B}_{1}}\times {\mathcal{B}_{2}})$. Let ${\mu _{i}}$ be the marginal probability measure on $({E_{i}},{\mathcal{B}_{i}})$, $i=1,2$. Then there exists a ${\mu _{1}}$-almost surely unique regular version of the conditional law of ${\mu _{2}}$ given ${\mu _{1}}$, i.e.
In what follows we always suppose that random variables take values in a Polish space.
Definition 5.
Let $(Y,Z)$ be a random element on the probability space $(\varOmega ,\mathcal{F},\mathbb{P})$, where $Z={({Z_{t}})}_{t\in [0,1]}$ is a real process and Y is an arbitrary random element (a process or a random variable). We say that Z is a conditionally Gaussian process if the conditional distribution of the process $Z|Y$ is (almost surely) Gaussian. We denote by ${({Z_{t}^{y}})_{t\in [0,1]}}$ the Gaussian process $Z|Y=y$.
The main tool that we will use to study the LDP for a family of conditionally Gaussian processes is Chaganty's theorem (Theorem 2.3 in [8]). Let $({E_{1}},{\mathcal{B}_{1}})$ and $({E_{2}},{\mathcal{B}_{2}})$ be two Polish spaces. We denote by ${({\mu _{n}})}_{n\in \mathbb{N}}$ a sequence of probability measures on $(E,\mathcal{B})=({E_{1}}\times {E_{2}},{\mathcal{B}_{1}}\times {\mathcal{B}_{2}})$ (the sequence of joint distributions), by ${({\mu _{1n}})}_{n\in \mathbb{N}}$ the sequence of the marginal distributions on $({E_{1}},{\mathcal{B}_{1}})$ and by ${({\mu _{2n}}(\cdot |{x_{1}}))}_{n\in \mathbb{N}}$, ${x_{1}}\in {E_{1}}$, the sequence of conditional distributions on $({E_{2}},{\mathcal{B}_{2}})$ given by Proposition 2, i.e.
\[ {\mu _{n}}({B_{1}}\times {B_{2}})={\int _{{B_{1}}}}{\mu _{2n}}({B_{2}}|{x_{1}})\hspace{2.5pt}{\mu _{1n}}(d{x_{1}}).\]
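A minimal numerical sketch of this factorization, with an invented toy pair (Y ~ Bernoulli(1/2) and, conditionally, $Z|Y=y\sim N(y,1)$; these laws are illustrative assumptions, not from the paper):

```python
import math
import random

def check_disintegration(n=100_000, seed=2):
    """Toy check of mu(B1 x B2) = integral over B1 of mu2(B2|x1) d mu1(x1):
    Y ~ Bernoulli(1/2), Z | Y = y ~ N(y, 1), B1 = {1}, B2 = [0, +infty).
    The empirical joint probability should match P(Y = 1) * P(N(1,1) >= 0)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y = rng.randint(0, 1)            # sample Y from mu_1
        z = rng.gauss(float(y), 1.0)     # sample Z from mu_2(.|y)
        if y == 1 and z >= 0.0:
            hits += 1
    empirical = hits / n
    exact = 0.5 * (0.5 * math.erfc(-1.0 / math.sqrt(2.0)))  # 0.5 * Phi(1)
    return empirical, exact
```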
Definition 6.
Let $({E_{1}},{\mathcal{B}_{1}})$, $({E_{2}},{\mathcal{B}_{2}})$ be two Polish spaces and ${x_{1}}\in {E_{1}}$. We say that the sequence of conditional laws ${({\mu _{2n}}(\cdot |{x_{1}}))}_{n\in \mathbb{N}}$ on $({E_{2}},{\mathcal{B}_{2}})$ satisfies the LDP continuously in ${x_{1}}$ with the rate function $J(\cdot |{x_{1}})$ and the speed $\gamma (n)$, or simply, the LDP continuity condition holds, if
-
a) For each ${x_{1}}\in {E_{1}},J(\cdot |{x_{1}})$ is a good rate function on ${E_{2}}$.
-
b) For any sequence ${({x_{1n}})}_{n\in \mathbb{N}}\subset {E_{1}}$ such that ${x_{1n}}\to {x_{1}}$, the sequence of measures ${({\mu _{2n}}(\cdot |{x_{1n}}))}_{n\in \mathbb{N}}$ satisfies a LDP on ${E_{2}}$ with the (same) rate function $J(\cdot |{x_{1}})$ and the speed $\gamma (n)$.
-
c) $J(\cdot |\cdot )$ is lower semicontinuous as a function of $({x_{1}},{x_{2}})\in {E_{1}}\times {E_{2}}$.
Theorem 3 (Theorem 2.3 in [8]).
Let $({E_{1}},{\mathcal{B}_{1}})$, $({E_{2}},{\mathcal{B}_{2}})$ be two Polish spaces. For $i=1,2$ let ${({\mu _{\mathit{in}}})}_{n\in \mathbb{N}}$ be a sequence of measures on $({E_{i}},{\mathcal{B}_{i}})$. For ${x_{1}}\in {E_{1}}$, let ${({\mu _{2n}}(\cdot |{x_{1}}))}_{n\in \mathbb{N}}$ be the sequence of the conditional laws (of ${\mu _{2n}}$ given ${\mu _{1n}}$) on $({E_{2}},{\mathcal{B}_{2}})$. Suppose that the following two conditions are satisfied:
-
i) ${({\mu _{1n}})}_{n\in \mathbb{N}}$ satisfies a LDP on ${E_{1}}$ with the good rate function ${I_{1}}(\cdot )$ and the speed $\gamma (n)$.
-
ii) For every ${x_{1}}\in {E_{1}}$, the sequence ${({\mu _{2n}}(\cdot |{x_{1}}))}_{n\in \mathbb{N}}$ satisfies the LDP continuity condition on ${E_{2}}$ with the rate function $J(\cdot |{x_{1}})$ and the speed $\gamma (n)$.

Then the sequence of joint distributions ${({\mu _{n}})}_{n\in \mathbb{N}}$ satisfies a WLDP on $E={E_{1}}\times {E_{2}}$ with the speed $\gamma (n)$ and the rate function
\[ I({x_{1}},{x_{2}})={I_{1}}({x_{1}})+J({x_{2}}|{x_{1}}),\hspace{1em}{x_{1}}\in {E_{1}},\hspace{0.1667em}{x_{2}}\in {E_{2}}.\]
The sequence of marginal distributions ${({\mu _{2n}})}_{n\in \mathbb{N}}$, defined on $({E_{2}},{\mathcal{B}_{2}})$, satisfies a LDP with the speed $\gamma (n)$ and the rate function
\[ {I_{2}}({x_{2}})=\underset{{x_{1}}\in {E_{1}}}{\inf }I({x_{1}},{x_{2}})=\underset{{x_{1}}\in {E_{1}}}{\inf }\big({I_{1}}({x_{1}})+J({x_{2}}|{x_{1}})\big),\hspace{1em}{x_{2}}\in {E_{2}}.\]
Moreover, if $I(\cdot ,\cdot )$ is a good rate function then ${({\mu _{n}})}_{n\in \mathbb{N}}$ satisfies a LDP and ${I_{2}}(\cdot )$ is a good rate function.
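To make the composition ${I_{2}}({x_{2}})={\inf _{{x_{1}}}}[{I_{1}}({x_{1}})+J({x_{2}}|{x_{1}})]$ concrete, here is a grid-based sketch for an invented scalar example: a marginal rate ${I_{1}}(y)={(y-1)^{2}}/2$ and a conditional rate $J(z|y)={z^{2}}/(2{y^{2}})$ (a scalar analogue of the random-variance situation of the next section; both rate functions are illustrative choices):

```python
def marginal_rate(z, grid_lo=0.05, grid_hi=5.0, n_grid=5000):
    """Numerically evaluate I2(z) = inf_y [ I1(y) + J(z|y) ] on a grid,
    with the illustrative choices I1(y) = (y-1)^2/2 and J(z|y) = z^2/(2y^2),
    mimicking the rate composition in Chaganty's theorem."""
    best = float("inf")
    for i in range(n_grid):
        y = grid_lo + (grid_hi - grid_lo) * i / (n_grid - 1)
        val = (y - 1.0) ** 2 / 2.0 + z * z / (2.0 * y * y)
        best = min(best, val)
    return best
```

Note that the randomness of the variance flattens the marginal rate: ${I_{2}}(z)$ grows more slowly in z than the purely Gaussian rate ${z^{2}}/2$, because large values of z can be paid for by a moderately atypical variance.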
4 Gaussian process with random mean and random variance
Let $\alpha >0$ and define ${\mathcal{C}_{\alpha }}([0,1])=\{y\in \mathcal{C}([0,1]):y(t)\ge \alpha ,\hspace{0.1667em}\hspace{0.1667em}t\in [0,1]\}$, endowed with the uniform norm; ${\mathcal{C}_{\alpha }}([0,1])$ is a Polish space. Consider the family of processes ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$, where ${({Y^{n}})_{n\in \mathbb{N}}}={({Y_{1}^{n}},{Y_{2}^{n}})_{n\in \mathbb{N}}}$ is a family of processes with paths in ${\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$ and, for $n\in \mathbb{N}$, ${Z^{n}}={X^{n}}{Y_{1}^{n}}+{Y_{2}^{n}}$ with ${({Y^{n}})_{n\in \mathbb{N}}}$ independent of ${({X^{n}})_{n\in \mathbb{N}}}$. Suppose ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ is a family of continuous centered Gaussian processes which satisfies the hypotheses of Theorem 2 and suppose that ${({Y^{n}})_{n\in \mathbb{N}}}$ satisfies a LDP with the good rate function ${I_{Y}}$ and the speed $\gamma (n)$. We want to prove a LDP for ${({Z^{n}})_{n\in \mathbb{N}}}$.
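A minimal path sampler for this setting, under illustrative choices not imposed by the paper: ${X^{n}}=W/\sqrt{n}$ for a Brownian motion W (a classical family satisfying the hypotheses of Theorem 2 with $\gamma (n)=n$), ${Y_{1}^{n}}$ a random constant path $\ge \alpha $, and ${Y_{2}^{n}}\equiv 0$:

```python
import math
import random

def sample_z_path(n=100, n_steps=500, alpha=0.5, seed=3):
    """Sample one path of Z^n_t = Y1(t) X^n_t + Y2(t) with the illustrative
    choices X^n = W / sqrt(n) (W Brownian motion), Y1 = alpha + |N(0,1)|
    (constant in t, so its path lies in C_alpha([0,1])), Y2 = 0,
    and Y drawn independently of X^n."""
    rng = random.Random(seed)
    y1 = alpha + abs(rng.gauss(0.0, 1.0))  # random constant path in C_alpha
    dt = 1.0 / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))     # Brownian increment
        path.append(y1 * w / math.sqrt(n))      # Z^n_t = Y1 * X^n_t
    return path
```

Conditionally on the drawn value of Y1, the path is Gaussian with covariance ${y_{1}^{2}}(t\wedge s)/n$, which is the structure Proposition 3 below exploits.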
Proposition 3.
Let ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ be a family of continuous Gaussian processes which satisfies the hypotheses of Theorem 2 and let $y=({y_{1}},{y_{2}})\in {\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$. Then ${({({X_{t}^{n}}{y_{1}}(t)+{y_{2}}(t))}_{t\in [0,1]})}_{n\in \mathbb{N}}$ is still a family of continuous Gaussian processes which satisfies the hypotheses of Theorem 2 with the same speed function and limit covariance function (depending only on ${y_{1}}$) ${k^{{y_{1}}}}$ given by
(10)
\[ {k^{{y_{1}}}}(t,s)={y_{1}}(t)\bar{k}(t,s){y_{1}}(s).\]
Therefore, also ${({({X_{t}^{n}}{y_{1}}(t)+{y_{2}}(t))}_{t\in [0,1]})}_{n\in \mathbb{N}}$ satisfies a LDP with the good rate function
where ${\bar{\mathcal{H}}_{{y_{1}}}}$ is the RKHS associated to the covariance function defined in (10).
(11)
\[ {\varLambda _{y}^{\ast }}(z)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| z{\| _{{\bar{\mathcal{H}}_{{y_{1}}}}}^{2}},\hspace{1em}& z\in {\bar{\mathcal{H}}_{{y_{1}}}},\\ {} +\infty \hspace{1em}& \text{otherwise},\end{array}\right.=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| \frac{z-{y_{2}}}{{y_{1}}}{\| _{\bar{\mathcal{H}}}^{2}},\hspace{1em}& \frac{z-{y_{2}}}{{y_{1}}}\in \bar{\mathcal{H}},\\ {} +\infty \hspace{1em}& \text{otherwise},\end{array}\right.\]

Definition 7.
Let $(E,{d_{E}})$ be a metric space, and let ${({\mu _{n}})}_{n\in \mathbb{N}}$ and ${({\tilde{\mu }_{n}})}_{n\in \mathbb{N}}$ be two families of probability measures on E. Then ${({\mu _{n}})}_{n\in \mathbb{N}}$ and ${({\tilde{\mu }_{n}})}_{n\in \mathbb{N}}$ are exponentially equivalent (at the speed $\gamma (n)$) if there exist a family of probability spaces ${((\varOmega ,{\mathcal{F}^{n}},{\mathbb{P}^{n}}))_{n\in \mathbb{N}}}$ and two families of E-valued random variables ${({Z^{n}})_{n\in \mathbb{N}}}$ and ${({\tilde{Z}^{n}})_{n\in \mathbb{N}}}$, with laws ${({\mu _{n}})}_{n\in \mathbb{N}}$ and ${({\tilde{\mu }_{n}})}_{n\in \mathbb{N}}$ respectively, such that, for any $\delta >0$, the set $\{\omega :{d_{E}}({\tilde{Z}^{n}}(\omega ),{Z^{n}}(\omega ))>\delta \}$ is ${\mathcal{F}^{n}}$-measurable and
As far as the LDP is concerned, exponentially equivalent measures are indistinguishable; see Theorem 4.2.13 in [10].
Proposition 4.
Let ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ be an exponentially tight (at the speed $\gamma (n)$) family of continuous Gaussian processes. Let ${({y^{n}})_{n\in \mathbb{N}}}\subset {\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$ be such that ${y^{n}}\to y$ in ${\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$. Then the family of processes ${({({y_{2}^{n}}(t)+{y_{1}^{n}}(t){X_{t}^{n}})_{t\in [0,1]}})_{n\in \mathbb{N}}}$ is exponentially equivalent to ${({({y_{2}}(t)+{y_{1}}(t){X_{t}^{n}})_{t\in [0,1]}})_{n\in \mathbb{N}}}$.
Proof.
Let ${Z^{n}}(t)={y_{2}}(t)+{y_{1}}(t){X^{n}}(t)$ and ${\tilde{Z}^{n}}(t)={y_{2}^{n}}(t)+{y_{1}^{n}}(t){X^{n}}(t)$ for $t\in [0,1]$, $n\in \mathbb{N}$. Then, for any $\delta >0$,
\[ \mathbb{P}\big({\big\| {Z^{n}}-{\tilde{Z}^{n}}\big\| _{\infty }}>\delta \big)\le \mathbb{P}\bigg({\big\| {X^{n}}\big\| _{\infty }}\hspace{0.1667em}{\big\| {y_{1}^{n}}-{y_{1}}\big\| }_{\infty }>\frac{\delta }{2}\bigg)+\mathbb{P}\bigg({\big\| {y_{2}^{n}}-{y_{2}}\big\| }_{\infty }>\frac{\delta }{2}\bigg).\]
For n large enough $||{y_{2}^{n}}-{y_{2}}|{|_{\infty }}\le \frac{\delta }{2}$ and thanks to (5)
therefore
□

Let us denote $J(z|y)={\varLambda _{y}^{\ast }}(z)$, for $z\in \mathcal{C}([0,1])$ and $y\in {\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$. We want to prove the lower semicontinuity of $J(\cdot |\cdot )$.
Proof.
Thanks to the lower semicontinuity of $\| \cdot {\| _{\bar{\mathcal{H}}}^{2}}$
\[\begin{aligned}{}\underset{({y^{n}},{z^{n}})\to (y,z)}{\liminf }J\big({z^{n}}|{y^{n}}\big)& =\underset{({y^{n}},{z^{n}})\to (y,z)}{\liminf }\frac{1}{2}{\bigg\| \frac{{z^{n}}-{y_{2}^{n}}}{{y_{1}^{n}}}\bigg\| _{\bar{\mathcal{H}}}^{2}}\\ {} & =\underset{{h^{n}}\to h}{\liminf }\frac{1}{2}{\big\| {h^{n}}\big\| _{\bar{\mathcal{H}}}^{2}}\ge \frac{1}{2}\| h{\| _{\bar{\mathcal{H}}}^{2}}=\frac{1}{2}{\bigg\| \frac{z-{y_{2}}}{{y_{1}}}\bigg\| _{\bar{\mathcal{H}}}^{2}}=J(z|y),\end{aligned}\]
where ${h^{n}}=\frac{{z^{n}}-{y_{2}^{n}}}{{y_{1}^{n}}}\stackrel{\mathcal{C}([0,1])}{\longrightarrow }h=\frac{z-{y_{2}}}{{y_{1}}}$. □

Theorem 4.
Consider the family of processes ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$, where ${({Y^{n}})_{n\in \mathbb{N}}}={({Y_{1}^{n}},{Y_{2}^{n}})_{n\in \mathbb{N}}}$ is a family of processes with paths in ${\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])$ and, for $n\in \mathbb{N}$, ${Z^{n}}={X^{n}}{Y_{1}^{n}}+{Y_{2}^{n}}$ with ${({Y^{n}})_{n\in \mathbb{N}}}$ independent of ${({X^{n}})_{n\in \mathbb{N}}}$. Suppose ${({({X_{t}^{n}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ is a family of continuous centered Gaussian processes which satisfies the hypotheses of Theorem 2 and suppose that ${({Y^{n}})_{n\in \mathbb{N}}}$ satisfies a LDP with the good rate function ${I_{Y}}$ and the speed $\gamma (n)$. Then ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$ satisfies the WLDP with the speed $\gamma (n)$ and the rate function
and ${({Z^{n}})_{n\in \mathbb{N}}}$ satisfies the LDP with the speed $\gamma (n)$ and the rate function
5 Ornstein–Uhlenbeck processes with random diffusion coefficient
Let $\alpha >0$ and let again ${\mathcal{C}_{\alpha }}([0,1])=\{y\in \mathcal{C}([0,1]):y(t)\ge \alpha ,\hspace{0.1667em}\hspace{0.1667em}t\in [0,1]\}$. Consider the family of processes ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$, where ${({Y^{n}})_{n\in \mathbb{N}}}$ is a family of processes with paths in ${\mathcal{C}_{\alpha }}([0,1])$ and, for $n\in \mathbb{N}$, ${Z^{n}}$ is the solution of the following stochastic differential equation:
(12)
\[ \left\{\begin{array}{l}d{Z_{t}^{n}}=\big({a_{0}}+{a_{1}}{Z_{t}^{n}}\big)\hspace{2.5pt}dt+\frac{1}{\sqrt{n}}{Y_{t}^{n}}\hspace{2.5pt}d{W_{t}},\hspace{1em}0<t\le 1,\hspace{1em}\\ {} {Z_{0}^{n}}=x,\hspace{1em}\end{array}\right.\]
where $x,\hspace{0.1667em}{a_{0}},\hspace{0.1667em}{a_{1}}\in \mathbb{R}$ and ${({Y^{n}})_{n\in \mathbb{N}}}$ is a family of random processes independent of the Brownian motion ${({W_{t}})}_{t\in [0,1]}$. Suppose that ${({Y^{n}})_{n\in \mathbb{N}}}$ satisfies a LDP with the good rate function ${I_{Y}}$ and the speed $\gamma (n)=n$. We want to prove a LDP for ${({Z^{n}})_{n\in \mathbb{N}}}$.
Let ${Z^{n,y}}$, $y\in {\mathcal{C}_{\alpha }}([0,1])$, be the solution of the following stochastic differential equation:
(13)
\[ \left\{\begin{array}{l}d{Z_{t}^{n,y}}=({a_{0}}+{a_{1}}{Z_{t}^{n,y}})\hspace{2.5pt}dt+\frac{1}{\sqrt{n}}y(t)\hspace{2.5pt}d{W_{t}},\hspace{1em}0<t\le 1,\hspace{1em}\\ {} {Z_{0}^{n,y}}=x,\hspace{1em}\end{array}\right.\]
that is,
(14)
\[\begin{aligned}{}{Z_{t}^{n,y}}& ={e^{{a_{1}}t}}\Bigg(x+\frac{{a_{0}}}{{a_{1}}}\big[1-{e^{-{a_{1}}t}}\big]+\frac{1}{\sqrt{n}}{\int _{0}^{t}}{e^{-{a_{1}}s}}y(s)d{W_{s}}\Bigg)\\ {} & =m(t)+{e^{{a_{1}}t}}\frac{1}{\sqrt{n}}{\int _{0}^{t}}{e^{-{a_{1}}s}}y(s)d{W_{s}},\end{aligned}\]
where $m(t)={e^{{a_{1}}t}}(x+\frac{{a_{0}}}{{a_{1}}}[1-{e^{-{a_{1}}t}}])$. ${({({Z_{t}^{n,y}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ is both a family of Gaussian processes and a family of diffusions. It is well known from the Freidlin–Wentzell theory that ${({({Z_{t}^{n,y}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ satisfies the LDP in $\mathcal{C}([0,1])$ with the speed $\gamma (n)=n$ and the good rate function
(15)
\[ J(f|y)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}{\textstyle\int _{0}^{1}}{(\frac{\dot{f}(t)-({a_{0}}+{a_{1}}f(t))}{y(t)})^{2}}dt,\hspace{1em}& f\in {H_{1}^{x}},\\ {} +\infty ,\hspace{1em}& f\notin {H_{1}^{x}},\end{array}\right.\]
where
\[ {H_{1}^{x}}:=\Bigg\{f:f(t)=x+{\int _{0}^{t}}\phi (s)\hspace{2.5pt}ds,\hspace{0.2778em}\phi \in {L^{2}}\big([0,1]\big)\Bigg\}.\]
On the other hand, it is well known from the theory of Gaussian processes that the family ${({({Z_{t}^{n,y}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ satisfies the LDP in $\mathcal{C}([0,1])$ with the speed $\gamma (n)=n$ and the good rate function
(16)
\[ J(f|y)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| f-m{\| _{{\mathcal{H}_{y}}}^{2}},\hspace{1em}& f-m\in {\mathcal{H}_{y}},\\ {} +\infty ,\hspace{1em}& f-m\notin {\mathcal{H}_{y}},\end{array}\right.\]
where ${\mathcal{H}_{y}}$ is the reproducing kernel Hilbert space associated with the covariance function ${k^{y}}(s,t)={e^{{a_{1}}(s+t)}}{\int _{0}^{s\wedge t}}{e^{-2{a_{1}}u}}{y^{2}}(u)\hspace{2.5pt}du$. By uniqueness of the rate function, (15) and (16) define the same rate function. So we can deduce a LDP for the family ${({Z^{n}})_{n\in \mathbb{N}}}$ in two different ways. First, we regard ${({Z_{t}^{n,y}})_{t\in [0,1]}}$ as a family of diffusions.
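As a quick numerical sanity check of the mean $m(t)$ appearing in (14) (illustrative parameters; deterministic part only): setting $y\equiv 0$ reduces (13) to the ODE $\dot{z}=({a_{0}}+{a_{1}}z)$, whose exact solution is $m(t)$, so an Euler scheme should reproduce it:

```python
import math

def check_mean_formula(x=1.0, a0=0.5, a1=-2.0, n_steps=20000):
    """With y = 0 the SDE reduces to dz = (a0 + a1 z) dt, whose exact
    solution is m(t) = e^{a1 t} (x + (a0/a1)(1 - e^{-a1 t})), i.e. the mean
    in (14).  Compare an Euler scheme at t = 1 against m(1).
    The parameter values are illustrative."""
    dt = 1.0 / n_steps
    z = x
    for _ in range(n_steps):
        z += (a0 + a1 * z) * dt  # explicit Euler step for the drift ODE
    m1 = math.exp(a1) * (x + (a0 / a1) * (1.0 - math.exp(-a1)))
    return z, m1
```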
Remark 4.
For ${y^{n}}\in {\mathcal{C}_{\alpha }}([0,1])$ let ${Z^{n,{y^{n}}}}$ denote the solution of equation (13) with y replaced by ${y^{n}}$. Then, if ${y^{n}}\to y$ in ${\mathcal{C}_{\alpha }}([0,1])$, from the generalized Freidlin–Wentzell theory (Theorem 1 in [9]), we have that ${({({Z_{t}^{n,{y^{n}}}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$ satisfies the same LDP in $\mathcal{C}([0,1])$ as ${({({Z_{t}^{n,y}})_{t\in [0,1]}})}_{n\in \mathbb{N}}$.
We now want to prove the lower semicontinuity of $J(\cdot |\cdot )$ on $\mathcal{C}([0,1])\times {\mathcal{C}_{\alpha }}([0,1])$.
Proof.
If ${y^{n}}\stackrel{{\mathcal{C}_{\alpha }}([0,1])}{\longrightarrow }y$, then for any $\varepsilon >0$, eventually ${\inf _{t\in [0,1]}}|\frac{y(t)}{{y^{n}}(t)}{|^{2}}\ge (1-\varepsilon )$, and by the lower semicontinuity of $J(\cdot |y)$,
\[\begin{aligned}{}& \underset{({y^{n}},{f^{n}})\to (y,f)}{\liminf }J\big({f^{n}}|{y^{n}}\big)\\ {} & \hspace{1em}=\underset{({y^{n}},{f^{n}})\to (y,f)}{\liminf }\frac{1}{2}{\int _{0}^{1}}{\bigg|\frac{{\dot{f}^{n}}(t)-({a_{0}}+{a_{1}}{f^{n}}(t))}{{y^{n}}(t)}\bigg|^{2}}dt\\ {} & \hspace{1em}=\underset{({y^{n}},{f^{n}})\to (y,f)}{\liminf }\frac{1}{2}{\int _{0}^{1}}{\bigg|\frac{{\dot{f}^{n}}(t)-({a_{0}}+{a_{1}}{f^{n}}(t))}{y(t)}\bigg|^{2}}\cdot {\bigg|\frac{y(t)}{{y^{n}}(t)}\bigg|^{2}}dt\\ {} & \hspace{1em}\ge \underset{({y^{n}},{f^{n}})\to (y,f)}{\liminf }\frac{1}{2}{\int _{0}^{1}}{\bigg|\frac{{\dot{f}^{n}}(t)-({a_{0}}+{a_{1}}{f^{n}}(t))}{y(t)}\bigg|^{2}}dt\cdot \underset{t\in [0,1]}{\inf }{\bigg|\frac{y(t)}{{y^{n}}(t)}\bigg|^{2}}\\ {} & \hspace{1em}\ge (1-\varepsilon )\underset{{f^{n}}\to f}{\liminf }J\big({f^{n}}|y\big),\end{aligned}\]
and, letting $\varepsilon \to 0$, the proposition holds. □

Theorem 5.
Consider the family of processes ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$, where ${({Y^{n}})_{n\in \mathbb{N}}}$ is a family of processes with paths in ${\mathcal{C}_{\alpha }}([0,1])$ and, for $n\in \mathbb{N}$, ${Z^{n}}$ is the solution of (12). Suppose that ${({Y^{n}})_{n\in \mathbb{N}}}$ is independent of the Brownian motion and satisfies a LDP with the good rate function ${I_{Y}}$ and the speed $\gamma (n)=n$. Then ${({Y^{n}},{Z^{n}})_{n\in \mathbb{N}}}$ satisfies the WLDP with the speed $\gamma (n)=n$ and the rate function
\[ I(y,f)={I_{Y}}(y)+J(f|y),\]
and ${({Z^{n}})_{n\in \mathbb{N}}}$ satisfies the LDP with the speed $\gamma (n)$ and the rate function
\[ {I_{Z}}(f)=\underset{y\in {\mathcal{C}_{\alpha }}([0,1])}{\inf }\big\{{I_{Y}}(y)+J(f|y)\big\}.\]
Now let ${({Z_{t}^{n,y}})_{t\in [0,1]}}$ be a family of continuous Gaussian processes. We have to prove that ${({Z_{t}^{n,{y^{n}}}})_{t\in [0,1]}}$ satisfies the same LDP as ${({Z_{t}^{n,y}})_{t\in [0,1]}}$ when ${y^{n}}\stackrel{{\mathcal{C}_{\alpha }}([0,1])}{\longrightarrow }y$. Let ${\tilde{Z}_{t}^{n,{y^{n}}}}={Z_{t}^{n,{y^{n}}}}-m(t)$ for every $n\in \mathbb{N}$ and $t\in [0,1]$.
Straightforward calculations show that there exists $L>0$, such that
\[\begin{aligned}{}& \underset{s,t\in [0,1],s\ne t}{\sup }n\cdot \frac{|{k^{{y^{n}}}}(t,t)+{k^{{y^{n}}}}(s,s)-2{k^{{y^{n}}}}(t,s)|}{|t-s{|^{2\alpha }}}\\ {} & \hspace{1em}\le L\underset{s,t\in [0,1],s\ne t}{\sup }\frac{({e^{{a_{1}}(t-s)}}-1)}{|t-s{|^{2\alpha }}}<+\infty ,\hspace{1em}\mathrm{for}\hspace{2.5pt}2\alpha =1.\end{aligned}\]
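The finiteness of the last supremum can be checked numerically: for ${a_{1}}>0$ the ratio $({e^{{a_{1}}h}}-1)/{h^{2\alpha }}$ with $2\alpha =1$ is increasing in $h=t-s\in (0,1]$, so the supremum equals ${e^{{a_{1}}}}-1$. A minimal sketch (the value ${a_{1}}=1$ is an arbitrary illustration):

```python
import math

a1, two_alpha = 1.0, 1.0   # 2*alpha = 1, as in the estimate above

# The ratio depends only on h = t - s in (0, 1], so scan h over a fine grid.
sup_ratio = max((math.exp(a1 * h) - 1.0) / h ** two_alpha
                for h in (j / 2000 for j in range(1, 2001)))
# The ratio increases in h, so the supremum is attained at h = 1
# and equals e^{a1} - 1.
```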
Therefore the family ${({\tilde{Z}^{n,{y^{n}}}})_{n\in \mathbb{N}}}$ is exponentially tight at the speed n. Furthermore, conditions (6) and (7) of Theorem 2 are fulfilled; in fact
\[ \underset{n\to +\infty }{\lim }\mathbb{E}\big[\big\langle \lambda ,{\tilde{Z}^{n,{y^{n}}}}\big\rangle \big]=0\]
and
\[ \underset{n\to +\infty }{\lim }\mathrm{Var}\big(\big\langle \lambda ,{\tilde{Z}^{n,{y^{n}}}}\big\rangle \big)\cdot n={\int _{0}^{1}}{\int _{0}^{1}}{k^{y}}(s,t)\hspace{2.5pt}d\lambda (t)\hspace{2.5pt}d\lambda (s),\]
where ${k^{y}}(s,t)={e^{{a_{1}}(s+t)}}{\int _{0}^{s\wedge t}}{e^{-2{a_{1}}u}}{y^{2}}(u)\hspace{2.5pt}du$. Therefore ${({\tilde{Z}^{n,{y^{n}}}})_{n\in \mathbb{N}}}$ satisfies a LDP on $\mathcal{C}([0,1])$. Finally, thanks to the contraction principle, the family ${({Z^{n,{y^{n}}}})_{n\in \mathbb{N}}}$ satisfies a LDP on $\mathcal{C}([0,1])$ with the rate function $J(\cdot |y)$ defined in (16).
Remark 5.
The lower semicontinuity of $J(\cdot |\cdot )$ on $\mathcal{C}([0,1])\times {\mathcal{C}_{\alpha }}([0,1])$ follows from Proposition 6.
We have proved that the hypotheses of Theorem 3 are verified, so the LDP for ${({Z^{n}})_{n\in \mathbb{N}}}$ follows.
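As a sanity check on the covariance ${k^{y}}$ used in the proof, one can compare numerical quadrature of the defining integral with the closed form available for a constant path y. A minimal sketch (the values of $a_1$, s, t and the constant are arbitrary illustrations):

```python
import math

def k_y_quad(s, t, y, a1=1.0, steps=20_000):
    """Midpoint-rule quadrature of
    k^y(s,t) = e^{a1 (s+t)} * int_0^{min(s,t)} e^{-2 a1 u} y(u)^2 du."""
    m = min(s, t)
    du = m / steps
    integral = du * sum(math.exp(-2.0 * a1 * (j + 0.5) * du) * y((j + 0.5) * du) ** 2
                        for j in range(steps))
    return math.exp(a1 * (s + t)) * integral

def k_y_const(s, t, c, a1=1.0):
    """Closed form for constant y = c:
    e^{a1 (s+t)} * c^2 * (1 - e^{-2 a1 min(s,t)}) / (2 a1)."""
    return math.exp(a1 * (s + t)) * c * c * (1.0 - math.exp(-2.0 * a1 * min(s, t))) / (2.0 * a1)

q = k_y_quad(0.7, 0.4, y=lambda u: 1.5)
c = k_y_const(0.7, 0.4, 1.5)
```

The two values agree to quadrature accuracy, and the kernel is symmetric in $(s,t)$ since it depends on $s+t$ and $s\wedge t$ only.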
6 Estimates of level crossing probability
In this section we will study the probability of level crossing for a family of conditionally Gaussian processes. In particular, we will study the probability
(17)
\[ {p_{n}}=\mathbb{P}\Big(\underset{0\le t\le 1}{\sup }\big({Z_{t}^{n}}-\varphi (t)\big)>1\Big),\]
as $n\to \infty $, where ${({Z^{n}})_{n\in \mathbb{N}}}$ is a family of conditionally Gaussian processes and $\varphi \in \mathcal{C}([0,1])$ is a fixed continuous path. In this situation the probability ${p_{n}}$ has a large deviation limit. The main reference in this section is [4]. We now compute ${\lim \nolimits_{n\to \infty }}\frac{1}{\gamma (n)}\log ({p_{n}})$. The computation is simple; in fact, ${({Z^{n}})_{n\in \mathbb{N}}}$ satisfies a LDP with the rate function
\[ {I_{Z}}(w)=\underset{y\in C}{\inf }\big\{{I_{Y}}(y)+J(w|y)\big\},\]
where ${I_{Y}}(\cdot )$ is the rate function associated to the family of conditioning processes ${({Y^{n}})_{n\in \mathbb{N}}}$, C is the Polish space where ${({Y^{n}})_{n\in \mathbb{N}}}$ takes values, and $J(\cdot |y)$ is the good rate function of the family of Gaussian processes ${({Z^{n,y}})_{n\in \mathbb{N}}}$. If we denote
\[ \mathcal{A}=\Big\{w\in \mathcal{C}\big([0,1]\big):\underset{0\le t\le 1}{\sup }\big(w(t)-\varphi (t)\big)>1\Big\},\]
we have that
\[ -\underset{w\in \stackrel{\circ }{\mathcal{A}}}{\inf }{I_{Z}}(w)\le \underset{n\to +\infty }{\liminf }\frac{1}{\gamma (n)}\log ({p_{n}})\le \underset{n\to +\infty }{\limsup }\frac{1}{\gamma (n)}\log ({p_{n}})\le -\underset{w\in \bar{\mathcal{A}}}{\inf }{I_{Z}}(w),\]
where
\[ \bar{\mathcal{A}}=\Big\{w\in \mathcal{C}\big([0,1]\big):\underset{0\le t\le 1}{\sup }\big(w(t)-\varphi (t)\big)\ge 1\Big\}\]
and
\[ \stackrel{\circ }{\mathcal{A}}=\mathcal{A}=\Big\{w\in \mathcal{C}\big([0,1]\big):\underset{0\le t\le 1}{\sup }\big(w(t)-\varphi (t)\big)>1\Big\}.\]
It is a simple calculation to show that ${\inf _{w\in \stackrel{\circ }{\mathcal{A}}}}{I_{Z}}(w)={\inf _{w\in \bar{\mathcal{A}}}}{I_{Z}}(w)$. Therefore,
(19)
\[ \underset{n\to \infty }{\lim }\frac{1}{\gamma (n)}\log ({p_{n}})=-\underset{w\in \mathcal{A}}{\inf }{I_{Z}}(w).\]
For every $t\in [0,1]$ let ${\mathcal{A}_{t}}=\{w\in \mathcal{C}([0,1]):w(t)=1+\varphi (t)\}$; then $\mathcal{A}={\bigcup _{0\le t\le 1}}{\mathcal{A}_{t}}$ and so
\[ \underset{w\in \mathcal{A}}{\inf }{I_{Z}}(w)=\underset{0\le t\le 1}{\inf }\underset{w\in {\mathcal{A}_{t}}}{\inf }{I_{Z}}(w).\]
6.1 Gaussian process with random mean and variance
For every $n\in \mathbb{N}$, let ${Z^{n}}={X^{n}}{Y_{1}^{n}}+{Y_{2}^{n}}$ as in Section 4. In this case we know that
\[ J(z|y)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| \frac{z-{y_{2}}}{{y_{1}}}{\| _{\bar{\mathcal{H}}}^{2}},\hspace{1em}& \frac{z-{y_{2}}}{{y_{1}}}\in \bar{\mathcal{H}},\\ {} +\infty \hspace{1em}& \text{otherwise}.\end{array}\right.\]
Therefore, we have
\[\begin{aligned}{}\underset{w\in \mathcal{A}}{\inf }{I_{Z}}(w)& =\underset{y\in {\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])}{\inf }\underset{0\le t\le 1}{\inf }\underset{w\in {\mathcal{A}_{t}}}{\inf }\big\{{I_{Y}}(y)+J(w|y)\big\}\\ {} & =\underset{y\in {\mathcal{C}_{\alpha }}([0,1])\times \mathcal{C}([0,1])}{\inf }\underset{0\le t\le 1}{\inf }\underset{w\in {\mathcal{A}_{t}}}{\inf }\bigg\{{I_{Y}}(y)+\frac{1}{2}{\bigg\| \frac{w-{y_{2}}}{{y_{1}}}\bigg\| _{\bar{\mathcal{H}}}^{2}}\bigg\}.\end{aligned}\]
The set of paths of the form
\[ h(u)={\int _{0}^{1}}\bar{k}(u,v)\hspace{2.5pt}d\lambda (v),\hspace{1em}u\in [0,1],\hspace{1em}\lambda \in \mathcal{M}[0,1],\]
is dense in $\bar{\mathcal{H}}$ and, therefore, the infimum ${\inf _{w\in {\mathcal{A}_{t}}}}\{{I_{Y}}(y)+\frac{1}{2}\| \frac{w-{y_{2}}}{{y_{1}}}{\| _{\bar{\mathcal{H}}}^{2}}\}$ is the same as that over the functions w such that
\[ w(u)-{y_{2}}(u)={y_{1}}(u)\cdot {\int _{0}^{1}}\bar{k}(u,v)\hspace{2.5pt}d\lambda (v),\hspace{1em}u\in [0,1],\]
for some $\lambda \in \mathcal{M}[0,1]$. For paths of this kind, recalling the expression of their norms in the RKHS, the functional we aim to minimize is given by
\[ {I_{Y}}(y)+\frac{1}{2}{\bigg\| \frac{w-{y_{2}}}{{y_{1}}}\bigg\| _{\bar{\mathcal{H}}}^{2}}={I_{Y}}(y)+\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}\bar{k}(u,v)\hspace{2.5pt}d\lambda (u)\hspace{2.5pt}d\lambda (v)\]
therefore, it is enough to minimize the right-hand side of the above equation with respect to the measure λ, with the additional constraint that $w(t)=1+\varphi (t)$ (i.e. $w\in {\mathcal{A}_{t}}$), which we can write in the equivalent form
\[ {\int _{0}^{1}}\bar{k}(t,v)\hspace{2.5pt}d\lambda (v)-\frac{1+\varphi (t)-{y_{2}}(t)}{{y_{1}}(t)}=0.\]
This is a constrained extremum problem, and thus we are led to use the method of Lagrange multipliers. The measure λ must be such that
\[ {\int _{0}^{1}}{\int _{0}^{1}}\bar{k}(u,v)\hspace{2.5pt}d\lambda (u)\hspace{2.5pt}d\mu (v)=\beta {\int _{0}^{1}}\bar{k}(t,v)\hspace{2.5pt}d\mu (v),\hspace{1em}\mu \in \mathcal{M}[0,1],\]
for some $\beta \in \mathbb{R}$. We find
\[ \beta =\frac{{\textstyle\int _{0}^{1}}\bar{k}(t,u)\hspace{2.5pt}d\lambda (u)}{\bar{k}(t,t)}=\frac{1+\varphi (t)-{y_{2}}(t)}{{y_{1}}(t)\bar{k}(t,t)}\]
and
\[ \bar{\lambda }=\frac{1+\varphi (t)-{y_{2}}(t)}{{y_{1}}(t)\bar{k}(t,t)}{\delta _{\{t\}}},\]
with ${\delta _{\{t\}}}$ standing for the Dirac mass in t. Such a measure satisfies the Lagrange multipliers problem, and it is therefore a critical point for the functional we want to minimize. Since this functional is strictly convex, and remains strictly convex when restricted to a linear subspace of $\mathcal{M}[0,1]$, the critical point $\bar{\lambda }$ is actually its unique point of minimum. Hence, we have
\[ \underset{w\in {\mathcal{A}_{t}}}{\inf }\bigg\{{I_{Y}}(y)+\frac{1}{2}{\bigg\| \frac{w-{y_{2}}}{{y_{1}}}\bigg\| _{\bar{\mathcal{H}}}^{2}}\bigg\}={I_{Y}}(y)+\frac{{(1+\varphi (t)-{y_{2}}(t))}^{2}}{2{y_{1}^{2}}(t)\bar{k}(t,t)}.\]
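For a fixed conditioning path y, the remaining minimization is a one-dimensional search over t. A minimal numerical sketch, under the hypothetical assumptions that the conditioning is deterministic (so ${I_{Y}}$ drops out) and that the underlying Gaussian process is a standard Brownian motion, for which $\bar{k}(t,t)=t$ (the actual $\bar{k}$ is defined earlier in the paper):

```python
import math

def rate_61(phi, y1, y2, kbar_diag, grid=1000):
    """Grid search for inf_t (1 + phi(t) - y2(t))^2 / (2 y1(t)^2 kbar(t,t)),
    the value obtained above for a fixed conditioning path y = (y1, y2)."""
    return min((1.0 + phi(j / grid) - y2(j / grid)) ** 2
               / (2.0 * y1(j / grid) ** 2 * kbar_diag(j / grid))
               for j in range(1, grid + 1))   # t = 0 excluded: kbar(0,0) = 0

# Hypothetical illustration: kbar(t,t) = t (Brownian motion), phi = 0,
# y1 = 1, y2 = 0.  The infimum of 1/(2t) over (0,1] is attained at t = 1
# and equals 1/2, the classical small-noise Brownian level-crossing rate.
r = rate_61(phi=lambda t: 0.0, y1=lambda t: 1.0, y2=lambda t: 0.0,
            kbar_diag=lambda t: t)
```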
6.2 Ornstein–Uhlenbeck processes with random diffusion coefficient
In this case
\[\begin{aligned}{}J(f|y)& =\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| f-m{\| _{{\mathcal{H}_{y}}}^{2}},\hspace{1em}& f-m\in {\mathcal{H}_{y}},\\ {} +\infty ,\hspace{1em}& f-m\notin {\mathcal{H}_{y}},\end{array}\right.\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}{\textstyle\int _{0}^{1}}|\frac{\dot{f}(t)-({a_{0}}+{a_{1}}f(t))}{y(t)}{|^{2}}dt,\hspace{1em}& f\in {H_{1}^{x}},\\ {} +\infty ,\hspace{1em}& f\notin {H_{1}^{x}},\end{array}\right.\end{aligned}\]
where $m(t)={e^{{a_{1}}t}}(x+\frac{{a_{0}}}{{a_{1}}}[1-{e^{-{a_{1}}t}}])$, $t\in [0,1]$, ${\mathcal{H}_{y}}$ is the RKHS associated to
\[ {k^{y}}(s,t)={e^{{a_{1}}(s+t)}}{\int _{0}^{s\wedge t}}{e^{-2{a_{1}}u}}{y^{2}}(u)\hspace{2.5pt}du,\]
and ${H_{1}^{x}}=m+{\mathcal{H}_{y}}$. We have
\[\begin{aligned}{}\underset{w\in \mathcal{A}}{\inf }{I_{Z}}(w)& =\underset{y\in {\mathcal{C}_{\alpha }}[0,1]}{\inf }\underset{0\le t\le 1}{\inf }\underset{w\in {\mathcal{A}_{t}}}{\inf }\big\{{I_{Y}}(y)+J(w|y)\big\}\\ {} & =\underset{y\in {\mathcal{C}_{\alpha }}[0,1]}{\inf }\underset{0\le t\le 1}{\inf }\underset{w\in {\mathcal{A}_{t}}}{\inf }\bigg\{{I_{Y}}(y)+\frac{1}{2}\| w-m{\| _{{\mathcal{H}_{y}}}^{2}}\bigg\}.\end{aligned}\]
The set of paths of the form
\[ h(u)={\int _{0}^{1}}{k^{y}}(u,v)\hspace{2.5pt}d\lambda (v),\hspace{1em}u\in [0,1],\hspace{1em}\lambda \in \mathcal{M}[0,1],\]
is dense in ${\mathcal{H}_{y}}$ and, therefore, the infimum
\[ \underset{w\in {\mathcal{A}_{t}}}{\inf }\bigg\{{I_{Y}}(y)+\frac{1}{2}\| w-m{\| _{{\mathcal{H}_{y}}}^{2}}\bigg\},\]
is the same as that over the functions of the form
\[ w(u)=m(u)+{\int _{0}^{1}}{k^{y}}(u,v)\hspace{2.5pt}d\lambda (v),\hspace{1em}u\in [0,1],\]
for some $\lambda \in \mathcal{M}[0,1]$. For paths of this kind, recalling the expression of their norms in the RKHS, the functional we aim to minimize is given by
\[ {I_{Y}}(y)+\frac{1}{2}\| w-m{\| _{{\mathcal{H}_{y}}}^{2}}={I_{Y}}(y)+\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}{k^{y}}(u,v)\hspace{2.5pt}d\lambda (u)\hspace{2.5pt}d\lambda (v)\]
therefore, it is enough to minimize the right-hand side of the above equation with respect to the measure λ, with the additional constraint $w(t)=1+\varphi (t)$ (i.e. $w\in {\mathcal{A}_{t}}$), which can be written in the equivalent form
\[ {\int _{0}^{1}}{k^{y}}(t,v)\hspace{2.5pt}d\lambda (v)-\big(1+\varphi (t)-m(t)\big)=0.\]
This is a constrained extremum problem, and thus we are led to use the method of Lagrange multipliers. We find
\[ \beta =\frac{{\textstyle\int _{0}^{1}}{k^{y}}(t,u)\hspace{2.5pt}d\lambda (u)}{{k^{y}}(t,t)}=\frac{1+\varphi (t)-m(t)}{{k^{y}}(t,t)}\]
and
\[ \bar{\lambda }=\frac{1+\varphi (t)-m(t)}{{k^{y}}(t,t)}{\delta _{\{t\}}},\]
${\delta _{\{t\}}}$ standing for the Dirac mass in t. Such a measure satisfies the Lagrange multipliers problem, and it is therefore a critical point for the functional we want to minimize. Since this functional is strictly convex, and remains strictly convex when restricted to a linear subspace of $\mathcal{M}[0,1]$, the critical point $\bar{\lambda }$ is actually its unique point of minimum. Hence, we have
\[ \underset{w\in {\mathcal{A}_{t}}}{\inf }\bigg\{{I_{Y}}(y)+\frac{1}{2}\| w-m{\| _{{\mathcal{H}_{y}}}^{2}}\bigg\}={I_{Y}}(y)+\frac{{(1+\varphi (t)-m(t))}^{2}}{2{k^{y}}(t,t)}\]
and therefore
\[ \underset{n\to \infty }{\lim }\frac{1}{\gamma (n)}\log ({p_{n}})=-\underset{y\in {\mathcal{C}_{\alpha }}[0,1]}{\inf }\underset{0\le t\le 1}{\inf }\bigg\{{I_{Y}}(y)+\frac{{(1+\varphi (t)-m(t))}^{2}}{2{k^{y}}(t,t)}\bigg\}.\]
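The inner infimum over t is easy to evaluate numerically for concrete choices. A sketch under the hypothetical assumptions of a deterministic constant diffusion path $y\equiv c$ (so ${I_{Y}}$ drops out) and the closed-form kernel ${k^{y}}$ above:

```python
import math

def ou_rate(a0, a1, x, c, phi, grid=1000):
    """Grid search for inf_t (1 + phi(t) - m(t))^2 / (2 k^y(t,t)) with a
    constant diffusion path y = c (deterministic conditioning, I_Y = 0)."""
    best = math.inf
    for j in range(1, grid + 1):           # t = 0 excluded: k^y(0,0) = 0
        t = j / grid
        m = math.exp(a1 * t) * (x + (a0 / a1) * (1.0 - math.exp(-a1 * t)))
        k = math.exp(2.0 * a1 * t) * c * c * (1.0 - math.exp(-2.0 * a1 * t)) / (2.0 * a1)
        best = min(best, (1.0 + phi(t) - m) ** 2 / (2.0 * k))
    return best

# With a0 = 0, x = 0 (so m = 0), y = 1 and phi = 0 the ratio 1/(e^{2t} - 1)
# is decreasing, so the infimum is attained at t = 1 and equals 1/(e^2 - 1).
r = ou_rate(a0=0.0, a1=1.0, x=0.0, c=1.0, phi=lambda t: 0.0)
```

With these choices the level-crossing probability ${p_{n}}$ decays like $\exp (-n/({e^{2}}-1))$ on the logarithmic scale.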