1 Introduction
In this paper we study the asymptotics of the regular conditional prediction law of a Gaussian Volterra process in a case where one does not observe the process directly, but instead observes a noisy version of it. More precisely we consider two different situations which generalize the results contained in [13] and [9], respectively. Let $X={({X_{t}})_{t\ge 0}}$ be a continuous real Volterra process.
Definition 1.
A centered Gaussian process X is a Volterra process if, for every $T>0$, it admits the representation
(1)
\[ {X_{t}}={\int _{0}^{t}}K(t,s)\hspace{0.1667em}d{B_{s}},\hspace{1em}t\in [0,T],\]
where $B={({B_{t}})_{t\ge 0}}$ is a Brownian motion and K is a square integrable function on ${[0,T]^{2}}$ (the kernel) such that $K(t,s)=0$ for all $s>t$.
For a Volterra process the covariance function is
(2)
\[ k(t,s)={\int _{0}^{s\wedge t}}K(t,u)K(s,u)\hspace{0.1667em}du\hspace{1em}\text{for}\hspace{2.5pt}t,s\in [0,T].\]
Let $\tilde{B}={({\tilde{B}_{t}})_{t\ge 0}}$ be another Brownian motion independent of B and for $\alpha ,\tilde{\alpha }\in \mathbb{R}$ define ${W^{\alpha ,\tilde{\alpha }}}=\alpha B+\tilde{\alpha }\tilde{B}$.
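Formula (2) lends itself to a quick numerical sanity check: the constant kernel $K\equiv 1$ reproduces standard Brownian motion, whose covariance is $s\wedge t$. The following sketch (a numerical illustration of ours; the function names and grid size are arbitrary choices) approximates the integral in (2) by the trapezoidal rule.

```python
import numpy as np

def volterra_cov(t, s, K, n=2001):
    """Covariance k(t, s) of a Volterra process via (2):
    k(t, s) = integral of K(t, u) * K(s, u) over [0, s ∧ t]."""
    upper = min(s, t)
    u = np.linspace(0.0, upper, n)
    y = K(t, u) * K(s, u)
    du = u[1] - u[0]
    return float(np.sum(du * (y[:-1] + y[1:]) / 2.0))  # trapezoidal rule

# K ≡ 1 gives standard Brownian motion, for which k(t, s) = s ∧ t.
bm_kernel = lambda t, u: np.ones_like(u)
print(volterra_cov(1.5, 0.7, bm_kernel))  # ≈ 0.7
```

With a fractional-type kernel in place of `bm_kernel`, the same routine approximates the covariance of the corresponding Volterra process.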
First case For fixed $n\in \mathbb{N}$ and $T>0$, we consider the conditioning of X on n linear functionals of the paths of ${W^{\alpha ,\tilde{\alpha }}}$,
\[ {\boldsymbol{G}_{T}}\big({W^{\alpha ,\tilde{\alpha }}}\big)={\big({G_{T}^{1}}\big({W^{\alpha ,\tilde{\alpha }}}\big),\dots ,{G_{T}^{n}}\big({W^{\alpha ,\tilde{\alpha }}}\big)\big)^{\intercal }},\]
more precisely,
\[ {\boldsymbol{G}_{T}}\big({W^{\alpha ,\tilde{\alpha }}}\big)={\int _{0}^{T}}\boldsymbol{g}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}={\Bigg({\int _{0}^{T}}{g_{1}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}},\dots ,{\int _{0}^{T}}{g_{n}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}\Bigg)^{\intercal }},\]
where $\boldsymbol{g}={({g_{1}},\dots ,{g_{n}})^{\intercal }}$ is a suitable vectorial function defined on $[0,T]$. Informally the generalized conditioned process ${X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$, for $\boldsymbol{x}\in {\mathbb{R}^{n}}$, is the law of the Gaussian process X conditioned on the set
\[ \Bigg\{{\int _{0}^{T}}\boldsymbol{g}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}=\boldsymbol{x}\Bigg\}={\bigcap \limits_{i=1}^{n}}\Bigg\{{\int _{0}^{T}}{g_{i}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}={x_{i}}\Bigg\}.\]
We obtain a large deviation principle for the family of processes ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$.
Second case We are interested in the regular conditional law of the process X given the σ-algebra ${\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}}$, where ${({\mathcal{F}_{t}^{\alpha ,\tilde{\alpha }}})_{t\ge 0}}$ is the filtration generated by the mixed Brownian motion ${W^{\alpha ,\tilde{\alpha }}}$, i.e. we want to condition the process on the past of the mixed Brownian motion up to a fixed time $T>0$. Informally, the generalized conditioned process ${X^{\psi }}$, for ψ a continuous function, is the law of the Gaussian process X conditioned on the set
\[ \big\{{W_{t}^{\alpha ,\tilde{\alpha }}}=\psi (t),\hspace{2.5pt}t\in [0,T]\big\}.\]
Here we obtain a large deviation principle for the family of processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$.
Since $T,\alpha $ and $\tilde{\alpha }$ are fixed, the dependence on these quantities will be omitted from the notation.
The paper is organized as follows. In Section 2 we recall some basic facts on large deviation theory for continuous Gaussian processes and Volterra processes. Sections 3 and 4 are dedicated to the main results. Both are divided into three subsections. In the first one we give the conditional law, in the second one we prove the large deviation principle and in the third one we present some examples. Section 3 is dedicated to the conditioning on n functionals of the paths of the noisy process. Section 4 is dedicated to the conditioning on the past of the noisy process.
2 Large deviations for continuous Gaussian processes
We briefly recall some main facts on large deviation principles that we are going to use. For a detailed development of this very wide theory we refer, for example, to the following classical references: Chapter II in Azencott [1], Section 3.4 in Deuschel and Stroock [6], and Chapter 4 (in particular Sections 4.1, 4.2 and 4.5) in Dembo and Zeitouni [5].
Definition 2.
Let E be a topological space, $\mathcal{B}(E)$ be the Borel σ-algebra and ${({\mu _{\varepsilon }})_{\varepsilon >0}}$ be a family of probability measures on $\mathcal{B}(E)$. We say that the family of probability measures ${({\mu _{\varepsilon }})_{\varepsilon >0}}$ satisfies a large deviation principle on E with the rate function I and the inverse speed ${\eta _{\varepsilon }}$ (${\eta _{\varepsilon }}>0$, ${\eta _{\varepsilon }}\to 0$ as $\varepsilon \to 0$) if, for any open set Θ,
\[ \underset{\varepsilon \to 0}{\liminf }\hspace{0.1667em}{\eta _{\varepsilon }}\log {\mu _{\varepsilon }}(\Theta )\ge -\underset{\Theta }{\inf }I,\]
and for any closed set Γ,
\[ \underset{\varepsilon \to 0}{\limsup }\hspace{0.1667em}{\eta _{\varepsilon }}\log {\mu _{\varepsilon }}(\Gamma )\le -\underset{\Gamma }{\inf }I.\]
A rate function is a lower semicontinuous mapping $I:E\to [0,+\infty ]$. A rate function I is said to be good if the sets $\{I\le a\}$ are compact for every $a\ge 0$.
In this paper E will be the set of continuous functions on $[0,1]$ and $\mathcal{B}(E)$ will be the Borel σ-algebra generated by the open sets induced by the uniform convergence. Therefore, in this section we consider processes on the interval $[0,1]$. Let $U={({U_{t}})_{t\in [0,1]}}$ be a continuous and centered Gaussian process on a probability space $(\Omega ,\mathcal{F},\mathbb{P})$. From now on, we will denote by $C[0,1]$ the set of continuous functions on $[0,1]$, and by $\mathcal{B}(C[0,1])$ the Borel σ-algebra generated by the open sets induced by the uniform convergence. Moreover, we will denote by $\mathcal{M}[0,1]$ its dual, that is, the set of signed Borel measures on $[0,1]$. The action of $\mathcal{M}[0,1]$ on $C[0,1]$ is given by
\[ \langle \lambda ,x\rangle ={\int _{0}^{1}}x(t)\hspace{0.1667em}d\lambda (t).\]
Remark 1.
We say that a family of continuous processes ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle if the associated family of laws satisfies a large deviation principle on $C[0,1]$.
The following remarkable theorem (Proposition 1.5 in [1]) gives an explicit expression for the Cramér transform ${\Lambda ^{\ast }}$ of a continuous centered Gaussian process ${({U_{t}})_{t\in [0,1]}}$ with covariance function k. Let us recall that for $\lambda \in \mathcal{M}[0,1]$,
\[ \Lambda (\lambda )=\log \mathbb{E}\big[{e^{\langle \lambda ,U\rangle }}\big]=\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (t)\hspace{0.1667em}d\lambda (s).\]
Theorem 1.
Let ${({U_{t}})_{t\in [0,1]}}$ be a continuous and centered Gaussian process with covariance function k. Let ${\Lambda ^{\ast }}$ denote the Cramér transform of Λ, that is
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\Lambda ^{\ast }}(x)& \displaystyle =& \displaystyle \underset{\lambda \in \mathcal{M}[0,1]}{\sup }\big(\langle \lambda ,x\rangle -\Lambda (\lambda )\big)\\ {} & \displaystyle =& \displaystyle \underset{\lambda \in \mathcal{M}[0,1]}{\sup }\Bigg(\langle \lambda ,x\rangle -\frac{1}{2}{\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (t)\hspace{0.1667em}d\lambda (s)\Bigg).\end{array}\]
Then,
\[ {\Lambda ^{\ast }}(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| x{\| _{\mathcal{H}}^{2}},\hspace{1em}& x\in \mathcal{H},\\ {} +\infty ,\hspace{1em}& \textit{otherwise},\end{array}\right.\]
where $\mathcal{H}$ and $\| .{\| _{\mathcal{H}}}$ denote, respectively, the reproducing kernel Hilbert space and the related norm associated to the covariance function k.
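For a discrete measure $\lambda ={\textstyle\sum }{a_{i}}{\delta _{{t_{i}}}}$ one has $\langle \lambda ,U\rangle ={\textstyle\sum }{a_{i}}{U_{{t_{i}}}}$, so $2\Lambda (\lambda )=\operatorname{Var}(\langle \lambda ,U\rangle )={\textstyle\sum _{i,j}}{a_{i}}{a_{j}}k({t_{i}},{t_{j}})$. A minimal Monte Carlo sketch for Brownian motion, where $k(t,s)=t\wedge s$ (the atoms, weights and sample size are illustrative choices of ours):

```python
import numpy as np

# For λ = Σ a_i δ_{t_i}:  2Λ(λ) = Var(⟨λ, U⟩) = Σ_ij a_i a_j k(t_i, t_j).
# Monte Carlo check for Brownian motion, k(t, s) = t ∧ s.
rng = np.random.default_rng(1)
ts = np.array([0.2, 0.5, 0.9])   # atoms of λ (illustrative)
a = np.array([1.0, -2.0, 0.5])   # weights of λ (illustrative)

Kmat = np.minimum.outer(ts, ts)  # covariance matrix (t_i ∧ t_j)_ij
two_lambda = a @ Kmat @ a        # 2Λ(λ) from the covariance function

L = np.linalg.cholesky(Kmat)     # sample (U_{t_1}, U_{t_2}, U_{t_3}) jointly
draws = (L @ rng.standard_normal((3, 200_000))).T @ a
print(two_lambda, draws.var())   # the empirical variance should be close
```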
Reproducing kernel Hilbert spaces are an important tool to handle Gaussian processes. For a detailed development of this wide theory we can refer, for example, to Chapter 4 in [10] (in particular Section 4.3) and Chapter 2 in [3] (in particular Sections 2.2 and 2.3). In order to state a large deviation principle for a family of Gaussian processes, we need the following definition.
Definition 3.
A family of continuous processes ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ is exponentially tight at the inverse speed ${\eta _{\varepsilon }}$ if for every $R>0$ there exists a compact set ${K_{R}}\subset C[0,1]$ such that
\[ \underset{\varepsilon \to 0}{\limsup }\hspace{0.1667em}{\eta _{\varepsilon }}\log \mathbb{P}\big({U^{\varepsilon }}\notin {K_{R}}\big)\le -R.\]
If the means and the covariance functions of an exponentially tight family of Gaussian processes have a good limit behavior, then the family satisfies a large deviation principle, as stated in the following theorem, which is a consequence of the classical abstract Gärtner–Ellis theorem (Baldi's theorem; see Theorem 4.5.20 and Corollary 4.6.14 in [5]) and of Theorem 1.
Theorem 2.
Let ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ be an exponentially tight family of continuous Gaussian processes at the inverse speed function ${\eta _{\varepsilon }}$. Suppose that, for any $\lambda \in \mathcal{M}[0,1]$,
\[ \underset{\varepsilon \to 0}{\lim }\mathbb{E}\big[\big\langle \lambda ,{U^{\varepsilon }}\big\rangle \big]=0\]
and the limit
\[ \Lambda (\lambda )=\underset{\varepsilon \to 0}{\lim }\frac{1}{{\eta _{\varepsilon }}}\operatorname{Var}\big(\big\langle \lambda ,{U^{\varepsilon }}\big\rangle \big)={\int _{0}^{1}}{\int _{0}^{1}}k(t,s)\hspace{0.1667em}d\lambda (t)\hspace{0.1667em}d\lambda (s)\]
exists for some continuous, symmetric, positive definite function k that is the covariance function of a continuous Gaussian process. Then ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle on $C[0,1]$ with the inverse speed ${\eta _{\varepsilon }}$ and the good rate function
\[ I(h)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| h{\| _{\mathcal{H}}^{2}},\hspace{1em}& h\in \mathcal{H},\\ {} +\infty ,\hspace{1em}& \textit{otherwise},\end{array}\right.\]
where $\mathcal{H}$ and $\| .{\| _{\mathcal{H}}}$, respectively, denote the reproducing kernel Hilbert space and the related norm associated to the covariance function k.
In order to prove exponential tightness we shall use the following result (see Proposition 2.1 in [12]).
Proposition 1.
Let ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ be a family of continuous Gaussian processes, where ${U_{0}^{\varepsilon }}=0$ for all $\varepsilon >0$. Suppose there exist constants $\beta ,{M_{1}},{M_{2}}>0$ such that for $\varepsilon >0$,
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\mathbb{E}[{U_{t}^{\varepsilon }}-{U_{s}^{\varepsilon }}]|}{|t-s{|^{\beta }}}\le {M_{1}}\]
and
(3)
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\operatorname{Var}({U_{t}^{\varepsilon }}-{U_{s}^{\varepsilon }})}{{\eta _{\varepsilon }}\hspace{0.1667em}|t-s{|^{2\beta }}}\le {M_{2}}.\]
Then ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ is exponentially tight at the inverse speed function ${\eta _{\varepsilon }}$.
Remark 2.
Suppose ${({({U_{t}^{\varepsilon }})_{t\in [0,1]}})_{\varepsilon >0}}$ is a family of centered Gaussian processes, defined on the probability space $(\Omega ,\mathcal{F},\mathbb{P})$, that satisfies a large deviation principle on $C[0,1]$ with the inverse speed ${\eta _{\varepsilon }}$ and the good rate function I. Let ${({m^{\varepsilon }})_{\varepsilon >0}}\subset C[0,1]$, $m\in C[0,1]$ be functions such that ${m^{\varepsilon }}\stackrel{C[0,1]}{\underset{}{\longrightarrow }}m$, as $\varepsilon \to 0$. Then, the family of processes ${({m^{\varepsilon }}+{U^{\varepsilon }})_{\varepsilon >0}}$ satisfies a large deviation principle on $C[0,1]$ with the same inverse speed ${\eta _{\varepsilon }}$ and the good rate function
\[ {I_{m}}(h)=I(h-m)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| h-m{\| _{\mathcal{H}}^{2}},\hspace{1em}& h-m\in \mathcal{H},\\ {} +\infty ,\hspace{1em}& h-m\notin \mathcal{H}.\end{array}\right.\]
In fact the two families ${({m^{\varepsilon }}+{U^{\varepsilon }})_{\varepsilon >0}}$ and ${(m+{U^{\varepsilon }})_{\varepsilon >0}}$ are exponentially equivalent (at the inverse speed ${\eta _{\varepsilon }}$) and therefore, as far as the large deviation principle is concerned, they are indistinguishable. See Theorem 4.2.13 in [5].

Our first aim is to study the behavior of the covariance function and of the mean function of the original process X in order to get a functional large deviation principle for the family ${({({X_{T+\varepsilon t}}-{X_{T}})_{t\in [0,1]}})_{\varepsilon >0}}$, as $\varepsilon \to 0$.
Let ${({X_{t}})_{t\ge 0}}$ be a continuous centered Gaussian process and fix $T>0$. The next two assumptions guarantee that Theorem 2 is applicable to the family of processes ${({({X_{T+\varepsilon t}}-{X_{T}})_{t\in [0,1]}})_{\varepsilon >0}}$. Let ${\gamma _{\varepsilon }}>0$ be an infinitesimal function, i.e. ${\gamma _{\varepsilon }}\to 0$ as $\varepsilon \to 0$.
Assumption 1.
For any fixed $T>0$ there exists an asymptotic covariance function $\bar{k}$ defined as
(4)
\[\begin{aligned}{}\bar{k}(t,s)& =\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\gamma _{\varepsilon }^{2}}}\\ {} & =\underset{\varepsilon \to 0}{\lim }\frac{k(T+\varepsilon t,T+\varepsilon s)-k(T+\varepsilon t,T)-k(T+\varepsilon s,T)+k(T,T)}{{\gamma _{\varepsilon }^{2}}},\end{aligned}\]
uniformly in $(t,s)\in [0,1]\times [0,1]$.
Remark 3.
Notice that $\bar{k}$ is a continuous covariance function, being the (uniform) limit of continuous, symmetric and positive definite functions.
Remark 4.
Recall that the continuity of the covariance function is not a sufficient condition to identify a Gaussian process with continuous paths. We need some more regularity. Since we are investigating continuous Volterra processes, it would be useful to have a criterion to establish the regularity of the paths. A sufficient condition for the continuity of the trajectories of a centered Gaussian process can be given in terms of the metric entropy induced by the canonical metric associated to the process (for further details, see [7] and [8]). Such an approach may be difficult to apply. However, in [2], a necessary and sufficient condition for the Hölder continuity of a centered Gaussian process is established in terms of the Hölder continuity of the covariance function. More precisely, a Gaussian process ${({X_{t}})_{t\in [0,T]}}$ is Hölder continuous of any exponent $0<a<A$ if and only if, for every $\varepsilon >0$, there exists a constant ${c_{\varepsilon }}>0$ such that, for all $s,t\in [0,T]$,
\[ \operatorname{Var}({X_{t}}-{X_{s}})\le {c_{\varepsilon }}\hspace{0.1667em}|t-s{|^{2(A-\varepsilon )}}.\]
Although, obviously, the Hölder continuity property of the process is stronger than continuity, in many cases of interest it is more easily established because the covariance function is not difficult to study. Recalling the form of the covariance of a Volterra process (2), we have the following sufficient condition for the Hölder continuity of a Volterra process: there exist constants $c,A>0$ such that
\[ \underset{t\in [0,T-\delta ]}{\sup }\operatorname{Var}({X_{t+\delta }}-{X_{t}})\le c\hspace{0.1667em}{\delta ^{2A}}\]
for all $\delta \in [0,T]$, where, by (2),
\[ \operatorname{Var}({X_{t+\delta }}-{X_{t}})={\int _{0}^{t}}{\big(K(t+\delta ,u)-K(t,u)\big)^{2}}\hspace{0.1667em}du+{\int _{t}^{t+\delta }}K{(t+\delta ,u)^{2}}\hspace{0.1667em}du.\]
From now on, by covariance regular enough we mean that the covariance function satisfies some sufficient condition ensuring that the associated process has continuous paths.
Assumption 2.
For any fixed $T>0$ there exist constants $M,\tau >0$, such that for $\varepsilon >0$,
\[\begin{aligned}{}& \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\operatorname{Var}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}})}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\tau }}}\\ {} & =\underset{s,t\in [0,1],s\ne t}{\sup }\frac{k(T+\varepsilon t,T+\varepsilon t)-2k(T+\varepsilon t,T+\varepsilon s)+k(T+\varepsilon s,T+\varepsilon s)}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\tau }}}\le M.\end{aligned}\]
As an immediate application of Theorem 2 (take ${U_{t}^{\varepsilon }}={X_{T+\varepsilon t}}-{X_{T}}$), Assumptions 1 and 2 imply, if $\bar{k}$ is regular enough, that the family ${({({X_{T+\varepsilon t}}-{X_{T}})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle on $C[0,1]$ with the inverse speed ${\gamma _{\varepsilon }^{2}}$ and the good rate function given by
\[ {J_{X}}(h)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| h{\| _{\bar{\mathcal{H}}}^{2}},\hspace{1em}& h\in \bar{\mathcal{H}},\\ {} +\infty ,\hspace{1em}& \text{otherwise,}\end{array}\right.\]
where $\bar{\mathcal{H}}$ is the reproducing kernel Hilbert space associated to the covariance function $\bar{k}$ and the symbol $\| \cdot {\| _{\bar{\mathcal{H}}}}$ denotes the usual norm defined on $\bar{\mathcal{H}}$.

In fact Assumption 1 immediately implies that
\[ \Lambda (\lambda )=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Var}(\langle \lambda ,{X_{T+\varepsilon \cdot }}-{X_{T}}\rangle )}{{\gamma _{\varepsilon }^{2}}}={\int _{0}^{1}}{\int _{0}^{1}}\bar{k}(t,s)\lambda (dt)\lambda (ds).\]
Furthermore, Assumption 2 implies that the family ${({({X_{T+\varepsilon t}}-{X_{T}})_{t\in [0,1]}})_{\varepsilon >0}}$ is exponentially tight at the inverse speed function ${\gamma _{\varepsilon }^{2}}$.
3 Conditioning to n functionals of the path
3.1 Conditional law
Let $(\Omega ,\mathcal{F},{({\mathcal{F}_{t}})_{t\ge 0}},\mathbb{P})$ be a filtered probability space. On this space we consider a Brownian motion $B={({B_{t}})_{t\ge 0}}$, a continuous real Volterra process $X={({X_{t}})_{t\ge 0}}$ and another Brownian motion $\tilde{B}={({\tilde{B}_{t}})_{t\ge 0}}$ independent of B. For $\alpha ,\tilde{\alpha }\in \mathbb{R}$ let us define the mixed Brownian motion ${W^{\alpha ,\tilde{\alpha }}}=\alpha B+\tilde{\alpha }\tilde{B}$.
For fixed $n\in \mathbb{N}$ and $T>0$, we consider the conditioning of X on the n linear functionals ${\boldsymbol{G}_{T}}({W^{\alpha ,\tilde{\alpha }}})={({G_{T}^{1}}({W^{\alpha ,\tilde{\alpha }}}),\dots ,{G_{T}^{n}}({W^{\alpha ,\tilde{\alpha }}}))^{\intercal }}$ of the paths of ${W^{\alpha ,\tilde{\alpha }}}$,
\[ {\boldsymbol{G}_{T}}\big({W^{\alpha ,\tilde{\alpha }}}\big)={\int _{0}^{T}}\boldsymbol{g}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}={\Bigg({\int _{0}^{T}}{g_{1}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}},\dots ,{\int _{0}^{T}}{g_{n}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}\Bigg)^{\intercal }},\]
where $\boldsymbol{g}={({g_{1}},\dots ,{g_{n}})^{\intercal }}$ is a vectorial function and ${g_{k}}\in {\mathbb{L}^{2}}[0,T]$, for $k=1,\dots ,n$. We assume, without any loss of generality, that the functions ${g_{i}}$, $i=1,\dots ,n$, are linearly independent. The linearly dependent components of $\boldsymbol{g}$ can be simply removed from the conditioning. As we said in the Introduction, the generalized conditioned process ${X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$, for $\boldsymbol{x}\in {\mathbb{R}^{n}}$, is the law of the Gaussian process X conditioned on the set
\[ \Bigg\{{\int _{0}^{T}}\boldsymbol{g}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}=\boldsymbol{x}\Bigg\}={\bigcap \limits_{i=1}^{n}}\Bigg\{{\int _{0}^{T}}{g_{i}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}={x_{i}}\Bigg\}.\]
The law ${\mathbb{P}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$ of ${X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$ is the regular conditional distribution on $C[0,+\infty )$, endowed with the topology induced by the sup-norm on compact sets,
\[ {\mathbb{P}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(X\in E)=\mathbb{P}\big({X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}\in E\big)=\mathbb{P}\Bigg(X\in E\Big|{\int _{0}^{T}}\boldsymbol{g}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}=\boldsymbol{x}\Bigg).\]
For more details about the existence of such a regular conditional distribution see, for example, [11].

Denote by ${C^{\boldsymbol{g}}}={({c_{ij}^{{g_{i}}{g_{j}}}})_{i,j=1,\dots ,n}}$ the matrix defined by
\[ {c_{ij}^{{g_{i}}{g_{j}}}}=\operatorname{Cov}\Bigg({\int _{0}^{T}}{g_{i}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}},{\int _{0}^{T}}{g_{j}}(t)\hspace{0.1667em}d{W_{t}^{\alpha ,\tilde{\alpha }}}\Bigg)=\big({\alpha ^{2}}+{\tilde{\alpha }^{2}}\big){\int _{0}^{T}}{g_{i}}(t){g_{j}}(t)\hspace{0.1667em}dt.\]
The matrix ${C^{\boldsymbol{g}}}$ is invertible (since the functions ${g_{i}}$, $i=1,\dots ,n$, are linearly independent). Let us denote
\[ {r_{i}^{{g_{i}}}}(t)=\operatorname{Cov}\Bigg({X_{t}},{\int _{0}^{T}}{g_{i}}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}}\Bigg)=\alpha {\int _{0}^{t\wedge T}}K(t,u){g_{i}}(u)\hspace{0.1667em}du,\]
and
\[ {r^{\boldsymbol{g}}}(t)={\big({r_{1}^{{g_{1}}}}(t),\dots ,{r_{n}^{{g_{n}}}}(t)\big)^{\intercal }}.\]
The following theorem, similar to Theorem 3.1 in [17], gives the mean and the covariance function of the generalized conditioned process.
Theorem 3.
The generalized conditioned process ${X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$ can be represented as
(5)
\[ {X_{t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}={X_{t}}-{r^{\boldsymbol{g}}}{(t)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\Bigg({\int _{0}^{T}}\boldsymbol{g}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}}-\boldsymbol{x}\Bigg).\]
Moreover, the conditioned process ${X^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$ is a Gaussian process with mean
(6)
\[ {m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(t)=\mathbb{E}\big[{X_{t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}\big]={r^{\boldsymbol{g}}}{(t)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\boldsymbol{x},\]
and covariance
(7)
\[ {k^{\boldsymbol{g}}}(t,s)=k(t,s)-{\kappa ^{\boldsymbol{g}}}(t,s),\]
where
(8)
\[ {\kappa ^{\boldsymbol{g}}}(t,s)={r^{\boldsymbol{g}}}{(t)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}{r^{\boldsymbol{g}}}(s).\]
Proof.
It is a classical result on conditioned Gaussian laws. See, e.g., Chapter II, §13, in [15]. □
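In the simplest case $K\equiv 1$ (so $X=B$), $\alpha =1$, $\tilde{\alpha }=0$ and $\boldsymbol{g}={\text{1}_{[0,T)}}$, one gets ${r^{\boldsymbol{g}}}(t)=t\wedge T$ and ${C^{\boldsymbol{g}}}=T$, and (6) reduces to the familiar Brownian-bridge mean. A minimal sketch of this special case (an illustrative check of ours, with arbitrary values of T and x):

```python
# Sanity check of the conditional mean (6) in the simplest case:
# K ≡ 1 (so X = B), alpha = 1, alpha_tilde = 0, g = 1_[0,T).
# Then r^g(t) = t ∧ T, C^g = (1^2 + 0^2)*T = T, and (6) gives
# m(t) = (t ∧ T)/T * x: the mean of a Brownian bridge pinned at x at time T.
T, x = 1.0, 2.0

def r_g(t):
    # r^g(t) = alpha * ∫_0^{t∧T} K(t, u) g(u) du = t ∧ T here
    return min(t, T)

C_g = T  # (alpha^2 + alpha_tilde^2) * ∫_0^T g(u)^2 du

def cond_mean(t):
    # formula (6): m^{g;x}(t) = r^g(t)^T (C^g)^{-1} x  (scalar case)
    return r_g(t) / C_g * x

print(cond_mean(0.5), cond_mean(2.0))  # 1.0 2.0: halfway point, then frozen at x
```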
Remark 5.
Let us note that the covariance function of the conditioned process depends on the conditioning functions ${g_{1}},\dots ,{g_{n}}$ and on the time T, but not on the vector $\boldsymbol{x}$.
Remark 6.
If the conditioning functions ${g_{i}}$ are the indicator functions of the interval $[0,{T_{i}})$, for $i=1,\dots ,n$, then the process is conditioned to the position of the noisy Brownian motion at the times ${T_{1}},\dots ,{T_{n}}$, more precisely to the set ${\textstyle\bigcap _{i=1}^{n}}\{{W_{{T_{i}}}^{\alpha ,\tilde{\alpha }}}={x_{i}}\}$.
Remark 7.
If the conditioning functions are ${g_{i}}(s)=K({T_{i}},s){\text{1}_{[0,{T_{i}})}}(s)$, for $i=1,\dots ,n$, and $\alpha =1$, $\tilde{\alpha }=0$, then the process is conditioned to its position at the times ${T_{1}},\dots ,{T_{n}}$, more precisely to the set ${\textstyle\bigcap _{i=1}^{n}}\{{X_{{T_{i}}}}={x_{i}}\}$ (this is a particular case of the conditioned process in [13]).
3.2 Large deviations
Let ${\gamma _{\varepsilon }}>0$ be an infinitesimal function, i.e. ${\gamma _{\varepsilon }}\to 0$ for $\varepsilon \to 0$. In this section ${({X_{t}})_{t\ge 0}}$ is a continuous Volterra process as in (1). Now, in order to achieve a large deviation principle for the family of processes ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$, we have to investigate the behavior of the functions ${k^{\boldsymbol{g}}}$ and ${m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}$ (defined in (7) and (6), respectively) in a small time interval of length ε.
Now we give some conditions on the original process in order to guarantee that the hypotheses of Theorem 2 hold for the conditioned process. The next assumption (Assumption 3) implies the existence of a limit covariance.
Assumption 3.
For any $T>0$ and for ${g_{i}}\in {\mathbb{L}^{2}}[0,T]$, $i=1,\dots ,n$, there exists a vectorial function ${\bar{r}^{\boldsymbol{g}}}=({\bar{r}_{1}^{{g_{1}}}},\dots ,{\bar{r}_{n}^{{g_{n}}}})$, possibly with ${\bar{r}_{i}^{{g_{i}}}}=0$ for some $i=1,\dots ,n$, such that
(9)
\[ {\bar{r}_{i}^{{g_{i}}}}(t)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{\textstyle\textstyle\int _{0}^{T}}{g_{i}}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})}{{\gamma _{\varepsilon }}}=\underset{\varepsilon \to 0}{\lim }\frac{{r_{i}^{{g_{i}}}}(T+\varepsilon t)-{r_{i}^{{g_{i}}}}(T)}{{\gamma _{\varepsilon }}},\]
uniformly in $t\in [0,1]$.
The next assumption (Assumption 4) implies the exponential tightness of the family of the centered processes.
Assumption 4.
For any fixed $T>0$ there exist constants $M,\hat{\tau }>0$, such that for $i=1,\dots ,n$ and $\varepsilon >0$,
\[\begin{aligned}{}& \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{\textstyle\textstyle\int _{0}^{T}}{g_{i}}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})|}{{\gamma _{\varepsilon }}|t-s{|^{\hat{\tau }}}}\\ {} & \hspace{1em}=\underset{s,t\in [0,1],s\ne t}{\sup }\frac{|{r_{i}^{{g_{i}}}}(T+\varepsilon t)-{r_{i}^{{g_{i}}}}(T+\varepsilon s)|}{{\gamma _{\varepsilon }}|t-s{|^{\hat{\tau }}}}\le M.\end{aligned}\]
Remark 8.
Let us observe that Assumption 3 implies that for any fixed $T>0$
\[ \underset{\varepsilon \to 0}{\lim }{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)={\boldsymbol{r}^{\boldsymbol{g}}}(T),\]
uniformly in $t\in [0,1]$, since ${\gamma _{\varepsilon }}\to 0$. Therefore,
(10)
\[ \underset{\varepsilon \to 0}{\lim }{m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(T+\varepsilon t)={m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(T),\]
uniformly in $t\in [0,1]$. In fact, one has
\[ {m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(T+\varepsilon t)-{m^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}(T)={\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T)\big)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\boldsymbol{x},\]
and (10) immediately follows.
Remark 9.
Let us observe that Assumption 4 implies that there exists $M>0$ such that the following estimate holds for the function ${\kappa ^{\boldsymbol{g}}}$ defined in (8):
(11)
\[\begin{aligned}{}& \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|{\kappa ^{\boldsymbol{g}}}(T+\varepsilon t,T+\varepsilon t)-2{\kappa ^{\boldsymbol{g}}}(T+\varepsilon t,T+\varepsilon s)+{\kappa ^{\boldsymbol{g}}}(T+\varepsilon s,T+\varepsilon s)|}{{\gamma _{\varepsilon }^{2}}|t-s{|^{2\hat{\tau }}}}\\ {} & \hspace{1em}\le M.\end{aligned}\]
In fact, straightforward computations show that
\[\begin{aligned}{}& {\kappa ^{\boldsymbol{g}}}(T+\varepsilon t,T+\varepsilon t)-2{\kappa ^{\boldsymbol{g}}}(T+\varepsilon t,T+\varepsilon s)+{\kappa ^{\boldsymbol{g}}}(T+\varepsilon s,T+\varepsilon s)\\ {} & \hspace{1em}={\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big).\end{aligned}\]
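The algebra behind this computation is the quadratic-form identity $\kappa (t,t)-2\kappa (t,s)+\kappa (s,s)={(r(t)-r(s))^{\intercal }}{C^{-1}}(r(t)-r(s))$ for a symmetric matrix C, which a short numerical sketch can confirm (the random vectors and matrix below are illustrative, not tied to any specific kernel):

```python
import numpy as np

# Identity used above: with kappa(t, s) = r(t)^T C^{-1} r(s) and C symmetric
# positive definite,
# kappa(t,t) - 2*kappa(t,s) + kappa(s,s) = (r(t)-r(s))^T C^{-1} (r(t)-r(s)).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)   # generic symmetric positive definite matrix
Ci = np.linalg.inv(C)
rt = rng.standard_normal(n)   # stands in for r(t)
rs = rng.standard_normal(n)   # stands in for r(s)

lhs = rt @ Ci @ rt - 2 * (rt @ Ci @ rs) + rs @ Ci @ rs
rhs = (rt - rs) @ Ci @ (rt - rs)
print(abs(lhs - rhs))  # numerically zero (symmetry of C^{-1} is what is used)
```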
Proposition 2.
Under Assumptions 1 and 3, one has
\[ \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}},{X_{T+\varepsilon s}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})}{{\gamma _{\varepsilon }^{2}}}={\bar{k}^{\boldsymbol{g}}}(t,s),\]
uniformly in $(t,s)\in [0,1]\times [0,1]$, with
(12)
\[ {\bar{k}^{\boldsymbol{g}}}(t,s)=\bar{k}(t,s)-{\bar{\boldsymbol{r}}^{\boldsymbol{g}}}{(t)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}{\bar{\boldsymbol{r}}^{\boldsymbol{g}}}(s),\]
where ${\bar{\boldsymbol{r}}^{\boldsymbol{g}}}{(t)^{\intercal }}=({\bar{r}_{1}^{{g_{1}}}}(t),\dots ,{\bar{r}_{n}^{{g_{n}}}}(t))$ and ${\bar{r}_{i}^{{g_{i}}}}(t)$ is defined in (9) for $i=1,\dots ,n$.
Proof.
Taking into account equation (7), simple computations show that for $s,t\in [0,1]$,
(13)
\[\begin{aligned}{}& \operatorname{Cov}\big({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}},{X_{T+\varepsilon s}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}\big)\\ {} & \hspace{1em}=\big(k(T+\varepsilon t,T+\varepsilon s)-k(T+\varepsilon t,T)-k(T+\varepsilon s,T)+k(T,T)\big)+\\ {} & \hspace{2em}-{\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T)\big)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)-{\boldsymbol{r}^{\boldsymbol{g}}}(T)\big).\end{aligned}\]
Dividing by ${\gamma _{\varepsilon }^{2}}$ and letting $\varepsilon \to 0$, the claim follows from Assumptions 1 and 3. □
Remark 10.
Notice that ${\bar{k}^{\boldsymbol{g}}}$ is a continuous covariance function, being the (uniform) limit of continuous, symmetric and positive definite functions.
Proposition 3.
Under Assumptions 2 and 4 the family ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-\mathbb{E}[{X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}])_{t\in [0,1]}})_{\varepsilon >0}}$ is exponentially tight at the inverse speed function ${\gamma _{\varepsilon }^{2}}$.
Proof.
As ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-\mathbb{E}[{X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}])_{t\in [0,1]}})_{\varepsilon >0}}$ is a family of centered processes, it is enough to prove that (3) is satisfied with an appropriate speed function. For $\varepsilon >0$ the covariance of such process is given by (13). Therefore
\[\begin{aligned}{}& \operatorname{Var}\big({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T+\varepsilon s}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}\big)\\ {} & \hspace{1em}=k(T+\varepsilon t,T+\varepsilon t)-2k(T+\varepsilon t,T+\varepsilon s)+k(T+\varepsilon s,T+\varepsilon s)\\ {} & \hspace{2em}-{\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big).\end{aligned}\]
From Assumption 2 we already know that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{k(T+\varepsilon t,T+\varepsilon t)-2k(T+\varepsilon t,T+\varepsilon s)+k(T+\varepsilon s,T+\varepsilon s)}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\tau }}}\le M.\]
Furthermore, Assumption 4 implies that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|{\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big)^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}\big({\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon t)-{\boldsymbol{r}^{\boldsymbol{g}}}(T+\varepsilon s)\big)|}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\hat{\tau }}}}\le M.\]
Therefore condition (3) holds with the inverse speed ${\eta _{\varepsilon }}={\gamma _{\varepsilon }^{2}}$ and $\beta =\tau \wedge \hat{\tau }$. □
We are now ready to prove the main large deviation result of this section.
Theorem 4.
Suppose ${({X_{t}})_{t\ge 0}}$ satisfies Assumptions 1, 2, 3 and 4, and suppose, furthermore, that the limit covariance function ${\bar{k}^{\boldsymbol{g}}}$ defined in Proposition 2 is regular enough. Then the family of processes ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle on $C[0,1]$ with the inverse speed ${\gamma _{\varepsilon }^{2}}$ and the good rate function
(14)
\[ {J_{X}^{\boldsymbol{g}}}(h)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\hspace{0.1667em}\| h{\| _{{\bar{\mathcal{H}}^{\boldsymbol{g}}}}^{2}},\hspace{1em}& h\in {\bar{\mathcal{H}}^{\boldsymbol{g}}},\\ {} +\infty ,\hspace{1em}& \textit{otherwise},\end{array}\right.\]
where ${\bar{\mathcal{H}}^{\boldsymbol{g}}}$ is the reproducing kernel Hilbert space associated to the covariance function ${\bar{k}^{\boldsymbol{g}}}$.
Proof.
Consider the family of centered processes ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-\mathbb{E}[{X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}])_{t\in [0,1]}})_{\varepsilon >0}}$. Thanks to Proposition 3 this family of processes is exponentially tight at the inverse speed ${\gamma _{\varepsilon }^{2}}$. Thanks to Proposition 2, for any $\lambda \in \mathcal{M}[0,1]$, one has
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Var}(\langle \lambda ,{X_{T+\varepsilon \cdot }^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}\rangle )}{{\gamma _{\varepsilon }^{2}}}\\ {} & \hspace{1em}=\underset{\varepsilon \to 0}{\lim }{\int _{0}^{1}}\hspace{0.1667em}d\lambda (v){\int _{0}^{1}}\hspace{0.1667em}d\lambda (u)\frac{\operatorname{Cov}({X_{T+\varepsilon v}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}},{X_{T+\varepsilon u}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})}{{\gamma _{\varepsilon }^{2}}}\\ {} & \hspace{1em}={\int _{0}^{1}}\hspace{0.1667em}d\lambda (v){\int _{0}^{1}}\hspace{0.1667em}d\lambda (u){\bar{k}^{\boldsymbol{g}}}(v,u),\end{aligned}\]
where ${\bar{k}^{\boldsymbol{g}}}$ is defined in (12). Since ${\bar{k}^{\boldsymbol{g}}}$ is the covariance function of a continuous Volterra process, a large deviation principle for ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-\mathbb{E}[{X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}])_{t\in [0,1]}})_{\varepsilon >0}}$ actually holds from Theorem 2 with the inverse speed ${\gamma _{\varepsilon }^{2}}$ and the good rate function given by (14). From Equation (10) and Remark 2 the same large deviation principle holds for the noncentered family ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$. □
3.3 Examples
In this section we consider some examples to which Theorem 4 applies; to this end, we verify that Assumptions 1, 2, 3 and 4 are fulfilled. Let X be a continuous, centered Volterra process with kernel K. Suppose ${g_{1}}(t)={\text{1}_{[0,T)}}(t)$ and ${g_{2}}(t)=\frac{T-t}{T}{\text{1}_{[0,T)}}(t)$, that is,
\[ {W_{T}^{\alpha ,\tilde{\alpha }}}={\int _{0}^{T}}{g_{1}}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}}={x_{1}}\]
and by the integration by parts formula,
\[ \frac{1}{T}{\int _{0}^{T}}{W_{u}^{\alpha ,\tilde{\alpha }}}\hspace{0.1667em}du={\int _{0}^{T}}{g_{2}}(u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}}={x_{2}}.\]
Then the matrix ${({C^{\boldsymbol{g}}})^{-1}}$ is given by
\[ {\big({C^{\boldsymbol{g}}}\big)^{-1}}=\frac{1}{\det ({C^{\boldsymbol{g}}})}\left(\begin{array}{r@{\hskip10.0pt}r}{c_{22}^{{g_{2}}{g_{2}}}}& -{c_{12}^{{g_{1}}{g_{2}}}}\\ {} -{c_{12}^{{g_{1}}{g_{2}}}}& {c_{11}^{{g_{1}}{g_{1}}}}\end{array}\right),\]
where
\[\begin{aligned}{}{c_{11}^{{g_{1}}{g_{1}}}}& =\big({\alpha ^{2}}+{\tilde{\alpha }^{2}}\big)T,\hspace{2em}{c_{12}^{{g_{1}}{g_{2}}}}=\big({\alpha ^{2}}+{\tilde{\alpha }^{2}}\big)\frac{T}{2},\\ {} {c_{22}^{{g_{2}}{g_{2}}}}& =\big({\alpha ^{2}}+{\tilde{\alpha }^{2}}\big)\frac{T}{3},\hspace{2em}\det \big({C^{\boldsymbol{g}}}\big)={\big({\alpha ^{2}}+{\tilde{\alpha }^{2}}\big)^{2}}\frac{{T^{2}}}{12}.\end{aligned}\]
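The entries above can be checked numerically. The following small script is an illustration only, not part of the paper; the values of T, α, α̃ are arbitrary. It approximates the entries ${c_{ij}}=({\alpha ^{2}}+{\tilde{\alpha }^{2}}){\int _{0}^{T}}{g_{i}}(t){g_{j}}(t)\hspace{0.1667em}dt$ of the covariance matrix of ${\boldsymbol{G}_{T}}({W^{\alpha ,\tilde{\alpha }}})$ by a midpoint rule and compares them with the closed forms displayed above.

```python
# Numerical check (illustration only; T, alpha, alphat are arbitrary) of the
# entries c_ij = (alpha^2 + alphat^2) * \int_0^T g_i(t) g_j(t) dt,
# approximated by a midpoint rule.
T, alpha, alphat = 2.0, 0.7, 1.3
s2 = alpha**2 + alphat**2
N = 100_000
dt = T / N
ts = [(i + 0.5) * dt for i in range(N)]
g1 = [1.0] * N                       # g1 = 1_[0,T)
g2 = [(T - u) / T for u in ts]       # g2(t) = (T - t)/T on [0,T)

def inner(f, g):
    return sum(x * y for x, y in zip(f, g)) * dt

c11 = s2 * inner(g1, g1)   # expected (alpha^2 + alphat^2) T
c12 = s2 * inner(g1, g2)   # expected (alpha^2 + alphat^2) T/2
c22 = s2 * inner(g2, g2)   # expected (alpha^2 + alphat^2) T/3
det = c11 * c22 - c12**2   # expected (alpha^2 + alphat^2)^2 T^2/12

assert abs(c11 - s2 * T) < 1e-6
assert abs(c12 - s2 * T / 2) < 1e-6
assert abs(c22 - s2 * T / 3) < 1e-6
assert abs(det - s2**2 * T**2 / 12) < 1e-6
```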
Example 1 (Fractional Brownian Motion).
Let X be the fractional Brownian motion with Hurst index $H>1/2$. The fractional Brownian motion with Hurst parameter $H\in (0,1)$ is the centered Gaussian process with covariance function
\[ k(t,s)=\frac{1}{2}\big({t^{2H}}+{s^{2H}}-|t-s{|^{2H}}\big).\]
The fractional Brownian motion is a Volterra process with kernel, for $s\le t$,
(15)
\[ K(t,s)={c_{H}}\Bigg[{\bigg(\frac{t}{s}(t-s)\bigg)^{H-1/2}}-\bigg(H-\frac{1}{2}\bigg){s^{1/2-H}}{\int _{s}^{t}}\hspace{-0.1667em}{u^{H-3/2}}{(u-s)^{H-1/2}}\hspace{0.1667em}du\Bigg],\]
where ${c_{H}}={(\frac{2H\hspace{0.1667em}\Gamma (3/2-H)}{\Gamma (H+1/2)\hspace{0.1667em}\Gamma (2-2H)})^{1/2}}$. Notice that when $H=1/2$ we have $K(t,s)={\textbf{1}_{[0,t]}}(s)$, and then the fractional Brownian motion reduces to the Wiener process.
First, let us prove that there exists a limit covariance and that it is regular enough. For $s\le t$, one has
\[ \frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\varepsilon ^{2H}}}=\operatorname{Cov}({X_{t}},{X_{s}}),\]
because of the homogeneity and self-similarity properties holding for the fractional Brownian motion, so that the limit in (4) trivially exists and Assumption 1 holds with $\bar{k}(t,s)=k(t,s)$ and ${\gamma _{\varepsilon }}={\varepsilon ^{H}}$. Now let us prove that Assumption 3 is fulfilled.
\[\begin{aligned}{}& \operatorname{Cov}\big({X_{T+\varepsilon t}}-{X_{T}},{W_{T}^{\alpha ,\tilde{\alpha }}}\big)\\ {} & =\alpha {\int _{0}^{T}}\big(K(T+\varepsilon t,u)-K(T,u)\big)\hspace{0.1667em}du\\ {} & =\alpha {c_{H}}{\int _{0}^{T}}\bigg({\bigg(\frac{T+\varepsilon t}{u}\bigg)^{H-1/2}}{(T+\varepsilon t-u)^{H-1/2}}-{\bigg(\frac{T}{u}\bigg)^{H-1/2}}{(T-u)^{H-1/2}}\bigg)\hspace{0.1667em}du\\ {} & \hspace{1em}-\alpha {c_{H}}(H-1/2){\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{\int _{T}^{T+\varepsilon t}}{(v-u)^{H-1/2}}{v^{H-3/2}}\hspace{0.1667em}dv\hspace{0.1667em}du.\end{aligned}\]
Thanks to the Lagrange theorem,
\[\begin{aligned}{}& \bigg({\bigg(\frac{T+\varepsilon t}{u}\bigg)^{H-1/2}}{(T+\varepsilon t-u)^{H-1/2}}-{\bigg(\frac{T}{u}\bigg)^{H-1/2}}{(T-u)^{H-1/2}}\bigg)\\ {} & =\frac{H-1/2}{{u^{H-1/2}}}\big[{(T+{\xi _{\varepsilon }})^{H-3/2}}{(T+{\xi _{\varepsilon }}-u)^{H-1/2}}+{(T+{\xi _{\varepsilon }})^{H-1/2}}{(T+{\xi _{\varepsilon }}-u)^{H-3/2}}\big]\varepsilon t,\end{aligned}\]
for ${\xi _{\varepsilon }}\in [0,\varepsilon t]$. Therefore from the Lebesgue theorem,
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\frac{1}{\varepsilon }{\int _{0}^{T}}\bigg({\bigg(\frac{T+\varepsilon t}{u}\bigg)^{H-1/2}}{(T+\varepsilon t-u)^{H-1/2}}-{\bigg(\frac{T}{u}\bigg)^{H-1/2}}{(T-u)^{H-1/2}}\bigg)\hspace{0.1667em}du\\ {} & =t\hspace{0.1667em}(H-1/2){\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}\big({T^{H-3/2}}{(T-u)^{H-1/2}}+{T^{H-1/2}}{(T-u)^{H-3/2}}\big)\hspace{0.1667em}du,\end{aligned}\]
uniformly in $t\in [0,1]$. Furthermore, in a similar way, we have,
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\frac{1}{\varepsilon }{\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{\int _{T}^{T+\varepsilon t}}{(v-u)^{H-1/2}}{v^{H-3/2}}dv\hspace{0.1667em}du\\ {} & \hspace{1em}=t\hspace{0.1667em}{\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{(T-u)^{H-1/2}}{T^{H-3/2}}\hspace{0.1667em}du.\end{aligned}\]
Therefore,
\[ {\bar{r}_{1}^{{g_{1}}}}(t)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{W_{T}^{\alpha ,\tilde{\alpha }}})}{{\varepsilon ^{H}}}=0,\]
uniformly in $t\in [0,1]$. Similar calculations show that
\[ {\bar{r}_{2}^{{g_{2}}}}(t)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})}{{\varepsilon ^{H}}}=0.\]
So, we have ${\bar{k}^{\boldsymbol{g}}}(t,s)=k(t,s)$ and therefore the limit covariance exists and is regular enough.
Now let us prove the exponential tightness of the family of processes. For $s<t$,
\[ \operatorname{Var}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}})={\varepsilon ^{2H}}{(t-s)^{2H}},\]
then Assumption 2 holds with $\tau =H$ and ${\gamma _{\varepsilon }}={\varepsilon ^{H}}$. For $s<t$, we have
\[\begin{aligned}{}& \operatorname{Cov}\big({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}}\big)\\ {} & =\alpha {\int _{0}^{T}}\big(K(T+\varepsilon t,u)-K(T+\varepsilon s,u)\big)\hspace{0.1667em}du\\ {} & =\alpha {c_{H}}{\int _{0}^{T}}\bigg({\bigg(\frac{T+\varepsilon t}{u}\bigg)^{H-1/2}}{(T+\varepsilon t-u)^{H-1/2}}-{\bigg(\frac{T+\varepsilon s}{u}\bigg)^{H-1/2}}{(T+\varepsilon s-u)^{H-1/2}}\bigg)\hspace{0.1667em}du\\ {} & \hspace{1em}-\alpha {c_{H}}(H-1/2){\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{\int _{T+\varepsilon s}^{T+\varepsilon t}}{(v-u)^{H-1/2}}{v^{H-3/2}}\hspace{0.1667em}dv\hspace{0.1667em}du.\end{aligned}\]
Thanks to the Lagrange theorem we can find $M>0$ such that
\[\begin{aligned}{}& {\int _{0}^{T}}\bigg({\bigg(\frac{T+\varepsilon t}{u}\bigg)^{H-1/2}}{(T+\varepsilon t-u)^{H-1/2}}-{\bigg(\frac{T+\varepsilon s}{u}\bigg)^{H-1/2}}{(T+\varepsilon s-u)^{H-1/2}}\bigg)\hspace{0.1667em}du\\ {} & \le \varepsilon (t-s)\hspace{-0.1667em}\bigg(H-\frac{1}{2}\bigg)\hspace{-0.1667em}\hspace{-0.1667em}{\int _{0}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\frac{1}{{u^{H-\frac{1}{2}}}}\big[{T^{H-\frac{3}{2}}}{(T+1-u)^{H-\frac{1}{2}}}\hspace{-0.1667em}\hspace{-0.1667em}+{(T+1)^{H-\frac{1}{2}}}{(T-u)^{H-\frac{3}{2}}}\hspace{-0.1667em}\big]\hspace{0.1667em}du\\ {} & \le \varepsilon M(t-s)\end{aligned}\]
and
\[\begin{aligned}{}& {\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{\int _{T+\varepsilon s}^{T+\varepsilon t}}{(v-u)^{H-1/2}}{v^{H-3/2}}\hspace{0.1667em}dv\hspace{0.1667em}du\\ {} & =(t-s)\hspace{0.1667em}{\int _{0}^{T}}\frac{1}{{u^{H-1/2}}}{(T+{\xi _{\varepsilon }}-u)^{H-1/2}}{(T+{\xi _{\varepsilon }})^{H-3/2}}\hspace{0.1667em}du\le \varepsilon M(t-s).\end{aligned}\]
Therefore, a fortiori,
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}})|}{{\varepsilon ^{H}}|t-s|}\le M.\]
Similar calculations show that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})|}{{\varepsilon ^{H}}|t-s|}\le M.\]
Thus, Assumption 4 is fulfilled with $\hat{\tau }=1$. Therefore the family ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle with the inverse speed ${\gamma _{\varepsilon }^{2}}={\varepsilon ^{2H}}$, the same as for the nonconditioned process. Note that the same result was obtained in [4] for the n-fold conditional fractional Brownian motion.
Example 2 (m-fold integrated Brownian motion).
For $m\ge 1$, let X be the m-fold integrated Brownian motion, i.e.
\[ {X_{t}}={\int _{0}^{t}}\hspace{0.1667em}du\Bigg({\int _{0}^{u}}\hspace{0.1667em}d{u_{m-1}}\cdots {\int _{0}^{{u_{2}}}}\hspace{0.1667em}d{u_{1}}{B_{{u_{1}}}}\Bigg).\]
It is a continuous Volterra process with kernel $K(t,u)=\frac{1}{m!}{(t-u)^{m}}$ and covariance function
\[ k(t,s)=\frac{1}{{(m!)^{2}}}{\int _{0}^{s\wedge t}}{(s-\xi )^{m}}{(t-\xi )^{m}}\hspace{0.1667em}d\xi .\]
First, let us prove that there exists a limit covariance and that it is regular enough. Assumption 1 is fulfilled. In fact, for $s\le t$, we have
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\varepsilon ^{2}}}\\ {} & =\underset{\varepsilon \to 0}{\lim }\frac{1}{{(m!)^{2}}}\frac{1}{{\varepsilon ^{2}}}{\int _{T}^{T+\varepsilon s}}{(T+\varepsilon t-u)^{m}}{(T+\varepsilon s-u)^{m}}\hspace{0.1667em}du\\ {} & \hspace{1em}+\underset{\varepsilon \to 0}{\lim }\frac{1}{{(m!)^{2}}{\varepsilon ^{2}}}{\int _{0}^{T}}\big({(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon t\hspace{0.1667em}-\hspace{0.1667em}u)^{m}}\hspace{0.1667em}-\hspace{0.1667em}{(T\hspace{0.1667em}-\hspace{0.1667em}u)^{m}}\big)\big({(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon s\hspace{0.1667em}-\hspace{0.1667em}u)^{m}}-{(T-u)^{m}}\big)\hspace{0.1667em}du.\end{aligned}\]
It is straightforward to show that
\[ \bar{k}(t,s)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\varepsilon ^{2}}}=\frac{1}{{(m!)^{2}}}\frac{{m^{2}}}{2m-1}{T^{2m-1}}st,\]
uniformly in $(t,s)\in [0,1]\times [0,1]$. Furthermore,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\bar{r}_{1}^{{g_{1}}}}(t)& \displaystyle =& \displaystyle \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{W_{T}^{\alpha ,\tilde{\alpha }}})}{\varepsilon }\\ {} & \displaystyle =& \displaystyle \underset{\varepsilon \to 0}{\lim }\frac{\alpha }{m!}\frac{1}{\varepsilon }{\int _{0}^{T}}\big({(T+\varepsilon t-u)^{m}}-{(T-u)^{m}}\big)\hspace{0.1667em}du=\frac{\alpha }{m!}{T^{m}}\hspace{0.1667em}t,\end{array}\]
and
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\bar{r}_{2}^{{g_{2}}}}(t)& \displaystyle =& \displaystyle \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})}{\varepsilon }\\ {} & \displaystyle =& \displaystyle \underset{\varepsilon \to 0}{\lim }\frac{\alpha }{m!\hspace{0.1667em}T}\frac{1}{\varepsilon }{\int _{0}^{T}}\big({(T+\varepsilon t-u)^{m}}-{(T-u)^{m}}\big)(T-u)\hspace{0.1667em}du\\ {} & \displaystyle =& \displaystyle \frac{\alpha }{m!}\frac{m}{m+1}{T^{m}}\hspace{0.1667em}t,\end{array}\]
uniformly in $t\in [0,1]$. Therefore Assumption 3 is also fulfilled.
Thus, we have ${\bar{k}^{\boldsymbol{g}}}(t,s)=a\hspace{0.1667em}st$, where
\[ a=\frac{1}{{(m!)^{2}}}\bigg(\frac{{m^{2}}}{2m-1}{T^{2m-1}}-{\alpha ^{2}}{T^{2m}}\bigg(1,\frac{m}{m+1}\bigg){\big({C^{\boldsymbol{g}}}\big)^{-1}}{\bigg(1,\frac{m}{m+1}\bigg)^{\intercal }}\bigg).\]
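The limit covariance computed above can be checked numerically; the following script is an illustration only, not part of the formal argument, and the values of m, T, t, s, ε are arbitrary. It approximates $\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})/{\varepsilon ^{2}}$ by a midpoint rule and compares it with the limit $\frac{1}{{(m!)^{2}}}\frac{{m^{2}}}{2m-1}{T^{2m-1}}st$.

```python
# Numerical illustration (not part of the formal argument; m, T, t, s, eps are
# arbitrary): for the m-fold integrated Brownian motion, K(t,u) = (t-u)^m / m!,
# the rescaled increment covariance approaches m^2 T^(2m-1) s t / ((m!)^2 (2m-1)).
from math import factorial

def K(t, u, m):
    # Volterra kernel of the m-fold integrated Brownian motion
    return (t - u) ** m / factorial(m) if u < t else 0.0

m, T, t, s, eps = 2, 1.0, 0.7, 0.4, 1e-3
a, b = T + eps * t, T + eps * s

# midpoint rule for \int_0^b (K(a,u) - K(T,u)) (K(b,u) - K(T,u)) du
N = 200_000
du = b / N
cov = sum((K(a, (i + 0.5) * du, m) - K(T, (i + 0.5) * du, m))
          * (K(b, (i + 0.5) * du, m) - K(T, (i + 0.5) * du, m))
          for i in range(N)) * du

limit = m**2 * T ** (2 * m - 1) * s * t / (factorial(m) ** 2 * (2 * m - 1))
assert abs(cov / eps**2 - limit) / limit < 0.01
```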
Note that ${\bar{k}^{\boldsymbol{g}}}$ is regular enough.
Now let us prove the exponential tightness of the family of processes. For $s<t$, there exists a constant $M>0$ such that
\[\begin{aligned}{}& \operatorname{Var}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}})\\ {} & =\frac{1}{{(m!)^{2}}}\Bigg({\int _{T+\varepsilon s}^{T+\varepsilon t}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon t\hspace{0.1667em}-\hspace{0.1667em}u)^{2m}}\hspace{0.1667em}du\hspace{0.1667em}+\hspace{0.1667em}{\int _{0}^{T+\varepsilon s}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{\big({(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon t\hspace{0.1667em}-\hspace{0.1667em}u)^{m}}\hspace{0.1667em}-\hspace{0.1667em}{(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon s\hspace{0.1667em}-\hspace{0.1667em}u)^{m}}\big)^{2}}\hspace{0.1667em}du\Bigg)\\ {} & \le M{\varepsilon ^{2}}{(t-s)^{2}}.\end{aligned}\]
Then Assumption 2 holds with $\tau =1$ and ${\gamma _{\varepsilon }}=\varepsilon $. For $s<t$,
\[\begin{aligned}{}\big|\operatorname{Cov}\big({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}}\big)\big|& =\frac{|\alpha |}{m!}{\int _{0}^{T}}\big({(T+\varepsilon t-u)^{m}}-{(T+\varepsilon s-u)^{m}}\big)\hspace{0.1667em}du\\ {} & =\frac{|\alpha |}{m!}{\sum \limits_{k=0}^{m-1}}\left(\genfrac{}{}{0.0pt}{}{m}{k}\right)\frac{{T^{k+1}}}{k+1}{\varepsilon ^{m-k}}\big({t^{m-k}}-{s^{m-k}}\big).\end{aligned}\]
Then we have
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}})|}{\varepsilon |t-s|}\le M.\]
Similar calculations show that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})|}{\varepsilon |t-s|}\le M.\]
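The bounds above rest on an exact binomial expansion of the kernel-integral increment, ${\int _{0}^{T}}({(T+\varepsilon t-u)^{m}}-{(T+\varepsilon s-u)^{m}})\hspace{0.1667em}du={\textstyle\sum _{k=0}^{m-1}}\binom{m}{k}\frac{{T^{k+1}}}{k+1}{\varepsilon ^{m-k}}({t^{m-k}}-{s^{m-k}})$, each term of which is $O(\varepsilon (t-s))$ for $t,s\in [0,1]$. A quick numerical sanity check (illustration only; the parameter values are arbitrary):

```python
# Exact identity check (illustration only): both sides of the binomial
# expansion of \int_0^T ((T+et-u)^m - (T+es-u)^m) du agree.
from math import comb

m, T, eps, t, s = 3, 2.0, 0.1, 0.9, 0.4
# closed form of the integral
lhs = ((T + eps * t) ** (m + 1) - (eps * t) ** (m + 1)
       - (T + eps * s) ** (m + 1) + (eps * s) ** (m + 1)) / (m + 1)
# binomial expansion
rhs = sum(comb(m, k) * T ** (k + 1) / (k + 1) * eps ** (m - k)
          * (t ** (m - k) - s ** (m - k)) for k in range(m))
assert abs(lhs - rhs) < 1e-12
```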
Thus, Assumption 4 is fulfilled with $\hat{\tau }=1$. Therefore the family ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle with the inverse speed ${\gamma _{\varepsilon }^{2}}={\varepsilon ^{2}}$.
Example 3 (Integrated Volterra Process).
Let Z be a Volterra process with kernel K satisfying condition (5) for some $A>0$. Let X be the integrated process, i.e.
\[ {X_{t}}={\int _{0}^{t}}{Z_{u}}\hspace{0.1667em}du.\]
The process X is a continuous Volterra process with kernel
\[ h(t,s)={\int _{s}^{t}}K(u,s)\hspace{0.1667em}du,\hspace{1em}\text{i.e.}\hspace{2.5pt}{X_{t}}={\int _{0}^{t}}h(t,s)\hspace{0.1667em}d{B_{s}}.\]
First, let us prove that there exists a limit covariance and that it is regular enough. Assumption 1 is fulfilled, in fact, for $s\le t$, we have
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\varepsilon ^{2}}}\\ {} & & \displaystyle \hspace{1em}=\underset{\varepsilon \to 0}{\lim }\frac{{\textstyle\textstyle\int _{T}^{T+\varepsilon s}}h(T+\varepsilon t,u)h(T+\varepsilon s,u)\hspace{0.1667em}du}{{\varepsilon ^{2}}}\\ {} & & \displaystyle \hspace{2em}+\underset{\varepsilon \to 0}{\lim }\frac{{\textstyle\textstyle\int _{0}^{T}}(h(T+\varepsilon t,u)-h(T,u))(h(T+\varepsilon s,u)-h(T,u))\hspace{0.1667em}du}{{\varepsilon ^{2}}}.\end{array}\]
Now, one has
\[\begin{aligned}{}& {\int _{T}^{T+\varepsilon s}}h(T+\varepsilon t,u)h(T+\varepsilon s,u)\hspace{0.1667em}du\\ {} & \hspace{1em}={\varepsilon ^{3}}{\int _{0}^{s}}\Bigg({\int _{u}^{t}}K(T+\varepsilon x,T+\varepsilon u)\hspace{0.1667em}dx{\int _{u}^{s}}K(T+\varepsilon x,T+\varepsilon u)\hspace{0.1667em}dx\Bigg)\hspace{0.1667em}du\end{aligned}\]
and
\[\begin{aligned}{}& {\int _{0}^{T}}\big(h(T+\varepsilon t,u)-h(T,u)\big)\big(h(T+\varepsilon s,u)-h(T,u)\big)\hspace{0.1667em}du\\ {} & \hspace{1em}={\int _{0}^{T}}\Bigg({\int _{T}^{T+\varepsilon s}}K(v,u)\hspace{0.1667em}dv{\int _{T}^{T+\varepsilon t}}K(v,u)\hspace{0.1667em}dv\Bigg)\hspace{0.1667em}du.\end{aligned}\]
Therefore
\[ \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})}{{\varepsilon ^{2}}}=st\hspace{0.1667em}{\int _{0}^{T}}{K^{2}}(T,u)\hspace{0.1667em}du,\]
uniformly in $(t,s)\in [0,1]\times [0,1]$. Furthermore, with similar calculations we have
\[ {\bar{r}_{1}^{{g_{1}}}}(t)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{W_{T}^{\alpha ,\tilde{\alpha }}})}{\varepsilon }=\alpha \hspace{0.1667em}t{\int _{0}^{T}}K(T,u)\hspace{0.1667em}du,\]
and
\[ {\bar{r}_{2}^{{g_{2}}}}(t)=\underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})}{\varepsilon }=\alpha \hspace{0.1667em}\frac{t}{T}{\int _{0}^{T}}K(T,u)(T-u)\hspace{0.1667em}du,\]
uniformly in $t\in [0,1]$. Therefore Assumption 3 is also fulfilled.
So, we have ${\bar{k}^{\boldsymbol{g}}}(t,s)=a\hspace{0.1667em}st$, where
\[ a={\int _{0}^{T}}{K^{2}}(T,u)\hspace{0.1667em}du-{\alpha ^{2}}{A^{\intercal }}{\big({C^{\boldsymbol{g}}}\big)^{-1}}A,\]
and ${A^{\intercal }}=({\textstyle\int _{0}^{T}}K(T,u)\hspace{0.1667em}du,\frac{1}{T}{\textstyle\int _{0}^{T}}K(T,u)(T-u)\hspace{0.1667em}du)$. Note that ${\bar{k}^{\boldsymbol{g}}}$ is regular enough. Let us now prove the exponential tightness. We have, for $s<t$,
\[\begin{aligned}{}& \operatorname{Var}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}})\\ {} & \hspace{1em}={\int _{T+\varepsilon s}^{T+\varepsilon t}}h{(T+\varepsilon t,u)^{2}}\hspace{0.1667em}du+{\int _{0}^{T+\varepsilon s}}{\big(h(T+\varepsilon t,u)-h(T+\varepsilon s,u)\big)^{2}}\hspace{0.1667em}du.\end{aligned}\]
Now, recalling that K is a square integrable function, there exists a constant $M>0$, such that
\[\begin{aligned}{}{\int _{T+\varepsilon s}^{T+\varepsilon t}}h{(T+\varepsilon t,u)^{2}}\hspace{0.1667em}du& ={\varepsilon ^{3}}{\int _{s}^{t}}{\Bigg({\int _{u}^{t}}K(T+\varepsilon x,T+\varepsilon u)\hspace{0.1667em}dx\Bigg)^{2}}\hspace{0.1667em}du\\ {} & \le {\varepsilon ^{3}}M{\int _{s}^{t}}(t-u)\hspace{0.1667em}du\le M{\varepsilon ^{3}}{(t-s)^{2}}\end{aligned}\]
and
\[\begin{aligned}{}{\int _{0}^{T+\varepsilon s}}{\big(h(T+\varepsilon t,u)-h(T+\varepsilon s,u)\big)^{2}}\hspace{0.1667em}du& ={\varepsilon ^{2}}{\int _{0}^{T+\varepsilon s}}{\Bigg({\int _{s}^{t}}K(T+\varepsilon v,u)\hspace{0.1667em}dv\Bigg)^{2}}\hspace{0.1667em}du\\ {} & \le M{\varepsilon ^{2}}{(t-s)^{2}},\end{aligned}\]
therefore Assumption 2 holds with $\tau =1$ and ${\gamma _{\varepsilon }}=\varepsilon $.
With similar computations we can prove that also Assumption 4 is fulfilled with $\hat{\tau }=1$ and ${\gamma _{\varepsilon }}=\varepsilon $. For $s<t$, we have
\[\begin{aligned}{}\big|\operatorname{Cov}\big({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}}\big)\big|& =|\alpha |\bigg|{\int _{0}^{T}}\big(h(T+\varepsilon t,u)-h(T+\varepsilon s,u)\big)\hspace{0.1667em}du\bigg|\\ {} & =|\alpha |\bigg|{\int _{0}^{T}}{\int _{T+\varepsilon s}^{T+\varepsilon t}}K(v,u)\hspace{0.1667em}dv\hspace{0.1667em}du\bigg|\le M\varepsilon (t-s).\end{aligned}\]
Therefore
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{W_{T}^{\alpha ,\tilde{\alpha }}})|}{\varepsilon |t-s|}\le M.\]
Similar calculations show that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T+\varepsilon s}},{\textstyle\textstyle\int _{0}^{T}}\frac{T-u}{T}\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})|}{\varepsilon |t-s|}\le M.\]
Therefore the family ${({({X_{T+\varepsilon t}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}}-{X_{T}^{\boldsymbol{g}\mathbf{;}\boldsymbol{x}}})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle with the inverse speed ${\gamma _{\varepsilon }^{2}}={\varepsilon ^{2}}$.
4 Conditioning to a path
4.1 Conditional law
Let $(\Omega ,\mathcal{F},{({\mathcal{F}_{t}})_{t\ge 0}},\mathbb{P})$ be a filtered probability space. On this space we consider a Brownian motion $B={({B_{t}})_{t\ge 0}}$, a continuous real Volterra process $X={({X_{t}})_{t\ge 0}}$ and another Brownian motion $\tilde{B}={({\tilde{B}_{t}})_{t\ge 0}}$ independent of B. Fix $\alpha ,\tilde{\alpha }\in \mathbb{R}$ and define ${W^{\alpha ,\tilde{\alpha }}}=\alpha B+\tilde{\alpha }\tilde{B}$.
We are interested in the regular conditional law of the process X given the σ-algebra ${\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}}$, where ${({\mathcal{F}_{t}^{\alpha ,\tilde{\alpha }}})_{t\ge 0}}$ is the filtration generated by the mixed Brownian motion ${W^{\alpha ,\tilde{\alpha }}}$, i.e. we want to condition the process to the past of the mixed Brownian motion up to a fixed time $T>0$. To do this, consider the conditional law on $C[0,+\infty )$ endowed with the topology induced by the sup-norm on compact sets, $\mathbb{P}(X\in \cdot \mid {\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}})$. There exists a regular version of such conditional probability (see [11] and [14]), namely a version such that $\Gamma \mapsto \mathbb{P}(X\in \Gamma \mid {\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}})$ is almost surely a Gaussian probability law.
The following theorem, Theorem 2.1 in [16], gives the mean and the covariance function of the conditional Gaussian law.
Theorem 5.
For $T>0$, the regular conditional law of $X\mid {\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}}$ is a Gaussian measure with the random mean
\[ {\Psi _{t}}\big({W^{\alpha ,\tilde{\alpha }}}\big)=\mathbb{E}\big[{X_{t}}\big|{\mathcal{F}_{T}^{\alpha ,\tilde{\alpha }}}\big]=\frac{\alpha }{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{T}}K(t,u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}}\]
and the deterministic covariance,
(16)
\[\begin{aligned}{}\Upsilon (t,s)& ={\int _{0}^{t\wedge s}}{\bigg(1-\frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\textit{1}_{[0,T]}}(v)\bigg)^{2}}K(t,v)\hspace{0.1667em}K(s,v)\hspace{0.1667em}dv\\ {} & \hspace{1em}+\frac{{\alpha ^{2}}\hspace{0.1667em}{\tilde{\alpha }^{2}}}{{({\alpha ^{2}}+{\tilde{\alpha }^{2}})^{2}}}{\int _{0}^{T}}K(t,v)\hspace{0.1667em}K(s,v)\hspace{0.1667em}dv.\end{aligned}\]
Remark 11.
Observe that the mean process ${(\frac{\alpha }{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\textstyle\int _{0}^{T}}K(t,u)\hspace{0.1667em}d{W_{u}^{\alpha ,\tilde{\alpha }}})_{t\ge 0}}$ is a continuous process. Therefore for almost every continuous function ψ defined on $[0,T]$,
(17)
\[ {m^{\psi }}(t)=\frac{\alpha }{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{T}}K(t,u)\hspace{0.1667em}d\psi (u)\]
defines a continuous function ${m^{\psi }}:[0,+\infty )\longrightarrow \mathbb{R}$. Thus, we can consider the continuous Gaussian process ${({X_{t}^{\psi }})_{t\ge 0}}$ with mean function ${m^{\psi }}$ and covariance function Υ.
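A small algebraic sanity check (an illustration only, with arbitrary values of α, α̃) of the weights in (16): for $v\le T$ the two terms of (16) combine into a single factor, since $(1-\frac{{a^{2}}}{{a^{2}}+{b^{2}}})^{2}+\frac{{a^{2}}{b^{2}}}{{({a^{2}}+{b^{2}})^{2}}}=\frac{{b^{2}}}{{a^{2}}+{b^{2}}}$, i.e. conditioning multiplies the variance contribution of $[0,T]$ by $\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}$; this is the identity behind formula (19) below.

```python
# Sanity check (illustration only) of the weight identity behind (16) and (19):
# (1 - a^2/(a^2+b^2))^2 + a^2 b^2/(a^2+b^2)^2 == b^2/(a^2+b^2)
for a, b in [(0.7, 1.3), (2.0, 0.5), (1.0, 1.0)]:   # arbitrary (alpha, alphat)
    lhs = (1 - a**2 / (a**2 + b**2)) ** 2 + a**2 * b**2 / (a**2 + b**2) ** 2
    rhs = b**2 / (a**2 + b**2)
    assert abs(lhs - rhs) < 1e-12
```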
Remark 12.
Let us note that, as in the previous section, the covariance function of the conditioned process depends on the time T but not on the function ψ.
Remark 13.
For $s\wedge t\ge T$ we have
(19)
\[ \Upsilon (t,s)=\frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{T}^{t\wedge s}}K(t,v)\hspace{0.1667em}K(s,v)\hspace{0.1667em}dv+\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}\hspace{0.1667em}k(t,s).\]
For $\tilde{\alpha }=0$, i.e. ${\mathcal{F}_{t}^{\alpha ,0}}=\sigma \{{X_{u}}:u\le t\}$ (for details about the filtrations generated by X and B, see, for example, [18]), we have the same conditioned variance as in [9].
4.2 Large deviations
Let ${\gamma _{\varepsilon }}>0$ be an infinitesimal function, i.e. ${\gamma _{\varepsilon }}\to 0$ as $\varepsilon \to 0$. In this section ${({X_{t}})_{t\ge 0}}$ is a continuous Volterra process as in (1).
Now, in order to achieve a large deviation principle for the conditioned process ${X^{\psi }}$ in the near future after T, we have to investigate the behavior of the functions Υ and ${m^{\psi }}$ (defined in (16) and (17), respectively) on a small time interval of length ε.
For $s\le t$, taking into account equation (19), simple computations show that
(20)
\[\begin{aligned}{}& \operatorname{Cov}\big({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }},{X_{T+\varepsilon s}^{\psi }}-{X_{T}^{\psi }}\big)\\ {} & \hspace{1em}=\varepsilon \frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{t\wedge s}}K(T+\varepsilon t,T+\varepsilon u)K(T+\varepsilon s,T+\varepsilon u)\hspace{0.1667em}du\\ {} & \hspace{2em}+\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}\big(k(T+\varepsilon t,T+\varepsilon s)-k(T+\varepsilon t,T)-k(T,T+\varepsilon s)+k(T,T)\big).\end{aligned}\]
Now we give conditions on the kernel K of the original Volterra process in order to guarantee that the hypotheses of Theorem 2 hold for the conditioned process. The next assumption (Assumption 5) implies the existence of a limit covariance.
Assumption 5.
For any $T>0$ there exists a square integrable function $\bar{K}$ (possibly 0) such that
\[ \underset{\varepsilon \to 0}{\lim }\sqrt{\varepsilon }\hspace{0.1667em}\frac{K(T+\varepsilon t,T+\varepsilon s)}{{\gamma _{\varepsilon }}\hspace{0.1667em}}=\bar{K}(t,s)\]
uniformly in $(t,s)\in [0,1]\times [0,1]$.
Remark 14.
Notice that we can choose ${\gamma _{\varepsilon }}$ so that ${\lim \nolimits_{\varepsilon \to 0}}{\gamma _{\varepsilon }}{\varepsilon ^{-\frac{1}{2}}}\in \{0,1,+\infty \}$.
The next assumption (Assumption 6) implies the exponential tightness of the family of processes.
Assumption 6.
For any $T>0$ there exist $M>0$ and $\hat{\tau }>0$ such that, for every $\varepsilon >0$,
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\varepsilon {\textstyle\textstyle\int _{0}^{t}}{(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u))^{2}}\hspace{0.1667em}du}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\hat{\tau }}}}\le M.\]
Remark 15.
Note that
\[\begin{aligned}{}& {\int _{0}^{t}}{\big(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u)\big)^{2}}\hspace{0.1667em}du\\ {} & ={\int _{s}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}K{(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon t,T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon u)^{2}}\hspace{0.1667em}du\hspace{0.1667em}+\hspace{0.1667em}{\int _{0}^{s}}{\big(K(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon t,T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon u)\hspace{0.1667em}-\hspace{0.1667em}K(T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon s,T\hspace{0.1667em}+\hspace{0.1667em}\varepsilon u)\big)^{2}}\hspace{0.1667em}du,\end{aligned}\]
therefore in order to prove that Assumption 6 is fulfilled we can prove that there exists $c>0$ such that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\varepsilon {\textstyle\textstyle\int _{s}^{t}}K{(T+\varepsilon t,T+\varepsilon u)^{2}}\hspace{0.1667em}du}{{\gamma _{\varepsilon }^{2}}|t-s{|^{2\hat{\tau }}}}\le c,\\ {} & & \displaystyle \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\varepsilon {\textstyle\textstyle\int _{0}^{s}}{(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u))^{2}}\hspace{0.1667em}du}{{\gamma _{\varepsilon }^{2}}|t-s{|^{2\hat{\tau }}}}\le c.\end{array}\]
Proposition 4.
Suppose that Assumptions 1 and 5 hold. Then, uniformly in $(t,s)\in [0,1]\times [0,1]$,
\[ \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Cov}({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }},{X_{T+\varepsilon s}^{\psi }}-{X_{T}^{\psi }})}{{\gamma _{\varepsilon }^{2}}}=\bar{\Upsilon }(t,s),\]
where
(21)
\[ \bar{\Upsilon }(t,s)=\frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{t\wedge s}}\bar{K}(t,u)\bar{K}(s,u)\hspace{0.1667em}du+\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}\hspace{0.1667em}\bar{k}(t,s).\]
Proposition 5.
Suppose that Assumptions 2 and 6 hold. Then the family of processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}-\mathbb{E}[{X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}])_{t\in [0,1]}})_{\varepsilon >0}}$ is exponentially tight at the inverse speed ${\gamma _{\varepsilon }^{2}}$.
Proof.
Since ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}-\mathbb{E}[{X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}])_{t\in [0,1]}})_{\varepsilon >0}}$ is a family of centered processes, it is enough to prove that (3) is satisfied with an appropriate speed function.
The covariance of this process is given by (20). Therefore
\[\begin{aligned}{}& \operatorname{Var}\big({X_{T+\varepsilon t}^{\psi }}-{X_{T+\varepsilon s}^{\psi }}\big)\\ {} & \hspace{1em}=\varepsilon \frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{t}}{\big(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u)\big)^{2}}\hspace{0.1667em}du\\ {} & \hspace{2em}+\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}\big(k(T+\varepsilon t,T+\varepsilon t)-2k(T+\varepsilon t,T+\varepsilon s)+k(T+\varepsilon s,T+\varepsilon s)\big).\end{aligned}\]
From Assumption 2 we already know that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{|k(T+\varepsilon t,T+\varepsilon t)-2k(T+\varepsilon t,T+\varepsilon s)+k(T+\varepsilon s,T+\varepsilon s)|}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\tau }}}\le M.\]
Furthermore, Assumption 6 implies that
\[ \underset{s,t\in [0,1],s\ne t}{\sup }\frac{\varepsilon {\textstyle\textstyle\int _{0}^{t}}{(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u))^{2}}\hspace{0.1667em}du}{{\gamma _{\varepsilon }^{2}}\hspace{0.1667em}|t-s{|^{2\hat{\tau }}}}\le M.\]
Therefore condition (3) holds with the inverse speed ${\eta _{\varepsilon }}={\gamma _{\varepsilon }^{2}}$ and $\beta =\tau \wedge \hat{\tau }$. □
We are ready to state a large deviation principle for the conditioned Volterra process.
Theorem 6.
Suppose Assumptions 1, 2, 5 and 6 are fulfilled. If the (existing) covariance function $\bar{\Upsilon }$ defined in Proposition 4 is regular enough, then the family of processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$ satisfies a large deviation principle on $C[0,1]$ with the inverse speed ${\gamma _{\varepsilon }^{2}}$ and the good rate function
(22)
\[ I(h)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{1}{2}\| h{\| _{\bar{\mathcal{H}}}^{2}},\hspace{1em}& h\in \bar{\mathcal{H}},\\ {} +\infty ,\hspace{1em}& \textit{otherwise},\end{array}\right.\]
where $\bar{\mathcal{H}}$ and $\| .{\| _{\bar{\mathcal{H}}}}$, respectively, denote the reproducing kernel Hilbert space and the related norm associated to the covariance function $\bar{\Upsilon }$ given by (21).
Proof.
Consider the family of centered processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}-\mathbb{E}[{X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}])_{t\in [0,1]}})_{\varepsilon >0}}$. Thanks to Proposition 5 this family of processes is exponentially tight at the inverse speed ${\gamma _{\varepsilon }^{2}}$. Thanks to Proposition 4, for any $\lambda \in \mathcal{M}[0,1]$, one has
\[ \underset{\varepsilon \to 0}{\lim }\frac{\operatorname{Var}(\langle \lambda ,{X_{T+\varepsilon \cdot }^{\psi }}-{X_{T}^{\psi }}\rangle )}{{\gamma _{\varepsilon }^{2}}}={\int _{0}^{1}}{\int _{0}^{1}}\bar{\Upsilon }(t,s)\hspace{0.1667em}d\lambda (t)\hspace{0.1667em}d\lambda (s),\]
where $\bar{\Upsilon }$ is defined in (21). Since $\bar{\Upsilon }$ is the covariance function of a continuous Volterra process, a large deviation principle for ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}-\mathbb{E}[{X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }}])_{t\in [0,1]}})_{\varepsilon >0}}$ actually holds from Theorem 2 with the inverse speed ${\gamma _{\varepsilon }^{2}}$ and the good rate function given by (22). From Equation (18) and Remark 2 the same large deviation principle holds for the noncentered family ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$. □
4.3 Examples
In this section we consider some examples to which Theorem 6 applies. Therefore we want to verify that Assumptions 1, 2, 5 and 6 are fulfilled. Let X be a continuous, centered Volterra process with kernel K.
Example 4 (Fractional Brownian Motion).
Consider a fractional Brownian motion with $H>1/2$ as in Example 1. We have already proved that Assumptions 1 and 2 are fulfilled with $\tau =H$ and ${\gamma _{\varepsilon }}={\varepsilon ^{H}}$.
We want to show that Assumptions 5 and 6 are fulfilled with $\hat{\tau }=1$, ${\gamma _{\varepsilon }}={\varepsilon ^{H}}$. From Example 4.17 in [9], we have that
\[ \underset{\varepsilon \to 0}{\lim }\sqrt{\varepsilon }\hspace{0.1667em}\frac{K(T+\varepsilon t,T+\varepsilon s)}{{\varepsilon ^{H}}}={c_{H}}{(t-s)^{H-\frac{1}{2}}}\]
uniformly for $t,s\in [0,1]$. Therefore
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\frac{1}{{\varepsilon ^{2H}}}{\int _{0}^{t\wedge s}}K(T+\varepsilon t,T+\varepsilon u)K(T+\varepsilon s,T+\varepsilon u)\hspace{0.1667em}du\\ {} & \hspace{1em}={c_{H}^{2}}{\int _{0}^{t\wedge s}}{(t-u)^{H-\frac{1}{2}}}{(s-u)^{H-\frac{1}{2}}}\hspace{0.1667em}du\end{aligned}\]
uniformly for $(t,s)\in [0,1]\times [0,1]$. So, we have that Assumption 5 is fulfilled with
\[ \bar{\Upsilon }(t,s)=\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}k(t,s)+\frac{{\alpha ^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{c_{H}^{2}}{\int _{0}^{t\wedge s}}{(t-u)^{H-\frac{1}{2}}}{(s-u)^{H-\frac{1}{2}}}\hspace{0.1667em}du.\]
Note that $\bar{\Upsilon }$ is regular enough. Let us now prove the exponential tightness. Since ${(a+b)^{2}}\le 2({a^{2}}+{b^{2}})$ for $a,b\in \mathbb{R}$, from Equation (15), there exists a constant $c>0$ such that, for $s<t$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle K{(T+\varepsilon t,T+\varepsilon u)^{2}}\\ {} & \displaystyle \le & \displaystyle c\Bigg({\varepsilon ^{2H-1}}{(t-u)^{2H-1}}+{\Bigg({\int _{T+\varepsilon u}^{T+\varepsilon t}}{\big(v-(T+\varepsilon u)\big)^{H-\frac{1}{2}}}\hspace{0.1667em}dv\Bigg)^{2}}\Bigg)\\ {} & \displaystyle \le & \displaystyle c\big({\varepsilon ^{2H-1}}{(t-u)^{2H-1}}+{\varepsilon ^{2H+1}}{(t-u)^{2H+1}}\big).\end{array}\]
Thus,
\[ \varepsilon {\int _{s}^{t}}K{(T+\varepsilon t,T+\varepsilon u)^{2}}\hspace{0.1667em}du\le c\big({\varepsilon ^{2H}}{(t-s)^{2H}}+{\varepsilon ^{2H+2}}{(t-s)^{2H+2}}\big).\]
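Spelling out the integration behind this bound: for $\beta >-1$ one has ${\int _{s}^{t}}{(t-u)^{\beta }}\hspace{0.1667em}du={(t-s)^{\beta +1}}/(\beta +1)$, so the two terms of the previous estimate integrate to
\[ \varepsilon \hspace{0.1667em}c\hspace{0.1667em}{\varepsilon ^{2H-1}}\frac{{(t-s)^{2H}}}{2H}+\varepsilon \hspace{0.1667em}c\hspace{0.1667em}{\varepsilon ^{2H+1}}\frac{{(t-s)^{2H+2}}}{2H+2},\]
and the numerical factors $1/(2H)$ and $1/(2H+2)$ are absorbed into the constant c.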
Furthermore,
\[ {\big(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u)\big)^{2}}={\big(A(u)-B(u)\big)^{2}}\le 2\big({A^{2}}(u)+{B^{2}}(u)\big),\]
where
\[\begin{aligned}{}& A(u)={c_{H}}\bigg[{\bigg(\frac{T+\varepsilon t}{T+\varepsilon u}\bigg)^{H-\frac{1}{2}}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{(t-u)^{H-\frac{1}{2}}}{\varepsilon ^{H-\frac{1}{2}}}-{\bigg(\frac{T+\varepsilon s}{T+\varepsilon u}\bigg)^{H-\frac{1}{2}}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{(s-u)^{H-\frac{1}{2}}}{\varepsilon ^{H-\frac{1}{2}}}\bigg]\\ {} & B(u)={c_{H}}\bigg(H-\frac{1}{2}\bigg)\frac{1}{{(T+\varepsilon u)^{H-\frac{1}{2}}}}{\int _{T+\varepsilon s}^{T+\varepsilon t}}{v^{H-\frac{3}{2}}}{\big(v-(T+\varepsilon u)\big)^{H-\frac{1}{2}}}\hspace{0.1667em}dv.\end{aligned}\]
Now, thanks to the Lagrange mean value theorem, there exists $x\in [s,t]$ such that
\[\begin{aligned}{}A(u)& \le c\hspace{0.1667em}\big({(T+\varepsilon t)^{H-\frac{1}{2}}}{(t-u)^{H-\frac{1}{2}}}{\varepsilon ^{H-\frac{1}{2}}}-{(T+\varepsilon s)^{H-\frac{1}{2}}}{(s-u)^{H-\frac{1}{2}}}{\varepsilon ^{H-\frac{1}{2}}}\big)\\ {} & \le c\big({(T+\varepsilon x)^{H-\frac{3}{2}}}{(x-u)^{H-\frac{1}{2}}}+{(T+\varepsilon x)^{H-\frac{1}{2}}}{(x-u)^{H-\frac{3}{2}}}\big){\varepsilon ^{H-\frac{1}{2}}}(t-s).\end{aligned}\]
The estimation of B is easily done. There exists $x\in [s,t]$ such that
\[\begin{aligned}{}B(u)\le c{\int _{T+\varepsilon s}^{T+\varepsilon t}}{\big(v-(T+\varepsilon u)\big)^{H-\frac{1}{2}}}\hspace{0.1667em}dv& =c{\varepsilon ^{H+\frac{1}{2}}}{\int _{s}^{t}}{(v-u)^{H-\frac{1}{2}}}\hspace{0.1667em}dv\\ {} & =c{\varepsilon ^{H+\frac{1}{2}}}{(x-u)^{H-\frac{1}{2}}}(t-s),\end{aligned}\]
and then
\[ \varepsilon {\int _{0}^{s}}{A^{2}}(u)\hspace{0.1667em}du\le c{\varepsilon ^{2H}}{(t-s)^{2}},\hspace{2em}\varepsilon {\int _{0}^{s}}{B^{2}}(u)\hspace{0.1667em}du\le c{\varepsilon ^{2H+2}}{(t-s)^{2}}.\]
From Remark 15 we have that
\[ \varepsilon {\int _{0}^{t}}{\big(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u)\big)^{2}}\hspace{0.1667em}du\le c{\varepsilon ^{2H}}{(t-s)^{2H}}.\]
Assumption 6 is then fulfilled with $\hat{\tau }=1$ and ${\gamma _{\varepsilon }}={\varepsilon ^{H}}$. A large deviation principle is then established for the family of processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$, with the inverse speed ${\varepsilon ^{2H}}$.
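As a side numerical sanity check (a sketch, not part of the proof), the diagonal value of the limiting integral appearing in $\bar{\Upsilon }$ can be verified: for $t=s$ the integrand ${(t-u)^{H-\frac{1}{2}}}{(s-u)^{H-\frac{1}{2}}}$ reduces to ${(t-u)^{2H-1}}$, whose integral over $[0,t]$ equals ${t^{2H}}/(2H)$. The Hurst parameter, quadrature rule and evaluation points below are illustrative choices, not taken from the text.

```python
import math

H = 0.7  # illustrative Hurst parameter with H > 1/2 (an assumption for this check)

def R(t, s, n=100_000):
    """Midpoint-rule approximation of the limiting integral
    int_0^{min(t,s)} (t-u)^(H-1/2) (s-u)^(H-1/2) du."""
    hi = min(t, s)
    h = hi / n
    return h * sum((t - (i + 0.5) * h) ** (H - 0.5) * (s - (i + 0.5) * h) ** (H - 0.5)
                   for i in range(n))

t = 0.8
# On the diagonal the integrand is (t-u)^(2H-1), whose integral over [0,t] is t^(2H)/(2H).
numeric = R(t, t)
exact = t ** (2 * H) / (2 * H)
print(numeric, exact)
```

The function R is symmetric in its arguments by construction, matching the fact that $\bar{\Upsilon }$ is a covariance function.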
Example 5 (m-fold integrated Brownian motion).
Let X be the process defined in Example 2. We have already proved that Assumptions 1 and 2 are fulfilled with $\tau =1$ and ${\gamma _{\varepsilon }}=\varepsilon $. We want to show that Assumptions 5 and 6 are fulfilled with $\hat{\tau }=1$, ${\gamma _{\varepsilon }}=\varepsilon $. For $s\le t$, we have
\[ K(T+\varepsilon t,T+\varepsilon s)=\frac{{\varepsilon ^{m}}{(t-s)^{m}}}{m!},\]
therefore
\[ \underset{\varepsilon \to 0}{\lim }\sqrt{\varepsilon }\hspace{0.1667em}\frac{K(T+\varepsilon t,T+\varepsilon s)}{\varepsilon }=0\]
uniformly in $t,s\in [0,1]$. Thus,
\[ \bar{\Upsilon }(t,s)=\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}\frac{1}{{(m!)^{2}}}\frac{{m^{2}}}{2m-1}{T^{2m-1}}st,\]
and Assumption 5 is verified. Note that $\bar{\Upsilon }$ is regular enough. Let us now prove the exponential tightness of the family of processes. For $s<t$, there exist positive constants ${c_{1}},{c_{2}}$ such that
\[\begin{aligned}{}& {\int _{0}^{t}}{\big(K(T+\varepsilon t,T+\varepsilon u)-K(T+\varepsilon s,T+\varepsilon u)\big)^{2}}\hspace{0.1667em}du\\ {} & \hspace{1em}=\frac{1}{{(m!)^{2}}}{\int _{0}^{s}}{\varepsilon ^{2m}}{\big({(t-u)^{m}}-{(s-u)^{m}}\big)^{2}}\hspace{0.1667em}du\\ {} & \hspace{2em}+\frac{1}{{(m!)^{2}}}{\int _{s}^{t}}{\varepsilon ^{2m}}{(t-u)^{2m}}\hspace{0.1667em}du\le {\varepsilon ^{2m}}{c_{1}}{(t-s)^{2}}+{\varepsilon ^{2m}}{c_{2}}{(t-s)^{2m+1}}\end{aligned}\]
and Assumption 6 is verified with $\hat{\tau }=1$ and ${\gamma _{\varepsilon }}=\varepsilon $. A large deviation principle is then established for the family of conditioned processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$, with the inverse speed ${\varepsilon ^{2}}$.
Example 6 (Integrated Volterra Process).
Let X be the process defined in Example 3. We have already proved that Assumptions 1 and 2 are fulfilled with $\tau =1$, ${\gamma _{\varepsilon }}=\varepsilon $. We want to show that Assumptions 5 and 6 are fulfilled with $\hat{\tau }=1$ and ${\gamma _{\varepsilon }}=\varepsilon $. For $s\le t$, we have
\[ h(T+\varepsilon t,T+\varepsilon s)={\int _{T+\varepsilon s}^{T+\varepsilon t}}K(v,T+\varepsilon s)\hspace{0.1667em}dv=\varepsilon {\int _{s}^{t}}K(T+\varepsilon v,T+\varepsilon s)\hspace{0.1667em}dv,\]
therefore
\[ \underset{\varepsilon \to 0}{\lim }\sqrt{\varepsilon }\hspace{0.1667em}\frac{h(T+\varepsilon t,T+\varepsilon s)}{\varepsilon }=0\]
uniformly in $t,s\in [0,1]$. Thus,
\[ \bar{\Upsilon }(t,s)=\frac{{\tilde{\alpha }^{2}}}{{\alpha ^{2}}+{\tilde{\alpha }^{2}}}{\int _{0}^{T}}{K^{2}}(T,u)\hspace{0.1667em}du\hspace{0.1667em}st,\]
and Assumption 5 is verified. Note that $\bar{\Upsilon }$ is regular enough. Let us now prove the exponential tightness of the family of processes. For $s<t$, there exists a constant $c>0$ such that
\[\begin{aligned}{}& {\int _{0}^{t}}{\big(h(T+\varepsilon t,T+\varepsilon u)-h(T+\varepsilon s,T+\varepsilon u)\big)^{2}}\hspace{0.1667em}du\\ {} & \hspace{1em}={\int _{0}^{s}}{\Bigg({\int _{T+\varepsilon s}^{T+\varepsilon t}}K(v,T+\varepsilon u)\hspace{0.1667em}dv\Bigg)^{2}}\hspace{0.1667em}du+{\int _{s}^{t}}{\Bigg({\int _{T+\varepsilon u}^{T+\varepsilon t}}K(v,T+\varepsilon u)\hspace{0.1667em}dv\Bigg)^{2}}\hspace{0.1667em}du\\ {} & \hspace{1em}\le {\varepsilon ^{2}}c{(t-s)^{2}}\end{aligned}\]
and Assumption 6 is verified with $\hat{\tau }=1$ and ${\gamma _{\varepsilon }}=\varepsilon $. A large deviation principle is then established for the family of conditioned processes ${({({X_{T+\varepsilon t}^{\psi }}-{X_{T}^{\psi }})_{t\in [0,1]}})_{\varepsilon >0}}$, with the inverse speed ${\varepsilon ^{2}}$.
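The constant appearing in Example 5 can likewise be checked numerically. For the m-fold integrated Brownian motion the covariance $k(t,s)=\frac{1}{{(m!)^{2}}}{\int _{0}^{s\wedge t}}{(t-u)^{m}}{(s-u)^{m}}\hspace{0.1667em}du$ admits a closed form, and the rescaled increment covariance $\operatorname{Cov}({X_{T+\varepsilon t}}-{X_{T}},{X_{T+\varepsilon s}}-{X_{T}})/{\varepsilon ^{2}}$ should approach $\frac{{m^{2}}}{{(m!)^{2}}(2m-1)}{T^{2m-1}}st$ as $\varepsilon \to 0$; this is the unconditioned analogue of the coefficient in Example 5's $\bar{\Upsilon }$. The script below is a sketch with illustrative values of m, T, t, s and ε.

```python
import math

def k(t, s, m):
    """Covariance of m-fold integrated Brownian motion:
    k(t,s) = int_0^{min(s,t)} (t-u)^m (s-u)^m du / (m!)^2,
    evaluated in closed form via the binomial expansion (t-u)^m = ((t-s)+(s-u))^m."""
    s, t = min(s, t), max(s, t)
    total = sum(math.comb(m, j) * (t - s) ** (m - j) * s ** (m + j + 1) / (m + j + 1)
                for j in range(m + 1))
    return total / math.factorial(m) ** 2

# Illustrative parameters (assumptions for this check, not from the text).
T, m, eps, t, s = 1.0, 2, 1e-3, 0.7, 0.4

# Covariance of the increments X_{T+eps*t} - X_T and X_{T+eps*s} - X_T, rescaled by eps^2.
cov = (k(T + eps * t, T + eps * s, m) - k(T + eps * t, T, m)
       - k(T, T + eps * s, m) + k(T, T, m)) / eps ** 2

# Predicted small-eps limit: m^2 T^(2m-1) s t / ((m!)^2 (2m-1)).
limit = m ** 2 * T ** (2 * m - 1) * s * t / (math.factorial(m) ** 2 * (2 * m - 1))
print(cov, limit)
```

For $m=1$ the predicted coefficient reduces to $T$, consistent with the increments of integrated Brownian motion behaving like ${B_{T}}\hspace{0.1667em}\varepsilon t$ to first order.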