1 Introduction
Consider a fractional Brownian motion (fBm), a self-similar Gaussian process with stationary increments. It was introduced by Kolmogorov [5] and studied by Mandelbrot and Van Ness [6]. The fBm with Hurst parameter $H\in (0,1)$ is a centered Gaussian process with covariance function
\[R_{H}(t,s)=E\big({B_{t}^{H}}{B_{s}^{H}}\big)=\frac{1}{2}\big({t}^{2H}+{s}^{2H}-|t-s{|}^{2H}\big).\]
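For intuition, the covariance $R_H$ can be used directly to simulate an fBm on a finite grid, e.g. via a Cholesky factorization of the covariance matrix. The following sketch is purely illustrative and not part of the paper's argument; the grid, seed, and jitter constant are arbitrary choices.

```python
import numpy as np

def fbm_cov(ts, H):
    # Covariance R_H(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2 on a grid
    t = np.asarray(ts)[:, None]
    s = np.asarray(ts)[None, :]
    return 0.5 * (t**(2 * H) + s**(2 * H) - np.abs(t - s)**(2 * H))

H = 0.3                              # any H in (0, 1); here H < 1/2
ts = np.linspace(0.02, 1.0, 50)      # avoid t = 0, where B_0^H = 0 a.s.
R = fbm_cov(ts, H)

# R is symmetric positive definite at distinct positive times, so a
# Cholesky factor L with R = L L^T exists (jitter guards against rounding)
L = np.linalg.cholesky(R + 1e-12 * np.eye(len(ts)))
path = L @ np.random.default_rng(0).standard_normal(len(ts))  # one sample path
```

Note that the increment variance $E[(B_t^H - B_s^H)^2] = R_H(t,t) + R_H(s,s) - 2R_H(t,s) = |t-s|^{2H}$ follows directly from this covariance.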
If $H=1/2$, then the process ${B}^{1/2}$ is a standard Brownian motion. When $H\ne \frac{1}{2}$, ${B}^{H}$ is neither a semimartingale nor a Markov process, so that many of the techniques employed in stochastic analysis are not available for an fBm. The self-similarity and stationarity of increments make the fBm an appropriate model for many applications in diverse fields from biology to finance. We refer to [7] for details on these notions.
Consider the following stochastic differential equation (SDE)
(1)
\[\left\{\begin{array}{l}dX_{t}=b(t,X_{t})\hspace{0.1667em}dt+d{B_{t}^{H}},\hspace{1em}\\{} X_{0}=x\in {\mathbb{R}}^{d},\hspace{1em}\end{array}\right.\]
where $b:[0,T]\times {\mathbb{R}}^{d}\to {\mathbb{R}}^{d}$ is a measurable function, and ${B}^{H}$ is a d-dimensional fBm with Hurst parameter $H<1/2$ whose components are one-dimensional independent fBms defined on a probability space $(\varOmega ,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in [0,T]},P)$, where the filtration $\{\mathcal{F}_{t}\}_{t\in [0,T]}$ is generated by ${B_{t}^{H}}$, $t\in [0,T]$, augmented by the P-null sets. It has been proved in [2] that if b satisfies the assumption
(2)
\[b\in {L_{\infty }^{1,\infty }}:={L}^{\infty }\big([0,T];{L}^{1}\big({\mathbb{R}}^{d}\big)\cap {L}^{\infty }\big({\mathbb{R}}^{d}\big)\big)\]
for $H<\frac{1}{2(3d-1)}$, then Eq. (1) has a unique strong solution, which will be assumed throughout this paper.
Notice that if the drift coefficient is Lipschitz continuous, then Eq. (1) has a unique strong solution, which is continuous with respect to the initial condition. Moreover, the solution can be constructed using various numerical schemes.
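As an illustration of the last remark, a drift-plus-noise Euler scheme for Eq. (1) can be sketched as below (in dimension one, for simplicity). This is a generic construction under assumed regularity, not a scheme analyzed in the paper; the placeholder noise path stands in for a genuine fBm sample, which could come, e.g., from a Cholesky factorization of the covariance.

```python
import numpy as np

def euler_scheme(b, x0, noise, ts):
    # X_{k+1} = X_k + b(t_k, X_k) (t_{k+1} - t_k) + (B^H_{t_{k+1}} - B^H_{t_k})
    X = np.empty(len(ts))
    X[0] = x0
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        dB = noise[k + 1] - noise[k]
        X[k + 1] = X[k] + b(ts[k], X[k]) * dt + dB
    return X

ts = np.linspace(0.0, 1.0, 101)
# placeholder noise path; a real simulation would use an fBm sample here
noise = np.cumsum(np.random.default_rng(1).normal(0.0, 0.1, len(ts)))
X = euler_scheme(lambda t, x: -x, 1.0, noise, ts)   # e.g. b(t, x) = -x
```

With $b\equiv 0$, the scheme reproduces $x$ plus the noise increments exactly, mirroring the additive structure of Eq. (1).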
Our purpose in this paper is to establish some stability results under the pathwise uniqueness of solutions and under weak regularity conditions on the drift coefficient b. We mention that a considerable result in this direction has been established in [1] when an fBm is replaced by a standard Brownian motion.
The paper is organized as follows. In Section 2, we introduce some properties, notation, definitions, and preliminary results. Section 3 is devoted to the study of the variation of solution with respect to the initial data. In the last section, we drop the continuity assumption on the drift and try to obtain the same result as in Section 3.
2 Preliminaries
In this section, we give some properties of an fBm, definitions, and some tools used in the proofs.
For any $H<1/2$, let us define the square-integrable kernel
\[K_{H}(t,s)=c_{H}\Bigg[{\bigg(\frac{t}{s}\bigg)}^{H-\frac{1}{2}}{(t-s)}^{H-\frac{1}{2}}-\bigg(H-\frac{1}{2}\bigg){s}^{\frac{1}{2}-H}{\int _{s}^{t}}{(u-s)}^{H-\frac{1}{2}}{u}^{H-\frac{3}{2}}\hspace{0.1667em}du\Bigg],\hspace{1em}t>s,\]
where $c_{H}={[\frac{2H}{(1-2H)\beta (1-2H,H+\frac{1}{2})}]}^{1/2}$, and β denotes the Beta function. Note that
\[\frac{\partial K_{H}}{\partial t}(t,s)=c_{H}\bigg(H-\frac{1}{2}\bigg){\bigg(\frac{t}{s}\bigg)}^{H-\frac{1}{2}}{(t-s)}^{H-\frac{3}{2}}.\]
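The closed form of $\partial K_{H}/\partial t$ can be checked numerically against a finite difference of $K_{H}$ itself. The following sketch assumes SciPy's `quad` for the (integrably singular) integral in the kernel; the sample values of H, s, t are arbitrary.

```python
import math
from scipy.integrate import quad

H = 0.3                       # sample Hurst parameter, H < 1/2
# c_H = [2H / ((1 - 2H) B(1 - 2H, H + 1/2))]^{1/2}, B the Beta function
beta = math.gamma(1 - 2 * H) * math.gamma(H + 0.5) / math.gamma(1.5 - H)
c_H = math.sqrt(2 * H / ((1 - 2 * H) * beta))

def K(t, s):
    # square-integrable kernel K_H(t, s) for H < 1/2, t > s
    integral, _ = quad(lambda u: (u - s)**(H - 0.5) * u**(H - 1.5), s, t)
    return c_H * ((t / s)**(H - 0.5) * (t - s)**(H - 0.5)
                  - (H - 0.5) * s**(0.5 - H) * integral)

def dK_dt(t, s):
    # closed-form partial derivative in t
    return c_H * (H - 0.5) * (t / s)**(H - 0.5) * (t - s)**(H - 1.5)

t, s, h = 1.0, 0.4, 1e-4
fd = (K(t + h, s) - K(t - h, s)) / (2 * h)   # central finite difference
```

The finite difference and the closed form agree to several digits, which is a useful consistency check when implementing the kernel.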
Let ${B}^{H}=\{{B_{t}^{H}},\hspace{2.5pt}t\in [0,T]\}$ be an fBm defined on $(\varOmega ,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in [0,T]},P)$. We denote by ζ the set of step functions on $[0,T]$. Let $\mathcal{H}$ be the Hilbert space defined as the closure of ζ with respect to the scalar product
The mapping $\textbf{1}_{[0,t]}\to {B_{t}^{H}}$ can be extended to an isometry between $\mathcal{H}$ and the Gaussian subspace of ${L}^{2}(\varOmega )$ associated with ${B}^{H}$; this isometry is denoted by $\varphi \to {B}^{H}(\varphi )$. Now we introduce the linear operator ${K_{H}^{\ast }}$ from ζ to ${L}^{2}([0,T])$ defined by
\[\big({K_{H}^{\ast }}\varphi \big)(s)=K_{H}(T,s)\varphi (s)+{\int _{s}^{T}}\big(\varphi (t)-\varphi (s)\big)\frac{\partial K_{H}}{\partial t}(t,s)\hspace{0.1667em}dt.\]
The operator ${K_{H}^{\ast }}$ is an isometry between ζ and ${L}^{2}([0,T])$, which can be extended to the Hilbert space $\mathcal{H}$. Define the process $W=\{W_{t},t\in [0,T]\}$ by
Then W is a Brownian motion; moreover, ${B}^{H}$ has the integral representation
We need also to define an isomorphism $K_{H}$ from ${L}^{2}([0,T])$ onto ${I_{0+}^{H+\frac{1}{2}}}({L}^{2})$ associated with the kernel $K_{H}(t,s)$ in terms of the fractional integrals as follows:
\[(K_{H}\varphi )(s)={I_{{0}^{+}}^{2H}}{s}^{\frac{1}{2}-H}{I_{{0}^{+}}^{\frac{1}{2}-H}}{s}^{H-\frac{1}{2}}\varphi ,\hspace{1em}\varphi \in {L}^{2}\big([0,T]\big).\]
Note that, for $\varphi \in {L}^{2}([0,T])$, ${I_{{0}^{+}}^{\alpha }}$ is the left fractional Riemann-Liouville integral operator of order α defined by
\[{I_{{0}^{+}}^{\alpha }}\varphi (x)=\frac{1}{\varGamma (\alpha )}{\int _{0}^{x}}{(x-y)}^{\alpha -1}\varphi (y)\hspace{0.1667em}dy,\]
where Γ is the gamma function (see [3] for details). The inverse of $K_{H}$ is given by
\[\big({K_{H}^{-1}}\varphi \big)(s)={s}^{\frac{1}{2}-H}{D_{{0}^{+}}^{\frac{1}{2}-H}}{s}^{H-\frac{1}{2}}{D_{{0}^{+}}^{2H}}\varphi (s),\hspace{1em}\varphi \in {I_{0+}^{H+\frac{1}{2}}}\big({L}^{2}\big),\]
where for $\varphi \in {I_{{0}^{+}}^{H+\frac{1}{2}}}({L}^{2})$, ${D_{{0}^{+}}^{\alpha }}$ is the left-sided Riemann-Liouville derivative of order α defined by
\[{D_{{0}^{+}}^{\alpha }}\varphi (x)=\frac{1}{\varGamma (1-\alpha )}\frac{d}{dx}{\int _{0}^{x}}\frac{\varphi (y)}{{(x-y)}^{\alpha }}\hspace{0.1667em}dy.\]
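For concreteness, ${I_{0+}^{\alpha }}$ and ${D_{0+}^{\alpha }}$ can be realized numerically and tested on monomials, for which both operators have closed forms: ${I_{0+}^{\alpha }}[1](x)={x}^{\alpha }/\varGamma (\alpha +1)$ and ${D_{0+}^{\alpha }}[y](x)={x}^{1-\alpha }/\varGamma (2-\alpha )$. The quadrature routine and sample values below are illustrative choices, not part of the paper.

```python
import math
from scipy.integrate import quad

def I_frac(alpha, phi, x):
    # left Riemann-Liouville fractional integral I_{0+}^alpha phi at x
    val, _ = quad(lambda y: (x - y)**(alpha - 1) * phi(y), 0, x)
    return val / math.gamma(alpha)

def D_frac(alpha, phi, x, h=1e-5):
    # left Riemann-Liouville derivative: D^alpha = (d/dx) I^{1 - alpha},
    # with the outer derivative taken by a central finite difference
    F = lambda z: I_frac(1 - alpha, phi, z)
    return (F(x + h) - F(x - h)) / (2 * h)

alpha = 0.3
i_val = I_frac(alpha, lambda y: 1.0, 1.0)   # expect 1 / Gamma(1.3)
d_val = D_frac(alpha, lambda y: y, 1.0)     # expect 1 / Gamma(1.7)
```

The two checks exercise exactly the definitions above: the integral kernel $(x-y)^{\alpha -1}$ and the derivative written as a genuine $d/dx$ of a fractional integral.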
If φ is absolutely continuous (see [8]), then
(3)
\[\big({K_{H}^{-1}}\varphi \big)(s)={s}^{H-\frac{1}{2}}{I_{{0}^{+}}^{\frac{1}{2}-H}}{s}^{\frac{1}{2}-H}{\varphi ^{\prime }}(s).\]
Definition 2.2.
A sextuple $(\varOmega ,\mathcal{F},\{\mathcal{F}_{t}\}_{t\in [0,T]},P,X,{B}^{H})$ is called a weak solution to (1) if
The main tool used in the proofs is Skorokhod’s selection theorem given by the following lemma.
Lemma 2.4.
([4], p. 9) Let $(S,\rho )$ be a complete separable metric space, and let P, $P_{n}$, $n=1,2,\dots $, be probability measures on $(S,\mathbb{B}(S))$ such that $P_{n}$ converges weakly to P as $n\to \infty $. Then, on a probability space $(\widetilde{\varOmega },\widetilde{\mathcal{F}},\widetilde{P})$, we can construct S-valued random variables X, $X_{n}$, $n=1,2,\dots $, such that:
We will also make use of the following result, which gives a criterion for the tightness of sequences of laws associated with continuous processes.
Lemma 2.5.
([4], p. 18) Let $\{{X_{t}^{n}},\hspace{2.5pt}t\in [0,T]\}$, $n=1,2,\dots $, be a sequence of d-dimensional continuous processes satisfying the following two conditions:
Then, there exist a subsequence $(n_{k})$, a probability space $(\widetilde{\varOmega },\widetilde{\mathcal{F}},\widetilde{P})$, and d-dimensional continuous processes $\widetilde{X}$, ${\widetilde{X}}^{n_{k}}$, $k=1,2,\dots $, defined on $\widetilde{\varOmega }$ such that
3 Variation of solutions with respect to initial conditions
The purpose of this section is to ensure the continuous dependence of the solution with respect to the initial condition when the drift b is continuous and bounded. Note that, in the case of an ordinary differential equation, the continuity of the coefficient is sufficient to ensure this dependence.
Next, we give a theorem that will be essential in establishing the desired result.
Theorem 3.1.
Let b be a continuous bounded function. Then, under the pathwise uniqueness for SDE (1), we have
Before we proceed to the proof of Theorem 3.1, we state the following technical lemma.
Lemma 3.2.
Let ${X}^{n}$ be the solution of (1) corresponding to the initial condition $x_{n}$. Then, for every $p>\frac{1}{2H}$, there exists a positive constant $C_{p}$ such that, for all $s,t\in [0,T]$,
Proof.
Fix $s<t$ in $[0,T]$. We have
Due to the stationarity of the increments and the scaling property of the fBm, together with the boundedness of b, we get that
which finishes the proof. □
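The stationarity and scaling used in the proof can be made concrete: the increment $B_{t}^{H}-B_{s}^{H}$ is centered Gaussian with variance $|t-s|^{2H}$, so all its even moments follow from the double-factorial formula $E|Z{|}^{2p}=(2p-1)!!\hspace{0.1667em}{\sigma }^{2p}$. A small sketch (sample parameter values are arbitrary):

```python
import math

def R(t, s, H):
    # fBm covariance
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def increment_moment(t, s, H, p):
    # E|B_t^H - B_s^H|^{2p}: the increment is centered Gaussian with
    # variance R(t,t) + R(s,s) - 2 R(t,s) = |t - s|^{2H}
    sigma2 = R(t, t, H) + R(s, s, H) - 2 * R(t, s, H)
    double_factorial = math.prod(range(1, 2 * p, 2))   # (2p - 1)!!
    return double_factorial * sigma2**p

m = increment_moment(1.0, 0.4, 0.3, p=2)   # expect 3 * 0.6^{4 * 0.3}
```

In particular the moment depends on $t-s$ only, which is the stationarity of increments invoked above.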
Let us now turn to the proof of Theorem 3.1.
Proof.
Suppose that the result of the theorem is false. Then there exist a constant $\delta >0$ and a sequence $x_{n}$ converging to $x_{0}$ such that
\[\underset{n}{\inf }E\Big[\underset{0\le t\le T}{\sup }{\big|X_{t}(x_{n})-X_{t}(x_{0})\big|}^{2}\Big]\ge \delta .\]
Let ${X}^{n}$ (respectively, X) be the solution of (1) corresponding to the initial condition $x_{n}$ (respectively, $x_{0}$). According to Lemma 3.2, the sequence $({X}^{n},X,{B}^{H})$ satisfies conditions (i) and (ii) of Lemma 2.5. Then, by Skorokhod’s selection theorem there exist a subsequence $\{n_{k},k\ge 1\}$, a probability space $(\widetilde{\varOmega },\widetilde{\mathcal{F}},\widetilde{P})$, and stochastic processes $(\widetilde{X},\widetilde{Y},{\widetilde{B}}^{H})$, $(\widetilde{{X}^{k}},\widetilde{{Y}^{k}},{\widetilde{B}}^{H,k})$, $k\ge 1$, defined on $(\widetilde{\varOmega },\widetilde{\mathcal{F}},\widetilde{P})$ such that:
-
(α) for each $k\ge 1$, the laws of $(\widetilde{{X}^{k}},\widetilde{{Y}^{k}},{\widetilde{B}}^{H,k})$ and $({X}^{n_{k}},X,{B}^{H})$ coincide;
-
(β) $({\widetilde{X}}^{k},{\widetilde{Y}}^{k},{\widetilde{B}}^{H,k})$ converges to $(\widetilde{X},\widetilde{Y},{\widetilde{B}}^{H})$ uniformly on every finite time interval $\widetilde{P}$-a.s.
Thanks to property $(\alpha )$, we have, for $k\ge 1$ and $t>0$,
\[E{\Bigg|{\widetilde{X}_{t}^{k}}-x_{k}-{\int _{0}^{t}}b\big(s,{\widetilde{X}_{s}^{k}}\big)\hspace{0.1667em}ds-{\widetilde{B}_{t}^{H,k}}\Bigg|}^{2}=0.\]
In other words, ${\widetilde{X}_{t}^{k}}$ satisfies the following SDE:
\[{\widetilde{X}_{t}^{k}}=x_{k}+{\int _{0}^{t}}b\big(s,{\widetilde{X}_{s}^{k}}\big)\hspace{0.1667em}ds+{\widetilde{B}_{t}^{H,k}}.\]
Similarly,
\[{\widetilde{Y}_{t}^{k}}=x_{0}+{\int _{0}^{t}}b\big(s,{\widetilde{Y}_{s}^{k}}\big)\hspace{0.1667em}ds+{\widetilde{B}_{t}^{H,k}}.\]
Using $(\beta )$, we deduce that
\[\underset{k\to \infty }{\lim }{\int _{0}^{t}}b\big(s,{\widetilde{X}_{s}^{k}}\big)\hspace{0.1667em}ds={\int _{0}^{t}}b(s,\widetilde{X}_{s})\hspace{0.1667em}ds\]
and
\[\underset{k\to \infty }{\lim }{\int _{0}^{t}}b\big(s,{\widetilde{Y}_{s}^{k}}\big)\hspace{0.1667em}ds={\int _{0}^{t}}b(s,\widetilde{Y}_{s})\hspace{0.1667em}ds\]
in probability and uniformly in $t\in [0,T]$. Thus, the processes $\widetilde{X}$ and $\widetilde{Y}$ satisfy the same SDE on $(\widetilde{\varOmega },\widetilde{\mathcal{F}},\widetilde{P})$ with the same driving noise ${\widetilde{B}_{t}^{H}}$ and the initial condition $x_{0}$. Then, by pathwise uniqueness, we conclude that $\widetilde{X}_{t}=\widetilde{Y}_{t}$ for all $t\in [0,T]$, $\widetilde{P}$-a.s.
On the other hand, by uniform integrability we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \delta & \displaystyle \le & \displaystyle \underset{n}{\liminf }E\Big[\underset{0\le t\le T}{\max }{\big|X_{t}(x_{n})-X_{t}(x_{0})\big|}^{2}\Big]\\{} & \displaystyle =& \displaystyle \underset{k}{\liminf }\widetilde{E}\Big[\underset{0\le t\le T}{\max }{\big|{\widetilde{X}_{t}^{k}}-{\widetilde{Y}_{t}^{k}}\big|}^{2}\Big]\\{} & \displaystyle \le & \displaystyle \widetilde{E}\Big[\underset{0\le t\le T}{\max }|\widetilde{X}_{t}-\widetilde{Y}_{t}{|}^{2}\Big],\end{array}\]
which is a contradiction. Then the desired result follows. □
4 The case of discontinuous drift coefficient
In this section, we drop the continuity assumption on the drift coefficient and only assume that b is bounded. The goal of this section is to obtain the same result as in Theorem 3.1 without the continuity assumption.
Next, in order to use the fractional Girsanov theorem given in [8, Thm. 2], we should first check that the conditions imposed there are satisfied in our context. This will be done in the following lemma.
Lemma 4.1.
Suppose that X is a solution of SDE (1), and let b be a bounded function. Then the process $v={K_{H}^{-1}}({\int _{0}^{\cdot }}b(r,X_{r})\hspace{0.1667em}dr)$ enjoys the following properties:
Proof.
(1) In light of (3), we can write
\[\begin{array}{r@{\hskip0pt}l}\displaystyle |v_{s}|& \displaystyle =\big|{s}^{H-\frac{1}{2}}{I_{{0}^{+}}^{\frac{1}{2}-H}}{s}^{\frac{1}{2}-H}b(s,X_{s})\big|\\{} & \displaystyle \le \frac{1}{\varGamma (\frac{1}{2}-H)}{s}^{H-\frac{1}{2}}{\int _{0}^{s}}{(s-r)}^{-\frac{1}{2}-H}{r}^{\frac{1}{2}-H}\big|b(r,X_{r})\big|\hspace{0.1667em}dr\\{} & \displaystyle \le \hspace{0.1667em}\| b\| _{\infty }\frac{1}{\varGamma (\frac{1}{2}-H)}{s}^{H-\frac{1}{2}}{\int _{0}^{s}}{(s-r)}^{-\frac{1}{2}-H}{r}^{\frac{1}{2}-H}\hspace{0.1667em}dr\\{} & \displaystyle =\hspace{0.1667em}\| b\| _{\infty }\frac{\varGamma (\frac{3}{2}-H)}{\varGamma (2-2H)}{s}^{\frac{1}{2}-H}\\{} & \displaystyle \le \hspace{0.1667em}\| b\| _{\infty }\frac{\varGamma (\frac{3}{2}-H)}{\varGamma (2-2H)}{T}^{\frac{1}{2}-H},\end{array}\]
where $\| \cdot \| _{\infty }$ denotes the norm in ${L}^{\infty }([0,T];{L}^{\infty }({\mathbb{R}}^{d}))$. As a result, we get that
(2) The second item is obtained easily by the following estimate:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle E& \displaystyle \Bigg[\exp \Bigg\{\frac{1}{2}{\int _{0}^{T}}|v_{s}{|}^{2}\hspace{0.1667em}ds\Bigg\}\Bigg]\le \exp \bigg\{\frac{1}{2}C_{H}{T}^{2(1-H)}\| b{\| _{\infty }^{2}}\bigg\},\end{array}\]
where $C_{H}=\frac{\varGamma {(\frac{3}{2}-H)}^{2}}{\varGamma {(2-2H)}^{2}}$, which finishes the proof. □
Next, we will establish the following Krylov-type inequality that will play an essential role in the sequel.
Lemma 4.2.
Suppose that X is a solution of SDE (1). Then, there exists $\beta >1+dH$ such that, for any measurable nonnegative function $g:[0,T]\times {\mathbb{R}}^{d}\to \mathbb{R}_{+}$, we have
where M is a constant depending only on T, d, β, and H.
Proof.
Let W be a d-dimensional Brownian motion such that
For the process v introduced in Lemma 4.1, let us define $\widehat{P}$ by
\[\frac{d\widehat{P}}{dP}=\exp \Bigg\{-{\int _{0}^{T}}v_{t}\hspace{0.1667em}dW_{t}-\frac{1}{2}{\int _{0}^{T}}{v_{t}^{2}}\hspace{0.1667em}dt\Bigg\}:={Z_{T}^{-1}}.\]
Then, in light of Lemma 4.1 together with the fractional Girsanov theorem [8, Thm. 2], we can conclude that $\widehat{P}$ is a probability measure under which the process $X-x$ is an fBm. Now, applying Hölder’s inequality, we have
(5)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle E{\int _{0}^{T}}g(t,X_{t})\hspace{0.1667em}dt& \displaystyle =\widehat{E}\Bigg\{Z_{T}{\int _{0}^{T}}g(t,X_{t})\hspace{0.1667em}dt\Bigg\}\\{} & \displaystyle \le C{\big\{\widehat{E}\big[{Z_{T}^{\alpha }}\big]\big\}}^{1/\alpha }{\Bigg\{\widehat{E}{\int _{0}^{T}}{g}^{\rho }(t,X_{t})\hspace{0.1667em}dt\Bigg\}}^{1/\rho },\end{array}\]
where $1/\alpha +1/\rho =1$, and C is a positive constant depending only on T, α, and ρ.
From [2, Lemma 4.3] we can see that $\widehat{E}[{Z_{T}^{\alpha }}]$ satisfies the following property:
where $C_{H,d,T}$ is a continuous increasing function depending only on H, d, and T.
On the other hand, applying Hölder’s inequality again with $1/\gamma +1/{\gamma ^{\prime }}=1$ and $\gamma >dH+1$, we obtain
(7)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \widehat{E}{\int _{0}^{T}}{g}^{\rho }(t,X_{t})\hspace{0.1667em}dt& \displaystyle ={\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{g}^{\rho }(t,y){\big(2\pi {t}^{2H}\big)}^{-d/2}{e}^{-\| y-x{\| }^{2}/(2{t}^{2H})}\hspace{0.1667em}dy\hspace{0.1667em}dt\\{} & \displaystyle \le {\Bigg({\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{\big(2\pi {t}^{2H}\big)}^{-d{\gamma ^{\prime }}/2}{e}^{-{\gamma ^{\prime }}\| y-x{\| }^{2}/(2{t}^{2H})}\hspace{0.1667em}dy\hspace{0.1667em}dt\Bigg)}^{1/{\gamma ^{\prime }}}\\{} & \displaystyle \hspace{1em}\times {\Bigg({\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{g}^{\rho \gamma }(t,y)\hspace{0.1667em}dy\hspace{0.1667em}dt\Bigg)}^{1/\gamma }.\end{array}\]
A direct calculation gives
\[\int _{{\mathbb{R}}^{d}}{\big(2\pi {t}^{2H}\big)}^{-d{\gamma ^{\prime }}/2}{e}^{-{\gamma ^{\prime }}\| y-x{\| }^{2}/(2{t}^{2H})}\hspace{0.1667em}dy={(2\pi )}^{d/2-d{\gamma ^{\prime }}/2}{\big({\gamma ^{\prime }}\big)}^{-d/2}{t}^{(1-{\gamma ^{\prime }})\hspace{0.1667em}dH}.\]
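This Gaussian integral can be sanity-checked numerically in dimension $d=1$; the parameter values below are arbitrary, and SciPy's `quad` is assumed for the integration.

```python
import math
from scipy.integrate import quad

H, t, x, gp = 0.3, 0.7, 0.0, 2.0     # gp plays the role of gamma'

def integrand(y):
    # (2 pi t^{2H})^{-gp/2} exp(-gp (y - x)^2 / (2 t^{2H})), d = 1
    v = t**(2 * H)
    return (2 * math.pi * v)**(-gp / 2) * math.exp(-gp * (y - x)**2 / (2 * v))

lhs, _ = quad(integrand, -math.inf, math.inf)
# closed form (2 pi)^{d/2 - d gp/2} gp^{-d/2} t^{(1 - gp) d H} with d = 1
rhs = (2 * math.pi)**(0.5 - gp / 2) * gp**(-0.5) * t**((1 - gp) * H)
```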
Plugging this into (7), we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \widehat{E}{\int _{0}^{T}}{g}^{\rho }(t,X_{t})\hspace{0.1667em}dt& \displaystyle \le {\Bigg({\int _{0}^{T}}{(2\pi )}^{d/2-d{\gamma ^{\prime }}/2}{\big({\gamma ^{\prime }}\big)}^{-d/2}{t}^{(1-{\gamma ^{\prime }})\hspace{0.1667em}dH}\hspace{0.1667em}dt\Bigg)}^{1/{\gamma ^{\prime }}}\\{} & \displaystyle \hspace{1em}\times {\Bigg({\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{g}^{\rho \gamma }(t,y)\hspace{0.1667em}dy\hspace{0.1667em}dt\Bigg)}^{1/\gamma }\\{} & \displaystyle \le {\big({(2\pi )}^{d/2-d{\gamma ^{\prime }}/2}{\big({\gamma ^{\prime }}\big)}^{-d/2}\big)}^{1/{\gamma ^{\prime }}}{\Bigg({\int _{0}^{T}}{t}^{(1-{\gamma ^{\prime }})\hspace{0.1667em}dH}\hspace{0.1667em}dt\Bigg)}^{1/{\gamma ^{\prime }}}\\{} & \displaystyle \hspace{1em}\times {\Bigg({\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{g}^{\rho \gamma }(t,y)\hspace{0.1667em}dy\hspace{0.1667em}dt\Bigg)}^{1/\gamma }\\{} & \displaystyle \le C\big({\gamma ^{\prime }},T,d,H\big){\Bigg({\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}{g}^{\rho \gamma }(t,y)\hspace{0.1667em}dy\hspace{0.1667em}dt\Bigg)}^{1/\gamma }.\end{array}\]
Finally, combining this with (5) and (6), we get estimate (4) with $\beta =\rho \gamma $. The proof is now complete. □
Now we are able to state the main result of this section.
Proof.
The proof is similar to that of Theorem 3.1. The only difficulty is to show that
\[\underset{k\to \infty }{\lim }{\int _{0}^{t}}b\big(s,{\widetilde{X}_{s}^{k}}\big)\hspace{0.1667em}ds={\int _{0}^{t}}b(s,\widetilde{X}_{s})\hspace{0.1667em}ds\]
in probability. In other words, for $\epsilon >0$, we will show that
(8)
\[\underset{k\to \infty }{\limsup }P\Bigg[\Bigg|{\int _{0}^{t}}\big(b\big(s,{\widetilde{X}_{s}^{k}}\big)-b(s,\widetilde{X}_{s})\big)\hspace{0.1667em}ds\Bigg|>\epsilon \Bigg]=0.\]
Let us first define the mollification ${b}^{\delta }(t,\cdot ):=b(t,\cdot )\ast \phi _{\delta }$ with $\phi _{\delta }(y):={\delta }^{-d}\phi (y/\delta )$, where ∗ denotes the convolution on ${\mathbb{R}}^{d}$, and ϕ is an infinitely differentiable function with support in the unit ball such that $\int \phi (x)\hspace{0.1667em}dx=1$.
Applying Chebyshev’s inequality, we obtain
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle P\Bigg[\Bigg|{\int _{0}^{t}}\big(b\big(s,{\widetilde{X}_{s}^{k}}\big)-b(s,\widetilde{X}_{s})\big)\hspace{0.1667em}ds\Bigg|>\epsilon \Bigg]\\{} & \displaystyle \hspace{1em}\le \frac{1}{{\epsilon }^{2}}E\Bigg[{\int _{0}^{t}}{\big|b\big(s,{\widetilde{X}_{s}^{k}}\big)-b(s,\widetilde{X}_{s})\big|}^{2}\hspace{0.1667em}ds\Bigg]\\{} & \displaystyle \hspace{1em}\le \frac{4}{{\epsilon }^{2}}\Bigg\{E\Bigg[{\int _{0}^{t}}{\big|b\big(s,{\widetilde{X}_{s}^{k}}\big)-{b}^{\delta }\big(s,{\widetilde{X}_{s}^{k}}\big)\big|}^{2}\hspace{0.1667em}ds\Bigg]\\{} & \displaystyle \hspace{2em}+E\Bigg[{\int _{0}^{t}}{\big|{b}^{\delta }\big(s,{\widetilde{X}_{s}^{k}}\big)-{b}^{\delta }(s,\widetilde{X}_{s})\big|}^{2}\hspace{0.1667em}ds\Bigg]\\{} & \displaystyle \hspace{2em}+E\Bigg[{\int _{0}^{t}}{\big|{b}^{\delta }(s,\widetilde{X}_{s})-b(s,\widetilde{X}_{s})\big|}^{2}\hspace{0.1667em}ds\Bigg]\Bigg\}\\{} & \displaystyle \hspace{1em}=\frac{4}{{\epsilon }^{2}}(J_{1}+J_{2}+J_{3}).\end{array}\]
From the continuity of ${b}^{\delta }$ in x and from the convergence of ${\widetilde{X}_{s}^{k}}$ to $\widetilde{X}_{s}$ uniformly on every finite time interval $\widetilde{P}$-a.s., it follows that $J_{2}$ converges to 0 as $k\to \infty $ for every $\delta >0$. On the other hand, let $\theta :{\mathbb{R}}^{d}\to \mathbb{R}_{+}$ be a smooth truncation function such that $\theta (z)=1$ in the unit ball and $\theta (z)=0$ for $|z|>2$.
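A one-dimensional sketch of this mollification, with a standard bump function; the grid resolution, the value of δ, and the Riemann sum standing in for the convolution integral are all illustrative choices.

```python
import numpy as np

def bump(x):
    # smooth function supported in the unit ball (C-infinity on R)
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside]**2))
    return out

def mollify(b, delta, x, n=2001):
    # b^delta(x) = (b * phi_delta)(x), phi_delta(y) = delta^{-1} phi(y / delta)
    y = np.linspace(-delta, delta, n)
    dy = y[1] - y[0]
    phi = bump(y / delta)
    phi /= phi.sum() * dy            # normalize: phi_delta integrates to 1
    return float(np.sum(b(x - y) * phi) * dy)

b = np.sign                           # a bounded, discontinuous drift (d = 1)
smooth_val = mollify(b, 0.1, 1.0)     # away from the jump, b^delta = b = 1
```

Away from the discontinuity the mollification reproduces b exactly, while near the jump it interpolates smoothly, which is why $J_{2}\to 0$ for each fixed δ.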
By applying Lemma 4.2 we obtain
(9)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle J_{1}& \displaystyle =E{\int _{0}^{t}}\theta \big({\widetilde{X}_{s}^{k}}/R\big){\big|{b}^{\delta }\big(s,{\widetilde{X}_{s}^{k}}\big)-b\big(s,{\widetilde{X}_{s}^{k}}\big)\big|}^{2}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}+E{\int _{0}^{t}}\big(1-\theta \big({\widetilde{X}_{s}^{k}}/R\big)\big){\big|{b}^{\delta }\big(s,{\widetilde{X}_{s}^{k}}\big)-b\big(s,{\widetilde{X}_{s}^{k}}\big)\big|}^{2}\hspace{0.1667em}ds\\{} & \displaystyle \le N\big\| {b}^{\delta }-b\big\| _{\beta ,R}+2CE{\int _{0}^{t}}\big(1-\theta \big({\widetilde{X}_{s}^{k}}/R\big)\big)\hspace{0.1667em}ds,\end{array}\]
where N does not depend on δ and k, and $\| \cdot \| _{\beta ,R}$ denotes the norm in ${L}^{\beta }([0,T]\times B(0,R))$. The last expression on the right-hand side of (9) satisfies the estimate
(10)
\[E{\int _{0}^{t}}\big(1-\theta \big({\widetilde{X}_{s}^{k}}/R\big)\big)\hspace{0.1667em}ds\le T\underset{k\ge 1}{\sup }P\Big[\underset{s\le t}{\sup }\big|{\widetilde{X}_{s}^{k}}\big|>R\Big].\]
But we know that $\sup _{k\ge 1}E[\sup _{s\le t}|{\widetilde{X}_{s}^{k}}{|}^{p}]<\infty $ for all $p>1$, and thus
(11)
\[\underset{R\to \infty }{\lim }\underset{k\ge 1}{\sup }P\Big[\underset{s\le t}{\sup }\big|{\widetilde{X}_{s}^{k}}\big|>R\Big]=0.\]
Substituting estimate (10) into (9), letting $\delta \to 0$, and using (11), we deduce the convergence of the term $J_{1}$.
Finally, since estimate (10) also holds for $\widetilde{X}$, it suffices to use the same arguments as before to obtain the convergence of the term $J_{3}$, which completes the proof. □