1 Introduction
We consider the following stochastic Burgers equation with multiplicative space-time white noise, indexed by $\varepsilon \mathrm{>}0$:
(1)
\[\begin{aligned}{}\frac{\partial {u^{\varepsilon }}}{\partial t}(t,x)& =\Delta {u^{\varepsilon }}(t,x)+\frac{1}{2}\frac{\partial }{\partial x}{\big({u^{\varepsilon }}(t,x)\big)^{2}}\\ {} & \hspace{1em}+\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(t,x)\big)\dot{W}(t,x),\hspace{1em}(t,x)\in [0,T]\times [0,1],\end{aligned}\]
with Dirichlet’s boundary conditions ${u^{\varepsilon }}(t,0)={u^{\varepsilon }}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{\varepsilon }}(0,x)={u_{0}}(x)$ for $x\in [0,1]$. We assume that ${u_{0}}$ is continuous on $[0,1]$ and σ is bounded and globally Lipschitz on $\mathbb{R}$. The driving noise W is a space-time Brownian sheet defined on some filtered probability space $(\varOmega ,\mathcal{F},{({\mathcal{F}_{t}})}_{t\in [0,T]},\mathbb{P})$.
The deterministic Burgers equation was introduced in [7] as a simplified mathematical model describing turbulence phenomena in fluids. Its stochastic version has been the subject of several works; see for instance [1, 17, 22] and the references therein. In particular, a large deviation principle is established in [23] for an “additive version” of (1), and in [8] and [14] for a class of Burgers-type stochastic partial differential equations (SPDEs for short) including (1). Generally speaking, large deviations theory determines how fast the probabilities $\mathbb{P}({A_{\varepsilon }})$ of a family of rare events $({A_{\varepsilon }})$ decay to 0 as ε tends to 0, and how to compute the precise rate of decay as a function of the rare events. A natural related question is to study moderate deviations, which deal with probabilities of deviations of “smaller order” than in large deviations. We will make precise below the main difference between moderate and large deviations principles in the context of the stochastic Burgers equation; for a deeper description of these two kinds of deviation principles and their relationship, we refer the reader to [6].
Our first goal in this paper is to study the moderate deviations of ${u^{\varepsilon }}$ from the deterministic solution ${u^{0}}$ of the equation (4) below. More precisely, we deal with the deviations of the trajectory
(2)
\[ {\bar{u}^{\varepsilon }}(t,x):=\frac{{u^{\varepsilon }}(t,x)-{u^{0}}(t,x)}{a(\varepsilon )},\]
where the deviation scale $a:{\mathbb{R}^{+}}\longrightarrow {\mathbb{R}^{+}}$ is such that
(3)
\[ a(\varepsilon )\longrightarrow 0\hspace{1em}\text{and}\hspace{1em}h(\varepsilon ):=\frac{a(\varepsilon )}{\sqrt{\varepsilon }}\longrightarrow \infty ,\hspace{1em}\text{as}\hspace{2.5pt}\varepsilon \longrightarrow 0,\]
and ${u^{0}}$ stands for the solution of the following deterministic partial differential equation
(4)
\[ \frac{\partial {u^{0}}}{\partial t}(t,x)=\frac{{\partial ^{2}}{u^{0}}}{\partial {x^{2}}}(t,x)+\frac{1}{2}\frac{\partial }{\partial x}{\big({u^{0}}(t,x)\big)^{2}},\hspace{1em}(t,x)\in [0,T]\times [0,1],\]
with Dirichlet’s boundary conditions ${u^{0}}(t,0)={u^{0}}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{0}}(0,x)={u_{0}}(x)$.
The deviation scale $a(\varepsilon )$ strongly influences the asymptotic behavior of ${\bar{u}^{\varepsilon }}$. In fact, for a given norm $\| \cdot \| $, bounds on the probabilities $\mathbb{P}(\frac{\| {u^{\varepsilon }}-{u^{0}}\| }{\sqrt{\varepsilon }}\in \cdot )$ are handled by the central limit theorem, while the probabilities $\mathbb{P}(\| {u^{\varepsilon }}-{u^{0}}\| \in \cdot )$ are estimated by large deviations results. Furthermore, when we are interested in probabilities of the form $\mathbb{P}(\frac{\| {u^{\varepsilon }}-{u^{0}}\| }{a(\varepsilon )}\in \cdot )$ under the condition (3) (e.g. $a(\varepsilon )={\varepsilon ^{1/4}}$), we are in the framework of the so-called moderate deviations, which fill the gap between the central limit theorem scale ($a(\varepsilon )=\sqrt{\varepsilon }$) and the large deviations scale ($a(\varepsilon )=1$). In this paper, we will establish the moderate deviations principle for (1). For the study of this topic for various kinds of stochastic processes, see e.g. [10, 12, 16, 21].
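For concreteness, the power scales
\[ a(\varepsilon )={\varepsilon ^{\kappa }},\hspace{1em}0\mathrm{<}\kappa \mathrm{<}\frac{1}{2},\]
satisfy the condition (3): indeed, $a(\varepsilon )\to 0$ and $h(\varepsilon )={\varepsilon ^{\kappa -\frac{1}{2}}}\to \infty $ as $\varepsilon \to 0$. The excluded endpoints $\kappa =\frac{1}{2}$ and $\kappa =0$ correspond to the central limit theorem scale and the large deviations scale, respectively.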
Furthermore, there are basically two approaches to analyzing moderate and large deviations for processes. The first, originally used by Freidlin and Wentzell [15] for diffusion processes, relies on discretization and localization arguments that allow one to deduce the large deviations principle for the solutions of the equations under study, via a general contraction principle, from Schilder-type theorems for the driving noises. The second, which we are going to use in the present paper, is the so-called weak convergence approach. It was introduced in [13] and developed in [2, 4] and [5], and its starting point is the equivalence between the large deviations principle and the Laplace principle in the setting of Polish spaces. It consists in using certain variational formulas that can be viewed as minimal cost functions for associated stochastic optimal control problems. These minimal cost functions have a form to which the theory of weak convergence of probability measures can be applied. We refer to [13] for a more complete exposition of this approach.
We stress that, in the present paper, we mainly use the weak convergence approach to establish moderate deviations for stochastic Burgers equations, while in the previous works [8, 14, 23] the authors studied the large deviations principle for this equation. The main advantage of the weak convergence approach is that it allows one to avoid the technical exponential-type probability estimates usually needed in classical studies of the large deviations principle, and reduces the proofs to establishing qualitative properties such as existence, uniqueness and tightness of certain analogues of the original processes.
We also note that the greatest difficulty in studying any aspect of Burgers-type equations lies in their quadratic term. In fact, most of the techniques usually used to deal with stochastic differential equations with Lipschitz drift coefficients no longer work in general, and one resorts to localization or tightness arguments to circumvent this difficulty.
As pointed out before, we will prove a moderate deviations principle for the stochastic Burgers equation (1), as well as two first-step results toward a central limit theorem. It is worth bearing in mind that the main difficulty we have encountered in establishing a central limit theorem is due to the quadratic term appearing in the Burgers equation, for which the classical conditions (namely, the Lipschitz condition on the drift coefficient, together with the boundedness and differentiability of its derivative) are no longer satisfied.
The paper is organized as follows. Section 2 is devoted to some preliminaries. The framework of our moderate deviations result and its proof are given in Section 3. In Section 4, toward a central limit theorem for the stochastic Burgers equation, we prove the uniform boundedness and the convergence of ${u^{\varepsilon }}$ to ${u^{0}}$ in the space ${\mathrm{L}^{q}}(\varOmega ;C([0,T];{\mathrm{L}^{2}}([0,1])))$ for $q\geqslant 2$. Furthermore, some technical results needed in our proofs are included in the Appendix.
In this paper all positive constants are denoted by c, and their values may change from line to line. Also, for $\rho \geqslant 1$ and $t\in [0,T]$, the usual norms on ${L^{\rho }}([0,1])$ and ${\mathcal{H}_{t}}:={L^{2}}([0,t]\times [0,1])$ are respectively denoted by $\| \cdot {\| _{\rho }}$ and $\| \cdot {\| _{{\mathcal{H}_{t}}}}$.
2 Preliminaries
Let $\{W(t,x),t\in [0,T],x\in [0,1]\}$ be a space-time Brownian sheet on a filtered probability space $(\varOmega ,\mathcal{F},{({\mathcal{F}_{t}})}_{t\in [0,T]},\mathbb{P})$, that is, a zero-mean Gaussian field with covariance function given by
\[ \mathbb{E}\big[W(t,x)W(s,y)\big]=(t\wedge s)(x\wedge y),\hspace{1em}s,t\in [0,T],\hspace{2.5pt}x,y\in [0,1].\]
For each $t\in [0,T]$, ${\mathcal{F}_{t}}$ is the completion of the σ-field generated by the family of random variables $\{W(s,x),0\leqslant s\leqslant t,x\in [0,1]\}$.
A rigorous meaning to the solution of (1) is given by a jointly measurable and ${\mathcal{F}_{t}}$-adapted process ${u^{\varepsilon }}:=\{{u^{\varepsilon }}(t,x);(t,x)\in [0,T]\times [0,1]\}$ satisfying, for almost all $\omega \in \varOmega $ and all $t\in [0,T]$, the following evolution equation:
(5)
\[\begin{aligned}{}{u^{\varepsilon }}(t,x)& ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy-{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{\varepsilon }}(s,y)\big)^{2}}dyds\\ {} & \hspace{1em}+\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(ds,dy),\end{aligned}\]
for $dx$-almost all $x\in [0,1]$, where ${G_{t}}(\cdot ,\cdot )$ denotes the Green kernel corresponding to the operator $\frac{\partial }{\partial t}-\Delta $ with the Dirichlet boundary conditions. The stochastic integral in (5) is understood in the Walsh sense; see [25].
By Theorem 2.1 in [17], there exists a unique ${L^{2}}([0,1])$-valued continuous stochastic process $\{{u^{\varepsilon }}(t,\cdot ),t\in [0,T]\}$ satisfying the equation (5).
The deterministic equation (4), obtained when the parameter ε tends to zero, can be written in the following integral form:
(6)
\[ {u^{0}}(t,x)={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy-{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{0}}(s,y)\big)^{2}}dyds.\]
Since (6) corresponds to $\sigma \equiv 0$ in the degenerate case studied in [17], it admits a unique solution ${u^{0}}$ belonging to $C([0,T];{\mathrm{L}^{2}}([0,1]))$. Moreover, the continuity of ${u^{0}}$ on the compact set $[0,T]$ implies that
\[ \underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{0}}(t,\cdot )\big\| _{2}^{q}}\mathrm{<}\infty \]
for all $q\geqslant 2$.
We now recall some estimates of the Green kernel function G, as stated in [17] and [22], that will be used in the sequel.
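To make the deterministic limit concrete, the following self-contained numerical sketch (ours, not part of the paper's argument) integrates the equation (4) with an explicit finite-difference scheme; the grid sizes, the time horizon and the initial profile are arbitrary illustrative choices.

```python
import numpy as np

# Explicit finite differences for the deterministic Burgers equation (4):
#   u_t = u_xx + (1/2) d/dx (u^2),  u(t,0) = u(t,1) = 0.
J = 100                       # number of spatial intervals (illustrative)
dx = 1.0 / J
dt = 0.4 * dx**2              # respects the diffusive stability bound dt <= dx^2/2
x = np.linspace(0.0, 1.0, J + 1)
u = np.sin(np.pi * x)         # a continuous initial condition with u_0(0) = u_0(1) = 0

for _ in range(int(0.1 / dt)):                         # integrate up to T = 0.1
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2     # discrete Laplacian
    conv = (u[2:]**2 - u[:-2]**2) / (2.0 * dx)         # centred d/dx(u^2)
    u[1:-1] += dt * (lap + 0.5 * conv)
    u[0] = u[-1] = 0.0                                 # Dirichlet boundary conditions

assert u[0] == 0.0 and u[-1] == 0.0
assert np.max(np.abs(u)) < 1.0
```

The maximum principle for the viscous Burgers equation is visible numerically: the sup-norm of the profile never exceeds that of the initial condition, while the Dirichlet boundary stays pinned at zero.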
Lemma 2.1.
Let G denote the Green kernel corresponding to the operator $\frac{\partial }{\partial t}-\Delta $ with the Dirichlet boundary conditions. Then, we have
-
i) for any $t\in [0,T]$ and $y\in [0,1]$: ${\displaystyle\int _{0}^{1}}{G_{t}}(x,y)dx\leqslant 1$;
-
ii) for any $t\in [0,T]$ and $\frac{1}{2}\mathrm{<}\beta \mathrm{<}\frac{3}{2}$: ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}|{\partial _{x}}{G_{t-s}}(x,y){|^{\beta }}dxds\leqslant {c_{\beta ,T}}$, where ${c_{\beta ,T}}$ is a constant depending only on T and β.
Moreover, there exists a constant c, depending only on T, such that for all $x,y,z\in [0,1]$ and $t,{t^{\prime }}\in [0,T]$ with $0\leqslant t\leqslant {t^{\prime }}\leqslant T$,
-
iii) ${\displaystyle\int _{t}^{{t^{\prime }}}}{\displaystyle\int _{0}^{1}}{G_{{t^{\prime }}-s}^{2}}(x,y)dxds\leqslant c\sqrt{{t^{\prime }}-t}$ and ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dxds\leqslant c$;
-
iv) ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}{[{G_{t-s}}(x,y)-{G_{{t^{\prime }}-s}}(x,y)]^{2}}dxds\leqslant c\sqrt{{t^{\prime }}-t}$;
-
v) ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}{[{G_{s}}(x,y)-{G_{s}}(x,z)]^{2}}dxds\leqslant c|y-z|$.
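These estimates can be sanity-checked numerically. The sketch below (ours, not from the paper) uses the classical sine-series representation of the Dirichlet heat kernel on $[0,1]$, $G_t(x,y)=2\sum_{n\geqslant 1}e^{-n^2\pi^2 t}\sin (n\pi x)\sin (n\pi y)$, and verifies its symmetry and the fact that its total mass does not exceed 1 (heat escapes through the absorbing boundary); the truncation level and grid are arbitrary choices.

```python
import numpy as np

def green_dirichlet(t, x, y, n_terms=200):
    """Dirichlet heat kernel on [0,1] for d/dt - d^2/dx^2 (truncated sine series)."""
    n = np.arange(1, n_terms + 1)
    return 2.0 * np.sum(np.exp(-(n * np.pi)**2 * t)
                        * np.sin(n * np.pi * x) * np.sin(n * np.pi * y))

t = 0.01
# symmetry: G_t(x,y) = G_t(y,x)
assert abs(green_dirichlet(t, 0.3, 0.7) - green_dirichlet(t, 0.7, 0.3)) < 1e-12

# total mass is at most 1: a Brownian particle may be absorbed at the boundary
ys = np.linspace(0.0, 1.0, 2001)
vals = np.array([green_dirichlet(t, 0.5, y) for y in ys])
mass = np.sum((vals[1:] + vals[:-1]) / 2.0) * (ys[1] - ys[0])   # trapezoid rule
assert 0.9 < mass < 1.0
```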
3 Moderate deviations
3.1 Framework and the main result
According to Varadhan [24] and [3], a crucial step toward the large deviations principle is the Laplace principle. Therefore, we will later focus on establishing this principle, which we formulate in the following definition.
Definition 3.1 (Laplace principle).
A family of random variables $\{{X^{\varepsilon }};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ taking values in a Polish space $\mathcal{E}$ is said to satisfy the Laplace principle with speed ${\lambda ^{2}}(\varepsilon )$ and rate function $I:\mathcal{E}\longrightarrow [0,\infty ]$ if for any bounded continuous function $F:\mathcal{E}\to \mathbb{R}$, we have
\[ \underset{\varepsilon \longrightarrow 0}{\lim }\frac{1}{{\lambda ^{2}}(\varepsilon )}\log \mathbb{E}\Big[\exp \big(-{\lambda ^{2}}(\varepsilon )F\big({X^{\varepsilon }}\big)\big)\Big]=-\underset{h\in \mathcal{E}}{\inf }\big\{F(h)+I(h)\big\},\]
where $\mathbb{E}$ is the expectation with respect to $\mathbb{P}$.
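As a toy illustration of Definition 3.1 (ours, not part of the paper), take the one-dimensional family ${X^{\varepsilon }}=\sqrt{\varepsilon }Z$ with $Z\sim N(0,1)$, speed $1/\varepsilon $ and rate function $I(x)={x^{2}}/2$; the scaled log-Laplace functionals can then be evaluated by quadrature and compared with the variational limit. The test function F below is an arbitrary bounded continuous choice.

```python
import numpy as np

# Laplace principle for X^eps = sqrt(eps) * Z, Z ~ N(0,1):
#   eps * log E[exp(-F(X^eps)/eps)]  -->  -inf_x { F(x) + x^2/2 }  as eps -> 0.
F = np.tanh                          # an arbitrary bounded continuous test function

xs = np.linspace(-10.0, 10.0, 200001)
dx = xs[1] - xs[0]

def scaled_log_laplace(eps):
    # eps * log E[exp(-F(sqrt(eps) Z)/eps)], computed with a log-sum-exp trick
    a = -(F(xs) + xs**2 / 2.0) / eps
    m = a.max()
    return eps * (m + np.log(np.exp(a - m).sum() * dx / np.sqrt(2.0 * np.pi * eps)))

target = -np.min(F(xs) + xs**2 / 2.0)          # -inf_x { F(x) + I(x) }
assert abs(scaled_log_laplace(1e-3) - target) < 0.05
```

The quadrature value at $\varepsilon ={10^{-3}}$ is already close to the variational limit, which is the content of the Laplace principle at this toy scale.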
In the context of the weak convergence approach, the proof of the Laplace principle for functionals of the Brownian sheet is essentially based on the following variational representation formula, which was originally proved in [4].
Theorem 3.2.
Let $f:\mathcal{C}([0,T]\times [0,1];\mathbb{R})\longrightarrow \mathbb{R}$ be a bounded measurable mapping, and let ${\mathcal{P}_{2}}$ be the class of all predictable processes u such that $\| u{\| _{{\mathcal{H}_{T}}}}\mathrm{<}\infty $ a.s. Then
\[ -\log \mathbb{E}\big[{e^{-f(W)}}\big]=\underset{u\in {\mathcal{P}_{2}}}{\inf }\mathbb{E}\bigg[\frac{1}{2}\| u{\| _{{\mathcal{H}_{T}}}^{2}}+f\big(W+\mathcal{I}(u)\big)\bigg],\]
where
\[ \mathcal{I}(u)(t,x):={\int _{0}^{t}}{\int _{0}^{x}}u(s,y)dyds.\]
3.1.1 Sufficient conditions for the general Laplace principle
Here, we briefly describe the result needed, in our context, for proving the Laplace principle, and state our main result.
Let us first introduce some notations. For $\varepsilon \mathrm{>}0$, denote by ${\mathcal{G}^{\varepsilon }}:{\mathcal{E}_{0}}\times \mathcal{C}([0,T]\times [0,1];\mathbb{R})\to \mathcal{E}$ a measurable map, where ${\mathcal{E}_{0}}$ stands for a compact subspace of $\mathcal{E}$ in which the initial condition ${u_{0}}$ takes values, and let
(9)
\[ {X^{\varepsilon ,{u_{0}}}}:={\mathcal{G}^{\varepsilon }}\big({u_{0}},h(\varepsilon )W\big).\]
Later, we will state sufficient conditions for the Laplace principle for ${X^{\varepsilon ,{u_{0}}}}$ to hold uniformly in ${u_{0}}$ over compact subsets of ${\mathcal{E}_{0}}$.
For any positive integer N, we introduce
\[ {S^{N}}:=\big\{\phi \in {\mathcal{H}_{T}}:\| \phi {\| _{{\mathcal{H}_{T}}}^{2}}\leqslant N\big\}\]
and
\[ {\mathcal{P}_{2}^{N}}:=\big\{v(\omega )\in {\mathcal{P}_{2}}:v(\omega )\in {S^{N}},P\text{-a.s.}\big\}.\]
It is worth noticing that the space ${S^{N}}$ is a compact metric space when equipped with the weak topology of ${L^{2}}([0,T]\times [0,1])$, and that ${\mathcal{P}_{2}^{N}}$ is the space of controls, which plays a central role in the weak convergence approach. For $u\in {\mathcal{H}_{T}}$, define the element $\mathcal{I}(u)$ of $\mathcal{C}([0,T]\times [0,1];\mathbb{R})$ by
\[ \mathcal{I}(u)(t,x):={\int _{0}^{t}}{\int _{0}^{x}}u(s,y)dyds.\]
We are now in a position to state the following result, due to Budhiraja et al. [5], giving sufficient conditions for the Laplace principle to hold.
Proposition 3.1 (Theorem 7 in [5]).
Assume that there exists a measurable map
\[ {\mathcal{G}^{0}}:{\mathcal{E}_{0}}\times \mathcal{C}\big([0,T]\times [0,1];\mathbb{R}\big)\to \mathcal{E},\]
such that the following hold:
-
(A1) For any integer $M\mathrm{>}0$ and any families $\{{v^{\varepsilon }};\varepsilon \mathrm{>}0\}\subset {\mathcal{P}_{2}^{M}}$ and $\{{u_{0}^{\varepsilon }}\}\subset {\mathcal{E}_{0}}$ such that ${v^{\varepsilon }}\to v$ in distribution (as ${S^{M}}$-valued random elements) and ${u_{0}^{\varepsilon }}\to {u_{0}}$, as $\varepsilon \to 0$, we have ${\mathcal{G}^{\varepsilon }}({u_{0}^{\varepsilon }},W+h(\varepsilon )\mathcal{I}({v^{\varepsilon }}))\to {\mathcal{G}^{0}}({u_{0}},\mathcal{I}(v))$ in distribution as $\varepsilon \to 0$;
-
(A2) For any integer $M\hspace{0.1667em}\mathrm{>}\hspace{0.1667em}0$ and compact set $K\hspace{0.1667em}\subset \hspace{0.1667em}{\mathcal{E}_{0}}$, the set ${\varGamma _{M,K}}\hspace{0.1667em}:=\hspace{0.1667em}\{{\mathcal{G}^{0}}({u_{0}},\mathcal{I}(u));u\in {S^{M}},{u_{0}}\in K\}$ is a compact subset of $\mathcal{E}$.
Then, the family $\{{X^{\varepsilon ,{u_{0}}}};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ defined by (9) satisfies the Laplace principle on $\mathcal{E}$ with speed ${\lambda ^{2}}(\varepsilon )$ and rate function ${I_{{u_{0}}}}$ given, for any $h\in \mathcal{E}$ and ${u_{0}}\in {\mathcal{E}_{0}}$, by
\[ {I_{{u_{0}}}}(h):=\inf \Big\{\frac{1}{2}\| u{\| _{{\mathcal{H}_{T}}}^{2}}:u\in {\mathcal{H}_{T}},\hspace{2.5pt}h={\mathcal{G}^{0}}\big({u_{0}},\mathcal{I}(u)\big)\Big\},\]
where the infimum over an empty set is taken to be ∞.
3.1.2 Controlled processes for SPDEs (1)
In this subsection, we adapt the general scheme described above to study moderate deviations for the equation (1).
We denote by $\mathcal{E}={\mathcal{E}_{0}}:=C([0,T];{L^{2}}([0,1]))$ the space of solutions of (1). As we are interested in proving the Laplace principle for ${\bar{u}^{\varepsilon }}(t,x)$ defined by (2), we interpret ${\bar{u}^{\varepsilon }}$ as a functional of the Brownian sheet W. Indeed, using (5) and (6) we deduce that ${\bar{u}^{\varepsilon }}(t,x)$ satisfies, for all $\omega \in \varOmega $ and all $t\in [0,T]$, the following equation:
(11)
\[\begin{aligned}{}{\bar{u}^{\varepsilon }}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon }}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon }}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon }}(s,y){u^{0}}(s,y)dyds,\end{aligned}\]
for $dx$-almost all $x\in [0,1]$. This implies (see Theorem IV.9.1 of [19]) the existence of a measurable mapping
\[ {\mathcal{G}^{\varepsilon }}:C\big([0,1];\mathbb{R}\big)\times C\big([0,T]\times [0,1];\mathbb{R}\big)\to C\big([0,T];{L^{2}}\big([0,1]\big)\big),\]
such that ${\bar{u}^{\varepsilon }}={\mathcal{G}^{\varepsilon }}({u_{0}},W)$.
As a first step toward the conditions (A1) and (A2) stated in Proposition 3.1, we define, for ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$,
(12)
\[ {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}:={\mathcal{G}^{\varepsilon }}\big({u_{0}},W+h(\varepsilon )\mathcal{I}\big({v^{\varepsilon }}\big)\big).\]
In Proposition 3.2 below we will establish that ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ is the unique solution of the following stochastic controlled analogue of equation (11):
(13)
\[\begin{aligned}{}{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)dyds.\end{aligned}\]
We will call it the controlled process. Moreover, for any $v\in {S^{N}}$, we associate to (13) the following skeleton zero-noise equation:
(14)
\[\begin{aligned}{}{\bar{u}^{v}}(t,x)& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{v}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)v(s,y)dyds.\end{aligned}\]
Existence and uniqueness of the solution ${\bar{u}^{v}}$ of (14) is obtained in Proposition 3.3 below, and thereby we define the map ${\mathcal{G}^{0}}$ by ${\mathcal{G}^{0}}({u_{0}},\mathcal{I}(v)):={\bar{u}^{v}}$.
With these notations in mind, the main result of this section is stated in the following theorem.
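The coefficients in (11) come from expanding the quadratic term: writing ${u^{\varepsilon }}={u^{0}}+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon }}$ (recall $a(\varepsilon )=\sqrt{\varepsilon }h(\varepsilon )$), we get
\[ \frac{{({u^{\varepsilon }})^{2}}-{({u^{0}})^{2}}}{\sqrt{\varepsilon }h(\varepsilon )}=2{\bar{u}^{\varepsilon }}{u^{0}}+\sqrt{\varepsilon }h(\varepsilon ){\big({\bar{u}^{\varepsilon }}\big)^{2}},\]
so that subtracting (6) from (5) and dividing by $a(\varepsilon )$ produces exactly the two drift terms in (11).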
Theorem 3.3.
Assume that ${u_{0}}$ is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then the family of processes $\{{\bar{u}^{\varepsilon }};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ satisfies an LDP on the space $C([0,T];{L^{2}}([0,1]))$ with speed ${\lambda ^{2}}(\varepsilon )$ and rate function given by
\[ I(h):=\inf \Big\{\frac{1}{2}\| v{\| _{{\mathcal{H}_{T}}}^{2}}:v\in {\mathcal{H}_{T}},\hspace{2.5pt}h={\bar{u}^{v}}\Big\},\hspace{1em}h\in C\big([0,T];{L^{2}}\big([0,1]\big)\big).\]
Remark 3.4.
Note that the conclusion of Theorem 3.3 remains valid for a rather large class of SPDEs containing the stochastic Burgers equation. Namely, consider the following class of SPDEs introduced by Gyöngy in [17]:
(17)
\[\begin{aligned}{}\frac{\partial {u^{\varepsilon }}}{\partial t}(t,x)& =\frac{{\partial ^{2}}}{\partial {x^{2}}}{u^{\varepsilon }}(t,x)+\frac{\partial }{\partial x}g\big({u^{\varepsilon }}(t,x)\big)+f\big({u^{\varepsilon }}(t,x)\big)\\ {} & \hspace{1em}+\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(t,x)\big)\dot{W}(t,x),\hspace{1em}(t,x)\in [0,T]\times [0,1],\end{aligned}\]
with Dirichlet’s boundary conditions ${u^{\varepsilon }}(t,0)={u^{\varepsilon }}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{\varepsilon }}(0,x)={u_{0}}(x)$ for $x\in [0,1]$. Suitable conditions on the coefficients f, g and σ (for instance, a quadratic growth assumption on the nonlinear coefficient g) bring us back to the case of the stochastic Burgers equation considered in this paper. We note that the papers closest to ours are two recent works by S. Hu, R. Li and X. Wang [18] and by R. Zhang and J. Xiong [28]; in particular, these authors established a moderate deviations principle for the class (17). We learned about these works after we finished the first version of this paper.
3.2 Proof of the main result
We basically follow the same ideas as in [5] and [23]. According to Proposition 3.1, it suffices to check that conditions (A1) and (A2) are fulfilled. For (A1), we will establish the well-posedness, tightness and convergence of the controlled processes. Condition (A2), which yields that I is a rate function, will follow from the continuity of the map ${\mathcal{G}^{0}}$ with respect to the weak topology.
The proof of (A1) will be done in several steps.
Step 1: Existence and uniqueness of controlled and limiting processes.
Proposition 3.2.
Assume that σ is bounded and globally Lipschitz. For any ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$, the equation (13) admits a unique solution ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$.
Proof.
For ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$, set
\[\begin{aligned}{}d{Q^{\varepsilon ,{v^{\varepsilon }}}}& :=\exp \Bigg\{-h(\varepsilon ){\int _{0}^{T}}{\int _{0}^{1}}{v^{\varepsilon }}(s,y)W(ds,dy)\\ {} & \hspace{1em}-\frac{1}{2}{h^{2}}(\varepsilon ){\int _{0}^{T}}{\int _{0}^{1}}{v^{\varepsilon }}{(s,y)^{2}}dyds\Bigg\}dP.\end{aligned}\]
Since ${Q^{\varepsilon ,{v^{\varepsilon }}}}$ is defined through an exponential martingale, it is a probability measure on Ω. And thus, by the Girsanov theorem the process $\widetilde{W}$ defined by
\[ \widetilde{W}(dt,dx)=W(dt,dx)+h(\varepsilon ){v^{\varepsilon }}(t,x)dtdx\]
is a space-time white noise under the probability measure ${Q^{\varepsilon ,{v^{\varepsilon }}}}$. Plugging $\widetilde{W}(dt,dx)$ into (13), we obtain (11) with $\widetilde{W}(dt,dx)$ in place of $W(dt,dx)$. Now, if u denotes the unique solution of (11) driven by $\widetilde{W}(dt,dx)$ on the space $(\varOmega ,\mathcal{F},{Q^{\varepsilon ,{v^{\varepsilon }}}})$, then u satisfies (13) ${Q^{\varepsilon ,{v^{\varepsilon }}}}$-a.s., and hence, since the two probability measures are equivalent, u satisfies (13) P-a.s. For the uniqueness, if ${u_{1}}$ and ${u_{2}}$ are two solutions of (13) on $(\varOmega ,\mathcal{F},P)$, then ${u_{1}}$ and ${u_{2}}$ are solutions of (11) driven by $\widetilde{W}(dt,dx)$ on $(\varOmega ,\mathcal{F},{Q^{\varepsilon ,{v^{\varepsilon }}}})$. By the uniqueness of the solution of (11), we obtain ${u_{1}}={u_{2}}$ ${Q^{\varepsilon ,{v^{\varepsilon }}}}$-a.s., and thus ${u_{1}}={u_{2}}$ P-a.s. by the equivalence of the probability measures. □
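The measure change above can be visualized in a one-dimensional caricature (ours, not from the paper): under $dQ=\exp (-vZ-{v^{2}}/2)dP$ with $Z\sim N(0,1)$, the shifted variable $Z+v$ is again standard Gaussian. The quadrature check below uses arbitrary choices of drift v and test function f.

```python
import numpy as np

# Scalar Girsanov identity: E_Q[f(Z + v)] = E_P[f(Z)] for
# dQ = exp(-v Z - v^2/2) dP and Z ~ N(0,1).
zs = np.linspace(-12.0, 12.0, 400001)
dz = zs[1] - zs[0]
phi = np.exp(-zs**2 / 2.0) / np.sqrt(2.0 * np.pi)    # standard normal density

v = 1.7                                              # arbitrary drift
f = lambda x: np.cos(x) + x**2                       # arbitrary test function

lhs = np.sum(np.exp(-v * zs - v**2 / 2.0) * f(zs + v) * phi) * dz   # E_Q[f(Z+v)]
rhs = np.sum(f(zs) * phi) * dz                                       # E_P[f(Z)]
assert abs(lhs - rhs) < 1e-8
```

The identity holds because $\exp (-vz-{v^{2}}/2)\varphi (z)=\varphi (z+v)$ pointwise, which is the density analogue of the exponential-martingale argument used in the proof.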
Proposition 3.3.
Assume that σ is bounded and globally Lipschitz. For any $v\in {S^{N}}$, for some $N\in \mathbb{N}$, the equation (14) admits a unique solution ${\bar{u}^{v}}$ belonging to $C([0,T];{\mathrm{L}^{2}}([0,1]))$. Moreover, for any $q\geqslant 2$,
\[ \underset{v\in {S^{N}}}{\sup }\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{v}}(t,\cdot )\big\| _{2}^{q}}\mathrm{<}\infty .\]
Proof.
The proof follows from a standard fixed point argument, and for the convenience of the reader, we include it in the Appendix. □
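The fixed-point mechanism can be illustrated on a scalar toy problem (ours, not part of the paper): a Volterra integral equation $u(t)=g(t)+{\int _{0}^{t}}k(t,s)u(s)ds$ whose Picard iterates converge geometrically; the kernel and forcing below are arbitrary stand-ins for the Green-kernel terms of (14).

```python
import numpy as np

# Picard iteration for the Volterra equation u(t) = g(t) + int_0^t k(t,s) u(s) ds.
T, n = 1.0, 1000
ts = np.linspace(0.0, T, n + 1)
dt = ts[1] - ts[0]
g = np.sin(ts)                                   # arbitrary continuous forcing
k = lambda t, s: np.exp(-(t - s))                # a mild, causal kernel

K = np.tril(k(ts[:, None], ts[None, :]))         # lower-triangular: Volterra causality
u = np.zeros_like(ts)
for _ in range(200):                             # Picard iterations (a contraction here)
    u = g + (K @ u) * dt

# the limit solves the discretized equation up to rounding error
residual = np.max(np.abs(u - (g + (K @ u) * dt)))
assert residual < 1e-10
```

Here the iteration map has Lipschitz constant roughly $1-{e^{-T}}\mathrm{<}1$ in the sup-norm, so the iterates converge to the unique fixed point, mirroring the contraction argument behind Proposition 3.3.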
Step 2: Tightness of the family ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon \mathrm{>}0}}$ in $C([0,T];{L^{2}}([0,1]))$.
Let ${({v^{\varepsilon }})_{\varepsilon }}$ be a family of elements of ${\mathcal{P}_{2}^{N}}$ such that ${v^{\varepsilon }}\to v$ in distribution, as ${S^{N}}$-valued random elements, when $\varepsilon \to 0$.
We have the following proposition.
Proposition 3.4.
Assume that ${u_{0}}$ is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ is tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
Recall that
(19)
\[\begin{aligned}{}{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)dyds\\ {} & =:{\sum \limits_{i=1}^{4}}{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x),\end{aligned}\]
where ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)$, $i=1,2,3,4$, stands for the ith summand on the right-hand side of the above equation.
We first consider the cases $i=1$ and $i=4$. Using Theorem 4.10 of Chapter 2 in [20], the following lemma provides sufficient conditions for tightness.
Lemma 3.5.
Assume the same conditions as in Proposition 3.4. For $i=1$ or 4, we have
(20)
\[ \underset{\zeta \longrightarrow +\infty }{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }P\big(\big|{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|\mathrm{>}\zeta \big)=0,\hspace{1em}\textit{for any}\hspace{2.5pt}(t,x)\in [0,T]\times [0,1],\]
and for any $\zeta \mathrm{>}0$,
(21)
\[ \underset{\delta \longrightarrow 0}{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }P\Big(\underset{|t-{t^{\prime }}|+|x-y|\leqslant \delta }{\sup }\big|{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|\mathrm{>}\zeta \Big)=0.\]
In particular, the families ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and ${({I_{4}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ are tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
Let $x,y\in [0,1]$ and $t,{t^{\prime }}\in [0,T]$ such that ${t^{\prime }}\leqslant t$. To prove (20) and (21), it is enough to exhibit upper bounds for the square moments of ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)$ and ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}({t^{\prime }},y)$ for $i=1$ and $i=4$.
Using the Burkholder–Davis–Gundy inequality, the boundedness of σ, Lemma 2.1 and the condition (3), we infer that
(22)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|^{2}}\big)\\ {} & \hspace{1em}\leqslant c.{h^{-2}}(\varepsilon ).\mathbf{E}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y){\sigma ^{2}}\big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)dyds\\ {} & \hspace{1em}\leqslant c.{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds,\end{aligned}\]
which is finite. On the other hand, the same arguments as above yield
(23)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|^{2}}\big)\\ {} & \hspace{1em}={h^{-2}}(\varepsilon ).\mathbf{E}\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}\times \sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big)W(ds,dz)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}}(y,z)\sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big)W(ds,dz)\Bigg\}{^{2}}\\ {} & \hspace{1em}\leqslant c\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}{\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]^{2}}dzds+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(y,z)dzds\Bigg\}\\ {} & \hspace{1em}\leqslant c\big({\big|t-{t^{\prime }}\big|^{\frac{1}{2}}}+{|x-y|^{\frac{1}{2}}}\big).\end{aligned}\]
Therefore, for $i=1$, (20) and (21) hold by (22) and (23), respectively.
To deal with ${({I_{4}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we use the Cauchy–Schwarz inequality and Lemma 2.1 to write
(24)
\[\begin{aligned}{}\mathbf{E}\big({\big|{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|^{2}}\big)& \leqslant c\mathbf{E}{\Bigg({\int _{0}^{t}}{\int _{0}^{1}}\big|{G_{t-s}}(x,y){v^{\varepsilon }}(s,y)\big|dyds\Bigg)^{2}}\\ {} & \leqslant c{\big\| {v^{\varepsilon }}\big\| _{{\mathcal{H}_{T}}}^{2}}.{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds\\ {} & \leqslant c(N),\end{aligned}\]
where $c(N)$ is a constant depending on N. Similarly,
(25)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|^{2}}\big)\\ {} & \hspace{1em}=\mathbf{E}\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]\sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big){v^{\varepsilon }}(s,z)dzds\\ {} & \hspace{2em}+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}}(y,z)\sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big){v^{\varepsilon }}(s,z)dzds\Bigg\}{^{2}}\\ {} & \hspace{1em}\leqslant c(N)\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}{\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]^{2}}dzds+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(y,z)dzds\Bigg\}\\ {} & \hspace{1em}\leqslant c(N)\big({\big|t-{t^{\prime }}\big|^{\frac{1}{2}}}+{|x-y|^{\frac{1}{2}}}\big).\end{aligned}\]
Therefore, for $i=4$, (20) and (21) hold by (24) and (25), respectively. □
For the tightness of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we follow an idea introduced in [17], essentially based on Lemma 4.3 in the Appendix. More precisely, we state the following lemma.
Lemma 3.6.
Assume the same conditions as in Proposition 3.4. Then, the families ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and ${({I_{3}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ are uniformly tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
The proof of the tightness of ${({I_{3}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ will be omitted, since it can be carried out similarly to that of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$.
To show the tightness of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we will apply Lemma 4.3 with $q=1$, $\rho =2$ and ${\zeta _{\varepsilon }}(t,\cdot ):=\sqrt{\varepsilon }h(\varepsilon ){({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})^{2}}(t,\cdot )$. Set
\[ {\theta _{\varepsilon }}:=\sqrt{\varepsilon }h(\varepsilon )\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\big({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}\big)^{2}}(t,\cdot )\big\| _{1}}=\sqrt{\varepsilon }h(\varepsilon )\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}.\]
According to Lemma 4.3, it suffices to show that ${({\theta _{\varepsilon }})_{\varepsilon }}$ is bounded in probability, i.e.
(26)
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }\mathbb{P}({\theta _{\varepsilon }}\geqslant c)=0.\]
Taking into account the condition (3), there exists ${\varepsilon _{0}}\mathrm{>}0$ such that $\sqrt{\varepsilon }h(\varepsilon )\leqslant 1$ for all $\varepsilon \leqslant {\varepsilon _{0}}$. Consequently,
\[\begin{aligned}{}\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}({\theta _{\varepsilon }}\geqslant c)& =\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}\geqslant \frac{c}{\sqrt{\varepsilon }h(\varepsilon )}\bigg)\\ {} & \leqslant \underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}\geqslant c\Big).\end{aligned}\]
Then, to prove (26), it is enough to show that
(27)
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant c\Big)=0.\]
For this purpose, returning to (19), we note that ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ corresponds to the following SPDE:
(28)
\[\begin{aligned}{}\frac{\partial {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}}{\partial t}(t,x)& =\Delta {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)+\frac{\partial {g_{\varepsilon }}}{\partial x}\big(t,x,{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)+{f_{\varepsilon }}\big(t,x,{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)\\ {} & \hspace{1em}+{\sigma _{\varepsilon }}\big(t,x,{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)\dot{W}(t,x),\end{aligned}\]
where
\[\begin{aligned}{}{g_{\varepsilon }}(t,x,r)& :=-\sqrt{\varepsilon }h(\varepsilon ){r^{2}}-2r{u^{0}}(t,x),\\ {} {f_{\varepsilon }}(t,x,r)& :=\sigma ({u^{0}}(t,x)+\sqrt{\varepsilon }h(\varepsilon )r){v^{\varepsilon }}(t,x)\hspace{1em}\text{and}\\ {} {\sigma _{\varepsilon }}(t,x,r)& :=\frac{1}{h(\varepsilon )}\sigma ({u^{0}}(t,x)+\sqrt{\varepsilon }h(\varepsilon )r).\end{aligned}\]
According to Theorem 2.1 in [17], the continuity of the initial condition ${u_{0}}$ implies the continuity of the solution ${u^{0}}$ of the equation (4) on the compact set $[0,T]\times [0,1]$. Consequently, ${u^{0}}$ is bounded.
This fact, combined with the condition (3), allows us to write the function ${g_{\varepsilon }}$ as a sum of two functions ${g_{\varepsilon }^{1}}$ and ${g_{\varepsilon }^{2}}$ satisfying quadratic and linear growth conditions, respectively, uniformly in $\varepsilon \leqslant {\varepsilon _{0}}$ for some ${\varepsilon _{0}}\mathrm{>}0$.
Using again the condition (3) and the hypotheses on the function σ, we see that ${\sigma _{\varepsilon }}$ is bounded and globally Lipschitz, uniformly in $\varepsilon \leqslant {\varepsilon _{0}}$.
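For concreteness, one admissible choice of such a decomposition (an illustration consistent with the conditions just described, not necessarily the exact splitting used in [17]) is
\[ {g_{\varepsilon }^{1}}(t,x,r):=-\sqrt{\varepsilon }h(\varepsilon ){r^{2}}\hspace{1em}\text{and}\hspace{1em}{g_{\varepsilon }^{2}}(t,x,r):=-2r{u^{0}}(t,x),\]
so that, for all $\varepsilon \leqslant {\varepsilon _{0}}$ with $\sqrt{\varepsilon }h(\varepsilon )\leqslant 1$,
\[ \big|{g_{\varepsilon }^{1}}(t,x,r)\big|\leqslant {r^{2}}\hspace{1em}\text{and}\hspace{1em}\big|{g_{\varepsilon }^{2}}(t,x,r)\big|\leqslant 2\Big(\underset{[0,T]\times [0,1]}{\sup }\big|{u^{0}}\big|\Big)|r|.\]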
Thus, the equation (28) belongs to the class of semi-linear SPDEs studied in [17], for which the existence and uniqueness of the solution ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ is shown by an approximation procedure. This procedure consists in defining a sequence of truncated equations, and establishing existence and some convergence results for the corresponding sequence of solutions ${({\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}})_{n}}$; see [14, 17, 23]. In fact, in the course of the proof of Theorem 2.1 in [17] it was shown that
(29)
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg)=0,\]
and that ${({\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}})_{n}}$ converges in probability in $C([0,T];{L^{2}}([0,1]))$ to the solution ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ of (19).
Now, observe that
\[\begin{aligned}{}& \underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}\\ {} & \hspace{1em}\leqslant \underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg)\\ {} & \hspace{2em}+\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg).\end{aligned}\]
Then, as c tends to infinity, the estimate (29) yields
\[\begin{aligned}{}& \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}\\ {} & \hspace{1em}\leqslant \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg).\end{aligned}\]
By letting n tend to infinity and using the convergence in probability of ${\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}$ to ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ we get
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}=0.\]
Hence, by applying Lemma 4.3 we obtain the tightness property for ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$.
Step 3: Convergence to the limit equation.
Having shown the tightness of each ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}$ for $i=1,2,3,4$, by Prohorov’s theorem, we can extract a subsequence, that we continue to denote by ε, and along which each of these processes and ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ converge in distribution (as ${S^{N}}$-valued random elements) in $C([0,T];{L^{2}}([0,1]))$ to limits denoted respectively by ${I_{i}^{0,v}}$ for $i=1,2,3,4$, and ${\bar{u}^{0,v}}$. We will show that
\[\begin{aligned}{}{I_{1}^{0,v}}& =0,\\ {} {I_{2}^{0,v}}& =0,\\ {} {I_{3}^{0,v}}& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{0,v}}(s,y){u^{0}}(s,y)dyds,\\ {} {I_{4}^{0,v}}& ={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)v(s,y)dyds,\end{aligned}\]
and the proof will be completed by the uniqueness result given in Proposition 3.3.
For $i=1$, Lemma 3 in [5] ensures the convergence of ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ to 0 in probability in $C([0,T]\times [0,1])$. Since convergence in probability in $C([0,T]\times [0,1])$ implies convergence in probability in $C([0,T];{L^{2}}([0,1]))$, the family ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges to 0 in probability in $C([0,T];{L^{2}}([0,1]))$ as well.
To handle the convergence of each of the other terms, we invoke the Skorohod representation theorem and assume the almost sure convergence on a larger common probability space.
For $i=2$, applying Lemma 4.1 with $\rho =2$ and $\lambda =1$, we deduce that there exists a constant $c\mathrm{>}0$ such that
\[ {\big\| {I_{2}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot ){\| _{2}^{2}}ds.\]
Since ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. in $C([0,T];{L^{2}}([0,1]))$ to ${\bar{u}^{0,v}}$, there exists ${\varepsilon _{0}}\mathrm{>}0$ small enough such that
(30)
\[ \underset{\varepsilon \in ]0,{\varepsilon _{0}}]}{\sup }\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big\| _{2}}\mathrm{<}\infty ,\hspace{1em}\text{a.s.}\]
Hence, there exists a constant $c\mathrm{>}0$ such that for all $0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}$
\[ \underset{t\in [0,T]}{\sup }{\big\| {I_{2}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ),\hspace{1em}\text{a.s.}\]
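For the reader's convenience, the passage from the previous bound and the estimate (30) to the last display relies on the elementary computation
\[ {\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}ds=4{t^{\frac{1}{4}}}\leqslant 4{T^{\frac{1}{4}}},\]
so that, almost surely, for all $0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}$,
\[ \underset{t\in [0,T]}{\sup }{\big\| {I_{2}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\leqslant 4c{T^{\frac{1}{4}}}\sqrt{\varepsilon }h(\varepsilon ){\Big(\underset{\varepsilon \in ]0,{\varepsilon _{0}}]}{\sup }\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big\| _{2}}\Big)^{2}}.\]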
Thus, ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε tends to 0.
For $i=3$, let ${\tilde{I}_{3}^{0,v}}$ denote the right-hand side of the above expression for ${I_{3}^{0,v}}$. Applying again Lemma 4.1 and the Cauchy–Schwarz inequality, we conclude that there exists a constant $c\mathrm{>}0$ such that
\[\begin{aligned}{}{\big\| {I_{3}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{3}^{0,v}}(t,\cdot )\big\| _{2}}& \leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big){u^{0}}(s,\cdot )\big\| _{1}}ds\\ {} & \leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}{\big\| {u^{0}}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
Using the estimate (7) and the boundedness of ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ and ${\bar{u}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$, we get
\[\begin{aligned}{}& {\big\| {I_{3}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{3}^{0,v}}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}\underset{s\in [0,T]}{\sup }{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}ds\\ {} & \hspace{1em}\leqslant c\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}.\end{aligned}\]
Again, since ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. in $C([0,T];{L^{2}}([0,1]))$ to ${\bar{u}^{0,v}}$, we obtain the a.s. convergence of ${I_{3}^{\varepsilon ,{v^{\varepsilon }}}}$ to ${\tilde{I}_{3}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$. By the uniqueness of the limit and the continuity of ${\tilde{I}_{3}^{0,v}}$, we conclude that ${I_{3}^{0,v}}={\tilde{I}_{3}^{0,v}}$.
Concerning $i=4$, let ${\tilde{I}_{4}^{0,v}}$ denote the right-hand side of the above expression for ${I_{4}^{0,v}}$. We have
\[\begin{aligned}{}& {I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{4}^{0,v}}(t,\cdot )\\ {} & \hspace{1em}={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\sigma \big({u^{0}}(s,y)\big)v(s,y)\big]dyds\\ {} & \hspace{1em}={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\sigma \big({u^{0}}(s,y)\big)\big]{v^{\varepsilon }}(s,y)dyds\\ {} & \hspace{2em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[{v^{\varepsilon }}(s,y)-v(s,y)\big]\sigma \big({u^{0}}(s,y)\big)dyds\\ {} & \hspace{1em}=:{J_{4,1}^{\varepsilon }}(t,x)+{J_{4,2}^{\varepsilon }}(t,x).\end{aligned}\]
Then,
\[ {\big\| {I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{4}^{0,v}}(t,\cdot )\big\| _{2}}\leqslant {\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}+{\big\| {J_{4,2}^{\varepsilon }}(t,\cdot )\big\| _{2}}.\]
For ${J_{4,1}^{\varepsilon }}$, we use Lemma 4.1, the Lipschitz condition on σ and the Cauchy–Schwarz inequality to obtain
\[\begin{aligned}{}& {\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big(\sigma \big({u^{0}}(s,\cdot )+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big)-\sigma \big({u^{0}}(s,\cdot )\big)\big){v^{\varepsilon }}(s,\cdot )\big\| _{1}}ds\\ {} & \hspace{1em}\leqslant c\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big\| _{2}}{\big\| {v^{\varepsilon }}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
Since $({v^{\varepsilon }})\subset {\mathcal{P}_{2}^{N}}$, the estimate (30) implies that there exists a constant c depending on N such that for all $0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}$
\[ \underset{t\in [0,T]}{\sup }{\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ),\hspace{1em}\text{a.s.}\]
Therefore, ${J_{4,1}^{\varepsilon }}$ converges to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε goes to 0.
The proof of the convergence of ${J_{4,2}^{\varepsilon }}$ to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε goes to 0 will be omitted since it can be treated similarly to the case of the family $\{{K_{n}},n\geqslant 1\}$ defined below by (35).
Consequently, ${I_{4}^{\varepsilon ,{v^{\varepsilon }}}}$ converges to ${\tilde{I}_{4}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$, and by the uniqueness of the limit and the continuity of ${\tilde{I}_{4}^{0,v}}$, we conclude that ${I_{4}^{0,v}}={\tilde{I}_{4}^{0,v}}$.
Thus, by the convergence of both the process ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and each term ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}$ for $i=1,2,3,4$ along a subsequence, and by the uniqueness of the solution of the equation (14), we conclude that the condition (A1) in Proposition 3.1 holds.
Now, let us prove the condition (A2). As it was mentioned before, it suffices to check the continuity of the map ${\mathcal{G}^{0}}:{\mathcal{E}_{0}}\times {S^{N}}\longrightarrow C([0,T];{L^{2}}([0,1]))$ with respect to the weak topology. Let v, $({v_{n}})\subset {S^{N}}$ be such that for any $g\in {\mathcal{H}_{T}}$,
\[ \underset{n\longrightarrow +\infty }{\lim }{\int _{0}^{T}}{\int _{0}^{1}}{v_{n}}(t,x)g(t,x)dxdt={\int _{0}^{T}}{\int _{0}^{1}}v(t,x)g(t,x)dxdt.\]
We claim that
(31)
\[ \underset{n\longrightarrow +\infty }{\lim }\underset{t\in [0,T]}{\sup }{\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}=0.\]
Let $(t,x)\in [0,T]\times [0,1]$. The equation (14) implies
(32)
\[\begin{aligned}{}{\bar{u}^{{v_{n}}}}(t,x)-{\bar{u}^{v}}(t,x)& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds.\end{aligned}\]
Hence,
(33)
\[\begin{aligned}{}& {\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c\Bigg\{{\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(\cdot ,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\Bigg\| }_{2}\\ {} & \hspace{2em}+{\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(\cdot ,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds\Bigg\| }_{2}\Bigg\}.\end{aligned}\]
On one hand, using Lemma 4.1, the Cauchy–Schwarz inequality and the estimate (7), we get
(34)
\[\begin{aligned}{}& {\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(\cdot ,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\Bigg\| }_{2}\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {u^{0}}(s,\cdot )\big({\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big)\big\| _{1}}ds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds\\ {} & \hspace{1em}\leqslant c\underset{s\in [0,T]}{\sup }{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
On the other hand, in order to handle the second term on the right-hand side of (33), we define, for any $(t,x)\in [0,T]\times [0,1]$, the sequence
(35)
\[ {K_{n}}(t,x):={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds,\]
whose properties are given in Lemma 4.4 in the Appendix. Then, summing up (33)–(34), we obtain for any $0\leqslant t\leqslant T$
(36)
\[ {\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\leqslant c{\big\| {K_{n}}(t,\cdot )\big\| }_{2}+c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds.\]
Applying Gronwall's lemma (in its generalized form for the singular kernel ${(t-s)^{-3/4}}$, obtained by iterating (36) until the kernel becomes bounded), we get the estimate
(37)
\[ \underset{t\in [0,T]}{\sup }{\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\leqslant c\underset{t\in [0,T]}{\sup }{\big\| {K_{n}}(t,\cdot )\big\| }_{2},\]
which, together with (63), implies the claim (31); hence the condition (A2) holds.
4 Toward a central limit theorem
Many central limit theorems have recently been established for various kinds of parabolic SPDEs under strong assumptions on the drift coefficient. More specifically, under a linear growth condition together with differentiability and global Lipschitz conditions on both the drift coefficient and its derivative, central limit theorems have been established in [26, 27]. Since these conditions are not all fulfilled for the stochastic Burgers equation, it is not surprising that the classical tools do not apply to establish a central limit theorem. Nevertheless, in this section we prove two first-step results toward a central limit theorem, namely the uniform boundedness of ${u^{\varepsilon }}$ and its convergence to ${u^{0}}$ in ${\mathrm{L}^{q}}(\varOmega ;C([0,T];{\mathrm{L}^{2}}([0,1])))$ for $q\geqslant 2$. We hope that our current estimates could be helpful for future works in this direction.
We begin with the following result.
Proof.
We will use similar arguments as in Cardon-Weber and Millet [9] and Gyöngy [17]. For $0\mathrm{<}\varepsilon \leqslant 1$, set
\[ {\eta _{\varepsilon }}(t,x):=\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(dy,ds),\]
and
\[\begin{aligned}{}{\vartheta ^{\varepsilon }}(t,x)& :={u^{\varepsilon }}(t,x)-{\eta _{\varepsilon }}(t,x)\\ {} & ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy\hspace{0.1667em}-\hspace{0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{\varepsilon }}(s,y)\big)^{2}}dyds\\ {} & ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy\hspace{0.1667em}-\hspace{0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({\vartheta ^{\varepsilon }}(s,y)+{\eta _{\varepsilon }}(s,y)\big)^{2}}dyds.\end{aligned}\]
Then, ${\vartheta ^{\varepsilon }}$ is a solution of the equation
(39)
\[ \frac{\partial {\vartheta ^{\varepsilon }}}{\partial t}(t,x)=\Delta {\vartheta ^{\varepsilon }}(t,x)+\frac{\partial }{\partial x}{\big({\vartheta ^{\varepsilon }}(t,x)+{\eta _{\varepsilon }}(t,x)\big)^{2}},\hspace{1em}(t,x)\in [0,T]\times [0,1],\]
with Dirichlet's boundary conditions and the initial condition ${\vartheta ^{\varepsilon }}(0,x)={u_{0}}(x)$.
Since $\sigma \circ {u^{\varepsilon }}$ is bounded uniformly in ε, arguing as in the proof of Theorem 2.1 in [17], page 286, by the Garsia–Rodemich–Rumsey lemma, one can deduce that, for any $q\geqslant 2$,
\[ \underset{\varepsilon }{\sup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }{\big|{\tilde{\eta }_{\varepsilon }}(t,x)\big|^{q}}\Big)\mathrm{<}\infty ,\]
where ${\tilde{\eta }_{\varepsilon }}(t,x):=\frac{1}{\sqrt{\varepsilon }}{\eta _{\varepsilon }}(t,x)$. Consequently, there exists a universal constant $C(q)$ depending only on q such that
(40)
\[ \mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }{\big|{\eta _{\varepsilon }}(t,x)\big|^{q}}\Big)\leqslant C(q){\varepsilon ^{q/2}}.\]
In particular, the random variable ${\bar{\eta }_{\varepsilon }}:={\sup _{0\leqslant t\leqslant T}}{\sup _{0\leqslant x\leqslant 1}}|{\eta _{\varepsilon }}(t,x)|$ is well defined a.s.
Moreover, using the SPDE (39) satisfied by ${\vartheta ^{\varepsilon }}$ and following the same arguments as in the proof of Theorem 2.1 in [17], we deduce the existence of a constant c independent of ε and ω (see [17], pages 286–289) such that
(41)
\[ \underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{2}}\leqslant \| {u_{0}}{\| _{2}^{2}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{4}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}.\]
Consequently, for any $q\geqslant 2$,
\[\begin{aligned}{}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}& =\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )+{\eta _{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\\ {} & \leqslant {2^{q-1}}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}+\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\eta _{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \leqslant {2^{q-1}}\Bigg(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}\\ {} & \hspace{1em}+\underset{0\leqslant t\leqslant T}{\sup }{\Bigg({\int _{0}^{1}}{\big|{\eta _{\varepsilon }}(t,x)\big|^{2}}dx\Bigg)^{q/2}}\Bigg)\\ {} & \leqslant {2^{q-1}}\big(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}+{\bar{\eta }_{\varepsilon }^{q}}\big)\\ {} & \leqslant c\big(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}\big).\end{aligned}\]
Hence, to prove (38) it suffices to show that
(42)
\[ \underset{0\mathrm{<}\varepsilon \leqslant 1}{\sup }\mathbf{E}\big(\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}\big)\mathrm{<}\infty .\]
For this purpose, note first that
\[ \underset{0\leqslant s\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }\big|\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(s,x)\big)\big|\leqslant \sqrt{\varepsilon }\| \sigma {\| _{\infty }},\hspace{1em}\text{where}\hspace{2.5pt}\| \sigma {\| _{\infty }}:=\underset{x\in \mathbb{R}}{\sup }\big|\sigma (x)\big|.\]
Thus, by Lemma 4.2, there exist two positive constants ${C_{1}}$ and ${C_{2}}$, independent of ε, such that for any $M\geqslant {C_{1}}\| \sigma {\| _{\infty }}$
(43)
\[ P({\bar{\eta }_{\varepsilon }}\geqslant M)\leqslant {C_{1}}\| \sigma {\| _{\infty }}\exp \bigg(-\frac{{M^{2}}}{\varepsilon {C_{2}}(1+{T^{\frac{1}{8}}})}\bigg).\]
Setting $\varphi (x):=(1+{x^{2q}}){e^{cT(1+{x^{2}})}}$, which is a positive, continuous and increasing function on $[0,+\infty [$, we get for any $A\geqslant {C_{1}}\| \sigma {\| _{\infty }}$
\[\begin{aligned}{}\mathbf{E}\big(\varphi ({\bar{\eta }_{\varepsilon }})\big)& ={\int _{0}^{+\infty }}P\big(\varphi ({\bar{\eta }_{\varepsilon }})\mathrm{>}x\big)dx\\ {} & ={\int _{0}^{A}}P({\bar{\eta }_{\varepsilon }}\mathrm{>}x){\varphi ^{\prime }}(x)dx+{\int _{A}^{+\infty }}P({\bar{\eta }_{\varepsilon }}\mathrm{>}x){\varphi ^{\prime }}(x)dx\\ {} & \leqslant \varphi (A)+c{C_{1}}\| \sigma {\| _{\infty }}{\int _{A}^{+\infty }}\big(1+{x^{2q+1}}\big)\exp \bigg(cT{x^{2}}-\frac{{x^{2}}}{\varepsilon {C_{2}}(1+{T^{\frac{1}{8}}})}\bigg)dx\\ {} & \leqslant \varphi (A)+c{C_{1}}\| \sigma {\| _{\infty }}{\int _{A}^{+\infty }}\big(1+{x^{2q+1}}\big)\exp \bigg(cT{x^{2}}-\frac{{x^{2}}}{{C_{2}}(1+{T^{\frac{1}{8}}})}\bigg)dx,\end{aligned}\]
where the last integral is finite provided that $cT{C_{2}}(1+{T^{\frac{1}{8}}})\mathrm{<}1$. This implies that there exists ${T_{0}}\mathrm{>}0$, independent of ${u_{0}}$ and ε, such that (42) holds for $0\mathrm{<}T\leqslant {T_{0}}$. Using (41) and iterating the procedure finitely many times, we conclude the proof. □
Now, we can state the following proposition.
Proof.
We will use a localization argument. For $0\leqslant t\leqslant T$, $\varepsilon \in ]0,1]$ and $M\mathrm{>}0$, set
(45)
\[ {\varOmega _{\varepsilon }^{M}}(t):=\Big\{\omega \in \varOmega :\underset{s\in [0,t]}{\sup }{\big\| {u^{\varepsilon }}(s)\big\| _{2}}\vee \underset{s\in [0,t]}{\sup }{\big\| {u^{0}}(s)\big\| _{2}}\leqslant M\Big\}.\]
We have
(46)
\[\begin{aligned}{}{u^{\varepsilon }}(t,x)-{u^{0}}(t,x)& =\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y)\big({\big({u^{\varepsilon }}(s,y)\big)^{2}}-{\big({u^{0}}(s,y)\big)^{2}}\big)dyds\\ {} & :={\eta ^{\varepsilon }}(t,x)+{I^{\varepsilon }}(t,x).\end{aligned}\]
Then, for any $q\geqslant 2$,
(47)
\[ {\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\leqslant {2^{q-1}}\big({\big\| {\eta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}+{\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\big).\]
For ${\eta ^{\varepsilon }}(t,\cdot )$, by Jensen's inequality (since the interval $[0,1]$ has measure one) we have
\[\begin{aligned}{}\mathbf{E}\Big(\underset{0\leqslant s\leqslant t}{\sup }{\big\| {\eta ^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)& \leqslant \mathbf{E}\Bigg(\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{1}}{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}dx\Bigg)\\ {} & \leqslant {\int _{0}^{1}}\mathbf{E}\Big(\underset{0\leqslant s\leqslant t}{\sup }{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}\Big)dx\\ {} & \leqslant \mathbf{E}\Big(\underset{0\leqslant x\leqslant 1}{\sup }\underset{0\leqslant s\leqslant t}{\sup }{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}\Big)\\ {} & \leqslant C(q){\varepsilon ^{q/2}},\end{aligned}\]
where the last inequality follows from (40).
For ${I^{\varepsilon }}(t,\cdot )$, according to Lemma 4.1 in the Appendix with $\rho =2$ and $\lambda =1$, we have
(48)
\[ {\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big)\big({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )\big)\big\| _{1}}ds,\]
and using the following form of Hölder's inequality
\[ {\Bigg|{\int _{0}^{t}}f(s)g(s)ds\Bigg|^{q}}\leqslant {\Bigg({\int _{0}^{t}}\big|f(s)\big|ds\Bigg)^{q-1}}{\int _{0}^{t}}\big|f(s)\big|{\big|g(s)\big|^{q}}ds,\]
with $f(s):={(t-s)^{-\frac{3}{4}}}$ and $g(s):=\| ({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot ))({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )){\| _{1}}$, we get
(49)
\[ {\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big)\big({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )\big)\big\| _{1}^{q}}ds.\]
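For completeness, this form of Hölder's inequality follows from Jensen's inequality: setting $\mu (ds):=|f(s)|ds$ and assuming $\mu ([0,t])\mathrm{>}0$ (the case $\mu ([0,t])=0$ being trivial), the convexity of $x\longmapsto {x^{q}}$ applied to the probability measure $\mu /\mu ([0,t])$ gives
\[ {\Bigg|{\int _{0}^{t}}f(s)g(s)ds\Bigg|^{q}}\leqslant \mu {\big([0,t]\big)^{q}}{\Bigg(\frac{1}{\mu ([0,t])}{\int _{0}^{t}}\big|g(s)\big|d\mu (s)\Bigg)^{q}}\leqslant \mu {\big([0,t]\big)^{q-1}}{\int _{0}^{t}}\big|f(s)\big|{\big|g(s)\big|^{q}}ds.\]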
Now, taking the supremum up to time $t\in [0,T]$, and setting $\varPhi (s):=\| ({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot ))({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )){\| _{1}^{q}}$, and $\varPsi (s):={\sup _{0\leqslant r\leqslant s}}\varPhi (r)$, (49) implies
(50)
\[\begin{aligned}{}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}& \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\varPhi (r)dr\\ {} & \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\underset{0\leqslant {r^{\prime }}\leqslant r}{\sup }\varPhi \big({r^{\prime }}\big)dr\\ {} & \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\varPsi (r)dr\\ {} & =c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{r^{-\frac{3}{4}}}\varPsi (s-r)dr.\end{aligned}\]
Since
\[ \varPsi (s-r)=\underset{0\leqslant {r^{\prime }}\leqslant s-r}{\sup }\varPhi ({r^{\prime }})\leqslant \underset{0\leqslant {r^{\prime }}\leqslant t-r}{\sup }\varPhi ({r^{\prime }})=\varPsi (t-r),\]
then
\[\begin{aligned}{}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}& \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{r^{-\frac{3}{4}}}\varPsi (t-r)dr\\ {} & =c{\int _{0}^{t}}{r^{-\frac{3}{4}}}\varPsi (t-r)dr\\ {} & =c{\int _{0}^{t}}{(t-r)^{-\frac{3}{4}}}\varPsi (r)dr.\end{aligned}\]
Taking the expectation on ${\varOmega _{\varepsilon }^{M}}(t)$, and taking into account the facts that ${\varOmega _{\varepsilon }^{M}}(t)\in {\mathcal{F}_{t}}$ and ${\varOmega _{\varepsilon }^{M}}(t)\subset {\varOmega _{\varepsilon }^{M}}(s)$ for $0\leqslant s\leqslant t$, we get
(51)
\[ \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\varPsi (s)\big)ds.\]Notice that
\[\begin{aligned}{}{\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\varPsi (s)& \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| \big({u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big)\big({u^{\varepsilon }}(r,\cdot )+{u^{0}}(r,\cdot )\big)\big\| _{1}^{q}}\\ {} & \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}{\big\| {u^{\varepsilon }}(r,\cdot )+{u^{0}}(r,\cdot )\big\| _{2}^{q}}\\ {} & \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\big({\big\| {u^{\varepsilon }}(r,\cdot )\big\| _{2}^{q}}+{\big\| {u^{0}}(r,\cdot )\big\| _{2}^{q}}\big)\\ {} & \leqslant 2{M^{q}}{\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}.\end{aligned}\]
This, together with (51), gives
(52)
\[\begin{aligned}{}& \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant 2c{M^{q}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\Big)ds.\end{aligned}\]Combining (47)–(52) we get for any $0\leqslant t\leqslant T$
(53)
\[\begin{aligned}{}& \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant c\Bigg[{\varepsilon ^{q/2}}+2{M^{q}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\Big)ds\Bigg].\end{aligned}\]Using Gronwall’s lemma we deduce that, for all $t\in [0,T]$,
(54)
\[ \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big\| _{2}^{q}}\Big)\leqslant c{\varepsilon ^{q/2}}{e^{2c{M^{q}}}}.\]
Therefore, for any fixed $M\mathrm{>}0$ we have
\[ \underset{\varepsilon \longrightarrow 0}{\lim }\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(T)}}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)=0.\]
Then,
\[\begin{aligned}{}& \mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}=\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(T)}}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{2em}+\mathbf{E}\Big({\mathbf{1}_{\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)}}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant c{\varepsilon ^{q/2}}{e^{2c{M^{q}}}}+{\big(P\big(\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)\big)\big)^{1/2}}{\Big(\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{2q}}\Big)\Big)^{1/2}}.\end{aligned}\]
To deal with the second term of the last inequality, on one hand, the estimates (7) and (38) imply that there exists $c\mathrm{>}0$ such that
(55)
\[ \underset{\varepsilon \in ]0,1]}{\sup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\mathrm{<}c.\]
On the other hand, by the Markov inequality and using again the estimates (7) and (38), we have
(56)
\[\begin{aligned}{}P\big(\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)\big)& \leqslant P\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\mathrm{>}{M^{q}}\Big)+P\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{0}}(t,\cdot )\big\| _{2}^{q}}\mathrm{>}{M^{q}}\Big)\\ {} & \leqslant \frac{\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{\varepsilon }}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}+\frac{\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{0}}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}\\ {} & \leqslant \frac{{\sup _{\varepsilon \in ]0,1]}}\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{\varepsilon }}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}+\frac{{\sup _{t\in [0,T]}}\| {u^{0}}(t,\cdot ){\| _{2}^{q}}}{{M^{q}}}\\ {} & \leqslant \frac{c}{{M^{q}}}.\end{aligned}\]
Letting ε tend to zero, and taking into account the fact that ε and M are independent, we obtain
\[ \underset{\varepsilon \longrightarrow 0}{\limsup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\leqslant \frac{c}{{M^{q/2}}}.\]
Finally, since M is arbitrary, we conclude that (44) holds. □