Modern Stochastics: Theory and Applications


Moderate deviations for a stochastic Burgers equation
Volume 6, Issue 2 (2019), pp. 167–193
Rachid Belfadli, Lahcen Boulanba, Mohamed Mellouk

https://doi.org/10.15559/19-VMSTA134
Pub. online: 16 May 2019      Type: Research Article      Open Access

Received
29 October 2018
Revised
29 March 2019
Accepted
17 April 2019
Published
16 May 2019

Abstract

A moderate deviations principle for the law of a stochastic Burgers equation is proved via the weak convergence approach. In addition, some useful estimates toward a central limit theorem are established.

1 Introduction

We consider the following stochastic Burgers equation with multiplicative space-time white noise, indexed by $\varepsilon \mathrm{>}0$, given by
(1)
\[\begin{aligned}{}\frac{\partial {u^{\varepsilon }}}{\partial t}(t,x)& =\Delta {u^{\varepsilon }}(t,x)+\frac{1}{2}\frac{\partial }{\partial x}{\big({u^{\varepsilon }}(t,x)\big)^{2}}\\ {} & \hspace{1em}+\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(t,x)\big)\dot{W}(t,x),\hspace{1em}(t,x)\in [0,T]\times [0,1],\end{aligned}\]
with Dirichlet’s boundary conditions ${u^{\varepsilon }}(t,0)={u^{\varepsilon }}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{\varepsilon }}(0,x)={u_{0}}(x)$ for $x\in [0,1]$. We assume that ${u_{0}}$ is continuous on $[0,1]$ and σ is bounded and globally Lipschitz on $\mathbb{R}$. The driving noise W is a space-time Brownian sheet defined on some filtered probability space $(\varOmega ,\mathcal{F},{({\mathcal{F}_{t}})}_{t\in [0,T]},\mathbb{P})$.
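A minimal numerical sketch of equation (1) may help fix ideas. The code below (not part of the paper) discretizes (1) by an explicit finite-difference scheme; the choices $\sigma=\cos$ and ${u_{0}}(x)=\sin (\pi x)$ are placeholder examples satisfying the standing assumptions (σ bounded and Lipschitz, ${u_{0}}$ continuous and vanishing at the boundary).

```python
import numpy as np

def simulate_burgers(eps=0.1, T=0.1, J=50, seed=0,
                     sigma=np.cos, u0=lambda x: np.sin(np.pi * x)):
    """Explicit finite-difference sketch of the stochastic Burgers equation (1).

    sigma and u0 are illustrative placeholders, not choices from the paper.
    """
    rng = np.random.default_rng(seed)
    dx = 1.0 / J
    dt = 0.25 * dx ** 2          # respects the explicit-scheme constraint dt <= dx^2/2
    x = np.linspace(0.0, 1.0, J + 1)
    u = u0(x).astype(float)
    u[0] = u[-1] = 0.0           # Dirichlet boundary conditions
    for _ in range(int(T / dt)):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        conv = (u[2:] ** 2 - u[:-2] ** 2) / (4.0 * dx)   # (1/2) d/dx (u^2), centred
        # discretized space-time white noise: independent N(0,1) variables
        # scaled by sqrt(dt/dx), the cell average of a Brownian-sheet increment
        noise = np.sqrt(dt / dx) * rng.standard_normal(J - 1)
        u[1:-1] += dt * (lap + conv) + np.sqrt(eps) * sigma(u[1:-1]) * noise
    return x, u
```

At these parameters the diffusion term dominates the discretized convection, so the scheme stays stable over the short horizon used here.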
The deterministic Burgers equation was introduced in [7] as a simplified mathematical model describing turbulence phenomena in fluids. Its stochastic version has been the subject of several works; see for instance [1, 17, 22] and the references therein. In particular, a large deviations principle is established in [23] for an “additive version” of (1), and in [8] and [14] for a class of Burgers-type stochastic partial differential equations (SPDEs for short) including (1). Generally speaking, large deviations theory determines how fast the probabilities $\mathbb{P}({A_{\varepsilon }})$ of a family of rare events $({A_{\varepsilon }})$ decay to 0 as ε tends to 0, and how to compute the precise rate of decay as a function of the rare events. A natural related question is that of moderate deviations, which deal with probabilities of deviations of “smaller order” than in large deviations. We make precise below the main difference between moderate and large deviations principles in the context of the stochastic Burgers equation; for a deeper discussion of these two kinds of deviations principles and their relationship, we refer the reader to [6].
Our first goal in this paper is to study the moderate deviations of ${u^{\varepsilon }}$ from the deterministic solution ${u^{0}}$ of equation (4) below. More precisely, we deal with the deviations of the trajectory
(2)
\[ {\bar{u}^{\varepsilon }}(t,x):=\frac{{u^{\varepsilon }}(t,x)-{u^{0}}(t,x)}{a(\varepsilon )},\]
where the deviation scale $a:{\mathbb{R}^{+}}\longrightarrow {\mathbb{R}^{+}}$ is such that
(3)
\[ a(\varepsilon )\longrightarrow 0\hspace{1em}\text{and}\hspace{1em}h(\varepsilon ):=\frac{a(\varepsilon )}{\sqrt{\varepsilon }}\longrightarrow \infty ,\hspace{1em}\text{as}\hspace{2.5pt}\varepsilon \longrightarrow 0,\]
and ${u^{0}}$ stands for the solution of the following deterministic partial differential equation
(4)
\[ \frac{\partial {u^{0}}}{\partial t}(t,x)=\frac{{\partial ^{2}}{u^{0}}}{\partial {x^{2}}}(t,x)+\frac{1}{2}\frac{\partial }{\partial x}{\big({u^{0}}(t,x)\big)^{2}},\hspace{1em}(t,x)\in [0,T]\times [0,1],\]
with Dirichlet’s boundary conditions ${u^{0}}(t,0)={u^{0}}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{0}}(0,x)={u_{0}}(x)$.
The deviation scale $a(\varepsilon )$ strongly influences the asymptotic behavior of ${\bar{u}^{\varepsilon }}$. In fact, for a given norm $\| \cdot \| $, estimates of the probabilities $\mathbb{P}(\frac{\| {u^{\varepsilon }}-{u^{0}}\| }{\sqrt{\varepsilon }}\in \cdot )$ fall within the scope of the central limit theorem, while the probabilities $\mathbb{P}(\| {u^{\varepsilon }}-{u^{0}}\| \in \cdot )$ are estimated by large deviations results. When we are instead interested in probabilities of the form $\mathbb{P}(\frac{\| {u^{\varepsilon }}-{u^{0}}\| }{a(\varepsilon )}\in \cdot )$ under the condition (3) (e.g. $a(\varepsilon )={\varepsilon ^{1/4}}$), we are in the framework of the so-called moderate deviations, which fill the gap between the central limit theorem scale ($a(\varepsilon )=\sqrt{\varepsilon }$) and the large deviations scale ($a(\varepsilon )=1$). In this paper, we will establish the moderate deviations principle for (1). For studies of this topic for various kinds of stochastic processes, see for instance [10, 12, 16, 21].
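For concreteness (an illustration, not a computation from the paper), the example scale $a(\varepsilon )={\varepsilon ^{1/4}}$ can be checked against condition (3) numerically:

```python
import numpy as np

def deviation_scales(eps):
    """Return (a(eps), h(eps)) for the moderate-deviation scale a(eps) = eps**(1/4)."""
    a = eps ** 0.25
    h = a / np.sqrt(eps)         # h(eps) = eps**(-1/4) -> infinity as eps -> 0
    return a, h

# a(eps) -> 0 while h(eps) = a(eps)/sqrt(eps) -> infinity, as condition (3) requires
scales = [deviation_scales(10.0 ** (-k)) for k in (2, 4, 8)]
```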
There are basically two approaches to analyzing moderate and large deviations for stochastic processes. The first, originally used by Freidlin and Wentzell [15] for diffusion processes, relies on discretization and localization arguments that allow one to deduce the large deviations principle for the solutions of the equations under study from Schilder-type theorems for the driving noises, via a general contraction principle. The second, which we use in the present paper, is the so-called weak convergence approach. It was introduced in [13] and developed in [2, 4] and [5], and its starting point is the equivalence between the large deviations principle and the Laplace principle in the setting of Polish spaces. It relies on certain variational formulas that can be viewed as minimal cost functions for associated stochastic optimal control problems; these minimal cost functions have a form to which the theory of weak convergence of probability measures can be applied. We refer to [13] for a more complete exposition of this approach.
We stress that, in the present paper, we use the weak convergence approach to establish moderate deviations for stochastic Burgers equations, whereas the previous works [8, 14, 23] studied the large deviations principle for this equation. The main advantage of the weak convergence approach is that it allows one to avoid the technical exponential-type probability estimates usually needed in classical studies of the large deviations principle, and reduces the proofs to establishing qualitative properties, such as existence, uniqueness and tightness, of certain analogues of the original processes.
We also note that the greatest difficulty in studying any aspect of Burgers-type equations lies in their quadratic term. Indeed, most of the techniques usually used to deal with stochastic differential equations with Lipschitz drift coefficients no longer work in general, and one resorts to localization or tightness arguments to circumvent this difficulty.
As pointed out before, we will prove a moderate deviations principle for the stochastic Burgers equation (1), together with two first-step results toward a central limit theorem. It is worth bearing in mind that the main difficulty we encountered in establishing a central limit theorem is due to the quadratic term in the Burgers equation, for which the classical conditions (namely, the Lipschitz condition on the drift coefficient and the boundedness and differentiability of its derivative) are no longer satisfied.
The paper is organized as follows. Section 2 is devoted to some preliminaries. The framework of our moderate deviations result and its proof are given in Section 3. In Section 4, toward a central limit theorem for the stochastic Burgers equation, we prove the uniform boundedness and the convergence of ${u^{\varepsilon }}$ to ${u^{0}}$ in the space ${\mathrm{L}^{q}}(\varOmega ;C([0,T];{\mathrm{L}^{2}}([0,1])))$ for $q\geqslant 2$. Furthermore, some technical results needed in our proofs are included in the Appendix.
In this paper all positive constants are denoted by c, and their values may change from line to line. Also, for $\rho \geqslant 1$ and $t\in [0,T]$, the usual norms on ${L^{\rho }}([0,1])$ and ${\mathcal{H}_{t}}:={L^{2}}([0,t]\times [0,1])$ are respectively denoted by $\| \cdot {\| _{\rho }}$ and $\| \cdot {\| _{{\mathcal{H}_{t}}}}$.

2 Preliminaries

Let $\{W(t,x),t\in [0,T],x\in [0,1]\}$ be a space-time Brownian sheet on a filtered probability space $(\varOmega ,\mathcal{F},{\mathcal{F}_{t}},\mathbb{P})$, that is, a zero-mean Gaussian field with covariance function given by
\[ \mathbf{E}\big(W(t,x)W(s,y)\big)=(t\wedge s)(x\wedge y),\hspace{1em}s,t\in [0,T],\hspace{2.5pt}x,y\in [0,1].\]
For each $t\in [0,T]$, ${\mathcal{F}_{t}}$ is the completion of the σ-field generated by the family of random variables $\{W(s,x),0\leqslant s\leqslant t,x\in [0,1]\}$.
A rigorous meaning to the solution of (1) is given by a jointly measurable and ${\mathcal{F}_{t}}$-adapted process ${u^{\varepsilon }}:=\{{u^{\varepsilon }}(t,x);(t,x)\in [0,T]\times [0,1]\}$ satisfying, for almost all $\omega \in \varOmega $ and all $t\in [0,T]$ the following evolution equation:
(5)
\[\begin{aligned}{}{u^{\varepsilon }}(t,x)& ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy-{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{\varepsilon }}(s,y)\big)^{2}}dyds\\ {} & \hspace{1em}+\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(ds,dy),\end{aligned}\]
for $dx$-almost all $x\in [0,1]$, where ${G_{t}}(\cdot ,\cdot )$ denotes the Green kernel corresponding to the operator $\frac{\partial }{\partial t}-\Delta $ with the Dirichlet boundary conditions. The stochastic integral in (5) is understood in the Walsh sense; see [25].
By Theorem 2.1 in [17], there exists a unique ${L^{2}}([0,1])$-valued continuous stochastic process $\{{u^{\varepsilon }}(t,\cdot ),t\in [0,T]\}$ satisfying equation (5).
The deterministic equation (4) obtained when the parameter ε tends to zero can be written in the following integral form
(6)
\[ {u^{0}}(t,x)={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy\hspace{0.1667em}-\hspace{0.1667em}{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{0}}(s,y)\big)^{2}}dyds.\]
Since (6) corresponds to $\sigma \equiv 0$ in the degenerate case studied in [17], it admits a unique solution ${u^{0}}$ belonging to $C([0,T];{\mathrm{L}^{2}}([0,1]))$. Moreover, the continuity of ${u^{0}}$ on the compact set $[0,T]$ implies that
(7)
\[ \underset{t\in [0,T]}{\sup }{\big\| {u^{0}}(t,\cdot )\big\| _{2}^{q}}\mathrm{<}\infty ,\]
for all $q\geqslant 2$.
We now recall some estimations of the Green kernel function G, as stated in [17] and [22], that will be used in the sequel.
Lemma 2.1.
Let G denote the Green kernel corresponding to the operator $\frac{\partial }{\partial t}-\Delta $ with the Dirichlet boundary conditions. Then we have
  • i) for any $t\in [0,T]$ and $y\in [0,1]$: ${\displaystyle\int _{0}^{1}}{G_{t}}(x,y)dx=1$;
  • ii) for any $t\in [0,T]$ and $\frac{1}{2}\mathrm{<}\beta \mathrm{<}\frac{3}{2}$: ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}|{\partial _{x}}{G_{t-s}}(x,y){|^{\beta }}dxds\leqslant {c_{\beta ,T}}$, where ${c_{\beta ,T}}$ is a constant depending only on T and β.
Moreover, there exists a constant c, depending only on T, such that for all $y,z\in [0,1]$ and $t,{t^{\prime }}\in [0,T]$ with $0\leqslant t\leqslant {t^{\prime }}\leqslant T$,
  • iii) ${\displaystyle\int _{t}^{{t^{\prime }}}}{\displaystyle\int _{0}^{1}}{G_{{t^{\prime }}-s}^{2}}(x,y)dxds\leqslant c\sqrt{{t^{\prime }}-t}$ and ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dxds\leqslant c$;
  • iv) ${\displaystyle\int _{0}^{{t^{\prime }}}}{\displaystyle\int _{0}^{1}}{[{G_{t-s}}(x,y)-{G_{{t^{\prime }}-s}}(x,y)]^{2}}dxds\leqslant c\sqrt{{t^{\prime }}-t}$;
  • v) ${\displaystyle\int _{0}^{t}}{\displaystyle\int _{0}^{1}}{[{G_{s}}(x,y)-{G_{s}}(x,z)]^{2}}dxds\leqslant c|y-z|$.
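The estimates in Lemma 2.1 can be probed numerically using the classical eigenfunction expansion of the Dirichlet heat kernel, ${G_{t}}(x,y)=2{\sum _{k\geqslant 1}}{e^{-{k^{2}}{\pi ^{2}}t}}\sin (k\pi x)\sin (k\pi y)$ (a standard formula assumed here, not stated in the paper). The sketch below evaluates the kernel and, via orthogonality of the sine modes, the integral in the second part of (iii):

```python
import numpy as np

def green_dirichlet(t, x, y, K=200):
    """Heat kernel on [0,1] with Dirichlet boundary conditions, truncated
    eigenfunction expansion with K modes (classical formula, an assumption here)."""
    k = np.arange(1, K + 1)
    return 2.0 * np.sum(np.exp(-(k * np.pi) ** 2 * t)
                        * np.sin(k * np.pi * x) * np.sin(k * np.pi * y))

def squared_kernel_time_integral(t, y, K=200):
    """int_0^t int_0^1 G_{t-s}(x, y)^2 dx ds, computed termwise: the cross
    terms vanish by orthogonality and int_0^1 4 sin^2(k pi x) dx = 2."""
    k = np.arange(1, K + 1)
    lam = 2.0 * (k * np.pi) ** 2
    return float(np.sum(2.0 * np.sin(k * np.pi * y) ** 2
                        * (1.0 - np.exp(-lam * t)) / lam))
```

In this expansion the time integral stays below ${\sum _{k}}1/({k^{2}}{\pi ^{2}})=1/6$ uniformly in t and y, consistent with the uniform bound in part (iii).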

3 Moderate deviations

3.1 Framework and the main result

According to Varadhan [24] and [3], a crucial step toward the large deviations principle is the Laplace principle. We will therefore focus on establishing this principle, which we formulate in the following definition.
Definition 3.1 (Laplace principle).
A family of random variables $\{{X^{\varepsilon }};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ defined on a Polish space $\mathcal{E}$, is said to satisfy the Laplace principle with speed ${\lambda ^{2}}(\varepsilon )$ and rate function $I:\mathcal{E}\longrightarrow [0,\infty ]$ if for any bounded continuous function $F:\mathcal{E}\to \mathbb{R}$, we have
\[ \underset{\varepsilon \to 0}{\lim }{\lambda ^{2}}(\varepsilon )\log \mathbf{E}\bigg(\exp \bigg[-\frac{1}{{\lambda ^{2}}(\varepsilon )}F\big({X^{\varepsilon }}\big)\bigg]\bigg)=-\underset{f\in \mathcal{E}}{\inf }\big\{F(f)+I(f)\big\},\]
where $\mathbf{E}$ denotes the expectation with respect to $\mathbb{P}$.
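To make Definition 3.1 concrete in the simplest setting (a one-dimensional illustration, not from the paper): for ${X^{\varepsilon }}=\sqrt{\varepsilon }Z$ with Z standard Gaussian and ${\lambda ^{2}}(\varepsilon )=\varepsilon $, the Laplace principle holds with rate function $I(x)={x^{2}}/2$. The prelimit quantity can be evaluated by quadrature and compared with the variational limit:

```python
import numpy as np

def laplace_prelimit(F, eps, grid=np.linspace(-5.0, 5.0, 20001)):
    """eps * log E[exp(-F(sqrt(eps) Z) / eps)] for Z ~ N(0,1), by quadrature.

    Substituting x = sqrt(eps) z turns the expectation into the Laplace-type
    integral int exp(-(F(x) + x^2/2)/eps) dx / sqrt(2 pi eps), whose
    eps * log limit is -inf_x {F(x) + x^2/2}.
    """
    dx = grid[1] - grid[0]
    integrand = np.exp(-(F(grid) + grid ** 2 / 2.0) / eps) / np.sqrt(2.0 * np.pi * eps)
    return eps * np.log(np.sum(integrand) * dx)

# bounded continuous test functional; inf_x {F(x) + x^2/2} = 1/3, attained at x = 2/3
F = lambda x: np.minimum((x - 1.0) ** 2, 4.0)
approx = laplace_prelimit(F, eps=1e-3)      # close to -1/3
```

As ε decreases, the quadrature value approaches the infimum $-\inf \{F(x)+{x^{2}}/2\}=-1/3$, illustrating the limit in Definition 3.1.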
In the context of the weak convergence approach, the proof of the Laplace principle for functionals of the Brownian sheet is essentially based on the following variational representation formula, which was originally proved in [4].
Theorem 3.2.
Let $f:\mathcal{C}([0,T]\times [0,1];\mathbb{R})\longrightarrow \mathbb{R}$ be a bounded measurable mapping, and let ${\mathcal{P}_{2}}$ be the class of all predictable processes u such that $\| u{\| _{{\mathcal{H}_{T}}}}\mathrm{<}\infty $ a.s. Then
(8)
\[ -\log \mathbf{E}\exp \big\{-f(B)\big\}=\underset{u\in {\mathcal{P}_{2}}}{\inf }\bigg(\frac{1}{2}\| u{\| _{{\mathcal{H}_{T}}}^{2}}+f\big({B^{u}}\big)\bigg),\]
where
\[ {B^{u}}(t,x):=B(t,x)+{\int _{0}^{t}}{\int _{0}^{x}}u(s,y)dyds,\hspace{1em}\textit{for any}\hspace{2.5pt}(t,x)\in [0,T]\times [0,1].\]

3.1.1 Sufficient conditions for the general Laplace principle

Here, we briefly describe the result needed, in our context, for proving the Laplace principle, and state our main result.
Let us first introduce some notations. For $\varepsilon \mathrm{>}0$, denote by ${\mathcal{G}^{\varepsilon }}:{\mathcal{E}_{0}}\times \mathcal{C}([0,T]\times [0,1];\mathbb{R})\to \mathcal{E}$ a measurable map, where ${\mathcal{E}_{0}}$ stands for a compact subspace of $\mathcal{E}$ in which the initial condition ${u_{0}}$ takes values, and let
(9)
\[ {X^{\varepsilon ,{u_{0}}}}:={\mathcal{G}^{\varepsilon }}\big({u_{0}},h(\varepsilon )W\big).\]
Later, we will state sufficient conditions for the Laplace principle for ${X^{\varepsilon ,{u_{0}}}}$ to hold uniformly in ${u_{0}}$ for compact subsets of ${\mathcal{E}_{0}}$.
For any positive integer N, we introduce
\[ {S^{N}}:=\big\{\phi \in {\mathcal{H}_{T}}:\| \phi {\| _{{\mathcal{H}_{T}}}^{2}}\leqslant N\big\}\]
and
\[ {\mathcal{P}_{2}^{N}}:=\big\{v(\omega )\in {\mathcal{P}_{2}}:v(\omega )\in {S^{N}},P\text{-a.s.}\big\}.\]
It is worth noticing that ${S^{N}}$, equipped with the weak topology of ${L^{2}}([0,T]\times [0,1])$, is a compact metric space, and that ${\mathcal{P}_{2}^{N}}$ is the space of controls, which plays a central role in the weak convergence approach.
For $u\in {\mathcal{H}_{T}}$, define the element $\mathcal{I}(u)$ in $\mathcal{C}([0,T]\times [0,1];\mathbb{R})$ by
\[ \mathcal{I}(u)(t,x):={\int _{0}^{t}}{\int _{0}^{x}}u(s,y)dsdy,\hspace{1em}t\in [0,T],\hspace{2.5pt}x\in [0,1].\]
We are now in a position to state the following result, due to Budhiraja et al. [5], which gives sufficient conditions for the Laplace principle to hold.
Proposition 3.1 (Theorem 7 in [5]).
Assume that there exists a measurable map
\[ {\mathcal{G}^{0}}:{\mathcal{E}_{0}}\times \mathcal{C}\big([0,T]\times [0,1];\mathbb{R}\big)\to \mathcal{E},\]
such that the following hold:
  • (A1) For any integer $M\mathrm{>}0$ and any families $\{{v^{\varepsilon }};\varepsilon \mathrm{>}0\}\subset {{\mathcal{P}_{2}}^{M}}$ and $\{{u_{0}^{\varepsilon }}\}\subset {\mathcal{E}_{0}}$ such that ${v^{\varepsilon }}\to v$ in distribution (as ${S^{M}}$-valued random elements) and ${u_{0}^{\varepsilon }}\to {u_{0}}$, as $\varepsilon \to 0$, we have ${\mathcal{G}^{\varepsilon }}({u_{0}^{\varepsilon }},W+h(\varepsilon )\mathcal{I}({v^{\varepsilon }}))\to {\mathcal{G}^{0}}({u_{0}},\mathcal{I}(v))$ in distribution as $\varepsilon \to 0$;
  • (A2) For any integer $M\hspace{0.1667em}\mathrm{>}\hspace{0.1667em}0$ and compact set $K\hspace{0.1667em}\subset \hspace{0.1667em}{\mathcal{E}_{0}}$, the set ${\varGamma _{M,K}}\hspace{0.1667em}:=\hspace{0.1667em}\{{\mathcal{G}^{0}}({u_{0}},\mathcal{I}(u));u\in {S^{M}},{u_{0}}\in K\}$ is a compact subset of $\mathcal{E}$.
Then, the family $\{{X^{\varepsilon ,{u_{0}}}};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ defined by (9) satisfies the Laplace principle on $\mathcal{E}$ with speed ${\lambda ^{2}}(\varepsilon )$ and rate function ${I_{{u_{0}}}}$ given, for any $h\in \mathcal{E}$ and ${u_{0}}\in {\mathcal{E}_{0}}$, by
(10)
\[ {I_{{u_{0}}}}(h):=\underset{\{v\in {\mathcal{H}_{T}}:h={\mathcal{G}^{0}}({u_{0}},\mathcal{I}(v))\}}{\inf }\bigg\{\frac{1}{2}\| v{\| _{{\mathcal{H}_{T}}}^{2}}\bigg\},\]
where the infimum over an empty set is taken to be ∞.

3.1.2 Controlled processes for SPDEs (1)

In this subsection, we adapt the general scheme described above to study moderate deviations for the equation (1).
We denote by $\mathcal{E}={\mathcal{E}_{0}}:=C([0,T];{L^{2}}([0,1]))$ the space of solutions of (1). As we are interested in proving the Laplace principle for ${\bar{u}^{\varepsilon }}(t,x)$ defined by (2), we interpret ${\bar{u}^{\varepsilon }}$ as a functional of the Brownian sheet W. Indeed, using (5) and (6) we deduce that ${\bar{u}^{\varepsilon }}(t,x)$ satisfies, for almost all $\omega \in \varOmega $ and all $t\in [0,T]$, the following equation
(11)
\[\begin{aligned}{}{\bar{u}^{\varepsilon }}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon }}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon }}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon }}(s,y){u^{0}}(s,y)dyds,\end{aligned}\]
for $dx$-almost all $x\in [0,1]$.
This implies (see Theorem IV.9.1. of [19]) the existence of a measurable mapping
\[ {\mathcal{G}^{\varepsilon }}:C\big([0,1];\mathbb{R}\big)\times C\big([0,T]\times [0,1];\mathbb{R}\big)\to C\big([0,T];{L^{2}}\big([0,1]\big)\big),\]
such that
\[ {\bar{u}^{\varepsilon }}={\mathcal{G}^{\varepsilon }}({u_{0}},W).\]
As a first step toward the conditions (A1) and (A2) stated in Proposition 3.1, we define for ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$,
(12)
\[ {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}:={\mathcal{G}^{\varepsilon }}\big({u_{0}},W+h(\varepsilon )\mathcal{I}\big({v^{\varepsilon }}\big)\big).\]
In Proposition 3.2 below we will establish that the map ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ is the unique solution of the following stochastic controlled analogue of equation (11)
(13)
\[\begin{aligned}{}{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)dyds.\end{aligned}\]
We will call it the controlled process. Moreover, for any $v\in {S^{N}}$, we associate to (13) the following skeleton zero-noise equation:
(14)
\[\begin{aligned}{}{\bar{u}^{v}}(t,x)& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{v}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)v(s,y)dyds.\end{aligned}\]
Existence and uniqueness of the solution ${\bar{u}^{v}}$ for (14) is obtained in Proposition 3.3 below, and thereby, we define the map
(15)
\[ {\mathcal{G}^{0}}\big({u_{0}},\mathcal{I}(v)\big):={\bar{u}^{v}}.\]
With these notations in mind, the main result of this section is stated in the following
Theorem 3.3.
Assume that ${u_{0}}$ is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then the family of processes $\{{\bar{u}^{\varepsilon }};\hspace{0.1667em}\varepsilon \mathrm{>}0\}$ satisfies an LDP on the space $C([0,T];{L^{2}}([0,1]))$ with speed ${\lambda ^{2}}(\varepsilon )$ and rate function given by
(16)
\[ I(f)=\inf \bigg\{\frac{1}{2}\| v{\| _{{\mathcal{H}_{T}}}^{2}},\hspace{2.5pt}v\in {\mathcal{H}_{T}},\hspace{2.5pt}{\mathcal{G}^{0}}\big({u_{0}},\mathcal{I}(v)\big)=f\bigg\}.\]
Remark 3.4.
Note that the conclusion of Theorem 3.3 remains valid for a rather large class of SPDEs containing the stochastic Burgers equation. Namely, consider the following class of SPDEs introduced by Gyöngy in [17]:
(17)
\[\begin{aligned}{}\frac{\partial {u^{\varepsilon }}}{\partial t}(t,x)& =\frac{{\partial ^{2}}}{\partial {x^{2}}}{u^{\varepsilon }}(t,x)+\frac{\partial }{\partial x}g\big({u^{\varepsilon }}(t,x)\big)+f\big({u^{\varepsilon }}(t,x)\big)\\ {} & \hspace{1em}+\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(t,x)\big)\dot{W}(t,x),\hspace{1em}(t,x)\in [0,T]\times [0,1],\end{aligned}\]
with Dirichlet’s boundary conditions ${u^{\varepsilon }}(t,0)={u^{\varepsilon }}(t,1)=0$ for $t\in [0,T]$, and the initial condition ${u^{\varepsilon }}(0,x)={u_{0}}(x)$ for $x\in [0,1]$. Suitable conditions on the coefficients f, g and σ, for instance, the quadratic growth assumption on the nonlinear coefficient g, bring us back to the case of stochastic Burgers equation that we have considered in our paper. Notice here that papers closest to ours are two recent works by S. Hu, R. Li and X. Wang [18] and R. Zhang and J. Xiong [28]. In particular, these authors established a moderate deviations principle for the class (17). We learned about these works after we finished the first version of this paper.

3.2 Proof of the main result

We basically follow the same idea as in [5] and [23]. According to Proposition 3.1, it suffices to check that the conditions (A1) and (A2) are fulfilled. For (A1), we will establish well-posedness, tightness and convergence of controlled processes. The condition (A2), which gives that I is a rate function, will follow from the continuity of the map ${\mathcal{G}^{0}}$ with respect to the weak topology.
The proof of (A1) will be done in several steps.
Step 1: Existence and uniqueness of controlled and limiting processes.
Proposition 3.2.
Assume that σ is bounded and globally Lipschitz, and that (3) holds. Then, the ${\mathrm{L}^{2}}([0,1])$-valued process $\{{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t),t\in [0,T]\}$ defined by (12) is the unique solution of the equation (13).
Proof.
For ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$, set
\[\begin{aligned}{}d{Q^{\varepsilon ,{v^{\varepsilon }}}}& :=\exp \Bigg\{-h(\varepsilon ){\int _{0}^{T}}{\int _{0}^{1}}{v^{\varepsilon }}(s,y)W(ds,dy)\\ {} & \hspace{1em}-\frac{1}{2}{h^{2}}(\varepsilon ){\int _{0}^{T}}{\int _{0}^{1}}{v^{\varepsilon }}{(s,y)^{2}}dyds\Bigg\}dP.\end{aligned}\]
Since ${v^{\varepsilon }}\in {\mathcal{P}_{2}^{N}}$, the density ${Q^{\varepsilon ,{v^{\varepsilon }}}}$ is defined through an exponential martingale, and it is therefore a probability measure on Ω. Thus, by Girsanov's theorem, the process $\widetilde{W}$ defined by
\[ \widetilde{W}(dt,dx)=W(dt,dx)+h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{v^{\varepsilon }}(s,y)dyds\]
is a space-time white noise under the probability measure ${Q^{\varepsilon ,{v^{\varepsilon }}}}$. Substituting $\widetilde{W}(dt,dx)$ into (13), we obtain (11) with $\widetilde{W}(dt,dx)$ in place of $W(dt,dx)$. Now, if u denotes the unique solution of (11) driven by $\widetilde{W}(dt,dx)$ on the space $(\varOmega ,\mathcal{F},{Q^{\varepsilon ,{v^{\varepsilon }}}})$, then u satisfies (13) ${Q^{\varepsilon ,{v^{\varepsilon }}}}$-a.s., and hence, by the equivalence of the two probability measures, u satisfies (13) P-a.s.
For the uniqueness, if ${u_{1}}$ and ${u_{2}}$ are two solutions of (13) on $(\varOmega ,\mathcal{F},P)$, then ${u_{1}}$ and ${u_{2}}$ are solutions of (11) driven by $\widetilde{W}(dt,dx)$ on $(\varOmega ,\mathcal{F},{Q^{\varepsilon ,{v^{\varepsilon }}}})$. By the uniqueness of the solution of (11), we obtain ${u_{1}}={u_{2}}$ ${Q^{\varepsilon ,{v^{\varepsilon }}}}$-a.s., and thus ${u_{1}}={u_{2}}$ P-a.s. by the equivalence of the probability measures.  □
Proposition 3.3.
Assume that σ is bounded and globally Lipschitz. Then, for any $N\in \mathbb{N}$ and any $v\in {S^{N}}$, the equation (14) admits a unique solution ${\bar{u}^{v}}$ belonging to $C([0,T];{\mathrm{L}^{2}}([0,1]))$. Moreover, for any $q\geqslant 2$,
(18)
\[ \underset{v\in {S^{N}}}{\sup }\underset{0\le t\le T}{\sup }{\big\| {\bar{u}^{v}}(t,\cdot )\big\| _{2}^{q}}\mathrm{<}\infty .\]
Proof.
The proof follows from a standard fixed point argument, and for the convenience of the reader, we include it in the Appendix.  □
Step 2: Tightness of the family ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon \mathrm{>}0}}$ in $C([0,T];{L^{2}}([0,1]))$.
Let ${({v^{\varepsilon }})_{\varepsilon }}$ be a family of elements from ${{\mathcal{P}_{2}}^{N}}$ such that ${v^{\varepsilon }}\to v$ in distribution, as ${S^{N}}$-valued random elements, when $\varepsilon \to 0$.
We have the following proposition.
Proposition 3.4.
Assume that ${u_{0}}$ is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ is tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
Recall that
(19)
\[\begin{aligned}{}{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)& =\frac{1}{h(\varepsilon )}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big[{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big]^{2}}dyds\\ {} & \hspace{1em}-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)dyds\\ {} & =:{\sum \limits_{i=1}^{4}}{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x),\end{aligned}\]
where ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)$, $i=1,2,3,4$, stands for the ith summand on the right-hand side of the above equation.
In view of (19), the claim of Proposition 3.4 reduces to the tightness of each summand ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}$, $i=1,2,3,4$, which is established in the next two lemmas.  □
We first consider the cases $i=1$ and $i=4$. By Theorem 4.10 of Chapter 2 in [20], the conditions in the following lemma are sufficient for tightness.
Lemma 3.5.
Assume the same conditions as in Proposition 3.4. For $i=1$ or 4, we have
(20)
\[ \underset{\zeta \longrightarrow +\infty }{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }P\big(\big|{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|\mathrm{>}\zeta \big)=0,\hspace{1em}\textit{for any}\hspace{2.5pt}(t,x)\in [0,T]\times [0,1],\]
and for any $\zeta \mathrm{>}0$
(21)
\[ \hspace{-28.45274pt}\hspace{1em}\underset{\delta \longrightarrow 0}{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }P\Big(\underset{|t-{t^{\prime }}|+|x-y|\leqslant \delta }{\sup }\big|{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|\mathrm{>}\zeta \Big)=0.\]
In particular, the families ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and ${({I_{4}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ are tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
Let $x,y\in [0,1]$ and $t,{t^{\prime }}\in [0,T]$ such that ${t^{\prime }}\leqslant t$. To prove (20) and (21), it is enough to exhibit upper bounds for the square moments of ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)$ and ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{i}^{\varepsilon ,{v^{\varepsilon }}}}({t^{\prime }},y)$ for $i=1$ and $i=4$.
Using the Burkholder–Davis–Gundy inequality, the boundedness of σ, Lemma 2.1 and the condition (3) we infer that
(22)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|^{2}}\big)\\ {} & \hspace{1em}\leqslant c\hspace{0.1667em}{h^{-2}}(\varepsilon )\hspace{0.1667em}\mathbf{E}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y){\sigma ^{2}}\big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)dyds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds,\end{aligned}\]
which is finite. On the other hand, the same arguments as above yield
(23)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{1}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|^{2}}\big)\\ {} & \hspace{1em}={h^{-2}}(\varepsilon )\hspace{0.1667em}\mathbf{E}\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}\times \sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big)W(ds,dz)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}}(y,z)\sigma \big({u^{0}}(s,z)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big)W(ds,dz)\Bigg\}{^{2}}\\ {} & \hspace{1em}\leqslant c\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}{\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]^{2}}dzds+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(y,z)dzds\Bigg\}\\ {} & \hspace{1em}\leqslant c\big({\big|t-{t^{\prime }}\big|^{\frac{1}{2}}}+|x-y{|^{\frac{1}{2}}}\big).\end{aligned}\]
Therefore, (20) and (21) hold by (22) and (23), respectively.
To deal with ${({I_{4}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we use the Cauchy–Schwarz inequality and Lemma 2.1 to write
(24)
\[\begin{aligned}{}\mathbf{E}\big({\big|{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big|^{2}}\big)& \leqslant c\hspace{0.1667em}\mathbf{E}{\Bigg({\int _{0}^{t}}{\int _{0}^{1}}\big|{G_{t-s}}(x,y){v^{\varepsilon }}(s,y)\big|dyds\Bigg)^{2}}\\ {} & \leqslant c{\big\| {v^{\varepsilon }}\big\| _{{\mathcal{H}_{T}}}^{2}}{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds\\ {} & \leqslant c(N),\end{aligned}\]
where $c(N)$ is a constant depending on N. Similarly,
(25)
\[\begin{aligned}{}& \mathbf{E}\big({\big|{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)-{I_{4}^{\varepsilon ,{v^{\varepsilon }}}}\big({t^{\prime }},y\big)\big|^{2}}\big)\\ {} & \hspace{1em}=\mathbf{E}\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]\sigma \big({u^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big){v^{\varepsilon }}(s,z)dzds\\ {} & \hspace{2em}+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}}(y,z)\sigma \big({u^{\varepsilon ,{v^{\varepsilon }}}}(s,z)\big){v^{\varepsilon }}(s,z)dzds\Bigg\}{^{2}}\\ {} & \hspace{1em}\leqslant c\Bigg\{{\int _{0}^{{t^{\prime }}}}{\int _{0}^{1}}{\big[{G_{t-s}}(x,z)-{G_{{t^{\prime }}-s}}(y,z)\big]^{2}}dzds+{\int _{{t^{\prime }}}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(y,z)dzds\Bigg\}\\ {} & \hspace{1em}\leqslant c\big({\big|t-{t^{\prime }}\big|^{\frac{1}{2}}}+|x-y{|^{\frac{1}{2}}}\big).\end{aligned}\]
Therefore, (20) and (21) hold by (24) and (25), respectively.  □
For the tightness of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we follow an idea introduced in [17] which is essentially based on Lemma 4.3 in the Appendix. More precisely, we state the following lemma.
Lemma 3.6.
Assume the same conditions as in Proposition 3.4. Then, the families ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and ${({I_{3}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ are uniformly tight in $C([0,T];{L^{2}}([0,1]))$.
Proof.
The proof of the tightness of ${({I_{3}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ will be omitted, since it can be carried out similarly to that of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$.
To show the tightness of ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$, we will apply Lemma 4.3 with $q=1$, $\rho =2$ and ${\zeta _{\varepsilon }}(t,\cdot ):=\sqrt{\varepsilon }h(\varepsilon ){({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})^{2}}(t,\cdot )$. Set
\[ {\theta _{\varepsilon }}:=\sqrt{\varepsilon }h(\varepsilon )\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\big({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}\big)^{2}}(t,\cdot )\big\| _{1}}=\sqrt{\varepsilon }h(\varepsilon )\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}.\]
According to Lemma 4.3, it suffices to show that ${({\theta _{\varepsilon }})}_{\varepsilon }$ is bounded in probability, i.e.
(26)
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{\varepsilon \mathrm{>}0}{\sup }\mathbb{P}({\theta _{\varepsilon }}\geqslant c)=0.\]
Taking into account the condition (3), there exists ${\varepsilon _{0}}\mathrm{>}0$ such that $\sqrt{\varepsilon }h(\varepsilon )\leqslant 1$ for all $\varepsilon \leqslant {\varepsilon _{0}}$. Consequently,
\[\begin{aligned}{}\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}({\theta _{\varepsilon }}\geqslant c)& =\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}\geqslant \frac{c}{\sqrt{\varepsilon }h(\varepsilon )}\bigg)\\ {} & \leqslant \underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}^{2}}\geqslant c\Big).\end{aligned}\]
Then, to prove (26), it is enough to show that
(27)
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant c\Big)=0.\]
For this purpose, returning to (19), we note that ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ is a solution of the following SPDE
(28)
\[\begin{aligned}{}\frac{\partial {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}}{\partial t}(t,x)& =\Delta {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)+\frac{\partial {g_{\varepsilon }}}{\partial x}\big(t,x,{u^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)+{f_{\varepsilon }}\big(t,x,{u^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)\\ {} & \hspace{1em}+{\sigma _{\varepsilon }}\big(t,x,{\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,x)\big)\dot{W}(t,x),\end{aligned}\]
where
\[\begin{aligned}{}{g_{\varepsilon }}(t,x,r)& :=-\sqrt{\varepsilon }h(\varepsilon ){r^{2}}-2r{u^{0}}(t,x),\\ {} {f_{\varepsilon }}(t,x,r)& :=\sigma ({u^{0}}(t,x)+\sqrt{\varepsilon }h(\varepsilon )r){v^{\varepsilon }}(t,x)\hspace{1em}\text{and}\\ {} {\sigma _{\varepsilon }}(t,x,r)& :=\frac{1}{h(\varepsilon )}\sigma ({u^{0}}(t,x)+\sqrt{\varepsilon }h(\varepsilon )r).\end{aligned}\]
According to Theorem 2.1 in [17], the continuity of the initial condition ${u_{0}}$ implies the continuity of the solution ${u^{0}}$ of the equation (4) on the compact set $[0,T]\times [0,1]$. Consequently, ${u^{0}}$ is bounded.
This fact, combined with the condition (3), allows us to write the function ${g_{\varepsilon }}$ as a sum of two functions ${g_{\varepsilon }^{1}}$ and ${g_{\varepsilon }^{2}}$ satisfying a quadratic growth bound and a linear growth bound, respectively, uniformly in $\varepsilon \leqslant {\varepsilon _{0}}$ for some ${\varepsilon _{0}}\mathrm{>}0$.
Using again the condition (3) and the hypotheses on the function σ, we see that ${\sigma _{\varepsilon }}$ is bounded and globally Lipschitz, uniformly in $\varepsilon \leqslant {\varepsilon _{0}}$.
Thus, the equation (28) belongs to the class of semi-linear SPDEs studied in [17], for which the existence and uniqueness of the solution ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ is shown by an approximation procedure. This procedure consists in defining a sequence of truncated equations, and establishing existence and some convergence results for the corresponding sequence of solutions ${({\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}})_{n}}$; see [17, 14, 23]. In fact, in the course of the proof of Theorem 2.1 in [17] it was shown that
(29)
\[ \underset{c\longrightarrow \infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg)=0,\]
and that ${({\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}})_{n}}$ converges in probability in $C([0,T];{L^{2}}([0,1]))$ to the solution ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ of (19).
Now, observe that
\[\begin{aligned}{}& \underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}\\ {} & \hspace{1em}\leqslant \underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg)\\ {} & \hspace{2em}+\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg).\end{aligned}\]
Then, as c tends to infinity, the estimate (29) yields
\[\begin{aligned}{}& \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}\\ {} & \hspace{1em}\leqslant \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\bigg(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\geqslant \frac{c}{2}\bigg).\end{aligned}\]
By letting n tend to infinity and using the convergence in probability of ${\bar{u}_{n}^{\varepsilon ,{v^{\varepsilon }}}}$ to ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ we get
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}}{\sup }\mathbb{P}\Big\{\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\Big)\geqslant c\Big\}=0.\]
Hence, by applying Lemma 4.3 we obtain the tightness property for ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$.  □
Step 3: Convergence to the limit equation.
Having shown the tightness of each ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}$ for $i=1,2,3,4$, by Prohorov’s theorem, we can extract a subsequence, still denoted by ε, along which each of these processes and ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ converge in distribution (jointly with the ${S^{N}}$-valued random elements ${v^{\varepsilon }}$) in $C([0,T];{L^{2}}([0,1]))$ to limits denoted respectively by ${I_{i}^{0,v}}$ for $i=1,2,3,4$, and ${\bar{u}^{0,v}}$. We will show that
\[\begin{aligned}{}{I_{1}^{0,v}}& =0,\\ {} {I_{2}^{0,v}}& =0,\\ {} {I_{3}^{0,v}}& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){u^{0,v}}(s,y){u^{0}}(s,y)dyds,\\ {} {I_{4}^{0,v}}& ={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)v(s,y)dyds,\end{aligned}\]
and the proof will be completed by the uniqueness result given in Proposition 3.3.
For $i=1$, Lemma 3 in [5] ensures the convergence of ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ to 0 in probability in $C([0,T]\times [0,1])$. Since convergence in probability in $C([0,T]\times [0,1])$ implies convergence in probability in $C([0,T];{L^{2}}([0,1]))$, the family ${({I_{1}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges to 0 in probability in $C([0,T];{L^{2}}([0,1]))$ as well.
To handle the convergence of each of the other terms, we invoke the Skorohod representation theorem and assume the almost sure convergence on a larger common probability space.
For $i=2$, applying Lemma 4.1 with $\rho =2$ and $\lambda =1$, we deduce there exists a constant $c\mathrm{>}0$ such that
\[ {\big\| {I_{2}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot ){\| _{2}^{2}}ds.\]
Since ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. in $C([0,T];{L^{2}}([0,1]))$ to ${\bar{u}^{0,v}}$, there exists ${\varepsilon _{0}}\mathrm{>}0$ small enough such that
(30)
\[ \underset{\varepsilon \in ]0,{\varepsilon _{0}}]}{\sup }\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big\| _{2}}\mathrm{<}\infty ,\hspace{1em}\text{a.s.}\]
Further, there exists a constant $c\mathrm{>}0$ such that for all $0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}$,
\[ \underset{t\in [0,T]}{\sup }{\big\| {I_{2}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ),\hspace{1em}\text{a.s.}\]
Thus, ${({I_{2}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε tends to 0.
For $i=3$, let ${\tilde{I}_{3}^{0,v}}$ denote the right-hand side of the expression for ${I_{3}^{0,v}}$ above. Applying again Lemma 4.1 and the Cauchy–Schwarz inequality, we conclude that there exists a constant $c\mathrm{>}0$ such that
\[\begin{aligned}{}{\big\| {I_{3}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{3}^{0,v}}(t,\cdot )\big\| _{2}}& \leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big){u^{0}}(s,\cdot )\big\| _{1}}ds\\ {} & \leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}{\big\| {u^{0}}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
Using the estimate (7) and the boundedness of ${\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}$ and ${\bar{u}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$, we get
\[\begin{aligned}{}& {\big\| {I_{3}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{3}^{0,v}}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}\underset{s\in [0,T]}{\sup }{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}ds\\ {} & \hspace{1em}\leqslant c\underset{s\in [0,T]}{\sup }{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )-{\bar{u}^{0,v}}(s,\cdot )\big\| _{2}}.\end{aligned}\]
Again, since ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ converges a.s. in $C([0,T];{L^{2}}([0,1]))$ to ${\bar{u}^{0,v}}$, we obtain the a.s. convergence of ${I_{3}^{\varepsilon ,{v^{\varepsilon }}}}$ to ${\tilde{I}_{3}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$. By the uniqueness of the limit and the continuity of ${\tilde{I}_{3}^{0,v}}$, we conclude that ${I_{3}^{0,v}}={\tilde{I}_{3}^{0,v}}$.
Concerning $i=4$, let ${\tilde{I}_{4}^{0,v}}$ denote the right-hand side of the expression for ${I_{4}^{0,v}}$ above. We have
\[\begin{aligned}{}& {I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{4}^{0,v}}(t,\cdot )\\ {} & \hspace{1em}={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big){v^{\varepsilon }}(s,y)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\sigma \big({u^{0}}(s,y)\big)v(s,y)\big]dyds\\ {} & \hspace{1em}={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[\sigma \big({u^{0}}(s,y)+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,y)\big)\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}-\sigma \big({u^{0}}(s,y)\big)\big]{v^{\varepsilon }}(s,y)dyds\\ {} & \hspace{2em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\big[{v^{\varepsilon }}(s,y)-v(s,y)\big]\sigma \big({u^{0}}(s,y)\big)dyds\\ {} & \hspace{1em}=:{J_{4,1}^{\varepsilon }}(t,x)+{J_{4,2}^{\varepsilon }}(t,x).\end{aligned}\]
Then,
\[ {\big\| {I_{4}^{\varepsilon ,{v^{\varepsilon }}}}(t,\cdot )-{\tilde{I}_{4}^{0,v}}(t,\cdot )\big\| _{2}}\leqslant {\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}+{\big\| {J_{4,2}^{\varepsilon }}(t,\cdot )\big\| _{2}}.\]
For ${J_{4,1}^{\varepsilon }}$, we use Lemma 4.1, the Lipschitz condition on σ and the Cauchy–Schwarz inequality to obtain
\[\begin{aligned}{}& {\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big(\sigma \big({u^{0}}(s,\cdot )+\sqrt{\varepsilon }h(\varepsilon ){\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big)-\sigma \big({u^{0}}(s,\cdot )\big)\big){v^{\varepsilon }}(s,\cdot )\big\| _{1}}ds\\ {} & \hspace{1em}\leqslant c\sqrt{\varepsilon }h(\varepsilon ){\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| {\bar{u}^{\varepsilon ,{v^{\varepsilon }}}}(s,\cdot )\big\| _{2}}{\big\| {v^{\varepsilon }}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
Since $({v^{\varepsilon }})\subset {\mathcal{P}_{2}^{N}}$, the estimate (30) implies that there exists a constant c depending on N such that for all $0\mathrm{<}\varepsilon \leqslant {\varepsilon _{0}}$,
\[ \underset{t\in [0,T]}{\sup }{\big\| {J_{4,1}^{\varepsilon }}(t,\cdot )\big\| _{2}}\leqslant c\sqrt{\varepsilon }h(\varepsilon ),\hspace{1em}\text{a.s.}\]
Therefore, ${J_{4,1}^{\varepsilon }}$ converges to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε goes to 0.
The proof of the convergence of ${J_{4,2}^{\varepsilon }}$ to 0 in $C([0,T];{L^{2}}([0,1]))$ as ε goes to 0 will be omitted, since it can be treated similarly to the case of the family $\{{K_{n}},n\geqslant 1\}$ defined below by (35).
Consequently, ${I_{4}^{\varepsilon ,{v^{\varepsilon }}}}$ converges to ${\tilde{I}_{4}^{0,v}}$ in $C([0,T];{L^{2}}([0,1]))$, and by the uniqueness of the limit and the continuity of ${\tilde{I}_{4}^{0,v}}$, we conclude that ${I_{4}^{0,v}}={\tilde{I}_{4}^{0,v}}$.
Thus, by the convergence of both the process ${({\bar{u}^{\varepsilon ,{v^{\varepsilon }}}})_{\varepsilon }}$ and each term ${I_{i}^{\varepsilon ,{v^{\varepsilon }}}}$ for $i=1,2,3,4$ along a subsequence, and by the uniqueness of the solution of the equation (14), we conclude that the condition (A1) in Proposition 3.1 holds.
Now, let us prove the condition (A2). As mentioned before, it suffices to check the continuity of the map ${\mathcal{G}^{0}}:{\mathcal{E}_{0}}\times {S^{N}}\longrightarrow C([0,T];{L^{2}}([0,1]))$ with respect to the weak topology. Let $v\in {S^{N}}$ and $({v_{n}})\subset {S^{N}}$ be such that for any $g\in {\mathcal{H}_{T}}$,
\[ \underset{n\longrightarrow +\infty }{\lim }{\langle v-{v_{n}},g\rangle }_{{\mathcal{H}_{T}}}=0.\]
We claim that
(31)
\[ \underset{n\longrightarrow +\infty }{\lim }\underset{t\in [0,T]}{\sup }{\big\| {u^{{v_{n}}}}(t)-{u^{v}}(t)\big\| _{2}}=0.\]
Let $(t,x)\in [0,T]\times [0,1]$. The equation (14) implies
(32)
\[\begin{aligned}{}{\bar{u}^{{v_{n}}}}(t,x)-{\bar{u}^{v}}(t,x)& =-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds.\end{aligned}\]
Hence,
(33)
\[\begin{aligned}{}& {\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\\ {} & \hspace{1em}\leqslant c\Bigg\{{\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(\cdot ,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\Bigg\| }_{2}\\ {} & \hspace{2em}+{\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(\cdot ,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds\Bigg\| }_{2}\Bigg\}.\end{aligned}\]
On one hand, using Lemma 4.1, the Cauchy–Schwarz inequality and estimation (7) we get
(34)
\[\begin{aligned}{}& {\Bigg\| {\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(\cdot ,y){u^{0}}(s,y)\big({\bar{u}^{{v_{n}}}}(s,y)-{\bar{u}^{v}}(s,y)\big)dyds\Bigg\| }_{2}\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {u^{0}}(s,\cdot )\big({\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big)\big\| _{1}}ds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}\underset{s\in [0,T]}{\sup }{\big\| {u^{0}}(s,\cdot )\big\| _{2}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds.\end{aligned}\]
On the other hand, in order to handle the second term in the right hand side of (33), we define, for any $(t,x)\in [0,T]\times [0,1]$, the sequence
(35)
\[ {K_{n}}(t,x):={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds,\]
whose properties are given in Lemma 4.4 in the Appendix. Then, combining (33) and (34), we obtain for any $0\leqslant t\leqslant T$
(36)
\[ {\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\leqslant c{\big\| {K_{n}}(t)\big\| }_{2}+c{\int _{0}^{t}}{(t-s)^{-3/4}}{\big\| {\bar{u}^{{v_{n}}}}(s,\cdot )-{\bar{u}^{v}}(s,\cdot )\big\| _{2}}ds.\]
Applying Gronwall’s lemma, we get the estimate
(37)
\[ \underset{t\in [0,T]}{\sup }{\big\| {\bar{u}^{{v_{n}}}}(t,\cdot )-{\bar{u}^{v}}(t,\cdot )\big\| _{2}}\leqslant c\underset{t\in [0,T]}{\sup }{\big\| {K_{n}}(t,\cdot )\big\| }_{2},\]
which, together with (63), implies the claim (31); hence the condition (A2) holds.
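Since the kernel ${(t-s)^{-3/4}}$ in (36) is singular, the application of Gronwall’s lemma deserves a word of justification; the following is one standard way to see it (a sketch). Iterating (36) and using the Beta integral
\[ {\int _{s}^{t}}{(t-r)^{-3/4}}{(r-s)^{-3/4}}dr=B\bigg(\frac{1}{4},\frac{1}{4}\bigg){(t-s)^{-1/2}},\]
we obtain an inequality of the same form with the less singular kernel ${(t-s)^{-1/2}}$; finitely many (here, three) iterations produce a bounded kernel, and the classical Gronwall lemma then yields (37).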
Finally, the proof of Theorem 3.3 is completed since conditions of Proposition 3.1 are fulfilled.  □

4 Toward a central limit theorem

Many central limit theorems have recently been established for various kinds of parabolic SPDEs under strong assumptions on the drift coefficient. More specifically, under the linear growth condition, the differentiability and the global Lipschitz condition on both the drift coefficient and its derivative, some central limit theorems have been established in [26, 27]. Since these conditions are not all fulfilled for the stochastic Burgers equation, it is not surprising that classical tools do not apply to establish a central limit theorem. Nevertheless, we will prove in this section two first-step results toward a central limit theorem, namely the uniform boundedness of ${u^{\varepsilon }}$ and the convergence of ${u^{\varepsilon }}$ to ${u^{0}}$ in ${\mathrm{L}^{q}}(\varOmega ;C([0,T];{\mathrm{L}^{2}}([0,1])))$ for $q\geqslant 2$. We hope that our current estimates could be helpful for future works in this direction.
We begin with the following result.
Proposition 4.1.
Assume that σ is bounded and globally Lipschitz. Then for all $q\geqslant 2$, we have
(38)
\[ \underset{\varepsilon \in ]0,1]}{\sup }\mathbf{E}\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\Big)\mathrm{<}\infty .\]
Proof.
We will use arguments similar to those of Cardon-Weber and Millet [9] and Gyöngy [17]. For $0\mathrm{<}\varepsilon \leqslant 1$, set
\[ {\eta _{\varepsilon }}(t,x):=\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(dy,ds),\]
and
\[\begin{aligned}{}{\vartheta ^{\varepsilon }}(t,x)& :={u^{\varepsilon }}(t,x)-{\eta _{\varepsilon }}(t,x)\\ {} & ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy\hspace{0.1667em}-\hspace{0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({u^{\varepsilon }}(s,y)\big)^{2}}dyds\\ {} & ={\int _{0}^{1}}{G_{t}}(x,y){u_{0}}(y)dy\hspace{0.1667em}-\hspace{0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y){\big({\vartheta ^{\varepsilon }}(s,y)+{\eta _{\varepsilon }}(s,y)\big)^{2}}dyds.\end{aligned}\]
Then, ${\vartheta ^{\varepsilon }}$ is a solution of the equation
(39)
\[ \frac{\partial {\vartheta ^{\varepsilon }}}{\partial t}(t,x)=\Delta {\vartheta ^{\varepsilon }}(t,x)+\frac{\partial }{\partial x}{\big({\vartheta ^{\varepsilon }}(t,x)+{\eta _{\varepsilon }}(t,x)\big)^{2}},\hspace{1em}(t,x)\in [0,T]\times [0,1],\]
with Dirichlet’s boundary conditions and initial condition ${\vartheta ^{\varepsilon }}(0,x)={u_{0}}(x)$.
Since $\sigma \circ {u^{\varepsilon }}$ is bounded uniformly in ε, arguing as in the proof of Theorem 2.1 in [17], page 286, by the Garsia–Rodemich–Rumsey lemma, one can deduce that
\[ \underset{\varepsilon }{\sup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }{\big|{\tilde{\eta }_{\varepsilon }}(t,x)\big|^{q}}\Big)\mathrm{<}\infty ,\]
where ${\tilde{\eta }_{\varepsilon }}(t,x):=\frac{1}{\sqrt{\varepsilon }}{\eta _{\varepsilon }}(t,x)$. Consequently, there exists a universal constant $C(q)$ depending only on q such that
(40)
\[ \mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }{\big|{\eta _{\varepsilon }}(t,x)\big|^{q}}\Big)\leqslant C(q){\varepsilon ^{q/2}}.\]
In particular, the random variable ${\bar{\eta }_{\varepsilon }}:={\sup _{0\leqslant t\leqslant T}}{\sup _{0\leqslant x\leqslant 1}}|{\eta _{\varepsilon }}(t,x)|$ is well defined a.s.
Moreover, using the SPDE (39) satisfied by ${\vartheta ^{\varepsilon }}$ and following the same arguments as in the proof of Theorem 2.1 in [17], we deduce the existence of a constant c independent of ε and ω (see [17] pages 286–289) such that
(41)
\[ \underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{2}}\leqslant \| {u_{0}}{\| _{2}^{2}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{4}}\big){e^{(cT(1+{\bar{\eta }_{\varepsilon }^{2}}))}}.\]
Consequently, for any $q\geqslant 2$
\[\begin{aligned}{}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}& =\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )+{\eta _{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\\ {} & \leqslant {2^{q-1}}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\vartheta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}+\underset{0\leqslant t\leqslant T}{\sup }{\big\| {\eta _{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \leqslant {2^{q-1}}\Bigg(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{(cT(1+{\bar{\eta }_{\varepsilon }^{2}}))}}\\ {} & \hspace{1em}+\underset{0\leqslant t\leqslant T}{\sup }{\Bigg({\int _{0}^{1}}{\big|{\eta _{\varepsilon }}(t,x)\big|^{2}}dx\Bigg)^{q/2}}\Bigg)\\ {} & \leqslant {2^{q-1}}\big(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{(cT(1+{\bar{\eta }_{\varepsilon }^{2}}))}}+{\bar{\eta }_{\varepsilon }^{q}}\big)\\ {} & \leqslant c\big(\| {u_{0}}{\| _{2}^{q}}+cT\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{(cT(1+{\bar{\eta }_{\varepsilon }^{2}}))}}\big).\end{aligned}\]
Hence, to prove (38) it suffices to show that
(42)
\[ \underset{\varepsilon \in ]0,1]}{\sup }\mathbf{E}\big(\big(1+{\bar{\eta }_{\varepsilon }^{2q}}\big){e^{cT(1+{\bar{\eta }_{\varepsilon }^{2}})}}\big)\hspace{1em}\hspace{2.5pt}\text{is finite.}\]
For this purpose, note first that
\[ \underset{0\leqslant s\leqslant T}{\sup }\underset{0\leqslant x\leqslant 1}{\sup }\big|\sqrt{\varepsilon }\sigma \big({u^{\varepsilon }}(s,x)\big)\big|\leqslant \sqrt{\varepsilon }\| \sigma {\| _{\infty }},\hspace{1em}\text{where}\hspace{2.5pt}\| \sigma {\| _{\infty }}:=\underset{x\in \mathbb{R}}{\sup }\big|\sigma (x)\big|.\]
Thus, by Lemma 4.2, there exist two positive constants ${C_{1}}$ and ${C_{2}}$, independent of ε, such that for any $M\geqslant {C_{1}}\| \sigma {\| _{\infty }}$
(43)
\[ P({\bar{\eta }_{\varepsilon }}\geqslant M)\leqslant {C_{1}}\| \sigma {\| _{\infty }}\exp \bigg(-\frac{{M^{2}}}{\varepsilon {C_{2}}(1+{T^{\frac{1}{8}}})}\bigg).\]
Setting $\varphi (x):=(1+{x^{2q}}){e^{cT(1+{x^{2}})}}$, which is a positive, continuous and increasing function on $[0,+\infty [$, we get for any $A\geqslant {C_{1}}\| \sigma {\| _{\infty }}$
\[\begin{aligned}{}\mathbf{E}\big(\varphi ({\bar{\eta }_{\varepsilon }})\big)& ={\int _{0}^{+\infty }}P\big(\varphi ({\bar{\eta }_{\varepsilon }})\mathrm{>}x\big)dx\\ {} & =\varphi (0)+{\int _{0}^{A}}P({\bar{\eta }_{\varepsilon }}\mathrm{>}x){\varphi ^{\prime }}(x)dx+{\int _{A}^{+\infty }}P({\bar{\eta }_{\varepsilon }}\mathrm{>}x){\varphi ^{\prime }}(x)dx\\ {} & \leqslant \varphi (A)+c{C_{1}}\| \sigma {\| _{\infty }}{\int _{A}^{+\infty }}\big(1+{x^{2q+1}}\big)\exp \bigg(cT{x^{2}}-\frac{{x^{2}}}{\varepsilon {C_{2}}(1+{T^{\frac{1}{8}}})}\bigg)dx\\ {} & \leqslant \varphi (A)+c{C_{1}}\| \sigma {\| _{\infty }}{\int _{A}^{+\infty }}\big(1+{x^{2q+1}}\big)\exp \bigg(cT{x^{2}}-\frac{{x^{2}}}{{C_{2}}(1+{T^{\frac{1}{8}}})}\bigg)dx,\end{aligned}\]
where the last integral is finite provided that $cT{C_{2}}(1+{T^{\frac{1}{8}}})\mathrm{<}1$. This implies that there exists ${T_{0}}\mathrm{>}0$, independent of ${u_{0}}$ and ε, such that (42) holds for $0\mathrm{<}T\leqslant {T_{0}}$. Using (41) and iterating the procedure finitely many times, we conclude the proof.  □
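The final iteration can be made slightly more explicit (a sketch; the constants c and the time ${T_{0}}$ are those obtained above, and the key point is that ${T_{0}}$ does not depend on the initial condition). Partitioning $[0,T]$ into intervals of length at most ${T_{0}}$ and applying the estimate on each of them, with ${u^{\varepsilon }}(k{T_{0}},\cdot )$ as initial data, gives
\[ \mathbf{E}\Big(\underset{t\in [k{T_{0}},(k+1){T_{0}}]}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\Big)\leqslant c\Big(1+\mathbf{E}\big({\big\| {u^{\varepsilon }}(k{T_{0}},\cdot )\big\| _{2}^{q}}\big)\Big),\]
uniformly in $\varepsilon \in ]0,1]$; an induction over $k=0,\dots ,\lceil T/{T_{0}}\rceil -1$ then yields (38) on the whole interval $[0,T]$.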
We can now state the following proposition.
Proposition 4.2.
Assume that σ is bounded and globally Lipschitz. Then, for all $q\geqslant 2$, we have
(44)
\[ \underset{\varepsilon \longrightarrow 0}{\lim }\mathbf{E}\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)=0.\]
Proof.
We will use a localization argument. For $0\leqslant t\leqslant T$, $\varepsilon \in ]0,1]$ and $M\mathrm{>}0$, set
(45)
\[ {\varOmega _{\varepsilon }^{M}}(t):=\Big\{\omega \in \varOmega :\underset{s\in [0,t]}{\sup }{\big\| {u^{\varepsilon }}(s)\big\| _{2}}\vee \underset{s\in [0,t]}{\sup }{\big\| {u^{0}}(s)\big\| _{2}}\leqslant M\Big\}.\]
We have
(46)
\[\begin{aligned}{}{u^{\varepsilon }}(t,x)-{u^{0}}(t,x)& =\sqrt{\varepsilon }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{\varepsilon }}(s,y)\big)W(ds,dy)\\ {} & \hspace{1em}-{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y)\big({\big({u^{\varepsilon }}(s,y)\big)^{2}}-{\big({u^{0}}(s,y)\big)^{2}}\big)dyds\\ {} & =:{\eta ^{\varepsilon }}(t,x)+{I^{\varepsilon }}(t,x).\end{aligned}\]
Then, for any $q\geqslant 2$,
(47)
\[ {\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\leqslant {2^{q-1}}\big({\big\| {\eta ^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}+{\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\big).\]
For ${\eta ^{\varepsilon }}(t,\cdot )$, by the Hölder inequality we have
\[\begin{aligned}{}\mathbf{E}\Big(\underset{0\leqslant s\leqslant t}{\sup }{\big\| {\eta ^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)& \leqslant \mathbf{E}\Bigg(\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{1}}{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}dx\Bigg)\\ {} & \leqslant {\int _{0}^{1}}\mathbf{E}\Big(\underset{0\leqslant s\leqslant t}{\sup }{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}\Big)dx\\ {} & \leqslant \mathbf{E}\Big(\underset{0\leqslant x\leqslant 1}{\sup }\underset{0\leqslant s\leqslant t}{\sup }{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}\Big)\\ {} & \leqslant C(q){\varepsilon ^{q/2}},\end{aligned}\]
where the last inequality follows from (40).
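The first inequality above holds since the Lebesgue measure on $[0,1]$ is a probability measure and $x\mapsto {x^{q/2}}$ is convex for $q\geqslant 2$; by Jensen’s inequality (equivalently, Hölder’s inequality),
\[ {\big\| {\eta ^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}={\Bigg({\int _{0}^{1}}{\big|{\eta ^{\varepsilon }}(s,x)\big|^{2}}dx\Bigg)^{q/2}}\leqslant {\int _{0}^{1}}{\big|{\eta ^{\varepsilon }}(s,x)\big|^{q}}dx.\]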
For ${I^{\varepsilon }}(t,\cdot )$, according to Lemma 4.1 in the Appendix with $\rho =2$ and $\lambda =1$, we have
(48)
\[ {\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big)\big({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )\big)\big\| _{1}}ds,\]
and using the following form of Hölder’s inequality
\[ {\Bigg|{\int _{0}^{t}}f(s)g(s)ds\Bigg|^{q}}\leqslant {\Bigg({\int _{0}^{t}}\big|f(s)\big|ds\Bigg)^{q-1}}{\int _{0}^{t}}\big|f(s)\big|{\big|g(s)\big|^{q}}ds,\]
with $f(s):={(t-s)^{-\frac{3}{4}}}$ and $g(s):=\| ({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot ))({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )){\| _{1}}$, we get
(49)
\[ {\big\| {I^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}{\big\| \big({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big)\big({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )\big)\big\| _{1}^{q}}ds.\]
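For completeness, the form of Hölder’s inequality used to pass from (48) to (49) can be checked as follows: writing $fg=|f{|^{\frac{q-1}{q}}}\cdot (|f{|^{\frac{1}{q}}}g)$ and applying Hölder’s inequality with the conjugate exponents $\frac{q}{q-1}$ and q gives
\[ \Bigg|{\int _{0}^{t}}f(s)g(s)ds\Bigg|\leqslant {\Bigg({\int _{0}^{t}}\big|f(s)\big|ds\Bigg)^{\frac{q-1}{q}}}{\Bigg({\int _{0}^{t}}\big|f(s)\big|{\big|g(s)\big|^{q}}ds\Bigg)^{\frac{1}{q}}},\]
and it remains to raise both sides to the power q.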
Now, taking the supremum up to time $t\in [0,T]$, and setting $\varPhi (s):=\| ({u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot ))({u^{\varepsilon }}(s,\cdot )+{u^{0}}(s,\cdot )){\| _{1}^{q}}$, and $\varPsi (s):={\sup _{0\leqslant r\leqslant s}}\varPhi (r)$, (49) implies
(50)
\[\begin{aligned}{}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}& \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\varPhi (r)dr\\ {} & \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\underset{0\leqslant {r^{\prime }}\leqslant r}{\sup }\varPhi \big({r^{\prime }}\big)dr\\ {} & \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{(s-r)^{-\frac{3}{4}}}\varPsi (r)dr\\ {} & =c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{r^{-\frac{3}{4}}}\varPsi (s-r)dr.\end{aligned}\]
Since
\[ \varPsi (s-r)=\underset{0\leqslant {r^{\prime }}\leqslant s-r}{\sup }\varPhi ({r^{\prime }})\leqslant \underset{0\leqslant {r^{\prime }}\leqslant t-r}{\sup }\varPhi ({r^{\prime }})=\varPsi (t-r),\]
then
\[\begin{aligned}{}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}& \leqslant c\underset{0\leqslant s\leqslant t}{\sup }{\int _{0}^{s}}{r^{-\frac{3}{4}}}\varPsi (t-r)dr\\ {} & =c{\int _{0}^{t}}{r^{-\frac{3}{4}}}\varPsi (t-r)dr\\ {} & =c{\int _{0}^{t}}{(t-r)^{-\frac{3}{4}}}\varPsi (r)dr.\end{aligned}\]
Taking the expectation over ${\varOmega _{\varepsilon }^{M}}(t)$ and taking into account the facts that ${\varOmega _{\varepsilon }^{M}}(t)\in {\mathcal{F}_{t}}$ and ${\varOmega _{\varepsilon }^{M}}(t)\subset {\varOmega _{\varepsilon }^{M}}(s)$ for $0\leqslant s\leqslant t$, we get
(51)
\[ \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)\leqslant c{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\varPsi (s)\big)ds.\]
Notice that
\[\begin{aligned}{}{\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\varPsi (s)& \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| \big({u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big)\big({u^{\varepsilon }}(r,\cdot )+{u^{0}}(r,\cdot )\big)\big\| _{1}^{q}}\\ {} & \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}{\big\| {u^{\varepsilon }}(r,\cdot )+{u^{0}}(r,\cdot )\big\| _{2}^{q}}\\ {} & \leqslant {\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\big({\big\| {u^{\varepsilon }}(r,\cdot )\big\| _{2}^{q}}+{\big\| {u^{0}}(r,\cdot )\big\| _{2}^{q}}\big)\\ {} & \leqslant 2{M^{q}}{\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}.\end{aligned}\]
This, together with (51), gives
(52)
\[\begin{aligned}{}& \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {I^{\varepsilon }}(s,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant 2c{M^{q}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\Big)ds.\end{aligned}\]
Combining (47)–(52) we get for any $0\leqslant t\leqslant T$
(53)
\[\begin{aligned}{}& \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant c\Bigg[{\varepsilon ^{q/2}}+2{M^{q}}{\int _{0}^{t}}{(t-s)^{-\frac{3}{4}}}\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(s)}}\underset{0\leqslant r\leqslant s}{\sup }{\big\| {u^{\varepsilon }}(r,\cdot )-{u^{0}}(r,\cdot )\big\| _{2}^{q}}\Big)ds\Bigg].\end{aligned}\]
Using Gronwall’s lemma we deduce that, for all $t\in [0,T]$,
(54)
\[ \mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(t)}}\underset{0\leqslant s\leqslant t}{\sup }{\big\| {u^{\varepsilon }}(s,\cdot )-{u^{0}}(s,\cdot )\big\| _{2}^{q}}\Big)\leqslant c{\varepsilon ^{q/2}}{e^{2c{M^{q}}}}.\]
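The kernel ${(t-s)^{-\frac{3}{4}}}$ in (53) is weakly singular, so the Gronwall argument used here is the generalized (singular) Gronwall inequality rather than the classical one. For the reader's convenience, one standard formulation (with generic symbols φ, a, b, β not taken from the text) reads: if a bounded measurable function $\varphi \geqslant 0$ satisfies
\[ \varphi (t)\leqslant a+b{\int _{0}^{t}}{(t-s)^{-\beta }}\varphi (s)ds,\hspace{1em}t\in [0,T],\hspace{2.5pt}0\leqslant \beta \mathrm{<}1,\]
then $\varphi (t)\leqslant a\hspace{0.1667em}C(b,\beta ,T)$ for all $t\in [0,T]$, with a constant $C(b,\beta ,T)$ independent of a. In (53) this applies with $a=c{\varepsilon ^{q/2}}$, $b=2c{M^{q}}$ and $\beta =\frac{3}{4}$; the precise dependence of the constant on M is immaterial in the sequel, since ε is sent to 0 before M.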
Therefore, for any fixed $M\mathrm{>}0$ we have
\[\begin{aligned}{}& \mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}=\mathbf{E}\Big({\mathbf{1}_{{\varOmega _{\varepsilon }^{M}}(T)}}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{2em}+\mathbf{E}\Big({\mathbf{1}_{\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)}}\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\\ {} & \hspace{1em}\leqslant c{\varepsilon ^{q/2}}{e^{2c{M^{q}}}}+{\big(P\big(\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)\big)\big)^{1/2}}{\Big(\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{2q}}\Big)\Big)^{1/2}}.\end{aligned}\]
To deal with the second term in the last inequality, note, on the one hand, that the estimates (7) and (38) imply that there exists $c\mathrm{>}0$ such that
(55)
\[ \underset{\varepsilon \in ]0,1]}{\sup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\mathrm{<}c.\]
On the other hand, by Markov's inequality, together with the estimates (7) and (38), we have
(56)
\[\begin{aligned}{}P\big(\varOmega \setminus {\varOmega _{\varepsilon }^{M}}(T)\big)& \leqslant P\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )\big\| _{2}^{q}}\mathrm{>}{M^{q}}\Big)+P\Big(\underset{t\in [0,T]}{\sup }{\big\| {u^{0}}(t,\cdot )\big\| _{2}^{q}}\mathrm{>}{M^{q}}\Big)\\ {} & \leqslant \frac{\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{\varepsilon }}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}+\frac{\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{0}}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}\\ {} & \leqslant \frac{{\sup _{\varepsilon \in ]0,1]}}\mathbf{E}({\sup _{t\in [0,T]}}\| {u^{\varepsilon }}(t,\cdot ){\| _{2}^{q}})}{{M^{q}}}+\frac{{\sup _{t\in [0,T]}}\| {u^{0}}(t,\cdot ){\| _{2}^{q}}}{{M^{q}}}\\ {} & \leqslant \frac{c}{{M^{q}}}.\end{aligned}\]
Then
(57)
\[ \mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\leqslant c{\varepsilon ^{q/2}}{e^{2c{M^{q}}}}+\frac{c}{{M^{q/2}}}.\]
Letting ε tend to zero, and taking into account that M does not depend on ε, we obtain
\[ \underset{\varepsilon \longrightarrow 0}{\limsup }\mathbf{E}\Big(\underset{0\leqslant t\leqslant T}{\sup }{\big\| {u^{\varepsilon }}(t,\cdot )-{u^{0}}(t,\cdot )\big\| _{2}^{q}}\Big)\leqslant \frac{c}{{M^{q/2}}}.\]
Finally, since M is arbitrary, we conclude that (44) holds.  □

Appendix

This section contains some technical results needed in the proof of the main theorem of the paper.
First, we recall the following result, proved as Lemma 3.1 in [17].
For $H(t,s;x,y):=G(t-s,x,y)$ or $H(t,s;x,y):=(\partial /{\partial _{y}})G(t-s,x,y)$, where $0\leqslant s\leqslant t\leqslant T$ and $x,y\in [0,1]$, define the linear operator J by
\[ J(v)(t,x):={\int _{0}^{t}}{\int _{0}^{1}}H(t,r;x,y)v(r,y)dydr,\hspace{1em}t\in [0,T],\hspace{2.5pt}x\in [0,1],\]
for every $v\in {\mathrm{L}^{\infty }}([0,T],{L^{1}}([0,1]))$.
Lemma 4.1.
Let $\rho \mathrm{>}1$, $\lambda \in [1,\rho [$ and set $\kappa :=1+\frac{1}{\rho }-\frac{1}{\lambda }$. Then, J is a bounded linear operator from ${\mathrm{L}^{\gamma }}([0,T],{L^{\lambda }}([0,1]))$ into $C([0,T],{L^{\rho }}([0,1]))$ for $\gamma \mathrm{>}2{\kappa ^{-1}}$. Moreover, there exists a positive constant c such that for all $t\in [0,T]$,
(58)
\[ {\big\| J(v)(t,\cdot )\big\| _{\rho }}\leqslant c{\int _{0}^{t}}{(t-r)^{\frac{\kappa }{2}-1}}{\big\| v(r,\cdot )\big\| _{\lambda }}dr.\]
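In the computations of the present paper, Lemma 4.1 is applied with the particular choice $\rho =2$ and $\lambda =1$, for which $\kappa =1+\frac{1}{2}-1=\frac{1}{2}$ and $\frac{\kappa }{2}-1=-\frac{3}{4}$; the estimate (58) then reads
\[ {\big\| J(v)(t,\cdot )\big\| _{2}}\leqslant c{\int _{0}^{t}}{(t-r)^{-\frac{3}{4}}}{\big\| v(r,\cdot )\big\| _{1}}dr,\]
which is the source of the kernel ${(t-s)^{-\frac{3}{4}}}$ appearing in (51)–(53) and (61).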
The following lemma is a consequence of Lemma 3.1 in [11]; its proof is omitted.
Lemma 4.2.
Let ${\mathcal{F}_{t}}=\sigma (W(s,x);0\leqslant s\leqslant t,\hspace{0.1667em}0\leqslant x\leqslant 1)$ and let $Z:\varOmega \times [0,T]\times [0,1]\longrightarrow \mathbb{R}$ be an $({\mathcal{F}_{t}})$-predictable process such that
\[ \underset{0\leqslant s\leqslant T}{\sup }\underset{0\leqslant y\leqslant 1}{\sup }|Z(s,y)|\leqslant \rho .\]
Set $I(t,x):={\int _{0}^{t}}{\int _{0}^{1}}{G_{t-u}}(x,z)Z(u,z)W(du,dz)$. Then, there exist positive constants ${C_{1}}$ and ${C_{2}}$ such that for $M\mathrm{>}{C_{1}}\rho $,
(59)
\[ P\Big(\underset{0\leqslant s\leqslant T}{\sup }\underset{0\leqslant y\leqslant 1}{\sup }\big|I(s,y)\big|\geqslant M\Big)\leqslant {C_{1}}\exp \bigg(-\frac{{M^{2}}}{{\rho ^{2}}{C_{2}}(1+{T^{\frac{1}{8}}})}\bigg).\]
Proof of Proposition 3.3.
To use a fixed point argument, we consider, for any given ${\mathrm{L}^{2}}([0,1])$-valued function $\{w(t),t\in [0,T]\}$, the following operator
\[\begin{aligned}{}(\mathcal{A}w)(t,x)& :=-2{\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}{G_{t-s}}(x,y)w(s,y){u^{0}}(s,y)dyds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)v(s,y)dyds.\end{aligned}\]
We are going to prove that $\mathcal{A}$ is a contraction on the Banach space $\mathbb{H}$ of ${\mathrm{L}^{2}}([0,1])$-valued functions $\{w(t),t\in [0,T]\}$ such that $w(0)=0$, equipped with the norm
(60)
\[ \| w{\| _{\mathbb{H}}}:={\int _{0}^{T}}{e^{-\lambda t}}{\big\| w(t,\cdot )\big\| _{2}^{2}}dt,\hspace{1em}\text{where}\hspace{2.5pt}\lambda \mathrm{>}0\hspace{2.5pt}\text{will be fixed later}.\]
Step 1. Let $t\in [0,T]$. We first prove that if w satisfies ${\sup _{0\leqslant s\leqslant t}}\| w(s,\cdot ){\| _{2}^{q}}\mathrm{<}\infty $, then $\mathcal{A}w$ also satisfies this estimate. By Lemma 4.1, the Cauchy–Schwarz inequality and the hypothesis on w, we have
\[\begin{aligned}{}\Big(\underset{0\leqslant s\leqslant t}{\sup }{\big\| \mathcal{A}w(s,\cdot )\big\| _{2}^{q}}\Big)& \leqslant c\Bigg[1+{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\Big(\underset{0\leqslant r\leqslant s}{\sup }{\big\| w(r,\cdot ){u^{0}}(r,\cdot )\big\| _{1}^{q}}\Big)ds\Bigg]\\ {} & \leqslant c\Bigg[1+{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\Big(\underset{0\leqslant r\leqslant s}{\sup }{\big\| w(r,\cdot )\big\| _{2}^{q}}{\big\| {u^{0}}(r,\cdot )\big\| _{2}^{q}}\Big)ds\Bigg]\\ {} & \leqslant c\Bigg[1+{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\Big(\underset{0\leqslant r\leqslant s}{\sup }{\big\| w(r,\cdot )\big\| _{2}^{q}}\Big)ds\Bigg]\\ {} & \leqslant c\Bigg[1+{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}ds\Bigg],\end{aligned}\]
which is clearly finite.
Step 2. Let ${w_{1}}$ and ${w_{2}}$ be two elements in $\mathbb{H}$. For any $t\in [0,T]$ we have
(61)
\[\begin{aligned}{}\big({\big\| \mathcal{A}{w_{1}}(t,\cdot )-\mathcal{A}{w_{2}}(t,\cdot )\big\| _{2}^{q}}\big)& \leqslant c{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\big({\big\| \big({w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big){u^{0}}(s,\cdot )\big\| _{1}^{q}}\big)ds\\ {} & \leqslant c{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\big({\big\| {w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big\| _{2}^{q}}{\big\| {u^{0}}(s,\cdot )\big\| _{2}^{q}}\big)ds\\ {} & \leqslant c{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\big({\big\| {w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big\| _{2}^{q}}\big)ds.\end{aligned}\]
Then, using Fubini’s theorem we have
\[\begin{aligned}{}& {\int _{0}^{T}}{e^{-\lambda t}}\big({\big\| \mathcal{A}{w_{1}}(t,\cdot )-\mathcal{A}{w_{2}}(t,\cdot )\big\| _{2}^{q}}\big)dt\\ {} & \hspace{1em}\leqslant c{\int _{0}^{T}}{e^{-\lambda t}}{\int _{0}^{t}}{(t-s)^{\frac{-3}{4}}}\big({\big\| {w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big\| _{2}^{q}}\big)dsdt\\ {} & \hspace{1em}\leqslant c{\int _{0}^{T}}{\int _{s}^{T}}{e^{-\lambda t}}{(t-s)^{\frac{-3}{4}}}\big({\big\| {w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big\| _{2}^{q}}\big)dtds\\ {} & \hspace{1em}\leqslant c{\int _{0}^{T}}\big({\big\| {w_{1}}(s,\cdot )-{w_{2}}(s,\cdot )\big\| _{2}^{q}}\big){\int _{s}^{T}}{e^{-\lambda t}}{(t-s)^{\frac{-3}{4}}}dtds\\ {} & \hspace{1em}\leqslant c\Bigg({\int _{0}^{T}}{e^{-\lambda r}}{r^{\frac{-3}{4}}}dr\Bigg)\| {w_{1}}-{w_{2}}{\| _{\mathbb{H}}^{q}}.\end{aligned}\]
Take λ and ${T_{0}}\mathrm{>}0$ such that
\[ c{\int _{0}^{{T_{0}}}}{e^{-\lambda r}}{r^{\frac{-3}{4}}}dr\mathrm{<}1.\]
Then, for $T\leqslant {T_{0}}$, the operator $\mathcal{A}$ is a contraction on $\mathbb{H}$. Consequently, for any $v\in {S^{N}}$, it admits a unique fixed point ${u^{v}}\in \mathbb{H}$ which satisfies the equation (14). By concatenation we can construct a solution on every interval $[0,T]$.
The continuity of the solution ${u^{v}}$ follows from the continuity of the integrals involved. For the estimate (18), one can apply to ${u^{v}}$ the same computations as in (61), together with Gronwall's lemma.  □
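The exponentially weighted norm (60) is the classical Bielecki trick: the weight ${e^{-\lambda t}}$ makes $\mathcal{A}$ a contraction once λ and ${T_{0}}$ are chosen as above. As a quick numerical illustration (not part of the proof; the normalization $c=1$, the horizon $T=1$ and the grid size are assumptions made only for this sketch), the following snippet approximates the contraction constant $c{\int _{0}^{T}}{e^{-\lambda r}}{r^{-3/4}}dr$ and shows that it falls below 1 for λ large:

```python
import math

def contraction_constant(lam, T, c=1.0, n=100_000):
    r"""Approximate c * \int_0^T exp(-lam * r) * r**(-3/4) dr.

    The substitution r = u**4 (so dr = 4*u**3 du and r**(-3/4) = u**(-3))
    removes the singularity at r = 0, leaving the smooth integral
    4 * \int_0^{T**(1/4)} exp(-lam * u**4) du, evaluated by the midpoint rule.
    """
    a = T ** 0.25
    h = a / n
    s = sum(math.exp(-lam * ((i + 0.5) * h) ** 4) for i in range(n))
    return 4.0 * c * h * s

# The constant is dominated by c * Gamma(1/4) * lam**(-1/4), so it
# decays to 0 as lam grows, whatever the (fixed) value of c.
for lam in (1, 10, 1_000):
    print(lam, contraction_constant(lam, T=1.0))
```

With $c=1$ and $T=1$ the constant exceeds 1 for $\lambda =1$ but is below 1 for $\lambda =1000$, in line with the smallness condition required for the fixed point argument.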
In order to prove Lemma 3.6 we used the following lemma, whose proof can be found in Lemma 3.3 in [17].
Lemma 4.3.
For $v\in {L^{\infty }}([0,T];{L^{1}}([0,1]))$, set
\[ J(v)(t,x):={\int _{0}^{t}}{\int _{0}^{1}}{\partial _{y}}G(t-s,x,y)v(s,y)dyds,\hspace{1em}t\in [0,T],\hspace{2.5pt}x\in [0,1].\]
Let $\rho \in [1,+\infty [$ and $q\in [1,\rho [$. Moreover, let ${\zeta _{\varepsilon }}(t,x)$ be a family of random fields on $[0,T]\times [0,1]$ such that ${\sup _{t\leqslant T}}\| {\zeta _{\varepsilon }}(t,\cdot ){\| _{q}}\leqslant {\theta _{\varepsilon }}$, where ${\theta _{\varepsilon }}$ is a finite random variable for every ε. Assume that the family ${\theta _{\varepsilon }}$ is bounded in probability, i.e.,
\[ \underset{c\longrightarrow +\infty }{\lim }\underset{\varepsilon }{\sup }\mathbb{P}\{{\theta _{\varepsilon }}\geqslant c\}=0.\]
Then, the family ${(J({\zeta _{\varepsilon }}))}_{\varepsilon \mathrm{>}0}$ is uniformly tight in $C([0,T];{L^{\rho }}([0,1]))$.
We summarize some important properties of the sequence $\{{K_{n}},n\geqslant 1\}$ in the following lemma.
Lemma 4.4.
Let $({v_{n}})\subset {S^{N}}$ be a sequence converging weakly in ${\mathcal{H}_{T}}$ to an element v in ${S^{N}}$. The sequence $\{{K_{n}},n\geqslant 1\}$ defined in (35) satisfies the following:
  • i) the sequence $\{{K_{n}}(t,x),n\geqslant 1\}$ converges to zero, for any fixed $(t,x)\in [0,T]\times [0,1]$;
  • ii) there exists a constant $c(N,T)$ depending on N and T such that
    (62)
    \[ \underset{n\geqslant 1}{\sup }\underset{t\in [0,T]}{\sup }{\big\| {K_{n}}(t,\cdot )\big\| }_{2}\leqslant c(N,T);\]
  • iii)
    (63)
    \[ \underset{n\longrightarrow \infty }{\lim }\underset{t\in [0,T]}{\sup }{\big\| {K_{n}}(t,\cdot )\big\| }_{2}=0.\]
Proof.
First notice that since
\[\begin{aligned}{}{\big\| {\mathbf{1}_{[0,t]}}(\cdot ){G_{t-\cdot }}(x,\ast )\sigma \big({u^{0}}(\cdot ,\ast )\big)\big\| _{{\mathcal{H}_{T}}}^{2}}& :={\int _{0}^{T}}{\int _{0}^{1}}{\mathbf{1}_{[0,t]}}(s){G_{t-s}^{2}}(x,y){\sigma ^{2}}\big({u^{0}}(s,y)\big)dyds\\ {} & \leqslant c\underset{x\in [0,1]}{\sup }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds\mathrm{<}+\infty ,\end{aligned}\]
we have ${\mathbf{1}_{[0,t]}}(\cdot ){G_{t-\cdot }}(x,\ast )\sigma ({u^{0}}(\cdot ,\ast ))\in {\mathcal{H}_{T}}$ and hence
\[ {K_{n}}(t,x)={\big\langle {\mathbf{1}_{[0,t]}}(\cdot ){G_{t-\cdot }}(x,\ast )\sigma \big({u^{0}}(\cdot ,\ast )\big),{v_{n}}-v\big\rangle }_{{\mathcal{H}_{T}}}.\]
Therefore, by the weak convergence of $({v_{n}})$ to v in ${\mathcal{H}_{T}}$, we get point i) of Lemma 4.4.
Now, let us show (62) and (63). Using the Cauchy–Schwarz inequality, the boundedness of σ, the fact that ${v_{n}},v\in {S^{N}}$, and Lemma 2.1, we have for any $0\leqslant t\leqslant T$,
(64)
\[\begin{aligned}{}{\big\| {K_{n}}(t,\cdot )\big\| _{2}^{2}}& ={\int _{0}^{1}}{\Bigg|{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big({v_{n}}(s,y)-v(s,y)\big)dyds\Bigg|^{2}}dx\\ {} & \leqslant \| {v_{n}}-v{\| _{{\mathcal{H}_{T}}}^{2}}\Bigg(\underset{x\in [0,1]}{\sup }{\int _{0}^{t}}{\int _{0}^{1}}{\big({G_{t-s}}(x,y)\sigma \big({u^{0}}(s,y)\big)\big)^{2}}dyds\Bigg)\\ {} & \leqslant c(N,T)\Bigg(\underset{x\in [0,1]}{\sup }{\int _{0}^{t}}{\int _{0}^{1}}{G_{t-s}^{2}}(x,y)dyds\Bigg)\\ {} & \leqslant c(N,T),\end{aligned}\]
for some constant $c(N,T)$ depending only on N and T, and not on n. This yields (62).
It remains to prove (63). By similar arguments, we have, for any $t,{t^{\prime }}\in [0,T]$ with $t\leqslant {t^{\prime }}$,
(65)
\[\begin{aligned}{}{\big\| {K_{n}}(t,\cdot )-{K_{n}}\big({t^{\prime }},\cdot \big)\big\| _{2}^{2}}& \leqslant c(N,T)\Bigg(\underset{x\in [0,1]}{\sup }{\int _{0}^{t}}{\int _{0}^{1}}{\big({G_{t-s}}(x,y)-{G_{{t^{\prime }}-s}}(x,y)\big)^{2}}dyds\\ {} & \hspace{1em}+\underset{x\in [0,1]}{\sup }{\int _{t}^{{t^{\prime }}}}{\int _{0}^{1}}{G_{{t^{\prime }}-s}^{2}}(x,y)dyds\Bigg)\\ {} & \leqslant c(N,T){\big|t-{t^{\prime }}\big|^{1/2}}.\end{aligned}\]
According to (64) and (65), the sequence $\{{K_{n}},n\geqslant 1\}$ is a bounded, uniformly Hölder continuous family in $C([0,T];{L^{2}}([0,1]))$, hence a bounded equicontinuous family; combining this with point i) of Lemma 4.4 and the Arzelà–Ascoli theorem, we get (63).  □

Acknowledgement

The authors are very grateful to the Editor for her constructive criticism, from which the final version of the article has benefited. Many thanks also to the referees for their careful reading and useful remarks. We are also very indebted to Professors R. Zhang and J. Xiong for helpful discussions about the Burkholder–Davis–Gundy inequality for SPDEs driven by a space-time white noise.

References

[1] 
Bertini, L., Cancrini, N., Jona-Lasinio, G.: The stochastic Burgers equation. Commun. Math. Phys. 165(2), 211–232 (1994). MR1301846
[2] 
Boué, M., Dupuis, P.: A variational representation for certain functionals of Brownian motion. Ann. Probab. 26(4), 1641–1659 (1998). MR1675051. https://doi.org/10.1214/aop/1022855876
[3] 
Bryc, W.: Large deviations by the asymptotic value method. Diffus. Process. Relat. Probl. Anal. 20, 1004–1030 (1992)
[4] 
Budhiraja, A., Dupuis, P.: A variational representation for positive functionals of infinite dimensional Brownian motion. Probab. Math. Stat., Wroclaw Univ. 20(1), 39–61 (2000). MR1785237
[5] 
Budhiraja, A., Dupuis, P., Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems. Ann. Probab., 1390–1420 (2008). MR2435853. https://doi.org/10.1214/07-AOP362
[6] 
Budhiraja, A., Dupuis, P., Ganguly, A., et al.: Moderate deviation principles for stochastic differential equations with jumps. Ann. Probab. 44(3), 1723–1775 (2016). MR3502593. https://doi.org/10.1214/15-AOP1007
[7] 
Burgers, J.M.: The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems. D. Reidel, Dordrecht-Holland, Boston (1974)
[8] 
Cardon-Weber, C.: Large deviations for a Burgers-type SPDE. Stoch. Process. Appl. 84(1), 53–70 (1999). MR1720097. https://doi.org/10.1016/S0304-4149(99)00047-2
[9] 
Cardon-Weber, C., Millet, A.: A support theorem for a generalized Burgers SPDE. Potential Anal. 15(4), 361–408 (2001). MR1856154. https://doi.org/10.1023/A:1011857909744
[10] 
Chen, Y., Gao, H.: Well-posedness and large deviations for a class of SPDEs with Lévy noise. J. Differ. Equ. 263(9), 5216–5252 (2017). MR3688413. https://doi.org/10.1016/j.jde.2017.06.016
[11] 
Chenal, F., Millet, A.: Uniform large deviations for parabolic SPDEs and applications. Stoch. Process. Appl. 72(2), 161–186 (1997). MR1486551. https://doi.org/10.1016/S0304-4149(97)00091-4
[12] 
De Acosta, A.: Moderate deviations and associated Laplace approximations for sums of independent random vectors. Trans. Am. Math. Soc. 329(1), 357–375 (1992). MR1046015. https://doi.org/10.2307/2154092
[13] 
Dupuis, P., Ellis, R.S.: A Weak Convergence Approach to the Theory of Large Deviations, vol. 902. John Wiley & Sons (2011). MR1431744. https://doi.org/10.1002/9781118165904
[14] 
Foondun, M., Setayeshgar, L.: Large deviations for a class of semilinear stochastic partial differential equations. Stat. Probab. Lett. 121, 143–151 (2017). MR3575422. https://doi.org/10.1016/j.spl.2016.10.019
[15] 
Freidlin, M., Wentzell, A.: Random perturbations of dynamical systems. Springer (1984). MR0722136. https://doi.org/10.1007/978-1-4684-0176-9
[16] 
Gao, F.-Q.: Moderate deviations for martingales and mixing random processes. Stoch. Process. Appl. 61(2), 263–275 (1996). MR1386176. https://doi.org/10.1016/0304-4149(95)00078-X
[17] 
Gyöngy, I.: Existence and uniqueness results for semilinear stochastic partial differential equations. Stoch. Process. Appl. 73(2), 271–299 (1998). MR1608641. https://doi.org/10.1016/S0304-4149(97)00103-8
[18] 
Hu, S., Li, R., Wang, X.: Central limit theorem and moderate deviations for a class of semilinear SPDEs. arXiv preprint arXiv:1811.05611 (2018)
[19] 
Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes vol. 24. Elsevier (2014). MR0892528
[20] 
Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus, vol. 113. Springer (2012). MR1121940. https://doi.org/10.1007/978-1-4612-0949-2
[21] 
Wu, L.: Moderate deviations of dependent random variables related to CLT. Ann. Probab. 23(1), 420–445 (1995). MR1330777
[22] 
Morien, P.-L.: On the density for the solution of a Burgers-type SPDE. In: Annales de l’Institut Henri Poincare (B) Probability and Statistics, vol. 35, pp. 459–482 (1999). Elsevier. MR1702238. https://doi.org/10.1016/S0246-0203(99)00102-8
[23] 
Setayeshgar, L.: Large deviations for a stochastic Burgers equation. Commun. Stoch. Anal. 8, 141–154 (2014). MR3269841. https://doi.org/10.31390/cosa.8.2.01
[24] 
Varadhan, S.R.S.: Asymptotic probabilities and differential equations. Commun. Pure Appl. Math. 19(3), 261–286 (1966). MR0203230. https://doi.org/10.1002/cpa.3160190303
[25] 
Walsh, J.B.: An introduction to stochastic partial differential equations. In: École d’Été de Probabilités de Saint Flour XIV-1984, pp. 265–439. Springer (1986). MR0876085. https://doi.org/10.1007/BFb0074920
[26] 
Wang, R., Zhang, T.: Moderate deviations for stochastic reaction-diffusion equations with multiplicative noise. Potential Anal. 42(1), 99–113 (2015). MR3297988. https://doi.org/10.1007/s11118-014-9425-6
[27] 
Yang, J., Jiang, Y.: Moderate deviations for fourth-order stochastic heat equations with fractional noises. Stoch. Dyn. 16(06), 1650022 (2016). MR3568726. https://doi.org/10.1142/S0219493716500222
[28] 
Zhang, R., Xiong, J.: Semilinear stochastic partial differential equations: central limit theorem and moderate deviations. arXiv preprint arXiv:1904.00299 (2019)


Copyright
© 2019 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Stochastic Burgers equation, space-time white noise, stochastic partial differential equations, moderate deviations principle, weak convergence method

MSC2010
60F10, 60F05, 60H15


