A moderate deviations principle for the law of a stochastic Burgers equation is proved via the weak convergence approach. In addition, some useful estimates toward a central limit theorem are established.
We consider the following stochastic Burgers equation with multiplicative space-time white noise, indexed by ε>0, given by
∂uε/∂t(t,x)=Δuε(t,x)+(1/2)∂/∂x(uε(t,x))2+√ε σ(uε(t,x))W˙(t,x),(t,x)∈[0,T]×[0,1],
with Dirichlet’s boundary conditions uε(t,0)=uε(t,1)=0 for t∈[0,T], and the initial condition uε(0,x)=u0(x) for x∈[0,1]. We assume that u0 is continuous on [0,1] and σ is bounded and globally Lipschitz on R. The driving noise W is a space-time Brownian sheet defined on some filtered probability space (Ω,F,(Ft)t∈[0,T],P).
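For readers who wish to experiment with (1) numerically, the following is a minimal explicit finite-difference sketch, not part of the paper: the grid sizes, time horizon, the choice σ(u)=cos(u) and the initial condition sin(πx) are all illustrative assumptions.

```python
import numpy as np

def simulate_burgers(eps=0.01, n_x=64, n_t=20000, T=0.1, sigma=np.cos, seed=0):
    """Naive explicit Euler scheme for du = u_xx + (1/2)(u^2)_x + sqrt(eps)*sigma(u) dW
    with Dirichlet boundary conditions u(t,0) = u(t,1) = 0 (illustrative only)."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n_x, T / n_t
    assert dt <= 0.25 * dx**2, "explicit scheme needs dt = O(dx^2) for stability"
    x = np.linspace(0.0, 1.0, n_x + 1)
    u = np.sin(np.pi * x)              # continuous initial condition with u0(0)=u0(1)=0
    for _ in range(n_t):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        grad_sq = np.zeros_like(u)
        grad_sq[1:-1] = (u[2:]**2 - u[:-2]**2) / (4.0 * dx)   # (1/2) d/dx (u^2), centered
        noise = rng.standard_normal(u.shape) / np.sqrt(dt * dx)  # space-time white noise
        u = u + dt * (lap + grad_sq + np.sqrt(eps) * sigma(u) * noise)
        u[0] = u[-1] = 0.0             # enforce the Dirichlet boundary conditions
    return x, u
```

The stability constraint dt = O(dx²) comes from the explicit treatment of the Laplacian; an implicit scheme would relax it but obscure the structure of (1).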
The deterministic Burgers equation was introduced in [7] as a simplified mathematical model describing turbulence phenomena in fluids. Its stochastic version has been the subject of several works; see for instance [1, 17, 22] and the references therein. In particular, a large deviation principle is established in [23] for an “additive version” of (1), and in [8] and [14] for a class of Burgers-type stochastic partial differential equations (SPDEs for short) including (1). Generally speaking, large deviations theory deals with determining how fast the probabilities P(Aε) of a family of rare events (Aε) decay to 0 as ε tends to 0, and with computing the precise rate of decay as a function of the rare events. A natural related question is to study moderate deviations results, which deal with probabilities of deviations of “smaller order” than in large deviations. We will make precise below the main difference between moderate and large deviations principles in the context of the stochastic Burgers equation; for a deeper description of these two kinds of deviations principles and their relationship, we refer the reader to [6].
Our first goal in this paper is to study the moderate deviations of uε from the deterministic solution u0 of the equation (4) below. More precisely, we deal with the deviations of the trajectory
u¯ε(t,x):=(uε(t,x)−u0(t,x))/a(ε),
where the deviation scale a:R+⟶R+ is such that
a(ε)⟶0 and h(ε):=a(ε)/√ε⟶∞, as ε⟶0,
and u0 stands for the solution of the following deterministic partial differential equation
∂u0/∂t(t,x)=∂2u0/∂x2(t,x)+(1/2)∂/∂x(u0(t,x))2,(t,x)∈[0,T]×[0,1],
with Dirichlet’s boundary conditions u0(t,0)=u0(t,1)=0 for t∈[0,T], and the initial condition u0(0,x)=u0(x).
The deviation scale a(ε) strongly influences the asymptotic behavior of u¯ε. In fact, for a given norm ‖·‖, bounds on the probabilities P(‖uε−u0‖/√ε∈·) are described by the central limit theorem, while the probabilities P(‖uε−u0‖∈·) are estimated by large deviations results. Furthermore, when we are interested in probabilities of the form P(‖uε−u0‖/a(ε)∈·) under the condition (3) (e.g. a(ε)=ε1/4), we are in the framework of the so-called moderate deviations, which fill the gap between the central limit theorem scale (a(ε)=√ε) and the large deviations scale (a(ε)=1). In this paper, we will establish the moderate deviations principle for (1). For the study of this topic for various kinds of stochastic processes, see e.g. [10, 12, 16, 21].
Furthermore, there are basically two approaches to analyzing moderate and large deviations for processes. The first, originally used by Freidlin and Wentzell [15] for diffusion processes, relies on discretization and localization arguments that allow one to deduce the large deviations principle for the solutions of the equations under study, via a general contraction principle, from Schilder-type theorems for the driving noises. The second one, which we are going to use in the present paper, is the so-called weak convergence approach. It was introduced in [13] and developed in [2, 4] and [5], and its starting point is the equivalence between the large deviations principle and the Laplace principle in the setting of Polish spaces. It consists in using certain variational formulas that can be viewed as minimal cost functions for associated stochastic optimal control problems. These minimal cost functions have a form to which the theory of weak convergence of probability measures can be applied. We refer to [13] for a more complete exposition of this approach.
We stress that, in the present paper, we mainly use the weak convergence approach to establish moderate deviations for stochastic Burgers equations, while in the previous works ([8, 23, 14]) the authors studied the large deviations principle for this equation. The main advantage of the weak convergence approach is that it allows one to avoid the technical exponential-type probability estimates usually needed in classical studies of the large deviations principle, and reduces the proofs to demonstrating qualitative properties such as existence, uniqueness and tightness of certain analogues of the original processes.
We also note that the greatest difficulty in studying any aspect of Burgers-type equations lies in their quadratic term. In fact, most of the techniques usually used to deal with stochastic differential equations with Lipschitz drift coefficients no longer work in general, and one resorts to localization or tightness arguments to circumvent this difficulty.
As pointed out before, we will prove a moderate deviations principle for the stochastic Burgers equation (1), together with two first-step results toward a central limit theorem. It is worth bearing in mind that the main difficulty we encountered in establishing a central limit theorem is due to the quadratic term appearing in the Burgers equation, for which the classical conditions (namely, the Lipschitz condition on the drift coefficient, and the boundedness and differentiability of its derivative) are no longer satisfied.
The paper is organized as follows. Section 2 is devoted to some preliminaries. The framework of our moderate deviations result and its proof are given in Section 3. In Section 4, toward a central limit theorem for the stochastic Burgers equation, we prove the uniform boundedness and the convergence of uε to u0 in the space Lq(Ω;C([0,T];L2([0,1]))) for q⩾2. Furthermore, some technical results needed in our proofs are included in the Appendix.
In this paper all positive constants are denoted by c, and their values may change from line to line. Also, for ρ⩾1 and t∈[0,T], the usual norms on Lρ([0,1]) and Ht:=L2([0,t]×[0,1]) are respectively denoted by ‖·‖ρ and ‖·‖Ht.
Preliminaries
Let {W(t,x),t∈[0,T],x∈[0,1]} be a space-time Brownian sheet on a filtered probability space (Ω,F,Ft,P), that is, a zero-mean Gaussian field with covariance function given by
E(W(t,x)W(s,y))=(t∧s)(x∧y),s,t∈[0,T],x,y∈[0,1].
For each t∈[0,T], Ft is the completion of the σ-field generated by the family of random variables {W(s,x),0⩽s⩽t,x∈[0,1]}.
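The covariance identity above is easy to check by simulation: build the sheet as a double cumulative sum of independent Gaussian cell increments and compare empirical and theoretical covariances. The grid resolution, sample size, test points and tolerances below are illustrative choices.

```python
import numpy as np

# Monte Carlo check of E[W(t,x)W(s,y)] = (t ∧ s)(x ∧ y) for the Brownian sheet.
rng = np.random.default_rng(42)
n, samples = 20, 10_000
dt = dx = 1.0 / n
# independent N(0, dt*dx) increments on the cells of [0,1]^2
incr = rng.standard_normal((samples, n, n)) * np.sqrt(dt * dx)
W = incr.cumsum(axis=1).cumsum(axis=2)     # W[k, i, j] ≈ W((i+1)dt, (j+1)dx)

def emp_cov(i1, j1, i2, j2):
    return float(np.mean(W[:, i1, j1] * W[:, i2, j2]))

# points (t,x) = (0.5, 0.5) and (s,y) = (1.0, 0.75): theory gives 0.5 * 0.5 = 0.25
assert abs(emp_cov(9, 9, 19, 14) - 0.25) < 0.04
# variance at (1,1): theory gives 1
assert abs(emp_cov(19, 19, 19, 19) - 1.0) < 0.08
```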
A rigorous meaning to the solution of (1) is given by a jointly measurable and Ft-adapted process uε:={uε(t,x);(t,x)∈[0,T]×[0,1]} satisfying, for almost all ω∈Ω and all t∈[0,T] the following evolution equation:
uε(t,x)=∫01Gt(x,y)u0(y)dy−∫0t∫01∂yGt−s(x,y)(uε(s,y))2dyds+√ε∫0t∫01Gt−s(x,y)σ(uε(s,y))W(ds,dy),
for dx-almost all x∈[0,1], where Gt(·,·) denotes the Green kernel corresponding to the operator ∂/∂t−Δ with the Dirichlet boundary conditions. The stochastic integral in (5) is understood in the Walsh sense; see [25].
By Theorem 2.1 in [17], there exists a unique L2[0,1]-valued continuous stochastic process {uε(t,.),t∈[0,T]} satisfying the equation (5).
The deterministic equation (4) obtained when the parameter ε tends to zero can be written in the following integral form
u0(t,x)=∫01Gt(x,y)u0(y)dy−∫0t∫01∂yGt−s(x,y)(u0(s,y))2dyds.
Since (6) corresponds to σ≡0 in the degenerate case studied in [17], it admits a unique solution u0 belonging to C([0,T];L2([0,1])). Moreover, the continuity of u0 on the compact set [0,T] implies that
supt∈[0,T]‖u0(t,·)‖2q<∞,
for all q⩾2.
We now recall some estimates of the Green kernel function G, as stated in [17] and [22], that will be used in the sequel.
Let G denote the Green kernel corresponding to the operator ∂/∂t−Δ with the Dirichlet boundary conditions. Then, we have
for any t∈[0,T] and y∈[0,1]: ∫01Gt(x,y)dx=1;
for any t∈[0,T] and 1/2<β<3/2: ∫0t∫01|∂xGt−s(x,y)|βdxds⩽cβ,T, where cβ,T is a constant depending only on T and β.
Moreover, there exists a constant c, depending only on T, such that for all y,z∈[0,1] and t,t′∈[0,T] such that 0⩽t⩽t′⩽1,
According to Varadhan [24] and [3], a crucial step toward the large deviations principle is the Laplace principle. Therefore, we will focus later on establishing this principle, which we formulate in the following definition.
(Laplace principle).
A family of random variables {Xε;ε>0} taking values in a Polish space E is said to satisfy the Laplace principle with speed λ2(ε) and rate function I:E⟶[0,∞] if, for any bounded continuous function F:E→R, we have
limε→0 λ2(ε)logE(exp[−F(Xε)/λ2(ε)])=−inff∈E{F(f)+I(f)},
where E is the expectation with respect to P.
In the context of the weak convergence approach, the proof of the Laplace principle for functionals of the Brownian sheet is essentially based on the following variational representation formula, which was originally proved in [4].
Let f:C([0,T]×[0,1];R)⟶R be a bounded measurable mapping, and let P2 be the class of all predictable processes u such that ‖u‖HT<∞ a.s. Then −logE exp{−f(B)}=infu∈P2 E((1/2)‖u‖HT2+f(Bu)), where Bu(t,x):=B(t,x)+∫0t∫0xu(s,y)dyds, for any (t,x)∈[0,T]×[0,1].
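The variational representation can be sanity-checked in the simplest possible case: replace the sheet B by a single standard Gaussian Z and take the linear functional f(z)=cz, for which deterministic shifts attain the infimum and both sides equal −c2/2. The Monte Carlo size, the value of c and the restriction to constant controls are illustrative assumptions.

```python
import numpy as np

# One-dimensional sanity check of the variational representation
#   -log E exp(-f(Z)) = inf_u E[ u^2/2 + f(Z + u) ],
# for f(z) = c*z with Z ~ N(0,1); deterministic shifts u suffice for linear f.
rng = np.random.default_rng(1)
c = 1.0
Z = rng.standard_normal(2_000_000)
lhs = -np.log(np.mean(np.exp(-c * Z)))      # exact value: -c**2/2 = -0.5

us = np.linspace(-3.0, 3.0, 601)
# for a deterministic control u: E[u^2/2 + c*(Z + u)] = u^2/2 + c*u, since E Z = 0
rhs = float(np.min(0.5 * us**2 + c * us))   # minimized at u = -c, value -c**2/2

assert abs(lhs - (-0.5)) < 0.01
assert abs(rhs - (-0.5)) < 1e-6
```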
Sufficient conditions for the general Laplace principle
Here, we briefly describe the result needed, in our context, for proving the Laplace principle, and state our main result.
Let us first introduce some notations. For ε>0, denote by Gε:E0×C([0,T]×[0,1];R)→E a measurable map, where E0 stands for a compact subspace of E in which the initial condition u0 takes values, and let
Xε,u0:=Gε(u0,h(ε)W).
Later, we will state sufficient conditions for the Laplace principle for Xε,u0 to hold uniformly in u0 for compact subsets of E0.
For any positive integer N, we introduce
SN:={ϕ∈HT:‖ϕ‖HT2⩽N}
and
P2N:={v(ω)∈P2:v(ω)∈SN,P-a.s.}.
It is worth noticing that the space SN is a compact metric space equipped with the weak topology on L2([0,T]×[0,1]) and that P2N is the space of controls, which plays a central role in the weak convergence approach.
For u∈HT, define the element I(u) in C([0,T]×[0,1];R) by
I(u)(t,x):=∫0t∫0xu(s,y)dsdy,t∈[0,T],x∈[0,1].
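Numerically, I(u) is just a double cumulative Riemann sum; the sketch below checks it on u≡1, where I(u)(t,x)=tx. The grid size is an illustrative choice.

```python
import numpy as np

# Discrete version of I(u)(t,x) = \int_0^t \int_0^x u(s,y) dy ds via cumulative sums.
def I_op(u_grid, dt, dx):
    return (u_grid * dt * dx).cumsum(axis=0).cumsum(axis=1)

n = 200
dt = dx = 1.0 / n
u = np.ones((n, n))            # u ≡ 1, so I(u)(t,x) = t * x
Iu = I_op(u, dt, dx)
assert abs(Iu[-1, -1] - 1.0) < 1e-9     # I(u)(1,1) = 1
assert abs(Iu[99, 99] - 0.25) < 1e-9    # I(u)(0.5,0.5) = 0.25
```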
We are now in a position to introduce the following result, due to Budhiraja et al. [5], which gives sufficient conditions for the Laplace principle to hold. (Theorem 7 in [5]).
Assume that there exists a measurable G0:E0×C([0,T]×[0,1];R)→E such that the following hold:
For any integer M>0, any family {vε;ε>0}⊂P2M and {u0ε}⊂E0 such that vε→v and u0ε→u0 in distribution (as SM-valued random elements) as ε→0, we have Gε(u0ε,W+h(ε)I(vε))→G0(u0,I(v)), in distribution as ε→0;
For any integer M>0 and compact set K⊂E0, the set ΓM,K:={G0(u0,I(u));u∈SM,u0∈K} is a compact subset of E.
Then, the family {Xε,u0;ε>0} defined by (9) satisfies the Laplace principle on E with speed λ2(ε) and rate function Iu0 given, for any h∈E and u0∈E0, by Iu0(h):=inf{v∈HT:h=G0(u0,I(v))}{(1/2)‖v‖HT2}, where the infimum over an empty set is taken to be ∞.
Controlled processes for SPDEs (1)
In this subsection, we adapt the general scheme described above to study moderate deviations for the equation (1).
We denote by E=E0:=C([0,T];L2([0,1])) the space of solutions of (1). As we are interested in proving the Laplace principle for u¯ε(t,x) defined by (2), we interpret u¯ε as a functional of the Brownian sheet W. Indeed, using (5) and (6) we deduce that u¯ε(t,x) satisfies for all ω∈Ω and all t∈[0,T] the following equation
u¯ε(t,x)=(1/h(ε))∫0t∫01Gt−s(x,y)σ(u0(s,y)+√ε h(ε)u¯ε(s,y))W(ds,dy)−√ε h(ε)∫0t∫01∂yGt−s(x,y)[u¯ε(s,y)]2dyds−2∫0t∫01∂yGt−s(x,y)u¯ε(s,y)u0(s,y)dyds,
for dx-almost all x∈[0,1].
This implies (see Theorem IV.9.1. of [19]) the existence of a measurable mapping
Gε:C([0,1];R)×C([0,T]×[0,1];R)→C([0,T];L2([0,1])),
such that
u¯ε=Gε(u0,W).
As a first step toward the conditions (A1) and (A2) stated in Proposition 3.1, we define for vε∈P2N,
u¯ε,vε:=Gε(u0,W+h(ε)I(vε)).
In Proposition 3.2 below we will establish that the map u¯ε,vε is the unique solution of the following stochastic controlled analogue of equation (11)
u¯ε,vε(t,x)=(1/h(ε))∫0t∫01Gt−s(x,y)σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))W(ds,dy)−√ε h(ε)∫0t∫01∂yGt−s(x,y)[u¯ε,vε(s,y)]2dyds−2∫0t∫01∂yGt−s(x,y)u¯ε,vε(s,y)u0(s,y)dyds+∫0t∫01Gt−s(x,y)σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))vε(s,y)dyds.
We will call it the controlled process. Moreover, for any v∈SN, we associate to (13) the following skeleton zero-noise equation:
u¯v(t,x)=−2∫0t∫01∂yGt−s(x,y)u¯v(s,y)u0(s,y)dyds+∫0t∫01Gt−s(x,y)σ(u0(s,y))v(s,y)dyds.
Existence and uniqueness of the solution u¯v for (14) is obtained in Proposition 3.3 below, and thereby, we define the map
G0(u0,I(v)):=u¯v.
With these notations in mind, the main result of this section is stated in the following
Assume that u0 is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then the family of processes {u¯ε;ε>0} satisfies an LDP on the space C([0,T];L2([0,1])) with speed λ2(ε) and rate function given by I(f)=inf{(1/2)‖v‖HT2: v∈HT, G0(u0,I(v))=f}.
Note that the conclusion of Theorem 3.3 is still valid for a quite large class of SPDEs containing stochastic Burgers equation. Namely, consider the following class of SPDEs introduced by Gyöngy in [17]:
∂uε/∂t(t,x)=∂2uε/∂x2(t,x)+∂/∂x g(uε(t,x))+f(uε(t,x))+√ε σ(uε(t,x))W˙(t,x),(t,x)∈[0,T]×[0,1],
with Dirichlet’s boundary conditions uε(t,0)=uε(t,1)=0 for t∈[0,T], and the initial condition uε(0,x)=u0(x) for x∈[0,1]. Suitable conditions on the coefficients f, g and σ, for instance, the quadratic growth assumption on the nonlinear coefficient g, bring us back to the case of stochastic Burgers equation that we have considered in our paper. Notice here that papers closest to ours are two recent works by S. Hu, R. Li and X. Wang [18] and R. Zhang and J. Xiong [28]. In particular, these authors established a moderate deviations principle for the class (17). We learned about these works after we finished the first version of this paper.
Proof of the main result
We basically follow the same idea as in [5] and [23]. According to Proposition 3.1, it suffices to check that the conditions (A1) and (A2) are fulfilled. For (A1), we will establish well-posedness, tightness and convergence of controlled processes. The condition (A2), which gives that I is a rate function, will follow from the continuity of the map G0 with respect to the weak topology.
The proof of (A1) will be done in several steps.
Step 1: Existence and uniqueness of controlled and limiting processes.
Assume that σ is bounded and globally Lipschitz, and that (3) holds. Then, the L2([0,1])-valued process {u¯ε,vε(t),t∈[0,T]} defined by (12) is the unique solution of the equation (13).
For vε∈P2N, set
dQε,vε:=exp{−h(ε)∫0T∫01vε(s,y)W(ds,dy)−(1/2)h2(ε)∫0T∫01vε(s,y)2dyds}dP.
Since Qε,vε is defined through an exponential martingale, it is a probability measure on Ω. Thus, by the Girsanov theorem, the process W˜ defined by
W˜(t,x)=W(t,x)+h(ε)∫0t∫0xvε(s,y)dyds
is a space-time white noise under the probability measure Qε,vε. Plugging W˜ into (13), we obtain (11) with W˜(dt,dx) instead of W(dt,dx). Now, if u denotes the unique solution of (11) driven by W˜(dt,dx) on the space (Ω,F,Qε,vε), then u satisfies (13), Qε,vε-a.s. Hence, since Qε,vε and P are equivalent, u satisfies (13), P-a.s.
For the uniqueness, if u1 and u2 are two solutions of (13) on (Ω,F,P), then u1 and u2 are solutions of (11) driven by W˜(dt,dx) on (Ω,F,Qε,vε). By the uniqueness of the solution of (11), we obtain u1=u2, Qε,vε-a.s., and thus u1=u2, P-a.s. by equivalence of the probabilities. □
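The measure change used in this proof can be illustrated in one dimension: with a constant control v and intensity h, the density exp(−hvW1−h2v2/2) turns W1+hv into a standard Gaussian under the new measure. The constants and sample size below are illustrative choices.

```python
import numpy as np

# One-dimensional Monte Carlo illustration of the Girsanov change of measure:
# with constant control v and intensity h, the density
#   dQ/dP = exp(-h*v*W_1 - h^2*v^2/2)
# makes W~_1 := W_1 + h*v a standard Gaussian under Q.
rng = np.random.default_rng(7)
h, v = 2.0, 0.5
W1 = rng.standard_normal(1_000_000)            # W_1 under P
density = np.exp(-h * v * W1 - 0.5 * (h * v) ** 2)
W_tilde = W1 + h * v

mean_Q = float(np.mean(density * W_tilde))     # E_Q[W~_1], should be ~ 0
var_Q = float(np.mean(density * W_tilde**2))   # E_Q[W~_1^2], should be ~ 1

assert abs(mean_Q) < 0.02
assert abs(var_Q - 1.0) < 0.05
```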
Assume that σ is bounded and globally Lipschitz. For any v∈SN, N∈N, the equation (14) admits a unique solution u¯v belonging to C([0,T];L2([0,1])). Moreover, for any q⩾2, supv∈SN sup0⩽t⩽T‖u¯v(t,·)‖2q<∞.
The proof follows from a standard fixed point argument, and for the convenience of the reader, we include it in the Appendix. □
Step 2: Tightness of the family (u¯ε,vε)ε>0 in C([0,T];L2([0,1])).
Let (vε)ε be a family of elements from P2N such that vε→v in distribution, as SN-valued random elements, when ε→0.
We have the following proposition.
Assume that u0 is continuous, σ is bounded and globally Lipschitz, and that (3) holds. Then (u¯ε,vε)ε is tight in C([0,T];L2([0,1])).
Recall that
u¯ε,vε(t,x)=(1/h(ε))∫0t∫01Gt−s(x,y)σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))W(ds,dy)−√ε h(ε)∫0t∫01∂yGt−s(x,y)[u¯ε,vε(s,y)]2dyds−2∫0t∫01∂yGt−s(x,y)u¯ε,vε(s,y)u0(s,y)dyds+∫0t∫01Gt−s(x,y)σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))vε(s,y)dyds=:∑i=14 Iiε,vε(t,x),
where Iiε,vε(t,x), i=1,2,3,4, stands for the ith summand of the RHS of the above equation.
In view of (19), in order to prove the claim of Proposition 3.4, we will state and prove the next two lemmas which give the tightness of each summand Iiε,vε, i=1,2,3,4. □
We first consider the cases where i=1 and i=4. Using Theorem 4.10 of Chapter 2 in [20], the following lemma states sufficient conditions for tightness.
Assume the same conditions as in Proposition 3.4. For i=1 or 4, we have
limζ⟶+∞ supε>0 P(|Iiε,vε(t,x)|>ζ)=0, for any (t,x)∈[0,T]×[0,1],
and for any ζ>0,
limδ⟶0 supε>0 P(sup|t−t′|+|x−y|⩽δ|Iiε,vε(t,x)−Iiε,vε(t′,y)|>ζ)=0.
In particular, the families (I1ε,vε)ε and (I4ε,vε)ε are tight in C([0,T];L2([0,1])).
Let x,y∈[0,1] and t,t′∈[0,T] such that t′⩽t. To prove (20) and (21), it is enough to exhibit upper bounds for the square moments of Iiε,vε(t,x) and Iiε,vε(t,x)−Iiε,vε(t′,y) for i=1 and i=4.
Using the Burkholder–Davis–Gundy inequality, the boundedness of σ, Lemma 2.1 and the condition (3) we infer that
E(|I1ε,vε(t,x)|2)⩽c h−2(ε) E∫0t∫01Gt−s2(x,y)σ2(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))dyds⩽c∫0t∫01Gt−s2(x,y)dyds,
which is finite. On the other hand, the same arguments as above yield
E(|I1ε,vε(t,x)−I1ε,vε(t′,y)|2)=h−2(ε) E{∫0t′∫01[Gt−s(x,z)−Gt′−s(y,z)]σ(u0(s,z)+√ε h(ε)u¯ε,vε(s,z))W(ds,dz)+∫t′t∫01Gt−s(y,z)σ(u0(s,z)+√ε h(ε)u¯ε,vε(s,z))W(ds,dz)}2⩽c{∫0t′∫01[Gt−s(x,z)−Gt′−s(y,z)]2dzds+∫t′t∫01Gt−s2(y,z)dzds}⩽c(|t−t′|1/2+|x−y|1/2).
Therefore, (20) and (21) hold by (22) and (23), respectively.
To deal with (I4ε,vε)ε, we use the Cauchy–Schwarz inequality and Lemma 2.1 to write
E(|I4ε,vε(t,x)|2)⩽cE(∫0t∫01|Gt−s(x,y)vε(s,y)|dyds)2⩽c‖vε‖HT2∫0t∫01Gt−s2(x,y)dyds⩽c(N),
where c(N) is a constant depending on N. Similarly,
E(|I4ε,vε(t,x)−I4ε,vε(t′,y)|2)=E{∫0t′∫01[Gt−s(x,z)−Gt′−s(y,z)]σ(uε,vε(s,z))vε(s,z)dzds+∫t′t∫01Gt−s(y,z)σ(uε,vε(s,z))vε(s,z)dzds}2⩽c{∫0t′∫01[Gt−s(x,z)−Gt′−s(y,z)]2dzds+∫t′t∫01Gt−s2(y,z)dzds}⩽c(|t−t′|1/2+|x−y|1/2).
Therefore, (20) and (21) hold by (24) and (25), respectively. □
For the tightness of (I2ε,vε)ε, we follow an idea introduced in [17] which is essentially based on Lemma 4.3 in the Appendix. More precisely, we state the following
Assume the same conditions as in Proposition 3.4. Then, the families (I2ε,vε)ε and (I3ε,vε)ε are uniformly tight in C([0,T];L2([0,1])).
The proof of the tightness of (I3ε,vε)ε will be omitted since it can be done similarly to that of (I2ε,vε)ε.
To show the tightness of (I2ε,vε)ε, we will apply Lemma 4.3 with q=1, ρ=2 and ζε(t,·):=√ε h(ε)(u¯ε,vε)2(t,·). Set
θε:=√ε h(ε)sup0⩽t⩽T‖(u¯ε,vε)2(t,·)‖1=√ε h(ε)sup0⩽t⩽T‖u¯ε,vε(t,·)‖22.
According to Lemma 4.3, it suffices to show that (θε)ε is bounded in probability, i.e.
limc⟶+∞ supε>0 P(θε⩾c)=0.
Taking into account the condition (3), there exists ε0>0 such that √ε h(ε)⩽1 for all ε⩽ε0. Consequently,
supε⩽ε0P(θε⩾c)=supε⩽ε0P(sup0⩽t⩽T‖u¯ε,vε(t,·)‖22⩾c/(√ε h(ε)))⩽supε⩽ε0P(sup0⩽t⩽T‖u¯ε,vε(t,·)‖22⩾c).
Then, to prove (26), it is enough to show that
limc⟶+∞supε⩽ε0P(sup0⩽t⩽T‖u¯ε,vε(t,·)‖2⩾c)=0.
For this purpose, returning to (19) we note that u¯ε,vε corresponds to the following SPDE
∂u¯ε,vε/∂t(t,x)=Δu¯ε,vε(t,x)+∂/∂x gε(t,x,u¯ε,vε(t,x))+fε(t,x,u¯ε,vε(t,x))+σε(t,x,u¯ε,vε(t,x))W˙(t,x),
where
gε(t,x,r):=−√ε h(ε)r2−2ru0(t,x), fε(t,x,r):=σ(u0(t,x)+√ε h(ε)r)vε(t,x) and σε(t,x,r):=(1/h(ε))σ(u0(t,x)+√ε h(ε)r).
According to Theorem 2.1 in [17], the continuity of the initial condition u0 implies the continuity of the solution u0 of the equation (4) on the compact set [0,T]×[0,1]. Consequently, u0 is bounded.
This fact, combined with the condition (3), allows us to write gε as a sum of two functions gε1 and gε2 satisfying, respectively, a quadratic and a linear growth condition, uniformly in ε⩽ε0 for some ε0>0.
Using again the condition (3) and the hypotheses on the function σ, we see that σε is bounded and globally Lipschitz, uniformly in ε⩽ε0.
Thus, the equation (28) belongs to the class of semi-linear SPDEs studied in [17], for which the existence and uniqueness of the solution u¯ε,vε is shown by an approximation procedure. This procedure consists in defining a sequence of truncated equations and establishing existence and convergence results for the corresponding sequence of solutions (u¯nε,vε)n; see [17, 14, 23]. In fact, in the course of the proof of Theorem 2.1 in [17] it was shown that
limc⟶∞sup0<ε⩽ε0P(sup0⩽t⩽T‖u¯nε,vε(t,·)‖2⩾c2)=0,
and that (u¯nε,vε)n converges in probability in C([0,T];L2([0,1])) to the solution u¯ε,vε of (19).
Now, observe that
sup0<ε⩽ε0P{(sup0⩽t⩽T‖u¯ε,vε(t,·)‖2)⩾c}⩽sup0<ε⩽ε0P(sup0⩽t⩽T‖u¯ε,vε(t,·)−u¯nε,vε(t,·)‖2⩾c2)+sup0<ε⩽ε0P(sup0⩽t⩽T‖u¯nε,vε(t,·)‖2⩾c2).
Then, as c tends to infinity, the estimate (29) yields
limc⟶+∞sup0<ε⩽ε0P{(sup0⩽t⩽T‖u¯ε,vε(t,·)‖2)⩾c}⩽limc⟶+∞sup0<ε⩽ε0P(sup0⩽t⩽T‖u¯ε,vε(t,·)−u¯nε,vε(t,·)‖2⩾c2).
By letting n tend to infinity and using the convergence in probability of u¯nε,vε to u¯ε,vε we get
limc⟶+∞sup0<ε⩽ε0P{(sup0⩽t⩽T‖u¯ε,vε(t,·)‖2)⩾c}=0.
Hence, by applying Lemma 4.3 we obtain the tightness property for (I2ε,vε)ε.
Step 3: Convergence to the limit equation.
Having shown the tightness of each Iiε,vε for i=1,2,3,4, by Prohorov’s theorem, we can extract a subsequence, that we continue to denote by ε, and along which each of these processes and u¯ε,vε converge in distribution (as SN-valued random elements) in C([0,T];L2([0,1])) to limits denoted respectively by Ii0,v for i=1,2,3,4, and u¯0,v. We will show that
I10,v=0, I20,v=0, I30,v=−2∫0t∫01∂yGt−s(x,y)u¯0,v(s,y)u0(s,y)dyds, I40,v=∫0t∫01Gt−s(x,y)σ(u0(s,y))v(s,y)dyds,
and the proof will be completed by the uniqueness result given in Proposition 3.3.
For i=1, Lemma 3 in [5] ensures the convergence of (I1ε,vε)ε to 0 in probability in C([0,T]×[0,1]). Since convergence in probability in C([0,T]×[0,1]) implies convergence in probability in C([0,T];L2([0,1])), (I1ε,vε)ε converges to 0 in probability in C([0,T];L2([0,1])) as well.
To handle the convergence of each of the other terms, we invoke the Skorohod representation theorem and assume the almost sure convergence on a larger common probability space.
For i=2, applying Lemma 4.1 with ρ=2 and λ=1, we deduce that there exists a constant c>0 such that
‖I2ε,vε(t,·)‖2⩽c√ε h(ε)∫0t(t−s)−3/4‖u¯ε,vε(s,·)‖22ds.
Since (u¯ε,vε)ε converges a.s. in C([0,T];L2([0,1])) to u¯0,v, there exists ε0>0 small enough such that
supε∈]0,ε0]sups∈[0,T]‖u¯ε,vε(s,·)‖2<∞,a.s.
Further, there exists a constant c>0 such that for all 0<ε⩽ε0, supt∈[0,T]‖I2ε,vε(t,·)‖2⩽c√ε h(ε), a.s.
Thus, (I2ε,vε)ε converges a.s. to 0 in C([0,T];L2([0,1])) as ε tends to 0.
For i=3, let I˜30,v denote the RHS term of I30,v. Applying again Lemma 4.1 and the Cauchy–Schwarz inequality, we conclude that there exists a constant c>0 such that
‖I3ε,vε(t,·)−I˜30,v(t,·)‖2⩽c∫0t(t−s)−3/4‖(u¯ε,vε(s,·)−u¯0,v(s,·))u0(s,·)‖1ds⩽c∫0t(t−s)−3/4‖u¯ε,vε(s,·)−u¯0,v(s,·)‖2‖u0(s,·)‖2ds.
Using the estimate (7) and the boundedness of u¯ε,vε and u¯0,v in C([0,T];L2([0,1])), we get
‖I3ε,vε(t,·)−I˜30,v(t,·)‖2⩽c sups∈[0,T]‖u¯ε,vε(s,·)−u¯0,v(s,·)‖2 sups∈[0,T]‖u0(s,·)‖2∫0t(t−s)−3/4ds⩽c sups∈[0,T]‖u¯ε,vε(s,·)−u¯0,v(s,·)‖2.
Again, since (u¯ε,vε)ε converges a.s. in C([0,T];L2([0,1])) to u¯0,v, we obtain the a.s. convergence of I3ε,vε to I˜30,v in C([0,T];L2([0,1])). And by the uniqueness of the limit and the continuity of I˜30,v, we conclude that I30,v=I˜30,v.
Concerning i=4, let I˜40,v denote the RHS term of I40,v. We have
I4ε,vε(t,·)−I˜40,v(t,·)=∫0t∫01Gt−s(x,y)[σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))vε(s,y)−σ(u0(s,y))v(s,y)]dyds=∫0t∫01Gt−s(x,y)[σ(u0(s,y)+√ε h(ε)u¯ε,vε(s,y))−σ(u0(s,y))]vε(s,y)dyds+∫0t∫01Gt−s(x,y)[vε(s,y)−v(s,y)]σ(u0(s,y))dyds=:J4,1ε(t,x)+J4,2ε(t,x).
Then,
‖I4ε,vε(t,·)−I˜40,v(t,·)‖2⩽‖J4,1ε(t,·)‖2+‖J4,2ε(t,·)‖2.
For J4,1ε, we use Lemma 4.1, the Lipschitz condition on σ and the Cauchy–Schwarz inequality to obtain
‖J4,1ε(t,·)‖2⩽c∫0t(t−s)−3/4‖(σ(u0(s,·)+√ε h(ε)u¯ε,vε(s,·))−σ(u0(s,·)))vε(s,·)‖1ds⩽c√ε h(ε)∫0t(t−s)−3/4‖u¯ε,vε(s,·)‖2‖vε(s,·)‖2ds.
Since (vε)⊂P2N, the estimate (30) implies that there exists a constant c depending on N such that for all 0<ε⩽ε0, supt∈[0,T]‖J4,1ε(t,·)‖2⩽c√ε h(ε), a.s.
Therefore, J4,1ε converges to 0 in C([0,T];L2[0,1]) as ε goes to 0.
The proof of the convergence of J4,2ε to 0 in C([0,T];L2[0,1]) as ε goes to 0 will be omitted since it can be treated similarly to the case of the family {Kn,n⩾1} defined below by (35).
Consequently, I4ε,vε converges to I˜40,v in C([0,T];L2([0,1])), and by the uniqueness of the limit and the continuity of I˜40,v, we conclude that I40,v=I˜40,v.
Thus, by the convergence of both the process (u¯ε,vε)ε and each term Iiε,vε for i=1,2,3,4 along a subsequence, and by the uniqueness of the solution of the equation (14), we conclude that the condition (A1) in Proposition 3.1 holds.
Now, let us prove the condition (A2). As it was mentioned before, it suffices to check the continuity of the map G0:E0×SN⟶C([0,T];L2([0,1])) with respect to the weak topology. Let v, (vn)⊂SN such that for any g∈HT,
limn⟶+∞⟨v−vn,g⟩HT=0.
We claim that
limn⟶+∞ supt∈[0,T]‖u¯vn(t)−u¯v(t)‖2=0.
Let (t,x)∈[0,T]×[0,1]. The equation (14) implies
u¯vn(t,x)−u¯v(t,x)=−2∫0t∫01∂yGt−s(x,y)u0(s,y)(u¯vn(s,y)−u¯v(s,y))dyds+∫0t∫01Gt−s(x,y)σ(u0(s,y))(vn(s,y)−v(s,y))dyds.
Hence,
‖u¯vn(t,·)−u¯v(t,·)‖2⩽c{‖∫0t∫01∂yGt−s(·,y)u0(s,y)(u¯vn(s,y)−u¯v(s,y))dyds‖2+‖∫0t∫01Gt−s(·,y)σ(u0(s,y))(vn(s,y)−v(s,y))dyds‖2}.
On one hand, using Lemma 4.1, the Cauchy–Schwarz inequality and the estimate (7), we get
‖∫0t∫01∂yGt−s(·,y)u0(s,y)(u¯vn(s,y)−u¯v(s,y))dyds‖2⩽c∫0t(t−s)−3/4‖u0(s,·)(u¯vn(s,·)−u¯v(s,·))‖1ds⩽c∫0t(t−s)−3/4‖u0(s,·)‖2‖u¯vn(s,·)−u¯v(s,·)‖2ds⩽c sups∈[0,T]‖u0(s,·)‖2∫0t(t−s)−3/4‖u¯vn(s,·)−u¯v(s,·)‖2ds⩽c∫0t(t−s)−3/4‖u¯vn(s,·)−u¯v(s,·)‖2ds.
On the other hand, in order to handle the second term in the right hand side of (33), we define, for any (t,x)∈[0,T]×[0,1], the sequence
Kn(t,x):=∫0t∫01Gt−s(x,y)σ(u0(s,y))(vn(s,y)−v(s,y))dyds,
whose properties are given in Lemma 4.4 in the Appendix. Then, summing up (33)–(34), we obtain for any 0⩽t⩽T,
‖u¯vn(t,·)−u¯v(t,·)‖2⩽c‖Kn(t,·)‖2+c∫0t(t−s)−3/4‖u¯vn(s,·)−u¯v(s,·)‖2ds.
Applying Gronwall’s lemma, we get the estimate
supt∈[0,T]‖u¯vn(t,·)−u¯v(t,·)‖2⩽csupt∈[0,T]‖Kn(t,·)‖2,
which, together with (63), implies the claim (31), and hence the condition (A2) holds.
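The Gronwall step with the weakly singular kernel (t−s)−3/4 can be illustrated numerically: solving the integral equation u(t)=K+c∫0t(t−s)−3/4u(s)ds by product integration shows that sup u scales linearly in the forcing K, mirroring the estimate (36). The grid, horizon and constant c below are illustrative choices.

```python
import numpy as np

# Solve u(t) = K + c * \int_0^t (t - s)^(-3/4) u(s) ds by product integration:
# the kernel is integrated exactly on each subinterval, and u is frozen at the
# left endpoint. A generalized Gronwall lemma yields sup u <= C(c, T) * K.
def solve(K, c=0.2, n=400, T=1.0):
    t = np.linspace(0.0, T, n + 1)
    u = np.zeros(n + 1)
    u[0] = K
    for i in range(1, n + 1):
        # w[j] = \int_{t_j}^{t_{j+1}} (t_i - s)^(-3/4) ds, in closed form
        w = 4.0 * ((t[i] - t[:i]) ** 0.25 - (t[i] - t[1:i + 1]) ** 0.25)
        u[i] = K + c * np.dot(w, u[:i])
    return u.max()

m1, m2 = solve(K=1.0), solve(K=5.0)
assert m1 >= 1.0 and np.isfinite(m1)
assert abs(m2 - 5.0 * m1) < 1e-8 * m1    # the bound is linear in the forcing K
```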
Finally, the proof of Theorem 3.3 is completed since conditions of Proposition 3.1 are fulfilled. □
Toward a central limit theorem
Many central limit theorems have recently been established for various kinds of parabolic SPDEs under strong assumptions on the drift coefficient. More specifically, under a linear growth condition, differentiability and a global Lipschitz condition on both the drift coefficient and its derivative, central limit theorems have been established in [26, 27]. Since these conditions are not all fulfilled for the stochastic Burgers equation, it is not surprising that the classical tools do not apply to establish a central limit theorem. Nevertheless, we will prove in this section two first-step results toward a central limit theorem: the uniform boundedness and the convergence of uε to u0 in Lq(Ω;C([0,T];L2([0,1]))) for q⩾2. We hope that our current estimates could be helpful for future works in this direction.
We begin with the following result.
Assume that σ is bounded and globally Lipschitz. Then, for all q⩾2, we have supε∈]0,1]E(supt∈[0,T]‖uε(t,·)‖2q)<∞.
We will use arguments similar to those in Cardon-Weber and Millet [9] and Gyöngy [17]. For 0<ε⩽1, set
ηε(t,x):=√ε∫0t∫01Gt−s(x,y)σ(uε(s,y))W(ds,dy),
and
ϑε(t,x):=uε(t,x)−ηε(t,x)=∫01Gt(x,y)u0(y)dy−∫0t∫01∂yGt−s(x,y)(uε(s,y))2dyds=∫01Gt(x,y)u0(y)dy−∫0t∫01∂yGt−s(x,y)(ϑε(s,y)+ηε(s,y))2dyds.
Then, ϑε is a solution of the equation
∂ϑε/∂t(t,x)=Δϑε(t,x)+∂/∂x(ϑε(t,x)+ηε(t,x))2,(t,x)∈[0,T]×[0,1],
with Dirichlet’s boundary conditions and initial condition ϑε(0,x)=u0(x).
Since σ∘uε is bounded uniformly in ε, arguing as in the proof of Theorem 2.1 in [17], page 286, by the Garsia–Rodemich–Rumsey lemma, one can deduce that
supεE(sup0⩽t⩽Tsup0⩽x⩽1|η˜ε(t,x)|q)<∞,
where η˜ε(t,x):=(1/√ε)ηε(t,x). Consequently, there exists a constant C(q) depending only on q such that
E(sup0⩽t⩽Tsup0⩽x⩽1|ηε(t,x)|q)⩽C(q)εq/2.
In particular, the random variable η¯ε:=sup0⩽t⩽Tsup0⩽x⩽1|ηε(t,x)| is well defined a.s.
Moreover, using the SPDE (39) satisfied by ϑε and following the same arguments as in the proof of Theorem 2.1 in [17], we deduce the existence of a constant c independent of ε and ω (see [17] pages 286–289) such that
sup0⩽t⩽T‖ϑε(t,·)‖22⩽‖u0‖22+cT(1+η¯ε4)e(cT(1+η¯ε2)).
Consequently, for any q⩾2,
sup0⩽t⩽T‖uε(t,·)‖2q=sup0⩽t⩽T‖ϑε(t,·)+ηε(t,·)‖2q⩽2q−1(sup0⩽t⩽T‖ϑε(t,·)‖2q+sup0⩽t⩽T‖ηε(t,·)‖2q)⩽2q−1(‖u0‖2q+cT(1+η¯ε2q)e(cT(1+η¯ε2))+sup0⩽t⩽T(∫01|ηε(t,x)|2dx)q/2)⩽2q−1(‖u0‖2q+cT(1+η¯ε2q)e(cT(1+η¯ε2))+η¯εq)⩽c(‖u0‖2q+cT(1+η¯ε2q)e(cT(1+η¯ε2))).
Hence, to prove (38) it suffices to show that
supε∈]0,1]E((1+η¯ε2q)ecT(1+η¯ε2)) is finite.
For this purpose, note first that
sup0⩽s⩽Tsup0⩽x⩽1|√ε σ(uε(s,x))|⩽√ε‖σ‖∞, where ‖σ‖∞:=supx∈R|σ(x)|.
Thus, by Lemma 4.2, there exist two positive constants C1 and C2, independent of ε, such that for any M⩾C1‖σ‖∞,
P(η¯ε⩾M)⩽C1‖σ‖∞exp(−M2/(εC2(1+T1/8))).
Setting φ(x):=(1+x2q)ecT(1+x2), which is a positive, continuous and increasing function on [0,+∞[, we get for any A⩾C1‖σ‖∞,
E(φ(η¯ε))=∫0+∞P(φ(η¯ε)>x)dx=∫0AP(η¯ε>x)φ′(x)dx+∫A+∞P(η¯ε>x)φ′(x)dx⩽φ(A)+cC1‖σ‖∞∫A+∞(1+x2q+1)exp(cTx2−x2/(εC2(1+T1/8)))dx⩽φ(A)+cC1‖σ‖∞∫A+∞(1+x2q+1)exp(cTx2−x2/(C2(1+T1/8)))dx,
where the last integral is finite provided that cTC2(1+T1/8)<1. This implies that there exists T0>0, independent of u0 and ε, such that (42) holds for 0<T⩽T0. Using (41), and iterating the procedure finitely many times, we conclude the proof. □
Now, we can state the following proposition.
Assume that σ is bounded and globally Lipschitz. Then, for all q⩾2, we have limε⟶0E(supt∈[0,T]‖uε(t,·)−u0(t,·)‖2q)=0.
We will use a localization argument. For 0⩽t⩽T, ε∈]0,1] and M>0, set
ΩεM(t):={ω∈Ω:sups∈[0,t]‖uε(s)‖2∨sups∈[0,t]‖u0(s)‖2⩽M}.
We have
uε(t,x)−u0(t,x)=√ε∫0t∫01Gt−s(x,y)σ(uε(s,y))W(ds,dy)−∫0t∫01∂yGt−s(x,y)((uε(s,y))2−(u0(s,y))2)dyds=:ηε(t,x)+Iε(t,x).
Then, for any q⩾2,
‖uε(t,·)−u0(t,·)‖2q⩽2q−1(‖ηε(t,·)‖2q+‖Iε(t,·)‖2q).
For ηε(t,·), by the Hölder inequality we have
E(sup0⩽s⩽t‖ηε(s,·)‖2q)⩽E(sup0⩽s⩽t∫01|ηε(s,x)|qdx)⩽∫01E(sup0⩽s⩽t|ηε(s,x)|q)dx⩽E(sup0⩽x⩽1sup0⩽s⩽t|ηε(s,x)|q)⩽C(q)εq/2,
where the last inequality follows from (40).
For Iε(t,·), according to Lemma 4.1 in the Appendix with ρ=2 and λ=1, we have
‖I_ε(t,·)‖_2 ⩽ c ∫_0^t (t−s)^{−3/4} ‖(u^ε(s,·) − u^0(s,·))(u^ε(s,·) + u^0(s,·))‖_1 ds,
and using the following form of Hölder’s inequality
|∫_0^t f(s) g(s) ds|^q ⩽ (∫_0^t |f(s)| ds)^{q−1} ∫_0^t |f(s)| |g(s)|^q ds,
with f(s) := (t−s)^{−3/4} and g(s) := ‖(u^ε(s,·) − u^0(s,·))(u^ε(s,·) + u^0(s,·))‖_1, we get
‖I_ε(t,·)‖_2^q ⩽ c ∫_0^t (t−s)^{−3/4} ‖(u^ε(s,·) − u^0(s,·))(u^ε(s,·) + u^0(s,·))‖_1^q ds.
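This form of Hölder's inequality, which follows from Jensen's inequality applied to the probability measure |f(s)|ds/∫_0^t|f|ds, can be sanity-checked numerically; the functions f and g below are arbitrary illustrative choices (the singular kernel mimics (t−s)^{−3/4}):

```python
import numpy as np

# Numerical check of  |∫_0^t f g ds|^q <= (∫_0^t |f| ds)^(q-1) * ∫_0^t |f| |g|^q ds
# using midpoint Riemann sums; the discrete analogue is itself an exact
# consequence of Jensen's inequality, so the assertion holds for any choice
# of f and g.
t, n, q = 1.0, 100_000, 3
s = np.linspace(0.0, t, n, endpoint=False) + t / (2 * n)  # midpoints, avoids s = t
ds = t / n

f = (t - s) ** (-0.75)        # singular kernel, as in the proof above
g = np.sin(5.0 * s) + 0.5     # an arbitrary bounded function standing in for the norm

lhs = abs(np.sum(f * g) * ds) ** q
rhs = (np.sum(np.abs(f)) * ds) ** (q - 1) * np.sum(np.abs(f) * np.abs(g) ** q) * ds
assert lhs <= rhs
```

The check passes for any discretization because the discrete inequality |Σ w_i g_i|^q ⩽ Σ w_i |g_i|^q (with weights w_i = |f_i|/Σ|f_j|) is exact, not an artifact of fine resolution.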
Now, taking the supremum up to time t ∈ [0,T], and setting Φ(s) := ‖(u^ε(s,·) − u^0(s,·))(u^ε(s,·) + u^0(s,·))‖_1^q and Ψ(s) := sup_{0⩽r⩽s} Φ(r), (49) implies
sup_{0⩽s⩽t} ‖I_ε(s,·)‖_2^q ⩽ c sup_{0⩽s⩽t} ∫_0^s (s−r)^{−3/4} Φ(r) dr ⩽ c sup_{0⩽s⩽t} ∫_0^s (s−r)^{−3/4} sup_{0⩽r′⩽r} Φ(r′) dr ⩽ c sup_{0⩽s⩽t} ∫_0^s (s−r)^{−3/4} Ψ(r) dr = c sup_{0⩽s⩽t} ∫_0^s r^{−3/4} Ψ(s−r) dr.
Since
Ψ(s−r) = sup_{0⩽r′⩽s−r} Φ(r′) ⩽ sup_{0⩽r′⩽t−r} Φ(r′) = Ψ(t−r),
then
sup_{0⩽s⩽t} ‖I_ε(s,·)‖_2^q ⩽ c sup_{0⩽s⩽t} ∫_0^s r^{−3/4} Ψ(t−r) dr = c ∫_0^t r^{−3/4} Ψ(t−r) dr = c ∫_0^t (t−r)^{−3/4} Ψ(r) dr.
Taking expectations on Ω_ε^M(t), and taking into account that Ω_ε^M(t) ∈ F_t and Ω_ε^M(t) ⊂ Ω_ε^M(s) for 0 ⩽ s ⩽ t, we get
E(1_{Ω_ε^M(t)} sup_{0⩽s⩽t} ‖I_ε(s,·)‖_2^q) ⩽ c ∫_0^t (t−s)^{−3/4} E(1_{Ω_ε^M(s)} Ψ(s)) ds.
Notice that
1_{Ω_ε^M(s)} Ψ(s) ⩽ 1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖(u^ε(r,·) − u^0(r,·))(u^ε(r,·) + u^0(r,·))‖_1^q ⩽ 1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖u^ε(r,·) − u^0(r,·)‖_2^q ‖u^ε(r,·) + u^0(r,·)‖_2^q ⩽ 1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖u^ε(r,·) − u^0(r,·)‖_2^q (‖u^ε(r,·)‖_2 + ‖u^0(r,·)‖_2)^q ⩽ (2M)^q 1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖u^ε(r,·) − u^0(r,·)‖_2^q.
This, together with (51), gives
E(1_{Ω_ε^M(t)} sup_{0⩽s⩽t} ‖I_ε(s,·)‖_2^q) ⩽ 2cM^q ∫_0^t (t−s)^{−3/4} E(1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖u^ε(r,·) − u^0(r,·)‖_2^q) ds.
Combining (47)–(52), we get, for any 0 ⩽ t ⩽ T,
E(1_{Ω_ε^M(t)} sup_{0⩽s⩽t} ‖u^ε(s,·) − u^0(s,·)‖_2^q) ⩽ c[ε^{q/2} + 2M^q ∫_0^t (t−s)^{−3/4} E(1_{Ω_ε^M(s)} sup_{0⩽r⩽s} ‖u^ε(r,·) − u^0(r,·)‖_2^q) ds].
Using Gronwall’s lemma we deduce that, for all t∈[0,T],
E(1_{Ω_ε^M(t)} sup_{0⩽s⩽t} ‖u^ε(s,·) − u^0(s,·)‖_2^q) ⩽ c ε^{q/2} e^{2cM^q}.
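Since the kernel (t−s)^{−3/4} in the integral inequality is singular, the Gronwall lemma invoked here is necessarily a generalized one; the following formulation (a version of Henry's singular Gronwall lemma, stated under our reading of the argument, with α = 1/4) is sufficient:

```latex
% Generalized Gronwall lemma with a weakly singular kernel; stated as the
% variant we assume is being applied, with \alpha = 1/4 (kernel (t-s)^{-3/4}),
% a = c\,\varepsilon^{q/2} and b = 2cM^q in the present proof.
\textbf{Lemma.} Let $\alpha \in\, ]0,1]$ and $a, b \geqslant 0$, and let
$f \colon [0,T] \to [0,\infty[$ be bounded and measurable with
\[
  f(t) \;\leqslant\; a + b \int_0^t (t-s)^{\alpha-1} f(s)\, ds,
  \qquad 0 \leqslant t \leqslant T .
\]
Then $f(t) \leqslant C(\alpha, b, T)\, a$ for all $t \in [0,T]$.
Iterating the inequality finitely many times reduces it to the classical
Gronwall lemma, which produces the exponential form of the constant.
```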
Therefore, for any fixed M > 0 we have
E(sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^q) = E(1_{Ω_ε^M(T)} sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^q) + E(1_{Ω∖Ω_ε^M(T)} sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^q) ⩽ c ε^{q/2} e^{2cM^q} + (P(Ω∖Ω_ε^M(T)))^{1/2} (E(sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^{2q}))^{1/2},
where the last step uses the Cauchy–Schwarz inequality.
To deal with the second term of the last inequality, on the one hand, the estimates (7) and (38) imply that there exists c > 0 such that
sup_{ε∈]0,1]} E(sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^{2q}) < c.
On the other hand, by the Markov inequality, and using again the estimates (7) and (38), we have
P(Ω∖Ω_ε^M(T)) ⩽ P(sup_{t∈[0,T]} ‖u^ε(t,·)‖_2^q > M^q) + P(sup_{t∈[0,T]} ‖u^0(t,·)‖_2^q > M^q)
⩽ E(sup_{t∈[0,T]} ‖u^ε(t,·)‖_2^q)/M^q + E(sup_{t∈[0,T]} ‖u^0(t,·)‖_2^q)/M^q
⩽ sup_{ε∈]0,1]} E(sup_{t∈[0,T]} ‖u^ε(t,·)‖_2^q)/M^q + sup_{t∈[0,T]} ‖u^0(t,·)‖_2^q/M^q
⩽ c/M^q.
Then
E(sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^q) ⩽ c ε^{q/2} e^{2cM^q} + c M^{−q/2}.
Letting ε tend to zero, and taking into account that M is independent of ε, we obtain
lim sup_{ε→0} E(sup_{0⩽t⩽T} ‖u^ε(t,·) − u^0(t,·)‖_2^q) ⩽ c M^{−q/2}.
Finally, since M is arbitrary, we conclude that (44) holds. □
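The order of limits in this final step (first ε → 0 for fixed M, then M → ∞) can be illustrated numerically; the constants c = 1 and q = 2 below are arbitrary illustrative choices, not values from the proof:

```python
import math

# Illustration of the two-parameter limit: the bound
#   B(eps, M) = c * eps^(q/2) * exp(2 c M^q) + c * M^(-q/2)
# tends to c / M^(q/2) as eps -> 0 for each fixed M, and that residual
# is then made arbitrarily small by enlarging M.
c, q = 1.0, 2

def bound(eps, M):
    return c * eps ** (q / 2) * math.exp(2 * c * M ** q) + c * M ** (-q / 2)

for M in (2.0, 5.0, 10.0):
    # for fixed M, the first term is negligible once eps is small enough
    assert abs(bound(1e-300, M) - c * M ** (-q / 2)) < 1e-12

# the residual c / M^(q/2) decreases as M grows
assert bound(1e-300, 10.0) < bound(1e-300, 5.0) < bound(1e-300, 2.0)
```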
Appendix
This section contains some technical results needed in the proof of the main theorem of the paper.
First, we recall the following result proved in Lemma 3.1 in [17].
For H(t,s;x,y) := G(t−s,x,y) or H(t,s;x,y) := (∂/∂y)G(t−s,x,y), where 0 ⩽ s ⩽ t ⩽ T and x,y ∈ [0,1], define the linear operator J by
J(v)(t,x) := ∫_0^t ∫_0^1 H(t,r;x,y) v(r,y) dy dr, t ∈ [0,T], x ∈ [0,1],
for every v∈L∞([0,T],L1([0,1])).
Let ρ > 1, λ ∈ [1,ρ[ and set κ := 1 + 1/ρ − 1/λ. Then J is a bounded linear operator from L^γ([0,T], L^λ([0,1])) into C([0,T], L^ρ([0,1])) for γ > 2κ^{−1}. Moreover, there exists a positive constant c such that for all t ∈ [0,T],
‖J(v)(t,·)‖_ρ ⩽ c ∫_0^t (t−r)^{κ/2−1} ‖v(r,·)‖_λ dr.
The following lemma is a consequence of Lemma 3.1 in [11]; its proof is omitted.
Let F_t = σ(W(s,x); 0 ⩽ s ⩽ t, 0 ⩽ x ⩽ 1) and let Z : Ω × [0,T] × [0,1] → R be an (F_t)-predictable process such that sup_{0⩽s⩽T} sup_{0⩽y⩽1} |Z(s,y)| ⩽ ρ.
Set I(t,x) := ∫_0^t ∫_0^1 G_{t−u}(x,z) Z(u,z) W(du,dz). Then, there exist positive constants C_1 and C_2 such that for M > C_1 ρ,
P(sup_{0⩽s⩽T} sup_{0⩽y⩽1} |I(s,y)| ⩾ M) ⩽ C_1 exp(−M^2/(ρ^2 C_2(1+T^{1/8}))).
To use a fixed point argument, we consider, for any given L2([0,1])-valued function {w(t),t∈[0,T]}, the following operator
(Aw)(t,x) := −2 ∫_0^t ∫_0^1 ∂_y G_{t−s}(x,y) w(s,y) u^0(s,y) dy ds + ∫_0^t ∫_0^1 G_{t−s}(x,y) σ(u^0(s,y)) v(s,y) dy ds.
We are going to prove that A is a contraction on the Banach space H of L^2([0,1])-valued functions {w(t), t ∈ [0,T]} such that w(0) = 0, equipped with the norm
‖w‖ := ∫_0^T e^{−λt} ‖w(t,·)‖_2^2 dt, where λ > 0 will be fixed later.
Step 1. Let t ∈ [0,T]. We first prove that if w satisfies sup_{0⩽s⩽t} ‖w(s,·)‖_2^q < ∞, then Aw also satisfies this estimate. By Lemma 4.1, the Cauchy–Schwarz inequality and the hypothesis on w, we have
sup_{0⩽s⩽t} ‖Aw(s,·)‖_2^q ⩽ c[1 + ∫_0^t (t−s)^{−3/4} (sup_{0⩽r⩽s} ‖w(r,·) u^0(r,·)‖_1^q) ds] ⩽ c[1 + ∫_0^t (t−s)^{−3/4} (sup_{0⩽r⩽s} ‖w(r,·)‖_2^q ‖u^0(r,·)‖_2^q) ds] ⩽ c[1 + ∫_0^t (t−s)^{−3/4} (sup_{0⩽r⩽s} ‖w(r,·)‖_2^q) ds] ⩽ c[1 + ∫_0^t (t−s)^{−3/4} ds],
which is clearly finite.
Step 2. Let w1 and w2 be two elements in H. For any t∈[0,T] we have
‖Aw_1(t,·) − Aw_2(t,·)‖_2^q ⩽ c ∫_0^t (t−s)^{−3/4} ‖(w_1(s,·) − w_2(s,·)) u^0(s,·)‖_1^q ds ⩽ c ∫_0^t (t−s)^{−3/4} ‖w_1(s,·) − w_2(s,·)‖_2^q ‖u^0(s,·)‖_2^q ds ⩽ c ∫_0^t (t−s)^{−3/4} ‖w_1(s,·) − w_2(s,·)‖_2^q ds.
Then, using Fubini’s theorem we have
∫_0^T e^{−λt} ‖Aw_1(t,·) − Aw_2(t,·)‖_2^q dt ⩽ c ∫_0^T e^{−λt} ∫_0^t (t−s)^{−3/4} ‖w_1(s,·) − w_2(s,·)‖_2^q ds dt = c ∫_0^T ‖w_1(s,·) − w_2(s,·)‖_2^q (∫_s^T e^{−λt} (t−s)^{−3/4} dt) ds ⩽ c (∫_0^T e^{−λr} r^{−3/4} dr) ∫_0^T e^{−λs} ‖w_1(s,·) − w_2(s,·)‖_2^q ds = c (∫_0^T e^{−λr} r^{−3/4} dr) ‖w_1 − w_2‖_H^q,
where we substituted t = s + r in the inner integral.
Take λ and T_0 > 0 such that
c ∫_0^{T_0} e^{−λr} r^{−3/4} dr < 1.
Then, for T ⩽ T_0, the operator A is a contraction on H. Consequently, for any v ∈ S_N, A admits a unique fixed point u^v ∈ H which satisfies the equation (14). By concatenation we can construct a solution on every interval [0,T].
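The smallness condition above can always be met by enlarging λ, since ∫_0^∞ e^{−λr} r^{−3/4} dr = Γ(1/4) λ^{−1/4}. A quick numerical check (the constant c = 10 below is an arbitrary illustrative choice) is:

```python
import math

# The contraction condition requires  c * I(lam, T0) < 1  where
#   I(lam, T0) = ∫_0^{T0} e^(-lam r) r^(-3/4) dr.
# The substitution r = v^4 removes the singularity at r = 0:
#   I(lam, T0) = 4 ∫_0^{T0^{1/4}} e^(-lam v^4) dv,
# and I(lam, T0) <= Gamma(1/4) lam^(-1/4), so any c can be beaten
# by taking lam large enough.

def kernel_integral(lam, t0, n=200_000):
    """Midpoint-rule approximation of the (singular) kernel integral."""
    h = t0 ** 0.25 / n
    return 4.0 * h * sum(math.exp(-lam * ((k + 0.5) * h) ** 4) for k in range(n))

c, t0 = 10.0, 1.0
lam = (2.0 * c * math.gamma(0.25)) ** 4   # chosen so Gamma(1/4) lam^(-1/4) = 1/(2c)
val = kernel_integral(lam, t0)
assert c * val < 1.0                       # contraction condition holds
```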
The continuity of the solution u^v follows from the continuity of the integrals. For the estimate (18), one can apply to u^v the same computations as in (61), together with Gronwall's lemma. □
In order to prove Lemma 3.6, we have used the following lemma, whose proof can be found in Lemma 3.3 of [17].
For v ∈ L^∞([0,T]; L^1([0,1])), set
J(v)(t,x) := ∫_0^t ∫_0^1 ∂_y G(t−s,x,y) v(s,y) dy ds, t ∈ [0,T], x ∈ [0,1].
Let ρ ∈ [1,+∞[ and q ∈ [1,ρ[. Moreover, let ζ_ε(t,x) be a family of random fields on [0,T]×[0,1] such that sup_{t⩽T} ‖ζ_ε(t,·)‖_q ⩽ θ_ε, where θ_ε is a finite random variable for every ε. Assume that the family θ_ε is bounded in probability, i.e.,
lim_{c→+∞} sup_ε P{θ_ε ⩾ c} = 0.
Then, the family (J(ζ_ε))_{ε>0} is uniformly tight in C([0,T]; L^ρ([0,1])).
We summarize some important properties of the sequence {K_n, n ⩾ 1} in the following lemma.
Let (v_n) ⊂ S_N be a sequence converging weakly in H_T to an element v ∈ S_N. The sequence {K_n, n ⩾ 1} defined in (35) satisfies the following:
i) the sequence {K_n(t,x), n ⩾ 1} converges to zero for any fixed (t,x) ∈ [0,T]×[0,1];
ii) there exists a constant c(N,T), depending on N and T, such that sup_{n⩾1} sup_{t∈[0,T]} ‖K_n(t,·)‖_2 ⩽ c(N,T);
iii) lim_{n→∞} sup_{t∈[0,T]} ‖K_n(t,·)‖_2 = 0.
First notice that since
‖1_{[0,t]}(·) G_{t−·}(x,∗) σ(u^0(·,∗))‖_{H_T}^2 := ∫_0^T ∫_0^1 1_{[0,t]}(s) G_{t−s}^2(x,y) σ^2(u^0(s,y)) dy ds ⩽ c sup_{x∈[0,1]} ∫_0^t ∫_0^1 G_{t−s}^2(x,y) dy ds < +∞,
we have 1[0,t](·)Gt−·(x,∗)σ(u0(·,∗))∈HT and hence
K_n(t,x) = ⟨1_{[0,t]}(·) G_{t−·}(x,∗) σ(u^0(·,∗)), v_n − v⟩_{H_T}.
Therefore, by the weak convergence of (v_n) to v in H_T, we get point i) of Lemma 4.4.
Now, let us show (62) and (63). Using the Cauchy–Schwarz inequality, the boundedness of σ, the fact that v_n, v ∈ S_N, and Lemma 2.1, we have for any 0 ⩽ t ⩽ T,
‖K_n(t,·)‖_2^2 = ∫_0^1 |∫_0^t ∫_0^1 G_{t−s}(x,y) σ(u^0(s,y)) (v_n(s,y) − v(s,y)) dy ds|^2 dx ⩽ ‖v_n − v‖_{H_T}^2 (sup_{x∈[0,1]} ∫_0^t ∫_0^1 (G_{t−s}(x,y) σ(u^0(s,y)))^2 dy ds) ⩽ c(N,T) (sup_{x∈[0,1]} ∫_0^t ∫_0^1 G_{t−s}^2(x,y) dy ds) ⩽ c(N,T),
for some constant c(N,T) depending only on N and T, and not on n. This yields (62).
It remains to prove (63). Following similar arguments as above, we have, for any t,t′∈[0,T] such that t⩽t′,
‖K_n(t,·) − K_n(t′,·)‖_2^2 ⩽ c(N,T) (sup_{x∈[0,1]} ∫_0^t ∫_0^1 (G_{t−s}(x,y) − G_{t′−s}(x,y))^2 dy ds + sup_{x∈[0,1]} ∫_t^{t′} ∫_0^1 G_{t′−s}^2(x,y) dy ds) ⩽ c(N,T) |t − t′|^{1/2}.
According to (64) and (65), the sequence {Kn,n⩾1} is a bounded and Hölder continuous family in C([0,T];L2([0,1])); hence it is a bounded equicontinuous family and therefore by i) of Lemma 4.4 and the Arzelà–Ascoli theorem we get (63). □
Acknowledgement
The authors are very thankful to the Editor for her very constructive criticism, from which the final version of this article has benefited. Many thanks also to the referees for their careful reading and useful remarks. We are also indebted to Professors R. Zhang and J. Xiong for helpful discussions about the Burkholder–Davis–Gundy inequality for SPDEs driven by space-time white noise.
References
[1] Bertini, L., Cancrini, N., Jona-Lasinio, G.: The stochastic Burgers equation. 165(2), 211–232 (1994). MR1301846
[2] Boué, M., Dupuis, P.: A variational representation for certain functionals of Brownian motion. 26(4), 1641–1659 (1998). MR1675051. https://doi.org/10.1214/aop/1022855876
[3] Bryc, W.: Large deviations by the asymptotic value method. 20, 1004–1030 (1992)
[4] Budhiraja, A., Dupuis, P.: A variational representation for positive functionals of infinite dimensional Brownian motion. 20(1), 39–61 (2000). MR1785237
[5] Budhiraja, A., Dupuis, P., Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems. 1390–1420 (2008). MR2435853. https://doi.org/10.1214/07-AOP362
[6] Budhiraja, A., Dupuis, P., Ganguly, A., et al.: Moderate deviation principles for stochastic differential equations with jumps. 44(3), 1723–1775 (2016). MR3502593. https://doi.org/10.1214/15-AOP1007
[7] Burgers, J.M.: D. Reidel, Dordrecht, Boston (1974)
[8] Cardon-Weber, C.: Large deviations for a Burgers-type SPDE. 84(1), 53–70 (1999). MR1720097. https://doi.org/10.1016/S0304-4149(99)00047-2
[9] Cardon-Weber, C., Millet, A.: A support theorem for a generalized Burgers SPDE. 15(4), 361–408 (2001). MR1856154. https://doi.org/10.1023/A:1011857909744
[10] Chen, Y., Gao, H.: Well-posedness and large deviations for a class of SPDEs with Lévy noise. 263(9), 5216–5252 (2017). MR3688413. https://doi.org/10.1016/j.jde.2017.06.016
[11] Chenal, F., Millet, A.: Uniform large deviations for parabolic SPDEs and applications. 72(2), 161–186 (1997). MR1486551. https://doi.org/10.1016/S0304-4149(97)00091-4
[12] De Acosta, A.: Moderate deviations and associated Laplace approximations for sums of independent random vectors. 329(1), 357–375 (1992). MR1046015. https://doi.org/10.2307/2154092
[13] Dupuis, P., Ellis, R.S.: vol. 902. John Wiley & Sons (2011). MR1431744. https://doi.org/10.1002/9781118165904
[14] Foondun, M., Setayeshgar, L.: Large deviations for a class of semilinear stochastic partial differential equations. 121, 143–151 (2017). MR3575422. https://doi.org/10.1016/j.spl.2016.10.019
[15] Freidlin, M., Wentzell, A.: Springer (1984). MR0722136. https://doi.org/10.1007/978-1-4684-0176-9
[16] Gao, F.-Q.: Moderate deviations for martingales and mixing random processes. 61(2), 263–275 (1996). MR1386176. https://doi.org/10.1016/0304-4149(95)00078-X
[17] Gyöngy, I.: Existence and uniqueness results for semilinear stochastic partial differential equations. 73(2), 271–299 (1998). MR1608641. https://doi.org/10.1016/S0304-4149(97)00103-8
[18] Hu, S., Li, R., Wang, X.: Central limit theorem and moderate deviations for a class of semilinear SPDEs. arXiv preprint arXiv:1811.05611 (2018)
[19] Ikeda, N., Watanabe, S.: vol. 24. Elsevier (2014). MR0892528
[20] Karatzas, I., Shreve, S.: vol. 113. Springer (2012). MR1121940. https://doi.org/10.1007/978-1-4612-0949-2
[21] Liming, W.: Moderate deviations of dependent random variables related to CLT. 420–445 (1995). MR1330777
[22] Morien, P.-L.: On the density for the solution of a Burgers-type SPDE. In: vol. 35, pp. 459–482. Elsevier (1999). MR1702238. https://doi.org/10.1016/S0246-0203(99)00102-8
[23] Setayeshgar, L.: Large deviations for a stochastic Burgers equation. 8, 141–154 (2014). MR3269841. https://doi.org/10.31390/cosa.8.2.01
[24] Varadhan, S.S.: Asymptotic probabilities and differential equations. 19(3), 261–286 (1966). MR0203230. https://doi.org/10.1002/cpa.3160190303
[25] Walsh, J.B.: An introduction to stochastic partial differential equations. In: pp. 265–439. Springer (1986). MR0876085. https://doi.org/10.1007/BFb0074920
[26] Wang, R., Zhang, T.: Moderate deviations for stochastic reaction-diffusion equations with multiplicative noise. 42(1), 99–113 (2015). MR3297988. https://doi.org/10.1007/s11118-014-9425-6
[27] Yang, J., Jiang, Y.: Moderate deviations for fourth-order stochastic heat equations with fractional noises. 16(06), 1650022 (2016). MR3568726. https://doi.org/10.1142/S0219493716500222
[28] Zhang, R., Xiong, J.: Semilinear stochastic partial differential equations: central limit theorem and moderate deviations. arXiv preprint arXiv:1904.00299 (2019)