1 Introduction and preliminaries
Aim and motivation. In this paper, we study sample path properties of a class of sub-Gaussian type random fields $X(t)$, $t\in T$, focusing on the case where the parameter set T is endowed with an anisotropic metric and the field X satisfies a Hölder type continuity condition. Our aim is to establish upper bounds for the distribution of the supremum $P\big\{{\sup _{t\in T}}|X(t)|\gt u\big\}$ for bounded T and to evaluate the rate of growth of X over an unbounded domain V by deriving upper bounds for $P\big\{{\sup _{t\in V}}\frac{|X(t)|}{f(t)}\gt u\big\}$ for a properly chosen continuous function f. The study is motivated by applications to random fields related to stochastic heat equations. Extensive recent investigations of such equations have resulted, in particular, in establishing the Hölder continuity of solutions in various settings. It is natural and appealing to take further steps and consider various functionals of solutions. We evaluate the distribution of suprema of solutions and their asymptotic rate of growth.
Approach and tools. We present bounds for distributions of suprema assuming that X belongs to a particular class of φ-sub-Gaussian random fields (to be defined below), which generalizes Gaussian and sub-Gaussian fields. To derive the results we apply entropy methods. Recall that the entropy approach to studying sample paths of a stochastic process $X(t)$, $t\in T$, requires evaluating entropy characteristics of the set T with respect to a particular metric generated by the process X. The origins of this approach are due to Dudley, who stated conditions for the boundedness of Gaussian processes in the form of convergence of metric entropy integrals (now called Dudley entropy integrals). For the corresponding references we address, e.g., [1] and [5]; in the latter the entropy approach was extended to classes of processes more general than Gaussian ones.
Some facts from the general theory of φ-sub-Gaussian random variables and fields. Note that for applications it is important to go beyond the Gaussianity assumption in the models considered, and possible extensions are provided by sub-Gaussian and φ-sub-Gaussian random processes and fields. Recall that a random variable ξ is sub-Gaussian if its moment generating function is majorized by that of a centered Gaussian random variable $\eta \sim N(0,{\sigma ^{2}})$:
\[ \mathsf{E}\exp (\lambda \xi )\le \mathsf{E}\exp (\lambda \eta )=\exp ({\sigma ^{2}}{\lambda ^{2}}/2).\]
The generalization of this notion to the classes of φ-sub-Gaussian random variables is introduced as follows (see [5, Ch. 2], [9], [19], [25]). Consider a continuous even convex function φ such that $\varphi (0)=0$, $\varphi (x)\gt 0$ for $x\ne 0$ and $\underset{x\to 0}{\lim }\frac{\varphi (x)}{x}=0$, $\underset{x\to \infty }{\lim }\frac{\varphi (x)}{x}=\infty $. Such functions are called Orlicz N-functions. Suppose that φ additionally satisfies $\underset{x\to 0}{\liminf }\frac{\varphi (x)}{{x^{2}}}=c\gt 0$, where the case $c=\infty $ is possible.
Let φ be a function with the above properties and $\{\Omega ,\mathcal{F},\mathsf{P}\}$ be a standard probability space. The random variable ζ is φ-sub-Gaussian, or belongs to the space ${\mathrm{Sub}_{\varphi }}(\Omega )$, if $\mathsf{E}\zeta =0$, $\mathsf{E}\exp \{\lambda \zeta \}$ exists for all $\lambda \in \mathbb{R}$ and there exists a constant $a\gt 0$ such that the following inequality holds for all $\lambda \in \mathbb{R}$:
\[ \mathsf{E}\exp \{\lambda \zeta \}\le \exp \{\varphi (a\lambda )\}.\]
The random field $X(t)$, $t\in T$, is called φ-sub-Gaussian if the random variables $\{X(t),t\in T\}$ are φ-sub-Gaussian.
The space ${\mathrm{Sub}_{\varphi }}(\Omega )$ is a Banach space with respect to the norm (see [9, 19])
\[ {\tau _{\varphi }}(\zeta )=\underset{\lambda \ne 0}{\sup }\frac{{\varphi ^{(-1)}}\big(\ln \mathsf{E}\exp \{\lambda \zeta \}\big)}{|\lambda |}.\]
For a φ-sub-Gaussian random variable ζ the following estimate for the tail probability holds:
(1)
\[ P\{|\zeta |\gt u\}\le 2\exp \left\{-{\varphi ^{\ast }}\left(\frac{u}{{\tau _{\varphi }}(\zeta )}\right)\right\},\hspace{1em}u\gt 0,\]
where the function ${\varphi ^{\ast }}$ defined by ${\varphi ^{\ast }}(x)={\sup _{\substack{y\in \mathbb{R}}}}(xy-\varphi (y))$ is called the Young–Fenchel transform (or Legendre transform, or convex conjugate) of the function φ. It was stated in [5] (Corollary 4.1, p. 68) that, moreover, a random variable ζ is φ-sub-Gaussian if and only if $\mathsf{E}\zeta =0$ and there exist constants $C\gt 0$, $D\gt 0$ such that
(2)
\[ P\{|\zeta |\gt u\}\le C\exp \left\{-{\varphi ^{\ast }}\left(\frac{u}{D}\right)\right\},\hspace{1em}u\gt 0.\]
This second characterization of a φ-sub-Gaussian random variable by the tail behavior of its distribution is important for practical applications.
The class of φ-sub-Gaussian random variables includes centered compactly supported distributions, reflected Weibull distributions, centered bounded distributions, and centered Gaussian and Poisson distributions. In the case $\varphi (x)=\frac{{x^{2}}}{2}$, the notion of φ-sub-Gaussianity reduces to the classical sub-Gaussianity. The main theory of the spaces of φ-sub-Gaussian random variables and stochastic processes was presented in [5, 9, 19] and followed by numerous further studies. Various classes of φ-sub-Gaussian processes and fields were studied, in particular, in [4, 11, 16–18, 22].
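To illustrate how the tail bound (1) specializes, the following short Python sketch (the values of u and of the norm ${\tau _{\varphi }}(\zeta )$ are hypothetical, chosen only for illustration) evaluates $2\exp \{-{\varphi ^{\ast }}(u/{\tau _{\varphi }}(\zeta ))\}$ for $\varphi (x)=|x{|^{\alpha }}/\alpha $ and shows that for $\alpha =2$ it coincides with the classical sub-Gaussian tail $2\exp \{-{u^{2}}/(2{\tau ^{2}})\}$.

```python
import numpy as np

def phi_sub_gaussian_tail(u, tau, alpha=2.0):
    """Tail bound 2*exp(-phi*(u/tau)) for phi(x) = |x|**alpha / alpha,
    whose convex conjugate is phi*(x) = |x|**beta / beta with 1/alpha + 1/beta = 1."""
    beta = alpha / (alpha - 1.0)
    return 2.0 * np.exp(-(u / tau) ** beta / beta)

u, tau = 3.0, 1.0                                  # hypothetical values
print(phi_sub_gaussian_tail(u, tau, alpha=2.0))    # phi-sub-Gaussian bound for alpha = 2
print(2.0 * np.exp(-u ** 2 / (2.0 * tau ** 2)))    # classical sub-Gaussian tail, same value
print(phi_sub_gaussian_tail(u, tau, alpha=1.5))    # case alpha = 1.5, i.e. beta = 3
```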
The property of φ-sub-Gaussianity allows one to evaluate various functionals of stochastic processes, in particular, the behavior of their suprema.
Estimates for the distribution of the supremum $P\{{\sup _{t\in T}}|X(t)|\gt u\}$ of a φ-sub-Gaussian stochastic process X were derived in various forms in the monograph [5] based on entropy methods.
We will base our study on the following theorem (see [5], Theorems 4.1–4.2, pp. 100, 105).
Theorem 1 ([5]).
Let $X(t)$, $t\in T$, be a φ-sub-Gaussian process and ${\rho _{X}}$ be the pseudometric generated by X, that is, ${\rho _{X}}(t,s)={\tau _{\varphi }}(X(t)-X(s))$, $t,s\in T$. Suppose further that
-
(i) the pseudometric space $(T,{\rho _{X}})$ is separable, the process X is separable on $(T,{\rho _{X}})$;
-
(ii) ${\varepsilon _{0}}:=\underset{t\in T}{\sup }{\tau _{\varphi }}(X(t))\lt \infty $;
-
(iii)
(3)
\[ {I_{\varphi }}({\varepsilon _{0}}):={\underset{0}{\overset{{\varepsilon _{0}}}{\int }}}\Psi (\ln (N(v)))\hspace{0.1667em}dv\lt \infty ,\]
Then for all $\lambda \gt 0$ and $0\lt \theta \lt 1$,
\[ \mathsf{E}\exp \Big\{\lambda \underset{t\in T}{\sup }|X(t)|\Big\}\le 2Q(\lambda ,\theta ),\]
where
\[ Q(\lambda ,\theta )=\exp \Big\{\varphi \Big(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\Big)+\frac{2\lambda }{\theta (1-\theta )}{I_{\varphi }}(\theta {\varepsilon _{0}})\Big\},\]
and for $0\lt \theta \lt 1$, $u\gt \frac{2{I_{\varphi }}(\theta {\varepsilon _{0}})}{\theta (1-\theta )}$,
\[ \mathsf{P}\Big\{\underset{t\in T}{\sup }|X(t)|\gt u\Big\}\le 2A(u,\theta ),\]
where
\[ A(u,\theta )=\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{1}{{\varepsilon _{0}}}\Big(u(1-\theta )-\frac{2}{\theta }{I_{\varphi }}(\theta {\varepsilon _{0}})\Big)\Big)\Big\}.\]
In the above theorem and in what follows we denote by ${f^{(-1)}}$ the inverse function for a function f.
The integrals of the form (3) are called entropy integrals. Entropy characteristics of the parameter set T with respect to the pseudometric ${\rho _{X}}$ generated by the process X, namely, the rate of growth of the metric massiveness $N(v)={N_{{\rho _{X}}}}(v)$, $v\gt 0$, or of the metric entropy $H(v):=\ln (N(v))$, are important for the study of sample path properties of the underlying process X (see [5]).
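As a simple illustration of the notion of metric massiveness, the sketch below (our illustration, not taken from [5]) counts the minimal number of closed balls of radius v needed to cover the interval $[0,T]$ with the usual metric and compares it with the exact value $\lceil T/(2v)\rceil $; the metric entropy is then $H(v)=\ln N(v)$.

```python
import numpy as np

def covering_number_interval(T, v):
    """Minimal number of closed intervals of radius v (length 2v) covering [0, T]."""
    return int(np.ceil(T / (2.0 * v)))

def greedy_cover_interval(T, v):
    """Greedy covering: place ball centers at v, 3v, 5v, ... until [0, T] is covered."""
    count, covered_up_to = 0, 0.0
    while covered_up_to < T:
        covered_up_to += 2.0 * v   # each new ball covers the next segment of length 2v
        count += 1
    return count

T = 1.0
for v in [0.5, 0.1, 0.01]:
    N = covering_number_interval(T, v)
    print(v, N, greedy_cover_interval(T, v), np.log(N))   # N(v), greedy count, H(v)
```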
Consider now a metric space $(T,d)$, with an arbitrary metric d, and suppose that this metric space is separable. Suppose that we can evaluate the metric massiveness ${N_{d}}$ of T with respect to the metric d and also have a bound for the function ${\rho _{X}}(t,s)={\tau _{\varphi }}(X(t)-X(s))$ in terms of $d(t,s)$. Then Theorem 1 implies the following result.
Theorem 2.
Let $X(t)$, $t\in T$, be a φ-sub-Gaussian process and T be supplied with a metric d. Assume that
-
(i) the metric space $(T,d)$ is separable, the process X is separable on $(T,d)$;
-
(ii) ${\varepsilon _{0}}:=\underset{t\in T}{\sup }{\tau _{\varphi }}(X(t))\lt \infty $;
-
(iii) there exists a monotonically increasing continuous function $\sigma (h)$, $0\lt h\le {\sup _{t,s\in T}}d(s,t)$, such that $\sigma (h)\to 0$ as $h\to 0$ and
(4)
\[ \underset{\substack{d(t,s)\le h,\\ {} t,s\in T}}{\sup }{\tau _{\varphi }}(X(t)-X(s))\le \sigma (h),\]
and
(5)
\[ {\widetilde{I}_{\varphi }}(\varepsilon ):={\underset{0}{\overset{\varepsilon }{\int }}}\Psi (\ln ({N_{d}}({\sigma ^{(-1)}}(v))))\hspace{0.1667em}dv\lt \infty ,\hspace{1em}0\lt \varepsilon \le {\gamma _{0}},\]
where ${\gamma _{0}}=\sigma \big({\sup _{t,s\in T}}d(t,s)\big)$.
Then the statement of Theorem 1 holds for $\lambda \gt 0$ and $0\lt \theta \lt 1$ such that $\theta {\varepsilon _{0}}\le {\gamma _{0}}$ with $Q(\lambda ,\theta )$ and $A(u,\theta )$ replaced by $\widetilde{Q}(\lambda ,\theta )$ and $\widetilde{A}(u,\theta )$ which correspond to the integral ${\widetilde{I}_{\varphi }}$: for such λ and θ,
\[ \mathsf{E}\exp \Big\{\lambda \underset{t\in T}{\sup }|X(t)|\Big\}\le 2\widetilde{Q}(\lambda ,\theta ),\]
and, for $u\gt \frac{2{\widetilde{I}_{\varphi }}(\theta {\varepsilon _{0}})}{\theta (1-\theta )}$,
\[ \mathsf{P}\Big\{\underset{t\in T}{\sup }|X(t)|\gt u\Big\}\le 2\widetilde{A}(u,\theta ),\]
where
\[\begin{array}{l}\displaystyle \widetilde{Q}(\lambda ,\theta )=\exp \Big\{\varphi \Big(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\Big)+\frac{2\lambda }{\theta (1-\theta )}{\widetilde{I}_{\varphi }}(\theta {\varepsilon _{0}})\Big\},\\ {} \displaystyle \widetilde{A}(u,\theta )=\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{1}{{\varepsilon _{0}}}\Big(u(1-\theta )-\frac{2}{\theta }{\widetilde{I}_{\varphi }}(\theta {\varepsilon _{0}})\Big)\Big)\Big\}.\end{array}\]
Proof.
Theorem 2 follows immediately from Theorem 1. We have from (4) that ${\sup _{\substack{d(t,s)\le h,\\ {} t,s\in T}}}{\rho _{X}}(t,s)\le \sigma (h)$, therefore, the smallest number of elements in an ε-covering of $(T,{\rho _{X}})$ can be bounded by the smallest number of elements in a ${\sigma ^{(-1)}}(\varepsilon )$-covering of $(T,d)$: ${N_{{\rho _{X}}}}(\varepsilon )\le {N_{d}}({\sigma ^{(-1)}}(\varepsilon ))$. This implies the estimate ${I_{\varphi }}(\varepsilon )\le {\widetilde{I}_{\varphi }}(\varepsilon )$, as $\varepsilon \le {\gamma _{0}}$, and the statement of the theorem follows. □
Theorem 2 has been mainly used in the literature with the metric space $(T,d)$ chosen in the form $T=\{{a_{i}}\le {t_{i}}\le {b_{i}},\hspace{0.1667em}i=1,2\}$ and $d(t,s)=\underset{i=1,2}{\max }|{t_{i}}-{s_{i}}|$, $t=({t_{1}},{t_{2}})$, $s=({s_{1}},{s_{2}})$ (see, for example, [4, 12, 17] for applications to the analysis of solutions to the heat equation and higher order heat-type equations with random initial conditions).
In the paper we study a particular class of φ-sub-Gaussian random fields with $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$, which is a natural generalization of Gaussian and sub-Gaussian random fields; the Gaussian and sub-Gaussian cases are covered by the choice $\alpha =2$. We study the sample paths of such fields for the parameter set T of the form $T=[{a_{1}},{b_{1}}]\times [{a_{2}},{b_{2}}]$ or $T=[0,+\infty )\times [-A,A]$ with the so-called anisotropic metric
\[ d(t,s)=\sum \limits_{i=1,2}|{t_{i}}-{s_{i}}{|^{{H_{i}}}},\hspace{1em}{H_{i}}\in (0,1],\hspace{2.5pt}i=1,2.\]
Theorem 2 will serve as the main tool in our study. Note that the above metric is useful for studying anisotropic random fields, which have different geometrical and statistical properties in different directions, and also for space-time random fields, where one needs to treat the spatial and temporal variables in different ways. This is the case, for example, for random fields arising as solutions to stochastic partial differential equations. We refer to the papers [20], [27] (among others), where the use of such a metric was essential for investigating sample path properties of some models of anisotropic Gaussian random fields.
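The following minimal Python sketch (the parameter values are hypothetical) computes the anisotropic distance above and illustrates the geometric fact used later in Section 2.1: the rectangle $[-{(\varepsilon /2)^{1/{H_{1}}}},{(\varepsilon /2)^{1/{H_{1}}}}]\times [-{(\varepsilon /2)^{1/{H_{2}}}},{(\varepsilon /2)^{1/{H_{2}}}}]$ is contained in the d-ball of radius ε centered at the origin.

```python
import numpy as np

def d_aniso(t, s, H):
    """Anisotropic distance d(t, s) = sum_i |t_i - s_i|**H_i."""
    return sum(abs(ti - si) ** Hi for ti, si, Hi in zip(t, s, H))

H = (0.5, 1.0)          # hypothetical anisotropy indices H_1, H_2
eps = 0.4
# corner of the rectangle [-(eps/2)^(1/H_1), (eps/2)^(1/H_1)] x [-(eps/2)^(1/H_2), (eps/2)^(1/H_2)]
corner = tuple((eps / 2.0) ** (1.0 / Hi) for Hi in H)
print(d_aniso(corner, (0.0, 0.0), H))       # equals eps/2 + eps/2 = eps, so the corner lies in the d-ball
print(d_aniso((0.1, 0.3), (0.0, 0.0), H))   # distance of an arbitrary point from the origin
```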
Stochastic heat equations with fractional noises. Stochastic heat equations have been studied in various settings: with a space-time white noise, with generalizations of the noise in space and/or in time, and also with differential operators more general than the Laplacian. The case of fractional noises was considered, e.g., in the recent papers [2], [10], among many others (see references therein). In the present paper we consider the stochastic heat equation
(6)
\[ \frac{\partial u(t,x)}{\partial t}=\frac{{\partial ^{2}}u(t,x)}{\partial {x^{2}}}+\dot{W}(t,x),\hspace{1em}(t,x)\in (0,T]\times \mathbb{R},\]
(7)
\[ u(0,x)={u_{0}}(x),\hspace{1em}x\in \mathbb{R},\]
where W is a centered Gaussian process which is white in time and is fractional Brownian motion in space with index $H\le \frac{1}{2}$. Namely, following [10] we consider the case of W with the covariance functional
\[ \mathsf{E}W(\phi )W(\psi )={\int _{0}^{T}}{\int _{\mathbb{R}}}F\phi (t,\cdot )(y)F\psi (t,\cdot )(y)\mu (dy)dt\]
for any $\phi ,\psi \in {C_{0}^{\infty }}([0,T],\mathbb{R})$ (F denotes the Fourier transform with respect to the space variable), where the spectral measure μ is of the form
\[ \mu (dy)={C_{H}}|y{|^{1-2H}}\hspace{0.1667em}dy,\]
with a constant ${C_{H}}$ depending on H
(we give more details and references in Section 3). The problem (6)–(7) was considered, for example, in [10], where it was stated that under some continuity and boundedness conditions on ${u_{0}}(x)$ there exists a unique mild solution $u(t,x)$, $(t,x)\in (0,T]\times \mathbb{R}$, satisfying the Hölder condition
(8)
\[ \| u(t,x)-u(s,y){\| _{{L^{p}}}}\le c{\big(\Delta ((t,x),(s,y))\big)^{H\wedge \rho }}\]
with some constant $c=c(p,T,H)$, where $\Delta ((t,x),(s,y))=|t-s{|^{\frac{1}{2}}}+|x-y|$ is the parabolic metric and ρ is the index in the Hölder condition imposed on ${u_{0}}(x)$.
For our consideration, the bounds for the increments $u(t,x)-u(s,y)$ in the ${L_{2}}$ norm will be important. In view of this, we restate the bound (8) in another form for the case $p=2$, with the constant c given in a closed form, and then use this bound to derive results on the distribution of suprema and on the rate of growth for the random fields representing the solution.
Contents. The paper is organized as follows. In Section 2 we study φ-sub-Gaussian random fields with $\varphi =\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$. In Section 2.1 we present the estimates for the tail distribution of suprema on the bounded domain and in Section 2.2 we state the results on the rate of growth of random fields over unbounded domains. Section 3 presents applications of the results of Sections 2.1 and 2.2 to random fields related to stochastic heat equations with fractional noise.
2 ${\mathit{Sub}_{\varphi }}(\Omega )$ processes with $\varphi =\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$, defined on spaces with anisotropic metrics
Consider the process $X(t)$, $t\in T$, from the class of φ-sub-Gaussian processes with $\varphi =\frac{|x{|^{\alpha }}}{\alpha }$, $1\lt \alpha \le 2$. This class is a natural generalization of Gaussian and sub-Gaussian processes, which correspond to $\alpha =2$.
For the function $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1\lt \alpha \le 2$, we have ${\varphi ^{(-1)}}(x)={(\alpha x)^{1/\alpha }}$, $x\gt 0$, and the Young–Fenchel transform ${\varphi ^{\ast }}(x)=\frac{|x{|^{\beta }}}{\beta }$, where $\beta \gt 0$ is such that $\frac{1}{\beta }+\frac{1}{\alpha }=1$, that is, $\beta =\frac{\alpha }{\alpha -1}$.
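The conjugate pair $(\varphi ,{\varphi ^{\ast }})$ can be checked numerically; in the sketch below (the values of α and x are hypothetical) the supremum defining ${\varphi ^{\ast }}$ is computed by a one-dimensional maximization and compared with the closed form $|x{|^{\beta }}/\beta $.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_numeric(x, alpha):
    """phi*(x) = sup_y (x*y - |y|**alpha / alpha), computed numerically."""
    res = minimize_scalar(lambda y: -(x * y - abs(y) ** alpha / alpha))
    return -res.fun

alpha = 1.5                      # hypothetical value in (1, 2]
beta = alpha / (alpha - 1.0)     # conjugate exponent, here beta = 3
x = 2.0
print(conjugate_numeric(x, alpha))    # numerical supremum
print(abs(x) ** beta / beta)          # closed form |x|**beta / beta = 8/3
```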
The entropy integrals (3) and (5) take the form
\[ {I_{\varphi }}(\varepsilon )={\underset{0}{\overset{\varepsilon }{\int }}}{\Big(\ln ({N_{{\rho _{X}}}}(u))\Big)^{\frac{1}{\beta }}}\hspace{0.1667em}du\]
and
(9)
\[ {\widetilde{I}_{\varphi }}(\varepsilon )={\underset{0}{\overset{\varepsilon }{\int }}}{\Big(\ln ({N_{d}}({\sigma ^{(-1)}}(u)))\Big)^{\frac{1}{\beta }}}\hspace{0.1667em}du,\]
and the bounds in Theorems 1 and 2 will be based on these integrals. As we can see, for such a function φ the integrals appear in a quite simple form and can be evaluated for particular metrics d. Note that for more general φ it is sometimes more convenient to use entropy integrals of another form (see, e.g., [11, 16, 18]).
We next consider φ-sub-Gaussian fields $X(t),t\in T$, with the parameter set $T=[{a_{1}},{b_{1}}]\times [{a_{2}},{b_{2}}]$ endowed with the anisotropic metric
(10)
\[ d(t,s)=\sum \limits_{i=1,2}|{t_{i}}-{s_{i}}{|^{{H_{i}}}},\hspace{1em}{H_{i}}\in (0,1],\hspace{2.5pt}i=1,2.\]
2.1 Estimates for the distribution of suprema
Theorem 3.
Let $X(t),t\in T$, be a φ-sub-Gaussian field with $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$, $T=[{a_{1}},{b_{1}}]\times [{a_{2}},{b_{2}}]$ with the metric $d(t,s)$ defined by (10), $\beta =\frac{\alpha }{\alpha -1}$. Suppose that the field X satisfies conditions (i)–(iii) of Theorem 2 with $\sigma (h)=c{h^{\gamma }}$, $c\gt 0$, $\gamma \in (0,1]$ and $\gamma \beta \ne 1$.
Then for all $\lambda \gt 0$ and $\theta \in (0,1)$
(11)
\[ \mathsf{E}\exp \left\{\lambda \underset{t\in T}{\sup }|X(t)|\right\}\le 2\exp \left\{\frac{1}{\alpha }{\Big(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\Big)^{\alpha }}+\frac{2\lambda }{\theta (1-\theta )}{\Big(\theta {\varepsilon _{0}}\Big)^{1-\frac{1}{\gamma \beta }}}{c_{1}}\right\}\]
and for all $\theta \in (0,1)$, $\theta {\varepsilon _{0}}\lt {\gamma _{0}}$ and $u\gt \frac{2}{\theta (1-\theta )}{\big(\theta {\varepsilon _{0}}\big)^{1-\frac{1}{\gamma \beta }}}{c_{1}}$, we have
(12)
\[ \mathsf{P}\{\underset{t\in T}{\sup }|X(t)|\gt u\}\le 2\exp \Big\{-\frac{1}{\beta }{\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}-\frac{2}{\theta }{\big(\theta {\varepsilon _{0}}\big)^{1-\frac{1}{\gamma \beta }}}{c_{1}}\Big)^{\beta }}\Big\},\]
where ${c_{1}}=\frac{{2^{\frac{1}{\beta }}}{c^{\frac{1}{\gamma \beta }}}}{1-\frac{1}{\gamma \beta }}\textstyle\sum \limits_{i=1,2}\frac{1}{{H_{i}}}{\big(\frac{{T_{i}}}{2}\big)^{\frac{{H_{i}}}{\beta }}}$, with ${T_{i}}={b_{i}}-{a_{i}}$, $i=1,2$.
Proof.
We apply Theorem 2. We need to estimate the entropy integral ${\widetilde{I}_{\varphi }}(\varepsilon )$ given by (5) for the particular φ, σ, d under consideration. For the metric d given by (10) we can write the estimate for the metric massiveness
\[ {N_{d}}(\varepsilon )\le \prod \limits_{i=1,2}\Big(\frac{{2^{\frac{1}{{H_{i}}}}}{T_{i}}}{2{\varepsilon ^{\frac{1}{{H_{i}}}}}}+1\Big),\hspace{1em}\varepsilon \gt 0,\hspace{2.5pt}{T_{i}}={b_{i}}-{a_{i}}.\]
This estimate can be deduced from the observation that a rectangle
\[ \Big[-{\Big(\frac{\varepsilon }{2}\Big)^{\frac{1}{{H_{1}}}}},{\Big(\frac{\varepsilon }{2}\Big)^{\frac{1}{{H_{1}}}}}\Big]\times \Big[-{\Big(\frac{\varepsilon }{2}\Big)^{\frac{1}{{H_{2}}}}},{\Big(\frac{\varepsilon }{2}\Big)^{\frac{1}{{H_{2}}}}}\Big]\]
is contained in the ball in the metric d with center $(0,0)$ and radius ε, which is given as $B(\varepsilon )=\{({x_{1}},{x_{2}}):|{x_{1}}{|^{{H_{1}}}}+|{x_{2}}{|^{{H_{2}}}}\le \varepsilon \}$.
Now, for the given functions φ and σ (note that ${\sigma ^{(-1)}}(u)={(\frac{u}{c})^{\frac{1}{\gamma }}}$), we consider the entropy integral (9):
\[\begin{aligned}{}{\widetilde{I}_{\varphi }}(\varepsilon )={\underset{0}{\overset{\varepsilon }{\int }}}{\Big(\ln ({N_{d}}({\sigma ^{(-1)}}(u)))\Big)^{\frac{1}{\beta }}}\hspace{0.1667em}du& \le {\underset{0}{\overset{\varepsilon }{\int }}}\Big(\ln \prod \limits_{i=1,2}{\Big(\frac{{2^{\frac{1}{{H_{i}}}}}{T_{i}}}{2{({\sigma ^{(-1)}}(u))^{\frac{1}{{H_{i}}}}}}+1\Big)\Big)^{\frac{1}{\beta }}}\hspace{0.1667em}du\\ {} & ={\underset{0}{\overset{\varepsilon }{\int }}}\Big(\ln \prod \limits_{i=1,2}{\Big(\frac{{T_{i}}{2^{\frac{1}{{H_{i}}}}}{c^{\frac{1}{\gamma {H_{i}}}}}}{2{u^{\frac{1}{\gamma {H_{i}}}}}}+1\Big)\Big)^{\frac{1}{\beta }}}\hspace{0.1667em}du.\end{aligned}\]
For any $0\lt \varkappa \le 1$ and $x\gt 0$ we have $\ln (1+x)=\frac{1}{\varkappa }\ln {(1+x)^{\varkappa }}\le \frac{{x^{\varkappa }}}{\varkappa }$; we apply this inequality to each factor in the product in the above formula, choosing $\varkappa ={H_{i}}$, $i=1,2$:
\[\begin{aligned}{}{\widetilde{I}_{\varphi }}(\varepsilon )& \le {\underset{0}{\overset{\varepsilon }{\int }}}\sum \limits_{i=1,2}\frac{1}{{H_{i}}}{\Big(\frac{{T_{i}}{2^{\frac{1}{{H_{i}}}}}{c^{\frac{1}{\gamma {H_{i}}}}}}{2{u^{\frac{1}{\gamma {H_{i}}}}}}\Big)^{\frac{{H_{i}}}{\beta }}}\hspace{0.1667em}du=\sum \limits_{i=1,2}{\underset{0}{\overset{\varepsilon }{\int }}}\frac{1}{{H_{i}}}{\Big(\frac{{T_{i}}}{2}\Big)^{\frac{{H_{i}}}{\beta }}}{2^{\frac{1}{\beta }}}{c^{\frac{1}{\gamma \beta }}}\hspace{0.1667em}\frac{du}{{u^{\frac{1}{\gamma \beta }}}}\\ {} & =\frac{{\varepsilon ^{1-\frac{1}{\gamma \beta }}}}{1-\frac{1}{\gamma \beta }}{2^{\frac{1}{\beta }}}{c^{\frac{1}{\gamma \beta }}}\sum \limits_{i=1,2}\frac{1}{{H_{i}}}{\Big(\frac{{T_{i}}}{2}\Big)^{\frac{{H_{i}}}{\beta }}}={\varepsilon ^{1-\frac{1}{\gamma \beta }}}{c_{1}}.\end{aligned}\]
□
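The closed-form bound ${\widetilde{I}_{\varphi }}(\varepsilon )\le {c_{1}}{\varepsilon ^{1-\frac{1}{\gamma \beta }}}$ obtained in the proof is easy to check numerically. The sketch below (with hypothetical values of ${T_{i}}$, ${H_{i}}$, c, γ, α) evaluates the integral (9) with the covering-number estimate from the proof by quadrature and compares it with the closed-form bound.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameters: T = [0,1] x [0,1], H_1 = 0.5, H_2 = 1, sigma(h) = c*h**gamma, phi = |x|**alpha/alpha.
H, T = (0.5, 1.0), (1.0, 1.0)
c, gamma, alpha = 1.0, 1.0, 2.0
beta = alpha / (alpha - 1.0)                       # here beta = 2, so gamma*beta = 2 != 1

def integrand(u):
    """(ln N_d(sigma^(-1)(u)))**(1/beta) with the covering-number bound used in the proof."""
    prod = 1.0
    for Hi, Ti in zip(H, T):
        prod *= Ti * 2.0 ** (1.0 / Hi) * c ** (1.0 / (gamma * Hi)) / (2.0 * u ** (1.0 / (gamma * Hi))) + 1.0
    return np.log(prod) ** (1.0 / beta)

eps = 0.3
numeric_value, _ = quad(integrand, 0.0, eps)
c1 = (2.0 ** (1.0 / beta) * c ** (1.0 / (gamma * beta)) / (1.0 - 1.0 / (gamma * beta))
      * sum((Ti / 2.0) ** (Hi / beta) / Hi for Hi, Ti in zip(H, T)))
print(numeric_value, c1 * eps ** (1.0 - 1.0 / (gamma * beta)))   # quadrature value <= closed-form bound
```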
2.2 Estimates for the rate of growth
Consider now the field $X({t_{1}},{t_{2}})$, $({t_{1}},{t_{2}})\in V$, defined over the unbounded domain $V=[0,+\infty )\times [-A,A]$.
Let $f(t)\gt 0$, $t\ge 0$, be a continuous strictly increasing function such that $f(t)\to \infty $ as $t\to \infty $.
Introduce the sequence ${b_{0}}=0$, ${b_{k+1}}\gt {b_{k}}$, ${b_{k}}\to \infty $, $k\to \infty $.
We will use the following notations:
-
${l_{k}}={b_{k+1}}-{b_{k}}$, ${V_{k}}=[{b_{k}},{b_{k+1}}]\times [-A,A]$, $k=0,1,\dots \hspace{0.1667em}$, ${f_{k}}=f({b_{k}})$,
-
${\varepsilon _{k}}={\sup _{({t_{1}},{t_{2}})\in {V_{k}}}}{\tau _{\varphi }}(X({t_{1}},{t_{2}}))$, and suppose that $0\lt {\varepsilon _{k}}\lt \infty $;
-
$\tilde{\theta }={\inf _{k}}\frac{{\gamma _{k}}}{{\varepsilon _{k}}}$, where
-
${\gamma _{k}}={c_{k}}\underset{({t_{1}},{t_{2}}),({s_{1}},{s_{2}})\in {V_{k}}}{\max }{(d(({t_{1}},{t_{2}}),({s_{1}},{s_{2}})))^{\gamma }}={c_{k}}{\left({({l_{k}})^{{H_{1}}}}+{(2A)^{{H_{2}}}}\right)^{\gamma }}$,with ${c_{k}}$ being from (13) below,
-
$\beta =\frac{\alpha }{\alpha -1}$.
Theorem 4.
Let $X({t_{1}},{t_{2}})$, $({t_{1}},{t_{2}})\in V$, be a φ-sub-Gaussian separable field with $\varphi =\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$. Suppose further that
-
(i) there exist constants ${c_{k}}\gt 0$, $k=0,1,\dots $, and $\gamma \in (0,1]$ with $\gamma \beta \ne 1$ such that
(13)
\[ \underset{\substack{d(t,s)\le h,\\ {} t,s\in {V_{k}}}}{\sup }{\tau _{\varphi }}(X(t)-X(s))\le {c_{k}}{h^{\gamma }},\hspace{1em}k=0,1,\dots ;\]
-
(ii) $C={\textstyle\sum _{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{f_{k}}}\lt \infty $;
-
(iii) $S={\textstyle\sum _{k=0}^{\infty }}\frac{{\varepsilon _{k}^{1-\frac{1}{\gamma \beta }}}{c_{1}}(k)}{{f_{k}}}\lt \infty $, where ${c_{1}}(k)=\Big(\frac{1}{{H_{1}}}{({l_{k}}/2)^{\frac{{H_{1}}}{\beta }}}+\frac{1}{{H_{2}}}{A^{\frac{{H_{2}}}{\beta }}}\Big)\frac{{2^{\frac{1}{\beta }}}{c_{k}^{\frac{1}{\gamma \beta }}}}{1-\frac{1}{\gamma \beta }}$.
Then for all $\lambda \gt 0$ and $0\lt \theta \lt \widetilde{\theta }$,
(14)
\[ \mathsf{E}\exp \left\{\lambda \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\right\}\le 2\exp \left\{\frac{{\lambda ^{\alpha }}}{\alpha }{\Big(\frac{C}{1-\theta }\Big)^{\alpha }}+\frac{2\lambda }{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}S\right\},\]
and for $0\lt \theta \lt \widetilde{\theta }$ and $u\gt \frac{2S}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}$,
(15)
\[ \mathsf{P}\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}\le 2\exp \left\{-\frac{\alpha -1}{\alpha }{C^{-\frac{\alpha }{\alpha -1}}}{\left(u(1-\theta )-2S{\theta ^{-\frac{1}{\gamma \beta }}}\right)^{\frac{\alpha }{\alpha -1}}}\right\}.\]
Proof.
We use a scheme of proof similar to that in [8] (Theorem 2.4) and [11] (Theorem 4). Using (11) we can write the estimate
\[\begin{aligned}{}I(\lambda )& =\mathsf{E}\exp \left\{\lambda \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\right\}\le \mathsf{E}\exp \left\{\lambda {\sum \limits_{k=0}^{\infty }}\underset{({t_{1}},{t_{2}})\in {V_{k}}}{\sup }\frac{|X({t_{1}},{t_{2}})|}{{f_{k}}}\right\}\\ {} & \le {\prod \limits_{k=0}^{\infty }}{\Big(\mathsf{E}\exp \left\{\lambda \frac{{r_{k}}}{{f_{k}}}\underset{({t_{1}},{t_{2}})\in {V_{k}}}{\sup }|X({t_{1}},{t_{2}})|\right\}\Big)^{\frac{1}{{r_{k}}}}}\le 2{\prod \limits_{k=0}^{\infty }}{Q_{k}}{(\lambda ,\theta )^{\frac{1}{{r_{k}}}}},\end{aligned}\]
where
\[\begin{array}{l}\displaystyle {Q_{k}}(\lambda ,\theta )=\exp \left\{\frac{1}{\alpha }{\Big(\frac{\lambda {\varepsilon _{k}}{r_{k}}}{(1-\theta ){f_{k}}}\Big)^{\alpha }}+2\lambda \frac{{r_{k}}}{{f_{k}}}\frac{1}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}{\varepsilon _{k}^{1-\frac{1}{\gamma \beta }}}{c_{1}}(k)\right\},\\ {} \displaystyle {c_{1}}(k)=\Big(\frac{1}{{H_{1}}}{({l_{k}}/2)^{\frac{{H_{1}}}{\beta }}}+\frac{1}{{H_{2}}}{A^{\frac{{H_{2}}}{\beta }}}\Big)\frac{{2^{\frac{1}{\beta }}}{c_{k}^{\frac{1}{\gamma \beta }}}}{1-\frac{1}{\gamma \beta }},\end{array}\]
and ${r_{k}}$, $k\ge 0$, are numbers such that ${\textstyle\sum _{k=0}^{\infty }}\frac{1}{{r_{k}}}=1$ (the generalized Hölder inequality was applied in the second step above). This implies that
\[ I(\lambda )\le 2\exp \left\{\frac{1}{\alpha }{\Big(\frac{\lambda }{1-\theta }\Big)^{\alpha }}{\sum \limits_{k=0}^{\infty }}{\Big(\frac{{\varepsilon _{k}}{r_{k}}}{{f_{k}}}\Big)^{\alpha }}{r_{k}^{-1}}+\frac{2\lambda }{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}{\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}^{1-\frac{1}{\gamma \beta }}}{c_{1}}(k)}{{f_{k}}}\right\}.\]
Choose ${r_{k}}=\frac{{f_{k}}}{{\varepsilon _{k}}}C$, where $C={\textstyle\sum _{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{f_{k}}}$ (so that ${\textstyle\sum _{k=0}^{\infty }}\frac{1}{{r_{k}}}=1$), then
\[ {\sum \limits_{k=0}^{\infty }}{\Big(\frac{{\varepsilon _{k}}{r_{k}}}{{f_{k}}}\Big)^{\alpha }}{r_{k}^{-1}}={C^{\alpha -1}}{\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{f_{k}}}={C^{\alpha }}.\]
Therefore,
\[ \mathsf{E}\exp \left\{\lambda \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\right\}\le 2\exp \left\{\frac{{\lambda ^{\alpha }}}{\alpha }{\Big(\frac{C}{1-\theta }\Big)^{\alpha }}+\frac{2\lambda }{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}S\right\},\]
and for all $\lambda \gt 0$, $u\gt 0$, $0\lt \theta \lt \widetilde{\theta }$,
\[\begin{aligned}{}P\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}& \le \exp \{-\lambda u\}\mathsf{E}\exp \left\{\lambda \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\right\}\\ {} & \le 2\exp \left\{\frac{{\lambda ^{\alpha }}}{\alpha }{\Big(\frac{C}{1-\theta }\Big)^{\alpha }}+\frac{2\lambda S}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}-\lambda u\right\}.\end{aligned}\]
We minimize the right-hand side with respect to λ and obtain for $u\gt \frac{2S}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}$
\[\begin{aligned}{}& P\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}\\ {} & \hspace{1em}\le 2\exp \left\{-{\Big(u-\frac{2S}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}\Big)^{\frac{\alpha }{\alpha -1}}}{\Big(\frac{C}{1-\theta }\Big)^{-\frac{\alpha }{\alpha -1}}}\frac{\alpha -1}{\alpha }\right\}\\ {} & \hspace{1em}=2\exp \left\{-\frac{\alpha -1}{\alpha }{C^{-\frac{\alpha }{\alpha -1}}}{\left(u(1-\theta )-2S{\theta ^{-\frac{1}{\gamma \beta }}}\right)^{\frac{\alpha }{\alpha -1}}}\right\}.\end{aligned}\]
□
Theorem 5.
Let $X({t_{1}},{t_{2}})$, $({t_{1}},{t_{2}})\in V$, be a φ-sub-Gaussian separable field with $\varphi =\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$. Suppose further that
-
(iii) $\widetilde{C}=c(\delta ){\textstyle\sum _{k=0}^{\infty }}\frac{{b_{k+1}^{\delta }}}{{f_{k}}}\lt \infty $;
-
(iv) $\widetilde{S}={(c(\delta ))^{1-\frac{1}{\gamma \beta }}}{\textstyle\sum _{k=0}^{\infty }}\frac{{b_{k+1}^{\delta (1-\frac{1}{\gamma \beta })}}{c_{1}}(k)}{{f_{k}}}\lt \infty $, where ${c_{1}}(k)=\Big(\frac{1}{{H_{1}}}{({l_{k}}/2)^{\frac{{H_{1}}}{\beta }}}+\frac{1}{{H_{2}}}{A^{\frac{{H_{2}}}{\beta }}}\Big)\frac{{2^{\frac{1}{\beta }}}{c_{k}^{\frac{1}{\gamma \beta }}}}{1-\frac{1}{\gamma \beta }}$.
Then for all $\lambda \gt 0$ and $0\lt \theta \lt \widetilde{\theta }$,
(18)
\[ \mathsf{E}\exp \left\{\lambda \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\right\}\le 2\exp \left\{\frac{{\lambda ^{\alpha }}}{\alpha }{\Big(\frac{\widetilde{C}}{1-\theta }\Big)^{\alpha }}+\frac{2\lambda }{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}\widetilde{S}\right\},\]
and for $0\lt \theta \lt \widetilde{\theta }$ and $u\gt \frac{2\widetilde{S}}{(1-\theta ){\theta ^{\frac{1}{\gamma \beta }}}}$,
(19)
\[ \mathsf{P}\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}\le 2\exp \left\{-\frac{\alpha -1}{\alpha }{\widetilde{C}^{-\frac{\alpha }{\alpha -1}}}{\left(u(1-\theta )-2\widetilde{S}{\theta ^{-\frac{1}{\gamma \beta }}}\right)^{\frac{\alpha }{\alpha -1}}}\right\},\]
that is, the estimates (14) and (15) hold with C and S replaced by $\widetilde{C}$ and $\widetilde{S}$.
Remark 1.
The bound (15) can be rewritten in another form. We can choose $\theta ={u^{-\frac{\gamma \beta }{\gamma \beta +1}}}$, and then under the conditions of Theorem 4, for $u\gt {(1+2S)^{\frac{\gamma \beta }{\gamma \beta +1}}}$ the following bound holds:
\[ \mathsf{P}\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}\le 2\exp \left\{-\frac{1}{\beta {C^{\beta }}}{\left(u-{u^{\frac{1}{\gamma \beta +1}}}(1+2S)\right)^{\beta }}\right\}.\]
Correspondingly, under the conditions of Theorem 5, for $u\gt {(1+2\widetilde{S})^{\frac{\gamma \beta }{\gamma \beta +1}}}$ we can write the bound
(20)
\[ \mathsf{P}\left\{\underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\gt u\right\}\le 2\exp \left\{-\frac{1}{\beta {\widetilde{C}^{\beta }}}{\left(u-{u^{\frac{1}{\gamma \beta +1}}}(1+2\widetilde{S})\right)^{\beta }}\right\}.\]
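For a quick numerical illustration of Remark 1, the sketch below (with hypothetical values of C, S, γ, β) evaluates the simplified bound obtained with the choice $\theta ={u^{-\frac{\gamma \beta }{\gamma \beta +1}}}$ and, for comparison, the bound (15) minimized over θ by a grid search; the explicit choice of θ gives a slightly larger but much simpler bound.

```python
import numpy as np

# Hypothetical constants (in applications C and S come from conditions (ii)-(iii) of Theorem 4).
C, S = 1.0, 0.5
gamma, beta = 1.0, 2.0           # so gamma*beta = 2

def bound_theorem4(u, theta):
    """Right-hand side of (15); returns the trivial bound 2 when the bracket is not positive."""
    bracket = u * (1.0 - theta) - 2.0 * S * theta ** (-1.0 / (gamma * beta))
    if bracket <= 0.0:
        return 2.0
    return 2.0 * np.exp(-(1.0 / beta) * (bracket / C) ** beta)

def bound_remark1(u):
    """Simplified bound of Remark 1 with theta = u**(-gamma*beta/(gamma*beta + 1))."""
    core = u - u ** (1.0 / (gamma * beta + 1.0)) * (1.0 + 2.0 * S)
    return 2.0 * np.exp(-(1.0 / (beta * C ** beta)) * core ** beta)

u = 10.0
thetas = np.linspace(0.01, 0.99, 99)
print(min(bound_theorem4(u, th) for th in thetas))   # (15) optimized over theta on a grid
print(bound_remark1(u))                              # explicit bound of Remark 1 (slightly larger)
```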
Theorem 6.
Suppose that for the field $X({t_{1}},{t_{2}})$, $({t_{1}},{t_{2}})\in V$, and the function f the conditions of Theorem 5 are satisfied. Then there exists a random variable ξ such that with probability one
\[ \underset{({t_{1}},{t_{2}})\in V}{\sup }\frac{|X({t_{1}},{t_{2}})|}{f({t_{1}})}\le \xi ,\]
and ξ satisfies the estimate
(21)
\[ \mathsf{P}\left\{\xi \gt u\right\}\le 2\exp \left\{-\frac{1}{\beta {\widetilde{C}^{\beta }}}{\left(u-{u^{\frac{1}{\gamma \beta +1}}}(1+2\widetilde{S})\right)^{\beta }}\right\}\]
for $u\gt {(1+2\widetilde{S})^{\frac{\gamma \beta }{\gamma \beta +1}}}$.
Remark 2.
Note that we introduced the condition (ii) in Theorem 5 having in mind the application of Theorems 5 and 6 to random fields arising as solutions to stochastic heat equations – see the field $V(t,x)$ in (26) and the estimate for its norm (29). At the same time, Theorem 5 gives an example of the case where ${\varepsilon _{k}}$ needed in the conditions (ii) and (iii) of Theorem 4 can be evaluated due to the additional assumption (17) on the norms of the field under consideration. As a result, we can check the validity of conditions for C and S (see conditions (ii) and (iii) of Theorem 4) which in this particular case take the form of $\widetilde{C}$ and $\widetilde{S}$ (see conditions (iii) and (iv) of Theorem 5). This implies Theorem 9 for the field $V(t,x)$. Therefore, one can see that to evaluate the rate of growth it is sufficient to have a kind of the Hölder condition (16) and some scaling for the norms like in (17), which gives the possibility to choose a particular f and a partition ${b_{k}}$ so that $\widetilde{C}$ and $\widetilde{S}$ are finite as needed to write down the estimates (18) and (19). The choice of f and ${b_{k}}$ is demonstrated in the proof of Theorem 9. Distributional properties of the field, which in the present paper appear as the assumption of φ-sub-Gaussianity with a particular function φ, lead to the exponential form of the bounds and are also taken into account in the functional form of the expression under the exponent (which depends on φ) in (18) and (19). More detailed study of these topics, including particular models, deserves further investigation.
3 Application to stochastic heat equation with fractional noise
Consider the stochastic heat equation
(22)
\[ \frac{\partial u(t,x)}{\partial t}=\frac{{\partial ^{2}}u(t,x)}{\partial {x^{2}}}+\dot{W}(t,x),\hspace{1em}(t,x)\in (0,T]\times \mathbb{R},\]
(23)
\[ u(0,x)={u_{0}}(x),\hspace{1em}x\in \mathbb{R},\]
with the following assumption about the noise:
A.1.
W is a centered Gaussian process which is white in time and is fractional Brownian motion in space with index $H\le \frac{1}{2}$.
Namely, following [10] we consider the case of W with the covariance functional
\[ \mathsf{E}W(\phi )W(\psi )={\int _{0}^{T}}{\int _{\mathbb{R}}}F\phi (t,\cdot )(y)F\psi (t,\cdot )(y)\mu (dy)dt\]
for any $\phi ,\psi \in {C_{0}^{\infty }}([0,T],\mathbb{R})$ (F denotes the Fourier transform with respect to the space variable), where the spectral measure μ is of the form
(24)
\[ \mu (dy)={C_{H}}|y{|^{1-2H}}\hspace{0.1667em}dy,\hspace{1em}{C_{H}}=\frac{\Gamma (2H+1)\sin (\pi H)}{2\pi }.\]
Note that there exists an extensive literature on SPDEs driven by Gaussian and more general noises and involving various differential operators, see, for example, the monographs [6], [21], [26], among many others.
As for the stochastic heat equation, it has been studied in various settings: driven by a space-time white noise, with generalizations of the noise in space or in time, and also with differential operators more general than the Laplacian.
Since we are going to apply the results of the previous section, we are interested in results for the solution to the problem (22)–(23) which allow one to write down bounds for $\mathsf{E}|u(t,x)-u(s,y){|^{2}}$.
We will base our study on the results from the recent papers [2, 10] on regularity properties of the solution to the stochastic heat equation.
We consider the problem (22)–(23) with assumption A.1 on the noise and imposing the following assumption on the initial condition ${u_{0}}$.
A.2.
The process ${u_{0}}(x),x\in \mathbb{R}$, is continuous, possesses uniformly bounded p-th moments for $p\ge 2$ and is stochastically Hölder continuous with $\rho \in (0,1]$, i.e. for all $p\ge 1$
\[ \| {u_{0}}(x)-{u_{0}}(y){\| _{{L^{p}}}}\le {L_{0}}(p)|x-y{|^{\rho }},\hspace{1em}x,y\in \mathbb{R},\]
where ${L_{0}}(p)$ is a constant and $\| \cdot {\| _{{L^{p}}}}$ denotes the norm in ${L^{p}}(\Omega ,\mathbb{R})$: $\| u{\| _{{L^{p}}}}={\left(\mathsf{E}|u{|^{p}}\right)^{\frac{1}{p}}}$, $p\ge 1$.
From [10] (see Theorem 1.1) it follows that under assumptions A.1 and A.2 equation (22) has a unique mild solution $u(t,x),(t,x)\in (0,T]\times \mathbb{R}$, satisfying $\underset{(t,x)\in (0,T]\times \mathbb{R}}{\sup }\mathsf{E}|u(t,x){|^{p}}\lt \infty $ and the Hölder condition
(25)
\[ \| u(t,x)-u(s,y){\| _{{L^{p}}}}\le c{\big(\Delta ((t,x),(s,y))\big)^{H\wedge \rho }}\]
with some constant $c=c(p,T,H)$, where $\Delta ((t,x),(s,y))=|t-s{|^{\frac{1}{2}}}+|x-y|$ is the parabolic metric.
The mild solution is defined as the random field
(26)
\[ u(t,x)={\int _{\mathbb{R}}}{G_{t}}(x-y){u_{0}}(y)\hspace{0.1667em}dy+{\int _{0}^{t}}{\int _{\mathbb{R}}}{G_{t-\theta }}(x-\eta )W(d\theta ,d\eta )=\omega (t,x)+V(t,x),\]
where ${G_{t}}(x)$ is the Green’s function (fundamental solution) of the equation $(\frac{\partial }{\partial t}-\frac{{\partial ^{2}}}{\partial {x^{2}}})u=0$, that is,
\[ {G_{t}}(x)=\frac{1}{\sqrt{4\pi t}}\exp \Big(-\frac{{x^{2}}}{4t}\Big),\hspace{1em}t\gt 0,\hspace{2.5pt}x\in \mathbb{R}.\]
For rigorous definitions of the integrals in (26) we refer, for example, to [2], [10] (see also [3], [14]). Note that the construction of the integral with respect to the space-time fractional noise is rather subtle and involved, and all the details can be found in the papers mentioned. Here we just recall some facts. The domain of the Wiener integral with respect to the fractional Brownian motion B of index $H\in (0,1)$ is the completion of the space ${C_{0}^{\infty }}$ of infinitely differentiable functions with compact support with respect to the inner product
\[ (\phi ,\psi )=\mathsf{E}B(\phi )B(\psi )={\int _{\mathbb{R}}}F\phi (y)F\psi (y)\mu (dy),\hspace{1em}\phi ,\psi \in {C_{0}^{\infty }},\]
and coincides with the space of distributions $S\in {\mathcal{S}^{\prime }}({C_{0}^{\infty }})$ whose Fourier transforms $FS$ are locally integrable functions satisfying ${\textstyle\int _{\mathbb{R}}}|FS(y){|^{2}}\mu (dy)\lt \infty $. The construction of the integral can be extended by adding one more variable. The integral with respect to the space-time fractional process W is defined over ${H_{T}}$ which is the completion of the space ${C_{0}^{\infty }}([0,T]\times \mathbb{R})$ with respect to the inner product
\[\begin{array}{r}\displaystyle (\phi ,\psi )=\mathsf{E}W(\phi )W(\psi )={\int _{0}^{T}}{\int _{\mathbb{R}}}F\phi (t,\cdot )(y)F\psi (t,\cdot )(y)\mu (dy)dt,\\ {} \displaystyle \hspace{0.1667em}\hspace{0.1667em}\phi ,\psi \in {C_{0}^{\infty }}([0,T]\times \mathbb{R}),\end{array}\]
where the Fourier transform F is taken with respect to the space variable. The space ${H_{T}}$ admits a similar characterization in terms of Fourier transforms (see, e.g., [10]).
For integrands in ${H_{T}}$ the following Itô-type isometry holds:
(27)
\[ \mathsf{E}{\left|{\int _{0}^{t}}{\int _{\mathbb{R}}}g(\theta ,\eta )W(d\theta ,d\eta )\right|^{2}}={C_{H}}{\int _{0}^{t}}{\int _{\mathbb{R}}}{\left|Fg(\theta ,\cdot )(\xi )\right|^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi \hspace{0.1667em}d\theta .\]
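As a small illustration of the deterministic part of the mild solution, the sketch below (the initial condition ${u_{0}}(y)=\cos y$ is a hypothetical choice, used only for this check) evaluates the convolution ${\textstyle\int _{\mathbb{R}}}{G_{t}}(x-y){u_{0}}(y)\hspace{0.1667em}dy$ by quadrature and compares it with the exact heat-semigroup value ${e^{-t}}\cos x$.

```python
import numpy as np
from scipy.integrate import quad

def G(t, x):
    """Heat kernel G_t(x) = exp(-x^2/(4t)) / sqrt(4*pi*t), the Green's function of u_t = u_xx."""
    return np.exp(-x ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def omega(t, x, u0):
    """Deterministic part of the mild solution: convolution of G_t with the initial condition u0."""
    value, _ = quad(lambda y: G(t, x - y) * u0(y), -np.inf, np.inf)
    return value

t, x = 0.5, 1.0
u0 = np.cos                                   # hypothetical deterministic initial condition
print(omega(t, x, u0))                        # numerical convolution
print(np.exp(-t) * np.cos(x))                 # exact value e^{-t} cos(x) for this u0
```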
In the next theorem we state the Hölder continuity of the fields $\omega (t,x)$ and $V(t,x)$ in a form where explicit expressions for the constants are given.
Theorem 7.
Let assumptions A.1 and A.2 hold. Then the following bounds hold:
(28)
\[ \| \omega (t,x){\| _{{L^{2}}}}\le \underset{x\in \mathbb{R}}{\sup }\| {u_{0}}(x){\| _{{L^{2}}}}\le {c_{0}};\]
(29)
\[ \| V(t,x){\| _{{L^{2}}}}\le A(H){t^{\frac{H}{2}}};\]
(30)
\[ \| \omega (t,x)-\omega (s,y){\| _{{L^{2}}}}\le {c_{\omega }}\big(|t-s{|^{\frac{\rho }{2}}}+|x-y{|^{\rho }}\big);\]
(31)
\[ \| V(t,x)-V(s,y){\| _{{L^{2}}}}\le {c_{V}}\big(|t-s{|^{\frac{H}{2}}}+|x-y{|^{H}}\big);\]
where the constants ${c_{\omega }}$, ${c_{V}}$, $A(H)$ are calculated as follows:
\[\begin{aligned}{}& {c_{\omega }}={(2L\max ({C_{1}},L))^{1/2}},\hspace{0.1667em}\hspace{0.1667em}{c_{V}}={(3{C_{H}}\max (({c_{1,H}}+{c_{2,H}}),{c_{3,H}}))^{1/2}},\\ {} & A(H)={({C_{H}}{c_{1,H}})^{1/2}}\end{aligned}\]
with
-
${C_{1}}=\frac{{4^{\rho }}}{\sqrt{\pi }}\Gamma (\rho +\frac{1}{2})$, $L={L_{0}}(2)$, ${c_{1,H}}={2^{-(H+1)}}\frac{\Gamma (H+1)}{H}$,
-
${c_{2,H}}=\frac{1}{2}{\textstyle\int _{\mathbb{R}}}\frac{{\left(1-\exp (-{u^{2}})\right)^{2}}}{{u^{1+2H}}}\hspace{0.1667em}du$,
-
${c_{3,H}}={(2H)^{-1}}\Gamma (1-2H)\cos (H\pi )$ for $H\lt \frac{1}{2}$ and ${c_{3,H}}=\frac{\pi }{2}$ for $H=\frac{1}{2}$,
-
${C_{H}}$ is given by (24).
Proof.
For the proof we use reasoning similar to that applied in a more general setting in [10] (Theorem 1.1) (see also [2]). Therefore, we present only the main steps; our interest is in keeping track of the values of the constants involved in each step through the entire chain of bounds.
We have
\[\begin{aligned}{}\mathsf{E}|\omega (t,x)-\omega (s,x){|^{2}}& =\mathsf{E}{\left|{\int _{\mathbb{R}}}\left({G_{t}}(x-y)-{G_{s}}(x-y)\right){u_{0}}(y)\hspace{0.1667em}dy\right|^{2}}\\ {} & =\mathsf{E}{\left|{\int _{\mathbb{R}}}{G_{t-s}}(y){\int _{\mathbb{R}}}{G_{s}}(x-z)({u_{0}}(z-y)-{u_{0}}(z))\hspace{0.1667em}dz\hspace{0.1667em}dy\right|^{2}}\\ {} & \le {\int _{\mathbb{R}}}{G_{t-s}}(y){\int _{\mathbb{R}}}{G_{s}}(x-z)\mathsf{E}{\left|{u_{0}}(z-y)-{u_{0}}(z)\right|^{2}}\hspace{0.1667em}dzdy\\ {} & \le L{\int _{\mathbb{R}}}{G_{t-s}}(y)|y{|^{2\rho }}\hspace{0.1667em}dy=L{C_{1}}|t-s{|^{\rho }},\end{aligned}\]
since
\[\begin{aligned}{}{\int _{\mathbb{R}}}{G_{h}}(y)|y{|^{2\rho }}\hspace{0.1667em}dy& ={\int _{\mathbb{R}}}\frac{1}{\sqrt{4\pi h}}\exp \left(-\frac{{y^{2}}}{4h}\right)|y{|^{2\rho }}\hspace{0.1667em}dy\\ {} & =\frac{1}{\sqrt{4\pi h}}\hspace{0.1667em}2{\int _{0}^{\infty }}\exp \left(-\frac{{y^{2}}}{4h}\right){y^{2\rho }}\hspace{0.1667em}dy\\ {} & =\frac{{4^{\rho }}}{\sqrt{\pi }}\Gamma (\rho +\frac{1}{2}){h^{\rho }}={C_{1}}{h^{\rho }},\end{aligned}\]
where ${C_{1}}=\frac{{4^{\rho }}}{\sqrt{\pi }}\Gamma (\rho +\frac{1}{2})$, $L={L_{0}}(2)$, and we have used above the property ${\textstyle\int _{\mathbb{R}}}{G_{t}}(y)\hspace{0.1667em}dy=1$ and assumption A.2.
We now consider the more general increment:
\[\begin{aligned}{}& \mathsf{E}|\omega (t,x)-\omega (s,y){|^{2}}\\ {} & \hspace{1em}\le 2\left(\mathsf{E}|\omega (t,x)-\omega (s,x){|^{2}}+\mathsf{E}|\omega (s,x)-\omega (s,y){|^{2}}\right)\\ {} & \hspace{1em}\le 2L{C_{1}}|t-s{|^{\rho }}+2\mathsf{E}{\left|{\int _{\mathbb{R}}}\left({G_{s}}(x-z)-{G_{s}}(y-z)\right){u_{0}}(z)\hspace{0.1667em}dz\right|^{2}}\\ {} & \hspace{1em}=2L{C_{1}}|t-s{|^{\rho }}+2\mathsf{E}{\left|{\int _{\mathbb{R}}}{G_{s}}(z)\left({u_{0}}(x-z)-{u_{0}}(y-z)\right)\hspace{0.1667em}dz\right|^{2}}\\ {} & \hspace{1em}\le 2L{C_{1}}|t-s{|^{\rho }}+2{L^{2}}|x-y{|^{2\rho }}\le c\left(|t-s{|^{\rho }}+|x-y{|^{2\rho }}\right),\end{aligned}\]
where $c=2L\max ({C_{1}},L)$, and assumption A.2 is used again. Therefore,
\[ \| \omega (t,x)-\omega (s,y){\| _{{L^{2}}}}\le {c_{\omega }}\big(|t-s{|^{\frac{\rho }{2}}}+|x-y{|^{\rho }}\big),\hspace{1em}{c_{\omega }}={(2L\max ({C_{1}},L))^{1/2}},\]
which gives (30).
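The Gaussian moment identity used above, ${\textstyle\int _{\mathbb{R}}}{G_{h}}(y)|y{|^{2\rho }}\hspace{0.1667em}dy={C_{1}}{h^{\rho }}$ with ${C_{1}}=\frac{{4^{\rho }}}{\sqrt{\pi }}\Gamma (\rho +\frac{1}{2})$, can be verified numerically; a small sketch (with hypothetical values of ρ and h):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def G(h, y):
    """Heat kernel G_h(y) = exp(-y^2/(4h)) / sqrt(4*pi*h)."""
    return np.exp(-y ** 2 / (4.0 * h)) / np.sqrt(4.0 * np.pi * h)

rho, h = 0.3, 0.7                                       # hypothetical values
numeric, _ = quad(lambda y: G(h, y) * abs(y) ** (2.0 * rho), -np.inf, np.inf)
C1 = 4.0 ** rho / np.sqrt(np.pi) * gamma(rho + 0.5)
print(numeric, C1 * h ** rho)                           # the two values agree
```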
Considering $V(t,x)$, we follow again the similar lines as those in [10] keeping track of constants.
We can write:
\[\begin{aligned}{}& \mathsf{E}{\left|V(t,x)-V(s,y)\right|^{2}}\\ {} & \hspace{1em}=\mathsf{E}{\left|{\int _{0}^{t}}{\int _{\mathbb{R}}}{G_{t-\theta }}(x-\eta )W(d\theta ,d\eta )-{\int _{0}^{s}}{\int _{\mathbb{R}}}{G_{s-\theta }}(y-\eta )W(d\theta ,d\eta )\right|^{2}}\\ {} & \hspace{1em}=\mathsf{E}\left|{\int _{0}^{t}}{\int _{\mathbb{R}}}{G_{t-\theta }}(x-\eta )W(d\theta ,d\eta )\right.\\ {} & \hspace{2em}+{\int _{0}^{s}}{\int _{\mathbb{R}}}\left({G_{t-\theta }}(x-\eta )-{G_{s-\theta }}(x-\eta )\right)W(d\theta ,d\eta )\\ {} & \hspace{2em}+{\left.{\int _{0}^{s}}{\int _{\mathbb{R}}}\left({G_{s-\theta }}(x-\eta )-{G_{s-\theta }}(y-\eta )\right)W(d\theta ,d\eta )\right|^{2}}\\ {} & \hspace{1em}\le 3{C_{H}}\left({\int _{0}^{t-s}}{\int _{\mathbb{R}}}{\left|F{G_{\theta }}(\xi )\right|^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \right.\\ {} & \hspace{2em}+{\int _{0}^{s}}{\int _{\mathbb{R}}}{\left|F{G_{t-s+\theta }}(\xi )-F{G_{\theta }}(\xi )\right|^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \\ {} & \hspace{2em}+\left.{\int _{0}^{s}}{\int _{\mathbb{R}}}\left(1-\cos (\xi (x-y))\right)|F{G_{\theta }}(\xi ){|^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \right)\\ {} & \hspace{1em}=3{C_{H}}({I_{1}}+{I_{2}}+{I_{3}}),\end{aligned}\]
where by F we mean the Fourier transform with respect to the space variable, and we use the fact that $F{G_{t}}(\xi )=\exp (-t{\xi ^{2}})$ and the isometry relation (27).
We now evaluate the integrals ${I_{1}}$, ${I_{2}}$, ${I_{3}}$.
\[ {\int _{0}^{t}}{\int _{\mathbb{R}}}\exp (-2\theta {\xi ^{2}})|\xi {|^{1-2H}}\hspace{0.1667em}d\xi \hspace{0.1667em}d\theta ={2^{-(H+1)}}\frac{\Gamma (H+1)}{H}{t^{H}}={c_{1,H}}{t^{H}},\]
where ${c_{1,H}}={2^{-(H+1)}}\frac{\Gamma (H+1)}{H}$, so that ${I_{1}}={c_{1,H}}{(t-s)^{H}}$;
\[\begin{aligned}{}{I_{2}}& ={\int _{0}^{s}}{\int _{\mathbb{R}}}{\left|F{G_{t-s+\theta }}(\xi )-F{G_{\theta }}(\xi )\right|^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \\ {} & ={\int _{0}^{s}}{\int _{\mathbb{R}}}\exp (-2\theta {\xi ^{2}}){\left(1-\exp (-(t-s){\xi ^{2}})\right)^{2}}\times |\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \\ {} & ={\int _{\mathbb{R}}}\frac{1-\exp (-2s{\xi ^{2}})}{2|\xi {|^{2}}}{\left(1-\exp (-(t-s){\xi ^{2}})\right)^{2}}|\xi {|^{1-2H}}\hspace{0.1667em}d\xi \\ {} & \le \frac{1}{2}{\int _{\mathbb{R}}}\frac{{\left(1-\exp (-(t-s){\xi ^{2}})\right)^{2}}}{|\xi {|^{2-1+2H}}}\hspace{0.1667em}d\xi \\ {} & ={(t-s)^{H}}\frac{1}{2}{\int _{\mathbb{R}}}\frac{{\left(1-\exp (-{u^{2}})\right)^{2}}}{{u^{1+2H}}}\hspace{0.1667em}du={c_{2,H}}{(t-s)^{H}},\end{aligned}\]
where ${c_{2,H}}=\frac{1}{2}{\textstyle\int _{\mathbb{R}}}\frac{{\left(1-\exp (-{u^{2}})\right)^{2}}}{{u^{1+2H}}}\hspace{0.1667em}du$;
\[\begin{aligned}{}{I_{3}}& ={\int _{0}^{s}}{\int _{\mathbb{R}}}\left(1-\cos (\xi (x-y))\right)\exp (-2\theta {\xi ^{2}})|\xi {|^{1-2H}}\hspace{0.1667em}d\xi d\theta \\ {} & ={\int _{\mathbb{R}}}\frac{1-\exp (-2s{\xi ^{2}})}{2|\xi {|^{2}}}\left(1-\cos (\xi (x-y))\right)|\xi {|^{1-2H}}\hspace{0.1667em}d\xi \\ {} & \le {\int _{\mathbb{R}}}\left(1-\cos (\xi (x-y))\right)\frac{1}{2}|\xi {|^{-1-2H}}\hspace{0.1667em}d\xi ={c_{3,H}}{(x-y)^{2H}},\end{aligned}\]
where
\[ {c_{3,H}}=\frac{1}{2}{\int _{\mathbb{R}}}\frac{1-\cos \xi }{|\xi {|^{1+2H}}}\hspace{0.1667em}d\xi ,\]
which equals ${(2H)^{-1}}\Gamma (1-2H)\cos (H\pi )$ for $H\lt \frac{1}{2}$ and $\frac{\pi }{2}$ for $H=\frac{1}{2}$ (we used here Lemma D.1 from [2]). Therefore,
\[ \mathsf{E}{\left|V(t,x)-V(s,y)\right|^{2}}\le 3{C_{H}}(({c_{1,H}}+{c_{2,H}})|t-s{|^{H}}+{c_{3,H}}|x-y{|^{2H}})\]
and
\[ {\left(\mathsf{E}{\left|V(t,x)-V(s,y)\right|^{2}}\right)^{\frac{1}{2}}}\le {c_{V}}\left(|t-s{|^{\frac{H}{2}}}+|x-y{|^{H}}\right),\]
where ${c_{V}}={(3{C_{H}}\max (({c_{1,H}}+{c_{2,H}}),{c_{3,H}}))^{1/2}}$, which is the bound (31).  □
Remark 3.
To derive further results for the fields ω and V based on the approach of the previous section, it is essential that we deal with space-time random fields having different behavior with respect to the space and time arguments, as reflected, in particular, in the different Hölder regularity indices in space and time in (30), (31). Note that estimates of the norms of increments of random fields (like those in (30), (31)) are important for the study of sample path properties of anisotropic fields and, in particular, of fields related to the stochastic heat equation (see, e.g., [27] and references therein). Here we present the bounds for the distributions of suprema of ω, V in Theorem 8 and the estimate for the rate of growth of V in Theorem 9, which have not been presented in the literature before.
For the next results we will use an additional assumption on the initial condition ${u_{0}}$ and will need the following definition ([15]).
A family Δ of φ-sub-Gaussian random variables is called strictly φ-sub-Gaussian if there exists a constant ${C_{\Delta }}$ (called determining constant) such that for all countable sets I of random variables ${\zeta _{i}}\in \Delta $, $i\in I$, the inequality ${\tau _{\varphi }}\big({\textstyle\sum _{i\in I}}{\lambda _{i}}{\zeta _{i}}\big)\le {C_{\Delta }}{\big(\mathsf{E}{\left({\textstyle\sum _{i\in I}}{\lambda _{i}}{\zeta _{i}}\right)^{2}}\big)^{1/2}}$ holds. A random field $\zeta (t)$, $t\in T$, is called strictly φ-sub-Gaussian if the family of random variables $\{\zeta (t),t\in T\}$ is strictly φ-sub-Gaussian.
Let K be a deterministic kernel and $\eta (x)={\textstyle\int _{Y}}K(x,y)d\xi (y)$, $x\in X$, where $\xi (y)$, $y\in Y$, is a strictly φ-sub-Gaussian random field and the integral is defined in the mean-square sense. Then $\eta (x)$, $x\in X$, is a strictly φ-sub-Gaussian random field with the same determining constant (see [15]).
Theorem 8.
Let assumptions A.1 and A.2 hold, ${u_{0}}(x)$ satisfy assumption A.3 with $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $\alpha \in (1,2]$, and let ${c_{\varphi }}$ be the determining constant. Then for fields $\omega (t,x)$ and $V(t,x)$, $(t,x)\in {D_{ab}}=[{a_{1}},{b_{1}}]\times [{a_{2}},{b_{2}}]$, the following estimates hold:
-
(i)
(33)
\[ \mathsf{P}\left\{\underset{(t,x)\in {D_{ab}}}{\sup }|\omega (t,x)|\gt u\right\}\le 2\exp \left\{-\frac{1}{\beta }{\left(\frac{u(1-\theta )}{{c_{0}}{c_{\varphi }}}-2{(\theta {c_{0}}{c_{\varphi }})^{-\frac{1}{\beta }}}{\widetilde{c}_{1}}\right)^{\beta }}\right\}\]for $u\gt \frac{2}{(1-\theta )\theta }{(\theta {c_{0}}{c_{\varphi }})^{1-\frac{1}{\beta }}}{\widetilde{c}_{1}}$, where ${\widetilde{c}_{1}}=\frac{{2^{\frac{1}{\beta }}}{({c_{\varphi }}{c_{\omega }})^{\frac{1}{\beta }}}}{1-\frac{1}{\beta }}\Big(\frac{2}{\rho }{\big(\frac{{T_{1}}}{2}\big)^{\frac{\rho }{2\beta }}}+\frac{1}{\rho }{\big(\frac{{T_{2}}}{2}\big)^{\frac{\rho }{\beta }}}\Big)$, ${T_{i}}={b_{i}}-{a_{i}}$, $i=1,2$;
 -
(ii)\[ \mathsf{P}\left\{\underset{(t,x)\in {D_{ab}}}{\sup }|V(t,x)|\gt u\right\}\le 2\exp \left\{-\frac{1}{2}{\left(\frac{u(1-\theta )}{{\varepsilon _{V}}}-2{(\theta {\varepsilon _{V}})^{-\frac{1}{2}}}{\widetilde{\widetilde{c}}_{1}}\right)^{2}}\right\}\]for $u\gt \frac{2}{(1-\theta )\theta }{(\theta {\varepsilon _{V}})^{\frac{1}{2}}}{\widetilde{\widetilde{c}}_{1}}$, where ${\varepsilon _{V}}=A(H){b_{1}^{H/2}}$,${\widetilde{\widetilde{c}}_{1}}=2\sqrt{2{c_{V}}}\left(\frac{2}{H}{\big(\frac{{T_{1}}}{2}\big)^{\frac{H}{4}}}+\frac{1}{H}{\big(\frac{{T_{2}}}{2}\big)^{\frac{H}{2}}}\right)$, ${T_{i}}={b_{i}}-{a_{i}}$, $i=1,2$.
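The bounds of Theorem 8 are fully explicit once the constants are fixed. The sketch below evaluates the bound in (ii) for V as a function of u; the numerical values of $A(H)$, ${c_{V}}$, the rectangle ${D_{ab}}$ and θ are hypothetical and serve only to show how the estimate is used (in applications $A(H)$ and ${c_{V}}$ come from Theorem 7).

```python
import numpy as np

# Hypothetical inputs.
H = 0.4
A_H, c_V = 1.0, 1.0            # stand for A(H) and c_V from Theorem 7
a1, b1, a2, b2 = 0.0, 1.0, 0.0, 1.0
T1, T2 = b1 - a1, b2 - a2
theta = 0.5

eps_V = A_H * b1 ** (H / 2.0)
c_tilde = 2.0 * np.sqrt(2.0 * c_V) * (2.0 / H * (T1 / 2.0) ** (H / 4.0)
                                      + 1.0 / H * (T2 / 2.0) ** (H / 2.0))
u_min = 2.0 / ((1.0 - theta) * theta) * (theta * eps_V) ** 0.5 * c_tilde

def sup_tail_bound_V(u):
    """Right-hand side of the bound (ii) of Theorem 8, valid for u > u_min."""
    return 2.0 * np.exp(-0.5 * (u * (1.0 - theta) / eps_V
                                - 2.0 * (theta * eps_V) ** (-0.5) * c_tilde) ** 2)

for u in [u_min + 1.0, u_min + 5.0, u_min + 10.0]:
    print(u, sup_tail_bound_V(u))
```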
Remark 4.
Under the assumption of Gaussianity of the initial condition ${u_{0}}$ we can present a similar result for the original solution $u(t,x)$ in the following form:
\[ \mathsf{P}\left\{\underset{(t,x)\in {D_{ab}}}{\sup }|u(t,x)|\gt y\right\}\le 2\exp \left\{-\frac{1}{2}{\left(\frac{y(1-\theta )}{{\varepsilon _{u}}}-2{(\theta {\varepsilon _{u}})^{-\frac{1}{2}}}{\hat{c}_{1}}\right)^{2}}\right\}\]
for $y\gt \frac{2}{(1-\theta )\theta }{(\theta {\varepsilon _{u}})^{\frac{1}{2}}}{\hat{c}_{1}}$, where ${\varepsilon _{u}}={c_{0}}+A(H){b_{1}^{H/2}}$, ${\hat{c}_{1}}=2\sqrt{2\max \{{c_{\omega }},{c_{V}}\}}\left(\frac{2}{{H^{\prime }}}{\big(\frac{{T_{1}}}{2}\big)^{\frac{{H^{\prime }}}{4}}}+\frac{1}{{H^{\prime }}}{\big(\frac{{T_{2}}}{2}\big)^{\frac{{H^{\prime }}}{2}}}\right)$, ${H^{\prime }}=H\wedge \rho $, ${T_{i}}={b_{i}}-{a_{i}}$, $i=1,2$. Otherwise, combining the two parts would lead to a rather complicated expression for the bound within this approach.
The next theorem presents a power upper bound for the asymptotic growth of the trajectories of the field $V(t,x)$, $(t,x)\in D=[0,+\infty )\times [-A,A]$.
Theorem 9.
Let assumptions A.1 and A.2 hold. Then for any $p\gt 1$ there exists a random variable $\xi (p)$ such that for any $(t,x)\in D$, with probability one,
\[ |V(t,x)|\le \xi (p)\big({t^{\frac{H}{2}}}|\log t{|^{p}}\vee 1\big),\]
where $\xi (p)$ satisfies an estimate of the form (21) with $\gamma =1$, $\beta =2$ and some constants $\widetilde{C}$ and $\widetilde{S}$.
Proof.
We apply Theorem 6 and Theorem 5. Condition (i) holds since $V(t,x)$ is continuous, condition (ii) holds in view of (29).
Consider conditions (iii) and (iv). Let $f(t)=({t^{\frac{H}{2}}}|\log t{|^{p}})\vee 1$ for $t\gt 0$ and some $p\gt 1$. Let us choose ${b_{k}}={e^{k}}$, $k\ge 0$.
Then ${f_{k}}={b_{k}^{\frac{H}{2}}}{(\log {b_{k}})^{p}}={e^{k\frac{H}{2}}}{(\log {e^{k}})^{p}}={e^{k\frac{H}{2}}}{k^{p}}$ for $k\ge 1$, and we have
\[ \widetilde{C}=A(H){\sum \limits_{k=0}^{\infty }}\frac{{b_{k+1}^{\frac{H}{2}}}}{{f_{k}}}\lt \infty ,\]
since for $k\ge 1$ the general term of the series equals ${e^{\frac{H}{2}}}{k^{-p}}$ with $p\gt 1$.
Consider
\[ \widetilde{S}={(A(H))^{\frac{1}{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{b_{k+1}^{\frac{H}{4}}}{c_{1}}(k)}{{f_{k}}}={(A(H))^{\frac{1}{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{e^{(k+1)\frac{H}{4}}}{c_{1}}(k)}{{e^{k\frac{H}{2}}}{k^{p}}},\]
where ${c_{1}}(k)=\frac{2}{H}{e^{k\frac{H}{4}}}{(\frac{e-1}{2})^{\frac{H}{4}}}+{A^{\frac{H}{2}}}$, and we can see that $\widetilde{S}\lt \infty $. □
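The convergence of the series $\widetilde{C}$ and $\widetilde{S}$ for the choice $f(t)=({t^{H/2}}|\log t{|^{p}})\vee 1$ and ${b_{k}}={e^{k}}$ can also be checked numerically; in the sketch below (the values of H, p, A and of the constant standing for $A(H)$ are hypothetical) the partial sums stabilize quickly.

```python
import numpy as np

# Hypothetical parameters: H is the noise index, p > 1, A is the spatial half-width.
H, p, A = 0.4, 1.5, 1.0
A_H = 1.0                      # stands for A(H) from Theorem 7

def b(k):
    return np.exp(k)

def f(t):
    return max(t ** (H / 2.0) * abs(np.log(t)) ** p, 1.0)

def c1(k):
    """Blockwise constant c_1(k) for l_k = b_{k+1} - b_k (cf. the proof of Theorem 9)."""
    return 2.0 / H * np.exp(k * H / 4.0) * ((np.e - 1.0) / 2.0) ** (H / 4.0) + A ** (H / 2.0)

K = 200
C_tilde = A_H * sum(b(k + 1) ** (H / 2.0) / f(b(k)) for k in range(K))
S_tilde = A_H ** 0.5 * sum(b(k + 1) ** (H / 4.0) * c1(k) / f(b(k)) for k in range(K))
print(C_tilde, S_tilde)        # partial sums; they change little if K is increased
```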
We consider now an assumption on the initial condition which can be used instead of assumption A.2.
A.2′.
The process ${u_{0}}(x),x\in \mathbb{R}$, is a real, measurable, mean-square continuous stationary stochastic process.
Let $B(x),x\in \mathbb{R}$, be the covariance function of the process ${u_{0}}(x),x\in \mathbb{R}$, with the representation
(34)
\[ B(x)={\int _{\mathbb{R}}}{e^{i\lambda x}}F(d\lambda ),\]
where $F(\lambda )$ is the spectral measure, and for the process itself we can write the spectral representation
(35)
\[ {u_{0}}(x)={\int _{\mathbb{R}}}{e^{i\lambda x}}Z(d\lambda ).\]
The stochastic integral is understood in the ${L_{2}}(\Omega )$ sense. The orthogonal random measure Z is such that $\mathsf{E}|Z(d\lambda ){|^{2}}=F(d\lambda )$.
Then the field ω can be written in the form
(36)
\[ \omega (t,x)={\int _{\mathbb{R}}}\exp \Big\{i\lambda x-\mu t{\lambda ^{2}}\Big\}Z(d\lambda )\]
and its covariance function has the representation (see [11])
(37)
\[ \mathsf{E}\omega (t,x)\overline{\omega (s,y)}={\int _{\mathbb{R}}}\exp \big\{i\lambda (x-y)\big\}\exp \big\{-\mu {\lambda ^{2}}(t+s)\big\}F(d\lambda ).\]
Theorem 10.
Let assumption A.3 hold. Then
(38)
\[ \| \omega (t,x){\| _{{L^{2}}}}\le {\Big({\int _{\mathbb{R}}}F(d\lambda )\Big)^{\frac{1}{2}}},\]
and if for some $\varepsilon \in (0,\frac{1}{2}]$
(39)
\[ {c^{2}}(\varepsilon ):={\int _{\mathbb{R}}}|\lambda {|^{2\varepsilon }}F(d\lambda )\lt \infty ,\]
then the following estimate holds:
(40)
\[ \mathsf{E}{\big(\omega (t,x)-\omega (s,y)\big)^{2}}\le {c^{2}}(\varepsilon )\big({4^{1-\varepsilon }}|x-y{|^{2\varepsilon }}+|t-s{|^{\varepsilon }}\big).\]
Proof.
We have
\[ \mathsf{E}{\big(\omega (t,x)-\omega (s,y)\big)^{2}}={\int _{\mathbb{R}}}|b(\lambda ){|^{2}}F(d\lambda ),\]
where
\[ b(\lambda )=\exp \{i\lambda x\}\exp \{-\mu {\lambda ^{2}}t\}-\exp \{i\lambda y\}\exp \{-\mu {\lambda ^{2}}s\},\]
and we can estimate
\[\begin{aligned}{}|b(\lambda ){|^{2}}& \le \Big(1-\exp {\Big\{-\mu {\lambda ^{2}}|t-s|\Big\}\Big)^{2}}+4{\sin ^{2}}\Big(\frac{1}{2}\lambda (x-y)\Big)\\ {} & \le {\Big(\min ({\lambda ^{2}}|t-s|,1)\Big)^{2}}+4\min {\Big(\frac{1}{2}|\lambda ||x-y|,1\Big)^{2}}\\ {} & \le {\Big({\lambda ^{2}}|t-s|\Big)^{2{\varepsilon _{1}}}}+4{\Big(\frac{1}{2}|\lambda ||x-y|\Big)^{2{\varepsilon _{2}}}}\end{aligned}\]
for any ${\varepsilon _{1}},{\varepsilon _{2}}\in (0,1]$. Let us choose $\varepsilon :=2{\varepsilon _{1}}={\varepsilon _{2}}$, $\varepsilon \in (0,1/2]$, and suppose that ${\textstyle\int _{\mathbb{R}}}{\lambda ^{2\varepsilon }}F(d\lambda )\lt \infty $. Then we can write the bound
\[ {\int _{\mathbb{R}}}|b(\lambda ){|^{2}}F(d\lambda )\le \Big({\int _{\mathbb{R}}}{\lambda ^{2\varepsilon }}F(d\lambda )\Big)\big({4^{1-\varepsilon }}|x-y{|^{2\varepsilon }}+|t-s{|^{\varepsilon }}\big),\]
which implies (40). The estimate (38) follows from (37). □
In view of Theorem 10, under assumption A.3 and assuming ${u_{0}}$ to be strictly φ-sub-Gaussian, we can write an estimate for the tail distribution of the supremum of $\omega (t,x)$ which is analogous to (33), where the constants ${c_{0}}$ and ${c_{\omega }}$ will now come from (38), (39).
In the example below we present a process which can be used as the initial condition and for which (39) is satisfied.
Example 1.
Let the initial condition ${u_{0}}(x)$, $x\in \mathbb{R}$, be a Gaussian stationary process with the spectral density
(41)
\[ f(\lambda )=\frac{{\sigma ^{2}}}{{(1+{\lambda ^{2}})^{2\alpha }}},\hspace{1em}\lambda \in \mathbb{R}.\]
The corresponding covariance function is of the form
(42)
\[ {B_{\eta }}(x)=\frac{{\sigma ^{2}}}{\sqrt{\pi }\Gamma (2\alpha )}{\Big(\frac{|x|}{2}\Big)^{2\alpha -1/2}}{K_{2\alpha -1/2}}(|x|),\hspace{1em}x\in \mathbb{R},\]
where ${K_{\nu }}$ is the modified Bessel function of the second kind; in particular, ${K_{1/2}}(x)=\sqrt{\frac{\pi }{2x}}{e^{-x}}$. Covariances (42) constitute the so-called Matérn class, and the parameter $\nu =2\alpha -1/2\gt 0$ controls the level of smoothness of the stochastic process.
Note that a Gaussian stochastic process with the above covariance and spectral density can be obtained as a solution to the fractional partial differential equation
\[ {\Big(1-\frac{{d^{2}}}{d{x^{2}}}\Big)^{\alpha }}\eta (x)=w(x),\hspace{1em}x\in \mathbb{R},\]
with w being a white noise: $\mathsf{E}w(x)=0$ and $\mathsf{E}w(x)w(y)={\sigma ^{2}}\delta (x-y)$ (see, e.g., [7, Thm. 3.1]).
The Matérn model is popular in spatial statistics and in modeling random fields (with a corresponding adjustment of (42) for the n-dimensional case). The relation between the spatial Matérn covariance model and the stochastic partial differential equation ${(\mu -\Delta )^{\alpha }}\eta (x)=w(x)$, $x\in {\mathbb{R}^{n}}$, was established by Whittle in 1963 and has since been widely used in various applied and theoretical contexts.
For the stationary initial condition ${u_{0}}$ with spectral density (41) the condition (39) holds and we are able to calculate the constant $c(\varepsilon )$ defined in (39). We have
\[ {\int _{\mathbb{R}}}\frac{{\lambda ^{2\varepsilon }}}{{(1+{\lambda ^{2}})^{2\alpha }}}\hspace{0.1667em}d\lambda ={\int _{0}^{\infty }}\frac{{t^{\varepsilon +1/2-1}}}{{(1+t)^{\varepsilon +1/2+2\alpha -\varepsilon -1/2}}}\hspace{0.1667em}dt=\mathcal{B}(\varepsilon +1/2,2\alpha -\varepsilon -1/2),\]
where $\mathcal{B}$ is the Beta function, $\varepsilon \in (0,1/2]$, $2\alpha -\varepsilon -1/2\gt 0$, and the formula ${\textstyle\int _{0}^{\infty }}\frac{{t^{\mu -1}}}{{(1+t)^{\mu +\nu }}}\hspace{0.1667em}dt=\mathcal{B}(\mu ,\nu )$ is used.
Therefore, in this case we obtain ${c^{2}}(\varepsilon )={\sigma ^{2}}\mathcal{B}(\varepsilon +1/2,2\alpha -\varepsilon -1/2)$. In particular, having in (41) $\alpha \gt 1$ and choosing $\varepsilon =1/2$, we get ${c^{2}}(1/2)=\frac{{\sigma ^{2}}}{2\alpha -1}$.
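The computation of $c(\varepsilon )$ for the Matérn spectral density is easy to confirm numerically; the sketch below (with hypothetical values of σ, α and the choice $\varepsilon =1/2$) compares the quadrature of (39) with the Beta-function expression and with the value ${c^{2}}(1/2)={\sigma ^{2}}/(2\alpha -1)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

sigma, alpha_m, eps = 1.0, 1.5, 0.5    # hypothetical; alpha_m is the Matern parameter alpha from (41)

numeric, _ = quad(lambda lam: abs(lam) ** (2.0 * eps) * sigma ** 2 / (1.0 + lam ** 2) ** (2.0 * alpha_m),
                  -np.inf, np.inf)
closed_form = sigma ** 2 * beta_fn(eps + 0.5, 2.0 * alpha_m - eps - 0.5)
print(numeric, closed_form)                       # c^2(eps) by quadrature and in closed form
print(sigma ** 2 / (2.0 * alpha_m - 1.0))         # for eps = 1/2 this equals sigma^2/(2*alpha_m - 1)
```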