1 Introduction
This paper is devoted to the investigation of important classes of exponential type Orlicz spaces of random variables, namely, φ-sub-Gaussian random variables. Such spaces of random variables and the corresponding stochastic processes provide generalizations of Gaussian and sub-Gaussian random variables and processes and are important for various applications.
The main theory for the spaces of φ-sub-Gaussian random variables and stochastic processes was elaborated and presented in [4, 5, 15, 29] and has seen numerous further developments in the recent literature.
Recall that a random variable ξ is sub-Gaussian if its moment generating function (Laplace transform) is majorized by that of a Gaussian centered random variable $\eta \sim N(0,{\sigma ^{2}})$, that is, $\mathbb{E}\exp (\lambda \xi )\le \mathbb{E}\exp (\lambda \eta )=\exp ({\sigma ^{2}}{\lambda ^{2}}/2)$.
The generalization of this notion to the classes of φ-sub-Gaussian random variables is introduced as follows (see [4, Ch. 2]).
Definition 1 ([4, 15]).
A continuous even convex function φ is called an Orlicz N-function if $\varphi (0)=0$, $\varphi (x)>0$ for $x\ne 0$, and ${\lim \nolimits_{x\to 0}}\frac{\varphi (x)}{x}=0$, ${\lim \nolimits_{x\to \infty }}\frac{\varphi (x)}{x}=\infty $.
Condition Q.
Let φ be an N-function which satisfies ${\liminf _{x\to 0}}\frac{\varphi (x)}{{x^{2}}}=c>0$, where the case $c=\infty $ is possible.
Definition 2 ([5, 15]).
Let φ be an N-function satisfying condition Q and $\{\Omega ,L,\mathsf{P}\}$ be a standard probability space. The random variable ζ is φ-sub-Gaussian, or belongs to the space ${\mathrm{Sub}_{\varphi }}(\Omega )$, if $\mathbb{E}\zeta =0$, $\mathbb{E}\exp \{\lambda \zeta \}$ exists for all $\lambda \in \mathbb{R}$ and there exists a constant $a>0$ such that the following inequality holds for all $\lambda \in \mathbb{R}$
The random process $\zeta =\{\zeta (t),t\in T\}$ is called φ-sub-Gaussian if the random variables $\{\zeta (t),t\in T\}$ are φ-sub-Gaussian.
The space ${\mathrm{Sub}_{\varphi }}(\Omega )$ is a Banach space with respect to the norm (see [5, 15]):
The Young–Fenchel transform is also known as the Legendre (or Legendre–Fenchel) transform, especially in convex analysis and in the theory of large deviations.
The function ${\varphi ^{\ast }}$ plays an important role in the theory of φ-sub-Gaussian random variables and processes.
It is known that one can estimate the ‘tail’ distribution of a centered random variable by using the convex conjugate of its cumulant generating function.
For a φ-sub-Gaussian random variable ξ the following important estimate for its ‘tail’ probability holds ([15]):
thus, the estimate can be written in terms of the convex conjugate of the function φ.
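For instance, for $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1<\alpha \le 2$ (the example used in Remark 2 below), a direct computation of the Young–Fenchel transform gives
\[ {\varphi ^{\ast }}(x)=\underset{y\in \mathbb{R}}{\sup }\big(xy-\varphi (y)\big)=\frac{|x{|^{\gamma }}}{\gamma },\hspace{1em}\frac{1}{\alpha }+\frac{1}{\gamma }=1,\]
so that a tail bound expressed through ${\varphi ^{\ast }}$ decays as $\exp \{-c|x{|^{\gamma }}\}$; for $\alpha =\gamma =2$, that is, $\varphi (x)={x^{2}}/2$, one recovers the classical sub-Gaussian tail behavior.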
The property of φ-sub-Gaussianity for stochastic processes makes it possible to evaluate the behavior of their suprema, to derive estimates for various functionals of such processes, and to study their sample path properties.
The classical monograph [4] contains a detailed account of conditions for φ-sub-Gaussian processes $X(t)$, $t\in T$, to have bounded and continuous sample paths, and also presents estimates for the moduli of continuity and the distribution of the supremum of such processes. These results are derived by the entropy approach, which is based on evaluating entropy characteristics of the parameter set T with respect to a particular metric generated by the process X. The conditions needed to state the results are formulated in terms of the so-called entropy integrals.
The origins of entropy methods can be traced back to the paper [7], where sufficient conditions for the boundedness of Gaussian processes were stated in terms of the corresponding entropy integrals. These ideas were further extended in [9, 21] using majorizing measures methods. A thorough presentation of results on sample path properties obtained via the entropy approach for Gaussian and related processes can be found, for example, in the classical monographs [8, 22, 27, 28], among others.
The questions of applicability of entropy-based methods to general classes of processes from Orlicz spaces were treated in the monograph [4]. The power of entropy methods was combined there with the fine structure of Orlicz spaces, which are Banach spaces with special properties. In this way, substantial progress was achieved in studying sample path properties of stochastic processes more general than Gaussian ones, and a strong and attractive theory was developed, which is of great importance for various applications.
One possible application of this theory is the study of partial differential equations with random factors. An important practical demand in this field is to relate the behavior of solutions to the corresponding initial conditions.
Partial differential equations with random initial conditions have been intensively studied, especially starting from the papers by J. Kampé de Feriet [10] and M. Rosenblatt [26], who introduced rigorous probabilistic tools in this area. In particular, the paper [26] considers the heat equation with stationary initial conditions and presents the spectral representation of the spatially stationary solutions in the form of stochastic integrals.
Much attention in the literature has been devoted to the study of rescaling procedures for partial differential equations with random data. The limiting behavior of rescaled solutions has been investigated for the heat, fractional heat, Burgers and some other equations with Gaussian and non-Gaussian initial conditions possessing weak or strong dependence. In particular, the Gaussian and non-Gaussian limiting distributions for heat equations with singular data are presented in [1, 23], which can be considered as the starting point for numerous further studies in this area.
In another series of papers, solutions to partial differential equations subject to random initial conditions were investigated by means of Fourier methods; representations of solutions by uniformly convergent series and their approximations in different functional spaces were developed (see, for example, [11, 19, 20] among many others).
We cite some publications most closely related to our study. In [13, 14] heat equations with sub-Gaussian stationary initial conditions were studied, and exponential bounds for the distribution of the supremum of the solution were presented. Higher-order heat-type equations with φ-sub-Gaussian harmonizable initial conditions were investigated in [2, 16, 17].
In this paper our interest is focused on the further investigation of sample path properties of φ-sub-Gaussian processes related to the heat equation with a random initial condition.
The structure of the paper is as follows. Section 2 contains some definitions and preliminary results which will be used further in the paper. In Section 3 we treat the questions of continuity and the rate of growth of trajectories of φ-sub-Gaussian processes. The conditions are stated in terms of a particular entropy integral. The results of Sections 2 and 3 are then used in Section 4 to investigate the sample path properties of a particular class of φ-sub-Gaussian processes related to the random heat equation. We derive estimates for the distribution of suprema of such processes and evaluate their rate of growth. Section 5 outlines some directions for future studies.
We can compare our study with some closely related publications on this topic. The results presented in Section 3 extend and generalize the results on asymptotic bounds, holding with probability 1, for the rate of growth of Gaussian and φ-sub-Gaussian processes stated in [6, 12] and [20], respectively. The questions of continuity of sample paths are treated in a similar way as in [29], but based on different conditions. Within a similar approach, estimates for the distribution of increments were derived in [18] for square-Gaussian processes. The results obtained in Section 4 concerning the distribution of the supremum for processes related to the heat equation with φ-sub-Gaussian initial conditions generalize and extend the results of [13, 14], where the cases of Gaussian and sub-Gaussian initial conditions were considered. In [16, 17] similar results were obtained for higher-order heat-type equations, but under conditions stated in terms of a different entropy integral (see also [16] for more references on the theory of φ-sub-Gaussian processes and additional references on partial differential equations with random initial conditions).
2 Preliminaries
In this section we present some results for φ-sub-Gaussian processes which will be used to obtain the results in Section 3.
Let $(\mathbf{T},\rho )$ be a metric (pseudometric) space and $X=\{X(t),t\in \mathbf{T}\}$ be a φ-sub-Gaussian process. Introduce the following conditions.
Let $N(u)={N_{\mathbf{T}}}(u)$, $u>0$, be the metric massiveness of the space $(\mathbf{T},\rho )$, that is, $N(u)$ is the number of elements in the minimal u-covering of $(\mathbf{T},\rho )$.
A.1
${\varepsilon _{0}}={\sup _{t\in \mathbf{T}}}{\tau _{\varphi }}(X(t))<\infty $.
A.2
The space $(\mathbf{T},\rho )$ is separable and the process X is separable on this space.
A.3
Denote
For a function $f(t)$, $t\ge 0$, we will denote by ${f^{(-1)}}(u),u\ge 0$, the inverse function. Denote ${\gamma _{0}}=\sigma ({\sup _{t,s\in \mathbf{T}}}\rho (t,s))$.
(2)
\[ {I_{r}}(\delta )={\int _{0}^{\delta }}r(N({\sigma ^{(-1)}}(u)))\,du,\hspace{0.1667em}\hspace{0.1667em}\delta >0.\]

Theorem 1 below is a variant of the result stated in [4, Theorem 4.4, p. 107] (see also [17, Theorem 2.3]). An analogous result for Gaussian processes is presented in [6].
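As a minimal numerical sketch (illustrative only, not part of the results), the entropy integral (2) can be evaluated directly once the metric massiveness is known; the snippet below assumes the unit square with the max-metric, for which $N(u)$ is the product of the one-dimensional covering numbers, together with $\sigma (h)=c{h^{\beta }}$ and $r(x)={x^{\alpha }}-1$ as in Corollary 2 below.

```python
# Minimal numerical sketch (illustrative only): the entropy integral (2) for
# the rectangle T = [0,1] x [0,1] with the max-metric, sigma(h) = c*h**beta,
# r(x) = x**alpha - 1.  For this T the metric massiveness is
# N(u) = ceil(1/(2u))**2 (product of the one-dimensional covering numbers).
import math
import numpy as np

c, beta, alpha = 1.0, 1.0, 0.25          # assume 0 < alpha < beta/2

sigma_inv = lambda u: (u / c) ** (1.0 / beta)
N         = lambda e: math.ceil(1.0 / (2.0 * e)) ** 2
r         = lambda x: x ** alpha - 1
r_inv     = lambda y: (y + 1) ** (1.0 / alpha)

def I_r(delta, n=20000):
    """Midpoint Riemann sum approximating the integral (2)."""
    u = (np.arange(n) + 0.5) * delta / n
    return float(np.mean([r(N(sigma_inv(ui))) for ui in u]) * delta)

delta = 0.5                              # e.g. delta = theta * eps_0
print("I_r(delta)               =", round(I_r(delta), 4))
print("r^{-1}(I_r(delta)/delta) =", round(r_inv(I_r(delta) / delta), 4))
```

The last printed quantity is the entropy factor ${r^{(-1)}}\big({I_{r}}(\delta )/\delta \big)$ that appears in the bounds of this section.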
Theorem 1.
Let $X=\{X(t),t\in \mathbf{T}\}$ be a φ-sub-Gaussian process and Conditions A.1–A.4 hold. Suppose ${I_{r}}({\gamma _{0}})<\infty $.
Then for all $0<\theta <1$ such that $\theta {\varepsilon _{0}}<{\gamma _{0}}$ and $u>0,\lambda >0$
and
where
(3)
\[ \mathbb{E}\exp \Big\{\lambda \underset{t\in \mathbf{T}}{\sup }|X(t)|\Big\}\le 2Q(\lambda ,\theta ),\]

Corollary 1.
Let us consider a separable metric space $(\mathbf{T},d),\hspace{2.5pt}\mathbf{T}=\{{a_{i}}\le {t_{i}}\le {b_{i}},i=1,2\},\hspace{2.5pt}d(t,s)={\max _{i=1,2}}|{t_{i}}-{s_{i}}|$. Then
For a particular form of σ, by choosing an appropriate function r, the expression ${r^{(-1)}}\Big(\frac{{I_{r}}(\theta {\varepsilon _{0}})}{\theta {\varepsilon _{0}}}\Big)$ can be calculated in the closed form, and we obtain the next two corollaries.
Corollary 2.
Let in Corollary 1 $\sigma (h)={c_{1}}{h^{\beta }}$ with ${c_{1}}>0,0<\beta \le 1$. Let $\varkappa =\max \{{b_{i}}-{a_{i}},i=1,2\}$. Then for any $\theta \in (0,1)$ such that $\theta {\varepsilon _{0}}<\sigma (\varkappa )$ and any $u>0,\lambda >0$
\[\begin{array}{l}\displaystyle \mathbb{E}\exp \Big\{\lambda \underset{t\in \mathbf{T}}{\sup }|X(t)|\Big\}\le 2\exp \Big\{\varphi \Big(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\Big)\Big\}{A_{1}}(\theta {\varepsilon _{0}}),\\ {} \displaystyle P\Big\{\underset{t\in \mathbf{T}}{\sup }|X(t)|\ge u\Big\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{A_{1}}(\theta {\varepsilon _{0}}),\end{array}\]
where
Corollary 3.
Let in Corollary 1 $\sigma (h)={c_{2}}{\Big(\ln ({e^{\beta }}+\frac{1}{h})\Big)^{-\beta }},\beta >1,{c_{2}}>0$. Let $\varkappa =\max \{{b_{i}}-{a_{i}},i=1,2\}$. Then for any $\theta \in (0,1)$ such that $\theta {\varepsilon _{0}}<\sigma (\varkappa )$ and any $u>0,\lambda >0$
\[\begin{array}{l}\displaystyle E\exp \Big\{\lambda \underset{t\in \mathbf{T}}{\sup }|X(t)|\Big\}\le 2\exp \Big\{\varphi \Big(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\Big)\Big\}{A_{2}}(\theta {\varepsilon _{0}}),\\ {} \displaystyle P\Big\{\underset{t\in \mathbf{T}}{\sup }|X(t)|\ge u\Big\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{A_{2}}(\theta {\varepsilon _{0}}),\end{array}\]
where
Proof.
To prove Corollaries 2 and 3 we calculate the expression ${r^{(-1)}}\Big(\frac{{\hat{I}_{r}}(\theta {\varepsilon _{0}})}{\theta {\varepsilon _{0}}}\Big)$ for two different forms of σ, choosing an appropriate function r for each case.
For the case $\sigma (h)={c_{1}}{h^{\beta }}$ with ${c_{1}}>0$, $0<\beta \le 1$, we choose $r(x)={x^{\alpha }}-1$, $0<\alpha <\beta /2$.
We have ${\sigma ^{(-1)}}(h)={\Big(\frac{h}{{c_{1}}}\Big)^{1/\beta }}$ and ${r^{(-1)}}(x)={(x+1)^{1/\alpha }}$.
Denote $\varkappa =\max \{{b_{i}}-{a_{i}},i=1,2\}$.
By using calculations similar to those for Corollary 2.3 in [6], we obtain the estimate
and
Applying then the inequality ${(a+b)^{p}}\le {2^{p-1}}({a^{p}}+{b^{p}});\hspace{2.5pt}p\ge 1$, and choosing $\alpha =\beta /4$ we get
Consider now the case $\sigma (h)=c{(\ln ({e^{\beta }}+\frac{1}{h}))^{-\beta }}$ with $c>0$, $\beta >0$. Let us choose $r(x)=\ln x$.
We have ${\sigma ^{(-1)}}(h)={\Big(\exp \Big\{{\Big(\frac{c}{h}\Big)^{1/\beta }}\Big\}-{e^{\beta }}\Big)^{-1}}$ and ${r^{(-1)}}(x)=\exp \{x\}$.
We can estimate ${\hat{I}_{r}}(\delta )$ as follows. We have
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )=& {\int _{0}^{\delta }}r\Big(\Big(\frac{{b_{1}}-{a_{1}}}{2}\Big(\exp \Big\{{\Big(\frac{c}{u}\Big)^{1/\beta }}\Big\}-{e^{\beta }}\Big)+1\Big)\\ {} & \times \Big(\frac{{b_{2}}-{a_{2}}}{2}\Big(\exp \Big\{{\Big(\frac{c}{u}\Big)^{1/\beta }}\Big\}-{e^{\beta }}\Big)+1\Big)\Big)du.\end{aligned}\]
Let $\beta >\max \Big\{1,\ln \Big(\frac{2}{{b_{1}}-{a_{1}}}\Big),\ln \Big(\frac{2}{{b_{2}}-{a_{2}}}\Big)\Big\}$, then
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& \le {\int _{0}^{\delta }}r\Big(\frac{({b_{1}}-{a_{1}})({b_{2}}-{a_{2}})}{4}\exp \Big\{2{\Big(\frac{c}{u}\Big)^{1/\beta }}\Big\}\Big)du\\ {} & ={\int _{0}^{\delta }}\ln \Big(\frac{({b_{1}}-{a_{1}})({b_{2}}-{a_{2}})}{4}\exp \Big\{2{\Big(\frac{c}{u}\Big)^{1/\beta }}\Big\}\Big)du\\ {} & =\delta \ln \frac{({b_{1}}-{a_{1}})({b_{2}}-{a_{2}})}{4}+{\int _{0}^{\delta }}2{\Big(\frac{c}{u}\Big)^{1/\beta }}du\\ {} & =\delta \ln \frac{({b_{1}}-{a_{1}})({b_{2}}-{a_{2}})}{4}+\frac{2{c^{1/\beta }}{\delta ^{1-1/\beta }}}{1-\frac{1}{\beta }}\le \delta \ln \frac{{\varkappa ^{2}}}{4}+\frac{2{c^{1/\beta }}{\delta ^{1-1/\beta }}}{1-\frac{1}{\beta }}.\end{aligned}\]
Therefore, we can write the estimate
□
We will need some additional definitions and facts on φ-sub-Gaussian variables and processes.
Definition 4 ([11]).
A family Δ of random variables $\zeta \in {\mathrm{Sub}_{\varphi }}(\Omega )$ is called strictly φ-sub-Gaussian if there exists a constant ${C_{\Delta }}$ such that for all countable sets I of random variables ${\zeta _{i}}\in \Delta $, $i\in I$, the following inequality holds:
(7)
\[ {\tau _{\varphi }}\left(\sum \limits_{i\in I}{\lambda _{i}}{\zeta _{i}}\right)\le {C_{\Delta }}{\left(\mathbb{E}{\left(\sum \limits_{i\in I}{\lambda _{i}}{\zeta _{i}}\right)^{2}}\right)^{1/2}}.\]
The constant ${C_{\Delta }}$ is called the determining constant of the family Δ.

The linear closure of a strictly φ-sub-Gaussian family Δ in the space ${L_{2}}(\Omega )$ is strictly φ-sub-Gaussian with the same determining constant ([11]).
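A simple example (given only for illustration): a family of centered jointly Gaussian random variables is strictly sub-Gaussian with determining constant ${C_{\Delta }}=1$ for $\varphi (x)={x^{2}}/2$, since any (convergent) linear combination is again a centered Gaussian random variable and in this case
\[ {\tau _{\varphi }}\left(\sum \limits_{i\in I}{\lambda _{i}}{\zeta _{i}}\right)={\left(\mathbb{E}{\left(\sum \limits_{i\in I}{\lambda _{i}}{\zeta _{i}}\right)^{2}}\right)^{1/2}}.\]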
Definition 5 ([11]).
A random process $\zeta =\{\zeta (t),t\in T\}$ is called strictly φ-sub-Gaussian if the family of random variables $\{\zeta (t),t\in T\}$ is strictly φ-sub-Gaussian with a determining constant ${C_{\zeta }}$.
Let K be a deterministic kernel and suppose that the process $X(t)$, $t\in T$, can be represented in the form $X(t)={\textstyle\int _{T}}K(t,s)\hspace{0.1667em}d\xi (s)$, where $\xi (t)$, $t\in T$, is a strictly φ-sub-Gaussian random process and the integral is defined in the mean-square sense. Then $X(t)$, $t\in T$, is a strictly φ-sub-Gaussian random process with the same determining constant (see [11]).
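Heuristically (a sketch only, under the assumption that the integral is the ${L_{2}}(\Omega )$ limit of its integral sums), this is a consequence of the closure property quoted above:
\[ {\tau _{\varphi }}\big(X(t)\big)={\tau _{\varphi }}\Big(\underset{n\to \infty }{\mathrm{l.i.m.}}{\sum \limits_{j}}K(t,{s_{j}^{(n)}})\big(\xi ({s_{j+1}^{(n)}})-\xi ({s_{j}^{(n)}})\big)\Big)\le {C_{\xi }}{\Big(\mathbb{E}X{(t)^{2}}\Big)^{1/2}},\]
where ${C_{\xi }}$ denotes the determining constant of ξ, since every integral sum belongs to the linear span of the strictly φ-sub-Gaussian family $\{\xi (s),s\in T\}$.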
3 Properties of sample paths of φ-sub-Gaussian processes
Let $(\mathbf{T},\rho )$ be a metric space and $X=\{X(t),t\in \mathbf{T}\}$ be a φ-sub-Gaussian process. In the next two theorems we evaluate the increments of the process X in terms of the integral ${I_{r}}(\cdot )$ given by (2).
Theorem 2.
Let $X=\{X(t),t\in \mathbf{T}\}$ be a φ-sub-Gaussian process, Conditions A.1–A.4 hold and ${I_{r}}({\gamma _{0}})<\infty $. Then for any $p\in (0,1)$ and any $\varepsilon >0,\lambda >0$
\[\begin{aligned}{}& \mathbb{E}\exp \Big\{\lambda \underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|\Big\}\\ {} & \le 2\exp \Big\{(1-p)\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)+p\varphi \Big(\frac{2\lambda \sigma (\varepsilon )}{1-p}\Big)\Big\}{r^{(-1)}}\Big(\frac{{I_{r}}(\sigma (\varepsilon ))}{p\sigma (\varepsilon )}\Big)\\ {} & \le 2\exp \Big\{\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)\Big\}{r^{(-1)}}\Big(\frac{{I_{r}}(\sigma (\varepsilon ))}{p\sigma (\varepsilon )}\Big).\end{aligned}\]
Proof.
The scheme of the proof is analogous to that used in [29], however with some modifications, since the condition of convergence of a different entropy integral is used. Note also that the same method was applied in [18] to prove a similar result for the case of square-Gaussian stochastic processes. In view of this, we present only the main steps.
Let $\alpha =\sigma ({\inf _{s\in \mathbf{T}}}{\sup _{t\in \mathbf{T}}}\rho (t,s))$, ${\varepsilon _{k}}={\sigma ^{(-1)}}(\alpha {p^{k}}),k=1,2,\dots \hspace{0.1667em}$.
Consider a minimal ${\varepsilon _{k}}$-covering of T, formed by closed balls of the radius ${\varepsilon _{k}}$ and denote by ${V_{{\varepsilon _{k}}}}$ the set of centers of these balls. The number of points in ${V_{{\varepsilon _{k}}}}$ is equal to ${N_{T}}({\varepsilon _{k}})$.
For the points $t,s\in \mathbf{T}$ such that $\rho (t,s)<\varepsilon $, we choose k: ${\varepsilon _{k}}<\varepsilon <{\varepsilon _{k-1}}$, that is $\sigma ({\varepsilon _{k}})<\sigma (\varepsilon )<\sigma ({\varepsilon _{k-1}})$ or, in other words, $\alpha {p^{k}}<\sigma (\varepsilon )<\alpha {p^{k-1}}$.
Denote ${V_{k}}={\cup _{j=k}^{\infty }}{V_{{\varepsilon _{j}}}}$. The set ${V_{k}}$ is a ρ-separability set for the process X and ${\sup _{t,s\in \mathbf{T}}}|X(t)-X(s)|={\sup _{t,s\in {V_{k}}}}|X(t)-X(s)|$.
Define functions ${\alpha _{n}}$: ${V_{k}}\to {V_{{\varepsilon _{n}}}}$, $n\ge 0$, as ${\alpha _{n}}(t)=t$ if $t\in {V_{{\varepsilon _{n}}}}$, and if $t\notin {V_{{\varepsilon _{n}}}}$, then ${\alpha _{n}}(t)$ is a point of ${V_{{\varepsilon _{n}}}}$ such that $\rho (t,{\alpha _{n}}(t))<{\varepsilon _{n}}$. If there is more than one such point, we choose any of them (see [29], p. 74).
Note that the family of maps $\{{\alpha _{n}},n\ge 0\}$ is called the α-procedure for choosing points in ${V_{k}}$ (see [4, Definition 2.5, p. 94]).
We will use the following inequality from [29] (see Formula (2.76)):
\[\begin{aligned}{}\underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|\le & 2{\sum \limits_{l=k}^{\infty }}\underset{\substack{\hspace{0.1667em}\hspace{0.1667em}\hspace{0.1667em}u\in {V_{{\varepsilon _{l+1}}}}}}{\max }|X(u)-X({\alpha _{l}}(u))|\\ {} & +\underset{\substack{u,v\in {V_{{\varepsilon _{k}}}};\\ {} {\tau _{\varphi }}(X(u)-X(v))\le \sigma (\varepsilon )\frac{3-p}{1-p}}}{\max }|X(u)-X(v)|.\end{aligned}\]
(We omit the details here, but note that the derivation of a similar formula for the case of square-Gaussian processes can also be found in [18], see Formula (10) therein.)

Then for all $\lambda >0$
\[\begin{aligned}{}I& :=\mathbb{E}\exp \Big\{\lambda \underset{\rho (t,s)<\varepsilon }{\sup }\hspace{2.5pt}|X(t)-X(s)|\Big\}\\ {} & \le \mathbb{E}\exp \Big\{\lambda \Big(2{\sum \limits_{l=k}^{\infty }}\underset{\substack{\hspace{0.1667em}\hspace{0.1667em}\hspace{0.1667em}u\in {V_{{\varepsilon _{l+1}}}}}}{\max }|X(u)-X({\alpha _{l}}(u))|\\ {} & +\underset{\substack{u,v\in {V_{{\varepsilon _{k}}}};\\ {} {\tau _{\varphi }}(X(u)-X(v))\le \sigma (\varepsilon )\frac{3-p}{1-p}}}{\max }|X(u)-X(v)|\Big)\Big\}.\end{aligned}\]
Using the Hölder inequality ([4], p. 220) we can write
\[\begin{aligned}{}I\le & \Big(\mathbb{E}\exp {\Big\{\lambda {q_{k}}\underset{u,v\in {V_{{\varepsilon _{k}}}}}{\max }|X(u)-X(v)|\Big\}\Big)^{\frac{1}{{q_{k}}}}}\\ {} & \times {\prod \limits_{l=k+1}^{\infty }}\Big(\mathbb{E}\exp {\Big\{\lambda {q_{l}}2\underset{u\in {V_{{\varepsilon _{l}}}}}{\max }|X(u)-X({\alpha _{l}}(u))|\Big\}\Big)^{\frac{1}{{q_{l}}}}},\end{aligned}\]
where $\{{q_{l}},l=k,k+1,\dots \hspace{0.1667em}\}$ is a sequence satisfying ${\textstyle\sum _{l=k}^{\infty }}\frac{1}{{q_{l}}}=1$.

We can write the following inequalities:
\[\begin{aligned}{}\mathbb{E}\exp & \Big\{\lambda {q_{l}}2\underset{u\in {V_{{\varepsilon _{l}}}}}{\max }|X(u)-X({\alpha _{l}}(u))|\Big\}\\ {} & \le N({\varepsilon _{l}})\underset{u\in {V_{{\varepsilon _{l}}}}}{\max }\mathbb{E}\exp \Big\{\lambda {q_{l}}2|X(u)-X({\alpha _{l}}(u))|\Big\}\\ {} & \le N({\varepsilon _{l}})\underset{u\in {V_{{\varepsilon _{l}}}}}{\max }\exp \Big\{\varphi \Big(\lambda {q_{l}}2{\tau _{\varphi }}(X(u)-X({\alpha _{l}}(u)))\Big)\Big\}\\ {} & \le N({\varepsilon _{l}})2\exp \Big\{\varphi \Big(\lambda {q_{l}}2\sigma ({\varepsilon _{l}})\Big)\Big\},\end{aligned}\]
where we used the inequality ${\tau _{\varphi }}(X(u)-X({\alpha _{l}}(u)))\le \sigma ({\varepsilon _{l}})$, since for $u\in {V_{{\varepsilon _{l}}}}$ we have $\rho (u,{\alpha _{l}}(u))\le {\varepsilon _{l}}$. For more details on the last chain of inequalities we refer to the book [4] (see, for example, the proof of Theorem 4.1, p. 102 therein). Analogously we obtain
\[\begin{aligned}{}\mathbb{E}\exp & \Big\{\lambda {q_{k}}\underset{u,v\in {V_{{\varepsilon _{k}}}}}{\max }|X(u)-X(v)|\Big\}\\ {} & \le N({\varepsilon _{k}})2\underset{u,v\in {V_{{\varepsilon _{k}}}}}{\max }\exp \Big\{\varphi \Big(\lambda {q_{k}}{\tau _{\varphi }}(X(u)-X(v))\Big)\Big\}\\ {} & \le N({\varepsilon _{k}})2\exp \Big\{\varphi \Big(\lambda {q_{k}}\Big(\sigma (\varepsilon )+2\alpha \frac{{p^{k}}}{1-p}\Big)\Big)\Big\},\end{aligned}\]
where for the last inequality we used the following estimate: for $u,v\in {V_{{\varepsilon _{k}}}}$,
(see Formula (2.74) in [29], or Formula (7) in [18], where a quite similar derivation is given for the case of a square-Gaussian process).

We obtain
\[\begin{aligned}{}I& \le {\Big(N({\varepsilon _{k}})\Big)^{\frac{1}{{q_{k}}}}}{2^{\frac{1}{{q_{k}}}}}\exp \Big\{\frac{1}{{q_{k}}}\varphi \Big(\lambda {q_{k}}\Big(\sigma (\varepsilon )+2\alpha \frac{{p^{k}}}{1-p}\Big)\Big)\Big\}\\ {} & \times {\prod \limits_{l=k+1}^{\infty }}{\Big(N({\varepsilon _{l}})\Big)^{\frac{1}{{q_{l}}}}}{2^{\frac{1}{{q_{l}}}}}\exp \Big\{\frac{1}{{q_{l}}}\varphi \Big(\lambda {q_{l}}2\sigma ({\varepsilon _{l}})\Big)\Big\}\\ {} & =2\exp \Big\{{\sum \limits_{l=k}^{\infty }}\frac{H({\varepsilon _{l}})}{{q_{l}}}+\frac{1}{{q_{k}}}\varphi \Big(\lambda {q_{k}}\Big(\sigma (\varepsilon )+2\alpha \frac{{p^{k}}}{1-p}\Big)\Big)+{\sum \limits_{l=k+1}^{\infty }}\frac{1}{{q_{l}}}\varphi \Big(\lambda {q_{l}}2\sigma ({\varepsilon _{l}})\Big)\Big\}.\end{aligned}\]
Let ${q_{l}}=\frac{1}{{p^{l-k}}(1-p)},l=k,k+1,\dots \hspace{0.1667em}$. Then ${\textstyle\sum _{l=k}^{\infty }}\frac{1}{{q_{l}}}=1,{\textstyle\sum _{l=k+1}^{\infty }}\frac{1}{{q_{l}}}=p$. We have also $\varphi \Big(\lambda {q_{l}}2\sigma ({\varepsilon _{l}})\Big)=\varphi \Big(\frac{2\lambda \alpha {p^{k}}}{1-p}\Big)$, $\varphi \Big(\lambda {q_{k}}(\sigma (\varepsilon )+2\alpha \frac{{p^{k}}}{1-p})\Big)\le \varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)$.
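The last inequality holds since $\alpha {p^{k}}<\sigma (\varepsilon )$, ${q_{k}}=\frac{1}{1-p}$ and φ is nondecreasing on $[0,\infty )$: indeed,
\[ \lambda {q_{k}}\Big(\sigma (\varepsilon )+\frac{2\alpha {p^{k}}}{1-p}\Big)\le \frac{\lambda }{1-p}\Big(\sigma (\varepsilon )+\frac{2\sigma (\varepsilon )}{1-p}\Big)=\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}.\]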
Using convexity of $r({e^{x}})$ we can write:
\[\begin{aligned}{}\exp \Big\{{\sum \limits_{l=k}^{\infty }}\frac{H({\varepsilon _{l}})}{{q_{l}}}\Big\}& =\exp \Big\{{\sum \limits_{l=k}^{\infty }}{p^{l-k}}(1-p)H({\sigma ^{(-1)}}(\alpha {p^{l}}))\Big\}\\ {} & ={r^{(-1)}}\Big(r\Big(\exp \Big\{{\sum \limits_{l=k}^{\infty }}{p^{l-k}}(1-p)H({\sigma ^{(-1)}}(\alpha {p^{l}}))\Big\}\Big)\Big)\\ {} & \le {r^{(-1)}}\Big({\sum \limits_{l=k}^{\infty }}{p^{l-k}}(1-p)r\Big(N({\sigma ^{(-1)}}(\alpha {p^{l}}))\Big)\Big)\\ {} & \le {r^{(-1)}}\Big(\frac{1}{\alpha {p^{k}}}{\int _{0}^{\alpha {p^{k}}}}r\Big(N({\sigma ^{(-1)}}(u))\Big)du\Big)\\ {} & \le {r^{(-1)}}\Big(\frac{1}{p\sigma (\varepsilon )}{\int _{0}^{\sigma (\varepsilon )}}r\Big(N({\sigma ^{(-1)}}(u))\Big)du\Big).\end{aligned}\]
(The omitted intermediate steps can be checked, for example, in [18], see the derivation of Formula (23).) Finally, we obtain:
\[\begin{aligned}{}I\le & 2\exp \Big\{p\varphi \Big(\frac{2\lambda \sigma (\varepsilon )}{1-p}\Big)+(1-p)\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)\Big\}\\ {} & \times {r^{(-1)}}\Big(\frac{1}{p\sigma (\varepsilon )}{\int _{0}^{\sigma (\varepsilon )}}r\Big(N({\sigma ^{(-1)}}(u))\Big)du\Big).\end{aligned}\]
□

Theorem 3.
Let $X=\{X(t),t\in \mathbf{T}\}$ be a φ-sub-Gaussian process, Conditions A.1–A.4 hold and ${I_{r}}({\gamma _{0}})<\infty $. Then for any $p\in (0,1)$ and any $\varepsilon >0$
Proof.
Using Theorem 2 and Chebyshev’s inequality, we obtain for any $\lambda >0$
\[\begin{array}{r}\displaystyle P\Big\{\underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|>x\Big\}\le \mathbb{E}\exp \Big\{\lambda \underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|\Big\}\exp \{-\lambda x\}\\ {} \displaystyle \le 2\exp \Big\{\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)-\lambda x\Big\}{r^{(-1)}}\Big(\frac{1}{p\sigma (\varepsilon )}{\int _{0}^{\sigma (\varepsilon )}}r(N({\sigma ^{(-1)}}(u)))du\Big).\end{array}\]
It remains to note that
\[\begin{aligned}{}\underset{\lambda >0}{\inf }& \Big(-\lambda x+\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)\Big)\\ {} & =-\underset{\lambda >0}{\sup }\Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\frac{x{(1-p)^{2}}}{\sigma (\varepsilon )(3-p)}-\varphi \Big(\frac{\lambda \sigma (\varepsilon )(3-p)}{{(1-p)^{2}}}\Big)\Big)\\ {} & =-{\varphi ^{\ast }}\Big(\frac{x{(1-p)^{2}}}{\sigma (\varepsilon )(3-p)}\Big).\end{aligned}\]
□

Remark 1.
The most general results on the behavior of increments of stochastic processes in Orlicz spaces are presented in [4] (see, e.g., Theorems 5.1, 5.2, pp. 109–112). Specifications of these results for φ-sub-Gaussian processes are given in [29], with conditions stated in terms of the entropy integrals $I(\varepsilon )={\textstyle\int _{0}^{\varepsilon }}\Psi (\ln (v))\hspace{0.1667em}dv,\hspace{0.1667em}\varepsilon >0$, with $\Psi (v)=\frac{v}{{\varphi ^{(-1)}}(v)},v>0$. In Theorems 2, 3 we state the results analogous to Lemma 2.8, Theorem 2.9 ([29]), but under the conditions given in terms of the entropy integral (2). We use the α-procedure technique, which is a usual approach to derive results of this kind (see [4]). Similar results are stated in [18] for increments of square-Gaussian processes, with conditions given in terms of the integral (2).
Remark 2.
Theorem 3 suggests a way to check whether a given φ-sub-Gaussian process is sample continuous. Indeed, under the conditions of Theorem 3, if the right-hand side of Formula (8) tends to 0 as $\varepsilon \to 0$, then $P\Big\{{\sup _{\rho (t,s)<\varepsilon }}|X(t)-X(s)|>x\Big\}\to 0$. Therefore, as $\varepsilon \to 0$, ${\sup _{\rho (t,s)<\varepsilon }}|X(t)-X(s)|\to 0$ in probability, and also (due to the monotonicity of the supremum) with probability 1. This entails the sample continuity of the process $X=\{X(t),t\in \mathbf{T}\}$ with probability 1. Theorem 3 is specified below for particular forms of σ. For example, from Corollary 5 below we can see that for $\sigma (h)=c{h^{\beta }}$ with $c>0$, $0<\beta \le 1$, and $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1<\alpha \le 2$ (in which case ${\varphi ^{\ast }}(x)=\frac{|x{|^{\gamma }}}{\gamma }$, $\gamma \ge 2$ and $\frac{1}{\alpha }+\frac{1}{\gamma }=1$), the right-hand side of Formula (8) does converge to zero. Therefore, in this case one can conclude that the process is sample continuous with probability 1.
Corollary 4.
Let under the conditions of Theorem 3, $\mathbf{T}=\{{a_{i}}\le {t_{i}}\le {b_{i}},i=1,2\},\rho (t,s)={\max _{i=1,2}}|{t_{i}}-{s_{i}}|$. Then
\[\begin{array}{r}\displaystyle P\Big\{\underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|>x\Big\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{x{(1-p)^{2}}}{\sigma (\varepsilon )(3-p)}\Big)\Big\}\\ {} \displaystyle \times {r^{(-1)}}\Big(\frac{1}{p\sigma (\varepsilon )}{\int _{0}^{\sigma (\varepsilon )}}r\Big(\Big(\frac{{b_{1}}-{a_{1}}}{2{\sigma ^{(-1)}}(u)}+1\Big)\Big(\frac{{b_{2}}-{a_{2}}}{2{\sigma ^{(-1)}}(u)}+1\Big)\Big)du\Big).\end{array}\]
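To illustrate the continuity check described in Remark 2, the following small numerical sketch (with purely illustrative parameters, not taken from the paper) evaluates the right-hand side of the last bound for $\sigma (h)=c{h^{\beta }}$, $r(x)={x^{\alpha }}-1$ and ${\varphi ^{\ast }}(x)=\frac{|x{|^{\gamma }}}{\gamma }$, and shows that it vanishes as $\varepsilon \to 0$.

```python
# Numerical sketch of the continuity check of Remark 2, using the bound of
# Corollary 4.  Assumptions (illustrative only): sigma(h) = c*h**beta,
# r(x) = x**alpha - 1, phi*(x) = |x|**gamma/gamma, and the covering bound
# N(u) <= ((b1-a1)/(2u)+1)*((b2-a2)/(2u)+1) for the rectangle T.
import math

a1, b1, a2, b2 = 0.0, 1.0, 0.0, 1.0
c, beta, alpha, gamma = 1.0, 1.0, 0.25, 2.0
p, x = 0.5, 1.0                               # parameter p and level x of the bound

sigma     = lambda h: c * h ** beta
sigma_inv = lambda u: (u / c) ** (1.0 / beta)
r         = lambda y: y ** alpha - 1
r_inv     = lambda y: (y + 1) ** (1.0 / alpha)
phi_star  = lambda y: abs(y) ** gamma / gamma
N_bound   = lambda e: ((b1 - a1) / (2 * e) + 1) * ((b2 - a2) / (2 * e) + 1)

def rhs(eps, n=20000):
    s = sigma(eps)
    # midpoint Riemann sum for the inner entropy integral over (0, s)
    I = sum(r(N_bound(sigma_inv((j + 0.5) * s / n))) for j in range(n)) * s / n
    return 2 * math.exp(-phi_star(x * (1 - p) ** 2 / (s * (3 - p)))) * r_inv(I / (p * s))

for eps in (0.2, 0.1, 0.05, 0.02, 0.01):
    print(f"eps = {eps:5.2f}   bound on P(sup |X(t)-X(s)| > x) = {rhs(eps):.3e}")
```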
Corollary 5.
Corollary 6.
Let under the conditions of Theorem 3 and Corollary 4, $\mathbf{T}=\{t=({t_{1}},{t_{2}}):{a_{i}}\le {t_{i}}\le {b_{i}},i=1,2\}$, $\sigma (h)=c{(\ln ({e^{\beta }}+\frac{1}{h}))^{-\beta }}$ with $c>0$, $\beta >0$. Let $\varkappa =\max \{{b_{i}}-{a_{i}},i=1,2\}$.
Then
\[\begin{aligned}{}P\Big\{\underset{\rho (t,s)<\varepsilon }{\sup }|X(t)-X(s)|>x\Big\}\le & 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{x{(1-p)^{2}}}{c(3-p)}\Big(\ln {\Big({e^{\beta }}+\frac{1}{\varepsilon }\Big)\Big)^{\beta }}\Big)\Big\}\\ {} & \times {\Big(\frac{{\varkappa ^{2}}}{4}\Big)^{1/p}}{\Big({e^{\beta }}+\frac{1}{\varepsilon }\Big)^{\frac{2\beta }{\beta -1}}}.\end{aligned}\]
Now we investigate the rate of growth of φ-sub-Gaussian processes. Namely, we present exponential upper bounds for weighted φ-sub-Gaussian processes $X=\{X(t),t\ge 0\}$ defined on the half-axis.
Let us introduce the following condition.
A.5
Let $a(t)$, $t\ge 0$, be a continuous strictly increasing function such that $a(t)>0$ and $a(t)\to \infty $ as $t\to \infty $. Introduce the sequence ${b_{0}}=0,{b_{k+1}}>{b_{k}},{b_{k}}\to \infty ,k\to \infty $. Denote ${B_{k}}=[{b_{k}},{b_{k+1}}],k=0,1,\dots \hspace{0.1667em}$, ${a_{k}}=a({b_{k}})$, ${\varepsilon _{k}}={\sup _{t\in {B_{k}}}}{\tau _{\varphi }}(X(t))$, and suppose that $0<{\varepsilon _{k}}<\infty $.
Denote ${\gamma _{k}}={\sigma _{k}}({b_{k+1}}-{b_{k}})$, where ${\sigma _{k}}$ are defined below in the condition (i) of Theorem 4, and $\tilde{\theta }={\inf _{k}}\frac{{\gamma _{k}}}{{\varepsilon _{k}}}$.
Theorem 4.
Let $X=\{X(t),t\ge 0\}$ be a φ-sub-Gaussian separable process and Conditions A.4 and A.5 hold.
Proof.
Let ${r_{k}}>0,k=0,1,\dots \hspace{0.1667em}$ and ${\textstyle\sum _{k=0}^{\infty }}\frac{1}{{r_{k}}}=1$. Then for any $\lambda >0$
\[\begin{aligned}{}I(\lambda )=\mathbb{E}\exp \Big\{\lambda \underset{t>0}{\sup }\frac{|X(t)|}{a(t)}\Big\}& \le \mathbb{E}\exp \Big\{\lambda {\sum \limits_{k=0}^{\infty }}\underset{t\in {B_{k}}}{\sup }\frac{|X(t)|}{{a_{k}}}\Big\}\\ {} & \le {\prod \limits_{k=0}^{\infty }}\Big(\mathbb{E}\exp {\Big\{\lambda {r_{k}}\underset{t\in {B_{k}}}{\sup }\frac{|X(t)|}{{a_{k}}}\Big\}\Big)^{\frac{1}{{r_{k}}}}}.\end{aligned}\]
By using Theorem 1 we obtain
\[\begin{aligned}{}I(\lambda )& \le {\prod \limits_{k=0}^{\infty }}{2^{1/{r_{k}}}}\Big(\exp \Big\{\varphi {\Big(\frac{\lambda {r_{k}}{\varepsilon _{k}}}{(1-\theta ){a_{k}}}\Big)\Big\}\Big)^{\frac{1}{{r_{k}}}}}\Big({r^{(-1)}}{\Big(\frac{{I_{r,k}}(\theta {\varepsilon _{k}})}{\theta {\varepsilon _{k}}}\Big)\Big)^{\frac{1}{{r_{k}}}}}\\ {} & =\exp \Big\{{\sum \limits_{k=0}^{\infty }}\varphi \Big(\frac{\lambda {r_{k}}{\varepsilon _{k}}}{(1-\theta ){a_{k}}}\Big)\frac{1}{{r_{k}}}\Big\}{\prod \limits_{k=0}^{\infty }}{2^{1/{r_{k}}}}\Big({r^{(-1)}}{\Big(\frac{{I_{r,k}}(\theta {\varepsilon _{k}})}{\theta {\varepsilon _{k}}}\Big)\Big)^{\frac{1}{{r_{k}}}}}\\ {} & =2\exp \Big\{{\sum \limits_{k=0}^{\infty }}\varphi \Big(\frac{\lambda {r_{k}}{\varepsilon _{k}}}{(1-\theta ){a_{k}}}\Big)\frac{1}{{r_{k}}}\Big\}\exp \Big\{{\sum \limits_{k=0}^{\infty }}\frac{1}{{r_{k}}}\log \Big({r^{(-1)}}\Big(\frac{{I_{r,k}}(\theta {\varepsilon _{k}})}{\theta {\varepsilon _{k}}}\Big)\Big)\Big\}.\end{aligned}\]
Let ${r_{k}}=\frac{A{a_{k}}}{{\varepsilon _{k}}}$. Then we obtain the claimed bound (9):
We obtain the second bound (10) by applying Chebyshev’s inequality. □
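Observe that for the choice ${r_{k}}=\frac{A{a_{k}}}{{\varepsilon _{k}}}$ used in the proof, the normalization required for Hölder’s inequality holds automatically whenever $A={\textstyle\sum _{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{a_{k}}}<\infty $ (as in the conditions of Theorems 7 and 8 below), and the φ-terms no longer depend on k:
\[ {\sum \limits_{k=0}^{\infty }}\frac{1}{{r_{k}}}=\frac{1}{A}{\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{a_{k}}}=1,\hspace{1em}\varphi \Big(\frac{\lambda {r_{k}}{\varepsilon _{k}}}{(1-\theta ){a_{k}}}\Big)=\varphi \Big(\frac{\lambda A}{1-\theta }\Big).\]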
Note that under the conditions of Theorem 4 we have the estimate
\[ {I_{r,k}}({\gamma _{k}})\le {\hat{I}_{r,k}}({\gamma _{k}})={\int _{0}^{{\gamma _{k}}}}r\Big(\frac{{b_{k+1}}-{b_{k}}}{2{\sigma _{k}^{(-1)}}(u)}+1\Big)du.\]
Therefore, we can state Theorem 4 in terms of the integral ${\hat{I}_{r,k}}$.

Corollary 7.
Let conditions of Theorem 4 hold with ${\sigma _{k}}(h)={c_{k}}{h^{\beta }}$, ${c_{k}}>0$, $0<\beta \le 1$, but condition $(iii)$ be replaced by the following one:
Then
Proof.
We estimate the expression for $S(\theta ,r)$ given in condition (iii) of Theorem 4 for the case when ${\sigma _{k}}(h)={c_{k}}{h^{\beta }}$, ${c_{k}}>0$, $0<\beta \le 1$, choosing $r(x)={x^{\alpha }}-1,x\ge 1,0<\alpha <\beta $. We obtain
\[ {r^{(-1)}}\Big(\frac{{\hat{I}_{r,k}}(\theta {\varepsilon _{k}})}{\theta {\varepsilon _{k}}}\Big)\le {2^{\frac{2}{\beta }-1}}\Big(1+\frac{{b_{k+1}}-{b_{k}}}{{\theta ^{\frac{1}{\beta }}}}{\Big(\frac{{c_{k}}}{{\varepsilon _{k}}}\Big)^{\frac{1}{\beta }}}{2^{\frac{2}{\beta }-1}}\Big).\]
Let us use the inequality: for $0<\gamma \le 1$ and $x\ge 0$, $\log (1+x)\le \frac{{x^{\gamma }}}{\gamma }$. Then
\[ \log \Big(1+\frac{{b_{k+1}}-{b_{k}}}{{\theta ^{\frac{1}{\beta }}}}{\Big(\frac{{c_{k}}}{{\varepsilon _{k}}}\Big)^{\frac{1}{\beta }}}{2^{\frac{2}{\beta }-1}}\Big)\le \frac{1}{\gamma }{\Big(\frac{{b_{k+1}}-{b_{k}}}{{\theta ^{\frac{1}{\beta }}}}{\Big(\frac{{c_{k}}}{{\varepsilon _{k}}}\Big)^{\frac{1}{\beta }}}{2^{\frac{2}{\beta }-1}}\Big)^{\gamma }},\]
and we can write the estimate for $S(\theta ,r)$:
\[ S(\theta ,r)\le \log ({2^{\frac{2}{\beta }-1}})+\frac{1}{\gamma }{\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}^{1-\frac{\gamma }{\beta }}}}{{a_{k}}}{({b_{k+1}}-{b_{k}})^{\gamma }}{\Big(\frac{{c_{k}^{\frac{1}{\beta }}}{2^{\frac{2}{\beta }-1}}}{{\theta ^{\frac{1}{\beta }}}}\Big)^{\gamma }},\]
from which we obtain the expression for ${A_{1}}(\theta )$.

Statement $(ii)$ follows from $(i)$ in view of Chebyshev’s inequality. □
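As a quick check (a hypothetical configuration, chosen only to illustrate when the series bounding $S(\theta ,r)$ converges), one can take ${b_{k}}=k$, $a(t)={(1+t)^{2}}$ and constant ${\varepsilon _{k}}$, ${c_{k}}$:

```python
# Hypothetical illustration of the convergence of the series bounding
# S(theta, r) in the proof of Corollary 7: b_k = k, a(t) = (1+t)**2,
# eps_k and c_k constant.  All values are illustrative.
import math

beta, gamma, theta = 1.0, 1.0, 0.5
eps_k, c_k = 1.0, 1.0

series = sum(
    (eps_k ** (1 - gamma / beta) / (1 + k) ** 2)            # eps_k^{1-gamma/beta} / a_k
    * ((k + 1) - k) ** gamma                                 # (b_{k+1} - b_k)^gamma
    * (c_k ** (1 / beta) * 2 ** (2 / beta - 1) / theta ** (1 / beta)) ** gamma
    for k in range(100000)
)
bound_on_S = math.log(2 ** (2 / beta - 1)) + series / gamma
print("bound on S(theta, r):", round(bound_on_S, 4))
```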
Corollary 8.
Let conditions of Theorem 4 hold with ${\sigma _{k}}(h)={c_{k}}{(\ln ({e^{\alpha }}+\frac{1}{h}))^{-\alpha }}$, ${c_{k}}>0$, $\alpha >1$, but condition $(iii)$ be replaced by the following one:
Proof.
The proof is analogous to the proof of Corollary 7. With the given ${\sigma _{k}}(h)$ and the choice $r(x)=\ln (x)$, we obtain:
\[ {r^{(-1)}}\Big(\frac{{\hat{I}_{r,k}}(\theta {\varepsilon _{k}})}{\theta {\varepsilon _{k}}}\Big)=\exp \Big\{\ln \Big(\frac{{b_{k+1}}-{b_{k}}}{2}\Big)+\frac{\alpha {c_{k}^{\frac{1}{\alpha }}}}{(\alpha -1){(\theta {\varepsilon _{k}})^{\frac{1}{\alpha }}}}\Big\}\]
and then we estimate the expression for $S(\theta ,r)$ given in condition (iii) of Theorem 4. □

4 Stochastic processes related to the heat equation
In this section we consider the Cauchy problem for the heat equation
(11)
\[ \frac{\partial u}{\partial t}=\mu \frac{{\partial ^{2}}u}{\partial {x^{2}}},\hspace{0.1667em}\hspace{0.1667em}t>0,\hspace{0.1667em}\hspace{0.1667em}x\in \mathbb{R},\hspace{0.1667em}\hspace{0.1667em}\mu >0,\]
subject to the random initial condition
(12)
\[ u(0,x)=\eta (x),\hspace{0.1667em}\hspace{0.1667em}x\in \mathbb{R},\]
where $\eta (x),x\in \mathbb{R}$, is a stochastic process.

We will follow the approach used in the paper [14] for the case where η is a sub-Gaussian process. Here instead we suppose that η is a φ-sub-Gaussian process.
Introduce the following assumption.
H.1
$\eta (x),x\in \mathbb{R}$, is a real, measurable, mean-square continuous, stationary (in the wide sense) stochastic process, which is strictly φ-sub-Gaussian with the determining constant ${c_{\eta }}$ (see Definition 4).
Recall that $\mathbb{E}\eta (x)=0$, $x\in \mathbb{R}$, since the process η is φ-sub-Gaussian.
Let $B(x),x\in \mathbb{R}$, be the covariance function of the stationary process $\eta (x),x\in \mathbb{R}$; therefore, we have the representation
where $F(\lambda )$ is a spectral measure, and for the process itself we can write the spectral representation
The stochastic integral (14) is considered as an ${L_{2}}(\Omega )$ integral. The orthogonal complex-valued random measure Z is such that $\mathbb{E}|Z(d\lambda ){|^{2}}=F(d\lambda )$.
Consider the process $\{u(t,x),t>0,x\in \mathbb{R}\}$ defined by
(15)
\[ u(t,x)={\int _{\mathbb{R}}}g(t,x-y)\eta (y)\hspace{0.1667em}dy,\]
where
(16)
\[ g(t,x)=\frac{1}{{(4\pi \mu t)^{1/2}}}\exp \Big\{-\frac{{x^{2}}}{4\mu t}\Big\},t>0,x\in \mathbb{R},\]
is the fundamental solution to the heat equation (11).

In view of the representation (14), the process given by (15) can be written in the form
(17)
\[ u(t,x)={\int _{\mathbb{R}}}\exp \big\{i\lambda x-\mu {\lambda ^{2}}t\big\}Z(d\lambda ).\]
The process (17) can be interpreted as the mean-square or ${L_{2}}(\Omega )$ solution to the Cauchy problem (11)–(12), as justified in [14]. Note that a random solution of the form (17) to the heat equation with stationary initial condition appears already in the paper by Rosenblatt [26] (see Formula (1.11) therein).
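For illustration only (this simulation is not used in the results below), the representation (17) can be approximated by discretizing the spectral measure; the sketch below does this in the Gaussian special case $\varphi (x)={x^{2}}/2$ with an assumed Gaussian spectral density, and illustrates how the supremum of the solution over a spatial grid is damped as t grows.

```python
# Illustrative simulation sketch of the spectral representation (17) in the
# Gaussian special case: F is approximated by weights f(lam_j)*dlam at grid
# frequencies lam_j, and Z(dlambda) by independent Gaussian amplitudes.
# The spectral density used here is an assumption made for the illustration.
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0
lam = np.linspace(-10.0, 10.0, 401)                      # frequency grid
dlam = lam[1] - lam[0]
w = np.exp(-lam ** 2 / 2) / np.sqrt(2 * np.pi) * dlam    # approximate weights F_j

# real form of (14)/(17): independent cosine/sine amplitudes with variance F_j
xi, zeta = rng.normal(scale=np.sqrt(w)), rng.normal(scale=np.sqrt(w))

def u(t, x):
    """u(t,x) ~ sum_j e^{-mu*lam_j^2*t} (xi_j cos(lam_j x) + zeta_j sin(lam_j x))."""
    damp = np.exp(-mu * lam ** 2 * t)
    return float(np.sum(damp * (xi * np.cos(lam * x) + zeta * np.sin(lam * x))))

xs = np.linspace(-3.0, 3.0, 121)
for t in (0.01, 0.1, 1.0):
    print(f"t = {t:5.2f}   sup over the grid of |u(t,x)| = "
          f"{max(abs(u(t, x)) for x in xs):.3f}")
```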
Theorem 5.
Let $u(t,x),t>0,x\in \mathbb{R}$, be the stochastic process given by (17) and Assumption H.1 hold. Then the following statements hold:
1) if for some $\varepsilon \in (0,1]$
then
where
Proof.
The process $u(t,x)$ is strictly φ-sub-Gaussian with the determining constant ${c_{\eta }}$. Therefore, we can write
We need to evaluate the expression on the right-hand side of Formula (24). From this point the proof is the same as that of Theorem 3.1 in [14], so we present only the main steps.
We will use below some calculations from [14].
The covariance function of the process (17) is of the form
\[ Cov\Big(u(t,x),u({t_{1}},{x_{1}})\Big)={\int _{\mathbb{R}}}\exp \Big\{i\lambda (x-{x_{1}})-\mu {\lambda ^{2}}(t+{t_{1}})\Big\}F(d\lambda )\]
and
(25)
\[ \mathbb{E}{\Big(u(t,x)-u({t_{1}},{x_{1}})\Big)^{2}}={\int _{\mathbb{R}}}|b(\lambda ){|^{2}}F(d\lambda ),\]
where
\[ b(\lambda )=\exp \{i\lambda x\}\exp \{-\mu {\lambda ^{2}}t\}-\exp \{i\lambda {x_{1}}\}\exp \{-\mu {\lambda ^{2}}{t_{1}}\}.\]
From Formula (3.11) in [14] we have:
\[ |b(\lambda ){|^{2}}\le \Big(1-\exp {\Big\{-\mu {\lambda ^{2}}|t-{t_{1}}|\Big\}\Big)^{2}}+4{\sin ^{2}}(\frac{1}{2}\lambda (x-{x_{1}})).\]
For $|t-{t_{1}}|\le h$ and $|x-{x_{1}}|\le h$ we can write for any $\varepsilon \in (0,1]$ (see [14]):
(26)
\[ 4{\sin ^{2}}(\frac{1}{2}\lambda (x-{x_{1}}))\le 4\min {\Big(\frac{h}{2}|\lambda |,1\Big)^{2\varepsilon }},\]
(27)
\[ \Big(1-\exp {\Big\{-\mu {\lambda ^{2}}|t-{t_{1}}|\Big\}\Big)^{2}}\le {\Big(\min (\mu {\lambda ^{2}}h,1)\Big)^{2\varepsilon }}\]
(taking into account that $|x|\le |x{|^{\varepsilon }}$ for $|x|<1$, $\varepsilon \in (0,1]$).

Using (26)–(27) we estimate the integral on the right-hand side of (25) for $|t-{t_{1}}|\le h$ and $|x-{x_{1}}|\le h$.
From Theorem 5 and Corollaries 2 and 3 we now derive estimates for the distribution of the supremum of the field $u(t,x)$ considered in the domain $\{a\le t\le b,c\le x\le d\}$.
Denote ${\widetilde{\varepsilon }_{0}}={\sup _{\substack{a\le t\le b;\\ {} c\le x\le d}}}{\tau _{\varphi }}(u(t,x))$, $\varkappa =\max (b-a,d-c)$, ${\tilde{\theta }_{i}}={\sigma _{i}}(\varkappa )/{\widetilde{\varepsilon }_{0}}$, $i=1,2$, where ${\sigma _{1}}$ and ${\sigma _{2}}$ are defined in (19) and (22) respectively.
Theorem 6.
Let $u(t,x),a\le t\le b,c\le x\le d$, be a separable modification of the stochastic process given by (17) and Assumption H.1 hold. Then the following bounds for the distribution of supremum hold:
1) if for some $\beta \in (0,1]$
then for all $0<\theta <\min (1,{\tilde{\theta }_{1}})$ and $u>0$
\[ P\Big\{\underset{\substack{a\le t\le b;\\ {} c\le x\le d}}{\sup }|u(t,x)|>u\Big\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\widetilde{\varepsilon }_{0}}}\Big)\Big\}{\widetilde{A}_{1}}(\theta {\widetilde{\varepsilon }_{0}}),\]
where
\[ {\widetilde{A}_{1}}(\theta {\widetilde{\varepsilon }_{0}})={2^{\frac{4}{\beta }-1}}\Big(\frac{{(\varkappa {c_{\eta }}{c_{1}^{1/\beta }})^{2}}{2^{2(2/\beta -1)}}}{{(\theta {\widetilde{\varepsilon }_{0}})^{2/\beta }}}+1\Big),\]
${c_{1}}={c_{1}}(\beta )$ is given by Formula (20);
2) if for some $\beta >1$
then for all $0<\theta <\min (1,{\tilde{\theta }_{2}})$ and $u>0$
\[ P\Big\{\underset{\substack{a\le t\le b;\\ {} c\le x\le d}}{\sup }|u(t,x)|>u\Big\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\widetilde{\varepsilon }_{0}}}\Big)\Big\}{\widetilde{A}_{2}}(\theta {\widetilde{\varepsilon }_{0}}),\]
where
\[ {\widetilde{A}_{2}}(\theta {\widetilde{\varepsilon }_{0}})=\frac{{\varkappa ^{2}}}{4}\exp \Big\{\frac{2\beta {c_{\eta }}{c_{2}^{1/\beta }}}{{(\theta {\widetilde{\varepsilon }_{0}})^{1/\beta }}(\beta -1)}\Big\},\]
${c_{2}}={c_{2}}(\beta )$ is given by (23).
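The bound of statement 1) is straightforward to evaluate numerically. A minimal sketch is given below for ${\varphi ^{\ast }}(x)={x^{2}}/2$ (the Gaussian-type case); the constants ${c_{\eta }}$, ${c_{1}}={c_{1}}(\beta )$ from Formula (20) and ${\widetilde{\varepsilon }_{0}}$ are treated as given inputs with illustrative values, not computed from a spectral measure.

```python
# Numerical sketch of the supremum bound in statement 1) of Theorem 6 with
# phi*(x) = x**2/2.  The constants c_eta, c1 = c1(beta), eps0 and the domain
# are illustrative inputs only.
import math

beta, c_eta, c1, eps0 = 1.0, 1.0, 1.0, 1.0
a, b, c, d = 0.0, 1.0, 0.0, 1.0
kappa = max(b - a, d - c)
theta = 0.5                                   # must satisfy 0 < theta < min(1, theta_1~)

phi_star = lambda y: y ** 2 / 2               # conjugate of phi(x) = x**2/2

def A1_tilde(te):
    """The factor A~_1(theta * eps0) of Theorem 6."""
    return 2 ** (4 / beta - 1) * ((kappa * c_eta * c1 ** (1 / beta)) ** 2
                                  * 2 ** (2 * (2 / beta - 1)) / te ** (2 / beta) + 1)

def sup_bound(u):
    return 2 * math.exp(-phi_star(u * (1 - theta) / eps0)) * A1_tilde(theta * eps0)

for u in (2.0, 4.0, 6.0, 8.0):
    print(f"u = {u:4.1f}   P(sup |u(t,x)| > u) <= {sup_bound(u):.3e}")
```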
Proof.
The assertion of this theorem follows directly from Theorem 5 and Corollaries 2 and 3. We only need to check that ${\widetilde{\varepsilon }_{0}}={\sup _{\substack{a\le t\le b;\\ {} c\le x\le d}}}{\tau _{\varphi }}(u(t,x))<\infty $. Indeed, we have
\[\begin{aligned}{}{\widetilde{\varepsilon }_{0}}& \le {c_{\eta }}\underset{\substack{a\le t\le b;\\ {} c\le x\le d}}{\sup }{\Big(\mathbb{E}|u(t,x){|^{2}}\Big)^{1/2}}\\ {} & \le {c_{\eta }}\underset{\substack{a\le t\le b;\\ {} c\le x\le d}}{\sup }{\Big({\int _{\mathbb{R}}}\exp \{-2\mu {\lambda ^{2}}t\}F(d\lambda )\Big)^{1/2}}\le {c_{\eta }}{\Big({\int _{\mathbb{R}}}F(d\lambda )\Big)^{1/2}}<\infty .\end{aligned}\]
□

Consider now the process $u(t,x),(t,x)\in V$, defined on the unbounded domain of the form $V=[0,\infty )\times [-A,A]$.
Define the sets ${V_{k}}=[{b_{k}},{b_{k+1}}]\times [-A,A]$, $k=0,1,\dots \hspace{0.1667em}$, where the family $\{[{b_{k}},{b_{k+1}}],k=0,1,\dots \hspace{0.1667em}\}$ is introduced in Condition A.5; here we suppose additionally that ${b_{k+1}}-{b_{k}}\ge 2A$, and $V={\cup _{k=0}^{\infty }}{V_{k}}$. Denote ${\varepsilon _{k}}={\sup _{(t,x)\in {V_{k}}}}{\tau _{\varphi }}(u(t,x))$, ${\widehat{\theta }_{i}}={\inf _{k}}\hspace{0.1667em}\{{\sigma _{i}}({b_{k+1}}-{b_{k}})/{\varepsilon _{k}}\}$, $i=1,2$, where ${\sigma _{1}}$ and ${\sigma _{2}}$ are defined in (19) and (22), respectively.
Theorem 7.
Let $\{u(t,x),(t,x)\in V\}$ be a separable modification of the stochastic process given by (17), Condition H.1 hold, and Condition A.5 hold with ${\varepsilon _{k}}={\sup _{(t,x)\in {V_{k}}}}{\tau _{\varphi }}(u(t,x))$. Suppose that for some $\beta \in (0,1]$
\[\begin{array}{l}\displaystyle {\int _{\mathbb{R}}}{\lambda ^{4\beta }}F(d\lambda )<\infty ;\\ {} \displaystyle A={\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{a_{k}}}<\infty ;\end{array}\]
and there exists $0<\gamma \le 1$ such that
Then for any $\theta \in (0,\min (1,{\widehat{\theta }_{1}}))$ and any $u>0$
\[ P\Big\{\underset{(x,t)\in V}{\sup }\frac{|u(t,x)|}{a(t)}>u\Big\}\le {2^{4/\beta -1}}\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{A}\Big)\Big\}{\widehat{A}_{1}}(\theta ),\]
where
\[ {\widehat{A}_{1}}(\theta )=\exp \Big\{\frac{{({c_{\eta }}{c_{1}}(\beta ))^{2\gamma /\beta }}{\widehat{S}_{1}}}{\gamma A}{\Big(\frac{{2^{4/\beta -2}}}{{\theta ^{2/\beta }}}\Big)^{\gamma }}\Big\},\]
and ${c_{1}}(\beta )$ is given by Formula (20).
Theorem 8.
Let $\{u(t,x),(t,x)\in V\}$ be a separable modification of the stochastic process given by (17), Condition H.1 hold, and Condition A.5 hold with ${\varepsilon _{k}}={\sup _{(t,x)\in {V_{k}}}}{\tau _{\varphi }}(u(t,x))$. Suppose that for some $\beta >1$ and any $\theta \in (0,\min (1,{\widehat{\theta }_{2}}))$
\[\begin{array}{l}\displaystyle {\int _{\mathbb{R}}}{\Big(\ln (1+|\lambda |)\Big)^{2\beta }}F(d\lambda )<\infty ;\\ {} \displaystyle A={\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{a_{k}}}<\infty ;\\ {} \displaystyle {\widehat{S}_{2}}(\theta )={\sum \limits_{k=0}^{\infty }}\frac{{\varepsilon _{k}}}{{a_{k}}}\Big\{\ln \Big(\frac{{b_{k+1}}-{b_{k}}}{2}\Big)+\frac{\beta }{\beta -1}{\Big(\frac{{c_{\eta }}{c_{2}}(\beta )}{\theta {\varepsilon _{k}}}\Big)^{1/\beta }}\Big\}<\infty ,\end{array}\]
where ${c_{2}}(\beta )$ is given by Formula (23).
5 Conclusions and future studies
The upper bounds for the distribution of increments of φ-sub-Gaussian processes are stated in Theorems 2, 3 in forms different from those obtained previously in [4, 29] and other literature on the topic. This form of the bounds can be more useful in particular situations, since the bounds can be calculated explicitly. We leave for future research the investigation of their exact expressions for different functions φ, in particular, by graphical methods.
Theorem 4 and Corollaries 7, 8 on the rate of growth of φ-sub-Gaussian processes generalize Theorem 2.4 of [6] (which is stated for the Gaussian case) and are different from the analogous results in [20] stated for φ-sub-Gaussian processes. The use of the integral (2) allows one to simplify some expressions in the statements of the results, and, correspondingly, to simplify their application to the study of solutions of the random heat equation.
The results on the distribution of the supremum for processes related to the heat equation with φ-sub-Gaussian initial conditions generalize the corresponding results from the papers [13, 14], where sub-Gaussian initial conditions were considered. The results on the rate of growth in this setting have not been stated before. Note that the results obtained in Section 4 also hold for the Gaussian and sub-Gaussian cases, with the substitution $\varphi (x)=\frac{{x^{2}}}{2}$ in the corresponding bounds. Due to the use of Theorem 1, which is based on the entropy integral (2), for the Gaussian and sub-Gaussian cases our bounds appear in a form which is simpler than, but somewhat different from, the corresponding bounds in [13, 14]. We postpone to future research an accurate comparison of these bounds, in particular, by simulation studies.
In future studies it would also be interesting to consider generalized heat equations with random initial conditions, in particular, equations of fractional order (for possible models of such equations we refer, for example, to the papers [3, 24, 25] among many others). One can also consider some other classes of equations whose fundamental solutions have a form that allows one to construct and investigate random solutions by the methods of the present paper.