1 Introduction
Numerous recent studies are concerned with evolution equations of the form
\[ \frac{\partial u}{\partial t}+{\sum \limits_{j=1}^{l}}\frac{{\partial ^{2j+1}}u}{\partial {x^{2j+1}}}+{u^{k}}\frac{\partial u}{\partial x}=0,\hspace{1em}l,k\in \mathbb{N},\]
which are dispersive equations of order $2l+1$ with a convective term ${u^{k}}\frac{\partial u}{\partial x}$; equations with coefficients of more general form and different kinds of nonlinearity are also the subject of active research. The most celebrated equation of this class is the Korteweg–De Vries (KdV) equation
(1.1)
\[ \frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+\frac{{\partial ^{3}}u}{\partial {x^{3}}}=0,\]
which describes the evolution of small amplitude long waves in fluids and other media.
The Kawahara equation with dispersive terms of the third and fifth orders
(1.2)
\[ \frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+\alpha \frac{{\partial ^{3}}u}{\partial {x^{3}}}+\beta \frac{{\partial ^{5}}u}{\partial {x^{5}}}=0\]
is used to model various dispersive phenomena such as plasma waves, capillarity-gravity water waves, etc., in situations when the cubic dispersive term is weak or not sufficient. In the most recent research, generalisations of Equations (1.1), (1.2) have been suggested and treated.
In the physical and mathematical literature the existence, uniqueness and analytic properties of solutions to the initial value problem have been intensively investigated for various linear and nonlinear dispersive equations. Boundary value problems for such equations were also considered. We refer, for example, to the comprehensive study undertaken in the book by Tao [16], among many other books and papers on the topic.
One should note the importance of the study of constant coefficient linear dispersive equations for its own sake and also because this provides prerequisites for the theory of nonlinear dispersive equations, since the latter are often obtained by perturbation of the linear theory ([16]). Developing the theory of linear equations is also essential for describing those evolution phenomena where the linear effects compensate or dominate nonlinear ones. In such situations, one can expect that the nonlinear solutions display almost the same behavior as the linear ones.
In the probabilistic literature significant attention has been paid to the equations of the form
(1.3)
\[ \frac{\partial u}{\partial t}={c_{n}}\frac{{\partial ^{n}}u}{\partial {x^{n}}},\hspace{1em}n=2,3,\dots ,\]
where ${c_{n}}$ is a constant.
The investigation of fundamental solutions to equation (1.3) can be traced back to works by Bernštein and Lévy. Such solutions are sign-varying and, based on them, the so-called pseudoprocesses have been introduced and extensively investigated in the literature. Note that Equations (1.3) of even and odd order possess solutions of different structure and behaviour (see, for example, [13]). We refer to [7], where a review of the recent results on this topic and additional literature are presented.
We note that in the probabilistic literature equations of the form (1.3) and their generalizations are often called higher-order heat-type equations.
Equations of the form (1.3) subject to random initial conditions were studied in [1], namely, the asymptotic behavior was analysed for the rescaled solution to the Airy equation with random initial conditions given by weakly dependent stationary processes.
More general odd-order equations of the form
(1.4)
\[ \frac{\partial u}{\partial t}={\sum \limits_{k=1}^{N}}{a_{k}}\frac{{\partial ^{2k+1}}u}{\partial {x^{2k+1}}},\hspace{1em}N=1,2,\dots ,\]
subject to random initial conditions represented by strictly φ-sub-Gaussian harmonizable processes were considered in [2, 7]. Rigorous conditions were stated therein for the existence of solutions and some distributional properties of solutions were investigated.
The present paper continues the line of research initiated in the papers [2, 7]. Note that in the mathematical literature the initial value problems for partial differential equations have been studied within the framework of different functional spaces, including the most abstract ones. Here we consider Equation (1.4) in the framework of special Banach spaces of random variables, which constitute a subclass of Orlicz spaces of exponential type; more precisely, we deal with the spaces of strictly φ-sub-Gaussian random processes. These spaces play an important role in extensions of properties of Gaussian and sub-Gaussian processes. Basic results on φ-sub-Gaussian processes and fields can be found, for example, in [3, 4, 8, 17].
The general methods and techniques developed for φ-sub-Gaussian processes, applied to the problems under consideration in the present paper, permit us to obtain bounds for the distributions of suprema of the solutions to the initial value problem for Equation (1.4). The bounds are presented in a form different than those obtained in the paper [7], and can be more useful in particular situations. In such a way, the results of the present paper complement and extend the results presented in [7].
To make the paper self-contained, in Section 2 we present all important definitions and facts on harmonizable φ-sub-Gaussian processes, which will be used in the derivation of the main results. We also formulate the result on the conditions of existence of solutions to (1.4) with a φ-sub-Gaussian initial condition (see [2, 7] for its derivation). The main result on the bounds for the distribution of the supremum of the solution is stated in Section 3. We present examples of processes for which the assumptions of the general result are verified and the bounds are written explicitly. The main result is also specified for the case of a Gaussian initial condition.
2 Preliminaries
Since in the paper we consider a partial differential equation with random initial condition given by a real-valued harmonizable φ-sub-Gaussian process, in this section we present the necessary definitions and facts concerning such processes.
2.1 Harmonizable processes
Harmonizable processes are a natural extension of stationary processes to second-order nonstationary ones. This class of processes allows us to retain the advantages of Fourier analysis. Harmonizable processes were introduced by Loève [12]. Recent developments in this theory are due to Rao [14] and Swift [15], among others.
Definition 2.1 ([12]).
The second-order random function $X=\{X(t),t\in \mathbb{R}\}$, $\mathsf{E}X(t)=0$, is called harmonizable if there exists a second-order random function $y=y(t),t\in \mathbb{R}$, $\mathsf{E}y(t)=0$ such that the covariance ${\Gamma _{y}}(t,s)=\mathsf{E}y(t)\overline{y(s)}$ has finite variation and $X(t)={\textstyle\int _{\mathbb{R}}}{e^{itu}}\hspace{0.1667em}\mathrm{d}y(u)$, where the integral is defined in the mean-square sense.
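As a standard special case (stated here for orientation, not taken from [12]), stationary processes are harmonizable with spectral mass concentrated on the diagonal:

```latex
% If y has orthogonal increments with
% \mathsf{E}|\mathrm{d}y(u)|^{2}=\mathrm{d}F(u) for a bounded nondecreasing F,
% then \Gamma_{y} places all its mass on the diagonal, and
\Gamma_{X}(t,s)=\mathsf{E}X(t)\overline{X(s)}
  =\int_{\mathbb{R}}e^{\mathrm{i}(t-s)u}\,\mathrm{d}F(u),
% i.e. X is stationary with spectral function F. Harmonizability relaxes
% the orthogonality of increments and allows mass off the diagonal.
```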
Theorem 2.1 ([12] Loève theorem).
The second-order random function $X=\{X(t),t\in \mathbb{R}\}$, $\mathsf{E}X(t)=0$, is harmonizable if and only if there exists a covariance function ${\Gamma _{y}}(u,v)$ with finite variation such that
\[ {\Gamma _{X}}(t,s)=\mathsf{E}X(t)\overline{X(s)}={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}{e^{i(tu-sv)}}\hspace{0.1667em}\mathrm{d}{\Gamma _{y}}(u,v).\]
Remark 2.2.
In what follows, an integral of the type $\textstyle\int {\textstyle\int _{A}}f(t,s)\mathrm{d}g(t,s)$ is understood as a common Lebesgue–Stieltjes integral, that is, the limit of the sum $\textstyle\sum \textstyle\sum f(t,s){\Delta _{i}}{\Delta _{j}}g(t,s)$, and an integral of the type $\textstyle\int {\textstyle\int _{A}}f(t,s)|\mathrm{d}g(t,s)|$ is understood as the limit of the sum $\textstyle\sum \textstyle\sum f(t,s)|{\Delta _{i}}{\Delta _{j}}g(t,s)|$.
Below we shall focus on real-valued harmonizable processes.
Definition 2.2.
A real-valued second-order random function $X=\{X(t),t\in \mathbb{R}\}$ is called harmonizable if there exists a real-valued second-order random function $y(u)$, $\mathsf{E}y(u)=0$, $u\in \mathbb{R}$, such that $X(t)={\textstyle\int _{-\infty }^{\infty }}\sin tu\hspace{0.1667em}\mathrm{d}y(u)$ or $X(t)={\textstyle\int _{-\infty }^{\infty }}\cos tu\hspace{0.1667em}\mathrm{d}y(u)$ and the covariance function ${\Gamma _{y}}(t,s)=\mathsf{E}y(t)y(s)$ has finite variation. The integral is defined in the mean-square sense.
Theorem 2.2.
A real-valued second-order random function $X=\{X(t),t\in \mathbb{R}\}$, $\mathsf{E}X(t)=0$, is harmonizable if and only if there exists a covariance function ${\Gamma _{y}}(u,v)$ with finite variation such that
\[ {\Gamma _{x}}(t,s)=\mathsf{E}X(t)X(s)={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}\kappa (tu)\kappa (sv)\hspace{0.1667em}\mathrm{d}{\Gamma _{y}}(u,v),\]
where $\kappa (v)=\cos v$ or $\kappa (v)=\sin v$.
The theorem above follows from Theorem 2.1.
2.2 φ-sub-Gaussian random variables and processes
Here we present some basic facts from the theory of φ-sub-Gaussian random variables and processes, as well as some necessary results.
A continuous even convex function φ is called an Orlicz N-function, if $\varphi (0)=0,\varphi (x)>0$ as $x\ne 0$, and the following conditions hold: ${\lim \nolimits_{x\to 0}}\frac{\varphi (x)}{x}=0$, ${\lim \nolimits_{x\to \infty }}\frac{\varphi (x)}{x}=\infty $. The function ${\varphi ^{\ast }}$ defined by ${\varphi ^{\ast }}(x)={\sup _{y\in \mathbb{R}}}(xy-\varphi (y))$ is called the Young–Fenchel transform (or convex conjugate) of the function φ [3, 11].
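For the power function appearing as the first case of (2.2) below, the Young–Fenchel transform can be computed explicitly; this conjugate pair is used in Example 3.1:

```latex
% For \varphi(y)=\frac{|y|^{\alpha}}{\alpha},\ 1<\alpha\le 2, and x>0, the
% supremum \sup_{y}(xy-\varphi(y)) is attained at y=x^{1/(\alpha-1)}, so
\varphi^{\ast}(x)=x\cdot x^{\frac{1}{\alpha-1}}
  -\frac{x^{\frac{\alpha}{\alpha-1}}}{\alpha}
  =x^{\gamma}\Big(1-\frac{1}{\alpha}\Big)
  =\frac{x^{\gamma}}{\gamma},
\qquad \frac{1}{\alpha}+\frac{1}{\gamma}=1,
% and by symmetry \varphi^{\ast}(x)=\frac{|x|^{\gamma}}{\gamma},\ \gamma\ge 2.
```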
We say that φ satisfies the Condition Q, if φ is such an N-function that ${\liminf _{x\to 0}}\frac{\varphi (x)}{{x^{2}}}=c>0$, where the case $c=\infty $ is possible [4, 8].
In what follows we will always deal with N-functions for which condition Q holds.
Examples of N-functions for which the condition Q is satisfied:
(2.2)
\[\begin{aligned}{}& \varphi (x)=\frac{|x{|^{\alpha }}}{\alpha },\hspace{1em}1<\alpha \le 2,\\ {} & \varphi (x)=\left\{\begin{array}{l}\frac{|x{|^{\alpha }}}{\alpha },\hspace{1em}|x|\ge 1,\alpha >2;\hspace{1em}\\ {} \frac{|x{|^{2}}}{\alpha },\hspace{1em}|x|\le 1,\alpha >2.\hspace{1em}\end{array}\right.\end{aligned}\]
Definition 2.3 ([4, 8]).
Let $\{\Omega ,L,\mathbf{P}\}$ be a standard probability space. The random variable ζ is said to be φ-sub-Gaussian (or belongs to the space ${\mathrm{Sub}_{\varphi }}(\Omega )$), if $\mathsf{E}\zeta =0$, $\mathsf{E}\exp \{\lambda \zeta \}$ exists for all $\lambda \in \mathbb{R}$ and there exists a constant $a>0$ such that the following inequality holds for all $\lambda \in \mathbb{R}:$ $\mathsf{E}\exp \{\lambda \zeta \}\le \exp \{\varphi (\lambda a)\}$.
The space ${\mathrm{Sub}_{\varphi }}(\Omega )$ is a Banach space with respect to the norm [4, 8]
\[ {\tau _{\varphi }}(\zeta )=\inf \big\{a>0:\mathsf{E}\exp \{\lambda \zeta \}\le \exp \{\varphi (a\lambda )\}\hspace{0.2778em}\text{for all}\hspace{0.2778em}\lambda \in \mathbb{R}\big\},\]
which is called the φ-sub-Gaussian standard of the random variable ζ.
Centered Gaussian random variables $\zeta \sim N(0,{\sigma ^{2}})$ are φ-sub-Gaussian with $\varphi (x)=\frac{{x^{2}}}{2}$ and ${\tau _{\varphi }^{2}}(\zeta )=\mathsf{E}{\zeta ^{2}}={\sigma ^{2}}$. In the case $\varphi (x)=\frac{{x^{2}}}{2}$, φ-sub-Gaussian random variables are simply sub-Gaussian.
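Indeed, in the Gaussian case the defining inequality of Definition 2.3 holds with equality at $a=\sigma $:

```latex
% For \zeta\sim N(0,\sigma^{2}) and any \lambda\in\mathbb{R},
\mathsf{E}\exp\{\lambda\zeta\}
  =\exp\Big\{\frac{\lambda^{2}\sigma^{2}}{2}\Big\}
  =\exp\{\varphi(\lambda\sigma)\},
\qquad \varphi(x)=\frac{x^{2}}{2},
% so a=\sigma is the smallest admissible constant,
% i.e. \tau_{\varphi}(\zeta)=\sigma.
```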
Definition 2.4.
A family Δ of random variables $\zeta \in {\mathrm{Sub}_{\varphi }}(\Omega )$ is called strictly φ-sub-Gaussian (see [5]), if there exists a constant ${C_{\Delta }}$ such that for all countable sets I of random variables ${\zeta _{i}}\in \Delta $, $i\in I$, and all ${\lambda _{i}}\in \mathbb{R}$ the following inequality holds:
\[ {\tau _{\varphi }}\Big({\sum \limits_{i\in I}}{\lambda _{i}}{\zeta _{i}}\Big)\le {C_{\Delta }}{\Big(\mathsf{E}{\Big({\sum \limits_{i\in I}}{\lambda _{i}}{\zeta _{i}}\Big)^{2}}\Big)^{1/2}}.\]
The constant ${C_{\Delta }}$ is called the determining constant of the family Δ.
The linear closure of a strictly φ-sub-Gaussian family Δ in the space ${L_{2}}(\Omega )$ is strictly φ-sub-Gaussian with the same determining constant [5].
Definition 2.5.
The random process $\zeta =\{\zeta (t),t\in \mathbf{T}\}$ is called (strictly) φ-sub-Gaussian if the family of random variables $\{\zeta (t),t\in \mathbf{T}\}$ is (strictly) φ-sub-Gaussian [5].
The following example of strictly φ-sub-Gaussian random process is important for our study. The solutions of partial differential equations considered in the next sections are of the same form as in this example.
Example 2.1 ([5]).
Let K be a deterministic kernel and suppose that the process $X=\{X(t),t\in \mathbf{T}\}$ can be represented in the form
\[ X(t)={\int _{\mathbf{T}}}K(t,s)\hspace{0.1667em}\mathrm{d}\xi (s),\]
where $\xi (t)$, $t\in \mathbf{T}$, is a strictly φ-sub-Gaussian random process and the integral above is defined in the mean-square sense. Then the process $X(t)$, $t\in \mathbf{T}$, is a strictly φ-sub-Gaussian random process with the same determining constant.
The notion of admissible function for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$ will be used to state the conditions of the existence of solutions of partial differential equations considered in the paper and to write down the bounds for suprema of these solutions.
Lemma 2.1 ([9, 10], p. 146).
Let $Z(u),u\ge 0$, be a continuous, increasing function such that $Z(u)>0$ and the function $\frac{u}{Z(u)}$ is non-decreasing for $u>{u_{0}}$, where ${u_{0}}\ge 0$ is a constant. Then for all $u,v\ne 0$
\[ Z(|uv|)\le Z(|u|+{u_{0}})\hspace{0.1667em}Z(|v|+{u_{0}}).\]
Definition 2.6.
A function $Z(u),u\ge 0$, is called admissible for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$, if the conditions of Lemma 2.1 hold for $Z(u)$ and for some $\varepsilon >0$ the integral
\[ {\int _{0}^{\varepsilon }}\Psi \left(\ln \left({Z^{(-1)}}\left(\frac{1}{s}\right)-{u_{0}}\right)\right)ds\]
converges, where $\Psi (v)=\frac{v}{{\varphi ^{(-1)}}(v)}$, $v>0$.
For example, the function $Z(u)={u^{\rho }}$, $0<\rho <1$, is an admissible function for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$ if φ is defined in (2.2).
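This can be checked directly for the first case of (2.2), $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1<\alpha \le 2$: here ${u_{0}}=0$, ${Z^{(-1)}}(u)={u^{1/\rho }}$ and ${\varphi ^{(-1)}}(v)={(\alpha v)^{1/\alpha }}$, so for small $s>0$

```latex
% Evaluate the integrand of Definition 2.6 with Z(u)=u^{\rho}:
\Psi\Big(\ln\Big(Z^{(-1)}\Big(\frac{1}{s}\Big)\Big)\Big)
  =\frac{\ln\big(s^{-1/\rho}\big)}
        {\big(\alpha\ln\big(s^{-1/\rho}\big)\big)^{1/\alpha}}
  =\frac{1}{\alpha^{1/\alpha}}
   \Big(\frac{1}{\rho}\ln\frac{1}{s}\Big)^{1-\frac{1}{\alpha}},
% and \int_{0}^{\varepsilon}\big(\ln(1/s)\big)^{1-1/\alpha}\,ds<\infty,
% so the integral in Definition 2.6 converges and Z is admissible.
```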
A characteristic feature of φ-sub-Gaussian random variables is the exponential bound for their tail probabilities. For φ-sub-Gaussian processes, estimates for their suprema are available in different forms; see, for example, the book [3].
To derive our main results, we shall use the following theorem on the distribution of supremum of a φ-sub-Gaussian random process, proved in the paper [6] (see also [3]).
Theorem 2.3.
Let $X=\{X(t),\hspace{0.1667em}t\in \mathbf{T}\}$ be a φ-sub-Gaussian process and ${\rho _{X}}$ be the pseudometric generated by X, that is, ${\rho _{X}}(t,s)={\tau _{\varphi }}(X(t)-X(s))$, $t,s\in \mathbf{T}$. Assume that the pseudometric space $(\mathbf{T},{\rho _{X}})$ is separable, the process X is separable on $(\mathbf{T},{\rho _{X}})$ and ${\varepsilon _{0}}:={\sup _{t\in \mathbf{T}}}{\tau _{\varphi }}(X(t))<\infty $.
Let $r(x),x\ge 1$, be a non-negative, monotone increasing function such that the function $r({e^{x}}),x\ge 0$, is convex, and for $0<\varepsilon \le {\varepsilon _{0}}$
(2.5)
\[ {I_{r}}(\varepsilon ):={\underset{0}{\overset{\varepsilon }{\int }}}r(N(v))\hspace{0.1667em}dv<\infty ,\]
where $\{N(v),v>0\}$ is the massiveness of the pseudometric space $(\mathbf{T},{\rho _{X}})$, that is, $N(v)$ denotes the smallest number of elements in a v-covering of T, and the covering is formed by closed balls of radius at most v.
Then for all $\lambda >0$, $0<\theta <1$ and $u>0$ it holds
(2.6)
\[ \mathsf{E}\exp \left\{\lambda \underset{t\in \mathbf{T}}{\sup }|X(t)|\right\}\le Q(\lambda ,\theta )\]
and
(2.7)
\[ P\left\{\underset{t\in \mathbf{T}}{\sup }|X(t)|\ge u\right\}\le 2A(\theta ,u),\]
where
\[\begin{aligned}{}Q(\lambda ,\theta )& :=\exp \left\{\varphi \left(\frac{\lambda {\varepsilon _{0}}}{1-\theta }\right)\right\}{r^{(-1)}}\left(\frac{{I_{r}}(\theta {\varepsilon _{0}})}{\theta {\varepsilon _{0}}}\right),\\ {} A(\theta ,u)& =\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{r^{(-1)}}\left(\frac{{I_{r}}(\theta {\varepsilon _{0}})}{\theta {\varepsilon _{0}}}\right).\end{aligned}\]
Remark 2.3.
The integrals of the form
(2.8)
\[ I(\varepsilon ):={\underset{0}{\overset{\varepsilon }{\int }}}g(N(v))\hspace{0.1667em}dv,\hspace{1em}\varepsilon >0,\]
with $g(v)$, $v\ge 1$, being a nonnegative nondecreasing function, are called entropy integrals. Entropy characteristics of the parametric set T with respect to the pseudometric ${\rho _{X}}(t,s)={\tau _{\varphi }}(X(t)-X(s))$, $t,s\in \mathbf{T}$, generated by the process $X=\{X(t),t\in \mathbf{T}\}$, and the rate of growth of the metric massiveness $N(v)$, or the metric entropy $H(v):=\ln (N(v))$, are closely related to the properties of the process X (see [3] for details).
The integrals (2.8) play an important role in the study of such properties as boundedness and continuity of sample paths of a process; these integrals appear in estimates for moduli of continuity and for the distribution of the supremum.
General results of this kind for φ-sub-Gaussian processes are related to the convergence of the integrals (2.8), where for $g(v)$ one takes $\Psi (\ln (v))$ with $\Psi (v)=\frac{v}{{\varphi ^{(-1)}}(v)}$, $v>0$. Theorem 2.3 is more suitable for the case of “moderate” growth of the metric entropy and can lead to improved inequalities for the upper bound for the distribution of the supremum of the process, in comparison with more general inequalities involving the integrals based on the above function Ψ (see [3]).
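To see the effect, suppose (a model assumption made here only for illustration) that the massiveness grows polynomially, $N(v)\le {(c/v)^{d}}$ for $v\le {\varepsilon _{0}}$, as is typical for a Hölder-continuous process on a bounded set with a d-dimensional parameter. Then, choosing $r(x)={x^{\beta }}-1$ with $0<\beta <1/d$,

```latex
% Entropy integral of Theorem 2.3 under N(v)\le (c/v)^{d}:
I_{r}(\varepsilon)
  \le \int_{0}^{\varepsilon}
      \Big(\Big(\frac{c}{v}\Big)^{d\beta}-1\Big)\,dv
  =\frac{c^{d\beta}\,\varepsilon^{1-d\beta}}{1-d\beta}-\varepsilon
  <\infty ,
% which converges precisely because d\beta<1.
```

so for polynomial entropy a polynomial choice of r gives finite, explicitly computable entropy integrals; this is the choice exploited in Section 3.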
2.3 Solutions of linear odd-order heat-type equations with random initial conditions
Let us consider the linear equation
(2.9)
\[ {\sum \limits_{k=1}^{N}}{a_{k}}\frac{{\partial ^{2k+1}}U(t,x)}{\partial {x^{2k+1}}}=\frac{\partial U(t,x)}{\partial t},\hspace{1em}t>0,\hspace{0.1667em}x\in \mathbb{R},\]
subject to the random initial condition
(2.10)
\[ U(0,x)=\eta (x),\hspace{1em}x\in \mathbb{R},\]
where $\eta =\{\eta (x),x\in \mathbb{R}\}$ is a random process and ${\{{a_{k}}\}_{k=1}^{N}}$ are some constants.
The next theorem gives the conditions of the existence of the solutions of the equation above with a φ-sub-Gaussian initial condition $\eta (x)$ (see [2, 7]).
Theorem 2.4.
Let $\eta =\{\eta (x),x\in \mathbb{R}\}$ be a real harmonizable (see Definition 2.2) and strictly φ-sub-Gaussian random process. Also let $Z=\{Z(u),u\ge 0\}$ be a function admissible for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$. Assume that the following integral converges:
(2.11)
\[ {\int _{\mathbb{R}}}{\int _{\mathbb{R}}}{\left|\lambda \right|^{2N+1}}{\left|\mu \right|^{2N+1}}Z\left({u_{0}}+{\left|\lambda \right|^{2N+1}}\right)Z\left({u_{0}}+{\left|\mu \right|^{2N+1}}\right)d|{\Gamma _{y}}(\lambda ,\mu )|<\infty .\]
Then
(2.12)
\[ U(t,x)={\int _{\mathbb{R}}}I(t,x,\lambda )\hspace{0.1667em}dy(\lambda )\]
is the classical solution to the problem (2.9)–(2.10), that is, $U(t,x)$ satisfies Equation (2.9) with probability one and $U(0,x)=\eta (x)$. Here
(2.13)
\[ I(t,x,\lambda )=\kappa \left(\lambda x+t{\sum \limits_{k=1}^{N}}{a_{k}}{\lambda ^{2k+1}}{(-1)^{k}}\right),\]
and $\kappa (v)=\cos v$ or $\kappa (v)=\sin v$ for the cases when $\eta (x)={\textstyle\int _{\mathbb{R}}}\cos (xu)\hspace{0.1667em}dy(u)$ or $\eta (x)={\textstyle\int _{\mathbb{R}}}\sin (xu)\hspace{0.1667em}dy(u)$, respectively.
Remark 2.4.
Note that under the condition (2.11) all the integrals
(2.14)
\[ {\int _{\mathbb{R}}}{\lambda ^{s}}I\left(t,x,\lambda \right)dy\left(\lambda \right),\hspace{1em}s=0,1,2,\dots ,2N+1,\]
converge uniformly in probability for $|x|\le A$ and $0\le t\le T$ for all $A>0$, $T>0$. This guarantees that the derivatives of orders $s=1,2,\dots ,2N+1$ of the solution $U(t,x)$ given by (2.12) exist with probability one. In this sense we can treat $U(t,x)$ as a classical solution. We refer for more details to [2].
Remark 2.5.
A similar result can be stated for the case $\eta (x)={\textstyle\int _{-\infty }^{\infty }}(a\sin xu+b\cos xu)\hspace{0.1667em}\mathrm{d}y(u)$, where a and b are some real constants.
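To illustrate the kernel (2.13), take $N=1$ and $\kappa =\cos $, so that $I(t,x,\lambda )=\cos (\lambda x-{a_{1}}{\lambda ^{3}}t)$; a direct differentiation confirms that each such kernel satisfies (2.9):

```latex
% With \theta=\lambda x-a_{1}\lambda^{3}t:
\frac{\partial}{\partial t}\cos(\lambda x-a_{1}\lambda^{3}t)
  =a_{1}\lambda^{3}\sin(\lambda x-a_{1}\lambda^{3}t)
  =a_{1}\frac{\partial^{3}}{\partial x^{3}}
     \cos(\lambda x-a_{1}\lambda^{3}t),
% since \partial^{3}_{x}\cos\theta=\lambda^{3}\sin\theta.
```

so the mean-square integral $U(t,x)={\textstyle\int _{\mathbb{R}}}I(t,x,\lambda )\hspace{0.1667em}dy(\lambda )$ inherits the equation term by term; for $N=1$, Equation (2.9) is the linearized KdV (Airy) equation $\frac{\partial U}{\partial t}={a_{1}}\frac{{\partial ^{3}}U}{\partial {x^{3}}}$.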
Remark 2.6.
Let $\varphi (x)=\frac{|x{|^{p}}}{p}$, $p>1$, for sufficiently large x. Then the statement of Theorem 2.4 holds if the following integral converges:
(2.15)
\[ {\int _{\mathbb{R}}}{\int _{\mathbb{R}}}{\left|\lambda \mu \right|^{2N+1}}{\left(\ln \left(1+\lambda \right)\ln \left(1+\mu \right)\right)^{\alpha }}d|{\Gamma _{y}}\left(\lambda ,\mu \right)|,\]
where α is a constant such that $\alpha >1-\frac{1}{p}$ (see [2]).
3 Main results
3.1 Some auxiliary estimates
Let us consider a separable metric space $(\mathbf{T},d)$, where $\mathbf{T}=\{{a_{i}}\le {t_{i}}\le {b_{i}},\hspace{0.1667em}i=1,2\}$ and $d(t,s)={\max _{i=1,2}}|{t_{i}}-{s_{i}}|$, $t=({t_{1}},{t_{2}})$, $s=({s_{1}},{s_{2}})$.
Theorem 3.1.
Let $X=\{X(t),t\in \mathbf{T}\}$ be a separable φ-sub-Gaussian random process such that ${\varepsilon _{0}}={\sup _{t\in \mathbf{T}}}{\tau _{\varphi }}(X(t))<\infty $ and
(3.1)
\[ \underset{\begin{array}{c}d(t,s)\le h,\\ {} t,s\in \mathbf{T}\end{array}}{\sup }{\tau _{\varphi }}(X(t)-X(s))\le \sigma (h),\]
where $\{\sigma (h),\hspace{0.2778em}0<h\le {\max _{i=1,2}}|{b_{i}}-{a_{i}}|\}$ is a monotonically increasing continuous function such that $\sigma (h)\to 0$ as $h\to 0$, and for $0<\varepsilon \le {\gamma _{0}}$
(3.2)
\[ {\tilde{I}_{r}}(\varepsilon ):={\underset{0}{\overset{\varepsilon }{\int }}}r\left(\left(\frac{{b_{1}}-{a_{1}}}{2{\sigma ^{(-1)}}(v)}+1\right)\left(\frac{{b_{2}}-{a_{2}}}{2{\sigma ^{(-1)}}(v)}+1\right)\right)\hspace{0.1667em}dv<\infty ,\]
where ${\gamma _{0}}=\sigma ({\max _{i=1,2}}|{b_{i}}-{a_{i}}|)$ and $r(x),x\ge 1$, is defined in Theorem 2.3. Then for all $0<\theta <1$ and $u>0$ it holds
(3.3)
\[ P\left\{\underset{t\in \mathbf{T}}{\sup }|X(t)|\ge u\right\}\le 2\tilde{A}(\theta ,u),\]
where
\[ \tilde{A}(\theta ,u)=\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{r^{(-1)}}\left(\frac{{\tilde{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right).\]
Proof.
To prove this theorem, we apply Theorem 2.3 in the case of the separable metric space $(\mathbf{T},d)$ with $T=\{{a_{i}}\le {t_{i}}\le {b_{i}},\hspace{0.1667em}i=1,2\}$ and $d(t,s)={\max _{i=1,2}}|{t_{i}}-{s_{i}}|$, $t=({t_{1}},{t_{2}})$, $s=({s_{1}},{s_{2}})$.
In particular, condition (3.1) means that
\[ \underset{\begin{array}{c}d(t,s)\le h,\\ {} t,s\in \mathbf{T}\end{array}}{\sup }{\tau _{\varphi }}(X(t)-X(s))=\underset{\begin{array}{c}d(t,s)\le h,\\ {} t,s\in \mathbf{T}\end{array}}{\sup }{\rho _{X}}(t,s)\le \sigma (h),\hspace{1em}0<h\le \underset{i=1,2}{\max }|{b_{i}}-{a_{i}}|.\]
From the fact that the process X is separable on $(\mathbf{T},{\rho _{X}})$ and the function $\{\sigma (h),\hspace{0.2778em}0<h\le {\max _{i=1,2}}|{b_{i}}-{a_{i}}|\}$ is a monotonically increasing continuous function, we get that for $\varepsilon \le {\gamma _{0}}$ the smallest number of elements in an ε-covering of the pseudometric space $(\mathbf{T},{\rho _{X}})$ can be estimated as the smallest number of elements in a $({\sigma ^{(-1)}}(\varepsilon ))$-covering of the metric space $(\mathbf{T},d)$ as follows:
\[ N(\varepsilon )\le \left(\frac{{b_{1}}-{a_{1}}}{2{\sigma ^{(-1)}}(\varepsilon )}+1\right)\left(\frac{{b_{2}}-{a_{2}}}{2{\sigma ^{(-1)}}(\varepsilon )}+1\right).\]
Hence, from condition (3.2) we get that
\[ {I_{r}}(\varepsilon )={\underset{0}{\overset{\varepsilon }{\int }}}r(N(v))\hspace{0.1667em}dv\le {\tilde{I}_{r}}(\varepsilon )<\infty ,\hspace{1em}\varepsilon \le {\gamma _{0}},\]
that is, the conditions of Theorem 2.3 are satisfied for the process X. Finally, taking into account the estimates above and the properties of the function r, we derive the estimate for the distribution of the supremum of the process X for all $0<\theta <1$ and $u>0$:
\[\begin{array}{l}\displaystyle P\{\underset{t\in \mathbf{T}}{\sup }|X(t)|\ge u\}\le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{r^{(-1)}}\left(\frac{{I_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right)\\ {} \displaystyle \le 2\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{r^{(-1)}}\left(\frac{{\tilde{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right)=2\tilde{A}(\theta ,u).\end{array}\]
□
As can be seen from Theorem 3.1, it is crucial to guarantee some kind of continuity of X on T in the form (3.1), that is, with respect to the norm ${\tau _{\varphi }}$ induced by the process X itself. Fulfilment of condition (3.1) enables us to write down the upper bound (3.3) for the distribution of the supremum of a φ-sub-Gaussian process $X=\{X(t),t\in \mathbf{T}\}$ defined on a separable metric space $(\mathbf{T},d)$.
Below we present as a separate theorem a very useful result giving the conditions for the estimate (3.1) to hold for the field $\{U(t,x),\hspace{0.1667em}a\le t\le b,\hspace{0.1667em}c\le x\le d\}$ representing the solution to (2.9)–(2.10).
Theorem 3.2.
Let $y=\{y(u),\hspace{0.1667em}u\in \mathbb{R}\}$ be a strictly φ-sub-Gaussian random process with a determining constant ${C_{y}}$ and $U(t,x)={\textstyle\int _{-\infty }^{\infty }}I(t,x,\lambda )\hspace{0.1667em}dy(\lambda )$, where $I(t,x,\lambda )$ is given in Theorem 2.4, $a\le t\le b$, $c\le x\le d$. Assume that $U(t,x)$ exists and is continuous with probability one (this holds under the assumptions of Theorem 2.4). Let $\mathsf{E}y(t)y(s)={\Gamma _{y}}(s,t)$. Assume that $\{Z(u),u\ge 0\}$ is an admissible function for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$. If the integral
(3.4)
\[\begin{aligned}{}{C_{Z}^{2}}& ={\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}\left(Z\Big(\frac{|\lambda |}{2}+{u_{0}}\Big)+Z\Big(\frac{1}{2}\Big|{\sum \limits_{k=1}^{N}}{a_{k}}{\lambda ^{2k+1}}{(-1)^{k}}\Big|+{u_{0}}\Big)\right)\\ {} & \times \left(Z\Big(\frac{|\mu |}{2}+{u_{0}}\Big)+Z\Big(\frac{1}{2}\hspace{0.1667em}\Big|{\sum \limits_{k=1}^{N}}{a_{k}}{\mu ^{2k+1}}{(-1)^{k}}\Big|+{u_{0}}\Big)\right)\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|\end{aligned}\]
converges, then there exists the function
(3.5)
\[ \sigma (h)=2{C_{y}}{C_{Z}}{\Big(Z\Big(\frac{1}{h}+{u_{0}}\Big)\Big)^{-1}},\hspace{1em}0<h<\max (b-a,d-c),\]
such that
(3.6)
\[ \underset{\begin{array}{c}t,{t_{1}}\in [a,b]:|t-{t_{1}}|\le h\\ {} x,{x_{1}}\in [c,d]:|x-{x_{1}}|\le h\end{array}}{\sup }{\tau _{\varphi }}(U(t,x)-U({t_{1}},{x_{1}}))\le \sigma (h).\]
This result was obtained in the paper [7] as an intermediate statement in the course of the proof of Theorem 5.1. To make the present paper self-contained, we present in Appendix the main steps of the proof.
3.2 On the distribution of the supremum of the solution to the problem (2.9)–(2.10)
Now we have all the necessary tools to derive the estimate for the distribution of the supremum of the field $U(t,x)$ representing the solution to (2.9)–(2.10).
Theorem 3.3.
Let $y=\{y(u),\hspace{0.1667em}u\in \mathbb{R}\}$ be a strictly φ-sub-Gaussian random process with a determining constant ${C_{y}}$ and $U(t,x)={\textstyle\int _{-\infty }^{\infty }}I(t,x,\lambda )\hspace{0.1667em}dy(\lambda )$, where $I(t,x,\lambda )$ is given in Theorem 2.4, $a\le t\le b$, $c\le x\le d$. Assume that for $U(t,x)$ the conditions of Theorem 3.2 hold. Let $r(x)$, $x\ge 1$, be a non-negative, monotone increasing function such that the function $r({e^{x}}),x\ge 0$, is convex, and condition (3.2) is satisfied for $\sigma (h)$ given by (3.5).
Then for all $0<\theta <1$ and $u>0$ the following inequality holds true:
(3.7)
\[ P\big\{\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }|U(t,x)|>u\big\}\le 2\hat{A}(\theta ,u),\]
where
(3.8)
\[\begin{aligned}{}\hat{A}(\theta ,u)& =\exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}{r^{(-1)}}\left(\frac{{\hat{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right),\end{aligned}\]
(3.9)
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& ={\int _{0}^{\delta }}r\left(\left[\Big(\frac{b-a}{2}\Big({Z^{(-1)}}\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)-{u_{0}}\Big)+1\Big)\right.\right.\\ {} & \left.\left.\times \Big(\frac{d-c}{2}\Big({Z^{(-1)}}\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)-{u_{0}}\Big)+1\Big)\right]\right)\hspace{0.1667em}ds,\\ {} {\varepsilon _{0}}& =\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }{\tau _{\varphi }}(U(t,x)),\\ {} {\gamma _{0}}& =\frac{2{C_{y}}{C_{Z}}}{Z(\frac{1}{\varkappa }+{u_{0}})},\hspace{1em}\varkappa =\max (b-a,d-c).\end{aligned}\]
Proof.
The assertion of this theorem follows from Theorems 3.1 and 3.2. Since the conditions of Theorem 3.2 are satisfied, then there exists the function $\sigma (h)=2{C_{y}}{C_{Z}}{\big(Z\big(\frac{1}{h}+{u_{0}}\big)\big)^{-1}},0<h<\max (b-a,d-c)$, such that
\[ \underset{\begin{array}{c}t,{t_{1}}\in [a,b]:|t-{t_{1}}|\le h\\ {} x,{x_{1}}\in [c,d]:|x-{x_{1}}|\le h\end{array}}{\sup }{\tau _{\varphi }}(U(t,x)-U({t_{1}},{x_{1}}))\le \sigma (h).\]
In this case,
\[ {\sigma ^{(-1)}}(v)=\Big({Z^{(-1)}}{\Big(\frac{2{C_{y}}{C_{Z}}}{v}\Big)-{u_{0}}\Big)^{-1}},\hspace{1em}0<v<\frac{2{C_{y}}{C_{Z}}}{Z(\frac{1}{\varkappa }+{u_{0}})}={\gamma _{0}},\]
and for ${\varepsilon _{0}}$ the upper bound (A.1) holds (see Appendix A).  □
Remark 3.1.
The derivation of our main result is based on Theorem 2.3, and due to this we present the bounds for the distribution of the supremum of the process $U(t,x)$ in a form different from those obtained in the paper [7]. This form of bounds can be more useful in particular situations, making it possible to calculate explicit expressions for the bounds.
Now we will specify the statement of Theorem 3.3 for particular choices of the admissible function Z and the function φ.
Example 3.1.
Consider $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1<\alpha \le 2$. Then ${\varphi ^{\ast }}(x)=\frac{|x{|^{\gamma }}}{\gamma }$, where $\gamma \ge 2$ and $\frac{1}{\alpha }+\frac{1}{\gamma }=1$. Hence,
\[ \exp \Big\{-{\varphi ^{\ast }}\Big(\frac{u(1-\theta )}{{\varepsilon _{0}}}\Big)\Big\}=\exp \Big\{-\frac{{u^{\gamma }}{(1-\theta )^{\gamma }}}{\gamma {\varepsilon _{0}^{\gamma }}}\Big\}.\]
Now it is necessary to choose an admissible function Z, to calculate the corresponding constant ${C_{Z}}$ and the function $\sigma (h)$, and to choose a function r for which the integral ${\hat{I}_{r}}$ can be estimated. So, let us choose the admissible function $Z(u)=|u{|^{\rho }}$, $0<\rho \le 1$.
In this case ${u_{0}}=0$, ${Z^{(-1)}}(u)={u^{\frac{1}{\rho }}}$, $u>0$, and
(3.10)
\[\begin{aligned}{}{C_{Z}^{2}}& ={\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}\Big({\Big(\frac{|\lambda |}{2}\Big)^{\rho }}+\Big|\frac{1}{2}{\sum \limits_{k=1}^{N}}{a_{k}}{\lambda ^{2k+1}}{(-1)^{k}}{\Big|^{\rho }}\Big)\\ {} & \times \Big({\Big(\frac{|\mu |}{2}\Big)^{\rho }}+\Big|\frac{1}{2}{\sum \limits_{k=1}^{N}}{a_{k}}{\mu ^{2k+1}}{(-1)^{k}}{\Big|^{\rho }}\Big)\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|\end{aligned}\]
(3.11)
\[\begin{aligned}{}& \le {\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}\frac{1}{{2^{2\rho }}}\Big(|\lambda {|^{\rho }}+{\Big({\sum \limits_{k=1}^{N}}|{a_{k}}||\lambda {|^{2k+1}}\Big)^{\rho }}\Big)\\ {} & \times \Big(|\mu {|^{\rho }}+{\Big({\sum \limits_{k=1}^{N}}|{a_{k}}||\mu {|^{2k+1}}\Big)^{\rho }}\Big)\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|.\end{aligned}\]
This integral converges if the following integral converges:
(3.12)
\[ {\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}{\left|\lambda \right|^{(2N+1)\rho }}{\left|\mu \right|^{(2N+1)\rho }}\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|<\infty .\]
Note that for the existence of the solution $U(t,x)$ we have to impose the condition (2.11), which for the admissible function $Z(u)=|u{|^{\rho }}$ takes the form
(3.13)
\[ {\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}{\left|\lambda \right|^{(2N+1)(\rho +1)}}{\left|\mu \right|^{(2N+1)(\rho +1)}}\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|<\infty ,\]
and implies the fulfilment of (3.12). Therefore, ${C_{Z}^{2}}$ is well defined.
If (3.13) holds true, then we can define $\sigma (h)$ by means of the formula (3.5), and, for our choice of Z, we have that
(3.14)
\[ \sigma (h)=2{C_{y}}{C_{Z}}{\Big(Z\Big(\frac{1}{h}\Big)\Big)^{-1}}=C{h^{\rho }},\]
where we have denoted $C=2{C_{y}}{C_{Z}}$.
Therefore, in view of Theorem 3.2, condition (3.6) holds with $\sigma (h)$ of the form (3.14), that is, we have the Hölder continuity of sample paths of the solution $U(t,x)$.
Let $r(v)={v^{\beta }}-1$, $0<\beta <\rho /2$; then ${r^{(-1)}}(v)={(v+1)^{1/\beta }}$. For such $r(v)$ and the above choice of Z we have
(3.15)
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& ={\int _{0}^{\delta }}r\left(\left[\Big(\frac{b-a}{2}\Big({Z^{(-1)}}\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)-{u_{0}}\Big)+1\Big)\right.\right.\\ {} & \left.\left.\times \Big(\frac{d-c}{2}\Big({Z^{(-1)}}\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)-{u_{0}}\Big)+1\Big)\right]\right)\hspace{0.1667em}ds\\ {} & \le {\int _{0}^{\delta }}\left({\left(\frac{\varkappa }{2}\frac{{(2{C_{Z}}{C_{y}})^{1/\rho }}}{{s^{1/\rho }}}+1\right)^{2\beta }}-1\right)\hspace{0.1667em}ds.\end{aligned}\]
Consider $\delta \in (0,\theta {\varepsilon _{0}}]$ and let us choose θ such that $\frac{\varkappa }{2}{\big(\frac{2{C_{Z}}{C_{y}}}{\theta {\varepsilon _{0}}}\big)^{1/\rho }}>1$, that is, $\theta \in (0,\frac{2{C_{Z}}{C_{y}}}{{\varepsilon _{0}}}{\big(\frac{\varkappa }{2}\big)^{\rho }})$. Then we can write the following estimate:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\hat{I}_{r}}(\delta )& \displaystyle \le & \displaystyle {\int _{0}^{\delta }}\left({\Big(\varkappa {\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{1/\rho }}\Big)^{2\beta }}-1\right)\hspace{0.1667em}ds\\ {} & \displaystyle =& \displaystyle {(2{C_{Z}}{C_{y}})^{2\beta /\rho }}{\varkappa ^{2\beta }}{\left(1-\frac{2\beta }{\rho }\right)^{-1}}{\delta ^{1-2\beta /\rho }}-\delta .\end{array}\]
Suppose that $\theta {\varepsilon _{0}}<{\gamma _{0}}$, then
\[ {r^{(-1)}}\left(\frac{{\hat{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right)\le {(2{C_{Z}}{C_{y}})^{2/\rho }}{\varkappa ^{2}}{\left(1-\frac{2\beta }{\rho }\right)^{-1/\beta }}{(\theta {\varepsilon _{0}})^{-2/\rho }},\]
and
\[ \hat{A}(\theta ,u)\le \exp \Big\{-\frac{{u^{\gamma }}{(1-\theta )^{\gamma }}}{\gamma {\varepsilon _{0}^{\gamma }}}\Big\}{(2{C_{Z}}{C_{y}})^{2/\rho }}{\varkappa ^{2}}{\left(1-\frac{2\beta }{\rho }\right)^{-1/\beta }}{(\theta {\varepsilon _{0}})^{-2/\rho }}.\]
If $\beta \to 0$, then ${\big(1-\frac{2\beta }{\rho }\big)^{-1/\beta }}\to \hspace{0.1667em}{e^{2/\rho }}$, and we obtain
\[ \hat{A}(\theta ,u)\le \exp \Big\{-\frac{{u^{\gamma }}{(1-\theta )^{\gamma }}}{\gamma {\varepsilon _{0}^{\gamma }}}\Big\}{(2e{C_{Z}}{C_{y}})^{2/\rho }}{\varkappa ^{2}}{(\theta {\varepsilon _{0}})^{-2/\rho }},\]
for $\theta \in (0,\frac{2{C_{Z}}{C_{y}}}{{\varepsilon _{0}}}{\big(\frac{\varkappa }{2}\big)^{\rho }})$. Therefore, for such θ we obtain
\[ P\big\{\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }|U(t,x)|>u\big\}\le 2\exp \Big\{-\frac{{u^{\gamma }}{(1-\theta )^{\gamma }}}{\gamma {\varepsilon _{0}^{\gamma }}}\Big\}{(2e{C_{Z}}{C_{y}})^{2/\rho }}{\varkappa ^{2}}{(\theta {\varepsilon _{0}})^{-2/\rho }}.\]
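The limit ${\big(1-\frac{2\beta }{\rho }\big)^{-1/\beta }}\to {e^{2/\rho }}$ invoked above can be verified by taking logarithms:

```latex
\[
\ln {\Big(1-\frac{2\beta }{\rho }\Big)^{-1/\beta }}
  =-\frac{1}{\beta }\ln \Big(1-\frac{2\beta }{\rho }\Big)
  =\frac{2}{\rho }+O(\beta )\longrightarrow \frac{2}{\rho },\qquad \beta \to 0.
\]
```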
Let $y=\{y(u),u\in \mathbb{R}\}$ be a centered Gaussian random process. Then ${C_{y}}=1$, $\varphi (x)=\frac{{x^{2}}}{2}$, ${\varphi ^{\ast }}(x)=\frac{{x^{2}}}{2}$.
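Indeed, for $\varphi (x)=\frac{{x^{2}}}{2}$ the Young–Fenchel transform is computed directly, the supremum being attained at $y=x$:

```latex
\[
{\varphi ^{\ast }}(x)=\underset{y\in \mathbb{R}}{\sup }\Big(xy-\frac{{y^{2}}}{2}\Big)
  =x\cdot x-\frac{{x^{2}}}{2}=\frac{{x^{2}}}{2}.
\]
```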
The choice of the function φ in Example 3.1 is motivated by the fact that the corresponding class of random processes is a natural generalization of Gaussian processes. This example is rather simple and, at the same time, very illustrative and instructive. Therefore, the derivations above are worth summarizing as a separate statement.
Corollary 3.1.
Let $y=\{y(u),\hspace{0.1667em}u\in \mathbb{R}\}$ be a real strictly φ-sub-Gaussian random process with $\varphi (x)=\frac{|x{|^{\alpha }}}{\alpha }$, $1<\alpha \le 2$, with determining constant ${C_{y}}$ and covariance function ${\Gamma _{y}}(s,t)=\mathsf{E}y(t)y(s)$. Let $U(t,x)={\textstyle\int _{-\infty }^{\infty }}I(t,x,\lambda )\hspace{0.1667em}dy(\lambda )$, where $I(t,x,\lambda )$ is given by (2.13), $a\le t\le b$, $c\le x\le d$, and ${\varepsilon _{0}}={\sup _{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}}{\tau _{\varphi }}(U(t,x))$. Further, let the constant $\rho \in (0,1]$ be such that (3.13) holds. Then
(i) $U(t,x)$ exists, is continuous with probability one, and its sample paths satisfy the following Hölder continuity condition:
\[ \underset{\begin{array}{c}t,{t_{1}}\in [a,b]:|t-{t_{1}}|\le h\\ {} x,{x_{1}}\in [c,d]:|x-{x_{1}}|\le h\end{array}}{\sup }{\tau _{\varphi }}(U(t,x)-U({t_{1}},{x_{1}}))\le 2{C_{Z}}{C_{y}}{h^{\rho }},\]
where ${C_{Z}}$ is defined in (3.10);
(ii) for all $0<\theta <1$ such that $\theta {\varepsilon _{0}}<2{C_{Z}}{C_{y}}{(\varkappa /2)^{\rho }}$ with $\varkappa =\max (b-a,d-c)$, γ such that $\frac{1}{\alpha }+\frac{1}{\gamma }=1$, and $u>0$ the following inequality holds true:
\[ P\big\{\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }|U(t,x)|>u\big\}\le 2\exp \Big\{-\frac{{u^{\gamma }}{(1-\theta )^{\gamma }}}{\gamma {\varepsilon _{0}^{\gamma }}}\Big\}{(2e{C_{Z}}{C_{y}})^{2/\rho }}{\varkappa ^{2}}{(\theta {\varepsilon _{0}})^{-2/\rho }}.\]
In particular, if the process $y=\{y(u),\hspace{0.1667em}u\in \mathbb{R}\}$ is Gaussian, then
(3.18)
\[ P\big\{\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }|U(t,x)|>u\big\}\le 2\exp \Big\{-\frac{{u^{2}}{(1-\theta )^{2}}}{2{\varepsilon _{0}^{2}}}\Big\}{(2e{C_{Z}})^{2/\rho }}{\varkappa ^{2}}{(\theta {\varepsilon _{0}})^{-2/\rho }}\]
for all $0<\theta <1$ such that $\theta {\varepsilon _{0}}<2{C_{Z}}{(\varkappa /2)^{\rho }}$ and $u>0$.
Example 3.2.
Consider $\varphi (x)=\exp \{|x|\}-|x|-1$, $x\in \mathbb{R}$. Then ${\varphi ^{\ast }}(x)=(|x|+1)\ln (|x|+1)-|x|,\hspace{0.1667em}x\in \mathbb{R}$. Hence,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \hat{A}(\theta ,u)& \displaystyle =& \displaystyle \exp \left\{-\left(\frac{u(1-\theta )}{{\varepsilon _{0}}}+1\right)\ln \left(\frac{u(1-\theta )}{{\varepsilon _{0}}}+1\right)+\frac{u(1-\theta )}{{\varepsilon _{0}}}\right\}\\ {} & \displaystyle \times & \displaystyle {r^{(-1)}}\left(\frac{{\hat{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right).\end{array}\]
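The expression for ${\varphi ^{\ast }}$ used here can be checked directly: for $x\ge 0$, the function $g(y)=xy-({e^{y}}-y-1)$ is maximized at $y=\ln (x+1)$, where ${g^{\prime }}(y)=x-{e^{y}}+1=0$, so

```latex
\[
{\varphi ^{\ast }}(x)=x\ln (x+1)-\big({e^{\ln (x+1)}}-\ln (x+1)-1\big)
  =(x+1)\ln (x+1)-x,
\]
```

and the case $x<0$ follows by the symmetry of φ.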
Let us take the function $Z(u)={\ln ^{\alpha }}(u+1)$, $u\ge 0$, $\alpha >1$, as an admissible function for the space ${\mathrm{Sub}_{\varphi }}(\Omega )$. In this case,
\[\begin{aligned}{}{u_{0}}& ={e^{\alpha }}-1,\hspace{2em}{Z^{(-1)}}(v)=\exp \left\{{v^{\frac{1}{\alpha }}}\right\}-1,\hspace{2em}Z(v+{u_{0}})={\ln ^{\alpha }}(v+{e^{\alpha }}),\\ {} {C_{Z}^{2}}& ={\int _{-\infty }^{\infty }}{\int _{-\infty }^{\infty }}\Big({\ln ^{\alpha }}\Big(\frac{|\lambda |}{2}+{e^{\alpha }}\Big)+{\ln ^{\alpha }}\Big(\frac{1}{2}\Big|{\sum \limits_{k=1}^{N}}{a_{k}}{\lambda ^{2k+1}}{(-1)^{k}}\Big|+{e^{\alpha }}\Big)\Big)\\ {} & \times \Big({\ln ^{\alpha }}\Big(\frac{|\mu |}{2}+{e^{\alpha }}\Big)+{\ln ^{\alpha }}\Big(\frac{1}{2}\Big|{\sum \limits_{k=1}^{N}}{a_{k}}{\mu ^{2k+1}}{(-1)^{k}}\Big|+{e^{\alpha }}\Big)\Big)\hspace{0.1667em}d|{\Gamma _{y}}(\lambda ,\mu )|.\end{aligned}\]
The above integral converges if the following integral converges
That is, if condition (3.19) holds true, then Theorem 3.3 holds with ${C_{Z}}$ given by the expression above.
Next, let us evaluate ${\hat{I}_{r}}(\delta )$:
(3.20)
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& ={\int _{0}^{\delta }}r\left(\left[\Big(\frac{b-a}{2}\Big(\exp \left\{{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}-1-({e^{\alpha }}-1)\Big)+1\Big)\right.\right.\\ {} & \left.\left.\times \Big(\frac{d-c}{2}\Big(\exp \left\{{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}-1-({e^{\alpha }}-1)\Big)+1\Big)\right]\right)\hspace{0.1667em}ds\\ {} & ={\int _{0}^{\delta }}r\left(\left[\Big(\frac{b-a}{2}\Big(\exp \left\{{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}-{e^{\alpha }}\Big)+1\Big)\right.\right.\\ {} & \left.\left.\times \Big(\frac{d-c}{2}\Big(\exp \left\{{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}-{e^{\alpha }}\Big)+1\Big)\right]\right)\hspace{0.1667em}ds.\end{aligned}\]
Let $\frac{d-c}{2}\hspace{0.1667em}{e^{\alpha }}>1$ and $\frac{b-a}{2}{e^{\alpha }}>1$, that is, choose some $\alpha >\max \big\{1,\ln \big(\frac{2}{b-a}\big),\ln \big(\frac{2}{d-c}\big)\big\}$. Then
(3.21)
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& \le {\int _{0}^{\delta }}r\left(\frac{(b-a)(d-c)}{4}\exp \left\{2{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}\right)\hspace{0.1667em}ds\\ {} & \le {\int _{0}^{\delta }}r\left(\frac{{\varkappa ^{2}}}{4}\exp \left\{2{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}\right)\hspace{0.1667em}ds.\end{aligned}\]
In our case, the function $r(v)=\ln v$, $v\ge 1$, satisfies the conditions of Theorem 2.3 and is convenient for the estimation of ${\hat{I}_{r}}(\delta )$. Substituting it in the expression above, we get
(3.22)
\[\begin{aligned}{}{\hat{I}_{r}}(\delta )& \le {\int _{0}^{\delta }}\ln \left(\frac{{\varkappa ^{2}}}{4}\exp \left\{2{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\right\}\right)\hspace{0.1667em}ds\\ {} & =\delta \ln \left(\frac{{\varkappa ^{2}}}{4}\right)+{\int _{0}^{\delta }}2{\Big(\frac{2{C_{Z}}{C_{y}}}{s}\Big)^{\frac{1}{\alpha }}}\hspace{0.1667em}ds\\ {} & =\delta \ln \left(\frac{{\varkappa ^{2}}}{4}\right)+\frac{2{(2{C_{Z}}{C_{y}})^{\frac{1}{\alpha }}}{\delta ^{1-\frac{1}{\alpha }}}}{1-\frac{1}{\alpha }}.\end{aligned}\]
Since ${r^{(-1)}}(v)={e^{v}}$, $v\ge 0$, for such $\theta \in (0,1)$ that $\theta {\varepsilon _{0}}<{\gamma _{0}}$ we obtain
\[ {r^{(-1)}}\left(\frac{{\hat{I}_{r}}(\min (\theta {\varepsilon _{0}},{\gamma _{0}}))}{\theta {\varepsilon _{0}}}\right)\le \exp \left\{\ln \left(\frac{{\varkappa ^{2}}}{4}\right)+\frac{2\alpha }{\alpha -1}{\left(\frac{2{C_{Z}}{C_{y}}}{\theta {\varepsilon _{0}}}\right)^{\frac{1}{\alpha }}}\right\}\]
Finally, for all $u>0$ we arrive at the following inequality:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{\underset{\begin{array}{c}a\le t\le b\\ {} c\le x\le d\end{array}}{\sup }|U(t,x)|>u\big\}& \displaystyle \le & \displaystyle 2\hat{A}(\theta ,u)\\ {} & \displaystyle \le & \displaystyle 2\exp \left\{-\left(\frac{u(1-\theta )}{{\varepsilon _{0}}}+1\right)\ln \left(\frac{u(1-\theta )}{{\varepsilon _{0}}}+1\right)\right.\\ {} & \displaystyle +& \displaystyle \left.\frac{u(1-\theta )}{{\varepsilon _{0}}}+\ln \left(\frac{{\varkappa ^{2}}}{4}\right)+\frac{2\alpha }{\alpha -1}{\left(\frac{2{C_{Z}}{C_{y}}}{\theta {\varepsilon _{0}}}\right)^{\frac{1}{\alpha }}}\right\}.\end{array}\]