1 Introduction
A self-similar process is a process whose distribution is invariant under suitable time and space scaling. Namely, a stochastic process $\{X(t),t\in \mathbb{R}\}$ is self-similar with index $H\ge 0$ if for any $a>0$ $\{X(at),t\in \mathbb{R}\}\stackrel{d}{=}\{{a}^{H}X(t),t\in \mathbb{R}\}$, where $\stackrel{d}{=}$ denotes the equality of the finite-dimensional distributions. The books by Embrechts & Maejima [6] and Samorodnitsky & Taqqu [14] are devoted to the theory of self-similar processes.
A classical example of a self-similar process with index $H\in (0,1)$ is the fractional Brownian motion $\{B_{H}(t),t\in \mathbb{R}_{+}\}$ ($\mathbb{R}_{+}=[0,+\infty )$) with Hurst index $H$. This process is centered, has stationary increments, and its covariance function is
\[ \mathbf{E}\big(B_{H}(t)B_{H}(s)\big)=\frac{1}{2}\big({t}^{2H}+{s}^{2H}-|t-s{|}^{2H}\big),\hspace{1em}t,s\in \mathbb{R}_{+}.\]
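As a quick numerical illustration (a minimal sketch, not part of the argument; it assumes the NumPy library, and the value $H=0.7$ and the points are arbitrary choices), the self-similarity of $B_H$ can be checked at the covariance level:

```python
import numpy as np

H = 0.7  # arbitrary Hurst index in (0, 1)

def fbm_cov(t, s):
    # Covariance of fractional Brownian motion with Hurst index H.
    return 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H))

t, s, a = 1.3, 2.4, 5.0
# Self-similarity on the covariance level: C(at, as) = a^{2H} C(t, s)
assert np.isclose(fbm_cov(a*t, a*s), a**(2*H) * fbm_cov(t, s))
```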
The investigation of self-similar random fields (multiparameter processes) was motivated by the evidence of self-similarity in phenomena from climatology, environmental sciences, etc. (see [10, 13]). In particular, so-called anisotropic random fields are used for modeling phenomena in spatial statistics, statistical hydrology and image processing (see [2, 3, 5]). The attempts to extend the self-similarity concept from processes to fields have resulted in several approaches. In this paper we present three different definitions of self-similar fields and establish the interconnection between them. We show that the covariance function of a centered Gaussian field determines the self-similarity property and its type. The definitions of the fractional Brownian fields and sheets are also presented, and it is proved that they are self-similar fields, but according to different definitions. We consider fields which are self-similar with respect to every coordinate with an individual index. Such fields are usually called anisotropic, and in the Brownian case they are usually called Brownian sheets. The paper’s aim is to investigate the asymptotic growth of the sample paths of these fields.
We introduce the notions of upper and lower functions for the sample paths of a random field, similar to those in [15], and prove the zero–one law for such functions in the case of growth at 0 and at $\infty $. We also assume the ergodicity of the scaling transformation. The ergodicity property should be verified separately in every particular case; this can easily be done for stationary fields and processes.
Let us also mention that a non-degenerate self-similar process with index $H>0$ cannot be stationary. However, there is a one-to-one correspondence between self-similar and stationary processes: for every self-similar process X with index $H>0$, its Lamperti transformation $Z=\{Z(t)={e}^{-tH}X({e}^{t})\}$ is a stationary process. The Lamperti transformation for anisotropic random fields was introduced in the paper [7], where the correspondence between self-similar and stationary random fields was established as well. In the present paper we prove that the ergodicity of the shift transformation for the corresponding stationary field is a sufficient condition for the ergodicity of the scaling transformation. For Gaussian fields the latter can be ensured by suitable conditions on the covariance function. In particular, we prove that the fractional Brownian sheet has an ergodic scaling transformation.
In this paper we prove strong limit theorems for the anisotropic growth of the sample paths of self-similar fields in terms of the upper and lower functions arising in the zero–one law. Similar theorems for self-similar stochastic processes were proved in the paper [8]. Applying these theorems to Gaussian fields allows us to obtain iterated log-type laws. Comparing the results for the fractional Brownian fields and sheets with the results of the paper [11], we conclude that our theorems yield more precise estimates.
The paper is organized as follows. Section 2 contains three different definitions of a self-similar field, and the interconnection between them is proved. We focus on the Gaussian case and present the definitions of the fractional Brownian field and sheet. In Section 3, the Lamperti transformation for self-similar fields is considered, and the connection between the scaling transformation for such a field and the shift transformation for the corresponding stationary field is established. It is shown that the fractional Brownian sheet has an ergodic scaling transformation. In Section 4, we introduce the definitions of upper and lower functions for the asymptotic growth of the sample paths of a self-similar field. The zero–one law is proved for fields with an ergodic scaling transformation. The strong limit theorems for the asymptotic growth of sample paths at 0 and at ∞ are obtained in Section 5. We also establish there upper bounds for the growth of a field with an ergodic scaling transformation in the case of slowly varying functions. In Section 6 we apply these theorems to prove iterated log-type laws for Gaussian self-similar fields.
2 Definition of self-similarity for random fields
Let us start by giving three different definitions of self-similar random fields and then show their interrelation. We assume that $\{\varOmega ,\mathcal{F},\mathbf{P}\}$ is a standard probability space on which all random objects considered below are defined.
Definition 2.1 ([14]).
A random field $\{X(\mathbf{t}),\mathbf{t}=(t_{1},\dots ,t_{n})\in {\mathbb{R}}^{n}\}$ is self-similar with index $H>0$ if for every $a>0$ $\{X(a\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}\stackrel{d}{=}\{{a}^{H}X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$.
Hereinafter we shall use the designation $\mathbf{x}\cdot \mathbf{y}$ to denote the vector consisting of the coordinatewise products of two vectors $\mathbf{x},\mathbf{y}\in {\mathbb{R}}^{n}$:
\[ \mathbf{x}\cdot \mathbf{y}=(x_{1}y_{1},\dots ,x_{n}y_{n}),\]
where $\mathbf{x}=(x_{1},\dots ,x_{n})$, $\mathbf{y}=(y_{1},\dots ,y_{n})$.
Definition 2.2 ([7]).
A random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ is self-similar with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$ if for any $\mathbf{a}=(a_{1},\dots ,a_{n})\in {(0,+\infty )}^{n}$
\[ \big\{X(\mathbf{a}\cdot \mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\big\}\stackrel{d}{=}\big\{{a_{1}^{H_{1}}}\cdots {a_{n}^{H_{n}}}X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\big\}.\]
In addition, it is possible to give a third definition of a self-similar field, as a field which is self-similar with respect to every time coordinate.
Definition 2.3.
A random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ is coordinatewise self-similar with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$ if for any $a>0$ and $1\le k\le n$
\[ \big\{X(t_{1},\dots ,at_{k},\dots ,t_{n}),\mathbf{t}\in {\mathbb{R}}^{n}\big\}\stackrel{d}{=}\big\{{a}^{H_{k}}X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\big\}.\]
Now let us explain how these definitions are related to each other.

Lemma 2.1.

A random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ is self-similar with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$ by Definition 2.2 if and only if it is coordinatewise self-similar with the same index by Definition 2.3.
Proof.
(2.2 ⇒ 2.3) Assume that a random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ is self-similar by Definition 2.2 with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$.
For arbitrary $a>0$ and $1\le k\le n$ we put $a_{1}=1,\dots ,a_{k-1}=1$, $a_{k}=a$, $a_{k+1}=1,\dots ,a_{n}=1$, $\mathbf{a}=(a_{1},\dots ,a_{n})$. Then Definition 2.2 applied with this $\mathbf{a}$ shows that X is self-similar with respect to the k-th coordinate with index $H_{k}$.
(2.2 ⇐ 2.3) Let a random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ be self-similar by Definition 2.3 with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$. Then, for any $\mathbf{a}\in {(0,+\infty )}^{n}$, $m\ge 1$, ${\mathbf{t}}^{1},\dots ,{\mathbf{t}}^{m}\in {\mathbb{R}}^{n}$, ${\mathbf{t}}^{i}=({t_{1}^{i}},\dots ,{t_{n}^{i}})$, $x_{1},\dots ,x_{m}\in \mathbb{R}$ we get
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{X\big(\mathbf{a}\cdot {\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg)\stackrel{\text{Def. 2.3}}{=}\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{{a_{1}^{H_{1}}}X\big({t_{1}^{i}},a_{2}{t_{2}^{i}},\dots ,a_{n}{t_{n}^{i}}\big)<x_{i}\big\}\Bigg)\\{} & & \displaystyle \hspace{1em}=\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{X\big({t_{1}^{i}},a_{2}{t_{2}^{i}},\dots ,a_{n}{t_{n}^{i}}\big)<x_{i}{a_{1}^{-H_{1}}}\big\}\Bigg)\\{} & & \displaystyle \hspace{1em}=\cdots =\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{X\big({\mathbf{t}}^{i}\big)<x_{i}{a_{1}^{-H_{1}}}\cdots {a_{n}^{-H_{n}}}\big\}\Bigg)\\{} & & \displaystyle \hspace{1em}=\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{{a_{1}^{H_{1}}}\cdots {a_{n}^{H_{n}}}X\big({\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg).\end{array}\]
The lemma is proved. □

Lemma 2.2.

If a random field $\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ is self-similar with index $\mathbf{H}=(H_{1},\dots ,H_{n})\in {\mathbb{R}_{+}^{n}}$ by Definition 2.2, then it is self-similar with index $H=H_{1}+\cdots +H_{n}$ by Definition 2.1.

Proof.
Let us put $\mathbf{a}=(a,\dots ,a)$ where $a>0$ is an arbitrary number. Then for any $m\ge 1$, ${\mathbf{t}}^{1},\dots ,{\mathbf{t}}^{m}\in {\mathbb{R}}^{n}$, ${\mathbf{t}}^{i}=({t_{1}^{i}},\dots ,{t_{n}^{i}})$, $x_{1},\dots ,x_{m}\in \mathbb{R}$ we obtain
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{X\big(a{\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg)& \displaystyle =\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{X\big(\mathbf{a}\cdot {\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg)\\{} & \displaystyle =\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{{a}^{H_{1}+\cdots +H_{n}}X\big({\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg)\\{} & \displaystyle =\mathbf{P}\Bigg({\bigcap \limits_{i=1}^{m}}\big\{{a}^{H}X\big({\mathbf{t}}^{i}\big)<x_{i}\big\}\Bigg).\end{array}\]
The lemma is proved. □

There is a strong correspondence between the type of the covariance function and the type of the self-similarity property for centered Gaussian random fields.
Lemma 2.3.
Let the covariance functions $C_{1},C_{2}:{\mathbb{R}}^{n}\times {\mathbb{R}}^{n}\to \mathbb{R}$ of the centered Gaussian fields $\{X_{1}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$ and $\{X_{2}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{n}\}$, respectively, satisfy for all $\mathbf{t},\mathbf{s}\in {\mathbb{R}}^{n}$ the following properties:
\[\begin{array}{l@{\hskip10.0pt}l}\textit{(i)}\hspace{1em}& C_{1}(\mathbf{a}\cdot \mathbf{t},\mathbf{a}\cdot \mathbf{s})={a_{1}^{2H_{1}}}\cdots {a_{n}^{2H_{n}}}C_{1}(\mathbf{t},\mathbf{s})\hspace{1em}\textit{for every}\hspace{2.5pt}\mathbf{a}\in {(0,+\infty )}^{n},\\{} \textit{(ii)}\hspace{1em}& C_{2}(a\mathbf{t},a\mathbf{s})={a}^{2H}C_{2}(\mathbf{t},\mathbf{s})\hspace{1em}\textit{for every}\hspace{2.5pt}a>0.\end{array}\]
Then the field $X_{1}$ is self-similar with index $\mathbf{H}=(H_{1},\dots ,H_{n})$ by Definition 2.2, and the field $X_{2}$ is self-similar with index H by Definition 2.1.
Proof.
The lemma follows from the fact that the finite-dimensional distributions of a centered Gaussian field are uniquely determined by its covariance function. Under the lemma’s conditions, for any points ${\mathbf{t}}^{1},\dots ,{\mathbf{t}}^{m}\in {\mathbb{R}}^{n}$ denote by $\varSigma _{1}$ and $\varSigma _{2}$ the covariance matrices of the vectors $(X_{1}({\mathbf{t}}^{1}),\dots ,X_{1}({\mathbf{t}}^{m}))$ and $(X_{2}({\mathbf{t}}^{1}),\dots ,X_{2}({\mathbf{t}}^{m}))$. Then the covariance matrix of $(X_{1}(\mathbf{a}\cdot {\mathbf{t}}^{1}),\dots ,X_{1}(\mathbf{a}\cdot {\mathbf{t}}^{m}))$ equals ${a_{1}^{2H_{1}}}\cdots {a_{n}^{2H_{n}}}\varSigma _{1}$, and the covariance matrix of $(X_{2}(a{\mathbf{t}}^{1}),\dots ,X_{2}(a{\mathbf{t}}^{m}))$ equals ${a}^{2H}\varSigma _{2}$. These are exactly the covariance matrices of $({a_{1}^{H_{1}}}\cdots {a_{n}^{H_{n}}}X_{1}({\mathbf{t}}^{i}))_{i\le m}$ and $({a}^{H}X_{2}({\mathbf{t}}^{i}))_{i\le m}$, which proves the required equalities of the finite-dimensional distributions.
The lemma is proved. □
Let us give a few examples of Gaussian self-similar random fields.
Definition 2.4.
A standard Lévy fractional Brownian field with index $H\in (0,1]$ is a centered Gaussian random field $B_{H}=\{B_{H}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{n}}\}$ with the covariance function
\[ \mathbf{E}\big(B_{H}(\mathbf{t})B_{H}(\mathbf{s})\big)=\frac{1}{2}\big(\| \mathbf{t}{\| }^{2H}+\| \mathbf{s}{\| }^{2H}-\| \mathbf{t}-\mathbf{s}{\| }^{2H}\big),\hspace{1em}\mathbf{t},\mathbf{s}\in {\mathbb{R}_{+}^{n}},\]
where $\| \cdot \| $ denotes the Euclidean norm in ${\mathbb{R}}^{n}$. This field is self-similar by Definition 2.1 (see [14], Example 8.1.3). The Lévy fractional Brownian field is isotropic; moreover, it is, in law, the only Gaussian field that is self-similar in the sense of Definition 2.1 and has stationary isotropic increments.
Definition 2.5.
A standard fractional Brownian sheet with index $\mathbf{H}=(H_{1},\dots ,H_{n})$, $0<H_{i}<1$, $i=\overline{1,n}$, is a centered Gaussian random field $B_{\mathbf{H}}=\{B_{\mathbf{H}}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{n}}\}$ with the covariance function
\[ \mathbf{E}\big(B_{\mathbf{H}}(\mathbf{t})B_{\mathbf{H}}(\mathbf{s})\big)=\prod \limits_{i=1}^{n}\frac{1}{2}\big({t_{i}^{2H_{i}}}+{s_{i}^{2H_{i}}}-|t_{i}-s_{i}{|}^{2H_{i}}\big),\hspace{1em}\mathbf{t},\mathbf{s}\in {\mathbb{R}_{+}^{n}}.\]
This field is self-similar by Definition 2.2 and has stationary rectangular increments. The proof of this property in the ${\mathbb{R}}^{2}$ case can be found in the paper [1]; the analogous property for $n>2$ can easily be proved as well.
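The anisotropic scaling property (i) of Lemma 2.3 can be verified numerically for this covariance. The following minimal sketch (assuming NumPy; the indices $H_1=0.3$, $H_2=0.8$ and the points are arbitrary choices) checks it for $n=2$:

```python
import numpy as np

H1, H2 = 0.3, 0.8  # arbitrary anisotropic indices in (0, 1)

def r(u, v, H):
    # One-dimensional fractional Brownian motion covariance factor.
    return 0.5 * (u**(2*H) + v**(2*H) - abs(u - v)**(2*H))

def fbs_cov(t, s):
    # Covariance of the standard fractional Brownian sheet on R_+^2.
    return r(t[0], s[0], H1) * r(t[1], s[1], H2)

t, s = np.array([1.0, 2.0]), np.array([0.5, 3.0])
a = np.array([2.5, 0.4])
# Property (i) of Lemma 2.3: C(a*t, a*s) = a1^{2H1} a2^{2H2} C(t, s)
assert np.isclose(fbs_cov(a*t, a*s), a[0]**(2*H1) * a[1]**(2*H2) * fbs_cov(t, s))
```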
Remark 2.1.
A random field satisfying Definition 2.1 is not necessarily self-similar in the sense of Definition 2.2. Indeed, let $X=\{B_{H}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{n}}\}$ be the Lévy fractional Brownian field. It is self-similar by Definition 2.1 ([14], Example 8.1.3). We intend to prove that this field is not self-similar by Definition 2.2. Let $a_{1}=a>0$, $a_{2}=1,\dots ,a_{n}=1$; then
\[ \mathbf{E}{X}^{2}(a,1,\dots ,1)={\big\| (a,1,\dots ,1)\big\| }^{2H}={\bigg(\frac{{a}^{2}+n-1}{n}\bigg)}^{H}\mathbf{E}{X}^{2}(1,\dots ,1).\]
But if the field satisfied Definition 2.2, we would have
\[ \mathbf{E}{X}^{2}(a,1,\dots ,1)={a}^{2H_{1}}\mathbf{E}{X}^{2}(1,\dots ,1)\hspace{1em}\text{for all}\hspace{2.5pt}a>0,\]
which is impossible, since ${\big(\frac{{a}^{2}+n-1}{n}\big)}^{H}$ is not a power function of a.
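The failure can also be seen numerically. In the sketch below (NumPy; $n=2$ and $H=0.5$ chosen arbitrarily), if $\mathbf{E}X^2(a,1)={a}^{2H_1}\mathbf{E}X^2(1,1)$ held for some $H_1$, the variance ratio would satisfy $\mathrm{ratio}(a^2)=\mathrm{ratio}(a)^2$, which is false:

```python
import numpy as np

H = 0.5
var = lambda t: np.linalg.norm(t) ** (2 * H)        # E X(t)^2 = ||t||^{2H}
ratio = lambda a: var([a, 1.0]) / var([1.0, 1.0])   # would equal a^{2 H_1}

# A pure power law would give ratio(a^2) = ratio(a)^2; here it does not hold.
assert not np.isclose(ratio(4.0), ratio(2.0) ** 2)
```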
3 Self-similar fields with ergodic scaling transformation
Further in the paper, we assume that the fields satisfy Definition 2.2 and are real-valued and continuous in probability. Under these assumptions we can work with separable versions without loss of generality. Moreover, we consider only the case $n=2$, since the extension to higher-dimensional parameters is purely technical.
A scaling transformation ${S_{\mathbf{a}}^{\mathbf{H}}}$ for the random field $X=\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ and for $\mathbf{H}=(H_{1},H_{2})\in {(0,+\infty )}^{2}$, $\mathbf{a}=(a_{1},a_{2})\in {(0,+\infty )}^{2}$ is defined as
(1)
\[ \big({S_{\mathbf{a}}^{\mathbf{H}}}X\big)(\mathbf{t})={a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}X(\mathbf{a}\cdot \mathbf{t}),\hspace{1em}\mathbf{t}\in {\mathbb{R}_{+}^{2}}.\]
Using the notion of the scaling transformation, we can formulate Definition 2.2 for the case ${\mathbb{R}_{+}^{2}}$ as follows.

Definition 3.1.
A random field $X=\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ is said to be self-similar with index $\mathbf{H}=(H_{1},H_{2})\in {(0,+\infty )}^{2}$, if for any $a_{1}>0$, $a_{2}>0$ the field $\{({S_{\mathbf{a}}^{\mathbf{H}}}X)(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ has the same finite-dimensional distributions as the field X.
Hereinafter we shall use the notation $S_{\mathbf{a}}={S_{\mathbf{a}}^{\mathbf{H}}}$. If X is a self-similar field with index $\mathbf{H}=(H_{1},H_{2})\in {(0,+\infty )}^{2}$, then $X(0,s)=X(s,0)=0$ for all $s\ge 0$ a.s. ([7], Proposition 2.4.1). For such a field the Lamperti transformation $\tau _{\mathbf{H}}$ was introduced in the paper [7]:
\[ \tau _{\mathbf{H}}X(\mathbf{t})=Z(\mathbf{t})={e}^{-H_{1}t_{1}}{e}^{-H_{2}t_{2}}X\big({e}^{t_{1}},{e}^{t_{2}}\big),\hspace{1em}\mathbf{t}\in {\mathbb{R}}^{2}.\]
The field Z is stationary. The converse is also true: for any stationary field Z, the field X defined by $X(\mathbf{s})={s_{1}^{H_{1}}}{s_{2}^{H_{2}}}Z(\ln s_{1},\ln s_{2})$ for $\mathbf{s}\in {(0,+\infty )}^{2}$ and by $X(0,s)=X(s,0)=0$, $s\ge 0$, is self-similar with index $\mathbf{H}=(H_{1},H_{2})$. The scaling transformation $S_{\mathbf{a}}$ of the field X corresponds to the shift transformation $\theta _{\mathbf{u}}$ of the field Z, where $\mathbf{u}=(u_{1},u_{2})=(\ln a_{1},\ln a_{2})$ and the shift transformation is defined as $(\theta _{\mathbf{u}}Z)(\mathbf{s})=Z(s_{1}+u_{1},s_{2}+u_{2}),\mathbf{s}\in {\mathbb{R}}^{2}$. Indeed,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle (\tau _{\mathbf{H}}S_{\mathbf{a}}X)(\mathbf{t})=& \displaystyle \tau _{\mathbf{H}}\big({a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}X(\mathbf{a}\cdot \mathbf{t})\big)\\{} \displaystyle =& \displaystyle {a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}{e}^{-t_{1}H_{1}}{e}^{-t_{2}H_{2}}X\big(a_{1}{e}^{t_{1}},a_{2}{e}^{t_{2}}\big)\\{} \displaystyle =& \displaystyle {e}^{-H_{1}(t_{1}+u_{1})}{e}^{-H_{2}(t_{2}+u_{2})}X\big({e}^{t_{1}+u_{1}},{e}^{t_{2}+u_{2}}\big)\\{} \displaystyle =& \displaystyle \theta _{\mathbf{u}}{e}^{-H_{1}t_{1}}{e}^{-H_{2}t_{2}}X\big({e}^{t_{1}},{e}^{t_{2}}\big)=(\theta _{\mathbf{u}}\tau _{\mathbf{H}}X)(\mathbf{t})=(\theta _{\mathbf{u}}Z)(\mathbf{t}),\hspace{1em}\mathbf{t}\in {\mathbb{R}_{+}^{2}}.\end{array}\]
So $\tau _{\mathbf{H}}\circ S_{\mathbf{a}}=\theta _{\mathbf{u}}\circ \tau _{\mathbf{H}}$.
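Since the identity $\tau _{\mathbf{H}}\circ S_{\mathbf{a}}=\theta _{\mathbf{u}}\circ \tau _{\mathbf{H}}$ is purely algebraic, it can be checked pointwise for any fixed sample path. A minimal sketch (NumPy; the "path" X and all parameter values below are arbitrary choices made for illustration):

```python
import numpy as np

H1, H2 = 0.4, 0.7
a1, a2 = 2.0, 0.5
u1, u2 = np.log(a1), np.log(a2)

X = lambda s1, s2: np.sin(s1) * np.cos(s2) + s1 * s2   # arbitrary "sample path"

def tau(Y, t1, t2):
    # Lamperti transformation: (tau_H Y)(t) = e^{-H1 t1 - H2 t2} Y(e^{t1}, e^{t2})
    return np.exp(-H1 * t1 - H2 * t2) * Y(np.exp(t1), np.exp(t2))

SaX = lambda s1, s2: a1**(-H1) * a2**(-H2) * X(a1 * s1, a2 * s2)  # scaling S_a

t1, t2 = 0.3, -1.2
# (tau_H S_a X)(t) = (theta_u tau_H X)(t) with u = (ln a1, ln a2)
assert np.isclose(tau(SaX, t1, t2), tau(X, t1 + u1, t2 + u2))
```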
Zero–one laws naturally arise for processes and fields with the ergodicity property. It follows from Definition 3.1 that the scaling transformation $S_{\mathbf{a}}$ of the field X preserves its distribution, so the ergodicity of $S_{\mathbf{a}}$ can be defined in the usual way (see [4]).
Definition 3.2.
Let $T:\varOmega \to \varOmega $ be a measure-preserving transformation on the probability space $(\varOmega ,\mathcal{F},\mathbf{P})$. The transformation T is called ergodic if for every set $E\in \mathcal{F}$ such that $\mathbf{P}({T}^{-1}(E)\Delta E)=0$ either $\mathbf{P}(E)=0$ or $\mathbf{P}(E)=1$.
We call the field X self-similar with ergodic scaling transformation if $S_{\mathbf{a}}$ is ergodic. Further in this section we assume that every scaling transformation $S_{\mathbf{a}}$, $\mathbf{a}\in {(0,+\infty )}^{2}$, $\mathbf{a}\ne (1,1)$, of the self-similar field is ergodic.
It follows from the interconnection between the transformations $S_{\mathbf{a}}$ and $\theta _{\mathbf{u}}$ that the ergodicity of the shift transformation for the field $Z=\tau _{\mathbf{H}}X$ implies the ergodicity of the scaling transformation for the field X. In particular, for stationary Gaussian fields the ergodicity of the shift transformation follows from properties of the covariance function, and the class of ergodic stationary fields is quite wide.
Theorem 3.1 ([16], Proposition 4.1).
Let $Z=\{Z(\mathbf{t}),\mathbf{t}\in {\mathbb{R}}^{2}\}$ be a stationary Gaussian field with mean M and continuous covariance function $R(\mathbf{y})=\mathbf{E}Z(\mathbf{x})Z(\mathbf{x}+\mathbf{y})-{M}^{2}$, $\mathbf{y}\in {\mathbb{R}}^{2}$. If $\lim _{\| \mathbf{y}\| \to \infty }R(\mathbf{y})=0$, then Z has an ergodic shift transformation.
Let us show that the fractional Brownian sheet has an ergodic scaling transformation.
Corollary 3.1.
Let $B_{\mathbf{H}}=\{B_{\mathbf{H}}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ be a fractional Brownian sheet with index $\mathbf{H}=(H_{1},H_{2})\in {(0,1)}^{2}$ (Definition 2.5). Then $B_{\mathbf{H}}$ is a self-similar field with ergodic scaling transformation, and the stationary field $Z=\tau _{\mathbf{H}}B_{\mathbf{H}}$ is centered with the covariance function
(2)
\[ R(\mathbf{y})=\frac{1}{4}\prod \limits_{i=1,2}\big({e}^{H_{i}y_{i}}+{e}^{-H_{i}y_{i}}-{e}^{H_{i}|y_{i}|}{\big(1-{e}^{-|y_{i}|}\big)}^{2H_{i}}\big),\hspace{1em}\mathbf{y}\in {\mathbb{R}}^{2}.\]
Proof.
The proof of the equality (2) can be found in the paper [7]. It remains to prove that the field Z is ergodic; let us check the conditions of Theorem 3.1. The covariance function R is continuous, and each factor in (2) can be estimated as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {e}^{H_{i}y_{i}}+{e}^{-H_{i}y_{i}}-{e}^{H_{i}|y_{i}|}{\big(1-{e}^{-|y_{i}|}\big)}^{2H_{i}}=& \displaystyle {e}^{-H_{i}|y_{i}|}+{e}^{H_{i}|y_{i}|}\big(1-{\big(1-{e}^{-|y_{i}|}\big)}^{2H_{i}}\big)\\{} \displaystyle \le & \displaystyle {e}^{-H_{i}|y_{i}|}+\max \{1,2H_{i}\}{e}^{-(1-H_{i})|y_{i}|},\end{array}\]
since $1-{(1-x)}^{2H_{i}}\le \max \{1,2H_{i}\}x$ for $x\in [0,1]$. Because $0<H_{i}<1$, each factor is bounded and tends to 0 as $|y_{i}|\to \infty $; hence $R(\mathbf{y})\to 0$ as $\| \mathbf{y}\| \to \infty $.
Thus, it follows from Theorem 3.1 that the field Z is stationary with an ergodic shift transformation. This implies that the corresponding fractional Brownian sheet $B_{\mathbf{H}}$ is self-similar with an ergodic scaling transformation. The corollary is proved. □

4 Upper and lower functions for ergodic fields
In this section we continue to consider self-similar fields with an ergodic scaling transformation and prove zero–one laws for the asymptotic growth of the field’s sample paths. Let us introduce the following definitions.
For a positive function $g:{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ we consider the following random events:
\[ {E_{g}^{0}}=\big\{\omega \in \varOmega :\exists \delta =\delta (\omega )>0,\hspace{2.5pt}\forall \mathbf{t},\hspace{2.5pt}0<t_{1}\vee t_{2}<\delta :\hspace{2.5pt}\big|X(\omega ,\mathbf{t})\big|\le g(\mathbf{t})\big\},\]
\[ {E_{g}^{\infty }}=\big\{\omega \in \varOmega :\exists N=N(\omega )>0,\hspace{2.5pt}\forall \mathbf{t},\hspace{2.5pt}t_{1}\wedge t_{2}>N:\hspace{2.5pt}\big|X(\omega ,\mathbf{t})\big|\le g(\mathbf{t})\big\},\]
where $t_{1}\vee t_{2}=\max \{t_{1},t_{2}\}$, $t_{1}\wedge t_{2}=\min \{t_{1},t_{2}\}$.

In addition, we define the functionals ${L_{\varLambda ,\varphi }^{0}}$, ${L_{\varLambda ,\varphi }^{\infty }}$ for $\varLambda =(\lambda _{1},\lambda _{2})$, $\lambda _{1}>0$, $\lambda _{2}>0$, and a positive function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ in the following way:
\[ {L_{\varLambda ,\varphi }^{0}}=\underset{t_{1}\vee t_{2}\to 0}{\limsup }\frac{|X(\mathbf{t})|}{{t_{1}^{\lambda _{1}}}{t_{2}^{\lambda _{2}}}\varphi (\mathbf{t})},\hspace{2em}{L_{\varLambda ,\varphi }^{\infty }}=\underset{t_{1}\wedge t_{2}\to \infty }{\limsup }\frac{|X(\mathbf{t})|}{{t_{1}^{\lambda _{1}}}{t_{2}^{\lambda _{2}}}\varphi (\mathbf{t})}.\]
Definition 4.1.

A function $g:{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ is called an upper function for the field X with respect to the growth at 0 (at ∞) if $\mathbf{P}({E_{g}^{0}})=1$ (respectively, $\mathbf{P}({E_{g}^{\infty }})=1$), and a lower function if $\mathbf{P}({E_{g}^{0}})=0$ (respectively, $\mathbf{P}({E_{g}^{\infty }})=0$).

Remark 4.1.
Let the function $g(\mathbf{t})={t_{1}^{\lambda _{1}}}{t_{2}^{\lambda _{2}}}\varphi (\mathbf{t})$, $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$. If ${L_{\varLambda ,\varphi }^{0}}=0$ a.s., then the function g is upper with respect to the growth at 0. If ${L_{\varLambda ,\varphi }^{0}}=\infty $ a.s., then the function g is lower with respect to the growth at 0. If ${L_{\varLambda ,\varphi }^{\infty }}=0$ a.s., then the function g is upper with respect to the growth at ∞. If ${L_{\varLambda ,\varphi }^{\infty }}=\infty $ a.s., then the function g is lower with respect to the growth at ∞.
Theorem 4.1.
Let the function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ be either non-decreasing or non-increasing in every coordinate. Then
\[ \mathbf{P}\big({E_{\mathbf{H},\varphi }^{0}}\big)=0\hspace{2.5pt}\textit{or}\hspace{2.5pt}1,\hspace{2em}\mathbf{P}\big({E_{\mathbf{H},\varphi }^{\infty }}\big)=0\hspace{2.5pt}\textit{or}\hspace{2.5pt}1,\]
where the notation ${E_{\mathbf{H},\varphi }^{0}}$ (or ${E_{\mathbf{H},\varphi }^{\infty }}$) is used for ${E_{g}^{0}}$ (or ${E_{g}^{\infty }}$) with a function $g(\mathbf{t})={t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})$, $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$ and $\mathbf{H}=(H_{1},H_{2})\in {(0,+\infty )}^{2}$.
Proof.
Let us consider the case when the function φ is non-decreasing in the first coordinate and non-increasing in the second one. First we prove the theorem for ${E_{\mathbf{H},\varphi }^{0}}$. Let $0<a_{1}<1$, $a_{2}>1$ and $\omega \in {E_{\mathbf{H},\varphi }^{0}}$. In this case
\[ \exists \delta =\delta (\omega )>0:\hspace{1em}\big|X(\mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t}),\hspace{1em}0<t_{1}\vee t_{2}<\delta .\]
If $a_{1}t_{1}\vee a_{2}t_{2}<\delta $, then the definition of the scaling transformation and the last inequality imply
\[ \big|(S_{\mathbf{a}}X)(\mathbf{t})\big|={a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}\big|X(\mathbf{a}\cdot \mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{a}\cdot \mathbf{t}).\]
Here $a_{1}t_{1}<t_{1}$ and $a_{2}t_{2}>t_{2}$, so the monotonicity of the function φ implies $\varphi (\mathbf{a}\cdot \mathbf{t})\le \varphi (\mathbf{t})$, $t_{1}>0$, $t_{2}>0$. Thus, on the event ${E_{\mathbf{H},\varphi }^{0}}$ the following inequality holds true:
\[ \big|(S_{\mathbf{a}}X)(\mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t}),\hspace{1em}0<t_{1}\vee t_{2}<\frac{\delta }{a_{2}}.\]
So, we have proved that ${E_{\mathbf{H},\varphi }^{0}}\subset {S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{0}}$, where ${S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{0}}$ denotes the event ${E_{\mathbf{H},\varphi }^{0}}$ for the field $S_{\mathbf{a}}X$. Since the fields X and $S_{\mathbf{a}}X$ have the same distribution, $\mathbf{P}({S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{0}})=\mathbf{P}({E_{\mathbf{H},\varphi }^{0}})$, and the inclusion implies $\mathbf{P}({E_{\mathbf{H},\varphi }^{0}}\hspace{0.1667em}\bigtriangleup \hspace{0.1667em}{S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{0}})=0$. Since $S_{\mathbf{a}}$ is ergodic for any $\mathbf{a}\in {(0,+\infty )}^{2}$, $\mathbf{a}\ne (1,1)$, we get $\mathbf{P}({E_{\mathbf{H},\varphi }^{0}})=0$ or 1.

Now let $\omega \in {E_{\mathbf{H},\varphi }^{\infty }}$. In this case
\[ \exists N=N(\omega )>0:\hspace{1em}\big|X(\mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t}),\hspace{1em}t_{1}\wedge t_{2}>N.\]
If $a_{1}t_{1}\wedge a_{2}t_{2}>N$, then the definition of the scaling transformation and the last inequality imply that
\[ \big|(S_{\mathbf{a}}X)(\mathbf{t})\big|={a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}\big|X(\mathbf{a}\cdot \mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{a}\cdot \mathbf{t}).\]
Here $a_{1}t_{1}<t_{1}$ and $a_{2}t_{2}>t_{2}$, and the monotonicity of the function φ implies $\varphi (\mathbf{a}\cdot \mathbf{t})\le \varphi (\mathbf{t})$, $t_{1}>0$, $t_{2}>0$. Thus, on the event ${E_{\mathbf{H},\varphi }^{\infty }}$ the following inequality holds true:
\[ \big|(S_{\mathbf{a}}X)(\mathbf{t})\big|\le {t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t}),\hspace{1em}t_{1}\wedge t_{2}>\frac{N}{a_{1}}.\]
So, we have proved that ${E_{\mathbf{H},\varphi }^{\infty }}\subset {S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{\infty }}$. As above, this implies $\mathbf{P}({E_{\mathbf{H},\varphi }^{\infty }}\hspace{0.1667em}\bigtriangleup \hspace{0.1667em}{S_{\mathbf{a}}^{-1}}{E_{\mathbf{H},\varphi }^{\infty }})=0$, and the ergodicity of $S_{\mathbf{a}}$ for any $\mathbf{a}\in {(0,+\infty )}^{2}$, $\mathbf{a}\ne (1,1)$, yields $\mathbf{P}({E_{\mathbf{H},\varphi }^{\infty }})=0$ or 1.

For the other types of monotonicity of the function φ the proofs are similar. The theorem is proved. □
Corollary 4.1.
Under the conditions of Theorem 4.1 there exist constants $0\le {c_{\mathbf{H},\varphi }^{0}}\le +\infty $ and $0\le {c_{\mathbf{H},\varphi }^{\infty }}\le +\infty $ such that ${L_{\mathbf{H},\varphi }^{0}}={c_{\mathbf{H},\varphi }^{0}}$ and ${L_{\mathbf{H},\varphi }^{\infty }}={c_{\mathbf{H},\varphi }^{\infty }}$ a.s.
Proof.
Let us put ${c_{\mathbf{H},\varphi }^{0}}=\sup \{c\ge 0:\mathbf{P}({E_{\mathbf{H},c\varphi }^{0}})=0\}$. It follows from Theorem 4.1 that $\mathbf{P}({E_{\mathbf{H},c\varphi }^{0}})=0$ or 1 for every $c>0$. Thus
\[ \forall \varepsilon >0\hspace{1em}\mathbf{P}\big({E_{\mathbf{H},({c_{\mathbf{H},\varphi }^{0}}-\varepsilon )\varphi }^{0}}\big)=0\hspace{1em}\text{and}\hspace{1em}\mathbf{P}\big({E_{\mathbf{H},({c_{\mathbf{H},\varphi }^{0}}+\varepsilon )\varphi }^{0}}\big)=1.\]
So, the following events occur with probability one
\[ \exists \delta >0\hspace{2.5pt}\forall \mathbf{t},\hspace{2.5pt}0<t_{1}\vee t_{2}<\delta :\hspace{1em}\frac{|X(\mathbf{t})|}{{t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})}\le {c_{\mathbf{H},\varphi }^{0}}+\varepsilon \]
and
\[ \forall \delta >0\hspace{2.5pt}\exists \mathbf{t}:\hspace{1em}0<t_{1}\vee t_{2}<\delta ,\hspace{1em}\frac{|X(\mathbf{t})|}{{t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})}>{c_{\mathbf{H},\varphi }^{0}}-\varepsilon .\]
This means that
\[ {c_{\mathbf{H},\varphi }^{0}}-\varepsilon \le {L_{\mathbf{H},\varphi }^{0}}\le {c_{\mathbf{H},\varphi }^{0}}+\varepsilon \hspace{1em}\text{a.s.}\]
Since $\varepsilon >0$ is arbitrary, this proves the corollary’s statement. The proof for the case of the growth at ∞ can be done in a similar way. □

If we consider slowly varying functions, we are able to obtain a more specific result for the functionals ${L_{\varLambda ,\varphi }^{0}}$ and ${L_{\varLambda ,\varphi }^{\infty }}$.
Definition 4.2.

A function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ is called slowly varying at 0 if for any $\mathbf{a}=(a_{1},a_{2})\in {(0,+\infty )}^{2}$
(3)
\[ \underset{t_{1}\vee t_{2}\to 0}{\lim }\frac{\varphi (\mathbf{a}\cdot \mathbf{t})}{\varphi (\mathbf{t})}=1,\]
and slowly varying at ∞ if for any $\mathbf{a}\in {(0,+\infty )}^{2}$
(4)
\[ \underset{t_{1}\wedge t_{2}\to \infty }{\lim }\frac{\varphi (\mathbf{a}\cdot \mathbf{t})}{\varphi (\mathbf{t})}=1.\]
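For instance, $\varphi (\mathbf{t})=\ln (e+t_{1}+t_{2})$ is slowly varying at ∞. The following sketch (NumPy; the values of $\mathbf{a}$ and of the scale $r$ are arbitrary choices) illustrates how the ratios in (4) approach 1:

```python
import numpy as np

phi = lambda t1, t2: np.log(np.e + t1 + t2)    # slowly varying at infinity
a1, a2 = 3.0, 0.2

for r in (1e2, 1e4, 1e8):
    print(r, phi(a1 * r, a2 * r) / phi(r, r))  # ratios tend to 1, cf. (4)
```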
Lemma 4.1.
Let $\varLambda =(\lambda _{1},\lambda _{2})\ne (H_{1},H_{2})=\mathbf{H}$. If a function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ is slowly varying at 0 (at ∞), then ${L_{\varLambda ,\varphi }^{0}}=0$ or $+\infty $ a.s. (respectively, ${L_{\varLambda ,\varphi }^{\infty }}=0$ or $+\infty $ a.s.).
Proof.
Let us introduce an auxiliary function $\psi (\mathbf{t})={t_{1}^{\lambda _{1}-H_{1}}}{t_{2}^{\lambda _{2}-H_{2}}}\varphi (\mathbf{t})$, $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$. Then ${t_{1}^{\lambda _{1}}}{t_{2}^{\lambda _{2}}}\varphi (\mathbf{t})={t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\psi (\mathbf{t})$, $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$. The function ψ is not necessarily monotone in every coordinate, but we can show that ψ is monotone in a neighborhood of 0 if φ is slowly varying at 0, and in a neighborhood of ∞ if φ is slowly varying at ∞.
Let us consider the case when $\lambda _{1}>H_{1}$, $\lambda _{2}<H_{2}$ and investigate the growth at 0. We intend to prove that the function ψ increases in the first coordinate and decreases in the second one on some neighborhood of 0. It follows from the equality (3) that for any $a_{1}<1$, $a_{2}>1$ the following holds true
\[ \forall \varepsilon >0\hspace{2.5pt}\exists \delta >0:\hspace{1em}\forall \mathbf{t},\hspace{2.5pt}0<t_{1}\vee t_{2}<\delta :\hspace{1em}\varphi (\mathbf{a}\cdot \mathbf{t})<(1+\varepsilon )\varphi (\mathbf{t}).\]
Then, for $0<t_{1}\vee t_{2}<\delta $ we get
\[ \psi (\mathbf{a}\cdot \mathbf{t})={a_{1}^{\lambda _{1}-H_{1}}}{a_{2}^{\lambda _{2}-H_{2}}}{t_{1}^{\lambda _{1}-H_{1}}}{t_{2}^{\lambda _{2}-H_{2}}}\varphi (\mathbf{a}\cdot \mathbf{t})<{a_{1}^{\lambda _{1}-H_{1}}}{a_{2}^{\lambda _{2}-H_{2}}}(1+\varepsilon )\psi (\mathbf{t}).\]
Therefore, for $\varepsilon <{\big(\frac{1}{a_{1}}\big)}^{\lambda _{1}-H_{1}}{a_{2}^{H_{2}-\lambda _{2}}}-1$ there exists $\delta >0$ such that the following inequality is true:
\[ \psi (\mathbf{a}\cdot \mathbf{t})<\psi (\mathbf{t}),\hspace{1em}0<t_{1}\vee t_{2}<\delta .\]
In a similar way we can prove the monotonicity of the function ψ for the other choices of $\lambda _{1}$, $\lambda _{2}$ and with respect to the growth at ∞. Thus, it follows from Theorem 4.1 that $\mathbf{P}({E_{\mathbf{H},\psi }^{0}})=0\hspace{2.5pt}\text{or}\hspace{2.5pt}1$ ($\mathbf{P}({E_{\mathbf{H},\psi }^{\infty }})=0\hspace{2.5pt}\text{or}\hspace{2.5pt}1$). According to Corollary 4.1, there exist constants ${c_{\mathbf{H},\psi }^{0}}$ and ${c_{\mathbf{H},\psi }^{\infty }}$ such that ${c_{\mathbf{H},\psi }^{0}}={L_{\mathbf{H},\psi }^{0}}$ and ${c_{\mathbf{H},\psi }^{\infty }}={L_{\mathbf{H},\psi }^{\infty }}$ a.s.
Let us consider ${c_{\mathbf{H},\psi }^{0}}$. It is evident that ${L_{\varLambda ,\varphi }^{0}}={L_{\mathbf{H},\psi }^{0}}={c_{\mathbf{H},\psi }^{0}}=:{c_{\varLambda ,\varphi }^{0}}$ a.s. Let ${L_{\varLambda ,\varphi }^{0}}\circ S_{\mathbf{a}}$ denote the functional ${L_{\varLambda ,\varphi }^{0}}$ applied to the field $S_{\mathbf{a}}X$. Since $S_{\mathbf{a}}X$ has the same distribution as X, the event $\{{L_{\varLambda ,\varphi }^{0}}\circ S_{\mathbf{a}}={c_{\varLambda ,\varphi }^{0}}\}$ also has probability one. On the other hand, since the function φ is slowly varying at 0,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {L_{\varLambda ,\varphi }^{0}}\circ S_{\mathbf{a}}=& \displaystyle \underset{t_{1}\vee t_{2}\to 0}{\limsup }\frac{{a_{1}^{-H_{1}}}{a_{2}^{-H_{2}}}|X(\mathbf{a}\cdot \mathbf{t})|}{{t_{1}^{\lambda _{1}}}{t_{2}^{\lambda _{2}}}\varphi (\mathbf{t})}\\{} \displaystyle =& \displaystyle \underset{t_{1}\vee t_{2}\to 0}{\limsup }\frac{{a_{1}^{\lambda _{1}-H_{1}}}{a_{2}^{\lambda _{2}-H_{2}}}|X(\mathbf{a}\cdot \mathbf{t})|}{{(a_{1}t_{1})}^{\lambda _{1}}{(a_{2}t_{2})}^{\lambda _{2}}\varphi (\mathbf{a}\cdot \mathbf{t})}\frac{\varphi (\mathbf{a}\cdot \mathbf{t})}{\varphi (\mathbf{t})}={a_{1}^{\lambda _{1}-H_{1}}}{a_{2}^{\lambda _{2}-H_{2}}}{L_{\varLambda ,\varphi }^{0}}.\end{array}\]
It follows from the condition $\varLambda =(\lambda _{1},\lambda _{2})\ne (H_{1},H_{2})=\mathbf{H}$ that the equality ${c_{\varLambda ,\varphi }^{0}}={a_{1}^{\lambda _{1}-H_{1}}}{a_{2}^{\lambda _{2}-H_{2}}}{c_{\varLambda ,\varphi }^{0}}$ can hold for all $a_{1}>0$, $a_{2}>0$ only if ${c_{\varLambda ,\varphi }^{0}}={L_{\varLambda ,\varphi }^{0}}=0$ or $+\infty $ a.s.

The proof for the case of the growth at $+\infty $ can be done in a similar way. □
5 Strong limit theorems
This section is devoted to strong limit theorems for real-valued self-similar fields in the sense of Definition 2.2. We prove these theorems for the functions ${t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})$, $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$, arising in Theorem 4.1, for fields with an ergodic scaling transformation. It is worth mentioning that the theorems of this section can be proved without the additional assumption that the scaling transformation is ergodic.
We use the following notation defined for the self-similar field $X=\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ with index $\mathbf{H}\in {(0,+\infty )}^{2}$:
\[ {X}^{\ast }=\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le 1,}{0\le t_{2}\le 1}}{\sup }\big|X(\mathbf{t})\big|.\]
Since the distributions of a self-similar field are invariant under the scaling transformations $S_{\mathbf{a}}$, its distributional properties are determined by the restriction of the field to any finite rectangle. That is why all theorems of this section are stated in terms of the random variable ${X}^{\ast }$ defined by the values of the random field on the unit square. The following theorems establish sufficient conditions for a function to be an upper or lower function for the self-similar field. Let us start by proving an auxiliary result.
Lemma 5.1.
Let a function $f:\mathbb{R}_{+}\to (0,+\infty )$ be non-decreasing and continuous. If $\mathbf{E}[f({X}^{\ast })]=K<+\infty $, then for any $x>0$ and $\lambda _{1},\lambda _{2}>0$
\[ \mathbf{P}\Big(\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le \lambda _{1},}{0\le t_{2}\le \lambda _{2}}}{\sup }\big|X(\mathbf{t})\big|\ge x\Big)\le \frac{K}{f({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x)}.\]
Proof.
It follows from the self-similarity of the field that
\[ \underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le \lambda _{1},}{0\le t_{2}\le \lambda _{2}}}{\sup }\big|X(\mathbf{t})\big|\stackrel{d}{=}{\lambda _{1}^{H_{1}}}{\lambda _{2}^{H_{2}}}\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le 1,}{0\le t_{2}\le 1}}{\sup }\big|X(\mathbf{t})\big|={\lambda _{1}^{H_{1}}}{\lambda _{2}^{H_{2}}}{X}^{\ast },\]
and therefore
\[ \mathbf{P}\Big(\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le \lambda _{1},}{0\le t_{2}\le \lambda _{2}}}{\sup }\big|X(\mathbf{t},\omega )\big|\ge x\Big)=\mathbf{P}\big({\lambda _{1}^{H_{1}}}{\lambda _{2}^{H_{2}}}{X}^{\ast }(\omega )\ge x\big)=\mathbf{P}\big({X}^{\ast }(\omega )\ge {\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big).\]
Applying the Markov inequality to the non-decreasing function f, we get
\[ \mathbf{P}\big({X}^{\ast }\ge {\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big)\le \frac{\mathbf{E}[f({X}^{\ast })]}{f({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x)}=\frac{K}{f({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x)}.\]
The lemma is proved. □

Further in the text we shall use the notation $\mathbf{1}=(1,1)$.
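The bound of Lemma 5.1 can be illustrated by simulation. The following rough sketch (it assumes NumPy, approximates a standard Brownian sheet, i.e. $\mathbf{H}=(1/2,1/2)$, on a finite grid, and takes $f(y)={y}^{2}$; the grid size and sample count are arbitrary) compares the empirical tail of ${X}^{\ast }$ with the Markov-type bound for $\lambda _{1}=\lambda _{2}=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 64, 2000
samples = np.empty(reps)
for k in range(reps):
    # Brownian sheet on a grid: rectangular partial sums of N(0, 1/n^2) increments
    incr = rng.normal(scale=1.0 / n, size=(n, n))
    sheet = incr.cumsum(axis=0).cumsum(axis=1)
    samples[k] = np.abs(sheet).max()           # grid approximation of X*

x = 2.0
print("empirical P(X* >= x):", (samples >= x).mean())
print("bound K/f(x) with f(y)=y^2:", (samples**2).mean() / x**2)
```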
Theorem 5.1.
Let $f:\mathbb{R}_{+}\to (0,+\infty )$ be such a non-decreasing continuous function that $\mathbf{E}[f({X}^{\ast })]$ is finite. We assume that a continuous function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ satisfies the following conditions
\[\begin{array}{l@{\hskip10.0pt}l}\textit{(i)}\hspace{1em}& \varphi \hspace{2.5pt}\textit{is non-decreasing in every coordinate,}\\{} \textit{(ii)}\hspace{1em}& \underset{x\downarrow 1}{\lim }\underset{n,m=1,2,\dots }{\sup }\displaystyle\displaystyle \frac{\varphi ({x}^{n},{x}^{m})}{\varphi ({x}^{n-1},{x}^{m-1})}=c<+\infty ,\\{} \textit{(iii)}\hspace{1em}& {\displaystyle\displaystyle \int _{1}^{+\infty }}\displaystyle\displaystyle \frac{dx}{xf(\varphi (x\cdot \mathbf{1}))}<+\infty .\end{array}\]
Then
\[ \underset{t_{1}\wedge t_{2}\to +\infty }{\limsup }\frac{|X(\mathbf{t})|}{{t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})}\le c\hspace{1em}\textit{a.s.}\]
Proof.
Let $\xi >1$, $n,m\in \mathbb{N}$. We put $x_{nm}={\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\varphi ({\xi }^{n},{\xi }^{n+m})$ and
\[ A_{nm}=\Big\{\omega \in \varOmega \Big|\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{n},}{0\le t_{2}\le {\xi }^{n+m}}}{\sup }\big|X(\mathbf{t},\omega )\big|>x_{nm}\Big\}.\]
The following inequality follows from Lemma 5.1:
(7)
\[ \mathbf{P}(A_{nm})\le \frac{K}{f(\varphi ({\xi }^{n},{\xi }^{n+m}))}.\]
Now let us prove the convergence of the series ${\sum _{n=1}^{\infty }}\mathbf{P}(A_{nm})$.
The functions f and φ are non-decreasing and their superposition $\{f(\varphi (\mathbf{x})),\mathbf{x}\in {\mathbb{R}_{+}^{2}}\}$ is also non-decreasing in every coordinate. Thus, $f(\varphi ({x}^{n},{x}^{n}))\le f(\varphi ({x}^{n},{x}^{n+m}))$ for $x>1$.
Taking into account the inequality (7) we obtain
\[ {\sum \limits_{n=1}^{\infty }}\mathbf{P}(A_{nm})\le {\sum \limits_{n=1}^{\infty }}\frac{K}{f(\varphi ({\xi }^{n},{\xi }^{n+m}))}\le {\sum \limits_{n=1}^{\infty }}\frac{K}{f(\varphi ({\xi }^{n}\mathbf{1}))}.\]
According to the integral criterion of series convergence for the positive non-decreasing function $f(\varphi ({\xi }^{x}\mathbf{1}))$, $x>0$, it can be concluded that the series converges if the integral
\[ I(f,\varphi )={\int _{1}^{+\infty }}\frac{dx}{f(\varphi ({\xi }^{x}\mathbf{1}))} \]
is finite.
Let us make the substitution $y={\xi }^{x}$ in the integral $I(f,\varphi )$. Then $dx=dy/(y\ln \xi )$ and
\[ I(f,\varphi )={\int _{\xi }^{+\infty }}\frac{dy}{yf(\varphi (y\mathbf{1}))\ln \xi }<{\int _{1}^{+\infty }}\frac{dy}{yf(\varphi (y\mathbf{1}))\ln \xi }.\]
Thus, the integral $I(f,\varphi )$ is finite by condition (iii) of the theorem. So, it follows from the Borel–Cantelli lemma that with probability one there exists a number ${n_{0}^{m}}(\omega )$ such that the events $A_{nm}$ do not occur for all $n\ge {n_{0}^{m}}(\omega )$. It means that for all $m>1$ and $n\ge {n_{0}^{m}}(\omega )$
\[ \underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{n},}{0\le t_{2}\le {\xi }^{n+m}}}{\sup }\big|X(\mathbf{t},\omega )\big|\le {\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\varphi \big({\xi }^{n},{\xi }^{n+m}\big)\hspace{1em}\text{a.s.}\]
Moreover, for every $\xi >1$, $m>1$ and $n>{n_{0}^{m}}(\omega )$ we choose a point $\mathbf{s}=(s_{1},s_{2})$ in such a way that ${\xi }^{n-1}\le s_{1}\le {\xi }^{n}$ and ${\xi }^{n+m-1}\le s_{2}\le {\xi }^{n+m}$. Then we obtain with probability one that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{|X(\mathbf{s},\omega )|}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\varphi (\mathbf{s})}\le & \displaystyle \frac{{\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\varphi ({\xi }^{n},{\xi }^{n+m})}{{\xi }^{(n-1)H_{1}}{\xi }^{(n+m-1)H_{2}}\varphi ({\xi }^{n-1},{\xi }^{n+m-1})}\\{} \displaystyle \le & \displaystyle {\xi }^{H_{1}+H_{2}}\underset{n,m\ge 1}{\sup }\frac{\varphi ({\xi }^{n},{\xi }^{m})}{\varphi ({\xi }^{n-1},{\xi }^{m-1})}.\end{array}\]
For the case $s_{1}\ge s_{2}$ we get the same inequality using the similar reasoning. So,
\[ \underset{s_{1}\wedge s_{2}\to +\infty }{\limsup }\frac{|X(\mathbf{s})|}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\varphi (\mathbf{s})}\le {\xi }^{H_{1}+H_{2}}\underset{n,m\ge 1}{\sup }\frac{\varphi ({\xi }^{n},{\xi }^{m})}{\varphi ({\xi }^{n-1},{\xi }^{m-1})}\hspace{1em}\text{a.s.},\]
and
\[ \underset{s_{1}\wedge s_{2}\to +\infty }{\limsup }\frac{|X(\mathbf{s})|}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\varphi (\mathbf{s})}\le \underset{\xi \downarrow 1}{\lim }\underset{n,m\ge 1}{\sup }\frac{\varphi ({\xi }^{n},{\xi }^{m})}{\varphi ({\xi }^{n-1},{\xi }^{m-1})}=c\hspace{1em}\text{a.s.}\hspace{2.5pt}\]
The theorem is proved. □

Let us now consider the asymptotic growth at 0.
Theorem 5.2.
Let $f:\mathbb{R}_{+}\to (0,+\infty )$ be a non-decreasing continuous function such that $\mathbf{E}[f({X}^{\ast })]$ is finite. We assume that a continuous function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ satisfies the following conditions:
\[\begin{array}{l@{\hskip10.0pt}l}\textit{(i)}\hspace{1em}& \varphi \hspace{2.5pt}\textit{is non-increasing in every coordinate,}\\{} \textit{(ii)}\hspace{1em}& {\displaystyle\displaystyle \int _{0}^{1}}\displaystyle\displaystyle \frac{dx}{xf(\varphi (x\cdot \mathbf{1}))}<+\infty .\end{array}\]
Then
\[ \underset{t_{1}\vee t_{2}\to 0}{\limsup }\frac{|X(\mathbf{t})|}{{t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})}\le 1\hspace{1em}\textit{a.s.}\]
Proof.
Let $\xi >1$. We put $x_{nm}:={\xi }^{-nH_{1}}{\xi }^{-(n+m)H_{2}}\varphi ({\xi }^{-n},{\xi }^{-n-m})$, $n,m\in \mathbb{N}$, and
\[ A_{nm}=\Big\{\omega \in \varOmega \Big|\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{-n},}{0\le t_{2}\le {\xi }^{-n-m}}}{\sup }\big|X(\mathbf{t},\omega )\big|>x_{nm}\Big\}.\]
Lemma 5.1 implies the following inequality:
(9)
\[ \mathbf{P}(A_{nm})\le \frac{K}{f(\varphi ({\xi }^{-n},{\xi }^{-n-m}))}.\]
It follows immediately from the inequality (9) that in order to prove the convergence of the series ${\sum _{n=1}^{\infty }}\mathbf{P}(A_{nm})$ it is sufficient to prove the convergence of the series ${\sum _{n=1}^{\infty }}\frac{K}{f(\varphi ({\xi }^{-n}\mathbf{1}))}$.
The function f is non-decreasing and the function φ is non-increasing, so their superposition $f(\varphi (\cdot ))$ is non-increasing in every coordinate. Thus, $f(\varphi ({x}^{-n}\mathbf{1}))\le f(\varphi ({x}^{-n},{x}^{-n-m}))$ for $x>1$. Therefore,
\[ {\sum \limits_{n=1}^{\infty }}\mathbf{P}(A_{nm})\le {\sum \limits_{n=1}^{\infty }}\frac{K}{f(\varphi ({\xi }^{-n},{\xi }^{-n-m}))}\le {\sum \limits_{n=1}^{\infty }}\frac{K}{f(\varphi ({\xi }^{-n}\mathbf{1}))}.\]
The last series converges if the integral
\[ I(f,\varphi )={\int _{1}^{+\infty }}\frac{dx}{f(\varphi ({\xi }^{-x}\mathbf{1}))} \]
is finite.
Let us make the substitution $y={\xi }^{-x}$ in the integral $I(f,\varphi )$. Then $dx=-dy/(y\ln \xi )$ and
\[ I(f,\varphi )={\int _{0}^{{\xi }^{-1}}}\frac{dy}{yf(\varphi (y\mathbf{1}))\ln \xi }<{\int _{0}^{1}}\frac{dy}{yf(\varphi (y\mathbf{1}))\ln \xi }.\]
The integral $I(f,\varphi )$ is finite by condition (ii) of the theorem. Thus, it follows from the Borel–Cantelli lemma that with probability one there exists a number ${n_{0}^{m}}(\omega )$ such that the events $A_{nm}$ do not occur for all $n\ge {n_{0}^{m}}(\omega )$. It means that for all $m>1$ and $n\ge {n_{0}^{m}}(\omega )$
\[ \underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{-n},}{0\le t_{2}\le {\xi }^{-n-m}}}{\sup }\big|X(\mathbf{t},\omega )\big|\le {\xi }^{-nH_{1}}{\xi }^{-(n+m)H_{2}}\varphi \big({\xi }^{-n},{\xi }^{-n-m}\big)\hspace{1em}\text{a.s.}\]
Furthermore, for every $\xi >1$, $m>1$ and $n>{n_{0}^{m}}(\omega )$ we choose a point $\mathbf{s}=(s_{1},s_{2})$ such that ${\xi }^{-n-1}\le s_{1}\le {\xi }^{-n}$ and ${\xi }^{-n-m-1}\le s_{2}\le {\xi }^{-n-m}$. Then, since $\varphi (\mathbf{s})\ge \varphi ({\xi }^{-n},{\xi }^{-n-m})$ by the monotonicity of φ, we obtain with probability one that
\[ \frac{|X(\mathbf{s},\omega )|}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\varphi (\mathbf{s})}\le \frac{{\xi }^{-nH_{1}}{\xi }^{-(n+m)H_{2}}\varphi ({\xi }^{-n},{\xi }^{-n-m})}{{\xi }^{-(n+1)H_{1}}{\xi }^{-(n+m+1)H_{2}}\varphi ({\xi }^{-n},{\xi }^{-n-m})}={\xi }^{H_{1}+H_{2}},\]
and
\[ \underset{t_{1}\vee t_{2}\to 0}{\limsup }\frac{|X(\mathbf{t})|}{{t_{1}^{H_{1}}}{t_{2}^{H_{2}}}\varphi (\mathbf{t})}\le {\xi }^{H_{1}+H_{2}}\hspace{1em}\text{a.s.}\]
Letting $\xi \downarrow 1$ yields the statement of the theorem.
The theorem is proved. □

Now we can apply these theorems to self-similar fields with an ergodic scaling transformation. The following corollary gives sufficient conditions for a function to be an upper one for such fields.
Corollary 5.1.
Let $X=\{X(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ be a self-similar field with ergodic scaling transformation for which there exists a constant $\gamma >0$ such that
\[ \mathbf{E}\big[{\big({X}^{\ast }\big)}^{\gamma }\big]<+\infty .\]
Then for any $\varepsilon >0$ and an arbitrary function $\varphi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ slowly varying with respect to the growth at 0 (at ∞)
\[ {L_{\mathbf{H}-\varepsilon ,\varphi }^{0}}=0\hspace{1em}\textit{a.s.}\hspace{1em}\big(\textit{respectively,}\hspace{2.5pt}{L_{\mathbf{H}+\varepsilon ,\varphi }^{\infty }}=0\hspace{1em}\textit{a.s.}\big),\]
where $\mathbf{H}\pm \varepsilon =(H_{1}\pm \varepsilon ,H_{2}\pm \varepsilon )$.
Proof.
Let $\varepsilon >0$ be fixed. We put $f(x)={x}^{\gamma }$, $x>0$; then $\mathbf{E}[f({X}^{\ast })]<+\infty $. Let us check that the functions ${\psi }^{\infty }(\mathbf{t})={t_{1}^{\varepsilon }}{t_{2}^{\varepsilon }}$ and ${\psi }^{0}(\mathbf{t})={t_{1}^{-\varepsilon }}{t_{2}^{-\varepsilon }}$, $\mathbf{t}\in {(0,+\infty )}^{2}$, satisfy the conditions of Theorems 5.1 and 5.2, respectively. It is evident that the conditions (i) of both theorems are fulfilled.
Now we consider the behavior at ∞. Let us check condition (ii) of Theorem 5.1 for the function ${\psi }^{\infty }$:
\[ \underset{x\downarrow 1}{\lim }\underset{n,m\ge 1}{\sup }\frac{{\psi }^{\infty }({x}^{n},{x}^{m})}{{\psi }^{\infty }({x}^{n-1},{x}^{m-1})}=\underset{x\downarrow 1}{\lim }\underset{n,m\ge 1}{\sup }{x}^{2\varepsilon }=1.\]
The condition (iii) is also fulfilled since
\[ {\int _{1}^{+\infty }}\frac{dx}{xf({x}^{2\varepsilon })}={\int _{1}^{+\infty }}\frac{dx}{{x}^{1+2\gamma \varepsilon }}<+\infty .\]
Thus, Theorem 5.1 implies that ${L_{\mathbf{H}+\varepsilon ,1}^{\infty }}\le 1$ a.s. Since the constant function 1 is slowly varying, it follows from Lemma 4.1 that ${L_{\mathbf{H}+\varepsilon ,1}^{\infty }}=0$ or ∞ a.s. Therefore, ${L_{\mathbf{H}+\varepsilon ,1}^{\infty }}=0$ a.s. for any $\varepsilon >0$.

Let us now consider the behavior at 0. We check condition (ii) of Theorem 5.2 for the function ${\psi }^{0}$:
\[ {\int _{0}^{1}}\frac{dx}{xf({x}^{-2\varepsilon })}={\int _{0}^{1}}\frac{dx}{{x}^{1-2\gamma \varepsilon }}<+\infty .\]
Thus, Theorem 5.2 implies that ${L_{\mathbf{H}-\varepsilon ,1}^{0}}\le 1$ a.s. It follows from Lemma 4.1 that ${L_{\mathbf{H}-\varepsilon ,1}^{0}}=0$ or ∞ a.s. Therefore, ${L_{\mathbf{H}-\varepsilon ,1}^{0}}=0$ a.s. for any $\varepsilon >0$.

Further, we intend to prove that ${L_{\mathbf{H}-\varepsilon ,\varphi }^{0}}=0$ a.s. (${L_{\mathbf{H}+\varepsilon ,\varphi }^{\infty }}=0$ a.s.). Let us investigate the behavior at ∞. We assume that $a_{0}>1$, $\mathbf{a}=(a_{1},a_{2})$. The convergence in (4) is uniform in a on any finite rectangle, for instance, on ${[1,{a_{0}^{2}}]}^{2}$. Let $0<\alpha <\varepsilon $. We choose $\delta >0$ such that $0<\delta <1-{a_{0}^{\alpha -\varepsilon }}$. Since the limit in (4) is uniform, there exists $t_{0}>0$ such that $\varphi (\mathbf{a}\cdot \mathbf{t})>\varphi (\mathbf{t})(1-\delta )$ for all $\mathbf{t}\in {\mathbb{R}_{+}^{2}}$ with $t_{1}\wedge t_{2}>t_{0}$ and all $\mathbf{a}\in {[1,{a_{0}^{2}}]}^{2}$.
Now, for an arbitrary t with $t_{1}>t_{0}$, $t_{2}>t_{0}$ we can find numbers $n,m\in \mathbb{N}$ and $\mathbf{a}\in {[a_{0},{a_{0}^{2}}]}^{2}$ such that ${a_{0}^{n}}\le \frac{t_{1}}{t_{0}}\le {a_{0}^{n+1}}$, ${a_{0}^{m}}\le \frac{t_{2}}{t_{0}}\le {a_{0}^{m+1}}$ and $t_{1}={a_{1}^{n}}t_{0}$, $t_{2}={a_{2}^{m}}t_{0}$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {t_{1}^{\varepsilon }}{t_{2}^{\varepsilon }}\varphi (\mathbf{t})=& \displaystyle {t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi \big({a_{1}^{n}}t_{0},{a_{2}^{m}}t_{0}\big)>{t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi \big({a_{1}^{n}}t_{0},{a_{2}^{m-1}}t_{0}\big)(1-\delta )>\cdots \\{} \displaystyle >& \displaystyle {t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi \big({a_{1}^{n}}t_{0},t_{0}\big){(1-\delta )}^{m}>\cdots >{t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi (t_{0}\mathbf{1}){(1-\delta )}^{n+m}\\{} \displaystyle >& \displaystyle {t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi (t_{0}\mathbf{1}){a_{0}^{(\alpha -\varepsilon )(n+m)}}>{t_{0}^{2\varepsilon }}{a_{1}^{n\varepsilon }}{a_{2}^{m\varepsilon }}\varphi (t_{0}\mathbf{1}){a_{1}^{(\alpha -\varepsilon )n}}{a_{2}^{(\alpha -\varepsilon )m}}\\{} \displaystyle >& \displaystyle {\big(t_{0}{a_{1}^{n}}\big)}^{\alpha }{\big(t_{0}{a_{2}^{m}}\big)}^{\alpha }{t_{0}^{2(\varepsilon -\alpha )}}\varphi (t_{0}\mathbf{1})={t_{1}^{\alpha }}{t_{2}^{\alpha }}\varphi (t_{0}\mathbf{1}){t_{0}^{2(\varepsilon -\alpha )}}.\end{array}\]
Thus, ${t_{1}^{\varepsilon }}{t_{2}^{\varepsilon }}\varphi (\mathbf{t})\ge {t_{1}^{\alpha }}{t_{2}^{\alpha }}\varphi (t_{0}\mathbf{1}){t_{0}^{2(\varepsilon -\alpha )}}$ for all sufficiently large $t_{1}\wedge t_{2}$, whence ${L_{\mathbf{H}+\varepsilon ,\varphi }^{\infty }}\le {\big(\varphi (t_{0}\mathbf{1}){t_{0}^{2(\varepsilon -\alpha )}}\big)}^{-1}{L_{\mathbf{H}+\alpha ,1}^{\infty }}=0$ a.s.

The proof of the equality ${L_{\mathbf{H}-\varepsilon ,\varphi }^{0}}=0$ a.s. can be done in a similar way. □
The following theorem provides sufficient conditions for a function to be a lower one. Before stating it, we prove an auxiliary lemma.
Lemma 5.2.
Let $g:\mathbb{R}_{+}\to (0,+\infty )$ be a continuous non-increasing function such that $\mathbf{E}[g({X}^{\ast })]={K^{\prime }}<+\infty $. Then for any $x>0$ and $\lambda _{1},\lambda _{2}>0$
\[ \mathbf{P}\Big(\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le \lambda _{1},}{0\le t_{2}\le \lambda _{2}}}{\sup }\big|X(\mathbf{t})\big|\le x\Big)\le \frac{{K^{\prime }}}{g({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x)}.\]
Proof.
Using argumentation similar to that of Lemma 5.1, we obtain
\[ \mathbf{P}\Big(\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le \lambda _{1},}{0\le t_{2}\le \lambda _{2}}}{\sup }\big|X(\mathbf{t})\big|\le x\Big)=\mathbf{P}\big({X}^{\ast }\le {\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big).\]
Since the function g is positive, non-increasing and continuous, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}g\big({X}^{\ast }\big)\ge & \displaystyle \mathbf{E}g\big({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big)\chi _{\{{X}^{\ast }\le {\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\}}\\{} \displaystyle =& \displaystyle \mathbf{P}\big({X}^{\ast }\le {\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big)g\big({\lambda _{1}^{-H_{1}}}{\lambda _{2}^{-H_{2}}}x\big).\end{array}\]
The last two relations yield the required inequality. The lemma is proved. □

Theorem 5.3.
Let $g:\mathbb{R}_{+}\to (0,+\infty )$ be a continuous non-increasing function such that $\mathbf{E}[g({X}^{\ast })]$ is finite. We assume that a function $\psi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ is continuous and satisfies the conditions:
\[\begin{array}{l@{\hskip10.0pt}l}\textit{(i)}\hspace{1em}& \psi \hspace{2.5pt}\textit{is non-increasing in every coordinate,}\\{} \textit{(ii)}\hspace{1em}& {\displaystyle\displaystyle \int _{1}^{+\infty }}\displaystyle\displaystyle \frac{dx}{x\hspace{0.1667em}g(\psi (x\cdot \mathbf{1}))}<+\infty .\end{array}\]
Then
\[ \underset{s_{1}\wedge s_{2}\to +\infty }{\liminf }\frac{1}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\psi (\mathbf{s})}\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le s_{1},}{0\le t_{2}\le s_{2}}}{\sup }\big|X(\mathbf{t})\big|\ge 1\hspace{1em}\textit{a.s.}\]
Proof.
Let $\xi >1$, $n,m\in \mathbb{N}$. We put $y_{nm}={\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\psi ({\xi }^{n},{\xi }^{n+m})$ and define a sequence of random events
\[ B_{nm}=\Big\{\omega \in \varOmega \Big|\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{n},}{0\le t_{2}\le {\xi }^{n+m}}}{\sup }\big|X(\mathbf{t},\omega )\big|\le y_{nm}\Big\}.\]
The following inequality for the probabilities of these events follows from Lemma 5.2:
\[ \mathbf{P}(B_{nm})\le \frac{{K^{\prime }}}{g(\psi ({\xi }^{n},{\xi }^{n+m}))}.\]
In order to prove the theorem we show that, with probability one, only finitely many of the events $B_{nm}$ occur; for this we prove the convergence of the series ${\sum _{n=1}^{\infty }}\mathbf{P}(B_{nm})$.

Recall that the function g is non-increasing and the function ψ is non-increasing in every coordinate, so their superposition $g(\psi (\cdot ,\cdot ))$ is non-decreasing in every coordinate. Reasoning as in the proof of Theorem 5.1, we obtain
\[ {\sum \limits_{n=1}^{\infty }}\mathbf{P}(B_{nm})\le {\sum \limits_{n=1}^{\infty }}\frac{{K^{\prime }}}{g(\psi ({\xi }^{n},{\xi }^{n+m}))}\le {\sum \limits_{n=1}^{\infty }}\frac{{K^{\prime }}}{g(\psi ({\xi }^{n}\mathbf{1}))}.\]
Since the function $g(\psi ({\xi }^{x}\mathbf{1}))$, $x>0$, is non-decreasing, the integral criterion of series convergence implies that it is sufficient to prove the finiteness of the integral
\[ I(g,\psi )={\int _{1}^{+\infty }}\frac{dx}{g(\psi ({\xi }^{x}\mathbf{1}))}.\]
Let us make the substitution $y={\xi }^{x}$ in the integral $I(g,\psi )$. Then $dy=y(\ln \xi )dx$ and
\[ I(g,\psi )={\int _{\xi }^{+\infty }}\frac{dy}{yg(\psi (y\mathbf{1}))\ln \xi }\le {\int _{1}^{+\infty }}\frac{dy}{y\hspace{2.5pt}g(\psi (y\mathbf{1}))\ln \xi }.\]
So, the integral $I(g,\psi )$ is finite according to condition (ii) of the theorem. Therefore, the series ${\sum _{n=1}^{\infty }}\mathbf{P}(B_{nm})$ converges, and it follows from the Borel–Cantelli lemma that with probability one there exists a number ${N_{0}^{m}}(\omega )$ such that the events $B_{nm}$ do not occur for all $n\ge {N_{0}^{m}}(\omega )$. It means that for all $n\ge {N_{0}^{m}}(\omega )$
\[ \underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le {\xi }^{n},}{0\le t_{2}\le {\xi }^{n+m}}}{\sup }\big|X(\mathbf{t},\omega )\big|>{\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\psi \big({\xi }^{n},{\xi }^{n+m}\big).\]
Now, for arbitrary $\xi >1$, $m>0$ and $n>{N_{0}^{m}}(\omega )$ we choose such a point $\mathbf{s}=(s_{1},s_{2})$ that ${\xi }^{n}\le s_{1}\le {\xi }^{n+1}$, ${\xi }^{n+m}\le s_{2}\le {\xi }^{n+m+1}$. Then, the following is true with probability one
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{\sup _{0\le t_{1}\le s_{1},0\le t_{2}\le s_{2}}|X(\mathbf{t},\omega )|}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\psi (\mathbf{s})}\ge & \displaystyle \frac{\sup _{0\le t_{1}\le {\xi }^{n},0\le t_{2}\le {\xi }^{n+m}}|X(\mathbf{t},\omega )|}{{\xi }^{(n+1)H_{1}}{\xi }^{(n+m+1)H_{2}}\psi ({\xi }^{n},{\xi }^{n+m})}\\{} \displaystyle \ge & \displaystyle \frac{{\xi }^{nH_{1}}{\xi }^{(n+m)H_{2}}\psi ({\xi }^{n},{\xi }^{n+m})}{{\xi }^{(n+1)H_{1}}{\xi }^{(n+m+1)H_{2}}\psi ({\xi }^{n},{\xi }^{n+m})}={\xi }^{-H_{1}-H_{2}}.\end{array}\]
And therefore, for every $\xi >1$,
\[ \underset{s_{1}\wedge s_{2}\to +\infty }{\liminf }\frac{1}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}\psi (\mathbf{s})}\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le s_{1},}{0\le t_{2}\le s_{2}}}{\sup }\big|X(\mathbf{t})\big|\ge {\xi }^{-H_{1}-H_{2}}\hspace{1em}\text{a.s.}\]
Letting $\xi \downarrow 1$ completes the proof.
The theorem is proved. □

6 Strong limit theorems for Gaussian fields
Let us consider a few examples of how Theorems 5.1 and 5.3 can be applied to centered Gaussian fields. In this section we assume that the real-valued Gaussian fields have continuous sample paths.
The first condition of Theorems 5.1 and 5.3 is the existence of a non-decreasing function f and a non-increasing function g such that $\mathbf{E}[f({X}^{\ast })]<+\infty $ and $\mathbf{E}[g({X}^{\ast })]<+\infty $. It is not easy to check these conditions directly, but there are many well-known results for Gaussian fields concerning the behavior of tail probabilities and small deviation probabilities. The following lemma shows how this information can be used to verify the first condition of Theorems 5.1 and 5.3.
Lemma 6.1.
Let $f,g:\mathbb{R}_{+}\to (0,+\infty )$, f be non-decreasing, g be non-increasing, $f,g\in {C}^{1}(\mathbb{R}_{+})$. We assume that Z is a positive random variable and the functions $a,b:\mathbb{R}_{+}\to \mathbb{R}_{+}$ are such that $\mathbf{P}(Z>x)\le a(x)$ and $\mathbf{P}(Z\le x)\le b(x)$. If
\[ {\int _{\cdot }^{+\infty }}{f^{\prime }}(x)a(x)dx<+\infty \hspace{2em}\Bigg(\textit{or}\hspace{1em}{\int _{0}^{\cdot }}-{g^{\prime }}(x)b(x)dx<+\infty \Bigg),\]
then $\mathbf{E}[f(Z)]<+\infty $ (or $\mathbf{E}[g(Z)]<+\infty $ respectively).
Proof.
The following relations are valid for the function f
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}\big[f(Z)\big]=& \displaystyle {\int _{0}^{+\infty }}f(x)d\mathbf{P}(Z\le x)=-{\int _{0}^{+\infty }}f(x)d\mathbf{P}(Z>x)\\{} \displaystyle =& \displaystyle -f(x)\mathbf{P}(Z>x){|_{0}^{\infty }}+{\int _{0}^{+\infty }}{f^{\prime }}(x)\mathbf{P}(Z>x)dx\\{} \displaystyle =& \displaystyle -\underset{x\to \infty }{\lim }f(x)\mathbf{P}(Z>x)+f(0)+{\int _{0}^{+\infty }}{f^{\prime }}(x)\mathbf{P}(Z>x)dx\\{} \displaystyle \le & \displaystyle f(0)+{\int _{0}^{+\infty }}{f^{\prime }}(x)a(x)dx<+\infty .\end{array}\]
And for the function g the following is true
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}\big[g(Z)\big]=& \displaystyle {\int _{0}^{+\infty }}g(x)d\mathbf{P}(Z\le x)=g(x)\mathbf{P}(Z\le x){|_{0}^{\infty }}-{\int _{0}^{+\infty }}{g^{\prime }}(x)\mathbf{P}(Z\le x)dx\\{} \displaystyle =& \displaystyle g(+\infty )-\underset{x\to 0}{\lim }g(x)\mathbf{P}(Z\le x)-{\int _{0}^{+\infty }}{g^{\prime }}(x)\mathbf{P}(Z\le x)dx\\{} \displaystyle \le & \displaystyle g(+\infty )-{\int _{0}^{+\infty }}{g^{\prime }}(x)b(x)dx<+\infty .\end{array}\]
The lemma is proved. □

So, if there is an inequality for the tail probability $\mathbf{P}({X}^{\ast }\ge x)\le a(x)$, it suffices to find a positive non-decreasing function f such that ${\int _{\cdot }^{+\infty }}{f^{\prime }}(x)a(x)dx<+\infty $; Lemma 6.1 then implies that the expectation $\mathbf{E}[f({X}^{\ast })]$ is finite. Similarly, Lemma 6.1 can be applied to an estimate $\mathbf{P}({X}^{\ast }\le x)\le b(x)$ for the probability of small deviations.
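As a toy illustration of Lemma 6.1 (a sketch under simplifying assumptions chosen for convenience: $Z=|N(0,1)|$, the Gaussian tail bound $a(x)={e}^{-{x}^{2}/2}$, and $f(x)={e}^{\gamma {x}^{2}/2}$ with $\gamma <1$; none of these come from the paper):

```python
import numpy as np

gamma = 0.3                                   # gamma < 1 keeps the integral finite
f  = lambda x: np.exp(gamma * x**2 / 2)
fp = lambda x: gamma * x * f(x)               # f'(x)
a  = lambda x: np.exp(-x**2 / 2)              # tail bound: P(|N(0,1)| > x) <= a(x)

x = np.linspace(0, 20, 200_001)
print("integral of f'(x) a(x):", np.trapz(fp(x) * a(x), x))   # finite

rng = np.random.default_rng(1)
z = np.abs(rng.normal(size=200_000))
print("Monte Carlo E f(Z):", f(z).mean(), "exact:", (1 - gamma) ** -0.5)
```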
Example 1. Let us apply Theorem 5.1 to a centered Gaussian self-similar field $X=\{X(\mathbf{t}),\hspace{2.5pt}\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ with index $\mathbf{H}=(H_{1},H_{2})\in {(0,1)}^{2}$. It follows from [9] that there exists a constant $c_{1}>0$ such that $\mathbf{E}[f({X}^{\ast })]<+\infty $ for the function
\[ f(y)=\exp \bigg\{(c_{1}-\varepsilon )\frac{{y}^{2}}{2}\bigg\},\hspace{1em}0<\varepsilon <c_{1}.\]
Now we need to define a non-decreasing function $\varphi :{\mathbb{R}_+^{2}}\to (0,+\infty )$ in such a way that condition (iii) of Theorem 5.1 is fulfilled, namely ${\int _{1}^{+\infty }}\frac{dx}{xf(\varphi (x\mathbf{1}))}<+\infty $. Let us choose φ satisfying $f(\varphi (x\mathbf{1}))={(\ln (x+e))}^{1+\eta }$; in this case condition (iii) of Theorem 5.1 holds true for every $\eta >0$. Thus, for $\eta =\varepsilon $ we obtain
\[ f\big(\varphi (x\mathbf{1})\big)={\big(\ln (x+e)\big)}^{1+\varepsilon },\hspace{1em}x\ge 0.\]
Furthermore, the function φ is thereby defined on the diagonal $\{x\mathbf{1},x\ge 0\}$:
\[ \varphi (x\mathbf{1})={\bigg(\frac{2(1+\varepsilon )}{c_{1}-\varepsilon }\bigg)}^{1/2}{\big(\ln \ln (x+e)\big)}^{1/2},\hspace{1em}x\ge 0.\]
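The algebra behind this choice is easy to confirm numerically (a sketch assuming NumPy; the values of $c_1$ and ε are placeholders, since $c_1$ from [9] is not known explicitly):

```python
import numpy as np

c1, eps = 1.0, 0.3   # placeholder values; the true c1 comes from [9]
f   = lambda y: np.exp((c1 - eps) * y**2 / 2)
phi = lambda x: np.sqrt(2 * (1 + eps) / (c1 - eps) * np.log(np.log(x + np.e)))

x = np.array([1.0, 10.0, 1e6])
# f(phi(x*1)) recovers (ln(x+e))^{1+eps}, as required by condition (iii)
assert np.allclose(f(phi(x)), np.log(x + np.e) ** (1 + eps))
```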
Moreover, the function φ can be extended to the whole plane arbitrarily, provided the conditions (i) and (ii) of Theorem 5.1 are preserved. For example, the following three functions satisfy the mentioned conditions:
where $\mathbf{x}=(x_{1},x_{2})\in {\mathbb{R}_{+}^{2}}$ and $\delta =2\frac{3\varepsilon +1-c_{1}}{c_{1}-\varepsilon }>2\frac{1-c_{1}}{c_{1}}$.
Indeed, these functions are convex and non-decreasing in every coordinate, so the supremum in condition (ii) of Theorem 5.1 is attained at $n=m=1$. Therefore, $c=1$, and Theorem 5.1 yields the corresponding iterated log-type upper bounds for the field X.
Example 2. Now let us consider how Theorem 5.3 can be applied. As mentioned before, estimates for the probability of small deviations can be used for checking the first condition of the theorem. Such estimates are quite crude for general Gaussian random fields, but for a narrower class of fields, namely for the fractional Brownian sheet, more precise results exist.
Let $\{B_{\mathbf{H}}(\mathbf{t}),\mathbf{t}\in {\mathbb{R}_{+}^{2}}\}$ be a fractional Brownian sheet with index $\mathbf{H}=(H_{1},H_{2})\in {(0,1)}^{2}$ (Definition 2.5). It was proved in the paper [12] that the following limit holds for $H_{1}\ne H_{2}$:
\[ -\underset{x\searrow 0}{\lim }{x}^{2/H}\ln \mathbf{P}\big\{{B_{\mathbf{H}}^{\ast }}\le x\big\}=\tau _{H},\]
where $H=\min \{H_{1},H_{2}\}$ and $\tau _{H}$ is a constant depending on H. For the case $H_{1}=H_{2}=H$ there is an inequality for the probability of small deviations:
\[ \ln \mathbf{P}\big\{{B_{\mathbf{H}}^{\ast }}\le x\big\}\le -K_{1}\frac{{(-\ln x)}^{2/H}}{{x}^{2/H}}\le -K_{1}\frac{1}{{x}^{2/H}},\hspace{1em}0<x<1,\]
where $K_{1}$ is some constant.

Summarizing both cases, we conclude that there exist constants $c_{2}>0$ and $b>0$ such that for all $y\in (0,b)$ the following inequality holds true:
\[ \mathbf{P}\big\{{B_{\mathbf{H}}^{\ast }}\le y\big\}\le \exp \bigg\{-\frac{c_{2}}{{y}^{2/H}}\bigg\}.\]
Let us define a function $g:\mathbb{R}_{+}\to (0,+\infty )$ for arbitrary $0<\delta <c_{2}$ as
\[ g(y)=\exp \bigg\{\frac{c_{2}-\delta }{{y}^{2/H}}\bigg\},\hspace{1em}y>0.\]
Then Lemma 6.1 implies that $\mathbf{E}[g({B_{\mathbf{H}}^{\ast }})]<+\infty $.

Let us choose the function $\psi :{\mathbb{R}_{+}^{2}}\to (0,+\infty )$ in such a way that condition (ii) of Theorem 5.3 is fulfilled for it, i.e. ${\int _{1}^{+\infty }}\frac{dx}{xg(\psi (x\mathbf{1}))}<+\infty $. This condition holds if $g(\psi (x\mathbf{1}))={(\ln (x+e))}^{1+\varepsilon }$, $\varepsilon >0$. Then
\[ \psi (x\mathbf{1})={\bigg(\frac{c_{2}-\delta }{1+\varepsilon }\bigg)}^{H/2}{\big(\ln \ln (x+e)\big)}^{-H/2},\hspace{1em}x>0.\]
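Again, the consistency of g and ψ can be checked directly (a sketch with placeholder values of H, $c_2$, δ and ε, since $c_2$ from [12] is not known explicitly):

```python
import numpy as np

H, c2, delta, eps = 0.4, 1.0, 0.2, 0.5   # placeholder values
g   = lambda y: np.exp((c2 - delta) * y ** (-2.0 / H))
psi = lambda x: ((c2 - delta) / (1 + eps)) ** (H / 2) \
                * np.log(np.log(x + np.e)) ** (-H / 2)

x = np.array([2.0, 10.0, 1e4])
# g(psi(x*1)) recovers (ln(x+e))^{1+eps}, as required by condition (ii)
assert np.allclose(g(psi(x)), np.log(x + np.e) ** (1 + eps))
```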
Further, we need to extend the function ψ to ${\mathbb{R}_{+}^{2}}$ in such a way that it remains non-increasing in every coordinate. Let $\varepsilon =\frac{\delta }{c_{2}-2\delta }$, $2\delta <c_{2}$; then the following functions satisfy the conditions (i)–(ii) of Theorem 5.3:
\[ \psi _{1}(\mathbf{x})={(c_{2}-2\delta )}^{\frac{H}{2}}{\bigg(\ln \ln \bigg(\frac{x_{1}+x_{2}}{2}+e\bigg)\bigg)}^{-\frac{H}{2}},\]
\[ \psi _{2}(\mathbf{x})={(c_{2}-2\delta )}^{\frac{H}{2}}{\big(\ln \ln (x_{1}+e)\big)}^{-\frac{H}{4}}{\big(\ln \ln (x_{2}+e)\big)}^{-\frac{H}{4}},\]
where $\mathbf{x}=(x_{1},x_{2})\in {\mathbb{R}_{+}^{2}}$.

Thus, we obtain the following inequalities:
\[ \underset{s_{1}\wedge s_{2}\to \infty }{\liminf }\frac{{[\ln \ln (\frac{s_{1}+s_{2}}{2}+e)]}^{\frac{H}{2}}}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}}\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le s_{1},}{0\le t_{2}\le s_{2}}}{\sup }\big|B_{\mathbf{H}}(\mathbf{t})\big|\ge {c_{2}^{\frac{H}{2}}}\hspace{1em}\text{a.s.,}\]
\[ \underset{s_{1}\wedge s_{2}\to \infty }{\liminf }\frac{{[\ln \ln (s_{1}+e)]}^{\frac{H}{4}}{[\ln \ln (s_{2}+e)]}^{\frac{H}{4}}}{{s_{1}^{H_{1}}}{s_{2}^{H_{2}}}}\underset{\genfrac{}{}{0pt}{}{0\le t_{1}\le s_{1},}{0\le t_{2}\le s_{2}}}{\sup }\big|B_{\mathbf{H}}(\mathbf{t})\big|\ge {c_{2}^{\frac{H}{2}}}\hspace{1em}\text{a.s.}\]
So, we have presented examples of upper and lower limiting functions for the fractional Brownian sheet $B_{\mathbf{H}}$.