1 Introduction
Simulation of random processes is nowadays a wide area, and there are many simulation methods (see, e.g., [9, 10]). There is one substantial problem: for most traditional simulation methods, it is difficult to measure the quality of approximation of a process by its model in terms of the “distance” between paths of the process and the corresponding paths of the model. Therefore, models for which such a distance can be estimated are quite interesting.
Simulation by such models is known as simulation with given accuracy and reliability; it is considered, for example, in [7, 4, 6, 11, 12].
Simulation with given accuracy and reliability can be described in the following way. Suppose that an approximation $\hat{X}(t)$ of a random process $X(t)$ is constructed. The random process $\hat{X}(t)$ is called a model of $X(t)$. A model depends on certain parameters. The rate of convergence of the model to the process is given by a statement of the following type: if numbers δ (accuracy) and ε ($1-\varepsilon $ is called reliability) are given and the parameters of the model satisfy certain restrictions (for instance, they are not less than certain lower bounds), then
(1)
\[\mathrm{P}\big\{\| X-\hat{X}\| >\delta \big\}\le \varepsilon .\]
Many such results have been proved for the cases where the norm in (1) is the $L_{p}$ norm or the uniform norm. But simulation with given accuracy and reliability has been developed so far mostly for processes with one-dimensional distributions having tails not heavier than Gaussian tails (e.g., for sub-Gaussian processes), and such a simulation for processes with tails heavier than Gaussian tails deserves attention.
We consider a random process $Y(t)=\exp \{X(t)\}$ and an f-wavelet $\phi (x)$ with the corresponding m-wavelet $\psi (x)$, where $X(t)$ is a centered second-order process whose correlation function $R(t,s)$ can be represented as
\[R(t,s)=\int _{\mathbb{R}}u(t,\lambda )\overline{u(s,\lambda )}\hspace{0.1667em}d\lambda \]
for some Borel function $u(t,\lambda )$ (see Theorem 1 below for the precise assumptions).
We prove that
\[Y(t)=\prod \limits_{k\in \mathbb{Z}}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{\infty }\prod \limits_{l\in \mathbb{Z}}\exp \big\{\eta _{jl}b_{jl}(t)\big\},\]
where $\xi _{0k}$, $\eta _{jl}$ are random variables, and $a_{0k}(t)$, $b_{jl}(t)$ are functions that depend on $X(t)$ and the wavelet. As a model of $Y(t)$, we take the process
\[\hat{Y}(t)=\prod \limits_{k=-(N_{0}-1)}^{N_{0}-1}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{N-1}\prod \limits_{l=-(M_{j}-1)}^{M_{j}-1}\exp \big\{\eta _{jl}b_{jl}(t)\big\}.\]
Let us consider the case where $X(t)$ is a strictly sub-Gaussian process. Note that the class of processes $Y(t)=\exp \{X(t)\}$, where $X(t)$ is a strictly sub-Gaussian process, is a rich class that includes many processes with one-dimensional distributions having tails heavier than Gaussian tails; for example, when $X(t)$ is a Gaussian process, one-dimensional distributions of $Y(t)$ are lognormal.
We describe the rate of convergence of $\hat{Y}(t)$ to the process $Y(t)$ in $C([0,T])$ in the following way: if $\varepsilon \in (0;1)$ and $\delta >0$ are given and the parameters $N_{0}$, N, $M_{j}$ are large enough, then
(2)
\[\mathrm{P}\Big\{\underset{t\in [0,T]}{\sup }\big|Y(t)/\hat{Y}(t)-1\big|>\delta \Big\}\le \varepsilon .\]
A similar statement that characterizes the rate of convergence of $\hat{Y}(t)$ to $Y(t)$ in $L_{p}([0,T])$ is also proved for the case where (2) is replaced by the inequality
\[\mathrm{P}\big\{\| Y-\hat{Y}\| _{L_{p}([0,T])}>\delta \big\}\le \varepsilon .\]
If the process $X(t)=\ln Y(t)$ is Gaussian, then the model $\hat{Y}(t)$ can be used for computer simulation of $Y(t)$.
One of the merits of our model is its simplicity. Besides, it can be used for simulation of processes with one-dimensional distributions having tails heavier than Gaussian tails.
2 Auxiliary facts
A random variable ξ is called sub-Gaussian if there exists a constant $a\ge 0$ such that
\[\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{a}^{2}{\lambda }^{2}/2\big\}\]
for all $\lambda \in \mathbb{R}$.
The class of all sub-Gaussian random variables on a standard probability space $\{\varOmega ,\mathcal{B},P\}$ is a Banach space with respect to the norm
\[\tau (\xi )=\inf \big\{a\ge 0:\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{a}^{2}{\lambda }^{2}/2\big\}\hspace{2.5pt}\text{for all}\hspace{2.5pt}\lambda \in \mathbb{R}\big\}.\]
A centered Gaussian random variable and a random variable uniformly distributed on $[-b,b]$ are examples of sub-Gaussian random variables.
A sub-Gaussian random variable ξ is called strictly sub-Gaussian if
\[{\tau }^{2}(\xi )=\mathsf{E}{\xi }^{2}.\]
For any sub-Gaussian random variable ξ,
(3)
\[\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{\lambda }^{2}{\tau }^{2}(\xi )/2\big\},\hspace{1em}\lambda \in \mathbb{R},\]
and
(4)
\[\mathsf{E}|\xi {|}^{p}\le 2{(p/e)}^{p/2}{\tau }^{p}(\xi ),\hspace{1em}p\ge 1.\]
A family Δ of sub-Gaussian random variables is called strictly sub-Gaussian if for any finite or countable set I of random variables $\xi _{i}\in \varDelta $ and for any $\lambda _{i}\in \mathbb{R}$,
\[{\tau }^{2}\bigg(\sum \limits_{i\in I}\lambda _{i}\xi _{i}\bigg)=\mathsf{E}{\bigg(\sum \limits_{i\in I}\lambda _{i}\xi _{i}\bigg)}^{2}.\]
A stochastic process $X=\{X(t),t\in \mathbf{T}\}$ is called sub-Gaussian if all the random variables $X(t)$, $t\in \mathbf{T}$, are sub-Gaussian and $\sup _{t\in \mathbf{T}}\tau (X(t))<\infty $. We call a sub-Gaussian stochastic process $X=\{X(t),t\in \mathbf{T}\}$ strictly sub-Gaussian if the family $\{X(t),t\in \mathbf{T}\}$ is strictly sub-Gaussian. Any centered Gaussian process $X=\{X(t),t\in \mathbf{T}\}$ for which $\sup _{t\in \mathbf{T}}\mathsf{E}{(X(t))}^{2}<\infty $ is strictly sub-Gaussian. Details about sub-Gaussian random variables and sub-Gaussian and strictly sub-Gaussian random processes can be found in [1] and [3].
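As a quick numerical illustration of these notions (not part of the original argument): for ξ uniform on $[-b,b]$ the moment generating function is $\sinh (\lambda b)/(\lambda b)$, and the bound $\mathsf{E}\exp \{\lambda \xi \}\le \exp \{{\lambda }^{2}{\tau }^{2}/2\}$ holds with ${\tau }^{2}=\mathsf{E}{\xi }^{2}={b}^{2}/3$, so ξ is strictly sub-Gaussian. A sketch in Python (the value of b is arbitrary):

```python
import numpy as np

# Check numerically that xi ~ Uniform[-b, b] is strictly sub-Gaussian:
# E exp(lam * xi) = sinh(lam * b) / (lam * b) <= exp(lam^2 * tau^2 / 2)
# with tau^2 = E xi^2 = b^2 / 3.  The value of b is an arbitrary example.
b = 2.0
tau2 = b**2 / 3.0

lams = np.linspace(-20.0, 20.0, 4001)
lams = lams[lams != 0]                      # mgf formula needs lam != 0
mgf = np.sinh(lams * b) / (lams * b)        # closed-form E exp(lam * xi)
bound = np.exp(lams**2 * tau2 / 2.0)

assert np.all(mgf <= bound)
```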
We will use wavelets (see [2] for details) for an expansion of a stochastic process. Namely, we use a father wavelet $\phi (x)$ and the corresponding mother wavelet $\psi (x)$ (we will use the terms “f-wavelet” and “m-wavelet” instead of the terms “father wavelet” and “mother wavelet,” respectively). Set
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \phi _{0k}(x)& \displaystyle =\phi (x-k),\hspace{1em}k\in \mathbb{Z},\hspace{2em}\\{} \displaystyle \psi _{jl}(x)& \displaystyle ={2}^{j/2}\psi \big({2}^{j}x-l\big),\hspace{1em}j,l\in \mathbb{Z}.\hspace{2em}\end{array}\]
Note that $\{\phi _{0l},\psi _{jk},l\in \mathbb{Z},k\in \mathbb{Z},j=0,1,\dots \}$ is an orthonormal basis in $L_{2}(\mathbb{R})$. We will further consider only wavelets for which both $\phi (x)$ and $\psi (x)$ are real-valued. We denote by $\hat{f}$ the Fourier transform of a function $f\in L_{2}(\mathbb{R})$:
\[\hat{f}(y)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}{e}^{-iyx}f(x)dx.\]
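For a concrete feeling of the system $\{\phi _{0k},\psi _{jl}\}$, its orthonormality can be checked numerically for the Haar wavelet; the following sketch is purely illustrative (the grid and the tested indices are arbitrary):

```python
import numpy as np

# Numerical orthonormality check for a few elements of the Haar system
# phi_0k(x) = phi(x - k),  psi_jl(x) = 2^{j/2} psi(2^j x - l).
def phi(x):            # Haar f-wavelet: indicator of [0, 1)
    return ((0 <= x) & (x < 1)).astype(float)

def psi(x):            # Haar m-wavelet
    return ((0 <= x) & (x < 0.5)).astype(float) - ((0.5 <= x) & (x < 1)).astype(float)

def phi_0k(x, k):
    return phi(x - k)

def psi_jl(x, j, l):
    return 2.0**(j / 2) * psi(2.0**j * x - l)

x = np.linspace(-8.0, 8.0, 1_600_001)
dx = x[1] - x[0]
ip = lambda f, g: np.sum(f * g) * dx       # L2(R) inner product on a grid

assert abs(ip(phi_0k(x, 0), phi_0k(x, 0)) - 1) < 1e-3   # ||phi_00|| = 1
assert abs(ip(psi_jl(x, 2, 1), psi_jl(x, 2, 1)) - 1) < 1e-3
assert abs(ip(phi_0k(x, 0), psi_jl(x, 1, 0))) < 1e-3    # orthogonality
assert abs(ip(psi_jl(x, 0, 0), psi_jl(x, 1, 0))) < 1e-3
```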
The following statement is crucial for us.
Theorem 1.
([5]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered random process such that $\mathsf{E}|X(t){|}^{2}<\infty $ for all $t\in \mathbb{R}$. Let $R(t,s)=\mathsf{E}X(t)\overline{X(s)}$, and suppose that there exists a Borel function $u(t,\lambda )$, $t\in \mathbb{R}$, $\lambda \in \mathbb{R}$, such that
\[\int _{\mathbb{R}}{\big|u(t,\lambda )\big|}^{2}d\lambda <\infty \hspace{1em}\textit{for all }t\in \mathbb{R}\]
and
\[R(t,s)=\int _{\mathbb{R}}u(t,\lambda )\overline{u(s,\lambda )}\hspace{0.1667em}d\lambda .\]
Let $\phi (x)$ be an f-wavelet, and $\psi (x)$ the corresponding m-wavelet. Then the process $X(t)$ can be presented as the following series, which converges for any $t\in \mathbb{R}$ in $L_{2}(\varOmega )$:
(5)
\[X(t)=\sum \limits_{k\in \mathbb{Z}}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{\infty }\sum \limits_{l\in \mathbb{Z}}\eta _{jl}b_{jl}(t),\]
where
(6)
\[a_{0k}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }_{0k}(y)}dy=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }(y)}{e}^{iyk}dy,\]
(7)
\[b_{jl}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\psi }_{jl}(y)}dy=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y){2}^{-j/2}\exp \bigg\{i\frac{y}{{2}^{j}}l\bigg\}\overline{\hat{\psi }\bigg(\frac{y}{{2}^{j}}\bigg)}dy,\]
and $\xi _{0k}$, $\eta _{jl}$ are centered random variables such that
\[\mathsf{E}\xi _{0k}\xi _{0m}=\delta _{km},\hspace{2em}\mathsf{E}\eta _{jl}\eta _{nm}=\delta _{jn}\delta _{lm},\hspace{2em}\mathsf{E}\xi _{0k}\eta _{jl}=0.\]
Remark 1.
Explicit formulae for the random variables $\xi _{0k}$, $\eta _{jl}$ from an expansion more general than (5) have been obtained under certain restrictions on the process $X(t)$ (see [8], Theorem 2.1). It seems that obtaining explicit formulae for $\xi _{0k}$ and $\eta _{jl}$ in the general case is either impossible or quite nontrivial.
Definition.
Condition RC holds for a stochastic process $X(t)$ if it satisfies the conditions of Theorem 1, $u(t,\cdot )\in L_{1}(\mathbb{R})\cap L_{2}(\mathbb{R})$, and the inverse Fourier transform $\tilde{u}_{x}(t,x)$ of the function $u(t,x)$ with respect to x is a real function.
Remark 2.
Condition RC guarantees that the coefficients $a_{0k}(t)$, $b_{jl}(t)$ of expansion (5) are real. Indeed, by the Parseval identity, formulae (6) and (7) can be rewritten as integrals of the real-valued functions $\tilde{u}_{x}(t,x)\phi _{0k}(x)$ and $\tilde{u}_{x}(t,x)\psi _{jl}(x)$.
Suppose that $X(t)$ is a process that satisfies the conditions of Theorem 1. Let us consider the following approximation (or model) of $X(t)$:
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{X}(t)& \displaystyle =\hat{X}(N_{0},N,M_{0},\dots ,M_{N-1},t)\\{} & \displaystyle =\sum \limits_{k=-(N_{0}-1)}^{N_{0}-1}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{N-1}\sum \limits_{l=-(M_{j}-1)}^{M_{j}-1}\eta _{jl}b_{jl}(t),\end{array}\]
where $\xi _{0k}$, $\eta _{jl}$, $a_{0k}(t)$, $b_{jl}(t)$ are defined in Theorem 1.
3 A multiplicative representation
We will obtain a multiplicative representation for a wide class of stochastic processes.
Theorem 2.
Suppose that a random process $Y(t)$ can be represented as $Y(t)=\exp \{X(t)\}$, where the process $X(t)$ satisfies the conditions of Theorem 1. Then the equality
(9)
\[Y(t)=\prod \limits_{k\in \mathbb{Z}}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{\infty }\prod \limits_{l\in \mathbb{Z}}\exp \big\{\eta _{jl}b_{jl}(t)\big\}\]
holds, where product (9) converges in probability for any fixed t, and $\xi _{0k}$, $\eta _{jl}$, $a_{0k}(t)$, $b_{jl}(t)$ are defined in Theorem 1.
The statement of the theorem immediately follows from Theorem 1.
Remark 4.
It was shown in [5] that any centered second-order wide-sense stationary process $X(t)$ that has a spectral density satisfies the conditions of Theorem 1. The process $Y(t)=\exp \{X(t)\}$ can then be represented as product (9), and therefore the class of processes that satisfy the conditions of Theorem 2 is rather wide.
It is natural to approximate a stochastic process $Y(t)=\exp \{X(t)\}$ that satisfies the conditions of Theorem 2 by the model
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{Y}(t)& \displaystyle =\hat{Y}(N_{0},N,M_{0},\dots ,M_{N-1},t)\\{} & \displaystyle =\prod \limits_{k=-(N_{0}-1)}^{N_{0}-1}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{N-1}\prod \limits_{l=-(M_{j}-1)}^{M_{j}-1}\exp \big\{\eta _{jl}b_{jl}(t)\big\}=\exp \big\{\hat{X}(t)\big\}.\end{array}\]
Remark 5.
If $X(t)=\ln Y(t)$ is a Gaussian process, then we can use the model $\hat{Y}(t)$ for computer simulation of $Y(t)$, taking as $\xi _{0k}$, $\eta _{jl}$ in (10) independent random variables with distribution $N(0;1)$.
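A minimal sketch of this simulation scheme is given below. The coefficient functions $a_{0k}(t)$, $b_{jl}(t)$ are problem-specific (they are determined by $u(t,y)$ and the chosen wavelet via (6) and (7)), so the toy coefficients used here are purely illustrative and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_Y_hat(a, b, N0, N, M, t):
    """One sample path of the model (10) on the grid t.

    a(k, t) and b(j, l, t) play the role of a_{0k}(t) and b_{jl}(t);
    xi_{0k}, eta_{jl} are drawn i.i.d. N(0, 1) as in Remark 5.
    M is the sequence (M_0, ..., M_{N-1}).
    """
    X_hat = np.zeros_like(t)
    for k in range(-(N0 - 1), N0):
        X_hat += rng.standard_normal() * a(k, t)
    for j in range(N):
        for l in range(-(M[j] - 1), M[j]):
            X_hat += rng.standard_normal() * b(j, l, t)
    return np.exp(X_hat)                 # Y_hat = exp{X_hat}

# Toy coefficient functions, for illustration only:
a_toy = lambda k, t: np.exp(-k**2) * np.cos(t + k)
b_toy = lambda j, l, t: 2.0**(-j) * np.exp(-l**2) * np.sin(2.0**j * t + l)

t = np.linspace(0.0, 1.0, 201)
path = simulate_Y_hat(a_toy, b_toy, N0=4, N=3, M=[4, 4, 4], t=t)
assert path.shape == t.shape and np.all(path > 0)
```

Since the model is an exponential of a finite sum, every simulated path is strictly positive, in agreement with $Y(t)=\exp \{X(t)\}$.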
4 Simulation with given relative accuracy and reliability in $C([0,T])$
Let us study the rate of convergence in $C([0,T])$ of model (10) to a process $Y(t)$. We will need several auxiliary facts.
Lemma 1.
([13]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered stochastic process that satisfies the requirements of Theorem 1, $T>0$, ϕ be an f-wavelet, ψ be the corresponding m-wavelet, the function $\hat{\phi }(y)$ be absolutely continuous on any interval, the function $u(t,y)$ be absolutely continuous with respect to y for any fixed t, there exist the derivatives ${u^{\prime }_{\lambda }}(t,\lambda )$, ${\hat{\phi }^{\prime }}(y)$, ${\hat{\psi }^{\prime }}(y)$, the inequalities $|{\hat{\psi }^{\prime }}(y)|\le C$, $|u(t,\lambda )|\le |t|u_{1}(\lambda )$, $|{u^{\prime }_{\lambda }}(t,\lambda )|\le |t|\hspace{0.1667em}u_{2}(\lambda )$ hold,
and
(11)
\[\int _{\mathbb{R}}u_{1}(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{1}(y)dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{1}(y)\big|{\hat{\phi }^{\prime }}(y)\big|dy<\infty ,\]
(12)
\[\int _{\mathbb{R}}u_{1}(y)\big|\hat{\phi }(y)\big|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{2}(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{2}(y)\big|\hat{\phi }(y)\big|dy<\infty ,\]
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\hspace{0.1667em}\overline{\hat{\psi }\big(y/{2}^{j}\big)}=0\hspace{1em}\forall j=0,1,\dots ,\hspace{2.5pt}\forall t\in [0,T],\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\overline{\hat{\phi }(y)}=0\hspace{1em}\forall t\in [0,T],\end{array}\]
and set
\[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{1}& \displaystyle =\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)\big|\hat{\phi }(y)\big|dy,\\{} \displaystyle E_{2}& \displaystyle =\frac{1}{\sqrt{2\pi }}\bigg(\int _{\mathbb{R}}u_{1}(y)\big|{\hat{\phi }^{\prime }}(y)\big|dy+\int _{\mathbb{R}}u_{2}(y)\big|\hat{\phi }(y)\big|dy\bigg),\\{} \displaystyle F_{1}& \displaystyle =\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)|y|dy,\\{} \displaystyle F_{2}& \displaystyle =\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}\big(u_{1}(y)+|y|u_{2}(y)\big)dy.\end{array}\]
Let the process $\hat{X}(t)$ be defined by (8), and let $\delta >0$. If $N_{0}$, N, $M_{j}$ ($j=0,1,\dots ,N-1$) satisfy the inequalities
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\frac{6}{\delta }{E_{2}^{2}}{T}^{2}+1,\\{} \displaystyle N& \displaystyle >\max \bigg\{1+\log _{2}\bigg(\frac{72{F_{2}^{2}}{T}^{2}}{5\delta }\bigg),1+\log _{8}\bigg(\frac{18{F_{1}^{2}}{T}^{2}}{7\delta }\bigg)\bigg\},\\{} \displaystyle M_{j}& \displaystyle >1+\frac{12}{\delta }{F_{2}^{2}}{T}^{2},\end{array}\]
then
\[\underset{t\in [0,T]}{\sup }\mathsf{E}{\big|X(t)-\hat{X}(t)\big|}^{2}\le \delta .\]
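For practical use of Lemma 1, the smallest admissible integer parameters can be computed directly from its strict bounds; in the sketch below, the numeric values of δ, T, $E_{2}$, $F_{1}$, $F_{2}$ are placeholders, not values taken from the paper:

```python
import math

# Smallest integers satisfying the strict bounds of Lemma 1.
# delta, T, E2, F1, F2 below are placeholder values.
def lemma1_parameters(delta, T, E2, F1, F2):
    N0 = math.floor(6.0 / delta * E2**2 * T**2 + 1) + 1
    N = math.floor(max(1 + math.log2(72 * F2**2 * T**2 / (5 * delta)),
                       1 + math.log(18 * F1**2 * T**2 / (7 * delta), 8))) + 1
    M_j = math.floor(1 + 12.0 / delta * F2**2 * T**2) + 1  # same bound for every j
    return N0, N, M_j

N0, N, M_j = lemma1_parameters(delta=0.01, T=1.0, E2=1.0, F1=1.0, F2=1.0)
assert N0 > 6.0 / 0.01 + 1          # strict inequality of Lemma 1
assert M_j > 1 + 12.0 / 0.01
```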
Lemma 2.
([13]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered stochastic process satisfying the requirements of Theorem 1, $T>0$, ϕ be an f-wavelet, ψ be the corresponding m-wavelet, $S(y)=\overline{\hat{\psi }(y)}$, $S_{\phi }(y)=\overline{\hat{\phi }(y)}$. Suppose that $\phi (y)$, $u(t,\lambda )$, $S(y)$, $S_{\phi }(y)$ satisfy the following conditions: the function $u(t,y)$ is absolutely continuous with respect to y, the function $\hat{\phi }(y)$ is absolutely continuous,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big|{S^{\prime }}(y)\big|\le M<\infty ,\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)S\big(y/{2}^{j}\big)=0,\hspace{1em}j=0,1,\dots ,\hspace{2.5pt}t\in [0,T],\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)S_{\phi }(y)=0,\hspace{1em}t\in [0,T],\end{array}\]
there exist functions $v(y)$ and $w(y)$ such that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{u^{\prime }_{y}}(t_{1},y)-{u^{\prime }_{y}}(t_{2},y)\big|& \displaystyle \le |t_{2}-t_{1}|v(y),\\{} \displaystyle \big|u(t_{1},y)-u(t_{2},y)\big|& \displaystyle \le |t_{2}-t_{1}|w(y),\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{\mathbb{R}}|y|v(y)dy<\infty ,\hspace{2em}\int _{\mathbb{R}}v(y)\big|S_{\phi }(y)\big|dy<\infty ,\\{} & \displaystyle \int _{\mathbb{R}}w(y)\big|{S^{\prime }_{\phi }}(y)\big|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}w(y)dy<\infty ,\\{} & \displaystyle \int _{\mathbb{R}}w(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}w(y)\big|S_{\phi }(y)\big|dy<\infty ;\end{array}\]
$a_{0k}(t)$ and $b_{jl}(t)$ are defined by Eqs. (6) and (7),
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {A}^{(1)}& \displaystyle =\frac{1}{\sqrt{2\pi }}\bigg(\int _{\mathbb{R}}v(y)\big|S_{\phi }(y)\big|dy+\int _{\mathbb{R}}w(y)\big|{S^{\prime }_{\phi }}(y)\big|dy\bigg),\\{} \displaystyle {B}^{(0)}& \displaystyle =\frac{M}{\sqrt{2\pi }}\int _{\mathbb{R}}w(y)|y|dy,\\{} \displaystyle {B}^{(1)}& \displaystyle =\frac{M}{\sqrt{2\pi }}\int _{\mathbb{R}}\big(w(y)+|y|v(y)\big)dy,\\{} \displaystyle C_{\varDelta X}& \displaystyle =\sqrt{\frac{2{({A}^{(1)})}^{2}}{N_{0}-1}+\frac{{({B}^{(0)})}^{2}}{7\cdot {8}^{N-1}}+\frac{{({B}^{(1)})}^{2}}{{2}^{N-3}}+{\big({B}^{(1)}\big)}^{2}\sum \limits_{j=0}^{N-1}\frac{1}{{2}^{j-1}(M_{j}-1)}}\hspace{2.5pt}.\end{array}\]
Then, for $t_{1},t_{2}\in [0,T]$ and $N>1$, $N_{0}>1$, $M_{j}>1$, we have the inequality
(14)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{|k|\ge N_{0}}{\big|a_{0k}(t_{1})-a_{0k}(t_{2})\big|}^{2}+\sum \limits_{j\ge N}\sum \limits_{l\in \mathbb{Z}}{\big|b_{jl}(t_{1})-b_{jl}(t_{2})\big|}^{2}\\{} & \displaystyle \hspace{2em}+\sum \limits_{j=0}^{N-1}\sum \limits_{|l|\ge M_{j}}{\big|b_{jl}(t_{1})-b_{jl}(t_{2})\big|}^{2}\\{} & \displaystyle \hspace{1em}\le {C_{\varDelta X}^{2}}{(t_{2}-t_{1})}^{2}.\end{array}\]
Remark 6.
It is easy to see that the functions $a_{0k}(t)$ and $b_{jl}(t)$ are continuous under the conditions of Lemma 2.
Lemma 3.
If
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle \ge 1+\frac{8{({A}^{(1)})}^{2}}{{\varepsilon }^{2}},\\{} \displaystyle N& \displaystyle \ge \max \bigg\{1+\log _{8}\frac{4{({B}^{(0)})}^{2}}{7{\varepsilon }^{2}},\hspace{2.5pt}3+\log _{2}\frac{4{({B}^{(1)})}^{2}}{{\varepsilon }^{2}}\bigg\},\\{} \displaystyle M_{j}& \displaystyle \ge 1+16\frac{{({B}^{(1)})}^{2}}{{\varepsilon }^{2}}\end{array}\]
for some $\varepsilon >0$, then
\[C_{\varDelta X}\le \varepsilon ,\]
where ${A}^{(1)}$, ${B}^{(0)}$, ${B}^{(1)}$, $C_{\varDelta X}$ are defined in Lemma 2.
We omit the proof due to its simplicity.
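Numerically, one can check that the bounds of Lemma 3 force the constant $C_{\varDelta X}$ of Lemma 2 to be at most ε. A sketch with placeholder constants $A^{(1)}$, $B^{(0)}$, $B^{(1)}$ (these numeric values are ours, chosen only to exercise the formulas):

```python
import math

# Sanity check of Lemma 3 with placeholder constants A1 = A^(1),
# B0 = B^(0), B1 = B^(1): choose N0, N, M_j by its bounds and evaluate
# C_{Delta X} from its definition in Lemma 2.
A1, B0, B1, eps = 2.0, 3.0, 1.5, 0.1

N0 = math.ceil(1 + 8 * A1**2 / eps**2)
N = math.ceil(max(1 + math.log(4 * B0**2 / (7 * eps**2), 8),
                  3 + math.log2(4 * B1**2 / eps**2)))
M = [math.ceil(1 + 16 * B1**2 / eps**2)] * N

C2 = (2 * A1**2 / (N0 - 1)
      + B0**2 / (7 * 8**(N - 1))
      + B1**2 / 2**(N - 3)
      + B1**2 * sum(1.0 / (2**(j - 1) * (M[j] - 1)) for j in range(N)))
C_delta_X = math.sqrt(C2)
assert C_delta_X <= eps
```

Each of the four terms under the square root is pushed below ${\varepsilon }^{2}/4$ by the corresponding bound, which is exactly how the omitted proof goes.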
Definition.
We say that a model $\hat{Y}(t)$ approximates a stochastic process $Y(t)$ with given relative accuracy δ and reliability $1-\varepsilon $ (where $\varepsilon \in (0;1)$) in $C([0,T])$ if
\[\mathrm{P}\Big\{\underset{t\in [0,T]}{\sup }\big|Y(t)/\hat{Y}(t)-1\big|>\delta \Big\}\le \varepsilon .\]
Now we can state a result on the rate of convergence in $C([0,T])$.
Theorem 3.
Suppose that a random process $Y=\{Y(t),t\in \mathbb{R}\}$ can be represented as $Y(t)=\exp \{X(t)\}$, where a separable strictly sub-Gaussian random process $X=\{X(t),t\in \mathbb{R}\}$ is mean-square continuous, satisfies the condition RC and the conditions of Lemmas 1 and 2 together with an f-wavelet ϕ and the corresponding m-wavelet ψ, the random variables $\xi _{0k}$, $\eta _{jl}$ in expansion (5) of the process $X(t)$ are independent strictly sub-Gaussian, $\hat{X}(t)$ is a model of $X(t)$ defined by (8), $\hat{Y}(t)$ is defined by (10), $\theta \in (0;1)$, $\delta >0$, $\varepsilon \in (0;1)$, $T>0$, the numbers ${A}^{(1)}$, ${B}^{(0)}$, ${B}^{(1)}$, $E_{2}$, $F_{1}$, $F_{2}$ are defined in Lemmas 1 and 2,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{\varepsilon }& \displaystyle =\delta \sqrt{\varepsilon },\\{} \displaystyle A(\theta )& \displaystyle ={\int _{1/(2\theta )}^{\infty }}\frac{\sqrt{v+1}}{{v}^{2}}dv,\\{} \displaystyle \tau _{1}& \displaystyle =\frac{{e}^{1/2}\hspace{0.1667em}\hat{\varepsilon }}{{2}^{7/4}{(64+{\hat{\varepsilon }}^{2})}^{1/4}},\\{} \displaystyle \tau _{2}& \displaystyle ={\big(32\ln \big(1+{\hat{\varepsilon }}^{2}/60\big)\big)}^{1/2},\\{} \displaystyle \tau _{3}& \displaystyle =\sqrt{\ln \big(1+{\hat{\varepsilon }}^{3}/8\big)}/\sqrt{2},\\{} \displaystyle \tau _{\ast }& \displaystyle =\min \{\tau _{1},\tau _{2},\tau _{3}\},\\{} \displaystyle Q& \displaystyle =\frac{{e}^{1/2}\hat{\varepsilon }\hspace{0.1667em}\theta (1-\theta )}{{2}^{9/4}A(\theta )T(1+{\hat{\varepsilon }}^{3}/8)},\\{} \displaystyle {N_{0}^{\ast }}& \displaystyle =1+\frac{8{({A}^{(1)})}^{2}}{{Q}^{2}},\\{} \displaystyle {N}^{\ast }& \displaystyle =\max \bigg\{1+\log _{8}\frac{4{({B}^{(0)})}^{2}}{7{Q}^{2}},3+\log _{2}\frac{4{({B}^{(1)})}^{2}}{{Q}^{2}}\bigg\},\\{} \displaystyle {M}^{\ast }& \displaystyle =1+16\frac{{({B}^{(1)})}^{2}}{{Q}^{2}},\\{} \displaystyle {N_{0}^{\ast \ast }}& \displaystyle =\frac{6}{{\tau _{\ast }^{2}}}{E_{2}^{2}}{T}^{2}+1,\\{} \displaystyle {N}^{\ast \ast }& \displaystyle =\max \bigg\{1+\log _{2}\bigg(\frac{72{F_{2}^{2}}{T}^{2}}{5{\tau _{\ast }^{2}}}\bigg),1+\log _{8}\bigg(\frac{18{F_{1}^{2}}{T}^{2}}{7{\tau _{\ast }^{2}}}\bigg)\bigg\},\\{} \displaystyle {M}^{\ast \ast }& \displaystyle =1+\frac{12}{{\tau _{\ast }^{2}}}{F_{2}^{2}}{T}^{2}.\end{array}\]
If
(16)
\[N_{0}>\max \big\{{N_{0}^{\ast }},{N_{0}^{\ast \ast }}\big\},\]
(17)
\[N>\max \big\{{N}^{\ast },{N}^{\ast \ast }\big\},\]
(18)
\[M_{j}>\max \big\{{M}^{\ast },{M}^{\ast \ast }\big\},\hspace{1em}j=0,1,\dots ,N-1,\]
then the model $\hat{Y}(t)$ approximates the process $Y(t)$ with given relative accuracy δ and reliability $1-\varepsilon $ in $C([0,T])$.
Proof.
Denote
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varDelta X(t)& \displaystyle =X(t)-\hat{X}(t),\\{} \displaystyle U(t)& \displaystyle =Y(t)/\hat{Y}(t)-1=\exp \big\{\varDelta X(t)\big\}-1,\\{} \displaystyle \rho _{U}(t,s)& \displaystyle =\big\| U(t)-U(s)\big\| _{L_{2}(\varOmega )},\\{} \displaystyle \tau _{\varDelta X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(\varDelta X(t)\big).\end{array}\]
Let us note that $\rho _{U}$ is a pseudometric. Let $N(u)$ be the metric massiveness of $[0,T]$ with respect to $\rho _{U}$, that is, the minimum number of closed balls in the space $([0,T],\rho _{U})$ with diameters at most $2u$ needed to cover $[0,T]$.
We will denote the norm in $L_{2}(\varOmega )$ by $\| \cdot \| _{2}$. Since $U(t)\in L_{2}(\varOmega )$, $t\in [0,T]$, using Theorem 3.3.3 from [1] (see p. 98), we obtain
where
We will prove that $S_{2}\le \delta \sqrt{\varepsilon }=\hat{\varepsilon }$.
First, let us estimate $\mathsf{E}|U(t){|}^{2}$ for $t\in [0,T]$.
Using the inequality
(20)
\[\big|{e}^{a}-{e}^{b}\big|\le |a-b|\max \big\{{e}^{a},{e}^{b}\big\}\le |a-b|\big({e}^{a}+{e}^{b}\big)\]
(we set $b=0$) and the Cauchy–Schwarz inequality, we obtain
(21)
\[\mathsf{E}{\big|U(t)\big|}^{2}\le {\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{4}\big)}^{1/2}{\big(\mathsf{E}{\big(\exp \big\{\varDelta X(t)\big\}+1\big)}^{4}\big)}^{1/2}.\]
It follows from (4) that
(22)
\[\mathsf{E}{\big|\varDelta X(t)\big|}^{4}\le 2{(4/e)}^{2}{\tau }^{4}\big(\varDelta X(t)\big).\]
Let us estimate $G=\mathsf{E}{(\exp \{\varDelta X(t)\}+1)}^{4}$. Since
\[\mathsf{E}\exp \big\{k\varDelta X(t)\big\}\le \exp \big\{{k}^{2}{\tau }^{2}\big(\varDelta X(t)\big)/2\big\}={A}^{{k}^{2}}\le {A}^{16},\hspace{1em}1\le k\le 4,\]
where $A=\exp \{{\tau _{\varDelta X}^{2}}/2\}$, we have
\[G\le 16{A}^{16}.\]
It follows from Lemma 1 and (16)–(18) that
(23)
\[\tau _{\varDelta X}=\underset{t\in [0,T]}{\sup }{\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{2}\big)}^{1/2}\le \tau _{\ast }.\]
Using (21)–(23), we obtain an upper bound for $\mathsf{E}|U(t){|}^{2}$.
Let us now estimate the integral ${\int _{0}^{\theta \varepsilon _{0}}}{N}^{1/2}(u)du$.
First, we will find an upper bound for $N(u)$. For this, we will prove that
(25)
\[\rho _{U}(t_{1},t_{2})\le C_{U}|t_{2}-t_{1}|,\]
where
\[C_{U}=\frac{{2}^{9/4}}{{e}^{1/2}}\exp \big\{2{\tau _{\varDelta X}^{2}}\big\}C_{\varDelta X},\]
with $C_{\varDelta X}$ defined in Lemma 2.
Using (20) and the Cauchy–Schwarz inequality, we have:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big\| U(t_{1})-U(t_{2}){\big\| _{2}^{2}}\\{} & \displaystyle \hspace{1em}=\mathsf{E}{\big|\exp \big\{\varDelta X(t_{1})\big\}-\exp \big\{\varDelta X(t_{2})\big\}\big|}^{2}\\{} & \displaystyle \hspace{1em}\le \mathsf{E}{\big|\varDelta X(t_{1})-\varDelta X(t_{2})\big|}^{2}{\big(\exp \big\{\varDelta X(t_{1})\big\}+\exp \big\{\varDelta X(t_{2})\big\}\big)}^{2}\\{} & \displaystyle \hspace{1em}\le {\big(\mathsf{E}{\big(\varDelta X(t_{1})-\varDelta X(t_{2})\big)}^{4}\big)}^{1/2}{\big(\mathsf{E}{\big(\exp \big\{\varDelta X(t_{1})\big\}+\exp \big\{\varDelta X(t_{2})\big\}\big)}^{4}\big)}^{1/2}.\end{array}\]
Applying (4), we obtain
(26)
\[\mathsf{E}{\big(\varDelta X(t_{1})-\varDelta X(t_{2})\big)}^{4}\le 2{(4/e)}^{2}{\big(\mathsf{E}{\big(\varDelta X(t_{1})-\varDelta X(t_{2})\big)}^{2}\big)}^{2}\le 2{(4/e)}^{2}{C_{\varDelta X}^{4}}{(t_{2}-t_{1})}^{4}.\]
Let us find an upper bound for $H=\mathsf{E}{(\exp \{\varDelta X(t_{1})\}+\exp \{\varDelta X(t_{2})\})}^{4}$. Since
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathsf{E}\exp \big\{k\varDelta X(t_{1})+l\varDelta X(t_{2})\big\}\\{} & \displaystyle \hspace{1em}\le \exp \big\{{\tau }^{2}\big(k\varDelta X(t_{1})+l\varDelta X(t_{2})\big)/2\big\}\le \exp \big\{{\big(k\tau \big(\varDelta X(t_{1})\big)+l\tau \big(\varDelta X(t_{2})\big)\big)}^{2}/2\big\}\\{} & \displaystyle \hspace{1em}\le \exp \big\{8{\tau _{\varDelta X}^{2}}\big\},\end{array}\]
where $k+l=4$, we have
(27)
\[H\le \sum \limits_{k=0}^{4}\left(\genfrac{}{}{0pt}{}{4}{k}\right)\exp \big\{8{\tau _{\varDelta X}^{2}}\big\}=16\exp \big\{8{\tau _{\varDelta X}^{2}}\big\},\]
and (25) follows from (26) and (27).
Using inequality (25), simple properties of metric entropy (see [1], Lemma 3.2.1, p. 88), and the inequality
\[N_{\rho _{1}}(u)\le \frac{T}{2u}+1\]
(where $N_{\rho _{1}}$ is the entropy of $[0,T]$ with respect to the Euclidean metric), we have
\[N(u)\le \frac{TC_{U}}{2u}+1.\]
Since $\varepsilon _{0}\le C_{U}T$, we obtain
(28)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\int _{0}^{\theta \varepsilon _{0}}}{N}^{1/2}(u)du& \displaystyle \le {\int _{0}^{\theta \varepsilon _{0}}}{\big(TC_{U}/(2u)+1\big)}^{1/2}du\\{} & \displaystyle =\frac{TC_{U}}{2}{\int _{TC_{U}/(2\theta \varepsilon _{0})}^{\infty }}\frac{\sqrt{v+1}}{{v}^{2}}dv\le TC_{U}A(\theta )/2.\end{array}\]
Example.
Let us consider the function $u(t,\lambda )=t/{(1+{t}^{2}+{\lambda }^{2})}^{4}$ and an arbitrary Daubechies wavelet with the corresponding f-wavelet ϕ and m-wavelet ψ. We will use the notations
\[a_{0k}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }_{0k}(y)}dy,\hspace{2em}b_{jl}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\psi }_{jl}(y)}dy\]
and consider the stochastic process
\[X(t)=\sum \limits_{k\in \mathbb{Z}}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{\infty }\sum \limits_{l\in \mathbb{Z}}\eta _{jl}b_{jl}(t),\]
where $\xi _{0k}$, $\eta _{jl}$ ($k,l\in \mathbb{Z}$, $j=0,1,\dots $) are independent and uniformly distributed over $[-\sqrt{3},\sqrt{3}]$. It can be checked that the process $Y(t)=\exp \{X(t)\}$ and the Daubechies wavelet satisfy the conditions of Theorem 3.
5 Simulation with given accuracy and reliability in $L_{p}([0,T])$
Lemma 4.
Suppose that a centered stochastic process $X=\{X(t),t\in \mathbb{R}\}$ satisfies the conditions of Theorem 1, ϕ is an f-wavelet, ψ is the corresponding m-wavelet, $\hat{\phi }$ and $\hat{\psi }$ are the Fourier transforms of ϕ and ψ, respectively, $\hat{\phi }(y)$ is absolutely continuous, $u(t,y)$ is defined in Theorem 1 and is absolutely continuous for any fixed t, there exist the derivatives ${u^{\prime }_{y}}(t,y)$, ${\hat{\phi }^{\prime }}(y)$, ${\hat{\psi }^{\prime }}(y)$ and $|{\hat{\psi }^{\prime }}(y)|\le C$, $|u(t,y)|\le u_{1}(y)$, $|{u^{\prime }_{y}}(t,y)|\le |t|\hspace{0.1667em}u_{2}(y)$, Eqs. (11) and (12) hold,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\hspace{0.1667em}\big|\overline{\hat{\psi }\big(y/{2}^{j}\big)}\big|=0\hspace{1em}\forall j=0,1,\dots ,\hspace{2.5pt}\forall t\in \mathbb{R},\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\big|\hat{\phi }(y)\big|=0\hspace{1em}\forall t\in \mathbb{R};\\{} & \displaystyle S_{1}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)\big|{\hat{\phi }^{\prime }}(y)\big|dy,\hspace{2em}S_{2}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{2}(y)\big|\hat{\phi }(y)\big|dy,\\{} & \displaystyle Q_{1}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)dy,\hspace{2em}Q_{2}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{2}(y)|y|dy.\end{array}\]
Then the following inequalities hold for the coefficients $a_{0k}(t),b_{jl}(t)$ in expansion (5) of the process $X(t)$:
(31)
\[\big|a_{00}(t)\big|\le \frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)\big|\hat{\phi }(y)\big|dy,\]
(32)
\[\big|b_{j0}(t)\big|\le \frac{C}{\sqrt{2\pi }\hspace{0.1667em}{2}^{3j/2}}\int _{\mathbb{R}}u_{1}(y)|y|dy,\hspace{1em}j=0,1,\dots ,\]
(33)
\[\big|a_{0k}(t)\big|\le \frac{S_{1}+S_{2}|t|}{|k|},\hspace{1em}k\ne 0,\]
(34)
\[\big|b_{jl}(t)\big|\le \frac{Q_{1}+Q_{2}|t|}{{2}^{j/2}|l|},\hspace{1em}j=0,1,\dots ,\hspace{2.5pt}l\ne 0.\]
The proof of inequalities (31)–(34) is analogous to the proof of similar inequalities for the coefficients of expansion (5) of a stationary process in [5].
Definition. We say that a model $\hat{Y}(t)$ approximates a stochastic process $Y(t)$ with given accuracy δ and reliability $1-\varepsilon $ (where $\varepsilon \in (0;1)$) in $L_{p}([0,T])$ if
\[\mathrm{P}\big\{\| Y-\hat{Y}\| _{L_{p}([0,T])}>\delta \big\}\le \varepsilon .\]
Lemma 5.
Suppose that a random process $X=\{X(t),t\in \mathbb{R}\}$ satisfies the conditions of Theorem 1, an f-wavelet ϕ and the corresponding m-wavelet ψ together with the process $X(t)$ satisfy the conditions of Lemma 4, C, $Q_{1}$, $Q_{2}$, $S_{1}$, $S_{2}$, $u_{1}(y)$ are defined in Lemma 4, $T>0$, $p\ge 1$, $\delta \in (0;1)$, $\varepsilon >0$, $\delta _{1}>0$, and
\[D=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)|y|dy.\]
If
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\frac{6}{\delta _{1}}{(S_{1}+S_{2}T)}^{2}+1,\\{} \displaystyle N& \displaystyle >\max \bigg\{1+\log _{2}\bigg(\frac{72{(Q_{1}+Q_{2}T)}^{2}}{5\delta _{1}}\bigg),1+\log _{8}\bigg(\frac{18{D}^{2}}{7\delta _{1}}\bigg)\bigg\},\\{} \displaystyle M_{j}& \displaystyle >1+\frac{12}{\delta _{1}}{(Q_{1}+Q_{2}T)}^{2}\bigg(1-\frac{1}{{2}^{N}}\bigg),\end{array}\]
then
\[\underset{t\in [0,T]}{\sup }\mathsf{E}{\big|X(t)-\hat{X}(t)\big|}^{2}\le \delta _{1}.\]
Proof.
We have
\[\mathsf{E}{\big|X(t)-\widehat{X}(t)\big|}^{2}=\sum \limits_{k:|k|\ge N_{0}}{\big|a_{0k}(t)\big|}^{2}+\sum \limits_{j=0}^{N-1}\sum \limits_{l:|l|\ge M_{j}}{\big|b_{jl}(t)\big|}^{2}+\sum \limits_{j=N}^{\infty }\sum \limits_{l\in \mathbb{Z}}{\big|b_{jl}(t)\big|}^{2}.\]
It remains to apply inequalities (31)–(34). □
Theorem 4.
Suppose that a random process $Y=\{Y(t),t\in \mathbb{R}\}$ can be represented as $Y(t)=\exp \{X(t)\}$, where a separable strictly sub-Gaussian random process $X=\{X(t),t\in \mathbb{R}\}$ is mean-square continuous, satisfies the condition RC and the conditions of Lemma 5 together with an f-wavelet ϕ and the corresponding m-wavelet ψ, the random variables $\xi _{0k},\eta _{jl}$ in expansion (5) of the process $X(t)$ are independent strictly sub-Gaussian, $\hat{X}(t)$ is a model of $X(t)$ defined by (8), $\hat{Y}(t)$ is defined by (10), D, $Q_{1}$, $Q_{2}$, $S_{1}$, $S_{2}$ are defined in Lemmas 4 and 5, $\delta >0$, $\varepsilon \in (0;1)$, $p\ge 1$, $T>0$.
Let
\[\begin{array}{r@{\hskip0pt}l}\displaystyle m& \displaystyle =\frac{\varepsilon {\delta }^{p}}{{2}^{2p}{(p/e)}^{p/2}\hspace{0.1667em}T\sup _{t\in [0,T]}{(\mathsf{E}\exp \{2pX(t)\})}^{1/2}},\\{} \displaystyle h(t)& \displaystyle ={t}^{p}{\big(1+\exp \big\{8{p}^{2}{t}^{2}\big\}\big)}^{1/4},\hspace{1em}t\ge 0,\end{array}\]
and let $x_{m}$ be the root of the equation
\[h(x)=m.\]
If
(35)
\[N_{0}>\frac{6}{{x_{m}^{2}}}{(S_{1}+S_{2}T)}^{2}+1,\]
(36)
\[N>\max \bigg\{1+\log _{2}\bigg(\frac{72{(Q_{1}+Q_{2}T)}^{2}}{5{x_{m}^{2}}}\bigg),1+\log _{8}\bigg(\frac{18{D}^{2}}{7{x_{m}^{2}}}\bigg)\bigg\},\]
(37)
\[M_{j}>1+\frac{12}{{x_{m}^{2}}}{(Q_{1}+Q_{2}T)}^{2}\bigg(1-\frac{1}{{2}^{N}}\bigg),\]
then the model $\hat{Y}(t)$ defined by (10) approximates $Y(t)$ with given accuracy δ and reliability $1-\varepsilon $ in $L_{p}([0,T])$.
Proof.
We will use the following notations:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varDelta X(t)& \displaystyle =\hat{X}(t)-X(t),\\{} \displaystyle \overline{\tau }_{X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(X(t)\big),\\{} \displaystyle \overline{\tau }_{\varDelta X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(\varDelta X(t)\big),\\{} \displaystyle c_{p}& \displaystyle =2{(4p/e)}^{2p}.\end{array}\]
We will denote the norm in $L_{p}([0,T])$ by $\| \cdot \| _{p}$. Let us estimate $\mathrm{P}\{\| Y-\hat{Y}\| _{p}>\delta \}$. We have
(38)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathrm{P}\big\{\| Y-\hat{Y}\| _{p}>\delta \big\}& \displaystyle \le \frac{\mathsf{E}\| Y-\hat{Y}{\| _{p}^{p}}}{{\delta }^{p}}\\{} & \displaystyle =\frac{\mathsf{E}{\textstyle\int _{0}^{T}}|\exp \{X(t)\}-\exp \{\hat{X}(t)\}{|}^{p}dt}{{\delta }^{p}}.\end{array}\]
Denote
\[\varDelta (t)=\mathsf{E}\exp \big\{pX(t)\big\}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{p}.\]
An application of the Cauchy–Schwarz inequality yields
(39)
\[\varDelta (t)\le {\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}{\big(\mathsf{E}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{2p}\big)}^{1/2}.\]
We will need two auxiliary inequalities. Using the power-mean inequality
\[{\bigg(\frac{a+b}{2}\bigg)}^{r}\le \frac{{a}^{r}+{b}^{r}}{2},\]
where $r\ge 1$, and setting $a={e}^{c}$ and $b=1$, we obtain
(40)
\[{\big(1+{e}^{c}\big)}^{r}\le {2}^{r-1}\big(1+{e}^{rc}\big).\]
It follows from (20) that
(41)
\[{\big|1-{e}^{c}\big|}^{q}\le |c{|}^{q}{\big(1+{e}^{c}\big)}^{q}\]
for $q\ge 0$.
Now let us estimate $\mathsf{E}|1-\exp \{\varDelta X(t)\}{|}^{2p}$, where $t\in [0,T]$, using (41):
(42)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{2p}& \displaystyle \le \mathsf{E}{\big|\varDelta X(t)\big|}^{2p}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{2p}\\{} & \displaystyle \le {\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{4p}\big)}^{1/2}{\big(\mathsf{E}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{4p}\big)}^{1/2}.\end{array}\]
Applying (40), we obtain
(43)
\[\mathsf{E}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{4p}\le {2}^{4p-1}\big(1+\mathsf{E}\exp \big\{4p\varDelta X(t)\big\}\big).\]
Since, for $t\in [0,T]$,
(44)
\[\mathsf{E}{\big|\varDelta X(t)\big|}^{4p}\le c_{p}{\overline{\tau }_{\varDelta X}^{4p}}\]
(see (4)) and
\[\mathsf{E}\exp \big\{4p\varDelta X(t)\big\}\le \exp \big\{8{p}^{2}{\overline{\tau }_{\varDelta X}^{2}}\big\}\]
(see (3)), we have
(45)
\[\varDelta (t)\le {2}^{2p}{(p/e)}^{p/2}{\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}h(\overline{\tau }_{\varDelta X}).\]
It follows from Lemma 5 and inequalities (35)–(37) that
\[\overline{\tau }_{\varDelta X}=\underset{t\in [0,T]}{\sup }{\big(\mathsf{E}{\big(X(t)-\hat{X}(t)\big)}^{2}\big)}^{1/2}\le x_{m}.\]
Since h is nondecreasing, we obtain using (45) that
\[\varDelta (t)\le {2}^{2p}{(p/e)}^{p/2}{\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}h(x_{m}),\]
and hence
(46)
\[\mathsf{E}\| Y-\hat{Y}{\| _{p}^{p}}={\int _{0}^{T}}\varDelta (t)dt\le \varepsilon {\delta }^{p}.\]
Now the statement of the theorem follows from (38) and (46). □
Example.
Let us consider a centered Gaussian process $X(t)$ with correlation function
where
and an arbitrary Battle–Lemarié wavelet. It can be checked that the process $Y(t)=\exp \{X(t)\}$ and the Battle–Lemarié wavelet satisfy the conditions of Theorem 4.
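A computational remark on Theorem 4 (our addition, not part of the original text): since $h(t)={t}^{p}{(1+\exp \{8{p}^{2}{t}^{2}\})}^{1/4}$ is strictly increasing on $[0,\infty )$ with $h(0)=0$, the root $x_{m}$ of the equation defining it (here taken to be $h(x)=m$) can be found by bisection; the values of p and m below are placeholders:

```python
import math

# Bisection for the root x_m of h(x) = m, where
# h(t) = t^p * (1 + exp(8 p^2 t^2))^(1/4) is strictly increasing on [0, inf).
# p and m are placeholder values for illustration.
def h(t, p):
    return t**p * (1.0 + math.exp(8.0 * p**2 * t**2))**0.25

def solve_xm(m, p, hi=1.0, tol=1e-12):
    # assumes 0 < m < h(hi, p); note h(0) = 0 for p >= 1
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid, p) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p, m = 2, 1e-3
x_m = solve_xm(m, p)
assert 0 < x_m < 1 and abs(h(x_m, p) - m) < 1e-6
```

Once $x_{m}$ is known, the model sizes $N_{0}$, N, $M_{j}$ follow from the bounds of Theorem 4 exactly as in the sketch after Lemma 1.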