Modern Stochastics: Theory and Applications


A multiplicative wavelet-based model for simulation of a random process
Volume 2, Issue 4 (2015), pp. 309–325
Ievgen Turchyn  

https://doi.org/10.15559/15-VMSTA33
Pub. online: 24 September 2015. Type: Research Article. Open Access

Received: 21 July 2015
Revised: 8 September 2015
Accepted: 11 September 2015
Published: 24 September 2015

Abstract

We find a multiplicative wavelet-based representation for stochastic processes that can be written as the exponential of a centered second-order random process. We propose a wavelet-based model for the simulation of such a stochastic process and establish its rates of convergence to the process in different functional spaces, in the sense of approximation with given accuracy and reliability. This approach allows us to simulate stochastic processes (including certain classes of processes with heavy tails) with given accuracy and reliability.

1 Introduction

Simulation of random processes is nowadays a broad area, and many simulation methods are available (see, e.g., [9, 10]). There is, however, one substantial problem: for most traditional simulation methods it is difficult to measure the quality of approximation of a process by its model in terms of the “distance” between paths of the process and the corresponding paths of the model. Models for which such a distance can be estimated are therefore of particular interest.
Simulation by such models is called simulation with given accuracy and reliability; it is considered, for example, in [7, 4, 6, 11, 12].
Simulation with given accuracy and reliability can be described in the following way. Suppose that an approximation $\hat{X}(t)$ of a random process $X(t)$ is constructed. The random process $\hat{X}(t)$ is called a model of $X(t)$. A model depends on certain parameters. The rate of convergence of a model to a process is given by a statement of the following type: if numbers δ (accuracy) and ε ($1-\varepsilon $ is called reliability) are given and the parameters of the model satisfy certain restrictions (for instance, they are not less than certain lower bounds), then
(1)
\[P\big\{\| X-\hat{X}\| >\delta \big\}\le \varepsilon .\]
Many such results have been proved for the cases where the norm in (1) is the $L_{p}$ norm or the uniform norm. However, simulation with given accuracy and reliability has so far been developed mostly for processes whose one-dimensional distributions have tails no heavier than Gaussian tails (e.g., for sub-Gaussian processes), so such simulation of processes with heavier-than-Gaussian tails deserves attention.
We consider a random process $Y(t)=\exp \{X(t)\}$ and an f-wavelet $\phi (x)$ with the corresponding m-wavelet $\psi (x)$, where $X(t)$ is a centered second-order process such that its correlation function $R(t,s)$ can be represented as
\[R(t,s)=\int _{\mathbb{R}}u(t,\lambda )\overline{u(s,\lambda )}d\lambda .\]
We prove that
\[Y(t)=\prod \limits_{k\in \mathbb{Z}}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{\infty }\prod \limits_{l\in \mathbb{Z}}\exp \big\{\eta _{jl}b_{jl}(t)\big\},\]
where $\xi _{0k}$, $\eta _{jl}$ are random variables, and $a_{0k}(t)$, $b_{jl}(t)$ are functions that depend on $X(t)$ and the wavelet.
As a model of $Y(t)$, we take the process
\[\hat{Y}(t)=\prod \limits_{k=-(N_{0}-1)}^{N_{0}-1}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{N-1}\prod \limits_{l=-(M_{j}-1)}^{M_{j}-1}\exp \big\{\eta _{jl}b_{jl}(t)\big\}.\]
Let us consider the case where $X(t)$ is a strictly sub-Gaussian process. Note that the class of processes $Y(t)=\exp \{X(t)\}$, where $X(t)$ is a strictly sub-Gaussian process, is a rich class that includes many processes with one-dimensional distributions having tails heavier than Gaussian tails; for example, when $X(t)$ is a Gaussian process, one-dimensional distributions of $Y(t)$ are lognormal.
We describe the rate of convergence of $\hat{Y}(t)$ to the process $Y(t)$ in $C([0,T])$ as follows: if $\varepsilon \in (0;1)$ and $\delta >0$ are given and the parameters $N_{0}$, N, $M_{j}$ are large enough, then
(2)
\[\mathrm{P}\Big\{\underset{t\in [0,T]}{\sup }\big|Y(t)/\hat{Y}(t)-1\big|>\delta \Big\}\le \varepsilon .\]
A similar statement that characterizes the rate of convergence of $\hat{Y}(t)$ to $Y(t)$ in $L_{p}([0,T])$ is also proved for the case where (2) is replaced by the inequality
\[\mathrm{P}\Bigg\{{\Bigg({\int _{0}^{T}}{\big|Y(t)-\hat{Y}(t)\big|}^{p}dt\Bigg)}^{1/p}>\delta \Bigg\}\le \varepsilon .\]
If the process $X(t)=\ln Y(t)$ is Gaussian, then the model $\hat{Y}(t)$ can be used for computer simulation of $Y(t)$.
One of the merits of our model is its simplicity. Moreover, it can be used for the simulation of processes whose one-dimensional distributions have tails heavier than Gaussian tails.

2 Auxiliary facts

A random variable ξ is called sub-Gaussian if there exists a constant $a\ge 0$ such that
\[\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{\lambda }^{2}{a}^{2}/2\big\}\]
for all $\lambda \in \mathbb{R}$.
The class of all sub-Gaussian random variables on a standard probability space $\{\varOmega ,\mathcal{B},P\}$ is a Banach space with respect to the norm
\[\tau (\xi )=\inf \big\{a\ge 0:\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{\lambda }^{2}{a}^{2}/2\big\},\lambda \in \mathbb{R}\big\}.\]
A centered Gaussian random variable and a random variable uniformly distributed on $[-b,b]$ are examples of sub-Gaussian random variables.
A sub-Gaussian random variable ξ is called strictly sub-Gaussian if
\[\tau (\xi )={\big(\mathsf{E}{\xi }^{2}\big)}^{1/2}.\]
For any sub-Gaussian random variable ξ,
(3)
\[\mathsf{E}\exp \{\lambda \xi \}\le \exp \big\{{\lambda }^{2}{\tau }^{2}(\xi )/2\big\},\hspace{1em}\lambda \in \mathbb{R},\]
and
(4)
\[\mathsf{E}|\xi {|}^{p}\le 2{\bigg(\frac{p}{e}\bigg)}^{p/2}{\big(\tau (\xi )\big)}^{p},\hspace{1em}p>0.\]
A family Δ of sub-Gaussian random variables is called strictly sub-Gaussian if for any finite or countable set I of random variables $\xi _{i}\in \varDelta $ and for any $\lambda _{i}\in \mathbb{R}$,
\[{\tau }^{2}\bigg(\sum \limits_{i\in I}\lambda _{i}\xi _{i}\bigg)=\mathsf{E}{\bigg(\sum \limits_{i\in I}\lambda _{i}\xi _{i}\bigg)}^{2}.\]
A stochastic process $X=\{X(t),t\in \mathbf{T}\}$ is called sub-Gaussian if all the random variables $X(t)$, $t\in \mathbf{T}$, are sub-Gaussian and $\sup _{t\in \mathbf{T}}\tau (X(t))<\infty $. We call a sub-Gaussian stochastic process $X=\{X(t),t\in \mathbf{T}\}$ strictly sub-Gaussian if the family $\{X(t),t\in \mathbf{T}\}$ is strictly sub-Gaussian. Any centered Gaussian process $X=\{X(t),t\in \mathbf{T}\}$ for which $\sup _{t\in \mathbf{T}}\mathsf{E}{(X(t))}^{2}<\infty $ is strictly sub-Gaussian.
Details about sub-Gaussian random variables and sub-Gaussian and strictly sub-Gaussian random processes can be found in [1] and [3].
We will use wavelets (see [2] for details) for an expansion of a stochastic process. Namely, we use a father wavelet $\phi (x)$ and the corresponding mother wavelet $\psi (x)$ (we will use the terms “f-wavelet” and “m-wavelet” instead of the terms “father wavelet” and “mother wavelet,” respectively). Set
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \phi _{0k}(x)& \displaystyle =\phi (x-k),\hspace{1em}k\in \mathbb{Z},\hspace{2em}\\{} \displaystyle \psi _{jl}(x)& \displaystyle ={2}^{j/2}\psi \big({2}^{j}x-l\big),\hspace{1em}j,l\in \mathbb{Z}.\hspace{2em}\end{array}\]
Note that $\{\phi _{0l},\psi _{jk},l\in \mathbb{Z},k\in \mathbb{Z},j=0,1,\dots \}$ is an orthonormal basis in $L_{2}(\mathbb{R})$. We will further consider only wavelets for which both $\phi (x)$ and $\psi (x)$ are real-valued.
We denote by $\hat{f}$ the Fourier transform of a function $f\in L_{2}(\mathbb{R})$:
\[\hat{f}(y)=\int _{\mathbb{R}}{e}^{-ixy}f(x)dx.\]
The following statement is crucial for us.
Theorem 1.
([5]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered random process such that $\mathsf{E}|X(t){|}^{2}<\infty $ for all $t\in \mathbb{R}$. Let $R(t,s)=\mathsf{E}X(t)\overline{X(s)}$ and suppose that there exists a Borel function $u(t,\lambda )$, $t\in \mathbb{R}$, $\lambda \in \mathbb{R}$, such that
\[\int _{\mathbb{R}}{\big|u(t,\lambda )\big|}^{2}d\lambda <\infty \hspace{1em}\textit{for all }t\in \mathbb{R}\]
and
\[R(t,s)=\int _{\mathbb{R}}u(t,\lambda )\overline{u(s,\lambda )}d\lambda .\]
Let $\phi (x)$ be an f-wavelet, and $\psi (x)$ the corresponding m-wavelet. Then the process $X(t)$ can be presented as the following series, which converges for any $t\in \mathbb{R}$ in $L_{2}(\varOmega )$:
(5)
\[X(t)=\sum \limits_{k\in \mathbb{Z}}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{\infty }\sum \limits_{l\in \mathbb{Z}}\eta _{jl}b_{jl}(t),\]
where
(6)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle a_{0k}(t)& \displaystyle =\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }_{0k}(y)}dy=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }(y)}{e}^{iyk}dy,\end{array}\]
(7)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle b_{jl}(t)& \displaystyle =\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\psi }_{jl}(y)}dy=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y){2}^{-j/2}\exp \bigg\{i\frac{y}{{2}^{j}}l\bigg\}\overline{\hat{\psi }\bigg(\frac{y}{{2}^{j}}\bigg)}dy,\end{array}\]
and $\xi _{0k}$, $\eta _{jl}$ are centered random variables such that
\[\mathsf{E}\xi _{0k}\overline{\xi _{0l}}=\delta _{kl},\hspace{2em}\mathsf{E}\eta _{mk}\overline{\eta _{nl}}=\delta _{mn}\delta _{kl},\hspace{2em}\mathsf{E}\xi _{0k}\overline{\eta _{nl}}=0\hspace{0.1667em}.\]
Remark 1.
Explicit formulae for the random variables $\xi _{0k}$, $\eta _{jl}$ in an expansion more general than (5) have been obtained under certain restrictions on the process $X(t)$ (see [8], Theorem 2.1). Obtaining explicit formulae for $\xi _{0k}$ and $\eta _{jl}$ in the general case appears to be either impossible or quite nontrivial.
Definition.
Condition RC holds for a stochastic process $X(t)$ if it satisfies the conditions of Theorem 1, $u(t,\cdot )\in L_{1}(\mathbb{R})\cap L_{2}(\mathbb{R})$, and the inverse Fourier transform $\tilde{u}_{x}(t,x)$ of the function $u(t,x)$ with respect to x is a real function.
Remark 2.
Condition RC guarantees that the coefficients $a_{0k}(t)$, $b_{jl}(t)$ of expansion (5) are real. Indeed, this follows from the formulae
\[\begin{array}{r@{\hskip0pt}l}\displaystyle a_{0k}(t)& \displaystyle =\sqrt{2\pi }\int _{\mathbb{R}}\tilde{u}_{y}(t,y)\overline{\phi _{0k}(y)}dy,\\{} \displaystyle b_{jk}(t)& \displaystyle =\sqrt{2\pi }\int _{\mathbb{R}}\tilde{u}_{y}(t,y)\overline{\psi _{jk}(y)}dy.\end{array}\]
Suppose that $X(t)$ is a process that satisfies the conditions of Theorem 1. Let us consider the following approximation (or model) of $X(t)$:
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{X}(t)& \displaystyle =\hat{X}(N_{0},N,M_{0},\dots ,M_{N-1},t)\\{} & \displaystyle =\sum \limits_{k=-(N_{0}-1)}^{N_{0}-1}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{N-1}\sum \limits_{l=-(M_{j}-1)}^{M_{j}-1}\eta _{jl}b_{jl}(t),\end{array}\]
where $\xi _{0k}$, $\eta _{jl}$, $a_{0k}(t)$, $b_{jl}(t)$ are defined in Theorem 1.
Approximation of Gaussian and sub-Gaussian processes by model (8) has been studied in [5] and [13].
Remark 3.
If $X(t)$ is a Gaussian process, then we can take as $\xi _{0k}$, $\eta _{jl}$ in (8) independent random variables with distribution $N(0;1)$.
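To make Remark 3 concrete, here is a minimal Python sketch of sampling one path of model (8) in the Gaussian case. The coefficient functions and all identifiers (simulate_X_hat, a0k, bjl) are hypothetical: the coefficients are assumed to be supplied by the user, for instance by numerical integration of (6) and (7).

```python
import numpy as np

def simulate_X_hat(a0k, bjl, N0, N, M, t_grid, rng=None):
    """One path of the truncated expansion (8) on the grid t_grid.

    Gaussian case of Remark 3: xi_{0k}, eta_{jl} are independent N(0, 1).
    a0k(k, t) and bjl(j, l, t) are user-supplied coefficient functions
    (hypothetical names) returning an array over t; M is a sequence of
    length N whose entry M[j] plays the role of M_j.
    """
    rng = np.random.default_rng() if rng is None else rng
    X_hat = np.zeros_like(t_grid, dtype=float)
    # first sum in (8): k = -(N0 - 1), ..., N0 - 1
    for k in range(-(N0 - 1), N0):
        X_hat += rng.standard_normal() * a0k(k, t_grid)
    # double sum in (8): j = 0, ..., N - 1;  l = -(M_j - 1), ..., M_j - 1
    for j in range(N):
        for l in range(-(M[j] - 1), M[j]):
            X_hat += rng.standard_normal() * bjl(j, l, t_grid)
    return X_hat
```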

3 A multiplicative representation

We will obtain a multiplicative representation for a wide class of stochastic processes.
Theorem 2.
Suppose that a random process $Y(t)$ can be represented as $Y(t)=\exp \{X(t)\}$, where the process $X(t)$ satisfies the conditions of Theorem 1. Then the equality
(9)
\[Y(t)=\prod \limits_{k\in \mathbb{Z}}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{\infty }\prod \limits_{l\in \mathbb{Z}}\exp \big\{\eta _{jl}b_{jl}(t)\big\}\]
holds, where product (9) converges in probability for any fixed t, and $\xi _{0k}$, $\eta _{jl}$, $a_{0k}(t)$, $b_{jl}(t)$ are defined in Theorem 1.
The statement of the theorem immediately follows from Theorem 1.
Remark 4.
It was shown in [5] that any centered second-order wide-sense stationary process $X(t)$ that has a spectral density satisfies the conditions of Theorem 1. The process $Y(t)=\exp \{X(t)\}$ can then be represented as product (9), so the class of processes that satisfy the conditions of Theorem 2 is rather wide.
It is natural to approximate a stochastic process $Y(t)=\exp \{X(t)\}$ that satisfies the conditions of Theorem 2 by the model
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{Y}(t)& \displaystyle =\hat{Y}(N_{0},N,M_{0},\dots ,M_{N-1},t)\\{} & \displaystyle =\prod \limits_{k=-(N_{0}-1)}^{N_{0}-1}\exp \big\{\xi _{0k}a_{0k}(t)\big\}\prod \limits_{j=0}^{N-1}\prod \limits_{l=-(M_{j}-1)}^{M_{j}-1}\exp \big\{\eta _{jl}b_{jl}(t)\big\}=\exp \big\{\hat{X}(t)\big\}.\end{array}\]
Remark 5.
If $X(t)=\ln Y(t)$ is a Gaussian process, then we can use the model $\hat{Y}(t)$ for computer simulation of $Y(t)$, taking as $\xi _{0k}$, $\eta _{jl}$ in (10) independent random variables with distribution $N(0;1)$.
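Continuing the hypothetical sketch given after Remark 3 (and reusing simulate_X_hat from it), the multiplicative model (10) is obtained simply by exponentiating the truncated sum (8):

```python
import numpy as np

def simulate_Y_hat(a0k, bjl, N0, N, M, t_grid, rng=None):
    """One path of model (10): Y_hat(t) = exp{X_hat(t)}, with X_hat from (8)
    sampled as in the sketch after Remark 3 (independent N(0, 1) variables)."""
    return np.exp(simulate_X_hat(a0k, bjl, N0, N, M, t_grid, rng))
```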

4 Simulation with given relative accuracy and reliability in $C([0,T])$

Let us study the rate of convergence in $C([0,T])$ of model (10) to a process $Y(t)$. We will need several auxiliary facts.
Lemma 1.
([13]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered stochastic process that satisfies the requirements of Theorem 1, $T>0$, ϕ be an f-wavelet, ψ be the corresponding m-wavelet, the function $\hat{\phi }(y)$ be absolutely continuous on any interval, the function $u(t,y)$ be absolutely continuous with respect to y for any fixed t, there exist the derivatives ${u^{\prime }_{\lambda }}(t,\lambda )$, ${\hat{\phi }^{\prime }}(y)$, ${\hat{\psi }^{\prime }}(y)$, the inequalities $|{\hat{\psi }^{\prime }}(y)|\le C$, $|u(t,\lambda )|\le |t|u_{1}(\lambda )$, $|{u^{\prime }_{\lambda }}(t,\lambda )|\le |t|\hspace{0.1667em}u_{2}(\lambda )$ hold,
(11)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{\mathbb{R}}u_{1}(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{1}(y)dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{1}(y)\big|{\hat{\phi }^{\prime }}(y)\big|dy<\infty ,\hspace{2em}\end{array}\]
(12)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{\mathbb{R}}u_{1}(y)\big|\hat{\phi }(y)\big|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{2}(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}u_{2}(y)\big|\hat{\phi }(y)\big|dy<\infty ,\hspace{2em}\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\hspace{0.1667em}\overline{\hat{\psi }\big(y/{2}^{j}\big)}=0\hspace{1em}\forall j=0,1,\dots ,\hspace{2.5pt}\forall t\in [0,T],\hspace{2em}\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\overline{\hat{\phi }(y)}=0\hspace{1em}\forall t\in [0,T],\\{} & \displaystyle E_{1}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)|\widehat{\phi }(y)|dy,\\{} & \displaystyle E_{2}=\frac{1}{\sqrt{2\pi }}\bigg(\int _{\mathbb{R}}u_{1}(y)|{\widehat{\phi }^{\prime }}(y)|dy+\int _{\mathbb{R}}u_{2}(y)|\widehat{\phi }(y)|dy\bigg),\\{} & \displaystyle F_{1}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)|y|dy,\\{} & \displaystyle F_{2}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}\big(u_{1}(y)+|y|u_{2}(y)\big)dy.\end{array}\]
Let the process $\hat{X}(t)$ be defined by (8), $\delta >0$. If $N_{0}$, N, $M_{j}$ ($j=0,1,\dots ,N-1$) satisfy the inequalities
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\frac{6}{\delta }{E_{2}^{2}}{T}^{2}+1,\\{} \displaystyle N& \displaystyle >\max \bigg\{1+\log _{2}\bigg(\frac{72{F_{2}^{2}}{T}^{2}}{5\delta }\bigg),1+\log _{8}\bigg(\frac{18{F_{1}^{2}}{T}^{2}}{7\delta }\bigg)\bigg\},\\{} \displaystyle M_{j}& \displaystyle >1+\frac{12}{\delta }{F_{2}^{2}}{T}^{2},\end{array}\]
then
(13)
\[\underset{t\in [0,T]}{\sup }\mathsf{E}{\big|X(t)-\widehat{X}(t)\big|}^{2}\le \delta .\]
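The bounds of Lemma 1 are explicit, so the smallest admissible truncation levels can be computed directly from δ, T and the constants $E_{2}$, $F_{1}$, $F_{2}$. A small Python sketch (the function name is hypothetical):

```python
import math

def lemma1_bounds(delta, T, E2, F1, F2):
    """Smallest integers N0, N, M_j satisfying the strict inequalities of
    Lemma 1, so that sup_{t in [0,T]} E|X(t) - X_hat(t)|^2 <= delta."""
    N0 = math.floor(6.0 * E2**2 * T**2 / delta + 1) + 1
    N = math.floor(max(1 + math.log2(72 * F2**2 * T**2 / (5 * delta)),
                       1 + math.log(18 * F1**2 * T**2 / (7 * delta), 8))) + 1
    Mj = math.floor(1 + 12.0 * F2**2 * T**2 / delta) + 1
    return N0, N, Mj
```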
Lemma 2.
([13]) Let $X=\{X(t),t\in \mathbb{R}\}$ be a centered stochastic process satisfying the requirements of Theorem 1, $T>0$, ϕ be an f-wavelet, ψ be the corresponding m-wavelet, $S(y)=\overline{\hat{\psi }(y)}$, $S_{\phi }(y)=\overline{\hat{\phi }(y)}$. Suppose that $\phi (y),u(t,\lambda )$, $S(y)$, $S_{\phi }(y)$ satisfy the following conditions: the function $u(t,y)$ is absolutely continuous with respect to y, the function $\hat{\phi }(y)$ is absolutely continuous,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big|{S^{\prime }}(y)\big|\le M<\infty ,\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)S\big(y/{2}^{j}\big)=0,\hspace{1em}j=0,1,\dots ,\hspace{2.5pt}t\in [0,T],\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)S_{\phi }(y)=0,\hspace{1em}t\in [0,T],\end{array}\]
there exist functions $v(y)$ and $w(y)$ such that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{u^{\prime }_{y}}(t_{1},y)-{u^{\prime }_{y}}(t_{2},y)\big|& \displaystyle \le |t_{2}-t_{1}|v(y),\\{} \displaystyle \big|u(t_{1},y)-u(t_{2},y)\big|& \displaystyle \le |t_{2}-t_{1}|w(y),\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{\mathbb{R}}|y|v(y)dy<\infty ,\hspace{2em}\int _{\mathbb{R}}v(y)\big|S_{\phi }(y)\big|dy<\infty ,\\{} & \displaystyle \int _{\mathbb{R}}w(y)\big|{S^{\prime }_{\phi }}(y)\big|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}w(y)dy<\infty ,\\{} & \displaystyle \int _{\mathbb{R}}w(y)|y|dy<\infty ,\hspace{2em}\int _{\mathbb{R}}w(y)\big|S_{\phi }(y)\big|dy<\infty ;\end{array}\]
$a_{0k}(t)$ and $b_{jl}(t)$ are defined by Eqs. (6) and (7),
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {A}^{(1)}& \displaystyle =\frac{1}{\sqrt{2\pi }}\bigg(\int _{\mathbb{R}}v(y)\big|S_{\phi }(y)\big|dy+\int _{\mathbb{R}}w(y)\big|{S^{\prime }_{\phi }}(y)\big|dy\bigg),\\{} \displaystyle {B}^{(0)}& \displaystyle =\frac{M}{\sqrt{2\pi }}\int _{\mathbb{R}}w(y)|y|dy,\\{} \displaystyle {B}^{(1)}& \displaystyle =\frac{M}{\sqrt{2\pi }}\int _{\mathbb{R}}\big(w(y)+|y|v(y)\big)dy,\\{} \displaystyle C_{\varDelta X}& \displaystyle =\sqrt{\frac{2{({A}^{(1)})}^{2}}{N_{0}-1}+\frac{{({B}^{(0)})}^{2}}{7\cdot {8}^{N-1}}+\frac{{({B}^{(1)})}^{2}}{{2}^{N-3}}+{\big({B}^{(1)}\big)}^{2}\sum \limits_{j=0}^{N-1}\frac{1}{{2}^{j-1}(M_{j}-1)}}\hspace{2.5pt}.\end{array}\]
Then, for $t_{1},t_{2}\in [0,T]$ and $N>1$, $N_{0}>1$, $M_{j}>1$, we have the inequality
(14)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{|k|\ge N_{0}}{\big|a_{0k}(t_{1})-a_{0k}(t_{2})\big|}^{2}+\sum \limits_{j\ge N}\sum \limits_{l\in \mathbb{Z}}{\big|b_{jl}(t_{1})-b_{jl}(t_{2})\big|}^{2}\\{} & \displaystyle \hspace{2em}+\sum \limits_{j=0}^{N-1}\sum \limits_{|l|\ge M_{j}}{\big|b_{jl}(t_{1})-b_{jl}(t_{2})\big|}^{2}\\{} & \displaystyle \hspace{1em}\le {C_{\varDelta X}^{2}}{(t_{2}-t_{1})}^{2}.\end{array}\]
Remark 6.
It is easy to see that the functions $a_{0k}(t)$ and $b_{jl}(t)$ are continuous under the conditions of Lemma 2.
Lemma 3.
If
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle \ge 1+\frac{8{({A}^{(1)})}^{2}}{{\varepsilon }^{2}},\\{} \displaystyle N& \displaystyle \ge \max \bigg\{1+\log _{8}\frac{4{({B}^{(0)})}^{2}}{7{\varepsilon }^{2}},\hspace{2.5pt}3+\log _{2}\frac{4{({B}^{(1)})}^{2}}{{\varepsilon }^{2}}\bigg\},\\{} \displaystyle M_{j}& \displaystyle \ge 1+16\frac{{({B}^{(1)})}^{2}}{{\varepsilon }^{2}}\end{array}\]
for some $\varepsilon >0$, then
\[C_{\varDelta X}\le \varepsilon ,\]
where ${A}^{(1)}$, ${B}^{(0)}$, ${B}^{(1)}$, $C_{\varDelta X}$ are defined in Lemma 2.
We omit the proof due to its simplicity.
Definition.
We say that a model $\hat{Y}(t)$ approximates a stochastic process $Y(t)$ with given relative accuracy δ and reliability $1-\varepsilon $ (where $\varepsilon \in (0;1)$) in $C([0,T])$ if
\[\mathrm{P}\Big\{\underset{t\in [0,T]}{\sup }\big|Y(t)/\hat{Y}(t)-1\big|>\delta \Big\}\le \varepsilon .\]
Now we can state a result on the rate of convergence in $C([0,T])$.
Theorem 3.
Suppose that a random process $Y=\{Y(t),t\in \mathbb{R}\}$ can be represented as $Y(t)=\exp \{X(t)\}$, where a separable strictly sub-Gaussian random process $X=\{X(t),t\in \mathbb{R}\}$ is mean-square continuous, satisfies the condition RC and the conditions of Lemmas 1 and 2 together with an f-wavelet ϕ and the corresponding m-wavelet ψ, the random variables $\xi _{0k}$, $\eta _{jl}$ in expansion (5) of the process $X(t)$ are independent strictly sub-Gaussian, $\hat{X}(t)$ is a model of $X(t)$ defined by (8), $\hat{Y}(t)$ is defined by (10), $\theta \in (0;1)$, $\delta >0$, $\varepsilon \in (0;1)$, $T>0$, the numbers ${A}^{(1)}$, ${B}^{(0)}$, ${B}^{(1)}$, $E_{2}$, $F_{1}$, $F_{2}$ are defined in Lemmas 1 and 2,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{\varepsilon }& \displaystyle =\delta \sqrt{\varepsilon },\\{} \displaystyle A(\theta )& \displaystyle ={\int _{1/(2\theta )}^{\infty }}\frac{\sqrt{v+1}}{{v}^{2}}dv,\\{} \displaystyle \tau _{1}& \displaystyle =\frac{{e}^{1/2}\hspace{0.1667em}\hat{\varepsilon }}{{2}^{7/4}{(64+{\hat{\varepsilon }}^{2})}^{1/4}},\\{} \displaystyle \tau _{2}& \displaystyle ={\big(32\ln \big(1+{\hat{\varepsilon }}^{2}/60\big)\big)}^{1/2},\\{} \displaystyle \tau _{3}& \displaystyle =\sqrt{\ln \big(1+{\hat{\varepsilon }}^{3}/8\big)}/\sqrt{2},\\{} \displaystyle \tau _{\ast }& \displaystyle =\min \{\tau _{1},\tau _{2},\tau _{3}\},\\{} \displaystyle Q& \displaystyle =\frac{{e}^{1/2}\hat{\varepsilon }\hspace{0.1667em}\theta (1-\theta )}{{2}^{9/4}A(\theta )T(1+{\hat{\varepsilon }}^{3}/8)},\\{} \displaystyle {N_{0}^{\ast }}& \displaystyle =1+\frac{8{({A}^{(1)})}^{2}}{{Q}^{2}},\\{} \displaystyle {N}^{\ast }& \displaystyle =\max \bigg\{1+\log _{8}\frac{4{({B}^{(0)})}^{2}}{7{Q}^{2}},3+\log _{2}\frac{4{({B}^{(1)})}^{2}}{{Q}^{2}}\bigg\},\\{} \displaystyle {M}^{\ast }& \displaystyle =1+16\frac{{({B}^{(1)})}^{2}}{{Q}^{2}},\\{} \displaystyle {N_{0}^{\ast \ast }}& \displaystyle =\frac{6}{{\tau _{\ast }^{2}}}{E_{2}^{2}}{T}^{2}+1,\\{} \displaystyle {N}^{\ast \ast }& \displaystyle =\max \bigg\{1+\log _{2}\bigg(\frac{72{F_{2}^{2}}{T}^{2}}{5{\tau _{\ast }^{2}}}\bigg),1+\log _{8}\bigg(\frac{18{F_{1}^{2}}{T}^{2}}{7{\tau _{\ast }^{2}}}\bigg)\bigg\},\\{} \displaystyle {M}^{\ast \ast }& \displaystyle =1+\frac{12}{{\tau _{\ast }^{2}}}{F_{2}^{2}}{T}^{2}.\end{array}\]
Suppose also that
(15)
\[\underset{t\in [0,T]}{\sup }\mathsf{E}{\big(X(t)-\hat{X}(t)\big)}^{2}>0.\]
If
(16)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\max \big\{{N_{0}^{\ast }},{N_{0}^{\ast \ast }}\big\},\end{array}\]
(17)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N& \displaystyle >\max \big\{{N}^{\ast },{N}^{\ast \ast }\big\},\end{array}\]
(18)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle M_{j}& \displaystyle >\max \big\{{M}^{\ast },{M}^{\ast \ast }\big\}\hspace{1em}(j=0,1,\dots ,N-1),\end{array}\]
then the model $\hat{Y}(t)$ approximates the process $Y(t)$ with given relative accuracy δ and reliability $1-\varepsilon $ in $C([0,T])$.
Proof.
Denote
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varDelta X(t)& \displaystyle =X(t)-\hat{X}(t),\\{} \displaystyle U(t)& \displaystyle =Y(t)/\hat{Y}(t)-1=\exp \big\{\varDelta X(t)\big\}-1,\\{} \displaystyle \rho _{U}(t,s)& \displaystyle =\big\| U(t)-U(s)\big\| _{L_{2}(\varOmega )},\\{} \displaystyle \tau _{\varDelta X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(\varDelta X(t)\big).\end{array}\]
Let us note that $\rho _{U}$ is a pseudometric. Let $N(u)$ be the metric massiveness of $[0,T]$ with respect to $\rho _{U}$, that is, the minimum number of closed balls in the space $([0,T],\rho _{U})$ with diameters at most $2u$ needed to cover $[0,T]$,
\[\varepsilon _{0}=\underset{t,s\in [0,T]}{\sup }\rho _{U}(t,s).\]
We will denote the norm in $L_{2}(\varOmega )$ by $\| \cdot \| _{2}$.
Since $U(t)\in L_{2}(\varOmega )$, $t\in [0,T]$, using Theorem 3.3.3 from [1] (see p. 98), we obtain
(19)
\[\mathrm{P}\Big\{\underset{t\in [0,T]}{\sup }\big|U(t)\big|>\delta \Big\}\le \frac{{S_{2}^{2}}}{{\delta }^{2}},\]
where
\[S_{2}=\underset{t\in [0,T]}{\sup }{\big(\mathsf{E}{\big|U(t)\big|}^{2}\big)}^{1/2}+\frac{1}{\theta (1-\theta )}{\int _{0}^{\theta \varepsilon _{0}}}{N}^{1/2}(u)du.\]
We will prove that $S_{2}\le \delta \sqrt{\varepsilon }=\hat{\varepsilon }$.
First, let us estimate $\mathsf{E}|U(t){|}^{2}$ for $t\in [0,T]$.
Using the inequality
(20)
\[\big|{e}^{a}-{e}^{b}\big|\le |a-b|\max \big\{{e}^{a},{e}^{b}\big\}\le |a-b|\big({e}^{a}+{e}^{b}\big)\]
(we set $b=0$) and the Cauchy–Schwarz inequality, we obtain
\[\mathsf{E}{\big|U(t)\big|}^{2}=\mathsf{E}{\big(\exp \big\{\varDelta X(t)\big\}-1\big)}^{2}\le {\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{4}\big)}^{1/2}{\big(\mathsf{E}{\big(\exp \big\{\varDelta X(t)\big\}+1\big)}^{4}\big)}^{1/2}.\]
It follows from (4) that
(21)
\[\mathsf{E}|\varDelta X(t){|}^{4}\le \frac{32}{{e}^{2}}\hspace{0.1667em}{\tau _{\varDelta X}^{4}}.\]
Let us estimate $G=\mathsf{E}{(\exp \{\varDelta X(t)\}+1)}^{4}$. Since
\[\mathsf{E}\exp \big\{k\varDelta X(t)\big\}\le \exp \big\{{k}^{2}{\tau }^{2}\big(\varDelta X(t)\big)/2\big\}={A}^{{k}^{2}}\le {A}^{16},\hspace{1em}1\le k\le 4,\]
where $A=\exp \{{\tau _{\varDelta X}^{2}}/2\}$, we have
(22)
\[G\le \sum \limits_{k=1}^{4}\left(\genfrac{}{}{0pt}{}{4}{k}\right){A}^{16}+1=15{A}^{16}+1.\]
It follows from Lemma 1 and (16)–(18) that
(23)
\[\tau _{\varDelta X}=\underset{t\in [0,T]}{\sup }\mathsf{E}{\big({\big|\varDelta X(t)\big|}^{2}\big)}^{1/2}\le \tau _{\ast }.\]
Using (21)–(23), we obtain
(24)
\[{\big(\mathsf{E}{\big|U(t)\big|}^{2}\big)}^{1/2}\le \hat{\varepsilon }/2.\]
Let us estimate now
\[I(\theta )=\frac{1}{\theta (1-\theta )}{\int _{0}^{\theta \varepsilon _{0}}}{N}^{1/2}(u)du.\]
First, we will find an upper bound for $N(u)$. For this, we will prove that
(25)
\[\big\| U(t_{1})-U(t_{2})\big\| _{2}\le C_{U}|t_{1}-t_{2}|,\]
where
\[C_{U}=\big({2}^{9/4}/{e}^{1/2}\big)C_{\varDelta X}\exp \big\{2{\tau _{\varDelta X}^{2}}\big\}\]
with $C_{\varDelta X}$ defined in Lemma 2.
Using (20) and the Cauchy–Schwarz inequality, we have:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big\| U(t_{1})-U(t_{2}){\big\| _{2}^{2}}\\{} & \displaystyle \hspace{1em}=\mathsf{E}{\big|\exp \big\{\varDelta X(t_{1})\big\}-\exp \big\{\varDelta X(t_{2})\big\}\big|}^{2}\\{} & \displaystyle \hspace{1em}\le \mathsf{E}{\big|\varDelta X(t_{1})-\varDelta X(t_{2})\big|}^{2}{\big(\exp \big\{\varDelta X(t_{1})\big\}+\exp \big\{\varDelta X(t_{2})\big\}\big)}^{2}\\{} & \displaystyle \hspace{1em}\le {\big(\mathsf{E}{\big(\varDelta X(t_{1})-\varDelta X(t_{2})\big)}^{4}\big)}^{1/2}{\big(\mathsf{E}{\big(\exp \big\{\varDelta X(t_{1})\big\}+\exp \big\{\varDelta X(t_{2})\big\}\big)}^{4}\big)}^{1/2}.\end{array}\]
Applying (4), we obtain
(26)
\[{\big(\mathsf{E}{\big(\varDelta X(t_{1})-\varDelta X(t_{2})\big)}^{4}\big)}^{1/2}\le \big({2}^{5/2}/e\big){C_{\varDelta X}^{2}}|t_{2}-t_{1}{|}^{2}.\]
Let us find an upper bound for
\[H=\mathsf{E}{\big(\exp \big\{\varDelta X(t_{1})\big\}+\exp \big\{\varDelta X(t_{2})\big\}\big)}^{4}.\]
Since
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathsf{E}\exp \big\{k\varDelta X(t_{1})+l\varDelta X(t_{2})\big\}\\{} & \displaystyle \hspace{1em}\le \exp \big\{{\tau }^{2}\big(k\varDelta X(t_{1})+l\varDelta X(t_{2})\big)/2\big\}\le \exp \big\{{\big(k\tau \big(\varDelta X(t_{1})\big)+l\tau \big(\varDelta X(t_{2})\big)\big)}^{2}/2\big\}\\{} & \displaystyle \hspace{1em}\le \exp \big\{8{\tau _{\varDelta X}^{2}}\big\},\end{array}\]
where $k+l=4$, we have
(27)
\[H\le \sum \limits_{k=0}^{4}\left(\genfrac{}{}{0pt}{}{4}{k}\right)\exp \big\{8{\tau _{\varDelta X}^{2}}\big\}=16\exp \big\{8{\tau _{\varDelta X}^{2}}\big\},\]
and (25) follows from (26) and (27).
Using inequality (25), simple properties of metric entropy (see [1], Lemma 3.2.1, p. 88), and the inequality
\[N_{\rho _{1}}(u)\le T/(2u)+1\]
(where $N_{\rho _{1}}$ is the entropy of $[0,T]$ with respect to the Euclidean metric), we have
\[N(u)\le \frac{TC_{U}}{2u}+1.\]
Since $\varepsilon _{0}\le C_{U}T$, we obtain
(28)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\int _{0}^{\theta \varepsilon _{0}}}{N}^{1/2}(u)du& \displaystyle \le {\int _{0}^{\theta \varepsilon _{0}}}{\big(TC_{U}/(2u)+1\big)}^{1/2}du\\{} & \displaystyle =\frac{TC_{U}}{2}{\int _{TC_{U}/(2\theta \varepsilon _{0})}^{\infty }}\frac{\sqrt{v+1}}{{v}^{2}}dv\le TC_{U}A(\theta )/2.\end{array}\]
It is easy to check using Lemma 3 that, under the conditions of the theorem,
(29)
\[C_{\varDelta X}\le Q.\]
It follows from (23) and (29) that
\[C_{U}\le \frac{\hat{\varepsilon }\hspace{0.1667em}\theta (1-\theta )}{TA(\theta )},\]
and therefore, using (28), we obtain
(30)
\[I(\theta )\le \hat{\varepsilon }/2.\]
Now the statement of the theorem follows from (19), (24), and (30).  □
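The parameter choice in Theorem 3 is constructive. Assuming the constants ${A}^{(1)}$, ${B}^{(0)}$, ${B}^{(1)}$, $E_{2}$, $F_{1}$, $F_{2}$ have already been evaluated for the given process and wavelet, the following hypothetical Python sketch computes $\hat{\varepsilon }$, $A(\theta )$, $\tau _{\ast }$, Q and the lower bounds in (16)–(18).

```python
import math
from scipy.integrate import quad

def theorem3_bounds(delta, eps, theta, T, A1, B0, B1, E2, F1, F2):
    """Real-valued lower bounds for N0, N, M_j in (16)-(18) of Theorem 3;
    any integers strictly above the returned values are admissible."""
    eps_hat = delta * math.sqrt(eps)
    A_theta, _ = quad(lambda v: math.sqrt(v + 1) / v**2, 1.0 / (2 * theta), math.inf)
    tau1 = math.exp(0.5) * eps_hat / (2**1.75 * (64 + eps_hat**2)**0.25)
    tau2 = math.sqrt(32 * math.log(1 + eps_hat**2 / 60))
    tau3 = math.sqrt(math.log(1 + eps_hat**3 / 8) / 2)
    tau_star = min(tau1, tau2, tau3)
    Q = (math.exp(0.5) * eps_hat * theta * (1 - theta)
         / (2**2.25 * A_theta * T * (1 + eps_hat**3 / 8)))
    # (16): N0 > max{N0*, N0**}
    N0_bound = max(1 + 8 * A1**2 / Q**2,
                   6 * E2**2 * T**2 / tau_star**2 + 1)
    # (17): N > max{N*, N**}
    N_bound = max(1 + math.log(4 * B0**2 / (7 * Q**2), 8),
                  3 + math.log2(4 * B1**2 / Q**2),
                  1 + math.log2(72 * F2**2 * T**2 / (5 * tau_star**2)),
                  1 + math.log(18 * F1**2 * T**2 / (7 * tau_star**2), 8))
    # (18): M_j > max{M*, M**}
    Mj_bound = max(1 + 16 * B1**2 / Q**2,
                   1 + 12 * F2**2 * T**2 / tau_star**2)
    return N0_bound, N_bound, Mj_bound
```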
Example.
Let us consider the function $u(t,\lambda )=t/{(1+{t}^{2}+{\lambda }^{2})}^{4}$ and an arbitrary Daubechies wavelet with the corresponding f-wavelet ϕ and m-wavelet ψ. We will use the notations
\[a_{0k}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\phi }_{0k}(y)}dy,\hspace{2em}b_{jl}(t)=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u(t,y)\overline{\hat{\psi }_{jl}(y)}dy\]
and consider the stochastic process
\[X(t)=\sum \limits_{k\in \mathbb{Z}}\xi _{0k}a_{0k}(t)+\sum \limits_{j=0}^{\infty }\sum \limits_{l\in \mathbb{Z}}\eta _{jl}b_{jl}(t),\]
where $\xi _{0k}$, $\eta _{jl}$ ($k,l\in \mathbb{Z}$, $j=0,1,\dots $) are independent uniformly distributed over $[-\sqrt{3},\sqrt{3}]$. It can be checked that the process $Y(t)=\exp \{X(t)\}$ and the Daubechies wavelet satisfy the conditions of Theorem 3.
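A possible numerical illustration of this example (a sketch only; the wavelet choice db4, the integration grids, the truncation levels, and all names are illustrative assumptions, not prescriptions of the paper): the coefficients $a_{0k}(t)$, $b_{jl}(t)$ are approximated by quadrature of (6)–(7) using PyWavelets, and one path of model (10) is sampled with $\xi _{0k}$, $\eta _{jl}$ uniform on $[-\sqrt{3},\sqrt{3}]$.

```python
import numpy as np
import pywt

def u(t, lam):
    # u(t, lambda) = t / (1 + t^2 + lambda^2)^4 from the example
    return t / (1.0 + t**2 + lam**2)**4

# Tabulate the db4 scaling and wavelet functions on a dyadic grid.
phi, psi, x = pywt.Wavelet('db4').wavefun(level=8)
dx = x[1] - x[0]
y = np.linspace(-30.0, 30.0, 2001)          # frequency grid for (6)-(7)
dy = y[1] - y[0]

def fourier(f_vals, freqs):
    # hat{f}(y) = int e^{-ixy} f(x) dx, approximated by a Riemann sum on the wavefun grid
    return (f_vals[None, :] * np.exp(-1j * np.outer(freqs, x))).sum(axis=1) * dx

phi_hat_y = fourier(phi, y)

def a0k(k, t_arr):
    # formula (6), evaluated for all t in t_arr at once
    integrand = u(t_arr[:, None], y[None, :]) * (np.conj(phi_hat_y) * np.exp(1j * y * k))[None, :]
    return (integrand * dy).sum(axis=1).real / np.sqrt(2 * np.pi)

def bjl(j, l, t_arr, psi_hat_scaled):
    # formula (7); psi_hat_scaled holds hat{psi}(y / 2^j) on the grid y
    weight = 2.0**(-j / 2) * np.exp(1j * y * l / 2**j) * np.conj(psi_hat_scaled)
    integrand = u(t_arr[:, None], y[None, :]) * weight[None, :]
    return (integrand * dy).sum(axis=1).real / np.sqrt(2 * np.pi)

# One path of model (10); xi_{0k}, eta_{jl} are uniform on [-sqrt(3), sqrt(3)] (unit variance).
rng = np.random.default_rng(0)
N0, N, M = 10, 4, [10, 10, 10, 10]          # illustrative truncation levels only
t_grid = np.linspace(0.0, 1.0, 101)
X_hat = np.zeros_like(t_grid)
for k in range(-(N0 - 1), N0):
    X_hat += rng.uniform(-np.sqrt(3), np.sqrt(3)) * a0k(k, t_grid)
for j in range(N):
    psi_hat_scaled = fourier(psi, y / 2**j)
    for l in range(-(M[j] - 1), M[j]):
        X_hat += rng.uniform(-np.sqrt(3), np.sqrt(3)) * bjl(j, l, t_grid, psi_hat_scaled)
Y_hat = np.exp(X_hat)                        # simulated path of Y(t) = exp{X(t)}
```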

5 Simulation with given accuracy and reliability in $L_{p}([0,T])$

Now we will consider the rate of convergence in $L_{p}([0,T])$ of model (10) to a process $Y(t)$.
Lemma 4.
Suppose that a centered stochastic process $X=\{X(t),t\in \mathbb{R}\}$ satisfies the conditions of Theorem 1, ϕ is an f-wavelet, ψ is the corresponding m-wavelet, $\hat{\phi }$ and $\hat{\psi }$ are the Fourier transforms of ϕ and ψ, respectively, $\hat{\phi }(y)$ is absolutely continuous, $u(t,y)$ is defined in Theorem 1 and is absolutely continuous for any fixed t, there exist the derivatives ${u^{\prime }_{y}}(t,y)$, ${\hat{\phi }^{\prime }}(y)$, ${\hat{\psi }^{\prime }}(y)$ and $|{\hat{\psi }^{\prime }}(y)|\le C$, $|u(t,y)|\le u_{1}(y)$, $|{u^{\prime }_{y}}(t,y)|\le |t|\hspace{0.1667em}u_{2}(y)$, Eqs. (11) and (12) hold,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\hspace{0.1667em}\big|\overline{\hat{\psi }\big(y/{2}^{j}\big)}\big|=0\hspace{1em}\forall j=0,1,\dots ,\hspace{2.5pt}\forall t\in \mathbb{R},\\{} & \displaystyle \underset{|y|\to \infty }{\lim }u(t,y)\big|\hat{\phi }(y)\big|=0\hspace{1em}\forall t\in \mathbb{R};\\{} & \displaystyle S_{1}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)\big|{\hat{\phi }^{\prime }}(y)\big|dy,\hspace{2em}S_{2}=\frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{2}(y)\big|\hat{\phi }(y)\big|dy,\\{} & \displaystyle Q_{1}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)dy,\hspace{2em}Q_{2}=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{2}(y)|y|dy.\end{array}\]
Then the following inequalities hold for the coefficients $a_{0k}(t),b_{jl}(t)$ in expansion (5) of the process $X(t)$:
(31)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|a_{00}(t)\big|& \displaystyle \le \frac{1}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)\big|\hat{\phi }(y)\big|dy,\end{array}\]
(32)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|b_{j0}(t)\big|& \displaystyle \le \frac{C}{\sqrt{2\pi }\hspace{0.1667em}{2}^{3j/2}}\int _{\mathbb{R}}u_{1}(y)|y|dy,\hspace{1em}j=0,1,\dots ,\end{array}\]
(33)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|a_{0k}(t)\big|& \displaystyle \le \frac{S_{1}+S_{2}|t|}{|k|},\hspace{1em}k\ne 0,\end{array}\]
(34)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|b_{jl}(t)\big|& \displaystyle \le \frac{Q_{1}+Q_{2}|t|}{{2}^{j/2}|l|},\hspace{1em}l\ne 0,\hspace{2.5pt}j=0,1,\dots .\end{array}\]
The proof of inequalities (31)–(34) is analogous to the proof of similar inequalities for the coefficients of expansion (5) of a stationary process in [5].
Lemma 5.
Suppose that a random process $X=\{X(t),t\in \mathbb{R}\}$ satisfies the conditions of Theorem 1, an f-wavelet ϕ and the corresponding m-wavelet ψ together with the process $X(t)$ satisfy the conditions of Lemma 4, C, $Q_{1}$, $Q_{2}$, $S_{1}$, $S_{2}$, $u_{1}(y)$ are defined in Lemma 4, $T>0$, $p\ge 1$, $\delta \in (0;1)$, $\varepsilon >0$,
\[\delta _{1}=\min \bigg\{\frac{{\varepsilon }^{2}}{2{T}^{2/p}\ln (2/\delta )},\hspace{0.1667em}\hspace{0.1667em}\frac{{\varepsilon }^{2}}{p{T}^{2/p}}\bigg\},\hspace{2em}D=\frac{C}{\sqrt{2\pi }}\int _{\mathbb{R}}u_{1}(y)|y|dy.\]
If
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\frac{6}{\delta _{1}}{(S_{1}+S_{2}T)}^{2}+1,\\{} \displaystyle N& \displaystyle >\max \bigg\{1+\log _{2}\bigg(\frac{72{(Q_{1}+Q_{2}T)}^{2}}{5\delta _{1}}\bigg),1+\log _{8}\bigg(\frac{18{D}^{2}}{7\delta _{1}}\bigg)\bigg\},\\{} \displaystyle M_{j}& \displaystyle >1+\frac{12}{\delta _{1}}{(Q_{1}+Q_{2}T)}^{2}\bigg(1-\frac{1}{{2}^{N}}\bigg),\end{array}\]
then
\[\underset{t\in [0,T]}{\sup }\mathsf{E}{\big|X(t)-\widehat{X}(t)\big|}^{2}\le \delta _{1}.\]
Proof.
We have
\[\mathsf{E}{\big|X(t)-\widehat{X}(t)\big|}^{2}=\sum \limits_{k:|k|\ge N_{0}}{\big|a_{0k}(t)\big|}^{2}+\sum \limits_{j=0}^{N-1}\sum \limits_{l:|l|\ge M_{j}}{\big|b_{jl}(t)\big|}^{2}+\sum \limits_{j=N}^{\infty }\sum \limits_{l\in \mathbb{Z}}{\big|b_{jl}(t)\big|}^{2}.\]
It remains to apply inequalities (31)–(34).  □
Definition.
We say that a model $\hat{Y}(t)$ approximates a stochastic process $Y(t)$ with given accuracy δ and reliability $1-\varepsilon $ (where $\varepsilon \in (0;1)$) in $L_{p}([0,T])$ if
\[\mathrm{P}\Bigg\{{\Bigg({\int _{0}^{T}}{\big|Y(t)-\hat{Y}(t)\big|}^{p}dt\Bigg)}^{1/p}>\delta \Bigg\}\le \varepsilon .\]
Theorem 4.
Suppose that a random process $Y=\{Y(t),t\in \mathbb{R}\}$ can be represented as $Y(t)=\exp \{X(t)\}$, where a separable strictly sub-Gaussian random process $X=\{X(t),t\in \mathbb{R}\}$ is mean-square continuous, satisfies the condition RC and the conditions of Lemma 5 together with an f-wavelet ϕ and the corresponding m-wavelet ψ, the random variables $\xi _{0k},\eta _{jl}$ in expansion (5) of the process $X(t)$ are independent strictly sub-Gaussian, $\hat{X}(t)$ is a model of $X(t)$ defined by (8), $\hat{Y}(t)$ is defined by (10), D, $Q_{1}$, $Q_{2}$, $S_{1}$, $S_{2}$ are defined in Lemmas 4 and 5, $\delta >0$, $\varepsilon \in (0;1)$, $p\ge 1$, $T>0$.
Let
\[\begin{array}{r@{\hskip0pt}l}\displaystyle m& \displaystyle =\frac{\varepsilon {\delta }^{p}}{{2}^{2p}{(p/e)}^{p/2}\hspace{0.1667em}T\sup _{t\in [0,T]}{(\mathsf{E}\exp \{2pX(t)\})}^{1/2}},\\{} \displaystyle h(t)& \displaystyle ={t}^{p}{\big(1+\exp \big\{8{p}^{2}{t}^{2}\big\}\big)}^{1/4},\hspace{1em}t\ge 0,\end{array}\]
and $x_{m}$ be the root of the equation
\[h(x)=m.\]
If
(35)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N_{0}& \displaystyle >\frac{6}{{x_{m}^{2}}}{(S_{1}+S_{2}T)}^{2}+1,\end{array}\]
(36)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle N& \displaystyle >\max \bigg\{1+\log _{2}\bigg(\frac{72{(Q_{1}+Q_{2}T)}^{2}}{5{x_{m}^{2}}}\bigg),1+\log _{8}\bigg(\frac{18{D}^{2}}{7{x_{m}^{2}}}\bigg)\bigg\},\end{array}\]
(37)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle M_{j}& \displaystyle >1+\frac{12}{{x_{m}^{2}}}{(Q_{1}+Q_{2}T)}^{2}\bigg(1-\frac{1}{{2}^{N}}\bigg)\hspace{1em}(j=0,1,\dots ,N-1),\end{array}\]
then the model $\hat{Y}(t)$ defined by (10) approximates $Y(t)$ with given accuracy δ and reliability $1-\varepsilon $ in $L_{p}([0,T])$.
Proof.
We will use the following notations:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varDelta X(t)& \displaystyle =\hat{X}(t)-X(t),\\{} \displaystyle \overline{\tau }_{X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(X(t)\big),\\{} \displaystyle \overline{\tau }_{\varDelta X}& \displaystyle =\underset{t\in [0,T]}{\sup }\tau \big(\varDelta X(t)\big),\\{} \displaystyle c_{p}& \displaystyle =2{(4p/e)}^{2p}.\end{array}\]
We will denote the norm in $L_{p}([0,T])$ by $\| \cdot \| _{p}$.
Let us estimate $\mathrm{P}\{\| Y-\hat{Y}\| _{p}>\delta \}$. We have
(38)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathrm{P}\big\{\| Y-\hat{Y}\| _{p}>\delta \big\}& \displaystyle \le \frac{\mathsf{E}\| Y-\hat{Y}{\| _{p}^{p}}}{{\delta }^{p}}\\{} & \displaystyle =\frac{\mathsf{E}{\textstyle\int _{0}^{T}}|\exp \{X(t)\}-\exp \{\hat{X}(t)\}{|}^{p}dt}{{\delta }^{p}}.\end{array}\]
Denote
\[\varDelta (t)=\mathsf{E}{\big|\exp \big\{X(t)\big\}-\exp \big\{\hat{X}(t)\big\}\big|}^{p}.\]
An application of the Cauchy–Schwarz inequality yields
(39)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varDelta (t)& \displaystyle =\mathsf{E}\exp \big\{pX(t)\big\}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{p}\\{} & \displaystyle \le {\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}{\big(\mathsf{E}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{2p}\big)}^{1/2}.\end{array}\]
We will need two auxiliary inequalities. Using the power-mean inequality
\[\frac{a+b}{2}\le {\bigg(\frac{{a}^{r}+{b}^{r}}{2}\bigg)}^{1/r},\]
where $r\ge 1$, and setting $a={e}^{c}$ and $b=1$, we obtain
(40)
\[{\big({e}^{c}+1\big)}^{r}\le {2}^{r-1}\big({e}^{cr}+1\big).\]
It follows from (20) that
(41)
\[{\big|{e}^{a}-1\big|}^{q}\le |a{|}^{q}{\big({e}^{a}+1\big)}^{q}\]
for $q\ge 0$.
Now let us estimate $\mathsf{E}|1-\exp \{\varDelta X(t)\}{|}^{2p}$, where $t\in [0,T]$, using (41):
(42)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{\big|1-\exp \big\{\varDelta X(t)\big\}\big|}^{2p}& \displaystyle \le \mathsf{E}{\big|\varDelta X(t)\big|}^{2p}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{2p}\\{} & \displaystyle \le {\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{4p}\big)}^{1/2}{\big(\mathsf{E}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{4p}\big)}^{1/2}.\end{array}\]
Applying (40), we obtain:
(43)
\[\mathsf{E}{\big(1+\exp \big\{\varDelta X(t)\big\}\big)}^{4p}\le {2}^{4p-1}\mathsf{E}\big(\exp \big\{4p\varDelta X(t)\big\}+1\big).\]
It follows from (39), (42), and (43) that, for $t\in [0,T]$,
(44)
\[\varDelta (t)\le {2}^{p-1/4}{\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}{\big(\mathsf{E}{\big|\varDelta X(t)\big|}^{4p}\big)}^{1/4}{\big(1+\mathsf{E}\exp \big\{4p\varDelta X(t)\big\}\big)}^{1/4}.\]
Since, for $t\in [0,T]$,
\[\mathsf{E}{\big|\varDelta X(t)\big|}^{4p}\le c_{p}{\overline{\tau }_{\varDelta X}^{4p}}\]
(see (4)) and
\[\mathsf{E}\exp \big\{4p\varDelta X(t)\big\}\le \exp \big\{8{p}^{2}{\overline{\tau }_{\varDelta X}^{2}}\big\}\]
(see (3)), we have
(45)
\[\varDelta (t)\le {2}^{p-1/4}{c_{p}^{1/4}}\underset{t\in [0,T]}{\sup }{\big(\mathsf{E}\exp \big\{2pX(t)\big\}\big)}^{1/2}h(\overline{\tau }_{\varDelta X}),\hspace{1em}t\in [0,T].\]
It follows from Lemma 5 and inequalities (35)–(37) that
\[\overline{\tau }_{\varDelta X}=\underset{t\in [0,T]}{\sup }{\big(\mathsf{E}{\big(X(t)-\hat{X}(t)\big)}^{2}\big)}^{1/2}\le x_{m}.\]
We obtain using (45) that
\[\varDelta (t)\le \varepsilon {\delta }^{p}/T,\hspace{1em}t\in [0,T],\]
and hence
(46)
\[\mathsf{E}\| Y-\hat{Y}{\| _{p}^{p}}={\int _{0}^{T}}\varDelta (t)dt\le \varepsilon {\delta }^{p}.\]
Now the statement of the theorem follows from (38) and (46).  □
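Assuming the constants D, $Q_{1}$, $Q_{2}$, $S_{1}$, $S_{2}$ and $\sup _{t\in [0,T]}\mathsf{E}\exp \{2pX(t)\}$ are already known for the process and wavelet at hand, the truncation levels in (35)–(37) can be obtained numerically: first solve $h(x)=m$ for $x_{m}$, then evaluate the bounds. A hypothetical Python sketch:

```python
import math
from scipy.optimize import brentq

def theorem4_bounds(delta, eps, p, T, S1, S2, Q1, Q2, D, sup_E_exp_2pX):
    """Real-valued lower bounds for N0, N, M_j in (35)-(37) of Theorem 4."""
    m = eps * delta**p / (2**(2 * p) * (p / math.e)**(p / 2) * T * math.sqrt(sup_E_exp_2pX))
    h = lambda t: t**p * (1 + math.exp(8 * p**2 * t**2))**0.25 - m
    # h(x) - m is increasing on [0, inf) with value -m < 0 at x = 0,
    # so bracket the root x_m and solve h(x) = m.
    hi = 1.0
    while h(hi) < 0:
        hi *= 2
    x_m = brentq(h, 0.0, hi)
    N0 = 6 * (S1 + S2 * T)**2 / x_m**2 + 1
    N = max(1 + math.log2(72 * (Q1 + Q2 * T)**2 / (5 * x_m**2)),
            1 + math.log(18 * D**2 / (7 * x_m**2), 8))
    # bound (37) without the factor (1 - 2**(-N)) <= 1, which only enlarges it
    Mj = 1 + 12 * (Q1 + Q2 * T)**2 / x_m**2
    return N0, N, Mj
```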
Example.
Let us consider a centered Gaussian process $X(t)$ with correlation function
\[R(t,s)=\int _{\mathbb{R}}u(t,y)u(s,y)dy,\]
where
\[u(t,y)=\frac{t}{1+{t}^{2}+\exp \{{y}^{2}\}},\]
and an arbitrary Battle–Lemarié wavelet. It can be checked that the process $Y(t)=\exp \{X(t)\}$ and the Battle–Lemarié wavelet satisfy the conditions of Theorem 4.

Acknowledgments

The author’s research was supported by a Swiss Government Excellence Scholarship. The author would like to thank professors Enkelejd Hashorva and Yuriy V. Kozachenko for valuable discussions. The author is also grateful to the referee for his remarks, which helped to substantially improve the paper.

References

[1] 
Buldygin, V.V., Kozachenko, Y.V.: Metric Characterization of Random Variables and Random Processes. Am. Math. Soc., Providence, RI (2000). MR1743716
[2] 
Härdle, W., Kerkyacharian, G., Picard, D., Tsybakov, A.: Wavelets, Approximation and Statistical Applications. Springer, New York (1998). MR1618204. doi:10.1007/978-1-4612-2222-4
[3] 
Kozachenko, Y., Kovalchuk, Y.: Boundary-value problems with random initial conditions and functional series from $\mathrm{Sub}_{\varphi }(\varOmega )$. I. Ukr. Math. J. 50, 572–585 (1998). MR1698149. doi:10.1007/BF02487389
[4] 
Kozachenko, Y., Pogoriliak, O.: Simulation of Cox processes driven by random Gaussian field. Methodol. Comput. Appl. Probab. 13, 511–521 (2011). MR2822393. doi:10.1007/s11009-010-9169-8
[5] 
Kozachenko, Y., Turchyn, Y.: On Karhunen–Loève-like expansion for a class of random processes. Int. J. Stat. Manag. Syst. 3, 43–55 (2008)
[6] 
Kozachenko, Y., Sottinen, T., Vasylyk, O.: Simulation of weakly self-similar stationary increment $\mathrm{Sub}_{\varphi }(\varOmega )$-processes: a series expansion approach. Methodol. Comput. Appl. Probab. 7, 379–400 (2005). MR2210587. doi:10.1007/s11009-005-4523-y
[7] 
Kozachenko, Y., Pashko, A., Rozora, I.: Simulation of Random Processes and Fields. Zadruga, Kyiv (2007) (in Ukrainian)
[8] 
Kozachenko, Y., Rozora, I., Turchyn, Y.: Properties of some random series. Commun. Stat., Theory Methods 40, 3672–3683 (2011). MR2860766. doi:10.1080/03610926.2011.581188
[9] 
Ogorodnikov, V.A., Prigarin, S.M.: Numerical Modelling of Random Processes and Fields: Algorithms and Applications. VSP, Utrecht (1996). MR1419502
[10] 
Ripley, B.D.: Stochastic Simulation. John Wiley & Sons, New York (1987). MR0875224. doi:10.1002/9780470316726
[11] 
Turchyn, I.: Simulation of a strictly sub-Gaussian random field. Stat. Probab. Lett. 92, 183–189 (2014). MR3230492. doi:10.1016/j.spl.2014.05.022
[12] 
Turchyn, I.: Haar wavelet and simulation of stochastic processes. Contemp. Math. Stat. 3, 1–7 (2015)
[13] 
Turchyn, Y.: Simulation of sub-Gaussian processes using wavelets. Monte Carlo Methods Appl. 17, 215–231 (2011). MR2846496. doi:10.1515/MCMA.2011.010

Copyright
© 2015 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Sub-Gaussian random processes, simulation

MSC2010
60G12
