1 Introduction
In this paper we study the Cauchy problem for a one-dimensional stochastic wave equation driven by a general stochastic measure. We consider the solution of this problem in the mild sense (see (12) below). Our goal is to show the convergence of solutions of the equation under approximation of the stochastic measure by partial sums of its Fourier series and by Fejér sums.
Existence and uniqueness of the solution of our equation were obtained in [2]. The wave equation driven by a stochastic measure defined on subsets of the spatial variable was considered in [1]. Similar equations driven by stable random noises are studied in [9, 10], where the properties of generalized solutions are investigated.
Convergence of solutions of the stochastic wave equation under approximation of the stochastic integrator was studied in [3, 4]. Mild solutions of equations driven by a Gaussian random field in dimension three were considered in those papers.
Approximations of stochastic measures may be obtained by using Fourier and Fourier–Haar series; the corresponding results are given in [15, 17]. The partial sums of the resulting series generate random functions of sets which are signed measures for each fixed $\omega \in \varOmega $. The resulting equations can be solved as nonstochastic equations for each ω. The results of our paper imply that in this way we obtain an approximation of the solution of (12).
Papers [15, 17] contain examples of applying Fourier series of stochastic measures to the convergence of solutions of the stochastic heat equation. A similar application of the Fourier transform is given in [18]. Continuous dependence of solutions of the wave equation upon the data was studied in [1, 2]. In this paper we obtain a continuous dependence upon the values of the stochastic integrator of the equation.
The paper is organized as follows. In Section 2 we recall basic facts about stochastic measures and Fourier series. An important auxiliary lemma concerning convergence of stochastic integrals is proved in Section 3. The formulation of the Cauchy problem and a theorem on approximation of the solution by Fourier partial sums are given in Section 4. A similar approximation that uses Fejér sums is obtained in Section 5. Section 6 contains an example with comments on the convergence rate.
2 Preliminaries
Let $\mathsf{X}$ be an arbitrary set and let $\mathcal{B}(\mathsf{X})$ be a σ-algebra of subsets of $\mathsf{X}$. Let ${\mathsf{L}_{0}}(\varOmega ,\mathcal{F},\mathsf{P})$ be the set of (equivalence classes of) all real-valued random variables defined on a complete probability space $(\varOmega ,\mathcal{F},\mathsf{P})$. The convergence in ${\mathsf{L}_{0}}(\varOmega ,\mathcal{F},\mathsf{P})$ is understood to be in probability.
Definition 1.
A σ-additive mapping $\mu :\hspace{2.5pt}\mathcal{B}\to {\mathsf{L}_{0}}$ is called a stochastic measure (SM).
We do not assume positivity or moment existence for μ. In other words, μ is a vector measure with values in ${\mathsf{L}_{0}}$.
For a deterministic measurable function $g:\mathsf{X}\to \mathbb{R}$ and SM μ, an integral of the form ${\int _{\mathsf{X}}}g\hspace{0.1667em}d\mu $ is defined and studied in [6, Chapter 7], see also [11, Chapter 1]. In particular, every bounded measurable g is integrable with respect to any μ. Moreover, an analogue of the Lebesgue dominated convergence theorem holds for this integral (see [6, Theorem 7.1.1], [11, Corollary 1.2]).
Important examples of SMs include orthogonal stochastic measures and α-stable random measures defined on a σ-algebra for $\alpha \in (0,1)\cup (1,2]$ (see [19, Chapter 3]). Conditions under which a process with independent increments generates an SM may be found in [6, Chapters 7 and 8].
Let the random series ${\sum _{n\ge 1}}{\xi _{n}}$ converge unconditionally in probability, and let ${\mathsf{m}_{n}}$ be real signed measures on $\mathcal{B}$ with $|{\mathsf{m}_{n}}(A)|\le 1$. Set $\mu (A)={\sum _{n\ge 1}}{\xi _{n}}{\mathsf{m}_{n}}(A)$. Convergence of this series in probability follows from [21, Theorem V.4.2], and μ is an SM by [5, Theorem 8.6].
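For a fixed ω this construction can be checked numerically. The following is a toy sketch with assumed ingredients, not taken from the paper: the coefficients ${\xi _{n}}$ are Gaussian scaled by ${2^{-n}}$ (forcing absolute convergence of the series), and ${\mathsf{m}_{n}}(A)={\int _{A}}\cos (2\pi ns)\hspace{0.1667em}ds$, which satisfies $|{\mathsf{m}_{n}}(A)|\le 1$ on $(0,1]$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60  # truncation level of the series defining μ (an assumption)

# ξ_n: random coefficients; the 2^{-n} factor forces Σ_n |ξ_n| < ∞ a.s.
xi = rng.standard_normal(N) * 2.0 ** -np.arange(1, N + 1)

def m(n, a, b):
    # signed measure m_n((a,b]) = ∫_a^b cos(2π n s) ds, so |m_n(A)| ≤ 1 on (0,1]
    return (np.sin(2 * np.pi * n * b) - np.sin(2 * np.pi * n * a)) / (2 * np.pi * n)

def mu(a, b):
    # truncated series μ((a,b]) = Σ_n ξ_n m_n((a,b]) for this fixed ω
    return sum(xi[n - 1] * m(n, a, b) for n in range(1, N + 1))

# finite additivity on disjoint intervals holds exactly (up to float error)
assert abs(mu(0.0, 0.3) + mu(0.3, 0.7) - mu(0.0, 0.7)) < 1e-12
```

Each truncation of the series is a genuine signed measure for the fixed ω, which is exactly the property used later when the approximating equations are solved pathwise.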
Many examples of SMs on the Borel subsets of $[0,T]$ may be given by the Wiener-type integral
(1)
\[ \mu (A)={\int _{0}^{T}}{\mathbf{1}_{A}}(t)\hspace{0.1667em}d{X_{t}}.\]
We note the following cases of processes ${X_{t}}$ in (1) that generate an SM.
1. ${X_{t}}$ – any square integrable martingale.
2. ${X_{t}}={W_{t}^{H}}$ – the fractional Brownian motion with Hurst index $H>1/2$, see Theorem 1.1 of [8].
3. ${X_{t}}={S_{t}^{k}}$ – the sub-fractional Brownian motion for $k=H-1/2,\hspace{2.5pt}1/2<H<1$, see Theorem 3.2 (ii) and Remark 3.3 c) in [20].
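As a quick sanity check (not taken from the paper), the covariance function of the fractional Brownian motion, $R(s,t)=\frac{1}{2}({t^{2H}}+{s^{2H}}-|t-s{|^{2H}})$, can be verified numerically to be a valid covariance; $H=0.7>1/2$ and the time grid below are assumptions.

```python
import numpy as np

# Covariance of fractional Brownian motion W^H on an assumed grid, H = 0.7
H = 0.7
t = np.linspace(0.05, 1.0, 20)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))

# sanity checks: variance on the diagonal, symmetry, positive semidefiniteness
assert np.allclose(np.diag(R), t ** (2 * H))
assert np.allclose(R, R.T)
assert np.linalg.eigvalsh(R).min() > -1e-10
```

Positive semidefiniteness is what makes $R$ a legitimate Gaussian covariance, so that sample paths of ${W_{t}^{H}}$ (and hence the SM generated for $H>1/2$) can be simulated, e.g., by a Cholesky-type factorization.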
We will give another example. Let ζ be an arbitrary SM defined on the Borel subsets of $[a,b]\subset \mathbb{R}$, and let the function $h:[0,T]\times [a,b]\to \mathbb{R}$ be such that $h(0,y)=0$ and
(2)
\[ \big|h(t,y)-h(s,x)\big|\le L\big(|t-s|+|y-x{|^{\gamma }}\big),\hspace{1em}\gamma >1/2,\hspace{1em}L\in \mathbb{R}.\]
Then $h(\cdot ,y)$ is absolutely continuous for each y, $|\frac{\partial h(t,y)}{\partial t}|\le L$ a. e., and we can define the SM
(3)
\[ \mu (A)={\int _{[a,b]}}\hspace{0.1667em}d\zeta (y){\int _{A}}\frac{\partial h(t,y)}{\partial t}\hspace{0.1667em}dt,\hspace{1em}A\in \mathcal{B}\big([0,T]\big),\]
see [16, Section 3] for details. Note that Theorem 1 of [16] implies that the process
(4)
\[ {\mu _{t}}=\mu \big((0,t]\big)={\int _{[a,b]}}h(t,y)\hspace{0.1667em}d\zeta (y),\hspace{1em}t\in [0,T],\]
has a continuous version. In this case the process ${X_{t}}={\mu _{t}}$ in (1) defines an SM.
Let $\mathcal{B}$ be the Borel σ-algebra on $(0,1]$. For an arbitrary SM μ on $\mathcal{B}$ we consider the Fourier series in the following sense.
Denote
\[ {\xi _{k}}={\int _{(0,1]}}\exp \{-2\pi iks\}\hspace{0.1667em}d\mu (s),\hspace{1em}k\in \mathbb{Z}.\]
Definition 2.
The partial sums of the Fourier series of SM μ are
(5)
\[ {S_{j}}(s)=\sum \limits_{|k|\le j}{\xi _{k}}\exp \{2\pi iks\}=\sum \limits_{|k|\le j}\exp \{2\pi iks\}{\int _{(0,1]}}\exp \{-2\pi ikr\}\hspace{0.1667em}d\mu (r).\]
Stochastic integrals on the right hand side of (5) are defined for any μ, since the integrands are bounded. Thus the Fourier series is well defined for every SM on $\mathcal{B}$.
We will also consider the Fejér sums for SM μ:
(6)
\[ {\tilde{S}_{j}}(s)=\frac{1}{j+1}\sum \limits_{0\le k\le j}{S_{k}}(s).\]
For necessary information concerning the classical Fourier series, we refer to [22]. In the sequel, C and $C(\omega )$ will denote nonrandom and random constants respectively whose exact value is not essential.
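The relation between partial sums and Fejér sums can be illustrated numerically. In the sketch below μ is a toy SM with an assumed density f (so that ${\xi _{k}}$ are the classical Fourier coefficients of f); the averaged form of the Fejér sum is checked against the equivalent expression with triangular weights $(1-|k|/(j+1))$. All names and grid sizes are assumptions.

```python
import numpy as np

# toy SM with a density: dμ(s) = f(s) ds on (0,1]
f = lambda s: np.exp(np.sin(2 * np.pi * s))
s_mid = (np.arange(4096) + 0.5) / 4096  # midpoint quadrature nodes

def xi(k):
    # ξ_k = ∫_{(0,1]} exp(-2πiks) dμ(s), midpoint rule
    return np.mean(np.exp(-2j * np.pi * k * s_mid) * f(s_mid))

def S(j, s):
    # Fourier partial sum S_j(s) = Σ_{|k|≤j} ξ_k exp(2πiks)
    return sum(xi(k) * np.exp(2j * np.pi * k * s) for k in range(-j, j + 1)).real

def S_fejer(j, s):
    # Fejér sum written with the triangular weights (1 - |k|/(j+1))
    return sum((1 - abs(k) / (j + 1)) * xi(k) * np.exp(2j * np.pi * k * s)
               for k in range(-j, j + 1)).real

# the Fejér sum is the arithmetic mean of the partial sums S_0, ..., S_j
s0, J = 0.3, 5
assert abs(S_fejer(J, s0) - np.mean([S(k, s0) for k in range(J + 1)])) < 1e-10
```

The same identity is what is used later in Section 5, where the Fejér approximation of the solution is reduced to averages of the Fourier approximations.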
3 Convergence of integrals
Put
\[ {\Delta _{kn}}=\big((k-1){2^{-n}},k{2^{-n}}\big],\hspace{1em}1\le k\le {2^{n}},\hspace{2.5pt}n\ge 1.\]
Let the function $g(z,s):Z\times [0,1]\to \mathbb{R}$ be such that $\forall z\in Z:\hspace{0.2778em}g(z,\cdot )$ is continuous on $[0,1]$. Here $Z={Z_{0}}\times [0,1]$, ${Z_{0}}$ is an arbitrary set, $z=({z_{0}},t)$. Denote
\[ {g^{(n)}}(z,s)=g(z,0){\mathbf{1}_{\{0\}}}(s)+\sum \limits_{1\le k\le {2^{n}}}g\big(z,(k-1){2^{-n}}\big){\mathbf{1}_{{\Delta _{kn}}}}(s).\]
From [14, Lemma 3] it follows that the random function
\[ \eta (z,t)={\int _{(0,t]}}g(z,s)\hspace{0.1667em}d\mu (s)\]
has a version
(7)
\[ \begin{aligned}{}\widetilde{\eta }(z,t)& ={\int _{(0,t]}}{g^{(0)}}(z,s)d\mu (s)\\ {} & \hspace{1em}+\sum \limits_{n\ge 1}\bigg({\int _{(0,t]}}{g^{(n)}}(z,s)d\mu (s)-{\int _{(0,t]}}{g^{(n-1)}}(z,s)d\mu (s)\bigg),\end{aligned}\]
such that for all $\varepsilon >0,\hspace{0.1667em}\omega \in \varOmega ,\hspace{0.1667em}z\in Z$
(8)
\[ \begin{aligned}{}\big|\widetilde{\eta }(z,t)\big|& \le \big|g(z,0)\mu \big((0,t]\big)\big|\\ {} & \hspace{1em}+{\bigg\{\sum \limits_{n\ge 1}{2^{n\varepsilon }}\sum \limits_{1\le k\le {2^{n}}}{\big|g\big(z,k{2^{-n}}\wedge t\big)-g\big(z,(k-1){2^{-n}}\wedge t\big)\big|^{2}}\bigg\}^{\frac{1}{2}}}\\ {} & \hspace{1em}\times \bigg\{\sum \limits_{n\ge 1}{2^{-n\varepsilon }}\sum \limits_{1\le k\le {2^{n}}}{\big|\mu \big({\Delta _{kn}}\cap {(0,t]\big)\big|^{2}}\bigg\}^{\frac{1}{2}}}.\end{aligned}\]
We note that the series with the values of SM in (8) converges a. s. (see [13, Lemma 3.1]).
Lemma 1.
Let Z be an arbitrary set, $g,\hspace{2.5pt}{g_{j}}:Z\times [0,1]\to \mathbb{R}$, and let the following conditions hold:
(i) ${\sup _{z\in Z,t\in [0,1]}}|{g_{j}}(z,t)-g(z,t)|\to 0$, $j\to \infty $;
(ii) for some constants ${L_{g}}>0$, $\beta (g)>1/2$
\[ \big|{g_{j}}(z,t)-{g_{j}}(z,s)\big|\le {L_{g}}|t-s{|^{\beta (g)}},\hspace{1em}z\in Z,\hspace{2.5pt}t,s\in [0,1],\hspace{2.5pt}j\ge 1;\]
(iii) for some random constant ${C_{\mu }}(\omega )$
\[ \big|\mu \big((0,t]\big)\big|\le {C_{\mu }}(\omega ),\hspace{1em}t\in (0,1].\]
Then for versions (7) of the processes
\[ {\eta _{j}}(z,t)={\int _{(0,t]}}{g_{j}}(z,s)\hspace{0.1667em}d\mu (s),\hspace{1em}\eta (z,t)={\int _{(0,t]}}g(z,s)\hspace{0.1667em}d\mu (s)\]
the following holds:
\[ \underset{z\in Z,\hspace{2.5pt}t\in [0,1]}{\sup }\big|{\widetilde{\eta }_{j}}(z,t)-\widetilde{\eta }(z,t)\big|\stackrel{\mathsf{P}}{\to }0,\hspace{1em}j\to \infty .\]
Proof.
Without loss of generality, we can assume that $g=0$. For ${\eta _{j}}$ we will use inequality (8). Separating for each n the intervals ${\Delta _{kn}}$ that contain t and using condition (iii), we have
\[\begin{aligned}{}& \sum \limits_{n\ge 1}{2^{-n\varepsilon }}\sum \limits_{1\le k\le {2^{n}}}\big|\mu {\big({\Delta _{kn}}\cap (0,t]\big)\big|^{2}}\\ {} & \hspace{1em}\le \sum \limits_{n\ge 1}{2^{-n\varepsilon }}\sum \limits_{1\le k\le {2^{n}}}{\big|\mu ({\Delta _{kn}})\big|^{2}}+\sum \limits_{n\ge 1}{2^{-n\varepsilon }}{\big(2{C_{\mu }}(\omega )\big)^{2}}.\end{aligned}\]
Since both series on the right hand side are finite a. s., we obtain a random upper bound uniformly in t. Condition (ii) implies that
(9)
\[ \sum \limits_{1\le k\le {2^{n}}}{\big|{g_{j}}\big(z,k{2^{-n}}\wedge t\big)-{g_{j}}\big(z,(k-1){2^{-n}}\wedge t\big)\big|^{2}}\le {2^{n}}{L_{g}}{2^{-2n\beta (g)}}.\]
Let ${\sup _{z\in Z,t\in [0,1]}}|{g_{j}}(z,t)|={\delta _{j}}$, ${\delta _{j}}\to 0$. Then
(10)
\[ \sum \limits_{1\le k\le {2^{n}}}{\big|{g_{j}}\big(z,k{2^{-n}}\wedge t\big)-{g_{j}}\big(z,(k-1){2^{-n}}\wedge t\big)\big|^{2}}\le {2^{n}}\cdot 4{\delta _{j}^{2}}.\]
The product of (9) to the power θ and (10) to the power $1-\theta $ now gives
\[ \sum \limits_{1\le k\le {2^{n}}}{\big|{g_{j}}\big(z,k{2^{-n}}\wedge t\big)-{g_{j}}\big(z,(k-1){2^{-n}}\wedge t\big)\big|^{2}}\le C{2^{n}}{2^{-2n\theta \beta (g)}}{\delta _{j}^{2(1-\theta )}}.\]
For $\frac{1}{2\beta (g)}<\theta <1$, $0<\varepsilon <2\theta \beta (g)-1$ we have
\[ \underset{z\in Z,t\in [0,1]}{\sup }\sum \limits_{n\ge 1}{2^{n\varepsilon }}\sum \limits_{1\le k\le {2^{n}}}{\big|{g_{j}}\big(z,k{2^{-n}}\wedge t\big)-{g_{j}}\big(z,(k-1){2^{-n}}\wedge t\big)\big|^{2}}\le C{\delta _{j}^{2(1-\theta )}}.\]
The right hand side of the inequality tends to zero as $j\to \infty $. Since
\[ \underset{z\in Z,t\in [0,1]}{\sup }\big|{g_{j}}(z,0)\mu \big((0,t]\big)\big|\to 0,\hspace{1em}j\to \infty ,\]
applying (8) completes the proof of the lemma.  □
4 Approximation of solutions by using the Fourier partial sums
Consider the Cauchy problem for a one-dimensional stochastic wave equation
where $(t,x)\in [0,1]\times \mathbb{R}$, $a>0$, μ is an SM defined on the Borel σ-algebra $\mathcal{B}((0,1])$.
(11)
\[ \left\{\begin{array}{l}\frac{{\partial ^{2}}u(t,x)}{\partial {t^{2}}}={a^{2}}\frac{{\partial ^{2}}u(t,x)}{\partial {x^{2}}}+f(t,x,u(t,x))+\sigma (t,x)\hspace{0.1667em}\dot{\mu }(t),\\ {} u(0,x)={u_{0}}(x),\hspace{1em}\frac{\partial u(0,x)}{\partial t}={v_{0}}(x),\end{array}\right.\]
The solution of equation (11) is understood in the mild sense,
The integrals of random functions with respect to $dx$ are taken for each fixed $\omega \in \varOmega $. We impose the following assumptions.
(12)
\[ \begin{aligned}{}u(t,x)& =\frac{1}{2}\big({u_{0}}(x+at)+{u_{0}}(x-at)\big)+\frac{1}{2a}{\int _{x-at}^{x+at}}{v_{0}}(y)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}f\big(s,y,u(s,y)\big)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{(0,t]}}\hspace{0.1667em}d\mu (s){\int _{x-a(t-s)}^{x+a(t-s)}}\sigma (s,y)\hspace{0.1667em}dy\hspace{0.1667em}.\end{aligned}\]
A1. Functions ${u_{0}}(y)={u_{0}}(y,\omega ):\mathbb{R}\times \varOmega \to \mathbb{R}$ and ${v_{0}}(y)={v_{0}}(y,\omega ):\mathbb{R}\times \varOmega \to \mathbb{R}$ are measurable and bounded for every fixed $\omega \in \varOmega $.
A2. The function $f(s,y,v):[0,1]\times \mathbb{R}\times \mathbb{R}\to \mathbb{R}$ is measurable and bounded.
A3. The function $f(s,y,v)$ is uniformly Lipschitz in $y,v\in \mathbb{R}$:
\[ \big|f(s,{y_{1}},{v_{1}})-f(s,{y_{2}},{v_{2}})\big|\le {L_{f}}\big(|{y_{1}}-{y_{2}}|+|{v_{1}}-{v_{2}}|\big).\]
A4. The function $\sigma (s,y):[0,1]\times \mathbb{R}\to \mathbb{R}$ is measurable and bounded.
A5. The function $\sigma (s,y)$ is Hölder continuous:
A6. For some random constant ${C_{\mu }}(\omega )$: $|\mu ((0,t])|\le {C_{\mu }}(\omega ),\hspace{2.5pt}t\in (0,1]$.
From A1–A5 it follows that equation (12) has a unique solution (see Theorem 2.1 of [2]). The Hölder continuity condition was imposed on ${u_{0}}$ in [2], but it was not used in the proof of existence and uniqueness of the solution.
Note that the processes ${X_{t}}$ in examples 2–4 of SMs and ${\mu _{t}}$ in (4) are continuous, therefore A6 is fulfilled in these cases.
Consider the following equations:
(13)
\[ \begin{aligned}{}{u_{j}}(t,x)& =\frac{1}{2}\big({u_{0}}(x+at)+{u_{0}}(x-at)\big)+\frac{1}{2a}{\int _{x-at}^{x+at}}{v_{0}}(y)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}f\big(s,y,{u_{j}}(s,y)\big)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{(0,t]}}{S_{j}}(s)\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}\sigma (s,y)\hspace{0.1667em}dy\hspace{0.1667em}.\end{aligned}\]
Theorem 1.
Let A1–A6 be fulfilled, and assume that the following condition holds: every $h\in {\mathsf{L}_{2}}((0,1])$ is integrable with respect to μ, and for any ${h_{n}}\to h$ in ${\mathsf{L}_{2}}((0,1])$
(14)
\[ {\int _{(0,1]}}{h_{n}}\hspace{0.1667em}d\mu \stackrel{\mathsf{P}}{\to }{\int _{(0,1]}}h\hspace{0.1667em}d\mu ,\hspace{1em}n\to \infty .\]
Then, for each $\delta \in (0,1)$,
(15)
\[ \underset{x\in \mathbb{R},\hspace{2.5pt}t\in [0,1-\delta ]}{\sup }\big|u(t,x)-{u_{j}}(t,x)\big|\stackrel{\mathsf{P}}{\to }0,\hspace{1em}j\to \infty .\]
Proof.
The outline of the proof is the following. Denote
\[ g(t,x,s)={\int _{x-a(t-s)}^{x+a(t-s)}}\sigma (s,y)\hspace{0.1667em}dy,\hspace{1em}0\le s\le t.\]
Step 1 – using Gronwall's inequality, we will estimate the supremum in (15) by the value
\[ \underset{t,x}{\sup }\left|{\int _{(0,t]}}g\hspace{0.1667em}d\mu -{\int _{(0,t]}}g{S_{j}}\hspace{0.1667em}ds\right|.\]
Further (Step 2), we consider the continuation of $g(t,x,s),0\le s\le t,$ to the function ${g_{\delta }}(t,x,s),0\le s\le 1$, and estimate
\[ \underset{t,x}{\sup }\left|{\int _{(0,t]}}g\hspace{0.1667em}d\mu -{\int _{(0,1]}}{g_{\delta }}\hspace{0.1667em}d\mu \right|.\]
In Step 3 we consider
\[ \underset{t,x}{\sup }\left|{\int _{(0,1]}}{g_{\delta }}\hspace{0.1667em}d\mu -{\int _{[0,1]}}{g_{\delta }}{S_{j}}\hspace{0.1667em}ds\right|,\]
and in Step 4 we estimate
\[ \underset{t,x}{\sup }\left|{\int _{(0,1]}}{g_{\delta }}{S_{j}}\hspace{0.1667em}ds-{\int _{(0,t]}}g{S_{j}}\hspace{0.1667em}ds\right|,\]
and make the concluding remarks.
Step 1. We will take versions (7) for all integrals with respect to μ. We have
(16)
\[ \begin{aligned}{}& \big|u(t,x)-{u_{j}}(t,x)\big|\\ {} & \hspace{1em}\le \frac{1}{2a}\Bigg|{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}f\big(s,y,u(s,y)\big)\hspace{0.1667em}dy\\ {} & \hspace{2em}-{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}f\big(s,y,{u_{j}}(s,y)\big)\hspace{0.1667em}dy\Bigg|\\ {} & \hspace{2em}+\frac{1}{2a}\Bigg|{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{S_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\Bigg|\\ {} & \hspace{1em}\stackrel{A3}{\le }C{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}\big|u(s,y)-{u_{j}}(s,y)\big|\hspace{0.1667em}dy\\ {} & \hspace{2em}+\frac{1}{2a}\Bigg|{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{S_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\Bigg|.\end{aligned}\]
Denote
\[\begin{aligned}{}{\xi _{j}}(t)& =\underset{x\in \mathbb{R}}{\sup }\big|u(t,x)-{u_{j}}(t,x)\big|,\\ {} {\eta _{\delta j}}& =\frac{1}{2a}\underset{x\in \mathbb{R},\hspace{2.5pt}t\in [0,1-\delta ]}{\sup }\left|{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{S_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\right|.\end{aligned}\]
Then
(17)
\[ {\xi _{j}}(t)\le C{\int _{0}^{t}}{\xi _{j}}(s)\hspace{0.1667em}ds+{\eta _{\delta j}},\hspace{1em}t\in [0,1-\delta ].\]
Applying Gronwall's inequality, we get
(18)
\[ {\xi _{j}}(t)\le {\eta _{\delta j}}+C{\int _{0}^{t}}\exp \big\{C(t-s)\big\}{\eta _{\delta j}}\hspace{0.1667em}ds\le C{\eta _{\delta j}},\hspace{1em}t\in [0,1-\delta ].\]
Further, we will estimate ${\eta _{\delta j}}$. In particular, we will get that ${\eta _{\delta j}}<+\infty $ a. s. From this, A2 and the first inequality in (16) it follows that ${\xi _{j}}(t)\le C(\omega )<\infty $ a. s.
Step 2. Note that $g(t,x,t)=0$. We define the function
\[ {g_{\delta }}(t,x,s)=g(t,x,s),\hspace{1em}0\le s\le t,\hspace{1em}{g_{\delta }}(t,x,s)=0,\hspace{1em}t\le s<1-\delta .\]
Put ${g_{\delta }}(t,x,1)=g(t,x,0)$ and extend ${g_{\delta }}(t,x,s)$ for $s\in [1-\delta ,1]$ in a linear way such that the function is continuous on $[0,1]$. Also, ${g_{\delta }}(t,x,s)$ has a continuous periodic extension on $\mathbb{R}$ in the variable s for fixed $x\in \mathbb{R},\hspace{2.5pt}t\in [0,1)$.
First, consider
\[ {A_{\delta }}:=\underset{t,x}{\sup }\left|{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,1]}}{g_{\delta }}(t,x,s)\hspace{0.1667em}d\mu (s)\right|.\]
By the definition of the function ${g_{\delta }}$, we have
\[\begin{aligned}{}& {\int _{(0,1]}}{g_{\delta }}(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)={\int _{(1-\delta ,1]}}{g_{\delta }}(t,x,s)\hspace{0.1667em}d\mu (s)\\ {} & \hspace{1em}={\int _{(1-\delta ,1]}}{g_{\delta }}(t,x,0)\frac{s+\delta -1}{\delta }\hspace{0.1667em}d\mu (s)={g_{\delta }}(t,x,0){\int _{(1-\delta ,1]}}\frac{s+\delta -1}{\delta }\hspace{0.1667em}d\mu (s)\end{aligned}\]
By the analogue of the Lebesgue theorem,
(19)
\[ {\int _{(1-\delta ,1]}}\frac{s+\delta -1}{\delta }\hspace{0.1667em}d\mu (s)\stackrel{\mathsf{P}}{\to }0,\hspace{1em}\delta \to 0,\]
and therefore
(20)
\[ {A_{\delta }}\stackrel{\mathsf{P}}{\to }0,\hspace{1em}\delta \to 0.\]
Step 3. Further, we will estimate
\[ {B_{\delta j}}:=\underset{t,x}{\sup }\left|{\int _{(0,1]}}{g_{\delta }}(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{[0,1]}}{g_{\delta }}(t,x,s){S_{j}}(s)\hspace{0.1667em}ds\right|.\]
We have
(21)
\[ \begin{aligned}{}& {\int _{[0,1]}}{g_{\delta }}(t,x,s){S_{j}}(s)\hspace{0.1667em}ds={\int _{[0,1]}}{g_{\delta }}(t,x,s)\bigg(\sum \limits_{|k|\le j}{\xi _{k}}\exp \{2\pi iks\}\bigg)\hspace{0.1667em}ds\\ {} & \hspace{1em}={\int _{[0,1]}}{g_{\delta }}(t,x,s)\bigg(\sum \limits_{|k|\le j}\exp \{2\pi iks\}{\int _{(0,1]}}\exp \{-2\pi ikr\}\hspace{0.1667em}d\mu (r)\bigg)\hspace{0.1667em}ds\\ {} & \hspace{1em}=\sum \limits_{|k|\le j}{\int _{[0,1]}}{g_{\delta }}(t,x,s)\bigg(\exp \{2\pi iks\}{\int _{(0,1]}}\exp \{-2\pi ikr\}\hspace{0.1667em}d\mu (r)\bigg)\hspace{0.1667em}ds\\ {} & \hspace{1em}\stackrel{(\ast )}{=}\sum \limits_{|k|\le j}{\int _{(0,1]}}\exp \{-2\pi ikr\}\hspace{0.1667em}d\mu (r){\int _{[0,1]}}\exp \{2\pi iks\}{g_{\delta }}(t,x,s)\hspace{0.1667em}ds\\ {} & \hspace{1em}={\int _{(0,1]}}\bigg(\sum \limits_{|k|\le j}\exp \{-2\pi ikr\}{\int _{[0,1]}}\exp \{2\pi iks\}{g_{\delta }}(t,x,s)\hspace{0.1667em}ds\bigg)d\mu (r).\end{aligned}\]
(We can change the order of integration in (∗) due to Theorems 1 and 2 of [12].) Partial sums of the Fourier series of the functions ${g_{\delta }}(t,x,r)$ in the variable r are given by
\[ {g_{\delta j}}(t,x,r)=\sum \limits_{|k|\le j}\exp \{-2\pi ikr\}{\int _{[0,1]}}\exp \{2\pi iks\}{g_{\delta }}(t,x,s)\hspace{0.1667em}ds.\]
We will demonstrate that, for fixed δ, the conditions of Lemma 1 hold for the functions ${g_{\delta j}}$, ${g_{\delta }}$ and $z=(t,x)$. Consider the Fourier coefficients
\[\begin{aligned}{}{a_{-k}}(t,x)& ={\int _{[0,1]}}\exp \{2\pi iks\}{g_{\delta }}(t,x,s)\hspace{0.1667em}ds,\\ {} {a^{\prime }_{-k}}(t,x)& ={\int _{[0,1]}}\exp \{2\pi iks\}\frac{\partial {g_{\delta }}(t,x,s)}{\partial s}\hspace{0.1667em}ds=-2\pi ik{a_{-k}}(t,x).\end{aligned}\]
For any set of indices $\mathbb{M}\subset \mathbb{Z}\setminus \{0\}$ we have
(22)
\[ \begin{aligned}{}\underset{t,x}{\sup }\sum \limits_{k\in \mathbb{M}}|{a_{k}}|& =\frac{1}{2\pi }\underset{t,x}{\sup }\sum \limits_{k\in \mathbb{M}}\frac{|{a^{\prime }_{k}}|}{|k|}\le \frac{1}{2\pi }\underset{t,x}{\sup }{\bigg(\sum \limits_{k\in \mathbb{M}}{\big|{a^{\prime }_{k}}\big|^{2}}\bigg)^{1/2}}{\bigg(\sum \limits_{k\in \mathbb{M}}\frac{1}{{k^{2}}}\bigg)^{1/2}}\\ {} & \le \frac{1}{2\pi }\underset{t,x}{\sup }{\left\| \frac{\partial {g_{\delta }}(t,x,s)}{\partial s}\right\| _{{\mathsf{L}_{2}}}}{\bigg(\sum \limits_{k\in \mathbb{M}}\frac{1}{{k^{2}}}\bigg)^{1/2}}.\end{aligned}\]
Obviously, the supremum of the ${\mathsf{L}_{2}}$-norms (taken in the variable s) is finite for fixed δ. Thus,
\[ \underset{t,x,r}{\sup }\big|{g_{\delta j}}(t,x,r)-{g_{\delta l}}(t,x,r)\big|\le \underset{t,x}{\sup }\sum \limits_{l<|k|\le j}|{a_{k}}|\to 0,\hspace{1em}l,\hspace{2.5pt}j\to \infty ,\]
and the sequence ${g_{\delta j}}(t,x,r)$, $j\ge 1$, converges uniformly in $(t,x,r)$. Also, it is well known that for our piecewise smooth function ${g_{\delta }}$ the pointwise convergence ${g_{\delta j}}(t,x,s)\to {g_{\delta }}(t,x,s)$, $j\to \infty $, holds. Therefore, condition (i) of Lemma 1 is fulfilled.
Further, we will check condition (ii) for $\beta (g)\hspace{0.1667em}=\hspace{0.1667em}1$. Using the periodicity of ${g_{\delta }}(t,x,s)$ in s, for $\rho \in \mathbb{R}$ we obtain
\[ {g_{\delta j}}(t,x,r+\rho )=\sum \limits_{|k|\le j}\exp \{-2\pi ikr\}{\int _{[0,1]}}\exp \{2\pi iks\}{g_{\delta }}(t,x,s+\rho )\hspace{0.1667em}ds.\]
Therefore, ${g_{\delta j}}(t,x,r+\rho )-{g_{\delta j}}(t,x,r)$ are partial sums of the Fourier series of the function ${g_{\delta }}(t,x,s+\rho )-{g_{\delta }}(t,x,s)$. We can repeat the reasoning from (22) for $\mathbb{M}=\mathbb{Z}$ and
\[ {a_{-k}}={\int _{[0,1]}}\exp \{2\pi iks\}\big({g_{\delta }}(t,x,s+\rho )-{g_{\delta }}(t,x,s)\big)\hspace{0.1667em}ds.\]
It is easy to see that
Since
we get (ii).
Lemma 1 implies that
(23)
\[ {B_{\delta j}}\stackrel{\mathsf{P}}{\to }0,\hspace{1em}j\to \infty ,\]
for each fixed δ.
Step 4. It remains to consider
(24)
\[ \begin{aligned}{}{C_{\delta j}}& :=\underset{t,x}{\sup }\left|{\int _{(0,1]}}{S_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds-{\int _{(0,t]}}{S_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\right|\\ {} & =\underset{t,x}{\sup }\left|{\int _{(0,1]}}{S_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds-{\int _{(0,t]}}{S_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds\right|\\ {} & =\underset{t,x}{\sup }\left|{\int _{(1-\delta ,1]}}{S_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds\right|\\ {} & =\underset{t,x}{\sup }\left|{\int _{(1-\delta ,1]}}{S_{j}}(s)g(t,x,0)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds\right|\\ {} & =\underset{t,x}{\sup }\big|g(t,x,0)\big|\left|{\int _{(1-\delta ,1]}}{S_{j}}(s)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds\right|\\ {} & \le C\left|{\int _{(1-\delta ,1]}}{S_{j}}(s)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds\right|=:C|{\tilde{C}_{\delta j}}|.\end{aligned}\]
If we consider the function ${h_{\delta }}(s)=\frac{s+\delta -1}{\delta }{\mathbf{1}_{[1-\delta ,1]}}$ and its corresponding j-th Fourier sum ${h_{\delta j}}(s)$, then, as in (21), we have
(25)
\[ {\int _{(1-\delta ,1]}}{S_{j}}(s)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds={\int _{(0,1]}}{h_{\delta j}}(s)\hspace{0.1667em}d\mu (s).\]
By the standard properties of Fourier sums,
\[ {h_{\delta j}}\to {h_{\delta }},\hspace{1em}j\to \infty ,\]
in ${\mathsf{L}_{2}}([0,1])$. From (14) we get
(26)
\[ {\tilde{C}_{\delta j}}={\int _{(0,1]}}{h_{\delta j}}(s)\hspace{0.1667em}d\mu (s)\stackrel{\mathsf{P}}{\to }{\int _{(0,1]}}{h_{\delta }}(s)\hspace{0.1667em}d\mu (s):={D_{\delta }},\hspace{1em}j\to \infty .\]
We have already noticed in (19) that
(27)
\[ {D_{\delta }}\stackrel{\mathsf{P}}{\to }0,\hspace{1em}\delta \to 0.\]
Finally, we have
(28)
\[ {\eta _{\delta j}}\le \frac{1}{2a}\big({A_{\delta }}+{B_{\delta j}}+{C_{\delta j}}\big).\]
In order to explain that ${\eta _{\delta j}}\stackrel{\mathsf{P}}{\to }0$, we will use the seminorm
\[ \| \xi \| =\mathsf{E}\big(|\xi |\wedge 1\big),\hspace{1em}\xi \in {\mathsf{L}_{0}},\]
that corresponds to the convergence in ${\mathsf{L}_{0}}$. If (15) does not hold then
(29)
\[ \| {\eta _{\delta j}}\| \ge {\alpha _{0}}\]
for some $\delta ,{\alpha _{0}}>0$ and infinitely many j.
We have
\[\begin{aligned}{}\| {\eta _{\delta j}}\| & \stackrel{\text{(28)}}{\le }\| {A_{\delta }}\| +\| {B_{\delta j}}\| +\| {C_{\delta j}}\| \stackrel{\text{(24)}}{\le }\| {A_{\delta }}\| +\| {B_{\delta j}}\| +\| C{\tilde{C}_{\delta j}}\| \\ {} & \le \| {A_{\delta }}\| +\| {B_{\delta j}}\| +\big\| C({\tilde{C}_{\delta j}}-{D_{\delta }})\big\| +\| C{D_{\delta }}\| .\end{aligned}\]
From (23) and (26) it follows that for each δ
\[ \underset{j\to \infty }{\limsup }\| {\eta _{\delta j}}\| \le \| {A_{\delta }}\| +\| C{D_{\delta }}\| ,\]
(20) and (27) imply that
\[ \underset{\delta \to 0}{\lim }\big(\| {A_{\delta }}\| +\| C{D_{\delta }}\| \big)=0.\]
This contradicts (29) (decreasing δ in the formulation of the theorem only strengthens the assertion).  □
Remark 1.
In the next section we will demonstrate that, replacing partial sums of the Fourier series by the corresponding Fejér sums, we can omit condition (14).
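The advantage of Fejér sums can already be seen on the function ${h_{\delta }}$ from the proof of Theorem 1: the Fejér kernel is nonnegative, so the Fejér sums of the nonnegative ${h_{\delta }}$ stay essentially nonnegative, while the Fourier partial sums exhibit Gibbs oscillations at the jump of the 1-periodic extension. Below is a numerical sketch with an assumed $\delta =0.2$ and assumed grid sizes; the small tolerances account for quadrature error in the coefficients.

```python
import numpy as np

delta = 0.2
h = lambda s: np.where(s >= 1 - delta, (s + delta - 1) / delta, 0.0)

# Fourier coefficients of h by midpoint quadrature
s_mid = (np.arange(1 << 16) + 0.5) / (1 << 16)
hv = h(s_mid)
c = {k: np.mean(hv * np.exp(-2j * np.pi * k * s_mid)) for k in range(-40, 41)}

s = np.linspace(0.0, 1.0, 2001)
J = 40
S_J = sum(c[k] * np.exp(2j * np.pi * k * s) for k in range(-J, J + 1)).real
F_J = sum((1 - abs(k) / (J + 1)) * c[k] * np.exp(2j * np.pi * k * s)
          for k in range(-J, J + 1)).real

assert S_J.min() < -0.02   # Gibbs undershoot below min h = 0
assert F_J.min() > -0.02   # Fejér sum of nonnegative h stays ≥ 0 (up to quadrature error)
```

This uniform boundedness and pointwise convergence of the Fejér sums is exactly what replaces the ${\mathsf{L}_{2}}$-continuity condition (14) in the next section.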
5 Approximation of solutions by using the Fejér sums
Consider the following equations that use the Fejér sums ${\tilde{S}_{j}}(s)$ of SM μ:
(30)
\[ \begin{aligned}{}{\tilde{u}_{j}}(t,x)& =\frac{1}{2}\big({u_{0}}(x+at)+{u_{0}}(x-at)\big)+\frac{1}{2a}{\int _{x-at}^{x+at}}{v_{0}}(y)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{0}^{t}}\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}f\big(s,y,{\tilde{u}_{j}}(s,y)\big)\hspace{0.1667em}dy\\ {} & \hspace{1em}+\frac{1}{2a}{\int _{(0,t]}}{\tilde{S}_{j}}(s)\hspace{0.1667em}ds{\int _{x-a(t-s)}^{x+a(t-s)}}\sigma (s,y)\hspace{0.1667em}dy\hspace{0.1667em}.\end{aligned}\]
We show that the functions ${\tilde{u}_{j}}$ also approximate the solution u of equation (12). Here we impose weaker conditions on μ than in Theorem 1.
Theorem 2.
Let A1–A6 be fulfilled and $\mu (\{1\})=0$ a. s. Then, for each $\delta \in (0,1)$,
(31)
\[ \underset{x\in \mathbb{R},\hspace{2.5pt}t\in [0,1-\delta ]}{\sup }\big|u(t,x)-{\tilde{u}_{j}}(t,x)\big|\stackrel{\mathsf{P}}{\to }0,\hspace{1em}j\to \infty .\]
Proof.
We use the notation from the proof of Theorem 1. As in (17), we get
\[ {\tilde{\xi }_{j}}(t)\le C{\int _{0}^{t}}{\tilde{\xi }_{j}}(s)\hspace{0.1667em}ds+{\tilde{\eta }_{\delta j}},\hspace{1em}t\in [0,1-\delta ],\]
where
\[\begin{aligned}{}{\tilde{\xi }_{j}}(t)& =\underset{x\in \mathbb{R}}{\sup }\big|u(t,x)-{\tilde{u}_{j}}(t,x)\big|,\\ {} {\tilde{\eta }_{\delta j}}& =\frac{1}{2a}\underset{x\in \mathbb{R},\hspace{2.5pt}t\in [0,1-\delta ]}{\sup }\left|{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{\tilde{S}_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\right|.\end{aligned}\]
Gronwall's inequality implies that
(32)
\[ {\tilde{\xi }_{j}}(t)\le C{\tilde{\eta }_{\delta j}},\hspace{1em}t\in [0,1-\delta ].\]
We will estimate ${\tilde{\eta }_{\delta j}}$. Consider
\[\begin{aligned}{}& {\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{\tilde{S}_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\\ {} & \hspace{1em}=\frac{1}{j+1}\sum \limits_{0\le k\le j}\bigg({\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{S_{k}}(s)g(t,x,s)\hspace{0.1667em}ds\bigg).\end{aligned}\]
Similarly to the estimates of ${\eta _{\delta j}}$ in Theorem 1, we have
\[\begin{aligned}{}{\tilde{\eta }_{\delta j}}& \le \frac{1}{j+1}\sum \limits_{0\le k\le j}({A_{\delta }}+{B_{\delta k}})\\ {} & \hspace{1em}+\underset{t,x}{\sup }\left|{\int _{(0,1]}}{\tilde{S}_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds-{\int _{(0,t]}}{\tilde{S}_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\right|.\end{aligned}\]
As in (24), we obtain
\[\begin{aligned}{}& \underset{t,x}{\sup }\left|{\int _{(0,1]}}{\tilde{S}_{j}}(s){g_{\delta }}(t,x,s)\hspace{0.1667em}ds-{\int _{(0,t]}}{\tilde{S}_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\right|\\ {} & \hspace{1em}\le C\left|{\int _{(1-\delta ,1]}}{\tilde{S}_{j}}(s)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds\right|.\end{aligned}\]
Taking the sum of terms (25), we get
\[ {\int _{(1-\delta ,1]}}{\tilde{S}_{j}}(s)\frac{s+\delta -1}{\delta }\hspace{0.1667em}ds={\int _{(0,1]}}{\tilde{h}_{\delta j}}(s)\hspace{0.1667em}d\mu (s).\]
Here the functions
\[ {\tilde{h}_{\delta j}}(s)=\frac{1}{j+1}\sum \limits_{0\le k\le j}{h_{\delta k}}(s)\]
are the Fejér sums of the function ${h_{\delta }}(s)=\frac{s+\delta -1}{\delta }{\mathbf{1}_{[1-\delta ,1]}}$. By well-known properties, ${\tilde{h}_{\delta j}}(s)$ are uniformly bounded for every δ and ${\tilde{h}_{\delta j}}(s)\to {\tilde{h}_{\delta }}(s)$, $j\to \infty $, for every $s\in (0,1)$. Therefore,
\[ {\int _{(0,1]}}{\tilde{h}_{\delta j}}(s)\hspace{0.1667em}d\mu (s)\stackrel{\mathsf{P}}{\to }{\int _{(0,1]}}{\tilde{h}_{\delta }}(s)\hspace{0.1667em}d\mu (s),\hspace{1em}j\to \infty ,\]
(here we used the condition $\mu (\{1\})=0$ a. s.). It remains to repeat the reasoning in the proof of Theorem 1 carried out after (25).  □
6 Example
We have shown that the solution of (12) is approximated by the solutions of (13) and (30). Equations (13) and (30) may be considered as nonstochastic equations for each fixed ω, and the properties of the solutions ${u_{j}}$ and ${\tilde{u}_{j}}$ follow from the theory of the deterministic wave equation.
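For a fixed ω, the mild form (13) can indeed be treated as a deterministic integral equation and solved by Picard iteration. The sketch below is a minimal illustration under toy assumptions: the data u0, f, σ, the stand-in density Sj and all grid sizes are our choices (picked to satisfy A1–A5 with $v_0\equiv 0$), not quantities from the paper.

```python
import numpy as np

a = 1.0
nt, nx, ny = 11, 31, 21
tg = np.linspace(0.0, 1.0, nt)
xg = np.linspace(-4.0, 4.0, nx)

u0 = lambda x: np.exp(-x ** 2)
f = lambda s, y, u: np.sin(u)          # bounded, Lipschitz in u (A2, A3)
sigma = lambda s, y: np.cos(y)         # bounded, Hölder (A4, A5)
Sj = lambda s: np.cos(2 * np.pi * s)   # stand-in for a Fourier partial sum of μ

def trap(vals, step):
    # trapezoidal rule on a uniform grid
    return step * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def picard_step(U):
    # one application of the mild (d'Alembert) map to the grid function U
    V = np.empty_like(U)
    for it, t in enumerate(tg):
        for ix, x in enumerate(xg):
            val = 0.5 * (u0(x + a * t) + u0(x - a * t))  # v0 ≡ 0 here
            inner = np.zeros(it + 1)
            for k in range(it + 1):
                s = tg[k]
                y = np.linspace(x - a * (t - s), x + a * (t - s), ny)
                u_sy = np.interp(y, xg, U[k])            # interpolate U(s, ·)
                g = f(s, y, u_sy) + Sj(s) * sigma(s, y)
                inner[k] = trap(g, (y[-1] - y[0]) / (ny - 1))
            if it > 0:
                val += trap(inner, tg[1] - tg[0]) / (2 * a)
            V[it, ix] = val
    return V

U = np.zeros((nt, nx))
diffs = []
for _ in range(14):
    U_new = picard_step(U)
    diffs.append(np.abs(U_new - U).max())
    U = U_new

# the mild map is a contraction here, so the Picard iterates converge
assert diffs[-1] < 1e-3 and diffs[-1] < diffs[0]
```

The contraction comes from the Lipschitz condition A3 exactly as in the Gronwall step of the proofs; the noise enters only through the continuous density ${S_{j}}(s)$, which is what makes the pathwise treatment possible.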
Also, in some cases the rate of convergence in (15) and (31) may be estimated. By (18) and (32), we need to estimate ${\eta _{\delta j}}$ and ${\tilde{\eta }_{\delta j}}$, respectively.
As an example, consider the SM μ given by (3) and (4), provided that condition (2) holds. Assumption A6 is fulfilled in this case because ${\mu _{t}}$ has a continuous version; A1–A5 are assumed as before. Recall that if (14) is fulfilled for the SM ζ in (3) then it holds for μ (see Remark 1).
In addition, assume that for some $L>0$, $\gamma >1/2$ and all $t,{y_{1}},{y_{2}}$
\[ \left|\frac{\partial h(t,{y_{1}})}{\partial t}-\frac{\partial h(t,{y_{2}})}{\partial t}\right|\le L|{y_{1}}-{y_{2}}{|^{\gamma }}.\]
Then from Theorems 1 and 2 of [12] we obtain that we can change the order of integration in (3), and
(33)
\[ \mu (A)={\int _{A}}\hspace{0.1667em}dt{\int _{[a,b]}}\frac{\partial h(t,y)}{\partial t}\hspace{0.1667em}d\zeta (y).\]
The Fourier coefficients of μ are
\[\begin{aligned}{}{\xi _{k}}& ={\int _{(0,1]}}\exp \{-2\pi ikt\}\hspace{0.1667em}d\mu (t)={\int _{(0,1]}}\exp \{-2\pi ikt\}\hspace{0.1667em}dt{\int _{[a,b]}}\frac{\partial h(t,y)}{\partial t}\hspace{0.1667em}d\zeta (y)\\ {} & \stackrel{(\ast )}{=}{\int _{[a,b]}}\hspace{0.1667em}d\zeta (y){\int _{(0,1]}}\exp \{-2\pi ikt\}\frac{\partial h(t,y)}{\partial t}\hspace{0.1667em}dt={\int _{[a,b]}}{c_{k}}(y)\hspace{0.1667em}d\zeta (y)\end{aligned}\]
where in (∗) we again use Theorems 1 and 2 of [12], and ${c_{k}}(y)$ denotes the Fourier series coefficient of $\frac{\partial h(\cdot ,y)}{\partial t}$.
Therefore, the partial Fourier sums and Fejér sums for this SM are
(34)
\[ {S_{j}}(t)={\int _{[a,b]}}{S_{j}^{(h)}}(t,y)\hspace{0.1667em}d\zeta (y),\hspace{1em}{\tilde{S}_{j}}(t)={\int _{[a,b]}}{\tilde{S}_{j}^{(h)}}(t,y)\hspace{0.1667em}d\zeta (y),\]
where ${S_{j}^{(h)}}(\cdot ,y)$ and ${\tilde{S}_{j}^{(h)}}(\cdot ,y)$ are respectively the Fourier and Fejér sums of the function $\frac{\partial h(\cdot ,y)}{\partial t}$ for each fixed y. To estimate ${\eta _{\delta j}}$, consider
\[\begin{aligned}{}& {\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}d\mu (s)-{\int _{(0,t]}}{S_{j}}(s)g(t,x,s)\hspace{0.1667em}ds\\ {} & \stackrel{\text{(33)},\text{(34)}}{=}{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}ds{\int _{[a,b]}}\frac{\partial h(s,y)}{\partial s}\hspace{0.1667em}d\zeta (y)\\ {} & \hspace{2em}-{\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}ds{\int _{[a,b]}}{S_{j}^{(h)}}(s,y)\hspace{0.1667em}d\zeta (y)\\ {} & \hspace{1em}={\int _{(0,t]}}g(t,x,s)\hspace{0.1667em}ds{\int _{[a,b]}}\bigg(\frac{\partial h(s,y)}{\partial s}-{S_{j}^{(h)}}(s,y)\bigg)\hspace{0.1667em}d\zeta (y).\end{aligned}\]
The integral with respect to ζ may be estimated by (8). For the value of
\[ \underset{s,y}{\sup }\left|\frac{\partial h(s,y)}{\partial s}-{S_{j}^{(h)}}(s,y)\right|\]
we can find numerous results in the theory of classical Fourier series. For example, if h is smooth enough, we obtain $O({j^{-1}}\ln j)$, see [22, Theorem (10.8) of Chapter II]. The detailed calculations are not the subject of this paper.
Analogous considerations may be carried out for the Fejér sums, ${\tilde{\eta }_{\delta j}}$, and ${\tilde{S}_{j}^{(h)}}(t,y)$.
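The dependence of this quantity on the smoothness of h can be illustrated numerically. The test function below (which is ${C^{2}}$ as a 1-periodic function) and the grid sizes are assumptions; the sketch only checks qualitatively that the uniform error of the partial sums decreases as j grows.

```python
import numpy as np

# stand-in for ∂h(·,y)/∂t at a fixed y: C^2 as a 1-periodic function
hprime = lambda s: (s * (1 - s)) ** 2
s_mid = (np.arange(1 << 14) + 0.5) / (1 << 14)

def coeff(k):
    # c_k = ∫_0^1 hprime(s) exp(-2πiks) ds, midpoint rule
    return np.mean(hprime(s_mid) * np.exp(-2j * np.pi * k * s_mid))

def sup_err(J):
    # uniform error of the J-th Fourier partial sum on a fine grid
    s = np.linspace(0.0, 1.0, 1001)
    SJ = sum(coeff(k) * np.exp(2j * np.pi * k * s) for k in range(-J, J + 1)).real
    return np.abs(SJ - hprime(s)).max()

errs = [sup_err(J) for J in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]  # uniform error decreases as J grows
```

For this smooth example the decay is much faster than the $O({j^{-1}}\ln j)$ rate quoted above, which corresponds to weaker smoothness assumptions on h.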