1 Introduction
Since the beginnings of the study of partial differential equations, the Laplace operator $\Delta :={\textstyle\sum \limits_{j=1}^{d}}{\partial _{j}^{2}}$ has been of great interest in many mathematical theories and applications. For example, the solution of the Poisson equation
\[ \Delta u=f\]
for some function f can be interpreted as a stationary solution of the heat equation and is therefore important in thermodynamics. In order to study different heterogeneity assumptions on the space, the divergence operator
\[ \text{div}(A(x)\nabla u):={\sum \limits_{i,j=1}^{d}}{\partial _{i}}({a_{ij}}(x){\partial _{j}}u)\]
was introduced, where the matrix function A satisfies some ellipticity condition. This kind of operator appears, for example, in the Maxwell equations in general media (see [16]). The fundamental solution of the Laplace equation is well known, but there is no explicit form for a fundamental solution of a general divergence form operator, although upper and lower bounds exist, see, for example, [9].
The goal of this paper is to obtain generalized solutions of the equation
\[ p(x,D)u=\dot{L},\]
where $\dot{L}$ is a so-called generalized Lévy white noise and p is a partial differential operator of the form

(1)
\[ -\text{div}(A(x)\nabla u)+b(x)\cdot \nabla u+V(x)u,\hspace{1em}u\in {C^{\infty }}({\mathbb{R}^{d}}),\]

for a uniformly elliptic ${\mathbb{R}^{d\times d}}$-valued matrix function A and functions $b:{\mathbb{R}^{d}}\to {\mathbb{R}^{d}}$, $V:{\mathbb{R}^{d}}\to \mathbb{R}$. In particular, we obtain generalized and mild solutions for the generalized electric Schrödinger operator driven by a Lévy white noise, i.e. we look for a solution u of the stochastic partial differential equation

(2)
\[ -\text{div}(A(x)\nabla u)+V(x)u=\dot{L},\]

where A is a uniformly elliptic $d\times d$ matrix function, the potential $V>0$ belongs to the reverse Hölder class and $\dot{L}$ is a Lévy white noise. Since the fundamental solution of the Schrödinger operator has exponential decay, we can impose weaker assumptions on the Lévy white noise than in the general case (1) to show the existence of generalized and mild solutions. This can be seen as an extension of the theory founded in [2] by D. Berger, whose results, however, are not directly applicable. In order to overcome this shortcoming we derive existence results for generalized random processes constructed by integral transforms of the underlying Lévy white noise. Furthermore, we study several distributional properties of these solutions and show that we can construct periodically stationary generalized random processes.
We solve the stochastic partial differential equations in the distributional sense, i.e. a solution s is a distribution-valued random variable such that $\langle s,p{(x,D)^{\ast }}\varphi \rangle =\langle \dot{L},\varphi \rangle $ for every φ in our function space. For a good introduction to distributional solutions of partial differential equations see, for example, [8]. Until now there is no good understanding of Lévy white noise driven stochastic partial differential equations under general moment conditions, but there is literature for the case of Gaussian white noise and of Lévy white noise under stricter moment conditions. In [17] SPDEs driven by Gaussian white noise were studied. For the case of stochastic partial differential equations with constant coefficients, see also [3] and [2]. Our method is inspired by the paper [5] and the results of [12]. We also mention the monograph [11] by S. Peszat and J. Zabczyk, which gives a good overview of SPDEs driven by Lévy noise; there, another approach motivated by semigroup theory is used to treat parabolic and hyperbolic SPDEs driven by Lévy noise in Banach spaces.
In Section 2 we provide the general framework needed to discuss stochastic partial differential equations driven by Lévy white noise, whose solutions are defined as generalized random processes. We introduce Lévy white noise as a generalized random process in the sense of I.M. Gelfand and N.Y. Vilenkin (see [6]). Theorem 1 implies that a large class of linear stochastic partial differential equations driven by a Lévy white noise has a generalized solution, where we use a more general kernel $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ than in Theorem 3.4 of D. Berger in [2]. Furthermore, we study the moment properties of generalized random processes s driven by a Lévy white noise $\dot{L}$. For a well-defined random process $s(\varphi )=\langle \dot{L},G(\varphi )\rangle $, $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, we show in Theorem 3 that if $\dot{L}$ has finite β-moment for some $\beta >0$, then s also has finite β-moment under further conditions on the kernel G. Conversely, we show that if s has finite β-moment, then so does $\dot{L}$. In Section 3 we discuss our first example, partial differential operators of the form (1), and give existence results for generalized solutions. Furthermore, we discuss periodically stationary solutions s for this example. Afterwards we consider the generalized electric Schrödinger operator driven by Lévy white noise and show, under weaker conditions than in the example above, the existence of generalized solutions. We also study mild solutions of (2), i.e. solutions u which are random fields given by the convolution of the Lévy white noise with the fundamental solution of (2). In Proposition 3 we give conditions under which such a solution u exists and is stochastically continuous. Most of the notation used later on is standard or self-explanatory.
We mention only that ${\lambda ^{d}}$ denotes the Lebesgue measure on ${\mathbb{R}^{d}}$ and $\mathcal{D}({\mathbb{R}^{d}})$ the space of test functions on ${\mathbb{R}^{d}}$, i.e. the space of infinitely differentiable real-valued functions on ${\mathbb{R}^{d}}$ with compact support, and ${\mathcal{D}^{\prime }}({\mathbb{R}^{d}})$ its dual space, i.e. the space of distributions.
2 Integral transforms and generalized stochastic processes driven by Lévy white noise
We provide the general framework needed to discuss stochastic partial differential equations driven by a Lévy white noise and introduce a Lévy white noise as a generalized random process in the sense of I.M. Gelfand and N.Y. Vilenkin (see [6]). In [2] it was shown that a convolution operator with certain integrability properties defines a generalized random process, assuming low moment conditions on the Lévy white noise. Similar to [2], we will use the characterization of the extended domain (see [5], Proposition 3.4) and obtain new results for a more general kernel $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$, which allows us in Section 3 to model different kinds of stationarity assumptions and also to obtain generalized solutions of Lévy-driven stochastic partial differential equations.
Let $(\Omega ,\mathcal{F},\mathcal{P})$ be a probability space.
Definition 1 (See [5], Definition 2.1).
A generalized random process is a linear and continuous function $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$. The linearity means that, for every ${\varphi _{1}},{\varphi _{2}}\in \mathcal{D}({\mathbb{R}^{d}})$ and $\mu \in \mathbb{R}$,
\[ s({\varphi _{1}}+\mu {\varphi _{2}})=s({\varphi _{1}})+\mu s({\varphi _{2}})\hspace{2.5pt}\text{almost surely.}\]
The continuity means that if ${\varphi _{n}}\to \varphi $ in $\mathcal{D}({\mathbb{R}^{d}})$, then $s({\varphi _{n}})$ converges to $s(\varphi )$ in probability.

Due to the nuclear structure on $\mathcal{D}({\mathbb{R}^{d}})$ it follows from [17], Corollary 4.2, that a generalized random process has a version which is a measurable function from $(\Omega ,\mathcal{F})$ to $({\mathcal{D}^{\prime }}({\mathbb{R}^{d}}),\mathcal{C})$ with respect to the cylindrical σ-field $\mathcal{C}$ generated by the sets
\[ \{u\in {\mathcal{D}^{\prime }}({\mathbb{R}^{d}})|\hspace{0.1667em}(\langle u,{\varphi _{1}}\rangle ,\dots ,\langle u,{\varphi _{N}}\rangle )\in B\}\]
with $N\in \mathbb{N}$, ${\varphi _{1}},\dots ,{\varphi _{N}}\in \mathcal{D}({\mathbb{R}^{d}})$ and $B\in \mathcal{B}({\mathbb{R}^{N}})$. From now on we always work with such a version.

The probability law of a generalized random process s is the probability measure on ${\mathcal{D}^{\prime }}({\mathbb{R}^{d}})$ given by
\[ {\mathcal{P}_{s}}(B):=\mathcal{P}(\{\omega \in \Omega :s(\omega )\in B\})\]
for $B\in \mathcal{C}$, where $\mathcal{C}$ is the cylindrical σ-field on ${\mathcal{D}^{\prime }}({\mathbb{R}^{d}})$.
The characteristic functional of a generalized random process s is the functional ${\widehat{\mathcal{P}}_{s}}:\mathcal{D}({\mathbb{R}^{d}})\to \mathbb{C}$ defined by
\[ {\widehat{\mathcal{P}}_{s}}(\varphi )=\underset{{\mathcal{D}^{\prime }}({\mathbb{R}^{d}})}{\int }\exp (i\langle u,\varphi \rangle )d{\mathcal{P}_{s}}(u).\]
The characteristic functional characterizes the law of s in the sense that two generalized random processes are equal in law if and only if they have the same characteristic functional. Now we define the Lévy white noise, which is closely connected to a Lévy process. In general, a Lévy process is a stochastically continuous process with independent and stationary increments starting in 0. A Lévy process ${({L_{t}})_{t\ge 0}}$ is characterized by its characteristic function
\[ \mathbb{E}\left[{e^{iz{L_{t}}}}\right]={e^{t\psi (z)}}\]
for every $z\in \mathbb{R}$ and $t\ge 0$. We call ψ the Lévy exponent; it is characterized by $a\ge 0$, $\gamma \in \mathbb{R}$ and a Lévy measure ν, i.e. a measure such that
\[ \nu (\{0\})=0\hspace{2.5pt}\text{and}\hspace{2.5pt}\underset{\mathbb{R}\setminus \{0\}}{\int }\min \{1,{x^{2}}\}\nu (dx)<\infty .\]
For all $z\in \mathbb{R}$ it holds that
\[ \psi (z)=i\gamma z-\frac{1}{2}a{z^{2}}+\underset{\mathbb{R}}{\int }({e^{ixz}}-1-ixz{\mathds{1}_{|x|\le 1}})\nu (dx).\]
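As a concrete instance of this representation, a Poisson process of rate λ has characteristic triplet $(0,\lambda ,\lambda {\delta _{1}})$, and the compensator term cancels against the drift, leaving $\psi (z)=\lambda ({e^{iz}}-1)$. The following Python sketch (with arbitrary illustrative values of λ and z) evaluates the triplet formula for a purely atomic Lévy measure and checks this cancellation numerically:

```python
import cmath

def psi(z, a, gamma, nu_atoms):
    """Levy exponent for triplet (a, gamma, nu), where nu is a purely atomic
    Levy measure given as a list of (weight, position) atoms."""
    jump_part = sum(w * (cmath.exp(1j * x * z) - 1 - 1j * x * z * (abs(x) <= 1))
                    for w, x in nu_atoms)
    return 1j * gamma * z - 0.5 * a * z * z + jump_part

# Poisson process of rate lam: triplet (0, lam, lam * delta_1);
# the exponent should collapse to lam * (e^{iz} - 1).
lam, z = 2.0, 0.7
lhs = psi(z, a=0.0, gamma=lam, nu_atoms=[(lam, 1.0)])
rhs = lam * (cmath.exp(1j * z) - 1)
assert abs(lhs - rhs) < 1e-12
```
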
The function ψ is uniquely characterized by the triplet $(a,\gamma ,\nu )$, known as the characteristic triplet.

Definition 2.
A Lévy white noise $\dot{L}$ on ${\mathbb{R}^{d}}$ is a generalized random process with characteristic functional of the form
\[ {\widehat{\mathcal{P}}_{\dot{L}}}(\varphi )=\exp \left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }\psi (\varphi (x)){\lambda ^{d}}(dx)\right)\]
for every $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, where $\psi :\mathbb{R}\to \mathbb{C}$ is a Lévy exponent.

The existence of the Lévy white noise was shown in [6]. Another possible way to construct Lévy white noise is via independently scattered random measures, i.e. one considers a random process indexed by indicator functions which is independently scattered in the sense that indicator functions with disjoint supports define independent random variables (see B.S. Rajput and J. Rosinski [12]). In [5] J. Fageot and T. Humeau unified these two approaches by extending the Lévy white noise, defined as a generalized random process, to an independently scattered random measure. This connection led to results in [5] which make it possible to extend the domain of definition of a Lévy white noise to some Borel-measurable functions $f:{\mathbb{R}^{d}}\to \mathbb{R}$. We say that a function f is in the domain of $\dot{L}$ if there exists a sequence of elementary functions ${f_{n}}$ converging almost everywhere to f such that $\langle \dot{L},{f_{n}}{\mathds{1}_{A}}\rangle $ converges in probability as $n\to \infty $ for every Borel set A, and we set $\langle \dot{L},f\rangle $ to be the limit in probability of $\langle \dot{L},{f_{n}}\rangle $ as $n\to \infty $, where $\langle \dot{L},{f_{n}}\rangle $ is defined by ${\textstyle\sum _{j=1}^{m}}{a_{j}}\langle \dot{L},{\mathds{1}_{{A_{j}}}}\rangle $ for an elementary function ${f_{n}}:={\textstyle\sum _{j=1}^{m}}{a_{j}}{\mathds{1}_{{A_{j}}}}$, see also [5], Definition 3.3. For the maximal domain of the Lévy white noise $\dot{L}$ we write $D(\dot{L})$. By setting $L(A):=\langle \dot{L},{\mathds{1}_{A}}\rangle $ for bounded Borel sets A, the extension of a Lévy white noise $\dot{L}$ can be identified with a Lévy basis L in the sense of Rajput and Rosinski [12], see [5], Theorem 3.2 and Proposition 3.4. As a Lévy basis can be identified with a Lévy white noise in a canonical way, i.e.
$\langle \dot{L},\varphi \rangle :={\textstyle\int _{{\mathbb{R}^{d}}}}\varphi (x)dL(x)$ for $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, we make no difference between a Lévy white noise and a Lévy basis. In particular, a Borel-measurable function $f:{\mathbb{R}^{d}}\to \mathbb{R}$ is in $D(\dot{L})$ if and only if f is integrable with respect to the Lévy basis L in the sense of Rajput and Rosinski [12], see [5], Definition 3.3.
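For an indicator test function the characteristic functional can be evaluated in closed form: taking the Poisson exponent $\psi (z)=\lambda ({e^{iz}}-1)$ and $\varphi =c{\mathds{1}_{[0,1]}}$, one has $\langle \dot{L},\varphi \rangle =c\hspace{0.1667em}L([0,1])$ with $L([0,1])\sim \text{Poisson}(\lambda )$, so the functional must agree with the Poisson characteristic function. A short Python sketch (λ and c are arbitrary illustrative values) compares the functional against a direct summation of the Poisson probability mass function:

```python
import cmath, math

# exp(int_R psi(phi(x)) dx) for psi(z) = lam*(e^{iz}-1) and phi = c*1_{[0,1]}:
# the support of phi has Lebesgue measure 1, so the integral is just psi(c).
lam, c = 1.5, 0.9
functional = cmath.exp(lam * (cmath.exp(1j * c) - 1))

# E exp(i*c*N) for N ~ Poisson(lam), summed term by term over the pmf
pmf_side = sum(math.exp(-lam) * lam**k / math.factorial(k) * cmath.exp(1j * c * k)
               for k in range(60))
assert abs(functional - pmf_side) < 1e-10
```
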
Definition 3 (See [7], Definition 1.1.1).
For a measurable function $f:{\mathbb{R}^{m}}\to \mathbb{R}$ the distribution function ${d_{f}}$ of f is defined by
\[ {d_{f}}(\alpha ):={\lambda ^{m}}\left(\{x\in {\mathbb{R}^{m}}:|f(x)|>\alpha \}\right),\hspace{1em}\alpha >0.\]
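The distribution function ${d_{f}}(\alpha )={\lambda ^{m}}(\{x:|f(x)|>\alpha \})$ and the layer-cake identity ${\textstyle\int _{{\mathbb{R}^{m}}}}|f|{\mathds{1}_{|f|>\beta }}\hspace{0.1667em}d{\lambda ^{m}}={\textstyle\int _{\beta }^{\infty }}{d_{f}}(\alpha )\hspace{0.1667em}d\alpha +\beta {d_{f}}(\beta )$, which is used repeatedly in the proofs below, can be checked numerically; the following Python sketch uses the illustrative function $f(x)={e^{-|x|}}$ on $\mathbb{R}$ (not from the paper):

```python
import numpy as np

# grid approximation of f(x) = exp(-|x|) on R
xs = np.linspace(-15.0, 15.0, 300_001)
dx = xs[1] - xs[0]
f = np.exp(-np.abs(xs))

def d_f(alpha):
    # grid approximation of the Lebesgue measure of {x : |f(x)| > alpha}
    return dx * np.count_nonzero(np.abs(f) > alpha)

beta = 0.3
lhs = dx * np.sum(np.abs(f) * (np.abs(f) > beta))      # int |f| 1_{|f|>beta}
alphas = np.linspace(beta, 1.0, 1001)                  # d_f vanishes for alpha >= max|f| = 1
vals = np.array([d_f(a) for a in alphas])
rhs = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(alphas)) + beta * d_f(beta)
assert abs(lhs - rhs) < 1e-2
assert abs(lhs - 2.0 * (1.0 - beta)) < 1e-3            # exact value 2*(1 - beta) for this f
```
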
With the aid of the distribution function we can now obtain a sufficient condition for the existence of the generalized random process s defined by $s(\varphi )=\langle \dot{L},G(\varphi )\rangle $, where $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ is a suitable kernel. In order to do so, we use the results from [12] and [5] regarding integrability conditions for Lévy white noises. In contrast to [2], where the existence of the stationary generalized random process $s(\varphi )=\langle \dot{L},G\ast \varphi \rangle $ was obtained, this more general kernel $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ allows us to model different kinds of stationarity assumptions. Furthermore, this will be crucial in Section 3 for proving the existence of generalized processes as solutions to stochastic partial differential equations as in (1).
Theorem 1.
Let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{m}}$ with characteristic triplet $(a,\gamma ,\nu )$ and $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function. Define for every $x\in {\mathbb{R}^{m}}$ and $R>0$
\[\begin{aligned}{}{G_{R}}(x)& :=\underset{{B_{R}}(0)}{\int }|G(x,y)|{\lambda ^{d}}(dy)\in [0,\infty ]\end{aligned}\]
and
\[\begin{aligned}{}{h_{R}}(x)& :=x{\underset{0}{\overset{1/x}{\int }}}{d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\textit{for}\hspace{2.5pt}x>0.\end{aligned}\]
Assume that ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})\cap {L^{2}}({\mathbb{R}^{m}})$ and

(3)
\[ \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}{h_{R}}(|r|)\nu (dr)<\infty \]

for every $R>0$. Then for $\big(G(\varphi )\big)(x):=\underset{{\mathbb{R}^{d}}}{\textstyle\int }G(x,y)\varphi (y){\lambda ^{d}}(dy)$ we have that
\[ s(\varphi ):=\langle \dot{L},G(\varphi )\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\]
defines a generalized random process.
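The quantities ${G_{R}}$, ${d_{{G_{R}}}}$ and ${h_{R}}$ can be made concrete numerically. The Python sketch below assumes the illustrative choice ${G_{R}}(x)={e^{-|x|}}$ in dimension $m=1$ (not from the paper), for which ${d_{{G_{R}}}}(\alpha )=2\ln (1/\alpha )$ for $\alpha <1$ and ${h_{R}}(x)=2(1+\ln x)$ for $x\ge 1$, and also checks the elementary bound ${h_{R}}(x)\ge {d_{{G_{R}}}}(1/x)$ used in the proof:

```python
import math

def d_GR(alpha):
    # distribution function of G_R(x) = exp(-|x|) on R: 2*log(1/alpha) for alpha < 1
    return 2.0 * math.log(1.0 / alpha) if alpha < 1.0 else 0.0

def h_R(x, n=200_000):
    # h_R(x) = x * int_0^{1/x} d_GR(alpha) d_alpha, approximated by the midpoint rule
    upper = 1.0 / x
    step = upper / n
    s = sum(d_GR((k + 0.5) * step) for k in range(n)) * step
    return x * s

x = 3.0
closed_form = 2.0 * (1.0 + math.log(x))  # exact value of h_R(x) for this G_R and x >= 1
assert abs(h_R(x) - closed_form) < 1e-3
assert h_R(x) >= d_GR(1.0 / x)           # since d_GR is nonincreasing
```
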
Proof.
The proof is quite similar to that of [2], Theorem 3.4. For the sake of completeness we give a detailed proof. We need to show that $G(\varphi )\in D(\dot{L})$ and $\langle \dot{L},G({\varphi _{n}})\rangle \to \langle \dot{L},G(\varphi )\rangle $ as $n\to \infty $ in probability for a sequence ${({\varphi _{n}})_{n\in \mathbb{N}}}$ converging to φ in $\mathcal{D}({\mathbb{R}^{d}})$. As $\langle \dot{L},G(\cdot )\rangle $ is linear, this is equivalent to checking that $\langle \dot{L},G({\varphi _{n}}-\varphi )\rangle \to 0$ as $n\to \infty $ in probability (see [5], Theorem 3.6). Now given [12], Theorem 2.7, we have to show

(4)
\[\begin{aligned}{}& \underset{{\mathbb{R}^{m}}}{\int }\big|\gamma \big(G({\varphi _{n}})\big)(x)+\underset{\mathbb{R}}{\int }r\big(G({\varphi _{n}})\big)(x)\left({\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}-{\mathds{1}_{|r|\le 1}}\right)\nu (dr)\big|{\lambda ^{m}}(dx)\to 0,\end{aligned}\]

(5)
\[ \underset{{\mathbb{R}^{m}}}{\int }\underset{\mathbb{R}}{\int }\min \big(1,\big|r\big(G({\varphi _{n}})\big)(x){\big|^{2}}\big)\nu (dr){\lambda ^{m}}(dx)\to 0\]

and

(6)
\[ a\underset{{\mathbb{R}^{m}}}{\int }{\big|\big(G({\varphi _{n}})\big)(x)\big|^{2}}{\lambda ^{m}}(dx)\to 0\]

as $n\to \infty $ if ${\varphi _{n}}\to 0$ for $n\to \infty $ in $\mathcal{D}({\mathbb{R}^{d}})$.

In the following we give a pointwise upper bound for $G({\varphi _{n}})$. Therefore let $R>0$ be such that supp$({\varphi _{n}})\subset {B_{r}}(0)$ for some $r<R$. Then for every $x\in {\mathbb{R}^{m}}$

(7)
\[\begin{aligned}{}\big|\big(G({\varphi _{n}})\big)(x)\big|& \le \underset{{\mathbb{R}^{d}}}{\int }\big|G(x,y){\varphi _{n}}(y)\big|{\lambda ^{d}}(dy)\\ {} & =\underset{{B_{R}}(0)}{\int }|G(x,y){\varphi _{n}}(y)|{\lambda ^{d}}(dy)\le {G_{R}}(x)\| {\varphi _{n}}{\| _{\infty }}\end{aligned}\]

and we obtain for $\alpha >0$

(8)
\[\begin{aligned}{}{d_{G({\varphi _{n}})}}(\alpha )=& {\lambda ^{m}}\left(\{x\in {\mathbb{R}^{m}}:|(G({\varphi _{n}}))(x)|>\alpha \}\right)\\ {} \le & {\lambda ^{m}}\left(\big\{x\in {\mathbb{R}^{m}}:|{G_{R}}(x)|>\frac{\alpha }{\| {\varphi _{n}}{\| _{\infty }}}\big\}\right)={d_{{G_{R}}}}\left(\frac{\alpha }{\| {\varphi _{n}}{\| _{\infty }}}\right).\end{aligned}\]

Since ${G_{R}}\in {L^{2}}({\mathbb{R}^{m}})$ we have

(9)
\[\begin{aligned}{}\underset{{\mathbb{R}^{m}}}{\int }|\big(G({\varphi _{n}})\big)(x)|{\mathds{1}_{|(G({\varphi _{n}}))(x)|>\frac{1}{|r|}}}{\lambda ^{m}}(dx)& \le \underset{{\mathbb{R}^{m}}}{\int }|\big(G({\varphi _{n}})\big)(x){|^{2}}|r|{\lambda ^{m}}(dx)\\ {} & \le \| {\varphi _{n}}{\| _{\infty }^{2}}\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}|r|.\end{aligned}\]

Now we show (4). Since ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})$, we have

(10)
\[ \underset{{\mathbb{R}^{m}}}{\int }\big|\gamma \big(G({\varphi _{n}})\big)(x)\big|{\lambda ^{m}}(dx)\le |\gamma |\hspace{0.1667em}\| {\varphi _{n}}{\| _{\infty }}\| {G_{R}}{\| _{{L^{1}}({\mathbb{R}^{m}})}}\to 0\]

for $n\to \infty $. We rewrite the second term in (4) in the way
\[\begin{aligned}{}& \underset{{\mathbb{R}^{m}}}{\int }\underset{\mathbb{R}}{\int }\big|r\big(G({\varphi _{n}})\big)(x)\big|\left({\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}-{\mathds{1}_{|r|\le 1}}\right)\nu (dr){\lambda ^{m}}(dx)\\ {} =& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|>1}}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\lambda ^{m}}(dx)\nu (dr)\\ {} & -\underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|>1}}{\lambda ^{m}}(dx)\nu (dr)\end{aligned}\]
and by [7], Exercise 1.1.10, p. 14 and (8) we observe
\[\begin{aligned}{}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\lambda ^{m}}(dx)& \le {\underset{0}{\overset{\frac{1}{|r|}}{\int }}}{d_{G({\varphi _{n}})}}(\alpha ){\lambda ^{1}}(d\alpha )\\ {} & \le {\underset{0}{\overset{\frac{1}{|r|}}{\int }}}{d_{{G_{R}}}}\left(\frac{\alpha }{\| {\varphi _{n}}{\| _{\infty }}}\right){\lambda ^{1}}(d\alpha ).\end{aligned}\]
We see that the right hand side converges to 0 for $n\to \infty $, and for n large enough we have
\[ {\underset{0}{\overset{\frac{1}{|r|}}{\int }}}{d_{{G_{R}}}}\left(\frac{\alpha }{\| {\varphi _{n}}{\| _{\infty }}}\right){\lambda ^{1}}(d\alpha )\le {\underset{0}{\overset{\frac{1}{|r|}}{\int }}}{d_{{G_{R}}}}\left(\alpha \right){\lambda ^{1}}(d\alpha )=\frac{1}{|r|}{h_{R}}(|r|).\]
Lebesgue’s dominated convergence theorem, using (3), implies
\[ \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|>1}}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\lambda ^{m}}(dx)\nu (dr)\to 0\]
for $n\to \infty $. We observe from (9) for the remaining term that
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|>1}}{\lambda ^{m}}(dx)\nu (dr)\\ {} \le & \| {\varphi _{n}}{\| _{\infty }^{2}}\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}\underset{\mathbb{R}}{\int }|r{|^{2}}{\mathds{1}_{|r|\le 1}}\nu (dr)\end{aligned}\]
and by Lebesgue’s dominated convergence theorem (since ${\textstyle\int _{|r|\le 1}}{r^{2}}\nu (dr)<\infty $)
\[ \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{m}}}{\int }\big|\big(G({\varphi _{n}})\big)(x)\big|{\mathds{1}_{|r(G({\varphi _{n}}))(x)|>1}}{\lambda ^{m}}(dx)\nu (dr)\to 0\]
for $n\to \infty $. This gives (4). In order to show (5) we observe
\[\begin{aligned}{}\min \big(1,\big|r\big(G({\varphi _{n}})\big)(x){\big|^{2}}\big)\le & {\mathds{1}_{|rG({\varphi _{n}})(x)|>1}}{\mathds{1}_{|r|>1}}+|rG({\varphi _{n}})(x)|{\mathds{1}_{|rG({\varphi _{n}})(x)|>1}}{\mathds{1}_{|r|\le 1}}\\ {} & +{\big|rG({\varphi _{n}})(x)\big|^{2}}{\mathds{1}_{|rG({\varphi _{n}})(x)|\le 1}}{\mathds{1}_{|r|\le 1}}\\ {} & +|rG({\varphi _{n}})(x)|{\mathds{1}_{|rG({\varphi _{n}})(x)|\le 1}}{\mathds{1}_{|r|>1}}.\end{aligned}\]
From the previous calculations we conclude that the second and fourth term (when integrated with respect to $\nu (dr){\lambda ^{m}}(dx)$) converge to 0 for $n\to \infty $ and for the first term we note that
\[ \underset{{\mathbb{R}^{m}}}{\int }{\mathds{1}_{|rG({\varphi _{n}})(x)|>1}}{\lambda ^{m}}(dx)={d_{G({\varphi _{n}})}}\left(\frac{1}{|r|}\right)\le {d_{{G_{R}}}}\left(\frac{1}{|r|\hspace{0.1667em}\| {\varphi _{n}}{\| _{\infty }}}\right)\]
and by Lebesgue’s dominated convergence theorem we conclude that
\[ \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}{d_{{G_{R}}}}\left(\frac{1}{|r|\hspace{0.1667em}\| {\varphi _{n}}{\| _{\infty }}}\right)\nu (dr)\to 0\]
for $n\to \infty $, as ${h_{R}}(|r|)\ge {d_{{G_{R}}}}(1/|r|)$. For the third term we directly see
\[\begin{aligned}{}\underset{\mathbb{R}}{\int }& \underset{{\mathbb{R}^{m}}}{\int }{\big|rG({\varphi _{n}})(x)\big|^{2}}{\mathds{1}_{|rG({\varphi _{n}})(x)|\le 1}}{\mathds{1}_{|r|\le 1}}{\lambda ^{m}}(dx)\nu (dr)\\ {} & \le \| G({\varphi _{n}})(\cdot ){\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|\le 1}}|r{|^{2}}\nu (dr)\\ {} & \le \| {\varphi _{n}}{\| _{\infty }^{2}}\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|\le 1}}|r{|^{2}}\nu (dr)\to 0\end{aligned}\]
for $n\to \infty $. This gives (5), and (6) follows from (9). Hence $G({\varphi _{n}})\to G(\varphi )$ in $D(\dot{L})$ as $n\to \infty $. □

In Theorem 1 we assumed that ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})\cap {L^{2}}({\mathbb{R}^{m}})$. In the following proposition we show that if the Lévy white noise has no Gaussian part and ${\textstyle\int _{\mathbb{R}}}|r{|^{\beta }}{\mathds{1}_{|r|\le 1}}\nu (dr)<\infty $ for some $\beta \in (1,2)$, then we can instead assume ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})\cap {L^{\beta }}({\mathbb{R}^{m}})$.
Proposition 1.
Let $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function and for $R>0$ let ${G_{R}}$ and ${h_{R}}$ be defined as in Theorem 1. Furthermore, let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{m}}$ with characteristic triplet $(0,\gamma ,\nu )$ such that (3) holds. If further ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})\cap {L^{\beta }}({\mathbb{R}^{m}})$ for some $\beta \in (1,2)$ and
\[ \underset{\mathbb{R}}{\int }|r{|^{\beta }}{\mathds{1}_{|r|\le 1}}\nu (dr)<\infty ,\]
then
\[ s(\varphi ):=\langle \dot{L},G(\varphi )\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}})\]
defines a generalized random process, where $G(\varphi )$ is defined as in Theorem 1.
Proof.
The proof follows along the steps of the proof of Theorem 1, hence we only mention the needed modifications. As ${G_{R}}\in {L^{1}}({\mathbb{R}^{m}})$ we only have to consider the terms which were estimated with $\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}}$ in the proof of Theorem 1. These are

(11)
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{m}}}{\int }|\big(G({\varphi _{n}})\big)(x)|{\mathds{1}_{|(G({\varphi _{n}}))(x)|>\frac{1}{|r|}}}{\lambda ^{m}}(dx)\nu (dr)\end{aligned}\]

and

(12)
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }\underset{{\mathbb{R}^{m}}}{\int }|r{|^{2}}|\big(G({\varphi _{n}})\big)(x){|^{2}}{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\mathds{1}_{|r|\le 1}}{\lambda ^{m}}(dx)\nu (dr),\end{aligned}\]

and we have to show that they converge to 0 as ${\varphi _{n}}\to 0$ in $\mathcal{D}({\mathbb{R}^{d}})$. We have
\[\begin{aligned}{}\underset{{\mathbb{R}^{m}}}{\int }|\big(G({\varphi _{n}})\big)(x)|{\mathds{1}_{|(G({\varphi _{n}}))(x)|>\frac{1}{|r|}}}{\lambda ^{m}}(dx)& \le \| (G({\varphi _{n}})){\| _{{L^{\beta }}({\mathbb{R}^{m}})}^{\beta }}|r{|^{\beta -1}}\\ {} & \le \| {\varphi _{n}}{\| _{\infty }^{\beta }}\| {G_{R}}{\| _{{L^{\beta }}({\mathbb{R}^{m}})}^{\beta }}|r{|^{\beta -1}}.\end{aligned}\]
So it follows that the term (11) converges to 0 as ${\varphi _{n}}\to 0$ in $\mathcal{D}({\mathbb{R}^{d}})$. Furthermore,
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }\underset{{\mathbb{R}^{m}}}{\int }|r{|^{2}}|\big(G({\varphi _{n}})\big)(x){|^{2}}{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\mathds{1}_{|r|\le 1}}{\lambda ^{m}}(dx)\nu (dr)\\ {} =& \underset{\mathbb{R}}{\int }\underset{{\mathbb{R}^{m}}}{\int }|r{|^{\beta }}|\big(G({\varphi _{n}})\big)(x){|^{\beta }}|r{|^{2-\beta }}|\big(G({\varphi _{n}})\big)(x){|^{2-\beta }}{\mathds{1}_{|r(G({\varphi _{n}}))(x)|\le 1}}{\mathds{1}_{|r|\le 1}}{\lambda ^{m}}(dx)\nu (dr)\\ {} \le & \underset{\mathbb{R}}{\int }|r{|^{\beta }}{\mathds{1}_{|r|\le 1}}\nu (dr)\| {\varphi _{n}}{\| _{\infty }^{\beta }}\| {G_{R}}{\| _{{L^{\beta }}({\mathbb{R}^{m}})}^{\beta }}.\end{aligned}\]
This shows that the term (12) converges to 0 as ${\varphi _{n}}\to 0$ in $\mathcal{D}({\mathbb{R}^{d}})$, and the rest of the proof follows from similar arguments as in the proof of Theorem 1. □

When ${G_{R}}\notin {L^{1}}({\mathbb{R}^{m}})$ we can still obtain a generalized random process s under some extra conditions. Similar to Theorem 3.5 in [2], we obtain the following result in a more general setting.
Theorem 2.
Let $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function such that ${G_{R}}\in {L^{2}}({\mathbb{R}^{m}})$, where ${G_{R}}$ and $G(\varphi )$, $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, are defined as in Theorem 1. If the first moment of the Lévy white noise $\dot{L}$ on ${\mathbb{R}^{m}}$ with characteristic triplet $(a,\gamma ,\nu )$ vanishes, i.e. $\mathbb{E}|\langle \dot{L},\varphi \rangle |<\infty $ and $\mathbb{E}\langle \dot{L},\varphi \rangle =0$ for every $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, then $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ defined by
\[ s(\varphi ):=\langle \dot{L},G(\varphi )\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\]
is a generalized random process if

(13)
\[ \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r|{\underset{\frac{1}{|r|}}{\overset{\infty }{\int }}}{d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)<\infty \]

and

(14)
\[ \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r{|^{2}}{\underset{0}{\overset{\frac{1}{|r|}}{\int }}}\alpha {d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)<\infty \]

for all $R>0$.

Proof.
This proof follows from the same arguments as in the proof of [2], Theorem 3.5, where we use $G({\varphi _{n}})$ instead of $G\ast {\varphi _{n}}$ and $\| {\varphi _{n}}{\| _{\infty }}\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}}<\infty $ instead of $\| G\ast {\varphi _{n}}{\| _{{L^{2}}({\mathbb{R}^{d}})}}<\infty $. For the sake of completeness we give a detailed proof. By [13], Example 25.12, p. 163 we conclude that we need to show, similar to Theorem 1, that (5), (6) and

(15)
\[ \underset{{\mathbb{R}^{m}}}{\int }\bigg|\underset{\mathbb{R}}{\int }r\big(G({\varphi _{n}})\big)(x){\mathds{1}_{|r(G({\varphi _{n}}))(x)|>1}}\nu (dr)\bigg|{\lambda ^{m}}(dx)\to 0\]

are satisfied for all ${({\varphi _{n}})_{n\in \mathbb{N}}}$ converging to 0 in $\mathcal{D}({\mathbb{R}^{d}})$. Let ${({\varphi _{n}})_{n\in \mathbb{N}}}$ be a sequence converging to 0 in $\mathcal{D}({\mathbb{R}^{d}})$ such that supp ${\varphi _{n}}\subset {B_{R}}(0)$ for some $R>0$ and all $n\in \mathbb{N}$. Considering that ${\textstyle\int _{{\mathbb{R}^{m}}}}|f(x)|{\mathds{1}_{|f(x)|>\beta }}{\lambda ^{m}}(dx)={\textstyle\int _{\beta }^{\infty }}{d_{f}}(\alpha ){\lambda ^{1}}(d\alpha )+\beta {d_{f}}(\beta )$ for $\beta >0$ and measurable f (cf. [7], Exercise 1.1.10, p. 14), we estimate (15) together with (9) as follows:
\[\begin{aligned}{}& \underset{{\mathbb{R}^{m}}}{\int }\bigg|\underset{\mathbb{R}}{\int }r\big(G({\varphi _{n}})\big)(x){\mathds{1}_{|r(G({\varphi _{n}}))(x)|>1}}\nu (dr)\bigg|{\lambda ^{m}}(dx)\\ {} \le & \| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}\| {\varphi _{n}}{\| _{\infty }^{2}}\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|\le 1}}|r{|^{2}}\nu (dr)\\ {} & +\underset{|r|>1}{\int }\left(|r|{\underset{\frac{1}{|r|}}{\overset{\infty }{\int }}}{d_{G({\varphi _{n}})}}\left(\alpha \right){\lambda ^{1}}(d\alpha )+{d_{G({\varphi _{n}})}}\left(\frac{1}{|r|}\right)\right)\nu (dr)\to 0,\end{aligned}\]
for $n\to \infty $ by Lebesgue’s dominated convergence theorem. Hence by (8)
\[\begin{aligned}{}& \underset{|r|>1}{\int }\left(|r|{\underset{\frac{1}{|r|}}{\overset{\infty }{\int }}}{d_{G({\varphi _{n}})}}\left(\alpha \right){\lambda ^{1}}(d\alpha )+{d_{G({\varphi _{n}})}}\left(\frac{1}{|r|}\right)\right)\nu (dr)\\ {} \le & \underset{|r|>1}{\int }|r|{\underset{\frac{1}{|r|}}{\overset{\infty }{\int }}}{d_{{G_{R}}}}\left(\frac{\alpha }{\| {\varphi _{n}}{\| _{\infty }}}\right){\lambda ^{1}}(d\alpha )\nu (dr)+\underset{|r|>1}{\int }{d_{{G_{R}}}}\left(\frac{1}{|r|\| {\varphi _{n}}{\| _{\infty }}}\right)\nu (dr)\\ {} \le & \underset{|r|>1}{\int }|r|{\underset{\frac{1}{|r|}}{\overset{\infty }{\int }}}{d_{{G_{R}}}}\left(\alpha \right){\lambda ^{1}}(d\alpha )\nu (dr)+\underset{|r|>1}{\int }{d_{{G_{R}}}}\left(\frac{1}{|r|}\right)\nu (dr)\end{aligned}\]
for large n, and the latter is finite by (13), (14) and
\[ {\underset{0}{\overset{x}{\int }}}\alpha {d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\ge {d_{{G_{R}}}}(x){\underset{0}{\overset{x}{\int }}}\alpha {\lambda ^{1}}(d\alpha )=\frac{1}{2}{d_{{G_{R}}}}(x){x^{2}}\hspace{1em}\hspace{2.5pt}\text{for every}\hspace{2.5pt}x>0.\]
This gives (15). In order to prove (5) we follow the same steps as in the proof of Theorem 1 and observe that we only have to show
\[ \underset{{\mathbb{R}^{m}}}{\int }\underset{\mathbb{R}}{\int }|\big(G({\varphi _{n}})\big)(x){|^{2}}|r{|^{2}}{\mathds{1}_{|rG({\varphi _{n}})(x)|\le 1}}{\mathds{1}_{|r|>1}}\nu (dr){\lambda ^{m}}(dx)\to 0\]
for $n\to \infty $. We see by [7], Exercise 1.1.10, p.14 and similar arguments as above that
\[\begin{aligned}{}& \underset{{\mathbb{R}^{m}}}{\int }\underset{\mathbb{R}}{\int }|\big(G({\varphi _{n}})\big)(x){|^{2}}|r{|^{2}}{\mathds{1}_{|rG({\varphi _{n}})(x)|\le 1}}{\mathds{1}_{|r|>1}}\nu (dr){\lambda ^{m}}(dx)\\ {} \le & 2\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r{|^{2}}{\underset{0}{\overset{\frac{1}{|r|}}{\int }}}\alpha {d_{G({\varphi _{n}})}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)\to 0\end{aligned}\]
for $n\to \infty $. Hence, we conclude that s defines a generalized random process. □

Example 1.
Let $d\ge 1$, $q\in [1,2)$ and $\frac{d}{2}<p<\frac{d}{q}$. We consider $G:{\mathbb{R}^{d}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ such that
\[ |G(x,y)|\le \| x-y{\| ^{-p}}|w(y)|\]
for all $x,y\in {\mathbb{R}^{d}}$, where $w\in {L_{loc}^{{q^{\ast }}}}({\mathbb{R}^{d}})$ with ${q^{\ast }}=\frac{q}{q-1}$. From the Hölder inequality, for $R>0$ and $x\in {\mathbb{R}^{d}}$, we conclude

(16)
\[\begin{aligned}{}{G_{R}}(x):& =\underset{{B_{R}}(0)}{\int }|G(x,y)|{\lambda ^{d}}(dy)\\ {} & \le {\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{B_{R}}(0)}{\int }\| x-y{\| ^{-qp}}{\lambda ^{d}}(dy)\right)^{1/q}}{\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{B_{R}}(0)}{\int }|w(y){|^{{q^{\ast }}}}{\lambda ^{d}}(dy)\right)^{1/{q^{\ast }}}}\\ {} & \le C(w,q,p,d,R)\min \{1,\| x{\| ^{-p}}\}.\end{aligned}\]

We obtain that
\[ {d_{{G_{R}}}}(\alpha )\le C(1+{\alpha ^{-\frac{d}{p}}}),\hspace{1em}\alpha >0.\]
Furthermore, we observe for a Lévy white noise $\dot{L}$ with characteristic triplet $(a,\gamma ,\nu )$ that
\[\begin{aligned}{}& {\underset{0}{\overset{\frac{1}{|r|}}{\int }}}\alpha {d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\le C{\underset{0}{\overset{\frac{1}{|r|}}{\int }}}\alpha (1+{\alpha ^{-\frac{d}{p}}}){\lambda ^{1}}(d\alpha )=\tilde{C}(|r{|^{-2}}+|r{|^{\frac{d}{p}-2}})\end{aligned}\]
and
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r{|^{2}}{\underset{0}{\overset{\frac{1}{|r|}}{\int }}}\alpha {d_{{G_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)\le \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}\tilde{C}(1+|r{|^{\frac{d}{p}}})\nu (dr),\end{aligned}\]
where $\tilde{C}>0$. If the Lévy white noise $\dot{L}$ has vanishing first moment, then (13) is satisfied by [13], Example 25.12. So if additionally $\dot{L}$ satisfies
\[ \underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r{|^{\frac{d}{p}}}\nu (dr)<\infty ,\]
then it follows from Theorem 2 that
\[ s(\varphi ):=\langle \dot{L},G(\varphi )\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\]
is a well-defined generalized random process.
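The decay estimate (16) is what drives this example. As a quick numerical illustration in $d=1$ (the exponent $p=0.75$ and the majorant $\min \{1,|x{|^{-p}}\}$ are illustrative choices satisfying $\frac{d}{2}<p<\frac{d}{q}$ with $q=1$, not data from the paper), the following Python sketch compares the distribution function of the majorant with its closed form $2{\alpha ^{-1/p}}$ for $\alpha <1$, which is in particular dominated by $C(1+{\alpha ^{-d/p}})$:

```python
import numpy as np

p, alpha = 0.75, 0.5
xs = np.linspace(-50.0, 50.0, 2_000_001)
dx = xs[1] - xs[0]
with np.errstate(divide="ignore"):
    G = np.minimum(1.0, np.abs(xs) ** (-p))      # majorant min(1, |x|^{-p}); value 1 at x = 0

d_num = dx * np.count_nonzero(G > alpha)         # grid approximation of d_{G_R}(alpha)
d_exact = 2.0 * alpha ** (-1.0 / p)              # exact value for alpha < 1 in d = 1
assert abs(d_num - d_exact) < 1e-2
assert d_num <= 2.0 * (1.0 + alpha ** (-1.0 / p))  # the bound C*(1 + alpha^{-d/p}) with C = 2
```
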
2.1 Moment properties
Next we show that if the Lévy white noise $\dot{L}$ has finite β-moment for some $\beta >0$, then so does the generalized random process $s(\varphi )=\langle \dot{L},G(\varphi )\rangle $, $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$.
Theorem 3.
Let $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function different from 0 and $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{m}}$ with characteristic triplet $(a,\gamma ,\nu )$, and assume that $\langle s,\varphi \rangle :=\langle \dot{L},G(\varphi )\rangle $, $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$, is a well-defined generalized random process. Let $\beta >0$.
-
i) If $0<\beta <2$ assume that ${G_{R}}\in {L^{\beta }}({\mathbb{R}^{m}})\cap {L^{2}}({\mathbb{R}^{m}})$ with ${G_{R}}$ as defined in Theorem 1. If $\dot{L}$ has finite β-moment, then so has s. If $\beta \ge 2$ it is sufficient to assume that ${G_{R}}\in {L^{\beta }}({\mathbb{R}^{m}})$.
-
ii) If s has finite β-moment, then $\dot{L}$ also has finite β-moment.
Proof.
From [12], Theorem 2.7 we know that the Lévy measure of the random variable $\langle s,\varphi \rangle $ is given by
\[ {\nu _{s(\varphi )}}(B)=\underset{{\mathbb{R}^{m}}}{\int }\underset{\mathbb{R}}{\int }{\mathds{1}_{B\setminus \{0\}}}\left(rG(\varphi )(x)\right)\nu (dr){\lambda ^{m}}(dx).\]
Then $\langle s,\varphi \rangle $ has finite β-moment if and only if $\underset{|z|>1}{\textstyle\int }|z{|^{\beta }}{\nu _{s(\varphi )}}(dz)<\infty $.
i) Let $\dot{L}$ have finite β-moment and assume at first that $0<\beta <2$. We calculate from (7) that
\[\begin{aligned}{}& \underset{|z|>1}{\int }|z{|^{\beta }}{\nu _{s(\varphi )}}(dz)\\ {} & \hspace{1em}=\underset{\mathbb{R}}{\int }|r{|^{\beta }}\underset{|G(\varphi )(x)|>\frac{1}{|r|}}{\int }|G(\varphi )(x){|^{\beta }}{\lambda ^{m}}(dx)\nu (dr)\\ {} & \hspace{1em}\le \underset{|r|\le 1}{\int }|r{|^{2}}\underset{|G(\varphi )(x)|>\frac{1}{|r|}}{\int }|G(\varphi )(x){|^{2}}{\lambda ^{m}}(dx)\nu (dr)\\ {} & \hspace{1em}\hspace{1em}+\underset{|r|>1}{\int }|r{|^{\beta }}\underset{{\mathbb{R}^{m}}}{\int }|{G_{R}}(x){|^{\beta }}\| \varphi {\| _{\infty }^{\beta }}{\lambda ^{m}}(dx)\nu (dr)\\ {} & \hspace{1em}\le \underset{|r|\le 1}{\int }|r{|^{2}}\nu (dr)\| {G_{R}}{\| _{{L^{2}}({\mathbb{R}^{m}})}^{2}}\| \varphi {\| _{\infty }^{2}}+\| {G_{R}}{\| _{{L^{\beta }}({\mathbb{R}^{m}})}^{\beta }}\underset{|r|>1}{\int }|r{|^{\beta }}\nu (dr)\| \varphi {\| _{\infty }^{\beta }}\\ {} & \hspace{1em}<\infty ,\end{aligned}\]
where $R>0$ is such that $\text{supp}\hspace{2.5pt}\varphi \subset {B_{R}}(0)$.
If $\beta \ge 2$, we obtain by similar arguments as above that
which is indeed finite.
ii) Assume that s has finite β-moment and that G is different from 0. Then there exists a function $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$ such that
hence there exists an ${r_{0}}>1$ with
We conclude
\[\begin{aligned}{}\infty >\underset{|z|>1}{\int }|z{|^{\beta }}{\nu _{s(\varphi )}}(dz)=& \underset{\mathbb{R}}{\int }|r{|^{\beta }}\underset{|G(\varphi )(x)|>\frac{1}{|r|}}{\int }|G(\varphi )(x){|^{\beta }}{\lambda ^{m}}(dx)\nu (dr)\\ {} \ge & \underset{|r|>{r_{0}}}{\int }|r{|^{\beta }}\underset{|G(\varphi )(x)|>\frac{1}{|r|}}{\int }|G(\varphi )(x){|^{\beta }}{\lambda ^{m}}(dx)\nu (dr)\\ {} \ge & \underset{|r|>{r_{0}}}{\int }|r{|^{\beta }}\nu (dr)\underset{|G(\varphi )(x)|>\frac{1}{|{r_{0}}|}}{\int }|G(\varphi )(x){|^{\beta }}{\lambda ^{m}}(dx),\end{aligned}\]
hence $\underset{|r|>{r_{0}}}{\textstyle\int }|r{|^{\beta }}\nu (dr)<\infty $, so that $\dot{L}$ has finite β-moment. □
3 Second order elliptic partial differential equations driven by Lévy white noise
3.1 Second order elliptic partial differential equations in divergence form driven by Lévy white noise
In this section we discuss elliptic partial differential operators of second order with variable coefficients in divergence form, i.e. partial differential operators $p(x,D)$ of the form
(17)
\[ p(x,D)u=-{\sum \limits_{i,j=1}^{d}}{\partial _{i}}({a_{ij}}(x){\partial _{j}}u)=-\text{div}(A(x)\nabla u),\]
where $A(x)={({a_{ij}}(x))_{i,j=1}^{d}}\in {C^{\infty }}({\mathbb{R}^{d}},{\mathbb{R}^{d\times d}})$ is a uniformly elliptic matrix, i.e. there exists $C>0$ such that
\[ {C^{-1}}\| \xi {\| ^{2}}\le {\xi ^{T}}A(x)\xi \le C\| \xi {\| ^{2}}\hspace{2.5pt}\text{for all}\hspace{2.5pt}\xi \in {\mathbb{R}^{d}}\hspace{2.5pt}\text{and}\hspace{2.5pt}x\in {\mathbb{R}^{d}}.\]
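For concrete coefficients the uniform ellipticity condition can be verified numerically; a minimal sketch (plain Python, with an illustrative $2\times 2$ periodic matrix $A(x)$ that is not taken from the text) samples the quadratic form over unit vectors and estimates the ellipticity constant C:

```python
import math

# Sketch: estimate the ellipticity constant C with
# C^{-1} ||xi||^2 <= xi^T A(x) xi <= C ||xi||^2 for a sample periodic A(x);
# the matrix below is an illustrative choice, not from the text.
def A(x):
    a = 2.0 + math.sin(2 * math.pi * x[0])   # diagonal entries in [1, 3]
    b = 2.0 + math.cos(2 * math.pi * x[1])
    return [[a, 0.5], [0.5, b]]

def quad_form(M, xi):
    return sum(xi[i] * M[i][j] * xi[j] for i in range(2) for j in range(2))

ratios = []
for k in range(2000):
    x = (math.sin(k) * 7.1, math.cos(k) * 3.3)   # deterministic sample points
    theta = 0.01 * k
    xi = (math.cos(theta), math.sin(theta))      # unit vector, ||xi|| = 1
    ratios.append(quad_form(A(x), xi))

C = max(max(ratios), 1.0 / min(ratios))
print("ellipticity constant C ~", C)
```

Since the sampled eigenvalues of this A lie in $[0.5,3.5]$, the estimate stays bounded, as the condition requires.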
Now let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{d}}$ with characteristic triplet $(a,\gamma ,\nu )$ and $p(x,D)$ be a partial differential operator (PDO) of the form (17). We say that a generalized stochastic process $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ is a generalized solution of the equation
if
\[ \langle s,p{(x,D)^{\ast }}\varphi \rangle =\langle \dot{L},\varphi \rangle \hspace{1em}\text{for all}\hspace{2.5pt}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\]
where $p{(x,D)^{\ast }}$ is the adjoint of $p(x,D)$, i.e. $p{(x,D)^{\ast }}\varphi =-\text{div}({A^{T}}(x)\nabla \varphi )$.
In the first theorem we derive sufficient conditions for the existence of such a solution in terms of the characteristic triplet $(a,\gamma ,\nu )$, which is a simple extension of the Laplacian case. Afterwards we discuss stationarity of these generalized processes, e.g., if the coefficients are y-periodic for some $y\in {\mathbb{R}^{d}}$, then s is y-periodically stationary. We assume for the entire section that the coefficients of $p(x,D)$ are in ${C^{\infty }}({\mathbb{R}^{d}})$.
Theorem 4.
Let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{d}}$ with characteristic triplet $(a,\gamma ,\nu )$ and vanishing first moment, and let $p(x,D)$ be a PDO of the form (17). The stochastic partial differential equation
(18)
\[ p(x,D)s=\dot{L}\]
has a generalized solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$, if $d\ge 5$ and
\[ \underset{|r|>1}{\int }|r{|^{\frac{d}{d-2}}}\nu (dr)<\infty .\]
Proof.
By [9], Chapter 10 there exists a locally integrable left inverse $E:{\mathbb{R}^{d}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ of the operator $p{(x,D)^{\ast }}$ such that for all $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$
\[ E(p{(\cdot ,D)^{\ast }}\varphi )(x):={\int _{{\mathbb{R}^{d}}}}E(x,y)p{(y,D)^{\ast }}\varphi (y)dy=\varphi (x)\hspace{2.5pt}\text{for all}\hspace{2.5pt}x\in {\mathbb{R}^{d}}.\]
Moreover, there exists $N\in \mathbb{N}$ such that
\[ {N^{-1}}\| x-y{\| ^{2-d}}\le E(x,y)\le N\| x-y{\| ^{2-d}}\hspace{2.5pt}\text{for all}\hspace{2.5pt}x\ne y.\]
We set
\[\begin{aligned}{}s:\mathcal{D}({\mathbb{R}^{d}})& \to {L^{0}}(\Omega ),\\ {} \langle s,\varphi \rangle & :=\langle \dot{L},E(\varphi )\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\end{aligned}\]
and from Example 1 with $w=1$, $p=d-2$ and $q=1$ (observe that $d\ge 5$) it follows that s defines a generalized process. Moreover, s is a solution of the equation (18), as
\[ \langle s,p{(x,D)^{\ast }}\varphi \rangle =\langle \dot{L},E(p{(x,D)^{\ast }}\varphi )\rangle =\langle \dot{L},\varphi \rangle \]
for every $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$. □
The solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ is clearly not unique. For example, let $p(x,D)=-\Delta $ and define
\[ \langle {s^{\prime }},\varphi \rangle :=\langle s,\varphi \rangle +\underset{{\mathbb{R}^{d}}}{\int }({x_{1}^{2}}-{x_{2}^{2}})\varphi (x){\lambda ^{d}}(dx),\]
where s is the solution constructed in Theorem 4 for the equation
(19)
\[ -\Delta s=\dot{L}.\]
Then, since ${x_{1}^{2}}-{x_{2}^{2}}$ is harmonic, it is easy to see that ${s^{\prime }}$ is also a solution of (19).
Remark 1.
We assumed that the coefficients of the partial differential operator $p(x,D)$ are infinitely differentiable, but this is not necessary. It would be sufficient if ${a_{ij}}\in {C^{1}}({\mathbb{R}^{d}})$ for all $i,j\in \{1,\dots ,d\}$.
Remark 2.
The above method can also be used to find solutions of SPDEs of the form
\[ (-\text{div}(A\nabla )+b\cdot \nabla +V)s=\dot{L}\]
under suitable assumptions on the functions A, b and V, as the fundamental solution E of the elliptic operator here can be bounded from above by a constant times $\| x-y{\| ^{2-d}}$ for all $x\ne y$. For a very general result, see [4]. Observe that in the most general case the fundamental solution solves the equation only in the weak sense. We will discuss in the next section what we mean by a weak solution.
As a next step we discuss stationarity properties, which depend heavily on the matrix ${({a_{ij}}(x))_{i,j=1}^{d}}$. For example, if all ${a_{ij}}:{\mathbb{R}^{d}}\to \mathbb{R}$ are constant, it is easily seen that $E(x,y)=E(x-y)$ for all $x\ne y$, and hence the solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ constructed in Theorem 4 is stationary.
Definition 4.
A generalized process s on $\mathcal{D}({\mathbb{R}^{d}})$ is called periodic with period $l\in {\mathbb{R}^{d}}$, if $s(\cdot +l)$ has the same law as s, and stationary if s is periodic for every period $l\in {\mathbb{R}^{d}}$. Here, $s(\cdot +l)$ is defined by
\[ \langle s(\cdot +l),\varphi \rangle :=\langle s,\varphi (\cdot -l)\rangle ,\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}).\]
Remark 3.
Let $G:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function which fulfills the assumptions of Theorem 1 with $m=d$. Assume that $G(x,y+l)=G(x+l,y)$ for all $x,y\in {\mathbb{R}^{d}}$ and for some $l\in {\mathbb{R}^{d}}$. Then it is easily seen that for $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$
\[ \left(G\varphi (\cdot -l)\right)(x)=\underset{{\mathbb{R}^{d}}}{\int }G(x,y)\varphi (y-l){\lambda ^{d}}(dy)=\left(G\varphi \right)(x+l),\]
hence the generalized process s defined in Theorem 1 satisfies
\[ \langle s(\cdot +l),\varphi \rangle =\langle s,\varphi (\cdot -l)\rangle =\langle \dot{L},G\varphi (\cdot -l)\rangle =\langle \dot{L},\left(G\varphi \right)(\cdot +l)\rangle \hspace{-0.1667em}=\hspace{-0.1667em}\langle \dot{L}(\cdot -l),G\varphi \rangle .\]
Since $\dot{L}\stackrel{d}{=}\dot{L}(\cdot -l)$ it follows that in this case the process s is periodic with period l. Observe that ${(s(\varphi (\cdot +ly)))_{y\in \mathbb{Z}}}$ is then a stationary process for all $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$. Therefore, these models seem to be useful in statistics to model periodic processes or random fields. In the case when $G(x,y+l)=G(x+l,y)$ for all $l,x,y\in {\mathbb{R}^{d}}$, we see that s will be stationary.
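The computation in Remark 3 can be illustrated numerically. In the sketch below (plain Python, $d=1$; the kernel $G(x,y)=g(x-y)q(x+y)$ with g of period 2l and the test function φ are hypothetical choices) the symmetry $G(x,y+l)=G(x+l,y)$ holds, and a quadrature confirms $\left(G\varphi (\cdot -l)\right)(x)=\left(G\varphi \right)(x+l)$:

```python
import math

# Check (d = 1, hypothetical kernel and test function) of the identity
# (G phi(.-l))(x) = (G phi)(x+l) from Remark 3; the kernel
# G(x,y) = g(x-y) q(x+y) with g of period 2l satisfies G(x, y+l) = G(x+l, y).
l = 1.5
g = lambda u: 2.0 + math.cos(math.pi * u / l)      # 2l-periodic factor
q = lambda v: math.exp(-v * v)                     # makes y -> G(x,y) integrable
G = lambda x, y: g(x - y) * q(x + y)
phi = lambda y: math.exp(-4.0 * (y - 0.3) ** 2)    # smooth test function

def G_transform(x, f, a=-12.0, b=12.0, n=20000):
    # midpoint rule for (G f)(x) = \int G(x,y) f(y) dy
    h = (b - a) / n
    return sum(G(x, a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

sym_err = abs(G(0.4, 0.7 + l) - G(0.4 + l, 0.7))   # kernel symmetry
shift_err = max(abs(G_transform(x, lambda y: phi(y - l)) - G_transform(x + l, phi))
                for x in [-1.0, 0.0, 0.8])
print(sym_err, shift_err)
```

Both errors are at quadrature level, which is exactly the change-of-variables argument $y\mapsto y+l$ in Remark 3.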
Proposition 2.
Let $p(x,D):\mathcal{D}({\mathbb{R}^{d}})\to C({\mathbb{R}^{d}})$ be an elliptic partial differential operator of the form (17), $d\ge 5$ and assume that the matrix-valued function $A:{\mathbb{R}^{d}}\to {\mathbb{R}^{d\times d}}$ is periodic with period $y\in {\mathbb{R}^{d}}$, i.e. $A(x+y)=A(x)$ for all $x\in {\mathbb{R}^{d}}$. Let $\dot{L}$ be a Lévy white noise that satisfies the assumption of Theorem 4. Then there exists a solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ of $p(x,D)s=\dot{L}$, which is periodically stationary with period y.
Proof.
It is enough to show that
\[ E(\varphi (\cdot +y))=E(\varphi )(\cdot +y)\hspace{1em}\text{for all}\hspace{2.5pt}\varphi \in \mathcal{D}({\mathbb{R}^{d}}),\]
where E is again the fundamental solution of the operator $p{(x,D)^{\ast }}$. The assertion then follows from the stationarity of the Lévy white noise $\dot{L}$. We see that
\[ p{(x,D)^{\ast }}\big(E(\varphi )(\cdot +y)\big)(x)=\varphi (x+y)\]
and
\[ p{(x,D)^{\ast }}\big(E(\varphi (\cdot +y))\big)(x)=\varphi (x+y),\]
so $E(\varphi )(\cdot +y):{\mathbb{R}^{d}}\to \mathbb{R}$ and $E(\varphi (\cdot +y)):{\mathbb{R}^{d}}\to \mathbb{R}$ solve the same elliptic equation, where we used that $A(x+y)=A(x)$ for all $x\in {\mathbb{R}^{d}}$. By construction,
(20)
\[\begin{aligned}{}\underset{|x|\to \infty }{\lim }|(E(\varphi )(x+y)-E(\varphi (\cdot +y))(x))|& =0\hspace{2.5pt}\text{and}\hspace{2.5pt}\\ {} p{(x,D)^{\ast }}(E(\varphi )(x+y)-E(\varphi (\cdot +y))(x))& =0\hspace{2.5pt}\text{for all}\hspace{2.5pt}x\in {\mathbb{R}^{d}},\end{aligned}\]
where (20) follows from (7) and (16). By the maximum principle for uniformly elliptic equations we obtain $E(\varphi )(x+y)-E(\varphi (\cdot +y))(x)=0$ for all $x\in {\mathbb{R}^{d}}$, hence $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ is periodically stationary. □
From this result we can construct a stationary process on a certain group as long as the coefficients of the partial differential operator satisfy some periodicity condition.
Corollary 1.
Let $(\mathcal{G},+)$ be a subgroup of $({\mathbb{R}^{d}},+)$, let $p(x,D):\mathcal{D}({\mathbb{R}^{d}})\to C({\mathbb{R}^{d}})$ be an elliptic partial differential operator of the form (17) and assume that the matrix-valued function $A:{\mathbb{R}^{d}}\to {\mathbb{R}^{d\times d}}$ is periodic with period y for every $y\in \mathcal{G}$. Let $\dot{L}$ be a Lévy white noise satisfying the assumption of Theorem 4 and s be the generalized solution of $p(x,D)s=\dot{L}$ constructed in Theorem 4. Then for every $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$ the process
\[ {({s_{\varphi }}(y))_{y\in \mathcal{G}}}:={(\langle s,\varphi (\cdot +y)\rangle )_{y\in \mathcal{G}}}\]
is a stationary process in $\mathcal{G}$.
3.2 The generalized and mild solutions of the electric Schrödinger equation driven by Lévy white noise
We saw in Remark 2 that we can find generalized solutions of stochastic partial differential equations of the form
(21)
\[ (-\text{div}(A\nabla )+V)s=\dot{L}\]
for suitable A and V, assuming that the dimension $d\ge 5$, the first moment of the Lévy white noise vanishes, and the moment condition
\[ \underset{|r|>1}{\int }|r{|^{\frac{d}{d-2}}}\nu (dr)<\infty \]
holds. In the case when V lies in a Reverse Hölder class these assumptions seem not to be necessary: we are able to find generalized and mild solutions in dimension 3 under much weaker conditions. At first we introduce the Reverse Hölder class $R{H_{p}}({\mathbb{R}^{d}})$; if V is in this class, the moment assumption reduces to a kind of logarithmic moment condition (depending on V), which is very similar to the case when V is a positive constant. We first define what is meant by a mild solution of (21).
We call $E:{\mathbb{R}^{d}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ a weak fundamental solution of the generalized electric Schrödinger operator
\[ p(x,D)=-\text{div}(A(x)\nabla )+V(x),\]
if $E(\varphi ):=\underset{{\mathbb{R}^{d}}}{\textstyle\int }E(x,y)\varphi (y){\lambda ^{d}}(dy)$ solves
\[ p(x,D)E(\varphi )=\varphi \]
in the weak sense for all $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$. We call $u(x):=\langle \dot{L},E(x,\cdot )\rangle $ the mild solution of (21), if $u(x)$ exists for all $x\in {\mathbb{R}^{d}}$, i.e. if $E(x,\cdot )\in D(\dot{L})$ for all $x\in {\mathbb{R}^{d}}$. Theorem 5 i) will give a sufficient condition for that to hold.
In the following we define the maximum function m and Agmon distance γ of the potential V to apply the estimates of the fundamental solution of the generalized electric Schrödinger operator shown in [14] and [10].
Definition 5.
Let $p\ge 1$. A function $\omega \in {L_{loc}^{p}}({\mathbb{R}^{d}})$ with $\omega >0$ a.e. belongs to the Reverse Hölder class $R{H_{p}}({\mathbb{R}^{d}})$ if there exists a constant C such that for any ball $B\subset {\mathbb{R}^{d}}$,
\[ {\left(\frac{1}{{\lambda ^{d}}(B)}\underset{B}{\int }\omega {(x)^{p}}{\lambda ^{d}}(dx)\right)^{1/p}}\le \frac{C}{{\lambda ^{d}}(B)}\underset{B}{\int }\omega (x){\lambda ^{d}}(dx).\]
Furthermore, we define for $\omega \in R{H_{p}}({\mathbb{R}^{d}})$ the maximum function $m(x,\omega )$ by
\[ \frac{1}{m(x,\omega )}:=\sup \left\{r>0:\frac{1}{{r^{d-2}}}\underset{B(x,r)}{\int }\omega (y)dy\le 1\right\}\in (0,\infty )\]
and the distance function
\[ \gamma (x,y,\omega ):=\underset{\Gamma }{\inf }{\int _{0}^{1}}m(\Gamma (t),\omega )|\dot{\Gamma }(t)|{\lambda ^{1}}(dt),\]
where $\Gamma :[0,1]\to {\mathbb{R}^{d}}$ is absolutely continuous with $\Gamma (0)=x$ and $\Gamma (1)=y$. Moreover, we define for $R>0$ the ball
\[ {B^{\omega }}(x,R):=\{y\in {\mathbb{R}^{d}}:\gamma (x,y,\omega )\le R\}.\]
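For a constant potential $\omega \equiv \varepsilon $ in $d=3$ the maximum function just defined can be computed in closed form: the condition $\frac{1}{{r^{d-2}}}\int_{B(x,r)}\omega \,dy\le 1$ reads $\varepsilon \frac{4\pi }{3}{r^{2}}\le 1$, so $m(x,\omega )=\sqrt{4\pi \varepsilon /3}\sim \sqrt{\varepsilon }$. A minimal numerical sketch (plain Python; the bisection routine is our own illustration) confirms this scaling, which reappears as the lower bound $m(\cdot ,V)\ge C\sqrt{\varepsilon }$ in Example 2 below:

```python
import math

# For constant omega = eps in d = 3 the defining condition
# r^{-(d-2)} * eps * lambda^3(B(x,r)) <= 1 reads eps*(4*pi/3)*r^2 <= 1,
# hence 1/m = (eps*4*pi/3)^{-1/2} and m(x, omega) = sqrt(4*pi*eps/3).
def m_const(eps, d=3):
    vol = 4.0 * math.pi / 3.0                   # lambda^3(B(x,1))
    cond = lambda r: eps * vol * r ** d / r ** (d - 2) <= 1.0
    lo, hi = 0.0, 1e9                           # bisection for sup{r : cond(r)}
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cond(mid) else (lo, mid)
    return 1.0 / lo

for eps in [0.25, 1.0, 4.0]:
    print(eps, m_const(eps), math.sqrt(4.0 * math.pi * eps / 3.0))
```

In particular, doubling $\sqrt{\varepsilon }$ doubles m, i.e. m grows like $\sqrt{\varepsilon }$.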
The set $R{H_{p}}({\mathbb{R}^{d}})$ is closely related to the space of Muckenhoupt weights ${A_{s}}$, $s\ge 1$: a measurable and nonnegative function ω belongs to ${A_{s}}$ if
\[ \underset{B\hspace{2.5pt}\text{ball in}\hspace{2.5pt}{\mathbb{R}^{d}}}{\sup }\left(\frac{1}{{\lambda ^{d}}(B)}\underset{B}{\int }\omega (x){\lambda ^{d}}(dx)\right){\left(\frac{1}{{\lambda ^{d}}(B)}\underset{B}{\int }\omega {(x)^{-{s^{\prime }}/s}}{\lambda ^{d}}(dx)\right)^{s/{s^{\prime }}}}<\infty ,\]
where ${s^{\prime }}\in \mathbb{R}$ is such that $\frac{1}{s}+\frac{1}{{s^{\prime }}}=1$. For further information see, for example, [15]. In particular, $\omega \in {A_{s}}$ for some $s\ge 1$ if and only if there exists a $p>1$ such that $\omega \in R{H_{p}}({\mathbb{R}^{d}})$. We see that the set of all positive and measurable functions bounded from above and strictly away from zero,
\[\begin{aligned}{}\bigg\{& f:{\mathbb{R}^{d}}\to (0,\infty ):\exists {C_{1}},\hspace{0.1667em}{C_{2}}>0\hspace{2.5pt}\text{such that}\hspace{2.5pt}{C_{1}}\le f(y)\le {C_{2}}\hspace{2.5pt}\text{for all}\hspace{2.5pt}y\in {\mathbb{R}^{d}}\bigg\},\end{aligned}\]
is a subset of $R{H_{p}}({\mathbb{R}^{d}})$ for all $p\ge 1$. We state now an existence theorem for a mild solution of the equation (21), where V lies in $R{H_{\frac{d}{2}}}({\mathbb{R}^{d}})$, and show that under much weaker moment conditions there exists a generalized solution. We use that the weak fundamental solution E of the operator $p(x,D)$ can be bounded as follows:
(22)
\[ |E(x,y)|\le C\frac{{e^{-k\gamma (x,y,V)}}}{\| x-y{\| ^{d-2}}}\hspace{1em}\text{for all}\hspace{2.5pt}x,y\in {\mathbb{R}^{d}},x\ne y,\]
where $k,C>0$, see [10], Corollary 6.16, page 40. From now on the constant $k>0$ is fixed and such that (22) is satisfied.
Theorem 5.
Let $A(x)={({a_{i,j}}(x))_{i,j=1}^{d}}$ be a real, uniformly bounded and elliptic matrix and $V\in R{H_{\frac{d}{2}}}({\mathbb{R}^{d}})$. Let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{d}}$ with characteristic triplet $(a,\gamma ,\nu )$ such that
\[ \underset{|r|>1}{\int }|r|{\underset{0}{\overset{1/|r|}{\int }}}{\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right){\lambda ^{1}}(d\alpha )\nu (dr)<\infty .\]
i) If $d=3$, then there exists a mild solution u of
\[ (-\text{div}(A\nabla )+V)u=\dot{L},\]
which is stochastically continuous.
ii) If $d\ge 3$, then there exists a generalized solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ of
\[ (-\text{div}(A\nabla )+V)s=\dot{L}.\]
iii) Under the assumption that the first moment of the Lévy white noise exists, the mild solution u from i) gives rise to a generalized solution s of the stochastic partial differential equation $(-\text{div}(A\nabla )+V)s=\dot{L}$ via
\[ \langle s,\varphi \rangle :=\underset{{\mathbb{R}^{d}}}{\int }u(x)\varphi (x){\lambda ^{d}}(dx),\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}).\]
We will prove Theorem 5 in Section 3.4. Here we calculate the moment condition for $\dot{L}$ for potentials V that are bounded from below by a positive constant.
Example 2.
Let $d\ge 3$ and $V\in R{H_{\frac{d}{2}}}({\mathbb{R}^{d}})$ be such that $V>\varepsilon $, where $\varepsilon >0$. We observe that
\[ {\underset{0}{\overset{1}{\int }}}m(\Gamma (t),V)|\dot{\Gamma }(t)|{\lambda ^{1}}(dt)\ge C\sqrt{\varepsilon }\| y-x\| \]
for every path $\Gamma :[0,1]\to {\mathbb{R}^{d}}$ with $\Gamma (0)=x$ and $\Gamma (1)=y$ from which it follows for $0<\alpha \le 1$ that for fixed $k>0$
\[ {\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right)\le {C_{1}}{\left(\log \left(\frac{C}{\alpha }\right)\right)^{d}},\]
where $C,{C_{1}}>0$. Since
\[\begin{aligned}{}{\underset{0}{\overset{1/r}{\int }}}{\left(\log \left(\frac{1}{\alpha }\right)\right)^{d}}{\lambda ^{1}}(d\alpha )& ={\underset{\log (r)}{\overset{\infty }{\int }}}{\beta ^{d}}{e^{-\beta }}{\lambda ^{1}}(d\beta )=\Gamma (d+1,\log (r))\\ {} & =\frac{d!}{r}{\sum \limits_{j=0}^{d}}\frac{{(\log (r))^{j}}}{j!},\end{aligned}\]
where $\Gamma (d+1,\log (r))$ denotes the upper incomplete gamma function, this leads to
\[\begin{aligned}{}& \underset{|r|>1}{\int }|r|{\underset{0}{\overset{1/|r|}{\int }}}{\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right){\lambda ^{1}}(d\alpha )\nu (dr)\\ {} \le & \underset{|r|>1}{\int }{C_{2}}\log {(|r|)^{d}}\nu (dr)+{C_{3}}\nu (\mathbb{R}\setminus [-1,1]),\end{aligned}\]
where ${C_{2}},{C_{3}}>0$. So if we assume that the Lévy white noise $\dot{L}$ with characteristic triplet $(a,\gamma ,\nu )$ satisfies
\[ \underset{|r|>1}{\int }\log {(|r|)^{d}}\nu (dr)<\infty ,\]
then the assumptions of Theorem 5 are satisfied and we obtain generalized and mild solutions for $d\ge 3$ and $d=3$, respectively.
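The incomplete-gamma identity used in Example 2 can be checked numerically; the sketch below (plain Python; the midpoint quadrature and the truncated upper limit are our choices) compares $\underset{\log (r)}{\overset{\infty }{\textstyle\int }}{\beta ^{d}}{e^{-\beta }}d\beta $ with the closed form $\frac{d!}{r}{\textstyle\sum _{j=0}^{d}}\frac{{(\log (r))^{j}}}{j!}$:

```python
import math

# Numerical check of the identity from Example 2:
# int_0^{1/r} (log(1/alpha))^d d(alpha) = Gamma(d+1, log r)
#                                       = (d!/r) * sum_{j=0}^d (log r)^j / j!
def closed_form(d, r):
    return math.factorial(d) / r * sum(math.log(r) ** j / math.factorial(j)
                                       for j in range(d + 1))

def upper_incomplete_gamma(d, x, B=60.0, n=200000):
    # midpoint quadrature of int_x^B beta^d e^{-beta} d(beta);
    # the tail beyond B = 60 is negligible for the d used here
    h = (B - x) / n
    return sum(((x + (i + 0.5) * h) ** d) * math.exp(-(x + (i + 0.5) * h))
               for i in range(n)) * h

for d, r in [(3, 2.0), (5, 10.0)]:
    print(d, r, upper_incomplete_gamma(d, math.log(r)), closed_form(d, r))
```

The two columns agree up to quadrature error, confirming the substitution $\beta =\log (1/\alpha )$ and the finite-sum formula for $\Gamma (d+1,\log (r))$ with integer d.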
3.3 Existence and continuity of mild solutions
In the following we give sufficient conditions for the existence and continuity of a random field $u(x):={(\langle \dot{L},E(x,\cdot )\rangle )_{x\in {\mathbb{R}^{m}}}}$, where $E:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ is a kernel. This will be used in the proof of Theorem 5, where E is the weak fundamental solution of the generalized electric Schrödinger operator.
Proposition 3.
Let $\dot{L}$ be a Lévy basis on ${\mathbb{R}^{d}}$ with characteristic triplet $(a,\gamma ,\nu )$ and let $E:{\mathbb{R}^{m}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ be a measurable function. We define for every $x\in {\mathbb{R}^{m}}$ a function ${h_{x}}:{\mathbb{R}^{+}}\to {\mathbb{R}^{+}}$ by
\[\begin{aligned}{}{h_{x}}(r)& :=r{\underset{0}{\overset{1/r}{\int }}}{d_{E(x,\cdot )}}(\alpha ){\lambda ^{1}}(d\alpha )\hspace{1em}\textit{for}\hspace{2.5pt}r>0.\end{aligned}\]
-
i) Assume that $E(x,\cdot )\in {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})$ and $\underset{\mathbb{R}}{\textstyle\int }{\mathds{1}_{|r|>1}}{h_{x}}(|r|)\nu (dr)<\infty $ for every $x\in {\mathbb{R}^{m}}$. Then $E(x,\cdot )\in D(\dot{L})$ for every $x\in {\mathbb{R}^{m}}$ and hence the random field $u={(u(x))_{x\in {\mathbb{R}^{m}}}}$ given by $u(x):=\langle \dot{L},E(x,\cdot )\rangle $ for all $x\in {\mathbb{R}^{m}}$ exists.
-
ii) Furthermore, if the function ${T_{E}}:{\mathbb{R}^{m}}\to {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})$ given by ${T_{E}}(x):=E(x,\cdot )$ is continuous in ${L^{1}}({\mathbb{R}^{d}})$ and ${L^{2}}({\mathbb{R}^{d}})$ and for every $x\in {\mathbb{R}^{m}}$ there exists an $\varepsilon >0$ such that\[ \underset{{x^{\ast }}\in {B_{\varepsilon }}(x)}{\sup }\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}{h_{{x^{\ast }}}}(|r|)\nu (dr)<\infty ,\]then the process $u={(u(x))_{x\in {\mathbb{R}^{m}}}}$ is stochastically continuous.
Proof.
i) Similar to the proof of Theorem 1 the existence of the random field u is characterized by [12], Theorem 2.7 and hence with the same calculations the result follows.
ii) Using [12], Theorem 2.7 by the same reasoning as in Theorem 1 we have to show that
(23)
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\big|\gamma \big(E({x_{n}},y)-E(x,y)\big)\\ {} & +\underset{\mathbb{R}}{\int }r\big(E({x_{n}},y)-E(x,y)\big)\left({\mathds{1}_{|r(E({x_{n}},y)-E(x,y))|\le 1}}-{\mathds{1}_{|r|\le 1}}\right)\nu (dr)\big|{\lambda ^{d}}(dy)\to 0,\end{aligned}\]
(24)
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\underset{\mathbb{R}}{\int }\min \big(1,\big|r\big(E({x_{n}},y)-E(x,y)\big){\big|^{2}}\big)\nu (dr){\lambda ^{d}}(dy)\to 0\hspace{1em}\text{and}\end{aligned}\]
(25)
\[ a\underset{{\mathbb{R}^{d}}}{\int }{\big(E({x_{n}},y)-E(x,y)\big)^{2}}{\lambda ^{d}}(dy)\to 0\]
as $n\to \infty $, if ${x_{n}}\to x$ for $n\to \infty $. At first we observe that
\[ \underset{{\mathbb{R}^{d}}}{\int }\big|\gamma \big(E({x_{n}},y)-E(x,y)\big)\big|{\lambda ^{d}}(dy)\le |\gamma |\| E({x_{n}},\cdot )-E(x,\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}\to 0\]
as $n\to \infty $. With similar calculations as in the proof of Theorem 1 we can estimate the remaining term in (23) by
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\underset{\mathbb{R}}{\int }\big|r\big(E({x_{n}},y)-E(x,y)\big)\left({\mathds{1}_{|r(E({x_{n}},y)-E(x,y))|\le 1}}-{\mathds{1}_{|r|\le 1}}\right)\big|\nu (dr){\lambda ^{d}}(dy)\\ {} =& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)-E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|>\frac{1}{|r|}}}{\lambda ^{d}}(dy)\nu (dr)\\ {} +& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|>1}}\underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)-E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|\le \frac{1}{|r|}}}{\lambda ^{d}}(dy)\nu (dr).\end{aligned}\]
As
\[ \underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)-E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|>\frac{1}{|r|}}}{\lambda ^{d}}(dy)\hspace{-0.1667em}\le \hspace{-0.1667em}|r|\| E({x_{n}},\cdot )-E(x,\cdot ){\| _{{L^{2}}({\mathbb{R}^{d}})}^{2}},\]
it follows from Lebesgue’s dominated convergence theorem that
\[ \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|\le 1}}\underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)-E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|>\frac{1}{|r|}}}{\lambda ^{d}}(dy)\nu (dr)\to 0\]
as $n\to \infty $. For the last term in (23) we observe by [7], Prop. 1.1.3 and 1.1.4 that
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)-E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|\le \frac{1}{|r|}}}{\lambda ^{d}}(dy)\\ {} \le & {\underset{0}{\overset{1/|r|}{\int }}}{d_{|E({x_{n}},\cdot )-E(x,\cdot )|}}(\alpha ){\lambda ^{1}}(d\alpha )\\ {} \le & {\underset{0}{\overset{1/|r|}{\int }}}{d_{|E({x_{n}},\cdot )|}}(\alpha /2){\lambda ^{1}}(d\alpha )+{\underset{0}{\overset{1/|r|}{\int }}}{d_{|E(x,\cdot )|}}(\alpha /2){\lambda ^{1}}(d\alpha )\\ {} \le & 2\left({\underset{0}{\overset{\frac{1}{2|r|}}{\int }}}{d_{|E({x_{n}},\cdot )|}}(\alpha ){\lambda ^{1}}(d\alpha )+{\underset{0}{\overset{\frac{1}{2|r|}}{\int }}}{d_{|E(x,\cdot )|}}(\alpha ){\lambda ^{1}}(d\alpha )\right).\end{aligned}\]
By Lebesgue’s dominated convergence theorem we obtain that
\[ \underset{\mathbb{R}}{\int }\hspace{-0.1667em}|r|{\mathds{1}_{|r|>1}}\hspace{-0.1667em}\underset{{\mathbb{R}^{d}}}{\int }|E({x_{n}},y)\hspace{-0.1667em}-\hspace{-0.1667em}E(x,y)|{\mathds{1}_{|E({x_{n}},y)-E(x,y)|\le \frac{1}{|r|}}}{\lambda ^{d}}(dy)\nu (dr)\to 0\hspace{2.5pt}\text{as}\hspace{2.5pt}n\hspace{-0.1667em}\to \hspace{-0.1667em}\infty .\]
So we showed (23). In order to see (24), observe that
\[ \underset{{\mathbb{R}^{d}}}{\int }{\mathds{1}_{|r(E({x_{n}},y)-E(x,y))|>1}}{\lambda ^{d}}(dy)\le |r|\| E({x_{n}},\cdot )-E(x,\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}\to 0\hspace{2.5pt}\text{as}\hspace{2.5pt}n\to \infty \]
and
\[ \underset{{\mathbb{R}^{d}}}{\int }{\mathds{1}_{|r(E({x_{n}},y)-E(x,y))|>1}}{\lambda ^{d}}(dy)\le {d_{|E({x_{n}},\cdot )|}}\left(\frac{1}{2|r|}\right)+{d_{|E(x,\cdot )|}}\left(\frac{1}{2|r|}\right).\]
Now by similar arguments as in the proof of Theorem 1 we see that (24) holds true. Furthermore, it is clear that (25) holds, since ${T_{E}}$ is continuous. □
Now we state conditions under which a mild solution of a stochastic partial differential equation gives rise to a generalized solution.
Theorem 6.
Let $\dot{L}$ be a Lévy white noise on ${\mathbb{R}^{d}}$ with characteristic triplet $(a,\gamma ,\nu )$ whose first moment exists, and let $p(x,D)$ be a partial differential operator of the form
\[ p(x,D)\varphi (x)=-\text{div}(A\nabla \varphi (x))+b(x)\cdot \nabla \varphi (x)+V(x)\varphi (x),\]
where $b\in {C^{1}}({\mathbb{R}^{d}},{\mathbb{R}^{d}})$ and $V\in {L_{loc}^{1}}({\mathbb{R}^{d}})$, such that there exists a weak fundamental solution $E:{\mathbb{R}^{d}}\times {\mathbb{R}^{d}}\to \mathbb{R}$ of the equation $p(x,D)u={\delta _{0}}$ with $E(x,\cdot )\in {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})\cap D(\dot{L})$ for all $x\in {\mathbb{R}^{d}}$ and
\[ \underset{K}{\int }\| E(x,\cdot ){\| _{{L^{p}}({\mathbb{R}^{d}})}^{p}}{\lambda ^{d}}(dx)<\infty \]
for all compact sets $K\subset {\mathbb{R}^{d}}$ and $p=1,2$. Then the mild solution
\[ u(x)=\langle \dot{L},E(x,\cdot )\rangle ,\hspace{1em}x\in {\mathbb{R}^{d}},\]
of $p(x,D)u=\dot{L}$ gives rise to a generalized solution s of the stochastic partial differential equation $p(x,D)s=\dot{L}$ via
\[ \langle s,\varphi \rangle :=\underset{{\mathbb{R}^{d}}}{\int }u(x)\varphi (x){\lambda ^{d}}(dx),\hspace{1em}\varphi \in \mathcal{D}({\mathbb{R}^{d}}).\]
Proof.
We want to apply the stochastic Fubini theorem. Therefore we have to show that
(26)
\[ \underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }\underset{\mathbb{R}}{\int }\min \left(|rE(x,y)\varphi (x)|,|rE(x,y)\varphi (x){|^{2}}\right)\nu (dr){\lambda ^{d}}(dy){\lambda ^{d}}(dx)<\infty .\]
We see that for every $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$
\[\begin{aligned}{}& \min \left(|rE(x,y)\varphi (x)|,|rE(x,y)\varphi (x){|^{2}}\right)\\ {} =& {\mathds{1}_{|rE(x,y)\varphi (x)|>1}}|rE(x,y)\varphi (x)|+{\mathds{1}_{|rE(x,y)\varphi (x)|\le 1}}|rE(x,y)\varphi (x){|^{2}}\\ {} \le & {\mathds{1}_{|r|>1}}{\mathds{1}_{|rE(x,y)\varphi (x)|>1}}|rE(x,y)\varphi (x)|+{\mathds{1}_{|r|\le 1}}{\mathds{1}_{|rE(x,y)\varphi (x)|>1}}|rE(x,y)\varphi (x){|^{2}}\\ {} & +{\mathds{1}_{|r|\le 1}}{\mathds{1}_{|rE(x,y)\varphi (x)|\le 1}}|rE(x,y)\varphi (x){|^{2}}+{\mathds{1}_{|r|>1}}{\mathds{1}_{|rE(x,y)\varphi (x)|\le 1}}|rE(x,y)\varphi (x)|\\ {} =& {\mathds{1}_{|r|>1}}|rE(x,y)\varphi (x)|+{\mathds{1}_{|r|\le 1}}|rE(x,y)\varphi (x){|^{2}}.\end{aligned}\]
Let $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$ be such that $\text{supp}\hspace{2.5pt}\varphi \subset {B_{R}}(0)$, $R>0$. We observe that
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|rE(x,y)\varphi (x)|\nu (dr){\lambda ^{d}}(dy){\lambda ^{d}}(dx)\\ {} \le & \| \varphi {\| _{\infty }}\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|>1}}|r|\nu (dr)\underset{{B_{R}}(0)}{\int }\| E(x,\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}{\lambda ^{d}}(dx)<\infty \end{aligned}\]
and
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|\le 1}}|rE(x,y)\varphi (x){|^{2}}\nu (dr){\lambda ^{d}}(dy){\lambda ^{d}}(dx)\\ {} \le & \| \varphi {\| _{\infty }^{2}}\underset{\mathbb{R}}{\int }{\mathds{1}_{|r|\le 1}}|r{|^{2}}\nu (dr)\underset{{B_{R}}(0)}{\int }\| E(x,\cdot ){\| _{{L^{2}}({\mathbb{R}^{d}})}^{2}}{\lambda ^{d}}(dx)<\infty .\end{aligned}\]
This shows (26). Since $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$ has compact support and ${\lambda ^{d}}$ is finite on the support of φ, from [1], Theorem 3.1 p. 926 we get that
\[\begin{aligned}{}\langle s,\varphi \rangle :& =\underset{{\mathbb{R}^{d}}}{\int }u(x)\varphi (x){\lambda ^{d}}(dx)=\underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }E(x,y)\varphi (x)dL(y){\lambda ^{d}}(dx)\\ {} & =\underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }E(x,y)\varphi (x){\lambda ^{d}}(dx)dL(y)\hspace{1em}\hspace{2.5pt}\text{a.s.}\end{aligned}\]
and further one can choose a version of u such that $u\cdot \varphi $ is integrable with respect to ${\lambda ^{d}}$. The linearity of $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ is clear and the estimates above show that it is also continuous, hence s is a generalized random process. In order to see that $p(x,D)s=\dot{L}$, we observe that for arbitrary $f\in \mathcal{D}({\mathbb{R}^{d}})$
\[\begin{aligned}{}& \underset{{\mathbb{R}^{d}}}{\int }\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)p{(x,D)^{\ast }}\varphi (x){\lambda ^{d}}(dx)\right)f(y){\lambda ^{d}}(dy)\\ {} =& -\underset{{\mathbb{R}^{d}}}{\int }\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)f(y){\lambda ^{d}}(dy)\right)(\text{div}({A^{T}}(x)\nabla \varphi (x)){\lambda ^{d}}(dx)\\ {} & -\underset{{\mathbb{R}^{d}}}{\int }\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)f(y){\lambda ^{d}}(dy)\right)\nabla \cdot (b(x)\varphi (x)){\lambda ^{d}}(dx)\\ {} & +\underset{{\mathbb{R}^{d}}}{\int }\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)f(y){\lambda ^{d}}(dy)\right)V(x)\varphi (x){\lambda ^{d}}(dx)\\ {} =& \underset{{\mathbb{R}^{d}}}{\int }\langle A(x)\nabla \left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)f(y){\lambda ^{d}}(dy)\right),\nabla \varphi (x)\rangle {\lambda ^{d}}(dx)\\ {} & +\underset{{\mathbb{R}^{d}}}{\int }(b(x)\cdot \nabla +V(x))\left(\hspace{0.1667em}\hspace{0.1667em}\underset{{\mathbb{R}^{d}}}{\int }E(x,y)f(y){\lambda ^{d}}(dy)\right)\varphi (x){\lambda ^{d}}(dx)\\ {} =& \underset{{\mathbb{R}^{d}}}{\int }f(x)\varphi (x){\lambda ^{d}}(dx).\end{aligned}\]
As $f\in \mathcal{D}({\mathbb{R}^{d}})$ was arbitrary, it follows from the fundamental lemma of the calculus of variations that
\[ \underset{{\mathbb{R}^{d}}}{\int }E(x,y)p{(x,D)^{\ast }}\varphi (x){\lambda ^{d}}(dx)=\varphi (y)\hspace{1em}\text{for almost every}\hspace{2.5pt}y\in {\mathbb{R}^{d}}.\]
Now we obtain
\[ \langle s,p{(x,D)^{\ast }}\varphi \rangle =\underset{{\mathbb{R}^{d}}}{\int }\underset{{\mathbb{R}^{d}}}{\int }E(x,y)p{(x,D)^{\ast }}\varphi (x){\lambda ^{d}}(dx)dL(y)=\langle \dot{L},\varphi \rangle ,\]
so we see that s is a generalized solution. □
3.4 Proof of Theorem 5
Proof.
i) Similar to [14], Remark 3.21 we can estimate the distance function γ in (22) and prove for the weak fundamental solution E of the generalized electric Schrödinger operator $p(x,D)$ that
\[ |E(x,y)|\le {C_{1}}{e^{-{C_{2}}{(1+m(x,V)\| x-y\| )^{\theta }}}}\| x-y{\| ^{2-d}}\hspace{1em}\text{for all}\hspace{2.5pt}x\ne y,\]
for some constants ${C_{1}},{C_{2}}>0$ and $0<\theta <1$. Hence, we obtain that
\[\begin{aligned}{}\underset{{\mathbb{R}^{d}}}{\int }|E(x,y)|{\lambda ^{d}}(dy)& \le {C_{1}}\underset{{\mathbb{R}^{d}}}{\int }{e^{-{C_{2}}{(1+m(x,V)\| z\| )^{\theta }}}}\| z{\| ^{2-d}}{\lambda ^{d}}(dz)\\ {} & ={C_{1}}{\underset{0}{\overset{\infty }{\int }}}r{e^{-{C_{2}}{(1+m(x,V)r)^{\theta }}}}{\lambda ^{1}}(dr)<\infty \end{aligned}\]
and also
\[\begin{aligned}{}\underset{{\mathbb{R}^{d}}}{\int }|E(x,y){|^{2}}{\lambda ^{d}}(dy)& \le {C_{3}}{\underset{0}{\overset{\infty }{\int }}}{r^{3-d}}{e^{-2{C_{2}}{(1+m(x,V)r)^{\theta }}}}{\lambda ^{1}}(dr)<\infty ,\end{aligned}\]
where ${C_{3}}>0$. For $\alpha >0$ and $x,y\in {\mathbb{R}^{d}}$, $x\ne y$, it follows from the triangle inequality that (observe that $d=3$ and hence the Lebesgue measure of a ball with radius r is $\frac{4\pi }{3}{r^{3}}$)
\[\begin{aligned}{}& {\lambda ^{d}}\left(\{y\in {\mathbb{R}^{d}}:\frac{C{e^{-k\gamma (x,y,V)}}}{\| x-y{\| ^{d-2}}}>\alpha \}\right)\\ {} \le & {\lambda ^{d}}\left(\{y\in {\mathbb{R}^{d}}\setminus {B_{{e^{-k\gamma (x,0,V)/(d-2)}}{C^{1/(d-2)}}}}(x):\frac{{e^{-k\gamma (0,y,V)}}}{\| x-y{\| ^{d-2}}}>{e^{k\gamma (x,0,V)}}\alpha /C\}\right)\\ {} & +\frac{4\pi }{3}{({e^{-k\gamma (x,0,V)/(d-2)}}{C^{1/(d-2)}})^{d}}\\ {} \le & {\lambda ^{d}}\left(\{y\in {\mathbb{R}^{d}}\setminus {B_{{e^{-k\gamma (x,0,V)/(d-2)}}{C^{1/(d-2)}}}}(x):\gamma (0,y,V)\le -\frac{1}{k}\log (\alpha )\}\right)\\ {} & +\frac{4\pi }{3}{({e^{-k\gamma (x,0,V)/(d-2)}}{C^{1/(d-2)}})^{d}}\\ {} \le & {\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right)+\frac{4\pi }{3}{({e^{-k\gamma (x,0,V)/(d-2)}}{C^{1/(d-2)}})^{d}}.\end{aligned}\]
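The measure just estimated controls the distribution function ${d_{E(x,\cdot )}}(\alpha )={\lambda ^{d}}(\{y\in {\mathbb{R}^{d}}:|E(x,y)|>\alpha \})$ appearing in the next display. The reason bounds of this type suffice is the elementary layer-cake identity, recalled here for convenience (a standard Fubini argument, not specific to this setting): for measurable f and $t>0$,

\[ {\underset{0}{\overset{t}{\int }}}{d_{f}}(\alpha ){\lambda ^{1}}(d\alpha )={\underset{0}{\overset{t}{\int }}}{\lambda ^{d}}(\{y\in {\mathbb{R}^{d}}:|f(y)|>\alpha \}){\lambda ^{1}}(d\alpha )=\underset{{\mathbb{R}^{d}}}{\int }\min \{|f(y)|,t\}{\lambda ^{d}}(dy),\]

so with $f=E(x,\cdot )$ and $t=1/|r|$ the double integral below is a truncated ${L^{1}}$-norm of $E(x,\cdot )$ weighted by the Lévy measure ν.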
It follows from (22) that
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|>1}}{\underset{0}{\overset{1/|r|}{\int }}}{d_{E(x,\cdot )}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)\\ {} \le & {C_{4}}(x)\left(1+\underset{|r|>1}{\int }|r|{\underset{0}{\overset{1/|r|}{\int }}}{\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right){\lambda ^{1}}(d\alpha )\nu (dr)\right)<\infty \end{aligned}\]
by assumption, where $0<{C_{4}}(x)<\infty $. Proposition 3 i) now gives the existence of a mild solution. To show the continuity of the mild solution it is sufficient, by the previous estimates and Proposition 3 ii), to prove that ${T_{E}}:{\mathbb{R}^{d}}\to {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})$, ${T_{E}}(x)(\cdot )=E(x,\cdot )$, is continuous. Let ${x_{0}}\in {\mathbb{R}^{d}}$, let ${({x_{n}})_{n\in \mathbb{N}}}$ be a sequence such that ${x_{n}}\to {x_{0}}$ as $n\to \infty $, and let $M\in \mathbb{N}$ be such that $0<2\| {x_{0}}-{x_{n}}\| <{r_{0}}$ for all $n\ge M$. We calculate that
\[\begin{aligned}{}\| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}\le & \| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({B_{{r_{0}}}}(x))}}\\ {} & +\| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}}\setminus {B_{{r_{0}}}}(x))}}.\end{aligned}\]
It was shown in [10], Lemma 3.12, page 14 that
(28)
\[ m(x,V)\ge C\frac{m(0,V)}{{(1+\| x\| m(0,V))^{\kappa }}}\hspace{2.5pt}\text{for all}\hspace{2.5pt}x\in {\mathbb{R}^{d}}\]
for a constant $0<\kappa <1$, hence there exists an $\varepsilon >0$ such that it follows from (27) that
\[ |E({x_{n}},y)|\le {C_{1}}{e^{-{C_{2}}{(1+\varepsilon \| {x_{n}}-y\| )^{\theta }}}}\| {x_{n}}-y{\| ^{2-d}}\]
for every $n\in {\mathbb{N}_{0}}$. Therefore, we obtain that
\[ \| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({B_{{r_{0}}}}(x))}}\le 2{\int _{{B_{2{r_{0}}}}(0)}}{C_{1}}{e^{-{C_{2}}{(1+\varepsilon \| y\| )^{\theta }}}}\| y{\| ^{2-d}}{\lambda ^{d}}(dy)\]
and
\[\begin{aligned}{}|E({x_{0}},y)-E({x_{n}},y)|\le & {C_{1}}{e^{-{C_{2}}{(1+\varepsilon \| {x_{n}}-y\| )^{\theta }}}}\| {x_{n}}-y{\| ^{2-d}}\\ {} & +{C_{1}}{e^{-{C_{2}}{(1+\varepsilon \| {x_{0}}-y\| )^{\theta }}}}\| {x_{0}}-y{\| ^{2-d}}.\end{aligned}\]
As ${({x_{n}})_{n\ge M}}$ is bounded, we can find an integrable majorant on ${\mathbb{R}^{d}}\setminus {B_{{r_{0}}}}(x)$. We know from [10], Chapter 7, that E is continuous, and by Lebesgue’s dominated convergence theorem we obtain
\[ \underset{n\to \infty }{\lim }\| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}}\setminus {B_{{r_{0}}}}(x))}}=0.\]
We see that
\[ \underset{n\to \infty }{\lim }\| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}\le 2{\int _{{B_{2{r_{0}}}}(0)}}{C_{1}}{e^{-{C_{2}}{(1+\varepsilon \| y\| )^{\theta }}}}\| y{\| ^{2-d}}{\lambda ^{d}}(dy).\]
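The right-hand side indeed vanishes with ${r_{0}}$: in polar coordinates (recall $d=3$) the integral over ${B_{2{r_{0}}}}(0)$ equals $4\pi {\textstyle\int _{0}^{2{r_{0}}}}r{e^{-{C_{2}}{(1+\varepsilon r)^{\theta }}}}dr\le 2\pi {(2{r_{0}})^{2}}$, since the exponential factor is at most 1. A minimal numerical sketch of this decay (the constants ${C_{2}}=\varepsilon =1$, $\theta =1/2$ are illustrative, not values from the proof):

```python
import math

# Polar-coordinate form (d = 3) of the ball integral in the display above:
#   4*pi * int_0^rho r * exp(-c2 * (1 + eps*r)**theta) dr.
# c2, eps, theta are illustrative constants, not taken from the paper.

def ball_integral(rho, c2=1.0, eps=1.0, theta=0.5, n=10_000):
    """Trapezoidal approximation of 4*pi*int_0^rho r*exp(-c2*(1+eps*r)**theta) dr."""
    h = rho / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * r * math.exp(-c2 * (1.0 + eps * r) ** theta)
    return 4.0 * math.pi * total * h

# The values shrink roughly like 2*pi*rho^2 as rho decreases.
for rho in (1.0, 0.1, 0.01):
    print(rho, ball_integral(rho))
```

For completeness, the full radial integral is also finite in closed form: substituting $s=c{r^{\theta }}$ gives ${\textstyle\int _{0}^{\infty }}r{e^{-c{r^{\theta }}}}dr=\Gamma (2/\theta )/(\theta {c^{2/\theta }})$ for $c,\theta >0$, which is the mechanism behind the ${L^{1}}$- and ${L^{2}}$-estimates in i).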
By letting ${r_{0}}$ go to 0 we obtain that ${\lim \nolimits_{n\to \infty }}\| E({x_{0}},\cdot )-E({x_{n}},\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}=0$. The same proof works for the ${L^{2}}$-norm.

ii) Let $\tilde{E}$ be the left inverse of $p{(x,D)^{\ast }}$, i.e.
\[ \underset{{\mathbb{R}^{d}}}{\int }\tilde{E}(x,y)p{(y,D)^{\ast }}\varphi (y){\lambda ^{d}}(dy)=\varphi (x)\]
for $\varphi \in \mathcal{D}({\mathbb{R}^{d}})$. We verify that ${\tilde{E}_{R}}\in {L^{1}}({\mathbb{R}^{d}})\cap {L^{2}}({\mathbb{R}^{d}})$ in order to satisfy the assumptions of Theorem 1. As $\tilde{E}(x,y)=E(y,x)$ we can show by a similar argument as in i) that for $R>0$
\[ {\tilde{E}_{R}}(x)=\underset{{B_{R}}(0)}{\int }|\tilde{E}(x,y)|{\lambda ^{d}}(dy)\le \underset{{B_{R}}(0)}{\int }{C_{1}}{e^{-{C_{2}}{(1+m(y,V)\| x-y\| )^{\theta }}}}\| x-y{\| ^{2-d}}{\lambda ^{d}}(dy).\]
By using (28) we obtain that
\[\begin{aligned}{}{\tilde{E}_{R}}(x)& \le {C_{R}}\underset{{B_{R}}(0)}{\int }{e^{-k{C_{R}^{1}}\| x-y{\| ^{\theta }}}}\| x-y{\| ^{2-d}}{\lambda ^{d}}(dy)\le {\tilde{C}_{R}}{e^{-k{C_{R}^{1}}\| x{\| ^{\theta }}}}\| x{\| ^{2-d}},\end{aligned}\]
where ${C_{R}},{C_{R}^{1}},{\tilde{C}_{R}}>0$; the last inequality follows since $\| x-y\| \ge \| x\| /2$ for $\| x\| \ge 2R$ and $y\in {B_{R}}(0)$, while the regime $\| x\| <2R$ is absorbed into ${\tilde{C}_{R}}$ (after adjusting ${C_{R}^{1}}$ by a factor ${2^{-\theta }}$). Therefore, we obtain that $\| {\tilde{E}_{R}}{\| _{{L^{1}}({\mathbb{R}^{d}})}}+\| {\tilde{E}_{R}}{\| _{{L^{2}}({\mathbb{R}^{d}})}}<\infty $. We observe from (22) and [14], Remark 3.21, by applying the triangle inequality, that
\[\begin{aligned}{}{\tilde{E}_{R}}(x)& \le C{e^{-k\gamma (x,0,V)}}\underset{{B_{R}}(0)}{\int }\frac{{e^{k\gamma (y,0,V)}}}{\| x-y{\| ^{d-2}}}{\lambda ^{d}}(dy)\\ {} & \le {C^{\prime }_{R}}{e^{-k\gamma (x,0,V)}}\underset{{B_{R}}(x)}{\int }\frac{1}{\| y{\| ^{d-2}}}{\lambda ^{d}}(dy)\\ {} & \le {C^{\prime\prime }_{R}}\frac{{e^{-k\gamma (x,0,V)}}}{\| x{\| ^{d-2}}},\end{aligned}\]
where ${C^{\prime }_{R}},{C^{\prime\prime }_{R}}>0$ are constants dependent on R. This leads by similar arguments as in i) to
\[\begin{aligned}{}& \underset{\mathbb{R}}{\int }|r|{\mathds{1}_{|r|>1}}{\underset{0}{\overset{1/|r|}{\int }}}{d_{{\tilde{E}_{R}}}}(\alpha ){\lambda ^{1}}(d\alpha )\nu (dr)\\ {} \le & {C_{R}}\left(1+\underset{|r|>1}{\int }|r|{\underset{0}{\overset{1/|r|}{\int }}}{\lambda ^{d}}\left({B^{V}}(0,-\frac{1}{k}\log (\alpha ))\right){\lambda ^{1}}(d\alpha )\nu (dr)\right)<\infty ,\end{aligned}\]
for a constant ${C_{R}}>0$ dependent on $R>0$. The existence of a generalized solution $s:\mathcal{D}({\mathbb{R}^{d}})\to {L^{0}}(\Omega )$ follows from Theorem 1.

iii) Given the mild solution from i) we obtain from (28) for $R>0$ that
\[ \underset{{B_{R}}(0)}{\int }\| E(x,\cdot ){\| _{{L^{1}}({\mathbb{R}^{d}})}}{\lambda ^{d}}(dx)<\infty \]
and
\[ \underset{{B_{R}}(0)}{\int }\| E(x,\cdot ){\| _{{L^{2}}({\mathbb{R}^{d}})}}{\lambda ^{d}}(dx)<\infty .\]
Hence, we obtain the assertion by Theorem 6. □