Modern Stochastics: Theory and Applications

Accuracy of discrete approximation for integral functionals of Markov processes
Volume 2, Issue 4 (2015), pp. 401–420
Iurii Ganychenko, Victoria Knopova, Alexei Kulik

https://doi.org/10.15559/15-VMSTA46
Pub. online: 30 December 2015      Type: Research Article      Open Access

Received
4 December 2015
Revised
15 December 2015
Accepted
15 December 2015
Published
30 December 2015

Abstract

The article is devoted to estimating the convergence rate of discrete approximations of integral functionals of a Markov process. Under the assumption that the Markov process admits a transition probability density that is differentiable in t, with the derivative possessing an integrable upper bound of a certain type, we derive accuracy rates for strong and weak approximations of the functionals by Riemann sums. We also develop a version of the parametrix method, which provides the required upper bound for the derivative of the transition probability density of a solution of an SDE driven by a locally stable process. As an application, we give accuracy bounds for an approximation of the price of an occupation time option.

1 Introduction

Let $X_{t}$, $t\ge 0$, be a Markov process with values in ${\mathbb{R}}^{d}$. Consider an integral functional of the form
(1)
\[I_{T}(h)={\int _{0}^{T}}h(X_{t})\hspace{0.1667em}dt,\]
where $h:{\mathbb{R}}^{d}\to \mathbb{R}$ is a measurable function. In this paper, we investigate the accuracy of the approximation of $I_{T}(h)$ by the Riemann sums
\[I_{T,n}(h)=\frac{T}{n}\sum \limits_{k=0}^{n-1}h(X_{(kT)/n}),\hspace{1em}n\ge 1.\]
The function h is assumed only to be bounded; that is, we do not impose any regularity assumptions on h. In particular, under this assumption, the class of integral functionals we investigate contains the class of occupation time type functionals (for which $h=1_{A}$ for a fixed $A\in \mathcal{B}({\mathbb{R}}^{d})$), which are of particular importance.
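As a concrete numerical illustration of the approximation $I_{T,n}(h)$ (a sketch of ours, not part of the paper's setting), the Riemann sum can be computed from a sampled trajectory; here a simulated Brownian path stands in for the Markov process X, and $h=1_{[0,\infty )}$ yields an occupation time functional:

```python
import numpy as np

def riemann_sum_functional(path, T, h):
    """Left-endpoint Riemann sum I_{T,n}(h) = (T/n) * sum_{k<n} h(X_{kT/n}),
    where `path` holds the n+1 sampled values X_0, X_{T/n}, ..., X_T."""
    n = len(path) - 1
    return (T / n) * sum(h(x) for x in path[:-1])

# A simulated Brownian path as a stand-in Markov process; with
# h = 1_{[0, inf)} the sum approximates the occupation time of [0, inf).
rng = np.random.default_rng(0)
T, n = 1.0, 1000
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), size=n))])
occupation = riemann_sum_functional(path, T, h=lambda x: float(x >= 0.0))
```

An occupation time of $[0,\infty )$ over $[0,T]$ necessarily lies in $[0,T]$, which gives a cheap sanity check on the output.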
Integral functionals arise naturally in a wide class of stochastic representation formulae and applied stochastic models. Typically, exact calculation of the respective probabilities and/or expectations is hardly possible, which naturally suggests the use of approximation methods. As an example of such a situation, we mention the so-called occupation time option [11], whose price is given by an expression similar to the Feynman–Kac formula. Exact calculation of the price is possible only in the particular case where the underlying process is a spectrally negative Lévy process (i.e., does not have positive jumps; see [5]); practically more realistic cases of general Lévy processes, solutions to Lévy driven SDEs, etc. can be treated only numerically. To estimate the convergence rate of the respective Monte Carlo approximation methods, we need to estimate the accuracy of the various approximation steps involved in the algorithm. In this paper, we focus on solving such a problem for the discrete approximation of integral functionals of type (1).
For diffusion processes, this problem was studied in [4] and recently in [9] by methods involving particular structural features of the process, for example, Malliavin calculus tools. On the other hand, in two recent papers [2, 3], an alternative method was developed, which exploits only the basic Markov structure of the process and the additive structure of the integral functional and its discrete approximations. One of the aims of this paper is to extend this method to a wider class of Markov processes. To explain our goal in more detail, let us formulate our principal assumption on the process X.
  • X. The transition probability $P_{t}(x,dy)$ of X admits a density $p_{t}(x,y)$ w.r.t. the Lebesgue measure on ${\mathbb{R}}^{d}$. This density is differentiable w.r.t. t, and its derivative possesses the bound
    (2)
    \[\big|\partial _{t}p_{t}(x,y)\big|\le B_{T,X}{t}^{-\beta }q_{t,x}(y),\hspace{1em}t\le T,\]
    for some $B_{T,X}\ge 1$, $\beta \ge 1$, and a measurable function q such that for any fixed $t,x$, the function $q_{t,x}(\cdot )$ is a distribution density.
In [2, 3], a condition similar to X was formulated with $\beta =1$. Such a condition is satisfied for the particularly important classes of diffusion processes and symmetric α-stable processes. However, in some natural cases, we can expect to get (2) only with $\beta >1$. As the simplest and most illustrative example, we can take an α-stable process with drift:
\[X_{t}=ct+Z_{t},\]
where $c\ne 0$, and Z is an (e.g., symmetric) α-stable process. Then
\[p_{t}(x,y)={t}^{-d/\alpha }{g}^{(\alpha )}\bigg(\frac{y-x-ct}{{t}^{1/\alpha }}\bigg),\]
where ${g}^{(\alpha )}$ denotes the distribution density of $Z_{1}$. A straightforward calculation shows that (2) holds with $\beta =\max (1,1/\alpha )$, which is strictly greater than 1 when $\alpha <1$. Since Lévy noises are now extensively used in various applied models, this simple calculation shows that it is highly desirable to extend the results of [2] and [3], which deal with the “diffusive like” class of processes satisfying X with $\beta =1$, to the more general case of X with arbitrary $\beta \ge 1$.
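In more detail (for $d=1$): writing $z=(y-x-ct)t^{-1/\alpha}$, differentiation of the above formula for $p_{t}(x,y)$ gives

```latex
\partial_t p_t(x,y)
= -\frac{1}{\alpha}\,t^{-1/\alpha-1}
   \Big[ g^{(\alpha)}(z) + z\,\big(g^{(\alpha)}\big)'(z) \Big]
  \;-\; c\, t^{-2/\alpha} \big(g^{(\alpha)}\big)'(z),
\qquad z=\frac{y-x-ct}{t^{1/\alpha}}.
```

The bracketed terms, rescaled by $t^{-1/\alpha}$, are dominated by a stable-type density and carry the factor $t^{-1}$, while the drift term carries the factor $t^{-1/\alpha}$; for $\alpha <1$ the latter dominates, which is exactly how $\beta =\max (1,1/\alpha )$ appears in (2).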
Another aim of this paper is to develop tools that would allow us to get the bounds of the form (2) for a wider class of solutions of Lévy driven SDEs. One result of such a type is given in the recent preprint [7], with the process X being a solution of the SDE
(3)
\[dX_{t}=b(X_{t})\hspace{0.1667em}dt+\sigma (X_{t-})\hspace{0.1667em}dZ_{t},\]
where Z is a symmetric α-stable process. The method used therein is a version of the parametrix method, and it is quite sensitive to the form of the Lévy measure of the process Z on the entire ${\mathbb{R}}^{d}$. Currently, apart from the stable noises, various types of “locally stable” noises are frequently used in applied models: tempered stable processes, damped stable processes, and so on. Heuristically, for a “locally stable” process, its “small jump behavior” is the same as for the stable one, whereas the “large jump behavior” may differ drastically and is determined by the “tail behavior” of the particular Lévy measure. Since (2) is genuinely related to the “local behavior” of the process, we can expect that the results of [7] should have a natural extension to the case of “locally stable” Z. However, it is a sophisticated problem to make such a conjecture rigorous; the main reason here is that the parametrix method treats the transition probability of a Lévy process as the “zero approximation” for the unknown transition probability density $p_{t}(x,y)$, and hence any bound for $p_{t}(x,y)$ that one may expect to design within this method is at least as complicated as the respective bound for the process Z. On the other hand, there is an extensive literature on the estimates of transition probability densities for Lévy processes (e.g., [1, 8, 6, 10, 12–17]; this list is far from complete), which shows that these densities inherit the structure of the densities of the corresponding Lévy measures. In particular, in order to get exact two-sided bounds for $p_{t}(x,y)$, quite nontrivial structural assumptions on the “tails” of the Lévy measure should be imposed even in a comparatively simple “locally stable” case.
Motivated by this observation on one hand and by the initial approximation problem which strongly relies on assumption (2) on the other hand, we pose the following general question: Is it possible to give a “rough” upper bound, which would be the same for a large class of transition probability densities of “locally stable processes”, without assuming complicated conditions on the “tails” of their Lévy measures? The answer is positive, and it roughly says that we can get the bound (2), where on the left-hand side, we have the transition probability density of the SDE driven by a “locally stable” process, and on the right-hand side, we have a (properly modified) transition probability density of an α-stable process. This bound is not necessarily precise: the power-type “tail” of the α-stable density might be essentially larger than the “tail,” for example, for an exponentially tempered α-stable law. The gain is, however, that under a mild set of assumptions, we obtain a uniform-in-class upper bound, useful in applications. To keep the exposition reasonably compact, we treat this problem in a comparatively simple case of a one-dimensional SDE of the form (10). The extension of these results to a more general multidimensional case is much more technical, and we postpone it to a separate publication.
The structure of the paper is the following. In Section 2, we formulate and prove our two main results concerning the accuracy of the strong and weak approximations of an integral functional by Riemann sums, provided that condition X is satisfied. In Section 3, we outline a version of the parametrix method, which makes it possible to obtain (2) for solutions to Lévy driven SDEs without strong structural assumptions on the “tails” of the Lévy measure of the noise. In Section 4, an application to the price of an occupation time option is given.

2 Accuracy of discrete approximation for integral functionals

In this section, we prove two results. The first one concerns the “strong approximation rate”, that is, the control on the $L_{p}$-distance between $I_{T}(h)$ and its approximation $I_{T,n}(h)$.
Theorem 2.1.
Suppose that X holds. Then, for any $p>0$,
\[{\big(\mathbb{E}_{x}{\big|I_{T}(h)-I_{T,n}(h)\big|}^{p}\big)}^{1/p}\le C_{T,p}\| h\| {\big(D_{T,\beta }(n)\big)}^{1/2},\]
where $\| h\| =\sup _{x}|h(x)|$,
(4)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle D_{T,\beta }(n)=\left\{\begin{array}{l@{\hskip10.0pt}l}{n}^{-1}\log n,\hspace{1em}& \beta =1,\\{} \max (1,\frac{{T}^{1-\beta }}{\beta -1}){n}^{-1/\beta },\hspace{1em}& \beta >1,\end{array}\right.\\{} & \displaystyle C_{T,p}=\left\{\begin{array}{l@{\hskip10.0pt}l}{(14p(p-1)B_{T,X})}^{1/2}T,\hspace{1em}& p\ge 2,\\{} C_{T,2}={(28B_{T,X})}^{1/2}T,\hspace{1em}& p\in (0,2).\end{array}\right.\end{array}\]
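For numerical use, the rate $D_{T,\beta }(n)$ and the constant $C_{T,p}$ in (4) translate directly into code; the following sketch (variable names are ours) just transcribes the two piecewise definitions:

```python
import math

def D(T, beta, n):
    """The rate D_{T,beta}(n) from (4)."""
    if beta == 1:
        return math.log(n) / n
    return max(1.0, T ** (1 - beta) / (beta - 1)) * n ** (-1.0 / beta)

def C(T, p, B_TX):
    """The constant C_{T,p} from (4); B_TX plays the role of B_{T,X}."""
    if p >= 2:
        return math.sqrt(14 * p * (p - 1) * B_TX) * T
    return math.sqrt(28 * B_TX) * T  # equals C_{T,2}
```

Note that the two branches of $C_{T,p}$ agree at $p=2$: the first gives ${(14\cdot 2\cdot 1\cdot B_{T,X})}^{1/2}T={(28B_{T,X})}^{1/2}T$.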
Remark 2.1.
This theorem extends [2, Theorem 2.1], where it was assumed that $\beta =1$.
The second result concerns the “weak approximation,” that is, the control on the difference between the expectations of certain terms, which involve $I_{T}(h)$ together with its approximation $I_{T,n}(h)$.
Theorem 2.2.
Suppose that X holds. Then, for any $k\in \mathbb{N}$ and any bounded function f, we have
(5)
\[\big|\mathbb{E}_{x}{\big(I_{T}(h)\big)}^{k}f(X_{T})-\mathbb{E}_{x}{\big(I_{T,n}(h)\big)}^{k}f(X_{T})\big|\le {2}^{\beta \vee 2}{k}^{2}B_{T,X}{T}^{k+1}\| h{\| }^{k}\| f\| D_{T,\beta }(n).\]
Remark 2.2.
This theorem extends [3, Theorem 1.1], where it was assumed that $\beta =1$. In the proof, we will concentrate on the case $\beta >1$.
Using the Taylor expansion, we can directly obtain the following corollary of Theorem 2.2.
Corollary 2.1.
Suppose that X holds, and let φ be an analytic function defined in a neighborhood of 0. In addition, suppose that the constants $D_{\varphi },R_{\varphi }>0$ are such that
\[\bigg|\frac{{\varphi }^{(m)}(0)}{m!}\bigg|\le D_{\varphi }{\bigg(\frac{1}{R_{\varphi }}\bigg)}^{m},\hspace{1em}m\ge 0.\]
Then, for any bounded function f and a function h such that $T\| h\| <R_{\varphi }$, we have the following bound:
\[\big|\mathbb{E}_{x}\varphi \big(I_{T}(h)\big)f(X_{T})-\mathbb{E}_{x}\varphi \big(I_{T,n}(h)\big)f(X_{T})\big|\le C_{T,X,h,\varphi }\| f\| D_{T,\beta }(n),\]
where
\[C_{T,X,h,\varphi }={2}^{\beta \vee 2}D_{\varphi }B_{T,X}\frac{{T}^{2}\| h\| }{R_{\varphi }}\bigg(1+\frac{T\| h\| }{R_{\varphi }}\bigg){\bigg(1-\frac{T\| h\| }{R_{\varphi }}\bigg)}^{-3}.\]
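The constant $C_{T,X,h,\varphi }$ arises from summing the bound (5) over k against the Taylor coefficients of φ, using the elementary identity $\sum_{k\ge 1}{k}^{2}{x}^{k}=x(1+x){(1-x)}^{-3}$, $|x|<1$, applied with $x=T\| h\| /R_{\varphi }$. A quick numerical check of this identity (our own sketch):

```python
def partial_sum(x, terms=400):
    """Partial sum of sum_{k>=1} k^2 x^k."""
    return sum(k * k * x ** k for k in range(1, terms + 1))

def closed_form(x):
    """x(1+x)/(1-x)^3 -- the closed form behind C_{T,X,h,phi}."""
    return x * (1.0 + x) / (1.0 - x) ** 3
```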
Before proceeding to the proof of Theorem 2.1, we give an auxiliary result on which this proof is based. This result is, in fact, a weaker version of Theorem 2.2 with $k=1$ and $f\equiv 1$, but we give it separately in order to make the exposition more transparent.
Proposition 2.1.
Suppose that X holds. Then
\[\big|\mathbb{E}_{x}I_{T}(h)-\mathbb{E}_{x}I_{T,n}(h)\big|\le 5B_{T,X}T\| h\| D_{T,\beta }(n).\]
Proof.
Let us introduce the notation used throughout the whole section: for $t\in [kT/n,(k+1)T/n),k\ge 0$, we put $\eta _{n}(t)=\frac{kT}{n},\hspace{2.5pt}\zeta _{n}(t)=\frac{(k+1)T}{n}$; that is, $\eta _{n}(t)$ is the point of the partition $\{Tk/n,\hspace{0.1667em}k\ge 0\}$ of the time axis, closest to t from the left, and $\zeta _{n}(t)$ is the point closest to t from the right that is strictly greater than t.
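In code, the two grid maps can be written as follows (a small sketch of ours, with $\eta _{n}$, $\zeta _{n}$ transliterated as `eta`, `zeta`):

```python
import math

def eta(t, T, n):
    """eta_n(t) = kT/n for t in [kT/n, (k+1)T/n): nearest grid point <= t."""
    return math.floor(t * n / T) * T / n

def zeta(t, T, n):
    """zeta_n(t) = (k+1)T/n: nearest grid point strictly greater than t."""
    return (math.floor(t * n / T) + 1) * T / n
```

In particular, $\zeta _{n}(t)-\eta _{n}(t)=T/n$ for every t, which is the mesh size entering all the estimates below.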
We have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}I_{T}(h)-\mathbb{E}_{x}I_{T,n}(h)& \displaystyle ={\int _{0}^{T}}\mathbb{E}_{x}\big[h(X_{s})-h(X_{\eta _{n}(s)})\big]\hspace{0.1667em}ds\\{} & \displaystyle ={\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}h(y)\big[p_{s}(x,y)-p_{\eta _{n}(s)}(x,y)\big]\hspace{0.1667em}dyds\\{} & \displaystyle =M_{1}+M_{2},\end{array}\]
where
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle M_{1}={\int _{0}^{k_{n,\beta }T/n}}\int _{{\mathbb{R}}^{d}}h(y)\big[p_{s}(x,y)-p_{\eta _{n}(s)}(x,y)\big]\hspace{0.1667em}dyds\hspace{1em}\text{and}\\{} & \displaystyle M_{2}={\int _{k_{n,\beta }T/n}^{T}}\int _{{\mathbb{R}}^{d}}h(y)\big[p_{s}(x,y)-p_{\eta _{n}(s)}(x,y)\big]\hspace{0.1667em}dyds\end{array}\]
for some $1\le k_{n,\beta }\le n$, which will be chosen later. We estimate each term separately.
For $M_{1}$, we have
\[|M_{1}|\le \| h\| {\int _{0}^{k_{n,\beta }T/n}}\int _{{\mathbb{R}}^{d}}\big[p_{s}(x,y)+p_{\eta _{n}(s)}(x,y)\big]\hspace{0.1667em}dyds=2\| h\| T\frac{k_{n,\beta }}{n}.\]
Further, using (2), we get
(6)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle |M_{2}|& \displaystyle \le \| h\| {\int _{k_{n,\beta }T/n}^{T}}\int _{{\mathbb{R}}^{d}}\big|p_{s}(x,y)-p_{\eta _{n}(s)}(x,y)\big|\hspace{0.1667em}dyds\\{} & \displaystyle \le \| h\| {\int _{k_{n,\beta }T/n}^{T}}{\int _{\eta _{n}(s)}^{s}}\int _{{\mathbb{R}}^{d}}\big|\partial _{u}p_{u}(x,y)\big|\hspace{0.1667em}dyduds\\{} & \displaystyle \le B_{T,X}\| h\| {\int _{k_{n,\beta }T/n}^{T}}{\int _{\eta _{n}(s)}^{s}}\int _{{\mathbb{R}}^{d}}{u}^{-\beta }q_{u,x}(y)\hspace{0.1667em}dyduds\\{} & \displaystyle =B_{T,X}\| h\| {\int _{k_{n,\beta }T/n}^{T}}{\int _{\eta _{n}(s)}^{s}}{u}^{-\beta }\hspace{0.1667em}duds\\{} & \displaystyle =B_{T,X}\| h\| \sum \limits_{i=k_{n,\beta }}^{n-1}{\int _{iT/n}^{(i+1)T/n}}{\int _{iT/n}^{s}}{u}^{-\beta }\hspace{0.1667em}duds\\{} & \displaystyle =B_{T,X}\| h\| \sum \limits_{i=k_{n,\beta }}^{n-1}{\int _{iT/n}^{(i+1)T/n}}{\int _{u}^{(i+1)T/n}}{u}^{-\beta }\hspace{0.1667em}dsdu\\{} & \displaystyle \le \frac{T}{n}B_{T,X}\| h\| \sum \limits_{i=k_{n,\beta }}^{n-1}{\int _{iT/n}^{(i+1)T/n}}{u}^{-\beta }\hspace{0.1667em}du=\frac{T}{n}B_{T,X}\| h\| {\int _{k_{n,\beta }T/n}^{T}}{u}^{-\beta }\hspace{0.1667em}du.\end{array}\]
Now we finalize the argument.
1) If $\beta =1$, put $k_{n,\beta }=1,\hspace{2.5pt}n\ge 1$. Then we get
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle |M_{1}|\le 2\| h\| T{n}^{-1},\\{} & \displaystyle |M_{2}|\le \frac{T}{n}B_{T,X}\| h\| {\int _{T/n}^{T}}{u}^{-1}\hspace{0.1667em}du=B_{T,X}T\| h\| {n}^{-1}\log n.\end{array}\]
2) If $\beta >1$, put $k_{n,\beta }=[{n}^{1-1/\beta }]+1$. Then
\[|M_{1}|\le 2\| h\| T\frac{[{n}^{1-1/\beta }]+1}{n}\le 2\| h\| T\frac{{n}^{1-1/\beta }+1}{n}\le 4\| h\| T{n}^{-1/\beta }.\]
To estimate $M_{2}$, observe that
(7)
\[\displaystyle {\int _{k_{n,\beta }T/n}^{T}}{u}^{-\beta }\hspace{0.1667em}du\le \frac{{T}^{1-\beta }}{\beta -1}{\bigg(\frac{k_{n,\beta }}{n}\bigg)}^{1-\beta }\le \frac{{T}^{1-\beta }}{\beta -1}{\bigg(\frac{{n}^{1-1/\beta }}{n}\bigg)}^{1-\beta }\le \frac{{T}^{1-\beta }}{\beta -1}{n}^{1-1/\beta }.\]
Therefore,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle |M_{2}|& \displaystyle \le \frac{T}{n}B_{T,X}\| h\| {\int _{k_{n,\beta }T/n}^{T}}{u}^{-\beta }\hspace{0.1667em}du\le \frac{B_{T,X}}{\beta -1}{T}^{2-\beta }\| h\| {n}^{-1/\beta }.\end{array}\]
 □
Proof of Theorem 2.1.
Since we can obtain the required bound for $p<2$ from the bound with $p=2$ by the Hölder inequality, we consider the case $p\ge 2$ only.
Define
\[J_{t,n}(h):=I_{t}(h)-I_{t,n}(h)={\int _{0}^{t}}\varDelta _{n}(s)ds,\hspace{1em}\varDelta _{n}(s):=h(X_{s})-h(X_{\eta _{n}(s)}).\]
By definition, the function $t\mapsto J_{t,n}(h)$ is absolutely continuous. Then using the Newton–Leibniz formula twice, we get
\[{\big|J_{T,n}(h)\big|}^{p}=p(p-1){\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\varDelta _{n}(s)\Bigg({\int _{s}^{T}}\varDelta _{n}(t)\hspace{0.1667em}dt\Bigg)ds.\]
Therefore,
\[{\big|J_{T,n}(h)\big|}^{p}\le p(p-1)\big({H_{T,n,p}^{1}}(h)+{H_{T,n,p}^{2}}(h)\big),\]
where
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {H_{T,n,p}^{1}}(h)& \displaystyle ={\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\big|\varDelta _{n}(s)\big|\Bigg|{\int _{s}^{\zeta _{n}(s)}}\varDelta _{n}(t)\hspace{0.1667em}dt\Bigg|ds,\\{} \displaystyle {H_{T,n,p}^{2}}(h)& \displaystyle ={\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\varDelta _{n}(s)\Bigg({\int _{\zeta _{n}(s)}^{T}}\varDelta _{n}(t)\hspace{0.1667em}dt\Bigg)ds.\end{array}\]
Let us estimate separately the expectations of ${H_{T,n,p}^{1}}(h)$ and ${H_{T,n,p}^{2}}(h)$. By the Hölder inequality,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}{H_{T,n,p}^{1}}(h)& \displaystyle \le {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}\\{} & \displaystyle \hspace{1em}\times {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|\varDelta _{n}(s)\big|}^{p/2}{\Bigg|{\int _{s}^{\zeta _{n}(s)}}\varDelta _{n}(t)\hspace{0.1667em}dt\Bigg|}^{p/2}\hspace{0.1667em}ds\Bigg)}^{2/p}\\{} & \displaystyle \le {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}{\bigg({\big(2\| h\| \big)}^{p/2}T{\big(2\| h\| \big)}^{p/2}{\bigg(\frac{T}{n}\bigg)}^{p/2}\bigg)}^{2/p}\\{} & \displaystyle =4{T}^{1+2/p}{n}^{-1}\| h{\| }^{2}{\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}.\end{array}\]
Further, observe that for every s the variables
\[\varDelta _{n}(s),\hspace{1em}{\big|J_{s,n}(h)\big|}^{p-2}\varDelta _{n}(s)\]
are $\mathcal{F}_{\zeta _{n}(s)}$-measurable, where $\{\mathcal{F}_{t},t\ge 0\}$ is the natural filtration for the process X. Hence,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}{H_{T,n,p}^{2}}(h)& \displaystyle =\mathbb{E}_{x}\Bigg({\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\varDelta _{n}(s)\mathbb{E}_{x}\Bigg({\int _{\zeta _{n}(s)}^{T}}\varDelta _{n}(t)\hspace{0.1667em}dt\bigg|\mathcal{F}_{\zeta _{n}(s)}\Bigg)ds\Bigg)\\{} & \displaystyle \le \mathbb{E}_{x}\Bigg({\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\big|\varDelta _{n}(s)\big|\Bigg|\mathbb{E}_{x}\Bigg({\int _{\zeta _{n}(s)}^{T}}\varDelta _{n}(t)\hspace{0.1667em}dt\bigg|\mathcal{F}_{\zeta _{n}(s)}\Bigg)\Bigg|ds\Bigg).\end{array}\]
By Proposition 2.1 and the Markov property of X we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \Bigg|\mathbb{E}_{x}\Bigg({\int _{\zeta _{n}(s)}^{T}}\varDelta _{n}(t)\hspace{0.1667em}dt\bigg|\mathcal{F}_{\zeta _{n}(s)}\Bigg)\Bigg|& \displaystyle =\Bigg|E_{X_{\zeta _{n}(s)}}{\int _{0}^{T-\zeta _{n}(s)}}\varDelta _{n}(t)\hspace{0.1667em}dt\Bigg|\\{} & \displaystyle \le 5B_{T,X}TD_{T,\beta }(n)\| h\| .\end{array}\]
Therefore, using the Hölder inequality, we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}{H_{T,n,p}^{2}}(h)& \displaystyle \le 5B_{T,X}TD_{T,\beta }(n)\| h\| \\{} & \displaystyle \hspace{1em}\times {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}{\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|\varDelta _{n}(s)\big|}^{p/2}\hspace{0.1667em}ds\Bigg)}^{2/p}\\{} & \displaystyle \le 10B_{T,X}{T}^{1+2/p}D_{T,\beta }(n)\| h{\| }^{2}{\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}.\end{array}\]
Note that ${n}^{-1}\le D_{T,\beta }(n)$; hence, the bounds for $\mathbb{E}_{x}{H_{T,n,p}^{1}}(h)$ and $\mathbb{E}_{x}{H_{T,n,p}^{2}}(h)$ finally yield the estimate
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}{\big|J_{T,n}(h)\big|}^{p}& \displaystyle \le 14p(p-1)B_{T,X}{T}^{1+2/p}D_{T,\beta }(n)\| h{\| }^{2}\\{} & \displaystyle \hspace{1em}\times {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}.\end{array}\]
It is easily seen that this inequality also holds if $J_{T,n}(h)$ on the left-hand side is replaced by $J_{t,n}(h)$. Integrating over $t\in [0,T]$, we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{t,n}(h)\big|}^{p}\hspace{0.1667em}dt& \displaystyle \le 14p(p-1)B_{T,X}{T}^{2+2/p}D_{T,\beta }(n)\| h{\| }^{2}\\{} & \displaystyle \hspace{1em}\times {\Bigg(\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\Bigg)}^{1-2/p}.\end{array}\]
Because h is bounded, the left-hand side expression in this inequality is finite. Hence, resolving this inequality, we get
\[\mathbb{E}_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\le {\big(14p(p-1)B_{T,X}\big)}^{p/2}{T}^{p+1}{\big(D_{T,\beta }(n)\big)}^{p/2}\| h{\| }^{p},\]
which together with (8) gives the required statement.  □
Proof of Theorem 2.2.
Denote
\[S_{k,a,b}:=\big\{(s_{1},s_{2},\dots ,s_{k})\in {\mathbb{R}}^{k}:a\le s_{1}<s_{2}<\cdots <s_{k}\le b\big\},\hspace{1em}k\in \mathbb{N},\hspace{2.5pt}a,b\in \mathbb{R}.\]
We have
(9)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}_{x}& \displaystyle \big[{\big(I_{T}(h)\big)}^{k}-{\big(I_{T,n}(h)\big)}^{k}\big]f(X_{T})\\{} & \displaystyle \hspace{1em}=k!\hspace{0.1667em}\mathbb{E}_{x}\int _{S_{k,0,T}}\big[h(X_{s_{1}})h(X_{s_{2}})\cdots h(X_{s_{k}})\\{} & \displaystyle \hspace{2em}-h(X_{\eta _{n}(s_{1})})h(X_{\eta _{n}(s_{2})})\cdots h(X_{\eta _{n}(s_{k})})\big]f(X_{T})\prod \limits_{i=1}^{k}ds_{i}\\{} & \displaystyle \hspace{1em}=k!\hspace{0.1667em}\int _{S_{k,0,T}}\int _{{({\mathbb{R}}^{d})}^{k+1}}\Bigg(\prod \limits_{i=1}^{k}h(y_{i})\Bigg)f(z)\Bigg(\prod \limits_{i=1}^{k}p_{s_{i}-s_{i-1}}(y_{i-1},y_{i})\Bigg)\\{} & \displaystyle \hspace{2em}\times p_{T-s_{k}}(y_{k},z)\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i}-k!\hspace{0.1667em}\int _{S_{k,0,T}}\int _{{({\mathbb{R}}^{d})}^{k+1}}\Bigg(\prod \limits_{i=1}^{k}h(y_{i})\Bigg)f(z)\\{} & \displaystyle \hspace{2em}\times \Bigg(\prod \limits_{i=1}^{k}p_{\eta _{n}(s_{i})-\eta _{n}(s_{i-1})}(y_{i-1},y_{i})\Bigg)p_{T-\eta _{n}(s_{k})}(y_{k},z)\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i}\\{} & \displaystyle \hspace{1em}=k!\sum \limits_{r=1}^{k}\int _{S_{k,0,T}}\int _{{({\mathbb{R}}^{d})}^{k+1}}\Bigg(\prod \limits_{i=1}^{k}h(y_{i})\Bigg)f(z){J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\\{} & \displaystyle \hspace{2em}\times dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i},\end{array}\]
where the convention $s_{0}=0,\hspace{2.5pt}s_{k+1}=T,\hspace{2.5pt}y_{0}=x,\hspace{2.5pt}y_{k+1}=z$ is used, and the functions ${J}^{(r)},r=1,\dots ,k$ are defined by the relations
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\\{} & \displaystyle \hspace{1em}=\Bigg(\prod \limits_{i=1}^{r-1}p_{\eta _{n}(s_{i})-\eta _{n}(s_{i-1})}(y_{i-1},y_{i})\Bigg)\\{} & \displaystyle \hspace{2em}\times \big(p_{s_{r}-s_{r-1}}(y_{r-1},y_{r})-p_{\eta _{n}(s_{r})-\eta _{n}(s_{r-1})}(y_{r-1},y_{r})\big)\Bigg(\prod \limits_{i=r}^{k}p_{s_{i+1}-s_{i}}(y_{i},y_{i+1})\Bigg).\end{array}\]
Let us estimate the rth term in the last line in (9). We have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \int _{S_{k,0,T}}& \displaystyle \int _{{({\mathbb{R}}^{d})}^{k+1}}\Bigg(\prod \limits_{i=1}^{k}h(y_{i})\Bigg)f(z){J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i}\\{} & \displaystyle \le \| h{\| }^{k}\| f\| \int _{S_{k,0,T}}\int _{{({\mathbb{R}}^{d})}^{k+1}}\hspace{2.5pt}\big|{J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\big|\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i}.\end{array}\]
Since the case $\beta =1$ was already treated in [3], for the rest of the proof, we assume that $\beta >1$.
Consider two cases: a) $s_{r}-s_{r-1}>k_{n,\beta }T/n$ and b) $s_{r}-s_{r-1}\le k_{n,\beta }T/n$. Recall that $k_{n,\beta }=[{n}^{1-1/\beta }]+1,\hspace{2.5pt}\beta >1$, is defined in the proof of Proposition 2.1.
In case a), using condition X and the Chapman–Kolmogorov equation, we derive
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{{({\mathbb{R}}^{d})}^{k+1}}\big|{J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\big|\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\\{} & \displaystyle \hspace{1em}\le B_{T,X}\int _{{({\mathbb{R}}^{d})}^{2}}p_{\eta _{n}(s_{r-1})-\eta _{n}(s_{0})}(x,y_{r-1})\\{} & \displaystyle \hspace{2em}\times \Bigg|{\int _{\eta _{n}(s_{r})-\eta _{n}(s_{r-1})}^{s_{r}-s_{r-1}}}{v}^{-\beta }q_{v,y_{r-1}}(y_{r})\hspace{0.1667em}dv\Bigg|\hspace{0.1667em}dy_{r-1}dy_{r}.\end{array}\]
Since $k_{n,\beta }\ge 2$, in the case a) we have $s_{r}-s_{r-1}\ge 2T/n$, and hence
\[\eta _{n}(s_{r})-\eta _{n}(s_{r-1})\ge s_{r}-\frac{T}{n}-s_{r-1}\ge \frac{s_{r}-s_{r-1}}{2}.\]
Therefore, using the fact that $q_{t,y}(\cdot )$ is the probability density for any $t>0$ and $y\in {\mathbb{R}}^{d}$, we finally get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \int _{{({\mathbb{R}}^{d})}^{k+1}}\big|{J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\big|\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}& \displaystyle \le B_{T,X}{\int _{s_{r}-s_{r-1}-T/n}^{s_{r}-s_{r-1}}}{v}^{-\beta }\hspace{0.1667em}dv\\{} & \displaystyle \le \frac{B_{T,X}T}{n}{\bigg(\frac{s_{r}-s_{r-1}}{2}\bigg)}^{-\beta }.\end{array}\]
In case b), we simply apply the Chapman–Kolmogorov equation:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{{({\mathbb{R}}^{d})}^{k+1}}\big|{J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\big|\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\\{} & \displaystyle \hspace{1em}\le \int _{{({\mathbb{R}}^{d})}^{2}}p_{\eta _{n}(s_{r-1})-\eta _{n}(s_{0})}(x,y_{r-1})\\{} & \displaystyle \hspace{2em}\times \big(p_{s_{r}-s_{r-1}}(y_{r-1},y_{r})+p_{\eta _{n}(s_{r})-\eta _{n}(s_{r-1})}(y_{r-1},y_{r})\big)\hspace{0.1667em}dy_{r-1}dy_{r}\le 2.\end{array}\]
Therefore, summarizing the estimates obtained in cases a) and b) we get, using (7), the estimates
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \int _{S_{k,0,T}}\int _{{({\mathbb{R}}^{d})}^{k+1}}\hspace{2.5pt}\big|{J_{s_{1},\dots ,s_{k},T}^{(r)}}(x,y_{1},\dots ,y_{k},z)\big|\hspace{0.1667em}dz\prod \limits_{j=1}^{k}dy_{j}\prod \limits_{i=1}^{k}ds_{i}\\{} & \displaystyle \hspace{1em}\le \frac{B_{T,X}T}{n}{2}^{\beta }{\int _{0}^{T}}{\int _{0}^{s_{r}-k_{n,\beta }T/n}}\frac{{s_{r-1}^{r-1}}}{(r-1)!}{(s_{r}-s_{r-1})}^{-\beta }\frac{{(T-s_{r})}^{k-r}}{(k-r)!}\hspace{0.1667em}ds_{r-1}ds_{r}\\{} & \displaystyle \hspace{2em}+2{\int _{0}^{T}}{\int _{s_{r}-k_{n,\beta }T/n}^{s_{r}}}\frac{{s_{r-1}^{r-1}}}{(r-1)!}\frac{{(T-s_{r})}^{k-r}}{(k-r)!}\hspace{0.1667em}ds_{r-1}ds_{r}\\{} & \displaystyle \hspace{1em}\le \frac{B_{T,X}T}{n}{2}^{\beta }{\int _{0}^{T}}\frac{{s_{r}^{r-1}}}{(r-1)!}\frac{{(T-s_{r})}^{k-r}}{(k-r)!}\hspace{0.1667em}ds_{r}\Bigg({\int _{k_{n,\beta }T/n}^{T}}{u}^{-\beta }\hspace{0.1667em}du\Bigg)\\{} & \displaystyle \hspace{2em}+\frac{2Tk_{n,\beta }}{n}{\int _{0}^{T}}\frac{{s_{r}^{r-1}}}{(r-1)!}\frac{{(T-s_{r})}^{k-r}}{(k-r)!}\hspace{0.1667em}ds_{r}\\{} & \displaystyle \hspace{1em}\le \frac{{2}^{\beta }B_{T,X}{T}^{k+2-\beta }}{(\beta -1)(k-1)!}{n}^{-1/\beta }+\frac{4{T}^{k+1}}{(k-1)!}{n}^{-1/\beta }\le \frac{{2}^{\beta \vee 2}{T}^{k+1}B_{T,X}D_{T,\beta }(n)}{(k-1)!},\end{array}\]
where in the fourth and fifth lines we used that ${s_{r-1}^{r-1}}\le {s_{r}^{r-1}}$. Taking into account that in (9) we have k terms and the common multiplier $k!$, we finally arrive at (5).  □

3 Condition X for solutions to Lévy driven SDEs

Consider the SDE
(10)
\[dX_{t}=b(X_{t})\hspace{0.1667em}dt+dZ_{t},\hspace{1em}X_{0}=x,\]
where Z is a real-valued Lévy process. In [7], it was shown that if $Z_{t}$ is a symmetric α-stable process and $b(\cdot )$ is bounded and Lipschitz continuous, then the solution to Eq. (10) satisfies condition X with $\beta =\max (1,1/\alpha )$ (in fact, in [7], more general multidimensional SDEs of the form (3) are considered). In this section, we outline the argument that makes it possible to extend the class of “Lévy noises”. Namely, we omit the requirement that Z be symmetric and relax the stability assumption, requiring only that Z be “locally α-stable” in the sense specified below.
Recall that the characteristic function of a Lévy process is of the form
\[\mathbb{E}{e}^{i\xi Z_{t}}={e}^{-t\psi (\xi )},\hspace{1em}t>0,\hspace{0.1667em}\xi \in \mathbb{R},\]
where the characteristic exponent ψ admits the Lévy–Khinchin representation
(11)
\[\psi (\xi )=-ia\xi +\frac{1}{2}{\sigma }^{2}{\xi }^{2}+\int _{\mathbb{R}}\big(1-{e}^{i\xi u}+i\xi u\mathbb{1}_{\{|u|\le 1\}}\big)\mu (du).\]
In what follows, we assume that ${\sigma }^{2}=0$ and the Lévy measure μ is of the form
(12)
\[\mu (du)=C_{+}{u}^{-1-\alpha }\mathbb{1}_{u\in (0,1)}du+C_{-}|u{|}^{-1-\alpha }\mathbb{1}_{u\in (-1,0)}du+m(u)du\]
with some $C_{\pm }\ge 0$ and $m(u)\ge 0$ such that $m(u)=0$ for $|u|\le 1$ and
(13)
\[m(u)\le c|u{|}^{-1-\alpha },\hspace{1em}|u|\ge 1.\]
On the interval $[-1,1]$, the Lévy measure μ given by (12) coincides with the Lévy measure of a (nonsymmetric) α-stable process. This is why we call Z a “locally α-stable” process: its “local behavior” near the origin is similar to that of the α-stable process. In this context, condition (13) means that the “tails” of the Lévy measure μ are dominated by the “tails” of the α-stable Lévy measure.
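As a concrete instance of (12)–(13), an exponentially tempered α-stable tail satisfies the domination condition, since the tempering factor is at most 1. The sketch below (with illustrative parameters of ours) checks this pointwise:

```python
import math

def m_tempered(u, alpha=0.5, c=1.0, lam=1.0):
    """A hypothetical tail density m(u) for (12): exponentially tempered
    stable tails, m(u) = c|u|^{-1-alpha} exp(-lam(|u|-1)) for |u| >= 1,
    zero on [-1, 1]."""
    if abs(u) < 1.0:
        return 0.0
    return c * abs(u) ** (-1.0 - alpha) * math.exp(-lam * (abs(u) - 1.0))

def stable_tail_bound(u, alpha=0.5, c=1.0):
    """Right-hand side of (13): c|u|^{-1-alpha}."""
    return c * abs(u) ** (-1.0 - alpha)
```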
Let us impose three minor conventions, which will simplify the technicalities. First, since we are mostly interested in the case $\beta >1$, we assume that $\alpha <1$. Second, the latter assumption assures that the integral
\[\int _{\{|u|\le 1\}}u\mu (du)\]
is well defined, and we assume that the constant a in (11) equals this integral; that is, ψ has the form
\[\psi (\xi )=\int _{\mathbb{R}}\big(1-{e}^{i\xi u}\big)\mu (du).\]
Clearly, this does not restrict the generality because we can change the constant a by changing appropriately the drift coefficient $b(\cdot )$ in (10). Finally, in order to avoid the usage of the Rademacher theorem (see [7, Lemma 7.4] for the case where b is just Lipschitz continuous), let us assume that $b\in {C}^{1}(\mathbb{R})$.
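Under these conventions, since m vanishes on $[-1,1]$, the centering constant equals $a={\int _{\{|u|\le 1\}}}u\hspace{0.1667em}\mu (du)=(C_{+}-C_{-})/(1-\alpha )$. A numerical sanity check of this value (a sketch of ours, with a plain midpoint rule):

```python
def drift_constant(C_plus, C_minus, alpha, N=200_000):
    """Midpoint-rule approximation of int_{|u|<=1} u mu(du) with
    mu(du) = C_+ u^{-1-alpha} 1_{(0,1)} du + C_- |u|^{-1-alpha} 1_{(-1,0)} du.
    For alpha < 1 this converges to (C_+ - C_-)/(1 - alpha)."""
    total = 0.0
    h = 1.0 / N
    for i in range(N):
        u = (i + 0.5) * h                                   # midpoint in (0, 1)
        total += u * C_plus * u ** (-1.0 - alpha) * h       # positive side
        total += (-u) * C_minus * u ** (-1.0 - alpha) * h   # negative side
    return total
```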
In what follows, we show how the parametrix construction developed in [7] can be modified to provide the representation and the bounds for the transition probability density $p_{t}(x,y)$ of the solution to (10) driven by the “locally stable” noise Z.
Let us introduce some notation and give some preliminaries. We denote the space and the space-time convolutions respectively by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle (f\ast g)(x,y)& \displaystyle :=\int _{{\mathbb{R}}^{d}}f(x,z)g(z,y)\hspace{0.1667em}dz,\\{} \displaystyle (f\circledast g)_{t}(x,y)& \displaystyle :={\int _{0}^{t}}(f_{t-s}\ast g_{s})(x,y)\hspace{0.1667em}ds={\int _{0}^{t}}\int _{{\mathbb{R}}^{d}}f_{t-s}(x,z)g_{s}(z,y)\hspace{0.1667em}dzds.\end{array}\]
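On a discrete space-time grid, these two convolutions take the following form (a sketch of ours with uniform grids; the matrix product implements the z-integral):

```python
import numpy as np

def space_conv(f, g, dz):
    """(f * g)(x, y) = int f(x, z) g(z, y) dz on a uniform z-grid:
    f and g are (n_x, n_x) matrices indexed by (x, z) and (z, y)."""
    return f @ g * dz

def spacetime_conv(f, g, dz, dt):
    """(f circledast g)_t(x, y) = int_0^t (f_{t-s} * g_s)(x, y) ds.
    f, g: arrays of shape (n_t, n_x, n_x); f[k] ~ f at time k*dt."""
    out = np.zeros_like(f)
    for k in range(f.shape[0]):       # t = k*dt
        for j in range(k + 1):        # s = j*dt, so t - s = (k-j)*dt
            out[k] += space_conv(f[k - j], g[j], dz) * dt
    return out
```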
Generically, the parametrix construction provides a representation of the required transition probability density in the form
(14)
\[p_{t}(x,y)={p_{t}^{0}}(x,y)+{\int _{0}^{t}}\int _{\mathbb{R}}{p_{t-s}^{0}}(x,z)\varPsi _{s}(z,y)\hspace{0.1667em}dzds,\hspace{1em}t>0,\hspace{2.5pt}\hspace{2.5pt}x,y\in \mathbb{R}.\]
Here ${p_{t}^{0}}(x,y)$ is a “zero approximation term” for the unknown $p_{t}(x,y)$, and the function $\varPsi _{t}(x,y)$ is given by the “convolution series”
(15)
\[\varPsi _{t}(x,y)=\sum \limits_{k=1}^{\infty }{\varPhi _{t}^{\circledast k}}(x,y),\hspace{1em}t>0,\hspace{2.5pt}\hspace{2.5pt}x,y\in \mathbb{R},\]
where the function $\varPhi _{t}(x,y)$ depends on the particular choice of ${p_{t}^{0}}(x,y)$ and equals
(16)
\[\varPhi _{t}(x,y):=(L_{x}-\partial _{t}){p_{t}^{0}}(x,y),\]
where
\[\begin{array}{r@{\hskip0pt}l}\displaystyle Lf(x):& \displaystyle =b(x){f^{\prime }}(x)+\int _{\mathbb{R}}\big(f(x+u)-f(x)\big)\mu (du),\hspace{1em}f\in {C_{b}^{2}}(\mathbb{R}),\end{array}\]
is the formal generator of the process X. The subscript x means that the operator L is applied with respect to the variable x. Note that to make this construction feasible, we should properly choose the “zero approximation term” ${p_{t}^{0}}(x,y)$, so that the convolution series (15) converges and the space-time convolution in (14) is well defined. To introduce in our setting such ${p_{t}^{0}}(x,y)$ and then to construct the bounds for the associated $\varPhi _{t}(x,y)$ and its convolution powers, we need some more notation.
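The role of (15) and (16) can be clarified by the following standard formal computation (we argue formally here, postponing all convergence issues). Since ${p_{t}^{0}}(x,\cdot )$ acts as an approximate identity as $t\to 0+$, applying $\partial _{t}-L_{x}$ to (14) gives
\[(\partial _{t}-L_{x})p_{t}(x,y)=-\varPhi _{t}(x,y)+\varPsi _{t}(x,y)-(\varPhi \circledast \varPsi )_{t}(x,y).\]
Hence, $p_{t}(x,y)$ formally solves the backward equation $(\partial _{t}-L_{x})p_{t}(x,y)=0$ if and only if Ψ solves the integral equation $\varPsi =\varPhi +\varPhi \circledast \varPsi $, and iterating this equation yields precisely the convolution series (15).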
Denote by ${Z}^{(\alpha ,C_{\pm })}$ the α-stable process with the Lévy measure $\mu _{\alpha ,C_{\pm }}(du)={m}^{(\alpha ,C_{\pm })}(u)\hspace{0.1667em}du$,
\[{m}^{(\alpha ,C_{\pm })}(u):=C_{+}{u}^{-1-\alpha }\mathbb{1}_{u>0}+C_{-}{(-u)}^{-1-\alpha }\mathbb{1}_{u<0},\]
and the characteristic exponent
\[{\psi }^{(\alpha ,C_{\pm })}(\xi )=\int _{\mathbb{R}}\big(1-{e}^{i\xi u}\big){\mu }^{(\alpha ,C_{\pm })}(du).\]
Note that since
\[{\psi }^{(\alpha ,C_{\pm })}(c\xi )={c}^{\alpha }{\psi }^{(\alpha ,C_{\pm })}(\xi ),\hspace{1em}c>0,\]
the process ${Z}^{(\alpha ,C_{\pm })}$ possesses the scaling property
\[\mathrm{Law}\hspace{0.1667em}\big({Z_{ct}^{(\alpha ,C_{\pm })}}\big)=\mathrm{Law}\hspace{0.1667em}\big({c}^{1/\alpha }{Z_{t}^{(\alpha ,C_{\pm })}}\big),\hspace{1em}c>0.\]
Denote by ${g_{t}^{(\alpha ,C_{\pm })}}$ the distribution density of ${Z_{t}^{(\alpha ,C_{\pm })}}$. By the scaling property we have
\[{g_{t}^{(\alpha ,C_{\pm })}}(x)={t}^{-1/\alpha }{g}^{(\alpha ,C_{\pm })}\big(x{t}^{-1/\alpha }\big),\hspace{1em}{g}^{(\alpha ,C_{\pm })}:={g_{1}^{(\alpha ,C_{\pm })}}.\]
Denote also by ${Z}^{(\alpha )}$ the symmetric α-stable process, that is, the process of the form introduced before with $C_{+}=C_{-}=1$. Let ${g_{t}^{(\alpha )}}$ be the respective distribution density, and ${g}^{(\alpha )}:={g_{1}^{(\alpha )}}$.
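The scaling identity above is easy to verify numerically. The following sketch (Python; it normalizes the symmetric characteristic exponent to $|\xi {|}^{\alpha }$, which is immaterial for the scaling, and the values of α, t, x and the quadrature parameters are arbitrary illustrative choices) recovers ${g_{t}^{(\alpha )}}$ by Fourier inversion and checks ${g_{t}^{(\alpha )}}(x)={t}^{-1/\alpha }{g}^{(\alpha )}(x{t}^{-1/\alpha })$:

```python
import numpy as np

ALPHA = 0.7  # any alpha in (0, 1), in line with the convention alpha < 1

def g(t, x, xi_max=200.0, n=200_001):
    # Fourier inversion of the characteristic function exp(-t |xi|^alpha):
    # g_t(x) = (1/pi) * \int_0^inf exp(-t xi^alpha) cos(x xi) dxi
    xi = np.linspace(0.0, xi_max, n)
    vals = np.exp(-t * xi ** ALPHA) * np.cos(x * xi)
    h = xi[1] - xi[0]
    return ((vals.sum() - 0.5 * (vals[0] + vals[-1])) * h) / np.pi  # trapezoid rule

t, x = 2.0, 0.5
lhs = g(t, x)                                            # g_t(x)
rhs = t ** (-1.0 / ALPHA) * g(1.0, x * t ** (-1.0 / ALPHA))  # t^{-1/a} g(x t^{-1/a})
```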
Finally, denote by $\chi _{t}(x)$ and $\theta _{t}(y)$, respectively, the solutions to the ODEs
(17)
\[d\chi _{t}=b(\chi _{t})dt,\hspace{1em}\chi _{0}=x,\hspace{2em}d\theta _{t}=-b(\theta _{t})dt,\hspace{1em}\theta _{0}=y.\]
Note that these solutions exist because $b(\cdot )$ is Lipschitz continuous.
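The two flows are mutually inverse: running the second ODE for time t and then the first undoes the motion, that is, $\chi _{t}(\theta _{t}(y))=y$. A minimal Euler sketch illustrates this numerically (the drift $b(x)=\sin x+2$ is a hypothetical Lipschitz example, and the step count is an arbitrary choice):

```python
import math

def euler_flow(drift, x0, t, n=20000):
    # explicit Euler scheme for the ODE dX = drift(X) dt, X_0 = x0
    x, h = x0, t / n
    for _ in range(n):
        x += h * drift(x)
    return x

def b(x):
    return math.sin(x) + 2.0      # hypothetical C^1, Lipschitz drift

def chi(x, t):                    # chi_t(x): forward flow of b
    return euler_flow(b, x, t)

def theta(y, t):                  # theta_t(y): flow of -b (time reversal)
    return euler_flow(lambda u: -b(u), y, t)

y, t = 1.3, 1.0
roundtrip = chi(theta(y, t), t)   # approximately equal to y
```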
Now we are ready to formulate the main statement of this section.
Theorem 3.1.
Let
(18)
\[{p_{t}^{0}}(x,y):={g_{t}^{(\alpha ,C_{\pm })}}\big(\theta _{t}(y)-x\big).\]
Then the convolution series (15) is well defined, and Eq. (14) gives a representation of the transition probability density $p_{t}(x,y)$ of the process X. This density and its time derivative have the following upper bounds:
(19)
\[p_{t}(x,y)\le B_{T,X}\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(y-\chi _{t}(x)\big),\hspace{1em}t\in (0,T],\hspace{2.5pt}\hspace{2.5pt}x,y\in \mathbb{R},\]
(20)
\[\big|\partial _{t}p_{t}(x,y)\big|\le B_{T,X}{t}^{-1/\alpha }\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(y-\chi _{t}(x)\big),\hspace{1em}t\in (0,T],\hspace{2.5pt}\hspace{2.5pt}x,y\in \mathbb{R}.\]
Consequently, the process X satisfies condition X with $\beta =1/\alpha $ and
\[q_{t,x}(y)=\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(y-\chi _{t}(x)\big).\]
Proof.
First, we evaluate $\varPhi _{t}(x,y)$. If not stated otherwise, in all our estimates, we further assume that $t\in (0,T]$ for some $T>0$ and that $x,y\in \mathbb{R}$. Observe that ${g}^{(\alpha ,C_{\pm })}\in {C_{b}^{2}}(\mathbb{R})$. Indeed, this property easily follows from the Fourier inversion formula and the expression for the characteristic function. It is known that ${g_{t}^{(\alpha ,C_{\pm })}}(y-x)$ is the fundamental solution to the Cauchy problem for $\partial _{t}-{L}^{(\alpha ,C_{\pm })}$, where ${L}^{(\alpha ,C_{\pm })}$ denotes the generator of the process ${Z}^{(\alpha ,C_{\pm })}$:
(21)
\[{L}^{(\alpha ,C_{\pm })}f(x)=\int _{\mathbb{R}}\big(f(x+u)-f(x)\big){\mu }^{(\alpha ,C_{\pm })}(du),\hspace{1em}f\in {C_{b}^{2}}(\mathbb{R}).\]
Since
\[\big(\partial _{t}-{L_{x}^{(\alpha ,C_{\pm })}}\big){g_{t}^{(\alpha ,C_{\pm })}}(y-x)=0,\]
we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \partial _{t}{p_{t}^{0}}(x,y)& \displaystyle =\bigg[\partial _{t}{g_{t}^{(\alpha ,C_{\pm })}}(w)+\frac{\partial _{t}\theta _{t}(y)}{{t}^{2/\alpha }}{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)\bigg]\bigg|_{w=\theta _{t}(y)-x}\\{} & \displaystyle =\bigg[\hspace{-0.1667em}\frac{1}{{t}^{1+1/\alpha }}\big({L}^{(\alpha ,C_{\pm })}{g}^{(\alpha ,C_{\pm })}\big)\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)+\hspace{0.1667em}\frac{\partial _{t}\theta _{t}(y)}{{t}^{2/\alpha }}{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)\hspace{-0.1667em}\bigg]\bigg|_{w=\theta _{t}(y)-x},\end{array}\]
where in the last identity we used the scaling property of ${g_{t}^{(\alpha ,C_{\pm })}}$ and the fact that ${L}^{(\alpha ,C_{\pm })}$ is a homogeneous operator of order $\alpha $, which together give $\partial _{t}{g_{t}^{(\alpha ,C_{\pm })}}(w)={t}^{-1-1/\alpha }({L}^{(\alpha ,C_{\pm })}{g}^{(\alpha ,C_{\pm })})(w{t}^{-1/\alpha })$. Next, by the definitions of L and ${L}^{(\alpha ,C_{\pm })}$ we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle L_{x}{p_{t}^{0}}(x,y)& \displaystyle =\bigg[\frac{1}{{t}^{1+1/\alpha }}\big({L}^{(\alpha ,C_{\pm })}{g}^{(\alpha ,C_{\pm })}\big)\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)-\frac{b(x)}{{t}^{2/\alpha }}{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)\bigg]\bigg|_{w=\theta _{t}(y)-x}\\{} & \displaystyle \hspace{1em}+\bigg[\int _{|u|\ge 1}\bigg(\frac{1}{{t}^{1/\alpha }}{g}^{(\alpha ,C_{\pm })}\bigg(\frac{w-u}{{t}^{1/\alpha }}\bigg)-\frac{1}{{t}^{1/\alpha }}{g}^{(\alpha ,C_{\pm })}\bigg(\frac{w}{{t}^{1/\alpha }}\bigg)\bigg)\\{} & \displaystyle \hspace{1em}\times \big(m(u)-{m}^{(\alpha ,C_{\pm })}(u)\big)\hspace{0.1667em}du\bigg]\bigg|_{w=\theta _{t}(y)-x}.\end{array}\]
Therefore, using the relation $\partial _{t}\theta _{t}(y)=-b(\theta _{t}(y))$, we get
(22)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varPhi _{t}(x,y)& \displaystyle =(L_{x}-\partial _{t}){p_{t}^{0}}(x,y)\\{} & \displaystyle =\frac{b(\theta _{t}(y))-b(x)}{{t}^{2/\alpha }}{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }\bigg(\frac{\theta _{t}(y)-x}{{t}^{1/\alpha }}\bigg)\\{} & \displaystyle \hspace{1em}+\frac{1}{{t}^{1/\alpha }}\int _{|u|\ge 1}\bigg({g}^{(\alpha ,C_{\pm })}\bigg(\frac{\theta _{t}(y)-x-u}{{t}^{1/\alpha }}\bigg)-{g}^{(\alpha ,C_{\pm })}\bigg(\frac{\theta _{t}(y)-x}{{t}^{1/\alpha }}\bigg)\bigg)\\{} & \displaystyle \hspace{1em}\times \big(m(u)-{m}^{(\alpha ,C_{\pm })}(u)\big)\hspace{0.1667em}du\\{} & \displaystyle =:{\varPhi _{t}^{1}}(x,y)+{\varPhi _{t}^{2}}(x,y).\end{array}\]
Further, we find the bounds for the absolute values of ${\varPhi _{t}^{1}}(x,y)$, ${\varPhi _{t}^{2}}(x,y)$, and $\varPhi _{t}(x,y)$. In what follows, D denotes a generic constant whose value may be different from place to place. We have
(23)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {g}^{(\alpha ,C_{\pm })}(x)& \displaystyle \le D{g}^{(\alpha )}(x),\hspace{1em}\hspace{1em}x\in \mathbb{R},\end{array}\]
(24)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }(x)\big|& \displaystyle \le D{\big(1+|x|\big)}^{-1}{g}^{(\alpha )}(x),\hspace{1em}x\in \mathbb{R},\end{array}\]
(25)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime\prime }(x)\big|& \displaystyle \le D{\big(1+|x|\big)}^{-2}{g}^{(\alpha )}(x),\hspace{1em}x\in \mathbb{R}.\end{array}\]
Since the argument used in the proof of (23)–(25) is quite standard (see, e.g., [7, Appendix A]), we omit the details.
By (24) and the Lipschitz continuity of $b(\cdot )$ we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{\varPhi _{t}^{1}}(x,y)\big|& \displaystyle \le \frac{D|x-\theta _{t}(y)|}{{t}^{2/\alpha }}\bigg|{\big({g}^{(\alpha ,C_{\pm })}\big)}^{\prime }\bigg(\frac{\theta _{t}(y)-x}{{t}^{1/\alpha }}\bigg)\bigg|\\{} & \displaystyle \le \frac{D}{{t}^{1/\alpha }}{g}^{(\alpha )}\bigg(\frac{\theta _{t}(y)-x}{{t}^{1/\alpha }}\bigg)=D{g_{t}^{(\alpha )}}\big(\theta _{t}(y)-x\big).\end{array}\]
To get the estimate for $|{\varPhi _{t}^{2}}(x,y)|$, we first observe that
\[\big|m(u)-{m}^{(\alpha ,C_{\pm })}(u)\big|\mathbb{1}_{\{|u|\ge 1\}}\le D{g}^{(\alpha )}(u),\]
which implies
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{\varPhi _{t}^{2}}(x,y)\big|& \displaystyle \le \frac{D}{{t}^{1/\alpha }}\int _{|u|\ge 1}{g}^{(\alpha ,C_{\pm })}\bigg(\frac{\theta _{t}(y)-x-u}{{t}^{1/\alpha }}\bigg){g}^{(\alpha )}(u)\hspace{0.1667em}du\\{} & \displaystyle \hspace{1em}+\frac{D}{{t}^{1/\alpha }}{g}^{(\alpha ,C_{\pm })}\bigg(\frac{\theta _{t}(y)-x}{{t}^{1/\alpha }}\bigg).\end{array}\]
Taking into account (23), we deduce that
\[\big|{\varPhi _{t}^{2}}(x,y)\big|\le D\big({g_{t}^{(\alpha )}}\ast {g_{1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big)=D\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Combining the estimates for ${\varPhi _{t}^{1}}(x,y)$ and ${\varPhi _{t}^{2}}(x,y)$, we get
(26)
\[\big|\varPhi _{t}(x,y)\big|\le D\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Our next step is to estimate the convolution powers of Φ. It is shown in [7, Appendix B] that the kernel ${g_{t}^{(\alpha )}}(\theta _{t}(y)-x)$ possesses the following sub-convolution property:
(27)
\[\int _{\mathbb{R}}{g_{t-s}^{(\alpha )}}\big(\theta _{t-s}(z)-x\big){g_{s}^{(\alpha )}}\big(\theta _{s}(y)-z\big)\hspace{0.1667em}dz\le D{g_{t}^{(\alpha )}}\big(\theta _{t}(y)-x\big).\]
By this property we get
(28)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \int _{\mathbb{R}}{g_{t+1-s}^{(\alpha )}}\big(\theta _{t-s}(z)-x\big){g_{s}^{(\alpha )}}\big(\theta _{s}(y)-z\big)\hspace{0.1667em}dz& \displaystyle \le D{g_{t+1}^{(\alpha )}}\big(\theta _{t}(y)-x\big),\\{} \displaystyle \int _{\mathbb{R}}{g_{t-s+1}^{(\alpha )}}\big(\theta _{t-s}(z)-x\big){g_{s+1}^{(\alpha )}}\big(\theta _{s}(y)-z\big)\hspace{0.1667em}dz& \displaystyle \le D{g_{t+2}^{(\alpha )}}\big(\theta _{t}(y)-x\big)\\{} & \displaystyle \le D{g_{t+1}^{(\alpha )}}\big(\theta _{t}(y)-x\big),\end{array}\]
where in the last line we used that ${g_{2}^{(\alpha )}}\le D{g_{1}^{(\alpha )}}$, and therefore ${g_{t+2}^{(\alpha )}}={g_{t}^{(\alpha )}}\ast {g_{2}^{(\alpha )}}\le D{g_{t}^{(\alpha )}}\ast {g_{1}^{(\alpha )}}=D{g_{t+1}^{(\alpha )}}$. Having these estimates, we deduce in the same way as in [7, Section 3] that
(29)
\[\big|{\varPhi _{t}^{\circledast k}}(x,y)\big|\le \frac{C_{0}{(Dt)}^{k-1}}{k!}\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Therefore, the series (15) converges absolutely for $(t,x,y)\in (0,\infty )\times \mathbb{R}\times \mathbb{R}$, and
\[\big|\varPsi _{t}(x,y)\big|\le D\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Applying once again the sub-convolution property (27), we see that the convolution ${p}^{0}\circledast \varPsi $ is well defined and
\[\big|\big({p}^{0}\circledast \varPsi \big)_{t}(x,y)\big|\le D\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Thus, expression (14) for $p_{t}(x,y)$ is well defined for any $(t,x,y)\in (0,\infty )\times \mathbb{R}\times \mathbb{R}$, and
\[\big|p_{t}(x,y)\big|\le D\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Finally, to get (19), we use the following inequalities, which were proved in [7, Appendix B]:
(30)
\[c\big|\theta _{t}(y)-x\big|\le \big|\chi _{t}(x)-y\big|\le C\big|\theta _{t}(y)-x\big|.\]
Since for any constant $c>0$, we have ${g_{t}^{(\alpha )}}(x)\asymp {g_{t}^{(\alpha )}}(cx)$, this completes the proof of (19).
Our final step is to use representation (14) in order to find bounds for $\partial _{t}p_{t}(x,y)$. Since ${p_{t}^{0}}(x,y)$ and $\varPhi _{t}(x,y)$ are given explicitly, it is straightforward to show that these functions are differentiable with respect to t and to check using (23)–(25) that
(31)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|\partial _{t}{p_{t}^{0}}(x,y)\big|& \displaystyle \le D{t}^{-1/\alpha }{g_{t}^{(\alpha )}}\big(\theta _{t}(y)-x\big),\end{array}\]
(32)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|\partial _{t}\varPhi _{t}(x,y)\big|& \displaystyle \le D{t}^{-1/\alpha }\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\end{array}\]
To show that the convolution powers ${\varPhi _{t}^{\circledast k}}(x,y)$ are differentiable in t and to get the upper bounds, we use the following trick. The expression for ${\varPhi _{t}^{\circledast (k+1)}}(x,y)$ can be reorganized as follows:
(33)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\varPhi _{t}^{\circledast (k+1)}}(x,y)& \displaystyle ={\int _{0}^{t}}\int _{\mathbb{R}}{\varPhi _{t-s}^{\circledast k}}(x,z)\varPhi _{s}(z,y)\hspace{0.1667em}dzds\\{} & \displaystyle ={\int _{0}^{t/2}}\int _{\mathbb{R}}{\varPhi _{t-s}^{\circledast k}}(x,z)\varPhi _{s}(z,y)\hspace{0.1667em}dzds\\{} & \displaystyle \hspace{1em}+{\int _{0}^{t/2}}\int _{\mathbb{R}}{\varPhi _{s}^{\circledast k}}(x,z)\varPhi _{t-s}(z,y)\hspace{0.1667em}dzds.\end{array}\]
If $k=1$, then the first identity in (33) does not allow us to differentiate ${\varPhi _{t}^{\circledast 2}}(x,y)$ in t because the upper bound for $\partial _{t}\varPhi _{t-s}(x,z)$ has a nonintegrable singularity ${(t-s)}^{-1/\alpha }$ in the vicinity of the point $s=t$ (recall that $\alpha <1$). However, the identity given by the second and third lines in (33) does not contain such singularities, and we can show by induction that for any $k\ge 1$, the function ${\varPhi _{t}^{\circledast k}}(x,y)$ is continuously differentiable in t, satisfies
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \partial _{t}{\varPhi _{t}^{\circledast (k+1)}}(x,y)& \displaystyle ={\int _{0}^{t/2}}\int _{\mathbb{R}}\big(\partial _{t}{\varPhi }^{\circledast k}\big)_{t-s}(x,z)\varPhi _{s}(z,y)\hspace{0.1667em}dzds\\{} & \displaystyle \hspace{1em}+{\int _{0}^{t/2}}\int _{\mathbb{R}}{\varPhi _{s}^{\circledast k}}(x,z)(\partial _{t}\varPhi )_{t-s}(z,y)\hspace{0.1667em}dzds\\{} & \displaystyle \hspace{1em}+\int _{\mathbb{R}}{\varPhi _{t/2}^{\circledast k}}(x,z)\varPhi _{t/2}(z,y)\hspace{0.1667em}dz,\end{array}\]
and possesses the bound
(34)
\[\big|\partial _{t}{\varPhi _{t}^{\circledast k}}(x,y)\big|\le \frac{C_{0}{(Dt)}^{k-1}{t}^{-1/\alpha }}{k!}\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big),\hspace{1em}k\ge 1.\]
Since the proof is completely analogous to the proof of [7, Lemma 7.3], we omit the details.
From (34) we derive the following bound for the derivative of $\varPsi _{t}(x,y)$:
(35)
\[\big|\partial _{t}\varPsi _{t}(x,y)\big|\le D{t}^{-1/\alpha }\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Reorganizing representation (14) in the same way as in (33), we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{t}(x,y)={p_{t}^{0}}(x,y)& \displaystyle +{\int _{0}^{t/2}}\int _{\mathbb{R}}{p_{t-s}^{0}}(x,z)\varPsi _{s}(z,y)\hspace{0.1667em}dzds\\{} & \displaystyle +{\int _{0}^{t/2}}\int _{\mathbb{R}}{p_{s}^{0}}(x,z)\varPsi _{t-s}(z,y)\hspace{0.1667em}dzds.\end{array}\]
Using this representation of $p_{t}(x,y)$ together with (31) and (35), we derive the existence of the continuous derivative $\partial _{t}p_{t}(x,y)$, which satisfies the inequality
\[\big|\partial _{t}p_{t}(x,y)\big|\le D{t}^{-1/\alpha }\big({g_{t+1}^{(\alpha )}}+{g_{t}^{(\alpha )}}\big)\big(\theta _{t}(y)-x\big).\]
Using estimates (30) in the same way as in the proof of (19), we can change the argument $\theta _{t}(y)-x$ on the right-hand side of this estimate to $y-\chi _{t}(x)$, which completes the proof of (20).  □

4 Application: the price of an occupation time option

In this section, we consider an occupation time option (see [11]), whose price depends on the time spent by an asset price process in a given set. In contrast to standard barrier options, which are activated or cancelled when the asset price process hits a prescribed level (the barrier), the payoff of an occupation time option depends on the time during which the asset price stays below or above such a barrier.
For instance, for strike price K, barrier level L, and knock-out rate ρ, the payoff of a down-and-out call occupation time option equals
\[\exp \Bigg(-\rho {\int _{0}^{T}}\mathbb{I}_{\{S_{t}\le L\}}\hspace{0.1667em}dt\Bigg)(S_{T}-K)_{+},\]
and the price $\mathbf{C}(T)$ of the option is defined as
\[\mathbf{C}(T)=\exp (-rT)E\Bigg[\exp \Bigg(-\rho {\int _{0}^{T}}\mathbb{I}_{\{S_{t}\le L\}}\hspace{0.1667em}dt\Bigg)(S_{T}-K)_{+}\Bigg]\]
where r is the risk-free interest rate (see [11]).
Assume that the price of an asset $S=\{S_{t},t\ge 0\}$ is of the form
\[S_{t}=S_{0}\exp (X_{t}),\]
where X is the Markov process studied in the previous sections. Then the time spent by the process S in a set $J\subset \mathbb{R}$ equals the time spent by X in the set ${J^{\prime }}=\log J$.
Let us approximate the price $\textbf{C}(T)$ of our option by
\[\mathbf{C}_{n}(T)=\exp (-rT)E\Bigg[\exp \Bigg(-\frac{\rho T}{n}\sum \limits_{k=0}^{n-1}\mathbb{I}_{\{S_{kT/n}\le L\}}\Bigg)(S_{T}-K)_{+}\Bigg].\]
Then, using the results from the previous sections, we can get the control on the accuracy of such an approximation.
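To make the approximation $\mathbf{C}_{n}(T)$ concrete, here is a hedged Monte Carlo sketch in Python. It drives the log-price X by a drifted symmetric α-stable process sampled with the Chambers–Mallows–Stuck formula, with the jumps crudely truncated so that $S_{T}$ has the moments required below; all parameter values ($S_{0}$, K, L, ρ, r, α, the drift and the noise scale) are hypothetical, and the example only illustrates computing the Riemann-sum occupation time along simulated paths, not a calibrated price.

```python
import math, random

def stable_increment(alpha, scale, rng):
    # Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variable
    u = (rng.random() - 0.5) * math.pi        # Uniform(-pi/2, pi/2)
    w = -math.log(1.0 - rng.random())         # Exp(1)
    x = (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
         * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))
    return scale * x

def price(rho, n_paths=4000, n=50, T=1.0, S0=100.0, K=100.0, L=95.0,
          r=0.05, alpha=0.7, drift=0.02, scale=0.05, seed=7):
    rng = random.Random(seed)                 # fixed seed: reproducible paths
    dt = T / n
    acc = 0.0
    for _ in range(n_paths):
        x, occ = 0.0, 0.0
        for _k in range(n):
            if S0 * math.exp(x) <= L:         # Riemann sum for occupation time
                occ += dt
            inc = stable_increment(alpha, scale * dt ** (1.0 / alpha), rng)
            # crude truncation of large jumps, loosely mimicking the
            # stable-dominated tails in (13), so that S_T has finite moments
            x += drift * dt + max(-1.0, min(1.0, inc))
        acc += math.exp(-rho * occ) * max(S0 * math.exp(x) - K, 0.0)
    return math.exp(-r * T) * acc / n_paths
```

Since the same seed is reused, the paths are identical for every ρ, so the estimate with $\rho >0$ is, path by path, at most the estimate with $\rho =0$.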
First, we apply Theorem 2.1 and derive the strong approximation rate.
Proposition 4.1.
Suppose that the process X satisfies condition X and assume that there exists $\lambda >1$ such that $G(\lambda ):=E\exp (\lambda X_{T})=E{(S_{T})}^{\lambda }<+\infty $.
Then
\[\big|\boldsymbol{C}_{n}(T)-\boldsymbol{C}(T)\big|\le \exp (-rT)\rho G{(\lambda )}^{1/\lambda }C_{T,\lambda /(\lambda -1)}{\big(D_{T,\beta }(n)\big)}^{1/2}\]
with constants $C_{T,\lambda /(\lambda -1)}$ and $D_{T,\beta }(n)$ given by (4).
Proof.
The statement is a direct corollary of Theorem 2.1. Set $h(x)=\rho \mathbb{I}_{\{x\le \log L\}}$. Then, keeping the notation of Section 2, we get
\[\mathbf{C}(T)={e}^{-rT}E{e}^{-I_{T}(h)}(S_{T}-K)_{+},\hspace{2em}\mathbf{C}_{n}(T)={e}^{-rT}E{e}^{-I_{T,n}(h)}(S_{T}-K)_{+}.\]
By the Hölder inequality with $p=\lambda $ and $q=\lambda /(\lambda -1)$,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|\mathbf{C}_{n}(T)-\mathbf{C}(T)\big|& \displaystyle \le {e}^{-rT}{\big(E{(S_{T}-K)_{+}^{\lambda }}\big)}^{1/\lambda }\\{} & \displaystyle \hspace{1em}\times {\big(E{\big|{e}^{-I_{T}(h)}-{e}^{-I_{T,n}(h)}\big|}^{\lambda /(\lambda -1)}\big)}^{(\lambda -1)/\lambda }.\end{array}\]
Since $|{e}^{-a}-{e}^{-b}|\le |a-b|$ for positive a and b, we have
\[\big|\mathbf{C}_{n}(T)-\mathbf{C}(T)\big|\le {e}^{-rT}G{(\lambda )}^{1/\lambda }{\big(E{\big|I_{T}(h)-I_{T,n}(h)\big|}^{\lambda /(\lambda -1)}\big)}^{(\lambda -1)/\lambda },\]
and thus the required statement follows from Theorem 2.1 with $p=\frac{\lambda }{\lambda -1}$.  □
We can also control the accuracy of the approximation using the weak rate bound from Theorem 2.2. Note that the bound given in the next proposition is sharper than the one obtained in the previous proposition precisely when $\lambda >2$.
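In terms of powers of n (for $\beta >1$), the strong bound of Proposition 4.1 decays like ${n}^{-1/(2\beta )}$, while the weak-based bound of Proposition 4.2 below decays like ${n}^{-(1/\beta )(1-1/\lambda )}$. A few lines of Python confirm the threshold $\lambda =2$ (the sampled values of β and λ are arbitrary):

```python
def strong_exponent(beta):
    # decay exponent of (D_{T,beta}(n))^(1/2) = O(n^(-1/(2*beta))), beta > 1
    return 1.0 / (2.0 * beta)

def weak_exponent(beta, lam):
    # decay exponent of the bound in Proposition 4.2, beta > 1
    return (1.0 / beta) * (1.0 - 1.0 / lam)

checks = [(beta, lam, weak_exponent(beta, lam) > strong_exponent(beta))
          for beta in (1.5, 2.0, 3.0)
          for lam in (1.5, 2.0, 3.0, 10.0)]
# the weak-rate bound wins exactly when lam > 2, whatever beta is
```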
Proposition 4.2.
Under the assumptions of Proposition 4.1, we have
\[\big|\boldsymbol{C}_{n}(T)-\boldsymbol{C}(T)\big|\le {2}^{\beta \vee 2+1}\max \big\{B_{T,X}\rho {T}^{2}(1+\rho T){e}^{\rho T},G(\lambda )\big\}{e}^{-rT}\widetilde{D}_{T,\beta }(n),\]
where
\[\widetilde{D}_{T,\beta }(n)=\left\{\begin{array}{l@{\hskip10.0pt}l}{n}^{-(1-1/\lambda )}\log n,\hspace{1em}& \beta =1,\\{} \max (1,\frac{{T}^{1-\beta }}{\beta -1}){n}^{-(1/\beta )(1-1/\lambda )},\hspace{1em}& \beta >1.\end{array}\right.\]
Proof.
For $N>0$, denote
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\mathbf{C}}^{N}(T)=\exp (-rT)E\Bigg[\exp \Bigg(-\rho {\int _{0}^{T}}\mathbb{I}_{\{S_{t}\le L\}}dt\Bigg)\big((S_{T}-K)_{+}\wedge N\big)\Bigg],\\{} & \displaystyle {\mathbf{C}_{n}^{N}}(T)=\exp (-rT)E\Bigg[\exp \Bigg(-\frac{\rho T}{n}\sum \limits_{k=0}^{n-1}\mathbb{I}_{\{S_{kT/n}\le L\}}\Bigg)\big((S_{T}-K)_{+}\wedge N\big)\Bigg].\end{array}\]
Then
(36)
\[\big|\mathbf{C}_{n}(T)-\mathbf{C}(T)\big|\le \big|{\mathbf{C}_{n}^{N}}(T)-{\mathbf{C}}^{N}(T)\big|+\big|\mathbf{C}(T)-{\mathbf{C}}^{N}(T)\big|+\big|\mathbf{C}_{n}(T)-{\mathbf{C}_{n}^{N}}(T)\big|.\]
We estimate each term separately.
Using Theorem 2.2 and the Taylor expansion of the exponential function, we derive
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|{\mathbf{C}_{n}^{N}}(T)-{\mathbf{C}}^{N}(T)\big|& \displaystyle \le {2}^{\beta \vee 2}B_{T,X}NT\exp (-rT)\sum \limits_{k=1}^{\infty }\frac{{\rho }^{k}}{k!}{k}^{2}{T}^{k}D_{T,\beta }(n)\\{} & \displaystyle ={2}^{\beta \vee 2}B_{T,X}\rho N{T}^{2}(1+\rho T)\exp (\rho T-rT)D_{T,\beta }(n).\end{array}\]
For the last two terms, we get
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big|\mathbf{C}(T)-{\mathbf{C}}^{N}(T)\big|+\big|\mathbf{C}_{n}(T)-{\mathbf{C}_{n}^{N}}(T)\big|\\{} & \displaystyle \hspace{1em}\le 2\exp (-rT)E\big[(S_{T}-K)_{+}-(S_{T}-K)_{+}\wedge N\big]\\{} & \displaystyle \hspace{1em}\le 2\exp (-rT)E[S_{T}\mathbb{I}_{\{S_{T}>N\}}]=2\exp (-rT)E\bigg[\frac{S_{T}{N}^{\lambda -1}\mathbb{I}_{\{S_{T}>N\}}}{{N}^{\lambda -1}}\bigg]\\{} & \displaystyle \hspace{1em}\le \frac{2G(\lambda )}{{N}^{\lambda -1}}\exp (-rT).\end{array}\]
To complete the proof, put $N={n}^{1/(\beta \lambda )}$.  □

Acknowledgments

The second- and third-named authors gratefully acknowledge the DFG Grant Schi 419/8-1; the third-named author acknowledges the joint DFFD-RFFI project No. 09-01-14.

References

[1] 
Bogdan, K., Grzywny, T., Ryznar, M.: Density and tails of unimodal convolution semigroups. J. Funct. Anal. 266(6), 3543–3571 (2014). MR3165234. doi:10.1016/j.jfa.2014.01.007
[2] 
Ganychenko, I., Kulik, A.: Rates of approximation of nonsmooth integral-type functionals of Markov processes. Mod. Stoch., Theory Appl. 2, 117–126 (2014). MR3316480. doi:10.15559/vmsta-2014.12
[3] 
Ganychenko, I., Kulik, A.: Weak approximation rates for integral functionals of Markov processes. Mod. Stoch., Theory Appl. 3, 251–266 (2015)
[4] 
Gobet, E., Labart, C.: Sharp estimates for the convergence of the density of the Euler scheme in small time. Electron. Commun. Probab. 13, 352–363 (2008). MR2415143. doi:10.1214/ECP.v13-1393
[5] 
Guerin, H., Renaud, J.-F.: Joint distribution of a spectrally negative Lévy process and its occupation time, with step option pricing in view. arXiv:1406.3130
[6] 
Knopova, V.: Compound kernel estimates for the transition probability density of a Lévy process in ${\mathbb{R}}^{n}$. Theory Probab. Math. Stat. 89, 57–70 (2014). MR3235175. doi:10.1090/s0094-9000-2015-00935-2
[7] 
Knopova, V., Kulik, A.: Parametrix construction of the transition probability density of the solution to an SDE driven by α-stable noise. arXiv:1412.8732
[8] 
Knopova, V., Kulik, A.: Intrinsic small time estimates for distribution densities of Lévy processes. Random Oper. Stoch. Equ. 21(4), 321–344 (2013). MR3139314. doi:10.1515/rose-2013-0015
[9] 
Kohatsu-Higa, A., Makhlouf, A., Ngo, H.L.: Approximations of non-smooth integral type functionals of one dimensional diffusion precesses. Stoch. Process. Appl. 124, 1881–1909 (2014). MR3170228. doi:10.1016/j.spa.2014.01.003
[10] 
Kulczycki, T., Ryznar, M.: Gradient estimates of harmonic functions and transition densities for Lévy processes. arXiv:1307.7158. MR3413864. doi:10.1090/tran/6333
[11] 
Linetsky, V.: Step options. Math. Finance 9, 55–96 (1999). MR1849356. doi:10.1111/1467-9965.00063
[12] 
Mimica, A.: Heat kernel upper estimates for symmetric jump processes with small jumps of high intensity. Potential Anal. 36(2), 203–222 (2012). MR2886459. doi:10.1007/s11118-011-9225-1
[13] 
Pruitt, W.E., Taylor, S.J.: The potential kernel and hitting probabilities for the general stable process in ${\mathbb{R}}^{n}$. Trans. Am. Math. Soc. 146, 299–321 (1969). MR0250372
[14] 
Rosinski, J., Sinclair, J.: Generalized tempered stable processes. Stab. Probab. 90, 153–170 (2010). MR2798857. doi:10.4064/bc90-0-10
[15] 
Sztonyk, P.: Estimates of tempered stable densities. J. Theor. Probab. 23(1), 127–147 (2010). MR2591907. doi:10.1007/s10959-009-0208-8
[16] 
Sztonyk, P.: Transition density estimates for jump Lévy processes. Stoch. Process. Appl. 121, 1245–1265 (2011). MR2794975. doi:10.1016/j.spa.2011.03.002
[17] 
Watanabe, T.: Asymptotic estimates of multi-dimensional stable densities and their applications. Trans. Am. Math. Soc. 359(6), 2851–2879 (2007). MR2286060. doi:10.1090/S0002-9947-07-04152-9
Copyright
© 2015 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Markov process, integral functional, approximation rate

MSC2010
60H07, 60H35
