## 1 Introduction

Let $X_{t}$, $t\ge 0$, be an ${\mathbb{R}}^{d}$-valued Markov process. We study the integral functional
\[ I_{T}(h)={\int _{0}^{T}}h(X_{t})\hspace{0.1667em}dt.\]
The most natural numerical scheme to approximate such a functional is the sequence of integral sums
\[ I_{T,n}(h)=\frac{T}{n}{\sum \limits_{k=0}^{n-1}}h(X_{kT/n}),\hspace{1em}n\ge 1,\]
and the main objective of this paper is to study approximation rates within this scheme. The function *h*, in general, is not assumed to be smooth, and therefore the mapping $X\mapsto I_{T}(h)$ may fail to be Lipschitz continuous (and even simply continuous) on a natural functional space of the trajectories of *X* (e.g., $C(0,T)$ or $D(0,T)$). This makes it impossible to carry out the error analysis with a classical technique (see, e.g., [5]). The typical case of interest here is $h=1_{A}$, with $I_{T}(h)$ being the occupation time of *X* in the set *A* up to the time moment *T*. In this paper, we establish strong $L_{p}$-approximation rates, that is, bounds for
\[ E_{x}{\big|I_{T}(h)-I_{T,n}(h)\big|}^{p}.\]
Our research is strongly motivated by the recent paper [7], where such a problem was studied in the particularly important case where

*X* is a one-dimensional diffusion, and we refer the reader to [7] for more motivation and background on the subject. The technique developed in [7], involving both Malliavin calculus tools and Gaussian bounds for the transition probability density, relies substantially on the structure of the process, and hence it seems not easy to extend this approach to other classes of processes, for example, multidimensional diffusions or solutions to Lévy driven SDEs. In this note, we explain that, in order to get the required approximation rates, one can modify some well-developed estimates from the theory of continuous additive functionals of Markov processes. An advantage of such an approach is that the assumptions on the process are formulated only in terms of its transition probability density and are therefore quite flexible. The basis for the approach is given by the fact that the weak approximation rates, that is, bounds for $E_{x}I_{T}(h)-E_{x}I_{T,n}(h)$, are available as a consequence of a bound for the derivative w.r.t.

*t* of the transition probability density; see [4], Theorem 2.5, and Proposition 2.1 below. To explain the principal idea of the approach, let us assume for a while that *h* is nonnegative and bounded. Then the integral functional $I_{T}(h)$ is a *W-functional* of the process *X*; see [1], Chapter 6. It is well known that the properties of a *W*-functional are mainly controlled by its *characteristic*, that is, the expectation
\[ f_{t}(x)=E_{x}I_{t}(h).\]
In particular, the convergence of characteristics implies the $L_{2}$-convergence of the respective functionals. The core of our approach is that we extend Dynkin’s technique for the study of convergence of *W*-functionals and give approximation rates for the integral functionals $I_{T}(h)$ by the difference functionals $I_{T,n}(h)$, based on the weak approximation rates for their expectations. We remark that here we are beyond the scope of the original Dynkin theory because $I_{T}(h)$ *may fail* to be a *W*-functional (we do not assume *h* to be nonnegative), and $I_{T,n}(h)$ *definitely fails* to be a *W*-functional. In addition, Dynkin’s theory addresses $L_{2}$-bounds, whereas, in general, we are interested in $L_{p}$-bounds. This brings some extra difficulties, which, however, are not really substantial, and we resolve them in a way similar to the one used in the classical Khas’minskii lemma; see, for example, Lemma 2.1 in [8].

## 2 Main results

### 2.1 Notation, assumptions, and auxiliaries

In what follows, $P_{x}$ denotes the law of the Markov process with $X_{0}=x$, and $E_{x}$ denotes the expectation w.r.t. this law. The natural filtration of the process

*X* is denoted by $\{\mathcal{F}_{t},t\ge 0\}$. The process *X* is assumed to possess a transition probability density, denoted below by $p_{t}(x,y)$. By *C* we denote a generic constant; the value of *C* may vary from place to place. Both the absolute value of a real number and the Euclidean norm in ${\mathbb{R}}^{d}$ are denoted by $|\cdot |$. Our standing assumption on the process

*X* under investigation is the following.

**X.** The process *X* possesses a transition probability density $p_{t}(x,y)$, differentiable w.r.t. *t*, such that for every $T>0$

##### (1)
\[ p_{t}(x,y)\le C_{T}{t}^{-d/\alpha }Q\big({t}^{-1/\alpha }(x-y)\big),\hspace{1em}t\in (0,T],\]

##### (2)
\[ \big|\partial _{t}p_{t}(x,y)\big|\le C_{T}{t}^{-1-d/\alpha }Q\big({t}^{-1/\alpha }(x-y)\big),\hspace{1em}t\in (0,T],\]

with some $\alpha \in (0,2]$, a constant $C_{T}$, and a distribution density *Q*.

The assumption

**X** is motivated by the following class of processes of particular interest.

##### Example 2.1.

Let

*X* be a symmetric *α*-stable process with $\alpha \in (0,2]$; in the case $\alpha =2$, this is just a Brownian motion. Then
\[ p_{t}(x,y)={t}^{-d/\alpha }{g}^{(\alpha )}\big({t}^{-1/\alpha }(y-x)\big)\]
with ${g}^{(\alpha )}$ being the distribution density of $X_{1}$. Respectively, (2) holds with $Q=Q_{\alpha }$, \[ Q_{\alpha }(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}c_{1}{e}^{-c_{2}|x{|}^{2}},\hspace{1em}& \alpha =2,\\{} \frac{c}{1+|x{|}^{d+\alpha }},\hspace{1em}& \alpha \in (0,2),\end{array}\right.\]

where $c_{2}<{(2E{X_{1}^{2}})}^{-1}$ and $c_{1},c$ should be chosen such that $\int _{{\mathbb{R}}^{d}}Q(x)\hspace{0.1667em}dx=1$.

Observe that, in a sense, this bound is “stable under perturbations of the process

*X*.” Namely, if *X* is a uniformly elliptic diffusion with Hölder continuous coefficients, then (2) with $Q=Q_{2}$ and properly chosen $c_{2}$ is provided by the classical *parametrix method*; see [3]. An analogue of the parametrix method for *α*-stable generators with state-dependent coefficients yields the bound (2) with $Q=Q_{{\alpha ^{\prime }}}$, ${\alpha ^{\prime }}<\alpha $, for *α*-stable driven processes *X*; see [2] and [6]. Our principal assumption on the function

*h* is the following.

**H1.** There exists a nondecreasing function $V:{\mathbb{R}}^{+}\to [1,\infty )$ such that
\[ V(r_{1}+r_{2})\le V(r_{1})V(r_{2}),\hspace{1em}r_{1},r_{2}\in {\mathbb{R}}^{+},\]
and
\[ \big|h(x)\big|\le CV\big(|x|\big),\hspace{1em}x\in {\mathbb{R}}^{d}.\]

Observe that for a bounded *h*, condition **H1** holds trivially with $V\equiv 1$. On the other hand, in particular cases, one can weaken the assumptions on *h* by using nontrivial “weight functions” *V*. For instance, if $Q=Q_{\alpha }$ from the above example, then one can take
\[ V(r)=\left\{\begin{array}{l@{\hskip10.0pt}l}{e}^{Cr},\hspace{1em}& \alpha =2,\\{} {(1+r)}^{\beta },\hspace{1em}& \alpha \in (0,2),\end{array}\right.\hspace{1em}r\in {\mathbb{R}}^{+},\]

with arbitrary *C* and $\beta \in (0,\alpha )$. We denote
\[ \| h\| _{V}=\underset{x\in {\mathbb{R}}^{d}}{\sup }\frac{|h(x)|}{V(|x|)}.\]
The following auxiliary statement is crucial for the whole approach. Its proof is completely analogous to the proof of (a part of) Theorem 2.5 [4], but in order to make the exposition self-sufficient, we give it here.

##### Proposition 2.1.

*Let* **X** *and* **H1** *hold, and let* $\int _{{\mathbb{R}}^{d}}Q(z)V({T}^{1/\alpha }|z|)\hspace{0.1667em}dz<\infty $*. Then*
\[ \big|E_{x}I_{T}(h)-E_{x}I_{T,n}(h)\big|\le C\bigg(\frac{\log n}{n}\bigg)\| h\| _{V}V\big(|x|\big)\]
*with a constant C depending on* $T,Q,V$ *only.*

##### Proof.

Write
\[ E_{x}I_{T}(h)={\int _{0}^{T}}\int _{{\mathbb{R}}^{d}}p_{t}(x,y)h(y)\hspace{0.1667em}dydt,\hspace{2em}E_{x}I_{T,n}(h)=\frac{T}{n}h(x)+\frac{T}{n}{\sum \limits_{k=2}^{n}}\int _{{\mathbb{R}}^{d}}p_{(k-1)T/n}(x,y)h(y)\hspace{0.1667em}dy.\]
Next,

\[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}I_{T}(h)-E_{x}I_{T,n}(h)& \displaystyle ={\int _{0}^{T/n}}\int _{{\mathbb{R}}^{d}}p_{t}(x,y)h(y)\hspace{0.1667em}dydt-\frac{T}{n}h(x)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{k=2}^{n}}{\int _{(k-1)T/n}^{kT/n}}\int _{{\mathbb{R}}^{d}}\big(p_{t}(x,y)-p_{(k-1)T/n}(x,y)\big)h(y)\hspace{0.1667em}dydt.\end{array}\]

We have, by the bound for $p_{t}(x,y)$ in (1) and the properties of *V*,

##### (3)

\[\begin{array}{r@{\hskip0pt}l}\displaystyle \bigg|{\int _{0}^{T/n}}\int _{{\mathbb{R}}^{d}}p_{t}(x,y)h(y)\hspace{0.1667em}dydt\bigg|& \displaystyle \le C_{T}\| h\| _{V}{\int _{0}^{T/n}}{t}^{-d/\alpha }\int _{{\mathbb{R}}^{d}}Q\big({t}^{-1/\alpha }(x-y)\big)V(|y|)\hspace{0.1667em}dydt\\{} & \displaystyle =C_{T}\| h\| _{V}{\int _{0}^{T/n}}\int _{{\mathbb{R}}^{d}}Q(z)V\big(|x+{t}^{1/\alpha }z|\big)\hspace{0.1667em}dzdt\\{} & \displaystyle \le {n}^{-1}TC_{T}\| h\| _{V}V\big(|x|\big)\int _{{\mathbb{R}}^{d}}Q(z)V\big({T}^{1/\alpha }|z|\big)\hspace{0.1667em}dz.\end{array}\]

Further, write

\[ {\sum \limits_{k=2}^{n}}{\int _{(k-1)T/n}^{kT/n}}\int _{{\mathbb{R}}^{d}}\big(p_{t}(x,y)-p_{(k-1)T/n}(x,y)\big)h(y)\hspace{0.1667em}dydt=\int _{{\mathbb{R}}^{d}}K_{n,T}(x,y)h(y)\hspace{0.1667em}dy\]

with \[\begin{array}{r@{\hskip0pt}l}\displaystyle K_{n,T}(x,y)& \displaystyle ={\sum \limits_{k=2}^{n}}{\int _{(k-1)T/n}^{kT/n}}{\int _{(k-1)T/n}^{t}}\partial _{s}p_{s}(x,y)\hspace{0.1667em}dsdt\\{} & \displaystyle ={\sum \limits_{k=2}^{n}}{\int _{(k-1)T/n}^{kT/n}}\bigg(\frac{kT}{n}-s\bigg)\partial _{s}p_{s}(x,y)\hspace{0.1667em}ds.\end{array}\]

Then \[ \big|K_{n,T}(x,y)\big|\le \frac{T}{n}{\int _{T/n}^{T}}\big|\partial _{s}p_{s}(x,y)\big|\hspace{0.1667em}ds,\]

and therefore, using the bound for $\partial _{t}p_{t}(x,y)$ in (2), we obtain, similarly to (3), \[\begin{array}{r@{\hskip0pt}l}\displaystyle \bigg|\int _{{\mathbb{R}}^{d}}\hspace{-0.1667em}\hspace{-0.1667em}K_{n,T}(x,y)h(y)\hspace{0.1667em}dy\bigg|& \displaystyle \le \frac{T}{n}C_{T}\| h\| _{V}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{T/n}^{T}}\hspace{-0.1667em}\hspace{-0.1667em}{s}^{-1}\hspace{-0.1667em}\hspace{-0.1667em}\int _{{\mathbb{R}}^{d}}\hspace{-0.1667em}{s}^{-d/\alpha }Q\big({s}^{-1/\alpha }(x-y)\big)V\big(|y|\big)\hspace{0.1667em}dyds\\{} & \displaystyle \le \frac{T}{n}C_{T}\| h\| _{V}V\big(|x|\big)\bigg(\int _{{\mathbb{R}}^{d}}\hspace{-0.1667em}Q(y)V\big({T}^{1/\alpha }|y|\big)\hspace{0.1667em}dy\bigg){\int _{T/n}^{T}}\hspace{-0.1667em}{s}^{-1}\hspace{0.1667em}ds\\{} & \displaystyle =\frac{T}{n}(\log n)C_{T}\| h\| _{V}V\big(|x|\big)\bigg(\int _{{\mathbb{R}}^{d}}Q(y)V\big({T}^{1/\alpha }|y|\big)\hspace{0.1667em}dy\bigg),\end{array}\]

which completes the proof. □

### 2.2 Approximation rate in terms of $\| h\| _{V}$

Our main estimate, in a shortest and most transparent form, is presented in the following theorem, which concerns the case where the only assumption on

*h* is that the weighted sup-norm $\| h\| _{V}$ is finite.

##### Theorem 2.1.

*Let* **X** *and* **H1** *hold, and let* $p\ge 2$ *be such that* $\int _{{\mathbb{R}}^{d}}Q(z){V}^{p}({T}^{1/\alpha }|z|)\hspace{0.1667em}dz<\infty $*. Then*
\[ E_{x}{\big|I_{T}(h)-I_{T,n}(h)\big|}^{p}\le C{\bigg(\frac{\log n}{n}\bigg)}^{p/2}\| h{\| _{V}^{p}}{V}^{p}\big(|x|\big)\]
*with a constant C depending on* $T,Q,V,p$ *only.*

##### Proof.

Denote, for $t\in [kT/n,(k+1)T/n)$, $k=0,\dots ,n-1$,
\[ \eta _{n}(t)=\frac{kT}{n},\hspace{2em}\zeta _{n}(t)=\frac{(k+1)T}{n},\]
and write the difference $I_{t}(h)-I_{t,n}(h)$ in the integral form:

\[ J_{t,n}(h):=I_{t}(h)-I_{t,n}(h)={\int _{0}^{t}}\Delta _{n}(s)ds,\hspace{1em}\Delta _{n}(s):=h(X_{s})-h(X_{\eta _{n}(s)}).\]

Hence, this difference is an absolutely continuous function of *t*, and using the Newton–Leibniz formula twice, we get \[ {\big|J_{T,n}(h)\big|}^{p}=p(p-1){\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\Delta _{n}(s)\bigg({\int _{s}^{T}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg)ds.\]
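This identity can be sanity-checked numerically; the following is a minimal sketch of ours (not part of the argument), with the random $\Delta _{n}$ replaced by the smooth deterministic toy integrand $\Delta (s)=\cos s$, so that $J_{s}=\sin s$, with $p=4$ and $T=1$:

```python
import numpy as np

# Check |J_T|^p = p(p-1) * int_0^T |J_s|^{p-2} Delta(s) (int_s^T Delta(t) dt) ds
# for the toy integrand Delta(s) = cos(s), J_s = sin(s); here p = 4, T = 1.
p, T = 4.0, 1.0
s = np.linspace(0.0, T, 200_001)
delta = np.cos(s)                  # Delta(s)
J = np.sin(s)                      # J_s = int_0^s Delta(u) du
tail = np.sin(T) - np.sin(s)       # int_s^T Delta(t) dt
f = np.abs(J) ** (p - 2) * delta * tail
rhs = p * (p - 1) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))  # trapezoid rule
lhs = np.abs(np.sin(T)) ** p       # |J_T|^p
```

For this choice both sides equal ${\sin ^{4}}1$, which the quadrature reproduces up to discretization error.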

We then write \[ {\big|J_{T,n}(h)\big|}^{p}=p(p-1)\big({H_{T,n,p}^{1}}(h)+{H_{T,n,p}^{2}}(h)\big)\le p(p-1)\big({\tilde{H}_{T,n,p}^{1}}(h)+{H_{T,n,p}^{2}}(h)\big),\]

where \[\begin{array}{r@{\hskip0pt}l}\displaystyle {H_{T,n,p}^{1}}(h)& \displaystyle ={\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\Delta _{n}(s)\bigg({\int _{s}^{\zeta _{n}(s)}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg)ds,\\{} \displaystyle {\tilde{H}_{T,n,p}^{1}}(h)& \displaystyle ={\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\big|\Delta _{n}(s)\big|\bigg|{\int _{s}^{\zeta _{n}(s)}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg|ds,\\{} \displaystyle {H_{T,n,p}^{2}}(h)& \displaystyle ={\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\Delta _{n}(s)\bigg({\int _{\zeta _{n}(s)}^{T}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg)ds.\end{array}\]

Let us estimate separately the expectations of ${\tilde{H}_{T,n,p}^{1}}(h)$ and ${H_{T,n,p}^{2}}(h)$. By the Hölder inequality, \[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}{\tilde{H}_{T,n,p}^{1}}(h)& \displaystyle \le {\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\\{} & \displaystyle \hspace{1em}\times {\bigg(E_{x}{\int _{0}^{T}}{\big|\Delta _{n}(s)\big|}^{p/2}{\bigg|{\int _{s}^{\zeta _{n}(s)}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg|}^{p/2}\hspace{0.1667em}ds\bigg)}^{2/p}.\end{array}\]

Again by the Hölder inequality, \[\begin{array}{r@{\hskip0pt}l}& \displaystyle E_{x}{\int _{0}^{T}}{\big|\Delta _{n}(s)\big|}^{p/2}{\bigg|{\int _{s}^{\zeta _{n}(s)}}\Delta _{n}(t)\hspace{0.1667em}dt\bigg|}^{p/2}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}\le {\bigg(\frac{T}{n}\bigg)}^{p/2-1}{\int _{0}^{T}}{\int _{s}^{\zeta _{n}(s)}}E_{x}{\big|\Delta _{n}(s)\big|}^{p/2}{\big|\Delta _{n}(t)\big|}^{p/2}\hspace{0.1667em}dtds.\end{array}\]

Because $t\in [s,\zeta _{n}(s)]$, we have $\eta _{n}(t)=\eta _{n}(s)$, and, consequently,

##### (4)

\[\begin{array}{r@{\hskip0pt}l}& \displaystyle E_{x}{\big|\Delta _{n}(s)\big|}^{p/2}{\big|\Delta _{n}(t)\big|}^{p/2}\\{} & \displaystyle \hspace{1em}=E_{x}{\big|h(X_{s})-h(X_{\eta _{n}(s)})\big|}^{p/2}{\big|h(X_{t})-h(X_{\eta _{n}(s)})\big|}^{p/2}\\{} & \displaystyle \hspace{1em}\le {\big(E_{x}{\big|h(X_{s})-h(X_{\eta _{n}(s)})\big|}^{p}\big)}^{1/2}{\big(E_{x}{\big|h(X_{t})-h(X_{\eta _{n}(s)})\big|}^{p}\big)}^{1/2}\\{} & \displaystyle \hspace{1em}\le \| h{\| _{V}^{p}}{2}^{p-1}{\big(E_{x}\big({V}^{p}\big(|X_{s}|\big)+{V}^{p}\big(|X_{\eta _{n}(s)}|\big)\big)\big)}^{1/2}\\{} & \displaystyle \hspace{2em}\times {\big(E_{x}\big({V}^{p}\big(|X_{t}|\big)+{V}^{p}\big(|X_{\eta _{n}(s)}|\big)\big)\big)}^{1/2}.\end{array}\]

By the properties of *V* and the bound (1), we have \[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}{V}^{p}\big(|X_{r}|\big)& \displaystyle =\int _{{\mathbb{R}}^{d}}p_{r}(x,y){V}^{p}\big(|y|\big)\hspace{0.1667em}dy\\{} & \displaystyle \le C_{T}{r}^{-d/\alpha }\int _{{\mathbb{R}}^{d}}Q\big({r}^{-1/\alpha }(x-y)\big){V}^{p}\big(|y|\big)\hspace{0.1667em}dy\\{} & \displaystyle \le C_{T}{V}^{p}\big(|x|\big)\bigg(\int _{{\mathbb{R}}^{d}}Q(y){V}^{p}\big({T}^{1/\alpha }|y|\big)\hspace{0.1667em}dy\bigg)\end{array}\]

for any $r\in (0,T]$. Using this bound with $r=t,s,\eta _{n}(s)$ and recalling that $|\zeta _{n}(s)-s|\le T/n$, we get \[ E_{x}{\tilde{H}_{T,n,p}^{1}}(h)\le C{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\bigg(\frac{1}{n}\bigg)\| h{\| _{V}^{2}}{V}^{2}\big(|x|\big)\]

with constant *C* depending on $T,Q,V,p$ only.

Next, observe that, for every

*s*, the variables $J_{s,n}(h)$ and $\Delta _{n}(s)$ are $\mathcal{F}_{\zeta _{n}(s)}$-measurable. Hence, \[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}{H_{T,n,p}^{2}}(h)& \displaystyle =E_{x}\bigg({\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\Delta _{n}(s)E_{x}\bigg({\int _{\zeta _{n}(s)}^{T}}\Delta _{n}(t)\hspace{0.1667em}dt\big|\mathcal{F}_{\zeta _{n}(s)}\bigg)ds\bigg)\\{} & \displaystyle \le E_{x}\bigg({\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p-2}\big|\Delta _{n}(s)\big|\bigg|E_{x}\bigg({\int _{\zeta _{n}(s)}^{T}}\Delta _{n}(t)\hspace{0.1667em}dt\big|\mathcal{F}_{\zeta _{n}(s)}\bigg)\bigg|ds\bigg).\end{array}\]

By Proposition 2.1 and the Markov property of *X*, we have \[ \bigg|E_{x}\bigg({\int _{\zeta _{n}(s)}^{T}}\Delta _{n}(t)\hspace{0.1667em}dt\big|\mathcal{F}_{\zeta _{n}(s)}\bigg)\bigg|\le C\bigg(\frac{\log n}{n}\bigg)\| h\| _{V}V\big(|X_{\zeta _{n}(s)}|\big).\]

Hence, again, using the Hölder inequality we get \[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}{H_{T,n,p}^{2}}(h)& \displaystyle \le C\bigg(\frac{\log n}{n}\bigg)\| h\| _{V}{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\\{} & \displaystyle \hspace{1em}\times {\bigg(E_{x}{\int _{0}^{T}}{\big|\Delta _{n}(s)\big|}^{p/2}{V}^{p/2}\big(|X_{\zeta _{n}(s)}|\big)\hspace{0.1667em}ds\bigg)}^{2/p}.\end{array}\]

Similarly to (4), we have \[ E_{x}{\big|\Delta _{n}(s)\big|}^{p/2}{V}^{p/2}\big(|X_{\zeta _{n}(s)}|\big)\le C{V}^{p}\big(|x|\big)\| h{\| _{V}^{p/2}}.\]

Hence, the above bounds for $E_{x}{\tilde{H}_{T,n,p}^{1}}(h)$ and $E_{x}{H_{T,n,p}^{2}}(h)$ finally yield

##### (5)

\[ E_{x}{\big|J_{T,n}(h)\big|}^{p}\le C{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\bigg(\frac{\log n}{n}\bigg)\| h{\| _{V}^{2}}{V}^{2}\big(|x|\big)\]
with a constant *C* depending on $T,Q,V,p$ only. It is easily seen that in this inequality one can write arbitrary $t\le T$ instead of *T*, with the same constant *C*. Taking the integral over *t*, we get \[ E_{x}{\int _{0}^{T}}{\big|J_{t,n}(h)\big|}^{p}\hspace{0.1667em}dt\le CT{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\bigg(\frac{\log n}{n}\bigg)\| h{\| _{V}^{2}}{V}^{2}\big(|x|\big).\]

Because $\| h\| _{V}<\infty $ and ${V}^{p}$ satisfies the integrability condition of the theorem, the left-hand side of the last inequality is finite. Hence, resolving this inequality, we get \[ E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\le {(CT)}^{p/2}{\bigg(\frac{\log n}{n}\bigg)}^{p/2}\| h{\| _{V}^{p}}{V}^{p}\big(|x|\big),\]

which, together with (5), gives the required statement. □

### 2.3 An improved approximation rate for a Hölder continuous *h*

In this section, we consider the case where

*h* has the following additional regularity property.

**H2.** For some $\gamma \in (0,1]$,
\[ \| h\| _{\gamma }:=\underset{x\ne y}{\sup }\frac{|h(x)-h(y)|}{|x-y{|}^{\gamma }}<\infty .\]

This additional regularity of *h* allows one to improve the accuracy of the previous estimates. Namely, the following statement holds.

##### Theorem 2.2.

*Assume that* **X**, **H1**, *and* **H2** *hold. Then, for every* $p\ge 2$ *such that* $\int _{{\mathbb{R}}^{d}}Q(z){V}^{p}({T}^{1/\alpha }|z|)\hspace{0.1667em}dz<\infty $ *and* $\int _{{\mathbb{R}}^{d}}Q(z)|z{|}^{\gamma p}\hspace{0.1667em}dz<\infty $*, we have*

\[ E_{x}{\big|I_{T}(h)-I_{T,n}(h)\big|}^{p}\le C{\bigg(\frac{\log n}{n}\bigg)}^{p/2}{n}^{-(\gamma p)/(2\alpha )}\| h{\| _{\gamma }^{p/2}}\big(\| h{\| _{\gamma }^{p/2}}+\| h{\| _{V}^{p/2}}{V}^{p/2}\big(|x|\big)\big)\]

*with constant C depending on* $T,Q,V,p,\gamma $ *only.*
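Before turning to the proof, the predicted decay can be illustrated empirically. The following is a minimal Monte Carlo sketch of our own (a toy illustration, not part of the argument): *X* is a Brownian motion on $[0,1]$ (so $\alpha =2$), $h(x)=\sqrt{|x|}$ is Hölder with $\gamma =1/2$, and the “exact” $I_{T}(h)$ is replaced by a Riemann sum on a much finer grid:

```python
import numpy as np

# Empirical L_2 error of the integral sums I_{T,n}(h) for Brownian motion
# (alpha = 2) and the Hölder function h(x) = sqrt(|x|) (gamma = 1/2).
# The "exact" I_T(h) is approximated by a Riemann sum on a much finer grid.
rng = np.random.default_rng(0)
T, N, paths = 1.0, 4096, 400           # N: fine reference grid, X_0 = 0
h = lambda x: np.sqrt(np.abs(x))

dW = rng.normal(0.0, np.sqrt(T / N), size=(paths, N))
X = np.cumsum(dW, axis=1)              # X_{kT/N}, k = 1..N
X = np.hstack([np.zeros((paths, 1)), X])[:, :-1]   # left endpoints k = 0..N-1
ref = (T / N) * h(X).sum(axis=1)       # fine-grid reference for I_T(h)

def l2_error(n):
    # I_{T,n}(h) uses every (N // n)-th point of the same simulated paths
    coarse = (T / n) * h(X[:, :: N // n]).sum(axis=1)
    return np.sqrt(np.mean((ref - coarse) ** 2))

err_coarse, err_fine = l2_error(8), l2_error(512)
```

One should see the empirical error shrink markedly as *n* grows, in line with the theorem; the experiment does not, of course, distinguish the logarithmic factor.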

##### Proof.

The method of the proof remains the same as that of Theorem 2.1; hence, we use the same notation. The only new point is that, instead of the bound

\[ E_{x}{\big|\Delta _{n}(s)\big|}^{p}\le \| h{\| _{V}^{p}}E_{x}{\big(V\big(|X_{s}|\big)+V\big(|X_{\eta _{n}(s)}|\big)\big)}^{p},\]

now a more precise inequality is available, based on the Hölder continuity of *h*. Namely, we have

##### (6)

\[\begin{array}{r@{\hskip0pt}l}\displaystyle E_{x}{\big|\Delta _{n}(s)\big|}^{p}& \displaystyle =E_{x}{\big|h(X_{s})-h(X_{\eta _{n}(s)})\big|}^{p}\\{} & \displaystyle \le \| h{\| _{\gamma }^{p}}E_{x}{\big|X_{s}-X_{\eta _{n}(s)}\big|}^{\gamma p}\le C\| h{\| _{\gamma }^{p}}{\big|s-\eta _{n}(s)\big|}^{\gamma p/\alpha },\end{array}\]
where *C* depends on $T,Q,p,\gamma $ only. The last inequality holds due to the following representation. By the Markov property of

*X*, for $r<s$, we have
\[ E_{x}\big({\big|X_{s}-X_{r}\big|}^{\gamma p}\big|\mathcal{F}_{r}\big)=f(X_{r}),\]
where \[\begin{array}{r@{\hskip0pt}l}\displaystyle f(z)& \displaystyle =\int _{{\mathbb{R}}^{d}}p_{s-r}(z,y)|y-z{|}^{\gamma p}\hspace{0.1667em}dy\\{} & \displaystyle \le C_{T}\int _{{\mathbb{R}}^{d}}{(s-r)}^{-d/\alpha }Q\big({(s-r)}^{-1/\alpha }(z-y)\big)|z-y{|}^{\gamma p}\hspace{0.1667em}dy\\{} & \displaystyle =C_{T}{(s-r)}^{\gamma p/\alpha }\int _{{\mathbb{R}}^{d}}Q(y)|y{|}^{\gamma p}\hspace{0.1667em}dy.\end{array}\]

Thus, for $t\in [s,\zeta _{n}(s)]$, we have

\[\begin{array}{r@{\hskip0pt}l}& \displaystyle E_{x}{\big|\Delta _{n}(s)\big|}^{p/2}{\big|\Delta _{n}(t)\big|}^{p/2}\\{} & \displaystyle \hspace{1em}=E_{x}{\big|h(X_{s})-h(X_{\eta _{n}(s)})\big|}^{p/2}{\big|h(X_{t})-h(X_{\eta _{n}(s)})\big|}^{p/2}\\{} & \displaystyle \hspace{1em}\le \big(E_{x}\big|h(X_{s})-h(X_{\eta _{n}(s)}){\big|{}^{p}\big)}^{1/2}\big(E_{x}\big|h(X_{t})-h(X_{\eta _{n}(s)}){\big|{}^{p}\big)}^{1/2}\\{} & \displaystyle \hspace{1em}\le C\| h{\| _{\gamma }^{p}}{\big|s-\eta _{n}(s)\big|}^{\gamma p/(2\alpha )}{\big|t-\eta _{n}(s)\big|}^{\gamma p/(2\alpha )}\\{} & \displaystyle \hspace{1em}\le C_{T}\| h{\| _{\gamma }^{p}}{n}^{-(\gamma p)/\alpha }\end{array}\]

and \[ E_{x}{\tilde{H}_{T,n,p}^{1}}(h)\le C{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}{\bigg(\frac{1}{n}\bigg)}^{1+2\gamma /\alpha }\| h{\| _{\gamma }^{2}}\]

with constant *C* depending on $T,Q,p,\gamma $ only.

Next, using (6) and (4), we have
\[ E_{x}{\big|\Delta _{n}(s)\big|}^{p/2}{V}^{p/2}\big(|X_{\zeta _{n}(s)}|\big)\le C\| h{\| _{\gamma }^{p/2}}{n}^{-(\gamma p)/(2\alpha )}{V}^{p/2}\big(|x|\big)\]
and
\[ E_{x}{H_{T,n,p}^{2}}(h)\le C\bigg(\frac{\log n}{n}\bigg){n}^{-\gamma /\alpha }\| h\| _{\gamma }\| h\| _{V}V\big(|x|\big){\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}.\]

Hence, the previous bounds for $E_{x}{\tilde{H}_{T,n,p}^{1}}(h)$ and $E_{x}{H_{T,n,p}^{2}}(h)$ finally yield

##### (7)

\[\begin{array}{r@{\hskip0pt}l}& \displaystyle E_{x}{\big|J_{T,n}(h)\big|}^{p}\\{} & \displaystyle \hspace{1em}\le C{\bigg(E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\bigg)}^{1-2/p}\bigg(\frac{\log n}{n}\bigg){n}^{-\gamma /\alpha }\| h\| _{\gamma }\big(\| h\| _{\gamma }+\| h\| _{V}V\big(|x|\big)\big)\end{array}\]
with a constant *C* depending on $T,Q,V,p,\gamma $ only. Using the same procedure as at the end of the proof of Theorem 2.1, we get \[\begin{array}{r@{\hskip0pt}l}& \displaystyle E_{x}{\int _{0}^{T}}{\big|J_{s,n}(h)\big|}^{p}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}\le {(CT)}^{p/2}{\bigg(\frac{\log n}{n}\bigg)}^{p/2}{n}^{-(\gamma p)/(2\alpha )}\| h{\| _{\gamma }^{p/2}}\big(\| h{\| _{\gamma }^{p/2}}+\| h{\| _{V}^{p/2}}{V}^{p/2}\big(|x|\big)\big),\end{array}\]

which, together with (7), completes the proof. □

### 2.4 Discussion

The results of Theorem 2.1 and Theorem 2.2 should be compared with Theorem 2.3 in [7], where, in our notation, the following bounds were obtained:

\[ E_{x}{\big|I_{T}(h)-I_{T,n}(h)\big|}^{p}\le \left\{\begin{array}{l@{\hskip10.0pt}l}C{n}^{-p(1+\gamma )/2},\hspace{1em}& \gamma \in (0,1),\\{} C{(\log n)}^{p}{n}^{-p(1+\gamma )/2},\hspace{1em}& \gamma =1.\end{array}\right.\]
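Dropping the logarithmic factors, the polynomial decay exponents can be compared directly; a small arithmetic sketch of our own (with $\alpha =2$, so that Theorem 2.2 gives the overall order ${n}^{-p/2-\gamma p/4}$):

```python
# Polynomial decay exponents (logarithmic factors dropped), alpha = 2:
#   Theorem 2.2 above:  n^{-(p/2 + gamma*p/(2*alpha))}
#   [7], Theorem 2.3:   n^{-p*(1+gamma)/2}
def exponent_thm22(p, gamma, alpha=2.0):
    return p / 2.0 + gamma * p / (2.0 * alpha)

def exponent_ref7(p, gamma):
    return p * (1.0 + gamma) / 2.0

p = 2.0
gaps = [exponent_ref7(p, g) - exponent_thm22(p, g) for g in (0.25, 0.5, 1.0)]
```

The gap $\gamma p/4>0$ quantifies in which sense the bound of [7] is sharper in this setting.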

In Theorem 2.3 of [7], *h* satisfies our conditions **H1** and **H2** with $V(|x|)={e}^{C|x|}$, and *X* is a one-dimensional diffusion with coefficients that satisfy a certain smoothness condition (assumption (H)); in particular, **X** holds with $\alpha =2$. In this case, our bound, given in Theorem 2.2, is somewhat worse. On the other hand, this bound is of independent interest because of the wider class of processes *X* it applies to.
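As a closing illustration of that wider applicability, here is a minimal simulation sketch of our own (a toy example, not taken from [7]): the occupation time $I_{T}(1_{A})$, $A=[0,\infty )$, approximated by the integral sums $I_{T,n}(1_{A})$ for a symmetric Cauchy process ($\alpha =1$), a bounded-*h* case covered by Theorem 2.1 with $V\equiv 1$; by the symmetry of the process started at $0$, the mean occupation fraction of *A* equals $1/2$:

```python
import numpy as np

# Occupation time of A = [0, inf) for a symmetric Cauchy process (alpha = 1),
# approximated by the integral sums I_{T,n}(1_A); X_0 = 0, T = 1.
rng = np.random.default_rng(1)
T, n, paths = 1.0, 1000, 2000
# Cauchy increments over a step dt scale like dt^{1/alpha} = dt for alpha = 1.
steps = (T / n) * rng.standard_cauchy(size=(paths, n))
X = np.cumsum(steps, axis=1)
X = np.hstack([np.zeros((paths, 1)), X])[:, :-1]   # left endpoints X_{kT/n}
occupation = (T / n) * (X > 0).sum(axis=1)         # I_{T,n}(1_A)
mean_fraction = occupation.mean() / T
```

The empirical mean fraction should be close to $1/2$, while the per-path occupation fractions remain widely spread, as expected for an arcsine-type law.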