Approximations related to tempered stable distributions
Kalyan Barman, Neelesh S. Upadhye, Palaniappan Vellaisamy

https://doi.org/10.15559/25-VMSTA275
Pub. online: 27 February 2025. Type: Research Article. Open Access.

Received: 14 August 2024
Revised: 24 January 2025
Accepted: 15 February 2025
Published: 27 February 2025

Abstract

In this article, we first obtain, for the Kolmogorov distance, an error bound between a tempered stable and a compound Poisson distribution (CPD) and also an error bound between a tempered stable and an α-stable distribution via Stein’s method. For the smooth Wasserstein distance, an error bound between two tempered stable distributions (TSDs) is also derived. As examples, we discuss the normal and variance-gamma distribution (VGD) approximations to a TSD. As corollaries, the corresponding limit theorems follow.

1 Introduction

Probability approximation is one of the fundamental topics in probability theory, due to its wide range of applications in limit theorems [6, 30, 35], runs [34], stochastic algorithms [37], and various other fields. Approximation results mainly provide estimates of the distance between the distributions of two random variables (rvs), which measures the closeness of the approximation. Hence, estimating the accuracy of the approximation is a crucial task. Recently, Chen et al. [9, 8], Jin et al. [18], Upadhye and Barman [30], and Xu [38] have studied stable approximations via Stein’s method. Distributional approximation for the family of stable distributions is not straightforward, due to the lack of symmetry and the heavy-tailed behavior of stable distributions. One of the major obstacles is that the moments of a stable distribution do not exist whenever the stability parameter $\alpha \in (0,1]$. To overcome these issues, different approaches and various assumptions are used.
Koponen [22] first introduced tempered stable distributions (TSDs) by tempering the tails of stable (also called α-stable) distributions, which makes the tails lighter. The tails of TSDs are heavier than those of the normal distribution and thinner than those of the α-stable distribution, see [21]. Therefore, quantifying the error in the α-stable and normal approximations to a TSD is of interest. A TSD has finite mean and variance, and admits exponential moments. Also, the class of TSDs includes many well-known subfamilies of probability distributions, such as the CGMY, KoBol, bilateral-gamma, and variance-gamma distributions, which have applications in several disciplines, including financial mathematics, see [4, 5, 29, 33].
In this article, we first obtain, for the Kolmogorov distance, an error bound between tempered stable and compound Poisson distributions (CPDs). This provides a rate of convergence for a sequence of CPDs approximating a TSD. Next, we obtain error bounds for the closeness between tempered stable and α-stable distributions via Stein’s method. We also obtain, for the smooth Wasserstein distance, error bounds for the closeness between two TSDs. As a consequence, we discuss normal and variance-gamma approximations to a TSD and the corresponding limit theorems.
The organization of this article is as follows. In Section 2, we discuss some notations and preliminaries that will be useful later. First, we discuss some important properties of TSDs and some special and limiting distributions from the TSD family. A brief discussion of Stein’s method is also presented. In Section 3, we establish a Stein identity and a Stein equation for TSD and solve it via the semigroup approach. The properties of the solution to the Stein equation are discussed. In Section 4, we discuss bounds for tempered stable approximations of various probability distributions.

2 The preliminary results

2.1 Properties of tempered stable distributions

We first define TSDs and discuss some of their properties. Let ${\textbf{I}_{B}}(.)$ denote the indicator function of the set B. A rv X is said to have a TSD (see [24, p.2]) if its cf is given by
(1)
\[ {\phi _{ts}}(z)=\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right),\hspace{1em}z\in \mathbb{R},\]
where the Lévy measure ${\nu _{ts}}$ is
(2)
\[ {\nu _{ts}}(du)=\left(\frac{{m_{1}}}{{u^{1+{\alpha _{1}}}}}{e^{-{\lambda _{1}}u}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2}}}{|u{|^{1+{\alpha _{2}}}}}{e^{-{\lambda _{2}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du,\]
with parameters ${m_{i}},{\lambda _{i}}\in (0,\infty )$ and ${\alpha _{i}}\in [0,1)$, for $i=1,2$, and we denote it by $\text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. Note that TSDs are infinitely divisible and self-decomposable, see [24]. Also, note that if ${\alpha _{1}}={\alpha _{2}}=\alpha \in (0,1)$, then the Lévy measure in (2) can be seen as
(3)
\[ {\nu _{ts}}(du)=q(u){\nu _{\alpha }}(du),\]
where
(4)
\[ {\nu _{\alpha }}(du)=\left(\frac{{m_{1}}}{{u^{1+\alpha }}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2}}}{|u{|^{1+\alpha }}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du\]
is the Lévy measure of an α-stable distribution (see [19]) and $q:\mathbb{R}\to {\mathbb{R}_{+}}$ is a tempering function (see [24]), given by
(5)
\[ q(u)={e^{-{\lambda _{1}}u}}{\mathbf{I}_{(0,\infty )}}(u)+{e^{-{\lambda _{2}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u).\]
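As a purely numerical illustration (not part of the development above), the cf (1) can be evaluated by integrating the Lévy–Khinchine exponent against the Lévy measure (2). The following Python sketch does this; the parameter values are arbitrary choices used only for illustration.

```python
# Minimal sketch: evaluate the TSD characteristic function (1) by numerically
# integrating the Levy-Khinchine exponent against the Levy measure (2).
# The parameter values below are illustrative only.
import numpy as np
from scipy.integrate import quad

m1, a1, lam1 = 1.0, 0.5, 2.0   # (m1, alpha1, lambda1): positive jumps
m2, a2, lam2 = 0.7, 0.3, 1.5   # (m2, alpha2, lambda2): negative jumps

def levy_density(u):
    """Density of nu_ts in (2) with respect to Lebesgue measure."""
    if u > 0:
        return m1 * np.exp(-lam1 * u) / u ** (1 + a1)
    if u < 0:
        return m2 * np.exp(-lam2 * abs(u)) / abs(u) ** (1 + a2)
    return 0.0

def cf_ts(z, cut=60.0):
    """phi_ts(z) = exp( int_R (e^{izu} - 1) nu_ts(du) ).

    The exponential tempering makes the tails beyond |u| = cut negligible, and
    the integrand is integrable near the origin because alpha_1, alpha_2 < 1.
    """
    re = (quad(lambda u: (np.cos(z * u) - 1.0) * levy_density(u), 0.0, cut)[0]
          + quad(lambda u: (np.cos(z * u) - 1.0) * levy_density(u), -cut, 0.0)[0])
    im = (quad(lambda u: np.sin(z * u) * levy_density(u), 0.0, cut)[0]
          + quad(lambda u: np.sin(z * u) * levy_density(u), -cut, 0.0)[0])
    return np.exp(re + 1j * im)

print(cf_ts(0.0))   # equals 1 by construction
print(cf_ts(1.0))   # a complex number of modulus at most 1
```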
The following special and limiting cases of TSDs are well known, see [24]. Let $\stackrel{L}{\to }$ denote the convergence in distribution. Also let ${m_{i}},{\lambda _{i}}\in (0,\infty )$ and ${\alpha _{i}}\in [0,1)$, for $i=1,2$.
  • (i) When ${\alpha _{1}}={\alpha _{2}}=\alpha $, then $\text{TSD}({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ is the KoBol distribution, see [3].
  • (ii) When ${m_{1}}={m_{2}}=m$ and ${\alpha _{1}}={\alpha _{2}}=\alpha $, then $\text{TSD}(m,\alpha ,{\lambda _{1}},m,\alpha ,{\lambda _{2}})$ is the CGMY distribution, see [5].
  • (iii) When ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}({m_{1}},0,{\lambda _{1}},{m_{2}},0,{\lambda _{2}})$ is the bilateral-gamma distribution (BGD), denoted by BGD$({m_{1}},{\lambda _{1}},{m_{2}},{\lambda _{2}})$, see [23].
  • (iv) When ${m_{1}}={m_{2}}=m$ and ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}(m,0,{\lambda _{1}},m,0,{\lambda _{2}})$ is the variance-gamma distribution (VGD), denoted by VGD$(m,{\lambda _{1}},{\lambda _{2}})$, see [23].
  • (v) When ${m_{1}}={m_{2}}=m$, ${\lambda _{1}}={\lambda _{2}}=\lambda $ and ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}(m,0,\lambda ,m,0,\lambda )$ is the symmetric VGD, denoted by SVGD$(m,\lambda )$, see [23].
  • (vi) When ${\lambda _{1}},{\lambda _{2}}\downarrow 0$, then $\text{TSD}({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ converges to an α-stable distribution, denoted by $S({m_{1}},{m_{2}},\alpha )$, with cf
    (6)
    \[ {\phi _{\alpha }}(z)=\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{\alpha }}(du)\right),\hspace{1em}z\in \mathbb{R},\]
    where ${\nu _{\alpha }}$ is the Lévy measure given in (4), see [28].
  • (vii) The limiting case as $m\to \infty $ of the SVGD$(m,\sqrt{2m}/\lambda )$ is the normal distribution $\mathcal{N}(0,{\lambda ^{2}})$, see [12].
Let $X\sim \text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. Then from (1), for $z\in \mathbb{R}$, the cumulant generating function is given by
(7)
\[ \Psi (z)=\log \mathbb{E}[{e^{izX}}]=\log {\phi _{ts}}(z),\]
where ${\phi _{ts}}(z)$ is given in (1). Then the n-th cumulant of X is
(8)
\[\begin{aligned}{}{C_{n}}(X)& :={(-i)^{n}}{\bigg[\frac{{d^{n}}}{d{z^{n}}}\Psi (z)\bigg]_{z=0}}={\int _{\mathbb{R}}}{u^{n}}{\nu _{ts}}(du)\lt \infty ,\hspace{1em}n\ge 1.\end{aligned}\]
In particular (see [24]),
(9)
\[\begin{aligned}{}{C_{1}}(X)& =\mathbb{E}[X]=\Gamma (1-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{1-{\alpha _{1}}}}}-\Gamma (1-{\alpha _{2}})\frac{{m_{2}}}{{\lambda _{2}^{1-{\alpha _{2}}}}},\end{aligned}\]
(10)
\[\begin{aligned}{}{C_{2}}(X)& =\mathit{Var}(X)=\Gamma (2-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{2-{\alpha _{1}}}}}+\Gamma (2-{\alpha _{2}})\frac{{m_{2}}}{{\lambda _{2}^{2-{\alpha _{2}}}}},\hspace{1em}\text{and}\end{aligned}\]
(11)
\[\begin{aligned}{}{C_{3}}(X)& =\Gamma (3-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{3-{\alpha _{1}}}}}-\Gamma (3-{\alpha _{2}})\frac{{m_{2}}}{{\lambda _{2}^{3-{\alpha _{2}}}}}.\end{aligned}\]
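The closed forms (9)–(11) can be cross-checked numerically against the integral representation in (8); the short sketch below (with illustrative parameters of our own choosing) uses the general closed form obtained from (8) by a gamma integral.

```python
# Minimal sketch: cross-check the closed-form cumulants (9)-(11) against the
# integral representation C_n(X) = int_R u^n nu_ts(du) in (8).  The general
# closed form used below, Gamma(n - a1) m1 / lam1^(n - a1)
# + (-1)^n Gamma(n - a2) m2 / lam2^(n - a2), follows from (8) by a gamma
# integral.  Parameter values are illustrative only.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

m1, a1, lam1 = 1.0, 0.5, 2.0
m2, a2, lam2 = 0.7, 0.3, 1.5

def cumulant_closed_form(n):
    return (Gamma(n - a1) * m1 / lam1 ** (n - a1)
            + (-1) ** n * Gamma(n - a2) * m2 / lam2 ** (n - a2))

def cumulant_numerical(n, cut=60.0):
    # positive-jump part of int u^n nu_ts(du), after cancelling powers of u
    pos = quad(lambda u: m1 * u ** (n - 1 - a1) * np.exp(-lam1 * u), 0.0, cut)[0]
    # negative-jump part, after the substitution u -> -u
    neg = (-1) ** n * quad(lambda u: m2 * u ** (n - 1 - a2) * np.exp(-lam2 * u), 0.0, cut)[0]
    return pos + neg

for n in (1, 2, 3):
    print(n, cumulant_closed_form(n), cumulant_numerical(n))
```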

2.2 Key steps of Stein’s method

Let ${f^{(n)}}$ henceforth denote the n-th derivative of f with ${f^{(0)}}=f$ and ${f^{\prime }}={f^{(1)}}$. Let $\mathcal{S}(\mathbb{R})$ be the Schwartz space defined by
(12)
\[ \mathcal{S}(\mathbb{R}):=\left\{f\in {C^{\infty }}(\mathbb{R}):\underset{|x|\to \infty }{\lim }|{x^{m}}{f^{(n)}}(x)|=0,\hspace{2.5pt}\text{for all}\hspace{2.5pt}m,n\in {\mathbb{N}_{0}}\right\},\]
where ${\mathbb{N}_{0}}=\mathbb{N}\cup \{0\}$ and ${C^{\infty }}(\mathbb{R})$ is the class of infinitely differentiable functions on $\mathbb{R}$. Note that the Fourier transform (FT) on $\mathcal{S}(\mathbb{R})$ is an automorphism. In particular, if $f\in \mathcal{S}(\mathbb{R})$, and $\widehat{f}(u):={\textstyle\int _{\mathbb{R}}}{e^{-iux}}f(x)dx$, $u\in \mathbb{R}$, then $\widehat{f}(u)\in \mathcal{S}(\mathbb{R})$. Similarly, if $\widehat{f}(u)\in \mathcal{S}(\mathbb{R})$, and $f(x):=\frac{1}{2\pi }{\textstyle\int _{\mathbb{R}}}{e^{iux}}\widehat{f}(u)du$, $x\in \mathbb{R}$, then $f(x)\in \mathcal{S}(\mathbb{R})$, see [32].
Next, let
(13)
\[ {\mathcal{H}_{r}}=\{h:\mathbb{R}\to \mathbb{R}|h\hspace{2.5pt}\text{is}\hspace{2.5pt}r\hspace{2.5pt}\text{times differentiable and}\hspace{2.5pt}\| {h^{(k)}}\| \le 1,k=0,1,\dots ,r\},\]
where $\| h\| ={\sup _{x\in \mathbb{R}}}|h(x)|$. Then, for any two rvs Y and Z, the smooth Wasserstein distance (see [14]) is given by
(14)
\[ {d_{{\mathcal{H}_{r}}}}(Y,Z):=\underset{h\in {\mathcal{H}_{r}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|,\hspace{1em}r\ge 1.\]
Also, let
(15)
\[ {\mathcal{H}_{W}}=\{h:\mathbb{R}\to \mathbb{R}|h\hspace{2.5pt}\text{is}\hspace{2.5pt}1\text{-Lipschitz and}\hspace{2.5pt}\| h\| \le 1\}.\]
Then, for any two rvs Y and Z, the classical Wasserstein distance (see [14]) is given by
(16)
\[ {d_{W}}(Y,Z):=\underset{h\in {\mathcal{H}_{W}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|.\]
Finally, let
(17)
\[\begin{aligned}{}{\mathcal{H}_{K}}& =\left\{h:\mathbb{R}\to \mathbb{R}\hspace{2.5pt}\big|\hspace{2.5pt}h={\mathbf{I}_{(-\infty ,x]}}\hspace{2.5pt}\text{for some}\hspace{2.5pt}x\in \mathbb{R}\right\}.\end{aligned}\]
Then, for any two rvs Y and Z, the Kolmogorov distance (see [14]) is given by
(18)
\[ {d_{K}}(Y,Z):=\underset{h\in {\mathcal{H}_{K}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|.\]
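For concreteness, the Kolmogorov and classical Wasserstein distances can be estimated from simulated samples, as in the following sketch; the two gamma laws there are purely illustrative and not taken from the results of this paper. The smooth Wasserstein distance ${d_{{\mathcal{H}_{r}}}}$ has no analogous off-the-shelf estimator and is not computed here.

```python
# Minimal sketch: empirical estimates of the Kolmogorov distance (18) and the
# classical Wasserstein distance (16) between two laws, here two gamma
# distributions chosen purely for illustration.
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # sample from Y
z = rng.gamma(shape=2.2, scale=1.0, size=100_000)   # sample from Z

d_K = ks_2samp(y, z).statistic          # sup-distance between empirical CDFs
d_W = wasserstein_distance(y, z)        # empirical 1-Wasserstein distance
print(d_K, d_W)
```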
Next, we discuss the steps of Stein’s method. Let Z be a random variable (rv) with probability distribution ${F_{Z}}$ (denoted $Z\sim {F_{Z}}$). First, one identifies a suitable operator $\mathcal{A}$ (called the Stein operator) and a class of suitable functions $\mathcal{F}$ such that $Z\sim {F_{Z}}$ if and only if
\[ \mathbb{E}\left[\mathcal{A}f(Z)\right]=0,\hspace{1em}\text{for all}\hspace{2.5pt}f\in \mathcal{F}.\]
This equivalence is called the Stein characterization of ${F_{Z}}$. This characterization leads us to the Stein equation
(19)
\[ \mathcal{A}f(x)=h(x)-\mathbb{E}[h(Z)],\]
where h is a real-valued test function. Replacing x with a rv Y and taking expectations on both sides of (19) gives
(20)
\[ \mathbb{E}\left[\mathcal{A}f(Y)\right]=\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)].\]
The equality (20) plays a crucial role in Stein’s method. Since the probability distribution ${F_{Z}}$ is characterized by (19), the problem of bounding the quantity $|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]|$ reduces to bounding $|\mathbb{E}[\mathcal{A}f(Y)]|$, which depends on the smoothness of the solution to (19) and on the behavior of Y. For more details on Stein’s method, we refer to [1, 7] and the references therein.
In particular, let Z have the normal distribution $\mathcal{N}(0,{\sigma ^{2}})$. Then the Stein characterization for Z (see [31]) is
(21)
\[ \mathbb{E}\left[{\sigma ^{2}}{f^{\prime }}(Z)-Zf(Z)\right]=0,\]
where f is any real-valued absolutely continuous function such that $\mathbb{E}|{f^{\prime }}(Z)|\lt \infty $. This characterization leads us to the Stein equation
(22)
\[ {\sigma ^{2}}{f^{\prime }}(x)-xf(x)=h(x)-\mathbb{E}[h(Z)],\]
where h is a real-valued test function. Replacing x with a rv ${Z_{n}}\sim \mathcal{N}(0,{\sigma _{n}^{2}})$ and taking expectations on both sides of (22) gives
(23)
\[ \mathbb{E}\left[{\sigma ^{2}}{f^{\prime }}({Z_{n}})-{Z_{n}}f({Z_{n}})\right]=\mathbb{E}[h({Z_{n}})]-\mathbb{E}[h(Z)].\]
Using the smoothness of the solution to (22), it can be shown (see [27, Section 3.6]) that
(24)
\[ {d_{W}}({Z_{n}},Z)\le \frac{\sqrt{2/\pi }}{\sigma \vee {\sigma _{n}}}|{\sigma _{n}^{2}}-{\sigma ^{2}}|.\]
From (24), if ${\sigma _{n}}\to \sigma $, then ${d_{W}}({Z_{n}},Z)\to 0$, as expected, so ${Z_{n}}$ converges in distribution to $\mathcal{N}(0,{\sigma ^{2}})$. We refer to [25] and [26] for a number of bounds similar to (24) for the comparison of univariate probability distributions.
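As a quick numerical check (with parameter values of our own choosing), recall that for two centred normals the quantile coupling gives the exact value ${d_{W}}({Z_{n}},Z)=\sqrt{2/\pi }\,|{\sigma _{n}}-\sigma |$, which can be compared with the bound (24):

```python
# Minimal sketch: the bound (24) versus the exact Wasserstein-1 distance
# between N(0, sigma_n^2) and N(0, sigma^2); for two centred normals the
# quantile (comonotone) coupling gives d_W = sqrt(2/pi) * |sigma_n - sigma|.
import numpy as np

c = np.sqrt(2.0 / np.pi)
sigma = 1.0
for sigma_n in (2.0, 1.5, 1.1, 1.01):
    exact = c * abs(sigma_n - sigma)
    bound = c * abs(sigma_n ** 2 - sigma ** 2) / max(sigma, sigma_n)
    print(f"sigma_n = {sigma_n:4.2f}:  exact d_W = {exact:.4f},  bound (24) = {bound:.4f}")
```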

3 Stein’s method for tempered stable distributions

3.1 The Stein identity for tempered stable distributions

In this section, we obtain the Stein identity for a TSD. First recall that $\mathcal{S}(\mathbb{R})$ denotes the Schwartz space of functions, defined in (12).
Proposition 3.1.
A rv $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ if and only if
(25)
\[ \mathbb{E}\left[Xf(X)-{\int _{\mathbb{R}}}uf(X+u){\nu _{ts}}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
where ${\nu _{ts}}$ is the associated Lévy measure of TSD, defined in (2).
Proof.
From Equations (2.7) and (2.8) of [24], the integral ${\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)$ is convergent for all $z\in \mathbb{R}$. Also, the cf of $X\sim \text{TSD}$ is given by (see (1))
(26)
\[\begin{aligned}{}{\phi _{ts}}(z)& =\exp (\Psi (z)),\hspace{1em}z\in \mathbb{R}\end{aligned}\]
where $\Psi (z)={\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)$. Since $|(iu){e^{izu}}|\le |u|$ and ${\textstyle\int _{\mathbb{R}}}|u|{\nu _{ts}}(du)\lt \infty $ (see (8)), we have
\[ {\Psi ^{\prime }}(z)=\frac{d}{dz}{\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)={\int _{\mathbb{R}}}iu{e^{izu}}{\nu _{ts}}(du).\]
Now, taking logarithms on both sides of (26) and differentiating with respect to z, we have
(27)
\[ {\phi ^{\prime }_{ts}}(z)=i{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du).\]
Let ${F_{X}}$ be the cumulative distribution function (CDF) of X. Then,
(28)
\[ {\phi _{ts}}(z)={\int _{\mathbb{R}}}{e^{izx}}{F_{X}}(dx)\hspace{2.5pt}\hspace{2.5pt}\hspace{0.2778em}\Longrightarrow \hspace{0.2778em}\hspace{2.5pt}\hspace{2.5pt}{\phi ^{\prime }_{ts}}(z)=i{\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx).\]
Using (28) in (27) and rearranging the integrals, we have
(29)
\[ 0=i\left({\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du)\right).\]
Multiplying both sides of (29) by $-i$, we get
(30)
\[ 0={\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du).\]
Note that ${\phi _{ts}}(z)$ and ${\phi ^{\prime }_{ts}}(z)$ exist and are finite for all $z\in \mathbb{R}$. Hence by Fubini’s theorem, the second integral of (30) can be written as
(31)
\[\begin{aligned}{}{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du)& ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izu}}{e^{izx}}{F_{X}}(dx){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{iz(u+x)}}{\nu _{ts}}(du){F_{X}}(dx)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izy}}{\nu _{ts}}(du){F_{X}}(d(y-u))\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izx}}{\nu _{ts}}(du){F_{X}}(d(x-u))\\ {} & ={\int _{\mathbb{R}}}{e^{izx}}{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du).\end{aligned}\]
Substituting (31) in (30), we have
(32)
\[\begin{aligned}{}0& ={\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\int _{\mathbb{R}}}{e^{izx}}{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{e^{izx}}\left(x{F_{X}}(dx)-{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\right).\end{aligned}\]
Applying the Fourier transform to (32), multiplying with $f\in \mathcal{S}(\mathbb{R})$, and integrating over $\mathbb{R}$, we get
(33)
\[ {\int _{\mathbb{R}}}f(x)\left(x{F_{X}}(dx)-{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\right)=0.\]
The second integral of (33) can be seen as
(34)
\[\begin{aligned}{}{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(x){F_{X}}(d(x-u)){\nu _{ts}}(du)& ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(y+u){F_{X}}(dy){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(x+u){F_{X}}(dx){\nu _{ts}}(du)\\ {} & =\mathbb{E}\left[{\int _{\mathbb{R}}}uf(X+u){\nu _{ts}}(du)\right].\end{aligned}\]
Substituting (34) in (33), we have
\[ \mathbb{E}\left[Xf(X)-{\int _{\mathbb{R}}}uf(X+u){\nu _{ts}}(du)\right]=0,\]
which proves (25).
Assume, conversely, that (25) holds for ${\nu _{ts}}$ defined in (2). For any $s\in \mathbb{R}$, let $f(x)={e^{isx}}$, $x\in \mathbb{R}$, then (25) becomes
\[\begin{aligned}{}\mathbb{E}\left[X{e^{isX}}\right]& =\mathbb{E}\left[{\int _{\mathbb{R}}}{e^{is(X+u)}}u{\nu _{ts}}(du)\right]\\ {} & =\mathbb{E}\left[{e^{isX}}{\int _{\mathbb{R}}}{e^{isu}}u{\nu _{ts}}(du)\right].\end{aligned}\]
Setting ${\phi _{ts}}(s)=\mathbb{E}[{e^{isX}}]$, we have
(35)
\[ {\phi ^{\prime }_{ts}}(s)=i{\phi _{ts}}(s){\int _{\mathbb{R}}}{e^{isu}}u{\nu _{ts}}(du).\]
Integrating the real and imaginary parts of (35) leads, for any $z\ge 0$, to
\[\begin{aligned}{}{\phi _{ts}}(z)& =\exp \left(i{\int _{0}^{z}}{\int _{\mathbb{R}}}{e^{isu}}u{\nu _{ts}}(du)ds\right)\\ {} & =\exp \left(i{\int _{\mathbb{R}}}{\int _{0}^{z}}{e^{isu}}dsu{\nu _{ts}}(du)\right)\\ {} & =\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right).\end{aligned}\]
A similar computation for $z\le 0$ completes the derivation of the cf.  □
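The identity (25) can be verified numerically in the bilateral-gamma special case ${\alpha _{1}}={\alpha _{2}}=0$, where X is a difference of two independent gamma variables (see [23]) and the inner integral simplifies because the factor u cancels the $1/u$ in the Lévy density. The following Monte Carlo sketch (the test function and the parameter values are our own choices) illustrates this.

```python
# Minimal sketch: Monte Carlo check of the Stein identity (25) in the
# bilateral-gamma case alpha1 = alpha2 = 0, where X = G1 - G2 with independent
# G1 ~ Gamma(m1, rate lam1) and G2 ~ Gamma(m2, rate lam2) (see [23]).  For this
# Levy measure the factor u cancels the 1/u in the Levy density, so
#   int_R u f(x+u) nu_ts(du)
#     = m1 int_0^inf f(x+u) e^{-lam1 u} du - m2 int_0^inf f(x-v) e^{-lam2 v} dv.
# The test function f and the parameter values are our own choices.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
m1, lam1, m2, lam2 = 2.0, 3.0, 1.5, 2.0

def f(x):
    return np.exp(-x ** 2)               # a Schwartz-type test function

def inner_integral(x):
    pos = quad(lambda u: f(x + u) * np.exp(-lam1 * u), 0.0, np.inf)[0]
    neg = quad(lambda v: f(x - v) * np.exp(-lam2 * v), 0.0, np.inf)[0]
    return m1 * pos - m2 * neg

n = 10_000
X = rng.gamma(m1, 1.0 / lam1, size=n) - rng.gamma(m2, 1.0 / lam2, size=n)
lhs = np.mean(X * f(X))                               # E[X f(X)]
rhs = np.mean([inner_integral(x) for x in X])         # E[ int u f(X+u) nu_ts(du) ]
print(lhs, rhs)                                       # the two should nearly agree
```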
We now have the following corollary for α-stable distributions.
Corollary 3.2.
A rv $X\sim S({m_{1}},{m_{2}},\alpha )$ if and only if
(36)
\[ \mathbb{E}\left[Xf(X)-{\int _{\mathbb{R}}}uf(X+u){\nu _{\alpha }}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
where ${\nu _{\alpha }}$ is the associated Lévy measure of an α-stable distribution, given in (4).
Proof.
Let ${\alpha _{1}}={\alpha _{2}}=\alpha $ in Proposition 3.1. Observe now that
\[\begin{aligned}{}& \left|xf(x)-{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)\right|\lt \infty ,\hspace{1em}\text{and}\hspace{2.5pt}\\ {} & \underset{{\lambda _{1}},{\lambda _{2}}\downarrow 0}{\lim }{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)={\int _{\mathbb{R}}}uf(x+u){\nu _{\alpha }}(du),\end{aligned}\]
since $f\in \mathcal{S}(\mathbb{R})$. Hence, taking limits as ${\lambda _{1}},{\lambda _{2}}\downarrow 0$ in (25), applying the dominated convergence theorem, and noting that ${\nu _{ts}}\to {\nu _{\alpha }}$, we get (36).  □
Remark 3.3.
(i) Note that we derive the characterizing (Stein) identity (25) for TSD using the Lévy–Khinchine representation of the cf. Also, observe that several classes of distributions such as variance-gamma, bilateral-gamma, CGMY, and KoBol can be viewed as TSD. The Stein identities for these classes of distributions can be easily obtained using (25).
(ii) Recently, Arras and Houdré ([1], Theorem 3.1 and Section 5) obtained the Stein identity for infinitely divisible distributions (IDDs) with finite first moment via the covariance representation given in [17]. Note that TSDs form a subclass of IDDs and have finite mean. Hence, the Stein identity for a TSD can also be derived using the approach given in [1].

3.1.1 A nonzero bias distribution

In the Stein’s method literature, the zero bias distribution is a powerful tool for obtaining bounds and has been used in several situations, for instance in conjunction with coupling techniques to produce quantitative results for normal and product-normal approximations, see, e.g., [11]. The zero bias distribution, due to Goldstein and Reinert [16], is defined as follows.
Definition 3.4.
Let X be a rv with $\mathbb{E}[X]=0$, and $\mathit{Var}(X)={\sigma ^{2}}\lt \infty $. We say that ${X^{\ast }}$ has X-zero bias distribution if
(37)
\[ \mathbb{E}[Xf(X)]={\sigma ^{2}}\mathbb{E}[{f^{\prime }}({X^{\ast }})],\]
for any differentiable function f with $\mathbb{E}[Xf(X)]\lt \infty $.
In the following lemma, we prove the existence of a nonzero (extended) bias distribution (see [1, Remark 3.9 (ii)]) associated with TSD. Before stating our result, let us define
(38)
\[ {\eta ^{+}}(u)={\int _{u}^{\infty }}y{\nu _{ts}}(dy),\hspace{2.5pt}u\gt 0,\hspace{1em}\text{and}\hspace{1em}{\eta ^{-}}(u)={\int _{-\infty }^{u}}(-y){\nu _{ts}}(dy),\hspace{2.5pt}u\lt 0,\]
where ${\nu _{ts}}$ is the Lévy measure of TSD (see (2)). Let
\[ \eta (u):={\eta ^{+}}(u)+{\eta ^{-}}(u).\]
Also let Y be a random variable with the density
(39)
\[ {f_{1}}(u)=\frac{\eta (u)}{{\textstyle\int _{\mathbb{R}}}{y^{2}}{\nu _{ts}}(dy)}=\frac{\eta (u)}{\mathit{Var}(X)},\hspace{1em}u\in \mathbb{R}.\]
Then, for $n\ge 1$, the nth moment of Y is
(40)
\[\begin{aligned}{}\mathbb{E}[{Y^{n}}]& =\frac{1}{\mathit{Var}(X)}{\int _{\mathbb{R}}}{u^{n}}\eta (u)du\\ {} & =\frac{1}{(n+1)\mathit{Var}(X)}{\int _{\mathbb{R}}}{u^{n+2}}{\nu _{ts}}(du)\\ {} & =\frac{{C_{n+2}}(X)}{(n+1){C_{2}}(X)}.\end{aligned}\]
Lemma 3.5.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and Y (independent of X) have the density given in (39). Then
(41)
\[ \mathit{Cov}(X,f(X))=\mathit{Var}(X)\mathbb{E}\left[{f^{\prime }}(X+Y)\right],\]
where f is an absolutely continuous function with $\mathbb{E}\left[{f^{\prime }}(X+Y)\right]\lt \infty $.
Proof.
Using (8) and (9), for the case $n=1$, in (25), and rearranging the terms, we get
\[ \mathit{Cov}(X,f(X))=\mathbb{E}\left[{\int _{\mathbb{R}}}u(f(X+u)-f(X)){\nu _{ts}}(du)\right].\]
Now
\[\begin{aligned}{}\mathit{Cov}(X,f(X))=& \mathbb{E}\left[{\int _{\mathbb{R}}}u(f(X+u)-f(X)){\nu _{ts}}(du)\right]\\ {} =& \mathbb{E}\left[{\int _{0}^{\infty }}{f^{\prime }}(X+v){\int _{v}^{\infty }}u{\nu _{ts}}(du)dv\right]\\ {} & +\mathbb{E}\left[{\int _{-\infty }^{0}}{f^{\prime }}(X+v){\int _{-\infty }^{v}}(-u){\nu _{ts}}(du)dv\right]\\ {} =& \mathbb{E}\left[{\int _{\mathbb{R}}}{f^{\prime }}(X+v)\left({\eta ^{+}}(v){\mathbf{I}_{(0,\infty )}}(v)+{\eta ^{-}}(v){\mathbf{I}_{(-\infty ,0)}}(v)\right)dv\right]\\ {} =& \left({\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\right)\mathbb{E}\left[{\int _{\mathbb{R}}}{f^{\prime }}(X+v){f_{1}}(v)dv\right]\\ {} =& \left({\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\right)\mathbb{E}\left[{f^{\prime }}(X+Y)\right]\\ {} =& \mathit{Var}(X)\mathbb{E}\left[{f^{\prime }}(X+Y)\right].\end{aligned}\]
This proves the result.  □
Remark 3.6.
Note that the covariance identity in (41) coincides with the one given in [1, Proposition 3.8]. However, the usefulness of the identity is shown in deriving the error bound of the limiting distributions of TSD (see the proof of Theorem 4.5).
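In the bilateral-gamma case ${\alpha _{1}}={\alpha _{2}}=0$, the functions in (38) are explicit, ${\eta ^{+}}(u)=({m_{1}}/{\lambda _{1}}){e^{-{\lambda _{1}}u}}$ and ${\eta ^{-}}(u)=({m_{2}}/{\lambda _{2}}){e^{{\lambda _{2}}u}}$, so Y in (39) is a two-sided exponential mixture and the identity (41) can be checked by simulation. The following sketch (parameters and f are illustrative choices of ours) does this.

```python
# Minimal sketch: Monte Carlo check of the covariance identity (41) in the
# bilateral-gamma case alpha1 = alpha2 = 0.  Here (38) gives
# eta^+(u) = (m1/lam1) e^{-lam1 u} and eta^-(u) = (m2/lam2) e^{lam2 u}, so the
# rv Y with density (39) is a mixture: Y ~ Exp(lam1) with probability
# (m1/lam1^2)/Var(X) and Y ~ -Exp(lam2) otherwise.  Parameters and f are our
# own illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
m1, lam1, m2, lam2 = 2.0, 3.0, 1.5, 2.0
var_x = m1 / lam1 ** 2 + m2 / lam2 ** 2            # C_2(X), see (10) with alpha = 0

n = 200_000
X = rng.gamma(m1, 1.0 / lam1, size=n) - rng.gamma(m2, 1.0 / lam2, size=n)

p_plus = (m1 / lam1 ** 2) / var_x
positive = rng.random(n) < p_plus
Y = np.where(positive,
             rng.exponential(1.0 / lam1, size=n),
             -rng.exponential(1.0 / lam2, size=n))

f = np.tanh                                        # absolutely continuous and bounded
f_prime = lambda x: 1.0 / np.cosh(x) ** 2

lhs = np.mean(X * f(X)) - np.mean(X) * np.mean(f(X))   # Cov(X, f(X))
rhs = var_x * np.mean(f_prime(X + Y))
print(lhs, rhs)                                        # the two should nearly agree
```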

3.2 The Stein equation for tempered stable distributions

In this section, we first derive the Stein equation for TSD and then solve it via the semigroup approach. From Proposition 3.1, for any $f\in \mathcal{S}(\mathbb{R})$,
(42)
\[ \mathcal{A}f(x):=-xf(x)+{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)\]
is the Stein operator for TSD. Hence, the Stein equation for $X\sim $TSD $({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}}$, ${\lambda _{2}})$ is given by
(43)
\[ \mathcal{A}f(x)=h(x)-\mathbb{E}[h(X)],\]
where $h\in \mathcal{H}$, a class of test functions. The semigroup approach for solving the Stein equation (43) was developed by Barbour [2] and generalized by Arras and Houdré [1] to infinitely divisible distributions with finite first moment. Consider the following family of operators ${({P_{t}})_{t\ge 0}}$, defined, for all $x\in \mathbb{R}$, by
(44)
\[ {P_{t}}(f)(x)=\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{f}(z){e^{izx{e^{-t}}}}\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-t}}z)}dz,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
where $\hat{f}$ is FT of f, and ${\phi _{ts}}$ is the cf of TSD given in (1). Recall that the TSD family is self-decomposable, see [24], p. 4284. Hence, from Equations 5.8 and 5.15 of [1], one can define a cf, for all $z\in \mathbb{R}$, and $t\ge 0$, by
(45)
\[\begin{aligned}{}{\phi _{t}}(z)& :=\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-t}}z)}\\ {} & =\frac{\exp \left({\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right)}{\exp \left({\textstyle\int _{\mathbb{R}}}({e^{i{e^{-t}}zu}}-1){\nu _{ts}}(du)\right)}\\ {} & =\exp \left({\int _{\mathbb{R}}}{e^{izu}}(1-{e^{i({e^{-t}}-1)zu}}){\nu _{ts}}(du)\right)\\ {} & ={\int _{\mathbb{R}}}{e^{izu}}{F_{{X_{(t)}}}}(du),\end{aligned}\]
where ${F_{{X_{(t)}}}}$ is the CDF of a rv ${X_{(t)}}$, say. Also, for any $t\ge 0$, the rv ${X_{(t)}}$ is related to X (see Equation 2.11 of [1], Section 2.2) as
\[ X\stackrel{d}{=}{e^{-t}}X+{X_{(t)}},\]
where $X\sim $TSD and $\stackrel{d}{=}$ denotes the equality in distribution. We refer to Section 15 of [19] for more details on self-decomposable distributions. Now using (45) in (44), we get
(46)
\[\begin{aligned}{}{P_{t}}(f)(x)& =\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}\widehat{f}(z){e^{izx{e^{-t}}}}{e^{izu}}{F_{{X_{(t)}}}}(du)dz\\ {} & =\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}\widehat{f}(z){e^{iz(u+x{e^{-t}})}}{F_{{X_{(t)}}}}(du)dz\\ {} & ={\int _{\mathbb{R}}}f(u+x{e^{-t}}){F_{{X_{(t)}}}}(du),\end{aligned}\]
where the last step follows by applying the inverse FT.
Proposition 3.7.
Let ${({P_{t}})_{t\ge 0}}$ be a family of operators given in (44). Then
  • (i) ${({P_{t}})_{t\ge 0}}$ is a ${C_{0}}$-semigroup on $\mathcal{S}(\mathbb{R})$;
  • (ii) its generator T is given by
    (47)
    \[ T(f)(x)=-x{f^{\prime }}(x)+{\int _{\mathbb{R}}}u{f^{\prime }}(x+u){\nu _{ts}}(du),\hspace{1em}f\in \mathcal{S}(\mathbb{R}).\]
The proof follows by steps similar to those of Proposition 3.8 and Lemma 3.10 of [30].
Remark 3.8.
Note that the domain of the semigroup ${({P_{t}})_{t\ge 0}}$ is $\mathcal{S}(\mathbb{R})$, so the semigroup ${({P_{t}})_{t\ge 0}}$ is a uniformly continuous semigroup. Also, for $1\le p\lt \infty $, the Schwartz space $\mathcal{S}(\mathbb{R})$ is dense in ${L^{p}}({F_{X}})=\{f:\mathbb{R}\to \mathbb{R}|{\textstyle\int _{\mathbb{R}}}|f(x){|^{p}}{F_{X}}(dx)\lt \infty \}$, where ${F_{X}}$ is the CDF of $X\sim $ TSD (see [20, Remark 1.9.1]). Following the proof of Proposition 5.1 of [1], the domain of ${({P_{t}})_{t\ge 0}}$ can be extended to ${L^{p}}({F_{X}})$, and the topology on ${L^{p}}({F_{X}})$ can be derived from the ${L^{p}}$-norm, which is a metric topology.
Next, we provide a solution to the Stein equation (43).
Theorem 3.9.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and $h\in {\mathcal{H}_{r}}$. Then the function ${f_{h}}:\mathbb{R}\to \mathbb{R}$ defined by
(48)
\[ {f_{h}}(x):=-{\int _{0}^{\infty }}\frac{d}{dx}{P_{t}}h(x)dt\]
solves (43).
Proof.
Let
\[ {g_{h}}(x)=-{\int _{0}^{\infty }}\bigg({P_{t}}(h)(x)-\mathbb{E}[h(X)]\bigg)dt.\]
Then ${g^{\prime }_{h}}(x)={f_{h}}(x)$. Now from (44), we get
(49)
\[\begin{aligned}{}{P_{0}}(h)(x)& =\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){e^{izx}}dz=h(x),\end{aligned}\]
(50)
\[\begin{aligned}{}\hspace{2.5pt}\text{and}\hspace{2.5pt}\underset{\epsilon \to \infty }{\lim }{P_{\epsilon }}h(x)& =\underset{\epsilon \to \infty }{\lim }\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){e^{izx{e^{-\epsilon }}}}\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-\epsilon }}z)}dz\\ {} & =\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){\phi _{ts}}(z)dz\\ {} & =\mathbb{E}[h(X)].\end{aligned}\]
Also from (47), we get
\[\begin{aligned}{}\mathcal{A}{f_{h}}(x)& =-x{f_{h}}(x)+{\int _{\mathbb{R}}}u{f_{h}}(x+u){\nu _{ts}}(du)\\ {} & =T{g_{h}}(x)\\ {} & =-{\int _{0}^{\infty }}T{P_{t}}(h)(x)dt\\ {} & =-{\int _{0}^{\infty }}\frac{d}{dt}{P_{t}}h(x)dt\hspace{2.5pt}\text{(see [27, p.68])}\\ {} & =-\underset{\epsilon \to \infty }{\lim }{\int _{0}^{\epsilon }}\frac{d}{dt}{P_{t}}h(x)dt\\ {} & ={P_{0}}h(x)-\underset{\epsilon \to \infty }{\lim }{P_{\epsilon }}h(x)\\ {} & =h(x)-\mathbb{E}[h(X)]\hspace{2.5pt}\text{(using (49) and (50)}).\end{aligned}\]
Hence, ${f_{h}}$ is the solution to (43).  □

3.3 Properties of the solution of the Stein equation

The next step is to investigate the properties of ${f_{h}}$. In the following lemma, we establish estimates for ${f_{h}}$, which play a crucial role in the TSD approximation problems. Gaunt [12, 13] and Döbler et al. [10] proposed various methods for bounding solutions of Stein equations, in particular for the variance-gamma family, a subfamily of TSDs.
Lemma 3.10.
For $h\in {\mathcal{H}_{r+1}}$, let ${f_{h}}$ be defined in (48).
  • (i) For $r=0,1,2,\dots $,
    (51)
    \[ \| {f_{h}^{(r)}}\| \le \frac{1}{r+1}\| {h^{(r+1)}}\| .\]
  • (ii) For any $x,y\in \mathbb{R}$,
    (52)
\[ |{f^{\prime }_{h}}(x)-{f^{\prime }_{h}}(y)|\le \frac{\| {h^{(3)}}\| }{3}\left|x-y\right|.\]
Proof.
(i) For $h\in {\mathcal{H}_{r+1}}$,
\[\begin{aligned}{}\| {f_{h}}\| & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}\frac{d}{dx}{P_{t}}h(x)dt\right|\\ {} & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}\frac{d}{dx}{\int _{\mathbb{R}}}h(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\hspace{2.5pt}(\text{using (46)})\\ {} & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}{e^{-t}}{\int _{\mathbb{R}}}{h^{(1)}}(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\\ {} & \le \| {h^{(1)}}\| \left|{\int _{0}^{\infty }}{e^{-t}}dt\right|\\ {} & =\| {h^{(1)}}\| .\end{aligned}\]
It can be easily seen that ${f_{h}}$ is r-times differentiable. Let $r=1$, then
\[\begin{aligned}{}\| {f_{h}^{(1)}}\| & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}{h^{(2)}}(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\\ {} & \le \| {h^{(2)}}\| \left|{\int _{0}^{\infty }}{e^{-2t}}dt\right|\\ {} & =\frac{\| {h^{(2)}}\| }{2}.\end{aligned}\]
Also, by induction, we get
\[ \| {f_{h}^{(r)}}\| \le \frac{1}{r+1}\| {h^{(r+1)}}\| ,\hspace{1em}r=0,1,2,\dots .\]
(ii) For any $x,y\in \mathbb{R}$ and $h\in {\mathcal{H}_{3}}$,
\[\begin{aligned}{}\left|{f^{\prime }_{h}}(x)-{f^{\prime }_{h}}(y)\right|& \le {\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}\left|{h^{(2)}}(x{e^{-t}}+z)-{h^{(2)}}(y{e^{-t}}+z)\right|{F_{{X_{(t)}}}}(dz)dt\\ {} & \le {\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}\| {h^{(3)}}\| \left|x-y\right|{e^{-t}}{F_{{X_{(t)}}}}(dz)dt\\ {} & =\| {h^{(3)}}\| \left|x-y\right|{\int _{0}^{\infty }}{e^{-3t}}dt\\ {} & =\frac{\| {h^{(3)}}\| }{3}\left|x-y\right|.\end{aligned}\]
This proves the result.  □

4 Bounds for tempered stable approximation

In this section, we present bounds for the tempered stable approximations to various probability distributions. First, we obtain, for the Kolmogorov distance ${d_{K}}$, the error bounds for a sequence of CPD that converges to a TSD.
Theorem 4.1.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and ${X_{n}}$, $n\ge 1$, be compound Poisson rvs with cf
(53)
\[ {\phi _{n}}(z):=\exp \bigg(n\left({\phi _{ts}^{\frac{1}{n}}}(z)-1\right)\bigg),\hspace{1em}z\in \mathbb{R},\]
where ${\phi _{ts}}(z)$ is given in (1). Then
(54)
\[ {d_{K}}({X_{n}},X)\le c{\bigg({\sum \limits_{j=1}^{2}}|{C_{j}}(X)|\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}},\]
where $c\gt 0$ is independent of n and ${C_{j}}$ denotes the jth cumulant of X.
Proof.
Let $b={\textstyle\int _{-1}^{1}}u{\nu _{ts}}(du)$, where ${\nu _{ts}}$ is defined in (2). Then by Equation (2.6) of [1], $\text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})\stackrel{d}{=}\mathrm{ID}(b,0,{\nu _{ts}})$, that is, $\text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ is an infinitely divisible distribution with the generating triplet $(b,0,{\nu _{ts}})$. Note that TSDs are absolutely continuous with respect to the Lebesgue measure with a bounded density and $\mathbb{E}|X{|^{2}}\lt \infty $ (see [24, Section 7]). Recall from Proposition 4.11 of [1] that, if $X\sim \mathrm{ID}(b,0,\nu )$ with cf ${\phi _{X}}$ (say), and ${X_{n}}$, $n\ge 1$, are compound Poisson rvs, each with cf as in (53), then
(55)
\[\begin{aligned}{}{d_{K}}({X_{n}},X)& \le c{\bigg(|\mathbb{E}[X]|+{\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\bigg)^{\frac{2}{p+4}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{p+4}}},\end{aligned}\]
where $|{\phi _{X}}(z)|{\displaystyle\int _{0}^{|z|}}\displaystyle\frac{ds}{|{\phi _{X}}(s)|}\le {c_{0}}|z{|^{p}}$, $p\ge 1$. Now observe that
\[\begin{aligned}{}|{\phi _{ts}}(z)|{\int _{0}^{|z|}}\frac{ds}{|{\phi _{ts}}(s)|}& \le {\int _{0}^{|z|}}\frac{ds}{|\mathbb{E}[\cos sX]+i\mathbb{E}[\sin sX]|}\\ {} & ={\int _{0}^{|z|}}\frac{|\mathbb{E}[{e^{\left(-isX\right)}}]|}{{\mathbb{E}^{2}}[\cos sX]+{\mathbb{E}^{2}}[\sin sX]}ds\\ {} & \le {c_{0}}{\int _{0}^{|z|}}|\mathbb{E}[{e^{\left(-isX\right)}}]|ds\\ {} & \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\left(\frac{1}{{\mathbb{E}^{2}}[\cos sX]+{\mathbb{E}^{2}}[\sin sX]}\lt {c_{0}}\hspace{2.5pt}\text{(say)}\right)\\ {} & ={c_{0}}{\int _{0}^{|z|}}|\mathbb{E}[\cos sX-i\sin sX]|ds\\ {} & \le {c_{0}}{\int _{0}^{|z|}}ds\\ {} & ={c_{0}}|z|.\end{aligned}\]
Hence by (55), for $p=1$, we get
\[\begin{aligned}{}{d_{K}}({X_{n}},X)& \le c{\bigg(|\mathbb{E}[X]|+{\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}}\\ {} & =c{\bigg({\sum \limits_{j=1}^{2}}|{C_{j}}(X)|\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}},\end{aligned}\]
$\hspace{2.5pt}\text{since}\hspace{2.5pt}{\textstyle\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)={C_{2}}(X)$. This proves the result.  □
Remark 4.2.
Note that ${d_{K}}({X_{n}},X)\to 0$ as $n\to \infty $, as expected, so the TSD X is the limit of the CPDs ${X_{n}}$.
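In the bilateral-gamma case the rv ${X_{n}}$ in (53) can be simulated directly, since ${\phi _{ts}^{1/n}}$ is the cf of $\text{TSD}({m_{1}}/n,0,{\lambda _{1}},{m_{2}}/n,0,{\lambda _{2}})$, a difference of two gamma variables. The following sketch (an illustration of ours, with arbitrary parameters) estimates ${d_{K}}({X_{n}},X)$ empirically by a two-sample Kolmogorov statistic and shows it shrinking as n grows.

```python
# Minimal sketch (bilateral-gamma case alpha1 = alpha2 = 0): the compound
# Poisson rv X_n with cf (53) can be written as xi_1 + ... + xi_N with
# N ~ Poisson(n) and the xi_k iid with cf phi_ts^{1/n}, i.e.
# xi_k ~ TSD(m1/n, 0, lam1, m2/n, 0, lam2), a difference of two gammas.
# Conditionally on N, the sum is then Gamma(N m1/n) - Gamma(N m2/n) by gamma
# additivity.  We estimate d_K(X_n, X) by a two-sample Kolmogorov statistic;
# all parameters and sample sizes are our own illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
m1, lam1, m2, lam2 = 2.0, 3.0, 1.5, 2.0
size = 100_000

def sample_bgd(shape_pos, shape_neg, size):
    """BGD(shape_pos, lam1, shape_neg, lam2) as a difference of independent gammas."""
    return rng.gamma(shape_pos, 1.0 / lam1, size) - rng.gamma(shape_neg, 1.0 / lam2, size)

X = sample_bgd(m1, m2, size)                 # target TSD (here a bilateral gamma)

for n in (1, 4, 16, 64):
    N = rng.poisson(n, size)                 # number of compound Poisson jumps
    Xn = sample_bgd(N * m1 / n, N * m2 / n, size)   # shape 0 (N = 0) yields 0
    print(n, ks_2samp(Xn, X).statistic)
```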
Our next result yields an error bound for the closeness of tempered stable distributions to α-stable distributions.
Theorem 4.3.
Let $\alpha \in (0,1)$. Let $X\sim $TSD $({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ and ${X_{\alpha }}\sim S({m_{1}},{m_{2}},\alpha )$. Then
(56)
\[ {d_{K}}({X_{\alpha }},X)\le {C_{1}}{\lambda _{1}^{\alpha +\frac{1}{2}}}+{C_{2}}{\lambda _{2}^{\alpha +\frac{1}{2}}},\]
where ${C_{1}},{C_{2}}\gt 0$ are independent of ${\lambda _{1}}$ and ${\lambda _{2}}$.
Proof.
For $h\in {\mathcal{H}_{K}}$, from (43), we get
(57)
\[\begin{aligned}{}\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]& =\mathbb{E}[\mathcal{A}f({X_{\alpha }})]=\mathbb{E}\left[\mathcal{A}f({X_{\alpha }})-{\mathcal{A}_{\alpha }}f({X_{\alpha }})\right],\end{aligned}\]
since
\[ \mathbb{E}[{\mathcal{A}_{\alpha }}f({X_{\alpha }})]=\mathbb{E}\left[-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
where ${\nu _{\alpha }}$ is the Lévy measure given in (4) (see (36)).
Then, from (57), we have
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[\bigg(-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{ts}}(du)\bigg)\\ {} & \hspace{1em}-\bigg(-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\bigg)\bigg]\bigg|\\ {} & =\bigg|\mathbb{E}\bigg[{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{ts}}(du)-{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\bigg]\bigg|\\ {} & =\bigg|\mathbb{E}\bigg[\bigg({m_{1}}{\int _{0}^{\infty }}uf({X_{\alpha }}+u)\frac{{e^{-{\lambda _{1}}u}}}{{u^{1+\alpha }}}du\\ {} & \hspace{1em}+{m_{2}}{\int _{-\infty }^{0}}uf({X_{\alpha }}+u)\frac{{e^{-{\lambda _{2}}|u|}}}{|u{|^{1+\alpha }}}du\bigg)\\ {} & \hspace{1em}-\bigg({m_{1}}{\int _{0}^{\infty }}uf({X_{\alpha }}+u)\frac{du}{{u^{1+\alpha }}}\\ {} & \hspace{1em}+{m_{2}}{\int _{-\infty }^{0}}uf({X_{\alpha }}+u)\frac{du}{|u{|^{1+\alpha }}}\bigg)\bigg]\bigg|\end{aligned}\]
So, we write
(58)
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[{m_{1}}{\int _{0}^{\infty }}\frac{({e^{-{\lambda _{1}}u}}-1)}{{u^{1+\alpha }}}uf({X_{\alpha }}+u)du\\ {} & \hspace{1em}-{m_{2}}{\int _{0}^{\infty }}\frac{({e^{-{\lambda _{2}}u}}-1)}{{u^{1+\alpha }}}uf({X_{\alpha }}-u)du\bigg]\bigg|.\end{aligned}\]
Now applying the triangle and Cauchy–Schwarz inequalities in (58), we get
(59)
\[\begin{aligned}{}{d_{K}}({X_{\alpha }},X)& \le {m_{1}}{\bigg\{{\int _{0}^{\infty }}{\bigg(\frac{({e^{-{\lambda _{1}}u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\bigg\}^{\frac{1}{2}}}\mathbb{E}{\bigg({\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg)^{\frac{1}{2}}}\\ {} & \hspace{1em}+{m_{2}}{\bigg\{{\int _{0}^{\infty }}{\bigg(\frac{({e^{-{\lambda _{2}}u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\bigg\}^{\frac{1}{2}}}\mathbb{E}{\bigg({\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg)^{\frac{1}{2}}}\\ {} & ={\lambda _{1}^{\alpha +\frac{1}{2}}}{m_{1}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg]^{\frac{1}{2}}}\\ {} & \hspace{1em}+{\lambda _{2}^{\alpha +\frac{1}{2}}}{m_{2}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg]^{\frac{1}{2}}},\end{aligned}\]
where $M(\alpha )={\textstyle\int _{0}^{\infty }}{\bigg(\frac{({e^{-u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\lt \infty $ (see [15, p.169]). Also, $\mathbb{E}{[{\textstyle\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du]^{\frac{1}{2}}}$ and $\mathbb{E}{[{\textstyle\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du]^{\frac{1}{2}}}$ are finite, since $f\in \mathcal{S}(\mathbb{R})$. Now setting
\[\begin{aligned}{}{C_{1}}& ={m_{1}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg]^{\frac{1}{2}}}\lt \infty ,\hspace{1em}\text{and}\\ {} {C_{2}}& ={m_{2}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg]^{\frac{1}{2}}}\lt \infty ,\end{aligned}\]
in (59), we get
\[ {d_{K}}({X_{\alpha }},X)\le {C_{1}}{\lambda _{1}^{\alpha +\frac{1}{2}}}+{C_{2}}{\lambda _{2}^{\alpha +\frac{1}{2}}},\]
where ${C_{1}},{C_{2}}\gt 0$ are independent of ${\lambda _{1}}$ and ${\lambda _{2}}$. This proves the result.  □
Next, we state a result that gives the limiting distribution of a sequence of tempered stable random variables.
Lemma 4.4.
([24, Proposition 3.1]) Let ${m_{1}},{m_{2}},{m_{i,n}},{\lambda _{i,n}}\in (0,\infty )$ and ${\alpha _{1}},{\alpha _{2}},{\alpha _{i,n}}\in [0,1)$, for $i=1,2$. Also, let ${X_{n}}\sim \textit{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$ and $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. If $({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})\to ({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}}$, ${\alpha _{2}},{\lambda _{2}})\hspace{2.5pt}\textit{as}\hspace{2.5pt}n\to \infty $, then ${X_{n}}\stackrel{L}{\to }X$.
The following theorem gives the error in the closeness of ${X_{n}}$ to X.
Theorem 4.5.
Let ${X_{n}}$ and X be defined as in Lemma 4.4. Then
(60)
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},X)& \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-{C_{2}}(X)\right|\\ {} & \hspace{1em}+\frac{1}{6}{C_{2}}(X)\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{|{C_{3}}(X)|}{{C_{2}}(X)}\right|,\end{aligned}\]
where ${C_{j}}(X)$, $j=1,2,3$, denotes the j-th cumulant of X and ${d_{{\mathcal{H}_{3}}}}$ is defined in (14).
Proof.
Let $h\in {\mathcal{H}_{3}}$ and f be the solution to the Stein equation (42). Then
(61)
\[\begin{aligned}{}\mathbb{E}[h({X_{n}})]-\mathbb{E}[h(X)]& =\mathbb{E}[\mathcal{A}f({X_{n}})]\\ {} & =\mathbb{E}\bigg[-{X_{n}}f({X_{n}})+{\int _{\mathbb{R}}}uf({X_{n}}+u){\nu _{ts}}(du)\bigg]\\ {} & =\mathbb{E}\bigg[\bigg(-{X_{n}}+{C_{1}}(X)\bigg)f({X_{n}})+{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\bigg],\end{aligned}\]
where the last equality follows by (41), and Y has the density given in (39).
Since ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$, by Proposition 3.1, we have
(62)
\[ \mathbb{E}\bigg[-{X_{n}}f({X_{n}})+{\int _{\mathbb{R}}}uf({X_{n}}+u){\nu _{ts}^{n}}(du)\bigg]=0,\]
where ${\nu _{ts}^{n}}$ is the Lévy measure given by
\[ {\nu _{ts}^{n}}(du)=\left(\frac{{m_{1,n}}}{{u^{1+{\alpha _{1,n}}}}}{e^{-{\lambda _{1,n}}u}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2,n}}}{|u{|^{1+{\alpha _{2,n}}}}}{e^{-{\lambda _{2,n}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du.\]
Also, by Lemma 3.5, the identity in (62) can be seen as
(63)
\[ \mathbb{E}\bigg[\bigg(-{X_{n}}+{C_{1}}({X_{n}})\bigg)f({X_{n}})+{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})\bigg]=0,\]
where ${Y_{n}}$ has the density
(64)
\[ {f_{n}}(u)=\frac{[{\textstyle\textstyle\int _{u}^{\infty }}y{\nu _{ts}^{n}}(dy)]{\textbf{I}_{(0,\infty )}}(u)-[{\textstyle\textstyle\int _{-\infty }^{u}}y{\nu _{ts}^{n}}(dy)]{\textbf{I}_{(-\infty ,0)}}(u)}{{C_{2}}({X_{n}})},\hspace{1em}u\in \mathbb{R}.\]
Using (63) in (61), we get
(65)
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{n}})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[\bigg((-{X_{n}}+{C_{1}}(X))f({X_{n}})+{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\bigg)\\ {} & \hspace{1em}-\bigg((-{X_{n}}+{C_{1}}({X_{n}}))f({X_{n}})+{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})\bigg)\bigg]\bigg|\\ {} & \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|\| f\| \\ {} & \hspace{1em}+\mathbb{E}\left|{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})-{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\right|\\ {} & \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|\| f\| \\ {} & \hspace{1em}+\mathbb{E}\bigg|({C_{2}}({X_{n}})-{C_{2}}(X)){f^{\prime }}({X_{n}}+{Y_{n}})\bigg|\\ {} & \hspace{1em}+{C_{2}}(X)\mathbb{E}\bigg|{f^{\prime }}({X_{n}}+{Y_{n}})-{f^{\prime }}({X_{n}}+Y)\bigg|\\ {} & \le \| {h^{(1)}}\| |{C_{1}}({X_{n}})-{C_{1}}(X)|+\frac{\| {h^{(2)}}\| }{2}|{C_{2}}({X_{n}})-{C_{2}}(X)|\\ {} & \hspace{1em}+{C_{2}}(X)\frac{\| {h^{(3)}}\| }{3}\bigg|\mathbb{E}|{Y_{n}}|-\mathbb{E}|Y|\bigg|,\end{aligned}\]
where the last inequality follows by applying the estimates given in Lemma 3.10. From (64) and (39), it can be verified that (see (40))
(66)
\[ \mathbb{E}|{Y_{n}}|=\frac{|{C_{3}}({X_{n}})|}{2{C_{2}}({X_{n}})}\hspace{1em}\text{and}\hspace{1em}\mathbb{E}|Y|=\frac{|{C_{3}}(X)|}{2{C_{2}}(X)}.\]
Using (66) in (65), we get the desired result.  □
Remark 4.6.
(i) Note that if $({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})\to ({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}}$, ${\alpha _{2}},{\lambda _{2}})\hspace{2.5pt}\textit{as}\hspace{2.5pt}n\to \infty $, then ${C_{j}}({X_{n}})\to {C_{j}}(X)$, $j=1,2,3$, and ${d_{{\mathcal{H}_{3}}}}({X_{n}},X)=0$, as expected.
(ii) Note also that if ${m_{1,n}}={m_{2,n}}$, ${\alpha _{1,n}}={\alpha _{2,n}}$, ${\lambda _{1,n}}={\lambda _{2,n}}$, ${m_{1}}={m_{2}}$, ${\alpha _{1}}={\alpha _{2}}$, and ${\lambda _{1}}={\lambda _{2}}$, then ${C_{j}}({X_{n}})={C_{j}}(X)=0$, $j=1,3$. Under these conditions, from (60), we get
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},X)& \le \frac{1}{2}|{C_{2}}({X_{n}})-{C_{2}}(X)|\\ {} & =\bigg|\Gamma (2-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{2-{\alpha _{1,n}}}}}-\Gamma (2-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{2-{\alpha _{1}}}}}\bigg|.\end{aligned}\]
If in addition ${C_{2}}({X_{n}})\to {C_{2}}(X)$, then ${X_{n}}\stackrel{L}{\to }X$, as $n\to \infty $.
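The right-hand side of (60) is explicit in the parameters through (9)–(11); the following sketch evaluates it along a parameter sequence of our own choosing that converges to the target parameters.

```python
# Minimal sketch: evaluate the right-hand side of the bound (60) via the
# cumulant formulas (9)-(11) for a parameter sequence converging to the target
# parameters.  The sequence below is our own illustrative choice.
import numpy as np
from scipy.special import gamma as G

def cumulants(m1, a1, l1, m2, a2, l2):
    c1 = G(1 - a1) * m1 / l1 ** (1 - a1) - G(1 - a2) * m2 / l2 ** (1 - a2)
    c2 = G(2 - a1) * m1 / l1 ** (2 - a1) + G(2 - a2) * m2 / l2 ** (2 - a2)
    c3 = G(3 - a1) * m1 / l1 ** (3 - a1) - G(3 - a2) * m2 / l2 ** (3 - a2)
    return c1, c2, c3

target = (1.0, 0.4, 2.0, 0.8, 0.3, 1.5)      # (m1, alpha1, lambda1, m2, alpha2, lambda2)
C1, C2, C3 = cumulants(*target)

for n in (1, 10, 100, 1000):
    # parameters of X_n: a perturbation of the target of relative size 1/n
    pn = tuple(p * (1.0 + 1.0 / n) for p in target)
    c1, c2, c3 = cumulants(*pn)
    bound = (abs(c1 - C1) + 0.5 * abs(c2 - C2)
             + (C2 / 6.0) * abs(abs(c3) / c2 - abs(C3) / C2))
    print(n, bound)    # decreases to 0 as the parameters converge
```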
Next, we discuss two examples. Our first example yields the error in approximating a TSD by a normal distribution.
Example 4.7 (Normal approximation to a TSD).
Let ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}}$, ${m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$, ${X_{m}}\sim \text{SVGD}(m,\sqrt{2m}/\lambda )$ and ${X_{\lambda }}\sim \mathcal{N}(0,{\lambda ^{2}})$. Recall from Section 2.1 that $\text{SVGD}(m,\sqrt{2m}/\lambda )\stackrel{d}{=}\text{TSD}(m,0,\sqrt{2m}/\lambda ,m,0,\sqrt{2m}/\lambda )$. Then, the cf of $\text{SVGD}(m,\sqrt{2m}/\lambda )$ is
(67)
\[\begin{aligned}{}{\phi _{sv}}(z)& ={\bigg(1+\frac{{z^{2}}{\lambda ^{2}}}{2m}\bigg)^{-m}}\end{aligned}\]
(68)
\[\begin{aligned}{}& =\exp \bigg({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{sv}}(du)\bigg),\hspace{1em}z\in \mathbb{R},\end{aligned}\]
where the Lévy measure ${\nu _{sv}}$ is
\[ {\nu _{sv}}(du)=\bigg(\frac{m}{u}{e^{-\frac{\sqrt{2m}}{\lambda }u}}{\textbf{I}_{(0,\infty )}}(u)+\frac{m}{|u|}{e^{-\frac{\sqrt{2m}}{\lambda }|u|}}{\textbf{I}_{(-\infty ,0)}}(u)\bigg)du.\]
Note from (67) that
\[ \underset{m\to \infty }{\lim }{\phi _{sv}}(z)={e^{-\frac{{\lambda ^{2}}{z^{2}}}{2}}}.\]
That is, ${X_{m}}\stackrel{L}{\to }{X_{\lambda }}\sim \mathcal{N}(0,{\lambda ^{2}})$, as $m\to \infty $. Also, it follows from [36, Theorem 7.12] that, if ${X_{m}}\stackrel{L}{\to }{X_{\lambda }}$, as $m\to \infty $, then
(69)
\[ {d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{\lambda }})=\underset{m\to \infty }{\lim }{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{m}}).\]
Applying Theorem 4.5 to $X={X_{m}}$, and taking the limit as $m\to \infty $, we get from (69)
(70)
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{\lambda }})& \le \underset{m\to \infty }{\lim }\bigg(\left|{C_{1}}({X_{n}})-{C_{1}}({X_{m}})\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-{C_{2}}({X_{m}})\right|\\ {} & \hspace{1em}+\frac{1}{6}{C_{2}}({X_{m}})\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{|{C_{3}}({X_{m}})|}{{C_{2}}({X_{m}})}\right|\bigg)\\ {} & =|{C_{1}}({X_{n}})|+\frac{1}{2}|{C_{2}}({X_{n}})-{\lambda ^{2}}|+\frac{1}{6}{\lambda ^{2}}\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})},\end{aligned}\]
which gives the error in the closeness between ${X_{n}}$ and ${X_{\lambda }}$. Note that
\[\begin{aligned}{}{C_{1}}({X_{n}})& =\mathbb{E}[{X_{n}}]=\Gamma (1-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{1-{\alpha _{1,n}}}}}-\Gamma (1-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{1-{\alpha _{2,n}}}}},\\ {} {C_{2}}({X_{n}})& =\mathit{Var}({X_{n}})=\Gamma (2-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{2-{\alpha _{1,n}}}}}+\Gamma (2-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{2-{\alpha _{2,n}}}}},\hspace{1em}\text{and}\\ {} {C_{3}}({X_{n}})& =\Gamma (3-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{3-{\alpha _{1,n}}}}}-\Gamma (3-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{3-{\alpha _{2,n}}}}}.\end{aligned}\]
When ${C_{j}}({X_{n}})\to 0$, for $j=1,3$ and ${C_{2}}({X_{n}})\to {\lambda ^{2}}$, from (70), we have ${X_{n}}\stackrel{L}{\to }{X_{\lambda }}$, as $n\to \infty $.
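As a small numerical illustration of the limit used above, the cf (67) converges pointwise to the normal cf ${e^{-{\lambda ^{2}}{z^{2}}/2}}$ as $m\to \infty $; the values below are illustrative.

```python
# Minimal sketch: pointwise convergence of the SVGD(m, sqrt(2m)/lambda) cf (67)
# to the N(0, lambda^2) cf as m grows; lambda, z and the values of m are
# illustrative choices.
import numpy as np

lam, z = 1.3, 2.0
normal_cf = np.exp(-lam ** 2 * z ** 2 / 2.0)
for m in (1, 10, 100, 1000, 10_000):
    svgd_cf = (1.0 + z ** 2 * lam ** 2 / (2.0 * m)) ** (-m)
    print(m, svgd_cf, abs(svgd_cf - normal_cf))
```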
Example 4.8 (Variance-gamma approximation to a TSD).
Let ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$ and ${X_{v}}\sim $VGD$(m,{\lambda _{1}},{\lambda _{2}})$. Then
\[\begin{aligned}{}{C_{1}}({X_{v}})& =m\bigg(\frac{1}{{\lambda _{1}}}-\frac{1}{{\lambda _{2}}}\bigg),\hspace{1em}{C_{2}}({X_{v}})=m\bigg(\frac{1}{{\lambda _{1}^{2}}}+\frac{1}{{\lambda _{2}^{2}}}\bigg),\hspace{1em}\text{and}\\ {} {C_{3}}({X_{v}})& =2m\bigg(\frac{1}{{\lambda _{1}^{3}}}-\frac{1}{{\lambda _{2}^{3}}}\bigg).\end{aligned}\]
Now applying Theorem 4.5 to $X={X_{v}}$, we get
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{v}})& \le \left|{C_{1}}({X_{n}})-\frac{m({\lambda _{2}}-{\lambda _{1}})}{{\lambda _{1}}{\lambda _{2}}}\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-\frac{m({\lambda _{1}^{2}}+{\lambda _{2}^{2}})}{{\lambda _{1}^{2}}{\lambda _{2}^{2}}}\right|\\ {} & \hspace{1em}+\frac{1}{6}m\frac{{\lambda _{1}^{2}}+{\lambda _{2}^{2}}}{{\lambda _{1}^{2}}{\lambda _{2}^{2}}}\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{2|{\lambda _{2}^{3}}-{\lambda _{1}^{3}}|}{{\lambda _{1}}{\lambda _{2}}({\lambda _{1}^{2}}+{\lambda _{2}^{2}})}\right|,\end{aligned}\]
which gives the error in the closeness between ${X_{n}}$ and ${X_{v}}$. When ${C_{j}}({X_{n}})\to {C_{j}}({X_{v}})$, for $j=1,2,3$, we have ${X_{n}}\stackrel{L}{\to }{X_{v}}$, as $n\to \infty $.

Acknowledgement

The authors are thankful to the reviewers for several helpful comments and suggestions, which greatly improved the presentation of the article.

References

[1] 
Arras, B., Houdré, C.: On Stein’s Method for Infinitely Divisible Laws with Finite First Moment. Springer (2019) MR3931309
[2] 
Barbour, A.D.: Stein’s method for diffusion approximations. Probab. Theory Relat. Fields 84(3), 297–322 (1990) MR1035659. https://doi.org/10.1007/BF01197887
[3] 
Blumenthal, R.M., Getoor, R.K.: Sample functions of stochastic processes with stationary independent increments. J. Math. Mech., 493–516 (1961) MR0123362
[4] 
Boyarchenko, S.I., Levendorskiǐ, S.Z.: Option pricing for truncated Lévy processes. Int. J. Theor. Appl. Finance 3(3), 549–552 (2000) MR3951952. https://doi.org/10.1142/S0219024919500110
[5] 
Carr, P., Geman, H., Madan, D.B., Yor, M.: The fine structure of asset returns: An empirical investigation. J. Bus. 75(2), 305–332 (2002)
[6] 
Čekanavičius, V.: Approximation Methods in Probability Theory. Springer (2016) MR3467748. https://doi.org/10.1007/978-3-319-34072-2
[7] 
Čekanavičius, V., Vellaisamy, P.: Approximating by convolution of the normal and compound Poisson laws via Stein’s method. Lith. Math. J. 58, 127–140 (2018) MR3814710. https://doi.org/10.1007/s10986-018-9392-5
[8] 
Chen, P., Xu, L.: Approximation to stable law by the Lindeberg principle. J. Math. Anal. Appl. 480(2), 123338 (2019) MR4000076. https://doi.org/10.1016/j.jmaa.2019.07.028
[9] 
Chen, P., Nourdin, I., Xu, L., Yang, X., Zhang, R.: Non-integrable stable approximation by Stein’s method. J. Theor. Probab., 1–50 (2022) MR4414414. https://doi.org/10.1007/s10959-021-01094-5
[10] 
Döbler, C., Gaunt, R.E., Vollmer, S.J.: An iterative technique for bounding derivatives of solutions of Stein equations (2017) MR3724564. https://doi.org/10.1214/17-EJP118
[11] 
Gaunt, R.E.: On Stein’s method for products of normal random variables and zero bias couplings. Bernoulli 23(4B), 3311–3345 (2017) MR3654808. https://doi.org/10.3150/16-BEJ848
[12] 
Gaunt, R.E.: Variance-gamma approximation via Stein’s method. Electron. J. Probab. 19(38), 1–33 (2014) MR3194737. https://doi.org/10.1214/EJP.v19-3020
[13] 
Gaunt, R.E.: Wasserstein and Kolmogorov error bounds for variance-gamma approximation via Stein’s method I. J. Theor. Probab. 33(1), 465–505 (2020) MR4064309. https://doi.org/10.1007/s10959-018-0867-4
[14] 
Gaunt, R.E., Li, S.: Bounding Kolmogorov distances through Wasserstein and related integral probability metrics. J. Math. Anal. Appl. 522(1), 126985 (2023) MR4533915. https://doi.org/10.1016/j.jmaa.2022.126985
[15] 
Gnedenko, B.V., Kolmogorov, A.N.: Limit Distributions for Sums of Independent Random Variables vol. 2420. Addison-Wesley (1968) MR0233400
[16] 
Goldstein, L., Reinert, G.: Stein’s method and the zero bias transformation with application to simple random sampling. Ann. Appl. Probab. 7(4), 935–952 (1997) MR1484792. https://doi.org/10.1214/aoap/1043862419
[17] 
Houdré, C., Pérez-Abreu, V., Surgailis, D.: Interpolation, correlation identities, and inequalities for infinitely divisible variables. J. Fourier Anal. Appl. 4, 651–668 (1998) MR1665993. https://doi.org/10.1007/BF02479672
[18] 
Jin, X., Li, X., Lu, J.: A kernel bound for non-symmetric stable distribution and its applications. J. Math. Anal. Appl. 488(2), 124063 (2020) MR4081550. https://doi.org/10.1016/j.jmaa.2020.124063
[19] 
Ken-Iti, S.: Lévy Processes and Infinitely Divisible Distributions vol. 68. Cambridge University Press (1999)
[20] 
Kesavan, S.: Topics in functional analysis and applications. Wiley Eastern Ltd. (1989) MR0990018
[21] 
Kim, Y.S., Rachev, S.T., Bianchi, M.L., Fabozzi, F.J.: Financial market models with Lévy processes and time-varying volatility. J. Bank. Finance 32(7), 1363–1378 (2008)
[22] 
Koponen, I.: Analytic approach to the problem of convergence of truncated Lévy flights towards the Gaussian stochastic process. Phys. Rev. E 52(1), 1197 (1995)
[23] 
Küchler, U., Tappe, S.: Bilateral gamma distributions and processes in financial mathematics. Stoch. Process. Appl. 118(2), 261–283 (2008). doi:https://doi.org/10.1016/j.spa.2007.04.006. MR2376902
[24] 
Küchler, U., Tappe, S.: Tempered stable distributions and processes. Stoch. Process. Appl. 123(12), 4256–4293 (2013). doi:https://doi.org/10.1016/j.spa.2013.06.012. MR3096354
[25] 
Kumar, A., Vellaisamy, P., Viens, F.: Poisson approximation to the convolution of PSDs. Probab. Math. Stat. 42, 63–80 (2022) MR4490669. https://doi.org/10.37190/0208-4147.00056
[26] 
Ley, C., Reinert, G., Swan, Y.: Stein’s method for comparison of univariate distributions. Probab. Surv. 14, 1–52 (2017) MR3595350. https://doi.org/10.1214/16-PS278
[27] 
Nourdin, I., Peccati, G.: Normal Approximations with Malliavin Calculus: from Stein’s Method to Universality vol. 192. Cambridge University Press (2012) MR2962301. https://doi.org/10.1017/CBO9781139084659
[28] 
Rachev, S.T., Kim, Y.S., Bianchi, M.L., Fabozzi, F.J.: Financial Models with Lévy Processes and Volatility Clustering. John Wiley & Sons (2011)
[29] 
Rosiński, J.: Tempering stable processes. Stoch. Process. Appl. 117(6), 677–707 (2007) MR2327834. https://doi.org/10.1016/j.spa.2006.10.003
[30] 
Upadhye, N.S., Barman, K.: A unified approach to Stein’s method for stable distributions. Probab. Surv. 19, 533–589 (2022) MR4507474. https://doi.org/10.1214/20-ps354
[31] 
Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Probability Theory, vol. 6, pp. 583–603 (1972). University of California Press MR0402873
[32] 
Stein, E.M., Shakarchi, R.: Fourier Analysis: an Introduction vol. 1. Princeton University Press (2011) MR1970295
[33] 
Sztonyk, P.: Estimates of tempered stable densities. J. Theor. Probab. 23(1), 127–147 (2010) MR2591907. https://doi.org/10.1007/s10959-009-0208-8
[34] 
Upadhye, N.S., Čekanavičius, V., Vellaisamy, P.: On Stein operators for discrete approximations. Bernoulli 23(4A), 2828–2859 (2017). doi:https://doi.org/10.3150/16-BEJ829. MR3648047
[35] 
Vellaisamy, P., Čekanavičius, V.: Infinitely divisible approximations for sums of m-dependent random variables. J. Theor. Probab. 31, 2432–2445 (2018) MR3866620. https://doi.org/10.1007/s10959-017-0774-0
[36] 
Villani, C.: Topics in Optimal Transportation vol. 58. American Mathematical Soc. (2021) MR1964483. https://doi.org/10.1090/gsm/058
[37] 
Wang, Y., Yuan, C.: Convergence of the Euler–Maruyama method for stochastic differential equations with respect to semimartingales. Appl. Math. Sci. (Ruse) 1(41-44), 2063–2077 (2007) MR2369925
[38] 
Xu, L.: Approximation of stable law in Wasserstein-1 distance by Stein’s method. Ann. Appl. Probab. 29(1), 458–504 (2019) MR3910009. https://doi.org/10.1214/18-AAP1424