1 Introduction
Probability approximation is one of the fundamental topics in probability theory, owing to its wide range of applications in limit theorems [6, 30, 35], runs [34], stochastic algorithms [37], and various other fields. Approximation results mainly provide estimates of the distance between the distributions of two random variables (rvs), which measures the closeness of the approximation. Hence, estimating the accuracy of an approximation is a crucial task. Recently, Chen et al. [8, 9], Jin et al. [18], Upadhye and Barman [30], and Xu [38] have studied stable approximations via Stein’s method. Distributional approximation for the family of stable distributions is not straightforward, due to the lack of symmetry and the heavy-tailed behavior of stable distributions. One of the major obstacles is that the moments of a stable distribution do not exist whenever the stability parameter $\alpha \in (0,1]$. To overcome these issues, different approaches and various assumptions are used.
Koponen [22] first introduced tempered stable distributions (TSDs) by tempering the tails of stable (also called α-stable) distributions, thereby making the tails lighter. The tails of TSDs are heavier than those of the normal distribution and thinner than those of α-stable distributions, see [21]. Therefore, quantifying the error in approximating a TSD by α-stable and normal distributions is of interest. A TSD has a mean, a variance, and exponential moments. Moreover, the class of TSDs includes many well-known subfamilies of probability distributions, such as the CGMY, KoBol, bilateral-gamma, and variance-gamma distributions, which have applications in several disciplines, including financial mathematics, see [4, 5, 29, 33].
In this article, we first obtain, for the Kolmogorov distance, an error bound between tempered stable and compound Poisson distributions (CPDs). This provides a convergence rate for the tempered stable approximation of a CPD. Next, we obtain error bounds for the closeness between tempered stable and α-stable distributions via Stein’s method. We also obtain error bounds, in the smooth Wasserstein distance, for the closeness between two TSDs. As a consequence, we discuss the normal and variance-gamma approximations of a TSD and the corresponding limit theorems.
The organization of this article is as follows. In Section 2, we discuss some notations and preliminaries that will be useful later. First, we discuss some important properties of TSDs and some special and limiting distributions from the TSD family. A brief discussion of Stein’s method is also presented. In Section 3, we establish a Stein identity and a Stein equation for TSD and solve it via the semigroup approach. The properties of the solution to the Stein equation are discussed. In Section 4, we discuss bounds for tempered stable approximations of various probability distributions.
2 The preliminary results
2.1 Properties of tempered stable distributions
We first define the TSD and discuss some of its properties. Let ${\textbf{I}_{B}}(\cdot )$ denote the indicator function of the set B. A rv X is said to have a TSD (see [24, p. 2]) if its cf is given by
where the Lévy measure ${\nu _{ts}}$ is
with parameters ${m_{i}},{\lambda _{i}}\in (0,\infty )$ and ${\alpha _{i}}\in [0,1)$, for $i=1,2$, and we denote it by $\text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. Note that TSDs are infinitely divisible and self-decomposable, see [24]. Also, note that if ${\alpha _{1}}={\alpha _{2}}=\alpha \in (0,1)$, then the Lévy measure in (2) can be seen as
where
is the Lévy measure of an α-stable distribution (see [19]) and $q:\mathbb{R}\to {\mathbb{R}_{+}}$ is a tempering function (see [24]), given by
The following special and limiting cases of TSDs are well known, see [24]. Let $\stackrel{L}{\to }$ denote the convergence in distribution. Also let ${m_{i}},{\lambda _{i}}\in (0,\infty )$ and ${\alpha _{i}}\in [0,1)$, for $i=1,2$.
Let $X\sim \text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. Then from (1), for $z\in \mathbb{R}$, the cumulant generating function is given by
(7)
\[ \Psi (z):=\log {\phi _{ts}}(z)={\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du),\]
where ${\phi _{ts}}(z)$ is given in (1). Then the n-th cumulant of X is
In particular (see [24]),
(1)
\[ {\phi _{ts}}(z)=\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right),\hspace{1em}z\in \mathbb{R},\](2)
\[ {\nu _{ts}}(du)=\left(\frac{{m_{1}}}{{u^{1+{\alpha _{1}}}}}{e^{-{\lambda _{1}}u}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2}}}{|u{|^{1+{\alpha _{2}}}}}{e^{-{\lambda _{2}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du,\](3)
\[ {\nu _{ts}}(du)=q(u){\nu _{\alpha }}(du),\](4)
\[ {\nu _{\alpha }}(du)=\left(\frac{{m_{1}}}{{u^{1+\alpha }}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2}}}{|u{|^{1+\alpha }}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du\](5)
\[ q(u)={e^{-{\lambda _{1}}u}}{\mathbf{I}_{(0,\infty )}}(u)+{e^{-{\lambda _{2}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u).\]
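To make the tempering representation concrete, the following Python sketch (with illustrative, hypothetical parameter values) checks numerically that, for ${\alpha _{1}}={\alpha _{2}}=\alpha $, the TSD Lévy density in (2) equals the α-stable Lévy density in (4) damped by the tempering function in (5), and that ${\textstyle\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)$ matches the closed form $\Gamma (2-{\alpha _{1}}){m_{1}}/{\lambda _{1}^{2-{\alpha _{1}}}}+\Gamma (2-{\alpha _{2}}){m_{2}}/{\lambda _{2}^{2-{\alpha _{2}}}}$, obtained by evaluating the gamma integrals directly.

```python
import math

# Illustrative (hypothetical) parameters of TSD(m1, a1, l1, m2, a2, l2).
m1, a1, l1 = 1.2, 0.5, 2.0
m2, a2, l2 = 0.8, 0.5, 3.0

def nu_ts(u):
    """Lévy density of the TSD, as in (2)."""
    if u > 0:
        return m1 * math.exp(-l1 * u) / u ** (1 + a1)
    if u < 0:
        return m2 * math.exp(-l2 * abs(u)) / abs(u) ** (1 + a2)
    return 0.0

def nu_alpha(u, alpha):
    """alpha-stable Lévy density, as in (4)."""
    if u == 0:
        return 0.0
    return (m1 if u > 0 else m2) / abs(u) ** (1 + alpha)

def q(u):
    """Tempering function, as in (5)."""
    return math.exp(-(l1 * u if u > 0 else l2 * abs(u)))

# For a1 = a2 = alpha, nu_ts is the tempered version of nu_alpha.
for u in (-2.0, -0.3, 0.1, 1.5):
    assert abs(nu_ts(u) - q(u) * nu_alpha(u, 0.5)) < 1e-12

# Second cumulant: int u^2 nu_ts(du), Riemann sum vs closed form.
closed = (math.gamma(2 - a1) * m1 / l1 ** (2 - a1)
          + math.gamma(2 - a2) * m2 / l2 ** (2 - a2))
h = 1e-4
numeric = sum(u * u * (nu_ts(u) + nu_ts(-u)) * h
              for u in (k * h for k in range(1, 600001)))
assert abs(numeric - closed) / closed < 1e-3
```

The quadrature truncates at $u=60$, where the exponential tempering has made the tail negligible; without tempering ($\lambda _{i}=0$, $\alpha \ge 1$ excluded here), this integral would diverge.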
(i) When ${\alpha _{1}}={\alpha _{2}}=\alpha $, then $\text{TSD}({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ is the KoBol distribution, see [3].
(ii) When ${m_{1}}={m_{2}}=m$ and ${\alpha _{1}}={\alpha _{2}}=\alpha $, then $\text{TSD}(m,\alpha ,{\lambda _{1}},m,\alpha ,{\lambda _{2}})$ is the CGMY distribution, see [5].
(iii) When ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}({m_{1}},0,{\lambda _{1}},{m_{2}},0,{\lambda _{2}})$ is the bilateral-gamma distribution (BGD), denoted by BGD$({m_{1}},{\lambda _{1}},{m_{2}},{\lambda _{2}})$, see [23].
(iv) When ${m_{1}}={m_{2}}=m$ and ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}(m,0,{\lambda _{1}},m,0,{\lambda _{2}})$ is the variance-gamma distribution (VGD), denoted by VGD$(m,{\lambda _{1}},{\lambda _{2}})$, see [23].
(v) When ${m_{1}}={m_{2}}=m$, ${\lambda _{1}}={\lambda _{2}}=\lambda $ and ${\alpha _{1}}={\alpha _{2}}=0$, then $\text{TSD}(m,0,\lambda ,m,0,\lambda )$ is the symmetric VGD, denoted by SVGD$(m,\lambda )$, see [23].
(vi) When ${\lambda _{1}},{\lambda _{2}}\downarrow 0$, then $\text{TSD}({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ converges to an α-stable distribution, denoted by $S({m_{1}},{m_{2}},\alpha )$, with cf
(6)
\[ {\phi _{\alpha }}(z)=\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{\alpha }}(du)\right),\hspace{1em}z\in \mathbb{R},\]
(vii) The limiting case as $m\to \infty $ of the SVGD$(m,\sqrt{2m}/\lambda )$ is the normal distribution $\mathcal{N}(0,{\lambda ^{2}})$, see [12].
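Case (vii) is easy to check on the level of cfs. For $\alpha =0$, the cf (1) of the SVGD$(m,\sqrt{2m}/\lambda )$ evaluates, via a Frullani integral, to the closed form ${(2m/(2m+{\lambda ^{2}}{z^{2}}))^{m}}$, which converges pointwise to ${e^{-{\lambda ^{2}}{z^{2}}/2}}$ as $m\to \infty $. This closed form is a standard fact about the symmetric variance-gamma cf, not derived above; the Python sketch below (illustrative value of λ) only illustrates the convergence numerically.

```python
import math

lam = 1.5   # target N(0, lam^2); illustrative value

def cf_svgd(z, m):
    """cf of SVGD(m, sqrt(2m)/lam): for alpha = 0 the cf (1) evaluates to
    (tl2 / (tl2 + z^2))^m with tl2 = 2*m/lam^2 (Frullani integral)."""
    tl2 = 2 * m / lam ** 2
    return (tl2 / (tl2 + z * z)) ** m

def cf_normal(z):
    """cf of N(0, lam^2)."""
    return math.exp(-0.5 * (lam * z) ** 2)

# Pointwise convergence of the cf as m grows, illustrating case (vii).
for z in (0.3, 1.0, 2.0):
    gap_small = abs(cf_svgd(z, 10) - cf_normal(z))
    gap_large = abs(cf_svgd(z, 10_000) - cf_normal(z))
    assert gap_large < gap_small and gap_large < 1e-3
```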
(8)
\[\begin{aligned}{}{C_{n}}(X)& :={(-i)^{n}}{\bigg[\frac{{d^{n}}}{d{z^{n}}}\Psi (z)\bigg]_{z=0}}={\int _{\mathbb{R}}}{u^{n}}{\nu _{ts}}(du)\lt \infty ,\hspace{1em}n\ge 1.\end{aligned}\](9)
\[\begin{aligned}{}{C_{1}}(X)& =\mathbb{E}[X]=\Gamma (1-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{1-{\alpha _{1}}}}}-\Gamma (1-{\alpha _{2}})\frac{{m_{2}}}{{\lambda _{2}^{1-{\alpha _{2}}}}},\end{aligned}\]
2.2 Key steps of Stein’s method
Let ${f^{(n)}}$ henceforth denote the n-th derivative of f with ${f^{(0)}}=f$ and ${f^{\prime }}={f^{(1)}}$. Let $\mathcal{S}(\mathbb{R})$ be the Schwartz space defined by
where ${\mathbb{N}_{0}}=\mathbb{N}\cup \{0\}$ and ${C^{\infty }}(\mathbb{R})$ is the class of infinitely differentiable functions on $\mathbb{R}$. Note that the Fourier transform (FT) on $\mathcal{S}(\mathbb{R})$ is an automorphism. In particular, if $f\in \mathcal{S}(\mathbb{R})$, and $\widehat{f}(u):={\textstyle\int _{\mathbb{R}}}{e^{-iux}}f(x)dx$, $u\in \mathbb{R}$, then $\widehat{f}(u)\in \mathcal{S}(\mathbb{R})$. Similarly, if $\widehat{f}(u)\in \mathcal{S}(\mathbb{R})$, and $f(x):=\frac{1}{2\pi }{\textstyle\int _{\mathbb{R}}}{e^{iux}}\widehat{f}(u)du$, $x\in \mathbb{R}$, then $f(x)\in \mathcal{S}(\mathbb{R})$, see [32].
(12)
\[ \mathcal{S}(\mathbb{R}):=\left\{f\in {C^{\infty }}(\mathbb{R}):\underset{|x|\to \infty }{\lim }|{x^{m}}{f^{(n)}}(x)|=0,\hspace{2.5pt}\text{for all}\hspace{2.5pt}m,n\in {\mathbb{N}_{0}}\right\},\]
Next, let
where $\| h\| ={\sup _{x\in \mathbb{R}}}|h(x)|$. Then, for any two rvs Y and Z, the smooth Wasserstein distance (see [14]) is given by
Also, let
Then, for any two rvs Y and Z, the classical Wasserstein distance (see [14]) is given by
Finally, let
Then, for any two rvs Y and Z, the Kolmogorov distance (see [14]) is given by
Next, we discuss the steps of Stein’s method. Let Z be a random variable (rv) with probability distribution ${F_{Z}}$ (denote this by $Z\sim {F_{Z}}$). First, one identifies a suitable operator $\mathcal{A}$ (called the Stein operator) and a class of suitable functions $\mathcal{F}$ such that $Z\sim {F_{Z}}$, if and only if
(13)
\[ {\mathcal{H}_{r}}=\{h:\mathbb{R}\to \mathbb{R}|h\hspace{2.5pt}\text{is}\hspace{2.5pt}r\hspace{2.5pt}\text{times differentiable and}\hspace{2.5pt}\| {h^{(k)}}\| \le 1,k=0,1,\dots ,r\},\](14)
\[ {d_{{\mathcal{H}_{r}}}}(Y,Z):=\underset{h\in {\mathcal{H}_{r}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|,\hspace{1em}r\ge 1.\](15)
\[ {\mathcal{H}_{W}}=\{h:\mathbb{R}\to \mathbb{R}|h\hspace{2.5pt}\text{is}\hspace{2.5pt}1\text{-Lipschitz and}\hspace{2.5pt}\| h\| \le 1\}.\](16)
\[ {d_{W}}(Y,Z):=\underset{h\in {\mathcal{H}_{W}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|.\](17)
\[\begin{aligned}{}{\mathcal{H}_{K}}& =\left\{h:\mathbb{R}\to \mathbb{R}\hspace{2.5pt}\big|\hspace{2.5pt}h={\mathbf{I}_{(-\infty ,x]}}\hspace{2.5pt}\text{for some}\hspace{2.5pt}x\in \mathbb{R}\right\}.\end{aligned}\](18)
\[ {d_{K}}(Y,Z):=\underset{h\in {\mathcal{H}_{K}}}{\sup }\left|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]\right|.\]
\[ \mathbb{E}\left[\mathcal{A}f(Z)\right]=0,\hspace{1em}\text{for all}\hspace{2.5pt}f\in \mathcal{F}.\]
This equivalence is called the Stein characterization of ${F_{Z}}$. This characterization leads us to the Stein equation
where h is a real-valued test function. Replacing x with a rv Y and taking expectations on both sides of (19) gives
This equality (20) plays a crucial role in Stein’s method. The probability distribution ${F_{Z}}$ is characterized by (19), so the problem of bounding the quantity $|\mathbb{E}[h(Y)]-\mathbb{E}[h(Z)]|$ reduces to studying the smoothness of the solution to (19) and the behavior of Y. For more details on Stein’s method, we refer to [1, 7] and the references therein.
In particular, let Z have the normal distribution $\mathcal{N}(0,{\sigma ^{2}})$. Then the Stein characterization for Z (see [31]) is
where f is any real-valued absolutely continuous function such that $\mathbb{E}|{f^{\prime }}(Z)|\lt \infty $. This characterization leads us to the Stein equation
where h is a real-valued test function. Replacing x with a rv ${Z_{n}}\sim \mathcal{N}(0,{\sigma _{n}^{2}})$ and taking expectations on both sides of (22) gives
Using the smoothness of solution to (22), it can be shown (see [27, Section 3.6]) that
From (24), if ${\sigma _{n}}\to \sigma $, then ${d_{W}}({Z_{n}},Z)\to 0$, as expected, which implies that ${Z_{n}}$ converges in distribution to $\mathcal{N}(0,{\sigma ^{2}})$. We refer to [25] and [26] for a number of bounds similar to (24) for the comparison of univariate probability distributions.
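Since any single $h\in {\mathcal{H}_{W}}$ only probes ${d_{W}}$ from below, the right-hand side of (24) must dominate $|\mathbb{E}[h({Z_{n}})]-\mathbb{E}[h(Z)]|$ for every such h. The following Python sketch checks this for illustrative choices ($\sigma =1$, ${\sigma _{n}}=1.3$, and $h(x)=\tanh (x-1)$, which is 1-Lipschitz with $\| h\| \le 1$), computing the Gaussian expectations by quadrature.

```python
import math

def gauss_expect(h, sigma, lo=-12.0, hi=12.0, n=48000):
    """E[h(sigma * N)], N ~ N(0,1), by the trapezoidal rule on [lo, hi]."""
    step = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * step
        w = 0.5 if k in (0, n) else 1.0
        total += w * h(sigma * x) * math.exp(-0.5 * x * x)
    return total * step / math.sqrt(2 * math.pi)

sigma, sigma_n = 1.0, 1.3                 # illustrative standard deviations
h = lambda x: math.tanh(x - 1.0)          # 1-Lipschitz, |h| <= 1, so h is in H_W

diff = abs(gauss_expect(h, sigma_n) - gauss_expect(h, sigma))
bound = (math.sqrt(2 / math.pi) / max(sigma ** 2, sigma_n ** 2)
         * abs(sigma_n ** 2 - sigma ** 2))
# A single test function probes d_W from below: diff <= d_W <= bound.
assert 0.0 < diff <= bound
```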
(23)
\[ \mathbb{E}\left[{\sigma ^{2}}{f^{\prime }}({Z_{n}})-{Z_{n}}f({Z_{n}})\right]=\mathbb{E}[h({Z_{n}})]-\mathbb{E}[h(Z)].\](24)
\[ {d_{W}}({Z_{n}},Z)\le \frac{\sqrt{2/\pi }}{{\sigma ^{2}}\vee {\sigma _{n}^{2}}}|{\sigma _{n}^{2}}-{\sigma ^{2}}|.\]
3 Stein’s method for tempered stable distributions
3.1 The Stein identity for tempered stable distributions
In this section, we obtain the Stein identity for a TSD. First recall that $\mathcal{S}(\mathbb{R})$ denotes the Schwartz space of functions, defined in (12).
Proposition 3.1.
A rv $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ if and only if
where ${\nu _{ts}}$ is the associated Lévy measure of TSD, defined in (2).
(25)
\[ \mathbb{E}\left[Xf(X)-{\int _{\mathbb{R}}}uf(X+u){\nu _{ts}}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
Proof.
From Equations (2.7) and (2.8) of [24], the integral ${\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)$ is convergent for all $z\in \mathbb{R}$. Also, the cf of $X\sim \text{TSD}$ is given by (see (1))
where $\Psi (z)={\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)$. Since $|(iu){e^{izu}}|\le |u|$ and ${\textstyle\int _{\mathbb{R}}}|u|{\nu _{ts}}(du)\lt \infty $ (see (8)), we have
Using (28) in (27) and rearranging the integrals, we have
Multiplying both sides of (29) by $-i$, we get
Note that ${\phi _{ts}}(z)$ and ${\phi ^{\prime }_{ts}}(z)$ exist and are finite for all $z\in \mathbb{R}$. Hence by Fubini’s theorem, the second integral of (30) can be written as
Substituting (31) in (30), we have
Applying the Fourier transform to (32), multiplying with $f\in \mathcal{S}(\mathbb{R})$, and integrating over $\mathbb{R}$, we get
The second integral of (33) can be seen as
Substituting (34) in (33), we have
which proves (25).
(26)
\[\begin{aligned}{}{\phi _{ts}}(z)& =\exp (\Psi (z)),\hspace{1em}z\in \mathbb{R}\end{aligned}\]
\[ {\Psi ^{\prime }}(z)=\frac{d}{dz}{\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)={\int _{\mathbb{R}}}iu{e^{izu}}{\nu _{ts}}(du).\]
Now, taking logarithms on both sides of (26) and differentiating with respect to z, we have
Let ${F_{X}}$ be the cumulative distribution function (CDF) of X. Then,
(28)
\[ {\phi _{ts}}(z)={\int _{\mathbb{R}}}{e^{izx}}{F_{X}}(dx)\hspace{2.5pt}\hspace{2.5pt}\hspace{0.2778em}\Longrightarrow \hspace{0.2778em}\hspace{2.5pt}\hspace{2.5pt}{\phi ^{\prime }_{ts}}(z)=i{\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx).\](29)
\[ 0=i\left({\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du)\right).\](30)
\[ 0={\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du).\](31)
\[\begin{aligned}{}{\phi _{ts}}(z){\int _{\mathbb{R}}}u{e^{izu}}{\nu _{ts}}(du)& ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izu}}{e^{izx}}{F_{X}}(dx){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{iz(u+x)}}{\nu _{ts}}(du){F_{X}}(dx)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izy}}{\nu _{ts}}(du){F_{X}}(d(y-u))\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}u{e^{izx}}{\nu _{ts}}(du){F_{X}}(d(x-u))\\ {} & ={\int _{\mathbb{R}}}{e^{izx}}{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du).\end{aligned}\](32)
\[\begin{aligned}{}0& ={\int _{\mathbb{R}}}x{e^{izx}}{F_{X}}(dx)-{\int _{\mathbb{R}}}{e^{izx}}{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{e^{izx}}\left(x{F_{X}}(dx)-{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\right).\end{aligned}\](33)
\[ {\int _{\mathbb{R}}}f(x)\left(x{F_{X}}(dx)-{\int _{\mathbb{R}}}u{F_{X}}(d(x-u)){\nu _{ts}}(du)\right)=0.\](34)
\[\begin{aligned}{}{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(x){F_{X}}(d(x-u)){\nu _{ts}}(du)& ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(y+u){F_{X}}(dy){\nu _{ts}}(du)\\ {} & ={\int _{\mathbb{R}}}{\int _{\mathbb{R}}}uf(x+u){F_{X}}(dx){\nu _{ts}}(du)\\ {} & =\mathbb{E}\left[{\int _{\mathbb{R}}}uf(X+u){\nu _{ts}}(du)\right].\end{aligned}\]
Assume, conversely, that (25) holds for ${\nu _{ts}}$ defined in (2). For any $s\in \mathbb{R}$, let $f(x)={e^{isx}}$, $x\in \mathbb{R}$; then (25) becomes
\[\begin{aligned}{}\mathbb{E}\left[X{e^{isX}}\right]& =\mathbb{E}\left[{\int _{\mathbb{R}}}{e^{is(X+u)}}u{\nu _{ts}}(du)\right]\\ {} & =\mathbb{E}\left[{e^{isX}}{\int _{\mathbb{R}}}{e^{isu}}u{\nu _{ts}}(du)\right].\end{aligned}\]
Setting ${\phi _{ts}}(s)=\mathbb{E}[{e^{isX}}]$, we have
(35)
\[ \frac{{\phi ^{\prime }_{ts}}(s)}{{\phi _{ts}}(s)}=i{\int _{\mathbb{R}}}u{e^{isu}}{\nu _{ts}}(du).\]
Integrating the real and imaginary parts of (35) leads, for any $z\ge 0$, to
\[\begin{aligned}{}{\phi _{ts}}(z)& =\exp \left(i{\int _{0}^{z}}{\int _{\mathbb{R}}}{e^{isu}}u{\nu _{ts}}(du)ds\right)\\ {} & =\exp \left(i{\int _{\mathbb{R}}}{\int _{0}^{z}}{e^{isu}}dsu{\nu _{ts}}(du)\right)\\ {} & =\exp \left({\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right).\end{aligned}\]
A similar computation for $z\le 0$ completes the derivation of the cf.  □
We now have the following corollary for α-stable distributions.
Corollary 3.2.
A rv $X\sim S({m_{1}},{m_{2}},\alpha )$ if and only if
where ${\nu _{\alpha }}$ is the associated Lévy measure of an α-stable distribution, given in (4).
(36)
\[ \mathbb{E}\left[Xf(X)-{\int _{\mathbb{R}}}uf(X+u){\nu _{\alpha }}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
Proof.
Let ${\alpha _{1}}={\alpha _{2}}=\alpha $ in Proposition 3.1. Observe now that
\[\begin{aligned}{}& \left|xf(x)-{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)\right|\lt \infty ,\hspace{1em}\text{and}\hspace{2.5pt}\\ {} & \underset{{\lambda _{1}},{\lambda _{2}}\downarrow 0}{\lim }{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)={\int _{\mathbb{R}}}uf(x+u){\nu _{\alpha }}(du),\end{aligned}\]
since $f\in \mathcal{S}(\mathbb{R})$. So, the dominated convergence theorem applies in (25): letting ${\lambda _{1}},{\lambda _{2}}\downarrow 0$ in (25) and noting that ${\nu _{ts}}\to {\nu _{\alpha }}$, we get (36).  □
Remark 3.3.
(i) Note that we derive the characterizing (Stein) identity (25) for TSD using the Lévy–Khinchine representation of the cf. Also, observe that several classes of distributions such as variance-gamma, bilateral-gamma, CGMY, and KoBol can be viewed as TSD. The Stein identities for these classes of distributions can be easily obtained using (25).
(ii) Recently, Arras and Houdré ([1], Theorem 3.1 and Section 5) obtained the Stein identity for infinitely divisible distributions with finite first moment via the covariance representation given in [17]. Note that the TSD family is a subclass of the infinitely divisible distributions and has finite mean. Hence, the Stein identity for TSD can also be derived using the approach given in [1].
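For the bilateral-gamma subfamily mentioned in (i) above (${\alpha _{1}}={\alpha _{2}}=0$), identity (25) can be tested by plain Monte Carlo: the jump integral collapses to two exponential smoothings, since ${\textstyle\int _{0}^{\infty }}uf(x+u)\frac{{m_{1}}{e^{-{\lambda _{1}}u}}}{u}du=\frac{{m_{1}}}{{\lambda _{1}}}\mathbb{E}[f(x+{U_{1}})]$ with ${U_{1}}\sim \text{Exp}({\lambda _{1}})$, and similarly on the negative half-line. The Python sketch below uses illustrative parameters; the test function $\sin $ is bounded and smooth (the proof itself used $f(x)={e^{isx}}$).

```python
import math, random

random.seed(7)
# Illustrative BGD(m1, l1, m2, l2) parameters; X = Gamma(m1, 1/l1) - Gamma(m2, 1/l2).
m1, l1, m2, l2 = 2.0, 1.5, 1.0, 2.0
f = math.sin

# For alpha_1 = alpha_2 = 0 the inner integral in (25) simplifies:
#   int u f(x+u) nu_ts(du) = (m1/l1) E[f(x+U1)] - (m2/l2) E[f(x-U2)],
# with U1 ~ Exp(l1) and U2 ~ Exp(l2) independent of X.
N = 400_000
lhs = rhs = 0.0
for _ in range(N):
    x = random.gammavariate(m1, 1 / l1) - random.gammavariate(m2, 1 / l2)
    lhs += x * f(x)
    rhs += (m1 / l1) * f(x + random.expovariate(l1)) \
         - (m2 / l2) * f(x - random.expovariate(l2))
assert abs(lhs - rhs) / N < 0.02   # agreement within Monte Carlo noise
```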
3.1.1 A nonzero bias distribution
In the Stein’s method literature, the zero bias distribution is a powerful tool for obtaining bounds and has been used in several situations, for instance in conjunction with coupling techniques to produce quantitative results for normal and product-normal approximations, see, e.g., [11]. The zero bias distribution was introduced by Goldstein and Reinert [16].
In the following lemma, we prove the existence of a nonzero (extended) bias distribution (see [1, Remark 3.9 (ii)]) associated with TSD. Before stating our result, let us define
where ${\nu _{ts}}$ is the Lévy measure of TSD (see (2)). Let
\[ \eta (u):={\eta ^{+}}(u){\mathbf{I}_{(0,\infty )}}(u)+{\eta ^{-}}(u){\mathbf{I}_{(-\infty ,0)}}(u),\hspace{1em}u\in \mathbb{R}.\]
Also let Y be a random variable with the density
Then, for $n\ge 1$, the nth moment of Y is
(38)
\[ {\eta ^{+}}(u)={\int _{u}^{\infty }}y{\nu _{ts}}(dy),\hspace{2.5pt}u\gt 0,\hspace{1em}\text{and}\hspace{1em}{\eta ^{-}}(u)={\int _{-\infty }^{u}}(-y){\nu _{ts}}(dy),\hspace{2.5pt}u\lt 0,\](39)
\[ {f_{1}}(u)=\frac{\eta (u)}{{\textstyle\int _{\mathbb{R}}}{y^{2}}{\nu _{ts}}(dy)}=\frac{\eta (u)}{\mathit{Var}(X)},\hspace{1em}u\in \mathbb{R}.\](40)
\[\begin{aligned}{}\mathbb{E}[{Y^{n}}]& =\frac{1}{\mathit{Var}(X)}{\int _{\mathbb{R}}}{u^{n}}\eta (u)du\\ {} & =\frac{1}{(n+1)\mathit{Var}(X)}{\int _{\mathbb{R}}}{u^{n+2}}{\nu _{ts}}(du)\\ {} & =\frac{{C_{n+2}}(X)}{(n+1){C_{2}}(X)}.\end{aligned}\]
Lemma 3.5.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and Y (independent of X) have the density given in (39). Then
(41)
\[ \mathit{Cov}(X,f(X))=\mathit{Var}(X)\mathbb{E}\left[{f^{\prime }}(X+Y)\right],\]
where f is an absolutely continuous function with $\mathbb{E}\left[{f^{\prime }}(X+Y)\right]\lt \infty $.
Proof.
Using (8) and (9), for the case $n=1$, in (25), and rearranging the terms, we get
\[\begin{aligned}{}\mathit{Cov}(X,f(X))=& \mathbb{E}\left[{\int _{\mathbb{R}}}u(f(X+u)-f(X)){\nu _{ts}}(du)\right]\\ {} =& \mathbb{E}\left[{\int _{0}^{\infty }}{f^{\prime }}(X+v){\int _{v}^{\infty }}u{\nu _{ts}}(du)dv\right]\\ {} & +\mathbb{E}\left[{\int _{-\infty }^{0}}{f^{\prime }}(X+v){\int _{-\infty }^{v}}(-u){\nu _{ts}}(du)dv\right]\\ {} =& \mathbb{E}\left[{\int _{\mathbb{R}}}{f^{\prime }}(X+v)\left({\eta ^{+}}(v){\mathbf{I}_{(0,\infty )}}(v)+{\eta ^{-}}(v){\mathbf{I}_{(-\infty ,0)}}(v)\right)dv\right]\\ {} =& \left({\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\right)\mathbb{E}\left[{\int _{\mathbb{R}}}{f^{\prime }}(X+v){f_{1}}(v)dv\right]\\ {} =& \left({\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\right)\mathbb{E}\left[{f^{\prime }}(X+Y)\right]\\ {} =& \mathit{Var}(X)\mathbb{E}\left[{f^{\prime }}(X+Y)\right].\end{aligned}\]
This proves the result.  □
3.2 The Stein equation for tempered stable distributions
In this section, we first derive the Stein equation for TSD and then solve it via the semigroup approach. From Proposition 3.1, for any $f\in \mathcal{S}(\mathbb{R})$,
(42)
\[ \mathcal{A}f(x):=-xf(x)+{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)\]
is the Stein operator for TSD. Hence, the Stein equation for $X\sim $TSD $({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}}$, ${\lambda _{2}})$ is given by
(43)
\[ -xf(x)+{\int _{\mathbb{R}}}uf(x+u){\nu _{ts}}(du)=h(x)-\mathbb{E}[h(X)],\]
where $h\in \mathcal{H}$, a class of test functions. The semigroup approach to solving the Stein equation (43) was developed by Barbour [2] and generalized by Arras and Houdré [1] to infinitely divisible distributions with finite first moment. Consider the following family of operators ${({P_{t}})_{t\ge 0}}$, defined, for all $x\in \mathbb{R}$, by
where $\hat{f}$ is FT of f, and ${\phi _{ts}}$ is the cf of TSD given in (1). Recall that the TSD family is self-decomposable, see [24], p. 4284. Hence, from Equations 5.8 and 5.15 of [1], one can define a cf, for all $z\in \mathbb{R}$, and $t\ge 0$, by
where ${F_{{X_{(t)}}}}$ is the CDF of a rv ${X_{(t)}}$, say, independent of X. Also, for any $t\ge 0$, the rv ${X_{(t)}}$ is related to X (see Equation 2.11 of [1], Section 2.2) as
\[ X\stackrel{d}{=}{e^{-t}}X+{X_{(t)}},\]
where $X\sim $TSD and $\stackrel{d}{=}$ denotes the equality in distribution. We refer to Section 15 of [19] for more details on self-decomposable distributions. Now using (45) in (44), we get
where the last step follows by applying the inverse FT.
(44)
\[ {P_{t}}(f)(x)=\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{f}(z){e^{izx{e^{-t}}}}\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-t}}z)}dz,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\](45)
\[\begin{aligned}{}{\phi _{t}}(z)& :=\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-t}}z)}\\ {} & =\frac{\exp \left({\textstyle\int _{\mathbb{R}}}({e^{izu}}-1){\nu _{ts}}(du)\right)}{\exp \left({\textstyle\int _{\mathbb{R}}}({e^{i{e^{-t}}zu}}-1){\nu _{ts}}(du)\right)}\\ {} & =\exp \left({\int _{\mathbb{R}}}{e^{izu}}(1-{e^{i({e^{-t}}-1)zu}}){\nu _{ts}}(du)\right)\\ {} & ={\int _{\mathbb{R}}}{e^{izu}}{F_{{X_{(t)}}}}(du),\end{aligned}\](46)
\[\begin{aligned}{}{P_{t}}(f)(x)& =\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}\widehat{f}(z){e^{izx{e^{-t}}}}{e^{izu}}{F_{{X_{(t)}}}}(du)dz\\ {} & =\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{\mathbb{R}}}\widehat{f}(z){e^{iz(u+x{e^{-t}})}}{F_{{X_{(t)}}}}(du)dz\\ {} & ={\int _{\mathbb{R}}}f(u+x{e^{-t}}){F_{{X_{(t)}}}}(du),\end{aligned}\]
Following steps similar to Proposition 3.8 and Lemma 3.10 of [30], the proof is completed.
Remark 3.8.
Note that the domain of the semigroup ${({P_{t}})_{t\ge 0}}$ is $\mathcal{S}(\mathbb{R})$, so ${({P_{t}})_{t\ge 0}}$ is a uniformly continuous semigroup. Also, for $1\le p\lt \infty $, the Schwartz space $\mathcal{S}(\mathbb{R})$ is dense in ${L^{p}}({F_{X}})=\{f:\mathbb{R}\to \mathbb{R}|{\textstyle\int _{\mathbb{R}}}|f(x){|^{p}}{F_{X}}(dx)\lt \infty \}$, where ${F_{X}}$ is the CDF of $X\sim $ TSD (see [20, Remark 1.9.1]). Following the proof of Proposition 5.1 of [1], the domain of ${({P_{t}})_{t\ge 0}}$ can be extended to ${L^{p}}({F_{X}})$, with the topology induced by the ${L^{p}}$-norm.
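The representation (44) can be exercised numerically for the bilateral-gamma subfamily, whose cf has the closed form ${\phi _{ts}}(z)={({\lambda _{1}}/({\lambda _{1}}-iz))^{{m_{1}}}}{({\lambda _{2}}/({\lambda _{2}}+iz))^{{m_{2}}}}$ (a standard fact, not derived above). With the Gaussian test function $h(x)={e^{-{x^{2}}/2}}$, whose FT is $\sqrt{2\pi }{e^{-{z^{2}}/2}}$, one can check that ${P_{0}}h=h$ and that ${P_{t}}h(x)\to \mathbb{E}[h(X)]$ as $t\to \infty $. The Python sketch below uses illustrative parameters throughout.

```python
import cmath, math, random

m1, l1, m2, l2 = 2.0, 1.5, 1.0, 2.0   # illustrative BGD parameters (alpha_i = 0)

def cf(z):
    """Closed-form cf of BGD(m1, l1, m2, l2)."""
    return (l1 / (l1 - 1j * z)) ** m1 * (l2 / (l2 + 1j * z)) ** m2

def P(t, x, zmax=12.0, n=12000):
    """P_t(h)(x) from (44) for h(x) = exp(-x^2/2), hat{h}(z) = sqrt(2 pi) exp(-z^2/2)."""
    step = 2 * zmax / n
    s = 0j
    for k in range(n + 1):
        z = -zmax + k * step
        hhat = math.sqrt(2 * math.pi) * math.exp(-0.5 * z * z)
        s += hhat * cmath.exp(1j * z * x * math.exp(-t)) * cf(z) / cf(math.exp(-t) * z)
    return (s * step / (2 * math.pi)).real

assert abs(P(0.0, 0.0) - 1.0) < 1e-6      # P_0 h = h, and h(0) = 1

random.seed(3)                             # E[h(X)] by Monte Carlo, X = G1 - G2
mc = sum(math.exp(-0.5 * (random.gammavariate(m1, 1 / l1)
                          - random.gammavariate(m2, 1 / l2)) ** 2)
         for _ in range(200_000)) / 200_000
assert abs(P(15.0, 0.0) - mc) < 0.01       # P_t h -> E[h(X)] as t -> infinity
```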
Next, we provide a solution to the Stein equation (43).
Theorem 3.9.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and $h\in {\mathcal{H}_{r}}$. Then the function ${f_{h}}:\mathbb{R}\to \mathbb{R}$ defined by
(47)
\[ {f_{h}}(x)=-{\int _{0}^{\infty }}\frac{d}{dx}{P_{t}}(h)(x)dt\]
solves (43).
Proof.
Let
(48)
\[ {g_{h}}(x)=-{\int _{0}^{\infty }}{P_{t}}(h)(x)dt.\]
Then ${g^{\prime }_{h}}(x)={f_{h}}(x)$. Now from (44), we get
Also from (47), we get
(49)
\[\begin{aligned}{}{P_{0}}(h)(x)& =\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){e^{izx}}dz=h(x),\end{aligned}\](50)
\[\begin{aligned}{}\hspace{2.5pt}\text{and}\hspace{2.5pt}\underset{\epsilon \to \infty }{\lim }{P_{\epsilon }}h(x)& =\underset{\epsilon \to \infty }{\lim }\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){e^{izx{e^{-\epsilon }}}}\frac{{\phi _{ts}}(z)}{{\phi _{ts}}({e^{-\epsilon }}z)}dz\\ {} & =\frac{1}{2\pi }{\int _{\mathbb{R}}}\hat{h}(z){\phi _{ts}}(z)dz\\ {} & =\mathbb{E}[h(X)].\end{aligned}\]
\[\begin{aligned}{}\mathcal{A}{f_{h}}(x)& =-x{f_{h}}(x)+{\int _{\mathbb{R}}}u{f_{h}}(x+u){\nu _{ts}}(du)\\ {} & =T{g_{h}}(x)\\ {} & =-{\int _{0}^{\infty }}T{P_{t}}(h)(x)dt\\ {} & =-{\int _{0}^{\infty }}\frac{d}{dt}{P_{t}}h(x)dt\hspace{2.5pt}\text{(see [27, p.68])}\\ {} & =-\underset{\epsilon \to \infty }{\lim }{\int _{0}^{\epsilon }}\frac{d}{dt}{P_{t}}h(x)dt\\ {} & ={P_{0}}h(x)-\underset{\epsilon \to \infty }{\lim }{P_{\epsilon }}h(x)\\ {} & =h(x)-\mathbb{E}[h(X)]\hspace{2.5pt}\text{(using (49) and (50)}).\end{aligned}\]
Hence, ${f_{h}}$ is the solution to (43).  □
3.3 Properties of the solution of the Stein equation
The next step is to investigate the properties of ${f_{h}}$. In the following theorem, we establish estimates for ${f_{h}}$, which play a crucial role in TSD approximation problems. Gaunt [12, 13] and Döbler et al. [10] proposed various methods for bounding solutions of Stein equations, which yield properties of the solution, in particular for a subfamily of TSD, namely the variance-gamma family.
Theorem 3.10.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and let ${f_{h}}$ be as in Theorem 3.9. Then:
(i) for $h\in {\mathcal{H}_{r+1}}$, $\| {f_{h}^{(k)}}\| \le \frac{\| {h^{(k+1)}}\| }{k+1}$ for $k=0,1,\dots ,r$;
(ii) for $h\in {\mathcal{H}_{3}}$, $\left|{f^{\prime }_{h}}(x)-{f^{\prime }_{h}}(y)\right|\le \frac{\| {h^{(3)}}\| }{3}|x-y|$, for all $x,y\in \mathbb{R}$.
Proof.
(i) For $h\in {\mathcal{H}_{r+1}}$,
\[\begin{aligned}{}\| {f_{h}}\| & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}\frac{d}{dx}{P_{t}}h(x)dt\right|\\ {} & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}\frac{d}{dx}{\int _{\mathbb{R}}}h(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\hspace{2.5pt}(\text{using (46)})\\ {} & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}{e^{-t}}{\int _{\mathbb{R}}}{h^{(1)}}(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\\ {} & \le \| {h^{(1)}}\| \left|{\int _{0}^{\infty }}{e^{-t}}dt\right|\\ {} & =\| {h^{(1)}}\| .\end{aligned}\]
It can be easily seen that ${f_{h}}$ is r-times differentiable. Let $r=1$, then
\[\begin{aligned}{}\| {f_{h}^{(1)}}\| & =\underset{x\in \mathbb{R}}{\sup }\left|-{\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}{h^{(2)}}(x{e^{-t}}+y){F_{{X_{(t)}}}}(dy)dt\right|\\ {} & \le \| {h^{(2)}}\| \left|{\int _{0}^{\infty }}{e^{-2t}}dt\right|\\ {} & =\frac{\| {h^{(2)}}\| }{2}.\end{aligned}\]
Also, by induction, we get
\[ \| {f_{h}^{(k)}}\| \le \frac{\| {h^{(k+1)}}\| }{k+1},\hspace{1em}k=0,1,\dots ,r.\]
(ii) For any $x,y\in \mathbb{R}$ and $h\in {\mathcal{H}_{3}}$,
\[\begin{aligned}{}\left|{f^{\prime }_{h}}(x)-{f^{\prime }_{h}}(y)\right|& \le {\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}\left|{h^{(2)}}(x{e^{-t}}+z)-{h^{(2)}}(y{e^{-t}}+z)\right|{F_{{X_{(t)}}}}(dz)dt\\ {} & \le {\int _{0}^{\infty }}{e^{-2t}}{\int _{\mathbb{R}}}\| {h^{(3)}}\| \left|x-y\right|{e^{-t}}{F_{{X_{(t)}}}}(dz)dt\\ {} & =\| {h^{(3)}}\| \left|x-y\right|{\int _{0}^{\infty }}{e^{-3t}}dt\\ {} & =\frac{\| {h^{(3)}}\| }{3}\left|x-y\right|.\end{aligned}\]
This proves the result.  □
4 Bounds for tempered stable approximation
In this section, we present bounds for the tempered stable approximation of various probability distributions. First, we obtain, for the Kolmogorov distance ${d_{K}}$, error bounds for a sequence of CPDs converging to a TSD.
Theorem 4.1.
Let $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ and ${X_{n}}$, $n\ge 1$, be compound Poisson rvs with cf
where ${\phi _{ts}}(z)$ is given in (1). Then
where $c\gt 0$ is independent of n and ${C_{j}}$ denotes the jth cumulant of X.
(53)
\[ {\phi _{n}}(z):=\exp \bigg(n\left({\phi _{ts}^{\frac{1}{n}}}(z)-1\right)\bigg),\hspace{1em}z\in \mathbb{R},\](54)
\[ {d_{K}}({X_{n}},X)\le c{\bigg({\sum \limits_{j=1}^{2}}|{C_{j}}(X)|\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}},\]
Proof.
Let $b={\textstyle\int _{-1}^{1}}u{\nu _{ts}}(du)$, where ${\nu _{ts}}$ is defined in (2). Then, by Equation (2.6) of [1], $\text{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})\stackrel{d}{=}\mathrm{ID}(b,0,{\nu _{ts}})$; that is, it is an infinitely divisible distribution with the characteristic triplet $(b,0,{\nu _{ts}})$. Note that TSDs are absolutely continuous (with respect to the Lebesgue measure) with a bounded density, and $\mathbb{E}|X{|^{2}}\lt \infty $ (see [24, Section 7]). Recall from Proposition 4.11 of [1] that, if $X\sim \mathrm{ID}(b,0,\nu )$ with cf ${\phi _{X}}$ (say), and ${X_{n}}$, $n\ge 1$, are compound Poisson rvs, each with cf as in (53), then
provided $|{\phi _{X}}(z)|{\displaystyle\int _{0}^{|z|}}\displaystyle\frac{ds}{|{\phi _{X}}(s)|}\le {c_{0}}|z{|^{p}}$ for some $p\ge 1$. Now observe that
(55)
\[\begin{aligned}{}{d_{K}}({X_{n}},X)& \le c{\bigg(|\mathbb{E}[X]|+{\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\bigg)^{\frac{2}{p+4}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{p+4}}},\end{aligned}\]
\[\begin{aligned}{}|{\phi _{ts}}(z)|{\int _{0}^{|z|}}\frac{ds}{|{\phi _{ts}}(s)|}& \le {\int _{0}^{|z|}}\frac{ds}{|\mathbb{E}[\cos sX]+i\mathbb{E}[\sin sX]|}\\ {} & ={\int _{0}^{|z|}}\frac{|\mathbb{E}[{e^{\left(-isX\right)}}]|}{{\mathbb{E}^{2}}[\cos sX]+{\mathbb{E}^{2}}[\sin sX]}ds\\ {} & \le {c_{0}}{\int _{0}^{|z|}}|\mathbb{E}[{e^{\left(-isX\right)}}]|ds\\ {} & \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\left(\frac{1}{{\mathbb{E}^{2}}[\cos sX]+{\mathbb{E}^{2}}[\sin sX]}\lt {c_{0}}\hspace{2.5pt}\text{(say)}\right)\\ {} & ={c_{0}}{\int _{0}^{|z|}}|\mathbb{E}[\cos sX-i\sin sX]|ds\\ {} & \le {c_{0}}{\int _{0}^{|z|}}ds\\ {} & ={c_{0}}|z|.\end{aligned}\]
Hence by (55), for $p=1$, we get
\[\begin{aligned}{}{d_{K}}({X_{n}},X)& \le c{\bigg(|\mathbb{E}[X]|+{\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}}\\ {} & =c{\bigg({\sum \limits_{j=1}^{2}}|{C_{j}}(X)|\bigg)^{\frac{2}{5}}}{\bigg(\frac{1}{n}\bigg)^{\frac{1}{5}}},\end{aligned}\]
$\hspace{2.5pt}\text{since}\hspace{2.5pt}{\textstyle\int _{\mathbb{R}}}{u^{2}}{\nu _{ts}}(du)={C_{2}}(X)$. This proves the result.  □
Remark 4.2.
Note that ${d_{K}}({X_{n}},X)\to 0$ as $n\to \infty $, as expected; thus TSD arises as a limit of CPDs.
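Theorem 4.1 can be illustrated by simulation in the bilateral-gamma case: there ${\phi _{ts}^{1/n}}$ is the cf of $\text{BGD}({m_{1}}/n,{\lambda _{1}},{m_{2}}/n,{\lambda _{2}})$, so a draw of ${X_{n}}$ is a Poisson(n)-sized sum of such jumps. The Python sketch below (illustrative parameters and sample sizes; the two-sample Kolmogorov statistic is only a noisy proxy for ${d_{K}}$) shows the distance shrinking as n grows.

```python
import math, random

random.seed(11)
m1, l1, m2, l2 = 2.0, 1.5, 1.0, 2.0   # illustrative BGD = TSD with alpha_1 = alpha_2 = 0

def bgd(s1, s2):
    """One BGD(s1, l1, s2, l2) draw: difference of two independent gammas."""
    return random.gammavariate(s1, 1 / l1) - random.gammavariate(s2, 1 / l2)

def poisson(lam):
    """Knuth's product-of-uniforms method; adequate for moderate lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def cpd(n):
    """One draw of X_n with cf (53): Poisson(n) many jumps, each with cf phi_ts^(1/n)."""
    return sum(bgd(m1 / n, m2 / n) for _ in range(poisson(n)))

def ks_stat(a, b):
    """Two-sample Kolmogorov statistic, an empirical proxy for d_K."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

N = 10_000
target = [bgd(m1, m2) for _ in range(N)]
d_n1 = ks_stat([cpd(1) for _ in range(N)], target)    # n = 1: atom of mass e^{-1} at 0
d_n30 = ks_stat([cpd(30) for _ in range(N)], target)  # n = 30: much closer
assert d_n30 < d_n1 and d_n30 < 0.1
```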
Our next result yields an error bound for the closeness of tempered stable distributions to α-stable distributions.
Theorem 4.3.
Let $\alpha \in (0,1)$. Let $X\sim $TSD $({m_{1}},\alpha ,{\lambda _{1}},{m_{2}},\alpha ,{\lambda _{2}})$ and ${X_{\alpha }}\sim S({m_{1}},{m_{2}},\alpha )$. Then
where ${C_{1}},{C_{2}}\gt 0$ are independent of ${\lambda _{1}}$ and ${\lambda _{2}}$.
(56)
\[ {d_{K}}({X_{\alpha }},X)\le {C_{1}}{\lambda _{1}^{\alpha +\frac{1}{2}}}+{C_{2}}{\lambda _{2}^{\alpha +\frac{1}{2}}},\]Proof.
For $h\in {\mathcal{H}_{K}}$, from (43), we get
since
(57)
\[\begin{aligned}{}\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]& =\mathbb{E}[\mathcal{A}f({X_{\alpha }})]=\mathbb{E}\left[\mathcal{A}f({X_{\alpha }})-{\mathcal{A}_{\alpha }}f({X_{\alpha }})\right],\end{aligned}\]
\[ \mathbb{E}[{\mathcal{A}_{\alpha }}f({X_{\alpha }})]=\mathbb{E}\left[-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\right]=0,\hspace{1em}f\in \mathcal{S}(\mathbb{R}),\]
where ${\nu _{\alpha }}$ is the Lévy measure given in (4) (see (36)). Then, from (57), we have
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[\bigg(-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{ts}}(du)\bigg)\\ {} & \hspace{1em}-\bigg(-{X_{\alpha }}f({X_{\alpha }})+{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\bigg)\bigg]\bigg|\\ {} & =\bigg|\mathbb{E}\bigg[{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{ts}}(du)-{\int _{\mathbb{R}}}uf({X_{\alpha }}+u){\nu _{\alpha }}(du)\bigg]\bigg|\\ {} & =\bigg|\mathbb{E}\bigg[\bigg({m_{1}}{\int _{0}^{\infty }}uf({X_{\alpha }}+u)\frac{{e^{-{\lambda _{1}}u}}}{{u^{1+\alpha }}}du\\ {} & \hspace{1em}+{m_{2}}{\int _{-\infty }^{0}}uf({X_{\alpha }}+u)\frac{{e^{-{\lambda _{2}}|u|}}}{|u{|^{1+\alpha }}}du\bigg)\\ {} & \hspace{1em}-\bigg({m_{1}}{\int _{0}^{\infty }}uf({X_{\alpha }}+u)\frac{du}{{u^{1+\alpha }}}\\ {} & \hspace{1em}+{m_{2}}{\int _{-\infty }^{0}}uf({X_{\alpha }}+u)\frac{du}{|u{|^{1+\alpha }}}\bigg)\bigg]\bigg|\end{aligned}\]
So, we write
(58)
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{\alpha }})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[{m_{1}}{\int _{0}^{\infty }}\frac{({e^{-{\lambda _{1}}u}}-1)}{{u^{1+\alpha }}}uf({X_{\alpha }}+u)du\\ {} & \hspace{1em}-{m_{2}}{\int _{0}^{\infty }}\frac{({e^{-{\lambda _{2}}u}}-1)}{{u^{1+\alpha }}}uf({X_{\alpha }}-u)du\bigg]\bigg|.\end{aligned}\]
Now applying the triangle and Cauchy–Schwarz inequalities in (58), we get
(59)
\[\begin{aligned}{}{d_{K}}({X_{\alpha }},X)& \le {m_{1}}{\bigg\{{\int _{0}^{\infty }}{\bigg(\frac{({e^{-{\lambda _{1}}u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\bigg\}^{\frac{1}{2}}}\mathbb{E}{\bigg({\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg)^{\frac{1}{2}}}\\ {} & \hspace{1em}+{m_{2}}{\bigg\{{\int _{0}^{\infty }}{\bigg(\frac{({e^{-{\lambda _{2}}u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\bigg\}^{\frac{1}{2}}}\mathbb{E}{\bigg({\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg)^{\frac{1}{2}}}\\ {} & ={\lambda _{1}^{\alpha +\frac{1}{2}}}{m_{1}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg]^{\frac{1}{2}}}\\ {} & \hspace{1em}+{\lambda _{2}^{\alpha +\frac{1}{2}}}{m_{2}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg]^{\frac{1}{2}}},\end{aligned}\]
where $M(\alpha )={\textstyle\int _{0}^{\infty }}{\bigg(\frac{({e^{-u}}-1)}{{u^{1+\alpha }}}\bigg)^{2}}du\lt \infty $ (see [15, p. 169]). Also, $\mathbb{E}{[{\textstyle\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du]^{\frac{1}{2}}}$ and $\mathbb{E}{[{\textstyle\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du]^{\frac{1}{2}}}$ are finite, since $f\in \mathcal{S}(\mathbb{R})$. Now setting
\[\begin{aligned}{}{C_{1}}& ={m_{1}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}+u)du\bigg]^{\frac{1}{2}}}\lt \infty ,\hspace{1em}\text{and}\\ {} {C_{2}}& ={m_{2}}{M^{\frac{1}{2}}}(\alpha )\mathbb{E}{\bigg[{\int _{0}^{\infty }}{u^{2}}{f^{2}}({X_{\alpha }}-u)du\bigg]^{\frac{1}{2}}}\lt \infty ,\end{aligned}\]
in (59), we get
\[ {d_{K}}({X_{\alpha }},X)\le {C_{1}}{\lambda _{1}^{\alpha +\frac{1}{2}}}+{C_{2}}{\lambda _{2}^{\alpha +\frac{1}{2}}},\]
where ${C_{1}},{C_{2}}\gt 0$ are independent of ${\lambda _{1}}$ and ${\lambda _{2}}$. This proves the result. □
Next, we state a result that gives the limiting distribution of a sequence of tempered stable random variables.
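Before moving on, note that the rate in (56) stems from the substitution $v=\lambda u$, which gives ${\textstyle\int _{0}^{\infty }}{(({e^{-\lambda u}}-1)/{u^{1+\alpha }})^{2}}du={\lambda ^{1+2\alpha }}M(\alpha )$. A minimal numerical sanity check of this scaling (an illustrative sketch only; the grid, truncation points, and parameter values below are arbitrary choices, not from the source):

```python
import numpy as np

def tempering_integral(lam, alpha, u_min=1e-9, u_max=1e5, n=400_000):
    """Trapezoidal approximation of I(lam) = int_0^inf ((e^{-lam*u}-1)/u^{1+alpha})^2 du,
    truncated to [u_min, u_max] and evaluated on a log-spaced grid."""
    u = np.logspace(np.log10(u_min), np.log10(u_max), n)
    y = (np.expm1(-lam * u) / u ** (1 + alpha)) ** 2
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(u))  # trapezoid rule

alpha = 0.3                               # illustrative stability parameter
M_alpha = tempering_integral(1.0, alpha)  # I(1) = M(alpha)
ratio = tempering_integral(0.5, alpha) / M_alpha

# The substitution v = lam*u predicts I(lam)/I(1) = lam^(1+2*alpha) exactly.
print(ratio, 0.5 ** (1 + 2 * alpha))      # the two values agree closely
```

Taking square roots recovers the factor ${\lambda ^{\alpha +\frac{1}{2}}}$ appearing in (56).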
Lemma 4.4.
([24, Proposition 3.1]) Let ${m_{1}},{m_{2}},{\lambda _{1}},{\lambda _{2}},{m_{i,n}},{\lambda _{i,n}}\in (0,\infty )$ and ${\alpha _{1}},{\alpha _{2}},{\alpha _{i,n}}\in [0,1)$, for $i=1,2$. Also, let ${X_{n}}\sim \textit{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$ and $X\sim \textit{TSD}({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$. If $({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})\to ({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ as $n\to \infty $, then ${X_{n}}\stackrel{L}{\to }X$.
The following theorem gives the error in the closeness of ${X_{n}}$ to X.
Theorem 4.5.
Let ${X_{n}}$ and X be defined as in Lemma 4.4. Then
(60)
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},X)& \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-{C_{2}}(X)\right|\\ {} & \hspace{1em}+\frac{1}{6}{C_{2}}(X)\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{|{C_{3}}(X)|}{{C_{2}}(X)}\right|,\end{aligned}\]
where ${C_{j}}(X)$, $j=1,2,3$, denotes the j-th cumulant of X and ${d_{{\mathcal{H}_{3}}}}$ is defined in (14).
Proof.
Let $h\in {\mathcal{H}_{3}}$ and f be the solution to the Stein equation (42). Then
(61)
\[\begin{aligned}{}\mathbb{E}[h({X_{n}})]-\mathbb{E}[h(X)]& =\mathbb{E}[\mathcal{A}f({X_{n}})]\\ {} & =\mathbb{E}\bigg[-{X_{n}}f({X_{n}})+{\int _{\mathbb{R}}}uf({X_{n}}+u){\nu _{ts}}(du)\bigg]\\ {} & =\mathbb{E}\bigg[\bigg(-{X_{n}}+{C_{1}}(X)\bigg)f({X_{n}})+{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\bigg],\end{aligned}\]
where the last equality follows by (41), and Y has the density given in (39). Since ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$, by Proposition 3.1, we have
(62)
\[ \mathbb{E}\bigg[-{X_{n}}f({X_{n}})+{\int _{\mathbb{R}}}uf({X_{n}}+u){\nu _{ts}^{n}}(du)\bigg]=0,\]
where ${\nu _{ts}^{n}}$ is the Lévy measure given by
\[ {\nu _{ts}^{n}}(du)=\left(\frac{{m_{1,n}}}{{u^{1+{\alpha _{1,n}}}}}{e^{-{\lambda _{1,n}}u}}{\mathbf{I}_{(0,\infty )}}(u)+\frac{{m_{2,n}}}{|u{|^{1+{\alpha _{2,n}}}}}{e^{-{\lambda _{2,n}}|u|}}{\mathbf{I}_{(-\infty ,0)}}(u)\right)du.\]
Also, by Lemma 3.5, the identity in (62) can be seen as
(63)
\[ \mathbb{E}\bigg[\bigg(-{X_{n}}+{C_{1}}({X_{n}})\bigg)f({X_{n}})+{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})\bigg]=0,\]
where ${Y_{n}}$ has the density
(64)
\[ {f_{n}}(u)=\frac{[{\textstyle\textstyle\int _{u}^{\infty }}y{\nu _{ts}^{n}}(dy)]{\textbf{I}_{(0,\infty )}}(u)-[{\textstyle\textstyle\int _{-\infty }^{u}}y{\nu _{ts}^{n}}(dy)]{\textbf{I}_{(-\infty ,0)}}(u)}{{C_{2}}({X_{n}})},\hspace{1em}u\in \mathbb{R}.\]
Using (63) in (61), we get
(65)
\[\begin{aligned}{}\bigg|\mathbb{E}[h({X_{n}})]-\mathbb{E}[h(X)]\bigg|& =\bigg|\mathbb{E}\bigg[\bigg((-{X_{n}}+{C_{1}}(X))f({X_{n}})+{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\bigg)\\ {} & \hspace{1em}-\bigg((-{X_{n}}+{C_{1}}({X_{n}}))f({X_{n}})+{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})\bigg)\bigg]\bigg|\\ {} & \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|\| f\| \\ {} & \hspace{1em}+\mathbb{E}\left|{C_{2}}({X_{n}}){f^{\prime }}({X_{n}}+{Y_{n}})-{C_{2}}(X){f^{\prime }}({X_{n}}+Y)\right|\\ {} & \le \left|{C_{1}}({X_{n}})-{C_{1}}(X)\right|\| f\| \\ {} & \hspace{1em}+\mathbb{E}\bigg|({C_{2}}({X_{n}})-{C_{2}}(X)){f^{\prime }}({X_{n}}+{Y_{n}})\bigg|\\ {} & \hspace{1em}+{C_{2}}(X)\mathbb{E}\bigg|{f^{\prime }}({X_{n}}+{Y_{n}})-{f^{\prime }}({X_{n}}+Y)\bigg|\\ {} & \le \| {h^{(1)}}\| |{C_{1}}({X_{n}})-{C_{1}}(X)|+\frac{\| {h^{(2)}}\| }{2}|{C_{2}}({X_{n}})-{C_{2}}(X)|\\ {} & \hspace{1em}+{C_{2}}(X)\frac{\| {h^{(3)}}\| }{3}\bigg|\mathbb{E}|{Y_{n}}|-\mathbb{E}|Y|\bigg|,\end{aligned}\]
where the last inequality follows by applying the estimates given in Lemma 3.10. From (64) and (39), it can be verified that (see (40))
(66)
\[ \mathbb{E}|{Y_{n}}|=\frac{|{C_{3}}({X_{n}})|}{2{C_{2}}({X_{n}})}\hspace{1em}\text{and}\hspace{1em}\mathbb{E}|Y|=\frac{|{C_{3}}(X)|}{2{C_{2}}(X)}.\]
Using (66) in (65), we get the desired result. □
Remark 4.6.
(i) Note that if $({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})\to ({m_{1}},{\alpha _{1}},{\lambda _{1}},{m_{2}},{\alpha _{2}},{\lambda _{2}})$ as $n\to \infty $, then ${C_{j}}({X_{n}})\to {C_{j}}(X)$, $j=1,2,3$, and ${d_{{\mathcal{H}_{3}}}}({X_{n}},X)\to 0$, as expected.
(ii) Note also that if ${m_{1,n}}={m_{2,n}}$, ${\alpha _{1,n}}={\alpha _{2,n}}$, ${\lambda _{1,n}}={\lambda _{2,n}}$, ${m_{1}}={m_{2}}$, ${\alpha _{1}}={\alpha _{2}}$, and ${\lambda _{1}}={\lambda _{2}}$, then ${C_{j}}({X_{n}})={C_{j}}(X)=0$, $j=1,3$. Under these conditions, from (60), we get
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},X)& \le \frac{1}{2}|{C_{2}}({X_{n}})-{C_{2}}(X)|\\ {} & =\bigg|\Gamma (2-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{2-{\alpha _{1,n}}}}}-\Gamma (2-{\alpha _{1}})\frac{{m_{1}}}{{\lambda _{1}^{2-{\alpha _{1}}}}}\bigg|.\end{aligned}\]
If in addition ${C_{2}}({X_{n}})\to {C_{2}}(X)$, then ${X_{n}}\stackrel{L}{\to }X$, as $n\to \infty $.
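In the symmetric setting of part (ii), the bound is a difference of two explicit terms and is easy to evaluate. A minimal sketch (the function name and parameter values are illustrative choices, not from the source):

```python
from math import gamma

def symmetric_tsd_bound(m_n, a_n, lam_n, m, a, lam):
    """Bound of Remark 4.6(ii) in the symmetric case:
    d_H3(X_n, X) <= |Gamma(2-a_n)*m_n/lam_n^(2-a_n) - Gamma(2-a)*m/lam^(2-a)|."""
    return abs(gamma(2 - a_n) * m_n / lam_n ** (2 - a_n)
               - gamma(2 - a) * m / lam ** (2 - a))

# As (m_n, a_n, lam_n) -> (m, a, lam), the bound shrinks to 0, matching
# the convergence in law asserted after the display.
for eps in (0.1, 0.01, 0.001):
    print(symmetric_tsd_bound(1.0 + eps, 0.5 + eps, 2.0 + eps, 1.0, 0.5, 2.0))
```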
Next, we discuss two examples. Our first example yields the error in approximating a TSD by a normal distribution.
Example 4.7 (Normal approximation to a TSD).
Let ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$, ${X_{m}}\sim \text{SVGD}(m,\sqrt{2m}/\lambda )$ and ${X_{\lambda }}\sim \mathcal{N}(0,{\lambda ^{2}})$. Recall from Section 2.1 that $\text{SVGD}(m,\sqrt{2m}/\lambda )\stackrel{d}{=}\text{TSD}(m,0,\sqrt{2m}/\lambda ,m,0,\sqrt{2m}/\lambda )$. Then, the cf of $\text{SVGD}(m,\sqrt{2m}/\lambda )$ is
(67)
\[ {\phi _{{X_{m}}}}(t)=\exp \left({\int _{\mathbb{R}}}({e^{iut}}-1){\nu _{sv}}(du)\right)={\left(1+\frac{{\lambda ^{2}}{t^{2}}}{2m}\right)^{-m}},\hspace{1em}t\in \mathbb{R},\]
where the Lévy measure ${\nu _{sv}}$ is
\[ {\nu _{sv}}(du)=\bigg(\frac{m}{u}{e^{-\frac{\sqrt{2m}}{\lambda }u}}{\textbf{I}_{(0,\infty )}}(u)+\frac{m}{|u|}{e^{-\frac{\sqrt{2m}}{\lambda }|u|}}{\textbf{I}_{(-\infty ,0)}}(u)\bigg)du.\]
Note from (67) that
(68)
\[ \underset{m\to \infty }{\lim }{\phi _{{X_{m}}}}(t)=\underset{m\to \infty }{\lim }{\left(1+\frac{{\lambda ^{2}}{t^{2}}}{2m}\right)^{-m}}={e^{-\frac{{\lambda ^{2}}{t^{2}}}{2}}},\hspace{1em}t\in \mathbb{R}.\]
That is, ${X_{m}}\stackrel{L}{\to }{X_{\lambda }}\sim \mathcal{N}(0,{\lambda ^{2}})$, as $m\to \infty $. Also, it follows from [36, Theorem 7.12] that, if ${X_{m}}\stackrel{L}{\to }{X_{\lambda }}$, as $m\to \infty $, then
(69)
\[ {d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{\lambda }})=\underset{m\to \infty }{\lim }{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{m}}).\]
Applying Theorem 4.5 to $X={X_{m}}$, and taking the limit as $m\to \infty $, we get from (69)
(70)
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{\lambda }})& \le \underset{m\to \infty }{\lim }\bigg(\left|{C_{1}}({X_{n}})-{C_{1}}({X_{m}})\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-{C_{2}}({X_{m}})\right|\\ {} & \hspace{1em}+\frac{1}{6}{C_{2}}({X_{m}})\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{|{C_{3}}({X_{m}})|}{{C_{2}}({X_{m}})}\right|\bigg)\\ {} & =|{C_{1}}({X_{n}})|+\frac{1}{2}|{C_{2}}({X_{n}})-{\lambda ^{2}}|+\frac{1}{6}{\lambda ^{2}}\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})},\end{aligned}\]
which gives the error in the closeness between ${X_{n}}$ and ${X_{\lambda }}$. Note that
\[\begin{aligned}{}{C_{1}}({X_{n}})& =\mathbb{E}[{X_{n}}]=\Gamma (1-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{1-{\alpha _{1,n}}}}}-\Gamma (1-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{1-{\alpha _{2,n}}}}},\\ {} {C_{2}}({X_{n}})& =\mathit{Var}({X_{n}})=\Gamma (2-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{2-{\alpha _{1,n}}}}}+\Gamma (2-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{2-{\alpha _{2,n}}}}},\hspace{1em}\text{and}\\ {} {C_{3}}({X_{n}})& =\Gamma (3-{\alpha _{1,n}})\frac{{m_{1,n}}}{{\lambda _{1,n}^{3-{\alpha _{1,n}}}}}-\Gamma (3-{\alpha _{2,n}})\frac{{m_{2,n}}}{{\lambda _{2,n}^{3-{\alpha _{2,n}}}}}.\end{aligned}\]
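These cumulants, and with them the right-hand side of (70), can be evaluated directly. A sketch (helper names and parameter choices are illustrative only); it picks a symmetric TSD, so that ${C_{1}}({X_{n}})={C_{3}}({X_{n}})=0$, with the tempering parameter tuned so that ${C_{2}}({X_{n}})={\lambda ^{2}}$, for which the bound vanishes:

```python
from math import gamma

def tsd_cumulant(k, m1, a1, l1, m2, a2, l2):
    """k-th cumulant (k = 1, 2, 3) of TSD(m1, a1, l1, m2, a2, l2):
    C_k = Gamma(k-a1)*m1/l1^(k-a1) + (-1)^k * Gamma(k-a2)*m2/l2^(k-a2)."""
    return (gamma(k - a1) * m1 / l1 ** (k - a1)
            + (-1) ** k * gamma(k - a2) * m2 / l2 ** (k - a2))

def normal_bound(params, lam):
    """Right-hand side of (70): |C1| + |C2 - lam^2|/2 + lam^2*|C3|/(6*C2)."""
    c1, c2, c3 = (tsd_cumulant(k, *params) for k in (1, 2, 3))
    return abs(c1) + abs(c2 - lam ** 2) / 2 + lam ** 2 * abs(c3) / (6 * c2)

m, a, lam = 1.0, 0.5, 1.0
# Symmetric TSD: C2 = 2*Gamma(2-a)*m/b^(2-a); solve C2 = lam^2 for b.
b = (2 * gamma(2 - a) * m / lam ** 2) ** (1 / (2 - a))
print(normal_bound((m, a, b, m, a, b), lam))  # essentially 0
```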
When ${C_{j}}({X_{n}})\to 0$, for $j=1,3$, and ${C_{2}}({X_{n}})\to {\lambda ^{2}}$, from (70), we have ${X_{n}}\stackrel{L}{\to }{X_{\lambda }}$, as $n\to \infty $.
Example 4.8 (Variance-gamma approximation to a TSD).
Let ${X_{n}}\sim \text{TSD}({m_{1,n}},{\alpha _{1,n}},{\lambda _{1,n}},{m_{2,n}},{\alpha _{2,n}},{\lambda _{2,n}})$ and ${X_{v}}\sim \text{VGD}(m,{\lambda _{1}},{\lambda _{2}})$. Then
\[\begin{aligned}{}{C_{1}}({X_{v}})& =m\bigg(\frac{1}{{\lambda _{1}}}-\frac{1}{{\lambda _{2}}}\bigg),\hspace{1em}{C_{2}}({X_{v}})=m\bigg(\frac{1}{{\lambda _{1}^{2}}}+\frac{1}{{\lambda _{2}^{2}}}\bigg),\hspace{1em}\text{and}\\ {} {C_{3}}({X_{v}})& =2m\bigg(\frac{1}{{\lambda _{1}^{3}}}-\frac{1}{{\lambda _{2}^{3}}}\bigg).\end{aligned}\]
Now applying Theorem 4.5 to $X={X_{v}}$, we get
\[\begin{aligned}{}{d_{{\mathcal{H}_{3}}}}({X_{n}},{X_{v}})& \le \left|{C_{1}}({X_{n}})-\frac{m({\lambda _{2}}-{\lambda _{1}})}{{\lambda _{1}}{\lambda _{2}}}\right|+\frac{1}{2}\left|{C_{2}}({X_{n}})-\frac{m({\lambda _{1}^{2}}+{\lambda _{2}^{2}})}{{\lambda _{1}^{2}}{\lambda _{2}^{2}}}\right|\\ {} & \hspace{1em}+\frac{1}{6}m\frac{{\lambda _{1}^{2}}+{\lambda _{2}^{2}}}{{\lambda _{1}^{2}}{\lambda _{2}^{2}}}\left|\frac{|{C_{3}}({X_{n}})|}{{C_{2}}({X_{n}})}-\frac{2|{\lambda _{2}^{3}}-{\lambda _{1}^{3}}|}{{\lambda _{1}}{\lambda _{2}}({\lambda _{1}^{2}}+{\lambda _{2}^{2}})}\right|,\end{aligned}\]
which gives the error in the closeness between ${X_{n}}$ and ${X_{v}}$. When ${C_{j}}({X_{n}})\to {C_{j}}({X_{v}})$, for $j=1,2,3$, we have ${X_{n}}\stackrel{L}{\to }{X_{v}}$, as $n\to \infty $.
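This bound is likewise straightforward to evaluate numerically. A sketch (helper names and parameter values are illustrative, not from the source); the cumulants of $\text{VGD}(m,{\lambda _{1}},{\lambda _{2}})$ listed above coincide with those of $\text{TSD}(m,0,{\lambda _{1}},m,0,{\lambda _{2}})$, so feeding these matching TSD parameters drives every term of the bound to zero:

```python
from math import gamma

def tsd_cumulant(k, m1, a1, l1, m2, a2, l2):
    """k-th cumulant (k = 1, 2, 3) of TSD(m1, a1, l1, m2, a2, l2)."""
    return (gamma(k - a1) * m1 / l1 ** (k - a1)
            + (-1) ** k * gamma(k - a2) * m2 / l2 ** (k - a2))

def vgd_bound(params, m, l1, l2):
    """Right-hand side of the d_H3(X_n, X_v) bound above, X_v ~ VGD(m, l1, l2)."""
    c1n, c2n, c3n = (tsd_cumulant(k, *params) for k in (1, 2, 3))
    c1v = m * (1 / l1 - 1 / l2)
    c2v = m * (1 / l1 ** 2 + 1 / l2 ** 2)
    c3v = 2 * m * (1 / l1 ** 3 - 1 / l2 ** 3)
    return (abs(c1n - c1v) + abs(c2n - c2v) / 2
            + c2v / 6 * abs(abs(c3n) / c2n - abs(c3v) / c2v))

# Matching TSD parameters (alpha_1 = alpha_2 = 0) make the bound vanish;
# perturbing a parameter makes it strictly positive.
print(vgd_bound((2.0, 0.0, 3.0, 2.0, 0.0, 5.0), 2.0, 3.0, 5.0))
print(vgd_bound((2.0, 0.2, 3.0, 2.0, 0.0, 5.0), 2.0, 3.0, 5.0))
```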