1 Introduction
The Variance Gamma process is a well-known Lévy process used in mathematical finance, especially for option pricing (see [19, 22]); in physics it is also known as Laplace motion [15]. Another financial model involving this process has been considered quite recently in [2].
Further generalizations of the Variance Gamma process have been developed in [16, 10], while an application of such a process is given in [20].
The classic Variance Gamma process can be obtained by considering a Brownian motion with a random time given by an independent Gamma subordinator [18, Eq. (7)]. The concept of subordination was introduced by Bochner [5] and, as for other subordinated processes, we can associate with the Variance Gamma process the Phillips operator [21] as the generator of the governing equation. Recently, some new equations for the Variance Gamma process and the Gamma subordinator have been provided in [3]; they involve time operators, differently from the classic theory based on space operators.
At the current stage, the nonlocal equation for the Variance Gamma process defined as the difference of two independent Gamma subordinators [18, Eq. (8)] has not been considered, and a deeper analysis of this process and of the equations for the modified Bessel functions is also missing. In order to close this gap, we focus on these equations; their knowledge allows us to understand the process from a new point of view, also for simulation purposes. We then examine the analogous nonlocal equation for the Variance Gamma process with a drift. This is possible because the definition as a difference of two independent Gamma subordinators also holds in the presence of a drift. Finally, we consider the compound Poisson process related to the Gamma subordinator (the construction for a general subordinator is developed in [25, Proposition 3.3]) and its convergence (in distribution) to the Variance Gamma process.
2 Preliminaries
A subordinator is a nondecreasing Lévy process [4, Chapter III] and it is characterized by its Laplace transform [24, Theorem 5.1]. Let $\Phi :(0,\infty )\mapsto (0,\infty )$ be a Bernstein function, which is uniquely defined by the so-called Bernstein representation (Lévy–Khintchine representation)
\[ \Phi (\lambda )={\int _{0}^{\infty }}\left(1-{e^{-\lambda z}}\right)\Pi (dz),\hspace{1em}\lambda \gt 0,\]
where Π on $(0,\infty )$ with ${\textstyle\int _{0}^{\infty }}(1\wedge z)\Pi (dz)\lt \infty $ is the associated Lévy measure. In general, Bernstein functions may also have a drift and a killing rate, but in this paper we assume both to be zero.
We also recall that
(1)
\[ \frac{\Phi (\lambda )}{\lambda }={\int _{0}^{\infty }}{e^{-\lambda z}}\overline{\Pi }(z)dz,\hspace{1em}\lambda \gt 0,\]
where $\overline{\Pi }(z)=\Pi ((z,\infty ))$ is termed the tail of the Lévy measure.
We focus only on the Laplace symbol
(2)
\[ \Phi (\lambda )=a\ln \left(1+\frac{\lambda }{b}\right)=a{\int _{0}^{\infty }}\left(1-{e^{-\lambda y}}\right)\frac{{e^{-by}}}{y}\hspace{0.1667em}dy,\hspace{1em}\lambda \gt 0,\hspace{2.5pt}a\gt 0,\hspace{0.2778em}b\gt 0.\]
Thus, in this case, the Lévy measure is ${\Pi _{a,b}}(dy)=a\frac{{e^{-by}}}{y}\hspace{0.1667em}dy$ and the associated Gamma subordinator $H=\{{H_{t}},t\ge 0\}$, starting from zero, is such that
(3)
\[ {\mathbf{E}_{0}}[{e^{-\lambda {H_{t}}}}]={e^{-t\Phi (\lambda )}},\hspace{1em}\lambda \gt 0,\]
where ${\mathbf{P}_{x}}$ denotes the probability measure for a process started from x at time $t=0$ and ${\mathbf{E}_{x}}$ the expectation with respect to ${\mathbf{P}_{x}}$. The interested reader can consult [4, Chapter III] for more details on subordinators. Since ${\Pi _{a,b}}(0,\infty )=\infty $, from [13, Theorem 21.3] we have that H has infinite activity, i.e. strictly increasing sample paths with jumps; indeed, the symbol Φ does not admit any drift. We use the notation
(4)
\[\begin{aligned}{}h(t,x)=& \left\{\begin{array}{l@{\hskip10.0pt}l}\displaystyle \frac{{b^{at}}}{\Gamma (at)}{x^{at-1}}{e^{-bx}},& x\gt 0,\\ {} 0,& x\le 0.\end{array}\right.\end{aligned}\]
In addition, it is well known that, $\forall \hspace{0.1667em}t\gt 0$, the density (4) trivially verifies
(5)
\[ {\int _{0}^{\infty }}{e^{-\lambda x}}h(t,x)\hspace{0.1667em}dx=\frac{{b^{at}}}{{(\lambda +b)^{at}}}={e^{-t\hspace{0.1667em}a\hspace{0.1667em}\ln \left(1+\frac{\lambda }{b}\right)}},\hspace{1em}\lambda \gt 0,\hspace{2.5pt}t\gt 0,\]
which coincides with formula (3). We observe that the continuity of the function $h(t,\cdot )$, as $x\to 0$, depends on the time variable t. Indeed, the Gamma subordinator has time-dependent distributional properties (see [13, Chapter 23]). From this, the Variance Gamma process also inherits continuity issues for its probability density function.
Let $B:=\{{B_{t}},t\ge 0\}$ be the one-dimensional Brownian motion starting from zero, independent of H, and let $g(t,x)={e^{-{x^{2}}/4t}}/\sqrt{4\pi t}$ be its probability density function. The Variance Gamma process $X:=\{{X_{t}},t\ge 0\}$ can be defined as $X={B_{H}}:=B\circ H$, i.e. it is a Brownian motion time-changed with a random clock given by an independent Gamma subordinator, and we know from [13, Theorem 30.1] that X is still a Lévy process. Its probability density function is
(6)
\[ p(t,x)={\int _{0}^{\infty }}g(s,x)\hspace{0.1667em}h(t,s)\hspace{0.1667em}ds,\hspace{1em}t\gt 0,\hspace{0.1667em}x\in \mathbb{R},\]
and the Lévy symbol is $\Phi ({\xi ^{2}})$, as we see from
(7)
\[\begin{aligned}{}{\mathbf{E}_{0}}[{e^{i\xi {X_{t}}}}]=\hat{p}(t,\xi )={\int _{-\infty }^{\infty }}{e^{i\xi x}}p(t,x)dx& ={\int _{-\infty }^{\infty }}{e^{i\xi x}}{\int _{0}^{\infty }}g(s,x)h(t,s)ds\hspace{0.1667em}dx\\ {} & ={\int _{0}^{\infty }}{e^{-s{\xi ^{2}}}}h(t,s)ds\\ {} & ={e^{-t\Phi ({\xi ^{2}})}}={e^{-at\ln \big(1+\frac{{\xi ^{2}}}{b}\big)}}.\end{aligned}\]
For the Variance Gamma process we know an explicit representation of $p(t,x)$. From [11, formula 3.478] we have
(8)
\[ {\int _{0}^{\infty }}{x^{\nu -1}}\exp \{-\beta {x^{q}}-\alpha {x^{-q}}\}dx=\frac{2}{q}{\left(\frac{\alpha }{\beta }\right)^{\frac{\nu }{2q}}}{K_{\frac{\nu }{q}}}\left(2\sqrt{\alpha \beta }\right),\hspace{1em}q,\alpha ,\beta ,\nu \gt 0,\]
where ${K_{\nu }}$ is the modified Bessel function, so we get that
(9)
\[\begin{aligned}{}{\int _{0}^{\infty }}g(s,x)\hspace{0.1667em}h(t,s)\hspace{0.1667em}ds& ={\int _{0}^{\infty }}\frac{{e^{-\frac{{x^{2}}}{4s}}}}{\sqrt{4\pi s}}\frac{{b^{at}}}{\Gamma (at)}{s^{at-1}}{e^{-bs}}\hspace{0.1667em}ds\\ {} & =\frac{{b^{at}}}{\sqrt{4\pi }}\frac{1}{\Gamma (at)}{\int _{0}^{\infty }}\frac{{s^{at-1}}}{\sqrt{s}}\exp \left(-\frac{{x^{2}}}{4}{s^{-1}}-bs\right)ds.\end{aligned}\]
Then we use (8) by choosing $\nu =at-\frac{1}{2}$, $q=1$, $\alpha =\frac{{x^{2}}}{4}$, $\beta =b$ and we obtain
(10)
\[\begin{aligned}{}p(t,x)={\int _{0}^{\infty }}g(s,x)\hspace{0.1667em}h(t,s)\hspace{0.1667em}ds& =\frac{{b^{at}}}{\sqrt{4\pi }}\frac{1}{\Gamma (at)}\hspace{2.5pt}2{\left(\frac{{x^{2}}}{4b}\right)^{\frac{1}{2}(at-\frac{1}{2})}}{K_{at-\frac{1}{2}}}\left(2\sqrt{\frac{{x^{2}}}{4}b}\right)\\ {} & =\frac{{b^{at}}}{\sqrt{\pi }}\frac{1}{\Gamma (at)}{\left(\frac{1}{2\sqrt{b}}\right)^{(at-\frac{1}{2})}}|x{|^{at-\frac{1}{2}}}\hspace{2.5pt}{K_{at-\frac{1}{2}}}(|x|\sqrt{b}).\end{aligned}\]
Since we are dealing with a subordinated stochastic process (see [6, 14]), we can use the Phillips representation (or Bochner subordination) to generate a new operator through subordination. It is well known that the Phillips operator $-\Phi (-\Delta )$ [21] is the generator of X, when it is restricted to $Dom(\Delta )$ ([13, Theorem 32.1] or [12, Theorem 4.3.5]), and it can be written as
(11)
\[ -\Phi (-\Delta )u(x)={\int _{0}^{\infty }}\left({T_{s}}u(x)-u(x)\right){\Pi _{a,b}}(ds),\]
where Φ is defined in (2) and T is the semigroup associated with the Brownian motion on $\mathbb{R}$, whose characteristic symbol is ${\widehat{T}_{t}}={e^{-t{\xi ^{2}}}}$. This leads us to
(12)
\[\begin{aligned}{}{\int _{-\infty }^{\infty }}{e^{i\xi x}}\left(-\Phi (-\Delta )u(x)\right)dx& =\left(a{\int _{0}^{\infty }}({e^{-y{\xi ^{2}}}}-1)\frac{{e^{-by}}}{y}dy\right)\hat{u}(\xi )\\ {} & =\left(-a\ln \left(1+\frac{{\xi ^{2}}}{b}\right)\right)\widehat{u}(\xi ),\end{aligned}\]
where we denote by $\widehat{u}$ the Fourier transform of u, for u in the Schwartz space $\mathcal{S}(\mathbb{R})$. If we combine (7) and (12), we obtain the well-known result
(13)
\[ \frac{\partial }{\partial t}p(t,x)=-\Phi (-\Delta )p(t,x),\hspace{1em}t\gt 0,\hspace{0.1667em}x\in \mathbb{R},\]
with $p(0,x)=\delta (x)$, in the sense that p is the fundamental solution of (13) at the level of Fourier transforms.
If we change perspective and consider an operator in time, an interesting equation for the Variance Gamma process, presented in [3, Remark 3.3], is
\[ \frac{{\partial ^{2}}}{\partial {x^{2}}}p(t,x)=b\left(p(t,x)-p\left(t-\frac{1}{a},x\right)\right),\hspace{1em}x\in \mathbb{R},\]
with $p(0,x)=\delta (x)$ and $at\gt 1$.
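As a quick numerical sanity check of formulas (3)–(5), one can verify that the Laplace transform of the Gamma density matches $e^{-t\Phi(\lambda)}$. A minimal Python sketch (the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def h(t, x, a=1.5, b=2.0):
    """Density (4) of the Gamma subordinator H_t, for x > 0."""
    return b**(a*t) / Gamma(a*t) * x**(a*t - 1) * np.exp(-b*x)

def laplace_h(lam, t, a=1.5, b=2.0):
    """Numerical Laplace transform of h(t, .)."""
    val, _ = quad(lambda x: np.exp(-lam*x) * h(t, x, a, b), 0, np.inf)
    return val

t, lam, a, b = 0.8, 1.3, 1.5, 2.0
numeric = laplace_h(lam, t, a, b)
closed_form = (b / (lam + b))**(a*t)               # right-hand side of (5)
symbol_form = np.exp(-t * a * np.log(1 + lam/b))   # e^{-t Phi(lambda)}, formula (3)
assert abs(numeric - closed_form) < 1e-7
assert abs(closed_form - symbol_form) < 1e-12
```

The first assertion checks the integral in (5) numerically; the second is the algebraic identity between (5) and (3).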
3 Main results
3.1 Nonlocal equations
In the preceding section we have seen how, in (13), starting from the definition of X as a time-changed process, we can associate a nonlocal operator to the Variance Gamma process. However, this is not the only possible definition. From (7), we see that
\[ {e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}={\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at}}={\left(1-i\frac{\xi }{\sqrt{b}}\right)^{-at}}{\left(1+i\frac{\xi }{\sqrt{b}}\right)^{-at}}.\]
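This factorization of the characteristic function can be checked directly with complex arithmetic; a short sketch (the values of a, b, t are arbitrary):

```python
import cmath

a, b, t = 1.5, 2.0, 0.7
for xi in (-3.0, -0.5, 0.1, 2.0):
    # Left-hand side: the Variance Gamma characteristic function
    lhs = cmath.exp(-a*t*cmath.log(1 + xi**2/b))
    # Right-hand side: product of the two Gamma-type factors
    rhs = (1 - 1j*xi/b**0.5)**(-a*t) * (1 + 1j*xi/b**0.5)**(-a*t)
    assert abs(lhs - rhs) < 1e-12
```

The two factors are complex conjugates, so the product is real and positive, as the left-hand side requires.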
Then, as in the Wiener–Hopf factorization [17, Remark 3.3], the Bernstein function (2) associated with the Gamma subordinator is used. Therefore, we have another definition for our process,
(14)
\[ X=G-L,\]
where G and L are two independent Gamma subordinators, both with parameters a and $\sqrt{b}$. From a financial point of view, this representation has the meaning of the difference between independent ‘gains’ and ‘losses’ (see [18]). Formula (14) means that X has trajectories of locally bounded variation, as it is the difference of two increasing functions, and this leads us to investigate the connection between the Variance Gamma process and the nonlocal operators of the Gamma subordinator.
Let us introduce the following generalized Weyl derivatives for $x\in \mathbb{R}$:
(15)
\[ {\mathcal{D}_{a,b}^{+}}u(x):={\int _{0}^{\infty }}\frac{\partial }{\partial x}u(x-s){\overline{\Pi }_{a,b}}(s)ds,\]
(16)
\[ {\mathcal{D}_{a,b}^{-}}u(x):=-{\int _{0}^{\infty }}\frac{\partial }{\partial x}u(x+s){\overline{\Pi }_{a,b}}(s)ds,\]
respectively defined for functions u such that the integrals above are well defined. Here ${\overline{\Pi }_{a,b}}(s)={\int _{s}^{\infty }}a\frac{{e^{-by}}}{y}\hspace{0.1667em}dy=a{E_{1}}(bs)$, where
(17)
\[ {E_{1}}(x)={\int _{x}^{\infty }}\frac{{e^{-w}}}{w}\hspace{0.1667em}dw,\hspace{1em}x\gt 0,\]
is the exponential integral function.
Remark 3.
From a simple change of variables, by assuming that u is at least absolutely continuous, our definitions (15) and (16) concur with [25, Definition 2.8] and we achieve
\[ {\mathcal{D}_{a,b}^{+}}u(x)={\int _{0}^{\infty }}\left(u(x)-u(x-s)\right){\Pi _{a,b}}(ds).\]
Similarly we obtain
\[ {\mathcal{D}_{a,b}^{-}}u(x)={\int _{0}^{\infty }}\left(u(x)-u(x+s)\right){\Pi _{a,b}}(ds).\]
These results coincide with [25, Lemma 2.9].
We now analyze the connection between these operators and the Variance Gamma process by exploiting (14). The operators ${\mathcal{D}_{a,b}^{+}}$ and ${\mathcal{D}_{a,b}^{-}}$ are important due to the fact that we can easily compute the characteristic function, essential to the study of Lévy processes. Indeed, by recalling (1), for $u\in \mathcal{S}(\mathbb{R})$ we have
(18)
\[\begin{aligned}{}{\int _{-\infty }^{\infty }}{e^{ix\xi }}{\mathcal{D}_{a,b}^{+}}u(x)dx& =(-i\xi ){\int _{-\infty }^{\infty }}{e^{ix\xi }}{\int _{0}^{\infty }}u(x-s){\overline{\Pi }_{a,b}}(s)ds\hspace{0.1667em}dx\\ {} & =(-i\xi )\hat{u}(\xi ){\int _{0}^{\infty }}{e^{is\xi }}{\overline{\Pi }_{a,b}}(s)ds\\ {} & =a\ln \left(1-\frac{i\xi }{b}\right)\hat{u}(\xi ).\end{aligned}\]
(19)
\[\begin{aligned}{}{\int _{-\infty }^{\infty }}{e^{ix\xi }}{\mathcal{D}_{a,b}^{-}}u(x)dx& =a\ln \left(1+\frac{i\xi }{b}\right)\hat{u}(\xi ).\end{aligned}\]
In particular, we will show that the Fourier transform of the probability density function of X solves the following equation.
Theorem 1.
Let $p(t,x)$ be the probability density function of the Variance Gamma process X. Then we have
\[ \frac{\partial }{\partial t}p(t,x)=-\left({\mathcal{D}_{a,\sqrt{b}}^{+}}p(t,x)+{\mathcal{D}_{a,\sqrt{b}}^{-}}p(t,x)\right),\hspace{1em}t\gt 0,\hspace{0.1667em}x\in \mathbb{R},\]
with $p(0,x)=\delta (x)$.
Proof.
The initial value can be easily checked, since the Fourier transform of the δ distribution is 1. The characteristic function of the left-hand side, with respect to x, is given by
\[ {\int _{-\infty }^{\infty }}{e^{ix\xi }}\frac{\partial }{\partial t}p(t,x)dx=\frac{\partial }{\partial t}{e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}=-a\ln \left(1+\frac{{\xi ^{2}}}{b}\right){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}.\]
For the right-hand side, by using (18) and (19), we have
\[\begin{aligned}{}& -{\int _{-\infty }^{\infty }}{e^{ix\xi }}\left({\mathcal{D}_{a,\sqrt{b}}^{+}}p(t,x)+{\mathcal{D}_{a,\sqrt{b}}^{-}}p(t,x)\right)dx\\ {} & =-\left(a\ln \left(1-\frac{i\xi }{\sqrt{b}}\right){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}+a\ln \left(1+\frac{i\xi }{\sqrt{b}}\right){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}\right)\\ {} & =-a\ln \left(1+\frac{{\xi ^{2}}}{b}\right){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}},\end{aligned}\]
so that our claim holds true.  □

Remark 4.
The fact that, in the last theorem, there is a sum of nonlocal derivatives is not new in the theory of probability and nonlocal operators. For example, for the Brownian motion time-changed with an independent stable subordinator, we know that the one-dimensional Riesz derivative can be written as sum of Marchaud derivatives (see [9, page 12]).
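Representation (14) also suggests a direct simulation scheme: sample two independent Gamma subordinators at time t and take their difference. A minimal sketch (parameter values are arbitrary); since ${G_t}$ and ${L_t}$ are Gamma distributed with shape $at$ and rate $\sqrt{b}$, the difference has mean 0 and variance $2at/b$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, t, n = 2.0, 3.0, 1.5, 200_000

# Formula (14): X_t = G_t - L_t, with G and L independent Gamma
# subordinators, both with parameters a and sqrt(b),
# i.e. G_t ~ Gamma(shape = a*t, rate = sqrt(b)).
G = rng.gamma(shape=a*t, scale=1/np.sqrt(b), size=n)
L = rng.gamma(shape=a*t, scale=1/np.sqrt(b), size=n)
X = G - L

assert abs(X.mean()) < 0.05              # E[X_t] = 0 by symmetry
assert abs(X.var() - 2*a*t/b) < 0.05     # Var[X_t] = at/b + at/b = 2at/b
```

NumPy's Gamma sampler is parametrized by scale, so the rate $\sqrt b$ enters as $1/\sqrt b$.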
In Theorem 1 we took advantage of the possible definition of the process X as the difference of two independent Gamma subordinators. We observe that this information on the process leads to the next result for the operators: for every $u\in \mathcal{S}(\mathbb{R})$,
\[ -\Phi (-\Delta )u(x)=-\left({\mathcal{D}_{a,\sqrt{b}}^{+}}u(x)+{\mathcal{D}_{a,\sqrt{b}}^{-}}u(x)\right),\hspace{1em}x\in \mathbb{R}.\]
Proof.
We easily show this result by using the characteristic functions. On the left-hand side we have, from (12),
\[ {\int _{-\infty }^{\infty }}{e^{i\xi x}}\left(-\Phi (-\Delta )u(x)\right)dx=-\left(a\ln \left(1+\frac{{\xi ^{2}}}{b}\right)\right)\hat{u}(\xi ).\]
On the right-hand side, from (18) and (19), we have
\[\begin{aligned}{}& -{\int _{-\infty }^{\infty }}{e^{ix\xi }}\left({\mathcal{D}_{a,\sqrt{b}}^{+}}u(x)+{\mathcal{D}_{a,\sqrt{b}}^{-}}u(x)\right)dx=\\ {} & =-\left(a\ln \left(1-\frac{i\xi }{\sqrt{b}}\right)\hat{u}(\xi )+a\ln \left(1+\frac{i\xi }{\sqrt{b}}\right)\hat{u}(\xi )\right)=-a\ln \left(1+\frac{{\xi ^{2}}}{b}\right)\hat{u}(\xi ),\end{aligned}\]
which concludes the proof. □
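Formula (1), which underlies (18) and (19), can also be probed numerically: the tail of the Gamma Lévy measure is $a{E_{1}}(bs)$, with ${E_1}$ the exponential integral, available in SciPy as `exp1`. A sketch with arbitrary parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

a, b = 1.5, 2.0

def tail(s):
    # Tail of the Gamma Levy measure: int_s^inf a e^{-by}/y dy = a E_1(b s)
    return a * exp1(b * s)

def fourier_tail(xi):
    # int_0^inf e^{i s xi} tail(s) ds, real and imaginary parts separately
    re, _ = quad(lambda s: np.cos(s*xi) * tail(s), 0, np.inf, limit=200)
    im, _ = quad(lambda s: np.sin(s*xi) * tail(s), 0, np.inf, limit=200)
    return re + 1j*im

for xi in (0.5, 1.0, 3.0):
    # Formula (1) continued to lambda = -i xi: Phi(-i xi)/(-i xi)
    rhs = a * np.log(1 - 1j*xi/b) / (-1j*xi)
    assert abs(fourier_tail(xi) - rhs) < 1e-4
```

The logarithmic singularity of `exp1` at the origin is integrable, so adaptive quadrature handles it without special treatment.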
Adding the drift. We now focus on the Variance Gamma process with drift and we prove that the nonlocal equation is still true. Let ${B^{\theta }}:=\{{B_{t}^{\theta }},t\ge 0\}$ be the drifted Brownian motion on $\mathbb{R}$ starting from zero, independent of H, and ${g^{\theta }}(t,x)={e^{-{(x-\theta t)^{2}}/4t}}/\sqrt{4\pi t}$, for the drift $\theta \in \mathbb{R}$, be its probability density function. The drifted Variance Gamma process ${X^{\theta }}:=\{{X_{t}^{\theta }},t\ge 0\}$ is ${X^{\theta }}={B_{H}^{\theta }}:={B^{\theta }}\circ H$. Its characteristic function turns out to be
\[ {\mathbf{E}_{0}}[{e^{i\xi {X_{t}^{\theta }}}}]={\left(1-i\xi \frac{\theta }{b}+\frac{{\xi ^{2}}}{b}\right)^{-at}}.\]
We see that, again, the process can be written as the difference of two independent Gamma subordinators, as suggested by [18, Eq. (8)]. Indeed, the following holds
\[ {\left(1-i\xi \frac{\theta }{b}+\frac{{\xi ^{2}}}{b}\right)^{-at}}={\left(1-\frac{i\xi }{\sqrt{\frac{{\theta ^{2}}}{4}+b}-\frac{\theta }{2}}\right)^{-at}}{\left(1+\frac{i\xi }{\sqrt{\frac{{\theta ^{2}}}{4}+b}+\frac{\theta }{2}}\right)^{-at}}.\]
Thus we have ${X^{\theta }}={G^{\theta }}-{L^{\theta }}$, where ${G^{\theta }}$ and ${L^{\theta }}$ are two independent Gamma subordinators, the first one with parameters a and $\sqrt{\frac{{\theta ^{2}}}{4}+b}-\frac{\theta }{2}$ and the second one with parameters a and $\sqrt{\frac{{\theta ^{2}}}{4}+b}+\frac{\theta }{2}$. As for the Variance Gamma process, ${X^{\theta }}$ admits paths of locally bounded variation, and we show the next result.

Corollary 2.
Let ${p^{\theta }}(t,x)$ be the probability density function of the drifted Variance Gamma process ${X^{\theta }}$. Then we have
\[ \frac{\partial }{\partial t}{p^{\theta }}(t,x)=-\left({\mathcal{D}_{a,\sqrt{\frac{{\theta ^{2}}}{4}+b}-\frac{\theta }{2}}^{+}}{p^{\theta }}(t,x)+{\mathcal{D}_{a,\sqrt{\frac{{\theta ^{2}}}{4}+b}+\frac{\theta }{2}}^{-}}{p^{\theta }}(t,x)\right),\hspace{1em}t\gt 0,x\in \mathbb{R},\]
with ${p^{\theta }}(0,x)=\delta (x)$.
Proof.
We observe that ${p^{\theta }}(t,x)$ can be written as (6), where ${g^{\theta }}$ replaces g. By using that
\[ \left(1-i\xi \frac{\theta }{b}+\frac{{\xi ^{2}}}{b}\right)=\left(1-\frac{i\xi }{\sqrt{\frac{{\theta ^{2}}}{4}+b}-\frac{\theta }{2}}\right)\left(1+\frac{i\xi }{\sqrt{\frac{{\theta ^{2}}}{4}+b}+\frac{\theta }{2}}\right),\]
the proof is analogous to that of Theorem 1 and the Fourier transform of ${p^{\theta }}$ solves the equation.  □

Remark 6.
Trivially, if $\theta =0$, the last differential equation coincides with that of Theorem 1 as expected.
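The drifted representation ${X^{\theta }}={G^{\theta }}-{L^{\theta }}$ can be checked by simulation as well. Writing ${c_{\mp }}=\sqrt{{\theta ^{2}}/4+b}\mp \theta /2$ for the two rates, one has ${c_{+}}{c_{-}}=b$ and ${c_{+}}-{c_{-}}=\theta$, so ${\mathbf{E}_{0}}[{X_{t}^{\theta }}]=at(1/{c_{-}}-1/{c_{+}})=at\theta /b$. A sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, theta, t, n = 2.0, 3.0, 0.8, 1.0, 400_000

r = np.sqrt(theta**2/4 + b)
c_minus, c_plus = r - theta/2, r + theta/2   # rates of G^theta and L^theta

G = rng.gamma(shape=a*t, scale=1/c_minus, size=n)
L = rng.gamma(shape=a*t, scale=1/c_plus, size=n)
X = G - L

assert abs(c_plus*c_minus - b) < 1e-12        # c+ c- = b
assert abs(X.mean() - a*t*theta/b) < 0.02     # E[X^theta_t] = a t theta / b
```

Setting theta = 0 recovers the driftless scheme above, consistently with the remark.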
3.2 Variance Gamma and special functions
In this section we examine how special functions bring us to a new equation for the Variance Gamma process. From (10), we have seen that the modified Bessel function ${K_{\nu }}(x)$ appears in the density and this special function solves (see [1, formula 9.6.1])
\[ {x^{2}}\frac{{\partial ^{2}}}{\partial {x^{2}}}u(x)+x\frac{\partial }{\partial x}u(x)-({x^{2}}+{\nu ^{2}})u(x)=0,\]
where, in our case, ν is a function of time. Another interesting fact about ${K_{\nu }}(x)$ is that it can be written in terms of the Kummer (confluent hypergeometric) function, usually denoted by U, so that it is connected to the Kummer equation. This motivates the study of differential space equations for the Variance Gamma process. In particular, we show that the probability density function $p(t,x)$ of the Variance Gamma process satisfies
(20)
\[ x\frac{{\partial ^{2}}}{\partial {x^{2}}}p(t,x)-(2at-2)\frac{\partial }{\partial x}p(t,x)-bxp(t,x)=0,\hspace{1em}t\gt 0,\hspace{0.1667em}x\in \mathbb{R},\]
with $p(0,x)=\delta (x)$.
Proof.
The initial value can be easily checked, since the Fourier transform of the δ distribution is 1. The characteristic function, with respect to x, of the left-hand side of (20) is
\[\begin{aligned}{}& {\int _{-\infty }^{\infty }}{e^{ix\xi }}\left(x\frac{{\partial ^{2}}}{\partial {x^{2}}}p(t,x)-(2at-2)\frac{\partial }{\partial x}p(t,x)-bxp(t,x)\right)\hspace{0.1667em}dx=\\ {} & =-i\frac{\partial }{\partial \xi }{\int _{-\infty }^{\infty }}{e^{ix\xi }}\frac{{\partial ^{2}}}{\partial {x^{2}}}p(t,x)\hspace{0.1667em}dx-(2at-2)(-i\xi ){\int _{-\infty }^{\infty }}{e^{ix\xi }}p(t,x)\hspace{0.1667em}dx\\ {} & \hspace{1em}-b(-i)\frac{\partial }{\partial \xi }{\int _{-\infty }^{\infty }}{e^{ix\xi }}p(t,x)\hspace{0.1667em}dx\\ {} & =-i\frac{\partial }{\partial \xi }(-{\xi ^{2}}){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}+(2at-2)(i\xi ){e^{-at\ln \left(1+\frac{{\xi ^{2}}}{b}\right)}}\\ {} & \hspace{1em}-ib2at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & =2i\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at}}-2i{\xi ^{2}}at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}+(2at-2)(i\xi ){\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at}}\\ {} & \hspace{1em}-2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & =-2i{\xi ^{2}}at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}+2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at}}-2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & =-2i{\xi ^{2}}at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}+2iat\xi \left(1+\frac{{\xi ^{2}}}{b}\right){\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & \hspace{1em}-2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & =-2i{\xi ^{2}}at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}+2i{\xi ^{2}}at\frac{\xi }{b}{\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\hspace{-0.1667em}+\hspace{-0.1667em}2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & \hspace{1em}-2iat\xi {\left(1+\frac{{\xi ^{2}}}{b}\right)^{-at-1}}\\ {} & =0,\end{aligned}\]
so that the Fourier transform of p solves (20), as required.  □

3.3 Compound Poisson process convergence
The convergence of compound Poisson processes to subordinators has been extensively studied. In [8, Theorem 1], M. D’Ovidio shows the connection with a difference of α-stable subordinators, and in [25, Proposition 3.3] the construction for a general subordinator is given. In this section, we exploit these results to obtain the convergence of a compound Poisson process to the Variance Gamma process.
We consider the i.i.d. random variables ${Y_{j}}$, ${Y_{j}}\sim Y$ (distributed as Y), with probability density function
\[ {\nu _{Y}}(y)=\frac{{e^{-\sqrt{b}y}}}{y}\frac{1}{{E_{1}}(\sqrt{b}\gamma )}{\mathbf{1}_{y\ge \gamma }},\hspace{1em}\gamma \gt 0,\]
where ${E_{1}}$ is defined in (17). Let ${\epsilon _{j}}\sim \epsilon $ be i.i.d. (centered) Rademacher random variables, with distribution
\[ \mathbf{P}(\epsilon =1)=\mathbf{P}(\epsilon =-1)=\frac{1}{2}.\]
Now, we define ${Y^{\ast }}=\epsilon Y$, with probability density function
(21)
\[ {\nu _{{Y^{\ast }}}}(y)=\frac{1}{2}\frac{{e^{-\sqrt{b}|y|}}}{|y|}\frac{1}{{E_{1}}(\sqrt{b}\gamma )}{\mathbf{1}_{|y|\ge \gamma }},\hspace{1em}\gamma \gt 0.\]
We are ready to provide the following convergence result.

Proposition 1.
Let $N(t)$, $t\ge 0$, be a homogeneous Poisson process with parameter 1, independent of the i.i.d. random variables ${Y_{j}^{\ast }}\sim {Y^{\ast }}$, with law (21). We have that, for every fixed $t\gt 0$,
\[ {\sum \limits_{j=0}^{N(ta{E_{1}}(\sqrt{b}\gamma ))}}{Y_{j}^{\ast }}\stackrel{d}{\longrightarrow }{X_{t/2}},\hspace{1em}\gamma \to 0.\]
Proof.
Since we are dealing with a compound Poisson process, we know that
\[\begin{aligned}{}& {\mathbf{E}_{0}}\left[\exp \left(i\xi {\sum \limits_{j=0}^{N(ta{E_{1}}(\sqrt{b}\gamma ))}}{Y_{j}^{\ast }}\right)\right]\\ {} & =\exp \left[ta{E_{1}}(\sqrt{b}\gamma )({\mathbf{E}_{0}}\left[\exp (i\xi {Y^{\ast }})\right]-1)\right]\\ {} & =\exp \left[ta{E_{1}}(\sqrt{b}\gamma )({\mathbf{E}_{0}}\left[\exp (i\xi \epsilon Y)\right]-1)\right]\\ {} & =\exp \left[ta{E_{1}}(\sqrt{b}\gamma )\left(\frac{1}{2}{\mathbf{E}_{0}}\left[\exp (i\xi Y)\right]+\frac{1}{2}{\mathbf{E}_{0}}\left[\exp (-i\xi Y)\right]-\left(\frac{1}{2}+\frac{1}{2}\right)\right)\right]\\ {} & =\exp \left[ta{E_{1}}(\sqrt{b}\gamma )\left(\frac{1}{2}{\mathbf{E}_{0}}[\exp (i\xi Y)-1]+\frac{1}{2}{\mathbf{E}_{0}}[\exp (-i\xi Y)-1]\right)\right]\\ {} & =\exp \left[t\left(\frac{a}{2}{\int _{\gamma }^{\infty }}({e^{i\xi y}}-1)\frac{{e^{-\sqrt{b}y}}}{y}dy+\frac{a}{2}{\int _{\gamma }^{\infty }}({e^{-i\xi y}}-1)\frac{{e^{-\sqrt{b}y}}}{y}dy\right)\right].\end{aligned}\]
If $\gamma \to 0$, we get that
\[\begin{array}{r}\displaystyle \exp \left[t\left(\frac{a}{2}{\int _{\gamma }^{\infty }}({e^{i\xi y}}-1)\frac{{e^{-\sqrt{b}y}}}{y}dy+\frac{a}{2}{\int _{\gamma }^{\infty }}({e^{-i\xi y}}-1)\frac{{e^{-\sqrt{b}y}}}{y}dy\right)\right]\\ {} \displaystyle \to \exp \left[-\frac{t}{2}\left(a\ln \left(1-\frac{i\xi }{\sqrt{b}}\right)+a\ln \left(1+\frac{i\xi }{\sqrt{b}}\right)\right)\right],\end{array}\]
hence the claim holds by using (14) and the classic Lévy continuity theorem.  □

Remark 7.
This result may be important for simulations: the numerical treatment of a compound Poisson process is simpler than that of the Bessel functions appearing in (10).
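Following the proof above, the characteristic function of the compound Poisson approximation can be compared numerically with its $\gamma \to 0$ limit, ${(1+{\xi ^{2}}/b)^{-at/2}}$, computed in the proof. A sketch with arbitrary parameters:

```python
import numpy as np
from scipy.integrate import quad

a, b, t = 1.5, 2.0, 1.0

def cp_charfn(xi, gamma):
    # Exponent of the compound Poisson characteristic function: the two
    # integrals in the proof combine into one real integral, since the
    # imaginary parts cancel by symmetry of Y* = eps * Y.
    integral, _ = quad(lambda y: (np.cos(xi*y) - 1.0) * np.exp(-np.sqrt(b)*y) / y,
                       gamma, np.inf, limit=200)
    return np.exp(a * t * integral)

def limit_charfn(xi):
    # gamma -> 0 limit from the proof: (1 + xi^2/b)^(-a t / 2)
    return (1.0 + xi**2/b)**(-a*t/2)

for xi in (0.5, 2.0):
    assert abs(cp_charfn(xi, 1e-6) - limit_charfn(xi)) < 1e-3
```

For small γ the two quantities already agree to several digits; the integrand behaves like $-{\xi ^{2}}y/2$ near the origin, so the truncation at γ causes no numerical difficulty.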