1 Introduction
Poisson processes with randomized time have been intensively studied in the recent literature. The most popular models of such processes are represented by the space-fractional and time-fractional Poisson processes, where a random time-change is introduced by a stable subordinator or its inverse, respectively (we refer, for example, to [7, 12, 14, 3, 10, 1, 13], to mention only a few; see also references therein).
In the paper [14] a general class of time-changed Poisson processes ${N}^{f}(t)=N({H}^{f}(t))$, $t>0$, has been introduced and studied, where $N(t)$ is a Poisson process and ${H}^{f}(t)$ is an arbitrary subordinator with Laplace exponent f, independent of $N(t)$. Distributional properties, hitting times and governing equations for such processes were presented in [14, 7]; the case of iterated time change and some further generalizations of the class of processes ${N}^{f}(t)$ were also considered in [7]. Hitting times for the iterated Poisson process were studied in [11, 6], and for population processes with random time in [4].
In the papers [5, 9] time-changed Poisson processes were studied for the case where the role of time is played by compound Poisson-Gamma subordinators and their inverse processes. In the present paper, we continue to investigate the properties of the processes $N({G_{N}}(t))$, $t>0$, where ${G_{N}}(t)=G(N(t))$ is the compound Poisson-Gamma process. Some motivation for considering this class of processes is given in [5] (see Section 3 therein). We derive the expressions for the hitting times and first passage times of the processes $N({G_{N}}(t))$ in Sections 3 and 4. We next study in Section 5 the time-change introduced by the process ${G_{N+a}}(t)=G(N(t)+at)$, where the Gamma process $G(t)$ and the Poisson process with a drift $N(t)+at$ are independent. In Section 6 we consider some kinds of iterated Bessel transforms and their use for time-change in the Poisson process.
2 Preliminaries
In this section we recall some results on time-changed Poisson processes, which will be used in the next sections. In the paper [14] the time-changed Poisson processes ${N}^{f}(t)=N({H}^{f}(t))$, $t>0$, have been studied, where $N(t)$ is the Poisson process with intensity λ and ${H}^{f}(t)$ is the subordinator with Bernštein function $f(u)$ and Lévy measure $\nu (du)$, independent of $N(t)$. Their distributions are characterized as follows:
the distribution of increments in an infinitesimal time interval is given by
(1)
\[ Pr\big\{{N}^{f}[t,t+dt)=k\big\}=\left\{\begin{array}{l}dt\frac{{\lambda }^{k}}{k!}{\textstyle\int _{0}^{\infty }}{e}^{-\lambda s}{s}^{k}\nu (ds)+o(dt),k\ge 1,\hspace{1em}\\{} 1-dt{\textstyle\int _{0}^{\infty }}(1-{e}^{-\lambda s})\nu (ds)+o(dt),k=0,\hspace{1em}\end{array}\right.\]
the probability mass function is given by
(2)
\[ {p_{k}^{f}}(t)=P\big\{{N}^{f}(t)=k\big\}=\frac{{(-1)}^{k}}{k!}\frac{{d}^{k}}{d{u}^{k}}{e}^{-tf(\lambda u)}{\bigg|_{u=1}},\]
and the probability generating function of ${N}^{f}(t)$ is given by
(3)
\[ {G}^{f}(u,t)=\mathsf{E}{u}^{{N}^{f}(t)}={e}^{-tf(\lambda (1-u))},\hspace{1em}|u|\le 1.\]
The time-changed Poisson processes ${N}^{f}(t)$, $t>0$, have independent stationary increments (see, e.g., the general result on subordinated Lévy processes given in Theorem 1.3.25 [2]).
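As a quick illustration of formula (2), the probabilities ${p_{k}^{f}}(t)$ can be obtained by symbolic differentiation. The following is a minimal hedged sketch (not part of the original exposition): $f$ is taken to be the Laplace exponent of the compound Poisson-exponential subordinator ${E_{N}}(t)$ introduced below, and all parameter values are illustrative only.

```python
# Hedged sketch: p_k^f(t) via formula (2), i.e. signed derivatives of
# exp(-t f(lambda_1 u)) at u = 1.  Here f(v) = lam*v/(beta+v) is the Laplace
# exponent of the compound Poisson-exponential subordinator E_N; lam, beta,
# lam1, t are illustrative values, not taken from the paper.
import sympy as sp

u = sp.symbols('u', positive=True)
lam, beta, lam1, t = 1, 2, 1, 1                 # illustrative parameters
f = lambda v: lam * v / (beta + v)              # Bernstein function of E_N
phi = sp.exp(-t * f(lam1 * u))                  # e^{-t f(lambda_1 u)}

def p(k):
    """p_k^f(t) = (-1)^k/k! * d^k/du^k e^{-t f(lambda_1 u)} evaluated at u = 1."""
    return float(((-1) ** k / sp.factorial(k) * sp.diff(phi, u, k)).subs(u, 1))

probs = [p(k) for k in range(11)]
print(probs[:4], sum(probs))   # the partial sum over k should be close to 1
```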
It is also shown in [14] that the probabilities of the processes ${N}^{f}(t)$ satisfy the difference-differential equations
(4)
\[ \frac{d}{dt}{p_{k}^{f}}(t)=-f(\lambda ){p_{k}^{f}}(t)+{\sum \limits_{m=1}^{k}}\frac{{\lambda }^{m}}{m!}{p_{k-m}^{f}}(t){\int _{0}^{\infty }}{e}^{-s\lambda }{s}^{m}\nu (ds),\hspace{1em}k\ge 0,\hspace{0.1667em}t>0,\]
with the usual initial conditions: ${p_{0}^{f}}(0)=1$, and ${p_{k}^{f}}(0)=0$ for $k\ge 1$. The equation (4) can also be written in the following form (see [14], Remark 2.3):
(5)
\[ \frac{d}{dt}{p_{k}^{f}}(t)=-f\big(\lambda (I-B)\big){p_{k}^{f}}(t),\hspace{1em}k\ge 0,\hspace{0.1667em}t>0,\]
where B is the shift operator: $B{p_{k}^{f}}(t)={p_{k-1}^{f}}(t)$, and it is supposed that ${p_{-1}^{f}}(t)=0$.

Let ${N_{1}}(t)$ be the Poisson process with intensity ${\lambda _{1}}$, and let ${G_{N}}(t)=G(N(t))$, $t>0$, be the compound Poisson-Gamma subordinator with parameters $\lambda ,\alpha ,\beta $, that is, with the Laplace exponent $f(u)=\lambda {\beta }^{\alpha }({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha })$ and the Lévy measure $\nu (du)=\lambda {\beta }^{\alpha }{(\varGamma (\alpha ))}^{-1}{u}^{\alpha -1}{e}^{-\beta u}du$, $\lambda ,\alpha ,\beta >0$. In the case when $\alpha =1$ we have the compound Poisson-exponential subordinator, which we will denote by ${E_{N}}(t)$.
Consider the time-changed process ${N_{1}}({G_{N}}(t))={N_{1}}(G(N(t)))$, $t>0$, where the compound Poisson-Gamma process ${G_{N}}(t)$ is independent of ${N_{1}}(t)$.
We recall the results on probability distributions of the process ${N_{1}}({G_{N}}(t))$, which were presented in our previous paper [5] and will be used in the next sections.
Theorem 1.
[5] The probability mass function of the process $X(t)={N_{1}}({G_{N}}(t))$, $t>0$, is given by
(6)
\[ {p_{k}}(t)=P\big(X(t)=k\big)=\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\hspace{1em}\textit{for}\hspace{2.5pt}k\ge 1,\]
and
(7)
\[ {p_{0}}(t)=\exp \big\{-tf({\lambda _{1}})\big\}=\exp \bigg\{-\lambda t\bigg(1-\frac{{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)\bigg\}.\]
The probabilities ${p_{k}}(t)$, $k\ge 0$, satisfy the following system of difference-differential equations:
(8)
\[ \frac{d}{dt}{p_{k}}(t)=\bigg(\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}-\lambda \bigg){p_{k}}(t)+\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}{\sum \limits_{m=1}^{k}}\frac{{\lambda _{1}^{m}}}{{({\lambda _{1}}+\beta )}^{m}}\frac{\varGamma (m+\alpha )}{m!\varGamma (\alpha )}{p_{k-m}}(t).\]
Remark 1.
The probability mass function of the process $X(t)={N_{1}}({G_{N}}(t))$ for $k\ge 1$ can be represented with the use of the generalized Wright function in the following form:
(9)
\[ {p_{k}}(t)=P\big(X(t)=k\big)=\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{({\lambda _{1}}+\beta )}^{k}}\hspace{0.1667em}{\hspace{0.1667em}_{1}}{\varPsi _{1}}\bigg((k,\alpha ),(0,\alpha ),\frac{\lambda t{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg),\]
where
(10)
\[ {_{p}}{\varPsi _{q}}\big(({a_{i}},{\alpha _{i}}),({b_{j}},{\beta _{j}}),z\big)={\sum \limits_{k=0}^{\infty }}\frac{{\prod \limits_{i=1}^{p}}\varGamma ({a_{i}}+{\alpha _{i}}k)}{{\prod \limits_{j=1}^{q}}\varGamma ({b_{j}}+{\beta _{j}}k)}\frac{{z}^{k}}{k!}\]
is the generalized Wright function defined for $z\in C$, ${a_{i}},{b_{i}}\in C$, ${\alpha _{i}},{\beta _{i}}\in R$, ${\alpha _{i}},{\beta _{i}}\ne 0$ and $\sum {\alpha _{i}}-\sum {\beta _{i}}>-1$ (see, e.g., [8]).

To represent the distribution function in the next theorem we will use the three-parameter generalized Mittag-Leffler function, which is defined as follows:
(11)
\[ {\mathcal{E}_{\rho ,\delta }^{\gamma }}(z)={\sum \limits_{k=0}^{\infty }}\frac{\varGamma (\gamma +k)}{\varGamma (\gamma )}\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\rho ,\delta ,\gamma \in C,\]
with $\hspace{2.5pt}\text{Re}(\rho )>0,\text{Re}(\delta )>0,\text{Re}(\gamma )>0$ (see, e.g., [8]).

Theorem 2.
[5] Let $X(t)={N_{1}}({E_{N}}(t))$. Then for $k\ge 1$
(12)
\[ {p_{k}^{E}}(t)=P\big(X(t)=k\big)={e}^{-\lambda t}\frac{{\lambda _{1}^{k}}\lambda \beta t}{{({\lambda _{1}}+\beta )}^{k+1}}{\mathcal{E}_{1,2}^{k+1}}\bigg(\frac{\lambda \beta t}{{\lambda _{1}}+\beta }\bigg),\]
and the probabilities ${p_{k}^{E}}(t)$, $k\ge 0$, satisfy the following equation (that is, the equation (8) with $\alpha =1$):
\[ \frac{d}{dt}{p_{k}^{E}}(t)=\bigg(\frac{\lambda \beta }{{\lambda _{1}}+\beta }-\lambda \bigg){p_{k}^{E}}(t)+\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{m=1}^{k}}\frac{{\lambda _{1}^{m}}}{{({\lambda _{1}}+\beta )}^{m}}{p_{k-m}^{E}}(t).\]
Figures 1 and 2 show the behavior of the probabilities (6) and (12), for various choices of t $(t=1,2,3)$.
Fig. 2.
Probabilities (12), for values of $\beta =0.8,\lambda =4,{\lambda _{1}}=1$
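The series (6)-(7) are easy to evaluate numerically. The following minimal hedged sketch (illustrative parameter values, not the ones used in the figures) compares the truncated series with a direct Monte Carlo simulation of $X(t)={N_{1}}(G(N(t)))$ built from its very definition.

```python
# Hedged numerical sketch (illustrative parameters): truncated series (6)-(7)
# versus a Monte Carlo simulation of X(t) = N_1(G_N(t)):
# N(t) ~ Poisson(lam*t), G_N(t) | N(t)=n ~ Gamma(alpha*n, rate beta),
# X(t) | G_N(t)=g ~ Poisson(lam1*g).
import numpy as np
from math import lgamma, exp, log, factorial

rng = np.random.default_rng(0)
lam, alpha, beta, lam1, t = 4.0, 2.0, 0.8, 1.0, 1.0

def p_series(k, n_max=200):
    """Formula (7) for k = 0 and the truncated series (6) for k >= 1."""
    if k == 0:
        return exp(-lam * t * (1.0 - (beta / (lam1 + beta)) ** alpha))
    s = sum(exp(n * log(lam * t * beta ** alpha) - alpha * n * log(lam1 + beta)
                + lgamma(alpha * n + k) - lgamma(alpha * n) - lgamma(n + 1))
            for n in range(1, n_max))
    return exp(-lam * t) / factorial(k) * (lam1 / (lam1 + beta)) ** k * s

n_sim = 200_000
n_jumps = rng.poisson(lam * t, n_sim)
g = np.zeros(n_sim)
pos = n_jumps > 0
g[pos] = rng.gamma(shape=alpha * n_jumps[pos], scale=1.0 / beta)
x = rng.poisson(lam1 * g)
for k in range(4):
    print(k, p_series(k), np.mean(x == k))
```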
3 Hitting times of the subordinated Poisson process ${N_{1}}({G_{N}}(s))$
In this section we study the hitting times for the process ${N_{1}}({G_{N}}(s))$ defined as
\[ {T_{k}}=\inf \big\{s>0:{N_{1}}\big({G_{N}}(s)\big)=k\big\},\hspace{1em}k\ge 1.\]
In the next theorem we obtain the analytic expression for $P\{{T_{k}}<\infty \}$.
Theorem 3.
For the random times ${T_{k}}$ we have that
(15)
\[ P\{{T_{k}}<\infty \}=\frac{{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}\bigg(1-{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha }\bigg){\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha n}\frac{\varGamma (\alpha n+k)}{\varGamma (\alpha n)}.\]Proof.
Following the first lines of the proof of Theorem 2.3. from [7], using the independence of increments of the process ${N_{1}}({G_{N}}(s))$, we write the lines (16) and (17) below, and then, having in mind (1), we come to the expression (18):
(16)
\[ P\{{T_{k}}\in ds\}=P\Bigg\{{\bigcup \limits_{j=1}^{k}}\big\{{N_{1}}\big({G_{N}}(s)\big)=k-j,{N_{1}}\big({G_{N}}[s,s+ds\big))=j\big\}\Bigg\}\]
(17)
\[ ={\sum \limits_{j=1}^{k}}P\big\{{N_{1}}\big({G_{N}}(s)\big)=k-j\big\}P\big\{{N_{1}}\big({G_{N}}[s,s+ds\big))=j\big\}\]
(18)
\[ ={\sum \limits_{j=1}^{k}}{p_{k-j}}(s)\hspace{0.1667em}ds\frac{{\lambda _{1}^{j}}}{j!}{\int _{0}^{\infty }}{e}^{-{\lambda _{1}}u}{u}^{j}\nu (du),\]
where $\nu (du)$ is the Lévy measure of ${G_{N}}(t)$. Now we note that the expression (18) coincides with the second term in the r.h.s. of the equation (4), and, therefore, we can write:
(19)
\[\begin{aligned}P\{{T_{k}}\in ds\}=& \bigg\{\frac{d}{ds}{p_{k}}(s)+f({\lambda _{1}}){p_{k}}(s)\bigg\}ds\\{} =& \Bigg\{\frac{1}{k!}\frac{{\lambda _{1}^{k}}}{{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\frac{{(\lambda {\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\frac{d}{ds}{e}^{-\lambda s}{s}^{n}\\{} & +\bigg(\lambda -\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg){p_{k}}(s)\Bigg\}ds\\{} =& \frac{{e}^{-\lambda s}{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\frac{{(\lambda {\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\bigg(-\lambda {s}^{n}+n{s}^{n-1}\\{} & +{s}^{n}\bigg(\lambda -\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)\bigg)ds\\{} =& \frac{{e}^{-\lambda s}{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\frac{{(\lambda {\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\bigg(n{s}^{n-1}-{s}^{n}\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)ds.\end{aligned}\]
Finally, from the expression (19) we obtain:
\[\begin{aligned}P\{{T_{k}}<\infty \}=& \frac{{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\bigg\{\frac{{(\lambda {\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\\{} & \times {\int _{0}^{\infty }}{e}^{-\lambda s}\bigg(n{s}^{n-1}-{s}^{n}\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)ds\bigg\}\\{} =& \frac{{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}{\sum \limits_{n=1}^{\infty }}\bigg\{\frac{{(\lambda {\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{({\lambda _{1}}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\\{} & \times \bigg(n\frac{\varGamma (n)}{{\lambda }^{n}}-\frac{\lambda {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\frac{\varGamma (n+1)}{{\lambda }^{n+1}}\bigg)\bigg\}\\{} =& \frac{{\lambda _{1}^{k}}}{k!{({\lambda _{1}}+\beta )}^{k}}\bigg(1-{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha }\bigg){\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha n}\frac{\varGamma (\alpha n+k)}{\varGamma (\alpha n)}.\hspace{1em}\end{aligned}\]
□Remark 2.
We note that the expression for the hitting times (15) does not depend on the parameter λ, the intensity parameter of the inner Poisson process involved in the time-changed process ${N_{1}}({G_{N}}(t))={N_{1}}(G(N(t)))$.
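As a cross-check of (15) (and of the observation above), note that ${N_{1}}({G_{N}}(\cdot ))$ is constant between the jumps of the inner Poisson process and increases at each such jump by a $\mathrm{Poisson}({\lambda _{1}}S)$ amount with $S\sim \mathrm{Gamma}(\alpha ,\beta )$; hence $\{{T_{k}}<\infty \}$ is the event that the corresponding random walk ever visits the level k. A minimal hedged Monte Carlo sketch with illustrative parameters follows; λ indeed plays no role.

```python
# Hedged Monte Carlo cross-check of (15) with illustrative parameters.
# The embedded jump chain of N_1(G_N(.)) increases by Poisson(lam1*S) with
# S ~ Gamma(alpha, rate beta) at each jump of the inner Poisson process, so
# P{T_k < oo} is the probability that this random walk ever visits k.
import numpy as np
from math import lgamma, exp, log

rng = np.random.default_rng(1)
alpha, beta, lam1 = 2.0, 0.8, 1.0      # (15) does not depend on lam

def p_series(k, n_max=500):
    """Truncated series (15)."""
    x = beta / (lam1 + beta)
    s = sum(exp(alpha * n * log(x) + lgamma(alpha * n + k) - lgamma(alpha * n))
            for n in range(1, n_max))
    return exp(k * log(lam1 / (lam1 + beta)) - lgamma(k + 1)) * (1 - x ** alpha) * s

def p_mc(k, n_sim=100_000):
    hits = 0
    for _ in range(n_sim):
        level = 0
        while level < k:
            level += rng.poisson(lam1 * rng.gamma(alpha, 1.0 / beta))
        hits += (level == k)
    return hits / n_sim

for k in (1, 2, 5):
    print(k, p_series(k), p_mc(k))
```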
In the paper [7] the authors show that in the general case for the process ${N}^{f}(t)=N({H}^{f}(t))$ the probabilities $P\{{T_{k}^{f}}\in ds\}$ can be represented as follows (see the formula (2.8) in [7]):
(20)
\[ P\big\{{T_{k}^{f}}\in ds\big\}={\sum \limits_{j=0}^{k-1}}\frac{{(-1)}^{j}}{j!}\frac{{d}^{j}}{d{u}^{j}}{e}^{-sf(\lambda u)}{\bigg|_{u=1}}ds\frac{{\lambda }^{k-j}}{(k-j)!}{\int _{0}^{\infty }}{e}^{-\lambda t}{t}^{k-j}\nu (dt)\]
To be able to use the above formula, it is necessary to calculate the derivatives
(21)
\[ \frac{{d}^{j}}{d{u}^{j}}{e}^{-sf(\lambda u)}{\bigg|_{u=1}},\hspace{1em}j=0,1,\dots ,k-1.\]
It is noted in [7] that evaluation of these derivatives seems possible only for a small subset of Bernštein functions. One example of such functions is $f(u)={u}^{\alpha }$, the Bernštein function which corresponds to the stable subordinator. We show below another example of such functions.

In the case when $\alpha =1$ the process ${G_{N}}(t)$ becomes ${E_{N}}(t)$, the compound Poisson process with exponentially distributed jumps that has the Laplace exponent $f(u)=\lambda \frac{u}{\beta +u},\lambda >0,\beta >0$. For this function it is possible to calculate the derivatives (21), since we easily find:
(22)
\[ \frac{{d}^{j}}{d{u}^{j}}\frac{1}{f({\lambda _{1}}u)}={(-1)}^{j}j!\frac{\beta }{\lambda {\lambda _{1}}}{u}^{-(j+1)},\hspace{1em}j\ge 1,\hspace{2em}\frac{1}{f({\lambda _{1}}u)}=\frac{\beta +{\lambda _{1}}u}{\lambda {\lambda _{1}}u},\hspace{1em}j=0.\]
Therefore, we can use the formula (20) for the process ${N_{1}}({E_{N}}(t))$.

Theorem 4.
For the process ${N_{1}}({E_{N}}(t))$ we have, for all $k\ge 1$,
(23)
\[ P\big\{{T_{k}^{E}}<\infty \big\}=\frac{\beta }{{\lambda _{1}}+\beta }.\]
Proof.
Using the formulas (20) and (22), we obtain:
(24)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{T_{k}^{E}}\in ds\big\}& \displaystyle =& \displaystyle {\sum \limits_{j=0}^{k-1}}\frac{{(-1)}^{j}}{j!}\frac{{d}^{j}}{d{u}^{j}}{e}^{-sf({\lambda _{1}}u)}{\bigg|_{u=1}}ds\frac{{\lambda _{1}^{k-j}}}{(k-j)!}\\{} & & \displaystyle \times {\int _{0}^{\infty }}{e}^{-{\lambda _{1}}t}{t}^{k-j}\lambda \beta {e}^{-\beta t}dt\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{j=0}^{k-1}}\frac{{(-1)}^{j}}{j!}\frac{{\lambda _{1}^{k-j}}}{{({\lambda _{1}}+\beta )}^{k-j}}\frac{{d}^{j}}{d{u}^{j}}{e}^{-sf({\lambda _{1}}u)}{\bigg|_{u=1}}ds.\hspace{1em}\end{array}\]
Therefore,
\[\begin{aligned}P\big\{{T_{k}^{E}}<\infty \big\}& =\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{j=1}^{k-1}}\frac{{(-1)}^{j}}{j!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{k-j}\frac{{d}^{j}}{d{u}^{j}}\frac{1}{f({\lambda _{1}}u)}{\bigg|_{u=1}}\\{} +\frac{\beta {\lambda _{1}^{k-1}}}{{({\lambda _{1}}+\beta )}^{k}}& =\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{j=1}^{k-1}}\frac{{(-1)}^{j}}{j!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{k-j}{(-1)}^{j}j!\frac{\beta }{\lambda {\lambda _{1}}}+\frac{\beta {\lambda _{1}^{k-1}}}{{({\lambda _{1}}+\beta )}^{k}}\\{} & =\frac{{\lambda _{1}^{k-1}}{\beta }^{2}}{{({\lambda _{1}}+\beta )}^{k+1}}{\sum \limits_{j=1}^{k-1}}{\bigg(\frac{{\lambda _{1}}+\beta }{{\lambda _{1}}}\bigg)}^{j}+\frac{\beta {\lambda _{1}^{k-1}}}{{({\lambda _{1}}+\beta )}^{k}}\\{} & =\frac{\beta }{{\lambda _{1}}+\beta }\bigg(1-{\bigg(\frac{{\lambda _{1}}}{\beta +{\lambda _{1}}}\bigg)}^{k-1}\bigg)+\frac{\beta {\lambda _{1}^{k-1}}}{{({\lambda _{1}}+\beta )}^{k}}=\frac{\beta }{{\lambda _{1}}+\beta }.\end{aligned}\]
□Remark 3.
For all Bernštein functions $f(u)$ it holds (see [7]):
\[ P\{{T_{1}}<\infty \}=\frac{{\lambda _{1}}{f^{\prime }}({\lambda _{1}})}{f({\lambda _{1}})}=\frac{{\lambda _{1}}}{f({\lambda _{1}})}{\int _{0}^{\infty }}s{e}^{-{\lambda _{1}}s}\nu (ds).\]
For our case when $f(u)=\lambda {\beta }^{\alpha }({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha })$, we obtain:
\[ P\{{T_{1}}<\infty \}=\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\frac{\alpha {\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }-{\beta }^{\alpha }},\]
and when $f(u)=\lambda \frac{u}{\beta +u},\lambda >0,\beta >0$, we have that for all $k\ge 1$:
\[ P\big\{{T_{k}^{E}}<\infty \big\}=\frac{\beta }{{\lambda _{1}}+\beta },\]
in accordance with (23).
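A quick numerical illustration (a hedged sketch with illustrative parameter values) that the truncated series (15) at $k=1$ reproduces the closed form above:

```python
# Hedged numerical check (illustrative parameters): the series (15) at k = 1
# versus the closed form of Remark 3.
from math import lgamma, exp, log

alpha, beta, lam1 = 2.5, 0.8, 1.0
x = beta / (lam1 + beta)
series = (lam1 / (lam1 + beta)) * (1 - x ** alpha) * sum(
    exp(alpha * n * log(x) + lgamma(alpha * n + 1) - lgamma(alpha * n))
    for n in range(1, 500))
closed = lam1 / (lam1 + beta) * alpha * beta ** alpha / ((lam1 + beta) ** alpha - beta ** alpha)
print(series, closed)      # the two values should agree
```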
Remark 4.
It is easy to calculate the expression $P\{{T_{k}}<\infty \}$ for the case of the process ${N_{1}}({G_{N}}(t))$ with the parameter $\alpha =2$:
(25)
\[ P\{{T_{k}}<\infty \}=\bigg(1-{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{2}\bigg)\frac{\beta }{2}\bigg(\frac{1}{{\lambda _{1}}}-\frac{{\lambda _{1}^{k}}}{{({\lambda _{1}}+2\beta )}^{k+1}}\bigg).\]
Indeed, we notice that the sum in the r.h.s. of the formula (15) can be obtained by summing the even terms of the following sum:
\[ {\sum \limits_{n=0}^{\infty }}{x}^{n}\frac{\varGamma (n+k)}{k!\varGamma (n)}=x{(1-x)}^{-(k+1)},\]
where $|x|<1$: the even part equals $\frac{1}{2}\big(x{(1-x)}^{-(k+1)}-x{(1+x)}^{-(k+1)}\big)$, and taking $x=\beta /({\lambda _{1}}+\beta )$ in (15) leads to (25).

Lemma 1.
The expression for the probabilities $P\{{T_{k}^{E}}\in ds\}$ for the process ${N_{1}}({E_{N}}(t))={N_{{\lambda _{1}}}}({E_{\beta }}({N_{\lambda }}(t)))$ can be written in the following form:
\[ P\big\{{T_{k}^{E}}\in ds\big\}=\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=0}^{\infty }}P\big\{{N_{\lambda }}(s)=n\big\}P\big\{{N_{\beta }}\big({E_{{\lambda _{1}}}}(k)\big)=n\big\}ds,\]
here $P\{{N_{\beta }}({E_{{\lambda _{1}}}}(k))=n\}$ is the distribution of the time-changed Poisson process, where the role of time is played by the exponential process, subscripts denote the parameters of the processes.
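Note that, since ${E_{{\lambda _{1}}}}(k)$ has the Gamma$(k,{\lambda _{1}})$ density, these mixing probabilities have an explicit negative binomial form:
\[ P\big\{{N_{\beta }}\big({E_{{\lambda _{1}}}}(k)\big)=n\big\}={\int _{0}^{\infty }}{e}^{-\beta z}\frac{{(\beta z)}^{n}}{n!}\frac{{\lambda _{1}^{k}}{z}^{k-1}{e}^{-{\lambda _{1}}z}}{\varGamma (k)}dz=\frac{\varGamma (n+k)}{n!\varGamma (k)}\frac{{\beta }^{n}{\lambda _{1}^{k}}}{{({\lambda _{1}}+\beta )}^{n+k}},\]
which is exactly the factor appearing in the last line of the formula (27) below.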
Proof.
Let us return to the formula (18). Having in mind (1) and (6), we come to the following:
(26)
\[\begin{aligned}& P\{{T_{k}}\in ds\}\\{} & \hspace{1em}={\sum \limits_{j=1}^{k-1}}\frac{{e}^{-\lambda s}{\lambda _{1}^{k-j}}}{(k-j)!{({\lambda _{1}}+\beta )}^{k-j}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+k-j)}{n!\varGamma (\alpha n)}\\{} & \hspace{2em}\times \frac{{\lambda _{1}^{j}}}{j!}{\int _{0}^{\infty }}{e}^{-{\lambda _{1}}t}{t}^{j}\lambda {\beta }^{\alpha }{\big(\varGamma (\alpha )\big)}^{-1}{t}^{\alpha -1}{e}^{-\beta t}dtds\\{} & \hspace{2em}+\exp \bigg\{-\lambda s\bigg(1-\frac{{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)\bigg\}\frac{{\lambda _{1}^{k}}}{k!}{\int _{0}^{\infty }}{e}^{-{\lambda _{1}}t}{t}^{k}\lambda {\beta }^{\alpha }{\big(\varGamma (\alpha )\big)}^{-1}{t}^{\alpha -1}{e}^{-\beta t}dtds\\{} & \hspace{1em}=\frac{\lambda {\beta }^{\alpha }{e}^{-\lambda s}}{\varGamma (\alpha )}{\sum \limits_{j=1}^{k-1}}\frac{{\lambda _{1}^{k-j}}}{(k-j)!{({\lambda _{1}}+\beta )}^{k-j}}\frac{{\lambda _{1}^{j}}}{j!}\frac{\varGamma (j+\alpha )}{{({\lambda _{1}}+\beta )}^{\alpha +j}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\\{} & \hspace{2em}\times \frac{\varGamma (\alpha n\hspace{0.1667em}+\hspace{0.1667em}k\hspace{0.1667em}-\hspace{0.1667em}j)}{n!\varGamma (\alpha n)}ds+\exp \bigg\{-\lambda s\bigg(1-\frac{{\beta }^{\alpha }}{{({\lambda _{1}}\hspace{0.1667em}+\hspace{0.1667em}\beta )}^{\alpha }}\bigg)\bigg\}\frac{\lambda {\lambda _{1}^{k}}{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha +k}}\frac{\varGamma (\alpha +k)}{k!\varGamma (\alpha )}ds\\{} & \hspace{1em}=\frac{\lambda {\beta }^{\alpha }{e}^{-\lambda s}}{\varGamma (\alpha )}\frac{{\lambda _{1}^{k}}}{{({\lambda _{1}}+\beta )}^{\alpha +k}}{\sum \limits_{j=1}^{k-1}}\frac{\varGamma (j+\alpha )}{(k-j)!j!}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+k-j)}{n!\varGamma (\alpha n)}ds\\{} & \hspace{2em}+\exp \bigg\{-\lambda s\bigg(1-\frac{{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)\bigg\}\frac{\lambda {\lambda _{1}^{k}}{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha +k}}\frac{\varGamma (\alpha +k)}{k!\varGamma (\alpha )}ds.\end{aligned}\]Now we take $\alpha =1$ in (26) and obtain:
(27)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{T_{k}^{E}}\in ds\big\}& \displaystyle =& \displaystyle \frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{j=1}^{k-1}}\frac{\varGamma (j+1)}{(k-j)!j!}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s\beta }{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{\varGamma (n+k-j)}{n!\varGamma (n)}ds\\{} & & \displaystyle +\exp \bigg\{-\lambda s\hspace{-0.1667em}\bigg(\hspace{-0.1667em}1-\frac{\beta }{{\lambda _{1}}+\beta }\hspace{-0.1667em}\bigg)\hspace{-0.1667em}\bigg\}\frac{\lambda {\lambda _{1}^{k}}\beta }{{({\lambda _{1}}+\beta )}^{k+1}}\frac{\varGamma (k+1)}{k!}ds={I_{1}}+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{1}{n!\varGamma (n)}{\sum \limits_{j=1}^{k-1}}\frac{\varGamma (n+k-j)}{(k-j)!}ds+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{1}{n!}{\sum \limits_{l=1}^{k-1}}\frac{(n+l-1)!}{l!(n-1)!}ds+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{1}{n!}\Bigg[{\sum \limits_{l=0}^{k-1}}{C_{n+l-1}^{l}}-1\Bigg]ds+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{1}{n!}\big[{C_{n+k-1}^{k-1}}-1\big]ds+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=1}^{\infty }}{e}^{-\lambda s}\frac{{(\lambda s)}^{n}}{n!}\frac{{\lambda _{1}^{k}}{\beta }^{n}}{{({\lambda _{1}}+\beta )}^{n+k}}\frac{\varGamma (n+k)}{n!\varGamma (k)}ds\\{} & & \displaystyle -\frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)}^{n}\frac{1}{n!}ds+{I_{2}}\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=1}^{\infty }}{e}^{-\lambda s}\frac{{(\lambda s)}^{n}}{n!}\frac{{\lambda _{1}^{k}}{\beta }^{n}}{{({\lambda _{1}}+\beta )}^{n+k}}\frac{\varGamma (n+k)}{n!\varGamma (k)}ds\\{} & & \displaystyle -\frac{\lambda \beta {\lambda _{1}^{k}}{e}^{-\lambda s}}{{({\lambda _{1}}+\beta )}^{1+k}}\big[{e}^{-\lambda s\frac{\beta }{{\lambda _{1}}+\beta }}-1\big]ds\\{} & & \displaystyle +\exp \bigg\{-\lambda s\bigg(1-\frac{\beta }{{\lambda _{1}}+\beta }\bigg)\bigg\}\frac{\lambda {\lambda _{1}^{k}}\beta }{{({\lambda _{1}}+\beta )}^{k+1}}ds\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=0}^{\infty }}{e}^{-\lambda s}\frac{{(\lambda s)}^{n}}{n!}\frac{{\lambda _{1}^{k}}{\beta }^{n}}{{({\lambda _{1}}+\beta )}^{n+k}}\frac{\varGamma (n+k)}{n!\varGamma (k)}ds\\{} & \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=0}^{\infty }}P\big\{{N_{\lambda }}(s)=n\big\}P\big\{{N_{\beta }}\big({E_{{\lambda _{1}}}}(k)\big)=n\big\}ds.\end{array}\]By integrating the left part of the formula (27) by s we get:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{T_{k}^{E}}<\infty \big\}& \displaystyle =& \displaystyle \frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=0}^{\infty }}{\int _{0}^{\infty }}{e}^{-\lambda s}\frac{{(\lambda s)}^{n}}{n!}dsP\big\{{N_{\beta }}\big({E_{{\lambda _{1}}}}(k)\big)=n\big\}\\{} & \displaystyle =& \displaystyle \frac{\beta }{{\lambda _{1}}+\beta }{\sum \limits_{n=0}^{\infty }}P\big\{{N_{\beta }}\big({E_{{\lambda _{1}}}}(k)\big)=n\big\}=\frac{\beta }{{\lambda _{1}}+\beta },\end{array}\]
which coincides with the formula (23). □

4 First passage time of the subordinated Poisson process ${N_{1}}({G_{N}}(s))$
In this section we study the first passage time for the process ${N_{1}}({G_{N}}(s))$ defined as
\[ {\tilde{T}_{k}}=\inf \big\{s>0:{N_{1}}\big({G_{N}}(s)\big)\ge k\big\},\hspace{1em}k\ge 1.\]
We have:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\{{\tilde{T}_{k}}<s\}& \displaystyle =& \displaystyle P\big\{{N_{1}}\big({G_{N}}(s)\big)\ge k\big\}={\sum \limits_{j=k}^{\infty }}{\int _{0}^{\infty }}{e}^{-{\lambda _{1}}z}\frac{{({\lambda _{1}}z)}^{j}}{j!}P\big\{{G_{N}}(s)\in dz\big\}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{j}\frac{1}{j!}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+j)}{n!\varGamma (\alpha n)},\end{array}\]
and, therefore,
\[\begin{aligned}& P\{{\tilde{T}_{k}}\in ds\}/ds\\{} & \hspace{1em}=\frac{d}{ds}{e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{j}\frac{1}{j!}{\sum \limits_{n=1}^{\infty }}{\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+j)}{n!\varGamma (\alpha n)}\\{} & \hspace{1em}={e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{j}\frac{1}{j!}{\sum \limits_{n=1}^{\infty }}{s}^{n-1}(n-\lambda s){\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+j)}{n!\varGamma (\alpha n)}.\end{aligned}\]
When $\alpha =1$, the process ${G_{N}}(t)$ becomes ${E_{N}}(t)$, that is, the compound Poisson process with exponentially distributed jumps. For this process we denote the first passage times by ${\tilde{T}_{k}^{E}}$.
We obtain:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{\tilde{T}_{k}^{E}}<s\big\}& \displaystyle =& \displaystyle P\big\{{N_{1}}\big({E_{N}}(s)\big)\ge k\big\}={\sum \limits_{j=k}^{\infty }}{\int _{0}^{\infty }}{e}^{-{\lambda _{1}}z}\frac{{({\lambda _{1}}z)}^{j}}{j!}P\big\{{E_{N}}(s)\in dz\big\}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}\lambda \beta s}{{({\lambda _{1}}+\beta )}^{j+1}}{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg).\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{\tilde{T}_{k}^{E}}\in ds\big\}/ds& \displaystyle =& \displaystyle \frac{d}{ds}{e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}\lambda \beta s}{{({\lambda _{1}}+\beta )}^{j+1}}{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg).\end{array}\]
Taking into account that $\frac{d}{ds}(as{\mathcal{E}_{1,2}^{\gamma }}(as))=a{\mathcal{E}_{1,1}^{\gamma }}(as)$, we obtain:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{\tilde{T}_{k}^{E}}\in ds\big\}/ds& \displaystyle =& \displaystyle {\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}}{{({\lambda _{1}}+\beta )}^{j}}\frac{d}{ds}{e}^{-\lambda s}\frac{\lambda \beta s}{{\lambda _{1}}+\beta }{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)\\{} & \displaystyle =& \displaystyle {\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}}{{({\lambda _{1}}+\beta )}^{j}}{e}^{-\lambda s}\bigg[\bigg(-\lambda \frac{\lambda \beta s}{{\lambda _{1}}+\beta }{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)\\{} & & \displaystyle +\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\mathcal{E}_{1,1}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)\bigg)\bigg]\\{} & \displaystyle =& \displaystyle {e}^{-\lambda s}\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}}{{({\lambda _{1}}+\beta )}^{j}}\\{} & & \displaystyle \times \bigg[{\mathcal{E}_{1,1}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)-\lambda s{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)\bigg].\end{array}\]
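The differentiation identity used above follows directly from the series (11): differentiating term by term,
\[ \frac{d}{ds}\big(as{\mathcal{E}_{1,2}^{\gamma }}(as)\big)=\frac{d}{ds}{\sum \limits_{m=0}^{\infty }}\frac{\varGamma (\gamma +m)}{\varGamma (\gamma )}\frac{{(as)}^{m+1}}{m!(m+1)!}=a{\sum \limits_{m=0}^{\infty }}\frac{\varGamma (\gamma +m)}{\varGamma (\gamma )}\frac{{(as)}^{m}}{{(m!)}^{2}}=a{\mathcal{E}_{1,1}^{\gamma }}(as).\]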
We summarize the above reasoning in the following lemma.
Lemma 2.
For the process ${N_{1}}({G_{N}}(s))$ the law of ${\tilde{T}_{k}}$ is given by the following formula:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\{{\tilde{T}_{k}}\in ds\}/ds& \displaystyle =& \displaystyle {e}^{-\lambda s}{\sum \limits_{j=k}^{\infty }}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{j}\frac{1}{j!}{\sum \limits_{n=1}^{\infty }}{s}^{n-1}(n-\lambda s)\\{} & & \displaystyle \times {\bigg(\frac{\lambda s{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha n+j)}{n!\varGamma (\alpha n)}.\end{array}\]
In the case when $\alpha =1$, that is, for the process ${N_{1}}({E_{N}}(s))$, we have:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{\tilde{T}_{k}^{E}}\in ds\big\}/ds& \displaystyle =& \displaystyle {e}^{-\lambda s}\frac{\lambda \beta }{{\lambda _{1}}+\beta }{\sum \limits_{j=k}^{\infty }}\frac{{\lambda _{1}^{j}}}{{({\lambda _{1}}+\beta )}^{j}}\\{} & & \displaystyle \times \bigg[{\mathcal{E}_{1,1}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)-\lambda s{\mathcal{E}_{1,2}^{j+1}}\bigg(\frac{\lambda \beta s}{{\lambda _{1}}+\beta }\bigg)\bigg].\end{array}\]
5 Gamma process subordinated to the Poisson process with a drift
Let $N(t)$ be the Poisson process with the intensity parameter λ. Consider the process with a drift
(28)
\[ N(t)+at,\hspace{1em}t\ge 0,\hspace{0.1667em}a>0.\]
The probability law of the process $N(t)+at$ can be written in the following form (see, e.g., [3], Theorem 1):
(29)
\[ {p_{x}}(t)={e}^{-\lambda t}{\sum \limits_{k=0}^{\infty }}\frac{{(\lambda t)}^{k}}{k!}\delta (x-k-at),\hspace{1em}x\ge at,a>0,t>0.\]
The Laplace transform of the process (28) is:
\[ \mathsf{E}{e}^{-u(N(t)+at)}={e}^{-t(au+\lambda (1-{e}^{-u}))},\]
and, correspondingly, the Laplace exponent is:
\[ {f_{N+a}}(u)=au+\lambda \big(1-{e}^{-u}\big).\]
Let $G(t)$ be the Gamma process with parameters $(\alpha ,\beta )$, that is, with the Laplace exponent $f(u)=\alpha \log (1+\frac{u}{\beta })$. Denote the probability density of $G(t)$ by ${h_{G(t)}}(y)$.
For $a\ge 0$ consider the process
\[ {G_{N+a}}(t)=G\big(at+N(t)\big),\hspace{1em}t\ge 0.\]
Its Laplace transform has the following form:
(33)
\[ \mathsf{E}{e}^{-uG(at+N(t))}={e}^{-t(a\alpha \log (1+\frac{u}{\beta })+\lambda (1-{(1+\frac{u}{\beta })}^{-\alpha }))},\]
therefore, the Laplace exponent and the corresponding Lévy measure are given by the following formulas:
(34)
\[ {f_{{G_{N+a}}}}(u)=a\alpha \log \bigg(1+\frac{u}{\beta }\bigg)+\lambda \bigg(1-{\bigg(1+\frac{u}{\beta }\bigg)}^{-\alpha }\bigg),\]
and
(35)
\[ \nu (du)={e}^{-\beta u}{u}^{-1}\bigg(a\alpha +\frac{\lambda {\beta }^{\alpha }}{\varGamma (\alpha )}{u}^{\alpha }\bigg)du,\hspace{1em}\lambda >0,\alpha >0,\beta >0,\]
and we note that the process $G(at+N(t))$ coincides in distribution with the sum of independent processes ${\tilde{G}_{N}}(t)+\tilde{G}(at)$, where ${\tilde{G}_{N}}(t)$ is the compound Poisson-Gamma process and $\tilde{G}(t)$ is the Gamma process.

The distribution of the process $G(at+N(t))$ can be calculated as follows:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{G_{N+a}}(t)\in dy\big\}& \displaystyle =& \displaystyle {\int _{0}^{\infty }}{h_{G(s)}}(y){p_{s}}(t)ds\\{} & \displaystyle =& \displaystyle {\int _{0}^{\infty }}\frac{{\beta }^{\alpha s}}{\varGamma (\alpha s)}{y}^{\alpha s-1}{e}^{-\beta y}{e}^{-\lambda t}{\sum \limits_{k=0}^{\infty }}\frac{{(\lambda t)}^{k}}{k!}\delta (s-k-at)dsdy\\{} & \displaystyle =& \displaystyle {e}^{-y\beta -\lambda t}{\sum \limits_{k=0}^{\infty }}\frac{{(\lambda t)}^{k}}{k!}\frac{1}{y}{\int _{0}^{\infty }}\frac{{\beta }^{\alpha s}}{\varGamma (\alpha s)}{y}^{\alpha s}\delta (s-k-at)dsdy\\{} & \displaystyle =& \displaystyle {e}^{-y\beta -\lambda t}{y}^{\alpha at-1}{\beta }^{\alpha at}{\sum \limits_{k=0}^{\infty }}\frac{{(\lambda t{(\beta y)}^{\alpha })}^{k}}{k!\varGamma (\alpha (k+at))}\\{} & \displaystyle =& \displaystyle {e}^{-y\beta -\lambda t}\frac{{(y\beta )}^{\alpha at}}{y}\varPhi \big(\alpha ,\alpha at,\lambda t{(\beta y)}^{\alpha }\big)dy,\hspace{1em}a\ne 0,\end{array}\]
where in the last line the Wright function appears:
\[ \varPhi (\rho ,\delta ,z)={\sum \limits_{k=0}^{\infty }}\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )}.\]
We summarize the above reasoning in the following lemma.

Lemma 3.
The distribution of the process ${G_{N+a}}(t)=G(at+N(t))$, $a>0$, is given by
\[ P\big\{{G_{N+a}}(t)\in dy\big\}={e}^{-y\beta -\lambda t}\frac{{(y\beta )}^{\alpha at}}{y}\varPhi \big(\alpha ,\alpha at,\lambda t{(\beta y)}^{\alpha }\big)dy.\]
When the parameter $\alpha =1$, that is, the process ${G_{N+a}}(t)$ becomes ${E_{N+a}}(t)$, we obtain:
(37)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle P\big\{{E_{N+a}}(t)\in dy\big\}& \displaystyle =& \displaystyle {e}^{-y\beta -\lambda t}{\beta }^{\frac{at+1}{2}}{\bigg(\frac{y}{\lambda t}\bigg)}^{\frac{at-1}{2}}{I_{at-1}}(2\sqrt{\lambda \beta ty})dy,\end{array}\]
since
\[ \varPhi (1,\delta ,z)={z}^{\frac{1-\delta }{2}}{I_{\delta -1}}(2\sqrt{z}),\]
where ${I_{k}}$ is the modified Bessel function of the first kind (see, e.g., [16]):
\[ {I_{k}}(z)={\sum \limits_{m=0}^{\infty }}\frac{{(z/2)}^{2m+k}}{m!\varGamma (m+k+1)}.\]
Note that the distribution (37) was presented in [15, 18, 17] for the case when $\beta =1$.

Remark 6.
We recall that for the case when the shift $a=0$ the distributions of ${G_{N}}(t)$ and ${E_{N}}(t)$ have the atom at zero and are given by the following formulas:
\[\begin{aligned}P\big\{{G_{N}}(t)\in ds\big\}& ={e}^{-\lambda t}{\delta _{\{0\}}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)ds,\\{} P\big\{{E_{N}}(t)\in ds\big\}& ={e}^{-\lambda t}{\delta _{\{0\}}}(ds)+{e}^{-\lambda t-\beta s}\frac{\sqrt{\lambda t\beta s}}{s}{I_{1}}(2\sqrt{\lambda t\beta s})ds.\end{aligned}\]
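Returning to the case $a>0$: a quick numerical sanity check (a hedged sketch with illustrative parameters) that the density (37) integrates to 1; `scipy.special.iv` denotes the modified Bessel function of the first kind.

```python
# Hedged numerical check (illustrative parameters) that the density (37) of
# E_{N+a}(t), a > 0, integrates to 1.
import numpy as np
from scipy.special import iv

a, beta, lam, t = 5.0, 0.8, 1.0, 1.0
y = np.linspace(1e-6, 80.0, 40_000)
dens = (np.exp(-y * beta - lam * t) * beta ** ((a * t + 1) / 2)
        * (y / (lam * t)) ** ((a * t - 1) / 2)
        * iv(a * t - 1, 2.0 * np.sqrt(lam * beta * t * y)))
print(float(np.sum(dens) * (y[1] - y[0])))     # should be close to 1
```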
Let ${N_{1}}(t)$ be the Poisson process with intensity parameter ${\lambda _{1}}$. Consider the time-changed process
(38)
\[ {N_{1}}\big({G_{N+a}}(t)\big)={N_{1}}\big(G\big(at+N(t)\big)\big),\hspace{1em}t\ge 0,\]
where ${G_{N+a}}(t)$ is independent of ${N_{1}}(t)$.
Theorem 5.
The probability mass function of the process ${N_{1}}({G_{N+a}}(t))$ is given by
(39)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {p_{k}}(t)& \displaystyle =& \displaystyle P\big\{{N_{1}}\big({G_{N+a}}(t)\big)=k\big\}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda t}\frac{{\lambda _{1}^{k}}}{k!}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}\frac{{\beta }^{\alpha (n+at)}}{\varGamma (\alpha (n+at))}\frac{\varGamma (\alpha (n+at)+k)}{{({\lambda _{1}}+\beta )}^{\alpha (n+at)+k}}.\end{array}\]
The probabilities ${p_{k}}(t)$ satisfy the following system of difference-differential equations:
(40)
\[\begin{aligned}\frac{d}{dt}{p_{k}}(t)& =-\bigg(a\alpha \log \bigg(1+\frac{{\lambda _{1}}}{\beta }\bigg)+\lambda \bigg(1-{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha }\bigg)\bigg){p_{k}}(t)\\{} & \hspace{1em}+{\sum \limits_{m=1}^{k}}\frac{1}{m!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{m}\bigg(a\alpha \varGamma (m)+\lambda {\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha }\frac{\varGamma (m+\alpha )}{\varGamma (\alpha )}\bigg){p_{k-m}}(t).\end{aligned}\]
Proof.
The probability mass function of the process ${N_{1}}({G_{N+a}}(t))$ can be obtained by standard conditioning arguments (see, e.g., the general result for subordinated Lévy processes in [15], Theorem 30.1).
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {p_{k}}(t)& \displaystyle =& \displaystyle P\big\{{N_{1}}\big(G\big(at+N(t)\big)\big)=k\big\}\\{} & \displaystyle =& \displaystyle {\int _{0}^{\infty }}P\big\{{N_{1}}(s)=k\big\}P\big\{G\big(at+N(t)\big)\in ds\big\}\\{} & \displaystyle =& \displaystyle {\int _{0}^{\infty }}{e}^{-{\lambda _{1}}s}\frac{{({\lambda _{1}}s)}^{k}}{k!}{e}^{-\beta s-\lambda t}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}\frac{1}{s}\frac{{\beta }^{\alpha (n+at)}}{\varGamma (\alpha (n+at))}{s}^{\alpha (n+at)}ds\\{} & \displaystyle =& \displaystyle {e}^{-\lambda t}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}\frac{{\lambda _{1}^{k}}}{k!}\frac{{\beta }^{\alpha (n+at)}}{\varGamma (\alpha (n+at))}\frac{\varGamma (\alpha (n+at)+k)}{{({\lambda _{1}}+\beta )}^{\alpha (n+at)+k}}.\end{array}\]
Using the formula (4), we obtain the equations (40). □Theorem 6.
The probability mass function of the process ${N_{1}}({E_{N+a}}(t))$ is given by
(41)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {p_{k}^{E}}(t)& \displaystyle =& \displaystyle P\big\{{N_{1}}\big({E_{N+a}}(t)\big)=k\big\}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda t}\frac{\varGamma (k+at)}{k!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{k}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{at}{\mathcal{E}_{1,at}^{k+at}}\bigg(\frac{\lambda \beta t}{{\lambda _{1}}+\beta }\bigg).\end{array}\]
The probabilities ${p_{k}^{E}}(t)$ satisfy the following system of difference-differential equations:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \frac{d}{dt}{p_{k}^{E}}(t)& \displaystyle =& \displaystyle -\bigg(a\log \bigg(1+\frac{{\lambda _{1}}}{\beta }\bigg)+\frac{\lambda {\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg){p_{k}^{E}}(t)+{\sum \limits_{m=1}^{k}}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{m}\\{} & & \displaystyle \times \bigg(\frac{a}{m}+\frac{\lambda \beta }{{\lambda _{1}}+\beta }\bigg){p_{k-m}^{E}}(t).\end{array}\]
Remark 7.
The distribution (39) can also be obtained from the probability generating function. Using the general formula (3) for the process (38) we find:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {G}^{F}(u,t)& \displaystyle =& \displaystyle {e}^{-tf({\lambda _{1}}(1-u))}={e}^{-\lambda t}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}{\bigg(1+\frac{{\lambda _{1}}(1-u)}{\beta }\bigg)}^{-\alpha (n+at)}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda t}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}{\bigg(1+\frac{{\lambda _{1}}}{\beta }\bigg)}^{-\alpha (n+at)}{\bigg(1-\frac{{\lambda _{1}}u}{{\lambda _{1}}+\beta }\bigg)}^{-\alpha (n+at)}\\{} & \displaystyle =& \displaystyle {e}^{-\lambda t}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha at}{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda t)}^{n}}{n!}{\bigg(1+\frac{{\lambda _{1}}}{\beta }\bigg)}^{-\alpha n}{\sum \limits_{k=0}^{\infty }}\frac{1}{k!}{\bigg(\frac{{\lambda _{1}}u}{{\lambda _{1}}+\beta }\bigg)}^{k}\\{} & & \displaystyle \times \frac{\varGamma (\alpha (n+at)+k)}{\varGamma (\alpha (n+at))}\\{} & \displaystyle =& \displaystyle {\sum \limits_{k=0}^{\infty }}{u}^{k}\Bigg[{e}^{-\lambda t}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha at}\frac{1}{k!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{k}{\sum \limits_{n=0}^{\infty }}\frac{1}{n!}{\bigg(\frac{\lambda t{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\\{} & & \displaystyle \times \frac{\varGamma (\alpha (n+at)+k)}{\varGamma (\alpha (n+at))}\Bigg].\end{array}\]
Therefore,
\[ {p_{k}}(t)={e}^{-\lambda t}{\bigg(\frac{\beta }{{\lambda _{1}}+\beta }\bigg)}^{\alpha at}\frac{1}{k!}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{1}}+\beta }\bigg)}^{k}{\sum \limits_{n=0}^{\infty }}\frac{1}{n!}{\bigg(\frac{\lambda t{\beta }^{\alpha }}{{({\lambda _{1}}+\beta )}^{\alpha }}\bigg)}^{n}\frac{\varGamma (\alpha (n+at)+k)}{\varGamma (\alpha (n+at))},\]
which coincides with (39).Remark 8.
Figures 3 and 4 show the behavior of the probabilities (39) and (41), for various choices of t $(t=1,2,3)$.
Fig. 3.
Probabilities (39), for values of $a=5$, $\alpha =2$, $\beta =0.8$, $\lambda =1$, ${\lambda _{1}}=1$
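The series (39) can be evaluated in the same way as (6). A minimal hedged sketch (parameter values taken to match the caption of Fig. 3, with $t=1$) comparing the truncated series (39) with a Monte Carlo simulation of the definition of ${N_{1}}({G_{N+a}}(t))$:

```python
# Hedged numerical sketch: truncated series (39) versus Monte Carlo simulation
# of N_1(G_{N+a}(t)): N(t) ~ Poisson(lam*t), G(a*t+N(t)) ~ Gamma(alpha*(a*t+N(t)),
# rate beta), and then a Poisson(lam1 * .) value.  Parameters follow Fig. 3.
import numpy as np
from math import lgamma, exp, log

rng = np.random.default_rng(2)
a, alpha, beta, lam, lam1, t = 5.0, 2.0, 0.8, 1.0, 1.0, 1.0

def p_series(k, n_max=200):
    """Truncated series (39)."""
    out = 0.0
    for n in range(n_max):
        shape = alpha * (n + a * t)
        # term: (lam t)^n/n! * beta^shape/Gamma(shape) * Gamma(shape+k)/(lam1+beta)^(shape+k)
        out += exp(n * log(lam * t) - lgamma(n + 1) + shape * log(beta) - lgamma(shape)
                   + lgamma(shape + k) - (shape + k) * log(lam1 + beta))
    return exp(-lam * t) / exp(lgamma(k + 1)) * lam1 ** k * out

n_sim = 200_000
shapes = alpha * (a * t + rng.poisson(lam * t, n_sim))
g = rng.gamma(shape=shapes, scale=1.0 / beta)
x = rng.poisson(lam1 * g)
for k in range(4):
    print(k, p_series(k), np.mean(x == k))
```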
6 Iterated Bessel transforms
Consider the Lévy process ${Z_{G}}(t)=G(at+N(X(t)))$, $t\ge 0$, $a\ge 0$, where we assume that $G(t)$, $N(t)$ and $X(t)$ are independent Lévy processes, $G(t)$ is the Gamma process with parameters $(\alpha ,\beta )$, $N(t)$ is the Poisson process with parameter λ, and ${\nu _{X}}(du)$ is the Lévy measure of $X(t)$.
Using Theorem 30.1 from [15], we can calculate the Lévy measure of ${Z_{G}}(t)$. We obtain:
(42)
\[ {\nu _{{Z_{G}}}}(dx)={e}^{-\beta x}\Bigg(a\alpha {x}^{-1}+{\int _{0}^{\infty }}{e}^{-\lambda u}{x}^{-1}\varPhi \big(\alpha ,0,\lambda u{(\beta x)}^{\alpha }\big){\nu _{X}}(du)\Bigg)dx,\]
where $\varPhi (\rho ,0,z)$ is the Wright function.

In the case when $\alpha =1$, that is, when the process $G(t)$ becomes $E(t)$, the exponential process, we obtain the process ${Z_{E}}(t)=E(at+N(X(t)))$ with the Lévy measure given by the formula
(43)
\[ {\nu _{{Z_{E}}}}(dx)={e}^{-\beta x}\Bigg(a{x}^{-1}+{\int _{0}^{\infty }}{e}^{-\lambda u}\sqrt{\lambda u\beta {x}^{-1}}\hspace{0.1667em}{I_{1}}(2\sqrt{\lambda u\beta x}\hspace{0.1667em}){\nu _{X}}(du)\Bigg)dx,\]
since $\varPhi (1,0,z)=\sqrt{z}{I_{1}}(2\sqrt{z})$.

In what follows we will consider the process $G(at+N(t))$ for $\alpha =1$.
Define the following iteration of the processes $E(at+N(t))$:
(44)
\[\begin{array}{r@{\hskip10.0pt}c}& \displaystyle {X_{0}}(t)=t,\\{} & \displaystyle {X_{1}}(t)={E_{1}}\big({a_{1}}t+{N_{1}}\big({X_{0}}(t)\big)\big),\\{} & \displaystyle \dots \\{} & \displaystyle {X_{n}}(t)={E_{n}}\big({a_{n}}t+{N_{n}}\big({X_{n-1}}(t)\big)\big),\end{array}\]
where ${E_{i}}(t),i=1,\dots ,n$, are independent exponential processes with parameters ${\beta _{i}},i=1,\dots ,n$, ${N_{i}},i=1,\dots ,n$, are independent Poisson processes with intensity parameters ${\lambda _{i}},i=1,\dots ,n$. For the process ${X_{n}}(t)$, we are able to calculate in closed form its Lévy measure ${\nu _{n}}(du)$ and the corresponding Bernštein function ${f_{n}}(u)$. This result is presented in the next theorem.

Theorem 7.
Let ${X_{n}}(t)$ be the process defined by the iteration formulas (44). Then the following holds:
(i) if ${\beta _{i}}={\lambda _{i}}=1,i=1,\dots ,n$, then
(45)
\[ {\nu _{n}}(du)=\Bigg({u}^{-1}{\sum \limits_{k=0}^{n-1}}{e}^{-\frac{u}{k+1}}({a_{n-k}}-{a_{n-k-1}})+{e}^{-\frac{u}{n}}\frac{1}{{n}^{2}}\Bigg)du,\]
(46)
\[ {f_{n}}(u)={\sum \limits_{k=0}^{n-1}}({a_{n-k}}-{a_{n-k-1}})\log \big(1+(k+1)u\big)+\frac{u}{1+nu};\]
(ii) if ${\beta _{i}}=\beta \ne 1,{\lambda _{i}}=\lambda \ne 1,i=1,\dots ,n$, then
(47)
\[ {\nu _{n}}(du)=\Bigg({u}^{-1}{\sum \limits_{k=0}^{n-1}}{e}^{-u\frac{{\beta }^{k+1}}{\gamma (\lambda ,\beta ,k+1)}}({a_{n-k}}-{a_{n-k-1}})+{e}^{-u\frac{{\beta }^{n}}{\gamma (\lambda ,\beta ,n)}}\frac{{(\lambda \beta )}^{n}}{{(\gamma (\lambda ,\beta ,n))}^{2}}\Bigg)du,\]
(48)
\[ {f_{n}}(u)={\sum \limits_{k=0}^{n-1}}({a_{n-k}}-{a_{n-k-1}})\log \bigg(1+u\frac{\gamma (\lambda ,\beta ,k+1)}{{\beta }^{k+1}}\bigg)+{\lambda }^{n}\frac{u}{{\beta }^{n}+u\gamma (\lambda ,\beta ,n)},\]
where $\gamma (\lambda ,\beta ,m)={\sum _{j=1}^{m}}{\lambda }^{m-j}{\beta }^{j-1}=({\lambda }^{m}-{\beta }^{m}){(\lambda -\beta )}^{-1}$, ${a_{0}}=0$.

Proof.
We present the proof for the case ${\beta _{i}}=\beta \ne 1,{\lambda _{i}}=\lambda \ne 1,i=1,\dots ,n$. We prove the claimed results by induction.
For $n=1$ the formula (47) holds. Suppose that the result is true for $n=m$ ($m\ge 1$), that is,
\[ {\nu _{m}}(du)=\Bigg({u}^{-1}{\sum \limits_{k=0}^{m-1}}{e}^{-u\frac{{\beta }^{k+1}}{\gamma (\lambda ,\beta ,k+1)}}({a_{m-k}}-{a_{m-k-1}})+{e}^{-u\frac{{\beta }^{m}}{\gamma (\lambda ,\beta ,m)}}\frac{{(\lambda \beta )}^{m}}{{(\gamma (\lambda ,\beta ,m))}^{2}}\Bigg)du.\]
We need to show that (47) holds for $n=m+1$. We calculate ${\nu _{m+1}}(dx)$:
\[\begin{aligned}{\nu _{m+1}}(dx)& ={e}^{-\beta x}\Bigg({a_{m+1}}{x}^{-1}+{\int _{0}^{\infty }}{e}^{-\lambda u}\sqrt{\lambda u\beta {x}^{-1}}{I_{1}}(2\sqrt{\lambda u\beta x}){\nu _{m}}(du)\Bigg)dx\\{} & ={e}^{-\beta x}\Bigg({a_{m+1}}{x}^{-1}+{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda \beta )}^{n+1}{x}^{n}}{n!(n+1)!}{\int _{0}^{\infty }}{e}^{-\lambda u}{u}^{n+1}{\nu _{m}}(du)\Bigg)dx\\{} & ={e}^{-\beta x}\Bigg({a_{m+1}}{x}^{-1}+{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda \beta )}^{n+1}{x}^{n}}{n!(n+1)!}{\int _{0}^{\infty }}{e}^{-\lambda u}{u}^{n+1}\\{} & \hspace{1em}\times \Bigg({u}^{-1}{\sum \limits_{k=0}^{m-1}}{e}^{-u\frac{{\beta }^{k+1}}{\gamma (\lambda ,\beta ,k+1)}}({a_{m-k}}-{a_{m-k-1}})\\{} & \hspace{1em}+{e}^{-u\frac{{\beta }^{m}}{\gamma (\lambda ,\beta ,m)}}\frac{{(\lambda \beta )}^{m}}{{(\gamma (\lambda ,\beta ,m))}^{2}}\Bigg)du\Bigg)dx\\{} & ={e}^{-\beta x}\Bigg({a_{m+1}}{x}^{-1}\\{} & \hspace{1em}+{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda \beta )}^{n+1}{x}^{n}}{(n+1)!}{\sum \limits_{k=0}^{m-1}}({a_{m-k}}-{a_{m-k-1}}){\bigg(\lambda +\frac{{\beta }^{k+1}}{\gamma (\lambda ,\beta ,k+1)}\bigg)}^{-(n+1)}\\{} & \hspace{1em}+{\sum \limits_{n=0}^{\infty }}\frac{{(\lambda \beta )}^{n+1}{x}^{n}}{n!}\frac{{(\lambda \beta )}^{m}}{{(\gamma (\lambda ,\beta ,m))}^{2}}{\bigg(\lambda +\frac{{\beta }^{m}}{\gamma (\lambda ,\beta ,m)}\bigg)}^{-(n+2)}\Bigg)dx\\{} & ={e}^{-\beta x}\Bigg({a_{m+1}}{x}^{-1}\\{} & \hspace{1em}+{x}^{-1}{\sum \limits_{k=0}^{m-1}}({a_{m-k}}-{a_{m-k-1}})\big({e}^{x\frac{\lambda \beta \gamma (\lambda ,\beta ,k+1)}{\lambda \gamma (\lambda ,\beta ,k+1)+{\beta }^{k+1}}}-1\big)\\{} & \hspace{1em}+\frac{{(\lambda \beta )}^{m+1}}{{(\lambda \gamma (\lambda ,\beta ,m)+{\beta }^{m})}^{2}}{e}^{\lambda \beta x\frac{\gamma (\lambda ,\beta ,m)}{\lambda \gamma (\lambda ,\beta ,m)+{\beta }^{m}}}\Bigg)dx\\{} & =\Bigg({e}^{-\beta x}{x}^{-1}\Bigg({a_{m+1}}-{\sum \limits_{k=0}^{m-1}}({a_{m-k}}-{a_{m-k-1}})\Bigg)\\{} & \hspace{1em}+{x}^{-1}{\sum \limits_{k=0}^{m-1}}{e}^{-x\frac{{\beta }^{k+2}}{\gamma (\lambda ,\beta ,k+2)}}({a_{m-k}}-{a_{m-k-1}})\\{} & \hspace{1em}+{e}^{-x\frac{{\beta }^{m+1}}{\gamma (\lambda ,\beta ,m+1)}}\frac{{(\lambda \beta )}^{m+1}}{{(\gamma (\lambda ,\beta ,m+1))}^{2}}\Bigg)dx\\{} & =\Bigg({x}^{-1}{\sum \limits_{k=0}^{m}}{e}^{-x\frac{{\beta }^{k+1}}{\gamma (\lambda ,\beta ,k+1)}}({a_{m+1-k}}-{a_{m-k}})\\{} & \hspace{1em}+{e}^{-x\frac{{\beta }^{m+1}}{\gamma (\lambda ,\beta ,m+1)}}\frac{{(\lambda \beta )}^{m+1}}{{(\gamma (\lambda ,\beta ,m+1))}^{2}}\Bigg)dx.\end{aligned}\]
Therefore, the formula (47) is true. □Remark 9.
If ${X_{0}}=\lambda t$ in (44) and ${\beta _{i}}={\lambda _{i}}=1$ or ${\beta _{i}}=1,{\lambda _{1}}=\lambda ,{\lambda _{i}}=1,i=2,3,\dots ,n$, then
Remark 10.
Formulas (45)–(48) become significantly simpler in the case when ${a_{1}}={a_{2}}=\cdots ={a_{n}}=a$, that is, when the shift is the same at each step. We obtain:
(i) if ${\beta _{i}}={\lambda _{i}}=1,i=1,\dots ,n$, then
(49)
\[ {\nu _{n}}(du)={u}^{-1}{e}^{-\frac{u}{n}}\bigg(a+\frac{u}{{n}^{2}}\bigg)du,\]
(50)
\[ {f_{n}}(u)=a\log (1+nu)+\frac{u}{1+nu};\]
(ii) if ${\beta _{i}}=\beta \ne 1,{\lambda _{i}}=\lambda \ne 1,i=1,\dots ,n$, then
(51)
\[ {\nu _{n}}(du)={u}^{-1}{e}^{-u\frac{{\beta }^{n}}{\gamma (\lambda ,\beta ,n)}}\bigg(a+\frac{{(\lambda \beta )}^{n}}{{(\gamma (\lambda ,\beta ,n))}^{2}}u\bigg)du,\]
(52)
\[ {f_{n}}(u)=a\log \bigg(1+u\frac{\gamma (\lambda ,\beta ,n)}{{\beta }^{n}}\bigg)+{\lambda }^{n}\frac{u}{{\beta }^{n}+u\gamma (\lambda ,\beta ,n)},\]
where $\gamma (\lambda ,\beta ,n)={\sum _{j=1}^{n}}{\lambda }^{n-j}{\beta }^{j-1}=({\lambda }^{n}-{\beta }^{n}){(\lambda -\beta )}^{-1}$.

We can conclude that the process ${X_{n}}(t)$, which is given by the formula (44) as some kind of n-th iteration of processes $E(N(t)+at)$, under the assumption that $E(t)$ and $N(t)$ have the parameters $\beta =\lambda =1$, coincides in distribution with the process ${E_{1/n}}({N_{1/n}}(t)+at)$,
where the exponential process ${E_{1/n}}$ and the Poisson process ${N_{1/n}}(t)$ have the parameters $\beta =\lambda =1/n$, and, therefore, the distribution of ${X_{n}}(t)$ in such a case is given by the formula (37) with $\beta =\lambda =1/n$.
Correspondingly, in the case (ii) the process ${X_{n}}(t)$ coincides in distribution with the process $\tilde{E}(\tilde{N}(t)+at)$, where the exponential process $\tilde{E}(t)$ has parameter $\frac{{\beta }^{n}}{\gamma (\lambda ,\beta ,n)}$, and the Poisson process $\tilde{N}(t)$ has parameter $\frac{{\lambda }^{n}}{\gamma (\lambda ,\beta ,n)}$.
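The closed form (48) can also be checked numerically. The following hedged sketch uses illustrative parameters; the recursion employed below is not stated in the paper, but it is the standard composition rule for Laplace exponents of subordinated processes, namely ${f_{n}}(u)={a_{n}}\log (1+u/{\beta _{n}})+{f_{n-1}}({\lambda _{n}}u/({\beta _{n}}+u))$ with ${f_{0}}(u)=u$.

```python
# Hedged numerical cross-check of the closed form (48) against the Laplace
# exponent recursion for X_n(t) = E_n(a_n*t + N_n(X_{n-1}(t))).
import math

beta, lam = 0.8, 1.3                 # illustrative, beta != lam
a = [0.0, 0.5, 1.0, 2.0, 4.0]        # a_0 = 0, then a_1, ..., a_4

def f_recursive(u, n):
    if n == 0:
        return u
    return a[n] * math.log(1 + u / beta) + f_recursive(lam * u / (beta + u), n - 1)

def gamma_(m):
    return (lam ** m - beta ** m) / (lam - beta)

def f_closed(u, n):
    """Formula (48)."""
    s = sum((a[n - k] - a[n - k - 1]) * math.log(1 + u * gamma_(k + 1) / beta ** (k + 1))
            for k in range(n))
    return s + lam ** n * u / (beta ** n + u * gamma_(n))

for u in (0.3, 1.0, 2.5):
    print(u, f_recursive(u, 4), f_closed(u, 4))   # the two columns should agree
```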
Lemma 4.
Assume the process ${X_{1}}(t)=E(at+N(t))$, where $E(t)$ is an exponential process with parameter γ, $N(t)$ is the Poisson process with parameter γ, that is, ${X_{1}}(t)$ has the Lévy measure
\[ {\nu _{1}}(du)={e}^{-u\gamma }\big(a{u}^{-1}+{\gamma }^{2}\big)du.\]
Then the process
\[ {X_{2}}(t)=\tilde{E}\big(at+\tilde{N}\big({X_{1}}(t)\big)\big),\]
where $\tilde{E}(t)$ is the exponential process with parameter α, $\tilde{N}(t)$ is the Poisson process with parameter α, has the Lévy measure of the form:
\[ {\nu _{2}}(dx)={e}^{-x\frac{\alpha \gamma }{\alpha +\gamma }}\bigg(a{x}^{-1}+{\bigg(\frac{\alpha \gamma }{\alpha +\gamma }\bigg)}^{2}\bigg)dx.\]
Proof.
Using the formula (43) with $\lambda =\beta =\gamma $ we obtain:
\[\begin{aligned}{\nu _{2}}(dx)& ={e}^{-\alpha x}a{x}^{-1}dx\\{} & \hspace{1em}+{e}^{-\alpha x}{\int _{0}^{\infty }}{e}^{-\alpha u}\sqrt{{\alpha }^{2}u{x}^{-1}}{I_{1}}\big(2\sqrt{{\alpha }^{2}ux}\big){e}^{-u\gamma }\big({u}^{-1}a+{\gamma }^{2}\big)dudx\\{} & ={e}^{-\alpha x}a{x}^{-1}dx\\{} & \hspace{1em}+{e}^{-\alpha x}{\sum \limits_{j=0}^{\infty }}\frac{{({\alpha }^{2}x)}^{j}{\alpha }^{2}}{j!(j+1)!}{\int _{0}^{\infty }}{e}^{-u(\alpha +\gamma )}{u}^{j+1}\big({u}^{-1}a+{\gamma }^{2}\big)dudx\\{} & ={e}^{-\alpha x}\Bigg(a{x}^{-1}+{\sum \limits_{j=0}^{\infty }}\frac{{({\alpha }^{2}x)}^{j}{\alpha }^{2}}{j!(j+1)!}\bigg(a\frac{j!}{{(\alpha +\gamma )}^{j+1}}+{\gamma }^{2}\frac{(j+1)!}{{(\alpha +\gamma )}^{j+2}}\bigg)\Bigg)dx\\{} & ={e}^{-x\frac{\alpha \gamma }{\alpha +\gamma }}\bigg(a{x}^{-1}+{\bigg(\frac{\gamma \alpha }{\alpha +\gamma }\bigg)}^{2}\bigg)dx.\end{aligned}\]
□Generalizing the first statement of Remark 10, we obtain the next interesting result. Let the iterated process be constructed according to the formula (44) with the following parameters at each step:
(54)
\[\begin{array}{r@{\hskip10.0pt}c}& \displaystyle {X_{0}}(t)=t,\\{} & \displaystyle {X_{1}}(t)={E_{1}}\big(at+{N_{1}}\big({X_{0}}(t)\big)\big),\hspace{1em}{\beta _{1}}={\lambda _{1}}=\frac{1}{{c_{1}}}\\{} & \displaystyle \dots \\{} & \displaystyle {X_{n}}(t)={E_{n}}\big(at+{N_{n}}\big({X_{n-1}}(t)\big)\big),\hspace{1em}{\beta _{n}}={\lambda _{n}}=\frac{1}{{c_{n}}}.\end{array}\]Lemma 5.
Let the process ${X_{n}}(t)$ be given by the iteration formula (54). Then its Lévy measure and Laplace exponent are of the following form:
(55)
\[ {\nu _{n}}(dx)={e}^{-x\frac{1}{{c_{1}}+\cdots +{c_{n}}}}\bigg(a{x}^{-1}+\bigg(\frac{1}{{({c_{1}}+\cdots +{c_{n}})}^{2}}\bigg)\bigg)dx,\]
(56)
\[ {f_{n}}(x)=a\log \Bigg(1+{\sum \limits_{1}^{n}}{c_{i}}x\Bigg)+\frac{x}{1+{\textstyle\sum _{1}^{n}}{c_{i}}x}.\]
Therefore, ${X_{n}}(t)$ coincides in distribution with the process ${E_{1/c}}({N_{1/c}}(t)+at)$, where $c={\sum _{1}^{n}}{c_{i}}$.

Proof.
By induction, we need to show that if (55) holds at the n-th step, then
(57)
\[ {\nu _{n+1}}(dx)={e}^{-x\frac{1}{{c_{1}}+\cdots +{c_{n+1}}}}\bigg(a{x}^{-1}+\bigg(\frac{1}{{({c_{1}}+\cdots +{c_{n+1}})}^{2}}\bigg)\bigg)dx.\]
Using Lemma 4 with parameters $\alpha =\frac{1}{{c_{n+1}}},\gamma =\frac{1}{{c_{1}}+\cdots +{c_{n}}}$ we immediately come to (57). □

We next consider the time-changed process ${N_{\mu }}({X_{n}}(t))$, where ${X_{n}}(t)$ is the process defined by the formula (44), and ${N_{\mu }}(t)$ is the Poisson process with parameter μ.
Theorem 8.
The probabilities ${p_{k}}(t)=P\{{N_{\mu }}({X_{n}}(t))=k\}$ are solutions to the equation
(58)
\[ \frac{d}{dt}{p_{k}}(t)=-{\sum \limits_{j=0}^{n}}{f_{n,j}}\big(\mu (I-B)\big){p_{k}}(t),\hspace{1em}k\ge 0,\hspace{0.1667em}t>0,\]
with the usual initial condition, where ${f_{n,j}}(u)$ are given by the following formulas:
(i) if ${\beta _{i}}={\lambda _{i}}=1,i=1,\dots ,n$, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f_{n,j}}(u)& \displaystyle =& \displaystyle ({a_{n-j}}-{a_{n-j-1}})\log \big(1+(j+1)u\big),\hspace{1em}j=0,1,\dots ,n-1;\\{} \displaystyle {f_{n,n}}(u)& \displaystyle =& \displaystyle \frac{u}{1+nu};\end{array}\]
(ii) if ${\beta _{i}}=\beta \ne 1,{\lambda _{i}}=\lambda \ne 1,i=1,\dots ,n$, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f_{n,j}}(u)& \displaystyle =& \displaystyle ({a_{n-j}}-{a_{n-j-1}})\log \bigg(1+u\frac{\gamma (\lambda ,\beta ,j+1)}{{\beta }^{j+1}}\bigg),\hspace{1em}j=0,1,\dots ,n-1;\\{} \displaystyle {f_{n,n}}(u)& \displaystyle =& \displaystyle \frac{{\lambda }^{n}u}{{\beta }^{n}+u\gamma (\lambda ,\beta ,n)}.\end{array}\]
Remark 11.
The equation (58) can also be written in the form
(59)
\[ \frac{d}{dt}{p_{k}}(t)=-{f_{n}}(\mu ){p_{k}}(t)+{\sum \limits_{m=1}^{k}}\frac{{\mu }^{m}}{m!}{p_{k-m}}(t){\int _{0}^{\infty }}{e}^{-s\mu }{s}^{m}{\nu _{n}}(ds),\hspace{1em}k\ge 0,\hspace{0.1667em}t>0,\]
where ${f_{n}}(u)$ and ${\nu _{n}}(du)$ are given in Theorem 7.

Remark 12.
In [15, 18, 17] the process of the following form was considered: $G(at+N(X(t)))$, where $G(t)$ is the Gamma process with parameters $(1,1)$, that is, the process with probability density ${(\varGamma (t))}^{-1}{e}^{-x}{x}^{t-1}$ and the Lévy measure $\nu (du)={u}^{-1}{e}^{-u}du$ (actually, the exponential process), and $N(t)$ is the Poisson process with parameter 1. Such a process is called the Bessel transform of the process $X(t)$. In the present paper we suppose that the process $G(t)$ has parameters $(1,\beta )$ (that is, it is the exponential process with parameter β), and the Poisson process has parameter λ and we consider ${E_{\beta }}(at+{N_{\lambda }}(X(t)))$. Let us call such a transform of the process $X(t)$, with more general parameters, the Bessel transform as well, and denote it by
\[ B\big(X(t)\big)={E_{\beta }}\big(at+{N_{\lambda }}\big(X(t)\big)\big).\]
Then the process ${X_{n}}(t)$, which is given by (44), can be represented as the n-th iteration of Bessel transforms:
\[ {X_{n}}(t)=\underset{n}{\underbrace{{B_{n}}\big({B_{n-1}}\big(\dots {B_{1}}}}\big({X_{0}}(t)\big)\big)\big),\]
where ${X_{0}}(t)=t$, or
\[ {X_{n}}(t)=\underset{n-1}{\underbrace{{B_{n-1}}\big({B_{n-2}}\big(\dots {B_{1}}}}\big(E\big(at+N(t)\big)\big)\big)\big),\]
where ${B_{i}}$ are Bessel transforms with parameters ${\beta _{i}}$, ${\lambda _{i}}$.In the particular case, when ${\beta _{i}}={\lambda _{i}}=1/{c_{i}}$, we obtain:
\[ {B_{n}}\big({B_{n-1}}\big(\dots {B_{1}}(t)\big)\big)\stackrel{d}{=}\tilde{B}(t)={E_{1/c}}\big({N_{1/c}}(t)+at\big),\]
where $c={\sum _{1}^{n}}{c_{i}}$ (see Lemma 5). We also have:
(60)
\[ {N_{\mu }}\big({B_{n}^{1/{c_{n}}}}\big({B_{n-1}^{1/{c_{n-1}}}}\big(\dots {B_{1}^{1/{c_{1}}}}(t)\big)\big)\big)\stackrel{d}{=}{N_{\mu }}\big({B}^{1/c}(t)\big)={N_{\mu }}\big({E_{1/c}}\big({N_{1/c}}(t)+at\big)\big),\]When the shift $a=0$, we recover the result stated in Remark 4 of [5], concerning the Poisson process with iterated time change, where the role of time is played by the compound Poisson-exponential process ${\widetilde{E}_{N}^{1/c}}(t)={E_{1/c}}({N_{1/c}}(t))$ with the Laplace exponent ${\tilde{f}_{c}}(u)=\frac{u}{1+cu}$. Namely, if we denote ${\widetilde{N}}^{1/c}(t)=N({\widetilde{E}_{N}^{1/c}}(t))$, then the following holds:
(61)
\[ {\widetilde{N}}^{1/{c_{n}}}\big({\widetilde{E}_{N}^{1/{c_{n-1}}}}\big(\dots {\widetilde{E}_{N}^{1/{c_{1}}}}(t)\big)\big)\stackrel{d}{=}{\widetilde{N}}^{1/c}(t),\hspace{1em}c={\sum \limits_{1}^{n}}{c_{i}}.\]
A similar property with respect to iterated time change was discovered previously for the time-changed Poisson processes where the time is expressed by stable subordinators with Laplace exponent $f(u)={u}^{\alpha }$ (and called the auto-conservative property) in the papers [7, 12]. Two more cases of the iterated time change which preserve the structure of the process are provided by the above examples (60), (61).