1 Introduction
Stochastic processes with random time, and more general compositions of processes, are popular topics of recent studies, both in the theory of stochastic processes and in various applied areas. In financial mathematics, models with a random clock (or time change) allow one to capture more realistically the relationship between calendar time and the activity of financial markets. Models with random time appear in reliability and queuing theory and in biological, ecological and medical research; note also that for solving some problems of statistical estimation, sampling of a stochastic process at random times, or along the trajectory of another process, can be used. Some examples of applications are described, for example, in [5].
In the present paper we study various compositions of Poisson and Gamma processes. We only consider the cases when processes used for compositions are independent.
The Poisson process directed by a Gamma process and, conversely, the Gamma process directed by a Poisson process can be encountered, for example, in the book by W. Feller [6], where the distributions of these processes are presented as particular examples within the general framework of directed processes.
Time-changed Poisson processes have been investigated extensively in the literature. We mention, for example, the recent comprehensive study undertaken in [16], concerned with the processes $N({H}^{f}(t))$, where $N(t)$ is a Poisson process and ${H}^{f}(t)$ is an arbitrary subordinator with Laplace exponent f, independent of $N(t)$. Particular emphasis has been placed on the cases where $f(u)={u}^{\alpha }$, $\alpha \in (0,1)$ (space-fractional Poisson process), $f(u)={(u+\theta )}^{\alpha }-{\theta }^{\alpha }$, $\alpha \in (0,1)$, $\theta >0$ (tempered Poisson process), and $f(u)=\log (1+u)$ (negative binomial process); in the last case we have the Poisson process with Gamma subordinator.
The most intensively studied models of Poisson processes with time change are two fractional extensions of the Poisson process, namely the space-fractional and the time-fractional Poisson processes, obtained by choosing a stable subordinator or its inverse process, respectively, in the role of time. The literature devoted to these processes is rather voluminous; we refer, e.g., to the recent papers [7, 15, 12] (and references therein), and to the paper [14], where the correlation structure of time-changed Lévy processes was investigated and the correlation of the time-fractional Poisson process was discussed among a variety of other examples. The most recent results concern non-homogeneous fractional Poisson processes (see, e.g., [13] and references therein).
Interesting models are based on the use of the difference of two Poisson processes, the so-called Skellam processes, and their generalizations via time change. Investigations of these models can be found, for example, in [2]; we also refer to the paper [10], where fractional Skellam processes were introduced and studied.
In the present paper we study time-changed Poisson and Skellam processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. Some motivation for our study is presented in Remarks 1 and 2 in Section 3.
We obtain explicitly the probability distributions of the considered time-changed processes and their first- and second-order moments.
In particular, in the case where the time change is performed by means of the compound Poisson-exponential subordinator and its inverse process, the corresponding probability distributions of the time-changed Poisson and Skellam processes are presented in terms of generalized Mittag-Leffler functions.
We also find a relation, in the form of a differential equation, between the distribution of the Poisson process time-changed by the Poisson-exponential process and the distribution of the Poisson process time-changed by the inverse Poisson-exponential process.
The paper is organized as follows. In Section 2 we recall the definitions of the processes which will be considered in the paper. In Section 3 we discuss the main features of the compound Poisson-Gamma process $G_{N}(t)$. In Section 4 we study Poisson and Skellam-type processes time-changed by the process $G_{N}(t)$; in Section 5 we investigate time-changed Poisson and Skellam-type processes, where the role of time is played by the inverse process for $G_{N}(t)$, and we also discuss some properties of the inverse processes. The Appendix contains details of the derivation of some results stated in Section 5.
2 Preliminaries
In this section we recall the definitions of the processes which will be considered in the paper (see, e.g., [1, 3]).
The Poisson process $N(t)$ with intensity parameter $\lambda >0$ is a Lévy process with values in $\mathbb{N}\cup \{0\}$ such that each $N(t)$ has Poisson distribution with parameter $\lambda t$, that is,
\[P\big(N(t)=k\big)={e}^{-\lambda t}\frac{{(\lambda t)}^{k}}{k!},\hspace{1em}k=0,1,2,\dots .\]
The Lévy measure of $N(t)$ can be written in the form $\nu (du)=\lambda \delta _{\{1\}}(u)du$, where $\delta _{\{1\}}(u)$ is the Dirac delta centered at $u=1$; the corresponding Bernštein function (or Laplace exponent of $N(t)$) is $f(u)=\lambda (1-{e}^{-u})$, $u>0$.
The Gamma process $G(t)$ with parameters $\alpha ,\beta >0$ is a Lévy process such that each $G(t)$ follows the Gamma distribution $\varGamma (\alpha t,\beta )$, that is, has the density
\[h_{G(t)}(x)=\frac{{\beta }^{\alpha t}}{\varGamma (\alpha t)}{x}^{\alpha t-1}{e}^{-\beta x},\hspace{1em}x\ge 0.\]
The Lévy measure of $G(t)$ is $\nu (du)=\alpha {u}^{-1}{e}^{-\beta u}du$ and the corresponding Bernštein function is
\[f(u)=\alpha \log \bigg(1+\frac{u}{\beta }\bigg),\hspace{1em}u>0.\]
The Skellam process is defined as
\[S(t)=N_{1}(t)-N_{2}(t),\hspace{1em}t\ge 0,\]
where $N_{1}(t),t\ge 0$, and $N_{2}(t),t\ge 0$, are two independent Poisson processes with intensities $\lambda _{1}>0$ and $\lambda _{2}>0$, respectively.
The probability mass function of $S(t)$ is given by
(1)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle s_{k}(t)=P\big(S(t)=k\big)={e}^{-t(\lambda _{1}+\lambda _{2})}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{k/2}I_{|k|}(2t\sqrt{\lambda _{1}\lambda _{2}}),\\{} & \displaystyle \hspace{1em}k\in \mathbb{Z}=\{0,\pm 1,\pm 2,\dots \},\end{array}\]
where $I_{k}$ is the modified Bessel function of the first kind [20]:
\[I_{k}(z)=\sum \limits_{n=0}^{\infty }\frac{{(z/2)}^{2n+k}}{n!(n+k)!},\hspace{1em}k\ge 0.\]
The Skellam process is a Lévy process; its Lévy measure is the linear combination of two Dirac measures, $\nu (du)=\lambda _{1}\delta _{\{1\}}(u)du+\lambda _{2}\delta _{\{-1\}}(u)du$, the corresponding Bernštein function is
\[f(u)=\lambda _{1}\big(1-{e}^{-u}\big)+\lambda _{2}\big(1-{e}^{u}\big),\]
and the moment generating function is given by
\[\mathsf{E}{e}^{\theta S(t)}={e}^{-t(\lambda _{1}(1-{e}^{\theta })+\lambda _{2}(1-{e}^{-\theta }))},\hspace{1em}\theta \in \mathbb{R}.\]
Skellam processes are considered, for example, in [2]; the Skellam distribution was introduced and studied in [19] and [9].
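As an illustrative numerical check (a sketch, not part of the formal development), formula (1) can be compared with a Monte Carlo simulation of $S(t)=N_{1}(t)-N_{2}(t)$; the parameter values below are arbitrary, and scipy's `iv` is used for the modified Bessel function:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

rng = np.random.default_rng(0)

def skellam_pmf(k, t, lam1, lam2):
    # formula (1): s_k(t) = e^{-t(l1+l2)} (l1/l2)^{k/2} I_{|k|}(2t sqrt(l1*l2))
    return (np.exp(-t * (lam1 + lam2))
            * (lam1 / lam2) ** (k / 2)
            * iv(abs(k), 2 * t * np.sqrt(lam1 * lam2)))

t, lam1, lam2 = 1.5, 2.0, 1.0
n = 200_000
# S(t) = N1(t) - N2(t) with independent Poisson marginals
samples = rng.poisson(lam1 * t, n) - rng.poisson(lam2 * t, n)
for k in (-1, 0, 2):
    assert abs(np.mean(samples == k) - skellam_pmf(k, t, lam1, lam2)) < 0.01
```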
We will consider Skellam processes with time change
\[S_{I}(t)=S\big(X(t)\big)=N_{1}\big(X(t)\big)-N_{2}\big(X(t)\big),\]
where $X(t)$ is a subordinator independent of $N_{1}(t)$ and $N_{2}(t)$, and will call such processes time-changed Skellam processes of type I. We will also consider the processes of the form
\[S_{\mathit{II}}(t)=N_{1}\big(X_{1}(t)\big)-N_{2}\big(X_{2}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ are two independent Poisson processes with intensities $\lambda _{1}>0$ and $\lambda _{2}>0$, and $X_{1}(t)$, $X_{2}(t)$ are two independent copies of a subordinator $X(t)$, which are also independent of $N_{1}(t)$ and $N_{2}(t)$; we will call the process $S_{\mathit{II}}(t)$ a time-changed Skellam process of type II.
To represent distributions and other characteristics of the processes considered in the next sections, we will use some special functions, besides the modified Bessel function introduced above. Namely, we will use the Wright function
(3)
\[\varPhi (\rho ,\delta ,z)=\sum \limits_{k=0}^{\infty }\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho \in (-1,0)\cup (0,\infty ),\hspace{2.5pt}\delta \in C;\]
for $\delta =0$, (3) is simplified as follows:
(4)
\[\varPhi (\rho ,0,z)=\sum \limits_{k=1}^{\infty }\frac{{z}^{k}}{k!\varGamma (\rho k)};\]
the two-parameter generalized Mittag-Leffler function
(5)
\[\mathcal{E}_{\rho ,\delta }(z)=\sum \limits_{k=0}^{\infty }\frac{{z}^{k}}{\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho \in (0,\infty ),\delta \in (0,\infty );\]
and the three-parameter generalized Mittag-Leffler function
(6)
\[{\mathcal{E}_{\rho ,\delta }^{\gamma }}(z)=\sum \limits_{k=0}^{\infty }\frac{\varGamma (\gamma +k)}{\varGamma (\gamma )}\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho ,\delta ,\gamma \in C,\]
with $\operatorname{Re}(\rho )>0,\operatorname{Re}(\delta )>0,\operatorname{Re}(\gamma )>0$ (see, e.g., [8] for definitions and properties of these functions).

3 Compound Poisson-Gamma process
The first example of compositions of Poisson and Gamma processes which we consider in the paper is the compound Poisson-Gamma process. This is a well known process, however here we would like to focus on some of its important features.
Let $N(t)$ be a Poisson process and $\{G_{n},n\ge 1\}$ be a sequence of i.i.d. Gamma random variables independent of $N(t)$. Then the compound Poisson process with Gamma distributed jumps is defined as
\[Y(t)=\sum \limits_{n=1}^{N(t)}G_{n},\hspace{1em}t\ge 0\]
(in the above definition it is meant that ${\sum _{n=1}^{0}}\stackrel{\mathrm{def}}{=}0$). This process can also be represented as $Y(t)=G(N(t))$, that is, as a time-changed Gamma process, where the role of time is played by the Poisson process. Let us denote this process by $G_{N}(t)$:
\[G_{N}(t)=G\big(N(t)\big),\hspace{1em}t\ge 0.\]
Let $N(t)$ and $G(t)$ have parameters λ and $(\alpha ,\beta )$, respectively. The process $G_{N}(t)$ is a Lévy process with Laplace exponent (or Bernštein function) of the form
\[f(u)=\lambda \big(1-{\beta }^{\alpha }{(\beta +u)}^{-\alpha }\big)=\lambda {\beta }^{\alpha }\big({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha }\big),\hspace{1em}u>0,\]
and the corresponding Lévy measure is
(7)
\[\nu (du)=\frac{\lambda {\beta }^{\alpha }}{\varGamma (\alpha )}{u}^{\alpha -1}{e}^{-\beta u}du.\]
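As a simulation sketch (illustrative parameters only), the two representations can be probed directly: conditionally on $N(t)=n>0$, the compound sum is $\varGamma (\alpha n,\beta )$-distributed, and $\mathsf{E}G_{N}(t)=\lambda t\alpha /\beta $ with an atom ${e}^{-\lambda t}$ at zero:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_GN(t, lam, alpha, beta, size):
    # G_N(t) = G(N(t)): given N(t) = n > 0 the value is Gamma(alpha*n, beta)
    counts = rng.poisson(lam * t, size)
    gam = rng.gamma(np.maximum(counts, 1) * alpha, 1.0 / beta, size)
    return np.where(counts > 0, gam, 0.0)  # atom at zero when N(t) = 0

t, lam, alpha, beta = 2.0, 1.5, 2.0, 3.0
x = sample_GN(t, lam, alpha, beta, 200_000)
# E G_N(t) = lam * t * alpha / beta and P{G_N(t) = 0} = exp(-lam * t)
assert abs(x.mean() - lam * t * alpha / beta) < 0.05
assert abs(np.mean(x == 0.0) - np.exp(-lam * t)) < 0.01
```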
The transition probability measure of the process $G_{N}(t)$ can be written in the closed form:
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big\{G_{N}(t)\in ds\big\}& \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+\sum \limits_{n=1}^{\infty }{e}^{-\lambda t}\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{n!\varGamma (\alpha n)}{s}^{\alpha n-1}{e}^{-\beta s}ds\\{} & \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)ds;\end{array}\]
therefore, the probability law of $G_{N}(t)$ has an atom ${e}^{-\lambda t}$ at zero, that is, a discrete part $P\{G_{N}(t)=0\}={e}^{-\lambda t}$, and the density of the absolutely continuous part can be expressed in terms of the Wright function.

In particular, when $\alpha =n$, $n\in \mathbb{N}$, we have a Poisson–Erlang process, which we will denote by ${G_{N}^{(n)}}(t)$, and for $\alpha =1$ we have a Poisson process with exponentially distributed jumps. We will denote this last process by $E_{N}(t)$. Its Lévy measure is $\nu (du)=\lambda \beta {e}^{-\beta u}du$ and its Laplace exponent is
\[f(u)=\lambda \frac{u}{\beta +u},\hspace{1em}u>0.\]
The transition probability measure of $E_{N}(t)$ is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big\{E_{N}(t)\in ds\big\}& \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi (1,0,\lambda t\beta s)ds\\{} & \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{\sqrt{\lambda t\beta s}}{s}I_{1}(2\sqrt{\lambda t\beta s})ds.\end{array}\]
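As a consistency sketch (illustrative parameters), the atom at zero and the absolutely continuous part of the law of $E_{N}(t)$ should together carry total mass one, which can be checked by numerical quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

lam, beta, t = 1.2, 0.8, 1.7

def density(s):
    # absolutely continuous part of the law of E_N(t):
    # e^{-lam t - beta s} sqrt(lam t beta s)/s * I_1(2 sqrt(lam t beta s))
    z = lam * t * beta * s
    return np.exp(-lam * t - beta * s) * np.sqrt(z) / s * iv(1, 2 * np.sqrt(z))

mass_ac, _ = quad(density, 0, np.inf)
# atom e^{-lam t} plus continuous mass must equal 1
assert abs(np.exp(-lam * t) + mass_ac - 1.0) < 1e-8
```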
Remark 1.
Measures (7) belong to the class of Lévy measures
(9)
\[\nu (du)=\frac{C}{{u}^{\alpha +1}}{e}^{-\beta u}du,\hspace{1em}u>0,\]
where $C>0$, $\alpha <1$, $\beta \ge 0$. Note that:
(i) for $\alpha \in (0,1)$ and $\beta >0$ we obtain the Lévy measures of tempered stable subordinators;
(ii) for $\alpha \in (0,1)$ and $\beta =0$ we obtain the Lévy measures of stable subordinators;
(iii) for $\alpha =0$ and $\beta >0$ we obtain the Lévy measures of Gamma subordinators;
(iv) for $\alpha <0$ and $\beta >0$ we obtain the Lévy measures of compound Poisson-Gamma subordinators (cf. (7)).
The probability distributions of the above subordinators can be written in closed form in case (iii); in case (i) for $\alpha =1/2$, when we have the inverse Gaussian subordinators; and in case (iv).
In the paper [16] a deep and detailed investigation was performed for the time-changed Poisson processes where the role of time is played by the subordinators from the above cases (i)–(iii) (and, as we have already pointed out in the introduction, the most studied in the literature is the case (ii)). That paper actually deals with the general time-changed processes of the form $N({H}^{f}(t))$, where $N(t)$ is a Poisson process and ${H}^{f}(t)$ is an arbitrary subordinator with Laplace exponent f, independent of $N(t)$. This general construction falls into the framework of Bochner subordination (see, e.g., the book by Sato [18]) and can be studied by means of different approaches.
With the present paper we intend to complement the study undertaken in [16] with one more example. We consider time change by means of subordinators corresponding to the above case (iv); therefore, subordination related to the measures (9) will be covered for the whole range of parameters. An attractive feature of these processes is the closed form of their distributions, which allows one to perform exact calculations for the characteristics of the corresponding time-changed processes.
We also study Skellam processes with time change by means of compound Poisson-Gamma subordinators; in this part our results are close to the corresponding results of the paper [10].
In our paper we develop (following [16, 10], among others) an approach for studying time-changed Poisson processes via investigation of their distributional properties, with the use of the form and properties of the distributions of the processes involved.
It would also be interesting to study the above-mentioned processes within the framework of Bochner subordination via the semigroup approach. We leave this topic for future research, as well as the study of other characteristics of the processes considered in the present paper and their comparison with related results existing in the literature.
Remark 2.
It is well known that the composition of two independent stable subordinators is again a stable subordinator. If $S_{i}(t)$, $i=1,2$, are two independent stable subordinators with Laplace exponents $c_{i}{u}^{a_{i}}$, $i=1,2$, then $S_{1}(S_{2}(t))$ is the stable subordinator with index $a_{1}a_{2}$, since its Laplace exponent is $c_{2}{(c_{1})}^{a_{2}}{u}^{a_{1}a_{2}}$. More generally, iterated composition of stable subordinators $S_{1},\dots ,S_{k}$ with indices $a_{1},\dots ,a_{k}$ is the stable subordinator with index ${\prod _{i=1}^{k}}a_{i}$.
In the paper [11] it was shown that one specific subclass of Poisson-Gamma subordinators has a property similar to the above property of stable subordinators: if two processes belong to this class, then so does their composition. Namely, this is the class of compound Poisson processes with exponentially distributed jumps and parameters $\lambda =\beta =\frac{1}{a}$; therefore, the Lévy measure is of the form $\nu (du)=\frac{1}{{a}^{2}}{e}^{-u/a}du$ and the corresponding Bernštein function is $\tilde{f}_{a}(u)=\frac{u}{1+au}$. Denote such processes by ${\widetilde{E}_{N}^{a}}(t)$. Then, as can be easily checked (see also [11]), the composition of two independent processes ${\widetilde{E}_{N}^{a_{1}}}({\widetilde{E}_{N}^{a_{2}}}(t))$ has Laplace exponent $\tilde{f}_{a_{1}+a_{2}}(u)=\frac{u}{1+(a_{1}+a_{2})u}$, since the functions $\tilde{f}_{a}$ have the property
\[\tilde{f}_{a_{1}}\big(\tilde{f}_{a_{2}}(u)\big)=\frac{\tilde{f}_{a_{2}}(u)}{1+a_{1}\tilde{f}_{a_{2}}(u)}=\frac{u}{1+(a_{1}+a_{2})u}=\tilde{f}_{a_{1}+a_{2}}(u).\]
Therefore, the process ${\widetilde{E}_{N}^{a_{1}}}({\widetilde{E}_{N}^{a_{2}}}(t))$ coincides, in distribution, with ${\widetilde{E}_{N}^{a}}(t)$ with parameter $a=a_{1}+a_{2}$. More generally, the iterated composition of n independent processes
\[{\widetilde{E}_{N}^{a_{1}}}\big({\widetilde{E}_{N}^{a_{2}}}\big(\dots {\widetilde{E}_{N}^{a_{n}}}(t)\dots \big)\big)\]
is equal in distribution to the process ${\widetilde{E}_{N}^{a}}(t)$ with $a={\sum _{i=1}^{n}}a_{i}$, and letting $n\to \infty $ the iterated composition results in the process ${\widetilde{E}_{N}^{a}}(t)$ with $a={\sum _{i=1}^{\infty }}a_{i}$, provided that $0<a<\infty $. In particular, the subordinator ${\widetilde{E}_{N}^{1}}(t)$ with Laplace exponent $f(u)=\frac{u}{1+u}$ can be represented as an infinite composition of independent subordinators ${\widetilde{E}_{N}^{1/{2}^{n}}}$:
\[{\widetilde{E}_{N}^{1}}(t)\stackrel{d}{=}{\widetilde{E}_{N}^{1/2}}\big({\widetilde{E}_{N}^{1/{2}^{2}}}\big(\dots {\widetilde{E}_{N}^{1/{2}^{n}}}(\dots )\big)\big).\]
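The additivity of the parameter under composition, $\tilde{f}_{a_{1}}(\tilde{f}_{a_{2}}(u))=\tilde{f}_{a_{1}+a_{2}}(u)$, is an exact algebraic identity and can be verified, for instance, in exact rational arithmetic:

```python
from fractions import Fraction

def f(a, u):
    # Bernstein function of the process E~^a_N: f_a(u) = u / (1 + a*u)
    return u / (1 + a * u)

a1, a2 = Fraction(1, 3), Fraction(2, 5)
for u in (Fraction(1, 7), Fraction(3, 2), Fraction(10, 1)):
    # composition adds the parameters exactly
    assert f(a1, f(a2, u)) == f(a1 + a2, u)
```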
This interesting feature of the processes ${\widetilde{E}_{N}^{a}}(t)$ deserves further investigation.

4 Compound Poisson-Gamma process as time change
Let $N_{1}(t)$ be the Poisson process with intensity $\lambda _{1}$. Consider the time-changed process $N_{1}(G_{N}(t))=N_{1}(G(N(t)))$, where $G_{N}(t)$ is the compound Poisson-Gamma process (with parameters $\lambda ,\alpha ,\beta $) independent of $N_{1}(t)$.
Theorem 1.
The probability mass function of the process $X(t)=N_{1}(G_{N}(t))$ is given by
(10)
\[p_{k}(t)=P\big(X(t)=k\big)=\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k}}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\hspace{1em}\textit{for}\hspace{2.5pt}k\ge 1,\]
and
(11)
\[p_{0}(t)=\exp \bigg\{-\lambda t\bigg(1-\frac{{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\bigg)\bigg\}.\]
The probabilities $p_{k}(t)$, $k\ge 0$, satisfy the following system of difference-differential equations:
\[\frac{d}{dt}p_{k}(t)=\bigg(\frac{\lambda {\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}-\lambda \bigg)p_{k}(t)+\frac{\lambda {\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\sum \limits_{m=1}^{k}\frac{{\lambda _{1}^{m}}}{{(\lambda _{1}+\beta )}^{m}}\frac{\varGamma (m+\alpha )}{m!\varGamma (\alpha )}p_{k-m}(t).\]
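Formula (11) can be probed by simulation: generate $G_{N}(t)$, then a Poisson variable with that random mean, and compare the empirical probability of $\{X(t)=0\}$ with the closed form (a sketch; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t, lam, alpha, beta, lam1 = 1.0, 1.5, 2.0, 3.0, 2.0
n = 400_000

counts = rng.poisson(lam * t, n)
g = np.where(counts > 0, rng.gamma(np.maximum(counts, 1) * alpha, 1.0 / beta, n), 0.0)
x = rng.poisson(lam1 * g)  # X(t) = N_1(G_N(t))

# formula (11): p_0(t) = exp{-lam t (1 - beta^alpha / (lam1 + beta)^alpha)}
p0 = np.exp(-lam * t * (1 - beta**alpha / (lam1 + beta)**alpha))
assert abs(np.mean(x == 0) - p0) < 0.01
```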
In the case when $\alpha =1$, that is, when the process $G_{N}(t)$ becomes $E_{N}(t)$, the compound Poisson process with exponentially distributed jumps, the probabilities $p_{k}(t)$, $k\ge 0$, can be represented via the generalized Mittag-Leffler function, as stated in the next theorem.
Theorem 2.
Let $X(t)=N_{1}(E_{N}(t))$. Then for $k\ge 1$
(12)
\[p_{k}(t)={e}^{-\lambda t}\frac{{\lambda _{1}^{k}}\lambda \beta t}{{(\lambda _{1}+\beta )}^{k+1}}{\mathcal{E}_{1,2}^{k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg),\]
and the probabilities $p_{k}(t)$, $k\ge 0$, satisfy the following equation:
\[\frac{d}{dt}p_{k}(t)=-\frac{\lambda \lambda _{1}}{\lambda _{1}+\beta }p_{k}(t)+\frac{\lambda \beta }{\lambda _{1}+\beta }\sum \limits_{m=1}^{k}\frac{{\lambda _{1}^{m}}}{{(\lambda _{1}+\beta )}^{m}}p_{k-m}(t).\]
Proof of Theorem 1.
The probability mass function of the process $X(t)=N_{1}(G_{N}(t))$ can be obtained by standard conditioning arguments (see, e.g., the general result for subordinated Lévy processes in [18], Theorem 30.1).
For $k\ge 1$ we obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{k}(t)& \displaystyle =P\big(N_{1}\big(G_{N}(t)\big)=k\big)={\int _{0}^{\infty }}P\big(N_{1}(s)=k\big)P\big(G_{N}(t)\in ds\big)\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\bigg\{{e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)\bigg\}ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}{\lambda _{1}^{k}}}{k!}{\int _{0}^{\infty }}{e}^{-\lambda _{1}s}{s}^{k}{e}^{-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}{\lambda _{1}^{k}}}{k!}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{n!\varGamma (\alpha n)}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\beta )s}{s}^{k+\alpha n-1}ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k}}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}.\end{array}\]
For $k=0$ we have:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{0}(t)& \displaystyle ={e}^{-\lambda t}+{e}^{-\lambda t}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\\{} & \displaystyle ={e}^{-\lambda t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{{(\lambda _{1}+\beta )}^{\alpha n}n!}={e}^{-\lambda t}{e}^{\frac{\lambda t{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}}\\{} & \displaystyle =\exp \bigg\{-\lambda t\bigg(1-\frac{{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\bigg)\bigg\}.\end{array}\]
The governing equation for the probabilities $p_{k}(t)=P(N_{1}(G_{N}(t))=k)$ follows as a particular case of the general equation presented in Theorem 2.1 of [16] for the probabilities $P(N(H(t))=k)$, where $H(t)$ is an arbitrary subordinator. □
Remark 3.
Moments of the process $N_{1}(G_{N}(t))$ can be calculated, for example, from the moment generating function, which is given by
(15)
\[\mathsf{E}{e}^{\theta N_{1}(G_{N}(t))}={e}^{-t\lambda (1-{\beta }^{\alpha }{(\beta +\lambda _{1}(1-{e}^{\theta }))}^{-\alpha })}\]
for $\theta \in \mathbb{R}$ such that $\beta +\lambda _{1}(1-{e}^{\theta })\ne 0$. We have, in particular,
\[\mathsf{E}N_{1}\big(G_{N}(t)\big)=\lambda _{1}\lambda \alpha {\beta }^{-1}t,\hspace{2em}\mathsf{Var}N_{1}\big(G_{N}(t)\big)=\lambda _{1}\lambda \alpha {\beta }^{-2}\big((1+\alpha )\lambda _{1}+\beta \big)t.\]
Expressions for the probabilities $p_{k}(t)$ and for the moments were also calculated in [11] using the probability generating function of the process $N_{1}(G_{N}(t))$. In [11] the covariance function was also obtained:
(16)
\[\mathsf{Cov}\big(N_{1}\big(G_{N}(t)\big),N_{1}\big(G_{N}(s)\big)\big)=\lambda _{1}\lambda \alpha {\beta }^{-2}\big((1+\alpha )\lambda _{1}+\beta \big)\min (t,s).\]
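The first two moments above can be checked by Monte Carlo (a sketch; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
t, lam, alpha, beta, lam1 = 1.0, 1.5, 2.0, 3.0, 2.0
n = 400_000

counts = rng.poisson(lam * t, n)
g = np.where(counts > 0, rng.gamma(np.maximum(counts, 1) * alpha, 1.0 / beta, n), 0.0)
x = rng.poisson(lam1 * g)  # X(t) = N_1(G_N(t))

mean_th = lam1 * lam * alpha * t / beta                                  # E N_1(G_N(t))
var_th = lam1 * lam * alpha / beta**2 * ((1 + alpha) * lam1 + beta) * t  # Var N_1(G_N(t))
assert abs(x.mean() - mean_th) < 0.03
assert abs(x.var() - var_th) < 0.2
```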
A very detailed study of the time-changed processes $N({H}^{f}(t))$, with ${H}^{f}(t)$ being an arbitrary subordinator with Laplace exponent $f(u)$, independent of the Poisson process $N(t)$, is presented in [16]. In particular, it was shown therein that the probability generating function can be written in the form $G(u,t)={e}^{-tf(\lambda (1-u))}$, where λ is the parameter of the process $N(t)$.
Note also that in order to compute the first two moments and the covariance function of time-changed Lévy processes, the following result, stated in [14] as Theorem 2.1, can be used:
If $X(t)$ is a homogeneous Lévy process with $X(0)=0$, $Y(t)$ is a nondecreasing process independent of $X(t)$, and $Z(t)=X(Y(t))$, then
(17)
\[\mathsf{E}Z(t)=\mathsf{E}X(1)U(t),\]
provided that $\mathsf{E}X(t)$ and $U(t)=\mathsf{E}Y(t)$ exist; and if $X(t)$ and $Y(t)$ have finite second moments, then
(18)
\[\mathsf{E}{\big[Z(t)\big]}^{2}=\mathsf{Var}X(1)U(t)+{\big[\mathsf{E}X(1)\big]}^{2}\mathsf{E}{\big[Y(t)\big]}^{2},\]
(19)
\[\mathsf{Var}Z(t)=\mathsf{Var}X(1)U(t)+{\big[\mathsf{E}X(1)\big]}^{2}\mathsf{Var}Y(t),\]
(20)
\[\mathsf{Cov}\big(Z(t),Z(s)\big)=\mathsf{Var}X(1)U\big(\min (t,s)\big)+{\big[\mathsf{E}X(1)\big]}^{2}\mathsf{Cov}\big(Y(t),Y(s)\big).\]
We will use the above formulas further in the paper.

Remark 4.
Let ${\widetilde{E}}^{a}(t)={\widetilde{E}_{N}^{a}}(t)$ be the compound Poisson-exponential process with Laplace exponent $\tilde{f}_{a}(u)=\frac{u}{1+au}$, which we discussed in Remark 2 above, and let $N(t)=N_{\lambda }(t)$ be the Poisson process with parameter λ, independent of ${\widetilde{E}}^{a}(t)$. Denote ${\widetilde{N}}^{a}(t)=N({\widetilde{E}}^{a}(t))$. In view of Remark 2, for the processes ${\widetilde{N}}^{a}(t)$ we have the following property concerning double and iterated compositions:
\[{\widetilde{N}}^{a_{1}}\big({\widetilde{E}}^{a_{2}}(t)\big)=N\big({\widetilde{E}}^{a_{1}}\big({\widetilde{E}}^{a_{2}}(t)\big)\big)\stackrel{d}{=}{\widetilde{N}}^{a_{1}+a_{2}}(t)\]
and
\[{\widetilde{N}}^{a_{1}}\big({\widetilde{E}_{N}^{a_{2}}}\big(\dots {\widetilde{E}_{N}^{a_{n}}}(t)\dots \big)\big)\stackrel{d}{=}{\widetilde{N}}^{{\textstyle\sum _{i=1}^{n}}a_{i}}(t).\]
This property makes the processes ${\widetilde{N}}^{a}(t)$ similar to the space-fractional Poisson processes, which are obtained as
\[{N}^{\alpha }(t)=N\big({S}^{\alpha }(t)\big),\]
where ${S}^{\alpha }(t)$ is a stable subordinator with $\mathsf{E}{e}^{-\theta {S}^{\alpha }(t)}={e}^{-t{\theta }^{\alpha }}$, independent of $N(t)$. In the papers [15, 7] it was shown that
\[{N}^{\alpha }\big({S}^{\gamma }(t)\big)\stackrel{d}{=}{N}^{\alpha \gamma }(t)\]
and
(21)
\[{N}^{\alpha }\big({S}^{\gamma _{1}}\big(\dots {S}^{\gamma _{n}}(t)\dots \big)\big)\stackrel{d}{=}{N}^{\alpha {\textstyle\prod _{i=1}^{n}}\gamma _{i}}(t);\]
relation (21) is referred to in [7] as the auto-conservative property. This property deserves further investigation.

Remark 5.
Note that the marginal laws of the time-changed processes obtained in Theorems 1, 2 (and in the theorems that follow) can be considered as new classes of distributions; in particular, (12) and (32) below represent three-parameter distributions involving the generalized Mittag-Leffler functions.
We now consider Skellam processes $S(t)$ with time change, where the role of time is played by compound Poisson-Gamma subordinators $G_{N}(t)$ with Laplace exponent $f(u)=\lambda {\beta }^{\alpha }({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha })$, $\lambda >0$, $\alpha >0$, $\beta >0$.
Let the process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$, and let us first consider the time-changed Skellam process of type I, that is, the process
\[S_{I}(t)=S\big(G_{N}(t)\big)=N_{1}\big(G_{N}(t)\big)-N_{2}\big(G_{N}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and $G_{N}(t)$ are mutually independent.
Theorem 3.
Let $S_{_{I}}(t)=S(G_{N}(t))$. Then the probabilities $r_{k}(t)=P(S_{_{I}}(t)=k)$ are given by
The moment generating function of $S_{_{I}}(t)$ has the following form:
(22)
\[r_{k}(t)={e}^{-\lambda t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\beta )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\frac{1}{u}\varPhi \big(\alpha ,0,\lambda t{(\beta u)}^{\alpha }\big)du.\]
\[\mathsf{E}{e}^{\theta S_{I}(t)}={e}^{-\lambda t(1-{\beta }^{\alpha }{(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-\alpha })}\]
for θ such that $\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$.
Remark 6.
For the case $\alpha =1$, that is, $S_{_{I}}(t)=S(E_{N}(t))$, we obtain
\[\begin{array}{r@{\hskip0pt}l}\displaystyle r_{k}(t)& \displaystyle ={e}^{-\lambda t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\beta )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\sqrt{\frac{\lambda \beta t}{u}}I_{1}(2\sqrt{\lambda t\beta u})du,\\{} \displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={e}^{-\lambda t(\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }){(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-1}}.\end{array}\]
Proof of Theorem 3.
Using conditioning arguments, we can write
\[r_{k}(t)={\int _{0}^{\infty }}s_{k}(u)P\big(G_{N}(t)\in du\big),\]
where $s_{k}(u)=P(S(u)=k)$. Inserting the expressions for $s_{k}(u)$ and $P(G_{N}(t)\in du)$, which are given by formulas (1) and (8) respectively, we arrive at (22). The moment generating function can be obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{\theta S(u)}P\big(G_{N}(t)\in du\big)=\mathsf{E}{e}^{-f_{s}(-\theta )G_{N}(t)}\\{} & \displaystyle =\mathsf{E}{e}^{-(\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })G_{N}(t)}\\{} & \displaystyle ={e}^{-t\lambda (1-{\beta }^{\alpha }{(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-\alpha })}\end{array}\]
for θ such that $\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$. □

Remark 7.
The mean, variance and covariance function of the Skellam process $S_{I}(t)=S(G_{N}(t))$ can be calculated with the use of the general result for time-changed Lévy processes stated in [14], Theorem 2.1 (see our Remark 3, formulas (17)–(20)), and the expressions for the mean, variance and covariance of the Skellam process, which are
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S(t)& \displaystyle =(\lambda _{1}-\lambda _{2})t;\\{} \displaystyle \mathsf{Var}S(t)& \displaystyle =(\lambda _{1}+\lambda _{2})t;\\{} \displaystyle \mathsf{Cov}\big(S(t),S(s)\big)& \displaystyle =(\lambda _{1}+\lambda _{2})\min (t,s).\end{array}\]
We obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S\big(G_{N}(t)\big)& \displaystyle =(\lambda _{1}-\lambda _{2})\alpha {\beta }^{-1}\lambda t;\\{} \displaystyle \mathsf{Var}S\big(G_{N}(t)\big)& \displaystyle =\alpha {\beta }^{-2}\lambda \big[{(\lambda _{1}-\lambda _{2})}^{2}(\alpha +1)+(\lambda _{1}+\lambda _{2})\beta \big]t;\\{} \displaystyle \mathsf{Cov}\big(S\big(G_{N}(t)\big),S\big(G_{N}(s)\big)\big)& \displaystyle =\alpha {\beta }^{-2}\lambda \big[{(\lambda _{1}-\lambda _{2})}^{2}(\alpha +1)+(\lambda _{1}+\lambda _{2})\beta \big]\min (t,s).\end{array}\]
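The mean and variance can be probed by Monte Carlo simulation of $S(G_{N}(t))$ (a sketch with illustrative parameters; the theoretical variance below is computed via Theorem 2.1 of [14], i.e. $\mathsf{Var}S(1)\cdot \mathsf{E}G_{N}(t)+{(\mathsf{E}S(1))}^{2}\mathsf{Var}G_{N}(t)$):

```python
import numpy as np

rng = np.random.default_rng(4)
t, lam, alpha, beta = 1.0, 2.0, 1.5, 2.5
lam1, lam2 = 3.0, 1.0
n = 400_000

counts = rng.poisson(lam * t, n)
g = np.where(counts > 0, rng.gamma(np.maximum(counts, 1) * alpha, 1.0 / beta, n), 0.0)
s = rng.poisson(lam1 * g) - rng.poisson(lam2 * g)  # S_I(t) = S(G_N(t))

mean_th = (lam1 - lam2) * alpha * lam * t / beta
# (l1+l2) * E G_N(t) + (l1-l2)^2 * Var G_N(t), with Var G_N(t) = lam t alpha(alpha+1)/beta^2
var_th = alpha * lam / beta**2 * ((lam1 + lam2) * beta + (lam1 - lam2)**2 * (alpha + 1)) * t
assert abs(s.mean() - mean_th) < 0.05
assert abs(s.var() - var_th) < 0.3
```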
Consider now the time-changed Skellam process of type II, where the role of time is played by the subordinator $X(t)=E_{N}(t)$ with Laplace exponent $f(u)=\lambda \frac{u}{\beta +u}$, that is, the process
(23)
\[S_{\mathit{II}}(t)=N_{1}\big(X_{1}(t)\big)-N_{2}\big(X_{2}(t)\big),\]
where $X_{1}(t)$ and $X_{2}(t)$ are independent copies of $X(t)$, independent also of $N_{1}(t)$ and $N_{2}(t)$.
Theorem 4.
Let $S_{\mathit{II}}(t)$ be the time-changed Skellam process of type II given by (23). Its probability mass function is given by
(24)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\lambda t}\frac{{(\lambda t\beta )}^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k+1}(\lambda _{2}+\beta )}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}}{{(\lambda _{1}+\beta )}^{n}{(\lambda _{2}+\beta )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,2}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg){\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg)\end{array}\]
for $k\in \mathbb{Z}$, $k\ge 0$, and when $k<0$
(25)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\lambda t}\frac{{(\lambda t\beta )}^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\beta ){(\lambda _{2}+\beta )}^{|k|+1}}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}}{{(\lambda _{1}+\beta )}^{n}{(\lambda _{2}+\beta )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg){\mathcal{E}_{1,2}^{n+|k|+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg).\end{array}\]
The moment generating function is
\[\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}={e}^{-t\lambda \lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}/\hspace{0.1667em}(\beta +\lambda _{1}(1-{e}^{\theta }))}{e}^{-t\lambda \lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}/\hspace{0.1667em}(\beta +\lambda _{2}(1-{e}^{-\theta }))}\]
for θ such that $\beta +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\beta +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
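The moment generating function can be probed by simulating the two independent time-changed components (a sketch; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, beta, t = 1.0, 2.0, 1.0
lam1, lam2 = 1.5, 0.5
theta = 0.3
n = 400_000

def sample_EN(size):
    # E_N(t): compound Poisson with Exp(beta) jumps; Gamma(c, beta) given c > 0 jumps
    c = rng.poisson(lam * t, size)
    return np.where(c > 0, rng.gamma(np.maximum(c, 1), 1.0 / beta, size), 0.0)

s2 = rng.poisson(lam1 * sample_EN(n)) - rng.poisson(lam2 * sample_EN(n))

def factor(l, th):
    # one factor of the MGF: exp(-t lam l (1 - e^th) / (beta + l (1 - e^th)))
    return np.exp(-t * lam * l * (1 - np.exp(th)) / (beta + l * (1 - np.exp(th))))

mgf_th = factor(lam1, theta) * factor(lam2, -theta)
assert abs(np.mean(np.exp(theta * s2)) - mgf_th) < 0.02
```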
Proof of Theorem 4.
Using the independence of $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$, we can write
(26)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle =\sum \limits_{n=0}^{\infty }P\big\{N_{1}\big(X_{1}(t)\big)=n+k\big\}P\big\{N_{2}\big(X_{2}(t)\big)=n\big\}I_{\{k\ge 0\}}\\{} & \displaystyle \hspace{1em}+\sum \limits_{n=0}^{\infty }P\big\{N_{1}\big(X_{1}(t)\big)=n\big\}P\big\{N_{2}\big(X_{2}(t)\big)=n+|k|\big\}I_{\{k<0\}},\end{array}\]
and then we use the expressions for the probabilities of $N_{i}(E_{N}(t))$ given in Theorem 2. For $k\ge 0$ we have the expression
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle =\sum \limits_{n=0}^{\infty }{e}^{-2\lambda t}\frac{{\lambda _{1}^{n+k}}\lambda t\beta }{{(\lambda _{1}+\beta )}^{n+k+1}}{\mathcal{E}_{1,2}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg)\\{} & \displaystyle \hspace{1em}\times \frac{{\lambda _{2}^{n}}\lambda t\beta }{{(\lambda _{2}+\beta )}^{n+1}}{\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg),\end{array}\]
from which we obtain (24); in an analogous way we arrive at (25). In view of the independence of $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$, the moment generating function is obtained as the product:
\[\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}=\mathsf{E}{e}^{\theta N_{1}(X_{1}(t))}\mathsf{E}{e}^{-\theta N_{2}(X_{2}(t))},\]
and then we use expression (15) for $\alpha =1$. □

Remark 8.
The moments of $S_{\mathit{II}}(t)$ can be calculated using the moment generating function given in Theorem 4, or using the independence of the processes $N_{i}(X_{i}(t))$, $i=1,2$, and the corresponding expressions for the moments of $N_{i}(X_{i}(t))$, $i=1,2$. Since $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$ are independent, we can also easily obtain the covariance function as follows:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathsf{Cov}\big(S_{\mathit{II}}(t),S_{\mathit{II}}(s)\big)\\{} & \displaystyle \hspace{1em}=\mathsf{Cov}\big(N_{1}\big(X_{1}(t)\big),N_{1}\big(X_{1}(s)\big)\big)+\mathsf{Cov}\big(N_{2}\big(X_{2}(t)\big),N_{2}\big(X_{2}(s)\big)\big)\\{} & \displaystyle \hspace{1em}=\lambda {\beta }^{-2}\big[\beta (\lambda _{1}+\lambda _{2})+2\big({\lambda _{1}^{2}}+{\lambda _{2}^{2}}\big)\big]\min (t,s),\end{array}\]
where the expressions for the covariance function of the process $N_{1}(E_{N}(t))$ are used (see formula (16) with $\alpha =1$).

5 Inverse compound Poisson-Gamma process as time change
5.1 Inverse compound Poisson-exponential process
We first consider the process $E_{N}(t)$, the compound Poisson process with exponentially distributed jumps with Laplace exponent $f(u)=\lambda \frac{u}{\beta +u}$.
Define the inverse process (or first passage time):
(27)
\[Y(t)=\inf \big\{u\ge 0:E_{N}(u)>t\big\}.\]
It is known (see, e.g., [4]) that the process $Y(t)$ has density
(28)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle h(s,t)& \displaystyle =\lambda {e}^{-\lambda s-\beta t}I_{0}(2\sqrt{\lambda \beta st})\\{} & \displaystyle =\lambda {e}^{-\lambda s-\beta t}\varPhi (1,1,\lambda \beta st),\end{array}\]
and its Laplace transform is
(29)
\[\mathsf{E}{e}^{-\theta Y(t)}=\frac{\lambda }{\theta +\lambda }{e}^{-\beta \frac{\theta }{\theta +\lambda }t},\]
which can also be verified as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{-\theta Y(t)}& \displaystyle ={\int _{0}^{\infty }}{e}^{-\theta u}\lambda {e}^{-\lambda u-\beta t}I_{0}(2\sqrt{\lambda \beta ut})du\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\theta u}\lambda {e}^{-\lambda u-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta ut)}^{n}}{{(n!)}^{2}}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{{(n!)}^{2}}{\int _{0}^{\infty }}{e}^{-(\theta +\lambda )u}{u}^{n}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{n!{(\theta +\lambda )}^{n+1}}=\frac{\lambda }{\theta +\lambda }{e}^{-\beta \frac{\theta }{\theta +\lambda }t}.\end{array}\]
Moments of $Y(t)$ can be easily found by direct calculations. We have
(30)
\[\mathsf{E}Y(t)\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{\lambda }(\beta t+1),\hspace{2em}\mathsf{E}Y{(t)}^{2}\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{{\lambda }^{2}}\big({\beta }^{2}{t}^{2}+4\beta t+2\big),\hspace{2em}\mathsf{Var}Y(t)\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{{\lambda }^{2}}(2\beta t+1).\]
For example, the first moment can be obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}Y(t)& \displaystyle ={\int _{0}^{\infty }}uh(u,t)du={\int _{0}^{\infty }}u\lambda {e}^{-\lambda u-\beta t}I_{0}(2\sqrt{\lambda \beta ut})du\\{} & \displaystyle ={\int _{0}^{\infty }}u\lambda {e}^{-\lambda u-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta ut)}^{k}}{{(k!)}^{2}}du=\lambda {e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta t)}^{k}}{{(k!)}^{2}}{\int _{0}^{\infty }}{u}^{k+1}{e}^{-\lambda u}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta t)}^{k}(k+1)!}{{(k!)}^{2}{\lambda }^{k+2}}=\frac{1}{\lambda }{e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\beta t)}^{k}(k+1)}{k!}=\frac{1}{\lambda }(\beta t+1).\end{array}\]
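As an added numerical illustration (not part of the original exposition), the moment formulas (30) can be checked by simulating the first passage time directly; the sampler below and all parameter values are our own.

```python
import math
import random

# Sanity check of (30): simulate Y(t) = inf{u >= 0 : E_N(u) > t}, where E_N
# is the compound Poisson process with intensity lam and Exp(beta) jumps,
# and compare the empirical mean and variance with the closed forms.

def sample_Y(t, lam, beta, rng):
    """First passage time of E_N over level t; since E_N increases only by
    jumps, the passage occurs at a jump epoch."""
    total_jump = 0.0
    passage_time = 0.0
    while total_jump <= t:
        passage_time += rng.expovariate(lam)  # waiting time to the next jump
        total_jump += rng.expovariate(beta)   # Exp(beta) jump size
    return passage_time

rng = random.Random(12345)
lam, beta, t, n = 1.5, 0.8, 2.0, 200_000
samples = [sample_Y(t, lam, beta, rng) for _ in range(n)]
mean = sum(samples) / n
var = sum((y - mean) ** 2 for y in samples) / n

print(mean, (beta * t + 1) / lam)          # E Y(t) = (beta*t + 1)/lam
print(var, (2 * beta * t + 1) / lam ** 2)  # Var Y(t) = (2*beta*t + 1)/lam^2
```

The empirical moments agree with (30) up to Monte Carlo error.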
The covariance function of the process $Y(t)$ can be calculated using the results on moments of the inverse subordinators stated in [21]; we obtain:
(31)
\[\mathsf{Cov}\big(Y(t),Y(s)\big)=\frac{1}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big).\]
Remark 9.
It is known that, generally, the inverse subordinator is a process with non-stationary, non-independent increments (see, e.g., [21]). We have not investigated this question for the process $Y(t)$ here; however, we can observe that the expressions for the variance and covariance of $Y(t)$ are related in the same way as for processes with stationary independent increments.
Remark 10.
Note that, for inverse processes, an important role is played by the function $U(t)=\mathsf{E}Y(t)$, which is called the renewal function. This function can be calculated using the following formula for its Laplace transform:
\[{\int _{0}^{\infty }}U(t){e}^{-st}dt=\frac{1}{sf(s)},\]
where $f(s)$ is the Laplace exponent of the subordinator, for which $Y(t)$ is the inverse (see [21], formula (13)).
In our case we obtain
\[{\int _{0}^{\infty }}U(t){e}^{-st}dt=\frac{\beta +s}{\lambda {s}^{2}}=\frac{1}{\lambda }\bigg(\frac{\beta }{{s}^{2}}+\frac{1}{s}\bigg),\]
and by inverting this Laplace transform we come again to the expression for the first moment given in (30). The function $U(t)$ characterizes the distribution of the inverse process $Y(t)$, since all the moments of $Y(t)$ (and the mixed moments as well) can be expressed (by recursion) in terms of $U(t)$ (see [21]).
Let $N_{1}(t)$ be the Poisson process with intensity $\lambda _{1}$. Consider the time-changed process $Z(t)=N_{1}(Y(t))$, where $Y(t)$ is the inverse process given by (27), independent of $N_{1}(t)$.
Theorem 5.
The probability mass function of the process $Z(t)=N_{1}(Y(t))$ is given by
(32)
\[{p_{k}^{I}}(t)=P\big(Z(t)=k\big)={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda t\beta }{\lambda _{1}+\lambda }\bigg),\]
the mean and variance are:
\[\mathsf{E}Z(t)=\frac{\lambda _{1}}{\lambda }(\beta t+1),\hspace{2em}\mathsf{Var}Z(t)=\frac{\lambda _{1}}{\lambda }(\beta t+1)+\frac{{\lambda _{1}^{2}}}{{\lambda }^{2}}(2\beta t+1),\]
and the covariance function has the following form:
\[\mathsf{Cov}\big(Z(t),Z(s)\big)=\frac{\lambda _{1}}{\lambda }\big(\beta \min (t,s)+1\big)+\frac{{\lambda _{1}^{2}}}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big).\]
Proof of Theorem 5.
The probabilities ${p_{k}^{I}}(t)=P(Z(t)=k)$ can be obtained by means of the following calculations:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {p_{k}^{I}}(t)& \displaystyle =P\big(N_{1}\big(Y(t)\big)=k\big)={\int _{0}^{\infty }}P\big(N_{1}(s)=k\big)h(s,t)ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\lambda {e}^{-\lambda s-\beta t}I_{0}(2\sqrt{\lambda \beta st})ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s-\lambda s-\beta t}\lambda \frac{{(\lambda _{1}s)}^{k}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta st)}^{n}}{{(n!)}^{2}}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{{(n!)}^{2}}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\lambda )s}{s}^{k+n}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}\varGamma (n+k+1)}{{(n!)}^{2}{(\lambda _{1}+\lambda )}^{n+k+1}}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda t\beta }{\lambda _{1}+\lambda }\bigg).\end{array}\]
The mean and variance can be calculated using formulas (17), (19) and expressions (30). The Laplace transform is obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{-\theta N_{1}(Y(t))}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{-\theta N_{1}(t)}h(u,t)du=\mathsf{E}{e}^{-\lambda _{1}(1-{e}^{-\theta })Y(t)}\\{} & \displaystyle =\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{-\theta })\hspace{0.1667em}}\exp \bigg(-t\frac{\beta \lambda _{1}(1-{e}^{-\theta })}{\lambda +\lambda _{1}(1-{e}^{-\theta })}\bigg).\hspace{40.0pt}\end{array}\]
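As a numerical sanity check (added here; not part of the original argument), the explicit series in the penultimate line of the computation of ${p_{k}^{I}}(t)$, which does not depend on any Mittag-Leffler convention, can be evaluated directly: the probabilities should sum to one and the mean should equal $\lambda _{1}(\beta t+1)/\lambda $. Parameter values are illustrative.

```python
import math

# p_k(t) = e^{-beta t} * lam * lam1^k / k! *
#          sum_n (lam*beta*t)^n Gamma(n+k+1) / ((n!)^2 (lam1+lam)^{n+k+1}),
# the penultimate line in the proof of Theorem 5.

def pmf_Z(k, t, lam, beta, lam1, n_terms=300):
    s = 0.0
    for n in range(n_terms):
        log_term = (n * math.log(lam * beta * t)
                    + math.lgamma(n + k + 1)
                    - 2 * math.lgamma(n + 1)
                    - (n + k + 1) * math.log(lam1 + lam))
        s += math.exp(log_term)
    return math.exp(-beta * t) * lam * lam1 ** k / math.factorial(k) * s

lam, beta, lam1, t = 1.5, 0.8, 2.0, 2.0
probs = [pmf_Z(k, t, lam, beta, lam1) for k in range(150)]
total = sum(probs)
mean = sum(k * p for k, p in enumerate(probs))

print(total)                               # should be close to 1
print(mean, lam1 * (beta * t + 1) / lam)   # mean of Z(t) vs lam1*E Y(t)
```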
□
We now state the relationship, in the form of a system of differential equations, between the marginal distributions of the processes $N_{1}(E_{N}(t))$ and $N_{1}(Y(t))$, with $Y(t)$ being the inverse process for $E_{N}(t)$.
Introduce the following notations:
(36)
\[{p_{k}^{E}}(t)={p_{k}^{E}}(t,\lambda ,\beta ,\lambda _{1})=P\big(N_{1}\big(E_{N}(t)\big)=k\big),\]
(37)
\[{\hat{p}_{k}^{E}}(t)={p_{k}^{E}}(t,\beta ,\lambda ,\lambda _{1})=P\big(N_{1}\big(\widehat{E}_{N}(t)\big)=k\big);\]
for ${\hat{p}_{k}^{E}}(t)$ we changed the order of parameters λ and β, that is, parameters in E and N are interchanged, and λ and β are now parameters of E and N correspondingly. For the inverse processes we denote:
(38)
\[{p_{k}^{I}}(t)={p_{k}^{I}}(t,\lambda ,\beta ,\lambda _{1})=P\big(N_{1}\big(Y(t)\big)=k\big),\]
where $Y(t)$ is the inverse process for $E_{N}(t)={E}^{\beta }({N}^{\lambda }(t))$, and
(39)
\[{\hat{p}_{k}^{I}}(t)={p_{k}^{I}}(t,\beta ,\lambda ,\lambda _{1})=P\big(N_{1}\big(\widehat{Y}(t)\big)=k\big),\]
where $\widehat{Y}(t)$ is the inverse process for $\widehat{E}_{N}(t)={E}^{\lambda }({N}^{\beta }(t))$.
Theorem 6.
The probabilities ${p_{k}^{E}}(t)$, ${\hat{p}_{k}^{E}}(t)$ and ${p_{k}^{I}}(t)$, ${\hat{p}_{k}^{I}}(t)$ satisfy the following differential equations:
(40)
\[\frac{d}{dt}{\hat{p}_{k}^{E}}(t)=-\beta {\hat{p}_{k}^{E}}(t)+\frac{\beta }{k+1}{p_{k}^{I}}(t)\]
and
(41)
\[\frac{d}{dt}{p_{k}^{E}}(t)=-\lambda {p_{k}^{E}}(t)+\frac{\lambda }{k+1}{\hat{p}_{k}^{I}}(t).\]
If $\lambda =\beta $ then ${p_{k}^{E}}(t)$ and ${p_{k}^{I}}(t)$ satisfy the following equation:
(42)
\[\frac{d}{dt}{p_{k}^{E}}(t)=-\lambda {p_{k}^{E}}(t)+\frac{\lambda }{k+1}{p_{k}^{I}}(t).\]
Proof.
Firstly we represent the derivative $\frac{d}{dt}{\hat{p}_{k}^{E}}(t)$ in the form:
Next we notice that the following relation holds:
which can be easily checked by direct calculations.
Therefore, (43) can be written as
Comparing the second term in the r.h.s. of (44) with the expression for ${p_{k}^{I}}(t)$, we come to (40). With analogous reasonings we derive (41), and taking $\lambda =\beta $ in (40) or in (41), we obtain (42). □
(44)
\[\frac{d}{dt}{\hat{p}_{k}^{E}}(t)=-\beta {\hat{p}_{k}^{E}}(t)+{e}^{-\beta t}\frac{{\lambda _{1}^{k}}\lambda \beta }{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg).\]
Remark 11.
Consider now Skellam processes with time change, where the role of time is played by the inverse process $Y(t)$ given by (27).
Let the Skellam process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$ and let us consider the process
(45)
\[S_{I}(t)=S\big(Y(t)\big)=N_{1}\big(Y(t)\big)-N_{2}\big(Y(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and $Y(t)$ are independent.
Theorem 7.
Let $S_{I}(t)$ be a Skellam process of type I given by (45), then the probabilities $r_{k}(t)=P(S_{I}(t)=k)$, $k\in Z$, are given by
\[r_{k}(t)=\lambda {e}^{-\beta t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\lambda )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})I_{0}(2\sqrt{\lambda \beta ut})du.\]
The moment generating function is given by
(46)
\[\mathsf{E}{e}^{\theta S_{I}(t)}=\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}{e}^{-\beta tf_{s}(-\theta ){(\lambda +f_{s}(-\theta ))}^{-1}},\]
for θ such that $\lambda +f_{s}(-\theta )\ne 0$, where $f_{s}(\theta )$ is the Laplace exponent of the initial Skellam process $S(t)$.
Proof.
Using conditioning arguments, we write:
and then we insert the expressions for $s_{k}(u)$ and $h(u,t)$, which are given by formulas (1) and (28) correspondingly. The moment generating function can be calculated as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{\theta S(t)}h(u,t)du=\mathsf{E}{e}^{-f_{s}(-\theta )Y(t)}\\{} & \displaystyle =\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}{e}^{-\beta tf_{s}(-\theta ){(\lambda +f_{s}(-\theta ))}^{-1}}\\{} & \displaystyle =\frac{\lambda }{\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\hspace{0.1667em}}\exp \bigg(-\beta t\frac{\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }}{\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\hspace{0.1667em}}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$. □
Remark 12.
The expressions for mean, variance and covariance function for the Skellam process $S(Y(t))$ can be calculated analogously to corresponding calculations for the process $S(G_{N}(t))$ (see Remark 7). We obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S\big(Y(t)\big)& \displaystyle =(\lambda _{1}-\lambda _{2})\frac{\beta t+1}{\lambda };\\{} \displaystyle \mathsf{Var}S\big(Y(t)\big)& \displaystyle ={(\lambda _{1}-\lambda _{2})}^{2}\frac{2\beta t+1}{{\lambda }^{2}}+(\lambda _{1}+\lambda _{2})\frac{\beta t+1}{\lambda };\\{} \displaystyle \mathsf{Cov}\big(S\big(Y(t)\big),S\big(Y(s)\big)\big)& \displaystyle ={(\lambda _{1}-\lambda _{2})}^{2}\frac{2\beta \min (t,s)+1}{{\lambda }^{2}}+(\lambda _{1}+\lambda _{2})\frac{\beta \min (t,s)+1}{\lambda }.\end{array}\]
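These moment formulas can be compared with a Monte Carlo experiment; the following Python sketch is an added illustration (the helper samplers and all parameter values are our own, not from the original text).

```python
import math
import random

# Check the mean and variance of the type I process S(Y(t)) = N_1(Y(t)) - N_2(Y(t)).

def sample_Y(t, lam, beta, rng):
    """First passage time over level t of the compound Poisson-exponential process."""
    total_jump, passage_time = 0.0, 0.0
    while total_jump <= t:
        passage_time += rng.expovariate(lam)
        total_jump += rng.expovariate(beta)
    return passage_time

def sample_poisson(mu, rng):
    """Knuth's Poisson sampler (adequate for the small means arising here)."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(2024)
lam, beta, lam1, lam2, t, n = 1.5, 0.8, 1.2, 0.7, 2.0, 100_000
vals = []
for _ in range(n):
    y = sample_Y(t, lam, beta, rng)  # the same Y drives both counting processes
    vals.append(sample_poisson(lam1 * y, rng) - sample_poisson(lam2 * y, rng))

mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n
mean_th = (lam1 - lam2) * (beta * t + 1) / lam
var_th = ((lam1 - lam2) ** 2 * (2 * beta * t + 1) / lam ** 2
          + (lam1 + lam2) * (beta * t + 1) / lam)
print(mean, mean_th)
print(var, var_th)
```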
Consider the time-changed Skellam process of type II:
(47)
\[S_{\mathit{II}}(t)=N_{1}\big(Y_{1}(t)\big)-N_{2}\big(Y_{2}(t)\big),\]
where $Y_{1}(t)$ and $Y_{2}(t)$ are independent copies of the inverse process $Y(t)$ and independent of $N_{1}(t)$, $N_{2}(t)$.
Theorem 8.
Let $S_{\mathit{II}}(t)$ be the time-changed Skellam process of type II given by (47). Its probability mass function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\lambda )}^{k+1}(\lambda _{2}+\lambda )}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}(n+k+1)(n+1)}{{(\lambda _{1}+\lambda )}^{n}{(\lambda _{2}+\lambda )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,1}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{1,1}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\lambda }\bigg)\end{array}\]
for $k\in Z$, $k\ge 0$, and when $k<0$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\lambda ){(\lambda _{2}+\lambda )}^{|k|+1}}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}(n+|k|+1)(n+1)}{{(\lambda _{1}+\lambda )}^{n}{(\lambda _{2}+\lambda )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,1}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{1,1}^{n+|k|+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\lambda }\bigg).\end{array}\]
The moment generating function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}& \displaystyle =\frac{{\lambda }^{2}}{(\lambda +\lambda _{1}(1-{e}^{\theta }))(\lambda +\lambda _{2}(1-{e}^{-\theta }))\hspace{0.1667em}}\\{} & \displaystyle \hspace{1em}\times \exp \bigg(-\beta t\frac{\lambda _{1}(1-{e}^{\theta })}{\lambda +\lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}}\bigg)\exp \bigg(-\beta t\frac{\lambda _{2}(1-{e}^{-\theta })}{\lambda +\lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\lambda +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
Proof.
Analogously to the proof of Theorem 4 we write the expression for $P(S_{\mathit{II}}(t)=k)$ in the form (26) and then we use the expressions for probabilities $P(N_{1}(Y(t))=k)$ from Theorem 5. The moment generating function is obtained as the product: $\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}=\mathsf{E}{e}^{\theta N_{1}(Y_{1}(t))}\mathsf{E}{e}^{-\theta N_{2}(Y_{2}(t))}$, and then we use expression (35). □
Remark 13.
The moments of $S_{\mathit{II}}(t)$ can be calculated using the moment generating function given in Theorem 8, or using independence of processes $N_{i}(Y_{i}(t)),i=1,2$, and corresponding expressions for the moments of $N_{i}(Y_{i}(t)),i=1,2$. In view of mutual independence of $N_{1}(Y_{1}(t))$ and $N_{2}(Y_{2}(t))$ and using the formula (31), we obtain the covariance function:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{Cov}\big(S_{\mathit{II}}(t),S_{\mathit{II}}(s)\big)& \displaystyle =\mathsf{Cov}\big(N_{1}\big(Y_{1}(t)\big),N_{1}\big(Y_{1}(s)\big)\big)+\mathsf{Cov}\big(N_{2}\big(Y_{2}(t)\big),N_{2}\big(Y_{2}(s)\big)\big)\\{} & \displaystyle =\frac{\lambda _{1}+\lambda _{2}}{\lambda }\big(\beta \min (t,s)+1\big)+\frac{{\lambda _{1}^{2}}+{\lambda _{2}^{2}}}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big).\end{array}\]
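The moment generating function of Theorem 8 can likewise be checked by simulation; the sketch below (an added illustration with our own helper samplers and illustrative parameters) estimates $\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}$ and compares it with the closed form.

```python
import math
import random

# Monte Carlo check of the MGF of S_II(t) = N_1(Y_1(t)) - N_2(Y_2(t)),
# with Y_1, Y_2 independent copies of the inverse process Y(t).

def sample_Y(t, lam, beta, rng):
    """First passage time over level t of the compound Poisson-exponential process."""
    total_jump, passage_time = 0.0, 0.0
    while total_jump <= t:
        passage_time += rng.expovariate(lam)
        total_jump += rng.expovariate(beta)
    return passage_time

def sample_poisson(mu, rng):
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(99)
lam, beta, lam1, lam2, t, theta = 1.5, 0.8, 1.2, 0.7, 1.0, 0.25
N = 200_000
acc = 0.0
for _ in range(N):
    s = (sample_poisson(lam1 * sample_Y(t, lam, beta, rng), rng)
         - sample_poisson(lam2 * sample_Y(t, lam, beta, rng), rng))
    acc += math.exp(theta * s)
mgf_mc = acc / N

a1 = lam1 * (1 - math.exp(theta))    # exponent entering the N_1 factor
a2 = lam2 * (1 - math.exp(-theta))   # exponent entering the N_2 factor
mgf_th = (lam ** 2 / ((lam + a1) * (lam + a2))
          * math.exp(-beta * t * a1 / (lam + a1))
          * math.exp(-beta * t * a2 / (lam + a2)))
print(mgf_mc, mgf_th)
```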
5.2 Inverse compound Poisson–Erlang process
Consider now the compound Poisson–Erlang process ${G_{N}^{(n)}}(t)$, that is, the Poisson-Gamma process with $\alpha =n$.
For this case the inverse process
(48)
\[{Y}^{(n)}(t)=\inf \big\{u\ge 0:{G_{N}^{(n)}}(u)>t\big\}\]
has density of the following form:
(49)
\[q(s,t)=\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\varPhi \big(n,k,\lambda s{(\beta t)}^{n}\big).\]
The formula for the density (49) (in a different set of parameters) was obtained in [17] (Theorem 3.1) by developing the approach presented in [4]. This approach is based on calculating and inverting the Laplace transforms, taking into account the relationship between the Laplace transforms of a direct process and its inverse process. We refer for details to [4, 17] (see also Appendix A.2). It should be noted that for the compound Poisson-Gamma processes with a non-integer parameter α inverting the Laplace transforms within this approach leads to complicated infinite integrals (see again [17]).
The Laplace transform of the process ${Y}^{(n)}(t)$ can be represented in the following form:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}\big[{e}^{-\theta {Y}^{(n)}(t)}\big]& \displaystyle ={e}^{-\beta t}\sum \limits_{k=1}^{n}\sum \limits_{j=0}^{\infty }\frac{{(\beta t)}^{nj+k-1}}{\varGamma (nj+k)}{\bigg(\frac{\lambda }{\lambda +\theta }\bigg)}^{j+1}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda }{\lambda +\theta }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\theta }\bigg)\end{array}\]
(see (60) in Appendix A.2). With direct calculations, using the known form of the density of ${Y}^{(n)}(t)$, we find the following expressions for the moments:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Y}^{(n)}(t)& \displaystyle ={e}^{-\beta t}\frac{1}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big),\\{} \displaystyle \mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{2}& \displaystyle ={e}^{-\beta t}\frac{2}{{\lambda }^{2}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{3}}\big({(\beta t)}^{n}\big),\end{array}\]
and, generally, for $p\ge 1$:
\[\mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{p}={e}^{-\beta t}\frac{p!}{{\lambda }^{p}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{p+1}}\big({(\beta t)}^{n}\big)\]
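The first-moment formula above can be cross-checked deterministically against an elementary renewal representation, which we add here for illustration: the first passage of the compound Poisson-Erlang process over level $t$ occurs at the $(1+\lfloor M/n\rfloor )$-th jump epoch, where $M\sim \mathrm{Poisson}(\beta t)$ counts the underlying rate-β arrivals in $[0,t]$, so that $\mathsf{E}{Y}^{(n)}(t)=(1+\mathsf{E}\lfloor M/n\rfloor )/\lambda $ by Wald's identity. Parameter values are illustrative.

```python
import math

def prabhakar_ml(a, b, g, z, terms=120):
    """Generalized (Prabhakar) Mittag-Leffler function
    E^g_{a,b}(z) = sum_j (g)_j z^j / (j! Gamma(a*j + b)), in log-space."""
    total, log_poch = 0.0, 0.0  # log_poch = log((g)_j / j!)
    for j in range(terms):
        total += math.exp(log_poch + j * math.log(z) - math.lgamma(a * j + b))
        log_poch += math.log((g + j) / (j + 1))
    return total

lam, beta, t, n = 1.3, 0.9, 2.0, 3
x = beta * t

# Series expression: E Y^{(n)}(t) = e^{-x}/lam * sum_k x^{k-1} E^2_{n,k}(x^n)
series_val = math.exp(-x) / lam * sum(
    x ** (k - 1) * prabhakar_ml(n, k, 2, x ** n) for k in range(1, n + 1))

# Renewal representation: (1 + E floor(M/n))/lam with M ~ Poisson(x)
e_floor = sum((m // n) * math.exp(-x) * x ** m / math.factorial(m)
              for m in range(100))
renewal_val = (1 + e_floor) / lam
print(series_val, renewal_val)  # the two values should coincide
```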
(see details of calculations in Appendix A.2).Remark 14.
Using arguments similar to those in [17] (see proof of Lemma 3.11 therein), we can also derive another expression for the first moment, in terms of the two-parameter generalized Mittag-Leffler function:
(50)
\[\mathsf{E}{Y}^{(n)}(t)=\frac{\beta t}{n\lambda }+\frac{{e}^{-\beta t}}{n\lambda }\sum \limits_{k=1}^{n}(n-k+1){(\beta t)}^{k-1}\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big).\]
From (50) we can see that $\mathsf{E}{Y}^{(n)}(t)$ has a linear behavior with respect to t as $t\to \infty $:
\[\mathsf{E}{Y}^{(n)}(t)\sim \frac{\beta t}{n\lambda },\hspace{1em}t\to \infty ,\]
which should indeed be expected, since a general result holds for subordinators with finite mean: the mean of their first passage time exhibits linear behavior for large times (see, for example, [21] and references therein).
Consider the time-changed process ${Z}^{(n)}(t)=N_{1}({Y}^{(n)}(t))$, where ${Y}^{(n)}(t)$ is the inverse process given by (48), independent of $N_{1}(t)$.
Theorem 9.
The probability mass function of the process ${Z}^{(n)}(t)=N_{1}({Y}^{(n)}(t))$ is given by
(51)
\[p_{k}(t)=P\big({Z}^{(n)}(t)=k\big)={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\mathcal{E}_{n,m}^{k+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg).\]
The Laplace transform is given by
(52)
\[\mathsf{E}{e}^{-\theta {Z}^{(n)}(t)}={e}^{-\beta t}\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{-\theta })}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{1}(1-{e}^{-\theta })}\bigg).\]
Proof.
Proof is similar to that for Theorem 5. In particular, the probability mass function is obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hspace{20.0pt}p_{k}(t)& \displaystyle =P\big(N_{1}\big({Y}^{(n)}(t)\big)=k\big)\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\lambda {e}^{-\lambda s-\beta t}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\varPhi \big(n,m,\lambda s{(\beta t)}^{n}\big)ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s-\lambda s-\beta t}\lambda \frac{{(\lambda _{1}s)}^{k}}{k!}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda s{(\beta t)}^{n})}^{l}}{l!\varGamma (nl+m)}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda {(\beta t)}^{n})}^{l}}{l!\varGamma (nl+m)}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\lambda )s}{s}^{k+l}ds\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\mathcal{E}_{n,m}^{k+1}}\bigg(\frac{\lambda {(t\beta )}^{n}}{\lambda _{1}+\lambda }\bigg).\hspace{40.0pt}\end{array}\]
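As an added sanity check (not part of the original proof), the explicit double series in the third line of the computation above, which avoids any Mittag-Leffler convention, can be evaluated numerically: the probabilities should sum to one, and the mean should equal $\lambda _{1}\mathsf{E}{Y}^{(n)}(t)$, where for the check we use the renewal representation $\mathsf{E}{Y}^{(n)}(t)=(1+\mathsf{E}\lfloor M/n\rfloor )/\lambda $, $M\sim \mathrm{Poisson}(\beta t)$ (our own auxiliary identity). Parameters are illustrative.

```python
import math

# p_k(t) = e^{-beta t} lam lam1^k / k! * sum_{m=1}^{n} (beta t)^{m-1}
#          * sum_l (lam (beta t)^n)^l Gamma(k+l+1)
#                  / (l! Gamma(n*l+m) (lam1+lam)^{k+l+1})

def pmf_Zn(k, t, lam, beta, lam1, n, l_terms=150):
    x = beta * t
    log_c = math.log(lam * x ** n)
    s = 0.0
    for m in range(1, n + 1):
        for l in range(l_terms):
            s += x ** (m - 1) * math.exp(
                l * log_c + math.lgamma(k + l + 1) - math.lgamma(l + 1)
                - math.lgamma(n * l + m) - (k + l + 1) * math.log(lam1 + lam))
    return math.exp(-x) * lam * lam1 ** k / math.factorial(k) * s

lam, beta, lam1, t, n = 1.3, 0.9, 2.0, 2.0, 3
probs = [pmf_Zn(k, t, lam, beta, lam1, n) for k in range(140)]
total = sum(probs)
mean = sum(k * p for k, p in enumerate(probs))

x = beta * t
e_floor = sum((m // n) * math.exp(-x) * x ** m / math.factorial(m)
              for m in range(100))
mean_th = lam1 * (1 + e_floor) / lam
print(total)          # close to 1
print(mean, mean_th)  # should agree
```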
□
Remark 15.
The first two moments of the process ${Z}^{(n)}(t)$ can be calculated as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Z}^{(n)}(t)& \displaystyle =\mathsf{E}N_{1}(1)\mathsf{E}{Y}^{(n)}(t)={e}^{-\beta t}\frac{\lambda _{1}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big),\\{} \displaystyle \mathsf{E}{\big[{Z}^{(n)}(t)\big]}^{2}& \displaystyle =\mathsf{Var}N_{1}(1)\mathsf{E}{Y}^{(n)}(t)+{\big[\mathsf{E}N_{1}(1)\big]}^{2}\mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{2}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda _{1}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big)+{e}^{-\beta t}\frac{2{\lambda _{1}^{2}}}{{\lambda }^{2}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{3}}\big({(\beta t)}^{n}\big),\end{array}\]
and we can see that, similarly to $\mathsf{E}{Y}^{(n)}(t)$, $\mathsf{E}{Z}^{(n)}(t)$ has linear behavior as $t\to \infty $:
\[\mathsf{E}{Z}^{(n)}(t)\sim \frac{\lambda _{1}\beta t}{n\lambda },\hspace{1em}t\to \infty .\]
Let the Skellam process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$ and let us consider the process
(53)
\[{S_{I}^{(n)}}(t)=S\big({Y}^{(n)}(t)\big)=N_{1}\big({Y}^{(n)}(t)\big)-N_{2}\big({Y}^{(n)}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and ${Y}^{(n)}(t)$ are independent.
Theorem 10.
Let ${S_{I}^{(n)}}(t)$ be a Skellam process of type I given by (53), then the probabilities $r_{k}(t)=P({S_{I}^{(n)}}(t)=k)$, $k\in Z$, are given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle r_{k}(t)& \displaystyle =\lambda {e}^{-\beta t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\lambda )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\\{} & \displaystyle \hspace{1em}\times \varPhi \big(n,m,\lambda u{(\beta t)}^{n}\big)du.\end{array}\]
The moment generating function is given by
\[E\big[{e}^{\theta {S_{I}^{(n)}}(t)}\big]={e}^{-\beta t}\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +f_{s}(-\theta )}\bigg)\]
for θ such that $\lambda +f_{s}(-\theta )\ne 0$, where $f_{s}(\theta )$ is the Laplace exponent of the initial Skellam process $S(t)$.
Consider the time-changed Skellam process of type II:
(54)
\[{S_{\mathit{II}}^{(n)}}(t)=N_{1}\big(Y_{1}(t)\big)-N_{2}\big(Y_{2}(t)\big),\]
where $Y_{1}(t)$ and $Y_{2}(t)$ are independent copies of the inverse process ${Y}^{(n)}(t)$ and independent of $N_{1}(t)$, $N_{2}(t)$.
Theorem 11.
Let ${S_{\mathit{II}}^{(n)}}(t)$ be the time-changed Skellam process of type II given by (54). Its probability mass function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big({S_{\mathit{II}}^{(n)}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\lambda )}^{k+1}(\lambda _{2}+\lambda )}\sum \limits_{p=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{p}(p+k+1)(p+1)}{{(\lambda _{1}+\lambda )}^{p}{(\lambda _{2}+\lambda )}^{p}}\\{} & \displaystyle \hspace{1em}\times \sum \limits_{m_{1}=1}^{n}\sum \limits_{m_{2}=1}^{n}{(\beta t)}^{m_{1}+m_{2}-2}\hspace{0.1667em}{\mathcal{E}_{n,m_{1}}^{p+k+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{n,m_{2}}^{p+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{2}+\lambda }\bigg)\end{array}\]
for $k\in Z$, $k\ge 0$, and when $k<0$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big({S_{\mathit{II}}^{(n)}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\lambda ){(\lambda _{2}+\lambda )}^{|k|+1}}\sum \limits_{p=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{p}(p+|k|+1)(p+1)}{{(\lambda _{1}+\lambda )}^{p}{(\lambda _{2}+\lambda )}^{p}}\\{} & \displaystyle \hspace{1em}\times \sum \limits_{m_{1}=1}^{n}\sum \limits_{m_{2}=1}^{n}{(\beta t)}^{m_{1}+m_{2}-2}\hspace{0.1667em}{\mathcal{E}_{n,m_{1}}^{p+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{n,m_{2}}^{p+|k|+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{2}+\lambda }\bigg).\end{array}\]
The moment generating function is
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}\big[{e}^{\theta {S_{\mathit{II}}^{(n)}}(t)}\big]& \displaystyle ={e}^{-2\beta t}\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{1}(1-{e}^{\theta })}\bigg)\\{} & \displaystyle \hspace{1em}\times \frac{\lambda }{\lambda +\lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{2}(1-{e}^{-\theta })}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\lambda +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
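The moment generating function of Theorem 11 can also be checked by simulation. The sketch below is an added illustration (samplers, series truncations, and parameter values are our own): it simulates the inverse compound Poisson-Erlang processes, estimates $\mathsf{E}[{e}^{\theta {S_{\mathit{II}}^{(n)}}(t)}]$, and compares with the product of the two Laplace-transform factors.

```python
import math
import random

def ml(a, b, z, terms=150):
    """Two-parameter Mittag-Leffler function E_{a,b}(z) = sum_j z^j / Gamma(a*j+b)."""
    return sum(math.exp(j * math.log(z) - math.lgamma(a * j + b))
               for j in range(terms))

def sample_Yn(t, lam, beta, n, rng):
    """First passage time over level t of the compound Poisson process
    with intensity lam and Erlang(n, beta) jumps."""
    total_jump, passage_time = 0.0, 0.0
    while total_jump <= t:
        passage_time += rng.expovariate(lam)
        total_jump += sum(rng.expovariate(beta) for _ in range(n))
    return passage_time

def sample_poisson(mu, rng):
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(7)
lam, beta, lam1, lam2, t, n, theta = 1.5, 0.8, 1.2, 0.7, 1.0, 2, 0.25
N = 200_000
acc = 0.0
for _ in range(N):
    s = (sample_poisson(lam1 * sample_Yn(t, lam, beta, n, rng), rng)
         - sample_poisson(lam2 * sample_Yn(t, lam, beta, n, rng), rng))
    acc += math.exp(theta * s)
mgf_mc = acc / N

def factor(a):
    """One Laplace-transform factor: e^{-beta t} absorbed into the prefactor below."""
    x = beta * t
    return (lam / (lam + a)) * sum(
        x ** (k - 1) * ml(n, k, lam * x ** n / (lam + a)) for k in range(1, n + 1))

a1 = lam1 * (1 - math.exp(theta))
a2 = lam2 * (1 - math.exp(-theta))
mgf_th = math.exp(-2 * beta * t) * factor(a1) * factor(a2)
print(mgf_mc, mgf_th)
```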