Modern Stochastics: Theory and Applications



Compositions of Poisson and Gamma processes
Volume 4, Issue 2 (2017), pp. 161–188
Khrystyna Buchak   Lyudmyla Sakhno  

https://doi.org/10.15559/17-VMSTA79
Pub. online: 29 June 2017      Type: Research Article      Open Access

Received
9 February 2017
Revised
16 May 2017
Accepted
2 June 2017
Published
29 June 2017

Abstract

In the paper we study models of time-changed Poisson and Skellam-type processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. We obtain explicitly the probability distributions of the considered time-changed processes and discuss their properties.

1 Introduction

Stochastic processes with random time, and more general compositions of processes, are popular topics of recent studies, both in the theory of stochastic processes and in various applied areas. In financial mathematics, for instance, models with a random clock (or time change) capture more realistically the relationship between calendar time and the activity of financial markets. Models with random time also appear in reliability and queuing theory and in biological, ecological and medical research; moreover, for some problems of statistical estimation one can sample a stochastic process at random times or along the trajectory of another process. Some examples of applications are described, for example, in [5].
In the present paper we study various compositions of Poisson and Gamma processes. We consider only the cases when the processes used in the compositions are independent.
The Poisson process directed by a Gamma process and, conversely, the Gamma process directed by a Poisson process can be found, for example, in the book by W. Feller [6], where the distributions of these processes are presented as particular examples within the general framework of directed processes.
Time-changed Poisson processes have been investigated extensively in the literature. We mention, for example, the recent comprehensive study undertaken in [16], concerned with the processes $N({H}^{f}(t))$, where $N(t)$ is a Poisson process and ${H}^{f}(t)$ is an arbitrary subordinator with Laplace exponent f, independent of $N(t)$. Particular emphasis has been placed on the cases $f(u)={u}^{\alpha }$, $\alpha \in (0,1)$ (space-fractional Poisson process), $f(u)={(u+\theta )}^{\alpha }-{\theta }^{\alpha }$, $\alpha \in (0,1)$, $\theta >0$ (tempered Poisson process), and $f(u)=\log (1+u)$ (negative binomial process); in the last case we have the Poisson process with a Gamma subordinator.
The most intensively studied models of Poisson processes with time change are two fractional extensions of the Poisson process, namely, the space-fractional and the time-fractional Poisson processes, obtained by choosing a stable subordinator or its inverse process in the role of time, respectively. The literature devoted to these processes is rather voluminous; we refer, e.g., to the recent papers [7, 15, 12] (and references therein), and to the paper [14], where the correlation structure of time-changed Lévy processes was investigated and the correlation of the time-fractional Poisson process was discussed among a variety of other examples. The most recent results concern non-homogeneous fractional Poisson processes (see, e.g., [13] and references therein).
Interesting models of processes are based on the difference of two Poisson processes, the so-called Skellam processes, and their generalizations via time change. Investigations of these models can be found, for example, in [2]; we also refer to the paper [10], where fractional Skellam processes were introduced and studied.
In the present paper we study time-changed Poisson and Skellam processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. Some motivation for our study is presented in Remarks 1 and 2 in Section 3.
We obtain explicitly the probability distributions of the considered time-changed processes and their first- and second-order moments.
In particular, when the time change is performed by means of the compound Poisson-exponential subordinator or its inverse process, the corresponding probability distributions of time-changed Poisson and Skellam processes are presented in terms of generalized Mittag-Leffler functions.
We also find a relation, in the form of a differential equation, between the distribution of the Poisson process time-changed by the Poisson-exponential process and the distribution of the Poisson process time-changed by the inverse Poisson-exponential process.
The paper is organized as follows. In Section 2 we recall the definitions of the processes considered in the paper. In Section 3 we discuss the main features of the compound Poisson-Gamma process $G_{N}(t)$. In Section 4 we study Poisson and Skellam-type processes time-changed by the process $G_{N}(t)$. In Section 5 we investigate time-changed Poisson and Skellam-type processes where the role of time is played by the inverse process of $G_{N}(t)$, and we also discuss some properties of the inverse processes. The Appendix contains details of the derivation of some results stated in Section 5.

2 Preliminaries

In this section we recall the definitions of the processes which will be considered in the paper (see, e.g., [1, 3]).
The Poisson process $N(t)$ with intensity parameter $\lambda >0$ is a Lévy process with values in $\mathbb{N}\cup \{0\}$ such that each $N(t)$ has a Poisson distribution with parameter $\lambda t$, that is,
\[P\big\{N(t)=n\big\}=\frac{{(\lambda t)}^{n}}{n!}{e}^{-\lambda t};\]
the Lévy measure of $N(t)$ can be written in the form $\nu (du)=\lambda \delta _{\{1\}}(u)du$, where $\delta _{\{1\}}(u)$ is the Dirac delta centered at $u=1$; the corresponding Bernštein function (or Laplace exponent of $N(t)$) is $f(u)=\lambda (1-{e}^{-u})$, $u>0$.
The Gamma process $G(t)$ with parameters $\alpha ,\beta >0$ is a Lévy process such that each $G(t)$ follows the Gamma distribution $\varGamma (\alpha t,\beta )$, that is, has the density
\[h_{G(t)}(x)=\frac{{\beta }^{\alpha t}}{\varGamma (\alpha t)}{x}^{\alpha t-1}{e}^{-\beta x},\hspace{1em}x\ge 0.\]
The Lévy measure of $G(t)$ is $\nu (du)=\alpha {u}^{-1}{e}^{-\beta u}du$ and the corresponding Bernštein function is
\[f(u)=\alpha \log \bigg(1+\frac{u}{\beta }\bigg).\]
The Skellam process is defined as
\[S(t)=N_{1}(t)-N_{2}(t),\hspace{1em}t\ge 0,\]
where $N_{1}(t),t\ge 0$, and $N_{2}(t),t\ge 0$, are two independent Poisson processes with intensities $\lambda _{1}>0$ and $\lambda _{2}>0$, respectively.
The probability mass function of $S(t)$ is given by
(1)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle s_{k}(t)=P\big(S(t)=k\big)={e}^{-t(\lambda _{1}+\lambda _{2})}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{k/2}I_{|k|}(2t\sqrt{\lambda _{1}\lambda _{2}}),\\{} & \displaystyle \hspace{1em}k\in \mathbb{Z}=\{0,\pm 1,\pm 2,\dots \},\end{array}\]
where $I_{k}$ is the modified Bessel function of the first kind [20]:
(2)
\[I_{k}(z)=\sum \limits_{n=0}^{\infty }\frac{{(z/2)}^{2n+k}}{n!(n+k)!}.\]
The Skellam process is a Lévy process; its Lévy measure is the linear combination of two Dirac measures, $\nu (du)=\lambda _{1}\delta _{\{1\}}(u)du+\lambda _{2}\delta _{\{-1\}}(u)du$, and the corresponding Bernštein function is
\[f_{S}(\theta )={\int _{-\infty }^{\infty }}\big(1-{e}^{-\theta y}\big)\nu (dy)=\lambda _{1}\big(1-{e}^{-\theta }\big)+\lambda _{2}\big(1-{e}^{\theta }\big),\]
and the moment generating function is given by
\[\mathsf{E}{e}^{\theta S(t)}={e}^{-t(\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })},\hspace{1em}\theta \in \mathbb{R}.\]
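As a numerical illustration (our own sketch, not part of the original text), the series (2) and the probability mass function (1) can be implemented directly with the Python standard library; the truncation limits below are tuned to the chosen test parameters.

```python
import math

def bessel_i(k, z, terms=60):
    # Modified Bessel function of the first kind, series (2)
    return sum((z / 2.0) ** (2 * n + k) / (math.factorial(n) * math.factorial(n + k))
               for n in range(terms))

def skellam_pmf(k, t, lam1, lam2):
    # Probability mass function (1) of the Skellam process S(t)
    return (math.exp(-t * (lam1 + lam2)) * (lam1 / lam2) ** (k / 2.0)
            * bessel_i(abs(k), 2.0 * t * math.sqrt(lam1 * lam2)))

def skellam_mgf(theta, t, lam1, lam2):
    # Moment generating function of S(t), as displayed above
    return math.exp(-t * (lam1 + lam2 - lam1 * math.exp(theta) - lam2 * math.exp(-theta)))
```

Summing $s_{k}(t)$ over a sufficiently wide range of k recovers the total mass 1, the mean $(\lambda _{1}-\lambda _{2})t$, and the moment generating function.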
Skellam processes are considered, for example, in [2]; the Skellam distribution was introduced and studied in [19] and [9].
We will consider Skellam processes with time change
\[S_{_{I}}(t)=S\big(X(t)\big)=N_{1}\big(X(t)\big)-N_{2}\big(X(t)\big),\]
where $X(t)$ is a subordinator independent of $N_{1}(t)$ and $N_{2}(t)$, and will call such processes time-changed Skellam processes of type I. We will also consider the processes of the form
\[S_{\mathit{II}}(t)=N_{1}\big(X_{1}(t)\big)-N_{2}\big(X_{2}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ are two independent Poisson processes with intensities $\lambda _{1}>0$ and $\lambda _{2}>0$, and $X_{1}(t)$, $X_{2}(t)$ are two independent copies of a subordinator $X(t)$, which are also independent of $N_{1}(t)$ and $N_{2}(t)$, and we will call the process $S_{\mathit{II}}(t)$ a time-changed Skellam process of type II.
To represent distributions and other characteristics of the processes considered in the next sections, we will use some special functions besides the modified Bessel function introduced above. Namely, we will use the Wright function
(3)
\[\varPhi (\rho ,\delta ,z)=\sum \limits_{k=0}^{\infty }\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho \in (-1,0)\cup (0,\infty ),\hspace{2.5pt}\delta \in C,\]
for $\delta =0$, (3) simplifies as follows:
(4)
\[\varPhi (\rho ,0,z)=\sum \limits_{k=1}^{\infty }\frac{{z}^{k}}{k!\varGamma (\rho k)};\]
the two-parameter generalized Mittag-Leffler function
(5)
\[\mathcal{E}_{\rho ,\delta }(z)=\sum \limits_{k=0}^{\infty }\frac{{z}^{k}}{\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho \in (0,\infty ),\delta \in (0,\infty );\]
and the three-parameter generalized Mittag-Leffler function
(6)
\[{\mathcal{E}_{\rho ,\delta }^{\gamma }}(z)=\sum \limits_{k=0}^{\infty }\frac{\varGamma (\gamma +k)}{\varGamma (\gamma )}\frac{{z}^{k}}{k!\varGamma (\rho k+\delta )},\hspace{1em}z\in C,\hspace{2.5pt}\rho ,\delta ,\gamma \in C,\]
with $\text{ Re}(\rho )>0,\text{Re}(\delta )>0,\text{Re}(\gamma )>0$ (see, e.g., [8] for definitions and properties of these functions).
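For numerical work with these functions, the defining series can be summed directly. The sketch below (helper name ours) evaluates (6) in log-space to keep the Gamma factors within floating-point range; the elementary special cases $\gamma =1$, $\delta =1,2$ with $\rho =1$ provide a convenient test.

```python
import math

def mittag_leffler3(rho, delta, gamma, z, terms=100):
    # Three-parameter generalized Mittag-Leffler function, series (6);
    # coefficients are computed via log-Gamma to avoid overflow.
    total = 0.0
    for k in range(terms):
        log_coeff = (math.lgamma(gamma + k) - math.lgamma(gamma)
                     - math.lgamma(k + 1) - math.lgamma(rho * k + delta))
        total += math.exp(log_coeff) * z ** k
    return total
```

The two-parameter function (5) is recovered as $\mathcal{E}_{\rho ,\delta }(z)={\mathcal{E}_{\rho ,\delta }^{1}}(z)$; in particular, ${\mathcal{E}_{1,1}^{1}}(z)={e}^{z}$ and ${\mathcal{E}_{1,2}^{1}}(z)=({e}^{z}-1)/z$.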

3 Compound Poisson-Gamma process

The first example of a composition of Poisson and Gamma processes which we consider in the paper is the compound Poisson-Gamma process. This is a well-known process; however, here we would like to focus on some of its important features.
Let $N(t)$ be a Poisson process and $\{G_{n},n\ge 1\}$ be a sequence of i.i.d. Gamma random variables independent of $N(t)$. Then the compound Poisson process with Gamma-distributed jumps is defined as
\[Y(t)=\sum \limits_{n=1}^{N(t)}G_{n},\hspace{1em}t>0,\hspace{1em}Y(0)=0\]
(in the above definition it is meant that ${\sum _{n=1}^{0}}\stackrel{\mathrm{def}}{=}0$). This process can also be represented as $Y(t)=G(N(t))$, that is, as a time-changed Gamma process, where the role of time is played by the Poisson process. Let us denote this process by $G_{N}(t)$:
\[G_{N}(t)=G\big(N(t)\big).\]
Let $N(t)$ and $G(t)$ have parameters λ and $(\alpha ,\beta )$, respectively. The process $G_{N}(t)$ is a Lévy process with Laplace exponent (or Bernštein function) of the form
\[f(u)=\lambda {\beta }^{\alpha }\big({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha }\big),\hspace{1em}\lambda >0,\hspace{2.5pt}\alpha >0,\hspace{2.5pt}\beta >0,\]
and the corresponding Lévy measure is
(7)
\[\nu (du)=\lambda {\beta }^{\alpha }{\big(\varGamma (\alpha )\big)}^{-1}{u}^{\alpha -1}{e}^{-\beta u}du,\hspace{1em}\lambda >0,\hspace{2.5pt}\alpha >0,\hspace{2.5pt}\beta >0.\]
The transition probability measure of the process $G_{N}(t)$ can be written in the closed form:
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big\{G_{N}(t)\in ds\big\}& \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+\sum \limits_{n=1}^{\infty }{e}^{-\lambda t}\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{n!\varGamma (\alpha n)}{s}^{\alpha n-1}{e}^{-\beta s}ds\\{} & \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)ds,\end{array}\]
therefore, the probability law of $G_{N}(t)$ has an atom ${e}^{-\lambda t}$ at zero, that is, a discrete part $P\{G_{N}(t)=0\}={e}^{-\lambda t}$, and the density of the absolutely continuous part can be expressed in terms of the Wright function.
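Both components, the atom at zero and the absolutely continuous part, are easy to observe by simulation. The following Monte Carlo sketch (our own illustration; function names and parameter values are assumptions) draws from $G_{N}(t)$ by compounding Gamma jumps over a Poisson count:

```python
import math
import random

def sample_gn(t, lam, alpha, beta, rng):
    # One draw of G_N(t): N ~ Poisson(lam * t) jumps, each Gamma(alpha, rate beta)
    n, acc, floor = 0, rng.random(), math.exp(-lam * t)
    while acc > floor:               # Knuth's product-of-uniforms Poisson sampler
        n += 1
        acc *= rng.random()
    # random.gammavariate takes a *scale* parameter, hence 1/beta for rate beta
    return sum(rng.gammavariate(alpha, 1.0 / beta) for _ in range(n))

rng = random.Random(12345)
t, lam, alpha, beta = 1.0, 2.0, 1.5, 1.0
samples = [sample_gn(t, lam, alpha, beta, rng) for _ in range(50000)]
atom_freq = sum(1 for x in samples if x == 0.0) / len(samples)
sample_mean = sum(samples) / len(samples)
```

The empirical frequency of zeros approaches ${e}^{-\lambda t}$ and the sample mean approaches $\lambda \alpha t/\beta $, in agreement with the discussion above.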
In particular, when $\alpha =n$, $n\in \mathbb{N}$, we have a Poisson–Erlang process, which we will denote by ${G_{N}^{(n)}}(t)$, and for $\alpha =1$ we have a Poisson process with exponentially distributed jumps. We will denote this last process by $E_{N}(t)$. Its Lévy measure is $\nu (du)=\lambda \beta {e}^{-\beta u}du$ and its Laplace exponent is
\[f(u)=\lambda \frac{u}{\beta +u}.\]
The transition probability measure of $E_{N}(t)$ is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big\{E_{N}(t)\in ds\big\}& \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi (1,0,\lambda t\beta s)ds\\{} & \displaystyle ={e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{\sqrt{\lambda t\beta s}}{s}I_{1}(2\sqrt{\lambda t\beta s})ds.\end{array}\]
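The second equality above rests on the identity $\varPhi (1,0,z)=\sqrt{z}\hspace{0.1667em}I_{1}(2\sqrt{z})$, which follows by comparing the series (4) and (2) term by term. A small numerical confirmation (our own sketch, not part of the original text):

```python
import math

def wright_phi(rho, z, terms=80):
    # Wright function Phi(rho, 0, z), series (4)
    return sum(z ** k / (math.factorial(k) * math.gamma(rho * k))
               for k in range(1, terms))

def sqrt_bessel_i1(z, terms=80):
    # sqrt(z) * I_1(2 * sqrt(z)) via the Bessel series (2)
    w = math.sqrt(z)
    return w * sum(w ** (2 * n + 1) / (math.factorial(n) * math.factorial(n + 1))
                   for n in range(terms))
```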
Remark 1.
Measures (7) belong to the class of Lévy measures
(9)
\[\nu (du)=c{u}^{-a-1}{e}^{-bu}du,\hspace{1em}c>0,\hspace{2.5pt}a<1,\hspace{2.5pt}b>0.\]
Note that:
  • (i) for the range $a\in (0,1)$ and $b>0$ we obtain the tempered Poisson processes,
  • (ii) the limiting case when $a\in (0,1)$, $b=0$ corresponds to stable subordinators,
  • (iii) the case $a=0$, $b>0$ corresponds to Gamma subordinators,
  • (iv) for $a<0$ we have compound Poisson-Gamma subordinators.
Probability distributions of the above subordinators can be written in closed form in case (iii); in case (i) for $a=1/2$, when we have the inverse Gaussian subordinators; and in case (iv).
In the paper [16] a deep and detailed investigation was performed for time-changed Poisson processes where the role of time is played by subordinators from the above cases (i)–(iii) (and, as we have already pointed out in the introduction, the most studied case in the literature is (ii)). The mentioned paper actually deals with general time-changed processes of the form $N({H}^{f}(t))$, where $N(t)$ is a Poisson process and ${H}^{f}(t)$ is an arbitrary subordinator with Laplace exponent f, independent of $N(t)$. This general construction falls into the framework of Bochner subordination (see, e.g., the book by Sato [18]) and can be studied by means of different approaches.
With the present paper we intend to complement the study undertaken in [16] with one more example. We consider time change by means of subordinators corresponding to case (iv) above, and, therefore, subordination related to the measures (9) will be covered for the whole range of parameters. We also point out that an attractive feature of these processes is the closed form of their distributions, which allows one to perform exact calculations for the characteristics of the corresponding time-changed processes.
We also study Skellam processes with time change by means of compound Poisson-Gamma subordinators; in this part our results are close to the corresponding results of the paper [10].
In our paper we develop (following [16, 10], among others) an approach for studying time-changed Poisson processes via investigation of their distributional properties, with the use of the form and properties of the distributions of the processes involved.
It would also be interesting to study the above-mentioned processes within the framework of Bochner subordination via the semigroup approach. We leave this topic for future research, as well as the study of other characteristics of the processes considered in the present paper and their comparison with related results existing in the literature.
Note that, for convenience, in our exposition we will write the Lévy measures (9) for the case of compound Poisson-Gamma subordinators in the form (7), with $\alpha >0$ and transparent meaning of the parameters.
Remark 2.
It is well known that the composition of two independent stable subordinators is again a stable subordinator. If $S_{i}(t)$, $i=1,2$, are two independent stable subordinators with Laplace exponents $c_{i}{u}^{a_{i}}$, $i=1,2$, then $S_{1}(S_{2}(t))$ is a stable subordinator with index $a_{1}a_{2}$, since its Laplace exponent is $c_{2}{(c_{1})}^{a_{2}}{u}^{a_{1}a_{2}}$. More generally, the iterated composition of stable subordinators $S_{1},\dots ,S_{k}$ with indices $a_{1},\dots ,a_{k}$ is a stable subordinator with index ${\prod _{i=1}^{k}}a_{i}$.
In the paper [11] it was shown that one specific subclass of Poisson-Gamma subordinators has a property similar to the property of stable subordinators described above: if two processes belong to this class, so does their composition. Namely, this is the class of compound Poisson processes with exponentially distributed jumps and parameters $\lambda =\beta =\frac{1}{a}$; therefore, the Lévy measure is of the form $\nu (du)=\frac{1}{{a}^{2}}{e}^{-u/a}du$ and the corresponding Bernštein function is $\tilde{f}_{a}(u)=\frac{u}{1+au}$. Denote such processes by ${\widetilde{E}_{N}^{a}}(t)$. Then, as can be easily checked (see also [11]), the composition of two independent processes ${\widetilde{E}_{N}^{a_{1}}}({\widetilde{E}_{N}^{a_{2}}}(t))$ has Laplace exponent $\tilde{f}_{a_{1}+a_{2}}(u)=\frac{u}{1+(a_{1}+a_{2})u}$, since the functions
\[\tilde{f}_{a}(u)=\frac{u}{1+au}\]
have the property
\[\tilde{f}_{a}\big(\tilde{f}_{b}(u)\big)=\tilde{f}_{a+b}(u).\]
Therefore, the process ${\widetilde{E}_{N}^{a_{1}}}({\widetilde{E}_{N}^{a_{2}}}(t))$ coincides, in distribution, with ${\widetilde{E}_{N}^{a}}(t)$, with a parameter $a=a_{1}+a_{2}$. More generally, iterated composition of n independent processes
\[{\widetilde{E}_{N}^{a_{1}}}\big({\widetilde{E}_{N}^{a_{2}}}\big(\dots {\widetilde{E}_{N}^{a_{n}}}(t)\dots \big)\big)\]
is equal in distribution to the process ${\widetilde{E}_{N}^{a}}(t)$ with $a={\sum _{i=1}^{n}}a_{i}$, and the n-th iterated composition as $n\to \infty $ results in the process ${\widetilde{E}_{N}^{a}}(t)$ with $a={\sum _{i=1}^{\infty }}a_{i}$, provided that $0<a<\infty $. In particular, the subordinator ${\widetilde{E}_{N}^{1}}(t)$ with Laplace exponent $f(u)=\frac{u}{1+u}$ can be represented as the infinite composition of independent subordinators ${\widetilde{E}_{N}^{1/{2}^{n}}}$:
\[{\widetilde{E}_{N}^{1}}(t)\stackrel{d}{=}{\widetilde{E}_{N}^{1/2}}\big({\widetilde{E}_{N}^{1/{2}^{2}}}\big(\dots {\widetilde{E}_{N}^{1/{2}^{n}}}(\dots )\big)\big).\]
This interesting feature of the processes ${\widetilde{E}_{N}^{a}}(t)$ deserves further investigation.
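The closure of the family $\{\tilde{f}_{a}\}$ under composition is elementary to verify numerically; the sketch below (our own illustration) checks $\tilde{f}_{a}\circ \tilde{f}_{b}=\tilde{f}_{a+b}$ and approximates the exponent of ${\widetilde{E}_{N}^{1}}(t)$ by composing the exponents for $a_{n}=1/{2}^{n}$:

```python
def f_tilde(a, u):
    # Bernstein function (Laplace exponent) of the subordinator E~^a_N
    return u / (1.0 + a * u)

# composing the exponents for a_n = 1/2^n, n = 1..29, approximates u / (1 + u),
# the exponent of E~^1_N, since the partial sums of 1/2^n approach 1
u = 0.9
composed = u
for n in range(1, 30):
    composed = f_tilde(1.0 / 2 ** n, composed)
```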

4 Compound Poisson-Gamma process as time change

Let $N_{1}(t)$ be the Poisson process with intensity $\lambda _{1}$. Consider the time-changed process $N_{1}(G_{N}(t))=N_{1}(G(N(t)))$, where $G_{N}(t)$ is the compound Poisson-Gamma process (with parameters $\lambda ,\alpha ,\beta $), independent of $N_{1}(t)$.
Theorem 1.
The probability mass function of the process $X(t)=N_{1}(G_{N}(t))$ is given by
(10)
\[p_{k}(t)=P\big(X(t)=k\big)=\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k}}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\hspace{1em}\textit{for}\hspace{2.5pt}k\ge 1,\]
and
(11)
\[p_{0}(t)=\exp \bigg\{-\lambda t\bigg(1-\frac{{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\bigg)\bigg\}.\]
The probabilities $p_{k}(t)$, $k\ge 0$, satisfy the following system of difference-differential equations:
\[\frac{d}{dt}p_{k}(t)=\bigg(\frac{\lambda {\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}-\lambda \bigg)p_{k}(t)+\frac{\lambda {\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\sum \limits_{m=1}^{k}\frac{{\lambda _{1}^{m}}}{{(\lambda _{1}+\beta )}^{m}}\frac{\varGamma (m+\alpha )}{m!\varGamma (\alpha )}p_{k-m}(t).\]
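As a numerical sanity check of (10)–(11) (our own sketch, not part of the original derivation), the inner series can be summed in log-space; with the parameters chosen below, the probabilities add up to one and reproduce the mean $\lambda _{1}\lambda \alpha {\beta }^{-1}t$ given in Remark 3.

```python
import math

def pmf_poisson_gn(k, t, lam, alpha, beta, lam1, nmax=80):
    # Probability mass function (10)-(11) of X(t) = N1(G_N(t)),
    # with the inner series evaluated via log-Gamma to avoid overflow
    if k == 0:
        return math.exp(-lam * t * (1.0 - (beta / (lam1 + beta)) ** alpha))
    log_c = math.log(lam * t) + alpha * math.log(beta / (lam1 + beta))
    s = sum(math.exp(n * log_c - math.lgamma(n + 1)
                     + math.lgamma(alpha * n + k) - math.lgamma(alpha * n))
            for n in range(1, nmax))
    return math.exp(-lam * t) / math.factorial(k) * (lam1 / (lam1 + beta)) ** k * s
```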
In the case $\alpha =1$, that is, when the process $G_{N}(t)$ becomes $E_{N}(t)$, the compound Poisson process with exponentially distributed jumps, the probabilities $p_{k}(t)$, $k\ge 0$, can be represented via the generalized Mittag-Leffler function, as stated in the next theorem.
Theorem 2.
Let $X(t)=N_{1}(E_{N}(t))$. Then for $k\ge 1$
(12)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {p_{k}^{E}}(t)& \displaystyle =P\big(X(t)=k\big)={e}^{-\lambda t}\frac{{\lambda _{1}^{k}}\lambda \beta t}{{(\lambda _{1}+\beta )}^{k+1}}{\mathcal{E}_{1,2}^{k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg);\end{array}\]
(13)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{0}(t)& \displaystyle =\exp \bigg\{-\frac{\lambda \lambda _{1}t}{\lambda _{1}+\beta }\bigg\},\end{array}\]
and the probabilities ${p_{k}^{E}}(t)$, $k\ge 0$, satisfy the following equation:
(14)
\[\frac{d}{dt}{p_{k}^{E}}(t)=-\lambda \frac{\lambda _{1}}{\lambda _{1}+\beta }{p_{k}^{E}}(t)+\frac{\lambda \beta }{\lambda _{1}+\beta }\sum \limits_{m=1}^{k}{\bigg(\frac{\lambda _{1}}{\lambda _{1}+\beta }\bigg)}^{m}{p_{k-m}^{E}}(t).\]
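The passage from Theorem 1 to Theorem 2 can also be confirmed numerically: for $\alpha =1$, the series in (10) and the Mittag-Leffler representation (12) agree. A sketch of ours (helper names assumed):

```python
import math

def ml3(rho, delta, gamma, z, terms=100):
    # three-parameter Mittag-Leffler function, series (6), via log-Gamma
    return sum(math.exp(math.lgamma(gamma + j) - math.lgamma(gamma)
                        - math.lgamma(j + 1) - math.lgamma(rho * j + delta)) * z ** j
               for j in range(terms))

def pk_series_alpha1(k, t, lam, beta, lam1, nmax=60):
    # formula (10) specialized to alpha = 1
    x = lam * t * beta / (lam1 + beta)
    s = sum(math.exp(n * math.log(x) + math.lgamma(n + k)
                     - math.lgamma(n + 1) - math.lgamma(n))
            for n in range(1, nmax))
    return math.exp(-lam * t) / math.factorial(k) * (lam1 / (lam1 + beta)) ** k * s

def pk_mittag_leffler(k, t, lam, beta, lam1):
    # formula (12)
    x = lam * beta * t / (lam1 + beta)
    return (math.exp(-lam * t) * lam1 ** k * lam * beta * t
            / (lam1 + beta) ** (k + 1) * ml3(1.0, 2.0, k + 1.0, x))
```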
Proof of Theorem 1.
The probability mass function of the process $X(t)=N_{1}(G_{N}(t))$ can be obtained by standard conditioning arguments (see, e.g., the general result for subordinated Lévy processes in [18], Theorem 30.1).
For $k\ge 1$ we obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{k}(t)& \displaystyle =P\big(N_{1}\big(G_{N}(t)\big)=k\big)={\int _{0}^{\infty }}P\big(N_{1}(s)=k\big)P\big(G_{N}(t)\in ds\big)\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\bigg\{{e}^{-\lambda t}\delta _{\{0\}}(ds)+{e}^{-\lambda t-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)\bigg\}ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}{\lambda _{1}^{k}}}{k!}{\int _{0}^{\infty }}{e}^{-\lambda _{1}s}{s}^{k}{e}^{-\beta s}\frac{1}{s}\varPhi \big(\alpha ,0,\lambda t{(\beta s)}^{\alpha }\big)ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}{\lambda _{1}^{k}}}{k!}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{n!\varGamma (\alpha n)}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\beta )s}{s}^{k+\alpha n-1}ds\\{} & \displaystyle =\frac{{e}^{-\lambda t}}{k!}\frac{{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k}}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n+k)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}.\end{array}\]
For $k=0$ we have:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle p_{0}(t)& \displaystyle ={e}^{-\lambda t}+{e}^{-\lambda t}\sum \limits_{n=1}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}\varGamma (\alpha n)}{{(\lambda _{1}+\beta )}^{\alpha n}n!\varGamma (\alpha n)}\\{} & \displaystyle ={e}^{-\lambda t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda t{\beta }^{\alpha })}^{n}}{{(\lambda _{1}+\beta )}^{\alpha n}n!}={e}^{-\lambda t}{e}^{\frac{\lambda t{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}}\\{} & \displaystyle =\exp \bigg\{-\lambda t\bigg(1-\frac{{\beta }^{\alpha }}{{(\lambda _{1}+\beta )}^{\alpha }}\bigg)\bigg\}.\end{array}\]
The governing equation for the probabilities $p_{k}(t)=P(N_{1}(G_{N}(t))=k)$ follows as a particular case of the general equation presented in Theorem 2.1 of [16] for the probabilities $P(N(H(t))=k)$, where $H(t)$ is an arbitrary subordinator.  □
Proof of Theorem 2.
We obtain statements of Theorem 2 as consequences of corresponding statements of Theorem 1, using the expansion (6) for three-parameter generalized Mittag-Leffler function.  □
Remark 3.
Moments of the process $N_{1}(G_{N}(t))$ can be calculated, for example, from the moment generating function which is given by:
(15)
\[\mathsf{E}{e}^{\theta N_{1}(G_{N}(t))}={e}^{-t\lambda (1-{\beta }^{\alpha }{(\beta +\lambda _{1}(1-{e}^{\theta }))}^{-\alpha })},\]
for $\theta \in \mathbb{R}$ such that $\beta +\lambda _{1}(1-{e}^{\theta })\ne 0$. We have, in particular,
\[\mathsf{E}N_{1}\big(G_{N}(t)\big)=\lambda _{1}\lambda \alpha {\beta }^{-1}t,\hspace{2em}\mathsf{Var}N_{1}\big(G_{N}(t)\big)=\lambda _{1}\lambda \alpha {\beta }^{-2}\big((1+\alpha )\lambda _{1}+\beta \big)t.\]
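These expressions can be cross-checked against (15) by numerical differentiation of the moment generating function at $\theta =0$ (a sketch of ours; the step size and parameter values are arbitrary):

```python
import math

def mgf_poisson_gn(theta, t, lam, alpha, beta, lam1):
    # moment generating function (15) of N1(G_N(t))
    return math.exp(-t * lam * (1.0 - beta ** alpha
                    * (beta + lam1 * (1.0 - math.exp(theta))) ** (-alpha)))

t, lam, alpha, beta, lam1 = 1.0, 1.0, 0.7, 1.0, 0.5
h = 1e-4
mp = mgf_poisson_gn(h, t, lam, alpha, beta, lam1)
mm = mgf_poisson_gn(-h, t, lam, alpha, beta, lam1)
m0 = mgf_poisson_gn(0.0, t, lam, alpha, beta, lam1)
mean = (mp - mm) / (2.0 * h)                      # central difference: E N1(G_N(t))
var = (mp - 2.0 * m0 + mm) / h ** 2 - mean ** 2   # second central difference minus mean^2
```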
Expressions for the probabilities $p_{k}(t)$ and for the moments were also calculated in [11] using the probability generating function of the process $N_{1}(G_{N}(t))$. In [11] the covariance function was also obtained:
(16)
\[\mathsf{Cov}\big(N_{1}\big(G_{N}(t)\big),N_{1}\big(G_{N}(s)\big)\big)=\lambda _{1}\lambda \alpha {\beta }^{-2}\big((1+\alpha )\lambda _{1}+\beta \big)\min (t,s).\]
A very detailed study of the time-changed processes $N({H}^{f}(t))$, with ${H}^{f}(t)$ an arbitrary subordinator with Laplace exponent $f(u)$, independent of the Poisson process $N(t)$, is presented in [16]. In particular, it was shown therein that the probability generating function can be written in the form $G(u,t)={e}^{-tf(\lambda (1-u))}$, where λ is the parameter of the process $N(t)$.
Note also that in order to compute the first two moments and covariance function of time-changed Lévy processes the following result, stated in [14] as Theorem 2.1, can be used:
If $X(t)$ is a homogeneous Lévy process with $X(0)=0$, $Y(t)$ is a nondecreasing process independent of $X(t)$ and $Z(t)=X(Y(t))$, then
(17)
\[\mathsf{E}Z(t)=\mathsf{E}X(1)U(t),\]
provided that $\mathsf{E}X(t)$ and $U(t)=\mathsf{E}Y(t)$ exist;
and if $X(t)$ and $Y(t)$ have finite second moments, then
(18)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{\big[Z(t)\big]}^{2}& \displaystyle =\mathsf{Var}X(1)U(t)+{\big[\mathsf{E}X(1)\big]}^{2}\mathsf{E}{\big[Y(t)\big]}^{2},\end{array}\]
(19)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{Var}Z(t)& \displaystyle ={\big[\mathsf{E}X(1)\big]}^{2}\mathsf{Var}Y(t)+\mathsf{Var}X(1)U(t),\end{array}\]
(20)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{Cov}\big(Z(t),Z(s)\big)& \displaystyle ={\big[\mathsf{E}X(1)\big]}^{2}\mathsf{Cov}\big(Y(t),Y(s)\big)+\mathsf{Var}X(1)U\big(\min (t,s)\big).\end{array}\]
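As a simple illustration of (17)–(20) (our own example, not from the original text), take both $X$ and $Y$ to be Poisson, so that $Z(t)=N_{1}(N_{2}(t))$. The moments computed from the exact conditional distribution then match $\mathsf{E}Z(t)=\lambda _{1}\lambda _{2}t$ and, by (19), $\mathsf{Var}Z(t)={\lambda _{1}^{2}}\lambda _{2}t+\lambda _{1}\lambda _{2}t$:

```python
import math

def pmf_poisson_poisson(k, t, lam1, lam2, nmax=150):
    # P(N1(N2(t)) = k), conditioning on N2(t) = n; log-space to avoid overflow
    total = math.exp(-lam2 * t) if k == 0 else 0.0   # the n = 0 term
    for n in range(1, nmax):
        total += math.exp(-lam1 * n + k * math.log(lam1 * n) - math.lgamma(k + 1)
                          - lam2 * t + n * math.log(lam2 * t) - math.lgamma(n + 1))
    return total
```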
We will use the above formulas further in the paper.
Remark 4.
Let ${\widetilde{E}}^{a}(t)={\widetilde{E}_{N}^{a}}(t)$ be the compound Poisson-exponential process with Laplace exponent $\tilde{f}_{a}(u)=\frac{u}{1+au}$, which we discussed in Remark 2 above, and let $N(t)=N_{\lambda }(t)$ be the Poisson process with parameter λ, independent of ${\widetilde{E}}^{a}(t)$. Denote ${\widetilde{N}}^{a}(t)=N({\widetilde{E}}^{a}(t))$. In view of Remark 2, for the processes ${\widetilde{N}}^{a}(t)$ we have the following property concerning double and iterated compositions:
\[{\widetilde{N}}^{a_{1}}\big({\widetilde{E}}^{a_{2}}(t)\big)=N\big({\widetilde{E}}^{a_{1}}\big({\widetilde{E}}^{a_{2}}(t)\big)\big)\stackrel{d}{=}{\widetilde{N}}^{a_{1}+a_{2}}(t)\]
and
\[{\widetilde{N}}^{a_{1}}\big({\widetilde{E}_{N}^{a_{2}}}\big(\dots {\widetilde{E}_{N}^{a_{n}}}(t)\dots \big)\big)\stackrel{d}{=}{\widetilde{N}}^{{\textstyle\sum _{i=1}^{n}}a_{i}}(t).\]
This property makes processes ${\widetilde{N}}^{a}(t)$ similar to the space-fractional Poisson processes, which are obtained as
\[{N}^{\alpha }(t)=N\big({S}^{\alpha }(t)\big)\]
where ${S}^{\alpha }(t)$ is a stable subordinator with $\mathsf{E}{e}^{-\theta {S}^{\alpha }(t)}={e}^{-t{\theta }^{\alpha }}$. In the papers [15, 7] it was shown that
\[{N}^{\alpha }\big({S}^{\gamma }(t)\big)\stackrel{d}{=}{N}^{\alpha \gamma }(t)\]
and
(21)
\[{N}^{\alpha }\big({S}^{\gamma _{1}}\big(\dots {S}^{\gamma _{n}}(t)\dots \big)\big)\stackrel{d}{=}{N}^{\alpha {\textstyle\prod _{i=1}^{n}}\gamma _{i}}(t),\]
relation (21) is referred to in [7] as the auto-conservative property. This property deserves further investigation.
Remark 5.
Note that the marginal laws of the time-changed processes obtained in Theorems 1 and 2 (and in the subsequent theorems) can be considered as new classes of distributions; in particular, (12) and (32) below represent three-parameter distributions involving the generalized Mittag-Leffler functions.
We now consider Skellam processes $S(t)$ with time change, where the role of time is played by compound Poisson-Gamma subordinators $G_{N}(t)$ with Laplace exponent $f(u)=\lambda {\beta }^{\alpha }({\beta }^{-\alpha }-{(\beta +u)}^{-\alpha })$, $\lambda >0$, $\alpha >0$, $\beta >0$.
Let the process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$ and let us first consider the time-changed Skellam process of type I, that is, the process
\[S_{_{I}}(t)=S\big(G_{N}(t)\big)=N_{1}\big(G_{N}(t)\big)-N_{2}\big(G_{N}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and $G_{N}(t)$ are mutually independent.
Theorem 3.
Let $S_{_{I}}(t)=S(G_{N}(t))$. Then the probabilities $r_{k}(t)=P(S_{_{I}}(t)=k)$ are given by
(22)
\[r_{k}(t)={e}^{-\lambda t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\beta )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\frac{1}{u}\varPhi \big(\alpha ,0,\lambda t{(\beta u)}^{\alpha }\big)du.\]
The moment generating function of $S_{_{I}}(t)$ has the following form:
\[\mathsf{E}{e}^{\theta S_{I}(t)}={e}^{-\lambda t(1-{\beta }^{\alpha }{(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-\alpha })}\]
for θ such that $\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$.
Remark 6.
For the case $\alpha =1$, that is, $S_{_{I}}(t)=S(E_{N}(t))$, we obtain
\[\begin{array}{r@{\hskip0pt}l}\displaystyle r_{k}(t)& \displaystyle ={e}^{-\lambda t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\beta )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\sqrt{\frac{\lambda \beta t}{u}}I_{1}(2\sqrt{\lambda t\beta u})du,\\{} \displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={e}^{-\lambda t(\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }){(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-1}}.\end{array}\]
Proof of Theorem 3.
Using conditioning arguments, we can write:
\[r_{k}(t)={\int _{0}^{\infty }}s_{k}(u)P\big(G_{N}(t)\in du\big),\]
where $s_{k}(u)=P(S(u)=k)$. Inserting the expressions for $s_{k}(u)$ and $P(G_{N}(t)\in du)$, which are given by formulas (1) and (8), respectively, we arrive at (22). The moment generating function can be obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{\theta S(u)}P\big(G_{N}(t)\in du\big)=\mathsf{E}{e}^{-f_{S}(-\theta )G_{N}(t)}\\{} & \displaystyle =\mathsf{E}{e}^{-(\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })G_{N}(t)}\\{} & \displaystyle ={e}^{-t\lambda (1-{\beta }^{\alpha }{(\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta })}^{-\alpha })}\end{array}\]
for θ such that $\beta +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$.  □
Remark 7.
The mean, variance and covariance function of the Skellam process $S_{I}(t)=S(G_{N}(t))$ can be calculated with the use of the general result for time-changed Lévy processes stated in [14], Theorem 2.1 (see our Remark 3, formulas (17)–(20)), and the expressions for the mean, variance and covariance of the Skellam process, which are
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S(t)& \displaystyle =(\lambda _{1}-\lambda _{2})t;\\{} \displaystyle \mathsf{Var}S(t)& \displaystyle =(\lambda _{1}+\lambda _{2})t;\\{} \displaystyle \mathsf{Cov}\big(S(t),S(s)\big)& \displaystyle =(\lambda _{1}+\lambda _{2})\min (t,s).\end{array}\]
We obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S\big(G_{N}(t)\big)& \displaystyle =(\lambda _{1}-\lambda _{2})\alpha {\beta }^{-1}\lambda t;\\{} \displaystyle \mathsf{Var}S\big(G_{N}(t)\big)& \displaystyle =\alpha {\beta }^{-2}\lambda \big[{(\lambda _{1}-\lambda _{2})}^{2}(\alpha +1)+(\lambda _{1}+\lambda _{2})\beta \big]t;\\{} \displaystyle \mathsf{Cov}\big(S\big(G_{N}(t)\big),S\big(G_{N}(s)\big)\big)& \displaystyle =\alpha {\beta }^{-2}\lambda \big[{(\lambda _{1}-\lambda _{2})}^{2}(\alpha +1)+(\lambda _{1}+\lambda _{2})\beta \big]\min (t,s).\end{array}\]
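These moments, obtained from (17)–(20) with $U(t)=\lambda \alpha {\beta }^{-1}t$ and $\mathsf{Var}\hspace{0.1667em}G_{N}(t)=\lambda \alpha (\alpha +1){\beta }^{-2}t$, can also be checked by numerically differentiating the moment generating function of Theorem 3 at $\theta =0$ (a sketch of ours; parameters are arbitrary):

```python
import math

def mgf_skellam_gn(theta, t, lam, alpha, beta, lam1, lam2):
    # moment generating function of S_I(t) = S(G_N(t)) from Theorem 3
    g = beta + lam1 + lam2 - lam1 * math.exp(theta) - lam2 * math.exp(-theta)
    return math.exp(-lam * t * (1.0 - beta ** alpha * g ** (-alpha)))

t, lam, alpha, beta, lam1, lam2 = 1.0, 1.0, 0.7, 1.0, 0.8, 0.3
h = 1e-4
mp = mgf_skellam_gn(h, t, lam, alpha, beta, lam1, lam2)
mm = mgf_skellam_gn(-h, t, lam, alpha, beta, lam1, lam2)
m0 = mgf_skellam_gn(0.0, t, lam, alpha, beta, lam1, lam2)
mean = (mp - mm) / (2.0 * h)                      # central difference for the mean
var = (mp - 2.0 * m0 + mm) / h ** 2 - mean ** 2   # second central difference minus mean^2
```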
Consider now the time-changed Skellam process of type II, where the role of time is played by the subordinator $X(t)=E_{N}(t)$ with Laplace exponent $f(u)=\lambda \frac{u}{\beta +u}$, that is, the process
(23)
\[S_{\mathit{II}}(t)=N_{1}\big(X_{1}(t)\big)-N_{2}\big(X_{2}(t)\big),\]
where $X_{1}(t)$ and $X_{2}(t)$ are independent copies of $X(t)$ and independent of $N_{1}(t)$, $N_{2}(t)$.
Theorem 4.
Let $S_{\mathit{II}}(t)$ be the time-changed Skellam process of type II given by (23). Its probability mass function is given by
(24)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\lambda t}\frac{{(\lambda t\beta )}^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\beta )}^{k+1}(\lambda _{2}+\beta )}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}}{{(\lambda _{1}+\beta )}^{n}{(\lambda _{2}+\beta )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,2}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg){\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg)\end{array}\]
for $k\in Z$, $k\ge 0$, and when $k<0$
(25)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\lambda t}\frac{{(\lambda t\beta )}^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\beta ){(\lambda _{2}+\beta )}^{|k|+1}}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}}{{(\lambda _{1}+\beta )}^{n}{(\lambda _{2}+\beta )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg){\mathcal{E}_{1,2}^{n+|k|+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg).\end{array}\]
The moment generating function is
\[\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}={e}^{-t\lambda \lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}/\hspace{0.1667em}(\beta +\lambda _{1}(1-{e}^{\theta }))}{e}^{-t\lambda \lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}/\hspace{0.1667em}(\beta +\lambda _{2}(1-{e}^{-\theta }))}\]
for θ such that $\beta +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\beta +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
Proof of Theorem 4.
Using the independence of $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$, we can write:
(26)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle =\sum \limits_{n=0}^{\infty }P\big\{N_{1}\big(X_{1}(t)\big)=n+k\big\}P\big\{N_{2}\big(X_{2}(t)\big)=n\big\}I_{\{k\ge 0\}}\\{} & \displaystyle \hspace{1em}+\sum \limits_{n=0}^{\infty }P\big\{N_{1}\big(X_{1}(t)\big)=n\big\}P\big\{N_{2}\big(X_{2}(t)\big)=n+|k|\big\}I_{\{k<0\}}\end{array}\]
and then we use the expressions for the probabilities of $N_{i}(E_{N}(t))$ given in Theorem 2. For $k\ge 0$ we have the expression
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle =\sum \limits_{n=0}^{\infty }{e}^{-2\lambda t}\frac{{\lambda _{1}^{n+k}}\lambda t\beta }{{(\lambda _{1}+\beta )}^{n+k+1}}{\mathcal{E}_{1,2}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\beta }\bigg)\\{} & \displaystyle \hspace{1em}\times \frac{{\lambda _{2}^{n}}\lambda t\beta }{{(\lambda _{2}+\beta )}^{n+1}}{\mathcal{E}_{1,2}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\beta }\bigg),\end{array}\]
from which we obtain (24); in an analogous way we come to (25). In view of the independence of $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$, the moment generating function is obtained as the product:
\[\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}=\mathsf{E}{e}^{\theta N_{1}(X_{1}(t))}\mathsf{E}{e}^{-\theta N_{2}(X_{2}(t))},\]
and then we use expression (15) for $\alpha =1$.  □
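The moment generating function can also be checked by Monte Carlo (a sketch with arbitrary parameter values): conditionally on its number of jumps $K\sim \mathrm{Poisson}(\lambda t)$, each copy of $E_{N}(t)$ is Gamma$(K,\beta )$-distributed, which makes the simulation straightforward.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, beta = 2.0, 3.0                     # parameters of E_N (arbitrary)
lam1, lam2, t, th = 1.0, 0.8, 1.5, 0.4   # Skellam intensities, time point, theta
n = 400_000

def sample_EN(size):
    """E_N(t): compound Poisson(lam) with exp(beta) jumps, i.e. Gamma(K, beta) given K."""
    k = rng.poisson(lam * t, size=size)
    x = np.zeros(size)
    pos = k > 0
    x[pos] = rng.gamma(k[pos].astype(float), 1.0 / beta)
    return x

s2 = rng.poisson(lam1 * sample_EN(n)) - rng.poisson(lam2 * sample_EN(n))  # S_II(t)

u1 = lam1 * (1.0 - np.exp(th))     # lam1*(1 - e^theta)
u2 = lam2 * (1.0 - np.exp(-th))    # lam2*(1 - e^{-theta})
mgf_th = np.exp(-t * lam * u1 / (beta + u1)) * np.exp(-t * lam * u2 / (beta + u2))
mgf_mc = np.exp(th * s2).mean()
print(mgf_mc, mgf_th)
```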
Remark 8.
The moments of $S_{\mathit{II}}(t)$ can be calculated using the moment generating function given in Theorem 4, or using the independence of the processes $N_{i}(X_{i}(t))$, $i=1,2$, and the corresponding expressions for their moments. Since $N_{1}(X_{1}(t))$ and $N_{2}(X_{2}(t))$ are independent, we can also easily obtain the covariance function as follows:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathsf{Cov}\big(S_{\mathit{II}}(t),S_{\mathit{II}}(s)\big)\\{} & \displaystyle \hspace{1em}=\mathsf{Cov}\big(N_{1}\big(X_{1}(t)\big),N_{1}\big(X_{1}(s)\big)\big)+\mathsf{Cov}\big(N_{2}\big(X_{2}(t)\big),N_{2}\big(X_{2}(s)\big)\big)\\{} & \displaystyle \hspace{1em}=\lambda {\beta }^{-2}\big[\beta (\lambda _{1}+\lambda _{2})+2\big({\lambda _{1}^{2}}+{\lambda _{2}^{2}}\big)\big]\min (t,s),\end{array}\]
where the expressions for the covariance function of the process $N_{1}(E_{N}(t))$ are used (see formula (16) with $\alpha =1$).

5 Inverse compound Poisson-Gamma process as time change

5.1 Inverse compound Poisson-exponential process

We first consider the process $E_{N}(t)$, the compound Poisson process with exponentially distributed jumps and Laplace exponent $f(u)=\lambda \frac{u}{\beta +u}$.
Define the inverse process (or first passage time):
(27)
\[Y(t)=\inf \big\{u\ge 0;E_{N}(u)>t\big\},\hspace{1em}t\ge 0.\]
It is known (see, e.g., [4]) that the process $Y(t)$ has density
(28)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle h(s,t)& \displaystyle =\lambda {e}^{-\lambda s-\beta t}I_{0}(2\sqrt{\lambda \beta st})\\{} & \displaystyle =\lambda {e}^{-\lambda s-\beta t}\varPhi (1,1,\lambda \beta st),\end{array}\]
and its Laplace transform is
(29)
\[\mathsf{E}{e}^{-\theta Y(t)}=\frac{\lambda }{\theta +\lambda }{e}^{-\beta \frac{\theta }{\theta +\lambda }t},\]
which can also be verified as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{-\theta Y(t)}& \displaystyle ={\int _{0}^{\infty }}{e}^{-\theta u}\lambda {e}^{-\lambda u-\beta t}I_{0}(2\sqrt{\lambda \beta ut})du\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\theta u}\lambda {e}^{-\lambda u-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta ut)}^{n}}{{(n!)}^{2}}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{{(n!)}^{2}}{\int _{0}^{\infty }}{e}^{-(\theta +\lambda )u}{u}^{n}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{n!{(\theta +\lambda )}^{n+1}}=\frac{\lambda }{\theta +\lambda }{e}^{-\beta \frac{\theta }{\theta +\lambda }t}.\end{array}\]
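The same identity can be confirmed numerically by integrating the density (28) against ${e}^{-\theta u}$; in this sketch (arbitrary parameter values) $I_{0}$ is evaluated from its series.

```python
import numpy as np

def bessel_I0(x):
    """Modified Bessel function I_0 via its series sum_n (x^2/4)^n / (n!)^2."""
    q = (np.asarray(x, dtype=float) / 2.0) ** 2
    term = np.ones_like(q)
    total = np.ones_like(q)
    for n in range(1, 200):
        term = term * q / n**2         # term_n = (x^2/4)^n / (n!)^2
        total = total + term
    return total

lam, beta, t, th = 2.0, 1.0, 1.0, 0.7        # arbitrary parameter values
u = np.linspace(0.0, 40.0, 80_001)
h = lam * np.exp(-lam * u - beta * t) * bessel_I0(2.0 * np.sqrt(lam * beta * u * t))
f = np.exp(-th * u) * h
lt_num = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))   # trapezoid rule
lt_th = lam / (th + lam) * np.exp(-beta * t * th / (th + lam))  # formula (29)
print(lt_num, lt_th)
```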
Moments of $Y(t)$ can be easily found by direct calculation. We have
(30)
\[\mathsf{E}Y(t)\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{\lambda }(\beta t+1),\hspace{2em}\mathsf{E}Y{(t)}^{2}\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{{\lambda }^{2}}\big({\beta }^{2}{t}^{2}+4\beta t+2\big),\hspace{2em}\mathsf{Var}Y(t)\hspace{0.1667em}=\hspace{0.1667em}\frac{1}{{\lambda }^{2}}(2\beta t+1).\]
For example, the first moment can be obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}Y(t)& \displaystyle ={\int _{0}^{\infty }}uh(u,t)du={\int _{0}^{\infty }}u\lambda {e}^{-\lambda u-\beta t}I_{0}(2\sqrt{\lambda \beta ut})du\\{} & \displaystyle ={\int _{0}^{\infty }}u\lambda {e}^{-\lambda u-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta ut)}^{k}}{{(k!)}^{2}}du=\lambda {e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta t)}^{k}}{{(k!)}^{2}}{\int _{0}^{\infty }}{u}^{k+1}{e}^{-\lambda u}du\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\lambda \beta t)}^{k}(k+1)!}{{(k!)}^{2}{\lambda }^{k+2}}=\frac{1}{\lambda }{e}^{-\beta t}\sum \limits_{k=0}^{\infty }\frac{{(\beta t)}^{k}(k+1)}{k!}=\frac{1}{\lambda }(\beta t+1).\end{array}\]
The covariance function of the process $Y(t)$ can be calculated using the results on moments of the inverse subordinators stated in [21].
Lemma 1.
Let $Y(t)$ be the inverse process given by (27). Then
(31)
\[\mathsf{Cov}\big(Y(t),Y(s)\big)=\frac{1}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big).\]
The proof of Lemma 1 is given in Appendix A.1.
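Formula (31) can be checked by reading the pair $(Y(s),Y(t))$ off a single simulated trajectory of $E_{N}$: the jump sizes are exponential with parameter β and the jump epochs form a Poisson stream of rate λ (a simulation sketch with arbitrary parameter values).

```python
import numpy as np

rng = np.random.default_rng(3)
lam, beta = 2.0, 1.0
s_pt, t_pt = 1.0, 2.0            # two time points, s_pt <= t_pt
n = 100_000

ys = np.empty(n)
yt = np.empty(n)
for i in range(n):
    total = 0.0      # running value of E_N (cumulative exp(beta) jump sizes)
    epoch = 0.0      # jump epochs of the driving Poisson(lam) process
    y_s = None
    while True:
        epoch += rng.exponential(1.0 / lam)
        total += rng.exponential(1.0 / beta)
        if y_s is None and total > s_pt:
            y_s = epoch                # Y(s): first epoch at which E_N exceeds s
        if total > t_pt:
            ys[i], yt[i] = y_s, epoch  # Y(t) likewise
            break

cov_mc = float(np.cov(ys, yt)[0, 1])
cov_th = (2.0 * beta * min(s_pt, t_pt) + 1.0) / lam**2   # formula (31)
print(cov_mc, cov_th)
```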
Remark 9.
It is known that, in general, an inverse subordinator is a process with non-stationary, non-independent increments (see, e.g., [21]). We have not investigated this question for the process $Y(t)$; however, we can observe that the variance and covariance of $Y(t)$ are related in the same way as for processes with stationary independent increments, namely, $\mathsf{Cov}(Y(t),Y(s))=\mathsf{Var}\hspace{0.1667em}Y(\min (t,s))$.
Remark 10.
Note that, for inverse processes, an important role is played by the function $U(t)=\mathsf{E}Y(t)$, which is called the renewal function. This function can be calculated using the following formula for its Laplace transform:
\[{\int _{0}^{\infty }}U(t){e}^{-st}dt=\frac{1}{sf(s)},\]
where $f(s)$ is the Laplace exponent of the subordinator, for which $Y(t)$ is the inverse (see, [21], formula (13)).
In our case we obtain
\[{\int _{0}^{\infty }}U(t){e}^{-st}dt=\frac{\beta +s}{\lambda {s}^{2}}=\frac{1}{\lambda }\bigg(\frac{\beta }{{s}^{2}}+\frac{1}{s}\bigg),\]
and by inverting this Laplace transform we arrive again at the expression for the first moment given in (30). The function $U(t)$ characterizes the distribution of the inverse process $Y(t)$, since all the moments of $Y(t)$ (and the mixed moments as well) can be expressed (by recursion) in terms of $U(t)$ (see [21]).
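The inversion can be confirmed numerically: with $U(t)=(\beta t+1)/\lambda $ from (30), a simple quadrature reproduces $1/(sf(s))$ (a sketch with arbitrary parameter values).

```python
import numpy as np

lam, beta, s = 2.0, 1.0, 1.5                   # arbitrary values
t = np.linspace(0.0, 60.0, 120_001)
f = ((beta * t + 1.0) / lam) * np.exp(-s * t)  # U(t) e^{-st}
lt_num = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))   # trapezoid rule
lt_th = (beta + s) / (lam * s**2)              # 1/(s f(s)) with f(s) = lam*s/(beta+s)
print(lt_num, lt_th)
```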
Let $N_{1}(t)$ be the Poisson process with intensity $\lambda _{1}$. Consider the time-changed process $Z(t)=N_{1}(Y(t))$, where $Y(t)$ is the inverse process given by (27), independent of $N_{1}(t)$.
Theorem 5.
The probability mass function of the process $Z(t)=N_{1}(Y(t))$ is given by
(32)
\[{p_{k}^{I}}(t)=P\big(Z(t)=k\big)={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda t\beta }{\lambda _{1}+\lambda }\bigg),\]
the mean and variance are:
(33)
\[\mathsf{E}Z(t)=\frac{\lambda _{1}}{\lambda }(\beta t+1),\hspace{2em}\mathsf{Var}Z(t)=\frac{{\lambda _{1}^{2}}}{{\lambda }^{2}}(2\beta t+1)+\frac{\lambda _{1}}{\lambda }(\beta t+1),\]
and the covariance function has the following form:
(34)
\[\mathsf{Cov}\big(Z(t),Z(s)\big)=\frac{{\lambda _{1}^{2}}}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big)+\frac{\lambda _{1}}{\lambda }\big(\beta \min (t,s)+1\big).\]
The Laplace transform is given by
(35)
\[\mathsf{E}{e}^{-\theta Z(t)}=\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{-\theta })\hspace{0.1667em}}{e}^{-\beta t\lambda _{1}(1-{e}^{-\theta }){(\lambda +\lambda _{1}(1-{e}^{-\theta }))}^{-1}}.\]
Proof of Theorem 5.
The probabilities ${p_{k}^{I}}(t)=P(Z(t)=k)$ can be obtained by means of the following calculations:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {p_{k}^{I}}(t)& \displaystyle =P\big(N_{1}\big(Y(t)\big)=k\big)={\int _{0}^{\infty }}P\big(N_{1}(s)=k\big)h(s,t)ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\lambda {e}^{-\lambda s-\beta t}I_{0}(2\sqrt{\lambda \beta st})ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s-\lambda s-\beta t}\lambda \frac{{(\lambda _{1}s)}^{k}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta st)}^{n}}{{(n!)}^{2}}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}}{{(n!)}^{2}}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\lambda )s}{s}^{k+n}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{n=0}^{\infty }\frac{{(\lambda \beta t)}^{n}\varGamma (n+k+1)}{{(n!)}^{2}{(\lambda _{1}+\lambda )}^{n+k+1}}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda t\beta }{\lambda _{1}+\lambda }\bigg).\end{array}\]
The mean and variance can be calculated using formulas (17), (19) and expressions (30).
The Laplace transform is obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{-\theta N_{1}(Y(t))}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{-\theta N_{1}(u)}h(u,t)du=\mathsf{E}{e}^{-\lambda _{1}(1-{e}^{-\theta })Y(t)}\\{} & \displaystyle =\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{-\theta })\hspace{0.1667em}}\exp \bigg(-t\frac{\beta \lambda _{1}(1-{e}^{-\theta })}{\lambda +\lambda _{1}(1-{e}^{-\theta })}\bigg).\hspace{40.0pt}\end{array}\]
 □
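The distribution can be checked numerically. The sketch below (arbitrary parameter values) evaluates the probabilities from the explicit series in the proof above (the line preceding the Mittag-Leffler identification, so no convention for $\mathcal{E}$ is involved) and verifies that they sum to one and reproduce the mean in (33).

```python
import math

lam, beta, lam1, t = 2.0, 1.0, 1.5, 2.0   # arbitrary values
z = lam * beta * t / (lam1 + lam)

def p_k(k, nmax=500):
    """P(Z(t)=k) = e^{-beta t} lam lam1^k/(lam1+lam)^{k+1}
       * sum_n Gamma(n+k+1) z^n / (k! (n!)^2),  z = lam beta t/(lam1+lam)."""
    term, total = 1.0, 1.0              # term_n / term_{n-1} = z (k+n) / n^2
    for n in range(1, nmax):
        term *= z * (k + n) / n**2
        total += term
        if term < 1e-17 * total:
            break
    return math.exp(-beta * t) * lam * lam1**k / (lam1 + lam)**(k + 1) * total

probs = [p_k(k) for k in range(120)]
total_mass = sum(probs)
mean_num = sum(k * q for k, q in enumerate(probs))
mean_th = lam1 * (beta * t + 1.0) / lam    # E N_1(1) * E Y(t), cf. (30) and (33)
print(total_mass, mean_num, mean_th)
```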
We now state the relationship, in the form of a system of differential equations, between the marginal distributions of the processes $N_{1}(E_{N}(t))$ and $N_{1}(Y(t))$, with $Y(t)$ being the inverse process for $E_{N}(t)$.
Introduce the following notations:
(36)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {p_{k}^{E}}(t)& \displaystyle ={p_{k}^{E}}(t,\lambda ,\beta ,\lambda _{1})=P\big(N_{1}\big(E_{N}(t)\big)=k\big)=P\big({N_{1}^{\lambda _{1}}}\big({E}^{\beta }\big({N}^{\lambda }(t)\big)\big)=k\big),\end{array}\]
(37)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\hat{p}_{k}^{E}}(t)& \displaystyle ={p_{k}^{E}}(t,\beta ,\lambda ,\lambda _{1})=P\big(N_{1}\big(\widehat{E}_{N}(t)\big)=k\big)=P\big({N_{1}^{\lambda _{1}}}\big({E}^{\lambda }\big({N}^{\beta }(t)\big)\big)=k\big),\end{array}\]
in ${\hat{p}_{k}^{E}}(t)$ the order of the parameters λ and β is reversed, that is, the parameters of E and N are interchanged: λ and β are now the parameters of E and N, respectively. For the inverse processes we denote:
(38)
\[{p_{k}^{I}}(t)={p_{k}^{I}}(t,\lambda ,\beta ,\lambda _{1})=P\big(N_{1}\big(Y(t)\big)=k\big),\]
where $Y(t)$ is the inverse process for $E_{N}(t)={E}^{\beta }({N}^{\lambda }(t))$,
(39)
\[{\hat{p}_{k}^{I}}(t)={p_{k}^{I}}(t,\beta ,\lambda ,\lambda _{1})=P\big(N_{1}\big(\widehat{Y}(t)\big)=k\big),\]
where $\widehat{Y}(t)$ is the inverse process for $\widehat{E}_{N}(t)={E}^{\lambda }({N}^{\beta }(t))$.
Theorem 6.
The probabilities ${p_{k}^{E}}(t)$, ${\hat{p}_{k}^{E}}(t)$ and ${p_{k}^{I}}(t)$, ${\hat{p}_{k}^{I}}(t)$ satisfy the following differential equations:
(40)
\[\frac{d}{dt}{\hat{p}_{k}^{E}}(t)=-\beta {\hat{p}_{k}^{E}}(t)+\frac{\beta }{k+1}{p_{k}^{I}}(t)\]
and
(41)
\[\frac{d}{dt}{p_{k}^{E}}(t)=-\lambda {p_{k}^{E}}(t)+\frac{\lambda }{k+1}{\hat{p}_{k}^{I}}(t).\]
If $\lambda =\beta $ then ${p_{k}^{E}}(t)$ and ${p_{k}^{I}}(t)$ satisfy the following equation:
(42)
\[\frac{d}{dt}{p_{k}^{E}}(t)=-\lambda {p_{k}^{E}}(t)+\frac{\lambda }{k+1}{p_{k}^{I}}(t).\]
Proof.
First, we represent the derivative $\frac{d}{dt}{\hat{p}_{k}^{E}}(t)$ in the form:
(43)
\[\frac{d}{dt}{\hat{p}_{k}^{E}}(t)=-\beta {\hat{p}_{k}^{E}}(t)+{e}^{-\beta t}\frac{{\lambda _{1}^{k}}}{{(\lambda _{1}+\lambda )}^{k}}\frac{d}{dt}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }{\mathcal{E}_{1,2}^{k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg)\bigg).\]
Next we notice that the following relation holds:
\[\frac{d}{dz}\big(az{\mathcal{E}_{1,2}^{\gamma }}(az)\big)=a{\mathcal{E}_{1,1}^{\gamma }}(az),\]
which can be easily checked by direct calculations.
Therefore, (43) can be written as
(44)
\[\frac{d}{dt}{\hat{p}_{k}^{E}}(t)=-\beta {\hat{p}_{k}^{E}}(t)+{e}^{-\beta t}\frac{{\lambda _{1}^{k}}\lambda \beta }{{(\lambda _{1}+\lambda )}^{k+1}}{\mathcal{E}_{1,1}^{k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg).\]
Comparing the second term on the r.h.s. of (44) with the expression for ${p_{k}^{I}}(t)$, we arrive at (40). By analogous reasoning we derive (41), and taking $\lambda =\beta $ in (40) or in (41), we obtain (42).  □
Remark 11.
We note that in addition to (40)–(42) some other relations can be written, in particular, the probabilities ${p_{k}^{I}}(t)$ can be represented recursively via ${p_{k}^{E}}(t)$. If $\lambda =\beta $, from (42) and (14) we deduce:
\[{p_{k}^{I}}(t)=(k+1)\frac{\beta }{\lambda _{1}+\beta }\Bigg[{p_{k}^{E}}(t)+\sum \limits_{m=1}^{k}{\bigg(\frac{\lambda _{1}}{\lambda _{1}+\beta }\bigg)}^{m}{p_{k-m}^{E}}(t)\Bigg].\]
Consider now Skellam processes with time change, where the role of time is played by the inverse process $Y(t)$ given by (27).
Let the Skellam process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$ and let us consider the process
(45)
\[S_{I}(t)=S\big(Y(t)\big)=N_{1}\big(Y(t)\big)-N_{2}\big(Y(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and $Y(t)$ are independent.
Theorem 7.
Let $S_{I}(t)$ be a Skellam process of type I given by (45). Then the probabilities $r_{k}(t)=P(S_{I}(t)=k)$, $k\in Z$, are given by
\[r_{k}(t)=\lambda {e}^{-\beta t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\lambda )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})I_{0}(2\sqrt{\lambda \beta ut})du.\]
The moment generating function is given by
(46)
\[\mathsf{E}{e}^{\theta S_{I}(t)}=\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}{e}^{-\beta tf_{s}(-\theta ){(\lambda +f_{s}(-\theta ))}^{-1}},\]
for θ such that $\lambda +f_{s}(-\theta )\ne 0$, where $f_{s}(\theta )$ is the Laplace exponent of the initial Skellam process $S(t)$.
Proof.
Using conditioning arguments, we write:
\[r_{k}(t)={\int _{0}^{\infty }}s_{k}(u)h(u,t)du,\]
and then we insert the expressions for $s_{k}(u)$ and $h(u,t)$, which are given by formulas (1) and (28) correspondingly. The moment generating function can be calculated as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{I}(t)}& \displaystyle ={\int _{0}^{\infty }}\mathsf{E}{e}^{\theta S(u)}h(u,t)du=\mathsf{E}{e}^{-f_{s}(-\theta )Y(t)}\\{} & \displaystyle =\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}{e}^{-\beta tf_{s}(-\theta ){(\lambda +f_{s}(-\theta ))}^{-1}}\\{} & \displaystyle =\frac{\lambda }{\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\hspace{0.1667em}}\exp \bigg(-\beta t\frac{\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }}{\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\hspace{0.1667em}}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}+\lambda _{2}-\lambda _{1}{e}^{\theta }-\lambda _{2}{e}^{-\theta }\ne 0$.  □
Remark 12.
The expressions for the mean, variance and covariance function of the Skellam process $S(Y(t))$ can be calculated analogously to the corresponding calculations for the process $S(G_{N}(t))$ (see Remark 7). We obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}S\big(Y(t)\big)& \displaystyle =(\lambda _{1}-\lambda _{2})\frac{\beta t+1}{\lambda };\\{} \displaystyle \mathsf{Var}S\big(Y(t)\big)& \displaystyle ={(\lambda _{1}-\lambda _{2})}^{2}\frac{2\beta t+1}{{\lambda }^{2}}+(\lambda _{1}+\lambda _{2})\frac{\beta t+1}{\lambda };\\{} \displaystyle \mathsf{Cov}\big(S\big(Y(t)\big),S\big(Y(s)\big)\big)& \displaystyle ={(\lambda _{1}-\lambda _{2})}^{2}\frac{2\beta \min (t,s)+1}{{\lambda }^{2}}+(\lambda _{1}+\lambda _{2})\frac{\beta \min (t,s)+1}{\lambda }.\end{array}\]
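These expressions can be checked by simulation (a sketch with arbitrary parameter values). By the Poisson-process correspondence, the number of exp(β) jumps that $E_{N}$ needs in order to exceed t equals $1+\mathrm{Poisson}(\beta t)$ in distribution, and given that number M, $Y(t)$ is the M-th jump epoch of the driving Poisson(λ) process, i.e. Gamma$(M,\lambda )$-distributed.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, beta, lam1, lam2, t = 2.0, 1.0, 2.0, 1.0, 3.0   # arbitrary values
n = 400_000

m = 1 + rng.poisson(beta * t, size=n)         # jumps needed for E_N to exceed t
y = rng.gamma(m.astype(float), 1.0 / lam)     # Y(t) | M = m  ~  Gamma(m, lam)
s = rng.poisson(lam1 * y) - rng.poisson(lam2 * y)   # S(Y(t))

mean_th = (lam1 - lam2) * (beta * t + 1.0) / lam
var_th = ((lam1 - lam2)**2 * (2.0 * beta * t + 1.0) / lam**2
          + (lam1 + lam2) * (beta * t + 1.0) / lam)
print(s.mean(), mean_th)
print(s.var(), var_th)
```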
Consider the time-changed Skellam process of type II:
(47)
\[S_{\mathit{II}}(t)=N_{1}\big(Y_{1}(t)\big)-N_{2}\big(Y_{2}(t)\big),\]
where $Y_{1}(t)$ and $Y_{2}(t)$ are independent copies of the inverse process $Y(t)$ and independent of $N_{1}(t)$, $N_{2}(t)$.
Theorem 8.
Let $S_{\mathit{II}}(t)$ be the time-changed Skellam process of type II given by (47). Its probability mass function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\lambda )}^{k+1}(\lambda _{2}+\lambda )}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}(n+k+1)(n+1)}{{(\lambda _{1}+\lambda )}^{n}{(\lambda _{2}+\lambda )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,1}^{n+k+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{1,1}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\lambda }\bigg)\end{array}\]
for $k\in Z$, $k\ge 0$, and when $k<0$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big(S_{\mathit{II}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\lambda ){(\lambda _{2}+\lambda )}^{|k|+1}}\sum \limits_{n=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{n}(n+|k|+1)(n+1)}{{(\lambda _{1}+\lambda )}^{n}{(\lambda _{2}+\lambda )}^{n}}\\{} & \displaystyle \hspace{1em}\times \hspace{0.1667em}{\mathcal{E}_{1,1}^{n+1}}\bigg(\frac{\lambda \beta t}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{1,1}^{n+|k|+1}}\bigg(\frac{\lambda \beta t}{\lambda _{2}+\lambda }\bigg).\end{array}\]
The moment generating function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}& \displaystyle =\frac{{\lambda }^{2}}{(\lambda +\lambda _{1}(1-{e}^{\theta }))(\lambda +\lambda _{2}(1-{e}^{-\theta }))\hspace{0.1667em}}\\{} & \displaystyle \hspace{1em}\times \exp \bigg(-\beta t\frac{\lambda _{1}(1-{e}^{\theta })}{\lambda +\lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}}\bigg)\exp \bigg(-\beta t\frac{\lambda _{2}(1-{e}^{-\theta })}{\lambda +\lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\lambda +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
Proof.
Analogously to the proof of Theorem 4, we write the expression for $P(S_{\mathit{II}}(t)=k)$ in the form (26) and then use the expressions for the probabilities $P(N_{1}(Y(t))=k)$ from Theorem 5. The moment generating function is obtained as the product $\mathsf{E}{e}^{\theta S_{\mathit{II}}(t)}=\mathsf{E}{e}^{\theta N_{1}(Y_{1}(t))}\mathsf{E}{e}^{-\theta N_{2}(Y_{2}(t))}$, and then we use expression (35).  □
Remark 13.
The moments of $S_{\mathit{II}}(t)$ can be calculated using the moment generating function given in Theorem 8, or using the independence of the processes $N_{i}(Y_{i}(t))$, $i=1,2$, and the corresponding expressions for their moments. In view of the mutual independence of $N_{1}(Y_{1}(t))$ and $N_{2}(Y_{2}(t))$, and using formulas (31) and (34), we obtain the covariance function:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{Cov}\big(S_{\mathit{II}}(t),S_{\mathit{II}}(s)\big)& \displaystyle =\mathsf{Cov}\big(N_{1}\big(Y_{1}(t)\big),N_{1}\big(Y_{1}(s)\big)\big)+\mathsf{Cov}\big(N_{2}\big(Y_{2}(t)\big),N_{2}\big(Y_{2}(s)\big)\big)\\{} & \displaystyle =\frac{\lambda _{1}+\lambda _{2}}{\lambda }\big(\beta \min (t,s)+1\big)+\frac{{\lambda _{1}^{2}}+{\lambda _{2}^{2}}}{{\lambda }^{2}}\big(2\beta \min (t,s)+1\big).\end{array}\]

5.2 Inverse compound Poisson–Erlang process

Consider now the compound Poisson–Erlang process ${G_{N}^{(n)}}(t)$, that is, the compound Poisson-Gamma process with integer parameter $\alpha =n$.
For this case the inverse process
(48)
\[{Y}^{(n)}(t)=\inf \big\{u\ge 0;{G_{N}^{(n)}}(u)>t\big\},\hspace{1em}t\ge 0,\]
has density of the following form:
(49)
\[q(s,t)=\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\varPhi \big(n,k,\lambda s{(\beta t)}^{n}\big).\]
The formula (49) for the density (in a different parametrization) was obtained in [17] (Theorem 3.1) by developing the approach presented in [4]. This approach is based on calculating and inverting Laplace transforms, taking into account the relationship between the Laplace transforms of a direct process and of its inverse process. We refer to [4, 17] for details (see also Appendix A.2). It should be noted that for compound Poisson-Gamma processes with a non-integer parameter α, inverting the Laplace transforms within this approach leads to complicated infinite integrals (see again [17]).
The Laplace transform of the process ${Y}^{(n)}(t)$ can be represented in the following form:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}\big[{e}^{-\theta {Y}^{(n)}(t)}\big]& \displaystyle ={e}^{-\beta t}\sum \limits_{k=1}^{n}\sum \limits_{j=0}^{\infty }\frac{{(\beta t)}^{nj+k-1}}{\varGamma (nj+k)}{\bigg(\frac{\lambda }{\lambda +\theta }\bigg)}^{j+1}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda }{\lambda +\theta }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\theta }\bigg)\end{array}\]
(see (60) in Appendix A.2). By direct calculation, using the known form of the density of ${Y}^{(n)}(t)$, we find the following expressions for the moments:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Y}^{(n)}(t)& \displaystyle ={e}^{-\beta t}\frac{1}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big),\\{} \displaystyle \mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{2}& \displaystyle ={e}^{-\beta t}\frac{2}{{\lambda }^{2}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{3}}\big({(\beta t)}^{n}\big),\end{array}\]
and, generally, for $p\ge 1$:
\[\mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{p}={e}^{-\beta t}\frac{p!}{{\lambda }^{p}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{p+1}}\big({(\beta t)}^{n}\big)\]
(see details of calculations in Appendix A.2).
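As a cross-check (an n = 2 sketch with arbitrary parameter values, writing out $\mathcal{E}_{n,k}^{2}$ in the standard Prabhakar form ${\mathcal{E}_{n,k}^{2}}(z)=\sum _{l\ge 0}(l+1){z}^{l}/\varGamma (nl+k)$), the series for $\mathsf{E}{Y}^{(n)}(t)$ can be compared with a direct first-passage simulation of ${G_{N}^{(n)}}$.

```python
import math
import numpy as np

lam, beta, nn, t = 2.0, 1.0, 2, 2.0   # nn plays the role of n (arbitrary values)

def mean_Y_series(lam, beta, nn, t, lmax=200):
    """E Y^(n)(t) = e^{-beta t}/lam * sum_{k=1}^{n} (beta t)^{k-1} E^2_{n,k}((beta t)^n),
       with E^2_{n,k}(z) = sum_l (l+1) z^l / Gamma(n*l + k)."""
    z = (beta * t) ** nn
    total = 0.0
    for k in range(1, nn + 1):
        inner = sum((l + 1) * math.exp(l * math.log(z) - math.lgamma(nn * l + k))
                    for l in range(lmax))
        total += (beta * t) ** (k - 1) * inner
    return math.exp(-beta * t) / lam * total

rng = np.random.default_rng(5)
n_samp = 100_000
vals = np.empty(n_samp)
for i in range(n_samp):
    s, m = 0.0, 0
    while s <= t:                        # accumulate Erlang(nn, beta) jump sizes
        s += rng.gamma(nn, 1.0 / beta)
        m += 1
    vals[i] = rng.gamma(m, 1.0 / lam)    # m-th epoch of the Poisson(lam) clock

print(mean_Y_series(lam, beta, nn, t), vals.mean())
```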
Remark 14.
Using arguments similar to those in [17] (see the proof of Lemma 3.11 therein), we can also derive another expression for the first moment, in terms of the two-parameter generalized Mittag-Leffler function:
(50)
\[\mathsf{E}{Y}^{(n)}(t)=\frac{\beta t}{n\lambda }+\frac{{e}^{-\beta t}}{n\lambda }\sum \limits_{k=1}^{n}(n-k+1){(\beta t)}^{k-1}\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big).\]
From (50) we can see that $\mathsf{E}{Y}^{(n)}(t)$ has a linear behavior with respect to t as $t\to \infty $:
(51)
\[\mathsf{E}{Y}^{(n)}(t)\sim \frac{\beta t}{n\lambda }+\frac{n+1}{2n\lambda },\]
which should indeed be expected, since the following general result holds for subordinators with finite mean: the mean of their first passage time exhibits linear behavior for large times (see, for example, [21] and references therein).
The details of derivation of (50) and (51) are presented in Appendix A.2.
Consider the time-changed process ${Z}^{(n)}(t)=N_{1}({Y}^{(n)}(t))$, where ${Y}^{(n)}(t)$ is the inverse process given by (48), independent of $N_{1}(t)$.
Theorem 9.
The probability mass function of the process ${Z}^{(n)}(t)=N_{1}({Y}^{(n)}(t))$ is given by
(52)
\[p_{k}(t)=P\big({Z}^{(n)}(t)=k\big)={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\mathcal{E}_{n,m}^{k+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg),\]
and the Laplace transform is given by
\[\mathsf{E}\big[{e}^{-\theta {Z}^{(n)}(t)}\big]={e}^{-\beta t}\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{-\theta })\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{1}(1-{e}^{-\theta })}\bigg).\]
Proof.
The proof is similar to that of Theorem 5. In particular, the probability mass function is obtained as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hspace{20.0pt}p_{k}(t)& \displaystyle =P\big(N_{1}\big({Y}^{(n)}(t)\big)=k\big)\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s}\frac{{(\lambda _{1}s)}^{k}}{k!}\lambda {e}^{-\lambda s-\beta t}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\varPhi \big(n,m,\lambda s{(\beta t)}^{n}\big)ds\\{} & \displaystyle ={\int _{0}^{\infty }}{e}^{-\lambda _{1}s-\lambda s-\beta t}\lambda \frac{{(\lambda _{1}s)}^{k}}{k!}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda s{(\beta t)}^{n})}^{l}}{l!\varGamma (nl+m)}ds\\{} & \displaystyle =\frac{{e}^{-\beta t}\lambda {\lambda _{1}^{k}}}{k!}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda {(\beta t)}^{n})}^{l}}{l!\varGamma (nl+m)}{\int _{0}^{\infty }}{e}^{-(\lambda _{1}+\lambda )s}{s}^{k+l}ds\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda {\lambda _{1}^{k}}(k+1)}{{(\lambda _{1}+\lambda )}^{k+1}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\mathcal{E}_{n,m}^{k+1}}\bigg(\frac{\lambda {(t\beta )}^{n}}{\lambda _{1}+\lambda }\bigg).\hspace{40.0pt}\end{array}\]
 □
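The probabilities can be checked for normalization by evaluating the explicit double series from the proof above (again bypassing any convention for $\mathcal{E}$); a sketch for n = 2 with arbitrary parameter values.

```python
import math

lam, beta, lam1, nn, t = 2.0, 1.0, 1.5, 2, 1.0   # nn plays the role of n

def p_k(k, lmax=150):
    """P(Z^{(n)}(t)=k) from the proof:
       e^{-beta t} lam * sum_{m=1}^{nn} (beta t)^{m-1}
         * sum_l w^l Gamma(k+l+1) lam1^k
                 / (l! Gamma(nn*l+m) k! (lam1+lam)^{k+l+1}),
       with w = lam (beta t)^nn; evaluated in log-space for stability."""
    w = lam * (beta * t) ** nn
    outer = 0.0
    for m in range(1, nn + 1):
        inner = 0.0
        for l in range(lmax):
            log_term = (l * math.log(w) - math.lgamma(l + 1)
                        - math.lgamma(nn * l + m)
                        + math.lgamma(k + l + 1) - math.lgamma(k + 1)
                        + k * math.log(lam1)
                        - (k + l + 1) * math.log(lam1 + lam))
            inner += math.exp(log_term)
        outer += (beta * t) ** (m - 1) * inner
    return math.exp(-beta * t) * lam * outer

total_mass = sum(p_k(k) for k in range(150))
print(total_mass)
```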
Remark 15.
The first two moments of the process ${Z}^{(n)}(t)$ can be calculated as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Z}^{(n)}(t)& \displaystyle =\mathsf{E}N_{1}(1)\mathsf{E}{Y}^{(n)}(t)={e}^{-\beta t}\frac{\lambda _{1}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big),\\{} \displaystyle \mathsf{E}{\big[{Z}^{(n)}(t)\big]}^{2}& \displaystyle =\mathsf{Var}N_{1}(1)\mathsf{E}{Y}^{(n)}(t)+{\big[\mathsf{E}N_{1}(1)\big]}^{2}\mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{2}\\{} & \displaystyle ={e}^{-\beta t}\frac{\lambda _{1}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big)+{e}^{-\beta t}\frac{2{\lambda _{1}^{2}}}{{\lambda }^{2}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{3}}\big({(\beta t)}^{n}\big),\end{array}\]
and we can see that, similarly to $\mathsf{E}{Y}^{(n)}(t)$, $\mathsf{E}{Z}^{(n)}(t)$ has linear behavior as $t\to \infty $:
\[\mathsf{E}{Z}^{(n)}(t)\sim \frac{\lambda _{1}\beta t}{n\lambda }+\frac{\lambda _{1}(n+1)}{2n\lambda }.\]
Let the Skellam process $S(t)$ have parameters $\lambda _{1}$ and $\lambda _{2}$ and let us consider the process
(53)
\[{S_{I}^{(n)}}(t)=S\big({Y}^{(n)}(t)\big)=N_{1}\big({Y}^{(n)}(t)\big)-N_{2}\big({Y}^{(n)}(t)\big),\]
where $N_{1}(t)$, $N_{2}(t)$ and ${Y}^{(n)}(t)$ are independent.
Theorem 10.
Let ${S_{I}^{(n)}}(t)$ be a Skellam process of type I given by (53). Then the probabilities $r_{k}(t)=P({S_{I}^{(n)}}(t)=k)$, $k\in Z$, are given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle r_{k}(t)& \displaystyle =\lambda {e}^{-\beta t}{\bigg(\frac{\lambda _{1}}{\lambda _{2}}\bigg)}^{\frac{k}{2}}\sum \limits_{m=1}^{n}{(\beta t)}^{m-1}{\int _{0}^{\infty }}{e}^{-u(\lambda _{1}+\lambda _{2}+\lambda )}I_{|k|}(2u\sqrt{\lambda _{1}\lambda _{2}})\\{} & \displaystyle \hspace{1em}\times \varPhi \big(n,m,\lambda u{(\beta t)}^{n}\big)du.\end{array}\]
The moment generating function is given by
\[E\big[{e}^{\theta {S_{I}^{(n)}}(t)}\big]={e}^{-\beta t}\frac{\lambda }{\lambda +f_{s}(-\theta )\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +f_{s}(-\theta )}\bigg)\]
for θ such that $\lambda +f_{s}(-\theta )\ne 0$, where $f_{s}(\theta )$ is the Laplace exponent of the initial Skellam process $S(t)$.
Proof.
The proof is analogous to that of Theorem 7.  □
Consider the time-changed Skellam process of type II:
(54)
\[{S_{\mathit{II}}^{(n)}}(t)=N_{1}\big(Y_{1}(t)\big)-N_{2}\big(Y_{2}(t)\big),\]
where $Y_{1}(t)$ and $Y_{2}(t)$ are independent copies of the inverse process ${Y}^{(n)}(t)$ and independent of $N_{1}(t)$, $N_{2}(t)$.
Theorem 11.
Let ${S_{\mathit{II}}^{(n)}}(t)$ be the time-changed Skellam process of type II given by (54). Its probability mass function is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big({S_{\mathit{II}}^{(n)}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{1}^{k}}}{{(\lambda _{1}+\lambda )}^{k+1}(\lambda _{2}+\lambda )}\sum \limits_{p=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{p}(p+k+1)(p+1)}{{(\lambda _{1}+\lambda )}^{p}{(\lambda _{2}+\lambda )}^{p}}\\{} & \displaystyle \hspace{1em}\times \sum \limits_{m_{1}=1}^{n}\sum \limits_{m_{2}=1}^{n}{(\beta t)}^{m_{1}+m_{2}-2}\hspace{0.1667em}{\mathcal{E}_{n,m_{1}}^{p+k+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{n,m_{2}}^{p+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{2}+\lambda }\bigg)\end{array}\]
for $k\in Z$, $k\ge 0$, and when $k<0$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle P\big({S_{\mathit{II}}^{(n)}}(t)=k\big)& \displaystyle ={e}^{-2\beta t}\frac{{\lambda }^{2}{\lambda _{2}^{|k|}}}{(\lambda _{1}+\lambda ){(\lambda _{2}+\lambda )}^{|k|+1}}\sum \limits_{p=0}^{\infty }\frac{{(\lambda _{1}\lambda _{2})}^{p}(p+|k|+1)(p+1)}{{(\lambda _{1}+\lambda )}^{p}{(\lambda _{2}+\lambda )}^{p}}\\{} & \displaystyle \hspace{1em}\times \sum \limits_{m_{1}=1}^{n}\sum \limits_{m_{2}=1}^{n}{(\beta t)}^{m_{1}+m_{2}-2}\hspace{0.1667em}{\mathcal{E}_{n,m_{1}}^{p+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{1}+\lambda }\bigg){\mathcal{E}_{n,m_{2}}^{p+|k|+1}}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda _{2}+\lambda }\bigg).\end{array}\]
The moment generating function is
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}\big[{e}^{\theta {S_{\mathit{II}}^{(n)}}(t)}\big]& \displaystyle ={e}^{-2\beta t}\frac{\lambda }{\lambda +\lambda _{1}(1-{e}^{\theta })\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{1}(1-{e}^{\theta })}\bigg)\\{} & \displaystyle \hspace{1em}\times \frac{\lambda }{\lambda +\lambda _{2}(1-{e}^{-\theta })\hspace{0.1667em}}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda +\lambda _{2}(1-{e}^{-\theta })}\bigg)\end{array}\]
for θ such that $\lambda +\lambda _{1}(1-{e}^{\theta })\ne 0$ and $\lambda +\lambda _{2}(1-{e}^{-\theta })\ne 0$.
Proof.
The proof is analogous to that of Theorem 8.  □
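The statement above admits a simple numerical sanity check. Integrating the Poisson probabilities against the density of ${Y}^{(n)}(t)$ gives the marginal p.m.f. of each time-changed Poisson process in terms of the Prabhakar function $\mathcal{E}^{\gamma}_{n,m}$; the Skellam-type probabilities are then their discrete convolution, which must sum to one and must reproduce the moment generating function displayed above. The sketch below is only an illustration (parameter values arbitrary, helper names our own):

```python
import math

lam, lam1, lam2, beta, t, n, theta = 2.0, 1.2, 0.7, 1.5, 0.8, 2, 0.25

def ml3(gamma, k, x, terms=150):
    # Prabhakar function E^{gamma}_{n,k}(x) = sum_m (gamma)_m x^m / (m! Gamma(n*m + k))
    return sum(math.exp(math.lgamma(gamma + m) - math.lgamma(gamma)
                        - math.lgamma(m + 1) - math.lgamma(n * m + k)
                        + m * math.log(x)) for m in range(terms))

def ml(k, x, terms=150):
    # two-parameter Mittag-Leffler function E_{n,k}(x) = sum_j x^j / Gamma(n*j + k)
    return sum(math.exp(j * math.log(x) - math.lgamma(n * j + k)) for j in range(terms))

def pmf(j, rate):
    # P(N(Y^(n)(t)) = j): Poisson probabilities integrated against the density of Y^(n)(t)
    x = lam * (beta * t) ** n / (rate + lam)
    return (math.exp(-beta * t) * lam * rate ** j / (rate + lam) ** (j + 1)
            * sum((beta * t) ** (m - 1) * ml3(j + 1, m, x) for m in range(1, n + 1)))

J = 60  # truncation of the marginal supports (tails are geometric, so this is ample)
p1 = [pmf(j, lam1) for j in range(J)]
p2 = [pmf(j, lam2) for j in range(J)]

# Skellam-type probabilities as a discrete convolution of the two marginals
skellam = {k: sum(p1[p + k] * p2[p] for p in range(J) if 0 <= p + k < J)
           for k in range(-45, 46)}

def mgf_factor(rate, th):
    # one factor of the displayed moment generating function
    d = lam + rate * (1.0 - math.exp(th))
    return (math.exp(-beta * t) * lam / d
            * sum((beta * t) ** (k - 1) * ml(k, lam * (beta * t) ** n / d)
                  for k in range(1, n + 1)))

total = sum(skellam.values())
mgf_from_pmf = sum(math.exp(theta * k) * pk for k, pk in skellam.items())
mgf_closed = mgf_factor(lam1, theta) * mgf_factor(lam2, -theta)
print(total, mgf_from_pmf, mgf_closed)
```

The total mass and the two evaluations of the moment generating function agree to within the truncation error of the series.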
Remark 16.
The covariance structure of the Skellam processes considered in this section appears to have a complicated form, and we postpone this issue to future research.

Acknowledgments

The authors are grateful to the referees for their valuable comments and suggestions which helped to improve the paper.

A Appendix

A.1 Proof of Lemma 1: calculation of the covariance function of inverse compound Poisson-exponential process

Let $E_{N}(t)$ be the compound Poisson-exponential process with parameters $\lambda ,\beta $, that is, with Laplace exponent $f(u)=\lambda \frac{u}{\beta +u}$. Consider the first passage time of $E_{N}(t)$:
\[Y(t)=\inf \big\{u\ge 0;E_{N}(u)>t\big\},\hspace{1em}t\ge 0.\]
The first two moments and the variance of $Y(t)$ are presented in (30) and can be calculated directly using the probability density of $Y(t)$ given by (28).
To calculate the covariance we need to find the expression for $\mathsf{E}Y(t_{1})Y(t_{2})$.
We will use Theorem 3.1 from [21], which gives expressions for the Laplace transforms of the n-th order moments of the inverse process in terms of the Laplace transforms of lower-order moments.
Denote
\[U(t_{1},t_{2},m_{1},m_{2})=\mathsf{E}Y{(t_{1})}^{m_{1}}Y{(t_{2})}^{m_{2}},\]
where $m_{1}$ and $m_{2}$ are positive integers; denote also $\mathsf{E}Y(t_{1})=U(t_{1})$.
Let $\tilde{U}(u_{1},u_{2},m_{1},m_{2})$ be the Laplace transform of $U(t_{1},t_{2},m_{1},m_{2})$. Then, in these notations,
\[\mathsf{E}Y(t_{1})Y(t_{2})=U(t_{1},t_{2},1,1)\]
and from Theorem 3.1, formula (17) [21] we have:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \tilde{U}(u_{1},u_{2},1,1)& \displaystyle =\frac{1}{f(u_{1}+u_{2})}\big(\tilde{U}(u_{1},u_{2},1,0)+\tilde{U}(u_{1},u_{2},0,1)\big)\\{} & \displaystyle =\frac{1}{f(u_{1}+u_{2})}\bigg(\frac{\tilde{U}(u_{1})}{u_{1}}+\frac{\tilde{U}(u_{2})}{u_{2}}\bigg).\end{array}\]
In the above formula $\tilde{U}(u_{1},u_{2},1,0)$ is the Laplace transform of $U(t_{1},t_{2},1,0)=\mathsf{E}Y(t_{1})=U(t_{1})$ and $\tilde{U}(u_{1},u_{2},0,1)$ is the Laplace transform of $U(t_{1},t_{2},0,1)=\mathsf{E}Y(t_{2})=U(t_{2})$.
The inverse Laplace transform can be found by the following calculations:
(55)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle U(t_{1},t_{2},1,1)\\{} & \displaystyle \hspace{1em}={\mathcal{L}}^{-1}\big(\tilde{U}(u_{1},u_{2},1,1)\big)(t_{1},t_{2})\\{} & \displaystyle \hspace{1em}={\int _{0}^{t_{1}}}{\int _{0}^{t_{2}}}\big[U(t_{1}-\tau _{1})+U(t_{2}-\tau _{2})\big]{\mathcal{L}}^{-1}\bigg(\frac{1}{f(u_{1}+u_{2})}\bigg)(\tau _{1},\tau _{2})d\tau _{1}d\tau _{2};\end{array}\]
for the function
\[\frac{1}{f(u_{1}+u_{2})}=\frac{\beta +u_{1}+u_{2}}{\lambda (u_{1}+u_{2})}=\frac{\beta }{\lambda }\frac{1}{u_{1}+u_{2}}+\frac{1}{\lambda }\]
we write the inverse Laplace transform in the form
(56)
\[{\mathcal{L}}^{-1}\bigg(\frac{1}{f(u_{1}+u_{2})}\bigg)(\tau _{1},\tau _{2})d\tau _{1}d\tau _{2}=\frac{\beta }{\lambda }\delta (\tau _{1}-\tau _{2})d\tau _{1}d\tau _{2}+\frac{1}{\lambda }\delta (\tau _{1})\delta (\tau _{2})d\tau _{1}d\tau _{2},\]
and continue calculations inserting (56) in (55):
\[\begin{array}{r@{\hskip0pt}l}\displaystyle U(t_{1},t_{2},1,1)& \displaystyle =\frac{\beta }{\lambda }{\int _{0}^{t_{1}}}{\int _{0}^{t_{2}}}\big[U(t_{1}-\tau _{1})+U(t_{2}-\tau _{2})\big]\delta (\tau _{1}-\tau _{2})d\tau _{1}d\tau _{2}\\{} & \displaystyle \hspace{1em}+\frac{1}{\lambda }{\int _{0}^{t_{1}}}{\int _{0}^{t_{2}}}\big[U(t_{1}-\tau _{1})+U(t_{2}-\tau _{2})\big]\delta (\tau _{1})\delta (\tau _{2})d\tau _{1}d\tau _{2}\\{} & \displaystyle =\frac{\beta }{\lambda }{\int _{0}^{t_{1}\wedge t_{2}}}\big[U(t_{1}-\tau )+U(t_{2}-\tau )\big]d\tau +\frac{1}{\lambda }\big(U(t_{1})+U(t_{2})\big).\end{array}\]
Therefore, for the covariance of the process $Y(t)$ we obtain the following expression:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{Cov}\big(Y(t_{1}),Y(t_{2})\big)& \displaystyle =\frac{\beta }{\lambda }{\int _{0}^{t_{1}\wedge t_{2}}}\big[U(t_{1}-\tau )+U(t_{2}-\tau )\big]d\tau \\{} & \displaystyle \hspace{1em}+\frac{1}{\lambda }\big(U(t_{1})+U(t_{2})\big)-U(t_{1})U(t_{2}).\end{array}\]
Using the expression for $U(t)=\frac{1}{\lambda }(\beta t+1)$, we find:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}\big(Y(t_{1})Y(t_{2})\big)=U(t_{1},t_{2},1,1)& \displaystyle =\frac{\beta }{{\lambda }^{2}}\big(\beta (t_{1}+t_{2})+2\big)\min (t_{1},t_{2})\\{} & \displaystyle \hspace{1em}-\frac{{\beta }^{2}}{{\lambda }^{2}}{\big(\min (t_{1},t_{2})\big)}^{2}+\frac{1}{{\lambda }^{2}}\big(\beta (t_{1}+t_{2})+2\big)\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathsf{Cov}\big(Y(t_{1}),Y(t_{2})\big)\\{} & \displaystyle \hspace{1em}=\mathsf{E}\big[Y(t_{1})Y(t_{2})\big]-\mathsf{E}Y(t_{1})\mathsf{E}Y(t_{2})\\{} & \displaystyle \hspace{1em}=\frac{\beta }{{\lambda }^{2}}\big(\beta (t_{1}+t_{2})+2\big)\min (t_{1},t_{2})-\frac{{\beta }^{2}}{{\lambda }^{2}}{\big(\min (t_{1},t_{2})\big)}^{2}-\frac{{\beta }^{2}}{{\lambda }^{2}}t_{1}t_{2}+\frac{1}{{\lambda }^{2}}\\{} & \displaystyle \hspace{1em}=\frac{1}{{\lambda }^{2}}\big(2\beta \min (t_{1},t_{2})+1\big).\end{array}\]
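The covariance formula can also be checked by simulation. From the renewal structure of $E_{N}(t)$ (Poisson($\lambda $) jump epochs with $\mathrm{Exp}(\beta )$ jump sizes) one obtains the representation $Y(t)=\sum_{i=1}^{M(t)}W_{i}$, where the $W_{i}$ are i.i.d. $\mathrm{Exp}(\lambda )$ and $M(t)=1+\mathrm{Pois}(\beta t)$; this representation is our own observation, used here only as a sanity check, not a step of the proof. A sketch:

```python
import math
import random

random.seed(42)
beta, lam = 2.0, 3.0
t1, t2 = 1.0, 2.0          # t1 <= t2
n_sim = 100_000

def poisson(mu):
    # Knuth's product method for Poisson sampling (adequate for small mu)
    limit, k, prod = math.exp(-mu), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

y1s, y2s = [], []
for _ in range(n_sim):
    # Y(t) = sum of M(t) i.i.d. Exp(lam) waiting times, M(t) = 1 + Poisson(beta*t);
    # the subordinator jumps falling in (t1, t2] form an independent
    # Poisson(beta*(t2 - t1)) count, so Y(t2) - Y(t1) is an independent Gamma sum.
    m1 = 1 + poisson(beta * t1)
    extra = poisson(beta * (t2 - t1))
    y1 = random.gammavariate(m1, 1.0 / lam)
    y2 = y1 + (random.gammavariate(extra, 1.0 / lam) if extra > 0 else 0.0)
    y1s.append(y1)
    y2s.append(y2)

mean1, mean2 = sum(y1s) / n_sim, sum(y2s) / n_sim
cov = sum(a * b for a, b in zip(y1s, y2s)) / n_sim - mean1 * mean2
# theory: E Y(t1) = (beta*t1 + 1)/lam and Cov = (2*beta*min(t1,t2) + 1)/lam**2
print(mean1, cov)
```

With the parameters above, the empirical mean and covariance agree with $(\beta t_{1}+1)/\lambda $ and $(2\beta \min (t_{1},t_{2})+1)/{\lambda }^{2}$ up to Monte Carlo error.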

A.2 Marginal distribution and moments of the process ${Y}^{(n)}(t)$

We present some details of the derivation of the expression for the probability density of the inverse Poisson–Erlang process ${Y}^{(n)}(t)$ introduced by formula (48) in Section 5.2.
The inverse Poisson–Erlang process was considered in [17], and its probability density function (p.d.f.) was presented in Theorem 3.1 therein. However, [17] uses a parametrization of the Poisson–Erlang process different from the one used in our paper.
For the reader's convenience and to make the paper self-contained, we present here some details of the calculations, following the general approach developed in [4].
Introduce the Laplace transforms related to the process ${G_{N}^{(n)}}(t)$ and its inverse process ${Y}^{(n)}(t)$:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {}^{\ast }l(v,t)& \displaystyle =\mathsf{E}{e}^{-v{G_{N}^{(n)}}(t)};\\{} \displaystyle {}^{\ast }{l}^{\ast }(v,u)& \displaystyle ={\int _{0}^{\infty }}{}^{\ast }l(v,t){e}^{-ut}dt\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {}^{\ast }q(v,t)& \displaystyle =\mathsf{E}{e}^{-v{Y}^{(n)}(t)};\\{} \displaystyle {}^{\ast }{q}^{\ast }(v,u)& \displaystyle ={\int _{0}^{\infty }}{}^{\ast }q(v,t){e}^{-ut}dt.\end{array}\]
Then the following relation holds (see [4], Section 8.4, formula (3)):
(57)
\[{}^{\ast }{q}^{\ast }(v,u)=\frac{1-v\hspace{0.1667em}{}^{\ast }{l}^{\ast }(u,v)}{u}.\]
The above formula holds, in fact, for more general compound Poisson processes. For a compound Poisson process with jumps having the p.d.f. $g(x)$, it is possible to write the exact expression for ${}^{\ast }{l}^{\ast }(v,u)$, and formula (57) takes the form:
(58)
\[{}^{\ast }{q}^{\ast }(v,u)=\frac{\hat{f}(v)(1-\hat{g}(u))}{u(1-\hat{g}(u)\hat{f}(v))},\]
where $\hat{f}(v)$ and $\hat{g}(u)$ are the Laplace transforms of f and g, the p.d.f.s of the interarrival times and of the jumps, respectively (see formula (4), Section 8.4 in [4]).
For the Poisson–Erlang process, $f(x)$ is the p.d.f. of the exponential distribution and $g(x)$ is the p.d.f. of the Erlang distribution. Inserting the expressions for $\hat{f}(v)$ and $\hat{g}(u)$ into (58), we finally obtain:
(59)
\[{}^{\ast }{q}^{\ast }(v,u)=\frac{\lambda ({(\beta +u)}^{n}-{\beta }^{n})}{u\big((\lambda +v){(\beta +u)}^{n}-\lambda {\beta }^{n}\big)}.\]
One special case in which the inversion of (59) can be performed easily was considered in [4], namely, the case when f and g are both exponential; in this case we arrive at the p.d.f. of the inverse compound Poisson-exponential process (see formula (28) in Section 5.1). An exact result is also available for the inverse Poisson–Erlang process: it was stated in Theorem 3.1 of [17], where, for its proof, the inverse Laplace transforms of (59) were calculated successively with respect to the variables v and u.
Here we present a reverse check, namely, we verify that the double Laplace transform of the p.d.f. (49) gives the expression (59). We have:
(60)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {}^{\ast }q(v,t)& \displaystyle ={\int _{0}^{\infty }}{e}^{-vs}\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{m=0}^{\infty }\frac{{(\lambda s{(\beta t)}^{n})}^{m}}{m!\varGamma (nm+k)}ds\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{m=0}^{\infty }\frac{{(\beta t)}^{nm}{\lambda }^{m}}{m!\varGamma (nm+k)}{\int _{0}^{\infty }}{e}^{-(v+\lambda )s}{s}^{m}ds\\{} & \displaystyle ={e}^{-\beta t}\sum \limits_{k=1}^{n}\sum \limits_{m=0}^{\infty }\frac{{(\beta t)}^{nm+k-1}}{\varGamma (nm+k)}{\bigg(\frac{\lambda }{\lambda +v}\bigg)}^{m+1}\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {}^{\ast }{q}^{\ast }(v,u)& \displaystyle ={\int _{0}^{\infty }}{e}^{-\beta t}\sum \limits_{k=1}^{n}\sum \limits_{m=0}^{\infty }\frac{{(\beta t)}^{nm+k-1}}{(nm+k-1)!}{\bigg(\frac{\lambda }{\lambda +v}\bigg)}^{m+1}{e}^{-ut}dt\\{} & \displaystyle =\sum \limits_{k=1}^{n}\sum \limits_{m=0}^{\infty }\frac{{\beta }^{nm+k-1}}{(nm+k-1)!}{\bigg(\frac{\lambda }{\lambda +v}\bigg)}^{m+1}{\int _{0}^{\infty }}{e}^{-(\beta +u)t}{t}^{nm+k-1}dt\\{} & \displaystyle =\sum \limits_{k=1}^{n}\frac{\lambda }{v+\lambda }\frac{{\beta }^{k-1}}{{(\beta +u)}^{k}}\sum \limits_{m=0}^{\infty }{\bigg(\frac{\lambda {\beta }^{n}}{(v+\lambda ){(\beta +u)}^{n}}\bigg)}^{m}\\{} & \displaystyle =\frac{\lambda ({(\beta +u)}^{n}-{\beta }^{n})}{u\big((\lambda +v){(\beta +u)}^{n}-\lambda {\beta }^{n}\big)},\end{array}\]
which indeed coincides with (59).
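The last two steps of the computation above (the geometric summation over m and the summation over k) are also easy to verify numerically for concrete parameter values; the snippet below is only an illustration, with $\lambda ,\beta ,n,v,u$ chosen arbitrarily:

```python
lam, beta, n = 1.5, 2.0, 3
v, u = 0.7, 1.1

# double series from the second-to-last line of the computation
r = lam * beta ** n / ((v + lam) * (beta + u) ** n)   # geometric ratio, 0 < r < 1
series = sum(lam / (v + lam) * beta ** (k - 1) / (beta + u) ** k * r ** m
             for k in range(1, n + 1) for m in range(200))

# closed form (59), with the denominator read as u*((lam+v)*(beta+u)^n - lam*beta^n)
closed = (lam * ((beta + u) ** n - beta ** n)
          / (u * ((lam + v) * (beta + u) ** n - lam * beta ** n)))
print(series, closed)
```

Note that at $v=0$ the closed form reduces to $1/u$, as it must, since ${}^{\ast }q(0,t)=1$.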
We next obtain the expressions for the moments of the process ${Y}^{(n)}(t)$. Using the known form of the probability density of the process ${Y}^{(n)}(t)$, we obtain:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Y}^{(n)}(t)& \displaystyle ={\int _{0}^{\infty }}s\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\varPhi \big(n,k,\lambda s{(\beta t)}^{n}\big)ds\\{} & \displaystyle ={\int _{0}^{\infty }}s\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda s{(\beta t)}^{n})}^{l}}{l!\varGamma (nl+k)}ds\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda {(\beta t)}^{n})}^{l}}{l!\varGamma (nl+k)}{\int _{0}^{\infty }}{e}^{-\lambda s}{s}^{l+1}ds\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda {(\beta t)}^{n})}^{l}\varGamma (l+2)}{l!\varGamma (nl+k){\lambda }^{l+2}}\\{} & \displaystyle =\frac{1}{\lambda }{e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{2}}\big({(\beta t)}^{n}\big),\end{array}\]
and, analogously, for the moment of general order $p\ge 1$:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{\big[{Y}^{(n)}(t)\big]}^{p}& \displaystyle ={\int _{0}^{\infty }}{s}^{p}\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\varPhi \big(n,k,\lambda s{(\beta t)}^{n}\big)ds\\{} & \displaystyle ={\int _{0}^{\infty }}{s}^{p}\lambda {e}^{-\beta t-\lambda s}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda s{(\beta t)}^{n})}^{l}}{l!\varGamma (nl+k)}ds\\{} & \displaystyle =\lambda {e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{l=0}^{\infty }\frac{{(\lambda {(\beta t)}^{n})}^{l}}{l!\varGamma (nl+k)}{\int _{0}^{\infty }}{e}^{-\lambda s}{s}^{l+p}ds\\{} & \displaystyle =\frac{p!}{{\lambda }^{p}}{e}^{-\beta t}\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}{\mathcal{E}_{n,k}^{p+1}}\big({(\beta t)}^{n}\big).\end{array}\]
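As an independent check of this moment formula, note that the renewal structure of the Poisson–Erlang subordinator suggests the representation ${Y}^{(n)}(t)\sim \varGamma (1+\lfloor K/n\rfloor ,1/\lambda )$ with $K\sim \mathrm{Pois}(\beta t)$ (our own observation, not used elsewhere in the paper), so that $\mathsf{E}{[{Y}^{(n)}(t)]}^{p}=\mathsf{E}[\varGamma (M+p)/\varGamma (M)]/{\lambda }^{p}$ with $M=1+\lfloor K/n\rfloor $. A numerical comparison of the two expressions:

```python
import math

lam, beta, t, n, p = 1.5, 2.0, 1.2, 3, 2
z = (beta * t) ** n

# series formula: E[(Y^(n)(t))^p] = p!/lam^p * e^{-beta t}
#                 * sum_k (beta t)^{k-1} E^{p+1}_{n,k}((beta t)^n),
# with the Prabhakar function E^{p+1}_{n,k} written out via (p+1)_j / j!
series = 0.0
for k in range(1, n + 1):
    e3 = sum(math.exp(math.lgamma(p + 1 + j) - math.lgamma(p + 1) - math.lgamma(j + 1)
                      - math.lgamma(n * j + k) + j * math.log(z)) for j in range(150))
    series += (beta * t) ** (k - 1) * e3
series *= math.factorial(p) / lam ** p * math.exp(-beta * t)

# mixture representation (assumption, from the renewal structure):
# Y^(n)(t) ~ Gamma(1 + floor(K/n), 1/lam), K ~ Poisson(beta*t)
mix = 0.0
for kk in range(300):
    pk = math.exp(-beta * t + kk * math.log(beta * t) - math.lgamma(kk + 1))
    m = 1 + kk // n
    mix += pk * math.exp(math.lgamma(m + p) - math.lgamma(m))
mix /= lam ** p
print(series, mix)
```

For $n=1$ the representation reduces to $M=1+K$, which recovers the moments of the inverse compound Poisson-exponential process from Section 5.1.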
To find the moments we can also use the moment generating function of ${Y}^{(n)}(t)$:
\[\mathsf{E}\big[{e}^{\theta {Y}^{(n)}(t)}\big]={e}^{-\beta t}\frac{\lambda }{\lambda -\theta }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\bigg(\frac{\lambda {(\beta t)}^{n}}{\lambda -\theta }\bigg)\]
(with argument $\theta <\lambda $); differentiating with respect to θ and evaluating the derivative at $\theta =0$, we obtain the expectation of ${Y}^{(n)}(t)$ in the following form:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathsf{E}{Y}^{(n)}(t)& \displaystyle =\frac{{e}^{-\beta t}}{\lambda }\sum \limits_{k=1}^{n}\sum \limits_{j=0}^{\infty }\frac{(j+1){(\beta t)}^{nj+k-1}}{\varGamma (nj+k)}\\{} & \displaystyle =\frac{{e}^{-\beta t}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\sum \limits_{j=0}^{\infty }\frac{j{(\beta t)}^{nj}}{\varGamma (nj+k)}+\frac{{e}^{-\beta t}}{\lambda }\sum \limits_{k=1}^{n}{(\beta t)}^{k-1}\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big).\end{array}\]
Using some calculations from [17] (see proof of Lemma 3.11 therein), we can represent the first term in the above expression in the following form:
\[\frac{\beta t}{n\lambda }-\frac{{e}^{-\beta t}}{n\lambda }\sum \limits_{k=1}^{n}(k-1){(\beta t)}^{k-1}\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big),\]
and, therefore, we obtain
\[\mathsf{E}{Y}^{(n)}(t)=\frac{\beta t}{n\lambda }+\frac{{e}^{-\beta t}}{n\lambda }\sum \limits_{k=1}^{n}(n-k+1){(\beta t)}^{k-1}\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big);\]
and then we can use the asymptotic relation
\[\mathcal{E}_{n,k}\big({(\beta t)}^{n}\big)\sim \frac{1}{n}{(\beta t)}^{1-k}{e}^{\beta t}\hspace{1em}\text{as}\hspace{2.5pt}t\to \infty ,\]
to arrive at the following asymptotics for $\mathsf{E}{Y}^{(n)}(t)$:
\[\mathsf{E}{Y}^{(n)}(t)\sim \frac{\beta t}{n\lambda }+\frac{n+1}{2n\lambda }\hspace{1em}\text{as}\hspace{2.5pt}t\to \infty .\]
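The asymptotics can be checked numerically against the exact series representation for $\mathsf{E}{Y}^{(n)}(t)$ obtained above; for $n=2$ the correction term decays like ${e}^{-2\beta t}$, so a moderate t already gives agreement to machine precision. A sketch (parameter values arbitrary):

```python
import math

lam, beta, n = 1.5, 2.0, 2
t = 12.0  # beta*t = 24: large enough for the exponential correction to vanish
z = (beta * t) ** n

def E_nk(k, terms=400):
    # two-parameter Mittag-Leffler function E_{n,k}(z) = sum_j z^j / Gamma(n*j + k),
    # computed in log-space to avoid overflow of Gamma
    return sum(math.exp(j * math.log(z) - math.lgamma(n * j + k)) for j in range(terms))

# exact expectation from the series representation above
exact = beta * t / (n * lam) + math.exp(-beta * t) / (n * lam) * sum(
    (n - k + 1) * (beta * t) ** (k - 1) * E_nk(k) for k in range(1, n + 1))

# asymptotic expression
asym = beta * t / (n * lam) + (n + 1) / (2 * n * lam)
print(exact, asym)
```

For $n=2$ this can also be seen in closed form, since $\mathcal{E}_{2,1}({x}^{2})=\cosh x$ and $\mathcal{E}_{2,2}({x}^{2})=\sinh (x)/x$, which gives $\mathsf{E}{Y}^{(2)}(t)=\beta t/(2\lambda )+(3+{e}^{-2\beta t})/(4\lambda )$.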

References

[1] 
Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press (2009). MR2512800. doi:10.1017/CBO9780511809781
[2] 
Barndorff-Nielsen, O.E., Pollard, D., Shephard, N.: Integer-valued Lévy processes and low latency financial econometrics. Quant. Finance 12(4), 587–605 (2011). MR2909600. doi:10.1080/14697688.2012.664935
[3] 
Bertoin, J.: Lévy Processes. Cambridge University Press (1996). MR1406564
[4] 
Cox, D.R.: Renewal Theory. Methuen, London (1962). MR0153061
[5] 
Crescenzo, A.D., Martinucci, B., Zacks, S.: Compound Poisson process with a Poisson subordinator. J. Appl. Probab. 52(2), 360–374 (2015). MR3372080. doi:10.1239/jap/1437658603
[6] 
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley (1971). MR0038583
[7] 
Garra, R., Orsingher, E., Scavino, M.: Some probabilistic properties of fractional point processes. Stoch. Anal. Appl. 35(4), 701–718 (2017). MR3651139. doi:10.1080/07362994.2017.1308831
[8] 
Haubold, H.J., Mathai, A.M., Saxena, R.K.: Mittag-Leffler functions and their applications. J. Appl. Math. (2011), 51 pp. MR2800586. doi:10.1155/2011/298628
[9] 
Irwin, J.O.: The frequency distribution of the difference between two independent variates following the same Poisson distribution. J. R. Stat. Soc. A 100, 415–416 (1937).
[10] 
Kerss, A., Leonenko, N., Sikorskii, A.: Fractional Skellam processes with applications to finance. Fract. Calc. Appl. Anal. 17(2), 532–551 (2014). MR3181070. doi:10.2478/s13540-014-0184-2
[11] 
Kobylych, K., Sakhno, L.: Point processes subordinated to compound Poisson processes. Theory Probab. Math. Stat. 94, 85–92 (2016) (in Ukrainian); English translation to appear in Theory Probab. Math. Stat. 94 (2017). MR3553456
[12] 
Kumar, A., Nane, E., Vellaisamy, P.: Time-changed Poisson processes. Stat. Probab. Lett. 81(12), 1899–1910 (2011). MR2845907. doi:10.1016/j.spl.2011.08.002
[13] 
Leonenko, N., Scalas, E., Trinh, M.: The fractional non-homogeneous Poisson process. Stat. Probab. Lett. 120, 147–156 (2017). MR3567934. doi:10.1016/j.spl.2016.09.024
[14] 
Leonenko, N., Meerschaert, M., Schilling, R., Sikorskii, A.: Correlation Structure of Time-Changed Lévy Processes. Commun. Appl. Ind. Math. 6(1) (2014). MR3277310. doi:10.1685/journal.caim.483
[15] 
Orsingher, E., Polito, F.: The space-fractional Poisson process. Stat. Probab. Lett. 82, 852–858 (2012). MR2899530. doi:10.1016/j.spl.2011.12.018
[16] 
Orsingher, E., Toaldo, B.: Counting processes with Bernštein intertimes and random jumps. J. Appl. Probab. 52, 1028–1044 (2015). MR3439170. doi:10.1239/jap/1450802751
[17] 
Paris, R., Vinogradov, V.: Fluctuation properties of compound Poisson–Erlang Lévy processes. Commun. Stoch. Anal. 7(2), 283–302 (2013). MR3092235
[18] 
Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press (1999). MR1739520
[19] 
Skellam, J.G.: The frequency distribution of the difference between two Poisson variables belonging to different populations. J. R. Stat. Soc. A 109, 296 (1946). MR0020750
[20] 
Sneddon, I.N.: Special Functions of Mathematical Physics and Chemistry. Oliver and Boyd, Edinburgh (1956). MR0080170
[21] 
Veillette, M., Taqqu, M.S.: Using differential equations to obtain joint moments of first-passage times of increasing Lévy processes. Stat. Probab. Lett. 80, 697–705 (2010). MR2595149. doi:10.1016/j.spl.2010.01.002