Modern Stochastics: Theory and Applications

Subordinated compound Poisson processes of order k
Volume 7, Issue 4 (2020), pp. 395–413
Ayushi Singh Sengar   Neelesh S. Upadhye  

https://doi.org/10.15559/20-VMSTA165
Pub. online: 5 November 2020 · Type: Research Article · Open Access

Received: 10 May 2020
Revised: 31 August 2020
Accepted: 24 October 2020
Published: 5 November 2020

Abstract

In this article, the compound Poisson process of order k (CPPoK) is introduced and its properties are discussed. Further, using a mixture of tempered stable subordinators (MTSS) and its right-continuous inverse, two subordinated versions of CPPoK with various distributional properties are studied. It is also shown that the space and tempered space fractional versions of CPPoK and PPoK can be obtained, which generalize the process defined in [Statist. Probab. Lett. 82 (2012), 852–858].

1 Introduction

The Poisson process has been the conventional model for count data analysis, and owing to its popularity and applicability various researchers have generalized it in several directions, e.g., compound Poisson processes, weighted Poisson distributions, and fractional (time-changed) versions of Poisson processes (see [6, 21, 13, 2, 1] and references therein). A handful of researchers have also studied distributions and processes of order k (see [11, 23, 18]). In particular, the discrete distributions of order k, introduced by Philippou et al. (see [24]), include the binomial, geometric and negative binomial distributions of order k. These distributions play an important role in several areas, such as reliability, statistics, finance, and actuarial risk analysis (see [19, 29, 4]).
In risk theory, the total claim amount is usually modeled by a compound Poisson process, say ${Z_{t}}={\textstyle\sum _{i=1}^{N(t)}}{Y_{i}}$, where the compounding random variables ${Y_{i}}$ are IID and the number of claims $N(t)$, independent of ${\{{Y_{i}}\}_{i\ge 1}}$, follows the Poisson distribution. Since this model allows only a single arrival in each inter-arrival time, it is unsuitable when claims arrive in batches. Kostadinova and Minkova [10] introduced the Poisson process of order k (PPoK), which allows us to model arrivals in groups of size up to k. Recently, a time-changed version of the Poisson process of order k was studied in [29], which allows group arrivals and also accommodates the case of extreme events, not covered by [10]. In spite of its applicability, we observe that this model does not cover the phenomenon of underdispersion, where the variance is less than the mean (see [25, 32] and references therein). Therefore, a generalization of this model is needed and is proposed in this article.
To the best of our knowledge, such a generalization has not yet been studied. Therefore, we introduce the compound Poisson process of order k (CPPoK) with the help of the Poisson process of order k (PPoK) and study its distributional properties. Recently, tempered stable processes and their inverses have been widely used for time-change (subordination), since their moments are finite and hence various real-life situations can be modeled easily (see [12, 26, 20]). Various subordinated versions of the Poisson process, such as the space- and time-fractional Poisson processes, have been studied in the literature (see [22, 17]). Hence, it is worth time-changing CPPoK by a special type of Lévy subordinator known as the mixture of tempered stable subordinators, and by its right-continuous inverse, and analyzing some properties of the resulting processes. These processes also generalize the process discussed in [22].
The article is organized as follows. In Section 2, we introduce CPPoK and derive some of its general properties along with martingale characterization property. In Section 3, we introduce two types of CPPoK with the help of MTSS and its right continuous inverse. Further we derive some important distributional properties.

Acronyms

PoK    Poisson distribution of order k
PPoK    Poisson process of order k
PP    Poisson process
CPPoK    Compound Poisson process of order k
FDD    Finite-dimensional distribution
IID    Independent and identically distributed
MTSS    Mixture of tempered stable subordinators
TCPPoK   Time changed compound Poisson process of order k
pmf    Probability mass function
pgf    Probability generating function
LRD    Long-range dependence

2 Compound Poisson process of order k and its properties

2.1 Compound Poisson process of order k

In this section, we introduce CPPoK and derive its distributional properties. First, we define the Poisson distribution of order k.
Definition 1 ([23]).
Let ${N^{(k)}}\sim PoK(\lambda )$, the Poisson distribution of order k (PoK) with rate parameter $\lambda >0$, then the probability mass function (pmf) of ${N^{(k)}}$ is given by
\[ \mathbb{P}[{N^{(k)}}=n]=\sum \limits_{\substack{{x_{1}},{x_{2}},\dots ,{x_{k}}\ge 0\\ {} {\textstyle\textstyle\sum _{j=1}^{k}}j{x_{j}}=n}}{e^{-k\lambda }}\frac{{\lambda ^{{x_{1}}+{x_{2}}+\cdots +{x_{k}}}}}{{x_{1}}!{x_{2}}!\cdots {x_{k}}!},\hspace{2.5pt}n=0,1,\dots ,\]
where the summation is taken over all non-negative integers ${x_{1}},{x_{2}},\dots ,{x_{k}}$ such that ${x_{1}}+2{x_{2}}+\cdots +k{x_{k}}=n$.
Philippou [23] showed that PoK arises as a limiting distribution of the negative binomial distribution of order k. Kostadinova and Minkova [10] later extended PoK to evolve over time, via the following process.
Definition 2 ([10]).
Let ${\{N(t)\}_{t\ge 0}}$ denote $\mathit{PP}(k\lambda )$, the Poisson process with rate parameter $k\lambda $, and ${\{{X_{i}}\}_{i\ge 1}}$ be a sequence of independent and identically distributed (IID) discrete uniform random variables with support on $\{1,2,\dots ,k\}$. Also, assume that ${\{{X_{i}}\}_{i\ge 1}}$ and ${\{N(t)\}_{t\ge 0}}$ are independent. Then ${\{{N^{(k)}}(t)\}_{t\ge 0}}$, defined by ${N^{(k)}}(t)={\textstyle\sum _{i=1}^{N(t)}}{X_{i}}$ is called the Poisson process of order k (PPoK) and is denoted by $\mathit{PPoK}(\lambda )$.
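Definition 2 translates directly into simulation: draw $N(t)$ from a Poisson law with mean $k\lambda t$ and add $N(t)$ uniform jumps on $\{1,\dots ,k\}$. A minimal stdlib-Python sketch (the helper names poisson_sample and ppok_sample are ours, and Knuth's inversion sampler is only adequate for moderate means):

```python
import math
import random

def poisson_sample(mean, rng):
    # Knuth's inversion method; fine for moderate means
    threshold, n, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return n
        n += 1

def ppok_sample(lam, k, t, rng):
    """One draw of N^(k)(t) = sum_{i=1}^{N(t)} X_i, with
    N(t) ~ Poisson(k*lam*t) and X_i i.i.d. uniform on {1, ..., k}."""
    n = poisson_sample(k * lam * t, rng)
    return sum(rng.randint(1, k) for _ in range(n))

rng = random.Random(42)
lam, k, t = 1.0, 3, 2.0
samples = [ppok_sample(lam, k, t, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # theoretical mean is k(k+1)/2 * lam * t = 12
```

The sample mean settles near the value $\frac{k(k+1)}{2}\lambda t$ derived later in Theorem 2.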
However, the clumping behavior associated with the random phenomena in [8] cannot be accommodated by the PPoK of [10]; hence this notion also needs to be generalized. We propose the following generalization of PPoK.
Definition 3.
Let ${\{{N^{(k)}}(t)\}_{t\ge 0}}$ be the $\mathit{PPoK}(\lambda )$ and ${\{{Y_{i}}\}_{i\ge 1}}$ be a sequence of IID random variables, independent of ${N^{(k)}}(t)$, with cumulative distribution function (CDF) H. Then the process ${\{Z(t)\}_{t\ge 0}}$ defined by $Z(t)={\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}}$ is called the compound Poisson process of order k (CPPoK) and is denoted by $\mathit{CPPoK}(\lambda ,H)$.
From the definition, it is clear that:
  • $(i)$ for $k=1$, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{CPP}(\lambda ,H)$ the usual compound Poisson process.
  • $(ii)$ for $H={\delta _{1}}$, the Dirac measure at 1, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{PPoK}(\lambda )$.
  • $(iii)$ for $k=1$ and $H={\delta _{1}}$, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{PP}(\lambda )$.
Next, we present a characterization of $\mathit{CPPoK}(\lambda ,H)$ in terms of its finite-dimensional distributions (FDD).
Theorem 1.
Let ${\{Z(t)\}_{t\ge 0}}$ be as defined in Definition 3. Then the FDD, ${F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\mathbb{P}[Z({t_{1}})\le {y_{1}},\dots ,Z({t_{n}})\le {y_{n}}]$, has the form
(1)
\[ {F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}},\]
where the summation is taken over all non-negative integers ${j_{i}}\ge 0,\hspace{2.5pt}i=1,2,\dots ,n$, ${v_{k}}={y_{k}}-{\textstyle\sum _{l=1}^{k-1}}{x_{l}},k=1,\dots ,n$, $\Delta {t_{l}}={t_{l}}-{t_{l-1}}$, h is the density/pmf of H, ${p_{{j_{l}}}}$ is the pmf of $\mathit{PPoK}(\lambda )$, and $(\ast {j_{n}})$ represents the ${j_{n}}$-fold convolution.
Proof.
Let $0={t_{0}}\le {t_{1}}\le \cdots \le {t_{n}}=t$ be a partition of $[0,t]$. Since the increments of $\{{N^{(k)}}(t)\}$ are independent and stationary, we can write ${N^{(k)}}({t_{i}})={\textstyle\sum _{l=1}^{i}}{N^{(k)}}(\Delta {t_{l}}),i=1,\dots ,n$, and $\mathbb{P}[{N^{(k)}}(t)=j]={p_{j}}(t),j=0,1,\dots $.
\[\begin{aligned}{}{F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})& =\mathbb{P}\Bigg[{\sum \limits_{i=1}^{{N^{(k)}}({t_{1}})}}{Y_{i}}\le {y_{1}},\dots ,{\sum \limits_{i=1}^{{N^{(k)}}({t_{n}})}}{Y_{i}}\le {y_{n}}\Bigg]\\ {} & =\sum \limits_{{j_{1}},\dots {j_{n}}}\mathbb{P}\Bigg[{\sum \limits_{i=1}^{{j_{1}}}}{Y_{i}}\le {y_{1}},\dots ,{\sum \limits_{i=1}^{{\textstyle\textstyle\sum _{l=1}^{n}}{j_{l}}}}{Y_{i}}\le {y_{n}}\Bigg]{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}})\end{aligned}\]
Let us denote ${\textstyle\sum _{i=1}^{{j_{1}}}}{Y_{i}}=Y({j_{1}}),\dots ,{\textstyle\sum _{i={j_{1}}+\cdots +{j_{n-1}}+1}^{{j_{1}}+\cdots +{j_{n}}}}{Y_{i}}=Y({j_{n}})$, then it becomes
\[\begin{aligned}{}& {F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}})\mathbb{P}\Bigg[Y({j_{1}})\le {y_{1}},\dots ,{\sum \limits_{l=1}^{n}}Y({j_{l}})\le {y_{n}}\Bigg]\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{y_{1}}}}\dots {\int _{-\infty }^{{y_{n}}-{\textstyle\textstyle\sum _{l=1}^{n-1}}{x_{l}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}}\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}}.\end{aligned}\]
 □
Remark 1.
For $n=1$, (1) reduces to $\mathbb{P}[Z(t)\le y]={\textstyle\sum _{j=0}^{\infty }}{p_{j}}(t){\textstyle\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx$, which is the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$.
Remark 2.
For $n=1$, when H is a discrete distribution, the pmf of $\mathit{CPPoK}(\lambda ,H)$, denoted by ${P_{n}}(t)=\mathbb{P}[Z(t)=n]$, is given by
\[ {P_{n}}(t)=\mathbb{P}[Z(t)=n]={\sum \limits_{m=0}^{\infty }}{p_{m}}(t){h_{{Y_{1}}}^{\ast m}}(n),\]
where ${p_{m}}(t)=\mathbb{P}[{N^{(k)}}(t)=m]$ is the pmf of $\mathit{PPoK}(\lambda )$ and ${h_{{Y_{1}}}^{\ast m}}$ is the m-fold convolution of the pmf of ${Y_{1}}$. The difference-differential equation satisfied by the pmf of $\mathit{CPPoK}(\lambda ,H)$ is
\[ \frac{d}{dt}{P_{n}}(t)={\sum \limits_{m=0}^{\infty }}{h_{{Y_{1}}}^{\ast m}}(n)\frac{d}{dt}{p_{m}}(t)={\sum \limits_{m=0}^{\infty }}{h_{{Y_{1}}}^{\ast m}}(n)\Bigg[-k\lambda {p_{m}}(t)+\lambda {\sum \limits_{j=1}^{m\wedge k}}{p_{m-j}}(t)\Bigg]\]
with the initial condition ${P_{0}}(0)=1$ and ${P_{n}}(0)=0$ for $n\ge 1$ (since $Z(0)=0$ a.s.), where $m\wedge k=\min (m,k)$.
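The series ${P_{n}}(t)={\textstyle\sum _{m}}{p_{m}}(t){h_{{Y_{1}}}^{\ast m}}(n)$ of Remark 2 can be evaluated numerically by truncating m and building the convolution powers iteratively. The sketch below (all function names ours) reproduces the setting of Fig. 1 further down ($k=1$, $\lambda =1$, $t=1$, $Y\sim Geo(0.5)$ on $\{0,1,\dots \}$), for which ${P_{0}}(1)$ should equal ${e^{-1/2}}$:

```python
import math

def conv(a, b):
    """Convolution of two pmfs given as lists indexed by value."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def ppok_pmf(lam, k, t, mmax):
    """p_m(t) = P[N^(k)(t) = m], m = 0..mmax, via the compound
    representation with X uniform on {1..k}; exact for m <= mmax
    because each jump is at least 1."""
    jump = [0.0] + [1.0 / k] * k
    pmf = [0.0] * (mmax + 1)
    term, w = [1.0], math.exp(-k * lam * t)   # 0-fold convolution; P[N(t)=0]
    for r in range(mmax + 1):
        for m in range(min(mmax, len(term) - 1) + 1):
            pmf[m] += w * term[m]
        term = conv(term, jump)
        w *= k * lam * t / (r + 1)            # next Poisson weight
    return pmf

def cppok_pmf(lam, k, t, h, nmax, mmax=30):
    """Remark 2: P_n(t) = sum_m p_m(t) h^{*m}(n), truncated at mmax."""
    p = ppok_pmf(lam, k, t, mmax)
    pmf = [0.0] * (nmax + 1)
    hm = [1.0]                                # h^{*0} = delta at 0
    for m in range(mmax + 1):
        for n in range(min(nmax, len(hm) - 1) + 1):
            pmf[n] += p[m] * hm[n]
        hm = conv(hm, h)
    return pmf

h = [0.5 * 0.5 ** n for n in range(40)]       # Geo(0.5) pmf, truncated
P = cppok_pmf(1.0, 1, 1.0, h, nmax=40)
print(P[0], sum(P))  # P_0(1) close to exp(-0.5); total mass close to 1
```

Truncation only affects the upper tail, so the low-order probabilities come out essentially exact.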
Next, we present some examples of $\mathit{CPPoK}(\lambda ,H)$ for different choices of the distribution H.
Example 1.
Let H be the geometric distribution with parameter $p\in (0,1]$. Then the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$ is given by
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{Z(t)}}(y)=\mathbb{P}[Z(t)\le y]& \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{p_{j}}(t){\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{I_{1-p}}(j,y+1){p_{j}}(t),\end{array}\]
where ${I_{x}}(a,b)=\frac{B(x;a,b)}{B(a,b)},\hspace{2.5pt}0<x<1$, is the regularized incomplete beta function and $B(x;a,b)={\textstyle\int _{0}^{x}}{t^{a-1}}{(1-t)^{b-1}}dt$ is the incomplete beta function.
Example 2.
Let H be the exponential distribution with parameter $\mu >0$. Then the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$ is given by
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{Z(t)}}(y)=\mathbb{P}[Z(t)\le y]& \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{p_{j}}(t){\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}\frac{\gamma (j,\mu y)}{(j-1)!}{p_{j}}(t),\end{array}\]
where $\gamma (s,x)={\textstyle\int _{0}^{x}}{e^{-t}}{t^{s-1}}dt,\hspace{2.5pt}x\ge 0$, is the lower incomplete gamma function.
Here we plot the pmf of the considered process when ${Y_{i}}\sim Geo(p)$, $p\in (0,1]$, fixing $t=1$, $\lambda =1$, $p=0.5$, and $k=1$.
Fig. 1. Plot for $\mathbb{P}[Z(t)=n]$
Theorem 2.
The mean, variance and covariance of the process ${\{Z(t)\}_{t\ge 0}}$ can be expressed as
  • $(i)$ $\mathbb{E}[Z(t)]=\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]$,
  • $(ii)$ $\textit{Var}[Z(t)]=\textit{Var}(Y)\mathbb{E}[{N^{(k)}}(t)]+\mathbb{E}{[Y]^{2}}\textit{Var}({N^{(k)}}(t))$.
  • $(iii)$ $\textit{Cov}[Z(s),Z(t)]=\textit{Var}(Y)\mathbb{E}[{N^{(k)}}(s\wedge t)]+\mathbb{E}{[Y]^{2}}\textit{Var}[{N^{(k)}}(s\wedge t)],\hspace{2.5pt}\textit{where}\hspace{2.5pt}s\wedge t=\min (s,t)$.
Proof.
$\mathbb{E}[Z(t)]=\mathbb{E}[{\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}}]=\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]=\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]$. Hence, part $(i)$ is proved.
Next, we derive the expression for variance of $Z(t)$. Let ${p_{n}}(t)$ be the pmf of $\mathit{PPoK}(\lambda )$. Then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \text{Var}[Z(t)]& \displaystyle =& \displaystyle \mathbb{E}{[Z(t)-\mathbb{E}[Z(t)]]^{2}}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}[{[Z(t)-\mathbb{E}[Z(t)]]^{2}}|{N^{(k)}}(t)=n]{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{\left[{S_{n}}-\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]\right]^{2}}{p_{n}}(t),\hspace{2.5pt}\text{where}\hspace{2.5pt}{S_{n}}={\textstyle\textstyle\sum _{i=1}^{n}}{Y_{i}}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{[({S_{n}}-n\mathbb{E}[Y])+(n\mathbb{E}[Y]-\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)])]^{2}}{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\left[\text{Var}({S_{n}})+\mathbb{E}{[Y]^{2}}\mathbb{E}{[n-\mathbb{E}[{N^{(k)}}(t)]]^{2}}\right]{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle \text{Var}(Y)\mathbb{E}[{N^{(k)}}(t)]+\mathbb{E}{[Y]^{2}}\text{Var}({N^{(k)}}(t)),\end{array}\]
which proves part $(ii)$.
Now, in order to derive the covariance term, first we evaluate $\mathbb{E}[Z(s)Z(t)]$. So
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[Z(s)Z(t)]& \displaystyle =& \displaystyle \mathbb{E}[Z(s)]\mathbb{E}[Z(t)-Z(s)]+\mathbb{E}[Z{(s)^{2}}]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(s)]\mathbb{E}[Z(t)-Z(s)]+\text{Var}[Z(s)]+\mathbb{E}{[Z(s)]^{2}}\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle Cov(Z(s),Z(t))& \displaystyle =& \displaystyle \mathbb{E}[Z(s)Z(t)]-\mathbb{E}[Z(s)]\mathbb{E}[Z(t)]\\ {} & \displaystyle =& \displaystyle \text{Var}(Y)\mathbb{E}[{N^{(k)}}(s\wedge t)]+\mathbb{E}{[Y]^{2}}\text{Var}[{N^{(k)}}(s\wedge t)],\end{array}\]
hence, part $(iii)$ is proved.  □
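Theorem 2's moment formulas are easy to validate by Monte Carlo. A stdlib-Python sketch with ${Y_{i}}\sim \exp (\mu )$ (an illustrative choice; parameter values and function names are ours); in this setting the theorem gives $\mathbb{E}[Z(t)]=6$ and $\text{Var}[Z(t)]=32$:

```python
import math
import random

def cppok_sample(lam, k, mu, t, rng):
    """One draw of Z(t) with Y_i ~ Exp(mu)."""
    # N(t) ~ Poisson(k*lam*t) via Knuth inversion (mean is small here)
    threshold, n, p = math.exp(-k * lam * t), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            break
        n += 1
    nk = sum(rng.randint(1, k) for _ in range(n))       # N^(k)(t)
    return sum(rng.expovariate(mu) for _ in range(nk))  # Z(t)

rng = random.Random(1)
lam, k, mu, t = 1.0, 2, 0.5, 1.0
zs = [cppok_sample(lam, k, mu, t, rng) for _ in range(100_000)]
m = sum(zs) / len(zs)
v = sum((z - m) ** 2 for z in zs) / len(zs)
# Theorem 2: E[Z] = E[Y] E[N^(k)] = 2 * 3 = 6,
#            Var[Z] = Var(Y) E[N^(k)] + E[Y]^2 Var(N^(k)) = 4*3 + 4*5 = 32
print(m, v)
```

Here $\mathbb{E}[{N^{(k)}}(t)]=\frac{k(k+1)}{2}\lambda t=3$ and $\text{Var}[{N^{(k)}}(t)]=\frac{k(k+1)(2k+1)}{6}\lambda t=5$.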
Remark 3.
Now we present the mean and covariance formula for some specific cases of $\mathit{CPPoK}(\lambda ,H)$ that we discussed in Definition 3.
Case | Mean | Covariance | Variance
1. $H={\delta _{1}}$ | $\mathbb{E}[{N^{(k)}}(t)]$ | $\textit{Var}[{N^{(k)}}(s\wedge t)]$ | $\textit{Var}[{N^{(k)}}(t)]$
2. $k=1$ | $\mathbb{E}[Y]\mathbb{E}[N(t)]$ | $\mathbb{E}[{Y^{2}}]\mathbb{E}[N(s\wedge t)]$ | $\mathbb{E}[{Y^{2}}]\mathbb{E}[N(t)]$
3. $H={\delta _{1}}$ and $k=1$ | $\mathbb{E}[N(t)]$ | $\textit{Var}[N(s\wedge t)]$ | $\textit{Var}[N(t)]$
From this table, we make the following observations.
  • 1. For $H={\delta _{1}}$, it reduces to mean and covariance formula for $\mathit{PPoK}(\lambda )$ (see [29]).
  • 2. For $k=1$, it reduces to mean and covariance formula for $\mathit{CPP}(\lambda ,H)$.
  • 3. For $H={\delta _{1}}$ and $k=1$, it reduces to mean and covariance formula for $\mathit{PP}(\lambda )$.
Further, we present plots of the mean and variance when ${Y_{i}}\sim exp(\mu )$, $\mu >0$, for different parameter settings, fixing $t=10$.
Fig. 2. Plots for mean and variance

2.2 Index of dispersion

In this subsection, we discuss the index of dispersion of $\mathit{CPPoK}(\lambda ,H)$.
Definition 4 ([16]).
The index of dispersion for a counting process ${\{Z(t)\}_{t\ge 0}}$ is defined by
\[ I(t)=\frac{\text{Var}[Z(t)]}{\mathbb{E}[Z(t)]}.\]
Then the stochastic process ${\{Z(t)\}_{t\ge 0}}$ is said to be overdispersed if $I(t)>1$, underdispersed if $I(t)<1$, and equidispersed if $I(t)=1$.
Alternatively, Definition 4 can be interpreted as follows. A stochastic process ${\{Z(t)\}_{t\ge 0}}$ is over(under)-dispersed if $\text{Var}[Z(t)]-\mathbb{E}[Z(t)]>0(<0)$. Therefore, we first calculate
(2)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \text{Var}[Z(t)]-\mathbb{E}[Z(t)]& \displaystyle =& \displaystyle \frac{k(k+1)}{2}\lambda t\left[\text{Var}[Y]-\mathbb{E}[Y]+\frac{(2k+1)}{3}\mathbb{E}{[Y]^{2}}\right]\\ {} & \displaystyle =& \displaystyle \frac{k(k+1)}{2}\lambda t\left[\mathbb{E}[{Y^{2}}]-\mathbb{E}[Y]+\frac{2k-2}{3}\mathbb{E}{[Y]^{2}}\right].\end{array}\]
From (2), the following cases arise:
  • $(i)$ If the ${Y_{i}}$'s are overdispersed or equidispersed, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
  • $(ii)$ If the ${Y_{i}}$'s are underdispersed but non-negative integer valued, so that $\mathbb{E}[{Y^{2}}]-\mathbb{E}[Y]\ge 0$, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
  • $(iii)$ If the ${Y_{i}}$'s are underdispersed, then $\mathit{CPPoK}(\lambda ,H)$ may show over-, equi- or underdispersion.
Further we present some examples to discuss the index of dispersion.
Example 3.
If ${Y_{i}}\sim exp(\mu )$, where $\mu >0$, then (2) becomes
\[ \text{Var}[Z(t)]-\mathbb{E}[Z(t)]=\frac{k(k+1)}{2\mu }\lambda t\left[\frac{1}{\mu }\left(\frac{2k+4}{3}\right)-1\right].\]
  • (a) If $0<\mu <\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
  • (b) If $\mu =\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ shows equidispersion.
  • (c) If $\mu >\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ shows underdispersion.
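Example 3's closed form can be checked against equation (2) with exact rational arithmetic; the sketch below (helper name ours) also confirms the equidispersion point $\mu =\frac{2k+4}{3}$ and the sign of the gap for a larger $\mu $:

```python
from fractions import Fraction as F

def dispersion_gap(k, lam, t, ey, ey2):
    """Var[Z(t)] - E[Z(t)] assembled from Theorem 2, as in equation (2)."""
    en = F(k * (k + 1), 2) * lam * t                 # E[N^(k)(t)]
    vn = F(k * (k + 1) * (2 * k + 1), 6) * lam * t   # Var[N^(k)(t)]
    return (ey2 - ey ** 2) * en + ey ** 2 * vn - ey * en

# Y ~ Exp(mu): E[Y] = 1/mu, E[Y^2] = 2/mu^2
k, lam, t, mu = 3, 1, 1, F(7, 2)
gap = dispersion_gap(k, lam, t, 1 / mu, 2 / mu ** 2)
example3 = F(k * (k + 1), 2) / mu * lam * t * (F(2 * k + 4, 3) / mu - 1)
mu0 = F(2 * k + 4, 3)                                # equidispersion point
print(gap == example3, dispersion_gap(k, lam, t, 1 / mu0, 2 / mu0 ** 2))
```

With $\mu =7/2>\frac{2k+4}{3}=10/3$ the gap is negative, i.e. underdispersion, as case (c) predicts.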
Example 4.
If ${Y_{i}}\sim Geo(p)$, where $p\in (0,1]$, then (2) becomes
\[ \text{Var}[Z(t)]-\mathbb{E}[Z(t)]=\frac{k(k+1)\lambda t}{2}{\left(\frac{1-p}{p}\right)^{2}}\left[1+\frac{2k+1}{3}\right].\]
If $0<p<1$, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.

2.3 Long range dependence

In this subsection, we prove the long-range dependence (LRD) property for $\mathit{CPPoK}(\lambda ,H)$. Several definitions are available in the literature; we use the one given in [15].
Definition 5 ([15]).
Let $0\le s<t$ and s be fixed. Assume a stochastic process ${\{X(t)\}_{t\ge 0}}$ has the correlation function $\mathit{Corr}[X(s),X(t)]$ that satisfies
\[ {c_{1}}(s){t^{-d}}\le \mathit{Corr}[X(s),X(t)]\le {c_{2}}(s){t^{-d}},\]
for large $t$, where $d>0$, ${c_{1}}(s)>0$ and ${c_{2}}(s)>0$; equivalently,
\[ \underset{t\to \infty }{\lim }\frac{\mathit{Corr}[X(s),X(t)]}{{t^{-d}}}=c(s)\]
for some $c(s)>0$ and $d>0$. We say that $X(t)$ has the long-range dependence (LRD) property if $d\in (0,1)$ and the short-range dependence (SRD) property if $d\in (1,2)$.
Proposition 2.1.
The $\mathit{CPPoK}(\lambda ,H)$ has the LRD property.
Proof.
Let $0\le s<t$. Consider
(3)
\[\begin{aligned}{}\mathit{Corr}[Z(s),Z(t)]& =\frac{Cov[Z(s),Z(t)]}{\sqrt{\text{Var}[Z(s)]\text{Var}[Z(t)]}},\\ {} & =\sqrt{\frac{\frac{k(k+1)}{2}\lambda s\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda s\mathbb{E}{[Y]^{2}}}{\frac{k(k+1)}{2}\lambda t\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda t\mathbb{E}{[Y]^{2}}}},\hspace{2.5pt}\text{from Theorem 2}\hspace{2.5pt}\\ {} & =c(s){t^{-1/2}},\end{aligned}\]
where, $0<c(s)=\sqrt{\frac{\frac{k(k+1)}{2}\lambda s\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda s\mathbb{E}{[Y]^{2}}}{\frac{k(k+1)}{2}\lambda \text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda \mathbb{E}{[Y]^{2}}}}$, which decays like the power law ${t^{-1/2}}$. Hence $\mathit{CPPoK}(\lambda ,H)$ has LRD property.  □
Corollary 1.
If ${Y_{i}}\equiv 1$, then $\mathit{CPPoK}(\lambda ,H)$ reduces to $\mathit{PPoK}(\lambda )$ and $\mathit{Corr}[Z(s),Z(t)]$ becomes ${s^{1/2}}{t^{-1/2}}$. Hence the LRD property also holds for $\mathit{PPoK}$ (see [29, Lemma 3.1]).
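The power-law decay in the proof of Proposition 2.1 can be seen numerically: multiplying $\mathit{Corr}[Z(s),Z(t)]$ by ${t^{1/2}}$ gives a quantity constant in t. A sketch with illustrative moments $\text{Var}(Y)=\mathbb{E}{[Y]^{2}}=4$ (all names and values ours):

```python
import math

def corr_z(s, t, k, lam, var_y, ey_sq):
    """Corr[Z(s), Z(t)] = sqrt(Var[Z(s)] / Var[Z(t)]) for s <= t, as in (3)."""
    def var_z(u):
        return (k * (k + 1) / 2 * lam * u * var_y
                + k * (k + 1) * (2 * k + 1) / 6 * lam * u * ey_sq)
    return math.sqrt(var_z(s) / var_z(t))

s, k, lam, var_y, ey_sq = 2.0, 3, 1.0, 4.0, 4.0
scaled = [corr_z(s, t, k, lam, var_y, ey_sq) * t ** 0.5 for t in (1e2, 1e4, 1e6)]
print(scaled)  # constant in t, so Corr decays exactly like t^{-1/2}
```

Since $\text{Var}[Z(u)]$ is linear in u, the scaled values all equal $\sqrt{s}$, and $d=\frac{1}{2}\in (0,1)$ gives LRD.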

2.4 Martingale characterization for CPPoK

It is well known that the martingale characterization of the homogeneous Poisson process is given by Watanabe's theorem (see [5]). We now extend this theorem to $\mathit{CPPoK}(\lambda ,H)$, where H is a discrete distribution with support on ${\mathbb{Z}^{+}}$; for this, we need the following two lemmas.
Lemma 1.
Let $D(t)={\textstyle\sum _{j=1}^{N(t)}}{X_{j}},\hspace{2.5pt}t\ge 0$, be a compound Poisson process, where ${\{N(t)\}_{t\ge 0}}$ is $\mathit{PP}(k\lambda )$ and ${\{{X_{j}}\}_{j\ge 1}}$ are non-negative IID random variables, independent of ${\{N(t)\}_{t\ge 0}}$, with pmf $\mathbb{P}({X_{j}}=i)={\alpha _{i}}$ $(i=0,1,2,\dots ,\hspace{2.5pt}j=1,2,\dots )$. Then ${\{D(t)\}_{t\ge 0}}$ can be represented as
\[ D(t)\stackrel{d}{=}{\sum \limits_{i=1}^{\infty }}i{Z_{i}}(t),\hspace{2.5pt}\hspace{2.5pt}t\ge 0,\]
where ${Z_{i}}(t)$, $i=1,2,\dots $, are independent $\mathit{PP}(k\lambda {\alpha _{i}})$ processes, and the symbol $\stackrel{d}{=}$ denotes equality in distribution.
Proof.
We prove this lemma by showing that the probability generating functions (pgf) of the two sides coincide. Let ${G_{D(t)}}(u)$ be the pgf of ${\{D(t)\}_{t\ge 0}}$; then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {G_{D(t)}}(u)=\mathbb{E}[{u^{D(t)}}]& \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{[{u^{{X_{1}}}}]^{n}}\mathbb{P}[N(t)=n]\\ {} & \displaystyle =& \displaystyle \exp [-\lambda kt(1-\mathbb{E}[{u^{X}}])]\\ {} & \displaystyle =& \displaystyle \exp \Bigg[\lambda kt{\sum \limits_{j=1}^{\infty }}{\alpha _{j}}({u^{j}}-1)\Bigg].\end{array}\]
Now we compute the pgf of ${\textstyle\sum _{i=1}^{\infty }}i{Z_{i}}(t)$, i.e.,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{u^{{\textstyle\textstyle\sum _{i=1}^{\infty }}i{Z_{i}}(t)}}]& \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}\mathbb{E}[{u^{i{Z_{i}}(t)}}]\\ {} & \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}{\sum \limits_{{n_{i}}=0}^{\infty }}{u^{i{n_{i}}}}\mathbb{P}[{Z_{i}}(t)={n_{i}}]\\ {} & \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}{\sum \limits_{{n_{i}}=0}^{\infty }}{u^{i{n_{i}}}}{e^{-k\lambda {\alpha _{i}}t}}\frac{{(k\lambda {\alpha _{i}}t)^{{n_{i}}}}}{{n_{i}}!}\\ {} & \displaystyle =& \displaystyle \exp \Bigg[\lambda kt{\sum \limits_{i=1}^{\infty }}{\alpha _{i}}({u^{i}}-1)\Bigg].\end{array}\]
Hence, this lemma is proved.  □
Lemma 2.
The pgf of $Z(t)={\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}},\hspace{2.5pt}t\ge 0$ has the following form
\[ {G_{Z(t)}}(u)=\exp \left[\lambda kt{\sum \limits_{{j_{1}}=1}^{\infty }}\frac{{q_{{j_{1}}}^{(1)}}+{q_{{j_{1}}}^{(2)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k}({u^{{j_{1}}}}-1)\right],\]
where ${Y_{i}},\hspace{2.5pt}i=1,2,\dots $ are non-negative integer valued IID random variables with $\mathbb{P}[{Y_{i}}=n]={q_{n}},\hspace{2.5pt}n=0,1,\dots $.
Proof.
Let ${G_{{N^{(k)}}(t)}}(u)$ be the pgf of ${\{{N^{(k)}}(t)\}_{t\ge 0}}$, which is given by (see [29, Remark 2.2])
\[ {G_{{N^{(k)}}(t)}}(u)=\exp [-k\lambda t+\lambda t\{u+{u^{2}}+\cdots +{u^{k}}\}].\]
The pgf of ${\{Z(t)\}_{t\ge 0}}$, denoted as ${G_{Z(t)}}(u)=\mathbb{E}[{u^{Z(t)}}]$, is then given by
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {G_{Z(t)}}(u)& \displaystyle =& \displaystyle \exp [\lambda t\{{G_{Y}}(u)+\cdots +{G_{Y}^{\ast k}}(u)\}-k\lambda t],\hspace{2.5pt}\text{where}\hspace{2.5pt}{G_{Y}}(u)=\mathbb{E}[{u^{Y}}]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t\left\{{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}}{u^{{j_{1}}}}+{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{\ast 2}}{u^{{j_{1}}}}+\cdots +{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{\ast k}}{u^{{j_{1}}}}\right\}-k\lambda t\right]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t\left\{{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}}{u^{{j_{1}}}}+\cdots +{\sum \limits_{{j_{1}}=0}^{\infty }}\dots {\sum \limits_{{j_{k}}=0}^{{j_{k-1}}}}{q_{{j_{k}}}}\dots {q_{{j_{1}}-{j_{2}}}}{u^{{j_{1}}}}\right\}-k\lambda t\right]\\ {} & & \displaystyle \hspace{-63.0pt}\text{Let us denote}\hspace{2.5pt}{q_{{j_{1}}}^{(n)}}={\prod \limits_{m=2}^{n}}{\sum \limits_{{j_{m}}=0}^{{j_{m-1}}}}{q_{{j_{n}}}}{q_{{j_{n-1}}-{j_{n}}}}\dots {q_{{j_{1}}-{j_{2}}}},\hspace{2.5pt}n=1,\dots ,k.\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{(1)}}({u^{{j_{1}}}}-1)+\cdots +\lambda t{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{(k)}}({u^{{j_{1}}}}-1)\right]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda kt{\sum \limits_{{j_{1}}=1}^{\infty }}\frac{{q_{{j_{1}}}^{(1)}}+{q_{{j_{1}}}^{(2)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k}({u^{{j_{1}}}}-1)\right].\end{array}\]
 □
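The first displayed line of the proof, ${G_{Z(t)}}(u)=\exp [\lambda t\{{G_{Y}}(u)+\cdots +{G_{Y}^{\ast k}}(u)\}-k\lambda t]$ (reading ${G_{Y}^{\ast k}}(u)$ as ${G_{Y}}(u)^{k}$, the pgf of a k-fold convolution), can be checked by Monte Carlo. A sketch with $Y\sim Geo(p)$ on $\{0,1,\dots \}$, an illustrative choice with all parameter values ours:

```python
import math
import random

lam, k, t, p, u = 1.0, 2, 1.0, 0.5, 0.6
g_y = p / (1 - (1 - p) * u)                    # pgf of Geo(p) at u
closed = math.exp(lam * t * sum(g_y ** j for j in range(1, k + 1)) - k * lam * t)

rng = random.Random(7)

def z_sample():
    threshold, n, q = math.exp(-k * lam * t), 0, 1.0
    while True:                                # N(t) ~ Poisson(k*lam*t)
        q *= rng.random()
        if q < threshold:
            break
        n += 1
    nk = sum(rng.randint(1, k) for _ in range(n))          # N^(k)(t)
    return sum(math.floor(math.log(1.0 - rng.random()) / math.log(1.0 - p))
               for _ in range(nk))                          # add Geo(p) jumps

est = sum(u ** z_sample() for _ in range(200_000)) / 200_000
print(est, closed)  # Monte Carlo estimate of E[u^{Z(t)}] vs. the pgf formula
```

The inverse-transform draw floor(log U / log(1-p)) is a standard sampler for the geometric law on $\{0,1,\dots \}$.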
Remark 4.
Set ${\alpha _{{j_{1}}}}=\frac{{q_{{j_{1}}}^{(1)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k},\hspace{2.5pt}{j_{1}}=0,1,2,\dots $. Substituting ${\alpha _{{j_{1}}}}$ in Lemma 1, we get the following relation
(4)
\[ D(t)\stackrel{d}{=}{\sum \limits_{i=1}^{\infty }}i{Z_{i}}(t)\stackrel{d}{=}{\sum \limits_{i=1}^{{N^{(k)}}(t)}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where ${Y^{\prime }_{i}}s$ are non-negative integer valued random variables and ${N^{(k)}}(t)$ is $\mathit{PPoK}(\lambda )$.
Theorem 3.
Let ${\{Z(t)\}_{t\ge 0}}$ be an ${\mathcal{F}_{t}}$-adapted stochastic process, where $\{{\mathcal{F}_{t}}\}$ is a non-decreasing family of sub-sigma-algebras. Then ${\{Z(t)\}_{t\ge 0}}$ is a $\mathit{CPPoK}(\lambda ,H)$, where H is a discrete distribution with support on ${\mathbb{Z}^{+}}$, if and only if the process $M(t)=Z(t)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]$, $t\ge 0$, is an ${\mathcal{F}_{t}}$-martingale.
Proof.
Let ${\{Z(t)\}_{t\ge 0}}$ be the ${\mathcal{F}_{t}}$ adapted stochastic process. If ${\{Z(t)\}_{t\ge 0}}$ is a compound Poisson process of order k, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[M(t)|{\mathcal{F}_{s}}]& \displaystyle =& \displaystyle \mathbb{E}[Z(t)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]|{\mathcal{F}_{s}}],\hspace{2.5pt}\hspace{2.5pt}0\le s\le t.\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)|{\mathcal{F}_{s}}]-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)-Z(s)|{\mathcal{F}_{s}}]+\mathbb{E}[Z(s)|{\mathcal{F}_{s}}]-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)-Z(s)]+Z(s)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle Z(s)-\frac{k(k+1)}{2}\lambda s\mathbb{E}[Y]=M(s).\end{array}\]
Hence, the process ${\{M(t)\}_{t\ge 0}}$ is an ${\mathcal{F}_{t}}$-martingale.
Conversely, since Remark 4 shows that ${\textstyle\sum _{i=1}^{\infty }}i{Z_{i}}(t)\stackrel{d}{=}{\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}}$, the other direction follows from [31, Theorem 5.2].  □
Remark 5.
We know that CPP is a Lévy process, and (4) shows that CPPoK is equal in distribution to ${\{D(t)\}_{t\ge 0}}$. Hence CPPoK is also a Lévy process and is therefore infinitely divisible.
Remark 6.
The characteristic function of $\mathit{CPPoK}(\lambda ,H)$ can be written as
(5)
\[ \mathbb{E}[{e^{iwZ(t)}}]=\exp \Bigg[t{\sum \limits_{j=1}^{\infty }}({e^{iwj}}-1)k\lambda {\alpha _{j}}\Bigg],\]
where ${\alpha _{j}}$, $j=1,2,\dots $, are as defined in Remark 4, and ${\nu _{j}}=k\lambda {\alpha _{j}}$ defines the Lévy measure of $\mathit{CPPoK}(\lambda ,H)$.
  • 1. For $k=1$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [t{\textstyle\sum _{j=1}^{\infty }}({e^{iwj}}-1)\lambda {\alpha _{j}}]$, which is the characteristic function of $\mathit{CPP}(\lambda ,H)$.
  • 2. For $H={\delta _{1}}$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [\lambda t{\textstyle\sum _{j=1}^{k}}({e^{iwj}}-1)]$, which is the characteristic function of $\mathit{PPoK}(\lambda )$.
  • 3. For $H={\delta _{1}}$ and $k=1$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [\lambda t({e^{iw}}-1)]$, which is the characteristic function of $\mathit{PP}(\lambda )$.

3 Main results

In this section, we recall the definitions of Lévy subordinator and its first exit time. Further, we define the subordinated versions of $\mathit{CPPoK}(\lambda ,H)$ and discuss their properties.

3.1 Lévy subordinator

A Lévy subordinator ${\{{D_{f}}(t)\}_{t\ge 0}}$ is a one-dimensional non-decreasing Lévy process whose Laplace transform (LT) can be expressed in the form (see [3])
\[ \mathbb{E}[{e^{-\lambda {D_{f}}(t)}}]={e^{-tf(\lambda )}},\hspace{2.5pt}\lambda >0,\]
where the function $f:[0,\infty )\to [0,\infty )$ is called the Laplace exponent and
\[ f(\lambda )=b\lambda +{\int _{0}^{\infty }}(1-{e^{-\lambda x}})\nu (dx),\hspace{2.5pt}b\ge 0.\]
Here b is the drift coefficient and ν is a non-negative Lévy measure on the positive half-line satisfying
\[ {\int _{0}^{\infty }}(x\wedge 1)\nu (dx)<\infty \hspace{2.5pt}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\hspace{2.5pt}\nu ([0,\infty ))=\infty ,\]
which ensures that the sample paths of ${D_{f}}(t)$ are almost surely $(a.s.)$ strictly increasing. Also, the inverse subordinator ${\{{E_{f}}(t)\}_{t\ge 0}}$ is the first exit time of the Lévy subordinator ${\{{D_{f}}(t)\}_{t\ge 0}}$, and it is defined as
\[ {E_{f}}(t)=\inf \{r\ge 0:{D_{f}}(r)>t\},\hspace{2.5pt}t\ge 0.\]
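The first-exit-time definition can be illustrated on a discretized path. The sketch below uses a gamma subordinator as a simple stand-in for ${D_{f}}(t)$ (it is a Lévy subordinator with independent gamma increments, but NOT the MTSS of the next subsection; all names and values are ours), and reads off ${E_{f}}(t)$ as the first grid time at which the path exceeds t:

```python
import random

def inverse_passage_times(t_values, dr, rng):
    """E_f(t) = inf{r : D_f(r) > t}, read off a discretized subordinator
    path; D_f is a gamma subordinator built from independent
    Gamma(shape=dr, scale=1) increments on a grid of step dr."""
    t_values = sorted(t_values)
    out, r, d, i = {}, 0.0, 0.0, 0
    while i < len(t_values):
        if d > t_values[i]:
            out[t_values[i]] = r      # first grid time the path exceeds t
            i += 1
            continue
        r += dr
        d += rng.gammavariate(dr, 1.0)  # stationary independent increments
    return out

rng = random.Random(3)
E = inverse_passage_times([1.0, 5.0, 9.0], dr=0.01, rng=rng)
print(E)  # non-decreasing in t, as a first-passage (inverse) process must be
```

Because ${D_{f}}$ is non-decreasing, the recorded values ${E_{f}}(1)\le {E_{f}}(5)\le {E_{f}}(9)$ automatically, which is the defining monotonicity of the inverse subordinator.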
Next, we study $\mathit{CPPoK}(\lambda ,H)$ time-changed by a mixture of tempered stable subordinators (MTSS).

3.2 CPPoK time changed by mixtures of tempered stable subordinators

The mixture of tempered stable subordinators (MTSS) ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is a Lévy process with LT (see [7])
\[ \mathbb{E}[{e^{-s{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)}}]=\exp \{-t({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))\},\hspace{2.5pt}s>0,\]
where ${c_{1}}+{c_{2}}=1$, ${c_{1}},{c_{2}}\ge 0$, ${\mu _{1}},{\mu _{2}}>0$ are tempering parameters, and ${\alpha _{1}},{\alpha _{2}}\in (0,1)$ are stability indices. The function $f(s)={c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}})$ is the Laplace exponent of the MTSS. It can also be represented as the sum of two independent tempered stable subordinators ${S_{{\alpha _{1}}}^{{\mu _{1}}}}(t)$ and ${S_{{\alpha _{2}}}^{{\mu _{2}}}}(t)$ as
\[ {S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)={S_{{\alpha _{1}}}^{{\mu _{1}}}}({c_{1}}t)+{S_{{\alpha _{2}}}^{{\mu _{2}}}}({c_{2}}t),{c_{1}},{c_{2}}\ge 0.\]
The mean and variance of MTSS are given as
(6)
\[ \mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=t({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}),\]
(7)
\[ \text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=t({c_{1}}{\alpha _{1}}(1-{\alpha _{1}}){\mu _{1}^{{\alpha _{1}}-2}}+{c_{2}}{\alpha _{2}}(1-{\alpha _{2}}){\mu _{2}^{{\alpha _{2}}-2}}).\]
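Formulas (6) and (7) follow from the general subordinator identities $\mathbb{E}[S(t)]=tf^{\prime }(0)$ and $\text{Var}[S(t)]=-tf^{\prime \prime }(0)$, which can be verified numerically by finite differences of the Laplace exponent (all parameter values below are illustrative):

```python
def f(s):
    """Laplace exponent of the MTSS with the parameters below."""
    return (c1 * ((s + m1) ** a1 - m1 ** a1)
            + c2 * ((s + m2) ** a2 - m2 ** a2))

c1, c2 = 0.4, 0.6            # mixing weights, c1 + c2 = 1
a1, a2 = 0.5, 0.7            # stability indices in (0, 1)
m1, m2 = 1.5, 2.0            # tempering parameters
t, h = 3.0, 1e-4

# Central differences for f'(0) and f''(0); f(0) = 0 exactly.
mean = t * (f(h) - f(-h)) / (2 * h)
var = -t * (f(h) - 2 * f(0.0) + f(-h)) / h ** 2

mean_closed = t * (c1 * a1 * m1 ** (a1 - 1) + c2 * a2 * m2 ** (a2 - 1))   # (6)
var_closed = t * (c1 * a1 * (1 - a1) * m1 ** (a1 - 2)
                  + c2 * a2 * (1 - a2) * m2 ** (a2 - 2))                  # (7)
print(mean, mean_closed, var, var_closed)
```

The minus sign in (7) appears because $f^{\prime \prime }(0)={c_{1}}{\alpha _{1}}({\alpha _{1}}-1){\mu _{1}^{{\alpha _{1}}-2}}+{c_{2}}{\alpha _{2}}({\alpha _{2}}-1){\mu _{2}^{{\alpha _{2}}-2}}$ is negative for stability indices in $(0,1)$.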
Definition 6.
Let ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ be the Lévy subordinator satisfying $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{\rho }}]<\infty $ for all $\rho >0$. Then the time-changed $\mathit{CPPoK}(\lambda ,H)$, denoted by $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$, is defined as
\[ {Z_{1}}(t)=Z({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where ${\{Z(t)\}_{t\ge 0}}$ is the $\mathit{CPPoK}(\lambda ,H)$, independent of ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$.
Remark 7.
If ${\alpha _{1}}={\alpha _{2}}=\alpha $ and ${\mu _{1}}={\mu _{2}}=0$, then the MTSS becomes the α-stable subordinator, reducing ${Z_{1}}(t)$ to $\mathit{TCPPoK}(\lambda ,H,{S_{\alpha }})$, which we call the space fractional CPPoK, written as
\[ Z({S_{\alpha }}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{\alpha }}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0.\]
If ${Y_{i}}\equiv 1$ for all i, then $Z({S_{\alpha }}(t))$ reduces to the space fractional PPoK, denoted $\mathit{PPoK}(\lambda ,{S_{\alpha }})$. Its pgf is given by
\[ {G_{{Z_{1}}(t)}}(u)={e^{[-t{\lambda ^{\alpha }}{(k-{\textstyle\textstyle\sum _{j=1}^{k}}{u^{j}})^{\alpha }}]}},\hspace{2.5pt}t\ge 0.\]
It can be seen as an extension of the space fractional Poisson process (see [22]).
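The pgf above is easy to evaluate numerically. The sketch below (illustrative parameter values, function name ours) checks that ${G_{{Z_{1}}(t)}}(1)=1$, and that for $k=1$ it collapses to the space fractional Poisson pgf $\exp (-t{\lambda ^{\alpha }}{(1-u)^{\alpha }})$ of [22].

```python
import math

def pgf_space_frac_ppok(u, t, lam, alpha, k):
    # G(u) = exp(-t * lam^alpha * (k - sum_{j=1}^k u^j)^alpha)
    return math.exp(-t * lam ** alpha
                    * (k - sum(u ** j for j in range(1, k + 1))) ** alpha)

t, lam, alpha = 2.0, 1.5, 0.7
# At u = 1 the exponent vanishes, so G(1) = 1 (a proper pgf):
print(pgf_space_frac_ppok(1.0, t, lam, alpha, 3))  # 1.0
# k = 1 recovers the space fractional Poisson pgf:
u = 0.4
print(pgf_space_frac_ppok(u, t, lam, alpha, 1))
print(math.exp(-t * lam ** alpha * (1 - u) ** alpha))  # same value
```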
Remark 8.
If ${\alpha _{1}}={\alpha _{2}}=\alpha $ and ${\mu _{1}}={\mu _{2}}=\mu $, then the MTSS reduces to the tempered α-stable subordinator, and ${Z_{1}}(t)$ becomes the tempered space fractional CPPoK, denoted $\mathit{TCPPoK}(\lambda ,H,{S_{\alpha }^{\mu }})$, which can be written as
\[ Z({S_{\alpha }^{\mu }}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{\alpha }^{\mu }}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where $\mu >0$ is the tempering parameter. Substituting ${Y_{i}}\equiv 1$, it becomes the tempered space fractional PPoK, denoted $\mathit{PPoK}(\lambda ,{S_{\alpha }^{\mu }})$.
Theorem 4.
The finite dimensional distribution of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ has the following form
(8)
\[ {F_{{Z_{1}}({t_{1}}),\dots ,{Z_{1}}({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{q_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}},\]
where the summation is taken over all non-negative integers ${j_{i}}\ge 0,\hspace{2.5pt}i=1,2,\dots ,n$, $\Delta {t_{l}}={t_{l}}-{t_{l-1}}$, ${v_{k}}={y_{k}}-{\textstyle\sum _{l=1}^{k-1}}{x_{l}},k=1,\dots ,n$, h is the density of H, and ${q_{j}}(t)=\mathbb{P}[{N^{(k)}}({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))=j]$, where ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is the MTSS and ${\{{N^{(k)}}(t)\}_{t\ge 0}}$ is $\mathit{PPoK}(\lambda )$.
Proof.
The result easily follows from the proof of Theorem 1.  □
Now, we present some distributional properties of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Theorem 5.
Let $0<s\le t<\infty $. Then the mean and covariance function of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ are given as
  • $(i)$ $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[Z(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$,
  • $(ii)$ $Cov[{Z_{1}}(s),{Z_{1}}(t)]=\mathbb{E}{[Z(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[Z(1)]$.
Setting $s=t$ in part (ii) gives the variance of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Proof.
The proof follows from [14, Theorem 2.1].  □
Remark 9.
Here are some special cases of the mean and covariance of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$, corresponding to the cases discussed in Definition 3.
  • $(a)$ For $H={\delta _{1}}$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[{N^{(k)}}(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $\mathit{Cov}[{Z_{1}}(s),{Z_{1}}(t)]=\mathbb{E}{[{N^{(k)}}(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[{N^{(k)}}(1)]$, which are the mean and covariance of TCPPoK-I, as discussed in [29, Theorem 3.2].
  • $(b)$ For $k=1$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\lambda \mathbb{E}[Y]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $Cov[{Z_{1}}(s),{Z_{1}}(t)]={(\lambda \mathbb{E}[Y])^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\lambda \mathbb{E}[{Y^{2}}]$, which are the mean and covariance of the time-changed $\mathit{CPP}(\lambda ,H)$.
  • $(c)$ For $H={\delta _{1}}$ and $k=1$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[N(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $Cov[{Z_{1}}(s),{Z_{1}}(t)]=\mathbb{E}{[N(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[N(1)]$, which are the mean and covariance of the time-changed $\mathit{PP}(\lambda )$.
Now, we discuss the index of dispersion for $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$. For this, we evaluate
(9)
\[\begin{aligned}{}\text{Var}[{Z_{1}}(t)]-\mathbb{E}[{Z_{1}}(t)]& =\mathbb{E}{[Z(1)]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} & \hspace{1em}+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\left\{\text{Var}[Z(1)]-\mathbb{E}[Z(1)]\right\}.\end{aligned}\]
Since ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is a Lévy subordinator, $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]>0$. Thus the following cases arise:
  • $(i)$ If $Z(1)$ is over/equidispersed, then $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ exhibits overdispersion.
  • $(ii)$ If $Z(1)$ is underdispersed, then $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ may show over-, under- or equidispersion.
We now illustrate the index of dispersion with an example of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Example 5.
When H is the exponential distribution with parameter $\mu >0$, (9) becomes
(10)
\[\begin{aligned}{}\text{Var}[{Z_{1}}(t)]-\mathbb{E}[{Z_{1}}(t)]=& \text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]{\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\times \\ {} & \frac{k(k+1)\lambda }{2\mu }\left[\frac{2k+4}{3\mu }-1\right].\end{aligned}\]
  • 1. If $0<\mu <\frac{2k+4}{3}$, then $Z(1)$ is overdispersed. Since the sample paths of the Lévy subordinator ${S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$ are strictly increasing (see [28, Theorem 21.3]), $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ is positive, and both terms of (10) are positive. Therefore ${Z_{1}}(t)$ shows overdispersion.
  • 2. If $\mu =\frac{2k+4}{3}$, then $Z(1)$ is equidispersed. The second term of (10) vanishes but the first term is positive. Therefore ${Z_{1}}(t)$ shows overdispersion.
  • 3. If $\mu >\frac{2k+4}{3}$, then $Z(1)$ is underdispersed. The second term in (10) becomes negative and the following cases arise:
    • $(a)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]>\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows overdispersion.
    • $(b)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows equidispersion.
    • $(c)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]<\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows underdispersion.
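The three cases above amount to comparing the two terms of (10). The small helper below makes the classification explicit; it is an illustrative sketch in which the subordinator moments $\mathbb{E}[S(t)]$ and $\text{Var}[S(t)]$ are supplied as hypothetical inputs rather than computed from (6)-(7).

```python
def dispersion_excess(k, lam, mu, mean_s, var_s):
    """Var[Z1(t)] - E[Z1(t)] from (10), with H exponential(mu) and
    mean_s, var_s the mean and variance of the MTSS at time t."""
    a = k * (k + 1) * lam / (2 * mu)
    return var_s * a ** 2 + mean_s * a * ((2 * k + 4) / (3 * mu) - 1)

def classify(k, lam, mu, mean_s, var_s):
    d = dispersion_excess(k, lam, mu, mean_s, var_s)
    return "over" if d > 0 else ("equi" if d == 0 else "under")

# small mu: both terms of (10) positive (case 1)
print(classify(k=2, lam=1.0, mu=0.8, mean_s=1.0, var_s=0.5))    # over
# mu = (2k+4)/3: second term vanishes, first stays positive (case 2)
print(classify(k=2, lam=1.0, mu=8 / 3, mean_s=1.0, var_s=0.5))  # over
# large mu with small Var[S(t)]: second term dominates (case 3(c))
print(classify(k=2, lam=1.0, mu=10.0, mean_s=1.0, var_s=1e-4))  # under
```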

3.3 Long-range dependence

Now we analyze the LRD property for $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Theorem 6.
Let ${\{{Z_{1}}(t)\}_{t\ge 0}}$ be the $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$. Then it has the LRD property.
Proof.
We have $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{n}}]\sim {({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})^{n}}{t^{n}}$, as $t\to \infty $, from [7]. Therefore
(11)
\[ \text{Var}[{Z_{1}}(t)]\sim \mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\text{Var}[Z(1)]\sim Kt,\]
where $K=({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})\text{Var}[Z(1)]$.
Let $0<s<t<\infty $, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathit{Corr}[{Z_{1}}(s),{Z_{1}}(t)]& \displaystyle =& \displaystyle \frac{\mathit{Cov}[{Z_{1}}(s),{Z_{1}}(t)]}{\sqrt{\text{Var}[{Z_{1}}(s)]\text{Var}[{Z_{1}}(t)]}},\\ {} & \displaystyle \sim & \displaystyle \frac{\mathit{Cov}[{Z_{1}}(s),{Z_{1}}(t)]}{\sqrt{\text{Var}[{Z_{1}}(s)]}\hspace{0.1667em}{t^{1/2}}\sqrt{K}},\hspace{2.5pt}\text{from (11)}\\ {} & \displaystyle =& \displaystyle c(s){t^{-1/2}},\end{array}\]
where $c(s)=\frac{\mathit{Cov}[{Z_{1}}(s),{Z_{1}}(t)]}{\sqrt{K\text{Var}[{Z_{1}}(s)]}}>0$ depends only on s, since by Theorem 5(ii) the covariance does not involve t. Hence, from Definition 5, $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ has the LRD property.  □
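The $t^{-1/2}$ decay can also be seen directly from Theorem 5: since the MTSS moments in (6)-(7) are exactly linear in t, the covariance in Theorem 5(ii) depends only on s while $\text{Var}[{Z_{1}}(t)]$ grows linearly in t. The sketch below (the constants EZ, VZ, m, v are illustrative placeholders for $\mathbb{E}[Z(1)]$, $\text{Var}[Z(1)]$ and the MTSS moment slopes) verifies that quadrupling t halves the correlation.

```python
import math

# Illustrative constants: E[S(t)] = m*t and Var[S(t)] = v*t, as in (6)-(7)
EZ, VZ, m, v = 2.0, 3.0, 0.7, 0.4

def cov_z1(s, t):
    # Theorem 5(ii): for s <= t the covariance involves only s
    return EZ ** 2 * v * s + m * s * VZ

def var_z1(t):
    return cov_z1(t, t)

def corr_z1(s, t):
    return cov_z1(s, t) / math.sqrt(var_z1(s) * var_z1(t))

s = 1.0
r1, r2 = corr_z1(s, 100.0), corr_z1(s, 400.0)
print(r1 / r2)  # ≈ 2: quadrupling t halves Corr, i.e. Corr ~ c(s) t^(-1/2)
```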

3.4 CPPoK time changed by the first exit time of mixtures of tempered stable subordinators

In this subsection, we consider $\mathit{CPPoK}(\lambda ,H)$ time changed by the first exit time of the MTSS and discuss the asymptotic behavior of its moments.
The first exit time of the Lévy subordinator ${S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$, also known as the inverse subordinator, is defined as
\[ {E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)=\inf \{r\ge 0:{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(r)>t\},\hspace{2.5pt}t\ge 0.\]
Definition 7.
Let ${\{Z(t)\}_{t\ge 0}}$ be the $\mathit{CPPoK}(\lambda ,H)$ of Definition 3. Then the CPPoK subordinated by the first exit time of the MTSS, denoted $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$, is defined as
\[ {Z_{2}}(t)=Z({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))={\sum \limits_{i=1}^{{N^{(k)}}({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0\]
where the process ${\{Z(t)\}_{t\ge 0}}$ is independent of ${\{{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$.
Remark 10.
If ${\mu _{1}}={\mu _{2}}=0$, then ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$ becomes the inverse mixed stable subordinator discussed in [9], reducing ${Z_{2}}(t)$ to $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}})$, which we call the mixed fractional CPPoK, written as
\[ {Z_{2}}(t)={\sum \limits_{i=1}^{{N^{(k)}}({E_{{\alpha _{1}},{\alpha _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0.\]
Proposition 3.1.
The marginal distribution of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ is given as
\[ {F_{{Z_{2}}(t)}}(y)={\int _{0}^{\infty }}{F_{Z(x)}}(y){g_{E}}(x,t)dx,\]
where ${F_{Z(x)}}(y)={\textstyle\sum _{j=0}^{\infty }}{p_{j}}(x){\textstyle\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(z)dz$ is the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$, and ${g_{E}}(x,t)$ is the density function of ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$.
Proof.
The proposition follows by conditioning on ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$; the rest follows from Theorem 1.  □
Theorem 7.
The mean and covariance function of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ are given as
  • 1. $\mathbb{E}[{Z_{2}}(t)]=\mathbb{E}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$,
  • 2. $\textit{Cov}[{Z_{2}}(s),{Z_{2}}(t)]=\textit{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}{[Z(1)]^{2}}\textit{Cov}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s),{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$.
Proof.
The proof follows as given in [14, Theorem 2.1].  □
Now, we discuss the asymptotic behavior of the moments of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$. First we need the following theorem (see [3, 30]).
Theorem 8 (Tauberian Theorem).
Let $l:(0,\infty )\to (0,\infty )$ be a slowly varying function at 0 (respectively ∞) and let $\rho \ge 0$. Then for a function $U:(0,\infty )\to (0,\infty )$, the following are equivalent.
  • 1. $U(x)\sim {x^{\rho }}l(x)/\Gamma (1+\rho )$, $x\to 0$ (respectively $x\to \infty $).
  • 2. $\tilde{U}(s)\sim {s^{-\rho -1}}l(1/s)$, $s\to \infty $ (respectively $s\to 0$), where $\tilde{U}(s)$ is the LT of $U(x)$.
The LT of the pth order moment of the inverse subordinator ${\{{E_{f}}(t)\}_{t\ge 0}}$ is given by (see [16])
\[ \mathcal{L}[\mathbb{E}{({E_{f}}(t))^{p}}]=\frac{\Gamma (1+p)}{s{(f(s))^{p}}},\hspace{2.5pt}p>0,\]
where $f(s)$ is the Bernstein function (Laplace exponent) associated with the Lévy subordinator.
Proposition 3.2.
The asymptotic forms of the mean and variance of ${\{{Z_{2}}(t)\}_{t\ge 0}}$ are given by
\[\begin{array}{l}\displaystyle \mathbb{E}[{Z_{2}}(t)]\sim \mathbb{E}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\textit{as}\hspace{2.5pt}t\to \infty ,\\ {} \displaystyle \textit{Var}[{Z_{2}}(t)]\sim \textit{Var}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\textit{as}\hspace{2.5pt}t\to \infty .\end{array}\]
Proof.
Let $\tilde{M}(s)$ be the LT of $\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$. Then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \tilde{M}(s)=\mathcal{L}[\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]]& \displaystyle =& \displaystyle \frac{1}{s({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))}\\ {} & \displaystyle \sim & \displaystyle \frac{1}{{s^{2}}({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})},\hspace{2.5pt}\text{as}\hspace{2.5pt}s\to 0\hspace{2.5pt}\text{(see [7])}.\end{array}\]
Then, by using Theorem 8, we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{Z_{2}}(t)]& \displaystyle =& \displaystyle \mathbb{E}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} & \displaystyle \sim & \displaystyle \mathbb{E}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\textit{as}\hspace{2.5pt}t\to \infty .\end{array}\]
Next, we compute the asymptotic behavior of the variance of ${\{{Z_{2}}(t)\}_{t\ge 0}}$. We have
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathcal{L}[\mathbb{E}{({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))^{2}}]& \displaystyle =& \displaystyle \frac{2}{s{({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))^{2}}}\\ {} & \displaystyle \sim & \displaystyle \frac{2}{{s^{3}}{({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})^{2}}},\hspace{2.5pt}\text{as}\hspace{2.5pt}s\to 0.\end{array}\]
Therefore
\[\begin{aligned}{}\text{Var}[{Z_{2}}(t)]=& \text{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]+\mathbb{E}{[Z(1)]^{2}}\text{Var}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} =& \text{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]+\mathbb{E}{[Z(1)]^{2}}\{\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{2}}]-\mathbb{E}{[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]^{2}}\}\\ {} \sim & \text{Var}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\textit{as}\hspace{2.5pt}t\to \infty .\end{aligned}\]
 □
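The Tauberian step in the proof above can be sanity-checked numerically: if $\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\sim t/C$ with $C={c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}$, its exact LT $1/(sf(s))$ must approach $1/({s^{2}}C)$, the LT of $t/C$, as $s\to 0$. The sketch below uses illustrative parameter values.

```python
a1, a2, m1, m2, c1, c2 = 0.6, 0.8, 1.0, 2.0, 0.4, 0.6

def f(s):
    # Laplace exponent of the MTSS
    return (c1 * ((s + m1) ** a1 - m1 ** a1)
            + c2 * ((s + m2) ** a2 - m2 ** a2))

C = c1 * a1 * m1 ** (a1 - 1) + c2 * a2 * m2 ** (a2 - 1)  # = f'(0)

# Exact LT of E[E(t)] is 1/(s*f(s)); compare with 1/(s^2*C):
for s in (1e-2, 1e-4, 1e-6):
    exact, approx = 1 / (s * f(s)), 1 / (s ** 2 * C)
    print(s, exact / approx)  # ratio → 1 as s → 0
```

Since $f(s)=Cs+O({s^{2}})$ near 0, the printed ratios approach 1 linearly in s, which is exactly the input the Tauberian theorem needs to conclude $\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\sim t/C$.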
Few studies in the literature address the underdispersion phenomenon: [27] uses a hyper-Poisson regression model, [25] uses a weighted Poisson distribution, and [21] uses compound weighted Poisson distributions. In this work, we have proposed a model accommodating more parameters, which makes it more flexible to use.

Acknowledgments

The authors would like to thank Aditya Maheshwari (IIM, Indore) for being part of various discussions on this topic. Also, the authors are grateful to the referees for suggesting various improvements which certainly improved the presentation of the article.

References

[1] 
Beghin, L., Macci, C.: Multivariate fractional Poisson processes and compound sums. Adv. Appl. Probab. 48 (2016). MR3568887. https://doi.org/10.1017/apr.2016.23
[2] 
Beghin, L., Orsingher, E.: Fractional Poisson processes and related planar random motions. Electron. J. Probab. 14, 1790–1827 (2009). MR2535014. https://doi.org/10.1214/EJP.v14-675
[3] 
Bertoin, J.: Lévy Processes. Cambridge University Press, Cambridge (1996). MR1406564
[4] 
Bowers, N.L., Gerber, H.V., Hickman, J.C., Jones, D.A., Nesbitt, C.J.: Actuarial mathematics. ASTIN Bull. 18, 624 (1988). MR1540903. https://doi.org/10.2307/2323489
[5] 
Brémaud, P.: Point Processes and Queues. Springer, New York (1981). MR0636252
[6] 
Feller, W.: An Introduction to Probability Theory and Its Applications. John Wiley & Sons Inc., New York (1971). MR0270403
[7] 
Gupta, N., Kumar, A., Leonenko, N.: Mixtures of Tempered Stable Subordinators. arXiv e-prints, arXiv:1905.00192, 2019.
[8] 
Heinzmann, D., Barbour, A.D., Torgerson, P.R.: Compound processes as models for clumped parasite data. Math. Biosci. 222, 27–35 (2009). MR2597085. https://doi.org/10.1016/j.mbs.2009.08.007
[9] 
Kataria, K.K., Khandakar, M.: On the long-range dependence of mixed fractional Poisson process. J. Theor. Probab. (2020).
[10] 
Kostadinova, K.Y., Minkova, L.D.: On the Poisson process of order k. Pliska Stud. Math. Bulgar. 22, 117–128 (2012). MR3203700
[11] 
Koutras, E.S., Markos, V.: Compound geometric distribution of order k. Methodol. Comput. Appl. Probab. 19, 377–393 (2017). MR3649550. https://doi.org/10.1007/s11009-016-9482-y
[12] 
Kuchler, U., Tappe, S.: Tempered stable distributions and processes. Stoch. Process. Appl. 123, 4256–4293 (2013). MR3096354. https://doi.org/10.1016/j.spa.2013.06.012
[13] 
Laskin, N.: Fractional Poisson process. Commun. Nonlinear Sci. Numer. Simul. 8, 201–213 (2003). MR2007003. https://doi.org/10.1016/S1007-5704(03)00037-6
[14] 
Leonenko, N.N., Meerschaert, M.M., Schilling, R.L., Sikorskii, A.: Correlation structure of time-changed Lévy processes. Commun. Appl. Ind. Math. 6, 483, 22 pp. (2014). MR3277310. https://doi.org/10.1685/journal.caim.483
[15] 
Maheshwari, A., Vellaisamy, P.: On the long-range dependence of fractional Poisson and negative binomial processes. J. Appl. Probab. 53, 989–1000 (2016). MR3581236. https://doi.org/10.1017/jpr.2016.59
[16] 
Maheshwari, A., Vellaisamy, P.: Fractional Poisson process time-changed by Lévy subordinator and its inverse. J. Theor. Probab. 32, 1278–1305 (2019). MR3979669. https://doi.org/10.1007/s10959-017-0797-6
[17] 
Maheshwari, A., Vellaisamy, P.: Non-homogeneous space-time fractional Poisson processes. Stoch. Anal. Appl. 37, 137–154 (2019). MR3943680. https://doi.org/10.1080/07362994.2018.1541749
[18] 
Maheshwari, A., Orsingher, E., Sengar, A.S.: Superposition of time-changed Poisson processes and their hitting times. https://arxiv.org/abs/1909.13213
[19] 
Mandelbrot, B., Fisher, A., Calvet, L.: A multifractal model of asset returns. Cowles Foundation Discussion Papers 1164, Cowles Foundation for Research in Economics, Yale University (1997).
[20] 
Meerschaert, M.M., Nane, E., Vellaisamy, P.: The fractional Poisson process and the inverse stable subordinator. Electron. J. Probab. 16, 1600–1620 (2011). MR2835248. https://doi.org/10.1214/EJP.v16-920
[21] 
Minkova, L.D., Balakrishnan, N.: Compound weighted Poisson distributions. Metrika 76, 543–558 (2013). MR3049501. https://doi.org/10.1007/s00184-012-0403-y
[22] 
Orsingher, E., Polito, F.: The space-fractional Poisson process. Stat. Probab. Lett. 82, 852–858 (2012). MR2899530. https://doi.org/10.1016/j.spl.2011.12.018
[23] 
Philippou, A.N.: Poisson and compound Poisson distributions of order k and some of their properties. J. Sov. Math. 27, 3294–3297 (1984).
[24] 
Philippou, A.N., Georghiou, C., Philippou, G.N.: A generalized geometric distribution and some of its properties. Stat. Probab. Lett. 1, 171–175 (1983). MR0709243. https://doi.org/10.1016/0167-7152(83)90025-1
[25] 
Ridout, M.S., Besbeas, P.: An empirical model for underdispersed count data. Stat. Model. 4, 77–89 (2004). MR2037815. https://doi.org/10.1191/1471082X04st064oa
[26] 
Rosiński, J.: Tempering stable processes. Stoch. Process. Appl. 117, 677–707 (2007). MR2327834. https://doi.org/10.1016/j.spa.2006.10.003
[27] 
Sáez-Castillo, A.J., Conde-Sánchez, A.: A hyper-Poisson regression model for overdispersed and underdispersed count data. Comput. Stat. Data Anal. 61, 148–157 (2013). MR3063007. https://doi.org/10.1016/j.csda.2012.12.009
[28] 
Sato, K.-i.: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (1999). MR1739520
[29] 
Sengar, A.S., Maheshwari, A., Upadhye, N.S.: Time-changed poisson processes of order k. Stoch. Anal. Appl. 38, 124–148 (2020). MR4038817. https://doi.org/10.1080/07362994.2019.1653198
[30] 
Veillette, M., Taqqu, M.S.: Numerical computation of first passage times of increasing Lévy processes. Methodol. Comput. Appl. Probab. 12, 695–729 (2010). MR2726540. https://doi.org/10.1007/s11009-009-9158-y
[31] 
Zhang, H., Li, B.: Characterizations of discrete compound Poisson distributions. Commun. Stat., Theory Methods 45, 6789–6802 (2016). MR3540118. https://doi.org/10.1080/03610926.2014.901375
[32] 
Zhu, F.: Modeling overdispersed or underdispersed count data with generalized Poisson integer-valued GARCH models. J. Math. Anal. Appl. 389, 58–71 (2012). MR2876481. https://doi.org/10.1016/j.jmaa.2011.11.042

Copyright
© 2020 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Compound Poisson process of order k; mixture of tempered stable subordinators; martingale characterization

MSC2010
60G51 60G48

