1 Introduction
The Poisson process has been the conventional model for count data analysis, and owing to its popularity and applicability it has been generalized in several directions, e.g., compound Poisson processes, weighted Poisson distributions, and fractional (time-changed) versions of Poisson processes (see [6, 21, 13, 2, 1] and references therein). Distributions and processes of order k have also been studied by several researchers [11, 23, 18]. In particular, the class of discrete distributions of order k, introduced by Philippou et al. (see [24]), includes the binomial, geometric and negative binomial distributions of order k. These distributions play an important role in several areas, such as reliability, statistics, finance and actuarial risk analysis (see [19, 29, 4]).
In risk theory, the total claim amount is usually modeled by a compound Poisson process, say ${Z_{t}}={\textstyle\sum _{i=1}^{N(t)}}{Y_{i}}$, where the compounding random variables ${Y_{i}}$ are iid and the number of claims $N(t)$, which is independent of ${\{{Y_{i}}\}_{i\ge 1}}$, follows the Poisson distribution. However, since this model allows only a single arrival in each inter-arrival time, it is not suitable when claims arrive in groups. Kostadinova and Minkova [10] introduced the Poisson process of order k (PPoK), which allows us to model arrivals in groups of size up to k. Recently, a time-changed version of the Poisson process of order k was studied in [29]; it allows group arrivals and also accommodates extreme events, a case not covered by [10]. Despite its applicability, this model does not cover the phenomenon of underdispersion, where the variance is smaller than the mean (see [25, 32] and references therein). Therefore, a generalization of this model is needed and is proposed in this article.
To the best of our knowledge, such a generalization has not been studied yet. Therefore, we introduce the compound Poisson process of order k (CPPoK) with the help of the Poisson process of order k (PPoK) and study its distributional properties. Recently, tempered stable processes and their inverses have been widely used for time-change (subordination), since all their moments are finite and hence various real-life situations can be modeled easily (see [12, 26, 20]). Various subordinated versions of the Poisson process, such as the space- and time-fractional Poisson processes, have been studied in the literature (see [22, 17]). Hence, it is worth exploring the time-change of CPPoK by a special type of Lévy subordinator, known as the mixture of tempered stable subordinators, and by its right-continuous inverse, and analyzing some properties of these time-changed processes. These processes also generalize the process discussed in [22].
The article is organized as follows. In Section 2, we introduce CPPoK and derive some of its general properties along with a martingale characterization. In Section 3, we introduce two types of time-changed CPPoK with the help of the MTSS and its right-continuous inverse, and derive some of their important distributional properties.
Acronym
PoK | Poisson distribution of order k
PPoK | Poisson process of order k
PP | Poisson process
CPPoK | Compound Poisson process of order k
FDD | Finite-dimensional distribution
IID | Independent and identically distributed
MTSS | Mixture of tempered stable subordinators
TCPPoK | Time-changed compound Poisson process of order k
pmf | Probability mass function
pgf | Probability generating function
LRD | Long-range dependence
2 Compound Poisson process of order k and its properties
2.1 Compound Poisson process of order k
In this section, we introduce CPPoK and derive its distributional properties. First, we define the Poisson distribution of order k.
Definition 1 ([23]).
Let ${N^{(k)}}\sim PoK(\lambda )$, the Poisson distribution of order k (PoK) with rate parameter $\lambda >0$, then the probability mass function (pmf) of ${N^{(k)}}$ is given by
\[ \mathbb{P}[{N^{(k)}}=n]=\sum \limits_{\substack{{x_{1}},{x_{2}},\dots ,{x_{k}}\ge 0\\ {} {\textstyle\textstyle\sum _{j=1}^{k}}j{x_{j}}=n}}{e^{-k\lambda }}\frac{{\lambda ^{{x_{1}}+{x_{2}}+\cdots +{x_{k}}}}}{{x_{1}}!{x_{2}}!\cdots {x_{k}}!},\hspace{2.5pt}n=0,1,\dots ,\]
where the summation is taken over all non-negative integers ${x_{1}},{x_{2}},\dots ,{x_{k}}$ such that ${x_{1}}+2{x_{2}}+\cdots +k{x_{k}}=n$.
Philippou [23] showed the existence of PoK as a limiting distribution of the negative binomial distribution of order k. Kostadinova and Minkova [10] later extended PoK to a process evolving over time, which can be defined as follows.
Definition 2 ([10]).
Let ${\{N(t)\}_{t\ge 0}}$ denote $\mathit{PP}(k\lambda )$, the Poisson process with rate parameter $k\lambda $, and ${\{{X_{i}}\}_{i\ge 1}}$ be a sequence of independent and identically distributed (IID) discrete uniform random variables with support on $\{1,2,\dots ,k\}$. Also, assume that ${\{{X_{i}}\}_{i\ge 1}}$ and ${\{N(t)\}_{t\ge 0}}$ are independent. Then ${\{{N^{(k)}}(t)\}_{t\ge 0}}$, defined by ${N^{(k)}}(t)={\textstyle\sum _{i=1}^{N(t)}}{X_{i}}$ is called the Poisson process of order k (PPoK) and is denoted by $\mathit{PPoK}(\lambda )$.
However, the clumping behavior of the random phenomena discussed in [8] cannot be accommodated by the PPoK of [10]. Hence, this notion also needs to be generalized, and we propose the following generalization of PPoK.
Definition 3.
Let ${\{{N^{(k)}}(t)\}_{t\ge 0}}$ be the $\mathit{PPoK}(\lambda )$ and ${\{{Y_{i}}\}_{i\ge 1}}$ be a sequence of IID random variables, independent of ${N^{(k)}}(t)$, with cumulative distribution function (CDF) H. Then the process ${\{Z(t)\}_{t\ge 0}}$ defined by $Z(t)={\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}}$ is called the compound Poisson process of order k (CPPoK) and is denoted by $\mathit{CPPoK}(\lambda ,H)$.
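To make the construction concrete, the following sketch (not part of the paper; the helper name sample_cppok and all parameter values are illustrative assumptions) draws Monte Carlo samples of $Z(t)$ by composing Definitions 2 and 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cppok(lam, k, t, claim_sampler, size=1):
    """Monte Carlo samples of Z(t) ~ CPPoK(lam, H) via Definitions 2 and 3:
    N(t) ~ Poisson(k*lam*t), X_i uniform on {1,...,k}, N^(k)(t) = sum X_i,
    and Z(t) = sum_{i <= N^(k)(t)} Y_i with Y_i ~ H (sketch; assumed helper)."""
    out = np.empty(size)
    for s in range(size):
        n = rng.poisson(k * lam * t)                    # N(t)
        nk = int(rng.integers(1, k + 1, size=n).sum())  # N^(k)(t)
        out[s] = claim_sampler(nk).sum()                # Z(t)
    return out

# illustrative check: exponential claims with rate mu, so E[Y] = 1/mu
mu, lam, k, t = 2.0, 1.0, 3, 5.0
z = sample_cppok(lam, k, t, lambda n: rng.exponential(1.0 / mu, size=n), size=20000)
print(z.mean(), k * (k + 1) / 2 * lam * t / mu)  # compare with E[Z(t)] (Theorem 2 below)
```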
From the definition, it is clear that:
-
$(i)$ for $k=1$, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{CPP}(\lambda ,H)$, the usual compound Poisson process;
-
$(ii)$ for $H={\delta _{1}}$, the Dirac measure at 1, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{PPoK}(\lambda )$;
-
$(iii)$ for $k=1$ and $H={\delta _{1}}$, ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{PP}(\lambda )$.
Next, we present a characterization of $\mathit{CPPoK}(\lambda ,H)$ in terms of the finite-dimensional distribution (FDD).
Theorem 1.
Let ${\{Z(t)\}_{t\ge 0}}$ be as defined in Definition 3. Then the FDD, denoted as ${F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\mathbb{P}[Z({t_{1}})\le {y_{1}},\dots ,Z({t_{n}})\le {y_{n}}]$, has the following form:
(1)
\[ {F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\sum \limits_{{j_{1}},\dots ,{j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}},\]
where the summation is taken over all non-negative integers ${j_{i}}\ge 0,\hspace{2.5pt}i=1,2,\dots ,n$, ${v_{k}}={y_{k}}-{\textstyle\sum _{l=1}^{k-1}}{x_{l}},k=1,\dots ,n$, $\Delta {t_{l}}={t_{l}}-{t_{l-1}}$, h is the density/pmf of H, ${p_{{j_{l}}}}$ is the pmf of $\mathit{PPoK}(\lambda )$, and $(\ast {j_{m}})$ represents the ${j_{m}}$-fold convolution.
Proof.
Let $0={t_{0}}\le {t_{1}}\le \cdots \le {t_{n}}=t$ be the partition of $[0,t]$. Since, the increments of $\{{N^{(k)}}(t)\}$ are independent and stationary, we can write ${N^{(k)}}({t_{i}})={\textstyle\sum _{l=1}^{i}}{N^{(k)}}(\Delta {t_{l}}),i=1,\dots ,n$, and $\mathbb{P}[{N^{(k)}}(t)=j]={p_{j}}(t),j=0,1,\dots $.
\[\begin{aligned}{}{F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})& =\mathbb{P}\Bigg[{\sum \limits_{i=1}^{{N^{(k)}}({t_{1}})}}{Y_{i}}\le {y_{1}},\dots ,{\sum \limits_{i=1}^{{N^{(k)}}({t_{n}})}}{Y_{i}}\le {y_{n}}\Bigg]\\ {} & =\sum \limits_{{j_{1}},\dots {j_{n}}}\mathbb{P}\Bigg[{\sum \limits_{i=1}^{{j_{1}}}}{Y_{i}}\le {y_{1}},\dots ,{\sum \limits_{i=1}^{{\textstyle\textstyle\sum _{l=1}^{n}}{j_{l}}}}{Y_{i}}\le {y_{n}}\Bigg]{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}})\end{aligned}\]
Let us denote ${\textstyle\sum _{i=1}^{{j_{1}}}}{Y_{i}}=Y({j_{1}}),\dots ,{\textstyle\sum _{i={j_{1}}+\cdots +{j_{n-1}}+1}^{{j_{1}}+\cdots +{j_{n}}}}{Y_{i}}=Y({j_{n}})$, then it becomes
\[\begin{aligned}{}& {F_{Z({t_{1}}),\dots ,Z({t_{n}})}}({y_{1}},\dots ,{y_{n}})\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}})\mathbb{P}\Bigg[Y({j_{1}})\le {y_{1}},\dots ,{\sum \limits_{l=1}^{n}}Y({j_{l}})\le {y_{n}}\Bigg]\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{y_{1}}}}\dots {\int _{-\infty }^{{y_{n}}-{\textstyle\textstyle\sum _{l=1}^{n-1}}{x_{l}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}}\\ {} & \hspace{1em}=\sum \limits_{{j_{1}},\dots {j_{n}}}{\prod \limits_{l=1}^{n}}{p_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}}.\end{aligned}\]
□
Remark 1.
For $n=1$, (1) reduces to $\mathbb{P}[Z(t)\le y]={\textstyle\sum _{j=0}^{\infty }}{p_{j}}(t){\textstyle\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx$, which is the marginal distribution of CPPoK(λ, H).
Remark 2.
For $n=1$ and a discrete distribution H, the probability mass function (pmf) of $\mathit{CPPoK}(\lambda ,H)$, denoted as $\mathbb{P}[Z(t)=n]={P_{n}}(t)$, is given as
\[ {P_{n}}(t)=\mathbb{P}[Z(t)=n]={\sum \limits_{m=0}^{\infty }}{p_{m}}(t){h_{{Y_{1}}}^{\ast m}}(n),\]
where ${p_{m}}(t)=\mathbb{P}[{N^{(k)}}(t)=m]$ is the pmf of $\mathit{PPoK}(\lambda )$ and ${h_{{Y_{1}}}^{\ast m}}$ is the m-fold convolution of the pmf of ${Y_{1}}$. The difference-differential equation satisfied by the pmf of $\mathit{CPPoK}(\lambda ,H)$ is given as
\[ \frac{d}{dt}{P_{n}}(t)={\sum \limits_{m=0}^{\infty }}{h_{{Y_{1}}}^{\ast m}}(n)\frac{d}{dt}{p_{m}}(t)={\sum \limits_{m=0}^{\infty }}{h_{{Y_{1}}}^{\ast m}}(n)\Bigg[-k\lambda {p_{m}}(t)+\lambda {\sum \limits_{j=1}^{m\wedge k}}{p_{m-j}}(t)\Bigg]\]
with the initial conditions ${P_{0}}(0)=1$ and ${P_{n}}(0)=0,\hspace{2.5pt}n=1,2,\dots $, and $m\wedge k=\min (m,k)$.
Next, we present some examples of $\mathit{CPPoK}(\lambda ,H)$ by taking different distributions of H.
Example 1.
Let H be the geometric distribution with parameter $p\in (0,1]$. Then the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$ is given as
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{Z(t)}}(y)=\mathbb{P}[Z(t)\le y]& \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{p_{j}}(t){\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx\\ {} & \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{I_{1-p}}(j,y+1){p_{j}}(t),\end{array}\]
where ${I_{x}}(a,b)=\frac{B(x;a,b)}{B(a,b)},\hspace{2.5pt}0<x<1$, is the regularized incomplete beta function and $B(x;a,b)={\textstyle\int _{0}^{x}}{t^{a-1}}{(1-t)^{b-1}}dt$ is the incomplete beta function.
Example 2.
Let H be the exponential distribution with parameter $\mu >0$. Then the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$ is given as
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{Z(t)}}(y)=\mathbb{P}[Z(t)\le y]& \displaystyle =& \displaystyle {\sum \limits_{j=0}^{\infty }}{p_{j}}(t){\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(x)dx\\ {} & \displaystyle =& \displaystyle {p_{0}}(t)+{\sum \limits_{j=1}^{\infty }}\frac{\gamma (j,\mu y)}{(j-1)!}{p_{j}}(t),\hspace{2.5pt}y\ge 0,\end{array}\]
where $\gamma (s,x)={\textstyle\int _{0}^{x}}{e^{-t}}{t^{s-1}}dt,\hspace{2.5pt}x\ge 0$, is the lower incomplete gamma function.
Here we plot the pmf of the considered process when ${Y_{i}}\sim Geo(p)$, $p\in (0,1]$; we fix $t=1$, $\lambda =1$, $p=0.5$ and $k=1$.
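The pmf just mentioned can also be computed numerically from the convolution formula of Remark 2. The sketch below is illustrative only: the geometric support convention $\{0,1,\dots \}$, the truncation levels and the helper name compound_pmf are our assumptions.

```python
import math
import numpy as np

def compound_pmf(weights, step_pmf, n_max):
    """pmf of sum_{i=1}^{M} X_i on {0,...,n_max}, where P[M=m] = weights[m]
    and the X_i are iid with (truncated) pmf step_pmf.  Assumed helper."""
    out = np.zeros(n_max + 1)
    conv = np.zeros(n_max + 1)
    conv[0] = 1.0                                   # 0-fold convolution
    for w in weights:
        out += w * conv
        conv = np.convolve(conv, step_pmf)[: n_max + 1]
    return out

# parameters quoted in the text: t = 1, lambda = 1, p = 0.5, k = 1
lam, k, t, p, n_max, m_max = 1.0, 1, 1.0, 0.5, 20, 60

# p_m(t): pmf of PPoK(lambda), a Poisson(k*lam*t) number of uniform {1..k} steps
pois = np.array([math.exp(-k * lam * t) * (k * lam * t) ** j / math.factorial(j)
                 for j in range(m_max + 1)])
x_pmf = np.zeros(m_max + 1)
x_pmf[1:k + 1] = 1.0 / k
p_m = compound_pmf(pois, x_pmf, m_max)

# P_n(t) of Remark 2 with geometric claims (support {0,1,...} assumed)
h = p * (1 - p) ** np.arange(n_max + 1)
P_n = compound_pmf(p_m, h, n_max)
print(P_n[:6], P_n.sum())
```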
Theorem 2.
The mean, variance and covariance of the process ${\{Z(t)\}_{t\ge 0}}$ can be expressed as
-
$(i)$ $\mathbb{E}[Z(t)]=\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]$,
-
$(ii)$ $\textit{Var}[Z(t)]=\textit{Var}(Y)\mathbb{E}[{N^{(k)}}(t)]+\mathbb{E}{[Y]^{2}}\textit{Var}({N^{(k)}}(t))$.
-
$(iii)$ $\textit{Cov}[Z(s),Z(t)]=\textit{Var}(Y)\mathbb{E}[{N^{(k)}}(s\wedge t)]+\mathbb{E}{[Y]^{2}}\textit{Var}[{N^{(k)}}(s\wedge t)],\hspace{2.5pt}\textit{where}\hspace{2.5pt}s\wedge t=\min (s,t)$.
Proof.
$\mathbb{E}[Z(t)]=\mathbb{E}[{\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}}]=\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]=\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]$. Hence, part $(i)$ is proved.
Next, we derive the expression for variance of $Z(t)$. Let ${p_{n}}(t)$ be the pmf of $\mathit{PPoK}(\lambda )$. Then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \text{Var}[Z(t)]& \displaystyle =& \displaystyle \mathbb{E}{[Z(t)-\mathbb{E}[Z(t)]]^{2}}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}[{[Z(t)-\mathbb{E}[Z(t)]]^{2}}|{N_{k}}(t)=n]{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{\left[{S_{n}}-\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)]\right]^{2}}{p_{n}}(t),\hspace{2.5pt}\text{where}\hspace{2.5pt}{S_{n}}={\textstyle\textstyle\sum _{i=1}^{n}}{Y_{i}}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{[({S_{n}}-n\mathbb{E}[Y])+(n\mathbb{E}[Y]-\mathbb{E}[Y]\mathbb{E}[{N^{(k)}}(t)])]^{2}}{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\left[\text{Var}({S_{n}})+\mathbb{E}{[Y]^{2}}\mathbb{E}{[n-\mathbb{E}[{N^{(k)}}(t)]]^{2}}\right]{p_{n}}(t)\\ {} & \displaystyle =& \displaystyle \text{Var}(Y)\mathbb{E}[{N^{(k)}}(t)]+\mathbb{E}{[Y]^{2}}\text{Var}({N^{(k)}}(t)),\end{array}\]
which proves part $(ii)$.
Now, in order to derive the covariance term, we first evaluate $\mathbb{E}[Z(s)Z(t)]$. For $0\le s\le t$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[Z(s)Z(t)]& \displaystyle =& \displaystyle \mathbb{E}[Z(s)]\mathbb{E}[Z(t)-Z(s)]+\mathbb{E}[Z{(s)^{2}}]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(s)]\mathbb{E}[Z(t)-Z(s)]+\text{Var}[Z(s)]+\mathbb{E}{[Z(s)]^{2}}\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle Cov(Z(s),Z(t))& \displaystyle =& \displaystyle \mathbb{E}[Z(s)Z(t)]-\mathbb{E}[Z(s)]\mathbb{E}[Z(t)]\\ {} & \displaystyle =& \displaystyle \text{Var}(Y)\mathbb{E}[{N^{(k)}}(s\wedge t)]+\mathbb{E}{[Y]^{2}}\text{Var}[{N^{(k)}}(s\wedge t)],\end{array}\]
hence, part $(iii)$ is proved. □
Remark 3.
Now we present the mean, covariance and variance formulas for some specific cases of $\mathit{CPPoK}(\lambda ,H)$ that were discussed after Definition 3.
S. No. | $\mathbf{CPPoK}(\boldsymbol{\lambda },\mathbf{H})$ | Mean | Covariance | Variance |
1. | For $H={\delta _{1}}$ | $\mathbb{E}[{N^{(k)}}(t)]$ | $\textit{Var}[{N^{(k)}}(s\wedge t)]$ | $\textit{Var}[{N^{(k)}}(t)]$ |
2. | For $k=1$ | $\mathbb{E}[Y]\mathbb{E}[N(t)]$ | $\mathbb{E}[{Y^{2}}]\mathbb{E}[N(s\wedge t)]$ | $\mathbb{E}[{Y^{2}}]\mathbb{E}[N(t)]$ |
3. | For $H={\delta _{1}}$ and $k=1$ | $\mathbb{E}[N(t)]$ | $\textit{Var}[N(s\wedge t)]$ | $\textit{Var}[N(t)]$ |
From this table, we make the following observations.
-
1. For $H={\delta _{1}}$, it reduces to mean and covariance formula for $\mathit{PPoK}(\lambda )$ (see [29]).
-
2. For $k=1$, it reduces to mean and covariance formula for $\mathit{CPP}(\lambda ,H)$.
-
3. For $H={\delta _{1}}$ and $k=1$, it reduces to the mean and covariance formulas for $\mathit{PP}(\lambda )$.
Further, we present the plots of the mean and variance when ${Y_{i}}\sim exp(\mu )$, $\mu >0$, for different settings of the parameters, with $t=10$ fixed.
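To complement these plots, the following Monte Carlo sketch (all parameter values and variable names are illustrative) checks parts (i) and (ii) of Theorem 2 for exponential claims, using the PPoK moments $\mathbb{E}[{N^{(k)}}(t)]=\frac{k(k+1)}{2}\lambda t$ and $\textit{Var}[{N^{(k)}}(t)]=\frac{k(k+1)(2k+1)}{6}\lambda t$ used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, k, t, mu, n_sim = 1.0, 3, 10.0, 2.0, 50_000   # illustrative values

# simulate Z(t) = sum_{i=1}^{N^(k)(t)} Y_i with Y_i ~ exp(mu)
n = rng.poisson(k * lam * t, size=n_sim)                              # N(t)
nk = np.array([rng.integers(1, k + 1, size=m).sum() for m in n])      # N^(k)(t)
z = np.array([rng.exponential(1.0 / mu, size=m).sum() for m in nk])   # Z(t)

EN = k * (k + 1) / 2 * lam * t                  # E[N^(k)(t)]
VN = k * (k + 1) * (2 * k + 1) / 6 * lam * t    # Var[N^(k)(t)]
EY, VY = 1.0 / mu, 1.0 / mu ** 2                # moments of exp(mu) claims
print(z.mean(), EY * EN)                        # Theorem 2 (i)
print(z.var(), VY * EN + EY ** 2 * VN)          # Theorem 2 (ii)
```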
2.2 Index of dispersion
In this subsection, we are going to discuss the index of dispersion of $\mathit{CPPoK}(\lambda ,H)$.
Definition 4 ([16]).
The index of dispersion of a stochastic process ${\{Z(t)\}_{t\ge 0}}$ is defined as $I(t)=\frac{\textit{Var}[Z(t)]}{\mathbb{E}[Z(t)]}$. The process is said to be overdispersed if $I(t)>1$, underdispersed if $I(t)<1$ and equidispersed if $I(t)=1$.
Alternatively, Definition 4 can be interpreted as follows. A stochastic process ${\{Z(t)\}_{t\ge 0}}$ is over(under)-dispersed if $\text{Var}[Z(t)]-\mathbb{E}[Z(t)]>0(<0)$. Therefore, we first calculate
(2)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \text{Var}[Z(t)]-\mathbb{E}[Z(t)]& \displaystyle =& \displaystyle \frac{k(k+1)}{2}\lambda t\left[\text{Var}[Y]-\mathbb{E}[Y]+\frac{(2k+1)}{3}\mathbb{E}{[Y]^{2}}\right]\\ {} & \displaystyle =& \displaystyle \frac{k(k+1)}{2}\lambda t\left[\mathbb{E}[{Y^{2}}]-\mathbb{E}[Y]+\frac{2k-2}{3}\mathbb{E}{[Y]^{2}}\right].\end{array}\]
From the above expression, the following cases arise:
-
$(i)$ If the ${Y^{\prime }_{i}}s$ are over- or equidispersed, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
-
$(ii)$ If the ${Y^{\prime }_{i}}s$ are underdispersed but non-negative integer valued, so that $\mathbb{E}[{Y^{2}}]-\mathbb{E}[Y]\ge 0$, then $\mathit{CPPoK}(\lambda ,H)$ shows overdispersion.
-
$(iii)$ If the ${Y^{\prime }_{i}}s$ are underdispersed, then $\mathit{CPPoK}(\lambda ,H)$ may show over-, equi- or underdispersion.
Further we present some examples to discuss the index of dispersion.
Example 3.
If ${Y_{i}}\sim exp(\mu )$, where $\mu >0$, then (2) becomes
\[ \text{Var}[Z(t)]-\mathbb{E}[Z(t)]=\frac{k(k+1)}{2\mu }\lambda t\left[\frac{1}{\mu }\left(\frac{2k+4}{3}\right)-1\right].\]
-
(a) If $0<\mu \le 1$, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
-
(b) If $1<\mu <\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ shows overdispersion.
-
(c) If $\mu =\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ shows equidispersion.
-
(d) If $\mu >\frac{2k+4}{3}$, then $\mathit{CPPoK}(\lambda ,H)$ shows underdispersion.
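The following tiny sketch (parameter values are arbitrary) evaluates the sign of $\text{Var}[Z(t)]-\mathbb{E}[Z(t)]$ from the displayed expression across these regimes.

```python
lam, k, t = 1.0, 3, 1.0   # illustrative values
for mu in (0.5, 1.0, (2 * k + 4) / 3, 5.0):
    d = k * (k + 1) / (2 * mu) * lam * t * ((2 * k + 4) / (3 * mu) - 1.0)
    # positive -> overdispersion, zero -> equidispersion, negative -> underdispersion
    print(round(mu, 3), d)
```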
Example 4.
If ${Y_{i}}\sim Geo(p)$, where $p\in (0,1]$, then (2) becomes
\[ \text{Var}[Z(t)]-\mathbb{E}[Z(t)]=\frac{k(k+1)\lambda t}{2}{\left(\frac{1-p}{p}\right)^{2}}\left[1+\frac{2k+1}{3}\right].\]
If $0<p<1$, then $\mathit{CPPoK}(\lambda ,H)$ exhibits overdispersion.
2.3 Long-range dependence
In this subsection, we prove the long-range dependence (LRD) property for $\mathit{CPPoK}(\lambda ,H)$. Several definitions are available in the literature; we use the one given in [15].
Definition 5 ([15]).
Let $0\le s<t$ and s be fixed. Assume a stochastic process ${\{X(t)\}_{t\ge 0}}$ has the correlation function $\mathit{Corr}[X(s),X(t)]$ that satisfies
\[ {c_{1}}(s){t^{-d}}\le \mathit{Corr}[X(s),X(t)]\le {c_{2}}(s){t^{-d}},\]
for large $t$, $d>0$, ${c_{1}}(s)>0$ and ${c_{2}}(s)>0$, i.e.,
\[ \underset{t\to \infty }{\lim }\frac{\mathit{Corr}[X(s),X(t)]}{{t^{-d}}}=c(s),\]
for some $c(s)>0$ and $d>0$. We say that $X(t)$ has the long-range dependence (LRD) property if $d\in (0,1)$ and the short-range dependence (SRD) property if $d\in (1,2)$.
We now show that the process ${\{Z(t)\}_{t\ge 0}}\sim \mathit{CPPoK}(\lambda ,H)$ has the LRD property.
Proof.
Let $0\le s<t$. Consider
(3)
\[\begin{aligned}{}\mathit{Corr}[Z(s),Z(t)]& =\frac{Cov[Z(s),Z(t)]}{\sqrt{\text{Var}[Z(s)]\text{Var}[Z(t)]}}\\ {} & =\sqrt{\frac{\frac{k(k+1)}{2}\lambda s\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda s\mathbb{E}{[Y]^{2}}}{\frac{k(k+1)}{2}\lambda t\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda t\mathbb{E}{[Y]^{2}}}},\hspace{2.5pt}\text{from Theorem 2},\\ {} & =c(s){t^{-1/2}},\end{aligned}\]
where $0<c(s)=\sqrt{\frac{\frac{k(k+1)}{2}\lambda s\text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda s\mathbb{E}{[Y]^{2}}}{\frac{k(k+1)}{2}\lambda \text{Var}(Y)+\frac{k(k+1)(2k+1)}{6}\lambda \mathbb{E}{[Y]^{2}}}}$, which decays like the power law ${t^{-1/2}}$. Hence $\mathit{CPPoK}(\lambda ,H)$ has the LRD property. □
Corollary 1.
If ${Y_{i}}\equiv 1$, then $\mathit{CPPoK}(\lambda ,H)$ reduces to $\mathit{PPoK}(\lambda )$ and $\mathit{Corr}[Z(s),Z(t)]$ becomes ${s^{1/2}}{t^{-1/2}}$. Hence the LRD property also holds for $\mathit{PPoK}(\lambda )$ (see [29, Lemma 3.1]).
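A numerical illustration of this power-law decay (parameter values are assumed for illustration), using the exact moments of Theorem 2 for exponential claims: the product $\mathit{Corr}[Z(s),Z(t)]\cdot {t^{1/2}}$ stabilizes at $c(s)$.

```python
import numpy as np

lam, k, mu, s = 1.0, 3, 2.0, 1.0        # illustrative values
EY, VY = 1.0 / mu, 1.0 / mu ** 2        # moments of exp(mu) claims

def var_z(u):
    # Var[Z(u)] from Theorem 2(ii) with the PPoK moments
    return (VY * k * (k + 1) / 2 + EY ** 2 * k * (k + 1) * (2 * k + 1) / 6) * lam * u

for t in (10.0, 100.0, 1000.0):
    # Cov[Z(s), Z(t)] = Var[Z(s)] for s <= t, by Theorem 2(iii)
    corr = var_z(s) / np.sqrt(var_z(s) * var_z(t))
    print(t, corr, corr * np.sqrt(t))   # last column ~ c(s), independent of t
```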
2.4 Martingale characterization for CPPoK
It is well known that the martingale characterization of the homogeneous Poisson process is given by the Watanabe theorem (see [5]). We now extend this theorem to $\mathit{CPPoK}(\lambda ,H)$, where H is a discrete distribution with support on ${\mathbb{Z}^{+}}$; for this, we need the following two lemmas.
Lemma 1.
Let $D(t)={\textstyle\sum _{j=1}^{N(t)}}{X_{j}},\hspace{2.5pt}t\ge 0$, be a compound Poisson process, where ${\{N(t)\}_{t\ge 0}}$ is $\mathit{PP}(k\lambda )$ and ${\{{X_{j}}\}_{j\ge 1}}$ are non-negative integer-valued IID random variables, independent of ${\{N(t)\}_{t\ge 0}}$, with pmf $\mathbb{P}({X_{j}}=i)={\alpha _{i}},\hspace{2.5pt}(i=0,1,2,\dots ,j=1,2,\dots )$. Then ${\{D(t)\}_{t\ge 0}}$ can be represented as
\[ D(t)\stackrel{d}{=}{\sum \limits_{i=1}^{\infty }}i{Z_{i}}(t),\hspace{2.5pt}\hspace{2.5pt}t\ge 0,\]
where, ${Z_{i}}(t),\hspace{2.5pt}i=1,2,\dots $ are independent $\mathit{PP}(k\lambda {\alpha _{i}})$, and the symbol $\stackrel{d}{=}$ denotes the equality in distribution.
Proof.
We prove this lemma by showing that the probability generating functions (pgf) of the L.H.S. and the R.H.S. coincide. Let ${G_{D(t)}}(u)$ be the pgf of ${\{D(t)\}_{t\ge 0}}$; then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {G_{D(t)}}(u)=\mathbb{E}[{u^{D(t)}}]& \displaystyle =& \displaystyle {\sum \limits_{n=0}^{\infty }}\mathbb{E}{[{u^{{X_{1}}}}]^{n}}\mathbb{P}[N(t)=n]\\ {} & \displaystyle =& \displaystyle \exp [-\lambda kt(1-\mathbb{E}[{u^{X}}])]\\ {} & \displaystyle =& \displaystyle \exp \Bigg[\lambda kt{\sum \limits_{j=1}^{\infty }}{\alpha _{j}}({u^{j}}-1)\Bigg].\end{array}\]
Now we compute the pgf of ${\textstyle\sum _{i=1}^{\infty }}i{Z_{i}}(t)$, i.e.,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{u^{{\textstyle\textstyle\sum _{i=1}^{\infty }}i{Z_{i}}(t)}}]& \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}\mathbb{E}[{u^{i{Z_{i}}(t)}}]\\ {} & \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}{\sum \limits_{{n_{i}}=0}^{\infty }}{u^{i{n_{i}}}}\mathbb{P}[{Z_{i}}(t)={n_{i}}]\\ {} & \displaystyle =& \displaystyle {\prod \limits_{i=1}^{\infty }}{\sum \limits_{{n_{i}}=0}^{\infty }}{u^{i{n_{i}}}}{e^{-k\lambda {\alpha _{i}}t}}\frac{{(k\lambda {\alpha _{i}}t)^{{n_{i}}}}}{{n_{i}}!}\\ {} & \displaystyle =& \displaystyle \exp \Bigg[\lambda kt{\sum \limits_{i=1}^{\infty }}{\alpha _{i}}({u^{i}}-1)\Bigg].\end{array}\]
Hence, this lemma is proved. □
Lemma 2.
The pgf of $Z(t)={\textstyle\sum _{i=1}^{{N^{(k)}}(t)}}{Y_{i}},\hspace{2.5pt}t\ge 0$ has the following form
\[ {G_{Z(t)}}(u)=\exp \left[\lambda kt{\sum \limits_{{j_{1}}=1}^{\infty }}\frac{{q_{{j_{1}}}^{(1)}}+{q_{{j_{1}}}^{(2)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k}({u^{{j_{1}}}}-1)\right],\]
where ${Y_{i}},\hspace{2.5pt}i=1,2,\dots $ are non-negative integer valued IID random variables with $\mathbb{P}[{Y_{i}}=n]={q_{n}},\hspace{2.5pt}n=0,1,\dots $.
Proof.
Let ${G_{{N^{(k)}}(t)}}(u)$ be the pgf of ${\{{N^{(k)}}(t)\}_{t\ge 0}}$, which is given by (see [29, Remark 2.2])
\[ {G_{{N^{(k)}}(t)}}(u)=\mathbb{E}[{u^{{N^{(k)}}(t)}}]=\exp \Big[-\lambda t\Big(k-{\sum \limits_{j=1}^{k}}{u^{j}}\Big)\Big].\]
The pgf of ${\{Z(t)\}_{t\ge 0}}$, denoted as ${G_{Z(t)}}(u)=\mathbb{E}[{u^{Z(t)}}]$, is then given by
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {G_{Z(t)}}(u)& \displaystyle =& \displaystyle \exp [\lambda t\{{G_{Y}}(u)+\cdots +{G_{Y}^{\ast k}}(u)\}-k\lambda t],\hspace{2.5pt}\text{where}\hspace{2.5pt}{G_{Y}}(u)=\mathbb{E}[{u^{Y}}]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t\left\{{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}}{u^{{j_{1}}}}+{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{\ast 2}}{u^{{j_{1}}}}+\cdots +{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{\ast k}}{u^{{j_{1}}}}\right\}-k\lambda t\right]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t\left\{{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}}{u^{{j_{1}}}}+\cdots +{\sum \limits_{{j_{1}}=0}^{\infty }}\dots {\sum \limits_{{j_{k}}=0}^{{j_{k-1}}}}{q_{{j_{k}}}}\dots {q_{{j_{1}}-{j_{2}}}}{u^{{j_{1}}}}\right\}-k\lambda t\right]\\ {} & & \displaystyle \hspace{-63.0pt}\text{Let us denote}\hspace{2.5pt}{q_{{j_{1}}}^{(n)}}={\prod \limits_{m=2}^{n}}{\sum \limits_{{j_{m}}=0}^{{j_{m-1}}}}{q_{{j_{n}}}}{q_{{j_{n-1}}-{j_{n}}}}\dots {q_{{j_{1}}-{j_{2}}}},\hspace{2.5pt}n=1,\dots ,k.\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda t{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{(1)}}({u^{{j_{1}}}}-1)+\cdots +\lambda t{\sum \limits_{{j_{1}}=0}^{\infty }}{q_{{j_{1}}}^{(k)}}({u^{{j_{1}}}}-1)\right]\\ {} & \displaystyle =& \displaystyle \exp \left[\lambda kt{\sum \limits_{{j_{1}}=1}^{\infty }}\frac{{q_{{j_{1}}}^{(1)}}+{q_{{j_{1}}}^{(2)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k}({u^{{j_{1}}}}-1)\right].\end{array}\]
□
Remark 4.
Set ${\alpha _{{j_{1}}}}=\frac{{q_{{j_{1}}}^{(1)}}+\cdots +{q_{{j_{1}}}^{(k)}}}{k},\hspace{2.5pt}{j_{1}}=0,1,2,\dots $. Substituting ${\alpha _{{j_{1}}}}$ in Lemma 1, we get the following relation
(4)
\[ D(t)\stackrel{d}{=}{\sum \limits_{i=1}^{\infty }}i{Z_{i}}(t)\stackrel{d}{=}{\sum \limits_{i=1}^{{N^{(k)}}(t)}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where the ${Y^{\prime }_{i}}s$ are non-negative integer valued random variables and ${N^{(k)}}(t)$ is $\mathit{PPoK}(\lambda )$.
Theorem 3.
Let ${\{Z(t)\}_{t\ge 0}}$ be an ${\mathcal{F}_{t}}$-adapted stochastic process, where $\{{\mathcal{F}_{t}}\}$ is a non-decreasing family of sub-sigma algebras. Then ${\{Z(t)\}_{t\ge 0}}$ is a $\mathit{CPPoK}(\lambda ,H)$, where H is a discrete distribution with support on ${\mathbb{Z}^{+}}$, if and only if the process $M(t)\hspace{0.1667em}=\hspace{0.1667em}Z(t)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y],t\ge 0$, is an ${\mathcal{F}_{t}}$-martingale.
Proof.
Let ${\{Z(t)\}_{t\ge 0}}$ be the ${\mathcal{F}_{t}}$ adapted stochastic process. If ${\{Z(t)\}_{t\ge 0}}$ is a compound Poisson process of order k, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[M(t)|{\mathcal{F}_{s}}]& \displaystyle =& \displaystyle \mathbb{E}[Z(t)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]|{\mathcal{F}_{s}}],\hspace{2.5pt}\hspace{2.5pt}0\le s\le t.\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)|{\mathcal{F}_{s}}]-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)-Z(s)|{\mathcal{F}_{s}}]+\mathbb{E}[Z(s)|{\mathcal{F}_{s}}]-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle \mathbb{E}[Z(t)-Z(s)]+Z(s)-\frac{k(k+1)}{2}\lambda t\mathbb{E}[Y]\\ {} & \displaystyle =& \displaystyle Z(s)-\frac{k(k+1)}{2}\lambda s\mathbb{E}[Y]=M(s).\end{array}\]
Hence, the process ${\{M(t)\}_{t\ge 0}}$ is an ${\mathcal{F}_{t}}$-martingale.
Remark 5.
We know that CPP is a Lévy process, and (4) shows that CPPoK is equal in distribution to ${\{D(t)\}_{t\ge 0}}$. Hence, CPPoK is also a Lévy process and therefore infinitely divisible.
Remark 6.
The characteristic function of $\mathit{CPPoK}(\lambda ,H)$ can be written as
(5)
\[ \mathbb{E}[{e^{iwZ(t)}}]=\exp \Bigg[t{\sum \limits_{j=1}^{\infty }}({e^{iwj}}-1)k\lambda {\alpha _{j}}\Bigg],\]
where ${\alpha _{j}},j=1,2,\dots $, are as defined in Remark 4, and $k\lambda {\alpha _{j}}={\nu _{j}}$ is called the Lévy measure of $\mathit{CPPoK}(\lambda ,H)$.
-
1. For $k=1$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [t{\textstyle\sum _{j=1}^{\infty }}({e^{iwj}}-1)\lambda {\alpha _{j}}]$, which is the characteristic function of $\mathit{CPP}(\lambda ,H)$.
-
2. For $H={\delta _{1}}$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [\lambda t{\textstyle\sum _{j=1}^{k}}({e^{iwj}}-1)]$, which is the characteristic function of $\mathit{PPoK}(\lambda )$.
-
3. For $H={\delta _{1}}$ and $k=1$, (5) reduces to $\mathbb{E}[{e^{iwZ(t)}}]=\exp [\lambda t({e^{iw}}-1)]$, which is the characteristic function of $\mathit{PP}(\lambda )$.
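As an illustration, the weights ${\alpha _{j}}$ of Remark 4 and the Lévy measure ${\nu _{j}}=k\lambda {\alpha _{j}}$ in (5) can be computed by truncated convolutions; the geometric claim pmf (with support $\{0,1,\dots \}$), the truncation level and the parameter values below are our assumptions.

```python
import numpy as np

k, lam, p, n_max = 3, 1.0, 0.5, 60                 # illustrative values
q = p * (1 - p) ** np.arange(n_max + 1)            # q_n = P[Y_1 = n], truncated
alpha = np.zeros(n_max + 1)
conv = q.copy()                                     # q^{(1)} = q
for n in range(1, k + 1):
    alpha += conv / k                               # add q^{(n)} / k
    conv = np.convolve(conv, q)[: n_max + 1]        # q^{(n+1)} = q^{(n)} * q
nu = k * lam * alpha                                # Levy measure of CPPoK
print(alpha.sum())                                  # ~ 1, up to truncation
print(nu[1:6])                                      # first few weights nu_j
```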
3 Main results
In this section, we recall the definitions of Lévy subordinator and its first exit time. Further, we define the subordinated versions of $\mathit{CPPoK}(\lambda ,H)$ and discuss their properties.
3.1 Lévy subordinator
A Lévy subordinator ${\{{D_{f}}(t)\}_{t\ge 0}}$ is a one-dimensional non-decreasing Lévy process whose Laplace transform (LT) can be expressed in the form (see [3])
\[ \mathbb{E}[{e^{-s{D_{f}}(t)}}]={e^{-tf(s)}},\hspace{2.5pt}s>0,\]
where the function $f:[0,\infty )\to [0,\infty )$ is called the Laplace exponent and
\[ f(s)=bs+{\int _{0}^{\infty }}(1-{e^{-sx}})\nu (dx),\hspace{2.5pt}s>0.\]
Here b is the drift coefficient and ν is a non-negative Lévy measure on positive half-line satisfying
\[ {\int _{0}^{\infty }}(x\wedge 1)\nu (dx)<\infty \hspace{2.5pt}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\hspace{2.5pt}\nu ([0,\infty ))=\infty ,\]
which ensures that the sample paths of ${D_{f}}(t)$ are almost surely $(a.s.)$ strictly increasing. Also, the inverse subordinator ${\{{E_{f}}(t)\}_{t\ge 0}}$ is the first exit time of the Lévy subordinator ${\{{D_{f}}(t)\}_{t\ge 0}}$, and it is defined as
\[ {E_{f}}(t)=\inf \{r\ge 0:{D_{f}}(r)>t\},\hspace{2.5pt}t\ge 0.\]
Next, we study $\mathit{CPPoK}(\lambda ,H)$ by taking the subordinator to be a mixture of tempered stable subordinators (MTSS).
3.2 CPPoK time changed by mixtures of tempered stable subordinators
The mixture of tempered stable subordinators (MTSS) ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is a Lévy process with LT (see [7])
\[ \mathbb{E}[{e^{-s{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)}}]=\exp \{-t({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))\},\hspace{2.5pt}s>0,\]
where ${c_{1}}+{c_{2}}=1,{c_{1}},{c_{2}}\ge 0,{\mu _{1}},{\mu _{2}}>0$ are tempering parameters and ${\alpha _{1}},{\alpha _{2}}\in (0,1)$ are stability indices. The function $f(s)={c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}})$ is the Laplace exponent of MTSS. It can also be represented as sum of two independent tempered stable subordinators ${S_{{\alpha _{1}}}^{{\mu _{1}}}}(t)$ and ${S_{{\alpha _{2}}}^{{\mu _{2}}}}(t)$ as
\[ {S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)={S_{{\alpha _{1}}}^{{\mu _{1}}}}({c_{1}}t)+{S_{{\alpha _{2}}}^{{\mu _{2}}}}({c_{2}}t),{c_{1}},{c_{2}}\ge 0.\]
The mean and variance of the MTSS are given as
\[ \mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})t,\hspace{1em}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=({c_{1}}{\alpha _{1}}(1-{\alpha _{1}}){\mu _{1}^{{\alpha _{1}}-2}}+{c_{2}}{\alpha _{2}}(1-{\alpha _{2}}){\mu _{2}^{{\alpha _{2}}-2}})t.\]
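For illustration, the MTSS can be simulated from the representation above as a sum of two tempered stable subordinators, drawing one-sided stable variables via the Kanter (Chambers–Mallows–Stuck) formula and tempering them by an exponential-tilting acceptance–rejection step. This simulation recipe is standard but not part of the paper, and all parameter values are arbitrary; the printed Monte Carlo mean should be close to the theoretical mean stated above.

```python
import numpy as np

rng = np.random.default_rng(2)

def stable_increment(alpha, h, size):
    """One-sided alpha-stable increments over time h (Laplace exponent s^alpha),
    via the Kanter / Chambers-Mallows-Stuck representation."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    x = (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))
    return h ** (1.0 / alpha) * x

def tempered_stable_increment(alpha, mu, h, size):
    """Increments with Laplace exponent (s+mu)^alpha - mu^alpha over time h,
    by accepting a stable draw x with probability exp(-mu*x)."""
    out = np.empty(size)
    for i in range(size):
        while True:
            x = stable_increment(alpha, h, 1)[0]
            if rng.random() < np.exp(-mu * x):
                out[i] = x
                break
    return out

# S(t) = S_{a1}^{m1}(c1*t) + S_{a2}^{m2}(c2*t), as in the representation above
a1, a2, m1, m2, c1, c2, t = 0.7, 0.9, 1.0, 2.0, 0.6, 0.4, 1.0
s = (tempered_stable_increment(a1, m1, c1 * t, 5000)
     + tempered_stable_increment(a2, m2, c2 * t, 5000))
mean_theory = (c1 * a1 * m1 ** (a1 - 1) + c2 * a2 * m2 ** (a2 - 1)) * t
print(s.mean(), mean_theory)
```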
Definition 6.
Let ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ be the Lévy subordinator satisfying $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{\rho }}]<\infty $ for all $\rho >0$. Then the time-changed $\mathit{CPPoK}(\lambda ,H)$, denoted by $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ is defined as
\[ {Z_{1}}(t)=Z({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where ${\{Z(t)\}_{t\ge 0}}$ is $\mathit{CPPoK}(\lambda ,H)$, independent of ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$.
Remark 7.
If ${\alpha _{1}}={\alpha _{2}}=\alpha $ and ${\mu _{1}}={\mu _{2}}=0$, then the MTSS becomes an α-stable subordinator, reducing ${Z_{1}}(t)$ to $\mathit{TCPPoK}(\lambda ,H,{S_{\alpha }})$, which we call the space-fractional CPPoK and write as
\[ Z({S_{\alpha }}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{\alpha }}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0.\]
Let ${Y^{\prime }_{i}}s\equiv 1$, then $Z({S_{\alpha }}(t))$ reduces to space fractional PPoK, denoted as $\mathit{PPoK}(\lambda ,{S_{\alpha }})$. Its pgf is given as
\[ {G_{{Z_{1}}(t)}}(u)={e^{[-t{\lambda ^{\alpha }}{(k-{\textstyle\textstyle\sum _{j=1}^{k}}{u^{j}})^{\alpha }}]}},\hspace{2.5pt}t\ge 0.\]
It can be seen as an extension of the space-fractional Poisson process (see [22]).
Remark 8.
If ${\alpha _{1}}={\alpha _{2}}=\alpha $, ${\mu _{1}}=\mu $ and ${\mu _{2}}=0$, then the MTSS reduces to a tempered α-stable subordinator and ${Z_{1}}(t)$ becomes the tempered space-fractional CPPoK, denoted as $\mathit{TCPPoK}(\lambda ,H,{S_{\alpha }^{\mu }})$, which can be written as
\[ Z({S_{\alpha }^{\mu }}(t))={\sum \limits_{i=1}^{{N^{(k)}}({S_{\alpha }^{\mu }}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0,\]
where $\mu >0$ is the tempering parameter. Substituting ${Y^{\prime }_{i}}s\equiv 1$, it becomes the tempered space-fractional PPoK, denoted as $\mathit{PPoK}(\lambda ,{S_{\alpha }^{\mu }})$.
Theorem 4.
The finite-dimensional distribution of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ has the following form:
(8)
\[ {F_{{Z_{1}}({t_{1}}),\dots ,{Z_{1}}({t_{n}})}}({y_{1}},\dots ,{y_{n}})=\sum \limits_{{j_{1}},\dots ,{j_{n}}}{\prod \limits_{l=1}^{n}}{q_{{j_{l}}}}(\Delta {t_{l}}){\int _{-\infty }^{{v_{1}}}}\dots {\int _{-\infty }^{{v_{n}}}}{\prod \limits_{m=1}^{n}}{h_{{Y_{1}}}^{\ast {j_{m}}}}({x_{m}})d{x_{m}},\]
where the summation is taken over all non-negative integers ${j_{i}}\ge 0,\hspace{2.5pt}i=1,2,\dots ,n$, $\Delta {t_{l}}={t_{l}}-{t_{l-1}}$, ${v_{k}}={y_{k}}-{\textstyle\sum _{l=1}^{k-1}}{x_{l}},k=1,\dots ,n$, h is the density of H, and ${q_{j}}(t)=\mathbb{P}[{N^{(k)}}({S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))=j]$, where ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is the MTSS and ${\{{N^{(k)}}(t)\}_{t\ge 0}}$ is $\mathit{PPoK}(\lambda )$.
Now, we present some distributional properties of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Theorem 5.
Let $0<s\le t<\infty $. Then the mean and covariance function of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ are given as
-
$(i)$ $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[Z(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$,
-
$(ii)$ $Cov[{Z_{1}}(s),{Z_{1}}(t)]=\mathbb{E}{[Z(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[Z(1)]$.
On putting $s=t$ in part (ii), we get the expression for the variance of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Remark 9.
Here are the mean and covariance for some specific cases of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$, corresponding to the cases discussed after Definition 3.
-
$(a)$ For $H={\delta _{1}}$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[{N^{(k)}}(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $\mathit{Cov}[{Z_{1}}(s),{Z_{1}}(t)]\hspace{0.1667em}=\hspace{0.1667em}\mathbb{E}{[{N^{(k)}}(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[{N^{(k)}}(1)]$ which is mean and covariance of TCPPoK-I as discussed in (see [29, Theorem 3.2]).
-
$(b)$ For $k=1$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\lambda \mathbb{E}[Y]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $Cov[{Z_{1}}(s),{Z_{1}}(t)]={(\lambda \mathbb{E}[Y])^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\lambda \mathbb{E}[{Y^{2}}]$ which is the mean and covariance of time-changed $\mathit{CPP}(\lambda ,H)$.
-
$(c)$ For $H={\delta _{1}}$ and $k=1$, (i) reduces to $\mathbb{E}[{Z_{1}}(t)]=\mathbb{E}[N(1)]\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and (ii) reduces to $Cov[{Z_{1}}(s),{Z_{1}}(t)]=\mathbb{E}{[N(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]\textit{Var}[N(1)]$ which is the mean and covariance of time-changed $\mathit{PP}(\lambda )$.
Now, we discuss the index of dispersion for $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$. For this, we evaluate
(9)
\[\begin{aligned}{}\text{Var}[{Z_{1}}(t)]-\mathbb{E}[{Z_{1}}(t)]& =\mathbb{E}{[Z(1)]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} & \hspace{1em}+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\left\{\text{Var}[Z(1)]-\mathbb{E}[Z(1)]\right\}.\end{aligned}\]
Since ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is a Lévy subordinator, $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]>0$. Thus the following cases arise:
-
$(i)$ If $Z(1)$ is over/equidispersed, then $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ exhibits overdispersion.
-
$(ii)$ If $Z(1)$ is underdispersed, then $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ may show over-, under- or equidispersion.
Moreover, we discuss the index of dispersion by taking an example of $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Example 5.
When H has exponential distribution with parameter $\mu >0$, (9) becomes
(10)
\[\begin{aligned}{}\text{Var}[{Z_{1}}(t)]-\mathbb{E}[{Z_{1}}(t)]=& \text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]{\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\times \\ {} & \frac{k(k+1)\lambda }{2\mu }\left[\frac{2k+4}{3\mu }-1\right].\end{aligned}\]-
1. If $0<\mu \le 1$, then $Z(1)$ is overdispersed. Since the sample paths of the Lévy subordinator ${S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$ are strictly increasing (see [28, Theorem 21.3]), $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ is positive, and both terms of (10) are positive. Therefore ${Z_{1}}(t)$ shows overdispersion.
-
2. If $\mu =\frac{2k+4}{3}$, then $Z(1)$ is equidispersed. The second term of (10) becomes zero but the first term is positive. Therefore ${Z_{1}}(t)$ shows overdispersion.
-
3. If $\mu >\frac{2k+4}{3}$, then $Z(1)$ is underdispersed. So the second term in (10) becomes negative and the following cases arise:
-
$(a)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]>\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows overdispersion.
-
$(b)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]<\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows underdispersion.
-
$(c)$ if ${\left[\frac{k(k+1)\lambda }{2\mu }\right]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\frac{k(k+1)\lambda }{2\mu }\left[1-\frac{2k+4}{3\mu }\right]$, then ${Z_{1}}(t)$ shows equidispersion.
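As a numerical illustration (all parameter values are arbitrary), the sign of (10) can be evaluated using the standard facts $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=t{f^{\prime }}(0)$ and $\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]=-t{f^{\prime\prime }}(0)$, with f the Laplace exponent of the MTSS; the derivatives are approximated by finite differences in this sketch.

```python
# Laplace exponent of the MTSS (Section 3.2); parameter values are illustrative
a1, a2, m1, m2, c1, c2 = 0.7, 0.9, 1.0, 2.0, 0.6, 0.4
f = lambda s: c1 * ((s + m1) ** a1 - m1 ** a1) + c2 * ((s + m2) ** a2 - m2 ** a2)

eps = 1e-4
fp0 = (f(eps) - f(-eps)) / (2 * eps)                  # f'(0)
fpp0 = (f(eps) - 2 * f(0.0) + f(-eps)) / eps ** 2     # f''(0)

lam, k, t = 1.0, 3, 1.0
ES, VS = t * fp0, -t * fpp0                           # E[S(t)], Var[S(t)]
for mu in (0.5, (2 * k + 4) / 3, 20.0):
    d = (VS * (k * (k + 1) * lam / (2 * mu)) ** 2
         + ES * k * (k + 1) * lam / (2 * mu) * ((2 * k + 4) / (3 * mu) - 1))
    print(round(mu, 3), "over" if d > 0 else ("under" if d < 0 else "equi"))
```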
3.3 Long-range dependence
Now we analyze the LRD property for $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$.
Theorem 6.
Let ${\{{Z_{1}}(t)\}_{t\ge 0}}$ be the time-changed $\mathit{CPPoK}(\lambda ,H)$ of Definition 6. Then it has the LRD property.
Proof.
We have $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{n}}]\sim {({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})^{n}}{t^{n}}$, as $t\to \infty $, from [7]. Moreover, since ${\{{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$ is a Lévy process, both $\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ and $\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$ are linear in t. Therefore, putting $s=t$ in Theorem 5(ii),
(11)
\[ \text{Var}[{Z_{1}}(t)]=\mathbb{E}{[Z(1)]^{2}}\text{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\text{Var}[Z(1)]=Kt,\]
where $K=\mathbb{E}{[Z(1)]^{2}}\textit{Var}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(1)]+\mathbb{E}[{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(1)]\text{Var}[Z(1)]>0$.
Let $0<s<t<\infty $; then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathit{Corr}[{Z_{1}}(s),{Z_{1}}(t)]& \displaystyle =& \displaystyle \frac{Cov[{Z_{1}}(s),{Z_{1}}(t)]}{\sqrt{\text{Var}[{Z_{1}}(s)]\text{Var}[{Z_{1}}(t)]}},\\ {} & \displaystyle \sim & \displaystyle \frac{\sqrt{Cov[{Z_{1}}(s),{Z_{1}}(t)]}}{{t^{1/2}}\sqrt{K}},\text{from (11)}\\ {} & \displaystyle =& \displaystyle c(s){t^{-1/2}},\end{array}\]
where $c(s)=\frac{\sqrt{Cov[{Z_{1}}(s),{Z_{1}}(t)]}}{\sqrt{K}}>0$; here we used that $Cov[{Z_{1}}(s),{Z_{1}}(t)]=\text{Var}[{Z_{1}}(s)]$ for $s\le t$, by Theorem 5. Hence, from Definition 5, $\mathit{TCPPoK}(\lambda ,H,{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ has the LRD property. □
3.4 CPPoK time changed by the first exit time of mixtures of tempered stable subordinators
In this subsection, we consider $\mathit{CPPoK}(\lambda ,H)$ subordinated with first exit time of MTSS and discuss the asymptotic behavior of its moments.
The first exit time of the Lévy subordinator ${S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$, also known as its inverse subordinator, is defined as
\[ {E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)=\inf \{r\ge 0:{S_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(r)>t\},\hspace{2.5pt}t\ge 0.\]
Definition 7.
Let ${\{Z(t)\}_{t\ge 0}}$ be the $\mathit{CPPoK}(\lambda ,H)$ as discussed in Definition 3, then the subordinated CPPoK with first exit time of MTSS, denoted as $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ is defined as
\[ {Z_{2}}(t)=Z({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))={\sum \limits_{i=1}^{{N^{(k)}}({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0\]
where the process ${\{Z(t)\}_{t\ge 0}}$ is independent of ${\{{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)\}_{t\ge 0}}$.
Remark 10.
If ${\mu _{1}}={\mu _{2}}=0$, then ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$ becomes the inverse mixed stable subordinator discussed in [9], reducing ${Z_{2}}(t)$ to $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}})$, which we call the mixed fractional CPPoK and write as
\[ Z({E_{{\alpha _{1}},{\alpha _{2}}}}(t))={\sum \limits_{i=1}^{{N^{(k)}}({E_{{\alpha _{1}},{\alpha _{2}}}}(t))}}{Y_{i}},\hspace{2.5pt}t\ge 0.\]
Proposition 3.1.
The marginal distribution of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ is given as
\[ \mathbb{P}[{Z_{2}}(t)\le y]={\int _{0}^{\infty }}{F_{Z(x)}}(y){g_{E}}(x,t)dx,\]
where ${F_{Z(x)}}(y)={\textstyle\sum _{j=0}^{\infty }}{p_{j}}(x){\textstyle\int _{-\infty }^{y}}{h_{{Y_{1}}}^{\ast j}}(z)dz$ is the marginal distribution of $\mathit{CPPoK}(\lambda ,H)$, and ${g_{E}}(x,t)$ is the density function of ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$.
Proof.
We prove this proposition by using the conditioning argument on ${E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)$. Then the rest follows from Theorem 1. □
Theorem 7.
The mean and covariance function of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$ are given as
-
1. $\mathbb{E}[{Z_{2}}(t)]=\mathbb{E}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$,
-
2. $\textit{Cov}[{Z_{2}}(s),{Z_{2}}(t)]=\textit{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s)]+\mathbb{E}{[Z(1)]^{2}}\textit{Cov}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(s),{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$.
Now, we discuss the asymptotic behavior of moments of $\mathit{TCPPoK}(\lambda ,H,{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}})$. First we need the following Theorem (see [3, 30]).
Theorem 8 (Tauberian Theorem).
Let $l:(0,\infty )\to (0,\infty )$ be a slowly varying function at 0 (respectively ∞) and let $\rho \ge 0$. Then for a monotone function $U:(0,\infty )\to (0,\infty )$ with LT $\tilde{U}(s)={\textstyle\int _{0}^{\infty }}{e^{-st}}U(t)dt$, the following are equivalent:
-
$(i)$ $U(t)\sim \frac{{t^{\rho }}l(t)}{\Gamma (1+\rho )}$, as $t\to 0$ (respectively $\infty $);
-
$(ii)$ $\tilde{U}(s)\sim {s^{-\rho -1}}l(1/s)$, as $s\to \infty $ (respectively $0$).
We know that the LT of the pth order moment of the inverse subordinator ${\{{E_{f}}(t)\}_{t\ge 0}}$ is given by (see [16])
\[ \mathcal{L}[\mathbb{E}{({E_{f}}(t))^{p}}]=\frac{\Gamma (1+p)}{s{(f(s))^{p}}},\hspace{2.5pt}p>0,\]
where $f(s)$ is the Bernstein function (Laplace exponent) associated with the corresponding Lévy subordinator.
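As a numerical cross-check (a sketch that assumes the mpmath library and its invertlaplace routine are available; parameter values are arbitrary), one can invert this LT for $p=1$ with the MTSS Laplace exponent and observe the linear growth derived in Proposition 3.2 below.

```python
from mpmath import mp, invertlaplace

mp.dps = 30
# MTSS Laplace exponent f(s) from Section 3.2 (illustrative parameters)
a1, a2, m1, m2, c1, c2 = 0.7, 0.9, 1.0, 2.0, 0.6, 0.4
f = lambda s: c1 * ((s + m1) ** a1 - m1 ** a1) + c2 * ((s + m2) ** a2 - m2 ** a2)
lt_mean = lambda s: 1 / (s * f(s))      # LT of E[E_f(t)] (p = 1, Gamma(2) = 1)
C = c1 * a1 * m1 ** (a1 - 1) + c2 * a2 * m2 ** (a2 - 1)
for t in (10.0, 100.0, 1000.0):
    # numerical inversion vs. the large-t asymptote t / C
    print(t, invertlaplace(lt_mean, t, method='talbot'), t / C)
```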
Proposition 3.2.
The asymptotic forms of the mean and variance of ${\{{Z_{2}}(t)\}_{t\ge 0}}$ are given as
\[\begin{array}{l}\displaystyle \mathbb{E}[{Z_{2}}(t)]\sim \mathbb{E}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\text{as}\hspace{2.5pt}t\to \infty ,\\ {} \displaystyle \textit{Var}[{Z_{2}}(t)]\sim \textit{Var}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\text{as}\hspace{2.5pt}t\to \infty .\end{array}\]
Proof.
Let $\tilde{M}(s)$ be the LT of $\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]$. Then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \tilde{M}(s)=\mathcal{L}[\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]]& \displaystyle =& \displaystyle \frac{1}{s({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))}\\ {} & \displaystyle \sim & \displaystyle \frac{1}{{s^{2}}({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})},\hspace{2.5pt}\text{as}\hspace{2.5pt}s\to 0\hspace{2.5pt}(\text{see [7]}).\end{array}\]
Then, by using Theorem 8, we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{Z_{2}}(t)]& \displaystyle =& \displaystyle \mathbb{E}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} & \displaystyle \sim & \displaystyle \mathbb{E}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\text{as}\hspace{2.5pt}t\to \infty .\end{array}\]
Now we compute the asymptotic behavior of variance of ${\{{Z_{2}}(t)\}_{t\ge 0}}$. So
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathcal{L}[\mathbb{E}{({E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t))^{2}}]& \displaystyle =& \displaystyle \frac{2}{s{({c_{1}}({(s+{\mu _{1}})^{{\alpha _{1}}}}-{\mu _{1}^{{\alpha _{1}}}})+{c_{2}}({(s+{\mu _{2}})^{{\alpha _{2}}}}-{\mu _{2}^{{\alpha _{2}}}}))^{2}}}\\ {} & \displaystyle \sim & \displaystyle \frac{2}{{s^{3}}{({c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}})^{2}}},\hspace{2.5pt}\text{as}\hspace{2.5pt}s\to 0.\end{array}\]
Therefore
\[\begin{aligned}{}\text{Var}[{Z_{2}}(t)]=& \text{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]+\mathbb{E}{[Z(1)]^{2}}\text{Var}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]\\ {} =& \text{Var}[Z(1)]\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]+\mathbb{E}{[Z(1)]^{2}}\{\mathbb{E}[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}{(t)^{2}}]-\mathbb{E}{[{E_{{\alpha _{1}},{\alpha _{2}}}^{{\mu _{1}},{\mu _{2}}}}(t)]^{2}}\}\\ {} \sim & \text{Var}[Z(1)]\frac{t}{{c_{1}}{\alpha _{1}}{\mu _{1}^{{\alpha _{1}}-1}}+{c_{2}}{\alpha _{2}}{\mu _{2}^{{\alpha _{2}}-1}}},\hspace{2.5pt}\hspace{2.5pt}\text{as}\hspace{2.5pt}t\to \infty .\end{aligned}\]
□
There are a few studies available in the literature that consider the underdispersion phenomenon: [27] uses a hyper-Poisson regression model, [25] uses weighted Poisson distributions, and [21] uses compound weighted Poisson distributions. In this work, we have proposed a model accommodating a larger number of parameters, which makes it more flexible to use.