## 1 Introduction

Let $\{L(t),t\ge 0\}$ be a Lévy process whose representative random variable $L(1)$ has the infinitely divisible logarithmic series distribution. The distribution of a Lévy process is completely determined by the distribution of its representative random variable, which is infinitely divisible [22]. The probability generating function (p.g.f.) of the random variable $L(1)$ is expressed by the Gauss hypergeometric function ${_{2}}{F_{1}}(1,1;2;z)$ [10]. This makes the methods of enumerative combinatorics indispensable in this study [19]. Thus, using the partial Bell polynomials, we obtain explicit representations of the transition probability and the Lévy measure of this process. But first of all we distinguish two logarithmic distributions.

The logarithmic series distribution supported by the positive integers $N=\{1,2,\dots \}$ was first introduced by R.A. Fisher, A.S. Corbet and C.B. Williams (1943) [9]. It is not an infinitely divisible distribution. It serves as the Lévy measure of the well-known Negative-Binomial process. The paper [9] represents an impressive combination of empirical data and mathematical analysis, and remains a model for ecology today.

The logarithmic series distribution supported by nonnegative integers ${Z_{+}}=\{0,1,\dots \}$ is a particular case of the Kemp generalized hypergeometric probability distribution (1956) [12]. Its infinite divisibility was proved by K. Katti (1967) [11]. Infinitely divisible random variables with values in ${Z_{+}}$ were first studied by Feller (1968) [8], where it is shown that on ${Z_{+}}$ the infinitely divisible distributions coincide with the compound Poisson distributions. A historical review on the origin of infinitely divisible distributions “from de Finetti’s problem to Lévy–Khintchine formula” is presented by F. Mainardi and S. Rogosin (2006) [15].

At the present time, there are many works related to the topics of infinite divisibility and discrete distributions. Among them are the monographs of F.W. Steutel and K. Van Harn [22], and of N.L. Johnson, A.W. Kemp and S. Kotz [10]. Integer-valued Lévy processes and their application in financial econometrics are developed by O.E. Barndorff-Nielsen, D. Pollard and N. Shephard [1]. The compositions of Poisson and Gamma processes are investigated by K. Buchak and L. Sakhno in [4, 5]. The consecutive subordinations of Poisson and Gamma processes, realized on two sequences containing only four new processes, are studied in [17]. It is shown there how the additional randomness caused by the random time is accumulated. The transformation of the Poisson random measure and the jump structure of the subordinated process are described in [16]. Other interesting integer-valued Markov processes are derived from Markov branching processes. In the model of a branching particle system with a random initial condition, we obtained distributions describing the number of particles at time *t*, corresponding to the compound Poisson processes over the radius of a flux of particles [18]. The subordination by Bochner is also developed in many books and articles related to applications in financial mathematics and functional analysis, see [20, 7]. Chapter 3 of [3] is devoted to the subordinators. The Lévy measure and potential kernel are also considered in [2]. The properties of the Bernstein functions are studied in [21]. The theory of subordinators and inverse subordinators is applied to study risk processes in insurance models by N. Leonenko et al. in [13, 14].

Our work on the topic is described in the following sections. In Section 2 we introduce the infinitely divisible logarithmic series distribution and its *m*-fold convolution. Our main tools of investigation are the Gauss hypergeometric function, partial Bell polynomials, Stirling numbers and harmonic numbers. In Section 3 we present two methods for defining the transition probability of the Lévy process $L(t)$ – starting with the Lévy measure, or starting with the p.g.f. $F(t,s)=E[{s^{L(t)}}]$ and its Taylor expansion. Then, in Sections 4 and 5, we consider the subordinated processes $Y(t)$ and $Z(t)$. They are obtained, respectively, from the Negative-Binomial process $X(t)$ directed by the Gamma process, and from the Logarithmic Lévy process $L(t)$ directed by the Poisson process. In this study of subordinated processes we also proceed by two methods – either integrating the transition probability of the ground process, as shown in [20], or constructing the compound Poisson process with a prescribed Lévy measure. The Bernstein functions of the processes $L(t)$ and $Y(t)$ contain the iterated logarithmic function. The Lévy measure of $Z(t)$ is a shifted Lévy measure of $X(t)$. We compare the behaviour of all these processes in order to better understand the place of the Logarithmic Lévy process in the picture of compound Poisson processes. Several combinatorial identities arise as auxiliary results. Finally, some applications derived from the studied processes are explained and demonstrated in Section 6.

## 2 Infinitely divisible logarithmic series distribution and the Gauss hypergeometric function

The Gauss hypergeometric function is defined in [10], page 20, for $|z|<1$ by

\[ {_{2}}{F_{1}}(c,d;g;z)={\sum \limits_{k=0}^{\infty }}\frac{{[c]_{k\uparrow }}{[d]_{k\uparrow }}}{{[g]_{k\uparrow }}}\frac{{z^{k}}}{k!},\]

where the increasing factorial, known as Pochhammer’s symbol, is denoted as ${[c]_{k\uparrow }}=c(c+1)\cdots (c+k-1)$, ${[c]_{0\uparrow }}=1$. In particular,

\[ {_{2}}{F_{1}}(1,1;2;z)={\sum \limits_{k=0}^{\infty }}\frac{k!k!}{(k+1)!}\frac{{z^{k}}}{k!}=\frac{-\log (1-z)}{z}.\]

By the definition given in [22], Chapter 2, Example 11.7, the infinitely divisible random variable $L(1)$ with logarithmic series distribution has the probability mass function (p.m.f.) supported by $\{0,1,2,\dots \}$ and given as follows:

##### (1)
\[ P(L(1)=k)=\frac{1}{A}\frac{{\alpha ^{k+1}}}{(k+1)},\hspace{1em}0<\alpha <1,\hspace{1em}A=-\log (1-\alpha ),\hspace{1em}k=0,1,\dots .\]

The p.g.f. defined by $F(1,s)=E[{s^{L(1)}}]$, $|s|\le 1$, is

\[ F(1,s)={\sum \limits_{k=0}^{\infty }}\frac{{\alpha ^{k+1}}}{k+1}\frac{{s^{k}}}{A}=\frac{\alpha }{A}{\sum \limits_{k=0}^{\infty }}\frac{{(\alpha s)^{k}}}{k+1}=\frac{-\log (1-\alpha s)}{As}\]

and can be presented as follows:

##### (2)
\[ F(1,s)=\frac{\alpha }{A}(1+G(s)),\hspace{1em}G(s)={\sum \limits_{k=1}^{\infty }}\frac{{(\alpha s)^{k}}}{k+1},\hspace{1em}G(1)=\frac{A-\alpha }{\alpha }.\]

We remark that this simple representation is the starting point of the Taylor expansion. Let us denote the finite sum of the random variables ${L_{1}},{L_{2}},\dots ,{L_{m}}$, being independent copies of $L(1)$, as

##### (3)
\[ L(m)={L_{1}}+{L_{2}}+\cdots +{L_{m}}.\]

By convolution of the p.m.f. we express the probability distribution of the random variable $L(2)$ by means of the harmonic numbers as follows:

\[ P(L(2)=n)={\left(\frac{\alpha }{A}\right)^{2}}\frac{2{\alpha ^{n}}}{n+2}\left(1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{n+1}\right),\hspace{1em}n=0,1,2,\dots .\]

Knowing the infinite divisibility of $L(1)$, we write the p.g.f. of $L(m)$ (3) as the power $F(m,s)={\{F(1,s)\}^{m}}$. We present here two methods of expanding $F(m,s)$ as a power series of *s* – expanding only ${(-\log (1-\alpha s))^{m}}$, or using the binomial expansion of ${(\frac{\alpha }{A}(1+G(s)))^{m}}$. For this purpose, the function $G(s)$ is presented as an exponential generating function:

##### (4)
\[ G(s)={\sum \limits_{k=1}^{\infty }}{g_{k}}\frac{{s^{k}}}{k!},\hspace{1em}{g_{k}}=\frac{{\alpha ^{k}}k!}{k+1}.\]

For convenience, we denote sequences by a bullet, as shown in [19], ${x_{\bullet }}=({x_{1}},{x_{2}},\dots )$. Similarly, sequences of powers are written as ${a^{\bullet }}=(a,{a^{2}},\dots )$, and in particular ${1^{\bullet }}=(1,1,\dots )$. In both expansions we use the Faà di Bruno formula and the partial Bell polynomials ${B_{n,k}}$ [19], allowing us to express the power ${[G(s)]^{k}}$ as follows:

##### (5)
\[ \frac{{(G(s))^{k}}}{k!}=\frac{1}{k!}{\left({\sum \limits_{j=1}^{\infty }}{g_{j}}\frac{{s^{j}}}{j!}\right)^{k}}={\sum \limits_{n=k}^{\infty }}{B_{n,k}}({g_{\bullet }})\frac{{s^{n}}}{n!},\]

where

\[ {B_{n,k}}({g_{\bullet }})=\sum \limits_{({k_{1}},{k_{2}},\dots ,{k_{n}})}\frac{n!{g_{1}^{{k_{1}}}}\cdots {g_{n}^{{k_{n}}}}}{{k_{1}}!{(1!)^{{k_{1}}}}\cdots {k_{n}}!{(n!)^{{k_{n}}}}}.\]

The sum is over all partitions of *n* into *k* parts, that is, over all nonnegative integer solutions $({k_{1}},{k_{2}},\dots ,{k_{n}})$ of the equations

\[ {k_{1}}+{k_{2}}+\cdots +{k_{n}}=k,\hspace{1em}{k_{1}}+2{k_{2}}+\cdots +n{k_{n}}=n.\]

For example, ${B_{n,1}}({x_{\bullet }})={x_{n}}$, ${B_{n,n}}({x_{\bullet }})={({x_{1}})^{n}}$ and

##### (6)
\[ {B_{n,k}}(a{b^{\bullet }}{x_{\bullet }})={a^{k}}{b^{n}}{B_{n,k}}({x_{\bullet }}).\]

The falling factorials are defined as follows: ${[t]_{k\downarrow }}=t(t-1)\cdots (t-k+1)$, ${[t]_{0\downarrow }}=1$. Let us denote the Stirling numbers of the **first** kind by $|s(n,k)|$ and $s(n,k)$, respectively – unsigned and signed, depending on the parity of $n-k$ – given by

\[ |s(n,k)|={B_{n,k}}\big((\bullet -1)!\big),\hspace{1em}s(n,k)={(-1)^{n-k}}|s(n,k)|.\]

The Stirling numbers of the first kind transform the factorials into powers,

##### (7)
\[ {[u]_{n\uparrow }}={\sum \limits_{k=0}^{n}}|s(n,k)|{u^{k}},\hspace{1em}{[t]_{k\downarrow }}={\sum \limits_{j=0}^{k}}s(k,j){t^{j}}.\]

Thus, having these definitions and relations, the following useful lemma is formulated and proved.

##### Lemma 1.

*The m-fold convolution of the infinitely divisible logarithmic series distribution (1) can be equivalently expressed in the following forms:*

##### (8)
\[ P(L(m)=n)={\left(\frac{\alpha }{A}\right)^{m}}\frac{{\alpha ^{n}}}{n!}|s(m+n,m)|\frac{m!n!}{(m+n)!},\hspace{1em}n=0,1,2,\dots ,\]

*or*

##### (9)
\[ P(L(m)=n)={\left(\frac{\alpha }{A}\right)^{m}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{m\wedge n}}\frac{m!}{(m-k)!}{B_{n,k}}({c_{\bullet }}),\hspace{1em}{B_{0,0}}=1,\]

*where*

\[ {c_{k}}=\frac{k!}{k+1},\hspace{1em}k=1,2,\dots .\]

##### Proof.

Expanding only the logarithmic function $(-\log (1-\alpha s))$ we obtain

\[ -\log (1-\alpha s)={\sum \limits_{k=1}^{\infty }}\frac{{(\alpha s)^{k}}}{k}={\sum \limits_{k=1}^{\infty }}\frac{(k-1)!{(\alpha s)^{k}}}{k!}.\]

Then, for the powers we have \[ \frac{1}{m!}{\left(-\log (1-\alpha s)\right)^{m}}={\sum \limits_{k=m}^{\infty }}{B_{k,m}}({\alpha ^{\bullet }}(\bullet -1)!)\frac{{s^{k}}}{k!}\]

and \[ \frac{1}{m!}{\left(\frac{-\log (1-\alpha s)}{As}\right)^{m}}=\frac{1}{{A^{m}}}{\sum \limits_{k=m}^{\infty }}{\alpha ^{k}}|s(k,m)|\frac{{s^{k-m}}}{k!}.\]

The change of variable $n=k-m$ leads to \[ F(m,s)={\left(\frac{\alpha }{A}\right)^{m}}{\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{n}}{s^{n}}}{n!}|s(m+n,m)|\frac{m!n!}{(m+n)!}.\]

In the Taylor theorem on the binomial expansion of $F(m,s)$ there is only a finite number of terms: \[ F(m,s)={\left(\frac{\alpha }{A}\right)^{m}}{\left[1+G(s)\right]^{m}}={\left(\frac{\alpha }{A}\right)^{m}}{\sum \limits_{k=0}^{m}}\frac{m!}{(m-k)!}\frac{{[G(s)]^{k}}}{k!}.\]

After replacing $\frac{{[G(s)]^{k}}}{k!}$ by the expansion (5) we change the summation order. Then the obtained result is \[ F(m,s)={\left(\frac{\alpha }{A}\right)^{m}}{\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{n}}{s^{n}}}{n!}{\sum \limits_{k=0}^{m\wedge n}}\frac{m!}{(m-k)!}{B_{n,k}}({c_{{^{\bullet }}}}),\]

where $m\wedge n=\min \{m,n\}$. The p.m.f. $P(L(m)=n)$ is given by the sequence of coefficients in front of ${s^{n}}$ in the p.g.f. $F(m,s)$.  □

The comparison of the two expressions (8) and (9) leads to the following combinatorial identity:

\[ {\sum \limits_{k=1}^{m\wedge n}}\frac{1}{(m-k)!}{B_{n,k}}({c_{\bullet }})=|s(m+n,m)|\frac{n!}{(m+n)!}.\]
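This identity can be checked numerically with exact rational arithmetic; the sketch below (the helper names are ours) implements the standard recurrences for the partial Bell polynomials and the unsigned Stirling numbers of the first kind:

```python
from fractions import Fraction
from math import comb, factorial

def bell(n, k, x):
    # Partial Bell polynomial B_{n,k}(x[1], x[2], ...) via the standard recurrence.
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, j - 1) * x[j] * bell(n - j, k - 1, x)
               for j in range(1, n - k + 2))

def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind:
    # |s(n,k)| = |s(n-1,k-1)| + (n-1)|s(n-1,k)|.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

c = {k: Fraction(factorial(k), k + 1) for k in range(1, 9)}

for m in range(1, 6):
    for n in range(1, 6):
        lhs = sum(bell(n, k, c) / factorial(m - k)
                  for k in range(1, min(m, n) + 1))
        rhs = Fraction(stirling1(m + n, m) * factorial(n), factorial(m + n))
        assert lhs == rhs
print("identity verified for m, n = 1..5")
```

The check runs in exact rationals, so the agreement is not a floating-point coincidence.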

##### Remark 1.

The harmonic numbers take part in the expansion of the hypergeometric functions. A complete review on summation formulas involving generalized harmonic numbers and Stirling numbers is given in [6]. The generalized harmonic numbers are defined as follows:

\[ {H_{n}^{(k)}}:=1+\frac{1}{{2^{k}}}+\frac{1}{{3^{k}}}+\cdots +\frac{1}{{n^{k}}},\hspace{1em}{H_{n}^{(1)}}={H_{n}}.\]
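Relations of this kind, for instance $|s(n+2,2)|=(n+1)!\,{H_{n+1}}$, are easy to verify directly; the sketch below uses exact rationals (the helper names are ours):

```python
from fractions import Fraction
from math import factorial

def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def H(n, k=1):
    # Generalized harmonic number H_n^{(k)}.
    return sum(Fraction(1, j ** k) for j in range(1, n + 1))

for n in range(1, 9):
    assert stirling1(n + 1, 2) == factorial(n) * H(n)
    assert stirling1(n + 2, 3) == Fraction(factorial(n + 1), 2) * (
        H(n + 1) ** 2 - H(n + 1, 2))
    assert stirling1(n + 3, 4) == Fraction(factorial(n + 2), 6) * (
        H(n + 2) ** 3 - 3 * H(n + 2) * H(n + 2, 2) + 2 * H(n + 2, 3))
print("Stirling-harmonic relations hold")
```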

For $m=2,3,4,\dots $, we use the following relations between the Stirling numbers of the first kind and the generalized harmonic numbers to calculate directly the convolution probability of $L(m)$ and to confirm the previous combinatorial identity. For example,

\[ |s(n+2,2)|=(n+1)!\,{H_{n+1}},\]

and

\[ |s(n+3,3)|=\frac{(n+2)!}{2}\left[{({H_{n+2}})^{2}}-{H_{n+2}^{(2)}}\right],\]

and

\[ |s(n+4,4)|=\frac{(n+3)!}{6}\left[{({H_{n+3}})^{3}}-3{H_{n+3}}{H_{n+3}^{(2)}}+2{H_{n+3}^{(3)}}\right].\]

The general recurrence formula for this relation is given in [6].

## 3 Transition probability and the Bernstein function of the Logarithmic Lévy process

The principal information on the behaviour of any Lévy process is given by its representative random variable and is expressed by the canonical representation of the Bernstein function and the Lévy measure, [20, 21]. The Laplace transform of the process $L(t)$ is given by

\[ E\left[{e^{-\lambda L(t)}}\right]={e^{-t{\psi _{L}}(\lambda )}},\hspace{1em}\lambda \ge 0,\]

where the Laplace exponent ${\psi _{L}}$ is a Bernstein function defined by the random variable $L(1)$ as follows:

\[ {\psi _{L}}(\lambda )=-\log E\left[{e^{-\lambda L(1)}}\right].\]

For integer-valued Lévy processes it is also convenient to work with probability generating functions, see [22], Chapter 2.

Let us denote the Lévy measure of the process $L(t)$ by ${\Pi _{L}}(n)$, $n=1,2,\dots $, its total mass by ${\theta _{L}}={\textstyle\sum _{n=1}^{\infty }}{\Pi _{L}}(n)$, and its generating function by

\[ {Q_{L}}(s)={\sum \limits_{n=1}^{\infty }}{\Pi _{L}}(n){s^{n}},\hspace{1em}|s|\le 1.\]

The normalised Lévy measure is denoted by ${\widetilde{\Pi }_{L}}(n)={\Pi _{L}}(n)/{\theta _{L}}$ and, respectively, its p.g.f. by ${\widetilde{Q}_{L}}(s)={Q_{L}}(s)/{\theta _{L}}$. In these notations, the p.g.f. ${F_{L}}(t,s)=E[{s^{L(t)}}]$ is given by

\[ {F_{L}}(t,s)=\exp \{-t{\theta _{L}}[1-{\widetilde{Q}_{L}}(s)]\}=\exp \{-t{\theta _{L}}+t{Q_{L}}(s)\},\hspace{1em}|s|\le 1,\]

and the Bernstein function is in the form \[ {\psi _{L}}(\lambda )={\sum \limits_{k=1}^{\infty }}\left(1-{e^{-\lambda k}}\right){\Pi _{L}}(k),\hspace{1em}\lambda \ge 0.\]

All these characteristics of the Logarithmic Lévy process $L(t)$ are specified in the following lemma.

##### Lemma 2.

*The Lévy measure of the process* $L(t)$ *generated by the infinitely divisible logarithmic series distribution* $L(1)$ *(1) is given for* $n=1,2,\dots $ *by the partial Bell polynomials as follows:*

##### (11)
\[ {\Pi _{L}}(n)=\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=1}^{n}}{(-1)^{k-1}}(k-1)!{B_{n,k}}({c_{\bullet }}),\hspace{1em}{c_{k}}=\frac{k!}{k+1}.\]

*The generating function of the Lévy measure is*

\[ {Q_{L}}(s)=\log (1+G(s)),\hspace{1em}{_{2}}{F_{1}}(1,1;2;\alpha s)=1+G(s),\hspace{1em}|s|\le 1.\]

*The Bernstein function of the Logarithmic Lévy process*$L(t)$

*is*

\[ {\psi _{L}}(\lambda )={\theta _{L}}\left\{1-\frac{1}{{\theta _{L}}}\log (1+G({e^{-\lambda }}))\right\},\hspace{1em}\lambda \ge 0,\]

*where the total mass of the Lévy measure is*

\[ {\theta _{L}}=\log \left(\frac{A}{\alpha }\right).\]

##### Proof.

Following representation (2) of p.g.f. $F(1,s)$, it is enough to write

\[ \log (F(1,s))=\log \left(\frac{\alpha }{A}[1+G(s)]\right)=\log \left(\frac{\alpha }{A}\right)+\log \left(1+G(s)\right)\]

in order to get the generating function of the Lévy measure. The total mass of the Lévy measure is ${\theta _{L}}=-\log \left(\frac{\alpha }{A}\right)$ because $G(0)=0$. The logarithmic function $\log (1+x)$ is expanded by the signed Stirling numbers of the first kind, and the expansion of $G(s)$ is given previously in (5). Then

##### (12)
\[ \log (1+G(s))={\sum \limits_{k=1}^{\infty }}{(-1)^{k-1}}\frac{{(G(s))^{k}}}{k}={\sum \limits_{k=1}^{\infty }}{(-1)^{k-1}}(k-1)!\frac{{(G(s))^{k}}}{k!}.\]

Exchanging the order of summation, and in view of ${B_{n,k}}=0$ for $k\ge n+1$, we write

\[ {\sum \limits_{k=1}^{\infty }}{(-1)^{k-1}}(k-1)!{\sum \limits_{n=k}^{\infty }}{B_{n,k}}({g_{\bullet }})\frac{{s^{n}}}{n!}={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}{(-1)^{k-1}}(k-1)!{B_{n,k}}({c_{\bullet }})\frac{{\alpha ^{n}}{s^{n}}}{n!}.\]

The Lévy measure is given by the sequence of coefficients in front of ${s^{n}}$ in ${Q_{L}}(s)$.  □

As a direct result of (11), the computation of the first several terms of the Lévy measure is simplified:

\[\begin{array}{l}\displaystyle {\Pi _{L}}(1)=\frac{\alpha }{2},\hspace{1em}{\Pi _{L}}(2)=\frac{{\alpha ^{2}}}{2!}\frac{5}{12},\hspace{1em}{\Pi _{L}}(3)=\frac{{\alpha ^{3}}}{3!}\frac{3}{4},\\ {} \displaystyle {\Pi _{L}}(4)=\frac{{\alpha ^{4}}}{4!}\frac{251}{120},\hspace{1em}{\Pi _{L}}(5)=\frac{{\alpha ^{5}}}{5!}\frac{95}{12}.\end{array}\]
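These rational coefficients can be confirmed by evaluating (11) exactly (a short sketch; the helper names are ours):

```python
from fractions import Fraction
from math import comb, factorial

def bell(n, k, x):
    # Partial Bell polynomial B_{n,k}(x[1], x[2], ...) via the standard recurrence.
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, j - 1) * x[j] * bell(n - j, k - 1, x)
               for j in range(1, n - k + 2))

c = {k: Fraction(factorial(k), k + 1) for k in range(1, 9)}

def levy_coeff(n):
    # Rational factor in (11): Pi_L(n) = (alpha^n / n!) * levy_coeff(n).
    return sum((-1) ** (k - 1) * factorial(k - 1) * bell(n, k, c)
               for k in range(1, n + 1))

expected = [Fraction(1, 2), Fraction(5, 12), Fraction(3, 4),
            Fraction(251, 120), Fraction(95, 12)]
assert [levy_coeff(n) for n in range(1, 6)] == expected
print("Levy measure coefficients confirmed")
```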

It is well known from [22] (see Theorem 4.4, Chapter 2) that the p.m.f. $P(L(1)=n)$, $n=0,1,\dots $, (1) is related to the sequence of the (canonical) Lévy measure ${\Pi _{L}}(n)$, $n=1,2,\dots $, (11) by the following recurrence equation:

\[ n\,P(L(1)=n)={\sum \limits_{k=1}^{n}}k\,{\Pi _{L}}(k)\,P(L(1)=n-k),\hspace{1em}n=1,2,\dots .\]

It is equivalent to the following combinatorial identity:

\[ \frac{n}{n+1}={\sum \limits_{k=1}^{n}}\frac{k}{(n-k+1)\,k!}{\sum \limits_{j=1}^{k}}{(-1)^{j-1}}(j-1)!{B_{k,j}}({c_{\bullet }}).\]

There are two ways to define the transition probability $P(L(t)=n)$, $n=0,1,\dots $, of the Lévy process $L(t)$. We could proceed either by starting with p.g.f. ${F_{L}}(t,s)$ and its Taylor expansion or by using the Lévy measure to define the compound Poisson process $L(t)$. We present these methods separately in two independent proofs.

##### Theorem 1.

*Let* $\{L(t),t\ge 0\}$ *be a Lévy process generated by the infinitely divisible logarithmic series distribution (1) supported by* $\{0,1,2,\dots \}$*. Then its transition probability is given for* $n=0,1,2,\dots $ *by*

##### (13)
\[ P(L(t)=n)={\left(\frac{\alpha }{A}\right)^{t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}{[t]_{k\downarrow }}{B_{n,k}}({c_{\bullet }}),\]

*or equivalently:*

##### (14)
\[ P(L(t)=n)={\left(\frac{\alpha }{A}\right)^{t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}{t^{k}}{B_{n,k}}({y_{\bullet }}),\hspace{1em}{B_{0,0}}=1,\]

*where*

\[ {y_{n}}={\sum \limits_{j=1}^{n}}{(-1)^{j-1}}(j-1)!{B_{n,j}}({c_{\bullet }})=\frac{n!}{{\alpha ^{n}}}{\Pi _{L}}(n),\hspace{1em}{c_{k}}=\frac{k!}{k+1}.\]

##### Proof 1.

The transition probability $P(L(t)=n)$ is the coefficient in front of ${s^{n}}$ in the expansion of p.g.f. ${F_{L}}(t,s)={\left(\frac{\alpha }{A}\right)^{t}}{(1+G(s))^{t}}$. The Taylor theorem for the binomial expansion following (5) leads to

\[ {F_{L}}(t,s)={\left(\frac{\alpha }{A}\right)^{t}}{\sum \limits_{k=0}^{\infty }}{[t]_{k\downarrow }}{\sum \limits_{n=k}^{\infty }}{B_{n,k}}({g_{\bullet }})\frac{{s^{n}}}{n!}.\]

Then after exchanging the order of summation we find \[ {F_{L}}(t,s)={\left(\frac{\alpha }{A}\right)^{t}}{\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{n}}{s^{n}}}{n!}{\sum \limits_{k=0}^{n}}{[t]_{k\downarrow }}{B_{n,k}}({c_{\bullet }}).\]

Because the partial Bell polynomials satisfy ${B_{0,0}}=1$ and ${B_{0,k}}=0$, $k=1,2,\dots $, the summation over *k* reduces to $0\le k\le n$, and (13) follows.  □

In particular, it is easy to calculate several terms of the transition probability directly from (13):

\[\begin{array}{l}\displaystyle P(L(t)=1)={\left(\frac{\alpha }{A}\right)^{t}}\frac{\alpha t}{2},\hspace{1em}P(L(t)=2)={\left(\frac{\alpha }{A}\right)^{t}}\frac{{\alpha ^{2}}}{2!}\left(\frac{2t}{3}+\frac{t(t-1)}{4}\right),\\ {} \displaystyle P(L(t)=3)={\left(\frac{\alpha }{A}\right)^{t}}\frac{{\alpha ^{3}}}{3!}\left\{\frac{3!t}{4}+\frac{t(t-1)\cdot 3\cdot 2!}{2\cdot 3}+\frac{t(t-1)(t-2)}{8}\right\},\\ {} \displaystyle P(L(t)=4)={\left(\frac{\alpha }{A}\right)^{t}}\frac{{\alpha ^{4}}}{4!}\left\{\frac{4!t}{5}+\frac{13}{3}{[t]_{2\downarrow }}+{[t]_{3\downarrow }}+\frac{1}{16}{[t]_{4\downarrow }}\right\}.\end{array}\]
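The equality of the two forms (13) and (14) can also be tested numerically for non-integer *t*; in the sketch below (helper names are ours) ${y_{n}}$ is computed from the Lévy measure coefficients, as in the theorem:

```python
from fractions import Fraction
from math import comb, factorial

def bell(n, k, x):
    # Partial Bell polynomial B_{n,k}(x[1], x[2], ...) via the standard recurrence.
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, j - 1) * x[j] * bell(n - j, k - 1, x)
               for j in range(1, n - k + 2))

def falling(t, k):
    # Falling factorial [t]_{k downarrow} = t(t-1)...(t-k+1).
    p = Fraction(1)
    for j in range(k):
        p *= t - j
    return p

c = {k: Fraction(factorial(k), k + 1) for k in range(1, 9)}
# y_n = n! Pi_L(n) / alpha^n, as in (14)
y = {n: sum((-1) ** (k - 1) * factorial(k - 1) * bell(n, k, c)
            for k in range(1, n + 1)) for n in range(1, 8)}

for t in [Fraction(1, 2), Fraction(5, 2), Fraction(7, 3)]:
    for n in range(0, 7):
        via_13 = sum(falling(t, k) * bell(n, k, c) for k in range(0, n + 1))
        via_14 = sum(t ** k * bell(n, k, y) for k in range(0, n + 1))
        assert via_13 == via_14
print("(13) and (14) agree")
```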

##### Proof 2.

Let the positive random variable *ξ* be defined by the normalised Lévy measure (11), having p.m.f.

\[ P(\xi =n)={\Pi _{L}}(n)/{\theta _{L}},\hspace{1em}n=1,2,\dots ,\hspace{1em}{\theta _{L}}=\log \Big(\frac{A}{\alpha }\Big),\]

and p.g.f. $E[{s^{\xi }}]={Q_{L}}(s)/{\theta _{L}}$. Let ${\xi _{1}},{\xi _{2}},\dots ,{\xi _{k}}$, $k=1,2,\dots $, be independent copies of the random variable *ξ*. Following the definition of the compound Poisson process, the transition probability is represented as follows: \[ P(L(t)=n)={\sum \limits_{k=0}^{\infty }}{e^{-\theta t}}\frac{{(\theta t)^{k}}}{k!}P({\xi _{1}}+{\xi _{2}}+\cdots +{\xi _{k}}=n),\hspace{1em}\theta =\log \Big(\frac{A}{\alpha }\Big).\]

Taking into account (12) and (6) we can represent the function as an exponential generating function $\frac{1}{\theta }{Q_{L}}(s)={\textstyle\sum _{n=1}^{\infty }}\frac{{x_{n}}{s^{n}}}{n!}$, where \[ {x_{n}}=\frac{1}{\theta }{\alpha ^{n}}{\sum \limits_{k=1}^{n}}{(-1)^{k-1}}(k-1)!{B_{n,k}}({c_{\bullet }}).\]

It means that the normalised probability of the convolution is

\[ P({\xi _{1}}+{\xi _{2}}+\cdots +{\xi _{k}}=n)={B_{n,k}}({x_{\bullet }})\frac{k!}{n!}.\]

Then \[ P(L(t)=n)={\sum \limits_{k=0}^{\infty }}{e^{-\theta t}}\frac{{(\theta t)^{k}}}{k!}{B_{n,k}}({x_{\bullet }})\frac{k!}{n!},\hspace{1em}k\le n.\]

The infinite sum is reduced to a finite one because ${B_{n,k}}({x_{\bullet }})=0$ when $k>n$. We know that $\theta =-\log (\frac{\alpha }{A})$ and ${e^{-\theta t}}={\left(\frac{\alpha }{A}\right)^{t}}$. Let us denote

\[ {y_{n}}=\frac{\theta \,{x_{n}}}{{\alpha ^{n}}}={\sum \limits_{k=1}^{n}}{(-1)^{k-1}}(k-1)!{B_{n,k}}({c_{\bullet }}).\]

Then, following the formula (6), we obtain, for $n=0,1,2,\dots $, \[ P(L(t)=n)={\left(\frac{\alpha }{A}\right)^{t}}{\sum \limits_{k=0}^{n}}{t^{k}}\frac{{\alpha ^{n}}}{n!}{B_{n,k}}({y_{\bullet }}).\]

The probability $P(L(t)=0)={(\frac{\alpha }{A})^{t}}$ corresponds to ${B_{0,0}}({y_{\bullet }})=1$.  □

We remark that in the matrix representation of the partial Bell polynomials for a composite function, the numbers ${B_{n,k}}({x_{\bullet }})$, $n\ge k\ge 1$, are defined as a product of matrices, [19], page 19. Let us denote by $H(s)$ and $G(s)$ respectively the exponential generating functions of the sequences $({h_{\bullet }})$ and $({g_{\bullet }})$. Likewise, by $({x_{\bullet }})$ we denote the sequence whose exponential generating function is the composition $H(G(s))$, such that

\[ {\sum \limits_{n=1}^{\infty }}{x_{n}}\frac{{s^{n}}}{n!}=H(G(s)).\]

Then the matrix associated with the sequence $({x_{\bullet }})$ is the product of the triangular matrices associated with $({g_{\bullet }})$ and $({h_{\bullet }})$ respectively:

\[ {B_{n,k}}({x_{\bullet }})={\sum \limits_{j=k}^{n}}{B_{n,j}}({g_{\bullet }}){B_{j,k}}({h_{\bullet }}),\hspace{1em}k\le j\le n.\]
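This matrix (composition) rule is easy to test on a sample sequence; the sketch below (our own helpers, with an arbitrary test sequence $g$) also confirms that $H(s)=\log (1+s)$ yields the signed Stirling numbers ${B_{n,k}}({h_{\bullet }})=s(n,k)$:

```python
from fractions import Fraction
from math import comb, factorial

def bell(n, k, x):
    # Partial Bell polynomial B_{n,k}(x[1], x[2], ...) via the standard recurrence.
    if n == 0 and k == 0:
        return Fraction(1)
    if n == 0 or k == 0:
        return Fraction(0)
    return sum(comb(n - 1, j - 1) * x[j] * bell(n - j, k - 1, x)
               for j in range(1, n - k + 2))

def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

N = 7
h = {k: Fraction((-1) ** (k - 1) * factorial(k - 1)) for k in range(1, N + 1)}
g = {k: Fraction(2) ** k for k in range(1, N + 1)}  # an arbitrary test sequence
# x_n = n! [s^n] H(G(s)) by the Faa di Bruno formula:
x = {n: sum(h[j] * bell(n, j, g) for j in range(1, n + 1)) for n in range(1, N + 1)}

for n in range(1, N + 1):
    for k in range(1, n + 1):
        # H(s) = log(1+s) produces the signed Stirling numbers:
        assert bell(n, k, h) == (-1) ** (n - k) * stirling1(n, k)
        # matrix (composition) rule:
        assert bell(n, k, x) == sum(bell(n, j, g) * bell(j, k, h)
                                    for j in range(k, n + 1))
print("composition rule verified")
```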

The sequence $({h_{\bullet }})$ defined by the function $H(s)=\log (1+s)$ is exactly the sequence ${h_{k}}={(-1)^{k-1}}(k-1)!$, and ${B_{n,k}}({h_{\bullet }})=s(n,k)$, i.e. the signed Stirling numbers of the first kind. Then, after applying the formulas (6) and (7) and changing the order of summation, where $k\le j$, we confirm the equivalence of (13) and (14) as follows:

\[ {\sum \limits_{k=0}^{n}}{t^{k}}{B_{n,k}}({y_{\bullet }})={\sum \limits_{k=0}^{n}}{t^{k}}{\sum \limits_{j=k}^{n}}{B_{n,j}}({c_{\bullet }})s(j,k)={\sum \limits_{j=0}^{n}}{B_{n,j}}({c_{\bullet }}){\sum \limits_{k=0}^{j}}s(j,k){t^{k}}={\sum \limits_{j=0}^{n}}{[t]_{j\downarrow }}{B_{n,j}}({c_{\bullet }}).\]

The Lévy measure is the infinitesimal generator of the convolution semi-group given by the transition probability $P(L(t)=n)$, $n=0,1,2,\dots $, see [2], page 172. It is a limit in vague convergence, see [3], page 39, as follows:

\[ {\Pi _{L}}(n)=\underset{t\downarrow 0}{\lim }\frac{P(L(t)=n)}{t},\hspace{1em}n=1,2,\dots .\]

Then

\[ {\Pi _{L}}(n)=\underset{t\downarrow 0}{\lim }{\Big(\frac{\alpha }{A}\Big)^{t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=1}^{n}}\frac{{[t]_{k\downarrow }}}{t}{B_{n,k}}({c_{\bullet }}).\]

Finally, we know that ${[t]_{k\downarrow }}={[-t]_{k\uparrow }}{(-1)^{k}}$, so that ${\lim \nolimits_{t\downarrow 0}}{[t]_{k\downarrow }}/t={(-1)^{k-1}}(k-1)!$. In this way, the limit recovers exactly the Lévy measure (11).

## 4 Negative-Binomial process subordinated by the Gamma process

The concept of subordination was introduced by S. Bochner in 1955 for the Markov processes, Lévy processes, and corresponding semigroups, as randomization of the time parameter: $Y(t)=X(T(t))$. There are two sources of randomness – the underlying process $X(t)$ and a random time process $T(t)$, under the assumption of their independence. The time-change process $T(t)$ is supposed to be a subordinator – the Lévy process with nonnegative increments, [3], Chapter 3. The independence of the ground process and the random time process ensures the preservation of Markov property and Lévy property for the subordinated process. The transformation of the main probabilistic characteristics, such as transition probability, Lévy measure and Laplace exponent, is stated and proved in [20], Chapter 6, Theorem 30.1. See also [7], Chapter 7, Theorem 6.2 and Theorem 6.18. They are our principal references.

In this section we study the effect of a random time-change on the Negative-Binomial process $\{X(t),t\ge 0\}$. The Lévy measure of a Negative-Binomial process is defined by a logarithmic series distribution supported by the positive integers $N=\{1,2,\dots \}$ with the same parameter $0<\alpha <1$ as for the Logarithmic Lévy process $L(t)$. The Gamma subordinator $\{{T_{\beta }}(t),t\ge 0\}$ with selective parameter $\beta >0$ can be considered as a random observation time, with mathematical expectation $E[{T_{\beta }}(t)]=\beta t$. The obtained results are formulated and proved in the following theorem.

##### Theorem 2.

*Let* $\{X(t),t\ge 0\}$ *be a Negative-Binomial process with the Bernstein function*

\[ {\psi _{X}}(\lambda )=\log \left(\frac{1-\alpha {e^{-\lambda }}}{1-\alpha }\right),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{X}}(\infty )=-\log (1-\alpha )=A.\]

*Let* $\{{T_{\beta }}(t),t\ge 0\}$ *be a Gamma subordinator with the Bernstein function*

\[ {\psi _{T}}(\lambda )=\log (1+\beta \lambda ),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{T}}(\infty )=\infty .\]

*Suppose the processes* $X(t)$ *and* ${T_{\beta }}(t)$ *are independent. Then for the subordinated process* $Y(t)=X({T_{\beta }}(t))$ *the following results are valid.*

*The Bernstein function of* $Y(t)$ *is given by*

\[ {\psi _{Y}}(\lambda )=\log \left(1+\beta \log \left(\frac{1-\alpha {e^{-\lambda }}}{1-\alpha }\right)\right),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{Y}}(\infty )=\log (1+A\beta ).\]

*The Lévy measure of the subordinated process is given by*

\[ {\Pi _{Y}}(n)=\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=1}^{n}}|s(n,k)|(k-1)!{\left(\frac{\beta }{1+A\beta }\right)^{k}},\hspace{1em}n=1,2,\dots .\]

*The transition probability* $P(Y(t)=n)$*,* $n=0,1,2,\dots $*, is given by*

\[ P(Y(t)=n)=\frac{{\alpha ^{n}}}{n!}\frac{1}{{(1+A\beta )^{t}}}{\sum \limits_{k=0}^{n}}|s(n,k)|{[t]_{k\uparrow }}{\left(\frac{\beta }{1+A\beta }\right)^{k}},\]

*or equivalently:*

\[ P(Y(t)=n)=\frac{{\alpha ^{n}}}{n!}\frac{1}{{(1+A\beta )^{t}}}{\sum \limits_{k=0}^{n}}{t^{k}}{B_{n,k}}({w_{\bullet }}),\]

*where the sequence* $({w_{\bullet }})$ *is defined by*

\[ {w_{n}}={\sum \limits_{k=1}^{n}}|s(n,k)|\,|s(k,1)|{\left(\frac{\beta }{1+A\beta }\right)^{k}},\hspace{1em}|s(k,1)|=(k-1)!.\]

##### Proof.

The main assumption in the definition of subordination by Bochner is the independence of the ground process and the random time-change process. The methods of the Laplace transform and conditional probability for independent processes give the following convenient representations of the main characteristics, see [20], page 197. The Bernstein function of the subordinated process is the composition of the corresponding Bernstein functions:

\[ {\psi _{Y}}(\lambda )={\psi _{T}}({\psi _{X}}(\lambda ))=\log \left(1+\beta \,{\psi _{X}}(\lambda )\right),\hspace{1em}\lambda \ge 0.\]

The transition probability of the subordinated process is expressed by the conditional probability and is given as the integral of the transition probability of the ground process with respect to the transition probability of the Gamma subordinator:

\[\begin{aligned}{}P(Y(t)=n)& ={\int _{0+}^{\infty }}P(X(u)=n){u^{t-1}}{e^{-u/\beta }}\frac{du}{{\beta ^{t}}\Gamma (t)}\\ {} & ={\int _{0+}^{\infty }}{(1-\alpha )^{u}}{[u]_{n\uparrow }}\frac{{\alpha ^{n}}}{n!}{u^{t-1}}{e^{-u/\beta }}\frac{du}{{\beta ^{t}}\Gamma (t)}.\end{aligned}\]

Replacing the increasing factorials (7), ${[u]_{n\uparrow }}={\textstyle\sum _{k=0}^{n}}|s(n,k)|{u^{k}}$, and ${(1-\alpha )^{u}}={e^{-Au}}$, we obtain \[\begin{aligned}{}P(Y(t)=n)& =\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}|s(n,k)|{\int _{0+}^{\infty }}{e^{-Au}}{e^{-u/\beta }}{u^{k+t-1}}\frac{du}{{\beta ^{t}}\Gamma (t)}\\ {} & =\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}|s(n,k)|\frac{\Gamma (t+k){\beta ^{k}}}{\Gamma (t){(1+A\beta )^{t+k}}}\\ {} & =\frac{{\alpha ^{n}}}{n!}\frac{1}{{(1+A\beta )^{t}}}{\sum \limits_{k=0}^{n}}|s(n,k)|{[t]_{k\uparrow }}{\left(\frac{\beta }{1+A\beta }\right)^{k}}.\end{aligned}\]
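As a sanity check, the closed form can be compared with a direct numerical evaluation of the integral (the parameter values and function names below are illustrative):

```python
import math

def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind.
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def rising(u, k):
    # Rising factorial [u]_{k uparrow} = u(u+1)...(u+k-1).
    p = 1.0
    for j in range(k):
        p *= u + j
    return p

alpha, beta, t, n = 0.4, 1.5, 2.0, 3   # illustrative parameter values
A = -math.log(1 - alpha)

def p_closed(n):
    # Closed form for P(Y(t)=n) from Theorem 2.
    r = beta / (1 + A * beta)
    s = sum(stirling1(n, k) * rising(t, k) * r ** k for k in range(0, n + 1))
    return alpha ** n / math.factorial(n) * (1 + A * beta) ** (-t) * s

def integrand(u):
    # P(X(u)=n) times the Gamma(t, beta) density.
    return ((1 - alpha) ** u * rising(u, n) * alpha ** n / math.factorial(n)
            * u ** (t - 1) * math.exp(-u / beta) / (beta ** t * math.gamma(t)))

# Composite Simpson's rule on [0, 80] (the tail beyond is negligible here).
N, lo, hi = 4000, 0.0, 80.0
h = (hi - lo) / N
acc = integrand(lo) + integrand(hi)
for i in range(1, N):
    acc += (4 if i % 2 else 2) * integrand(lo + i * h)
approx = acc * h / 3

assert abs(approx - p_closed(n)) < 1e-6
print("integral and closed form agree")
```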

Let us remark that the Lévy measure of the Gamma subordinator in our parametrisation is given by

\[ {\Pi _{T}}(du)={e^{-u/\beta }}\frac{du}{u},\]

see [3], page 73. Then, from the results proved in [20, 7], the Lévy measure of the subordinated process can be calculated as the integral of the transition probability of the ground process with respect to the Lévy measure of the Gamma subordinator: \[\begin{aligned}{}{\Pi _{Y}}(n)& ={\int _{0+}^{\infty }}P(X(u)=n){e^{-u/\beta }}\frac{du}{u}={\int _{0+}^{\infty }}{(1-\alpha )^{u}}{[u]_{n\uparrow }}\frac{{\alpha ^{n}}}{n!}{e^{-u/\beta }}\frac{du}{u}\\ {} & =\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=1}^{n}}|s(n,k)|{\int _{0+}^{\infty }}{e^{-Au}}{e^{-u/\beta }}{u^{k}}\frac{du}{u}\\ {} & =\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=1}^{n}}|s(n,k)|\Gamma (k){\left(\frac{\beta }{1+A\beta }\right)^{k}}.\end{aligned}\]

From the Bernstein function ${\psi _{Y}}(\lambda )$ we derive the generating function of the Lévy measure ${\Pi _{Y}}(n)$ in the following form: \[ {Q_{Y}}(s)={\theta _{Y}}{\widetilde{Q}_{Y}}(s)=-\log \left(1-\frac{\beta }{1+A\beta }\{-\log (1-\alpha s)\}\right).\]

It can be presented as an exponential generating function as follows: \[ {Q_{Y}}(s)={\sum \limits_{n=1}^{\infty }}{u_{n}}\frac{{s^{n}}}{n!},\hspace{1em}{u_{n}}={\sum \limits_{k=1}^{n}}{B_{n,k}}({v_{\bullet }}){B_{k,1}}({s_{\bullet }}),\]

where the sequences $({v_{\bullet }})$ and $({s_{\bullet }})$ are defined respectively by \[ {v_{k}}=\frac{k!{\alpha ^{k}}}{k},\hspace{1em}{s_{k}}=\frac{k!}{k}{\left(\frac{\beta }{1+A\beta }\right)^{k}}.\]

Moreover, \[ {B_{n,k}}({v_{\bullet }})={\alpha ^{n}}|s(n,k)|,\hspace{1em}{B_{k,1}}({s_{\bullet }})={\left(\frac{\beta }{1+A\beta }\right)^{k}}|s(k,1)|\]

and \[ {u_{n}}={\alpha ^{n}}{\sum \limits_{k=1}^{n}}|s(n,k)||s(k,1)|{\left(\frac{\beta }{1+A\beta }\right)^{k}}.\]

Let ${\xi _{1}},{\xi _{2}},\dots ,{\xi _{k}}$ be independent copies of the positive random variable *ξ* with p.m.f. \[ P(\xi =n)={\Pi _{Y}}(n)/\theta ,\hspace{1em}n=1,2,\dots ,\hspace{1em}\theta ={\theta _{Y}}=\log (1+A\beta ).\]

In a complete analogy with the Proof 2 of Theorem 1, we find the normalised probability convolution distribution \[ P({\xi _{1}}+{\xi _{2}}+\cdots +{\xi _{k}}=n)=\frac{1}{{\theta ^{k}}}{B_{n,k}}({u_{\bullet }})\frac{k!}{n!}.\]

Then for $n=0,1,2,\dots $, we have: \[ P(Y(t)=n)={\sum \limits_{k=0}^{\infty }}{e^{-\theta t}}\frac{{(\theta t)^{k}}}{k!}\frac{1}{{\theta ^{k}}}{B_{n,k}}({u_{\bullet }})\frac{k!}{n!},\hspace{1em}{B_{0,0}}=1.\]

Obviously, the exponential decay is ${e^{-\theta t}}={(\frac{1}{1+A\beta })^{t}}$. Additionally, from the formula (6) we derive that \[ {B_{n,k}}({u_{\bullet }})={\alpha ^{n}}{B_{n,k}}({w_{\bullet }}),\hspace{1em}{w_{n}}={\sum \limits_{k=1}^{n}}|s(n,k)||s(k,1)|{\left(\frac{\beta }{1+A\beta }\right)^{k}}.\]

So, taking into account that ${B_{n,k}}({u_{\bullet }})=0$ for all $k>n$, we see that the infinite sum is reduced to a finite one:

\[ P(Y(t)=n)=\frac{{\alpha ^{n}}}{n!}\frac{1}{{(1+A\beta )^{t}}}{\sum \limits_{k=0}^{n}}{t^{k}}{B_{n,k}}({w_{\bullet }}),\hspace{1em}{B_{0,0}}=1.\]

□

## 5 Logarithmic Lévy process subordinated by the Poisson process

The next studied process $\{Z(t),t\ge 0\}$ is constructed as a random time-change of the Logarithmic Lévy process $L(t)$ by the Poisson process, under the assumption of their independence. The selective parameter $b>0$ of the Poisson process $\{{T_{b}}(t),t\ge 0\}$ is introduced so that the mathematical expectation is $E[{T_{b}}(t)]=bt$. The results are formulated and proved in the following theorem.

##### Theorem 3.

*Let* $\{L(t),t\ge 0\}$ *be a Logarithmic Lévy process with the Bernstein function*

\[ {\psi _{L}}(\lambda )=\log \left(\frac{A{e^{-\lambda }}}{-\log (1-\alpha {e^{-\lambda }})}\right),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{L}}(\infty )=\log \left(\frac{A}{\alpha }\right)>0.\]

*Let* $\{{T_{b}}(t),t\ge 0\}$ *be a Poisson subordinator with the Bernstein function*

\[ {\psi _{T}}(\lambda )=b(1-{e^{-\lambda }}),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{T}}(\infty )=b>0.\]

*Suppose the processes* $L(t)$ *and* ${T_{b}}(t)$ *are independent. Then for the subordinated Lévy process* $Z(t)=L({T_{b}}(t))$ *the following results are valid.*

*The Bernstein function of the subordinated process* $Z(t)$ *is given by*

\[ {\psi _{Z}}(\lambda )=b\left(1+\frac{\log (1-\alpha {e^{-\lambda }})}{A{e^{-\lambda }}}\right),\hspace{1em}\lambda \ge 0,\hspace{1em}{\psi _{Z}}(\infty )=b\left(1-\frac{\alpha }{A}\right)>0.\]

*The Lévy measure of* $Z(t)$ *is given by*

\[ {\Pi _{Z}}(n)=\frac{b}{A}\frac{{\alpha ^{n+1}}}{(n+1)},\hspace{1em}n=1,2,\dots .\]

*The transition probability of the subordinated process* $Z(t)$ *is, for* $n=0,1,\dots $*,*

##### (15)
\[ P(Z(t)=n)={e^{-\theta t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}{\left(\frac{\alpha bt}{A}\right)^{k}}{B_{n,k}}({c_{\bullet }}),\hspace{1em}\theta ={\theta _{Z}}=b\left(1-\frac{\alpha }{A}\right).\]

##### Proof.

Once again, the composition of the two Bernstein functions is obvious:

\[ {\psi _{Z}}(\lambda )={\psi _{T}}({\psi _{L}}(\lambda ))=b\left(1-{e^{-{\psi _{L}}(\lambda )}}\right)=b\left(1+\frac{\log (1-\alpha {e^{-\lambda }})}{A{e^{-\lambda }}}\right).\]

The Lévy measure of the subordinated process is given by the following infinite sum, as it is shown in [20, 7],

\[ {\Pi _{Z}}(n)={\sum \limits_{k=1}^{\infty }}P(L(k)=n){\Pi _{T}}(k)=bP(L(1)=n)=\frac{b}{A}\frac{{\alpha ^{n+1}}}{(n+1)},\hspace{1em}n=1,2,\dots ,\]

because the normalised Lévy measure ${\widetilde{\Pi }_{T}}(k)$, $k=1,2,\dots $, of the Poisson process is exactly the delta function at $k=1$. The total mass of the Lévy measure ${\Pi _{Z}}(n)$, $n=1,2,\dots $, is calculated directly from (1) as ${\theta _{Z}}=\frac{b}{A}(A-\alpha )$. The exponential generating function of the Lévy measure ${\Pi _{Z}}$ is given by (2) and (4) as follows: \[ {Q_{Z}}(s)=\frac{b\alpha }{A}G(s)=\frac{b\alpha }{A}{\sum \limits_{k=1}^{\infty }}{g_{k}}\frac{{s^{k}}}{k!},\hspace{1em}{g_{k}}=\frac{{\alpha ^{k}}k!}{k+1}.\]

Let ${\xi _{1}},{\xi _{2}},\dots ,{\xi _{k}}$ be independent copies of the positive random variable *ξ* with p.m.f. \[ P(\xi =n)={\Pi _{Z}}(n)/\theta ,\hspace{1em}n=1,2,\dots ,\hspace{1em}\theta ={\theta _{Z}}=b-\frac{b\alpha }{A}.\]

The normalised probability convolution distribution is given by \[ P({\xi _{1}}+{\xi _{2}}+\cdots +{\xi _{k}}=n)={\left(\frac{\alpha }{A-\alpha }\right)^{k}}{B_{n,k}}({c_{\bullet }})\frac{{\alpha ^{n}}k!}{n!},\hspace{1em}\frac{\alpha }{A-\alpha }=\frac{b\alpha }{A\theta }.\]

Then the elementary transformations lead to (15) as follows: \[\begin{aligned}{}P(Z(t)=n)& ={\sum \limits_{k=0}^{\infty }}{e^{-\theta t}}\frac{{(\theta t)^{k}}}{k!}{\left(\frac{\alpha }{A-\alpha }\right)^{k}}{B_{n,k}}({c_{\bullet }})\frac{{\alpha ^{n}}k!}{n!}\\ {} & ={e^{-\theta t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}{\left(\frac{\alpha \theta t}{A-\alpha }\right)^{k}}{B_{n,k}}({c_{\bullet }})={e^{-\theta t}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{n}}{\left(\frac{\alpha bt}{A}\right)^{k}}{B_{n,k}}({c_{\bullet }}).\end{aligned}\]
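The Bell-polynomial formula (15) can be checked numerically via the standard recurrence ${B_{n,k}}={\textstyle\sum _{j}}\binom{n-1}{j-1}{c_{j}}{B_{n-j,k-1}}$ with ${c_{j}}=j!/(j+1)$. The sketch below (parameter values are illustrative assumptions) verifies that the resulting transition probabilities sum to one:

```python
import math
from functools import lru_cache

alpha, b, t = 0.5, 2.0, 1.5      # illustrative parameters (assumptions)
A = -math.log(1 - alpha)         # A = -log(1 - alpha)
theta = b * (1 - alpha / A)      # total mass theta_Z

def c(j):
    # c_j = j!/(j + 1), the arguments of the partial Bell polynomials
    return math.factorial(j) / (j + 1)

@lru_cache(maxsize=None)
def bell(n, k):
    # partial Bell polynomial B_{n,k}(c_1, c_2, ...) via the standard recurrence
    if n == 0 and k == 0:
        return 1.0
    if n == 0 or k == 0:
        return 0.0
    return sum(math.comb(n - 1, j - 1) * c(j) * bell(n - j, k - 1)
               for j in range(1, n - k + 2))

def P(n):
    # P(Z(t) = n) = e^{-theta t} alpha^n / n! * sum_k (alpha b t / A)^k B_{n,k}
    x = alpha * b * t / A
    return (math.exp(-theta * t) * alpha ** n / math.factorial(n)
            * sum(x ** k * bell(n, k) for k in range(n + 1)))

# the transition probabilities must sum to one
total = sum(P(n) for n in range(81))
assert abs(total - 1.0) < 1e-9
```

The recurrence also reproduces the particular values used below, e.g. ${B_{2,1}}=2/3$ and ${B_{2,2}}=1/4$.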

In particular, since ${B_{1,1}}={c_{1}}=\frac{1}{2}$, we have $P(Z(t)=1)={e^{-\theta t}}\frac{{\alpha ^{2}}bt}{2A}$. Knowing that ${B_{2,1}}={c_{2}}=\frac{2!}{3}$ and ${B_{2,2}}={({c_{1}})^{2}}=\frac{1}{4}$, we find \[ P(Z(t)=2)={e^{-\theta t}}\frac{{\alpha ^{2}}}{2!}\left\{\frac{2\alpha bt}{3A}+{\left(\frac{\alpha bt}{2A}\right)^{2}}\right\}.\]

In the same way, as ${B_{3,1}}=\frac{3!}{4}$, ${B_{3,2}}=1$ and ${B_{3,3}}=\frac{1}{8}$, we obtain

##### (16)

\[ P(Z(t)=3)={e^{-\theta t}}\frac{{\alpha ^{3}}}{3!}\left\{\frac{3!}{4}\frac{\alpha bt}{A}+{\left(\frac{\alpha bt}{A}\right)^{2}}+\frac{1}{8}{\left(\frac{\alpha bt}{A}\right)^{3}}\right\},\]

and so on.  □

##### Remark 2.

In this situation, the range of the random time process ${T_{b}}(t)$ is the discrete integer-valued set ${Z_{+}}=\{0,1,2,\dots \}$. The subordination by Bochner gives the transition probability of the subordinated process $Z(t)=L({T_{b}}(t))$ as the following conditional probability:

##### (17)

\[ P(Z(t)=n)={\sum \limits_{k=0}^{\infty }}P(L(k)=n)P({T_{b}}(t)=k),\hspace{1em}P({T_{b}}(t)=k)={e^{-bt}}\frac{{(bt)^{k}}}{k!}.\]

The transition probability of the ground process $L(t)$ for integer-valued time $t=k$ is given by the

*k*-fold convolution of the representative random variable $L(1)$, as in the two equivalent expressions (9) and (8). After replacing $P(L(k)=n)$ in (17) by (9), it is enough to exchange the order of summation to prove (15), as follows:

\[\begin{aligned}{}P(Z(t)=n)& ={\sum \limits_{k=0}^{\infty }}{\left(\frac{\alpha }{A}\right)^{k}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{j=0}^{k\wedge n}}\frac{k!}{(k-j)!}{B_{n,j}}({c_{\bullet }})\frac{{(bt)^{k}}{e^{-bt}}}{k!}\\ {} & =\frac{{\alpha ^{n}}{e^{-bt}}}{n!}{\sum \limits_{j=0}^{n}}{\left(\frac{\alpha bt}{A}\right)^{j}}{B_{n,j}}({c_{\bullet }}){\sum \limits_{k=j}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k-j}}\frac{1}{(k-j)!}.\end{aligned}\]

But, if we take $P(L(k)=n)$ from (8) and replace it in (17), we obtain

##### (18)

\[ P(Z(t)=n)={e^{-bt}}\frac{{\alpha ^{n}}}{n!}{\sum \limits_{k=0}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k}}|s(k+n,k)|\frac{n!}{(n+k)!}.\]

The relation of the Stirling numbers to the binomial coefficients explains the equivalence of (18) to (15) and the presence of ${e^{-bt}}\ne {e^{-\theta t}}$ in (18). For $n=1$ we have $|s(k+1,k)|=\binom{k+1}{2}$. Then

\[\begin{aligned}{}P(Z(t)=1)& =\alpha {e^{-bt}}{\sum \limits_{k=1}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k}}|s(k+1,k)|\frac{1!}{(1+k)!}\\ {} & =\alpha {e^{-bt}}\frac{\alpha bt}{2A}{\sum \limits_{k=1}^{\infty }}\frac{1}{(k-1)!}{\left(\frac{\alpha bt}{A}\right)^{k-1}}={e^{-bt}}{e^{\frac{\alpha bt}{A}}}\frac{{\alpha ^{2}}bt}{2A}.\end{aligned}\]

For $n=2$ we have \[ |s(k+2,k)|=\frac{[3(k+2)-1]}{4}\frac{(k+2)!}{(k-1)!3!},\hspace{1em}|s(2,0)|=0,\hspace{1em}|s(3,1)|=2.\]

We calculate \[ P(Z(t)=2)=\frac{{\alpha ^{2}}}{2!}{e^{-bt}}{\sum \limits_{k=1}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k}}\left(\frac{3k+5}{4}\right)\left(\frac{2!}{(k-1)!3!}\right).\]

Obviously, \[ \left(\frac{3k+5}{4}\right)\left(\frac{2!}{(k-1)!3!}\right)=\frac{1}{4(k-2)!}+\frac{2}{3(k-1)!},\hspace{1em}k=2,3,\dots .\]

It means that the probability $P(Z(t)=2)$ is equal to: \[ \frac{{\alpha ^{2}}{e^{-bt}}}{2!}\left\{\frac{1}{4}{\left(\frac{\alpha bt}{A}\right)^{2}}{\sum \limits_{k=2}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k-2}}\frac{1}{(k-2)!}+\frac{2}{3}\frac{\alpha bt}{A}{\sum \limits_{k=1}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k-1}}\frac{1}{(k-1)!}\right\}.\]

Finally, \[ P(Z(t)=2)=\frac{{\alpha ^{2}}}{2!}{e^{-bt}}{e^{\frac{\alpha bt}{A}}}\left\{\frac{1}{4}{\left(\frac{\alpha bt}{A}\right)^{2}}+\frac{2}{3}\frac{\alpha bt}{A}\right\}.\]
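Since ${e^{-\theta t}}={e^{-bt}}{e^{\alpha bt/A}}$, the Stirling-number series must reproduce the Bell-polynomial form (15) of $P(Z(t)=2)$. A numerical sketch (with illustrative parameter values, an assumption for the check) confirming the agreement:

```python
import math

alpha, b, t = 0.5, 2.0, 1.5          # illustrative parameters (assumptions)
A = -math.log(1 - alpha)
theta = b * (1 - alpha / A)          # total mass theta_Z
x = alpha * b * t / A

# Bell-polynomial form (15) of P(Z(t)=2)
p_bell = math.exp(-theta * t) * alpha ** 2 / 2 * (2 * x / 3 + (x / 2) ** 2)

# Stirling-number series with the coefficient (3k+5)/4 * 2!/((k-1)! 3!)
p_stirling = (alpha ** 2 / 2) * math.exp(-b * t) * sum(
    x ** k * (3 * k + 5) / 4 * 2 / (6 * math.factorial(k - 1))
    for k in range(1, 60))

assert abs(p_bell - p_stirling) < 1e-12
```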

For $n=3$ we use the relation of the Stirling numbers to the binomial coefficients: \[ |s(k+3,k)|=\frac{(k+3)!}{(k+1)!2!}\frac{(k+3)!}{(k-1)!4!},\hspace{1em}|s(3,0)|=0,\hspace{1em}|s(4,1)|=6.\]
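These Stirling-number identities can be cross-checked against the standard recurrence $|s(n,k)|=|s(n-1,k-1)|+(n-1)|s(n-1,k)|$; a small verification sketch:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n, k):
    # signless Stirling numbers of the first kind |s(n,k)|
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return c(n - 1, k - 1) + (n - 1) * c(n - 1, k)

for k in range(1, 20):
    # n = 1: |s(k+1,k)| = C(k+1,2)
    assert c(k + 1, k) == math.comb(k + 1, 2)
    # n = 2: |s(k+2,k)| = (3(k+2)-1)/4 * C(k+2,3)
    assert 4 * c(k + 2, k) == (3 * (k + 2) - 1) * math.comb(k + 2, 3)
    # n = 3: |s(k+3,k)| = C(k+3,2) * C(k+3,4)
    assert c(k + 3, k) == math.comb(k + 3, 2) * math.comb(k + 3, 4)
```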

The equivalent representation of $P(Z(t)=3)$ is transformed as follows: \[ P(Z(t)=3)=\frac{{\alpha ^{3}}}{3!}{e^{-bt}}{\sum \limits_{k=1}^{\infty }}{\left(\frac{\alpha bt}{A}\right)^{k}}\frac{(k+3)(k+2)}{8(k-1)!}.\]

Obviously, \[ \frac{(k+3)(k+2)}{8(k-1)!}=\frac{1}{8(k-3)!}+\frac{1}{(k-2)!}+\frac{3!}{4(k-1)!},\hspace{1em}k=3,4,\dots .\]

In the same way we obtain (16), and so on. The Stirling numbers are very convenient in applications because they satisfy a recurrence relation and their values are tabulated.

## 6 Applications

An important problem in many applications is how to recognize the original process from observation data when the registration is randomly perturbed. The problem grows harder when the process is composed of several different processes. We see that the probabilistic characteristics for the pairs of processes $Y(t)$ and $L(t)$, as well as for $Z(t)$ and $X(t)$, are similar. The best way to demonstrate their different properties is a comparison between Bernstein functions and Lévy measures with different parameters. The Bernstein functions ${\psi _{L}}(\lambda )$ and ${\psi _{Y}}(\lambda )$, as defined in Theorems 2 and 3, contain the iterated logarithmic function and have the following derivatives at zero:

\[ {\psi ^{\prime }_{L}}(\lambda )=\frac{\alpha {e^{-\lambda }}}{(1-\alpha {e^{-\lambda }})(-\log (1-\alpha {e^{-\lambda }}))}-1,\hspace{1em}{\psi ^{\prime }_{L}}(0)=\frac{\alpha }{A(1-\alpha )}-1\]

and \[\begin{array}{l}\displaystyle {\psi ^{\prime }_{Y}}(\lambda )=\frac{\alpha \beta {e^{-\lambda }}}{\{1+A\beta +\beta \log (1-\alpha {e^{-\lambda }})\}(1-\alpha {e^{-\lambda }})},\\ {} \displaystyle {\psi ^{\prime }_{Y}}(0)=\frac{\alpha \beta }{(1+A\beta -A\beta )(1-\alpha )}=\frac{\alpha \beta }{1-\alpha }.\end{array}\]

The first cumulants are equal to the corresponding mathematical expectations $E[L(1)]$ or $E[Y(1)]$, and similarly for the subordinated processes. The Bernstein functions for the processes $X(t)$ and $Z(t)$ are denoted by ${\psi _{X}}(\lambda )$ and ${\psi _{Z}}(\lambda )$ as in Theorems 2 and 3. Their derivatives at zero are as follows: \[ {\psi ^{\prime }_{X}}(\lambda )=\frac{\alpha {e^{-\lambda }}}{1-\alpha {e^{-\lambda }}},\hspace{1em}{\psi ^{\prime }_{X}}(0)=\frac{\alpha }{1-\alpha }\]

and \[\begin{array}{l}\displaystyle {\psi ^{\prime }_{Z}}(\lambda )=\frac{b}{A}\frac{\alpha {e^{-\lambda }}+(1-\alpha {e^{-\lambda }})\log (1-\alpha {e^{-\lambda }})}{{e^{-\lambda }}(1-\alpha {e^{-\lambda }})},\\ {} \displaystyle {\psi ^{\prime }_{Z}}(0)=\frac{b}{A}\frac{\alpha +(1-\alpha )\log (1-\alpha )}{1-\alpha }=\frac{b}{A}\left(\frac{\alpha }{1-\alpha }-A\right).\end{array}\]
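The closed-form derivatives at zero can be sanity-checked against finite differences of the Bernstein functions; a sketch (parameter values are illustrative assumptions):

```python
import math

alpha, b = 0.5, 2.0                  # illustrative parameters (assumptions)
A = -math.log(1 - alpha)

def psi_L(lam):
    # Bernstein function of the Logarithmic Levy process L(t)
    return math.log(A * math.exp(-lam) / (-math.log(1 - alpha * math.exp(-lam))))

def psi_Z(lam):
    # Bernstein function of the subordinated process Z(t)
    return b * (1 + math.log(1 - alpha * math.exp(-lam)) / (A * math.exp(-lam)))

def deriv(f, lam=0.0, h=1e-6):
    # central finite difference
    return (f(lam + h) - f(lam - h)) / (2 * h)

closed_L = alpha / (A * (1 - alpha)) - 1
closed_Z = (b / A) * (alpha / (1 - alpha) - A)

assert abs(deriv(psi_L) - closed_L) < 1e-6
assert abs(deriv(psi_Z) - closed_Z) < 1e-6
```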

For the application, we constructed two different case tests, named *Selection A* and *Selection B*.

##### Fig. 1.

Selection A. Comparison between the Lévy measures ${\Pi _{X}}$ and ${\Pi _{Z}}$ for $\alpha =1/2$ (left) and $\alpha =2/3$ (right), where ${\textstyle\sum _{n=1}^{\infty }}{\Pi _{Z}}(n)={\textstyle\sum _{n=1}^{\infty }}{\Pi _{X}}(n)=A$

*Selection A.* We can choose the parameters *β* and *b* so that the corresponding total masses of the Lévy measures are equal, ${\theta _{L}}={\theta _{Y}}$ and ${\theta _{X}}={\theta _{Z}}$, as follows: \[ \beta =\frac{A-\alpha }{A\alpha },\hspace{1em}b=\frac{{A^{2}}}{A-\alpha }.\] By this choice (selection) of the parameters *β* and *b* the mathematical expectations satisfy the following inequalities:

\[ {\psi ^{\prime }_{L}}(0)<{\psi ^{\prime }_{Y}}(0)<{\psi ^{\prime }_{X}}(0)<{\psi ^{\prime }_{Z}}(0).\]

Namely, \[ \frac{\alpha }{A(1-\alpha )}-1<\frac{(A-\alpha )}{A(1-\alpha )}<\frac{\alpha }{1-\alpha }<\frac{A}{A-\alpha }\left(\frac{\alpha }{1-\alpha }-A\right).\]
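The chain of inequalities can be checked numerically for admissible *α* (a sketch; $\alpha =1/2$ and $\alpha =2/3$ are the values used in the figures):

```python
import math

for alpha in (1 / 2, 2 / 3):
    A = -math.log(1 - alpha)
    dL = alpha / (A * (1 - alpha)) - 1                    # psi'_L(0)
    dY = (A - alpha) / (A * (1 - alpha))                  # psi'_Y(0), Selection A
    dX = alpha / (1 - alpha)                              # psi'_X(0)
    dZ = (A / (A - alpha)) * (alpha / (1 - alpha) - A)    # psi'_Z(0), Selection A
    assert dL < dY < dX < dZ
```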

The values of the Lévy measures ${\Pi _{X}}$ and ${\Pi _{Z}}$ at $n=1$ satisfy the inequality ${\Pi _{Z}}(1)<{\Pi _{X}}(1)$ when ${\textstyle\sum _{n=1}^{\infty }}{\Pi _{Z}}(n)={\textstyle\sum _{n=1}^{\infty }}{\Pi _{X}}(n)=A$, as demonstrated in Figure 1 and in Figure 2.

##### Fig. 2.

Selection A. Comparative plot of main Bernstein functions after rescaling with *α* equal to $1/2$ and $2/3$, where ${\psi ^{\prime }_{L}}(0)<{\psi ^{\prime }_{Y}}(0)<{\psi ^{\prime }_{X}}(0)<{\psi ^{\prime }_{Z}}(0)$, knowing that ${\theta _{L}}={\theta _{Y}}=\log (\frac{A}{\alpha })$ and ${\theta _{X}}={\theta _{Z}}=A=-\log (1-\alpha )$

##### Fig. 3.

Selection B. Comparative plot of main Bernstein functions after rescaling with *α* equal to $1/2$ and $2/3$, where ${\psi _{Y}}(\infty )<{\psi _{L}}(\infty )<{\psi _{Z}}(\infty )<{\psi _{X}}(\infty )$, knowing that ${\psi ^{\prime }_{L}}(0)={\psi ^{\prime }_{Y}}(0)$ and ${\psi ^{\prime }_{Z}}(0)={\psi ^{\prime }_{X}}(0)$

*Selection B.* If we choose

\[ \beta =\frac{1}{A}-\frac{1-\alpha }{\alpha }>0,\hspace{1em}b=\frac{A\alpha }{\alpha -A(1-\alpha )}>0,\]

then the mathematical expectations satisfy ${\psi ^{\prime }_{L}}(0)={\psi ^{\prime }_{Y}}(0)$ and ${\psi ^{\prime }_{Z}}(0)={\psi ^{\prime }_{X}}(0)$, and the total masses of the Lévy measures are in the following inequalities: \[ {\theta _{Y}}<{\theta _{L}}<{\theta _{Z}}<{\theta _{X}}.\] Specifically, \[ \log \left(2-\frac{A(1-\alpha )}{\alpha }\right)<\log \left(\frac{A}{\alpha }\right)<\frac{\alpha (A-\alpha )}{A\alpha -A+\alpha }<A.\]
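Selection B can likewise be verified numerically (a sketch; the total-mass expressions are those appearing in the inequality above, and the *α* values are the ones used in the figures):

```python
import math

for alpha in (1 / 2, 2 / 3):
    A = -math.log(1 - alpha)
    beta = 1 / A - (1 - alpha) / alpha            # Selection B choice of beta
    b = A * alpha / (alpha - A * (1 - alpha))     # Selection B choice of b
    assert beta > 0 and b > 0

    # total masses: theta_Y < theta_L < theta_Z < theta_X
    tY = math.log(2 - A * (1 - alpha) / alpha)
    tL = math.log(A / alpha)
    tZ = alpha * (A - alpha) / (A * alpha - A + alpha)
    tX = A
    assert tY < tL < tZ < tX
```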

This is demonstrated in Figure 3.

##### Remark 3.

All these inequalities are related to the series expansions of the quantities involved; here we remark only that, using (10), we have

\[\begin{aligned}{}{A^{2}}& ={(-\log (1-\alpha ))^{2}}=2{\sum \limits_{n=2}^{\infty }}|s(n,2)|\frac{{\alpha ^{n}}}{n!}\\ {} & =2{\sum \limits_{n=2}^{\infty }}\frac{{\alpha ^{n}}{H_{n-1}}}{n}={\alpha ^{2}}+{\alpha ^{3}}+\frac{11}{12}{\alpha ^{4}}+\frac{10}{12}{\alpha ^{5}}+\frac{137}{180}{\alpha ^{6}}+\cdots \end{aligned}\]
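The expansion of ${A^{2}}$ through harmonic numbers ${H_{n-1}}$ can be confirmed numerically and coefficient-by-coefficient (a sketch; $\alpha =1/2$ is an illustrative value):

```python
import math
from fractions import Fraction

alpha = 0.5                          # illustrative value (assumption)
A2 = math.log(1 - alpha) ** 2        # A^2 = (-log(1 - alpha))^2

# numeric series 2 * sum_{n>=2} H_{n-1} alpha^n / n
H = 0.0
series = 0.0
for n in range(2, 400):
    H += 1.0 / (n - 1)               # harmonic number H_{n-1}
    series += 2 * H * alpha ** n / n
assert abs(series - A2) < 1e-12

# exact coefficients 2 H_{n-1} / n of alpha^n: 1, 1, 11/12, 10/12, 137/180, ...
coeff = lambda n: 2 * sum(Fraction(1, j) for j in range(1, n)) / n
assert [coeff(n) for n in range(2, 7)] == [1, 1, Fraction(11, 12),
                                           Fraction(5, 6), Fraction(137, 180)]
```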

## 7 Conclusion

The Negative-Binomial process under consideration can be constructed by subordinating a Poisson process by a Gamma process. In this way, the process $Y(t)$ is a Poisson process subordinated by an iterated Gamma process. In potential theory, a Gamma subordinator and an iterated Gamma subordinator are classified as slow subordinators.

In *Selection A*, the inter-arrival times of the processes $L(t)$ and $Y(t)$ are exponentially distributed with the same parameter ${\theta _{L}}={\theta _{Y}}$, but the mathematical expectation of the Logarithmic Lévy process in the unit time interval, $E[L(1)]$, is less than $E[Y(1)]$. The same holds for the processes $X(t)$ and $Z(t)$.

In *Selection B*, the mathematical expectations of the jump heights for the processes $L(t)$ and $Y(t)$ are equal, ${\psi ^{\prime }_{L}}(0)={\psi ^{\prime }_{Y}}(0)$ and $E[L(1)]=E[Y(1)]$, but the mean number of jumps in the unit time interval of the Logarithmic process is greater than that of the subordinated process $Y(t)$: ${\theta _{L}}>{\theta _{Y}}$. The same holds for the processes $X(t)$ and $Z(t)$: ${\theta _{X}}>{\theta _{Z}}$.

In the general setting $Y(t)=X(T(t))$, when the underlying process $X(t)$ is a compound Poisson process without drift, any randomness of $T(t)$ before it passes the level given by the first jump time of $X(t)$ is not reflected by $Y(t)=X(T(t))$, see [16].