1 Introduction
A branching process model driven by geometric reproduction of particles was introduced in [12]. By design, it is a time-homogeneous Markov branching process $X(t),t>0$, with probability generating function $F(t,s),|s|<1$. The geometric branching mechanism is defined by the integer-valued random variable $\eta $ representing the particles' offspring number with probability mass function (p.m.f.)
The lifetime of particles is an exponentially distributed random variable with a constant parameter $K>0$. The initial condition is $X(0)=1$. The time interval to the first splitting is exponentially distributed with density $K{e^{-Kt}}$. We remark that at time $t>0$, when the number of particles in the system is $X(t)=n$, the time interval to the next splitting is exponentially distributed with density $nK{e^{-nKt}}$ as the minimum of n independent exponentially distributed random variables. The probability generating function (p.g.f.) $h(s):=E[{s^{\eta }}]$, $|s|\le 1$, and the mathematical expectation $m=E[\eta ]$ are
The p.g.f. $F(t,s),|s|<1$, is determined in [12] as the unique solution to the Kolmogorov equations [2, 7, 10] via the Lagrange inversion method, following the classical theory [5, 8, 9].
\[ h(s)=E[{s^{\eta }}]=\frac{1-\varrho }{1-\varrho s},\hspace{1em}|s|\le 1,\hspace{1em}m={h^{\prime }}(1)=\frac{\varrho }{1-\varrho }>0.\]
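To make the mechanism concrete, the following minimal Monte Carlo sketch (Python; the function name `simulate_mbp` and all parameter values are illustrative, not part of [12]) simulates $X(t)$ using the exponential splitting times described above and the offspring p.m.f. $P(\eta =k)=(1-\varrho ){\varrho ^{k}}$, $k=0,1,2,\dots $, which corresponds to the p.g.f. $h(s)$.
```python
import random

def simulate_mbp(K=1.0, rho=0.5, t_max=1.0, rng=random.Random(1)):
    """Simulate X(t_max) for the MBP with geometric offspring p.m.f.
    P(eta = k) = (1 - rho) * rho**k, k = 0, 1, 2, ..., and lifetime rate K."""
    t, n = 0.0, 1                      # start from a single particle, X(0) = 1
    while n > 0:
        t += rng.expovariate(n * K)    # next split: Exp(n*K), the minimum of n Exp(K) lifetimes
        if t > t_max:
            break
        # geometric number of offspring replacing the split particle
        offspring = 0
        while rng.random() < rho:
            offspring += 1
        n += offspring - 1             # the splitting particle dies, its offspring are added
    return n

# crude check of criticality (rho = 1/2): the empirical mean of X(t) should stay close to 1
samples = [simulate_mbp(K=1.0, rho=0.5, t_max=1.0) for _ in range(20000)]
print(sum(samples) / len(samples))
```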
The reproduction is classified as critical when the mean number of progeny is $m=1$, i.e. the parameter $\varrho =1/2$. Then the p.g.f. $h(s)$ reduces to
(1)
\[ h(s)=\frac{1}{2-s},\hspace{1em}{h^{\prime }}(s)=\frac{1}{{(2-s)^{2}}},\hspace{1em}{h^{\prime\prime }}(s)=\frac{2}{{(2-s)^{3}}}.\]
The factorial moments in the subcritical case are computed in [12]. They are used to obtain the probability mass function, central moments and variance-to-mean ratio (VMR) for the limiting random variable. In the critical case, the p.g.f. $F(t,s)$ is defined by the composition of the Lambert-W function and a linear-fractional function. The probability mass function of $X(t),t>0$, is expressed by the values of the Lambert-W function at the point $x={e^{Kt+1}}$. The agreement between the Lagrange inversion method and the series expansion of the Lambert-W function is confirmed by the approximation of the extinction probability. The detailed explanation can be found in [12].
Fig. 1.
The graph of the Lambert-W function $W(x)$ for $-1/e\le x\le 5$. The real principal branch of the solution ${W_{0}}$ is shown in red. The second branch of the real solution ${W_{-1}}$ on the interval $-1/e\le x<0$ is drawn in blue. The software implementation in R used here is available in [1]
However, the use of the Lambert-W function on its principal branch ${W_{0}}(x)$, $x\ge -1/e$, is limited to obtaining values of $F(t,s)$ only for $s<1$ [12]. Thus, major statistical moments such as the variance and skewness remain undefined. The graph of the Lambert-W function is shown in Figure 1. Very precise approximations of $F(t,s)$, $s\le 1$, can be computed by dedicated software packages. However, the real positive values of the p.g.f. $F(t,s)$ for $s>1$ can be derived only from the ${W_{-1}}(x)$ branch on $-1/e\le x<0$, as shown in Figure 2. To find an analytical solution, the series expansion of the p.g.f. $F(t,s)$ in the neighbourhood of $s=1$ is obtained. The results follow from the solution to the forward Kolmogorov equation and rely on the factorial moments of the critical infinitesimal geometric branching reproduction.
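For illustration, both real branches can be evaluated numerically. The short Python sketch below uses `scipy.special.lambertw` (an assumed tool; the implementation referenced in [1] is in R) and checks the defining relation $W(x){e^{W(x)}}=x$ on both branches.
```python
import numpy as np
from scipy.special import lambertw

# sample points; the branch point of the Lambert-W function lies at x = -1/e
x = np.array([-0.35, -0.2, -0.05, 0.5, 2.0, 5.0])

w0 = lambertw(x, k=0)     # principal branch W_0, real for x >= -1/e
wm1 = lambertw(x, k=-1)   # branch W_{-1}, real only for -1/e <= x < 0

for xi, a, b in zip(x, w0, wm1):
    print(f"x = {xi:7.4f}   W_0 = {a.real:8.4f}   W_-1 = {b}")

# both branches satisfy the defining relation W(x) * exp(W(x)) = x
assert np.allclose(w0 * np.exp(w0), x)
assert np.allclose(wm1 * np.exp(wm1), x)
```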
In Section 2, the description of the proposed model is given through the solution to the backward Kolmogorov equation. We outline there the problem of continuity of the p.g.f. $F(t,s)$ at the point $s=1$ for $s\ge 1$. Section 3 is devoted to the factorial moments of the number of particles alive at time $t>0$; a recurrence relation is established. The main result, concerning the representation of the p.g.f. $F(t,s)$ as a convergent series in the domain $|s-1|<1/Kt$, is proved in Section 4. Finally, statistical inference is introduced in Section 5.
The series expansions of many special functions, and of the Lambert-W function in particular, are based on combinatorial relations. In the Appendix we state some useful formulas for quick orientation in the outcome of the summation by diagonals introduced in Section 4.
2 Backward Kolmogorov equation for critical geometric branching mechanism
Suppose that the lifetime of particles is exponentially distributed with constant rate $K>0$. Then with positive probability, there is a family of particles from all generations at time $t>0$. The number of particles alive at time $t>0$ is described by a Markov branching process (MBP) $X(t),t\ge 0$, $X(0)=1$, [2, 7, 10]. Only the critical case, defined by (1) with parameter $\varrho =1/2$, is considered. Then the reproduction of particles is of mean $m=1$, therefore the ultimate extinction probability $q={\lim \nolimits_{t\to \infty }}F(t,0)=1$ and the mathematical expectation $E[X(t)]=1$ for $t>0$, see [10, Theorem 4, page 53]. For the reproduction p.g.f. introduced in (1) the infinitesimal generating function is defined as
Now, for the implicit solution to (3) in the critical case the following equation is valid:
Once we have (4), we can find the explicit solution to (3). It is defined by the composite function $\mathcal{C}(s)=V(G(s))$, for $s\ne 1$, introduced in the form
Then, from Equation (4), the solution $F(t,s)$ (see (2)) to the backward Kolmogorov equation in the critical case is $\mathcal{C}(F(t,s))={e^{Kt}}\mathcal{C}(s)$, $|s|<1$. The function $\mathcal{C}(s)$ from Equation (5) has a vertical asymptote at the point $s=1$ (see Figure 3). Its first derivative
\[ f(s):=K(h(s)-s)=\frac{K{(1-s)^{2}}}{2-s},\hspace{1em}\frac{K}{f(s)}=\frac{1}{{(1-s)^{2}}}+\frac{1}{1-s}.\]
The probability generating function of the process $X(t),t>0$, defined by
satisfies the backward Kolmogorov equation
Equation (3) is nonlinear with separable variables due to the property of time-homogeneity. It can be written as
\[ \frac{dF}{f(F)}=dt,\hspace{1em}\hspace{1em}f(s)=\frac{K{(1-s)^{2}}}{2-s},\hspace{1em}f(0)=\frac{K}{2}.\]
This means that the indefinite integral takes the form
(4)
\[ \int \frac{Kdx}{f(x)}=\int \left(\frac{1}{{(1-x)^{2}}}+\frac{1}{1-x}\right)dx=\log \left(\frac{1}{|1-x|}\exp \left(\frac{1}{1-x}\right)\right).\]
(5)
\[ \mathcal{C}(s)=\frac{1}{1-s}\exp \left(\frac{1}{1-s}\right),\hspace{1em}\frac{{\mathcal{C}^{\prime }}(s)}{\mathcal{C}(s)}=\frac{K}{f(s)},\hspace{1em}V(x)=x{e^{x}},\hspace{1em}G(s)=\frac{1}{1-s}.\]
\[ {\mathcal{C}^{\prime }}(s)=\frac{2-s}{{(1-s)^{3}}}\exp \left(\frac{1}{1-s}\right),\hspace{1em}s\ne 1,\hspace{1em}\mathcal{C}(0)=e,\hspace{1em}{\mathcal{C}^{\prime }}(0)=2e,\]
shows that $\mathcal{C}(s)$ is increasing in the intervals $-\infty <s<1$ and $2<s<\infty $. It is negative to the right of the point $s=1$, decreases in the interval $1<s<2$, and has a minimum at the point $s=2$. Only the interval $-\infty <s<1$, where $\mathcal{C}(s)$ is positive and increasing, is convenient for the definition of the inverse function ${\mathcal{C}^{-1}}(x)=s$ if and only if $x=\mathcal{C}(s)$.
Fig. 3.
The graphs of ${e^{Kt}}\mathcal{C}(s)={e^{Kt}}V(G(s))$, $s\ne 1$, as a function of s for different $Kt$. The horizontal red dashed line marks $\min \mathcal{C}(s)=\mathcal{C}(2)=-1/e$. The resulting curves crossing the dashed line are inflated by ${e^{Kt}}$, but they preserve the abscissa of the minimum: $\min ({e^{Kt}}\mathcal{C}(s))={e^{Kt}}\mathcal{C}(2)=-{e^{Kt}}/e<-1/e$
The composite inverse function of $V(G(s))$ is obtained in [12] by introducing the Lambert-W function. As the Lambert-W function is multivalued with a branch point at $z=-{e^{-1}}$, the solution is limited to the principal branch ${W_{0}}(x)$, shown in Figure 1, where
The inversion of the composite function $V(G(s))$ for $s<1$ is defined as
This form of the p.g.f. $F(t,s)$ makes the polynomial rate of increase in the time parameter $t>0$ apparent. It is a consequence of the logarithmic growth rate of the Lambert-W function [6, formula (28)]. In particular, the extinction probability $P(X(t)=0)$ by time $t>0$ is directly computed from (6) as
The convergence of the family of functions $F(t,s)$, $t>0$, $|s|<1$, to the constant function equal to one on the interval $0\le s\le 1$ is monotone in $t>0$ and uniform in $|s|\le 1-\varepsilon <1$, $\varepsilon >0$, [10]. Moreover, the following limit exists: ${\lim \nolimits_{s\to {1_{-}}}}F(t,s)=1$. The conditional mathematical expectation, given nonextinction, is given by
\[ \mathcal{C}(s)=V(G(s)),\hspace{1em}{\mathcal{C}^{-1}}(x)={G^{-1}}({V^{-1}}(x))={G^{-1}}(W(x)),\hspace{1em}x>0.\]
Then, the solution $F(t,s)$ to the backward Kolmogorov equation is given by
(6)
\[ \frac{1}{1-F(t,s)}=W\left(\frac{{e^{Kt}}}{(1-s)}\exp \left(\frac{1}{1-s}\right)\right),\hspace{1em}|s|<1.\]
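A minimal numerical sketch of formula (6) in Python, assuming `scipy.special.lambertw` is available; the chosen values of $K$, $t$ and $s$ are arbitrary illustrations. It also checks the implicit relation $\mathcal{C}(F(t,s))={e^{Kt}}\mathcal{C}(s)$ and evaluates the extinction probability $F(t,0)=1-1/{W_{0}}({e^{Kt+1}})$.
```python
import numpy as np
from scipy.special import lambertw

def C(s):
    """C(s) = exp(1/(1-s)) / (1-s), the composite function V(G(s)) from (5)."""
    return np.exp(1.0 / (1.0 - s)) / (1.0 - s)

def F(t, s, K=1.0):
    """p.g.f. F(t, s), |s| < 1, via the principal branch W_0, formula (6)."""
    w = lambertw(np.exp(K * t) * C(s), k=0).real
    return 1.0 - 1.0 / w

K, t = 1.0, 2.0
for s in (-0.5, 0.0, 0.5, 0.9):
    Fts = F(t, s, K)
    # the implicit backward solution requires C(F(t,s)) = exp(K t) * C(s)
    assert np.isclose(C(Fts), np.exp(K * t) * C(s))
    print(f"s = {s:4.1f}   F(t, s) = {Fts:.6f}")

# extinction probability by time t: P(X(t) = 0) = F(t, 0) = 1 - 1/W_0(e^(K t + 1))
print("P(X(t) = 0) =", F(t, 0.0, K))
```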
3 Forward Kolmogorov equation and factorial moments
The solution (5), (6) for the p.g.f. $F(t,s)$ becomes inconsistent when $s>1$. The asymptotic behaviour of the derivatives of the function $\mathcal{C}(s)$ in the neighbourhood of the vertical asymptote, from the left and from the right, is
\[ \underset{s\to {1_{-}}}{\lim }{\mathcal{C}^{(n)}}(s)=+\infty ,\hspace{1em}\underset{s\to {1_{+}}}{\lim }{\mathcal{C}^{(n)}}(s)=0,\hspace{1em}{\mathcal{C}^{(n)}}(1+1/k)=0,\hspace{1em}k=2,3,\dots \]
(see again Figure 3). This is a serious obstacle for studying the continuity of $F(t,s)$ at the point $s=1$ (from the left and from the right). A possible resolution to this problem is the method of factorial moments. It begins with the forward Kolmogorov equation,
with partial derivatives of the p.g.f. $F(t,s)$ denoted as follows:
(7)
\[ \frac{\partial F(t,s)}{\partial t}=f(s)\frac{\partial F(t,s)}{\partial s},\hspace{1em}F(0,s)=s,\hspace{1em}f(s)=\frac{K{(s-1)^{2}}}{2-s},\]
\[ \frac{{\partial ^{n}}F(t,s)}{\partial {s^{n}}}={F_{s}^{(n)}}(t,s),\hspace{1em}|s|<1,\hspace{1em}{F_{s}^{(n)}}(t,1)=\underset{s\to {1_{-}}}{\lim }{F_{s}^{(n)}}(t,s),\hspace{1em}s<1.\]
Next, the required n-th factorial moments can be expressed as $E{[X(t)]_{n\downarrow }}:={F_{s}^{(n)}}(t,1)$. For them it holds that if ${f^{(n)}}(1)<\infty $ then ${F_{s}^{(n)}}(t,1)<\infty $, which follows directly from well-known results in [10, Theorem 2, page 33]. Now, the factorial moments of a critical geometric branching process are well defined and can be obtained by recurrence, as is proved in the following lemma.
Lemma 1.
Let $X(t)$ be a critical MBP with geometric branching mechanism. Then all factorial moments exist,
and they satisfy the recurrence relation
Proof.
We consider the consecutive derivatives of the forward Kolmogorov equation (7) for any critical MBP, knowing that $F(t,1)={F^{\prime }_{s}}(t,1)=1$ and ${F^{\prime\prime }_{s}}(t,1)={f^{\prime\prime }}(1)t$, see [2, 7, 10]:
\[ \frac{\partial {F_{s}^{(n)}}(t,s)}{\partial t}={\sum \limits_{j=0}^{n}}\frac{n!}{j!(n-j)!}{f^{(j)}}(s){F_{s}^{(n+1-j)}}(t,s),\hspace{1em}|s|<1,\hspace{1em}n=1,2,\dots .\]
Let $s=1$. Then for the Markov process with geometric branching mechanism it holds that $f(1)=0$, ${f^{\prime }}(1)=0$, ${f^{(j)}}(1)=j!K$ for $j=2,3,4,\dots $. Thus,
\[ \frac{\partial {F_{s}^{(n)}}(t,1)}{\partial t}={\sum \limits_{j=2}^{n}}\frac{n!Kj!}{j!(n-j)!}{F_{s}^{(n+1-j)}}(t,1),\hspace{1em}n=2,3,\dots .\]
It is equivalent to
\[ \frac{\partial {F_{s}^{(n)}}(t,1)}{\partial t}=Kn!\left\{\frac{{F^{\prime }_{s}}}{0!}+\frac{{F^{\prime\prime }_{s}}}{1!}+\cdots +\frac{{F_{s}^{(n-2)}}}{(n-3)!}+\frac{{F_{s}^{(n-1)}}}{(n-2)!}\right\},\hspace{1em}n=2,3,\dots .\]
The next derivative for $n=1,2,\dots $ becomes
\[ \frac{\partial {F_{s}^{(n+1)}}(t,1)}{\partial t}=K(n+1)n!\left\{\frac{{F^{\prime }_{s}}}{0!}+\frac{{F^{\prime\prime }_{s}}}{1!}+\cdots +\frac{{F_{s}^{(n-1)}}}{(n-2)!}\right\}+K(n+1)n!\frac{{F_{s}^{(n)}}}{(n-1)!}.\]
It leads us to
\[ \frac{\partial {F_{s}^{(n+1)}}(t,1)}{\partial t}=(n+1)\frac{\partial {F_{s}^{(n)}}(t,1)}{\partial t}+K(n+1)n{F_{s}^{(n)}}(t,1),\hspace{1em}n=1,2,\dots .\]
Now, integration yields the following recurrence relation:
In particular,
□
Obviously, the obtained recurrent solution for the factorial moments admits a polynomial form that is more convenient for computation,
with the following recurrence relation
Consequently, we obtain the remarkable property
The coefficients ${a_{n,k}}$ are obtained recursively as a direct consequence of (8),
with the following initial and boundary values,
(8)
\[ {Q^{\prime }_{n}}(x)={Q^{\prime }_{n-1}}(x)+n{Q_{n-1}}(x),\hspace{1em}{Q_{1}}(x)=x,\hspace{1em}n=1,2,\dots .\]
\[ {Q^{\prime }_{n}}(x)=1+2{Q_{1}}(x)+\cdots +(n-1){Q_{n-2}}(x)+n{Q_{n-1}}(x),\hspace{1em}n=1,2,\dots .\]
The polynomials ${Q_{n}}$ with their coefficients are denoted by
(9)
\[ {Q_{n}}(x)=x+{a_{n,2}}{x^{2}}+{a_{n,3}}{x^{3}}+{a_{n,4}}{x^{4}}+\cdots +{a_{n,n-1}}{x^{n-1}}+{x^{n}}.\]
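The recurrence (8) is straightforward to implement. The following sketch in exact rational arithmetic (Python; it assumes the normalisation ${Q_{0}}(x)=1$ used in (20) below and ${Q_{n}}(0)=0$ for $n\ge 1$, as in (9)) prints the first polynomials, which can be checked against Table 1 in Section 5.
```python
from fractions import Fraction

def q_polynomials(n_max):
    """Coefficient lists [c_0, c_1, ..., c_n] of Q_n(x) (c_k multiplies x^k),
    built from the recurrence Q'_n(x) = Q'_{n-1}(x) + n * Q_{n-1}(x), Q_0(x) = 1."""
    Q = [[Fraction(1)]]                                   # Q_0(x) = 1
    for n in range(1, n_max + 1):
        prev = Q[-1]
        # coefficients of Q'_{n-1}(x), padded to the length of Q_{n-1}
        d = [k * prev[k] for k in range(1, len(prev))] + [Fraction(0)]
        # coefficients of Q'_n(x) = Q'_{n-1}(x) + n * Q_{n-1}(x)
        dQn = [a + n * b for a, b in zip(d, prev)]
        # integrate termwise; the constant term vanishes since Q_n(0) = 0 for n >= 1
        Q.append([Fraction(0)] + [c / (k + 1) for k, c in enumerate(dQn)])
    return Q

for n, coeffs in enumerate(q_polynomials(5)):
    terms = " + ".join(f"{c}*x^{k}" for k, c in enumerate(coeffs) if c != 0)
    print(f"Q_{n}(x) =", terms if terms else "0")
```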
4 Summation by diagonals
The application of Taylor's theorem to the p.g.f. $F(t,s)$ in the neighbourhood of $s=1$, where $F(t,1)=1$, yields the series expansion
The series expansion (11) in the neighbourhood of the point $s=1$ has a very small radius of convergence. This prevents the calculation of values of the p.g.f. $F(t,s)$ in the neighbourhood of zero by (11). To overcome this disadvantage we proceed with a solution using summation of diagonal terms. The following convenient diagonal form of $F(t,s)$ is obtained after substituting (9) into (11):
The main diagonal is obtained for $j=0$, where ${a_{0,0}}=1$, ${a_{n,n}}=1$, and
To avoid confusion with the boundary values, formula (12) is rewritten as follows:
The summation by the second diagonal (the first after the main one) is computed for $j=1$ and is based on the recurrence relation (10), taking the following form,
It can easily be rewritten using the harmonic numbers ${H_{n}}$ [3] as follows,
After some technical work, the summation on the second diagonal for $|s-1|<1/Kt$ is generalised as
(11)
\[ F(t,s)=1+{\sum \limits_{n=1}^{\infty }}{F_{s}^{(n)}}(t,1)\frac{{(s-1)^{n}}}{n!}=1+{\sum \limits_{n=1}^{\infty }}{Q_{n-1}}(Kt){(s-1)^{n}}.\]
\[ F(t,s)=1+(s-1)+{\sum \limits_{n=2}^{\infty }}{(s-1)^{n}}{\sum \limits_{j=1}^{n-1}}{a_{n-1,j}}{(Kt)^{j}}.\]
When the order of summation is exchanged, the p.g.f. takes the form
(12)
\[ F(t,s)=1+(s-1){\sum \limits_{j=0}^{\infty }}{(s-1)^{j}}{\sum \limits_{n=j}^{\infty }}{a_{n,n-j}}{(Kt(s-1))^{n-j}}.\]
(13)
\[ F(t,s)=1+\frac{(s-1)}{1-Kt(s-1)}+(s-1){\sum \limits_{j=1}^{\infty }}{(s-1)^{j}}{\sum \limits_{n=j+1}^{\infty }}{a_{n,n-j}}{(Kt(s-1))^{n-j}}.\]
\[ {(s-1)^{2}}{\sum \limits_{n=2}^{\infty }}{a_{n,n-1}}{(Kt(s-1))^{n-1}}={\left(\frac{s-1}{1-Kt(s-1)}\right)^{2}}(-\log (1-Kt(s-1))).\]
Proceeding with the summation by the subsequent diagonals, $j\ge 2$, as shown in the Appendix, leads us to a generalised solution for the coefficients ${a_{n,n-j}}$ and the series expansion (13). Not surprisingly, working with diagonal forms implies the inclusion of combinatorial recurrence relations. In this case, the generalised solution and its triangular form involve the unsigned Stirling numbers of the first kind $|s(n,k)|$ and, equivalently, harmonic numbers [14, 4]. The complete implementation is summarised in the following lemma.
Lemma 2.
The coefficients of polynomials (9),
are expressed by the unsigned Stirling numbers of the first kind as follows:
Proof.
The representation (14) is valid for the first and second diagonals $(j=0,j=1)$, and for the first vertical column $(j=n-1)$,
\[\begin{array}{l}\displaystyle {a_{n,n}}=\frac{1}{n!}|s(n+1,n)|=1,\hspace{1em}{a_{n,1}}=\frac{1}{1!}{\sum \limits_{k=1}^{n}}{(-1)^{n-k}}|s(n+1,k)|=1,\\ {} \displaystyle {a_{n,n-1}}=\frac{1}{(n-1)!}\{-|s(n+1,1)|+|s(n+1,2)|\}=\frac{1}{(n-1)!}\{-n!+n!{H_{n}}\}.\end{array}\]
Moreover, for $j=n-2$ we have
\[ {a_{n,2}}=\frac{1}{2!}{\sum \limits_{k=1}^{n-1}}{(-1)^{n-1-k}}|s(n+1,k)|=\frac{1}{2!}\{-1+|s(n+1,n)|\}=\frac{(n+2)(n-1)}{4},\]
where $|s(n+1,n)|=\left(\genfrac{}{}{0.0pt}{}{n+1}{2}\right)$, see [3].
Now we verify that the representation (14) yields the main recurrence relation (10) on the coefficients of the polynomials ${Q_{n}}(x)$, rewritten as follows,
The proof is based on the main relation of the unsigned Stirling numbers of the first kind represented as $|s(n+1,k)|=n|s(n,k)|+|s(n,k-1)|$. Now, it is computed from (14) and (15) that
(15)
\[ {a_{n,n-j}}={a_{n-1,n-j}}+\left(\frac{n}{n-j}\right){a_{n-1,n-1-j}},\hspace{1em}j=1,2,\dots ,\hspace{1em}n-j>0.\]
\[ {a_{n-1,n-j}}=\frac{1}{(n-j)!}{\sum \limits_{k=1}^{j}}{(-1)^{j-k}}|s(n,k)|=\frac{1}{(n-j)!}{\sum \limits_{k=1}^{j+1}}{(-1)^{j+1-k}}|s(n,k-1)|,\]
where the number of summands is $j=(n-1)-(n-j)+1$. We remark that for $k=1$ and $n>1$, $|s(n,k-1)|=|s(n,0)|=0$.
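A numerical cross-check of Lemma 2 is sketched below (Python; it assumes that representation (14) reads ${a_{n,n-j}}=\frac{1}{(n-j)!}{\textstyle\sum _{k=1}^{j+1}}{(-1)^{j+1-k}}|s(n+1,k)|$, a form consistent with (15), with the shifted expression for ${a_{n-1,n-j}}$ above and with the displayed special cases). It compares these values with the coefficients produced by the recurrence (8).
```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1u(n, k):
    """Unsigned Stirling numbers of the first kind |s(n, k)|,
    via |s(n+1, k)| = n*|s(n, k)| + |s(n, k-1)|."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return (n - 1) * stirling1u(n - 1, k) + stirling1u(n - 1, k - 1)

def a_stirling(n, j):
    """Assumed form of (14): a_{n,n-j} = (1/(n-j)!) * sum_{k=1}^{j+1} (-1)^(j+1-k) |s(n+1,k)|."""
    fact = 1
    for i in range(2, n - j + 1):
        fact *= i
    total = sum((-1) ** (j + 1 - k) * stirling1u(n + 1, k) for k in range(1, j + 2))
    return Fraction(total, fact)

def q_polynomials(n_max):
    """Coefficients a_{n,k} of Q_n(x) from Q'_n = Q'_{n-1} + n*Q_{n-1}, Q_0 = 1."""
    Q = [[Fraction(1)]]
    for n in range(1, n_max + 1):
        prev = Q[-1]
        d = [k * prev[k] for k in range(1, len(prev))] + [Fraction(0)]
        dQn = [a + n * b for a, b in zip(d, prev)]
        Q.append([Fraction(0)] + [c / (k + 1) for k, c in enumerate(dQn)])
    return Q

Q = q_polynomials(8)
for n in range(1, 9):
    for j in range(n):                  # a_{n,n-j} is the coefficient of x^(n-j) in Q_n(x)
        assert Q[n][n - j] == a_stirling(n, j), (n, j)
print("recurrence (8) and the Stirling-number representation agree for n <= 8")
```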
4.1 Solution of the Kolmogorov equation by the diagonal summation
The computed coefficients ${a_{n,n-j}}$ (14) allow us to obtain the solution to the forward Kolmogorov equation for $F(t,s)$. This can be done with a composite recurrent procedure on the polynomials ${Q_{n}}$. The sequence of iterations yields the following functions as a part of this solution:
Their consecutive derivatives with respect to s (respectively, t) will be denoted as
We remark also the following identities:
Now, the solution to the forward Kolmogorov equation (7) can be defined and proved by the following theorem.
\[ \frac{{\partial ^{n}}F(t,s)}{\partial {s^{n}}}={F_{s}^{(n)}}(t,s),\hspace{1em}\frac{{\partial ^{n}}L(t,s)}{\partial {s^{n}}}={L_{s}^{(n)}}(t,s),\hspace{1em}\frac{{\partial ^{n}}A(t,s)}{\partial {s^{n}}}={A_{s}^{(n)}}(t,s).\]
Obviously, the following generalised forms are valid:
(18)
\[ {A^{\prime }_{t}}(t,s)=K{(s-1)^{2}}{A^{\prime }_{s}}(t,s),\hspace{1em}{L^{\prime }_{t}}(t,s)=\frac{(s-1){L^{\prime }_{s}}(t,s)}{t}.\]
Theorem 1.
Proof.
After denoting $L:=L(t,s)$, $A:=A(t,s)$, we rewrite (19) as
The first derivatives with respect to t and s, respectively, in (20) are
(20)
\[ F(t,s)=1+{\sum \limits_{n=1}^{\infty }}{\{A(t,s)\}^{n}}{Q_{n-1}}(L(t,s)),\hspace{1em}{Q_{0}}(L)=1.\]
\[ {F^{\prime }_{t}}(t,s)={\sum \limits_{n=1}^{\infty }}\{n{A^{n-1}}(t,s){Q_{n-1}}(L){A^{\prime }_{t}}(t,s)+{A^{n}}(t,s){Q^{\prime }_{n-1}}(L){L^{\prime }_{t}}(t,s)\}\]
and
\[ {F^{\prime }_{s}}(t,s)={\sum \limits_{n=1}^{\infty }}\{n{A^{n-1}}(t,s){Q_{n-1}}(L){A^{\prime }_{s}}(t,s)+{A^{n}}(t,s){Q^{\prime }_{n-1}}(L){L^{\prime }_{s}}(t,s)\}.\]
Then, taking into account the relations (16)–(18) and (8), the first derivatives with respect to t and s are
\[ {F^{\prime }_{t}}(t,s)=K{A^{2}}(t,s){\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)={A^{\prime }_{t}}{\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)\]
and
\[ {F^{\prime }_{s}}(t,s)={A^{\prime }_{s}}(t,s){\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s)\{n{Q_{n-1}}(L)+Kt(s-1){Q^{\prime }_{n-1}}(L)\},\]
or equivalently,
\[ {F^{\prime }_{s}}(t,s)={A^{\prime }_{s}}(t,s){\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s)\{{Q^{\prime }_{n}}(L)-{Q^{\prime }_{n-1}}(L)+Kt(s-1){Q^{\prime }_{n-1}}(L)\}.\]
The forward Kolmogorov equation can be written as
The left-hand side of (21) is
\[ K{A^{2}}(t,s)\left\{{\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)-(s-1){\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)\right\}.\]
The right-hand side of (21) is
\[\begin{array}{l}\displaystyle K{A^{2}}\left\{{\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)-(1-Kt(s-1))A(t,s){\sum \limits_{n=1}^{\infty }}{A^{n-2}}(t,s){Q^{\prime }_{n-1}}(L)\right\}\\ {} \displaystyle =K{A^{2}}\left\{{\sum \limits_{n=1}^{\infty }}{A^{n-1}}(t,s){Q^{\prime }_{n}}(L)-(s-1){\sum \limits_{n=2}^{\infty }}{A^{n-2}}(t,s){Q^{\prime }_{n-1}}(L)\right\},\end{array}\]
because ${Q_{0}}=1$ and ${Q^{\prime }_{0}}=0$. Then, the change of the summation index gives
□
5 Statistical inferences
The polynomial recurrence relation of the form (8), (9) is an important computational enhancement enabling a tabular representation, as shown in Table 1. This recurrence converges quickly due to the minimisation of the ratio ${Q_{n}}/{Q_{n-1}}$ (see Figure 4). For practical implementations, only about $n=15$ recursive computations are required to obtain values of the p.g.f. $F(t,s)$ within about ${10^{-4}}$ of the solution obtained through the Lambert-W function (see Figure 2); a sketch of this comparison is given after Table 1.
Table 1.
${Q^{\prime }_{n}}(x)={Q^{\prime }_{n-1}}(x)+n{Q_{n-1}}(x)$
Polynomials ${Q_{n}}$ | Polynomials ${Q^{\prime }_{n}}$ |
${Q_{0}}(x)=1$ | ${Q^{\prime }_{0}}(x)=0$ |
${Q_{1}}(x)=x$ | ${Q^{\prime }_{1}}(x)=1$ |
${Q_{2}}(x)=x+{x^{2}}$ | ${Q^{\prime }_{2}}(x)=1+2x$ |
${Q_{3}}(x)=x+\frac{5{x^{2}}}{2}+{x^{3}}$ | ${Q^{\prime }_{3}}(x)=1+5x+3{x^{2}}$ |
${Q_{4}}(x)=x+\frac{9{x^{2}}}{2}+\frac{13{x^{3}}}{3}+{x^{4}}$ | ${Q^{\prime }_{4}}(x)=1+9x+13{x^{2}}+4{x^{3}}$ |
${Q_{5}}(x)=x+\frac{14{x^{2}}}{2}+\frac{71{x^{3}}}{3!}+\frac{77{x^{4}}}{12}+{x^{5}}$ | ${Q^{\prime }_{5}}(x)=1+14x+\frac{71{x^{2}}}{2}+\frac{77{x^{3}}}{3}+5{x^{4}}$ |
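The comparison mentioned before Table 1 can be reproduced with the following Python sketch. It assumes `scipy.special.lambertw` and the forms $A(t,s)=(s-1)/(1-Kt(s-1))$, $L(t,s)=-\log (1-Kt(s-1))$ in the representation (20), which are consistent with the main-diagonal and second-diagonal sums in Section 4 and with the identities (18); the parameter values are illustrative and satisfy $|s-1|<1/Kt$.
```python
import math
import numpy as np
from scipy.special import lambertw
from fractions import Fraction

def q_polynomials(n_max):
    """Coefficient lists of Q_n(x) from Q'_n = Q'_{n-1} + n*Q_{n-1}, Q_0 = 1."""
    Q = [[Fraction(1)]]
    for n in range(1, n_max + 1):
        prev = Q[-1]
        d = [k * prev[k] for k in range(1, len(prev))] + [Fraction(0)]
        dQn = [a + n * b for a, b in zip(d, prev)]
        Q.append([Fraction(0)] + [c / (k + 1) for k, c in enumerate(dQn)])
    return Q

def F_diagonal(t, s, K=1.0, n_terms=15):
    """Truncation of the diagonal representation (19), (20):
    F = 1 + sum_{n>=1} A^n Q_{n-1}(L), with the assumed
    A = (s-1)/(1 - K t (s-1)) and L = -log(1 - K t (s-1)); needs |s-1| < 1/(K t)."""
    A = (s - 1.0) / (1.0 - K * t * (s - 1.0))
    L = -math.log(1.0 - K * t * (s - 1.0))
    Q = q_polynomials(n_terms - 1)
    q_at_L = [float(sum(c * L ** k for k, c in enumerate(q))) for q in Q]
    return 1.0 + sum(A ** n * q_at_L[n - 1] for n in range(1, n_terms + 1))

def F_lambert(t, s, K=1.0):
    """F(t, s) from (6) via the principal branch W_0 (valid for s < 1)."""
    x = np.exp(K * t) / (1.0 - s) * np.exp(1.0 / (1.0 - s))
    return 1.0 - 1.0 / lambertw(x, k=0).real

K, t, s = 1.0, 0.5, 0.5          # here |s - 1| = 0.5 < 1/(K t) = 2
print("diagonal series (20):", F_diagonal(t, s, K, n_terms=15))
print("Lambert-W formula (6):", F_lambert(t, s, K))
```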
The probabilities $P(X(t)=n)$ of the critical geometric branching process can be computed directly with any available software tool for the Lambert-W function, as shown in [12]. However, statistical measures such as the variance, skewness, and kurtosis remain undefined when relying on the solution based on the Lambert-W function. Their expressions follow directly from the representation of the n-th moments, which is given by the Stirling numbers of the second kind $S(n,k)$ [14, 4] and the factorial moments as
Thus, the variance of particles $X(t)$ when $E[X(t)]=1$ is
The shape and skewness are also defined by the factorial moments:
and
Finally, the function $A(t,s)$ in the series expansion (19), (20) for geometric branching reproduction corresponds to the p.g.f. of the critical birth-death process with exactly twice the intensity, $\beta =2K$, see [13]. In this case the lifetime parameter $K>0$ (called the killing rate) controls the intensity of reproduction and the size of the population $X(t),t>0$.
This connection conveniently enables comparison between different models for statistical inferences. For instance, it is not difficult to observe in parallel the models of critical geometric branching (GB), critical linear birth-death processes (BD) and the models with the number of progeny particles concentrated in $(0,2,3,4)$. The reproduction p.g.f.s for the newly proposed models are
\[ {h_{BD}}(s)=E[{s^{\eta }}]=\frac{1+{s^{2}}}{2},\hspace{1em}{h_{3}}(s)=\frac{7}{12}+\frac{{s^{2}}}{4}+\frac{{s^{3}}}{6},\hspace{1em}{h_{4}}(s)=\frac{2}{3}+\frac{{s^{2}}}{8}+\frac{{s^{3}}}{12}+\frac{{s^{4}}}{8}.\]
The intensities of reproduction are respectively K, β, α, γ. The corresponding infinitesimal generating functions and derivatives at $s=1$ are, respectively,
\[\begin{array}{l}\displaystyle {f_{GB}}(s)=\frac{K{(s-1)^{2}}}{2-s},\hspace{1em}{f^{\prime\prime }_{GB}}(1)=2!K,\dots ,{f_{GB}^{(n)}}(1)=n!K,\\ {} \displaystyle {f_{BD}}(s)=\frac{\beta {(s-1)^{2}}}{2},\hspace{1em}{f^{\prime\prime }_{BD}}(1)=\beta ,\hspace{1em}{f_{BD}^{(3)}}(1)=0,\\ {} \displaystyle {f_{3}}(s)=\frac{\alpha }{12}{(1-s)^{2}}(2s+7),\hspace{1em}{f^{\prime\prime }_{3}}(1)=\frac{3\alpha }{2},\hspace{1em}{f_{3}^{(3)}}(1)=\alpha ,\hspace{1em}{f_{3}^{(4)}}(1)=0,\\ {} \displaystyle {f_{4}}(s)=\frac{\gamma }{24}{(1-s)^{2}}(3{s^{2}}+8s+16),\hspace{1em}{f^{\prime\prime }_{4}}(1)=\frac{9\gamma }{4},\\ {} \displaystyle {f_{4}^{(3)}}(1)=\frac{7\gamma }{2},\hspace{1em}{f_{4}^{(4)}}(1)=3\gamma .\end{array}\]
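These values can be double-checked symbolically; a small sketch with `sympy` (an assumed dependency), keeping the rates symbolic:
```python
import sympy as sp

s = sp.symbols('s')
K, beta, alpha, gamma = sp.symbols('K beta alpha gamma', positive=True)

# infinitesimal generating functions of the four models
f_GB = K * (s - 1)**2 / (2 - s)
f_BD = beta * (s - 1)**2 / 2
f_3  = alpha / 12 * (1 - s)**2 * (2*s + 7)
f_4  = gamma / 24 * (1 - s)**2 * (3*s**2 + 8*s + 16)

def d(f, n):
    """n-th derivative of the infinitesimal generating function, evaluated at s = 1."""
    return sp.simplify(sp.diff(f, s, n).subs(s, 1))

print([d(f_GB, n) for n in (2, 3, 4)])   # expected: 2*K, 6*K, 24*K (that is, n!*K)
print([d(f_BD, n) for n in (2, 3)])      # expected: beta, 0
print([d(f_3, n) for n in (2, 3, 4)])    # expected: 3*alpha/2, alpha, 0
print([d(f_4, n) for n in (2, 3, 4)])    # expected: 9*gamma/4, 7*gamma/2, 3*gamma
```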
In order to keep equal variances for all models, we take
Under this condition, the measures of asymmetry are ordered as follows:
The series expansion for reproduction defined by ${f_{3}}$ is approximated as
\[\begin{array}{l}\displaystyle F(t,s)=1+\frac{(s-1)}{1-\frac{3\alpha t}{4}(s-1)}\\ {} \displaystyle +\frac{2}{9}{\left\{\frac{(s-1)}{1-\frac{3\alpha t}{4}(s-1)}\right\}^{2}}\left(-\log \left(1-\frac{3\alpha t}{4}(s-1)\right)\right)+\cdots \hspace{0.1667em}.\end{array}\]
The explicit solution to the Kolmogorov equation for ${f_{3}}$ is
Furthermore, the intensities for similar branching processes could be computed using a higher order approximation of the p.g.f. $F(t,s)$.
6 Conclusion
The continuity of the p.g.f. $F(t,s)$ in the neighbourhood of the point $s=1$ for the critical branching process with geometric reproduction is proved analytically by the representations (19), (20) based on the factorial moments. This is an important extension of the already available solution based on the Lambert-W function and provides a computational tool for MBPs with geometric reproduction.