1 Introduction
Any time-homogeneous Markov branching process (MBP) $X(t),t>0$, is defined and studied by its probability generating function (p.g.f.) $F(t,s),|s|<1$, as the unique solution of the Kolmogorov equations [1, 7, 19]. The backward Kolmogorov equation is nonlinear and of separable type, due to the time-homogeneity. First, we find a primitive of the indefinite integral. Then, in order to respect the initial conditions, it is essential to define the inverse function of the primitive. We apply the Lagrange inversion method, as shown in many books and articles [3, 8, 12].
The correspondence between MBPs and integer-valued positive Lévy processes is noted in the books [9], page 236, and [1], page 126. The geometric distribution appears in many application models, mainly in biology and epidemiology. However, the most important implementation is in statistical physics, where it is known as the Bose-Einstein distribution [20].
In the present text, we focus on the subcritical and critical infinitesimal geometric branching reproductions. We prove that the p.g.f. $F(t,s)$ in the subcritical case is expressed as a composition of Gauss hypergeometric and Wright functions. Many special cases of the Wright function ${_{1}}{\Psi _{1}}(\alpha ,a;\beta ,b;z)$ are considered in [11, 16]. The numerical evaluation of the Wright function is studied in [10]. The Laplace transform pairs related to the Mittag-Leffler functions and the reduced Wright functions are surveyed in [11]. We remark that in our model, for the subcritical MBP with mean of reproduction $0<m<1$, the parameters of the Wright function are $\alpha =a=m$, $-1<\beta =m-1<0$, and $b=m+1$. The principal parameter defining the domain of convergence equals $1+\beta -\alpha =1+(m-1)-m=0$ [16]. The factorial moments of $X(t),t>0$, are represented by series containing the partial Bell polynomials [21]. The conditional limit distribution is given in explicit form. It is a new unimodal integer-valued distribution supported on $\{1,2,\dots \}$. Its index of dispersion depends on the solution of a transcendental equation.
In the critical case, the p.g.f. $F(t,s)$ is defined by the composition of the Lambert-W function and a linear-fractional function. The probability mass function (p.m.f.) of $X(t)$, $t>0$, is expressed by the values of the Lambert-W function at the point $x={e^{Kt+1}}$ and can be calculated using standard software packages. The agreement between the Lagrange inversion method and the series expansion of the Lambert-W function is confirmed by the approximation of the extinction probability.
The Lambert-W function is the real-valued inverse of the function $V(x)=x{e^{x}},x\ge -1$. The properties and many applications of the Lambert-W function are provided in [5]. The sequences of series and derivatives of $W(x)$ are studied in [4]. The probabilistic property that $W(x)$ is a Bernstein function is established in [15]. The computation of Lambert-W functions without transcendental function evaluations is considered in [6]. A historical review is given in [2].
The main convenience of the obtained results based on special functions is computational simplicity. This is not a novelty, and there are many applications. The most important ones are related to many-body computational problems, first noticed in the numeric calculation of the eigenstates of the hydrogen molecular ion ${H_{2}^{+}}$. An effective solution was obtained by using the Lambert-W function, which has become a common method for solving similar problems in physics [13]. A key advantage of using the Lambert-W function is the effective treatment of slowly convergent series expansions [17]. This is possible due to its very well studied properties and well-developed software packages, noted in [5, 6, 22].
We began our work on the infinitesimal geometric branching mechanism following the methods of branching process theory and Lagrange inversion. Using them, we extended our results to obtain a computationally effective generalisation based on the Lambert-W and Wright functions. Some of the improvements are demonstrated by easily computed numerical applications.
2 Subcritical geometric branching
The studied geometric branching reproduction process $X(t),t>0$, is a time-homogeneous MBP starting with one particle as initial condition. Its Markov property is guaranteed by the assumption that the lifetime of particles is an exponentially distributed random variable with a constant parameter $K>0$. The reproduction is defined by the integer-valued random variable η representing the offspring numbers as follows,
The parameter $m>0$ defines the mean of the offspring numbers. In this notation the p.g.f. of the reproduction law is
\[ h(s)=\frac{1}{1+m-ms},\hspace{1em}{h^{\prime }}(s)=\frac{m}{{(1+m-ms)^{2}}},\hspace{1em}{h^{\prime\prime }}(s)=\frac{2{m^{2}}}{{(1+m-ms)^{3}}}.\]
All derivatives for $k=2,3,\dots $, at the points $s=0$ and $s=1$ are related by the constant ${(1+m)^{k+1}}$ as follows,
The solution of the indefinite integral is
Knowing the primitive (8) of the equation (3), we formulate the following lemma without proof.
The infinitesimal generating function $f(s)$ present in the Kolmogorov equations is defined by the constant K and the p.g.f. $h(s)$ as
\[ f(s)=K(h(s)-s)=K\left(\frac{1}{1+m-ms}-s\right),\]
and has the following derivatives, for $k=1,2,3,\dots $,
\[ {f^{\prime }}(s)=\frac{{f^{\prime }}(1)-Km(1-s)(2+m-ms)}{{(1+m-ms)^{2}}},\hspace{1em}{f^{(k)}}(s)=\frac{K{m^{k}}k!}{{(1+m-ms)^{k+1}}}.\]
The p.g.f. of the number of particles alive at the time $t>0$
yields the backward Kolmogorov equation
The equation $h(s)=s$ has two solutions, ${s_{1}}=1/m$ and ${s_{2}}=1$, where $m={h^{\prime }}(1)$. The value $s=1/m$ is a fixed point of the p.g.f. $h(s)$ and of its first derivative ${h^{\prime }}(s)$. The branching reproduction is classified as subcritical if $0<m<1$ and critical if $m=1$. This classification is in accordance with the ultimate extinction probability $q=1$ in both cases under consideration, the subcritical and the critical one. In particular, for the subcritical reproduction with $0<m<1$, the first derivative ${f^{\prime }}(s)$ is negative in the interval $0\le s\le 1$, and
(4)
\[ {f^{\prime }}(0)=\frac{-K(1+m+{m^{2}})}{{(1+m)^{2}}}<0,\hspace{1em}{f^{\prime }}(1)=K(m-1)<0.\]
\[ {f^{(k)}}(0)=K\frac{k!{m^{k}}}{{(1+m)^{k+1}}},\hspace{1em}{f^{(k)}}(1)=Kk!{m^{k}},\hspace{1em}{f^{(k)}}(0)=\frac{{f^{(k)}}(1)}{{(1+m)^{k+1}}}.\]
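The derivative relations above can be checked symbolically. A minimal sketch with sympy, under the standard assumption (consistent with the formulas above) that $f(s)=K(h(s)-s)$:

```python
# Symbolic check of the derivative relations for the geometric reproduction:
# f'(1) = K(m-1) and f^{(k)}(0) = f^{(k)}(1) / (1+m)^{k+1} for k >= 2.
import sympy as sp

m, s, K = sp.symbols('m s K', positive=True)
h = 1 / (1 + m - m * s)          # p.g.f. of the geometric reproduction law
f = K * (h - s)                  # infinitesimal generating function (assumed form)

assert sp.simplify(f.diff(s).subs(s, 1) - K * (m - 1)) == 0
for k in range(2, 6):
    fk = f.diff(s, k)
    relation = fk.subs(s, 0) - fk.subs(s, 1) / (1 + m) ** (k + 1)
    assert sp.simplify(relation) == 0
print("derivative relations verified for k = 2,...,5")
```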
In the subcritical case, we study in parallel the p.g.f. $F(t,s)$ of the branching process $X(t),t>0$, defined by (2) and equation (3), and the function
satisfying the following non-linear equation:
In order to solve these equations (3) and (6), when ${f^{\prime }}(1)<0$, we define respectively the function
\[ {\mathcal{A}_{0}}(s)=\exp \left\{{f^{\prime }}(1){\int _{0}^{s}}\frac{dx}{f(x)}\right\},\hspace{1em}s\in [0,1),\hspace{1em}{\mathcal{A}_{0}}(0)=1,\hspace{1em}{\mathcal{A}_{0}}(1)=0.\]
and the function
\[ \mathcal{A}(s)=\exp \left\{{f^{\prime }}(1){\int _{0}^{s}}\frac{-dx}{f(1-x)}\right\},\hspace{1em}s\in [0,1),\hspace{1em}\mathcal{A}(0)=0,\hspace{1em}\mathcal{A}(1)=1,\]
where
\[ \frac{{\mathcal{A}^{\prime }_{0}}(s)}{{\mathcal{A}_{0}}(s)}=\frac{{f^{\prime }}(1)}{f(s)},\hspace{1em}\frac{{\mathcal{A}^{\prime }}(s)}{\mathcal{A}(s)}=\frac{-{f^{\prime }}(1)}{f(1-s)},\]
see [14, 1]. We remark that in the subcritical case
(7)
\[ \frac{1}{f(s)}=\frac{-1}{{f^{\prime }}(1)}\left\{\frac{1}{1-s}-\frac{{m^{2}}}{1-ms}\right\},\hspace{1em}{f^{\prime }}(1)=K(m-1)<0.\]
(8)
\[ \int \frac{dx}{f(x)}=\int \frac{-1}{{f^{\prime }}(1)}\left(\frac{1}{1-x}-\frac{{m^{2}}}{1-mx}\right)dx=\frac{1}{{f^{\prime }}(1)}\log \left(\frac{|1-x|}{|1-mx{|^{m}}}\right).\]
Lemma 1.
Let $X(t),t>0$, be a subcritical time-homogeneous MBP with branching mechanism given by (1) where $0<m<1$. The solution of the backward Kolmogorov equation (3) satisfying $F(t,1)=1$, is
\[ F(t,s)={\mathcal{A}_{0}^{-1}}({e^{{f^{\prime }}(1)t}}{\mathcal{A}_{0}}(s)),\hspace{1em}{\mathcal{A}_{0}}(s)=\frac{1-s}{{(1-ms)^{m}}},\hspace{1em}s<1/m,\]
respectively, the solution of the equation (6) with $R(t,1)=0$, is
\[ R(t,s)={\mathcal{A}^{-1}}({e^{{f^{\prime }}(1)t}}\mathcal{A}(1-s)),\hspace{1em}\mathcal{A}(s)=\frac{s}{{(1-m+ms)^{m}}},\hspace{1em}1-1/m<s.\]
where ${\mathcal{A}_{0}^{-1}}(x)$ and ${\mathcal{A}^{-1}}(x)$ are the composite inverse functions, and
The mathematical expectation of the number of particles alive at the time $t>0$ in the subcritical case has the following exponentially decreasing behaviour,
\[ E[X(t)]=\exp \{{f^{\prime }}(1)t\}={e^{Mt}}<1,\hspace{1em}M={f^{\prime }}(1)=K(m-1)<0,\hspace{1em}m={h^{\prime }}(1).\]
and
Obviously, the following relation follows from (11) and (12)
The range of the function $\mathcal{A}(s)$ is the whole real line $(-\infty ,\infty )$. It is considered as the domain of definition of ${\mathcal{A}^{-1}}(x)$. Knowing the derivatives (11), we find the function ${\mathcal{A}^{-1}}(x)$ in its representation as an exponential generating function in the interval $|x|<1/{m^{m+1}},0<m<1$.
The extinction probability at a positive time, $P(X(t)=0)=F(t,0),t>0$, is expressed by the inverse function
\[ F(t,0)={\mathcal{A}_{0}^{-1}}({e^{Mt}}{\mathcal{A}_{0}}(0))={\mathcal{A}_{0}^{-1}}({e^{Mt}}),\hspace{1em}{e^{Mt}}<1,\hspace{1em}{\mathcal{A}_{0}}(0)=1.\]
Respectively, the survival probability is given as follows,
\[ R(t,0)={\mathcal{A}^{-1}}({e^{Mt}}\mathcal{A}(1))={\mathcal{A}^{-1}}({e^{Mt}})=1-{\mathcal{A}_{0}^{-1}}({e^{Mt}}).\]
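For $m=1/2$ the inverse ${\mathcal{A}^{-1}}$ has a closed algebraic form (given explicitly later in the text), so Lemma 1 and the extinction formula can be checked numerically. A sketch under these assumptions; the choice $K=1$ and the test points are arbitrary:

```python
# Numeric check of Lemma 1 for m = 1/2: F(t,s) = A0^{-1}(e^{Mt} A0(s)) solves
# the backward equation dF/dt = f(F) with F(0,s) = s, where M = K(m-1).
import math

K, m = 1.0, 0.5
M = K * (m - 1)

def A0(s):                      # A0(s) = (1-s)/(1-ms)^m
    return (1 - s) / (1 - m * s) ** m

def Ainv(x):                    # closed-form inverse of A for m = 1/2
    return x * (x + math.sqrt(x * x + 8)) / 4

def F(t, s):                    # A0^{-1}(y) = 1 - A^{-1}(y)
    return 1 - Ainv(math.exp(M * t) * A0(s))

def f(s):                       # infinitesimal generating function K(h(s)-s)
    return K * (1 / (1 + m - m * s) - s)

assert abs(F(0.0, 0.3) - 0.3) < 1e-12            # initial condition F(0,s) = s
for (t, s) in [(0.5, 0.0), (1.0, 0.3), (2.0, 0.7)]:
    dF = (F(t + 1e-6, s) - F(t - 1e-6, s)) / 2e-6
    assert abs(dF - f(F(t, s))) < 1e-6           # backward Kolmogorov equation
print("extinction probability F(t,0) at t=1:", F(1.0, 0.0))
```

The same sketch also gives the survival probability as $R(t,0)=1-F(t,0)$.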
We remark that at the point $s=1/2$ the functions coincide, ${\mathcal{A}_{0}}(1/2)=\mathcal{A}(1/2)$, and the respective vertical asymptotes are symmetric about $s=1/2$. The function ${\mathcal{A}_{0}}(s)$ is decreasing and concave, and all its derivatives ${\mathcal{A}_{0}^{(k)}}(s)$ are negative in the interval $s<1/m$. The function $\mathcal{A}(s)$ is increasing and concave, and its consecutive derivatives are alternately positive and negative in the interval $s>1-1/m$. Namely,
\[ {\mathcal{A}_{0}^{(2k)}}(s)={\mathcal{A}^{(2k)}}(1-s)<0,\hspace{1em}{\mathcal{A}_{0}^{(2k-1)}}(s)=-{\mathcal{A}^{(2k-1)}}(1-s)<0.\]
In detail, denoting the falling and rising factorials as follows:
(9)
\[ {[x]_{k\downarrow }}=x(x-1)\cdots (x-k+1)=\frac{\Gamma (x+1)}{\Gamma (x-k+1)},\]
and
(10)
\[ {[x]_{k\uparrow }}=x(x+1)\cdots (x+k-1)=\frac{\Gamma (x+k)}{\Gamma (x)},\]
we write the derivatives in the following form,
\[ {\mathcal{A}^{(k)}}(s)=\frac{{(-1)^{k-1}}{[m]_{(k-1)\uparrow }}{\theta ^{k-1}}(k+ms)}{{(1-m)^{m}}{(1+\theta s)^{m+k}}},\hspace{1em}\theta =\frac{m}{1-m}>0.\]
In particular, for $k=1,2,\dots $, at the points $s=0$ and $s=1$ we have
(11)
\[ {a_{k}}={\mathcal{A}^{(k)}}(0)=\frac{{(-1)^{k-1}}k{[m]_{(k-1)\uparrow }}{\theta ^{k-1}}}{{(1-m)^{m}}}={(-1)^{k}}{\mathcal{A}_{0}^{(k)}}(1),\]
(12)
\[ {\mathcal{A}^{(k)}}(1)=\frac{{(-1)^{k-1}}{[m]_{(k-1)\uparrow }}{\theta ^{k-1}}(k+m)}{{(1-m)^{m}}{(1+\theta )^{m+k}}}={(-1)^{k}}{\mathcal{A}_{0}^{(k)}}(0).\]
2.1 Lagrange inversion and special functions
Now, we look for the properties of the composite inverses of the functions ${\mathcal{A}_{0}}(s)$ and $\mathcal{A}(s)$. The function $\mathcal{A}(s)$ and its inverse ${\mathcal{A}^{-1}}(s)$ are particular cases of the Wright function [11, 16]
\[ {_{1}}{\Psi _{1}}(\alpha ,a;\beta ,b;z):={\sum \limits_{n=0}^{\infty }}\frac{\Gamma (\alpha n+a)}{\Gamma (\beta n+b)}\frac{{z^{n}}}{n!},\]
and the Gauss hypergeometric function [8, 3]
\[ {_{2}}{F_{1}}(c,d;g;z):={\sum \limits_{k=0}^{\infty }}\frac{{[c]_{k\uparrow }}{[d]_{k\uparrow }}}{{[g]_{k\uparrow }}}\frac{{z^{k}}}{k!},{\hspace{1em}_{2}}{F_{1}}\left(c,d;d;z\right)={\left(\frac{1}{1-z}\right)^{c}},\hspace{1em}|z|<1.\]
Using the previous definitions, it will be proved in the following theorem that, for $|x|<1/{m^{m+1}}$ and $0<m<1$, the inverse function ${\mathcal{A}^{-1}}(x)$ admits a series representation in terms of the Wright function.
Obviously, with the notation $\theta =\frac{m}{1-m}>0$, $0<m<1$, we see that
(13)
\[ \mathcal{A}(s)=\frac{s}{{(1-m)^{m}}{(1+\theta s)^{m}}}{=_{2}}{F_{1}}(m,1;1;-\theta s)\frac{s}{{(1-m)^{m}}},\hspace{1em}|\theta s|<1.\]
Theorem 1.
Let us consider the function ${\mathcal{A}_{0}}(s)$ given by
\[ {\mathcal{A}_{0}}(s)=\mathcal{A}(1-s)=\frac{1-s}{{(1-ms)^{m}}},\hspace{1em}0<m<1,\hspace{1em}{\mathcal{A}_{0}}(0)=1,\hspace{1em}{\mathcal{A}_{0}}(1)=0.\]
Then the following representation is valid for its composite inverse function
where
Proof.
First of all, we confirm the relation
\[ {\mathcal{A}_{0}^{-1}}({\mathcal{A}_{0}}(s))=1-{\mathcal{A}^{-1}}({\mathcal{A}_{0}}(s))=1-{\mathcal{A}^{-1}}(\mathcal{A}(1-s))=1-(1-s)=s.\]
The function $\mathcal{A}(s)$ is represented as an exponential generating function in the neighbourhood of zero, $|s|<1/\theta =(1-m)/m$, $0<m<1$ (13), with coefficients ${a_{k}}={\mathcal{A}^{(k)}}(0)$ (11), as follows
(16)
\[ \mathcal{A}(s)={\sum \limits_{k=1}^{\infty }}\frac{k{(-1)^{k-1}}{[m]_{(k-1)\uparrow }}}{{(1-m)^{m}}}{\left(\frac{m}{1-m}\right)^{k-1}}\frac{{s^{k}}}{k!},\hspace{1em}{a_{1}}=\frac{1}{{(1-m)^{m}}}>1.\]
To apply the Lagrange inversion method to it, we use the following representation,
\[ \mathcal{A}(s)=\frac{s}{g(s)},\hspace{1em}g(s)={(1-m)^{m}}{\left(1+\frac{ms}{1-m}\right)^{m}},\hspace{1em}g(0)={(1-m)^{m}}>0,\]
and obviously $g(1)=1$, see [3, 8]. Denote the inverse function in its series expansion as
Then the coefficients ${b_{k}}$ are given by the derivatives of the function ${(g(s))^{k}}$ at the point $s=0$ as follows,
The Taylor series expansion of the function ${(g(s))^{k}}$ is given by the binomial coefficients as follows,
\[ {(g(s))^{k}}={(1-m)^{mk}}{\sum \limits_{j=0}^{\infty }}{\left(\frac{ms}{1-m}\right)^{j}}\frac{{[mk]_{j\downarrow }}}{j!}.\]
The derivative of order j of this function at the point $s=0$ is
\[ \frac{{d^{j}}}{d{s^{j}}}{[{(g(s))^{k}}]_{(s=0)}}={(1-m)^{mk}}{\left(\frac{m}{1-m}\right)^{j}}{[mk]_{j\downarrow }}.\]
It is enough to take $j=k-1$ in order to obtain the coefficient ${b_{k}}$ in (15). Then
\[ {\mathcal{A}^{-1}}(x)={\sum \limits_{k=1}^{\infty }}{(1-m)^{mk}}{\left(\frac{m}{1-m}\right)^{k-1}}{[mk]_{(k-1)\downarrow }}\left(\frac{{x^{k}}}{k!}\right),\hspace{1em}{b_{1}}={(1-m)^{m}}<1.\]
Applying the definition of the falling factorials via the Gamma function (9), (10),
\[ \frac{\Gamma (mk+1)}{\Gamma (mk+1-k+1)}=\frac{mk\Gamma (mk)}{\Gamma ((m-1)k+2)}=\frac{mk\Gamma (m(k-1)+m)}{\Gamma ((m-1)(k-1)+m+1)}\]
we find
\[ {\mathcal{A}^{-1}}(x)=m{(1-m)^{m}}x{\sum \limits_{k=1}^{\infty }}{\left(\frac{m}{{(1-m)^{1-m}}}\right)^{k-1}}\frac{\Gamma (m(k-1)+m)}{\Gamma ((m-1)(k-1)+m+1)}\frac{{x^{k-1}}}{(k-1)!}.\]
The change of variable $k-1=j$ leads to the representation (14).
We remark that the coefficients ${b_{k}}$ may be positive, zero or negative, due to the falling factorials. This is confirmed when the reflection formulas for the Gamma function
\[ \Gamma (z)\Gamma (-z)=\frac{-\pi }{z\sin (\pi z)},\hspace{1em}\Gamma (z)\Gamma (1-z)=\frac{\pi }{\sin (\pi z)},\hspace{1em}z\ne 0,\pm 1,\pm 2,\dots ,\]
are applied to the representation
\[ {[mk]_{(k-1)\downarrow }}=\frac{\Gamma (mk+1)}{\Gamma (1+(1-(1-m)k))}=\frac{\Gamma (mk+1)\Gamma ((1-m)k)\sin (\pi (1-m)k)}{\pi (1-(1-m)k)}.\]
Obviously, whenever $k(1-m)=j$ for some integer $j=2,3,\dots $, the value ${b_{k}}=0$. For this reason, in the considered case, the function ${\mathcal{A}^{-1}}(x)$ is not a p.g.f.
The radius of convergence of the series expansion
\[ {\mathcal{A}^{-1}}(x)={\sum \limits_{k=1}^{\infty }}\frac{{b_{k}}{x^{k}}}{k!},\hspace{1em}{b_{k}}={\left(\frac{m}{{(1-m)^{1-m}}}\right)^{k}}\frac{(1-m){[mk]_{(k-1)\downarrow }}}{m},\]
is calculated with a root test based on
\[ \underset{k\to \infty }{\limsup }\sqrt[k]{\frac{|{b_{k}}|}{k!}}=\left(\frac{m}{{(1-m)^{1-m}}}\right)\underset{k\to \infty }{\limsup }\sqrt[k]{\frac{(1-m)|{[mk]_{(k-1)\downarrow }}|}{mk!}}\]
and Stirling's formula for the Gamma function. When $k\to \infty $ we have
and
\[ k!\sim \sqrt{2\pi k}{\left(\frac{k}{e}\right)^{k}},\hspace{1em}\Gamma ((1-m)k)\sim \frac{\sqrt{2\pi (1-m)k}}{(1-m)k}{\left(\frac{(1-m)k}{e}\right)^{(1-m)k}}.\]
We extract the factor
\[ {\left\{{\left(\frac{mk}{e}\right)^{m}}{\left(\frac{(1-m)k}{e}\right)^{(1-m)}}\frac{e}{k}\right\}^{k}}={\{{m^{m}}{(1-m)^{1-m}}\}^{k}}.\]
Then, obviously,
\[ \underset{k\to \infty }{\limsup }\sqrt[2k]{\frac{2\pi (1-m)}{mk}}=1,\hspace{1em}\underset{k\to \infty }{\limsup }\sqrt[k]{|\sin (\pi (1-m)k)|}=1.\]
And finally,
\[ \underset{k\to \infty }{\limsup }\sqrt[k]{\frac{|{b_{k}}|}{k!}}=\left(\frac{m}{{(1-m)^{1-m}}}\right)\{{m^{m}}{(1-m)^{1-m}}\}={m^{1+m}}.\]
We obtain the radius of convergence of the series expansion of the inverse function ${\mathcal{A}^{-1}}(x)$ as a function of the parameter m as follows,
□
In particular, when the parameter $0<m<1$ is a rational number, the inverse function can be calculated as a solution of a corresponding algebraic equation. For example, when $m=1/2$ then
\[ \mathcal{A}(s)=\frac{\sqrt{2}s}{\sqrt{1+s}},\hspace{1em}{\mathcal{A}^{-1}}(x)=\frac{x(x+\sqrt{{x^{2}}+8})}{4},\]
and if $m=1/3$ then
\[ \mathcal{A}(s)=\frac{\sqrt[3]{3}s}{\sqrt[3]{2+s}},\hspace{1em}{\mathcal{A}^{-1}}(x)=\frac{x}{\sqrt[3]{3}}\left\{\sqrt[3]{1+\sqrt{1-\frac{{x^{3}}}{81}}}+\sqrt[3]{1-\sqrt{1-\frac{{x^{3}}}{81}}}\right\},\]
and if $m=2/3$ then
\[ \mathcal{A}(s)=\frac{\sqrt[3]{9}s}{\sqrt[3]{{(1+2s)^{2}}}},\]
and
\[\begin{array}{l}\displaystyle {\mathcal{A}^{-1}}(x)=\frac{x}{\sqrt[3]{18}}\left\{\sqrt[3]{{\left(\frac{2}{3}\right)^{7}}{x^{6}}+\frac{16{x^{3}}}{27}+1+\sqrt{{\left(\frac{2}{3}\right)^{5}}{x^{3}}+1}}\right\}\\ {} \displaystyle +\frac{x}{\sqrt[3]{18}}\left\{\sqrt[3]{{\left(\frac{2}{3}\right)^{7}}{x^{6}}+\frac{16{x^{3}}}{27}+1-\sqrt{{\left(\frac{2}{3}\right)^{5}}{x^{3}}+1}}\right\}+\frac{4{x^{3}}}{27}.\end{array}\]
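The closed forms for rational m can be verified directly against $\mathcal{A}(s)$. A small sketch for $m=1/2$ and $m=1/3$; the test points are arbitrary:

```python
# Check that the algebraic inverses satisfy A(A^{-1}(x)) = x,
# with A(s) = s / ((1-m)^m (1 + ms/(1-m))^m).
import math

def A(s, m):
    return s / ((1 - m) ** m * (1 + m * s / (1 - m)) ** m)

def Ainv_half(x):               # m = 1/2
    return x * (x + math.sqrt(x * x + 8)) / 4

def Ainv_third(x):              # m = 1/3, real for x^3 <= 81
    t = math.sqrt(1 - x ** 3 / 81)
    return x / 3 ** (1 / 3) * ((1 + t) ** (1 / 3) + (1 - t) ** (1 / 3))

for x in [0.1, 0.5, 1.0, 2.0]:
    assert abs(A(Ainv_half(x), 0.5) - x) < 1e-10
    assert abs(A(Ainv_third(x), 1 / 3) - x) < 1e-10
```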
Respectively, their radii of convergence are as follows
With the aim of describing the behaviour of the process $X(t),t>0$, we consider the factorial moments $E{[X(t)]_{n\downarrow }}={F_{s}^{(n)}}(t,1)$. The derivatives of the p.g.f. $F(t,s)$ are expressed by the partial Bell polynomials defined as
\[ {B_{n,k}}({a_{\bullet }})=\sum \limits_{({k_{1}},{k_{2}},\dots ,{k_{n}})}\frac{n!{a_{1}^{{k_{1}}}}...{a_{n}^{{k_{n}}}}}{{k_{1}}!{(1!)^{{k_{1}}}}...{k_{n}}!{(n!)^{{k_{n}}}}},\hspace{1em}{a_{\bullet }}=({a_{1}},{a_{2}},\dots ),\]
see [21]. The sum is over all partitions of n into k parts, that is over all nonnegative integer solutions $({k_{1}},{k_{2}},\dots ,{k_{n}})$ of the equations:
The following theorem is based on the inequality (17), showing that the convergence interval of the series ${\textstyle\sum _{k=1}^{\infty }}{b_{k}}{s^{k}}/k!$ contains the point $s=1$.
Theorem 2.
Let $X(t),t>0$, be a subcritical time-homogeneous MBP with branching mechanism given by (1) where $0<m<1$. Then the factorial moments $E{[X(t)]_{n\downarrow }}={F_{s}^{(n)}}(t,1)$ are expressed by the mathematical expectation ${e^{Mt}}=E[X(t)]={F^{\prime }_{s}}(t,1)$ and partial Bell polynomials ${B_{n,k}}({a_{\bullet }})$ as follows,
\[ {F_{s}^{(n)}}(t,1)={(-1)^{n+1}}{\sum \limits_{k=1}^{n}}{B_{n,k}}({a_{\bullet }}){b_{k}}{({e^{Mt}})^{k}},\hspace{1em}M<0,\]
where the sequences ${a_{\bullet }}=({a_{1}},{a_{2}},\dots )$ and ${b_{\bullet }}=({b_{1}},{b_{2}},\dots )$ are defined as (11) and (15).
Proof.
As we have seen previously, the solution of the equation (6) is
Then, exchanging the order of summation we obtain from (18),
\[ 1-F(t,s)={\mathcal{A}^{-1}}({e^{Mt}}\mathcal{A}(1-s))={\sum \limits_{k=1}^{\infty }}\frac{{b_{k}}{({e^{Mt}})^{k}}{(\mathcal{A}(1-s))^{k}}}{k!}.\]
The change of variable $z=1-s$, $s\to 1$, $z\to 0$, allows us to work with the representation (16) of $\mathcal{A}$ as an exponential generating function in the neighbourhood of zero. The powers of exponential generating functions, see [21], are given by
(18)
\[ \frac{{(\mathcal{A}(z))^{k}}}{k!}=\frac{1}{k!}{\left({\sum \limits_{j=1}^{\infty }}{a_{j}}\frac{{z^{j}}}{j!}\right)^{k}}={\sum \limits_{n=k}^{\infty }}{B_{n,k}}({a_{\bullet }})\frac{{z^{n}}}{n!}.\]
\[\begin{array}{l}\displaystyle 1-F(t,s)={\sum \limits_{k=1}^{\infty }}{b_{k}}{({e^{Mt}})^{k}}{\sum \limits_{n=k}^{\infty }}{B_{n,k}}({a_{\bullet }})\frac{{(1-s)^{n}}}{n!}\\ {} \displaystyle ={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}{B_{n,k}}({a_{\bullet }}){b_{k}}{({e^{Mt}})^{k}}\frac{{(1-s)^{n}}}{n!}.\end{array}\]
□
Note that applying the Faà di Bruno formula for the derivatives of the composite functions $\mathcal{A}({\mathcal{A}^{-1}}(x))=x$ and ${\mathcal{A}^{-1}}(\mathcal{A}(s))=s$ gives
\[ {B_{1,1}}({a_{\bullet }}){b_{1}}=1,\hspace{1em}{\sum \limits_{k=1}^{n}}{B_{n,k}}({a_{\bullet }}){b_{k}}=0,\hspace{1em}n=2,3,\dots \]
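This orthogonality of the sequences ${a_{\bullet }}$ and ${b_{\bullet }}$ can be checked numerically with a direct recursion for the partial Bell polynomials. A sketch for $m=1/2$, using only the explicit formulas (11) and (15):

```python
# Verify B_{1,1}(a) b_1 = 1 and sum_{k=1}^n B_{n,k}(a) b_k = 0 for n >= 2,
# with a_k = A^{(k)}(0) and b_k the Lagrange-inversion coefficients.
import math

m = 0.5
theta = m / (1 - m)

def rising(x, n):
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def falling(x, n):
    out = 1.0
    for i in range(n):
        out *= x - i
    return out

def a(k):                        # formula (11)
    return (-1) ** (k - 1) * k * rising(m, k - 1) * theta ** (k - 1) / (1 - m) ** m

def b(k):                        # formula (15)
    return (m / (1 - m) ** (1 - m)) ** k * (1 - m) / m * falling(m * k, k - 1)

def bell_part(n, k):             # partial Bell polynomial B_{n,k}(a_1, a_2, ...)
    if n == 0 and k == 0:
        return 1.0
    if n == 0 or k == 0:
        return 0.0
    return sum(math.comb(n - 1, j - 1) * a(j) * bell_part(n - j, k - 1)
               for j in range(1, n - k + 2))

assert abs(bell_part(1, 1) * b(1) - 1) < 1e-12
for n in range(2, 7):
    total = sum(bell_part(n, k) * b(k) for k in range(1, n + 1))
    assert abs(total) < 1e-9
```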
This means that all factorial moments ${F_{s}^{(k)}}(t,1)$ of order $k=2,3,\dots $ contain the factor ${e^{Mt}}(1-{e^{Mt}})$.
Finally, an important aspect of the process behaviour, the conditional limit probability, is obtained in the following theorem.
Theorem 3.
Let $X(t),t>0$, be a subcritical time-homogeneous MBP with branching mechanism given by (1) where $0<m<1$. Then the conditional limit probability exists and is given by
with p.g.f.
\[ {F^{\ast }}(s)={\sum \limits_{n=1}^{\infty }}{p_{n}}{s^{n}}=1-\frac{1}{{(1-ms)^{m}}}+\frac{s}{{(1-ms)^{m}}}.\]
The p.m.f. is defined by the rising factorials (10), ${[m]_{(k-1)\uparrow }}$, as follows
\[ {p_{k}}=\frac{{m^{k-1}}(1-m)(m+k){[m]_{(k-1)\uparrow }}}{k!}>0,\hspace{1em}0<m<1,\hspace{1em}k=1,2,\dots .\]
The factorial moments $E{[\xi ]_{k\downarrow }}$ of the limit random variable are
Proof.
The p.g.f. of the conditional probability is written as follows
\[ {\sum \limits_{n=1}^{\infty }}{s^{n}}P(X(t)=n|X(t)>0)=\frac{F(t,s)-F(t,0)}{1-F(t,0)}=1-\frac{R(t,s)}{R(t,0)}.\]
Knowing the differentiability of the function ${\mathcal{A}^{-1}}$ in the neighbourhood of zero, we apply l'Hôpital's rule to find the limit
\[ \underset{t\to \infty }{\lim }\frac{R(t,s)}{R(t,0)}=\underset{t\to \infty }{\lim }\frac{{\mathcal{A}^{-1}}({e^{Mt}}\mathcal{A}(1-s))}{{\mathcal{A}^{-1}}({e^{Mt}})}=\mathcal{A}(1-s)={\mathcal{A}_{0}}(s)=\frac{1-s}{{(1-ms)^{m}}}.\]
To obtain the p.m.f. we consider the Taylor series expansion of the p.g.f. ${F^{\ast }}(s)$ in the neighbourhood of zero,
\[ {F^{\ast }}(s)=1-{\mathcal{A}_{0}}(s)=1-{\sum \limits_{k=0}^{\infty }}\frac{{[m]_{k\uparrow }}{(ms)^{k}}}{k!}+s{\sum \limits_{j=0}^{\infty }}\frac{{[m]_{j\uparrow }}{(ms)^{j}}}{j!}.\]
The change of variable $j+1=k$ in the second sum leads to,
\[\begin{array}{l}\displaystyle {F^{\ast }}(s)={\sum \limits_{k=1}^{\infty }}\left(\frac{-{[m]_{k\uparrow }}{(m)^{k}}}{k!}+\frac{{[m]_{(k-1)\uparrow }}{(m)^{k-1}}}{(k-1)!}\right){s^{k}}\\ {} \displaystyle ={\sum \limits_{k=1}^{\infty }}\frac{{[m]_{(k-1)\uparrow }}{(m)^{k-1}}}{(k-1)!}\left(1-\frac{m(m+k-1)}{k}\right){s^{k}}\\ {} \displaystyle =(1-m){\sum \limits_{k=1}^{\infty }}\frac{{m^{k-1}}{s^{k}}(m+k){[m]_{(k-1)\uparrow }}}{k!}.\end{array}\]
We remark that
\[ \left(1-\frac{m(m+k-1)}{k}\right)=\left(\frac{k-{m^{2}}-mk+m}{k}\right)=\left(\frac{m(1-m)+k(1-m)}{k}\right).\]
The relation between the derivatives of the functions $\mathcal{A}$ and ${\mathcal{A}_{0}}$ at the points $s=0$ and $s=1$ is obvious. In this way, we recognise the factorial moments ${f_{k}}$ as given previously in (11), and also their relation with the p.m.f. ${p_{k}}$ in (12). □
In a more general view, the conditional limit distribution of the studied process can also be considered as a mixture of shifted Negative-Binomial distributions, closely connected to the family of delta Lagrangian probability distributions [3, 8]. The probabilities ${p_{k}}$ of this process can be computed numerically by direct implementation of Theorem 3. The computation is recurrent and relies on the following ratio (19), for $k=1,2,\dots $,
The ratio (19) shows that the Panjer recursion (even generalized) does not apply. The inequality ${\Upsilon _{k}}(m)<1$ is equivalent to the following
where $(m+1-2{m^{2}})>0$ when $-1/2<m<1$. It is important to note that the ratio (19) is increasing as a function of the parameter m, namely
The histogram of the p.m.f. can be generated by consecutive multiplication by ${\Upsilon _{k}}(m)<1$, where the maximum is given by ${p_{1}}=1-{m^{2}}$. Some values of ${\Upsilon _{k}}(m)$ and the probabilities ${p_{k}}$ computed from them are shown in Figure 1.
(19)
\[ {\Upsilon _{k}}(m)=\frac{{p_{k+1}}}{{p_{k}}}=m\left(1+\frac{m}{k+1}\right)\left(1-\frac{1}{k+m}\right)<1,\hspace{1em}0<m<1.\]
Mean, variance, skewness and kurtosis can easily be expressed by the factorial moments. In particular, for several factorial moments we have,
\[ E[\xi ]={f_{1}}=\frac{1}{{(1-m)^{m}}},\hspace{1em}{f_{2}}=\frac{2{m^{2}}}{{(1-m)^{m+1}}},\hspace{1em}{f_{3}}=\frac{3{m^{3}}(m+1)}{{(1-m)^{m+2}}}.\]
The central moments, respectively, are as follows
\[ Var[\xi ]={f_{2}}+{f_{1}}-{({f_{1}})^{2}}=\frac{1}{{(1-m)^{m}}}\left\{\frac{2{m^{2}}}{1-m}+1-\frac{1}{{(1-m)^{m}}}\right\},\]
and
The main parameter for applications is the variance-to-mean ratio (VMR) or index of dispersion equal respectively to
The VMR obtains the value one at the threshold parameter ${m^{\ast }}$, which is estimated as $0.58905<{m^{\ast }}<0.589058$. This result is obtained after the numeric solution of the equation $\mathrm{VMR}(m)=1$
with a precision higher than ${10^{-5}}$. The conditional limit probability is over-dispersed when the branching process $X(t),t>0$, is near critical, ${m^{\ast }}<m<1$, and under-dispersed when $0<m<{m^{\ast }}<1$. The values of the VMR are shown in Figure 2.
Fig. 1.
The graphics show ${\Upsilon _{k}}(m)$ (left) and the related values of ${p_{k}}$ (right), computed for $m=1/3$ (red), $m=1/2$ (green) and $m=2/3$ (blue)
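The threshold ${m^{\ast }}$ can be reproduced numerically from the variance and mean given above, which yield $\mathrm{VMR}(m)=2{m^{2}}/(1-m)+1-{(1-m)^{-m}}$. A sketch assuming scipy is available:

```python
# Solve VMR(m) = 1 for the dispersion threshold m*, where
# VMR(m) = Var[xi]/E[xi] = 2m^2/(1-m) + 1 - (1-m)^{-m}.
from scipy.optimize import brentq

def vmr(m):
    return 2 * m * m / (1 - m) + 1 - (1 - m) ** (-m)

mstar = brentq(lambda m: vmr(m) - 1, 0.3, 0.8, xtol=1e-12)
assert abs(mstar - 0.589) < 1e-3       # threshold near m = 0.589
assert vmr(0.3) < 1 < vmr(0.9)         # under- and over-dispersed regimes
print("threshold m*:", round(mstar, 6))
```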
In the class of Panjer probability distributions, VMR $=1$ for the Poisson distribution, VMR $>1$ for the geometric and Negative-Binomial distributions, and VMR $<1$ for the binomial distribution. This enables the index of dispersion to be used to assess whether observed data can be modeled by a Poisson process. An under-dispersed distribution corresponds to relatively regular randomness. If the index of dispersion is larger than 1, the data may exhibit clusters of occurrences.
Remark 1.
In its general setting, the problem of conditional limit behaviour of MBP is studied in the book [19]. It is interesting to compare the infinitesimal geometric branching reproduction with a reproduction defined by the quadratic p.g.f. having the same offspring’s mean and variance. Let the p.g.f. of the offspring’s numbers for the process $Y(t)$ be given by
\[ u(s)=({m^{2}}-m+1)+s(m-2{m^{2}})+{m^{2}}{s^{2}},\hspace{1em}{u^{\prime }}(1)=m,\hspace{1em}{u^{\prime\prime }}(1)=2{m^{2}}.\]
The infinitesimal generating function is
\[ v(s)=K(m-1)(s-1)+K{m^{2}}{(s-1)^{2}},\hspace{1em}{v^{\prime }}(1)={f^{\prime }}(1)=M,\hspace{1em}{v^{\prime\prime }}(1)={f^{\prime\prime }}(1).\]
Then we have the decomposition
\[ \frac{{v^{\prime }}(1)}{v(s)}=\frac{1}{s-1}-\frac{\varrho }{\varrho s-1},\hspace{1em}\varrho =\frac{{m^{2}}}{{m^{2}}+(1-m)},\hspace{1em}1-\varrho =\frac{(1-m)}{{m^{2}}+(1-m)}.\]
We denote the corresponding functions expressing the solutions of the Kolmogorov equations by the letters ${\mathcal{D}_{0}}$ and $\mathcal{D}$,
\[ {\mathcal{D}_{0}}(s)=\exp \left\{{\int _{0}^{s}}\frac{{v^{\prime }}(1)dx}{v(x)}\right\}=\frac{1-s}{1-\varrho s}=\frac{(1-s)}{1-\varrho +\varrho (1-s)},\hspace{1em}\mathcal{D}(s)=\frac{s}{1-\varrho +\varrho s}.\]
Then
\[ {\mathcal{D}_{0}^{-1}}(s)={\mathcal{D}_{0}}(s),\hspace{1em}{\mathcal{D}^{-1}}(s)=1-{\mathcal{D}_{0}}(s)=\frac{(1-\varrho )s}{1-\varrho s}={D^{\ast }}(s).\]
We see that the conditional limit distribution is a shifted geometric distribution with parameter $\varrho <1$. The solution of the equation corresponding to (5, 6) is given by
\[ r(t,s)={\mathcal{D}^{-1}}({e^{Mt}}\mathcal{D}(1-s))=\frac{{e^{Mt}}(1-s)}{1+(1-{e^{Mt}})(1-s)\varrho /(1-\varrho )},\]
where
\[ \frac{\varrho }{1-\varrho }=\frac{{m^{2}}}{1-m}=\frac{{f^{\prime\prime }}(1)}{-2{f^{\prime }}(1)}.\]
The transient phenomena, when the process $X(t)$ is near critical, were introduced by B. A. Sevastyanov in 1959 [19, 18]. They take place when $(t\to \infty ,m\to 1)$ as follows,
3 Critical geometric branching
Let the reproduction of particles be of mean $m=1$; therefore $q=1$ and the mathematical expectation is $E[X(t)]=1$ for any $t>0$. Hence, the p.g.f. is
(20)
\[ h(s)=\frac{1}{2-s},\hspace{1em}f(s)=\frac{K{(1-s)^{2}}}{2-s},\hspace{1em}\frac{K}{f(s)}=\frac{1}{{(1-s)^{2}}}+\frac{1}{1-s}.\]
The backward Kolmogorov equation is the same as (3), but with a new value of the parameter (4), namely ${f^{\prime }}(1)=0$. The analytical difference between the critical and the subcritical MBP starts with the decomposition (7) and hence (8). In the critical case, we analogously obtain the following indefinite integral primitive, for $x\ne 1$,
(21)
\[ \int \frac{Kdx}{f(x)}=\int \left(\frac{1}{{(1-x)^{2}}}+\frac{1}{1-x}\right)dx=\log \left(\frac{1}{|1-x|}\exp \left(\frac{1}{1-x}\right)\right).\]
Let us introduce the composite function $\mathcal{C}(s)=V(G(s)),s\ne 1$, such that,
(22)
\[ \mathcal{C}(s)=\frac{1}{1-s}\exp \left(\frac{1}{1-s}\right),\hspace{1em}\frac{{\mathcal{C}^{\prime }}(s)}{\mathcal{C}(s)}=\frac{K}{f(s)},\hspace{1em}V(x)=x{e^{x}},\hspace{1em}G(s)=\frac{1}{1-s}.\]
Then, obviously, knowing (21), the solution $F(t,s)$ of the backward Kolmogorov equation in the critical case is expressed by the following relation,
\[ \mathcal{C}(F(t,s))={e^{Kt}}\mathcal{C}(s),\hspace{1em}F(t,s)={\mathcal{C}^{-1}}({e^{Kt}}\mathcal{C}(s)).\]
The function $\mathcal{C}(s)$ (22) has a vertical asymptote at the point $s=1$. First of all, we study its behaviour to the left and to the right of the vertical line $s=1$, in particular the domain where it is increasing, and then we look for its inverse. The first derivative
\[ {\mathcal{C}^{\prime }}(s)=\frac{2-s}{{(1-s)^{3}}}\exp \left(\frac{1}{1-s}\right),\hspace{1em}s\ne 1,\hspace{1em}\mathcal{C}(0)=e,\hspace{1em}{\mathcal{C}^{\prime }}(0)=2e,\]
shows that $\mathcal{C}(s)$ is increasing in the two intervals $-\infty <s<1$ and $2<s<\infty $. Note that $V(G(s))$ is negative to the right of the point $s=1$, decreasing in the interval $1<s<2$, and has a minimum at the point $s=2$. Only the interval $-\infty <s<1$, where $\mathcal{C}(s)$ is positive and increasing, is convenient for the definition of the inverse function: ${\mathcal{C}^{-1}}(x)=s,x>0$, if and only if $x=\mathcal{C}(s),s<1$. Obviously, the inverse function ${\mathcal{C}^{-1}}(x)$ has a vertical asymptote at the point $x=0$ and ${\lim \nolimits_{x\to \infty }}{\mathcal{C}^{-1}}(x)=1$.
An important advantage of the composite function $\mathcal{C}(s)=V(G(s))$ is that all higher order derivatives at zero can be expressed by the Lah numbers [21], as shown in the following lemma.
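Before turning to the lemma, note that solving $\log \mathcal{C}(F(t,s))=Kt+\log \mathcal{C}(s)$, which follows from (21), gives the critical solution through the Lambert-W function: since $V^{-1}=W$, we have ${\mathcal{C}^{-1}}(x)=1-1/W(x)$, and with $\mathcal{C}(0)=e$ the extinction probability becomes $F(t,0)=1-1/W({e^{Kt+1}})$. A sketch checking this against direct numerical integration of the backward equation, assuming scipy is available and taking $K=1$:

```python
# Critical case: extinction probability F(t,0) = 1 - 1/W(e^{Kt+1})
# versus numerical integration of dF/dt = K(1-F)^2/(2-F), F(0) = 0.
import numpy as np
from scipy.special import lambertw
from scipy.integrate import solve_ivp

K = 1.0

def extinction(t):
    return 1 - 1 / lambertw(np.exp(K * t + 1)).real

sol = solve_ivp(lambda t, F: K * (1 - F) ** 2 / (2 - F), (0, 5), [0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

assert abs(extinction(0.0)) < 1e-12            # W(e) = 1, so F(0,0) = 0
for t in [0.5, 1.0, 2.0, 5.0]:
    assert abs(extinction(t) - sol.sol(t)[0]) < 1e-6
```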
Lemma 2.
Let the composite function $\mathcal{C}(s)=V(G(s)),s\ne 1$, satisfy (22). Then the Taylor series expansion of the function $\mathcal{C}(s)$ in the neighbourhood of zero is expressed by the Lah numbers as follows,
(23)
\[ \mathcal{C}(s)=e+{\mathcal{C}_{0}}(s),\hspace{1em}{\mathcal{C}_{0}}(s)={\sum \limits_{k=1}^{\infty }}\frac{{c_{k}}{s^{k}}}{k!},\hspace{1em}{c_{n}}=e{\sum \limits_{k=1}^{n}}(k+1)L(n,k),\]
where the Lah numbers $L(n,k)={B_{n,k}}(1!,2!,3!,\dots )$ are defined by the partial Bell polynomials over the sequence of factorials.
Proof.
At first, directly from the definition we have $\mathcal{C}(0)=V(1)=e$. Now, let us consider the derivatives of the function
Then, at the point $x=1$ all derivatives are the multiples of the number e, as follows,
and at the point $x=-1$ as follows
In particular, at the point $s=0$ the corresponding derivatives are
\[ V(-1)=-{e^{-1}},\hspace{1em}{V^{\prime }}(-1)=0,\dots ,{V^{(n)}}(-1)=(n-1){e^{-1}}>0,\hspace{1em}n=2,3,\dots .\]
We remark that the first derivative ${V^{\prime }}(x)<0$ in the interval $(-\infty ,-1)$ and the function $V(x)$ is decreasing in this interval. But ${V^{(n)}}(-n)=0$ and the function $V(x)$ changes convexity at these points. Moreover,
\[ \underset{x\to \infty }{\lim }{V^{(n)}}(x)=+\infty ,\hspace{1em}\underset{x\to -\infty }{\lim }{V^{(n)}}(x)=0.\]
The linear-fractional function $G(s)$ has the following derivatives for $s\ne 1$,
\[ G(s)=\frac{1}{1-s},\hspace{1em}{G^{(n)}}(s)=\frac{n!}{{(1-s)^{n+1}}},\hspace{1em}{G^{(n)}}(0)=n!.\]
We have, left of the point $s={1_{-}},s<1$,
and right of the point $s={1_{+}},s>1$,
\[ \underset{s\to {1_{+}}}{\lim }{G^{(2n)}}(s)=-\infty ,\hspace{1em}\underset{s\to {1_{+}}}{\lim }{G^{(2n+1)}}(s)=+\infty ,\hspace{1em}n=0,1,2,\dots .\]
The $n$-th derivative of the composite function $\mathcal{C}(s)=V(G(s))$ at any point $s\ne 1$ is given by the Faà di Bruno formula,
(24)
\[ {\mathcal{C}^{(n)}}(s)={\sum \limits_{k=1}^{n}}{V^{(k)}}(G(s)){B_{n,k}}({G_{\bullet }}),\hspace{1em}({G_{\bullet }})=({G^{(n)}}(s),n=1,2,\dots ).\]
In particular, at the point $s=0$ the corresponding derivatives are
\[ {\mathcal{C}^{(n)}}(0)={\sum \limits_{k=1}^{n}}{V^{(k)}}(1){B_{n,k}}({g_{\bullet }}),\hspace{1em}({g_{\bullet }})=({g_{n}}={G^{(n)}}(0),n=1,2,\dots ).\]
Namely,
\[ G(0)=1,\hspace{1em}{V^{(k)}}(1)=e(k+1),\hspace{1em}{G^{(j)}}(0)=j!,\hspace{1em}{B_{n,k}}(\bullet !)=L(n,k).\]
The Taylor series expansion confirms the representation (23) with ${c_{n}}={\mathcal{C}^{(n)}}(0)$.  □
A direct outcome of the lemma is the explanation of how the inverse function ${\mathcal{C}^{-1}}(x)$, $x>0$, approaches its horizontal asymptote. The conclusion follows from (24), implying the asymptotic behaviour of the derivatives in the neighbourhood of the vertical asymptote, from the left and from the right,
\[ \underset{s\to {1_{-}}}{\lim }{\mathcal{C}^{(n)}}(s)=+\infty ,\hspace{1em}\underset{s\to {1_{+}}}{\lim }{\mathcal{C}^{(n)}}(s)=0,\hspace{1em}{\mathcal{C}^{(n)}}(1+1/k)=0,\hspace{1em}k=2,3,\dots .\]
The purpose of the lemma is that if the coefficients ${c_{k}}$ in the Taylor series expansion (23) of the function $\mathcal{C}(s)$ are known, we can apply the Lagrange inversion method in its general setting to obtain the representation
\[ {\mathcal{C}_{0}}(s)={\sum \limits_{k=1}^{\infty }}\frac{{c_{k}}{s^{k}}}{k!},\hspace{1em}\hspace{1em}{\mathcal{C}_{0}^{-1}}(x)={\sum \limits_{k=1}^{\infty }}\frac{{d_{k}}{x^{k}}}{k!},\hspace{1em}{\mathcal{C}^{-1}}(x)={\mathcal{C}_{0}^{-1}}(x-e),\hspace{1em}x>0,\]
where ${d_{1}}{c_{1}}=1$ and the coefficients ${d_{n}}$, for $n=2,3,\dots $, are given by
(25)
\[ {(2e)^{n}}{d_{n}}={\sum \limits_{k=1}^{n-1}}{(-1)^{k}}{[n]_{k\uparrow }}{B_{n-1,k}}({q_{\bullet }}),\hspace{1em}({q_{\bullet }})=({q_{n}}={c_{n+1}}/(n+1){c_{1}}).\]
However, this approach requires applications to perform large iterative computations in order to obtain highly precise results. For this reason, we looked for an inversion based on the composite function $V(G(s))$, introducing the Lambert-W function (considered as the inverse of $V(x)$), to obtain another, more effective solution.
3.1 Lambert-W function and Lagrange inversion
In general, the Lambert-W function is a complex-valued function with branching point $z=-{e^{-1}}$. We consider only its principal branch, as the real-valued inverse of $V(x)=x{e^{x}}$ restricted to $x\ge -1$. The inversion of the composite function $V(G(s))$ is defined as follows,
(26)
\[ \mathcal{C}(s)=V(G(s)),\hspace{1em}s<1,\hspace{1em}{\mathcal{C}^{-1}}(x)={G^{-1}}({V^{-1}}(x))={G^{-1}}(W(x)),\hspace{1em}x>0.\]
We consider only the interval $x>0$, because $W(0)=0$ and the function ${G^{-1}}$ has a vertical asymptote at zero. Then, the solution $F(t,s)$ of the backward Kolmogorov equation is given by
(27)
\[ \frac{1}{1-F(t,s)}=W\left(\frac{{e^{Kt}}}{(1-s)}\exp \left(\frac{1}{1-s}\right)\right),\hspace{1em}|s|<1.\]
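As a minimal computational sketch (Python; the helper names are ours, the Newton iteration is one standard way to evaluate the principal branch of $W$, and $K=1$ is an illustrative choice), formula (27) can be evaluated directly:

```python
import math

def lambert_w(x):
    # principal branch W(x) for x > 0, Newton iteration on w*exp(w) = x
    w = math.log(1.0 + x)  # starting point above the root for x > 0
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def F(t, s, K=1.0):
    # p.g.f. from (27): 1/(1 - F(t,s)) = W(e^{Kt} * G(s) * exp(G(s)))
    g = 1.0 / (1.0 - s)
    return 1.0 - 1.0 / lambert_w(math.exp(K * t) * g * math.exp(g))

assert abs(F(0.0, 0.3) - 0.3) < 1e-9               # initial condition F(0, s) = s
vals = [F(t, 0.3) for t in (0.5, 1.0, 2.0, 5.0, 50.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # increasing in t
assert all(0.0 < v < 1.0 for v in vals)            # proper p.g.f. values
```

The checks confirm the initial condition $F(0,s)=s$ and the monotone increase of $F(t,s)$ with $t$ toward $1$.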
This representation of the p.g.f. $F(t,s)$ is very convenient for establishing its rate of increase with the time parameter $t>0$, based on the logarithmic rate of increase of the Lambert-W function [4]. In particular, the family of functions $\{F(t,s),\hspace{0.2778em}0<s<1,\hspace{0.2778em}t>0\}$ converges to the constant function $y(s)=1$ uniformly in $|s|<1$, increasing in $t>0$ [19].
The expression for the derivatives of the Lambert-W function on the domain of differentiability, $-{e^{-1}}<x<\infty $, is given in [4, 15] as
\[ \frac{{d^{n}}}{d{x^{n}}}W(x)=\frac{{(-1)^{n-1}}{A_{n}}(W(x))\exp (-nW(x))}{{(1+W(x))^{2n-1}}},\hspace{1em}n=1,2,\dots ,\]
where ${A_{n}}(x)$ is a polynomial of degree $(n-1)$, specified by a recursion starting from ${A_{1}}(x)=1$ [4]. For $x\ne 0$ it is convenient to write the derivatives as follows,
(28)
\[ {W^{(n)}}(x)=\frac{{(-1)^{n-1}}{A_{n}}(W(x))}{{(1+W(x))^{2n-1}}}{\left(\frac{W(x)}{x}\right)^{n}},\hspace{1em}n=1,2,\dots .\]
Then the consecutive derivatives of the Lambert-W function at the point $x=e$, where $W(e)=1$, are easily calculated as
\[ {W^{(n)}}(e)=\frac{{(-1)^{n-1}}{A_{n}}(1)}{{2^{2n-1}}}\frac{1}{{e^{n}}}=\frac{{(-1)^{n-1}}{A_{n}}(1)}{{2^{n-1}}{(2e)^{n}}},\hspace{1em}n=1,2,\dots .\]
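The first of these values are easy to confirm numerically. In the Python check below (the Newton-based evaluation of $W$ is our own sketch, and the value ${A_{2}}(1)=3$, i.e. ${W^{\prime\prime }}(e)=-3/(8{e^{2}})$, is our own consistency computation), the derivatives at $x=e$ are compared with central finite differences:

```python
import math

def lambert_w(x):
    # principal branch W(x) for x > 0, Newton iteration on w*exp(w) = x
    w = math.log(1.0 + x)
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

e, h = math.e, 1e-4
w1 = (lambert_w(e + h) - lambert_w(e - h)) / (2.0 * h)
w2 = (lambert_w(e + h) - 2.0 * lambert_w(e) + lambert_w(e - h)) / h ** 2
assert abs(w1 - 1.0 / (2.0 * e)) < 1e-6      # W'(e) = A_1(1)/(2e) = 1/(2e)
assert abs(w2 + 3.0 / (8.0 * e ** 2)) < 1e-4 # W''(e) = -3/(8e^2), i.e. A_2(1) = 3
```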
The relation between the inversion of the composite function $V(G(s))$ and the Lambert-W function is defined and proved in the following theorem.
Theorem 4.
Let the composite function $\mathcal{C}(s)=V(G(s))$ satisfy
\[ \mathcal{C}(s)=\frac{1}{1-s}\exp \left(\frac{1}{1-s}\right)=e+{\mathcal{C}_{0}}(s),\hspace{1em}{\mathcal{C}_{0}}(s)={\sum \limits_{k=1}^{\infty }}\frac{{c_{k}}{s^{k}}}{k!},\hspace{1em}s<1,\]
then
\[ {\mathcal{C}_{0}^{-1}}(x)={\sum \limits_{k=1}^{\infty }}\frac{{d_{k}}{x^{k}}}{k!},\hspace{1em}{\mathcal{C}^{-1}}(x)={\mathcal{C}_{0}^{-1}}(x-e),\hspace{1em}x>0,\]
and
(29)
\[ {d_{n}}={\sum \limits_{k=1}^{n}}{(-1)^{k-1}}k!{B_{n,k}}({w_{\bullet }}),\]
where the sequence $({w_{\bullet }})=({w_{n}}={W^{(n)}}(e),n=1,2,\dots )$ is defined by the derivatives of the Lambert-W function at the point $x=e$.
Proof.
In this case, knowing the representation (26), the coefficients ${d_{n}}$ can be calculated directly from the derivatives of the representation
(30)
\[ {\mathcal{C}_{0}^{-1}}(x)={G^{-1}}(W(x+e)),\hspace{1em}x>0,\]
at the point $x=0$. The function ${G^{-1}}(x)$ has the following derivatives,
\[ {G^{-1}}(x)=1-\frac{1}{x},\hspace{1em}{\left(1-\frac{1}{x}\right)^{(k)}}=\frac{{(-1)^{k-1}}k!}{{x^{k+1}}},\hspace{1em}{({G^{-1}})^{(k)}}(1)={(-1)^{k-1}}k!.\]
At the point $x=0$ the function $W(x+e)$ takes the value $W(e)=1$. Applying the Faa Di Bruno formula to the composite function (30), we obtain
\[ {d_{n}}={\sum \limits_{k=1}^{n}}{(-1)^{k-1}}k!{B_{n,k}}({w_{\bullet }}),\hspace{1em}({w_{\bullet }})=({w_{n}}={W^{(n)}}(e),n=1,2,\dots ).\]
We remark the equivalence of (25), obtained by the Lagrange inversion method, and (29), expressing ${d_{n}}$ by the values of the Lambert-W function.  □
The assumption for computational simplification is justified by the calculation of the extinction probability, which is very important for every branching process. It can be computed from (27) using any software package that provides the values of the function $W({e^{Kt+1}})$. In our case, we used the function $lambertW$ from the package $VGAM$ (see [22]) in the R environment for statistical computing, where the properties of the Lambert-W function are implemented according to [5]. The computed results are shown in Figure 3.
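An analogous computation can be sketched in Python (this is not the R code used above; the helper names are ours and $K=1$ is an illustrative choice), evaluating $F(t,0)=1-1/W({e^{Kt+1}})$ on a grid of time points:

```python
import math

def lambert_w(x):
    # principal branch W(x) for x > 0 (a stand-in for lambertW from VGAM in R)
    w = math.log(1.0 + x)
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def extinction_prob(t, K=1.0):
    # F(t, 0) = 1 - 1/W(e^{Kt + 1}), i.e. (27) evaluated at s = 0
    return 1.0 - 1.0 / lambert_w(math.exp(K * t + 1.0))

probs = [extinction_prob(t) for t in (0.0, 1.0, 5.0, 25.0)]
assert abs(probs[0]) < 1e-9                          # W(e) = 1: no extinction at t = 0
assert all(a < b for a, b in zip(probs, probs[1:]))  # increases with t
assert probs[-1] < 1.0  # in the critical case the limit 1 is approached, never attained
```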
For comparison, the series expansion of $F(t,0)$ is expressed by the Stirling numbers of the second kind $S(j,n)$ and the coefficients ${d_{n}}$ as follows,
(31)
\[ F(t,0)={\sum \limits_{j=1}^{\infty }}\frac{{r_{j}}{(Kt)^{j}}}{j!},\hspace{1em}{r_{j}}={\sum \limits_{n=1}^{j}}S(j,n){e^{n}}{d_{n}}.\]
It is easy to notice that the value $F(t,0)=P(X(t)=0)$, representing the extinction probability at the positive time $t>0$, is
\[ F(t,0)={\mathcal{C}^{-1}}({e^{Kt+1}})={\mathcal{C}_{0}^{-1}}({e^{Kt+1}}-e),\hspace{1em}\mathcal{C}(0)=e.\]
Thus, after applying Theorem 4, we obtain
\[ {\mathcal{C}_{0}^{-1}}({e^{Kt+1}}-e)={\mathcal{C}_{0}^{-1}}(e({e^{Kt}}-1))={\sum \limits_{n=1}^{\infty }}\frac{{e^{n}}{d_{n}}{({e^{Kt}}-1)^{n}}}{n!}.\]
The properties of the exponential generating function and the definition of the partial exponential Bell polynomials, as shown in [21], allow writing
\[ \frac{{({e^{Kt}}-1)^{n}}}{n!}=\frac{1}{n!}{\left({\sum \limits_{j=1}^{\infty }}\frac{{(Kt)^{j}}}{j!}\right)^{n}}={\sum \limits_{j=n}^{\infty }}\frac{{B_{j,n}}({1^{\bullet }}){(Kt)^{j}}}{j!},\]
where ${B_{j,n}}({1^{\bullet }})=S(j,n)$, ${1^{\bullet }}=(1,{1^{2}},{1^{3}},\dots )$, are the Stirling numbers of the second kind. In this way the extinction probability is obtained as an expansion in a double sum
\[ F(t,0)={\sum \limits_{n=1}^{\infty }}{e^{n}}{d_{n}}{\sum \limits_{j=n}^{\infty }}\frac{S(j,n){(Kt)^{j}}}{j!}.\]
The change of the summation order from $n\ge 1$, $j\ge n$ to $j\ge 1$, $1\le n\le j$ leads to (31).
For the practical implementation of this result, it is easy to calculate several of the coefficients ${r_{j}}$ with any computational tool. Then, knowing these coefficients, the extinction probability is approximated by the expansion
\[ F(t,0)=\frac{1}{2}.Kt-\frac{3}{{2^{3}}}.\frac{{(Kt)^{2}}}{2!}+\frac{11}{{2^{5}}}.\frac{{(Kt)^{3}}}{3!}-\frac{45}{{2^{7}}}.\frac{{(Kt)^{4}}}{4!}+\frac{193}{{2^{9}}}.\frac{{(Kt)^{5}}}{5!}+\cdots \hspace{0.1667em}.\]
On the other hand, the series expansion of the Lambert-W function (see [4], formula (28)), with the radius of convergence $\sqrt{4+{\pi ^{2}}}$, is
\[ W({e^{z}})=1+\frac{(z-1)}{2}+\frac{{(z-1)^{2}}}{16}-\frac{{(z-1)^{3}}}{192}-\frac{{(z-1)^{4}}}{3072}+\frac{13{(z-1)^{5}}}{61440}+\cdots \hspace{0.1667em}.\]
Applied to (27), it gives the following representation for the extinction probability
\[ \frac{1}{1-F(t,0)}=1+\frac{(Kt)}{2}+\frac{{(Kt)^{2}}}{16}-\frac{{(Kt)^{3}}}{192}-\frac{{(Kt)^{4}}}{3072}+\frac{13{(Kt)^{5}}}{61440}+\cdots \hspace{0.1667em}.\]
Division of the series in increasing order of powers confirms the previous approximation obtained from the coefficients ${r_{j}}$ in (31), and shows the agreement of the two series expansions.
The series expansion obtained by the Lagrange inversion method converges very slowly as $t>0$ grows, requiring expansion to a large order $n$. This implies an already known computational problem in similar applications, in particular, in the calculation of the eigenstates of the hydrogen molecular ion ${H_{2}^{+}}$ [13, 17]. To resolve this computational issue, the representation of $F(t,s)$ and its derivatives at the point $s=0$ is redefined in terms of the values $W({e^{Kt+1}})$, calculated with the corresponding software packages.
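The coefficients ${r_{j}}$ can also be reproduced exactly with a short symbolic computation. The following Python sketch (our own verification in rational arithmetic; it combines the Lah-number formula of Lemma 2, the classical Lagrange inversion formula for the coefficients ${d_{n}}$, and the Stirling numbers in (31)) recovers the values $1/2,-3/8,11/32,-45/128,193/512$:

```python
from fractions import Fraction
from math import comb, factorial

N = 6  # truncation order of all power series

def lah(n, k):
    # Lah numbers L(n, k) = C(n-1, k-1) * n!/k!
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

# gamma_n = c_n/(e*n!) with c_n = e*sum_{k=1}^n (k+1)*L(n,k) (Lemma 2),
# so that C_0(s) = e*s*P(s), P(s) = gamma_1 + gamma_2*s + ...
gamma = [Fraction(sum((k + 1) * lah(n, k) for k in range(1, n + 1)), factorial(n))
         for n in range(1, N + 1)]

def series_mul(a, b):
    out = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def series_inv(a):
    # reciprocal of a power series with nonzero constant term
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for n in range(1, N):
        b[n] = -b[0] * sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

Pinv = series_inv(gamma)
# Lagrange inversion: e^n * d_n = (n-1)! * [s^{n-1}] P(s)^{-n}
ed = [Fraction(0)] * (N + 1)
power = [Fraction(1)] + [Fraction(0)] * (N - 1)
for n in range(1, N + 1):
    power = series_mul(power, Pinv)
    ed[n] = factorial(n - 1) * power[n - 1]

def stirling2(j, n):
    # Stirling numbers of the second kind S(j, n)
    return sum((-1) ** (n - i) * comb(n, i) * i ** j for i in range(n + 1)) // factorial(n)

# r_j = sum_{n=1}^j S(j, n) * e^n * d_n, the coefficients in (31)
r = [sum(stirling2(j, n) * ed[n] for n in range(1, j + 1)) for j in range(1, 6)]
assert r == [Fraction(1, 2), Fraction(-3, 8), Fraction(11, 32),
             Fraction(-45, 128), Fraction(193, 512)]
```

Since the factor ${e^{n}}$ cancels throughout, the whole computation stays in exact rational arithmetic.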
The probability mass function of $X(t)$, $t>0$, is defined by the derivatives of the p.g.f. $F(t,s)$ at zero as follows,
\[ P(X(t)=n)=\frac{{F_{s}^{(n)}}(t,0)}{n!},\hspace{1em}n=0,1,2,\dots .\]
Obviously, for $n=0$ we have the extinction probability. Applying (28) to the derivatives of the Lambert-W function, we give the representation of the p.m.f. in terms of the values $W({e^{Kt+1}})$ in the following theorem.
Theorem 5.
Let $X(t),t>0$, be a critical time-homogeneous MBP with branching mechanism given by (20). The consecutive derivatives of the p.g.f. $F(t,s)$ at zero are
\[ {F_{s}^{(n)}}(t,0)={\sum \limits_{k=1}^{n}}{({e^{Kt}})^{k}}{B_{n,k}}({c_{\bullet }})\left({\sum \limits_{j=1}^{k}}\frac{{(-1)^{j-1}}j!{B_{k,j}}({W_{\bullet }})}{{(W({e^{Kt+1}}))^{j+1}}}\right),\]
where the sequence $({W_{\bullet }})$ is given by the derivatives of the Lambert-W function at the point ${e^{Kt+1}}$.
Proof.
We start with the representation
\[ F(t,s)={\mathcal{C}^{-1}}({e^{Kt}}\mathcal{C}(s)),\hspace{1em}|s|<1.\]
Then the $n$-th derivative with respect to s at the point $s=0$ is given by
\[ {F_{s}^{(n)}}(t,0)={\sum \limits_{k=1}^{n}}{({\mathcal{C}^{-1}})^{(k)}}({e^{Kt}}\mathcal{C}(0)){B_{n,k}}({e^{Kt}}{c_{\bullet }}).\]
To obtain the derivatives of the inverse function ${\mathcal{C}^{-1}}$ at the point ${e^{Kt}}\mathcal{C}(0)={e^{Kt+1}}$, we apply the Faa Di Bruno formula to the representation (26), ${\mathcal{C}^{-1}}(x)={G^{-1}}(W(x))$.
Knowing the derivatives of the linear-fractional function ${G^{-1}}$ at the point $W({e^{Kt+1}})$,
\[ {({G^{-1}})^{(j)}}(W({e^{Kt+1}}))=\frac{{(-1)^{j-1}}j!}{{(W({e^{Kt+1}}))^{j+1}}},\hspace{1em}j=1,2,\dots ,\]
we obtain for the derivatives of the composite function the following representation
\[ {({\mathcal{C}^{-1}})^{(k)}}({e^{Kt}}\mathcal{C}(0))={({\mathcal{C}^{-1}})^{(k)}}({e^{Kt+1}})={\sum \limits_{j=1}^{k}}\frac{{(-1)^{j-1}}j!{B_{k,j}}({W_{\bullet }})}{{(W({e^{Kt+1}}))^{j+1}}},\]
where the sequence $({W_{\bullet }})$ is defined by the derivatives of the Lambert-W function at the point ${e^{Kt+1}}$ as follows,
\[ ({W_{\bullet }})=({W_{n}}={W^{(n)}}({e^{Kt+1}}),\hspace{1em}n=1,2,\dots ).\]
□
As a demonstrative example, the first several derivatives, presenting the probabilities $P(X(t)=n)$, are computed and their values are shown in Figure 3. With $W=W({e^{Kt+1}})$,
\[\begin{array}{r}\displaystyle {F^{\prime }_{s}}(t,0)=\frac{2}{W(1+W)},\hspace{1em}{F^{\prime\prime }_{s}}(t,0)=\frac{(W-1)(3W+1)}{W{(1+W)^{3}}},\\ {} \displaystyle {F_{s}^{(3)}}(t,0)=\frac{2(W-1)\{4{W^{3}}+2{W^{2}}+5W+1\}}{W{(1+W)^{5}}},\\ {} \displaystyle {F_{s}^{(4)}}(t,0)=\frac{(W-1)(30{W^{5}}+28{W^{4}}+91{W^{3}}+3{W^{2}}+35W+5)}{W{(1+W)^{7}}}.\end{array}\]
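The first two of these closed forms are easy to validate numerically. The Python sketch below (helper names ours, $K=1$ illustrative) compares ${F^{\prime }_{s}}(t,0)$ and ${F^{\prime\prime }_{s}}(t,0)$ with central finite differences of (27):

```python
import math

def lambert_w(x):
    # principal branch W(x) for x > 0, Newton iteration on w*exp(w) = x
    w = math.log(1.0 + x)
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

def F(t, s, K=1.0):
    # p.g.f. from (27): 1/(1 - F(t,s)) = W(e^{Kt} * G(s) * exp(G(s)))
    g = 1.0 / (1.0 - s)
    return 1.0 - 1.0 / lambert_w(math.exp(K * t) * g * math.exp(g))

t, h = 1.0, 1e-4
W = lambert_w(math.exp(t + 1.0))                      # W(e^{Kt+1}) with K = 1
d1 = (F(t, h) - F(t, -h)) / (2.0 * h)                 # numerical F'_s(t, 0)
d2 = (F(t, h) - 2.0 * F(t, 0.0) + F(t, -h)) / h ** 2  # numerical F''_s(t, 0)
assert abs(d1 - 2.0 / (W * (1.0 + W))) < 1e-6
assert abs(d2 - (W - 1.0) * (3.0 * W + 1.0) / (W * (1.0 + W) ** 3)) < 1e-4
```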
4 Conclusions
The infinitesimal geometric branching reproduction describes models wherein, at any time $t>0$, there is a family of particles from all generations (with positive probability). Similar physical problems are very difficult and time-consuming to solve analytically, leading to a preference for numerical methods. However, with probabilistic methods, we found explicit solutions in the subcritical and critical cases.