1 Introduction
Taylor’s power law is a well-known empirical pattern in ecology. Its general form is
\[ V(\mu )=a{\mu ^{b}},\]
where μ is the mean and $V(\mu )$ is the variance of a non-negative random variable, and a and b are constants. $V(\mu )$ is also called the variance function (see [15]). Taylor’s power law is named after the British ecologist L. R. Taylor (see [18]). Taylor’s power law has been observed for the population densities of hundreds of species in ecology. It has also been observed in the medical sciences, demography ([6]), physics and finance (for an overview see [11]). Most papers on the topic present empirical studies, but some of them offer models as well (e.g. [6] for mortality data, [7] for population dynamics, [11] for complex systems). We mention that in the theory of complex systems Taylor’s power law is called ‘fluctuation scaling’, $V(\mu )$ is called the fluctuation and μ is the average. There are papers studying Taylor’s power law on networks (see, e.g. [9]). In those papers Taylor’s law concerns some random variable produced by a certain process on the network.
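For readers who like to see the fitting step spelled out, here is a minimal sketch in Python (on synthetic data invented for illustration, not taken from any of the cited studies): group the observations, compute the mean and the variance of each group, and fit a straight line to the log–log pairs; the slope estimates b.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "populations": for Gamma(shape=1, scale=m) samples the variance is m^2,
# so the data are generated with Taylor exponent b = 2.
means = np.linspace(2.0, 200.0, 30)
groups = [rng.gamma(shape=1.0, scale=m, size=500) for m in means]

mu = np.array([g.mean() for g in groups])
var = np.array([g.var(ddof=1) for g in groups])

# Taylor's power law V(mu) = a * mu**b  <=>  log V = log a + b * log mu
b_hat, log_a = np.polyfit(np.log(mu), np.log(var), 1)
print(f"estimated b ~ {b_hat:.2f}, a ~ {np.exp(log_a):.2f}")
```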
However, there is another power law for networks. There are large networks satisfying ${p_{k}}\sim C{k^{-\gamma }}$ as $k\to \infty $, where ${p_{k}}$ is the probability that a node has degree k. This relation is often referred to as a power-law degree distribution, and such a network is called scale-free. Here and in what follows ${a_{k}}\sim {b_{k}}$ means that ${\lim \nolimits_{k\to \infty }}{a_{k}}/{b_{k}}=1$. In their seminal paper [3] Barabási and Albert list several large scale-free networks (actor collaboration, WWW, power grid, etc.), introduce the preferential attachment model, and give an argument and numerical evidence that the preferential attachment rule leads to a scale-free network. A short description of the preferential attachment network evolution model is the following. At every time step $t=2,3,\dots \hspace{0.1667em}$ a new vertex with N edges is added to the existing graph so that the N edges link the new vertex to N old vertices. The probability ${\pi _{i}}$ that the new vertex will be connected to the old vertex i depends on the degree ${d_{i}}$ of vertex i, so that ${\pi _{i}}={d_{i}}/{\textstyle\sum _{j}}{d_{j}}$, where ${\textstyle\sum _{j}}{d_{j}}$ is the cumulated sum of degrees. A rigorous definition of the preferential attachment model was given in [4], where a mathematical proof of the power-law degree distribution was also presented. The idea of preferential attachment and the scale-free property incited enormous research activity. The mathematical theory is described in the monograph [19] written by van der Hofstad (see also [10] and [5]). The general aspects of network theory are included in the comprehensive book [2] by A. L. Barabási.
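The following minimal sketch (Python; a simplified toy version, not the rigorous construction of [4], and the function name is ours) illustrates the rule ${\pi _{i}}={d_{i}}/{\textstyle\sum _{j}}{d_{j}}$. The list of "stubs" contains every vertex once per unit of its degree, so a uniform draw from it realises exactly the preferential attachment probabilities.

```python
import random

def preferential_attachment(steps, N=2, seed=0):
    """Toy preferential attachment: each new vertex attaches N edges whose endpoints
    are old vertices chosen with probability degree_i / sum_j degree_j."""
    rng = random.Random(seed)
    degree = [1, 1]                  # start with two vertices joined by an edge
    stubs = [0, 1]                   # vertex i appears degree[i] times in this list
    for _ in range(steps):
        new = len(degree)
        degree.append(0)
        targets = [rng.choice(stubs) for _ in range(N)]   # endpoints among old vertices
        for old in targets:          # repeated targets give parallel edges in this sketch
            degree[old] += 1
            degree[new] += 1
            stubs.extend((old, new))
    return degree

degrees = preferential_attachment(100_000)   # the tail of `degrees` is heavy (power-law-like)
```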
There are many modifications of the preferential attachment model; here we can list only a few of them. The following general graph evolution model was introduced by Cooper and Frieze in [8]. At each time step either a new vertex or an old one generates new edges. In both cases the terminal vertices can be chosen either uniformly or according to the preferential attachment rule. In [1, 13] and [12] the ideas of Cooper and Frieze [8] were applied, but instead of the original preferential attachment rule, the terminal vertices were chosen according to the weights of certain cliques.
In several cases the connection of two vertices in a network can be interpreted as co-operation (collaboration). For example, in the movie actor network two actors are connected by an edge if they have appeared in a film together. In the collaboration graph of scientists an edge connects two people if they have been co-authors of a paper (see, e.g. [10]). In social networks, besides connections of two members, other structures are also important. In [17] or [1] cliques are considered to describe co-operations. In a clique any two vertices are connected, that is, any two members of the clique co-operate. However, in real-life examples, the members of a co-operation can play different roles. In a team, usually one person plays a central role and the others play peripheral roles. Trying to handle this situation and to find a mathematically tractable model leads to the study of star-like structures, see [14].
In [14] the concept of [13] was applied, but instead of cliques, star-like structures were considered. A team has star structure if there is a head of the team and all other members are connected to him/her. We call a graph an N-star graph if it has N vertices, one of them is called the central vertex, the remaining $N-1$ vertices are called peripheral vertices, and it has $N-1$ edges. The edges are directed, they start from the $N-1$ peripheral vertices and their end point is the central vertex. In [14] the following N-stars network evolution model was presented. In this model at each step either a new N-star is constructed or an old one is selected (activated) again. When N vertices form an N-star, then we say that they are in interaction (in other words they co-operate). During the evolution, a vertex can be in interaction several times. We define for any vertex its central weight and its peripheral weight. The central weight of a vertex is ${w_{1}}$ if the vertex has been the central vertex in ${w_{1}}$ interactions. The peripheral weight of a vertex is ${w_{2}}$ if the vertex has been a peripheral vertex in ${w_{2}}$ interactions. In [14] an asymptotic power-law distribution was proved both for ${w_{1}}$ and ${w_{2}}$.
We are interested in the following general question. Is the original Taylor’s power law true for random networks? First we considered data sets of real-life networks. Our statistical analysis showed that there are cases when Taylor’s law holds and cases when it does not (these empirical results will be published elsewhere). So we encountered the following more specific problem: find network structures for which Taylor’s power law is true. To this end we analysed the above N-stars network evolution model.
In this paper we prove an asymptotic Taylor’s power law for the N-stars network evolution model. We shall calculate the mean and the variance of ${w_{2}}$ when ${w_{1}}$ is fixed, and we shall see that the variance function is asymptotically quadratic. In Section 2, the precise mathematical description of the model and the results are given. We recall from [14] the asymptotic joint distribution of ${w_{1}}$ and ${w_{2}}$ (Proposition 2.1). Then we calculate the marginal distribution (Proposition 2.2), the expectation (Proposition 2.3), and the second moment (Proposition 2.4). The main result is Theorem 2.1. The proofs are presented in Section 3. Besides the mathematical proofs, we also give numerical evidence: in Section 4 simulation results are presented supporting our theoretical results.
2 The N-stars network evolution model and the main results
First we give a short mathematical description of our random graph model from [14].
Let $N\ge 3$ be a fixed number. We start at time 0 with an N-star graph. Throughout the paper we call a graph N-star graph if it has N vertices, one of them is the central vertex, the remaining $N-1$ vertices are peripheral ones, and the graph has $N-1$ directed edges. The edges start from the $N-1$ peripheral vertices and their end point is the central vertex. So the central vertex has in-degree $N-1$, and each of the $N-1$ peripheral vertices has out-degree 1. The evolution of our graph is governed by the weights of the N-stars and the $(N-1)$-stars. In our model, the initial weight of the N-star is 1, and the initial weights of its $(N-1)$-star sub-graphs are also 1. (An $(N-1)$-star sub-graph is obtained if a peripheral vertex is deleted from the N-star graph. The number of these $(N-1)$-star sub-graphs is $N-1$.)
We first explain the model on a high level, before giving a formal definition in the next paragraphs. The general rules of the evolution of our graph are the following. At each time step, N vertices interact, that is, they form an N-star. It means that we draw all edges from the peripheral vertices to the central vertex so that the vertices will form an N-star graph. During the evolution we allow parallel edges. When N vertices interact, not only new edges are drawn, but the weights of the stars are also increased. At the first interaction of N vertices the newly created N-star gets weight 1, and its new $(N-1)$-star sub-graphs also get weight 1. If an $(N-1)$-star sub-graph is not newly created, then its weight is increased by 1. When an existing N-star is selected (activated) again, then its weight and the weights of its $(N-1)$-star sub-graphs are increased by 1. So the weight of an N-star is the number of its activations. We can see that the weight of an $(N-1)$-star is equal to the sum of the weights of the N-stars containing it. The weights will play a crucial role in our model. The higher the weight of a star, the higher the chance that it will be selected (activated) again.
Now we describe the details of the evolution steps of our graph. We have two options in every step of the evolution. Option I has probability p. In this case we add a new vertex, and it interacts with $N-1$ old vertices. Option II has probability $1-p$. In this case we do not add any new vertex, but N old vertices interact. Here $0<p\le 1$ is fixed.
Option I. In this case, that is, when a new vertex is born, we have again two possibilities: I/1 and I/2.
I/1. The first possibility, which has probability r, is the following. (Here $0\le r\le 1$ is fixed.) We choose one of the existing $(N-1)$-star sub-graphs according to the preferential attachment rule, and its $N-1$ vertices and the new vertex will interact. Here the preferential attachment rule means that an $(N-1)$-star of weight ${v_{t}}$ is chosen with probability ${v_{t}}/{\textstyle\sum _{h}}{v_{h}}$, where ${\textstyle\sum _{h}}{v_{h}}$ is the cumulated weight of the $(N-1)$-stars. The interaction of the new vertex and the old $(N-1)$-star means that they establish a new N-star. In this newly created N-star the center will be the vertex which was the center in the old $(N-1)$-star, the former $N-2$ peripheral vertices remain peripheral and the newly born vertex will be also peripheral. A new edge is drawn from each peripheral vertex to the central one, and then the weights are increased by 1. More precisely, the just created N-star gets weight 1, among its $(N-1)$-star sub-graphs there are $(N-2)$ new ones, so each of them gets weight 1, finally the weight of the only old $(N-1)$-star sub-graph is increased by 1.
I/2. The second possibility has probability $1-r$. In this case we choose $N-1$ old vertices uniformly at random, and they will form an N-star graph with the new vertex, so that the new vertex will be the center. The edges are drawn from the peripheral vertices to the center. As the newly created N-star graph and all of its $(N-1)$-star sub-graphs are new, all of them get weight 1.
Option II. In this case, that is, when we do not add any new vertex, we have two ways again: II/1 and II/2.
II/1. The first way has probability q. (Here $0\le q\le 1$ is fixed.) We choose one of the existing N-star sub-graphs by the preferential attachment rule, then draw a new edge from each of its peripheral vertices to its center vertex. Then the weight of the N-star and the weights of its $(N-1)$-star sub-graphs are increased by 1. Here the preferential attachment rule means that an N-star of weight ${v_{t}}$ is chosen with probability ${v_{t}}/{\textstyle\sum _{h}}{v_{h}}$, where ${\textstyle\sum _{h}}{v_{h}}$ is the cumulated weight of the N-stars.
II/2. The second way has probability $1-q$. In this case we choose N old vertices uniformly at random, and they establish an N-star graph. Its center is chosen again uniformly at random out of the N vertices. Then, as before, new edges are drawn from the peripheral vertices to the central one, and the weights of the N-star and its $(N-1)$-star sub-graphs are increased by 1.
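To fix the ideas, here is a compact simulation sketch of the evolution rules I/1, I/2, II/1 and II/2 (Python; the data structures and the names simulate_n_stars and activate are ours, not the authors' code). Preferential choice "with probability proportional to the weight" is implemented by keeping a pool in which every star occurs once per unit of its weight, so that a uniform draw from the pool realises the required distribution.

```python
import random
from collections import defaultdict

def simulate_n_stars(N=4, p=0.4, q=0.4, r=0.4, steps=10_000, seed=0):
    """A minimal sketch of the N-stars evolution model described above.
    A star is stored as (center, frozenset_of_peripherals)."""
    rng = random.Random(seed)
    w1 = defaultdict(int)          # central weight of each vertex
    w2 = defaultdict(int)          # peripheral weight of each vertex
    big_pool, small_pool = [], []  # N-stars / (N-1)-stars, each repeated weight-many times
    vertices = list(range(N))

    def activate(center, peripherals):
        """Register one interaction of the N-star with the given center and peripherals."""
        per = frozenset(peripherals)
        big_pool.append((center, per))            # the weight of the N-star grows by 1
        for x in per:                             # so do the weights of its (N-1)-star sub-graphs
            small_pool.append((center, per - {x}))
        w1[center] += 1
        for x in per:
            w2[x] += 1

    activate(0, range(1, N))       # the initial N-star (vertex 0 is its center)

    for _ in range(steps):
        if rng.random() < p:                      # Option I: a new vertex is born
            v = len(vertices)
            vertices.append(v)
            if rng.random() < r:                  # I/1: preferential (N-1)-star
                center, per = rng.choice(small_pool)
                activate(center, per | {v})
            else:                                 # I/2: the new vertex becomes the center
                activate(v, rng.sample(vertices[:-1], N - 1))
        else:                                     # Option II: only old vertices interact
            if rng.random() < q:                  # II/1: preferential N-star
                center, per = rng.choice(big_pool)
                activate(center, per)
            else:                                 # II/2: N old vertices chosen uniformly
                group = rng.sample(vertices, N)
                center = rng.choice(group)
                activate(center, set(group) - {center})
    return w1, w2
```

The pool trick works because every activation increases the weight of exactly one N-star by 1 and the weights of its $N-1$ sub-stars by 1, so appending the corresponding keys once keeps multiplicities equal to weights.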
Remark 2.1.
For every vertex we shall use its central weight and its peripheral weight. The central weight of a vertex is ${w_{1}}$ if the vertex has been the central vertex in ${w_{1}}$ interactions. The peripheral weight of a vertex is ${w_{2}}$ if the vertex has been a peripheral vertex in ${w_{2}}$ interactions. We can see that the central weight of a vertex is equal to ${w_{1}}=\frac{{d_{1}}}{N-1}$ and the peripheral weight of a vertex is equal to ${w_{2}}={d_{2}}$, where ${d_{1}}$ denotes the in-degree of the vertex and ${d_{2}}$ denotes its out-degree. The weights ${w_{1}}$ and ${w_{2}}$ describe well the role of a vertex in the network. Moreover, we use ${w_{1}}$ and ${w_{2}}$ instead of degrees to obtain symmetric formulae that allow us to translate results from ${w_{1}}$ to ${w_{2}}$ and vice versa without having to change the proofs.
Throughout the paper $0<p\le 1$, $0\le r\le 1$, $0\le q\le 1$ are fixed numbers. In our formulae the following parameters are used.
(2.1)
\[\begin{array}{r@{\hskip0pt}l@{\hskip0pt}r@{\hskip0pt}l}\displaystyle {\alpha _{11}}& \displaystyle =pr,\hspace{2em}& \displaystyle {\alpha _{12}}& \displaystyle =(1-p)q,\\ {} \displaystyle {\alpha _{1}}& \displaystyle ={\alpha _{11}}+{\alpha _{12}},\hspace{2em}& \displaystyle {\alpha _{2}}& \displaystyle =pr\frac{N-2}{N-1}+(1-p)q,\\ {} \displaystyle {\beta _{1}}& \displaystyle =\frac{(1-p)(1-q)}{p},\hspace{2em}& \displaystyle {\beta _{2}}& \displaystyle =(N-1)\bigg[(1-r)+\frac{(1-p)(1-q)}{p}\bigg],\\ {} \displaystyle \alpha & \displaystyle ={\alpha _{1}}+{\alpha _{2}},\hspace{2em}& \displaystyle \beta & \displaystyle ={\beta _{1}}+{\beta _{2}}.\end{array}\]

In [14] it was shown that the above evolution leads to a scale-free graph. To describe the result, let ${V_{n}}$ denote the number of all vertices and let $X(n,{w_{1}},{w_{2}})$ denote the number of vertices with central weight ${w_{1}}$ and peripheral weight ${w_{2}}$ after the nth step.
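For concreteness, here is a small helper (a sketch; the function name is ours) that evaluates the parameters (2.1) for a given parameter set and checks the two conditions used later (${\beta _{1}}+1>2{\alpha _{2}}$ in Proposition 2.4 and ${\beta _{2}}+1>2{\alpha _{1}}$ in Remark 2.6).

```python
def model_parameters(N, p, q, r):
    """Evaluate the parameters defined in (2.1)."""
    alpha1 = p * r + (1 - p) * q                      # alpha_11 + alpha_12
    alpha2 = p * r * (N - 2) / (N - 1) + (1 - p) * q
    beta1 = (1 - p) * (1 - q) / p
    beta2 = (N - 1) * ((1 - r) + (1 - p) * (1 - q) / p)
    return alpha1, alpha2, beta1, beta2               # alpha, beta are just the sums

# e.g. the setting of Experiment 4.1 below
a1, a2, b1, b2 = model_parameters(N=4, p=0.4, q=0.4, r=0.4)
print(b1 + 1 > 2 * a2, b2 + 1 > 2 * a1)               # prints: True True
```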
Proposition 2.1 (Theorem 2.1 of [14]).
Let $0<p<1$, $0<q<1$, $0<r<1$. Then for any fixed ${w_{1}}$ and ${w_{2}}$ with either ${w_{1}}=0$ and $1\le {w_{2}}$ or $1\le {w_{1}}$ and ${w_{2}}\ge 0$ we have
\[ \frac{X(n,{w_{1}},{w_{2}})}{{V_{n}}}\to {x_{{w_{1}},{w_{2}}}}\]
almost surely as $n\to \infty $, where ${x_{{w_{1}},{w_{2}}}}$ are fixed non-negative numbers.
Let ${w_{2}}$ be fixed, then as ${w_{1}}\to \infty $
where
Let ${w_{1}}$ be fixed. Then, as ${w_{2}}\to \infty $,
where
Here Γ denotes the Gamma function.
Remark 2.2.
Using ${w_{1}}$ and ${w_{2}}$, we obtained symmetric formulae in the following sense. If we interchange subscripts 1 and 2 of α and β (and use r instead of $1-r$), then we obtain formulae (2.5)–(2.6) from formulae (2.3)–(2.4). Therefore we do not need new proofs when we interchange the roles of ${w_{1}}$ and ${w_{2}}$. (Of course the basic relations (2.3)–(2.4) and (2.5)–(2.6) were proved separately. To do it we applied the properties of our model and introduced the appropriate parametrization given in (2.1), see [14].)
Remark 2.3.
We see that ${x_{{w_{1}},{w_{2}}}}$ is the asymptotic joint distribution of the central weight and the peripheral weight. To obtain Taylor’s power law, we have to find the conditional expectation ${E_{{w_{1}}}}$ and the conditional second moment ${M_{{w_{1}}}}$ given that ${w_{1}}$ is fixed. Then the asymptotic behaviour of ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$ will imply that Taylor’s power law is satisfied asymptotically. We underline that the asymptotic relations (2.3) and (2.5) do not provide enough information to find the asymptotics of ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$. So we need a deep analysis of the joint distribution ${x_{{w_{1}},{w_{2}}}}$ to obtain Taylor’s power law.
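In terms of code these are ordinary conditional moments. A short sketch (numpy; the array convention x[w1, w2] for a truncation of the joint distribution is ours) makes the definitions of ${x_{{w_{1}},\cdot }}$, ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$ concrete:

```python
import numpy as np

def conditional_moments(x):
    """x[w1, w2]: a (truncated) joint distribution.  Returns, for every fixed w1,
    the marginal x_{w1,.}, the conditional mean E_{w1} and second moment M_{w1}."""
    w2 = np.arange(x.shape[1])
    marginal = x.sum(axis=1)                  # x_{w1,.} = sum_l x_{w1,l}
    E = (x * w2).sum(axis=1) / marginal       # sum_l x_{w1,l} * l   / x_{w1,.}
    M = (x * w2 ** 2).sum(axis=1) / marginal  # sum_l x_{w1,l} * l^2 / x_{w1,.}
    return marginal, E, M
```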
Now we turn to the new results of this paper. First we consider the marginals of the asymptotic joint distribution ${x_{{w_{1}},{w_{2}}}}$. Let
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\]
be the first marginal distribution.
Proposition 2.2.
Let $0<p<1$, $0<q<1$, $0<r<1$. Then
\[ {x_{0,\cdot }}=\frac{r}{{\beta _{1}}+1}\]
and for ${w_{1}}>0$
(2.9)
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}\bigg(\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+1\bigg).\]
Moreover
(2.10)
\[ {x_{1,\cdot }}=\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg),\]
and
(2.11)
\[ {x_{{w_{1}},\cdot }}=\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}\]
for ${w_{1}}>1$. We have a proper distribution, that is, ${\textstyle\sum _{{w_{1}}=0}^{\infty }}{x_{{w_{1}},\cdot }}=1$.

Now we turn to the conditional expectations of the asymptotic distribution. Let
\[ {E_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}l/{x_{{w_{1}},\cdot }}\]
be the expectation when the central weight ${w_{1}}$ is fixed.
Proposition 2.3.
Let $0<p<1$, $0<q<1$, $0<r<1$. Then for ${w_{1}}>1$ we have
(2.13)
\[ {E_{{w_{1}}}}=\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}\frac{{A_{1}}}{{x_{1,\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}},\]
where
(2.14)
\[\begin{aligned}{}{A_{1}}& =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+(1-r)\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}.\end{aligned}\]
Moreover
(2.15)
\[ {E_{{w_{1}}}}\sim \frac{{A_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}.\]
That is, the magnitude of ${E_{{w_{1}}}}$ approaches ${w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ when ${w_{1}}\to \infty $.

Now we turn to the conditional second moments of the asymptotic distribution. Let
\[ {M_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}{l^{2}}/{x_{{w_{1}},\cdot }}\]
be the second moment when the central weight ${w_{1}}$ is fixed.
Proposition 2.4.
Let $0<p<1$, $0<q<1$, $0<r<1$. Assume that ${\beta _{1}}+1>2{\alpha _{2}}$. Then for ${w_{1}}>1$ we have
(2.17)
\[\begin{aligned}{}{M_{{w_{1}}}}& =\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}\frac{{B_{1}}}{{x_{1,\cdot }}}\\ {} & \hspace{1em}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\end{aligned}\]
where
\[\begin{aligned}{}{B_{1}}& =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{1-r}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}.\end{aligned}\]
Moreover
(2.18)
\[ {M_{{w_{1}}}}\sim \frac{{B_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}},\]
that is, the magnitude of ${M_{{w_{1}}}}$ approaches ${w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ when ${w_{1}}\to \infty $.

Now Propositions 2.3, 2.4 imply the main result of this paper, that is, we obtain that Taylor’s law is satisfied asymptotically.

Theorem 2.1.
Let $0<p<1$, $0<q<1$, $0<r<1$ and assume that ${\beta _{1}}+1>2{\alpha _{2}}$. Then, as ${w_{1}}\to \infty $,
\[ {M_{{w_{1}}}}\sim C{E_{{w_{1}}}^{2}},\hspace{1em}\textit{where}\hspace{1em}C=\frac{{B_{1}}{x_{1,\cdot }}}{{A_{1}^{2}}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{{(\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}))^{2}}},\]
that is, Taylor’s power law is satisfied asymptotically with exponent 2.
Remark 2.4.
How can we observe the above Taylor’s law in practice? As ${x_{{w_{1}},{w_{2}}}}$ is the asymptotic joint distribution of the central weight and the peripheral weight, we should consider a network large enough for the asymptotic properties to show up. Fix the central weight at ${w_{1}}$, and calculate the expectation ${E_{{w_{1}}}}$ and the second moment ${M_{{w_{1}}}}$ of the peripheral weight. Then we shall find that ${M_{{w_{1}}}}$ is approximately equal to $C{E_{{w_{1}}}^{2}}$ for large ${w_{1}}$. Our simulation results in Section 4 will show a bit more: the relation holds for small ${w_{1}}$, too.
Remark 2.5.
If ${\beta _{1}}+1\le 2{\alpha _{2}}$, then ${M_{{w_{1}}}}$ is not finite, so Taylor’s law is not satisfied.
Remark 2.6.
Now we consider the case when we interchange the roles of ${w_{1}}$ and ${w_{2}}$. Let ${w_{2}}$ be fixed and let
\[ {x_{\cdot ,{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}},\hspace{1em}{E_{{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}}l/{x_{\cdot ,{w_{2}}}},\hspace{1em}{M_{{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}}{l^{2}}/{x_{\cdot ,{w_{2}}}}\]
be the other marginal distribution, conditional expectation and conditional second moment. By Remark 2.2, if we interchange subscripts 1 and 2 of α and β (and use r instead of $1-r$), then from Proposition 2.2 we obtain the description of the behaviour of ${x_{\cdot ,{w_{2}}}}$. Similarly, from Proposition 2.3 (resp. Proposition 2.4) we get the appropriate results for ${E_{{w_{2}}}}$ (resp. ${M_{{w_{2}}}}$). Finally, Theorem 2.1 implies the following. If ${\beta _{2}}+1>2{\alpha _{1}}$, then Taylor’s law is satisfied asymptotically with exponent 2 for ${E_{{w_{2}}}}$ and ${M_{{w_{2}}}}$, too.

Remark 2.7.
For the in-degree ${d_{1}}$ of a vertex we have ${d_{1}}=(N-1){w_{1}}$ and for the out-degree we have ${d_{2}}={w_{2}}$. Let ${E_{{d_{1}}}}$ be the conditional expectation of the out-degree if ${d_{1}}$ is fixed and let ${M_{{d_{1}}}}$ be the conditional second moment of the out-degree if ${d_{1}}$ is fixed. Then Theorem 2.1 implies that ${M_{{d_{1}}}}\sim \mathrm{const}.{E_{{d_{1}}}^{2}}$ as ${d_{1}}\to \infty $. Similarly, Remark 2.6 implies that ${M_{{d_{2}}}}\sim \mathrm{const}.{E_{{d_{2}}}^{2}}$ as ${d_{2}}\to \infty $. So Taylor’s power law is true for the in-degrees and the out-degrees.
3 Proofs and auxiliary results
For the joint limiting distribution we have the following result.
Lemma 3.1 (Lemma 3.2 of [14]).
Let $p>0$ and let ${w_{1}}\ge 0$, ${w_{2}}\ge 0$ with ${w_{1}}+{w_{2}}\ge 1$. Then ${x_{{w_{1}},{w_{2}}}}$ are positive numbers satisfying the following recurrence relation:
(3.1)
\[\begin{aligned}{}{x_{1,0}}& =\frac{1-r}{{\alpha _{1}}+\beta +1},\hspace{1em}{x_{0,1}}=\frac{r}{{\alpha _{2}}+\beta +1},\\ {} {x_{{w_{1}},{w_{2}}}}& =\frac{({\alpha _{1}}({w_{1}}-1)+{\beta _{1}}){x_{{w_{1}}-1,{w_{2}}}}+({\alpha _{2}}({w_{2}}-1)+{\beta _{2}}){x_{{w_{1}},{w_{2}}-1}}}{{\alpha _{1}}{w_{1}}+{\alpha _{2}}{w_{2}}+\beta +1}\end{aligned}\]
if $1<{w_{1}}+{w_{2}}$.

Throughout the proof we shall use the following facts on the Γ-function.
(3.2)
\[ {\sum \limits_{i=0}^{n}}\frac{\varGamma (i+a)}{\varGamma (i+b)}=\frac{1}{a-b+1}\bigg[\frac{\varGamma (n+a+1)}{\varGamma (n+b)}-\frac{\varGamma (a)}{\varGamma (b-1)}\bigg],\]
see [16]. Stirling’s formula implies that
(3.3)
\[ \frac{\varGamma (w+a)}{\varGamma (w+b)}\sim {w^{a-b}}\hspace{1em}\text{as}\hspace{2.5pt}w\to \infty .\]
The above two formulae imply that
(3.4)
\[ {\sum \limits_{i=0}^{\infty }}\frac{\varGamma (i+a)}{\varGamma (i+b)}=\frac{1}{b-a-1}\frac{\varGamma (a)}{\varGamma (b-1)}\]
if $b>a+1$. The following facts on ${x_{{w_{1}},{w_{2}}}}$ will be useful.
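These identities are easy to sanity-check numerically; the following throwaway script (with arbitrary test values a, b satisfying $b>a+1$, and the infinite sum in (3.4) truncated) is only a quick verification, not part of the proof.

```python
from math import exp, lgamma

def gamma_ratio(x, y):
    """Gamma(x) / Gamma(y), computed via log-Gamma to avoid overflow."""
    return exp(lgamma(x) - lgamma(y))

a, b, n = 1.3, 3.7, 50

lhs = sum(gamma_ratio(i + a, i + b) for i in range(n + 1))
rhs = (gamma_ratio(n + a + 1, n + b) - gamma_ratio(a, b - 1)) / (a - b + 1)
print("(3.2) error:", abs(lhs - rhs))

lhs_inf = sum(gamma_ratio(i + a, i + b) for i in range(200_000))   # truncated version of (3.4)
rhs_inf = gamma_ratio(a, b - 1) / (b - a - 1)
print("(3.4) error:", abs(lhs_inf - rhs_inf))

# (3.3): Gamma(w + a) / Gamma(w + b) behaves like w^(a - b) for large w
print("(3.3) ratio:", gamma_ratio(10**6 + a, 10**6 + b) * (10**6) ** (b - a))
```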
Lemma 3.2 (See the proof of Theorem 3.2 of [14]).
Let ${w_{1}}=0$, then
\[ {x_{0,1}}=\frac{r}{{\alpha _{2}}+\beta +1},\hspace{1em}{x_{0,l}}=\frac{(l-1){\alpha _{2}}+{\beta _{2}}}{l{\alpha _{2}}+\beta +1}{x_{0,l-1}},\hspace{1em}l>1,\]
and
(3.7)
\[ {x_{0,l}}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}=C(0)\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}.\]
When ${w_{2}}=0$, then we have
(3.8)
\[ {x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1},\]
(3.9)
\[ {x_{k,0}}=\frac{(k-1){\alpha _{1}}+{\beta _{1}}}{k{\alpha _{1}}+\beta +1}{x_{k-1,0}},\hspace{1em}k>1,\]
and
(3.10)
\[ {x_{k,0}}=\frac{(1-r)\varGamma (1+\frac{\beta +1}{{\alpha _{1}}})}{{\alpha _{1}}\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (k+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (k+\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}})}=A(0)\frac{\varGamma (k+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (k+\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}})}.\]
If ${w_{1}}>0$ and $l>0$, then
(3.11)
\[ {x_{{w_{1}},l}}={\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}+{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}},\]
where
(3.12)
\[ {b_{{w_{1}}-1,i}^{(l)}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})},\]
for $1\le i\le l$, and
(3.13)
\[ {b_{{w_{1}},0}^{(l)}}=\frac{\varGamma (1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}.\]

Now we turn to the proofs of the new results. First we deal with the marginal distribution.
Proof of Proposition 2.2.
To calculate the marginal distribution ${x_{{w_{1}},\cdot }}={\textstyle\sum _{l=0}^{\infty }}{x_{{w_{1}},l}}$ we shall use mathematical induction. So first consider ${x_{0,\cdot }}$. Because ${x_{0,0}}=0$, by equation (3.7) we have
\[\begin{aligned}{}{x_{0,\cdot }}& ={\sum \limits_{l=1}^{\infty }}{x_{0,l}}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}\\ {} & =\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\end{aligned}\]
By (3.4), the sum in the above formula is always finite, and we have
(3.14)
\[ {x_{0,\cdot }}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{{\alpha _{2}}}{{\beta _{1}}+1}\frac{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}=\frac{r}{{\beta _{1}}+1}.\]
For ${w_{1}}>0$, by (3.11), we have
(3.15)
\[\begin{aligned}{}{x_{{w_{1}},\cdot }}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}+{x_{{w_{1}},0}}\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}+{x_{{w_{1}},0}}.\end{aligned}\]
The coefficients ${b_{k,i}}$ satisfy formulae (3.12) and (3.13). Therefore we shall use (3.4) for the two sums in the above expression. We can see that both sums are always finite and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}\frac{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}.\end{aligned}\]
For the other sum, a similar calculation shows that
\[ {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}=\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}.\]
Insert these expressions into (3.15) to obtain
(3.16)
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}.\]
From this point we should proceed carefully, as we should distinguish the case of ${x_{1,0}}$ and the case of ${x_{{w_{1}},0}}$ for ${w_{1}}>1$. From equation (3.14) ${x_{0,\cdot }}=\frac{r}{{\beta _{1}}+1}$, from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.16) gives that
(3.17)
\[\begin{aligned}{}{x_{1,\cdot }}& =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}{x_{0,\cdot }}+{x_{1,0}}\bigg(\frac{{\beta _{2}}}{{\alpha _{1}}+{\beta _{1}}+1}+1\bigg)\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\alpha _{1}}+\beta +1}\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}+{\beta _{1}}+1}\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg).\end{aligned}\]
For ${w_{1}}>1$ equation (3.9) gives us ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$, so equation (3.16) implies
\[ {x_{{w_{1}},\cdot }}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{x_{{w_{1}}-1,\cdot }}.\]
Therefore, using (3.17) and recursion, for ${w_{1}}>1$ we obtain that
(3.18)
\[\begin{aligned}{}{x_{{w_{1}},\cdot }}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{x_{{w_{1}}-1,\cdot }}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1}{{\alpha _{1}}}}{x_{1,\cdot }}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg)\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}.\end{aligned}\]
Now we check if the sum of the values of ${x_{{w_{1}},\cdot }}$ is equal to 1.
\[\begin{aligned}{}{x_{0,\cdot }}+{x_{1,\cdot }}+{\sum \limits_{{w_{1}}=2}^{\infty }}{x_{{w_{1}},\cdot }}& =\frac{r}{{\beta _{1}}+1}+\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg)\\ {} & \hspace{1em}+{\sum \limits_{{w_{1}}=2}^{\infty }}\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}.\end{aligned}\]
Here, by (3.4),
\[ {\sum \limits_{{w_{1}}=2}^{\infty }}\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}={\alpha _{1}}\frac{\varGamma (2+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}.\]
Therefore, after some calculation, we get
\[ {x_{0,\cdot }}+{x_{1,\cdot }}+{\sum \limits_{{w_{1}}=2}^{\infty }}{x_{{w_{1}},\cdot }}=1,\]
so we have a proper distribution.  □

Now we consider the expectation.
Proof of Proposition 2.3.
We calculate
\[ {E_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}l/{x_{{w_{1}},\cdot }},\]
that is, the expectation when the central weight ${w_{1}}$ is fixed. We shall calculate the value of
(3.20)
\[ {A_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
As
(3.21)
\[ {A_{{w_{1}}}}={x_{{w_{1}},\cdot }}\bigg({E_{{w_{1}}}}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\]
using ${A_{{w_{1}}}}$, we shall find the value of ${E_{{w_{1}}}}$.
Let us start with ${A_{0}}$. Because ${x_{0,0}}=0$, and using equation (3.7), we have
\[\begin{aligned}{}{A_{0}}& ={\sum \limits_{l=1}^{\infty }}{x_{0,l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})(l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}\\ {} & =\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\end{aligned}\]
By (3.4), the sum in the above formula is always finite, moreover
\[ {A_{0}}=\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
If ${w_{1}}>0$, then by (3.11) we have
(3.23)
\[\begin{aligned}{}{A_{{w_{1}}}}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
We know that the coefficients ${b_{k,i}}$ satisfy formulae (3.12) and (3.13). Therefore we can apply (3.4) for the first two terms in the above expression. So we can see that both sums are always finite and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\\ {} & \hspace{2em}\times \frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{\varGamma (i+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
Similarly
\[ {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{\varGamma (2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}.\]
Using these expressions, from (3.23) we get
(3.24)
\[\begin{aligned}{}{A_{{w_{1}}}}& ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+{x_{{w_{1}},0}}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
Now we should distinguish the case of ${w_{1}}=1$ and the case of ${w_{1}}>1$. For ${w_{1}}=1$ we use that from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.24) implies that
(3.25)
\[\begin{aligned}{}{A_{1}}& ={A_{0}}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}+{x_{1,0}}\frac{{\alpha _{2}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{1,0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+\frac{1-r}{{\alpha _{1}}+\beta +1}\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}+(1-r)\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}.\end{aligned}\]
For ${w_{1}}>1$ we know that ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$, therefore equation (3.24) implies that
\[ {A_{{w_{1}}}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
From this equation we obtain that
(3.26)
\[\begin{aligned}{}{A_{{w_{1}}}}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}{A_{{w_{1}}-1}}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}}{A_{1}}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{A_{1}}.\end{aligned}\]
Therefore, by equation (3.21), we have
(3.27)
\[\begin{aligned}{}{E_{{w_{1}}}}& =\frac{{A_{{w_{1}}}}}{{x_{{w_{1}},\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & =\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}\frac{{A_{1}}}{{x_{1,\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
Therefore, by (3.3), the magnitude of ${E_{{w_{1}}}}$ approaches ${w_{1}^{\frac{{\beta _{1}}+1}{{\alpha _{1}}}-\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}}}={w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ when ${w_{1}}\to \infty $. More precisely, we obtain (2.15).  □

Now we turn to the second moment.
Proof of Proposition 2.4.
To find the second moment
\[ {M_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}{l^{2}}/{x_{{w_{1}},\cdot }}\]
when the central weight ${w_{1}}$ is fixed, we shall calculate the value of
(3.29)
\[ {B_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
We can see that
(3.30)
\[ {B_{{w_{1}}}}={x_{{w_{1}},\cdot }}\bigg({M_{{w_{1}}}}+\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg).\]
Therefore, using ${B_{{w_{1}}}}$, we shall find the value of ${M_{{w_{1}}}}$.
We start with ${B_{0}}$. As ${x_{0,0}}=0$, applying equation (3.7), we obtain
\[ {B_{0}}={\sum \limits_{l=1}^{\infty }}{x_{0,l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+3+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\]
By (3.4), the sum in the above formula is finite if ${\beta _{1}}+1>2{\alpha _{2}}$, and in this case
\[ {B_{0}}=\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
Now turn to ${B_{{w_{1}}}}$ when ${w_{1}}>0$. By (3.11),
(3.32)
\[\begin{aligned}{}{B_{{w_{1}}}}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
We use formulae (3.12) and (3.13), then apply (3.4) for the first two terms in the above expression. So we obtain that both sums are finite if ${w_{1}}{\alpha _{1}}+{\beta _{1}}+1>2{\alpha _{2}}$, and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\\ {} & \hspace{2em}\times \frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\end{aligned}\]
and
\[\begin{aligned}{}& {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{\varGamma (1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (3+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}.\end{aligned}\]
Inserting these expressions into (3.32), we get
(3.33)
\[\begin{aligned}{}{B_{{w_{1}}}}& ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(i+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\\ {} & \hspace{1em}+{x_{{w_{1}},0}}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
When ${w_{1}}=1$, we use that from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.33) implies that
(3.34)
\[\begin{aligned}{}{B_{1}}& ={B_{0}}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}+{x_{1,0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg[1+\frac{2{\alpha _{2}}+{\beta _{2}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\bigg]\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{1-r}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}.\end{aligned}\]
When ${w_{1}}>1$, equation (3.33) implies that
(3.35)
\[ {B_{{w_{1}}}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(i+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\]
where we applied that ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$. From equation (3.35) we obtain that
(3.36)
\[\begin{aligned}{}{B_{{w_{1}}}}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}{B_{{w_{1}}-1}}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}}}{B_{1}}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{B_{1}}.\end{aligned}\]
So from equation (3.30) we obtain that
(3.37)
\[\begin{aligned}{}{M_{{w_{1}}}}& =\frac{{B_{{w_{1}}}}}{{x_{{w_{1}},\cdot }}}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & =\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}\frac{{B_{1}}}{{x_{1,\cdot }}}\\ {} & \hspace{1em}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
Therefore (3.3) implies that the magnitude of ${M_{{w_{1}}}}$ approaches ${w_{1}^{\frac{{\beta _{1}}+1}{{\alpha _{1}}}-\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}}}}={w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ as ${w_{1}}\to \infty $. More precisely, we obtain (2.18).  □

Proof of Theorem 2.1.
Propositions 2.3 and 2.4 imply
(3.39)
\[ \frac{{M_{{w_{1}}}}}{{E_{{w_{1}}}^{2}}}\sim \frac{{B_{1}}{x_{1,\cdot }}}{{A_{1}^{2}}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{{(\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}))^{2}}}\]
as ${w_{1}}\to \infty $. So Taylor’s law is satisfied asymptotically.  □

4 Numerical results
Here we present some numerical evidence supporting our result. The scheme of our computer experiment is the following. We fixed the size N of the stars and the values of the probabilities p, q and r, and generated the graph as described in Section 2 up to a fixed step n. Then we calculated ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$, that is, the expectation and the second moment of the peripheral weight ${w_{2}}$ of the vertices whose central weight ${w_{1}}$ is fixed. We visualized the function ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ using logarithmic scales on both axes. According to Theorem 2.1 the result should approximately be a straight line with slope 2. We also calculated ${E_{{w_{2}}}}$ and ${M_{{w_{2}}}}$, that is, the expectation and the second moment of the central weight ${w_{1}}$ of the vertices whose peripheral weight ${w_{2}}$ is fixed. We visualized the function ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ using logarithmic scales on both axes. This curve should also be approximately a straight line with slope 2.
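A sketch of this evaluation step (Python, reusing the simulate_n_stars sketch from Section 2, which is our code and not the software used for the experiments below; the number of steps here is far smaller than $n={10^{8}}$, and only the ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ branch is shown):

```python
import numpy as np

# assumes simulate_n_stars from the sketch in Section 2 is available
w1, w2 = simulate_n_stars(N=4, p=0.4, q=0.4, r=0.4, steps=200_000, seed=1)

vertices = set(w1) | set(w2)
pairs = np.array([[w1[v], w2[v]] for v in vertices])

E, M = [], []
for k in np.unique(pairs[:, 0]):                 # group the vertices by central weight w1
    grp = pairs[pairs[:, 0] == k, 1].astype(float)
    E.append(grp.mean())                         # E_{w1}: mean of the peripheral weights
    M.append((grp ** 2).mean())                  # M_{w1}: their second moment
E, M = np.array(E), np.array(M)

mask = (E > 0) & (M > 0)
slope, intercept = np.polyfit(np.log(E[mask]), np.log(M[mask]), 1)
print("log-log slope of E_w1 -> M_w1:", slope)   # Theorem 2.1 predicts a slope close to 2
```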
In the following five experiments we used various parameter sets. The number of steps was always $n={10^{8}}$. One can check that in these five examples the conditions ${\beta _{1}}+1>2{\alpha _{2}}$ and ${\beta _{2}}+1>2{\alpha _{1}}$ are satisfied. In each case we see that both ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ are approximately straight lines on the log-log scale.
Experiment 4.1.
Here $N=4$, $p=0.4$, $q=0.4$, $r=0.4$. In Figure 1 we see that both ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) are approximately straight lines on the log-log scale.
Finally, we show a numerical result when the conditions of Theorem 2.1 are not satisfied.
Experiment 4.6.
Let $N=5$, $p=0.9$, $q=0.5$, $r=0.9$ and $n={10^{8}}$. In Figure 6 one can see that Taylor’s power law is not satisfied.