1 Introduction
We consider spatial birth-and-death processes with time-dependent birth and death rates. At each moment of time the system is represented by a finite collection of motionless particles in ${\mathbb{R}^{\mathrm{d}}}$. The particles can also be interpreted as individuals. Existing particles may die and new particles may appear. Each particle is characterized by its location.
The state space of a spatial birth-and-death Markov process on ${\mathbb{R}^{\mathrm{d}}}$ with a finite number of particles is the space of finite subsets of ${\mathbb{R}^{\mathrm{d}}}$
\[ {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})=\{\eta \subset {\mathbb{R}^{\mathrm{d}}}:|\eta |<\infty \},\]
where $|\eta |$ is the number of points of η. ${\Gamma _{0}}:={\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$ is also called the space of finite configurations. Denote by $\mathcal{B}({\mathbb{R}^{\mathrm{d}}})$ the Borel σ-algebra on ${\mathbb{R}^{\mathrm{d}}}$. The evolution of the spatial birth-and-death process on ${\mathbb{R}^{\mathrm{d}}}$ admits the following description. Let ${\mathbb{R}_{+}}:=[0,+\infty )$. Two measurable functions characterize the development in time: the birth rate $b:{\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})\to [0,\infty )$ and the death rate $d:{\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})\to [0,\infty )$. If the system is in state $\eta \in {\Gamma _{0}}$ at time t, then the probability that a new particle appears (a “birth”) in a bounded set $B\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}})$ over the time interval $[t;t+\Delta t]$ is
\[ \Delta t\underset{B}{\int }b(x,t,\eta )dx+o(\Delta t),\]
the probability that a particle $x\in \eta $ is deleted from the configuration (a “death”) over the time interval $[t;t+\Delta t]$ is
\[ \Delta t\hspace{0.1667em}d(x,t,\eta )+o(\Delta t),\]
and no two events happen simultaneously. By an event we mean a birth or a death. Using a slightly different terminology, we can say that the rate at which a birth occurs in B is ${\textstyle\int _{B}}b(x,t,\eta )dx$, the rate at which a particle $x\in \eta $ dies is $d(x,t,\eta )$, and no two events happen at the same time.
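When the total birth rate is finite, the heuristic description above can be turned into a standard Gillespie-type simulation. The sketch below is our own illustration with deliberately simple rates (births placed uniformly on the unit square at constant total rate `lam`, a constant per-particle death rate `mu`); it is not the construction developed later in the paper.

```python
import random

# Minimal Gillespie-type sketch of a finite spatial birth-and-death
# process. Illustrative rates (our assumption): total birth rate lam
# with uniform placement on [0,1)^2, per-particle death rate mu.
def simulate(lam=2.0, mu=1.0, t_max=10.0, seed=1):
    rng = random.Random(seed)
    eta, t, events = [], 0.0, []
    while True:
        total = lam + mu * len(eta)        # B(eta) + D(eta): total event rate
        t += rng.expovariate(total)        # exponential waiting time
        if t > t_max:
            return eta, events
        if rng.random() < lam / total:     # a birth occurs...
            x = (rng.random(), rng.random())  # ...placed per b(., eta)/B(eta)
            eta.append(x)
            events.append(('birth', t, x))
        else:                              # a death: chosen uniformly, since d is constant
            x = eta.pop(rng.randrange(len(eta)))
            events.append(('death', t, x))

eta, events = simulate()
```

For time-dependent or configuration-dependent rates one would instead thin a dominating process against an upper bound on the total rate; the loop structure stays the same.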
Such processes, in which the birth and death rates depend on the spatial structure of the system, as opposed to classical ${\mathbb{Z}_{+}}$-valued birth-and-death processes (see, e.g., [26, page 116], [1, page 109]), were first studied by Preston [38]. A similar heuristic description already appeared there. Our description resembles the one in [24].
We say that the rates b and d, or the corresponding birth-and-death process, are time-homogeneous if b and d do not depend on time. By abuse of notation we write in this case $b(x,s,\eta )=b(x,\eta )$, $d(x,s,\eta )=d(x,\eta )$. The (heuristic) generator of a time-homogeneous spatial birth-and-death process should be of the form
(1)
\[ LF(\eta )=\underset{x\in {\mathbb{R}^{\mathrm{d}}}}{\int }b(x,\eta )[F(\eta \cup x)-F(\eta )]dx+\sum \limits_{x\in \eta }d(x,\eta )(F(\eta \setminus x)-F(\eta )),\]
for F in an appropriate domain, where $\eta \cup x$ and $\eta \setminus x$ are shorthands for $\eta \cup \{x\}$ and $\eta \setminus \{x\}$, respectively.

The purpose of this paper is twofold. First, we would like to lay the groundwork for a rigorous analysis of spatial birth-and-death processes with a finite number of particles. To this end we provide a construction and establish basic properties of the resulting process, such as the strong Markov property, martingale properties, and a coupling result ensuring that under certain conditions one birth-and-death process dominates another. The approach of obtaining the process as a solution to a certain stochastic equation can be regarded as an analog of the graphical representation for classical interacting particle systems, for example, the contact process or the voter model. The similarity manifests itself in that in both cases the entire family of processes, starting at different, possibly random, times from different, possibly random, initial conditions and with different birth or death rates, can be constructed from a single ‘noise’ process. Furthermore, the construction automatically provides a coupling for the entire family. The latter was used in [7] in the proof of a shape theorem; see also [16, page 301], [32, pages 33–34 and elsewhere] for the role of the graphical representation in the analysis of discrete-space models.
Of course, the birth-and-death process with a finite number of particles with time-homogeneous birth and death rates can be relatively easily constructed as a pure jump type Markov process (see, e.g., [29, Chapter 12]). However, constructing a coupling for the entire infinite family of processes as described above would be rather challenging in that framework. Additionally, the stochastic equation approach allows us to naturally incorporate the case of time-inhomogeneous birth and death rates. Not much attention has been given to spatial time-inhomogeneous birth-and-death processes in the mathematical literature yet, even though such temporally variant models have been shown to perform better as predictors in ecological models, see, e.g., [12, 39]. Of particular interest are periodic rates reflecting seasonal changes. In [36] a nearest-neighbor birth-and-death process is fitted to describe the movement of sand dunes. In [40] spatial birth-and-death processes are used to describe the process of openings and closures of restaurants and stores in an area in Tokyo. The estimation of the birth and death rates is discussed in [31] in various settings. We note that in [31] the particles are allowed to move. In [10] (see also [11]) the dynamics and moment equations are investigated for a model of a biological population which is essentially a birth-and-death process with rates that in our notation can be described by
\[ b(x,\eta )={c_{+}}\sum \limits_{y\in \eta }{a_{+}}(y-x),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}d(x,\eta )=\mu +{c_{-}}\sum \limits_{y\in \eta }{a_{-}}(y-x).\]
Here $\mu ,{c_{+}},{c_{-}}>0$, and ${a_{+}}$ and ${a_{-}}$ are some kernels with compact support. The interpretation is as follows:
• each individual reproduces independently of the others at a constant rate ${c_{+}}>0$, and the offspring is displaced with a given kernel ${a_{+}}$;
• each individual dies at a constant rate $\mu >0$;
• additionally, each individual dies at a rate governed by the kernel ${a_{-}}$ due to competition.
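For concreteness, these rates are easy to evaluate for a finite configuration. The kernels and constants below are illustrative choices of ours (indicator kernels with compact support, in dimension one), not the ones from [10]; the sums follow the displayed formulas, including the $y=x$ term in the death rate.

```python
# Evaluating the birth and death rates of the model above for a finite
# configuration eta, here a list of positions in R^1. The kernels are
# illustrative stand-ins with compact support.
def a_plus(z):   # dispersal kernel, support [-1, 1], integrates to 1
    return 0.5 if abs(z) <= 1.0 else 0.0

def a_minus(z):  # competition kernel, support [-0.5, 0.5]
    return 1.0 if abs(z) <= 0.5 else 0.0

def birth_rate(x, eta, c_plus=1.0):
    # b(x, eta) = c_+ * sum_{y in eta} a_+(y - x)
    return c_plus * sum(a_plus(y - x) for y in eta)

def death_rate(x, eta, mu=0.1, c_minus=0.2):
    # d(x, eta) = mu + c_- * sum_{y in eta} a_-(y - x),
    # with the sum over all of eta, as in the displayed formula
    return mu + c_minus * sum(a_minus(y - x) for y in eta)

eta = [0.0, 0.3, 2.0]
print(birth_rate(0.1, eta))   # contributions from y = 0.0 and y = 0.3 -> 1.0
print(death_rate(0.0, eta))   # 0.1 + 0.2 * (a_-(0) + a_-(0.3)) -> 0.5
```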
Further extensions related to this model can be found in [23, 18, 20, 37, 22]. The works [23, 18] are focused on microscopic, or probabilistic, aspects, whereas in [20, 37, 22] the approach is more macroscopic and analytical. Among exciting open problems for continuous-space birth-and-death processes are questions related to the asymptotic shape (see [7] for a shape theorem for spatial birth processes) and the survival of the process started from a single-point configuration.
Our second aim is to give a detailed asymptotic analysis for the aggregation model and to demonstrate that it behaves differently from the corresponding mesoscopic model [21]. We show certain fine asymptotic properties of the process, such as the finiteness of the total number of deaths over an infinite time interval and an exponential growth of the number of particles within a certain region.
A short literature overview. Garcia and Kurtz [24] obtained birth-and-death processes as solutions to certain stochastic integral equations for the case when the death rate $d\equiv 1$. The systems treated there involve an infinite number of particles. In the earlier work [33] of Garcia another approach was used: birth-and-death processes were obtained as projections of Poisson point processes. A further development of the projection method appears in [25]. Fournier and Méléard [23] used a similar equation for the construction of the Bolker–Pacala–Dieckmann–Law process with finitely many particles. Following ideas of [24] and [23], we construct the birth-and-death process described above as a solution to a stochastic equation.
Holley and Stroock [27] constructed the spatial birth-and-death process as a Markov family of unique solutions to the corresponding martingale problem. For the most part, they consider a process contained in a bounded volume, with bounded birth and death rates. They also proved the corresponding result for the nearest neighbor model in ${\mathbb{R}^{1}}$ with an infinite number of particles. Bezborodov et al. [9] construct and study infinite particle birth-and-death systems on the integer lattice with birth and death rates satisfying some general conditions. The approach taken in this paper somewhat resembles that in [9]; however, in the continuous-space setting the death part of the stochastic equation cannot be designed by assigning to each site its own independent Poisson process as is done in [9]. Therefore the stochastic equation we use differs significantly from the one in [9].
Belavkin and Kolokoltsov [4] discuss, among other things, a general structure of a Feller semigroup on disjoint unions of Euclidean spaces (see also references therein for the construction of the Markov processes with a given generator). We note in this regard that time-homogeneous birth-and-death processes need not have the ${C_{0}}$-Feller property. Eibeck and Wagner [17] discuss convergence of particle systems to limiting kinetic equations. In particular, they construct the stochastic process corresponding to the particle system as a minimal jump process, or pure jump type Markov process in the terminology of Kallenberg [29]. The jump kernel is assumed to be locally bounded.
The scheme proposed by Etheridge and Kurtz [19] covers a wide range of interactions and applies to discrete and continuous models. Their approach is based on, among other things, assigning a certain mark (‘level’) to each particle and letting this mark evolve according to some law. A critical event, such as a birth or a death, occurs when the level hits some threshold. Recurrence properties of birth-and-death processes and convergence to the invariant distribution are analyzed by Møller [35]. Shcherbakov and Volkov [41] consider the long-term behavior of birth-and-death processes on a finite graph with constant death rate and a birth rate of a special exponential form. Density bounds, the existence of an invariant measure, and certain return times are studied in [6]. A birth-and-death process with constant birth rate involving infinitely many particles was constructed in [2] using a completely different approach based on a comparison with a Poisson random connection graph. In [8] it is shown that the Lebesgue–Poisson measure is a maximal irreducible measure. Bezborodov et al. [7] prove a shape theorem for a wide class of continuous-space birth processes which match the above description with the death rate $d\equiv 0$. The stochastic equation used in [7] to construct the process is a special case of our equation (3). Age-dependent birth-and-death processes and their scaling limits are studied in [34].
In the aforementioned references as well as in the present work the system is represented by a Markov process. An alternative approach consists in using the concept of statistical dynamics that substitutes the notion of a Markov stochastic process. This approach is based on considering evolutions of measures and their correlation functions. For details, see, e.g., [20, 21], and references therein.
Finkelshtein et al. [21] consider different aspects of statistical dynamics for the aggregation model. In this model the death rate is given by
\[ d(x,\eta )=\exp \Big(-\sum \limits_{y\in \eta \setminus \{x\}}\phi (x-y)\Big),\]
where ϕ is a positive measurable function. For more details, see [21]. In this paper we present an analysis of the long-time behavior of a microscopic version of this model. In particular, we estimate the probability of extinction and the speed of growth of the average number of particles.
2 The set-up and main results
2.1 Construction and basic properties
The state space of a continuous-time, continuous-space birth-and-death process with a finite number of particles is
\[ {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})=\{\eta \subset {\mathbb{R}^{\mathrm{d}}}:|\eta |<\infty \},\]
where $|\eta |$ is the number of points of η. ${\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$ is often called the space of finite configurations. The space of n-point configurations is ${\Gamma _{0}^{(n)}}({\mathbb{R}^{\mathrm{d}}}):=\{\eta \subset {\mathbb{R}^{\mathrm{d}}}:|\eta |=n\}\subset {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$. We will use ${\Gamma _{0}}$ and ${\Gamma _{0}^{(n)}}$ as shorthands for ${\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$ and ${\Gamma _{0}^{(n)}}({\mathbb{R}^{\mathrm{d}}})$, respectively. For $\eta ,\zeta \in {\Gamma _{0}}$, $|\eta |=|\zeta |>0$, we define
(2)
\[ \rho (\eta ,\zeta ):=\underset{\varsigma }{\min }\underset{x\in \eta }{\max }\{|\varsigma (x)-x|\},\]
where the minimum is taken over the set of all bijections $\varsigma :\eta \to \zeta $. Note that in (2) the notation $|\cdot |$ is used for the Euclidean distance in ${\mathbb{R}^{\mathrm{d}}}$ (not for the number of points as in $|\eta |$), which hopefully should not lead to ambiguity. Define a metric $\tilde{\rho }$ on ${\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$ by setting $\tilde{\rho }(\eta ,\zeta )=1\wedge \rho (\eta ,\zeta )$ if $|\eta |=|\zeta |>0$, $\tilde{\rho }(\varnothing ,\varnothing )=0$, and $\tilde{\rho }(\eta ,\zeta )=1$ if $|\eta |\ne |\zeta |$. Denote by $\mathcal{B}({\Gamma _{0}})$ the Borel σ-algebra generated by $\tilde{\rho }$. For $\eta \in {\Gamma _{0}}$ and $a>0$ set
Note that
Let X be a locally compact separable metric space (in this paper X will be a subset of ${\mathbb{R}^{m}}$ for some $m\in \mathbb{N}$). Even though our solution process will stay in ${\Gamma _{0}}$, we introduce now a more general configuration space to accommodate the driving process. Denote by $\Gamma (X)$ the space of locally finite subsets of X
\[ \Gamma (X)=\{\gamma \subset X\mid |\gamma \cap K|<\infty \hspace{2.5pt}\text{for all compact}\hspace{2.5pt}K\},\]
also called the space of configurations over X. The space $\Gamma (X)$ can be endowed with the σ-field $\mathcal{B}(\Gamma (X))$ generated by the projection maps
\[ \Gamma (X)\ni \gamma \mapsto |\gamma \cap B|\in {\mathbb{Z}_{+}},\]
where B is an arbitrary bounded Borel subset of X.
Convention. With a slight abuse of notation, we identify $\gamma \in \Gamma $ with the induced point measure on X, so that
\[ \gamma (B)=|\gamma \cap B|\hspace{1em}\text{for}\hspace{2.5pt}B\in \mathcal{B}(X).\]
This convention also applies to elements of ${\Gamma _{0}}$ and other point processes and is used throughout the paper.
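To make the preceding definitions concrete, here is a small illustration of ours treating configurations as finite tuples of points; the distance ρ of (2) is computed by brute force over all bijections, which is feasible only for small configurations.

```python
import itertools
import math

# Configurations as finite tuples of points in R^d (d = 2 here). The
# counting-measure convention reads eta(B) = |eta ∩ B|.
def count_in_box(eta, lo, hi):
    # eta(B) for the coordinate box B = [lo, hi]^d
    return sum(1 for p in eta if all(lo <= c <= hi for c in p))

def rho(eta, zeta):
    # rho of (2): min over bijections (permutations) of the max displacement
    assert len(eta) == len(zeta) > 0
    return min(max(math.dist(x, y) for x, y in zip(eta, perm))
               for perm in itertools.permutations(zeta))

def rho_tilde(eta, zeta):
    # the metric generating the Borel sigma-algebra on Gamma_0
    if len(eta) != len(zeta):
        return 1.0
    if len(eta) == 0:
        return 0.0
    return min(1.0, rho(eta, zeta))

eta = ((0.0, 0.0), (1.0, 0.0))
zeta = ((1.1, 0.0), (0.0, 0.1))
print(count_in_box(eta, 0.0, 0.5))  # only (0.0, 0.0) lies in [0, 0.5]^2
print(rho(eta, zeta))               # optimal matching moves each point by 0.1
```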
For more details about the notions introduced here, see, e.g., [15], [29, Chapter 12] or [30]. Throughout this paper ${\Gamma _{2}}$ stands for $\Gamma ((0,+\infty )\times {\mathbb{R}_{+}})$. Let π be the distribution of a Poisson random measure on $({\Gamma _{2}},\mathcal{B}({\Gamma _{2}}))$, with the intensity measure being the Lebesgue measure on $(0,+\infty )\times {\mathbb{R}_{+}}$ (here and throughout $\mathcal{B}(X)$ is the Borel σ-algebra of X). Let ${\mathcal{B}_{t}}({\Gamma _{2}})$ be the smallest sub-σ-algebra of $\mathcal{B}({\Gamma _{2}})$ such that for every ${A_{1}}\in \mathcal{B}((0,t])$, ${A_{2}}\in \mathcal{B}({\mathbb{R}_{+}})$ the map
\[ {\Gamma _{2}}\ni \gamma \mapsto \gamma ({A_{1}}\times {A_{2}})\in {\mathbb{Z}_{+}}\cup \{+\infty \}\]
is ${\mathcal{B}_{t}}({\Gamma _{2}})$-measurable. Similarly, define ${\mathcal{B}_{>t}}({\Gamma _{2}})$ as the smallest sub-σ-algebra of $\mathcal{B}({\Gamma _{2}})$ such that for every ${A_{1}}\in \mathcal{B}((t,\infty ))$, ${A_{2}}\in \mathcal{B}({\mathbb{R}_{+}})$ the map
\[ {\Gamma _{2}}\ni \gamma \mapsto \gamma ({A_{1}}\times {A_{2}})\in {\mathbb{Z}_{+}}\cup \{+\infty \}\]
is ${\mathcal{B}_{>t}}({\Gamma _{2}})$-measurable. Let ${\eta _{0}}$ be a (random) finite initial configuration, and let ${\hat{\eta }_{0}}$ be the point process on ${\mathbb{R}^{\mathrm{d}}}\times {\Gamma _{2}}$ obtained by associating to each point in ${\eta _{0}}$ an independent Poisson point process on $(0,+\infty )\times {\mathbb{R}_{+}}$ with the distribution π. That is, if ${\eta _{0}}={\textstyle\sum _{i=1}^{|{\eta _{0}}|}}{\delta _{{x_{i}}}}$, then
\[ {\hat{\eta }_{0}}=\sum \limits_{i=1}^{|{\eta _{0}}|}{\delta _{({x_{i}},{\gamma _{i}})}},\]
where $\{{\gamma _{i}}\}$ is a collection of independent ${\Gamma _{2}}$-valued Poisson point processes, each distributed according to π.
Let us now introduce a stochastic differential equation driven by a Poisson point process, designed in such a way that its solution is a spatial birth-and-death process with birth and death rates b and d. It is not unusual to construct processes with jumps as solutions to certain stochastic equations [24, 23, 3]. A short discussion of the structure of (3) can be found in Remark 2.1.
Consider the stochastic equation with Poisson noise
(3)
\[\begin{aligned}{}{\eta _{t}}(B)=& \underset{(0,t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)I\Bigg\{\underset{\substack{r\in (s,t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}\\ {} & \times N(ds,dx,du,d\gamma )\\ {} & +\underset{B\times {\Gamma _{2}}}{\int }I\Bigg\{\underset{\substack{r\in (0,t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}{\hat{\eta }_{0}}(dx,d\gamma ),\end{aligned}\]
where ${({\eta _{t}})_{t\ge 0}}$ is a cadlag ${\Gamma _{0}}$-valued solution process and N is a Poisson point process on ${\mathbb{R}_{+}}\times {\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{2}}$ with mean measure $ds\times dx\times du\times \pi $. We require the processes N and ${\hat{\eta }_{0}}$ to be independent of each other. Equation (3) is understood in the sense that the equality holds a.s. for every bounded $B\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}})$ and $t\ge 0$.

Remark 2.1.
In the first integral on the right-hand side of (3), x is the place and s is the time of birth of a new particle. This particle is alive as long as ${\textstyle\int _{s}^{t}}{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0$, where $(x,s,u,\gamma )\in N$. Thus, γ is the process ‘responsible’ for death. The variable u is a randomizer controlling whether a birth occurs at a given time and location. In other words, each point of the driving Poisson process N in space-time carries an extra mark $u\in {\mathbb{R}_{+}}$ (used to decide whether the potential birth actually occurs) and a further two-dimensional Poisson process $\gamma \in {\Gamma _{2}}$ (used to decide when the particle dies). The main difference from the equation considered by Garcia and Kurtz [24] lies in the death term. Adapted to our notation, the equation there is of the form
(4)
\[\begin{aligned}{}{\eta _{t}}(B)=& \underset{(0,t]\times B\times [0,\infty )\times [0,\infty )}{\int }{I_{[0,b(x,{\eta _{s-}})]}}(u)I\Bigg\{\underset{v\in (s,t]}{\int }d(x,{\eta _{v-}})dv<r\Bigg\}\\ {} & \times \tilde{N}(ds,dx,du,dr)\\ {} & +\underset{B\times [0,\infty )}{\int }I\Bigg\{\underset{v\in (0,t]}{\int }d(x,{\eta _{v-}})dv<r\Bigg\}{\tilde{\eta }_{0}}(dx,dr),\end{aligned}\]
where $\tilde{N}$ is a Poisson point process on ${\mathbb{R}_{+}}\times {\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\mathbb{R}_{+}}$ with mean measure $ds\times dx\times du\times {e^{-r}}dr$, and ${\tilde{\eta }_{0}}$ is obtained from ${\eta _{0}}$ by attaching an independent unit exponential mark to each point. At first glance, (3) is more complicated than (4), since the death mechanism requires a whole Poisson random measure on ${[0,\infty )^{2}}$ instead of just one exponential random variable. However, it is more difficult a priori to define a filtration ${\{{\tilde{\mathcal{F}}_{t}}\}_{t\ge 0}}$ such that a solution to (4), if unique, should be adapted to and possess the Markov property with respect to ${\{{\tilde{\mathcal{F}}_{t}}\}_{t\ge 0}}$. This makes working with martingale properties of a solution to (4) more convoluted.
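The two death mechanisms can be compared on a single particle. In the sketch below (our illustration, with an arbitrary constant rate `d0`), the γ-clock of (3) kills the particle at the first atom $(r,v)$ of a planar Poisson process lying below the graph of the death rate, while the exponential clock of (4) kills it once the integrated rate exceeds a unit exponential mark; both yield survival probability $\exp (-{\textstyle\int _{0}^{T}}d(t)dt)$.

```python
import math
import random

# gamma-clock of (3): atoms of a unit-intensity Poisson process on
# (0, T] x [0, M] arrive at temporal rate M with heights uniform on
# [0, M]; the particle dies at the first atom below the graph of d.
# Requires M >= sup_t d(t).
def death_time_gamma(d, T, M, rng):
    t = 0.0
    while True:
        t += rng.expovariate(M)
        if t > T:
            return math.inf        # still alive at time T
        if rng.random() * M <= d(t):
            return t

# exponential clock of (4): die once the integrated rate reaches a
# unit exponential mark r (crude Riemann discretization).
def death_time_exp(d, T, rng, dt=0.01):
    r, acc, t = rng.expovariate(1.0), 0.0, 0.0
    while t < T:
        t += dt
        acc += d(t) * dt
        if acc >= r:
            return t
    return math.inf

rng = random.Random(7)
d0, T, n = 1.0, 1.0, 4000
surv_a = sum(death_time_gamma(lambda t: d0, T, 2.0, rng) == math.inf
             for _ in range(n)) / n
surv_b = sum(death_time_exp(lambda t: d0, T, rng) == math.inf
             for _ in range(n)) / n
print(surv_a, surv_b)  # both close to exp(-1)
```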
Conditions on b, d and ${\eta _{0}}$. The birth rate b and the death rate d are measurable maps from ${\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{0}}$ to $[0,\infty )$. We assume that the birth rate b has sublinear growth in the configuration variable in the sense that
(5)
\[ \underset{{\mathbb{R}^{\mathrm{d}}}}{\int }\underset{s>0}{\sup }b(x,s,\eta )dx\le {c_{1}}|\eta |+{c_{2}},\]
for some constants ${c_{1}},{c_{2}}>0$, and that $b(x,\cdot ,\eta )$ and $d(x,\cdot ,\eta )$ are left-continuous for any $x\in {\mathbb{R}^{\mathrm{d}}}$ and $\eta \in {\Gamma _{0}}$. The initial condition ${\eta _{0}}$ is assumed to satisfy
(6)
\[ E|{\eta _{0}}|<\infty .\]

Remark 2.2.
Note that we consider a very general death rate: apart from measurability, d is only required to be left-continuous in the second argument.
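For instance, one can check directly (this computation is ours) that the birth rate in (14) satisfies the sublinear growth condition (5) whenever the kernel ${a_{+}}$ is integrable:
\[ \underset{{\mathbb{R}^{\mathrm{d}}}}{\int }b(x,\eta )dx={c_{+}}\sum \limits_{y\in \eta }\underset{{\mathbb{R}^{\mathrm{d}}}}{\int }{a_{+}}(y-x)dx={c_{+}}\| {a_{+}}{\| _{{L^{1}}}}\hspace{0.1667em}|\eta |,\]
so (5) holds with ${c_{1}}={c_{+}}\| {a_{+}}{\| _{{L^{1}}}}$ and any ${c_{2}}>0$.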
We say that N is compatible with a right-continuous complete filtration $\{{\mathcal{F}_{t}}\}$ if for every $t\ge 0$
\[ N([0,q]\times B\times C\times \Xi )\]
is ${\mathcal{F}_{t}}$-measurable for any $q\in [0,t]$, $B\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}})$, $C\in \mathcal{B}({\mathbb{R}_{+}})$, and $\Xi \in {\mathcal{B}_{t}}({\Gamma _{2}})$, and also
\[ N((t+{q^{\prime }},t+{q^{\prime }}+{q^{\prime\prime }}]\times {B^{\prime }}\times {C^{\prime }}\times {\Xi ^{\prime }})\]
is independent of ${\mathcal{F}_{t}}$ for any ${q^{\prime\prime }}>{q^{\prime }}\ge 0$, ${B^{\prime }}\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}})$, ${C^{\prime }}\in \mathcal{B}({\mathbb{R}_{+}})$, and ${\Xi ^{\prime }}\in {\mathcal{B}_{>t}}({\Gamma _{2}})$. We say that ${\hat{\eta }_{0}}$ is compatible with $\{{\mathcal{F}_{t}}\}$ if for every $t\ge 0$
\[ {\hat{\eta }_{0}}([0,q]\times \Xi )\]
is ${\mathcal{F}_{t}}$-measurable for any $q\in [0,t]$ and $\Xi \in {\mathcal{B}_{t}}({\Gamma _{2}})$, and also
\[ {\hat{\eta }_{0}}((t+{q^{\prime }},t+{q^{\prime }}+{q^{\prime\prime }}]\times {\Xi ^{\prime }})\]
is independent of ${\mathcal{F}_{t}}$ for any ${q^{\prime\prime }}>{q^{\prime }}\ge 0$ and ${\Xi ^{\prime }}\in {\mathcal{B}_{>t}}({\Gamma _{2}})$. Sometimes we will use the representations
\[ N=\sum \limits_{q\in \mathcal{I}}{\delta _{({s_{q}},{x_{q}},{u_{q}},{\gamma _{q}})}},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{\hat{\eta }_{0}}=\sum \limits_{q\in \mathcal{J}}{\delta _{({x_{q}},{\gamma _{q}})}},\]
where $\mathcal{I}$ and $\mathcal{J}$ are some countable disjoint sets. Since N and ${\hat{\eta }_{0}}$ are independent and the intensity measure of N is nonatomic, the following holds a.s.: if $q\ne {q^{\prime }}$, $q,{q^{\prime }}\in \mathcal{I}\cup \mathcal{J}$, then ${x_{q}}\ne {x_{{q^{\prime }}}}$.

Definition 2.3.
A (weak) solution of equation (3) is a triple $({({\eta _{t}})_{t\ge 0}},N)$, $(\Omega ,\mathcal{F},P)$, $({\{{\mathcal{F}_{t}}\}_{t\ge 0}})$, where
(i) $(\Omega ,\mathcal{F},P)$ is a probability space, and ${\{{\mathcal{F}_{t}}\}_{t\ge 0}}$ is an increasing, right-continuous and complete filtration of sub-σ-algebras of $\mathcal{F}$,
(ii) N is a Poisson point process on ${\mathbb{R}_{+}}\times {\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{2}}$ with intensity $ds\times dx\times du\times \pi $,
(iii) ${\eta _{0}}$ is a random ${\mathcal{F}_{0}}$-measurable element in ${\Gamma _{0}}$ satisfying (6),
(iv) the processes N and ${\hat{\eta }_{0}}$ are independent, and are compatible with ${\{{\mathcal{F}_{t}}\}_{t\ge 0}}$,
(v) ${({\eta _{t}})_{t\ge 0}}$ is a cadlag ${\Gamma _{0}}$-valued process adapted to ${\{{\mathcal{F}_{t}}\}_{t\ge 0}}$, ${\eta _{t}}{\big|_{t=0}}={\eta _{0}}$,
(vi) all integrals in (3) are well-defined,
and
(vii) equality (3) holds a.s. for all $t\in [0,\infty )$ and all Borel sets B.
Following standard convention, we also call the process ${({\eta _{t}})_{t\ge 0}}$ just a solution. Note that for any solution ${({\eta _{t}})_{t\ge 0}}$ to (3) a.s.
Let
(8)
\[\begin{aligned}{}{\mathcal{S}_{t}^{0}}=\sigma \big\{& {\eta _{0}},N([0,q]\times B\times C\times \Xi ),\\ {} & q\in [0,t],B\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}}),C\in \mathcal{B}({\mathbb{R}_{+}}),\Xi \in {\mathcal{B}_{t}}({\Gamma _{2}})\big\},\end{aligned}\]
and let ${\mathcal{S}_{t}}$ be the completion of ${\mathcal{S}_{t}^{0}}$ under P. Note that ${\{{\mathcal{S}_{t}}\}_{t\ge 0}}$ is a right-continuous filtration, see Section A.2 in the Appendix.

Definition 2.4.
A solution of (3) is called strong if ${({\eta _{t}})_{t\ge 0}}$ is adapted to $({\mathcal{S}_{t}},t\ge 0)$.
Remark 2.5.
In the definition above we considered solutions as processes indexed by $t\in [0,\infty )$. The reformulations for the case $t\in [0,T]$, $0<T<\infty $, are straightforward. This remark also applies to many of the results below.
For σ-algebras ${\mathcal{A}_{1}}$ and ${\mathcal{A}_{2}}$, let ${\mathcal{A}_{1}}\vee {\mathcal{A}_{2}}$ be the smallest σ-algebra containing both ${\mathcal{A}_{1}}$ and ${\mathcal{A}_{2}}$.
Definition 2.6.
We say that pathwise uniqueness holds for equation (3) and an initial distribution ν if, whenever the triples $({({\eta _{t}})_{t\ge 0}},N)$, $(\Omega ,\mathcal{F},P)$, $({\{{\mathcal{F}_{t}}\}_{t\ge 0}})$ and $({({\bar{\eta }_{t}})_{t\ge 0}},N)$, $(\Omega ,\mathcal{F},P)$, $({\{{\bar{\mathcal{F}}_{t}}\}_{t\ge 0}})$ are weak solutions of (3) with $P\{{\eta _{0}}={\bar{\eta }_{0}}\}=1$ and $Law({\eta _{0}})=\nu $, and such that N is compatible with ${\{{\mathcal{F}_{t}}\vee {\bar{\mathcal{F}}_{t}}\}_{t\ge 0}}$, we have $P\{{\eta _{t}}={\bar{\eta }_{t}},t\in [0,\infty )\}=1$ (that is, the processes η, $\bar{\eta }$ are indistinguishable).
Definition 2.7.
Theorem 2.8.
Pathwise uniqueness, strong existence and joint uniqueness in law hold for equation (3). If b and d are time-homogeneous, then the unique solution is a strong Markov process, and the family of push-forward measures $\{{P_{\alpha }},\alpha \in {\Gamma _{0}}\}$ defined in Remark 3.3 constitutes a Markov process, or a Markov family of probability measures, on ${D_{{\Gamma _{0}}}}[0,\infty )$.
We call the unique solution of (3) (or, sometimes, the corresponding family of measures on ${D_{{\Gamma _{0}}}}[0,\infty )$) a (spatial) birth-and-death Markov process.
Remark 2.9.
For time-homogeneous b and d, the transition probabilities of the embedded Markov chain (see, e.g., [29, Chapter 12]) of the birth-and-death process are completely described by
(9)
\[\begin{aligned}{}Q(\eta ,\{\eta \setminus \{x\}\})& =\frac{d(x,\eta )}{(B+D)(\eta )},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}x\in \eta ,\hspace{2.5pt}\eta \in {\Gamma _{0}},\\ {} Q(\eta ,\{\eta \cup \{x\}:x\in U\})& =\frac{{\textstyle\int _{U}}b(x,\eta )dx}{(B+D)(\eta )},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}U\in \mathcal{B}({\mathbb{R}^{\mathrm{d}}}),\hspace{2.5pt}\eta \in {\Gamma _{0}},\end{aligned}\]
where $(B+D)(\eta )={\textstyle\int _{{\mathbb{R}^{\mathrm{d}}}}}b(x,\eta )dx+{\textstyle\sum _{x\in \eta }}d(x,\eta )$.

The following two propositions establish a rigorous relation between the unique solution to (3) and L defined by (1). To formulate the first of them, let us consider the class ${\mathcal{C}_{b}}$ of cylindrical functions $F:{\Gamma _{0}}\to {\mathbb{R}_{+}}$ with bounded increments. We say that F has bounded increments if
\[ \underset{\eta \in {\Gamma _{0}},x\in {\mathbb{R}^{\mathrm{d}}}}{\sup }\big|F(\eta \cup \{x\})-F(\eta )\big|<\infty .\]
We say that F is cylindrical if for some $R={R_{F}}>0$
\[ F(\eta )=F(\zeta )\hspace{2.5pt}\text{whenever}\hspace{2.5pt}\eta \cap \mathbf{B}({\mathbf{o}_{\mathrm{d}}},R)=\zeta \cap \mathbf{B}({\mathbf{o}_{\mathrm{d}}},R),\]
where $\mathbf{B}(x,R)$ is the closed ball of radius R around x, and ${\mathbf{o}_{\mathrm{d}}}$ is the origin in ${\mathbb{R}^{\mathrm{d}}}$. We recall that the filtration $\{{\mathcal{S}_{t}},t\ge 0\}$ is introduced before Definition 2.4.

Proposition 2.10.
Let ${({\eta _{t}})_{t\ge 0}}$ be a weak solution to (3). Then for any $F\in {\mathcal{C}_{b}}$ the process
(10)
\[ \begin{array}{r}\displaystyle F({\eta _{t}})-{\underset{0}{\overset{t}{\int }}}\Bigg\{\underset{{\mathbb{R}^{\mathrm{d}}}}{\int }b(x,s,{\eta _{s-}})[F({\eta _{s-}}\cup \{x\})-F({\eta _{s-}})]dx\\ {} \displaystyle +\sum \limits_{x\in {\eta _{s-}}}d(x,s,{\eta _{s-}})[F({\eta _{s-}}\setminus \{x\})-F({\eta _{s-}})]\Bigg\}ds\end{array}\]
is an $\{{\mathcal{S}_{t}},t\ge 0\}$-martingale. In particular, the integral in (10) is well-defined a.s.

Remark 2.11.
Assume that all conditions we imposed on b, d, and ${\eta _{0}}$ are satisfied, except (6). Then we cannot claim that $E|{\eta _{t}}|<\infty $ for $t\ge 0$. However, we would still get a unique solution on $[0,\infty )$ satisfying all the items of Definition 2.3 except (iii) and (vi). One way to see this is to consider a sequence of initial conditions ${\{{\eta _{0}^{(m)}}\}_{m\in \mathbb{N}}}$, ${\eta _{0}^{(m)}}\subset {\eta _{0}}$, such that a.s. $|{\eta _{0}^{(m)}}|\le m$ and
\[ P\{{\eta _{0}^{(m)}}={\eta _{0}}\hspace{2.5pt}\text{for sufficiently large}\hspace{2.5pt}m\}=1.\]
We are mostly interested in the case of a nonrandom initial condition; therefore we do not discuss the case when (6) is not satisfied in more detail.

Remark 2.12.
The process that starts at a possibly random time τ from a possibly random configuration ${\zeta _{\tau }}$ can be obtained from the equation
(11)
\[\begin{aligned}{}& {\eta _{t+\tau }}(B)\\ {} & \hspace{1em}=\underset{(\tau ,\tau +t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)I\Bigg\{\underset{\substack{r\in (s,\tau +t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}\\ {} & \hspace{1em}\hspace{1em}\times N(ds,dx,du,d\gamma )\\ {} & \hspace{1em}\hspace{1em}+\hspace{-0.1667em}\underset{B\times {\Gamma _{2}}}{\int }\hspace{-0.1667em}I\Bigg\{\underset{\substack{r\in (\tau ,\tau +t],\\ {} v\ge 0}}{\int }\hspace{-0.1667em}{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}{\hat{\zeta }_{\tau }}(dx,d\gamma )+{\zeta _{\tau }}(B),\hspace{0.5em}t\ge 0.\end{aligned}\]
This is the equation of the type (3) with the initial condition ${\zeta _{\tau }}$ and the driving process $\overline{N}$ being the shift of N by τ as defined in (57). We rely here on the strong Markov property of the driving process N in the sense of Proposition A.2. Of course, τ should be an $\{{\mathcal{S}_{t}},t\ge 0\}$-stopping time, and ${\zeta _{\tau }}$ needs to be ${\mathcal{S}_{\tau }}$-measurable as a map from $(\Omega ,{\mathcal{S}_{\tau }})$ to $({\Gamma _{0}},\mathcal{B}({\Gamma _{0}}))$ and such that $E|{\zeta _{\tau }}|<\infty $. Considering different pairs $(\tau ,{\zeta _{\tau }})$, we obtain a coupled family of birth-and-death processes as mentioned in the introduction.

We also discuss stochastic domination of one birth-and-death process by another. Consider two equations of the form (3),
(12)
\[ \begin{aligned}{}& {\xi _{t}^{(k)}}(B)\\ {} & \hspace{1em}=\underset{(0,t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,{b_{k}}(x,s,{\xi _{s-}^{(k)}})]}}(u)I\Bigg\{\underset{\substack{r\in (s,t],v\ge 0}}{\int }{I_{[0,{d_{k}}(x,r,{\xi _{r-}^{(k)}})]}}(v)\gamma (dr,dv)=0\Bigg\}\\ {} & \hspace{1em}\hspace{1em}\times N(ds,dx,du,d\gamma )\\ {} & \hspace{1em}\hspace{1em}+\underset{B\times {\Gamma _{2}}}{\int }I\Bigg\{\underset{\substack{r\in (0,t],v\ge 0}}{\int }{I_{[0,{d_{k}}(x,r,{\xi _{r-}^{(k)}})]}}(v)\gamma (dr,dv)=0\Bigg\}{\hat{\xi }_{0}^{(k)}}(dx,d\gamma ),\hspace{2.5pt}\hspace{2.5pt}k=1,2.\end{aligned}\]
We require the initial conditions ${\xi _{0}^{(k)}}$ and the rates ${b_{k}}$ and ${d_{k}}$ to satisfy the conditions imposed on ${\eta _{0}}$, b, and d. Let ${({\xi _{t}^{(k)}})_{t\in [0,\infty )}}$, $k=1,2$, be the unique strong solutions.
Proposition 2.13.
2.2 Aggregation model
Here we consider a specific time-homogeneous model which we call an aggregation model. This model has the property that the death rate decreases as the number of neighbors grows. We treat the death rate given below in (16), and, in addition to the previous assumptions, we require the birth rate to grow at least linearly in the number of points of the configuration in the sense of (17). We prove in Proposition 2.14 that the probability of extinction is small if the initial configuration has many points in some fixed Borel set $\Lambda \subset {\mathbb{R}^{\mathrm{d}}}$. Propositions 2.15, 2.16 and Theorem 2.17 describe the pathwise behavior of the process.
Let
(16)
\[ d(x,\eta )=\exp \Big(-\sum \limits_{y\in \eta \setminus \{x\}}\varphi (x-y)\Big),\]
where φ is a nonnegative measurable function. Our prime examples are $\varphi (z)=c>0$, or $\varphi (z)=cI\{z\in \widetilde{\Lambda }\}$, where $\widetilde{\Lambda }$ is a Borel set such that $\Lambda -\Lambda \subset \widetilde{\Lambda }$. Theorem 2.8 ensures existence and uniqueness of solutions, and that the unique solution is a pure jump type Markov process.
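The neighbor-dependence of the aggregation-model death rate can be sketched numerically. The exponential form $d(x,\eta )=\exp (-{\textstyle\sum _{y\in \eta \setminus \{x\}}}\varphi (x-y))$ used below is our working assumption here, chosen so that a constant $\varphi (z)=c$ gives $d=\exp (-c(|\eta |-1))$, i.e. a rate decaying exponentially in the number of neighbors.

```python
import math

# Sketch of an aggregation-type death rate (assumed exponential form):
# d(x, eta) = exp(-sum_{y in eta, y != x} phi(x - y)).
# More neighbors -> smaller death rate: crowding protects particles.
def death_rate(x, eta, phi):
    return math.exp(-sum(phi(tuple(a - b for a, b in zip(x, y)))
                         for y in eta if y != x))

phi_const = lambda z: 0.7                      # the example phi(z) = c > 0
eta = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(death_rate((0.0, 0.0), eta, phi_const))  # exp(-0.7 * 2), two neighbors
```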
More specifically, let Λ be a measurable nonempty subset of ${\mathbb{R}^{d}}$. Assume that the birth rate and the initial condition ${\eta _{0}}$ satisfy (5) and (6), and, besides that, the inequalities
and
hold for some positive c. Note that Λ is of positive Lebesgue measure by (17). We assume also that
where $a>1$.
(17)
\[ \underset{\Lambda }{\int }b(x,\eta )dx\ge c|\eta \cap \Lambda |,\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\eta \in {\Gamma _{0}},\](18)
\[ \hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}b(x,{\eta ^{1}})\le b(x,{\eta ^{2}}),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{\eta ^{1}},{\eta ^{2}}\in {\Gamma _{0}},{\eta ^{1}}\subset {\eta ^{2}},\]We say that the process ${({\eta _{t}})_{t\ge 0}}$ goes extinct if $\inf \{t\ge 0:{\eta _{t}}=\varnothing \}<\infty $. This infimum is called the time of extinction.
We want to show that the probability of extinction decays exponentially fast as the number of points of the initial configuration inside Λ grows. We will also give a few statements describing the pace of growth of the number of points in the system.
Note that we do not require $b(\cdot ,\varnothing )\equiv 0$; if ${\textstyle\int _{\Lambda }}b(x,\varnothing )dx>0$, then (20) implies
The next proposition is a consequence of the exponentially fast decay of the death rate.
Remark.
These two growth estimates stand in contrast to the mesoscopic behavior of the system [21]. Theorem 5.3 in [21] says that for some values of the parameters the solution to the mesoscopic equation starting from a sufficiently small initial condition stays bounded. By contrast, the microscopic system grows whenever it survives, and the density always grows.
3 Proof of Theorem 2.8 and Proposition 2.10
Let us start with the equation
where $\overline{b}(x,\eta ):={\sup _{s>0,\xi \subset \eta }}b(x,s,\xi )$. Note that $\overline{b}$ satisfies the sublinear growth condition (5) if b does.
(24)
\[ {\overline{\eta }_{t}}(B)=\underset{(0,t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,\overline{b}(x,s,{\eta _{s-}})]}}(u)N(ds,dx,du,d\gamma )+{\eta _{0}}(B),\]This equation is of the type (3), with $\overline{b}$ being the birth rate and the zero function being the death rate, and all definitions of existence and uniqueness of solutions are applicable here. Later a unique solution of (24) will be used as a dominating process to a solution to (3).
Proof.
For $\omega \in \{{\textstyle\int _{{\mathbb{R}^{\mathrm{d}}}}}\overline{b}(x,{\eta _{0}})dx=0\}$, set ${\zeta _{t}}\equiv {\eta _{0}}$, ${\sigma _{n}}=\infty $, $n\in \mathbb{N}$.
For $\omega \in F:=\{{\textstyle\int _{{\mathbb{R}^{\mathrm{d}}}}}\overline{b}(x,{\eta _{0}})dx>0\}$, we define the sequence of random pairs $\{({\sigma _{n}},{\zeta _{{\sigma _{n}}}})\}$, where
\[\begin{aligned}{}{\sigma _{n+1}}& =\inf \Bigg\{t>0:\underset{({\sigma _{n}},{\sigma _{n}}+t]\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}{I_{[0,\overline{b}(x,{\zeta _{{\sigma _{n}}}})]}}(u)N(ds,dx,du,d\gamma )>0\Bigg\}\hspace{-0.1667em}+{\sigma _{n}},\\ {} {\sigma _{0}}& =0,\end{aligned}\]
and
\[ {\zeta _{0}}={\eta _{0}},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{\zeta _{{\sigma _{n+1}}}}={\zeta _{{\sigma _{n}}}}\cup \{{z_{n+1}}\}\]
for ${z_{n+1}}=\{x\in {\mathbb{R}^{\mathrm{d}}}:N(\{{\sigma _{n+1}}\}\times \{x\}\times [0,\overline{b}(x,{\zeta _{{\sigma _{n}}}})]\times {\Gamma _{2}})>0\}$. The positions ${z_{n}}$ are uniquely determined almost surely on F. Furthermore, ${\sigma _{n+1}}>{\sigma _{n}}$ a.s., and ${\sigma _{n}}$ are finite a.s. on F (in particular, because $\overline{b}(x,{\zeta _{{\sigma _{n}}}})\ge \overline{b}(x,{\eta _{0}})$). For $\omega \in F$, we define ${\zeta _{t}}={\zeta _{{\sigma _{n}}}}$ for $t\in [{\sigma _{n}},{\sigma _{n+1}})$. Then by induction on n it follows that ${\sigma _{n}}$ is a stopping time for each $n\in \mathbb{N}$, and ${\zeta _{{\sigma _{n}}}}$ is ${\mathcal{F}_{{\sigma _{n}}}}\cap F$-measurable. By direct substitution we see that ${({\zeta _{t}})_{t\ge 0}}$ is a strong solution to (24) on the time interval $t\in [0,{\lim \nolimits_{n\to \infty }}{\sigma _{n}})$. We are going to show that
(26)
\[ \underset{n\to \infty }{\lim }{\sigma _{n}}=\infty \hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\text{a.s.}\]This relation is evidently true on the complement of F. If $P(F)=0$, then (26) is proven. If $P(F)>0$, define a probability measure on F, $Q(A)=\frac{P(A)}{P(F)}$, $A\in \mathcal{I}:=\mathcal{F}\cap F$, and define ${\mathcal{I}_{t}}={\mathcal{F}_{t}}\cap F$.
The process N is independent of F, therefore it is a Poisson point process on the probability space $(F,\mathcal{I},Q)$ with the same intensity, compatible with ${\{{\mathcal{I}_{t}}\}_{t\ge 0}}$. From now on and until it is specified otherwise, we work on the filtered probability space $(F,\mathcal{I},{\{{\mathcal{I}_{t}}\}_{t\ge 0}},Q)$. We use the same symbols for random processes and random variables, having in mind that we consider their restrictions to F.
The process ${({\zeta _{t}})_{t\in [0,{\lim \nolimits_{n\to \infty }}{\sigma _{n}})}}$ has the Markov property, because the process N has the strong Markov property and independent increments by Proposition A.2 in the Appendix. Recall that for $\eta \in {\Gamma _{0}}$ and $x\in {\mathbb{R}^{\mathrm{d}}}$, $\eta \cup x$ is a shorthand for $\eta \cup \{x\}$. Indeed, conditioning on ${\mathcal{I}_{{\sigma _{n}}}}$,
\[ E\big[{I_{\{{\zeta _{{\sigma _{n+1}}}}={\zeta _{{\sigma _{n}}}}\cup x\hspace{2.5pt}\text{for some}\hspace{2.5pt}x\in B\}}}\mid {\mathcal{I}_{{\sigma _{n}}}}\big]=\frac{\underset{B}{\textstyle\int }\overline{b}(x,{\zeta _{{\sigma _{n}}}})dx}{\underset{{\mathbb{R}^{\mathrm{d}}}}{\textstyle\int }\overline{b}(x,{\zeta _{{\sigma _{n}}}})dx},\]
thus the chain ${\{{\zeta _{{\sigma _{n}}}}\}_{n\in {\mathbb{Z}_{+}}}}$ is a Markov chain, and, given ${\{{\zeta _{{\sigma _{n}}}}\}_{n\in {\mathbb{Z}_{+}}}}$, ${\sigma _{n+1}}-{\sigma _{n}}$ are distributed exponentially:
\[ E\{{I_{\{{\sigma _{n+1}}-{\sigma _{n}}>a\}}}\mid {\{{\zeta _{{\sigma _{n}}}}\}_{n\in {\mathbb{Z}_{+}}}}\}=\exp \Bigg\{-a\underset{{\mathbb{R}^{\mathrm{d}}}}{\int }\overline{b}(x,{\zeta _{{\sigma _{n}}}})dx\Bigg\}.\]
Therefore, the random variables ${\gamma _{n}}=({\sigma _{n}}-{\sigma _{n-1}})({\textstyle\int _{{\mathbb{R}^{\mathrm{d}}}}}\overline{b}(x,{\zeta _{{\sigma _{n-1}}}})dx)$ constitute a sequence of independent random variables exponentially distributed with parameter 1, independent of ${\{{\zeta _{{\sigma _{n}}}}\}_{n\in {\mathbb{Z}_{+}}}}$. Thus Theorem 12.18 in [29] implies that ${({\zeta _{t}})_{t\in [0,{\lim \nolimits_{n\to \infty }}{\sigma _{n}})}}$ is a pure jump type Markov process. The jump rate of ${({\zeta _{t}})_{t\in [0,{\lim \nolimits_{n\to \infty }}{\sigma _{n}})}}$ is given by
Condition (5) implies that $c(\alpha )\le {c_{1}}|\alpha |+{c_{2}}$. Hence
We see that ${\textstyle\sum _{n}}\frac{1}{c({\zeta _{{\sigma _{n}}}})}=\infty $ a.s., hence Proposition 12.19 in [29] implies that ${\sigma _{n}}\to \infty $.
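The dichotomy invoked here (Proposition 12.19 in [29]: the jump times of a pure jump type process diverge if and only if the reciprocals of the jump rates along the trajectory sum to infinity) can be illustrated by a small simulation. This is only an illustrative sketch; the linear rate below is a placeholder standing in for $c({\zeta _{{\sigma _{n}}}})\le {c_{1}}|{\zeta _{{\sigma _{n}}}}|+{c_{2}}$, not a rate from the model itself.

```python
import random

def jump_times(rate_of_n, n_jumps, rng):
    """Jump times sigma_1 <= sigma_2 <= ... of a pure jump process whose
    waiting time after the n-th jump is exponential with parameter
    rate_of_n(n); mirrors sigma_{n+1} - sigma_n ~ Exp(jump rate)."""
    times, t = [], 0.0
    for n in range(n_jumps):
        t += rng.expovariate(rate_of_n(n))
        times.append(t)
    return times

rng = random.Random(0)
# Linear rate: sum of reciprocals 1/(2n+1) diverges, so sigma_n -> infinity.
linear = jump_times(lambda n: 2.0 * n + 1.0, 500, rng)
# Quadratic rate: sum 1/(n+1)^2 is finite, so the jump times accumulate
# below a finite (random) limit, i.e. the process explodes.
quadratic = jump_times(lambda n: (n + 1.0) ** 2, 500, rng)
```

With the linear rate the reciprocal rates are non-summable and the simulated jump times grow without bound, while with the quadratic rate they cluster below a finite limit, matching the dichotomy used in the proof.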
Now we return again to our initial probability space $(\Omega ,\mathcal{F},{\{{\mathcal{F}_{t}}\}_{t\ge 0}},P)$. We have proved the existence of a strong solution. The uniqueness follows by induction on jumps of the process. Namely, let ${({\tilde{\zeta }_{t}})_{t\ge 0}}$ be another solution of (24). Since a.s.
\[ \underset{(0,{\sigma _{1}})\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,0]}}(u)N(ds,dx,du,d\gamma )=0,\]
(here ${I_{[0,0]}}(u)=I\{u=0\}$) we have ${\zeta _{t}}={\tilde{\zeta }_{t}}={\eta _{0}}$ a.s. on the complement ${F^{c}}$ for all $t\ge 0$. From (vii) of Definition 2.3 and the equality
\[ \underset{(0,{\sigma _{1}})\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,\overline{b}(x,{\eta _{0}})]}}(u)N(ds,dx,du,d\gamma )=0,\]
it follows that $P\big(\{\tilde{\zeta }\hspace{2.5pt}\text{has a birth before}\hspace{5pt}{\sigma _{1}}\}\cap F\big)=0$. At the same time, the equality
\[ \underset{\{{\sigma _{1}}\}\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,\overline{b}(x,{\eta _{0}})]}}(u)N(ds,dx,du,d\gamma )=1,\]
which holds a.s. on F, yields that $\tilde{\zeta }$ has a birth at the moment ${\sigma _{1}}$, and at the same point of space. Therefore, $\tilde{\zeta }$ coincides with ζ on $[0,{\sigma _{1}}]$ a.s. on F. Similar reasoning shows that they coincide up to ${\sigma _{n}}$ a.s. on F, and, since ${\sigma _{n}}\to \infty $ a.s. on F,
Thus, pathwise uniqueness holds.
Now we turn our attention to (25). Since ${\zeta _{t}}\equiv {\eta _{0}}$ on ${F^{c}}$, we can assume without loss of generality that $P(F)=1$. We can write
Since ${\sigma _{n}}={\textstyle\sum _{i=1}^{n}}\frac{{\gamma _{i}}}{{\textstyle\int _{{\mathbb{R}^{\mathrm{d}}}}}\overline{b}(x,{\zeta _{{\sigma _{i-1}}}})dx}$, we have
\[ \{{\sigma _{n}}\le t\}=\Bigg\{{\sum \limits_{i=1}^{n}}\frac{{\gamma _{i}}}{\underset{{\mathbb{R}^{\mathrm{d}}}}{\textstyle\int }\overline{b}(x,{\zeta _{{\sigma _{i-1}}}})dx}\le t\Bigg\}\subset \Bigg\{{\sum \limits_{i=1}^{n}}\frac{{\gamma _{i}}}{{c_{1}}|{\zeta _{{\sigma _{i-1}}}}|+{c_{2}}}\le t\Bigg\}\]
\[ \subset \Bigg\{{\sum \limits_{i=1}^{n}}\frac{{\gamma _{i}}}{({c_{1}}+{c_{2}})(|{\eta _{0}}|+i)}\le t\Bigg\}=\{{Z_{t}}-{Z_{0}}\ge n\},\]
where $({Z_{t}})$ is the Yule process, i.e., the birth process on ${\mathbb{Z}_{+}}$ with transition rates ${p_{k,k+1}}=({c_{1}}+{c_{2}})k$, ${p_{k,l}}=0$, $l\ne k+1$, see, e.g., [1, Chapter 3, Section 5]. Here $({Z_{t}})$ is defined as follows: ${Z_{t}}-{Z_{0}}=n$ when
\[ {\sum \limits_{i=1}^{n}}\frac{{\gamma _{i}}}{({c_{1}}+{c_{2}})(|{\eta _{0}}|+i)}\le t<{\sum \limits_{i=1}^{n+1}}\frac{{\gamma _{i}}}{({c_{1}}+{c_{2}})(|{\eta _{0}}|+i)},\]
and ${Z_{0}}=|{\eta _{0}}|$. Thus, we have $|{\zeta _{t}}|\le {Z_{t}}$ a.s., hence $E|{\zeta _{t}}|\le E{Z_{t}}<\infty $. The constructed solution is strong. □
Proof.
Let us define stopping times with respect to $\{{\mathcal{F}_{t}},t\ge 0\}$, $0={\theta _{0}}\le {\theta _{1}}\le {\theta _{2}}\le {\theta _{3}}\le $..., and the sequence of (random) configurations ${\{{\eta _{{\theta _{j}}}}\}_{j\in \mathbb{N}}}$ as follows: as long as
where
\[ {\theta _{n+1}^{b}}=\inf \Bigg\{t>0:\underset{({\theta _{n}},{\theta _{n}}+t]\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,s,{\eta _{{\theta _{n}}}})]}}(u)N(ds,dx,du,d\gamma )>0\Bigg\},\]
\[ {\theta _{n+1}^{\mathrm{d}}}=\inf \Bigg\{t>0:\sum \limits_{\substack{q\in \mathcal{I}\cup \mathcal{J},\\ {} {x_{q}}\in {\eta _{{\theta _{n}}}}}}{\int _{({\theta _{n}},{\theta _{n}}+t]\times [0,\infty )}}{I_{[0,d({x_{q}},r,{\eta _{{\theta _{n}}}})]}}(v){\gamma _{q}}(dr,dv)>0\Bigg\},\]
we set ${\eta _{{\theta _{n+1}}}}={\eta _{{\theta _{n}}}}\cup \{{z_{n+1}}\}$ if ${\theta _{n+1}^{b}}\le {\theta _{n+1}^{\mathrm{d}}}$, where $\{{z_{n+1}}\}=\{z\in {\mathbb{R}^{\mathrm{d}}}:N(\{{\theta _{n}}+{\theta _{n+1}^{b}}\}\times \{z\}\times {\mathbb{R}_{+}}\times {\Gamma _{2}})>0\}$; ${\eta _{{\theta _{n+1}}}}={\eta _{{\theta _{n}}}}\setminus \{{z_{n+1}}\}$ if ${\theta _{n+1}^{b}}>{\theta _{n+1}^{\mathrm{d}}}$, where $\{{z_{n+1}}\}=\{{x_{q}}\in {\eta _{{\theta _{n}}}}:{\gamma _{q}}(\{{\theta _{n}}+{\theta _{n+1}^{\mathrm{d}}}\}\times {\mathbb{R}_{+}})>0\}$; the configuration ${\eta _{{\theta _{0}}}}={\eta _{0}}$ is the initial condition of (3), ${\eta _{t}}={\eta _{{\theta _{n}}}}$ for $t\in [{\theta _{n}},{\theta _{n+1}})$. Note that
\[ P\{{\theta _{n+1}^{b}}={\theta _{n+1}^{\mathrm{d}}}\mid \min \{{\theta _{n+1}^{b}},{\theta _{n+1}^{\mathrm{d}}}\}<\infty \}=0,\]
the points ${z_{n}}$ are a.s. uniquely determined, and
\[ P\{{z_{n+1}}\in {\eta _{{\theta _{n}}}}\mid {\theta _{n+1}^{b}}\le {\theta _{n+1}^{\mathrm{d}}}\}=0.\]
If for some n
we set ${\theta _{n+k}}=\infty $, $k\in \mathbb{N}$, and ${\eta _{t}}={\eta _{{\theta _{n}}}}$, $t\ge {\theta _{n}}$. The random variables ${\theta _{n}},n\in \mathbb{N}$, are stopping times with respect to the filtration $\{{\mathcal{F}_{t}},t\ge 0\}$. By the strong Markov property of a Poisson point process, see Proposition A.2, we obtain that a.s. on $\{{\theta _{n}}<\infty \}$ the conditional distribution of ${\theta _{n+1}^{b}}$ given ${\mathcal{F}_{{\theta _{n}}}}$ is
\[ P\left\{{\theta _{n+1}^{b}}>p\mid {\mathcal{F}_{{\theta _{n}}}}\right\}=\exp \Bigg\{-{\int _{{\theta _{n}}}^{{\theta _{n}}+p}}ds\underset{{\mathbb{R}^{\mathrm{d}}}}{\int }b(x,s,{\eta _{{\theta _{n}}}})dx\Bigg\},\]
and a.s. on $\{{\theta _{n}}<\infty \}$ the conditional distribution of ${\theta _{n+1}^{\mathrm{d}}}$ given ${\mathcal{F}_{{\theta _{n}}}}$ is
\[ P\left\{{\theta _{n+1}^{\mathrm{d}}}>p\mid {\mathcal{F}_{{\theta _{n}}}}\right\}=\exp \Bigg\{-{\int _{{\theta _{n}}}^{{\theta _{n}}+p}}ds\sum \limits_{x\in {\eta _{{\theta _{n}}}}}d(x,s,{\eta _{{\theta _{n}}}})\Bigg\}.\]
In particular, ${\theta _{n}^{b}},{\theta _{n}^{\mathrm{d}}}>0$, $n\in \mathbb{N}$. We are going to show that a.s.
Denote by ${\theta ^{\prime }_{k}}$ the moment of the k-th birth. It is sufficient to show that ${\theta ^{\prime }_{k}}\to \infty $, $k\to \infty $, because only finitely many deaths may occur between any two births, since there are only finitely many particles. By induction on k we can see that ${\{{\theta ^{\prime }_{k}}\}_{k\in \mathbb{N}}}\subset {\{{\sigma _{i}}\}_{i\in \mathbb{N}}}$, where ${\sigma _{i}}$ are the moments of births of ${({\overline{\eta }_{t}})_{t\ge 0}}$, the solution of (24), and ${\eta _{t}}\subset {\overline{\eta }_{t}}$ for all $t\in [0,{\lim \nolimits_{n}}{\theta _{n}})$. For instance, let us show that ${({\overline{\eta }_{t}})_{t\ge 0}}$ has a birth at ${\theta ^{\prime }_{1}}$. We have ${\overline{\eta }_{{\theta ^{\prime }_{1}}-}}\supset {\overline{\eta }_{0}}={\eta _{0}}$ and ${\eta _{{\theta ^{\prime }_{1}}-}}\subset {\eta _{0}}$, hence for all $x\in {\mathbb{R}^{\mathrm{d}}}$
The latter implies that at time moment ${\theta ^{\prime }_{1}}$ a birth occurs for the process ${({\overline{\eta }_{t}})_{t\ge 0}}$ in the same point. Hence, ${\eta _{{\theta ^{\prime }_{1}}}}\subset {\overline{\eta }_{{\theta ^{\prime }_{1}}}}$, and we can go on. Since ${\sigma _{k}}\to \infty $ as $k\to \infty $, we also have ${\theta ^{\prime }_{k}}\to \infty $, and therefore ${\theta _{n}}\to \infty $, $n\to \infty $.
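The construction of the chain $({\theta _{n}},{\eta _{{\theta _{n}}}})$ above can be sketched in code. The following is a toy time-homogeneous version with an abstract location sampler (the rate functions below are illustrative placeholders, not the rates of the paper); it uses the elementary fact that racing the two exponential clocks ${\theta _{n+1}^{b}}$ and ${\theta _{n+1}^{\mathrm{d}}}$ is equal in law to drawing one exponential time with the total rate and then choosing the event type with probability proportional to the rates.

```python
import random

def simulate_birth_death(eta0, birth, death, t_max, rng):
    """birth(eta) -> (total_birth_rate, location_sampler);
    death(x, eta) -> death rate of particle x in configuration eta.
    Returns the jump chain [(theta_n, eta_{theta_n}), ...]."""
    eta = set(eta0)
    t, path = 0.0, [(0.0, frozenset(eta))]
    while True:
        b_tot, sample_loc = birth(eta)
        d_rates = {x: death(x, eta) for x in eta}
        d_tot = sum(d_rates.values())
        total = b_tot + d_tot
        if total == 0.0:             # no further events: theta_{n+k} = infinity
            break
        t += rng.expovariate(total)  # waiting time to the next event
        if t > t_max:
            break
        if rng.uniform(0.0, total) < b_tot:
            eta.add(sample_loc(rng))               # a birth
        else:                                      # a death, chosen prop. to d
            u, acc = rng.uniform(0.0, d_tot), 0.0
            for x, r in d_rates.items():
                acc += r
                if u <= acc:
                    eta.remove(x)
                    break
        path.append((t, frozenset(eta)))
    return path

# A pure-death example: three particles, unit death rate, no births.
rng = random.Random(7)
path = simulate_birth_death({1, 2, 3}, lambda eta: (0.0, None),
                            lambda x, eta: 1.0, 10 ** 9, rng)
```

In the pure-death example the configuration loses one particle per jump and the simulation stops once the configuration is empty and the total rate vanishes.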
Let us now prove the inequality from item (vi) of Definition 2.3,
Denote the number of births and deaths before t by ${b_{t}}$ and ${d_{t}}$ respectively, i.e.
and
(30)
\[ E{\underset{0}{\overset{t}{\int }}}ds\Big[\underset{{\mathbb{R}^{\mathrm{d}}}}{\int }b(x,s,{\eta _{s-}})dx+\sum \limits_{x\in {\eta _{s-}}}d(x,s,{\eta _{s-}})\Big]<\infty ,\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}t>0.\]Note that $|{\eta _{t}}|={b_{t}}-{d_{t}}+|{\eta _{0}}|$ and ${\theta _{k}}$ are the moments of jumps for ${c_{t}}:={b_{t}}+{d_{t}}$, so that
\[ {c_{t}}=\sum \limits_{k\in \mathbb{N}}I\{{\theta _{k}}\le t\},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}t\ge 0.\]
For $n\in \mathbb{N}$ define
\[ {c_{t}^{(n)}}=\underset{(0,t]\times {\mathbb{R}^{\mathrm{d}}}\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,s,{\eta _{s-}})\wedge n]}}(u)I\{|x|\le n\}N(ds,dx,du,d\gamma )\]
\[ +{\int _{(0,t]\times [0,\infty )}}\sum \limits_{\substack{q\in \mathcal{I}\cup \mathcal{J}\\ {} {x_{q}}\in {\eta _{r-}}}}{I_{[0,d({x_{q}},r,{\eta _{r-}})\wedge n]}}(v)I\{|{x_{q}}|\le n\}{\gamma _{q}}(dr,dv).\]
Then
\[ {M_{t}^{(n)}}={c_{t}^{(n)}}-{\underset{0}{\overset{t}{\int }}}{\int _{x:|x|\le n}}\big(b(x,s,{\eta _{s-}})\wedge n\big)dxds-{\underset{0}{\overset{t}{\int }}}\sum \limits_{x\in {\eta _{s-}},|x|\le n}\big(d(x,s,{\eta _{s-}})\wedge n\big)ds\]
is a martingale with respect to $\{{\mathcal{S}_{t}}\}$, see, e.g., [28, (3.8), Section 3, Chapter 2]. By the optional stopping theorem $E{M_{{\theta _{1}}\wedge t}^{(n)}}=0$, hence
\[\begin{aligned}{}& E{\underset{0}{\overset{{\theta _{1}}\wedge t}{\int }}}\Bigg({\int _{x:|x|\le n}}b(x,s,{\eta _{s-}})\wedge n\hspace{2.5pt}dx+\sum \limits_{x\in {\eta _{s-}},|x|\le n}d(x,s,{\eta _{s-}})\wedge n\Bigg)ds\\ {} & \hspace{1em}=E{c_{t\wedge {\theta _{1}}}^{(n)}}\le P\{{\theta _{1}}<t\}\le 1.\end{aligned}\]
Similarly,
\[ E{\underset{{\theta _{m}}\wedge t}{\overset{{\theta _{m+1}}\wedge t}{\int }}}\Bigg({\int _{x:|x|\le n}}b(x,s,{\eta _{s-}})\wedge n\hspace{2.5pt}dx+\sum \limits_{x\in {\eta _{s-}},|x|\le n}d(x,s,{\eta _{s-}})\wedge n\Bigg)ds\]
\[ =E{c_{t\wedge {\theta _{m+1}}}^{(n)}}-E{c_{t\wedge {\theta _{m}}}^{(n)}}\le P\{{\theta _{m+1}}<t\}.\]
Consequently,
\[ E{\underset{0}{\overset{t}{\int }}}\Bigg({\int _{x:|x|\le n}}b(x,s,{\eta _{s-}})\wedge n\hspace{2.5pt}dx+\sum \limits_{x\in {\eta _{s-}},|x|\le n}d(x,s,{\eta _{s-}})\wedge n\Bigg)ds\]
\[ \le {\sum \limits_{m=1}^{\infty }}E{\underset{{\theta _{m}}\wedge t}{\overset{{\theta _{m+1}}\wedge t}{\int }}}\Bigg({\int _{x:|x|\le n}}b(x,s,{\eta _{s-}})\wedge n\hspace{2.5pt}dx+\sum \limits_{x\in {\eta _{s-}},|x|\le n}d(x,s,{\eta _{s-}})\wedge n\Bigg)ds\]
\[ \le {\sum \limits_{m=1}^{\infty }}P\{{\theta _{m}}\le t\}={\sum \limits_{m=1}^{\infty }}P\{{c_{t}}\ge m\}=E{c_{t}}.\]
Letting $n\to \infty $, we get, by the monotone convergence theorem,
\[ E{\underset{0}{\overset{t}{\int }}}\Bigg({\int _{x\in {\mathbb{R}^{\mathrm{d}}}}}b(x,s,{\eta _{s-}})dx+\sum \limits_{x\in {\eta _{s-}}}d(x,s,{\eta _{s-}})\Bigg)ds\le E{c_{t}}.\]
Only existing particles may disappear, hence the number of deaths ${d_{t}}$ satisfies
Thus,
and (30) follows. It follows from the above construction, (29), and (30) that $({\eta _{t}})$ is a strong solution to (3). Similarly to the proof of Proposition 3.1, we can show by induction on n that equation (3) has a unique solution on $[0,{\theta _{n}}]$. Namely, any two solutions coincide on $[0,{\theta _{n}}]$ a.s. Thus, any solution coincides with $({\eta _{t}})$ a.s. for all $t\in [0,{\theta _{n}}]$. □
Remark 3.3.
Assume that b and d are time-homogeneous. Let ${\eta _{0}}$ be a nonrandom initial condition, ${\eta _{0}}\equiv \alpha $, $\alpha \in {\Gamma _{0}}$. The solution of (3) with ${\eta _{0}}\equiv \alpha $ will be denoted by ${(\eta (\alpha ,t))_{t\ge 0}}$. Let ${P_{\alpha }}$ be the push-forward of P under the mapping
It can be derived from the proof of Proposition 3.2 that, for fixed $\omega \in \Omega $, the unique solution is jointly measurable in $(t,\alpha )$. Thus, the family $\{{P_{\alpha }}\}$ of probability measures on ${D_{{\Gamma _{0}}}}[0,\infty )$ is measurable in α, that is, for any Borel set $\mathcal{D}\subset {D_{{\Gamma _{0}}}}[0,\infty )$ the map ${\Gamma _{0}}\ni \alpha \mapsto {P_{\alpha }}(\mathcal{D})$ is measurable. We will often use formulations related to the probability space $({D_{{\Gamma _{0}}}}[0,\infty ),\mathcal{B}({D_{{\Gamma _{0}}}}[0,\infty )),{P_{\alpha }})$; in this case, coordinate mappings will be denoted by ${\eta _{t}}$,
The processes ${({\eta _{t}})_{t\in [0,\infty )}}$ and ${(\eta (\alpha ,t))_{t\in [0,\infty )}}$ have the same law (under ${P_{\alpha }}$ and P, respectively). As one would expect, the family of measures $\{{P_{\alpha }},\alpha \in {\Gamma _{0}}\}$ is a Markov process, or a Markov family of probability measures; see Proposition 3.6 below. For a measure μ on ${\Gamma _{0}}$, we define
We denote by ${E_{\mu }}$ the expectation under ${P_{\mu }}$.
Remark 3.4.
We solved equation (3) ω-wise. We can deduce from the proof of Proposition 3.2 that ${\theta _{n}}$ and ${z_{n}}$ are measurable functions of ${\eta _{0}}$ and N in the sense that, for example, ${\theta _{1}}={F_{1}}({\eta _{0}},N)$ a.s. for a measurable function ${F_{1}}:{\Gamma _{0}}\times \Gamma ({\mathbb{R}_{+}}\times {\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}\times {\Gamma _{2}})\to {\mathbb{R}_{+}}$. As a consequence, there is a functional dependence between the solution process and the “input”: the process ${({\eta _{t}})_{t\ge 0}}$ is some function of ${\eta _{0}}$ and N.
Corollary 3.5.
Joint uniqueness in law holds for equation (3) with initial distribution ν satisfying
As usual, the Markov property of a solution follows from uniqueness.
Proposition 3.6 (The strong Markov property).
Let b and d be time-homogeneous. The unique solution ${({\eta _{t}})_{t\in [0,\infty )}}$ of (3) is a strong Markov process in the following sense. Let τ be an a.s. finite $({\mathcal{S}_{t}},t\ge 0)$-stopping time such that $E|{\eta _{\tau }}|<\infty $. Then
Furthermore, for any $\mathcal{D}\in \mathcal{B}({D_{{\Gamma _{0}}}}[0,\infty ))$,
that is, given ${\eta _{\tau }}$, $({\eta _{\tau +t}},t\ge 0)$ is conditionally independent of $({\mathcal{S}_{t}},t\ge 0)$.
(34)
\[ P\{({\eta _{\tau +t}},t\ge 0)\in \mathcal{D}\mid {\mathcal{S}_{\tau }}\}=P\{({\eta _{\tau +t}},t\ge 0)\in \mathcal{D}\mid {\eta _{\tau }}\};\]Proof.
For $t\ge 0$,
where ${\hat{\eta }_{\tau }}={\textstyle\sum _{\substack{q\in \mathcal{I}\cup \mathcal{J},\\ {} {x_{q}}\in {\eta _{\tau }}}}}({x_{q}},{\gamma _{q}})$. Here we need the strong Markov property of the driving process as given in Proposition A.2. Note that (35) can be considered as an equation of the type (3) with the unique solution being ${({\eta _{\tau +t}})_{t\in [0,\infty )}}$. From Proposition 3.2, Remark 3.4, and Corollary 3.5 we get (33). The conditional independence (34) follows from Remark 3.4. □
(35)
\[\begin{aligned}{}& {\eta _{\tau +t}}(B)\\ {} & \hspace{0.5em}=\underset{(\tau ,\tau +t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,{\eta _{s-}})]}}(u)I\Bigg\{\underset{\substack{r\in (s,\tau +t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}\\ {} & \hspace{0.5em}\hspace{1em}\times N(ds,dx,du,d\gamma )\\ {} & \hspace{0.5em}\hspace{1em}+\underset{B\times {\Gamma _{2}}}{\int }I\Bigg\{\underset{\substack{r\in (\tau ,\tau +t],\\ {} v\ge 0}}{\int }\hspace{-0.1667em}{I_{[0,d(x,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}{\hat{\eta }_{\tau }}(dx,d\gamma )+{\eta _{\tau }}(B),\hspace{0.5em}t\ge 0.\end{aligned}\]Let ${N_{1}}$ be the image of N under the projection
The process ${N_{1}}$ is a Poisson point process on ${\mathbb{R}_{+}}\times {\mathbb{R}^{\mathrm{d}}}\times {\mathbb{R}_{+}}$ with intensity measure $dsdxdu$.
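Equation (3) uses ${N_{1}}$ through thinning: a point $(s,x,u)$ of ${N_{1}}$ contributes a birth precisely when $u\le b(x,s,{\eta _{s-}})$. A minimal sketch of this thinning on a bounded window, for a time-independent rate bounded by a constant (the window, the bound u_max, and the rate function below are assumptions made purely for illustration):

```python
import random

def poisson_count(lam, rng):
    """Number of points of a unit-rate Poisson process before time lam."""
    n, acc = 0, rng.expovariate(1.0)
    while acc < lam:
        n += 1
        acc += rng.expovariate(1.0)
    return n

def thinned_births(t_max, x_lo, x_hi, u_max, b, rng):
    """Sample the unit-intensity Poisson point process N1 restricted to
    (0, t_max] x [x_lo, x_hi] x [0, u_max] and keep (s, x) iff u <= b(x):
    the kept points form a Poisson process with intensity b(x) ds dx,
    provided b <= u_max on the window."""
    n_pts = poisson_count(t_max * (x_hi - x_lo) * u_max, rng)
    births = []
    for _ in range(n_pts):
        s = rng.uniform(0.0, t_max)
        x = rng.uniform(x_lo, x_hi)
        u = rng.uniform(0.0, u_max)
        if u <= b(x):
            births.append((s, x))
    births.sort()  # order the kept birth events by time
    return births

rng = random.Random(3)
births = thinned_births(5.0, 0.0, 2.0, 3.0, lambda x: 1.0 + x, rng)
```

In the full equation the acceptance bound $b(x,s,{\eta _{s-}})$ depends on the evolving configuration, which is what makes (3) an equation rather than a plain thinning.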
Proof of Proposition 2.10.
We have
\[\begin{aligned}{}{\eta _{t}}(B)=& \underset{(0,t]\times B\times [0,\infty )\times {\Gamma _{2}}}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)I\Bigg\{\underset{\substack{r\in (s,t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}\\ {} & \times \hspace{1em}N(ds,dx,du,d\gamma )\\ {} & +\underset{B\times {\Gamma _{2}}}{\int }I\Bigg\{\underset{\substack{r\in (0,t],\\ {} v\ge 0}}{\int }{I_{[0,d(x,r,{\eta _{r-}})]}}(v)\gamma (dr,dv)=0\Bigg\}{\hat{\eta }_{0}}(dx,d\gamma )\\ {} =& \underset{(0,t]\times B\times [0,\infty )}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u){N_{1}}(ds,dx,du)+{\eta _{0}}(B)\\ {} & -\sum \limits_{q\in \mathcal{I}\cup \mathcal{J}}\underset{(0,t]\times [0,\infty )}{\int }I\{{x_{q}}\in {\eta _{r-}}\}{I_{[0,d({x_{q}},r,{\eta _{r-}})]}}(v){\gamma _{q}}(dr,dv).\end{aligned}\]
Recall that $\eta \cup x$ and $\eta \setminus x$ are shorthands for $\eta \cup \{x\}$ and $\eta \setminus \{x\}$, respectively. By Itô’s formula ([28, Chapter 2, Theorem 5.1]), for $F\in {\mathcal{C}_{b}}$,
\[\begin{aligned}{}F({\eta _{t}})-F({\eta _{0}})=& \sum \limits_{s\le t}(F({\eta _{s}})-F({\eta _{s-}}))\\ {} =& \underset{(0,t]\times \mathbf{B}({\mathbf{o}_{d}},{R_{F}})\times [0,\infty )}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)\big\{F({\eta _{s-}}\cup x)-F({\eta _{s-}})\big\}\\ {} & \times {N_{1}}(ds,dx,du)\\ {} & +\sum \limits_{q\in \mathcal{I}\cup \mathcal{J}}\hspace{0.5em}\underset{(0,t]\times [0,\infty )}{\int }I\{{x_{q}}\in {\eta _{r-}}\}{I_{[0,d({x_{q}},r,{\eta _{r-}})]}}(v)\\ {} & \times \big\{F({\eta _{r-}}\setminus {x_{q}})-F({\eta _{r-}})\big\}{\gamma _{q}}(dr,dv).\end{aligned}\]
We can write
\[\begin{array}{c}\displaystyle \underset{(0,t]\times \mathbf{B}({\mathbf{o}_{d}},{R_{F}})\times [0,\infty )}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)\big\{F({\eta _{s-}}\cup x)-F({\eta _{s-}})\big\}{N_{1}}(ds,dx,du)\\ {} \displaystyle =\underset{(0,t]\times \mathbf{B}({\mathbf{o}_{d}},{R_{F}})}{\int }b(x,s,{\eta _{s-}})\big\{F({\eta _{s-}}\cup x)-F({\eta _{s-}})\big\}dxds\\ {} \displaystyle +\underset{(0,t]\times \mathbf{B}({\mathbf{o}_{d}},{R_{F}})\times [0,\infty )}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)\big\{F({\eta _{s-}}\cup x)-F({\eta _{s-}})\big\}{\tilde{N}_{1}}(ds,dx,du),\end{array}\]
where ${\tilde{N}_{1}}={N_{1}}-dsdxdu$. Since $F\in {\mathcal{C}_{b}}$, the process
\[ \underset{(0,t]\times \mathbf{B}({\mathbf{o}_{d}},{R_{F}})\times [0,\infty )}{\int }{I_{[0,b(x,s,{\eta _{s-}})]}}(u)\big\{F({\eta _{s-}}\cup x)-F({\eta _{s-}})\big\}{\tilde{N}_{1}}(ds,dx,du)\]
is a martingale by item (vi) of Definition 2.3, see, e.g., [28, Section 3 of Chapter 2]. Similarly,
can be decomposed into a sum of
and a martingale. The desired statement follows. □
Proof of Proposition 2.13.
Let ${\tau _{1}}$, ${\tau _{2}}$, ... be consecutive jump moments of the process $({\xi _{t}^{(1)}},{\xi _{t}^{(2)}})$. We will show by induction that each moment of birth for ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$ is a moment of birth for ${({\xi _{t}^{(2)}})_{t\in [0,\infty )}}$, too, and each moment of death for ${({\xi _{t}^{(2)}})_{t\in [0,\infty )}}$ is a moment of death for ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$ if the dying particle is in ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$. Moreover, in both cases the birth or the death occurs at exactly the same place. Here a moment of birth is a random time at which a new particle appears, a moment of death is a random time at which an existing particle disappears from the configuration. The statement formulated here is in fact equivalent to (15).
Here we deal only with the base case, the induction step is done in the same way. We have nothing to show if ${\tau _{1}}$ is a moment of a birth of ${({\xi _{t}^{(2)}})_{t\in [0,\infty )}}$ or a moment of death of ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$. Assume that a new particle is born for ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$ at ${\tau _{1}}$,
The process ${({\xi _{t}^{(1)}})_{t\in [0,\infty )}}$ satisfies (3), therefore a.s. ${N_{1}}(\{{\tau _{1}}\}\times \{{x_{1}}\}\times [0,{b_{1}}({x_{1}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(1)}})])=1$. Since
(36)
\[ {\xi _{{\tau _{1}}-}^{(1)}}={\xi _{0}^{(1)}}\subset {\xi _{0}^{(2)}}={\xi _{{\tau _{1}}-}^{(2)}},\]
by (13) we have ${b_{1}}({x_{1}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(1)}})\le {b_{2}}({x_{1}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(2)}})$, and hence a.s.
\[ {N_{1}}(\{{\tau _{1}}\}\times \{{x_{1}}\}\times [0,{b_{2}}({x_{1}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(2)}})])=1,\]
hence ${\tau _{1}}$ is a moment of birth for ${({\xi _{t}^{(2)}})_{t\in [0,\infty )}}$ as well, with the new particle appearing at ${x_{1}}$. Now let ${\tau _{1}}$ be a moment of death for ${({\xi _{t}^{(2)}})_{t\in [0,\infty )}}$, and let ${\xi _{{\tau _{1}}-}^{(2)}}\setminus {\xi _{{\tau _{1}}}^{(2)}}=\{{x_{q}}\}$ for some $q\in \mathcal{I}\cup \mathcal{J}$ (such a q always exists because of (7), and is unique). If ${x_{q}}\notin {\xi _{{\tau _{1}}-}^{(1)}}$, we have nothing to prove. Hence we also assume ${x_{q}}\in {\xi _{{\tau _{1}}-}^{(1)}}$. We have a.s. ${\gamma _{q}}(\{{\tau _{1}}\}\times [0,{d_{2}}({x_{q}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(2)}})])=1$. By (36) and (14), ${d_{1}}({x_{q}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(1)}})\ge {d_{2}}({x_{q}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(2)}})$, hence
\[ {\gamma _{q}}(\{{\tau _{1}}\}\times [0,{d_{1}}({x_{q}},{\tau _{1}},{\xi _{{\tau _{1}}-}^{(1)}})])=1.\]
It follows that ${\xi _{{\tau _{1}}-}^{(1)}}\setminus {\xi _{{\tau _{1}}}^{(1)}}=\{{x_{q}}\}$. □
4 Aggregation model: proofs
The main idea behind our analysis in this section is to couple the process ${({\eta _{t}})_{t\ge 0}}$ with another birth-and-death process, to which we can apply Lemma A.1.
To do so, let us introduce another pair of birth and death rates, ${b_{1}}$, ${d_{1}}$, and an initial condition ${\xi _{0}}={\eta _{0}}\cap \Lambda $, such that ${b_{1}}(x,\eta )={d_{1}}(x,\eta )=0$ for $x\notin \Lambda $, ${d_{1}}(x,\eta )={a^{-|\eta |}}$ for $x\in \Lambda $, ${b_{1}}(x,\eta )\le b(x,\eta )$ for all x, η, and for some constant $c>0$
\[ \underset{\Lambda }{\int }{b_{1}}(x,\eta )dx=c|\eta \cap \Lambda |,\hspace{1em}\eta \in {\Gamma _{0}}.\]
It follows from (17) that there exists a function ${b_{1}}$ satisfying these assumptions. The functions ${b_{1}}$, ${d_{1}}$ satisfy the conditions of Theorem 2.8. Furthermore, the conditions of Proposition 2.13 are satisfied here: for ${\eta ^{1}},{\eta ^{2}}\in {\Gamma _{0}}$, ${\eta ^{1}}\subset {\eta ^{2}}$ we have
as well as
Denote by ${({\xi _{t}})_{t\ge 0}}$ the unique solution of (3) with the birth and death rates ${b_{1}}$, ${d_{1}}$ and initial condition ${\xi _{0}}$. By Proposition 2.13, ${\xi _{t}}\subset {\eta _{t}}$ holds a.s. for all $t\ge 0$.
In this section we will work on the canonical probability space
\[ \big({D_{{\Gamma _{0}}}}[0,\infty )\times {D_{{\Gamma _{0}}}}[0,\infty ),\mathcal{B}({D_{{\Gamma _{0}}}}[0,\infty )\times {D_{{\Gamma _{0}}}}[0,\infty )),{P_{\alpha }}\big),\]
where ${P_{\alpha }}$ is the push-forward of the measure P under
Consider the embedded Markov chain of the process ${({\xi _{t}})_{t\ge 0}}$, ${Y_{k}}:={\xi _{{\tau _{k}}}}$, where ${\tau _{k}}$ are the moments of jumps of $({\xi _{t}})$. It turns out that the process $u={\{{u_{k}}\}_{k\in \mathbb{N}}}$, where ${u_{k}}:=|{Y_{k}}|$, is a Markov chain, too. Indeed, the equality
\[ {P_{{\alpha _{1}}}}\{|{Y_{1}}|=k\}={P_{{\alpha _{2}}}}\{|{Y_{1}}|=k\},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}k\in \mathbb{N},{\alpha _{1}},{\alpha _{2}}\in {\Gamma _{0}},\]
holds when $|{\alpha _{1}}\cap \Lambda |=|{\alpha _{2}}\cap \Lambda |$, since both sides are equal to
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}\frac{c}{c+{a^{-|{\alpha _{1}}\cap \Lambda |}}}& \hspace{1em}\text{if}\hspace{1em}k=|{\alpha _{1}}\cap \Lambda |+1,\\ {} \frac{{a^{-|{\alpha _{1}}\cap \Lambda |}}}{c+{a^{-|{\alpha _{1}}\cap \Lambda |}}}& \hspace{1em}\text{if}\hspace{1em}k=|{\alpha _{1}}\cap \Lambda |-1,\\ {} 0& \hspace{1em}\text{in other cases.}\end{array}\right.\]
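From state i the chain ${u_{k}}$ thus moves up with probability $c/(c+{a^{-i}})$ and down with probability ${a^{-i}}/(c+{a^{-i}})$. A Monte Carlo sketch estimating the probability of hitting 0 (the values of c, a, the step cap, and the trial count are arbitrary choices made for illustration):

```python
import random

def hits_zero(q, c, a, rng, cap=200):
    """Run the chain u_k from u_0 = q until it hits 0 (extinction) or
    exceeds cap steps (counted as survival, since the downward probability
    a^{-i} / (c + a^{-i}) decays geometrically in the state i)."""
    u = q
    for _ in range(cap):
        if u == 0:
            return True
        p_up = c / (c + a ** (-u))
        u += 1 if rng.random() < p_up else -1
    return u == 0

def extinction_estimate(q, c, a, n_trials, seed=0):
    rng = random.Random(seed)
    return sum(hits_zero(q, c, a, rng) for _ in range(n_trials)) / n_trials

est1 = extinction_estimate(1, 1.0, 2.0, 4000)
est5 = extinction_estimate(5, 1.0, 2.0, 4000)
```

With these parameters the estimates decrease rapidly in the initial number of points q, in line with Proposition 2.14.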
Therefore, Lemma A.1 is applicable here, with $f(\cdot )=|\cdot |$.
Proof of Proposition 2.14.
Having in mind the inclusion ${\xi _{t}}\subset {\eta _{t}}$ (${P_{\alpha }}$-a.s.), we will prove this proposition for $({\xi _{t}})$.
It follows from (9) that the transition probabilities for the Markov chain ${\{{u_{k}}\}_{k\in {\mathbb{Z}_{+}}}}$ are given by
for $i\in \mathbb{N}$, $j\in {\mathbb{Z}_{+}}$, and ${p_{0,j}}={I_{\{j=0\}}}$.
(37)
\[ {p_{i,j}}={P_{\alpha }}\{{u_{k}}=j\mid {u_{k-1}}=i\}=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{c}{c+{a^{-i}}}& \hspace{1em}\text{if}\hspace{1em}j=i+1,\\ {} \frac{{a^{-i}}}{c+{a^{-i}}}& \hspace{1em}\text{if}\hspace{1em}j=i-1,\\ {} 0& \hspace{1em}\text{in other cases,}\end{array}\right.\]Since zero is a trap and is accessible from all other states, there are no recurrent states except zero, and the process u has only two possible types of behavior at infinity:
We will now use properties of countable state space Markov chains, see, e.g., [14, § 12, chapter 1]. Chung considers there a Markov chain with a reflecting barrier at 0, but we may still apply those results, adapting them correspondingly. Denote ${\varrho _{m}}={\textstyle\prod _{k=1}^{m}}\frac{{p_{k,k-1}}}{{p_{k,k+1}}}$. Then the probability ${P_{\alpha }}\{\exists k\in \mathbb{N}\hspace{2.5pt}\text{s.t.}\hspace{2.5pt}{u_{k}}=0\}$ equals 1 if and only if ${\textstyle\sum _{j=1}^{\infty }}{\varrho _{j}}=\infty $, for any initial condition α with $|\alpha \cap \Lambda |>0$. Moreover, if ${\textstyle\sum _{j=1}^{\infty }}{\varrho _{j}}<\infty $ and ${P_{\alpha }}\{{u_{0}}=q\}=1$ (or, equivalently, $|\alpha \cap \Lambda |=q$, $q\in \mathbb{N}$), then ${p_{q}}:={P_{\alpha }}\{\exists k\in \mathbb{N}\hspace{2.5pt}\text{s.t.}\hspace{2.5pt}{u_{k}}=0\}=\frac{{\textstyle\sum _{j=q}^{\infty }}{\varrho _{j}}}{1+{\textstyle\sum _{j=1}^{\infty }}{\varrho _{j}}}$. From (37) we see that in our case ${\varrho _{j}}={c^{-j}}{a^{-\frac{j(j+1)}{2}}}$, and
(38)
\[ {p_{q}}=\frac{{\textstyle\sum \limits_{j=q}^{\infty }}{c^{-j}}{a^{-\frac{j(j+1)}{2}}}}{1+{\textstyle\sum \limits_{j=1}^{\infty }}{c^{-j}}{a^{-\frac{j(j+1)}{2}}}}\le \frac{{\textstyle\sum \limits_{j=q}^{\infty }}{c^{-j}}{a^{-\frac{{j^{2}}}{2}}}}{1+{\textstyle\sum \limits_{j=1}^{\infty }}{c^{-j}}{a^{-\frac{{j^{2}}}{2}}}}.\]
Now, for arbitrary $C>1$ choose $q\in \mathbb{N}$ for which ${c^{-1}}{a^{-\frac{q}{2}}}<{C^{-1}}$. For $j>q$ we have ${c^{-j}}{a^{-\frac{{j^{2}}}{2}}}<{c^{-j}}{a^{-\frac{jq}{2}}}={({c^{-1}}{a^{-\frac{q}{2}}})^{j}}<{C^{-j}}$, and
\[ {\sum \limits_{j=q}^{\infty }}{c^{-j}}{a^{-\frac{{j^{2}}}{2}}}<{\sum \limits_{j=q}^{\infty }}{C^{-j}}=\frac{{C^{-q}}}{1-{C^{-1}}},\]
so that the statement of the proposition for ${({\xi _{t}})_{t\ge 0}}$ follows from (38). □

Note that for $({\eta _{t}})$ the events that the number of particles goes to infinity and that extinction occurs are not mutually exclusive, in particular when ${\textstyle\int _{\Lambda }}b(x,\varnothing )dx>0$. However, it holds that
and
The following equality is also taken from [14, § 12, chapter 1]; for $q>s$, $q,s\in \mathbb{N}$, and all β with $|\beta \cap \Lambda |=q$,
\[ {P_{\beta }}\{\exists k\in \mathbb{N}:|{u_{k}}|=s\}=\frac{{\textstyle\sum \limits_{j=q}^{\infty }}{\varrho _{j}}(s)}{1+{\textstyle\sum \limits_{j=s+1}^{\infty }}{\varrho _{j}}(s)},\]
where ${\varrho _{m}}(s)={\textstyle\prod _{k=s+1}^{m}}\frac{{p_{k,k-1}}}{{p_{k,k+1}}}={c^{-(m-s)}}{a^{-\frac{1}{2}(m-s)(m+s+1)}}$; in our case
Note that
Proof of Proposition 2.15.
Let ${({X_{k}})_{k\in {\mathbb{Z}_{+}}}}$ be the embedded chain of ${({\eta _{t}})_{t\ge 0}}$. First we will show that for all $m\in \mathbb{N}$ and $\alpha \in {\Gamma _{0}}$,
Let $\beta \in {\Gamma _{0}}$, $|\beta \cap \Lambda |=m$, $m\in \mathbb{N}$ (the case of $m=0$ is similar, and we do not write it down). Denote $\tilde{k}=\min \{k\in \mathbb{N}:{X_{k}}\cap \Lambda \ne {X_{0}}\cap \Lambda \}$. Since ${\xi _{t}}\subset {\eta _{t}}$ holds ${P_{\beta }}$-a.s.,
By (41), the probability ${P_{\beta }}\{{u_{k}}>m,\forall k\ge 1\}$ is positive and does not depend on β, $|\beta \cap \Lambda |=m$:
Define ${k_{i}^{m}}$, $i\in \mathbb{N}$, recursively by ${k_{j+1}^{m}}=\min \{k>{k_{j}^{m}}:|{X_{k}}\cap \Lambda |=m\hspace{2.5pt}\text{and}\hspace{2.5pt}\exists \bar{k}<k:|{X_{\bar{k}}}\cap \Lambda |\ne m\}$, ${k_{0}^{m}}=0$. Note that for all β
By the strong Markov property,
(46)
\[ \begin{array}{c}\displaystyle {P_{\alpha }}\bigg\{|{X_{k}}\cap \Lambda |=m\hspace{2.5pt}\text{infinitely often}\hspace{2.5pt}\bigg\}\le {P_{\alpha }}\bigg\{{k_{j}^{m}}<\infty ,\forall j\in \mathbb{N}\bigg\}\\ {} \displaystyle ={\prod \limits_{j=1}^{\infty }}{P_{\alpha }}\big\{{k_{j+1}^{m}}<\infty \mid {k_{j}^{m}}<\infty \big\}=0,\end{array}\]
by (44) and (45). Indeed, if ${P_{\alpha }}\{{k_{j}^{m}}<\infty \}>0$, then
\[ {P_{\alpha }}\{{k_{j+1}^{m}}<\infty \mid {k_{j}^{m}}<\infty \}=\frac{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}{P_{{X_{{k_{j}^{m}}}}}}\{{k_{1}^{m}}<\infty \}}{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}}\]
\[ \le \frac{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}\big(1-{P_{{X_{{k_{j}^{m}}}}}}\{|{X_{k}}\cap \Lambda |>m,\forall k\ge \tilde{k}\}\big)}{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}}\]
\[ \le \frac{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}\big(1-{P_{{X_{{k_{j}^{m}}}}}}\{{u_{k}}>m,\forall k\ge 1\}\big)}{{E_{\alpha }}{I_{\{{k_{j}^{m}}<\infty \}}}}=1-{s_{m}}.\]
Note that $1-{s_{m}}<1$ does not depend on j, hence (46) follows. Having proved (43), we observe that
(47)
\[ \begin{aligned}{}\big\{|{\eta _{t}}\cap \Lambda |\to \infty \big\}\cup \big\{\exists & {t^{\prime }}:\forall t\ge {t^{\prime }},{\eta _{t}}\cap \Lambda =\varnothing \big\}\\ {} =\bigg({\bigcup \limits_{m=1}^{\infty }}\{|{X_{k}}\cap \Lambda |=& m\hspace{2.5pt}\text{infinitely often}\}{\bigg)^{c}}.\end{aligned}\]
Note that if for some element $\omega \in \Omega $ of the probability space the process ${({\eta _{t}})_{t\ge 0}}$ is stuck in a trap γ, $\gamma \cap \Lambda =\varnothing $, then ω belongs to the set on the left-hand side of (47) and does not belong to the set $\big\{|{X_{k}}\cap \Lambda |=m\hspace{2.5pt}\text{infinitely often}\big\}$, $m\in \mathbb{N}$. □

Proof of Proposition 2.16.
Define ${\widetilde{\eta }_{t}}:={\eta _{t}}\cap \Lambda $ and let ${\widetilde{X}_{k}}={\widetilde{\eta }_{{\varsigma _{k}}}}$, where $({\varsigma _{k}})$ is the ordered sequence of jump times of ${({\widetilde{\eta }_{t}})_{t\ge 0}}$. Of course, the process ${\{{\widetilde{\eta }_{t}}\}_{t\ge 0}}$ is not Markov in general, and neither is ${\{{\widetilde{X}_{k}}\}_{k\in \mathbb{N}}}$. However, for all $\alpha \in {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}})$ the inequality
\[ {P_{\alpha }}\{|{\widetilde{X}_{1}}|-|{\widetilde{X}_{0}}|=1\}\ge {p_{|\alpha \cap \Lambda |,|\alpha \cap \Lambda |+1}}\]
holds, because for every $\zeta \in {\Gamma _{0}}$ with $|\zeta \cap \Lambda |=m$, the integral of the birth rate $b(\cdot ,\zeta )$ over Λ is larger than $cm$, and the cumulative death rate in Λ, ${\textstyle\sum _{x\in \zeta \cap \Lambda }}d(x,\zeta )$, is less than $m{a^{-m}}$.

The probability of the event that no death at all occurs is positive, even when the initial configuration contains only one point inside Λ:
\[ {P_{\alpha }}\bigg\{|{\widetilde{\eta }_{t}}|-|{\widetilde{\eta }_{t-}}|\ge 0\hspace{2.5pt}\text{for all}\hspace{5pt}t\ge 0\bigg\}={P_{\alpha }}\bigg\{|{\widetilde{X}_{k+1}}|-|{\widetilde{X}_{k}}|=1\hspace{2.5pt}\text{for all}\hspace{5pt}k\in \mathbb{N}\bigg\}\]
\[ =\prod \limits_{k\in \mathbb{N}}{P_{\alpha }}\left\{|{\widetilde{X}_{k+1}}|-|{\widetilde{X}_{k}}|=1\hspace{2.5pt}\Big|\hspace{2.5pt}|{\widetilde{X}_{k}}|-|{\widetilde{X}_{k-1}}|=1,\dots ,|{\widetilde{X}_{1}}|-|{\widetilde{X}_{0}}|=1\right\}\]
\[ \ge \prod \limits_{k\in \mathbb{N}}\underset{\substack{\zeta \in {\Gamma _{0}}({\mathbb{R}^{\mathrm{d}}}),\\ {} |\zeta \cap \Lambda |=|\alpha \cap \Lambda |+k}}{\inf }{P_{\zeta }}\{|{\widetilde{X}_{1}}|-|{\widetilde{X}_{0}}|=1\}\]
\[ \ge {\prod \limits_{i=|\alpha |}^{\infty }}{p_{i,i+1}}={\prod \limits_{i=|\alpha |}^{\infty }}\frac{c}{c+{a^{-i}}}={\prod \limits_{i=|\alpha |}^{\infty }}\big(1-\frac{{a^{-i}}}{c+{a^{-i}}}\big)>0,\]
because the series ${\textstyle\sum _{i=|\alpha |}^{\infty }}\frac{{a^{-i}}}{c+{a^{-i}}}$ converges. In particular, ${\textstyle\prod _{i=m}^{\infty }}{p_{i,i+1}}\to 1$ as m goes to ∞. Also,
It is clear that only an a.s. finite number of deaths inside Λ occurs on $\{\exists {t^{\prime }}:\forall t\ge {t^{\prime }},{\eta _{t}}\cap \Lambda =\varnothing \}$. By Proposition 2.15, it remains to show that only an a.s. finite number of deaths inside Λ occurs on $\{|{\eta _{t}}\cap \Lambda |\to \infty \}=\{|{\widetilde{\eta }_{t}}|\to \infty \}$. Let us introduce the stopping times ${\sigma _{n}}=\inf \{s\in \mathbb{R}:|{\widetilde{\eta }_{s}}|\ge n\}$, which are finite on $\{|{\widetilde{\eta }_{t}}|\to \infty \}$. Only a finite number of events (births and deaths) occur up to any finite time ${P_{\beta }}$-a.s. for all $\beta \in {\Gamma _{0}}$, hence for $n\in \mathbb{N}$
\[ {P_{\alpha }}\Big(\{|{\widetilde{\eta }_{t}}|-|{\widetilde{\eta }_{t-}}|\ge 0\hspace{2.5pt}\text{for all but finitely many}\hspace{5pt}t\ge 0\}\cap \{|{\widetilde{\eta }_{t}}|\to \infty \}\Big)\]
\[ \ge {P_{\alpha }}\Big(\{|{\widetilde{\eta }_{t}}|-|{\widetilde{\eta }_{t-}}|\ge 0\hspace{2.5pt}\text{for all}\hspace{5pt}t\ge {\sigma _{n}}\}\cap \{|{\widetilde{\eta }_{t}}|\to \infty \}\Big)\]
\[ ={E_{\alpha }}\Big[{I_{\{|{\widetilde{\eta }_{t}}|\to \infty \}}}{P_{{\eta _{{\sigma _{n}}}}}}\big\{|{\widetilde{\eta }_{t}}|-|{\widetilde{\eta }_{t-}}|\ge 0\hspace{2.5pt}\text{for all}\hspace{5pt}t\ge 0\big\}\Big].\]
From $|{\eta _{{\sigma _{n}}}}|\ge n$ we have by (48)
Therefore,
□

Proposition 2.16 is also applicable to ${({\xi _{t}})_{t\ge 0}}$, since ${b_{1}}$, ${d_{1}}$ satisfy all the conditions imposed on b, d.
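The infinite product ${\textstyle\prod _{i=m}^{\infty }}{p_{i,i+1}}={\textstyle\prod _{i=m}^{\infty }}\frac{c}{c+{a^{-i}}}$ at the heart of the proof of Proposition 2.16 can be checked numerically. The following is a minimal sketch, assuming the hypothetical parameter values $c=0.5$ and $a=2$ (any $c>0$, $a>1$ gives the same qualitative picture); the function name and truncation level are illustrative, not part of the proof.

```python
def survival_product(m, c, a, n_terms=500):
    """Truncated product of p_{i,i+1} = c / (c + a^{-i}) for i = m, ..., m + n_terms - 1.

    Each factor differs from 1 by a^{-i}/(c + a^{-i}), a geometrically small
    and hence summable quantity, so the full product converges to a positive
    limit and approaches 1 as the starting index m grows.
    """
    p = 1.0
    for i in range(m, m + n_terms):
        p *= c / (c + a ** (-i))
    return p


c, a = 0.5, 2.0  # hypothetical values with c > 0 and a > 1
vals = [survival_product(m, c, a) for m in (1, 5, 10, 20)]
```

For these values the truncated products are strictly positive, increase with m, and the $m=20$ product is already above $0.99$, illustrating that ${\textstyle\prod _{i=m}^{\infty }}{p_{i,i+1}}\to 1$ as $m\to \infty $.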
Proof of Theorem 2.17.
First we prove the theorem for ${({\xi _{t}})_{t\ge 0}}$: we prove that for ${P_{\alpha }}$-almost all $\omega \in {F_{1}}:=\{{\lim \nolimits_{t\to \infty }}|{\xi _{t}}\cap \Lambda |=\infty \}$,
Without loss of generality we assume ${u_{0}}=|\alpha \cap \Lambda |>0$. Let $0={\tau _{0}}<{\tau _{1}}<{\tau _{2}}<\cdots \hspace{0.1667em}$ be the moments of jumps of ${({\xi _{t}})_{t\ge 0}}$, so that ${\xi _{{\tau _{k}}}}={Y_{k}}$. We recall that the random variables ${u_{n}}=|{Y_{n}}|$ make up a Markov chain by Lemma A.1. Note that a.s. on ${F_{1}}$, ${u_{n}}>0$ for all $n\in \mathbb{N}$. Denote $\psi (n)=cn+n{a^{-n}}$. Then
By Theorem 12.17 in [29] there exists a sequence ${\{{\gamma _{k}}\}_{k\in \mathbb{N}}}$ of independent unit exponential random variables, independent of Y, such that ${\gamma _{k}}=\psi ({u_{k}})({\tau _{k}}-{\tau _{k-1}})$ a.s. on $\{{\tau _{k}}<\infty \}\supset {F_{1}}$. In particular, ${\{{\gamma _{k}}\}_{k\in \mathbb{N}}}$ is independent of ${\{{u_{k}}\}_{k\in {\mathbb{Z}_{+}}}}$.
From Proposition 2.16 we know that only a finite number of deaths inside Λ occur a.s. In particular, there exists a positive finite random variable $\mathbf{m}$ such that the inequalities
(50)
\[ {u_{0}}+n\ge {u_{n}}\ge {u_{0}}+n-\mathbf{m}(\omega ),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}n\in \mathbb{N},\]
hold a.s. on ${F_{1}}$. A.s. on ${F_{1}}$
Due to Kolmogorov’s two-series theorem, the series ${\textstyle\sum _{k=1}^{\infty }}\frac{{\gamma _{k}}}{{u_{0}}+ck}$ is divergent a.s. (we recall that $E{\gamma _{k}}=D{\gamma _{k}}=1$). Hence ${\tau _{n}}\to \infty $ a.s.
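The two halves of this two-series argument can be illustrated numerically. The sketch below, assuming hypothetical values ${u_{0}}=1$ and $c=0.5$, computes the expectation part ${\textstyle\sum _{k}}\frac{1}{{u_{0}}+ck}$, which diverges logarithmically, and the variance part ${\textstyle\sum _{k}}\frac{1}{{({u_{0}}+ck)^{2}}}$, which stays bounded; since $E{\gamma _{k}}=D{\gamma _{k}}=1$, these are exactly the two series that Kolmogorov's theorem compares.

```python
import math

u0, c = 1.0, 0.5  # hypothetical initial count and birth-rate constant
N = 100_000

# Expectation part: sum of E[gamma_k / (u0 + c*k)] = sum 1/(u0 + c*k).
# This partial sum grows like (1/c) * log(N), i.e. without bound.
mean_part = sum(1.0 / (u0 + c * k) for k in range(1, N + 1))

# Variance part: sum of Var(gamma_k / (u0 + c*k)) = sum 1/(u0 + c*k)^2.
# This partial sum is bounded by sum 1/(c*k)^2 = pi^2 / (6 c^2).
var_part = sum(1.0 / (u0 + c * k) ** 2 for k in range(1, N + 1))
```

The bounded variance part means the centred series ${\textstyle\sum _{k}}\frac{{\gamma _{k}}-1}{{u_{0}}+ck}$ converges a.s., so the randomness only shifts the divergent deterministic drift by an a.s. finite amount, and ${\textstyle\sum _{k}}\frac{{\gamma _{k}}}{{u_{0}}+ck}$ diverges a.s.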
We will show below that a.s. on ${F_{1}}$
(51)
\[ c{\tau _{n}}\le \ln n+c\tilde{\gamma },\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}n\in \mathbb{N},\]
where $\tilde{\gamma }$ is a random variable, finite a.s. on ${F_{1}}$. Using (51), we obtain
\[\begin{aligned}{}{P_{\alpha }}({F_{1}})& \le {P_{\alpha }}\left\{|{\xi _{t}}|\ge \frac{{e^{ct}}}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}},t\ge 0\right\}={P_{\alpha }}\left\{|{\xi _{{\tau _{n}}}}|\ge \frac{{e^{c{\tau _{n+1}}}}}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}},n\in \mathbb{N}\right\}\\ {} & ={P_{\alpha }}\big\{{u_{n}}\ge \frac{1}{\mathbf{m}+1}{e^{c{\tau _{n+1}}-c\tilde{\gamma }}},n\in \mathbb{N}\big\}\\ {} & ={P_{\alpha }}\big\{\ln ({u_{n}})+\ln (\mathbf{m}+1)\ge c{\tau _{n+1}}-c\tilde{\gamma },n\in \mathbb{N}\big\}\le {P_{\alpha }}({F_{1}}).\end{aligned}\]
Therefore, a.s. on ${F_{1}}$, $|{\xi _{t}}|\ge \frac{{e^{ct}}}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}}$ for all $t\ge 0$, and hence (49) holds.

Inequality (51) follows from the convergence a.s. on ${F_{1}}$ of the series
To establish the convergence of (52), we note that
(53)
\[ {\sum \limits_{k=1}^{\infty }}\bigg(\frac{{\gamma _{k}}}{\psi ({u_{k}})}-\frac{{\gamma _{k}}}{c{u_{k}}}\bigg)\]
converges a.s. on ${F_{1}}$ by Kolmogorov's two-series theorem:
\[ -{\sum \limits_{k=1}^{\infty }}\bigg(\frac{{\gamma _{k}}}{\psi ({u_{k}})}-\frac{{\gamma _{k}}}{c{u_{k}}}\bigg)={\sum \limits_{k=1}^{\infty }}{\gamma _{k}}\frac{{u_{k}}{a^{-{u_{k}}}}}{c{u_{k}}\psi ({u_{k}})}\le \frac{1}{{c^{2}}}{\sum \limits_{k=1}^{\infty }}{\gamma _{k}}\frac{{a^{-{u_{k}}}}}{{u_{k}}}\]
\[ =\frac{1}{{c^{2}}}{\sum \limits_{k=1}^{\mathbf{m}}}+\frac{1}{{c^{2}}}{\sum \limits_{k=\mathbf{m}+1}^{\infty }}\le \frac{1}{{c^{2}}}{\sum \limits_{k=1}^{\mathbf{m}}}{\gamma _{k}}\frac{{a^{-{u_{k}}}}}{{u_{k}}}+\frac{1}{{c^{2}}}{\sum \limits_{j=1}^{\infty }}{\gamma _{k}}\frac{{a^{-j}}}{j}<\infty .\]
The series
(54)
\[ {\sum \limits_{k=1}^{\infty }}\bigg(\frac{{\gamma _{k}}}{c{u_{k}}}-\frac{1}{c{u_{k}}}\bigg)={\sum \limits_{k=1}^{\infty }}\frac{{\gamma _{k}}-1}{c{u_{k}}}\]
converges a.s. on ${F_{1}}$, too, by Kolmogorov's theorem, (50), and since $\{{\gamma _{k}}\}$ is independent of $\{{u_{k}}\}$: using conditioning on $\{{u_{k}}\}$ we get
\[\begin{aligned}{}& {P_{\alpha }}\left(\left\{{\sum \limits_{k=1}^{\infty }}\frac{{\gamma _{k}}-1}{c{u_{k}}}\hspace{2.5pt}\text{converges}\hspace{2.5pt}\right\}\cap {F_{1}}\right)\\ {} & \hspace{1em}={E_{\alpha }}{P_{\alpha }}\left[\left\{{\sum \limits_{k=1}^{\infty }}\frac{{\gamma _{k}}-1}{c{u_{k}}}\hspace{2.5pt}\text{converges}\hspace{2.5pt}\right\}\cap {F_{1}}\Bigg|\{{u_{k}}\}\right]\\ {} & \hspace{1em}={E_{\alpha }}{P_{\alpha }}\big[{F_{1}}\big|\{{u_{k}}\}\big]={P_{\alpha }}({F_{1}}).\end{aligned}\]
Finally, by (50)
also converges a.s. on ${F_{1}}$.

We have thus proved that (49) holds a.s. on ${F_{1}}$. To establish the statement of the theorem, note that ${\tilde{\sigma }_{n}}=\inf \{t>0:|{\eta _{t}}|\ge n\}$ is finite on F, and a.s.
\[ \left\{\underset{t\to \infty }{\liminf }\frac{|{\eta _{t}}\cap \Lambda |}{{e^{ct}}}=0,|{\eta _{t}}|\to \infty \right\}\subset \left\{\underset{t\to \infty }{\liminf }\frac{|{\xi _{t}}|}{{e^{ct}}}=0\right\}.\]
It follows from (39) and (40) that
Therefore, by Proposition 2.14 and the strong Markov property
\[ {P_{\alpha }}\left\{\underset{t\to \infty }{\liminf }\frac{|{\eta _{t}}\cap \Lambda |}{{e^{ct}}}=0,|{\eta _{t}}|\to \infty \right\}={E_{\alpha }}{P_{{\eta _{{\tilde{\sigma }_{n}}}}}}\left\{\underset{t\to \infty }{\liminf }\frac{|{\eta _{t}}\cap \Lambda |}{{e^{ct}}}=0,|{\eta _{t}}|\to \infty \right\}\]
\[ \le {E_{\alpha }}{P_{{\eta _{{\tilde{\sigma }_{n}}}}}}\left\{\underset{t\to \infty }{\liminf }\frac{|{\xi _{t}}|}{{e^{ct}}}=0\right\}\le {\tilde{C}^{-n}},\]
where $\tilde{C}$ is the constant from Proposition 2.14. Since n is arbitrary,
□

Proof of Corollary 2.18.
Let us fix a configuration α, $\alpha \cap \Lambda \ne \varnothing $. We saw in the proof of Theorem 2.17 that for ${P_{\alpha }}$-almost all $\omega \in {F_{1}}$ we have
\[ |{\xi _{t}}|\ge \frac{1}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}}{e^{ct}},\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}t\ge 0,\]
where m and $\tilde{\gamma }$ are random variables a.s. finite on ${F_{1}}$. Let ${G_{k}}$ be the set $\{\omega :\frac{1}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}}\ge \frac{1}{k}\}$, $k\in \mathbb{N}$. Then ${\textstyle\bigcup _{k\in \mathbb{N}}}{G_{k}}\supset {F_{1}}$, and, since ${P_{\alpha }}({F_{1}})>0$,
for some $k\in \mathbb{N}$. Hence
□
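The exponential growth behind Theorem 2.17 and Corollary 2.18 admits a simple deterministic caricature: a.s. on ${F_{1}}$ all but finitely many jumps are births, so ${u_{n}}\approx {u_{0}}+n$, and replacing each holding time by its mean $1/\psi ({u_{k}})$ gives ${\tau _{n}}={\textstyle\sum _{k=1}^{n}}1/\psi ({u_{0}}+k)$, for which $c{\tau _{n}}-\ln {u_{n}}$ stays bounded, a deterministic analogue of (51). The sketch below assumes the hypothetical values $c=0.5$, $a=2$, ${u_{0}}=1$.

```python
import math


def psi(n, c, a):
    # Total jump rate in a state with n particles in Lambda: psi(n) = c*n + n*a^{-n}.
    return c * n + n * a ** (-n)


c, a, u0 = 0.5, 2.0, 1  # hypothetical parameters (c > 0, a > 1)

# Deterministic caricature: no deaths (u_n = u0 + n), each exponential holding
# time replaced by its mean 1/psi(u_k).  Record c*tau_n - ln(u_n) along the way.
tau = 0.0
records = []
for k in range(1, 200_001):
    tau += 1.0 / psi(u0 + k, c, a)
    if k in (10_000, 100_000, 200_000):
        records.append(c * tau - math.log(u0 + k))
```

The recorded quantity $c{\tau _{n}}-\ln {u_{n}}$ stabilizes at a finite constant, i.e. ${u_{n}}$ grows like a constant multiple of ${e^{c{\tau _{n}}}}$, in line with the lower bound $|{\xi _{t}}|\ge \frac{{e^{ct}}}{(\mathbf{m}+1){e^{c\tilde{\gamma }}}}$ obtained in the proof of Theorem 2.17.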