Modern Stochastics: Theory and Applications
Stochastic two-species mutualism model with jumps
Volume 7, Issue 1 (2020), pp. 1–15
Olga Borysenko, Oleksandr Borysenko

https://doi.org/10.15559/20-VMSTA150
Pub. online: 3 March 2020. Type: Research Article. Open Access.

Received: 3 December 2019
Revised: 21 February 2020
Accepted: 21 February 2020
Published: 3 March 2020

Abstract

We prove the existence and uniqueness of the global positive solution to the system of stochastic differential equations describing a two-species mutualism model disturbed by white noise and by centered and non-centered Poisson noises. We obtain sufficient conditions for stochastic ultimate boundedness, stochastic permanence, nonpersistence in the mean, strong persistence in the mean and extinction of the solution to the considered system.

1 Introduction

The construction of the mutualism model and its properties are presented in K. Gopalsamy [4]. Mutualism occurs when each of two species provides a benefit to the other. A deterministic two-species mutualism model is described by the system
\[\begin{array}{r}\displaystyle \frac{d{N_{1}}(t)}{dt}={r_{1}}(t){N_{1}}(t)\left[\frac{{K_{1}}(t)+{\alpha _{1}}(t){N_{2}}(t)}{1+{N_{2}}(t)}-{N_{1}}(t)\right],\\ {} \displaystyle \frac{d{N_{2}}(t)}{dt}={r_{2}}(t){N_{2}}(t)\left[\frac{{K_{2}}(t)+{\alpha _{2}}(t){N_{1}}(t)}{1+{N_{1}}(t)}-{N_{2}}(t)\right],\end{array}\]
where ${N_{1}}(t)$ and ${N_{2}}(t)$ denote the population densities of the two species at time t, ${r_{i}}(t)>0$ is the intrinsic growth rate of species ${N_{i}}$, $i=1,2$, and ${\alpha _{i}}(t)>{K_{i}}(t)>0$, $i=1,2$. Here ${K_{i}}(t)$ is the carrying capacity of species ${N_{i}}$ in the absence of the other species, $i=1,2$. In the paper by Hong Qiu, Jingliang Lv and Ke Wang [9] the stochastic mutualism model of the form
(1)
\[\begin{array}{r}\displaystyle dx(t)=x(t)\left[\frac{{a_{1}}(t)+{a_{2}}(t)y(t)}{1+y(t)}-{c_{1}}(t)x(t)\right]+{\sigma _{1}}(t)x(t)d{w_{1}}(t),\\ {} \displaystyle dy(t)=y(t)\left[\frac{{b_{1}}(t)+{b_{2}}(t)x(t)}{1+x(t)}-{c_{2}}(t)y(t)\right]+{\sigma _{2}}(t)y(t)d{w_{2}}(t)\end{array}\]
is considered, where ${a_{i}}(t),{b_{i}}(t),{c_{i}}(t),{\sigma _{i}}(t),i=1,2$, are all positive, continuous and bounded functions on $[0,+\infty )$, and ${w_{1}}(t),{w_{2}}(t)$ are independent Wiener processes. The authors show that the stochastic system (1) has a unique global (no explosion in a finite time) solution for any positive initial value and that this stochastic model is stochastically ultimately bounded. The sufficient conditions for stochastic permanence and persistence in the mean of the solution to the system (1) are obtained.
Population systems may suffer abrupt environmental perturbations, such as epidemics, fires, earthquakes, etc. It is therefore natural to introduce Poisson noise into the population model to describe such discontinuous effects. In the paper by Mei Li, Hongjun Gao and Binjun Wang [5] the authors consider the stochastic mutualism model with the white and centered Poisson noises:
\[\begin{aligned}{}dx(t)& =x({t^{-}})\left[\left({r_{1}}(t)-\frac{{b_{1}}(t)x(t)}{{K_{1}}(t)+y(t)}-{\varepsilon _{1}}(t)x(t)\right)dt\right.\\ {} & +{\alpha _{1}}(t)d{w_{1}}(t)+\left.\underset{\mathbb{Y}}{\int }{\gamma _{1}}(t,z)\tilde{\nu }(dt,dz)\right],\\ {} dy(t)& =y({t^{-}})\left[\left({r_{2}}(t)-\frac{{b_{2}}(t)y(t)}{{K_{2}}(t)+x(t)}-{\varepsilon _{2}}(t)y(t)\right)dt\right.\\ {} & +{\alpha _{2}}(t)d{w_{2}}(t)+\left.\underset{\mathbb{Y}}{\int }{\gamma _{2}}(t,z)\tilde{\nu }(dt,dz)\right],\end{aligned}\]
where $x({t^{-}})$ and $y({t^{-}})$ are the left limits of $x(t)$ and $y(t)$, respectively, ${r_{i}}(t)$, ${b_{i}}(t)$, ${K_{i}}(t)$, ${\alpha _{i}}(t)$, $i=1,2$, are all positive, continuous and bounded functions, $\mathbb{Y}$ is a measurable subset of $(0,+\infty )$, ${w_{i}}(t)$, $i=1,2$, are independent standard one-dimensional Wiener processes, $\tilde{\nu }(t,A)=\nu (t,A)-t\Pi (A)$ is the centered Poisson measure independent of ${w_{i}}(t)$, $i=1,2$, $E[\nu (t,A)]=t\Pi (A)$, $\Pi (\mathbb{Y})<\infty $, and ${\gamma _{i}}(t,z)$, $i=1,2$, are random, measurable, bounded functions, continuous in t. The global existence and uniqueness of the positive solution to this problem are proved. The sufficient conditions of stochastic boundedness, stochastic permanence, persistence in the mean and extinction of the solution are obtained.
In this paper, we consider the stochastic mutualism model with jumps generated by centered and noncentered Poisson measures. So, we take into account not only “small” jumps, corresponding to the centered Poisson measure, but also “large” jumps, corresponding to the noncentered Poisson measure. This model is driven by the system of stochastic differential equations
(2)
\[\begin{array}{r}\displaystyle d{x_{i}}(t)={x_{i}}(t)\left[\frac{{a_{i1}}(t)+{a_{i2}}(t){x_{3-i}}(t)}{1+{x_{3-i}}(t)}-{c_{i}}(t){x_{i}}(t)\right]dt+{\sigma _{i}}(t){x_{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }{\gamma _{i}}(t,z){x_{i}}(t){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }{\delta _{i}}(t,z){x_{i}}(t){\nu _{2}}(dt,dz),\\ {} \displaystyle {x_{i}}(0)={x_{i0}}>0,\hspace{1em}i=1,2,\end{array}\]
where ${w_{i}}(t)$, $i=1,2$, are independent standard one-dimensional Wiener processes, ${\tilde{\nu }_{1}}(t,A)={\nu _{1}}(t,A)-t{\Pi _{1}}(A)$, ${\nu _{i}}(t,A)$, $i=1,2$, are independent Poisson measures, which are independent of ${w_{i}}(t)$, $i=1,2$, $E[{\nu _{i}}(t,A)]=t{\Pi _{i}}(A)$, $i=1,2$, and ${\Pi _{i}}(A)$, $i=1,2$, are finite measures on the Borel sets A in $\mathbb{R}$.
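To make the structure of (2) concrete, the following is a minimal Euler-type simulation sketch in Python; it is only an illustration, not part of the analysis below. All coefficients, jump intensities and jump sizes are hypothetical constants (the model allows bounded, continuous time-dependent coefficients and z-dependent jump sizes), and ${\gamma _{i}}$, ${\delta _{i}}$ are taken constant in z, so the jump integrals reduce to compound Poisson terms.

import numpy as np

# Illustrative Euler-type simulation of system (2); all values below are
# hypothetical constants chosen only for demonstration.
rng = np.random.default_rng(0)

T, dt = 50.0, 1e-3
n_steps = int(T / dt)

a = np.array([[3.0, 4.0],          # a_{11}, a_{12}
              [3.0, 4.0]])         # a_{21}, a_{22}
c = np.array([1.0, 1.0])           # c_i
sigma = np.array([0.2, 0.2])       # sigma_i
lam1, lam2 = 1.0, 0.5              # Pi_1(R), Pi_2(R): finite jump intensities
gamma_jump = 0.1                   # gamma_i: "small" relative jump size (centered measure)
delta_jump = 0.3                   # delta_i: "large" relative jump size (non-centered measure)

x = np.array([0.5, 0.8])           # initial value x_{i0} > 0
path = np.empty((n_steps + 1, 2))
path[0] = x

for k in range(n_steps):
    x_old = x.copy()               # x_i(t^-): use the pre-jump state on the right-hand side
    for i in range(2):
        j = 1 - i                  # index of the other species, x_{3-i}
        drift = x_old[i] * ((a[i, 0] + a[i, 1] * x_old[j]) / (1.0 + x_old[j])
                            - c[i] * x_old[i])
        diff = sigma[i] * x_old[i] * rng.normal(0.0, np.sqrt(dt))
        # centered Poisson term: jumps minus the compensator lam1 * gamma * dt
        jump1 = gamma_jump * x_old[i] * (rng.poisson(lam1 * dt) - lam1 * dt)
        # non-centered Poisson term: plain (uncompensated) jumps
        jump2 = delta_jump * x_old[i] * rng.poisson(lam2 * dt)
        # the max(...) is only a numerical safeguard against discretization error;
        # the exact solution is positive by Theorem 1 below
        x[i] = max(x_old[i] + drift * dt + diff + jump1 + jump2, 1e-12)
    path[k + 1] = x

print("final state x(T):", path[-1])
print("path stayed positive:", bool((path > 0).all()))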
To the best of our knowledge, there are no papers devoted to the dynamical properties of the stochastic mutualism model (2), even in the case of the centered Poisson noise. It is worth noting that the impact of the centered and noncentered Poisson noises on the stochastic nonautonomous logistic model is studied in the papers by O.D. Borysenko and D.O. Borysenko [1, 2].
In the following we will use the notations $X(t)=({x_{1}}(t),{x_{2}}(t))$, ${X_{0}}=({x_{10}},{x_{20}})$, $|X(t)|=\sqrt{{x_{1}^{2}}(t)+{x_{2}^{2}}(t)}$, ${\mathbb{R}_{+}^{2}}=\{X\in {\mathbb{R}^{2}}:\hspace{2.5pt}{x_{1}}>0,{x_{2}}>0\}$,
\[ {\beta _{i}}(t)={\sigma _{i}^{2}}(t)/2+\underset{\mathbb{R}}{\int }[{\gamma _{i}}(t,z)-\ln (1+{\gamma _{i}}(t,z))]{\Pi _{1}}(dz)-\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\Pi _{2}}(dz),\]
$i=1,2$. For the bounded, continuous functions ${f_{i}}(t),t\in [0,+\infty )$, $i=1,2$, let us denote
\[\begin{array}{r}\displaystyle {f_{i\sup }}=\underset{t\ge 0}{\sup }{f_{i}}(t),\hspace{2em}{f_{i\inf }}=\underset{t\ge 0}{\inf }{f_{i}}(t),\hspace{1em}i=1,2,\\ {} \displaystyle {f_{\max }}=\max \{{f_{1\sup }},{f_{2\sup }}\},\hspace{2em}{f_{\min }}=\min \{{f_{1\inf }},{f_{2\inf }}\}.\end{array}\]
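As a worked numerical instance (with constant, purely illustrative values ${\sigma _{i}}\equiv 0.2$, ${\gamma _{i}}\equiv 0.1$, ${\delta _{i}}\equiv 0.3$, ${\Pi _{1}}(\mathbb{R})=1$, ${\Pi _{2}}(\mathbb{R})=0.5$),
\[ {\beta _{i}}=\frac{{(0.2)^{2}}}{2}+(0.1-\ln 1.1)\cdot 1-\ln (1.3)\cdot 0.5\approx 0.020+0.005-0.131\approx -0.106.\]
A negative ${\beta _{i}}$ of this kind only makes the condition ${a_{i\min }}(t)-{\beta _{i}}(t)>0$ used in Theorem 3 below easier to satisfy.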
We will prove that system (2) has a unique, positive, global (no explosion in a finite time) solution for any positive initial value, and that this solution is stochastically ultimately bounded. The sufficient conditions for stochastic permanence, nonpersistence in the mean, strong persistence in the mean and extinction of the solution are derived.
The rest of this paper is organized as follows. In Section 2, we prove the existence of the unique global positive solution to the system (2). In Section 3, we prove the stochastic ultimate boundedness of the solution to the system (2). In Section 4, we obtain conditions under which the solution to the system (2) is stochastically permanent, and in Section 5 the sufficient conditions for nonpersistence in the mean, strong persistence in the mean and extinction of the solution are obtained.

2 Existence of the global solution

Let $(\Omega ,\mathcal{F},\mathrm{P})$ be a probability space, let ${w_{i}}(t)$, $i=1,2$, $t\ge 0$, be independent standard one-dimensional Wiener processes on $(\Omega ,\mathcal{F},\mathrm{P})$, and let ${\nu _{i}}(t,A)$, $i=1,2$, be independent Poisson measures defined on $(\Omega ,\mathcal{F},\mathrm{P})$ and independent of ${w_{i}}(t)$, $i=1,2$. Here $\mathrm{E}[{\nu _{i}}(t,A)]=t{\Pi _{i}}(A)$, $i=1,2$, ${\tilde{\nu }_{i}}(t,A)={\nu _{i}}(t,A)-t{\Pi _{i}}(A)$, $i=1,2$, and ${\Pi _{i}}(\cdot )$, $i=1,2$, are finite measures on the Borel sets in $\mathbb{R}$. On the probability space $(\Omega ,\mathcal{F},\mathrm{P})$ we consider an increasing, right-continuous family of complete sub-σ-algebras ${\{{\mathcal{F}_{t}}\}_{t\ge 0}}$, where ${\mathcal{F}_{t}}=\sigma \{{w_{i}}(s),{\nu _{i}}(s,A),s\le t,i=1,2\}$.
We need the following assumption.
Assumption 1.
It is assumed that ${a_{ij}}(t),i,j=1,2$, ${c_{i}}(t),{\sigma _{i}}(t),i=1,2$, are bounded, continuous in t functions, ${a_{ij}}(t)>0$, $i,j=1,2$, ${c_{\min }}>0$, ${\gamma _{i}}(t,z),{\delta _{i}}(t,z)$, $i=1,2$, are continuous in t functions and $\ln (1+{\gamma _{i}}(t,z)),\ln (1+{\delta _{i}}(t,z))$, $i=1,2$, are bounded, ${\Pi _{i}}(\mathbb{R})<\infty $, $i=1,2$.
Theorem 1.
Let Assumption 1 be fulfilled. Then there exists a unique global solution $X(t)$ to the system (2) for any initial value $X(0)={X_{0}}>0$, and $\mathrm{P}\{X(t)\in {\mathbb{R}_{+}^{2}}\}=1$.
Proof.
Let us consider the system of stochastic differential equations
(3)
\[\begin{array}{r}\displaystyle d{v_{i}}(t)=\left[\frac{{a_{i1}}(t)+{a_{i2}}(t)\exp \{{v_{3-i}}(t)\}}{1+\exp \{{v_{3-i}}(t)\}}-{c_{i}}(t)\exp \{{v_{i}}(t)\}-{\beta _{i}}(t)\right]dt\\ {} \displaystyle +{\sigma _{i}}(t)d{w_{i}}(t)+\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(t,z)){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\tilde{\nu }_{2}}(dt,dz),\\ {} \displaystyle {v_{i}}(0)=\ln {x_{i0}},\hspace{1em}i=1,2.\end{array}\]
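For orientation, (3) is exactly the equation satisfied by ${v_{i}}(t)=\ln {x_{i}}(t)$: applying the Itô formula for semimartingales with jumps to $\ln {x_{i}}(t)$ in (2) gives, in differential form (compare the integrated version (17) below),
\[\begin{array}{r}\displaystyle d\ln {x_{i}}(t)=\left[\frac{{a_{i1}}(t)+{a_{i2}}(t){x_{3-i}}(t)}{1+{x_{3-i}}(t)}-{c_{i}}(t){x_{i}}(t)-{\beta _{i}}(t)\right]dt+{\sigma _{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(t,z)){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\tilde{\nu }_{2}}(dt,dz),\end{array}\]
where ${\beta _{i}}(t)$ collects the Itô correction ${\sigma _{i}^{2}}(t)/2$, the term $\underset{\mathbb{R}}{\int }[{\gamma _{i}}-\ln (1+{\gamma _{i}})]{\Pi _{1}}(dz)$ coming from the compensated measure ${\tilde{\nu }_{1}}$, and the term $-\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}){\Pi _{2}}(dz)$ arising when the non-centered measure ${\nu _{2}}$ is rewritten through ${\tilde{\nu }_{2}}$.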
The coefficients of equation (3) are locally Lipschitz continuous. Therefore, for any initial value $({v_{1}}(0),{v_{2}}(0))$ there exists a unique local solution $V(t)=({v_{1}}(t),{v_{2}}(t))$ on $[0,{\tau _{e}})$, where ${\tau _{e}}$ is the explosion time, i.e. ${\sup _{t<{\tau _{e}}}}|V(t)|=+\infty $ (cf. Theorem 6, p. 246, [3]). By the Itô formula, the process $X(t)=(\exp \{{v_{1}}(t)\},\exp \{{v_{2}}(t)\})$ is then the unique positive local solution to the system (2). To show that this solution is global, we need to show that ${\tau _{e}}=+\infty $ a.s. Let ${n_{0}}\in \mathbb{N}$ be so large that ${x_{i0}}\in [1/{n_{0}},{n_{0}}]$, $i=1,2$. For any $n\ge {n_{0}}$ we define the stopping time
\[ {\tau _{n}}=\inf \left\{t\in [0,{\tau _{e}}):\hspace{2.5pt}X(t)\notin \left(\frac{1}{n},n\right)\times \left(\frac{1}{n},n\right)\right\}.\]
It is clear that ${\tau _{n}}$ is increasing in n. Set ${\tau _{\infty }}={\lim \nolimits_{n\to \infty }}{\tau _{n}}$, whence ${\tau _{\infty }}\le {\tau _{e}}$ a.s. If we prove that ${\tau _{\infty }}=\infty $ a.s., then ${\tau _{e}}=\infty $ a.s. and $X(t)\in {\mathbb{R}_{+}^{2}}$ a.s. for all $t\in [0,+\infty )$. So we need to show that ${\tau _{\infty }}=\infty $ a.s. If this statement is false, there are constants $T>0$ and $\varepsilon \in (0,1)$ such that $\mathrm{P}\{{\tau _{\infty }}<T\}>\varepsilon $. Hence, there is ${n_{1}}\ge {n_{0}}$ such that
(4)
\[ \mathrm{P}\{{\tau _{n}}<T\}>\varepsilon ,\hspace{1em}\forall n\ge {n_{1}}.\]
For the nonnegative function $V(X)={\textstyle\sum \limits_{i=1}^{2}}({x_{i}}-1-\ln {x_{i}})$, ${x_{i}}>0$, $i=1,2$, by the Itô formula we have
(5)
\[\begin{array}{r}\displaystyle V(X(T\wedge {\tau _{n}}))=V({X_{0}})+{\sum \limits_{i=1}^{2}}\left\{{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\left[({x_{i}}(t)-1)\left(\frac{{a_{i1}}(t)+{a_{i2}}(t){x_{3-i}}(t)}{1+{x_{3-i}}(t)}\right.\right.\right.\\ {} \displaystyle \left.\left.-{c_{i}}(t){x_{i}}(t)\right)+{\beta _{i}}(t)+\underset{\mathbb{R}}{\int }{\delta _{i}}(t,z){x_{i}}(t){\Pi _{2}}(dz)\right]dt\\ {} \displaystyle +{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}({x_{i}}(t)-1){\sigma _{i}}(t)d{w_{i}}(t)\hspace{-0.1667em}+\hspace{-0.1667em}{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}[{\gamma _{i}}(t,z){x_{i}}(t)-\ln (1+{\gamma _{i}}(t,z))]{\tilde{\nu }_{1}}(dt,dz)\\ {} \displaystyle \left.+{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }[{\delta _{i}}(t,z){x_{i}}(t)-\ln (1+{\delta _{i}}(t,z))]{\tilde{\nu }_{2}}(dt,dz)\right\}.\end{array}\]
Under the conditions of the theorem there is a constant $K>0$ such that
(6)
\[\begin{array}{r}\displaystyle {\sum \limits_{i=1}^{2}}\left[({x_{i}}-1)\left(\frac{{a_{i1}}(t)+{a_{i2}}(t){x_{3-i}}}{1+{x_{3-i}}}-{c_{i}}(t){x_{i}}\right)+{\beta _{i}}(t)\right.\\ {} \displaystyle \left.+\underset{\mathbb{R}}{\int }{\delta _{i}}(t,z){x_{i}}{\Pi _{2}}(dz)\right]\le {\sum \limits_{i=1}^{2}}\left[({a_{\max }}+{c_{\max }}){x_{i}}-{c_{\min }}{x_{i}^{2}}+{\beta _{\max }}\right.\\ {} \displaystyle \left.+{x_{i}}{\delta _{\max }}{\Pi _{2}}(\mathbb{R})\right]\le K,\end{array}\]
where ${a_{\max }}={\max _{i,j=1,2}}\{{\sup _{t\ge 0}}{a_{ij}}(t)\}$. From (5) and (6) we have
\[\begin{array}{r}\displaystyle V(X(T\wedge {\tau _{n}}))\le V({X_{0}})+K(T\wedge {\tau _{n}})+{\sum \limits_{i=1}^{2}}\left\{{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}({x_{i}}(t)-1){\sigma _{i}}(t)d{w_{i}}(t)\right.\\ {} \displaystyle +\left.{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }[{\gamma _{i}}(t,z){x_{i}}(t)-\ln (1+{\gamma _{i}}(t,z))]{\tilde{\nu }_{1}}(dt,dz)\right.\\ {} \displaystyle +\left.{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }[{\delta _{i}}(t,z){x_{i}}(t)-\ln (1+{\delta _{i}}(t,z))]{\tilde{\nu }_{2}}(dt,dz)\right\}.\end{array}\]
Whence, taking expectations yields
(7)
\[ \mathrm{E}\left[V(X(T\wedge {\tau _{n}}))\right]\le V({X_{0}})+KT.\]
Set ${\Omega _{n}}=\{{\tau _{n}}\le T\}$ for $n\ge {n_{1}}$. Then by (4), $\mathrm{P}({\Omega _{n}})=\mathrm{P}\{{\tau _{n}}\le T\}>\varepsilon $, $\forall n\ge {n_{1}}$. Note that for every $\omega \in {\Omega _{n}}$ there is some i such that ${x_{i}}({\tau _{n}},\omega )$ equals either n or $1/n$. So
\[ V(X({\tau _{n}}))\ge \min \{n-1-\ln n,\frac{1}{n}-1+\ln n\}.\]
It then follows from (7) that
\[ V({X_{0}})+KT\ge \mathrm{E}[{\mathbf{1}_{{\Omega _{n}}}}V(X({\tau _{n}}))]\ge \varepsilon \min \{n-1-\ln n,\frac{1}{n}-1+\ln n\},\]
where ${\mathbf{1}_{{\Omega _{n}}}}$ is the indicator function of ${\Omega _{n}}$. Letting $n\to \infty $ leads to the contradiction $\infty >V({X_{0}})+KT=\infty $. This completes the proof of the theorem.  □

3 Stochastic ultimate boundedness

Definition 1.
([6]) The solution $X(t)$ to the system (2) is said to be stochastically ultimately bounded, if for any $\varepsilon \in (0,1)$ there is a positive constant $\chi =\chi (\varepsilon )>0$ such that for any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ this solution has the property
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{|X(t)|>\chi \}<\varepsilon .\]
Theorem 2.
Under Assumption 1 the solution $X(t)$ to the system (2) is stochastically ultimately bounded for any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$.
Proof.
Let ${\tau _{n}}$ be the stopping time defined in Theorem 1. Applying the Itô formula to the process $V(t,{x_{i}}(t))={e^{t}}{x_{i}^{p}}(t)$, $i=1,2$, $p>0$, we obtain for $i=1,2$ that
(8)
\[\begin{array}{r}\displaystyle V(t\wedge {\tau _{n}},{x_{i}}(t\wedge {\tau _{n}}))={x_{i0}^{p}}+{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}{e^{s}}{x_{i}^{p}}(s)\left\{1+p\left[\frac{{a_{i1}}(s)+{a_{i2}}(s){x_{3-i}}(s)}{1+{x_{3-i}}(s)}\right.\right.\\ {} \displaystyle \left.\left.-{c_{i}}(s){x_{i}}(s)\right]\hspace{-0.1667em}+\hspace{-0.1667em}\frac{p(p-1){\sigma _{i}^{2}}(s)}{2}+\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\gamma _{i}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\hspace{-0.1667em}-\hspace{-0.1667em}p{\gamma _{i}}(s,z)\right]{\Pi _{1}}(dz)\hspace{-0.1667em}\right.\\ {} \displaystyle \left.+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\delta _{i}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\right]{\Pi _{2}}(dz)\right\}ds+{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}p{e^{s}}{x_{i}^{p}}(s){\sigma _{i}}(s)d{w_{i}}(s)\\ {} \displaystyle +{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}{x_{i}^{p}}(s)\left[{(1+{\gamma _{i}}(s,z))^{p}}-1\right]{\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}{x_{i}^{p}}(s)\left[{(1+{\delta _{i}}(s,z))^{p}}-1\right]{\tilde{\nu }_{2}}(ds,dz).\end{array}\]
Under Assumption 1 there is a constant ${K_{i}}(p)>0$ such that
(9)
\[\begin{array}{r}\displaystyle {e^{s}}{x_{i}^{p}}\left\{1+p\left[\frac{{a_{i1}}(s)+{a_{i2}}(s){x_{3-i}}}{1+{x_{3-i}}}-{c_{i}}(s){x_{i}}\right]\hspace{-0.1667em}+\hspace{-0.1667em}\frac{p(p-1){\sigma _{i}^{2}}(s)}{2}+\right.\\ {} \displaystyle \left.+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\gamma _{i}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\hspace{-0.1667em}-\hspace{-0.1667em}p{\gamma _{i}}(s,z)\right]{\Pi _{1}}(dz)+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\delta _{i}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\right]{\Pi _{2}}(dz)\right\}\\ {} \displaystyle \le {K_{i}}(p){e^{s}}.\end{array}\]
From (8) and (9), taking expectations, we obtain
\[ \mathrm{E}[V(t\wedge {\tau _{n}},{x_{i}}(t\wedge {\tau _{n}}))]\le {x_{i0}^{p}}+{K_{i}}(p)\mathrm{E}[{e^{t\wedge {\tau _{n}}}}]\le {x_{i0}^{p}}+{K_{i}}(p){e^{t}}.\]
Letting $n\to \infty $ leads to the estimate
(10)
\[ {e^{t}}\mathrm{E}[{x_{i}^{p}}(t)]\le {x_{i0}^{p}}+{e^{t}}{K_{i}}(p).\]
So we have for $i=1,2$ that
(11)
\[ \underset{t\to \infty }{\limsup }\mathrm{E}[{x_{i}^{p}}(t)]\le {K_{i}}(p).\]
For $X=({x_{1}},{x_{2}})\in {\mathbb{R}_{+}^{2}}$ we have $|X{|^{p}}\le {2^{p/2}}({x_{1}^{p}}+{x_{2}^{p}})$; therefore, from (11), ${\limsup _{t\to \infty }}\mathrm{E}[|X(t){|^{p}}]\le L(p)={2^{p/2}}({K_{1}}(p)+{K_{2}}(p))$. For any $\varepsilon \in (0,1)$ and $p>0$ let $\chi >{(L(p)/\varepsilon )^{1/p}}$. Then the Chebyshev inequality yields
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{|X(t)|>\chi \}\le \frac{1}{{\chi ^{p}}}\underset{t\to \infty }{\limsup }\mathrm{E}[|X(t){|^{p}}]\le \frac{L(p)}{{\chi ^{p}}}<\varepsilon .\]
 □

4 Stochastic permanence

The property of stochastic permanence is important since it means long-time survival of the populations.
Definition 2.
([5]) The solution $X(t)$ to the system (2) is said to be stochastically permanent if for any $\varepsilon >0$ there are positive constants $H=H(\varepsilon )$, $h=h(\varepsilon )$ such that
\[ \underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{i}}(t)\le H\}\ge 1-\varepsilon ,\hspace{2em}\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{i}}(t)\ge h\}\ge 1-\varepsilon ,\]
for $i=1,2$ and for any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$.
Theorem 3.
Let Assumption 1 be fulfilled. If
\[ \underset{i=1,2}{\min }\underset{t\ge 0}{\inf }({a_{i\min }}(t)-{\beta _{i}}(t))>0,\hspace{1em}\textit{where}\hspace{2.5pt}{a_{i\min }}(t)=\underset{j=1,2}{\min }{a_{ij}}(t),\hspace{1em}i=1,2,\]
then the solution $X(t)$ to the system (2) with the initial condition ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ is stochastically permanent.
Proof.
For the process ${U_{i}}(t)=1/{x_{i}}(t)$, $i=1,2$, by the Itô formula we have
\[\begin{array}{r}\displaystyle {U_{i}}(t)={U_{i}}(0)+{\underset{0}{\overset{t}{\int }}}{U_{i}}(s)\left[-\frac{{a_{i1}}(s)+{a_{i2}}(s){x_{3-i}}(s)}{1+{x_{3-i}}(s)}+{c_{i}}(s){x_{i}}(s)+{\sigma _{i}^{2}}(s)\right.\\ {} \displaystyle \left.+\underset{\mathbb{R}}{\int }\frac{{\gamma _{i}^{2}}(s,z)}{1+{\gamma _{i}}(s,z)}{\Pi _{1}}(dz)\right]ds-{\underset{0}{\overset{t}{\int }}}{U_{i}}(s){\sigma _{i}}(s)d{w_{i}}(s)\\ {} \displaystyle -{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{U_{i}}(s)\frac{{\gamma _{i}}(s,z)}{1+{\gamma _{i}}(s,z)}{\tilde{\nu }_{1}}(ds,dz)-{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{U_{i}}(s)\frac{{\delta _{i}}(s,z)}{1+{\delta _{i}}(s,z)}{\nu _{2}}(ds,dz).\end{array}\]
Then by the Itô formula we derive for $0<\theta <1$:
(12)
\[\begin{array}{r}\displaystyle {(1+{U_{i}}(t))^{\theta }}={(1+{U_{i}}(0))^{\theta }}+{\underset{0}{\overset{t}{\int }}}\theta {(1+{U_{i}}(s))^{\theta -2}}\left\{(1+{U_{i}}(s)){U_{i}}(s)\right.\\ {} \displaystyle \times \left[-\frac{{a_{i1}}(s)+{a_{i2}}(s){x_{3-i}}(s)}{1+{x_{3-i}}(s)}+{c_{i}}(s){x_{i}}(s)+{\sigma _{i}^{2}}(s)\phantom{\underset{\mathbb{R}}{\int }}\right.\\ {} \displaystyle \left.+\underset{\mathbb{R}}{\int }\frac{{\gamma _{i}^{2}}(s,z)}{1+{\gamma _{i}}(s,z)}{\Pi _{1}}(dz)\right]+\frac{\theta -1}{2}{U_{i}^{2}}(s){\sigma _{i}^{2}}(s)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }\left[{(1+{U_{i}}(s))^{2}}\left({\left(\frac{1+{U_{i}}(s)+{\gamma _{i}}(s,z)}{(1+{\gamma _{i}}(s,z))(1+{U_{i}}(s))}\right)^{\theta }}-1\right)\right.\\ {} \displaystyle +\left.\theta (1+{U_{i}}(s))\frac{{U_{i}}(s){\gamma _{i}}(s,z)}{1+{\gamma _{i}}(s,z)}\right]{\Pi _{1}}(dz)\\ {} \displaystyle \left.+\frac{1}{\theta }\underset{\mathbb{R}}{\int }{(1+{U_{i}}(s))^{2}}\left[{\left(\frac{1+{U_{i}}(s)+{\delta _{i}}(s,z)}{(1+{\delta _{i}}(s,z))(1+{U_{i}}(s))}\right)^{\theta }}-1\right]{\Pi _{2}}(dz)\right\}ds\\ {} \displaystyle -{\underset{0}{\overset{t}{\int }}}\theta {(1+{U_{i}}(s))^{\theta -1}}{U_{i}}(s){\sigma _{i}}(s)d{w_{i}}(s)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(s)}{1+{\gamma _{i}}(s,z)}\right)^{\theta }}-{(1+{U_{i}}(s))^{\theta }}\right]{\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(s)}{1+{\delta _{i}}(s,z)}\right)^{\theta }}-{(1+{U_{i}}(s))^{\theta }}\right]{\tilde{\nu }_{2}}(ds,dz)={(1+{U_{i}}(0))^{\theta }}\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\theta {(1+{U_{i}}(s))^{\theta -2}}J(s)ds-{I_{1,\mathrm{stoch}}}(t)+{I_{2,\mathrm{stoch}}}(t)+{I_{3,\mathrm{stoch}}}(t),\end{array}\]
where ${I_{i,\mathrm{stoch}}}(t)$, $i=\overline{1,3}$, are the corresponding stochastic integrals in (12). Under Assumption 1 there exist constants $|{K_{1}}(\theta )|<\infty $, $|{K_{2}}(\theta )|<\infty $ such that the process $J(t)$ admits the estimate
\[\begin{array}{r}\displaystyle J(t)\le (1+{U_{i}}(t)){U_{i}}(t)\left[-{a_{i\min }}(t)+{c_{\max }}{U_{i}^{-1}}(t)+{\sigma _{i}^{2}}(t)\phantom{\underset{\mathbb{R}}{\int }}\right.\\ {} \displaystyle \left.+\underset{\mathbb{R}}{\int }\frac{{\gamma _{i}^{2}}(t,z)}{1+{\gamma _{i}}(t,z)}{\Pi _{1}}(dz)\right]+\frac{\theta -1}{2}{U_{i}^{2}}(t){\sigma _{i}^{2}}(t)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }\left[{(1+{U_{i}}(t))^{2}}\left({\left(\frac{1}{1+{\gamma _{i}}(t,z)}+\frac{1}{1+{U_{i}}(t)}\right)^{\theta }}-1\right)\right.\\ {} \displaystyle +\left.\theta (1+{U_{i}}(t))\frac{{U_{i}}(t){\gamma _{i}}(t,z)}{1+{\gamma _{i}}(t,z)}\right]{\Pi _{1}}(dz)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }{(1+{U_{i}}(t))^{2}}\left[{\left(\frac{1}{1+{\delta _{i}}(t,z)}+\frac{1}{1+{U_{i}}(t)}\right)^{\theta }}-1\right]{\Pi _{2}}(dz)\\ {} \displaystyle \le {U_{i}^{2}}(t)\left[-{a_{i\min }}(t)+\frac{{\sigma _{i}^{2}}(t)}{2}+\underset{\mathbb{R}}{\int }{\gamma _{i}}(t,z){\Pi _{1}}(dz)+\frac{\theta }{2}{\sigma _{i}^{2}}(t)\right.\\ {} \displaystyle \left.+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\gamma _{i}}(t,z))^{-\theta }}-1]{\Pi _{1}}(dz)+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\delta _{i}}(t,z))^{-\theta }}-1]{\Pi _{2}}(dz)\right]\\ {} \displaystyle +{K_{1}}(\theta ){U_{i}}(t)+{K_{2}}(\theta ).\end{array}\]
Here we used the inequality ${(x+y)^{\theta }}\le {x^{\theta }}+\theta {x^{\theta -1}}y$, $0<\theta <1$, $x,y>0$. Due to
\[\begin{array}{r}\displaystyle \underset{\theta \to 0+}{\lim }\left[\frac{\theta }{2}{\sigma _{i}^{2}}(t)+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\gamma _{i}}(t,z))^{-\theta }}-1]{\Pi _{1}}(dz)\right.\\ {} \displaystyle \left.+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\delta _{i}}(t,z))^{-\theta }}-1]{\Pi _{2}}(dz)\right]\\ {} \displaystyle =-\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(t,z)){\Pi _{1}}(dz)-\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\Pi _{2}}(dz),\end{array}\]
and the condition ${\min _{i=1,2}}{\inf _{t\ge 0}}({a_{i\min }}(t)-{\beta _{i}}(t))>0$ we can choose a sufficiently small $0<\theta <1$ to satisfy
\[\begin{array}{r}\displaystyle {K_{0}}(\theta )=\underset{i=1,2}{\min }\underset{t\ge 0}{\inf }\left\{{a_{i\min }}(t)-\frac{{\sigma _{i}^{2}}(t)}{2}-\underset{\mathbb{R}}{\int }{\gamma _{i}}(t,z){\Pi _{1}}(dz)-\frac{\theta }{2}{\sigma _{i}^{2}}(t)\right.\\ {} \displaystyle \left.-\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\gamma _{i}}(t,z))^{-\theta }}-1]{\Pi _{1}}(dz)-\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\delta _{i}}(t,z))^{-\theta }}-1]{\Pi _{2}}(dz)\right\}>0.\end{array}\]
So from (12) and the estimate for $J(t)$ we derive
(13)
\[\begin{array}{r}\displaystyle d\left[{(1+{U_{i}}(t))^{\theta }}\right]\le \theta {(1+{U_{i}}(t))^{\theta -2}}[-{K_{0}}(\theta ){U_{i}^{2}}(t)+{K_{1}}(\theta ){U_{i}}(t)+{K_{2}}(\theta )]dt\\ {} \displaystyle -\theta {(1+{U_{i}}(t))^{\theta -1}}{U_{i}}(t){\sigma _{i}}(t)d{w_{i}}(t)+\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(t)}{1+{\gamma _{i}}(t,z)}\right)^{\theta }}\right.\\ {} \displaystyle \left.-{(1+{U_{i}}(t))^{\theta }}\right]{\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(t)}{1+{\delta _{i}}(t,z)}\right)^{\theta }}-{(1+{U_{i}}(t))^{\theta }}\right]{\tilde{\nu }_{2}}(dt,dz).\end{array}\]
By the Itô formula and (13) we have
(14)
\[\begin{array}{r}\displaystyle d\left[{e^{\lambda t}}{(1+{U_{i}}(t))^{\theta }}\right]=\lambda {e^{\lambda t}}{(1+{U_{i}}(t))^{\theta }}dt+{e^{\lambda t}}d\left[{(1+{U_{i}}(t))^{\theta }}\right]\\ {} \displaystyle \le {e^{\lambda t}}\theta {(1+{U_{i}}(t))^{\theta -2}}\left[-\left({K_{0}}(\theta )-\frac{\lambda }{\theta }\right){U_{i}^{2}}(t)+\left({K_{1}}(\theta )+\frac{2\lambda }{\theta }\right){U_{i}}(t)\right.\\ {} \displaystyle \left.+{K_{2}}(\theta )+\frac{\lambda }{\theta }\right]dt-\theta {e^{\lambda t}}{(1+{U_{i}}(t))^{\theta -1}}{U_{i}}(t){\sigma _{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +{e^{\lambda t}}\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(t)}{1+{\gamma _{i}}(t,z)}\right)^{\theta }}-{(1+{U_{i}}(t))^{\theta }}\right]{\tilde{\nu }_{1}}(dt,dz)\\ {} \displaystyle +{e^{\lambda t}}\underset{\mathbb{R}}{\int }\left[{\left(1+\frac{{U_{i}}(t)}{1+{\delta _{i}}(t,z)}\right)^{\theta }}-{(1+{U_{i}}(t))^{\theta }}\right]{\tilde{\nu }_{2}}(dt,dz).\end{array}\]
Let us choose $\lambda >0$ such that ${K_{0}}(\theta )-\lambda /\theta >0$. Then the function
\[ {(1+{U_{i}}(t))^{\theta -2}}\left[-\left({K_{0}}(\theta )-\frac{\lambda }{\theta }\right){U_{i}^{2}}(t)+\left({K_{1}}(\theta )+\frac{2\lambda }{\theta }\right){U_{i}}(t)+{K_{2}}(\theta )+\frac{\lambda }{\theta }\right]\]
is bounded from above by some constant $K>0$. So by integrating (14) and taking the expectation we obtain
(15)
\[ {e^{\lambda t}}\mathrm{E}\left[{(1+{U_{i}}(t))^{\theta }}\right]\le {\left(1+\frac{1}{{x_{i0}}}\right)^{\theta }}+\frac{\theta K}{\lambda }\left({e^{\lambda t}}-1\right),\]
because the expectations of the stochastic integrals are equal to zero; indeed, by (10),
\[\begin{array}{r}\displaystyle \mathrm{E}\left[{\underset{0}{\overset{t}{\int }}}{e^{2\lambda s}}{(1+{U_{i}}(s))^{2\theta -2}}{U_{i}^{2}}(s){\sigma _{i}^{2}}(s)ds\right]=\mathrm{E}\left[{\underset{0}{\overset{t}{\int }}}\frac{{e^{2\lambda s}}{\sigma _{i}^{2}}(s)}{{(1+{x_{i}^{-1}}(s))^{2-2\theta }}{x_{i}^{2}}(s)}ds\right]\\ {} \displaystyle \le {\underset{0}{\overset{t}{\int }}}{e^{2\lambda s}}{\sigma _{i}^{2}}(s)\mathrm{E}[{x_{i}^{2\theta }}]ds<\infty ,\\ {} \displaystyle \mathrm{E}\left[{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{2\lambda s}}{\left[{\left(1+\frac{{U_{i}}(s)}{1+{\gamma _{i}}(s,z)}\right)^{\theta }}-{(1+{U_{i}}(s))^{\theta }}\right]^{2}}{\Pi _{1}}(dz)ds\right]\\ {} \displaystyle \le {L_{1}}{\underset{0}{\overset{t}{\int }}}{e^{2\lambda s}}\mathrm{E}\left[{x_{i}^{2\theta }}(s)\right]ds<\infty ,\\ {} \displaystyle \mathrm{E}\left[{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{2\lambda s}}{\left[{\left(1+\frac{{U_{i}}(s)}{1+{\delta _{i}}(s,z)}\right)^{\theta }}-{(1+{U_{i}}(s))^{\theta }}\right]^{2}}{\Pi _{2}}(dz)ds\right]\\ {} \displaystyle \le {L_{2}}{\underset{0}{\overset{t}{\int }}}{e^{2\lambda s}}\mathrm{E}\left[{x_{i}^{2\theta }}(s)\right]ds<\infty ,\end{array}\]
where
\[\begin{array}{r}\displaystyle {L_{1}}={\theta ^{2}}\max \left\{\frac{{({\gamma ^{2}})_{\max }}}{{(1+{\gamma _{\min }})^{2}}},\frac{{({\gamma ^{2}})_{\max }}}{{(1+{\gamma _{\min }})^{2\theta }}}\right\}{\Pi _{1}}(\mathbb{R})<\infty ,\\ {} \displaystyle {L_{2}}={\theta ^{2}}\max \left\{\frac{{({\delta ^{2}})_{\max }}}{{(1+{\delta _{\min }})^{2}}},\frac{{({\delta ^{2}})_{\max }}}{{(1+{\delta _{\min }})^{2\theta }}}\right\}{\Pi _{2}}(\mathbb{R})<\infty .\end{array}\]
From (15) we obtain
(16)
\[\begin{array}{r}\displaystyle \underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{i}}(t)}\right)^{\theta }}\right]=\underset{t\to \infty }{\limsup }\mathrm{E}\left[{U_{i}^{\theta }}(t)\right]\\ {} \displaystyle \le \underset{t\to \infty }{\limsup }\mathrm{E}\left[{(1+{U_{i}}(t))^{\theta }}\right]\le \frac{\theta K}{\lambda },\hspace{1em}i=1,2.\end{array}\]
From (11) and (16) by the Chebyshev inequality we can derive that for an arbitrary $\varepsilon \in (0,1)$ there are positive constants $H=H(\varepsilon )$ and $h=h(\varepsilon )$ such that
\[ \underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{i}}(t)\le H\}\ge 1-\varepsilon ,\hspace{2em}\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{i}}(t)\ge h\}\ge 1-\varepsilon ,\hspace{1em}i=1,2.\]
 □
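As a hedged illustration of how the condition of Theorem 3 can be checked numerically, the following Python sketch evaluates ${\beta _{i}}$ for discrete jump measures with purely hypothetical constant coefficients; identifying ${\gamma _{i}}(t,z)=z$ and ${\delta _{i}}(t,z)=z$ on the atoms of ${\Pi _{1}}$, ${\Pi _{2}}$ is an assumption made only for this example.

import numpy as np

# Hypothetical constant coefficients; Pi_1 and Pi_2 are discrete measures with
# atoms z and masses p. We identify gamma_i(t, z) = z and delta_i(t, z) = z,
# which is an assumption made only for this illustration.
a = np.array([[3.0, 4.0],
              [3.0, 4.0]])                 # a_{ij}
sigma = np.array([0.2, 0.2])               # sigma_i
z1, p1 = np.array([0.1]), np.array([1.0])  # atoms and masses of Pi_1
z2, p2 = np.array([0.3]), np.array([0.5])  # atoms and masses of Pi_2

for i in range(2):
    beta_i = (sigma[i] ** 2 / 2
              + np.sum((z1 - np.log1p(z1)) * p1)   # int [gamma - ln(1+gamma)] dPi_1
              - np.sum(np.log1p(z2) * p2))         # int ln(1+delta) dPi_2
    a_imin = a[i].min()                            # a_{i min} = min_j a_{ij}
    print(f"species {i + 1}: a_imin - beta_i = {a_imin - beta_i:.4f}")

# Theorem 3 asks for min_i inf_t (a_imin(t) - beta_i(t)) > 0; here both values
# are about 3.11 > 0, so the permanence condition holds for these constants.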

5 Extinction, nonpersistence and strong persistence in the mean

The property of extinction in the stochastic models of population dynamics means that every species will become extinct with probability 1.
Definition 3.
The solution $X(t)=({x_{1}}(t),{x_{2}}(t))$, $t\ge 0$, to the system (2) is said to be extinct if for every initial data ${X_{0}}>0$ we have ${\lim \nolimits_{t\to \infty }}{x_{i}}(t)=0$ almost surely (a.s.), $i=1,2$.
Theorem 4.
Let Assumption 1 be fulfilled. If
\[ {\bar{p}_{i}^{\ast }}=\underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{i\max }}(s)ds<0,\hspace{1em}\textit{where}\hspace{2.5pt}{p_{i\max }}(s)={a_{i\max }}(s)-{\beta _{i}}(s),\]
${a_{i\max }}(t)={\max _{j=1,2}}{a_{ij}}(t)$, $i=1,2$, then the solution $X(t)$ to the system (2) with the initial condition ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ will be extinct.
Proof.
By the Itô formula, we have
(17)
\[\begin{array}{r}\displaystyle \ln {x_{i}}(t)=\ln {x_{i0}}+{\underset{0}{\overset{t}{\int }}}\left[\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\phantom{\underset{\mathbb{R}}{\int }}\frac{{a_{i1}}(s)+{a_{i2}}(s){x_{3-i}}(s)}{1+{x_{3-i}}(s)}-{c_{i}}(s){x_{i}}(s)-\frac{{\sigma _{i}^{2}}(s)}{2}\right.\\ {} \displaystyle \left.+\underset{\mathbb{R}}{\int }[\ln (1+{\gamma _{i}}(s,z))-{\gamma _{i}}(s,z)]{\Pi _{1}}(dz)+\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(s,z)){\Pi _{2}}(dz)\right]ds\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}{\sigma _{i}}(s)d{w_{i}}(s)+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(s,z)){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(s,z)){\tilde{\nu }_{2}}(ds,dz)\le \ln {x_{i0}}+{\underset{0}{\overset{t}{\int }}}{p_{i\max }}(s)ds+M(t),\end{array}\]
where the martingale
(18)
\[\begin{array}{r}\displaystyle M(t)={\underset{0}{\overset{t}{\int }}}{\sigma _{i}}(s)d{w_{i}}(s)+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(s,z)){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(s,z)){\tilde{\nu }_{2}}(ds,dz)\end{array}\]
has quadratic variation
\[\begin{array}{r}\displaystyle \langle M,M\rangle (t)={\underset{0}{\overset{t}{\int }}}{\sigma _{i}^{2}}(s)ds+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{\ln ^{2}}(1+{\gamma _{i}}(s,z)){\Pi _{1}}(dz)ds\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{\ln ^{2}}(1+{\delta _{i}}(s,z)){\Pi _{2}}(dz)ds\le Kt.\end{array}\]
Then the strong law of large numbers for local martingales ([7]) yields ${\lim \nolimits_{t\to \infty }}M(t)/t=0$ a.s. Therefore, from (17) we have
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{i}}(t)}{t}\le \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{i\max }}(s)ds<0,\hspace{1em}\text{a.s.}\]
So ${\lim \nolimits_{t\to \infty }}{x_{i}}(t)=0$, $i=1,2$, a.s.  □
Definition 4.
([8]) The solution $X(t)=({x_{1}}(t),{x_{2}}(t))$, $t\ge 0$, to the system (2) is said to be nonpersistent in the mean if for every initial data ${X_{0}}>0$ we have ${\lim \nolimits_{t\to \infty }}\frac{1}{t}{\underset{0}{\overset{t}{\textstyle\int }}}{x_{i}}(s)ds=0$ a.s., $i=1,2$.
Theorem 5.
Let Assumption 1 be fulfilled. If ${\bar{p}_{i}^{\ast }}=0$, $i=1,2$, then the solution $X(t)$ to the system (2) with the initial condition ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ will be nonpersistent in the mean.
Proof.
From the first equality in (17) we have
(19)
\[ \ln {x_{i}}(t)\le \ln {x_{i0}}+{\underset{0}{\overset{t}{\int }}}{p_{i\max }}(s)ds-{c_{\min }}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds+M(t),\]
where the martingale $M(t)$ is defined in (18). From the definition of ${\bar{p}_{i}^{\ast }}$ and the strong law of large numbers for $M(t)$ it follows that $\forall \varepsilon >0$, $\exists {t_{0}}\ge 0$ such that
\[ \frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{i\max }}(s)ds\le {\bar{p}_{i}^{\ast }}+\frac{\varepsilon }{2},\hspace{2.5pt}\frac{M(t)}{t}\le \frac{\varepsilon }{2},\hspace{1em}\forall t\ge {t_{0}},\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
So, from (19) we derive
(20)
\[ \ln {x_{i}}(t)-\ln {x_{i0}}\le t({\bar{p}_{i}^{\ast }}+\varepsilon )-{c_{\min }}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds=t\varepsilon -{c_{\min }}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds.\]
Let ${y_{i}}(t)={\textstyle\int _{0}^{t}}{x_{i}}(s)ds$, then from (20) we have
\[ \ln \left(\frac{d{y_{i}}(t)}{dt}\right)\le \varepsilon t-{c_{\min }}{y_{i}}(t)+\ln {x_{i0}}\Rightarrow {e^{{c_{\min }}{y_{i}}(t)}}\frac{d{y_{i}}(t)}{dt}\le {x_{i0}}{e^{\varepsilon t}}.\]
Integrating the last inequality from ${t_{0}}$ to t yields
\[ {e^{{c_{\min }}{y_{i}}(t)}}\le \frac{{c_{\min }}{x_{i0}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)+{e^{{c_{\min }}{y_{i}}({t_{0}})}},\hspace{1em}\forall t\ge {t_{0}},\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
So
\[ {y_{i}}(t)\le \frac{1}{{c_{\min }}}\ln \left[{e^{{c_{\min }}{y_{i}}({t_{0}})}}+\frac{{c_{\min }}{x_{i0}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)\right],\hspace{1em}\forall t\ge {t_{0}},\hspace{2.5pt}\mathrm{a}.\mathrm{s}.,\]
and therefore
\[ \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds\le \frac{\varepsilon }{{c_{\min }}},\hspace{2.5pt}\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
Since $\varepsilon >0$ is arbitrary and $X(t)\in {\mathbb{R}_{+}^{2}}$, we have
\[ \underset{t\to \infty }{\lim }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds=0,\hspace{1em}i=1,2,\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
 □
Definition 5.
([8]) The solution $X(t)=({x_{1}}(t),{x_{2}}(t))$, $t\ge 0$, to the system (2) is said to be strongly persistent in the mean if for every initial data ${X_{0}}>0$ we have ${\liminf _{t\to \infty }}\frac{1}{t}{\underset{0}{\overset{t}{\textstyle\int }}}{x_{i}}(s)ds>0$ a.s., $i=1,2$.
Theorem 6.
Let Assumption 1 be fulfilled. If ${\bar{p}_{i\ast }}={\liminf _{t\to \infty }}\frac{1}{t}{\underset{0}{\overset{t}{\textstyle\int }}}{p_{i\min }}(s)ds>0$, where ${p_{i\min }}(s)={a_{i\min }}(s)-{\beta _{i}}(s)$, ${a_{i\min }}(t)={\min _{j=1,2}}{a_{ij}}(t)$, $i=1,2$, then
\[ \underset{t\to \infty }{\liminf }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds\ge \frac{{\bar{p}_{i\ast }}}{{c_{i\sup }}}.\]
Therefore, the solution $X(t)$ to the system (2) with the initial condition ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ will be strongly persistent in the mean.
Proof.
From the first equality in (17) we have
(21)
\[ \ln {x_{i}}(t)\ge \ln {x_{i0}}+{\underset{0}{\overset{t}{\int }}}{p_{i\min }}(s)ds-{c_{i\sup }}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds+M(t),\]
where the martingale $M(t)$ is defined in (18). From the definition of ${\bar{p}_{i\ast }}$ and the strong law of large numbers for $M(t)$ it follows that $\forall \varepsilon >0$, $\exists {t_{0}}\ge 0$ such that $\frac{1}{t}{\underset{0}{\overset{t}{\textstyle\int }}}{p_{i\min }}(s)ds\ge {\bar{p}_{i\ast }}-\frac{\varepsilon }{2}$, $\frac{M(t)}{t}\ge -\frac{\varepsilon }{2}$, $\forall t\ge {t_{0}}$, a.s. So, from (21) we obtain
(22)
\[ \ln {x_{i}}(t)\ge \ln {x_{i0}}+t({\bar{p}_{i\ast }}-\varepsilon )-{c_{i\sup }}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds.\]
Let us choose sufficiently small $\varepsilon >0$ such that ${\bar{p}_{i\ast }}-\varepsilon >0$.
Let ${y_{i}}(t)={\textstyle\int _{0}^{t}}{x_{i}}(s)ds$, then from (22) we have
\[ \ln \left(\frac{d{y_{i}}(t)}{dt}\right)\ge ({\bar{p}_{i\ast }}-\varepsilon )t-{c_{i\sup }}{y_{i}}(t)+\ln {x_{i0}}.\]
Hence ${e^{{c_{i\sup }}{y_{i}}(t)}}\frac{d{y_{i}}(t)}{dt}\ge {x_{i0}}{e^{({\bar{p}_{i\ast }}-\varepsilon )t}}$. Integrating the last inequality from ${t_{0}}$ to t yields
\[ {e^{{c_{i\sup }}{y_{i}}(t)}}\ge \frac{{c_{i\sup }}{x_{i0}}}{{\bar{p}_{i\ast }}-\varepsilon }\left({e^{({\bar{p}_{i\ast }}-\varepsilon )t}}-{e^{({\bar{p}_{i\ast }}-\varepsilon ){t_{0}}}}\right)+{e^{{c_{i\sup }}{y_{i}}({t_{0}})}},\hspace{1em}\forall t\ge {t_{0}},\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
So
\[ {y_{i}}(t)\ge \frac{1}{{c_{i\sup }}}\ln \left[{e^{{c_{i\sup }}{y_{i}}({t_{0}})}}+\frac{{c_{i\sup }}{x_{i0}}}{{\bar{p}_{i\ast }}-\varepsilon }\left({e^{({\bar{p}_{i\ast }}-\varepsilon )t}}-{e^{({\bar{p}_{i\ast }}-\varepsilon ){t_{0}}}}\right)\right],\hspace{2.5pt}\mathrm{a}.\mathrm{s}.,\]
$\forall t\ge {t_{0}}$, and therefore
\[ \underset{t\to \infty }{\liminf }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{i}}(s)ds\ge \frac{({\bar{p}_{i\ast }}-\varepsilon )}{{c_{i\sup }}},\hspace{2.5pt}\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\]
Using the arbitrariness of $\varepsilon >0$ we get the assertion of the theorem.  □
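In the constant-coefficient case the threshold quantities of Theorems 4–6 reduce to numbers,
\[ {\bar{p}_{i}^{\ast }}={a_{i\max }}-{\beta _{i}},\hspace{2em}{\bar{p}_{i\ast }}={a_{i\min }}-{\beta _{i}},\hspace{1em}i=1,2.\]
For example, with the purely illustrative constants used in the Introduction (${a_{i\min }}=3$, ${a_{i\max }}=4$, ${\beta _{i}}\approx -0.106$) both quantities are positive, so Theorem 6 gives strong persistence in the mean, while extinction in Theorem 4 would require ${\beta _{i}}>{a_{i\max }}$, i.e. a much stronger noise or jump contribution.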

References

[1] 
Borysenko, O.D., Borysenko, D.O.: Persistence and Extinction in Stochastic Nonautonomous Logistic Model of Population Dynamics. Theory Probab. Math. Stat. 2(99), 63–70 (2018). MR3908656
[2] 
Borysenko, O.D., Borysenko, D.O.: Asymptotic Behavior of the Solution to the Non-Autonomous Stochastic Logistic Differential Equation. Theory Probab. Math. Stat. 2(101), 40–48 (2019)
[3] 
Gikhman, I.I., Skorokhod, A.V.: Stochastic Differential Equations and their Applications. Naukova Dumka, Kyiv (1982) (in Russian). MR0678374
[4] 
Gopalsamy, K.: Stability and Oscillation in Delay Differential Equations of Population Dynamics. Kluwer Academic Publishers, (1992). MR1163190. https://doi.org/10.1007/978-94-015-7920-9
[5] 
Li, M., Gao, H., Wang, B.: Analysis of a non-autonomous mutualism model driven by Lévy jumps. Discrete Contin. Dyn. Syst., Ser. B 21(4), 1189–1202 (2016). MR3483558. https://doi.org/10.3934/dcdsb.2016.21.1189
[6] 
Li, X., Mao, X.: Population Dynamical Behavior of Non-Autonomous Lotka-Volterra Competitive System with Random Perturbation. Discrete Contin. Dyn. Syst. 24, 523–545 (2009). MR2486589. https://doi.org/10.3934/dcds.2009.24.523
[7] 
Liptser, R.: A strong law of large numbers for local martingales. Stochastics 3, 217–228 (1980). MR0573205. https://doi.org/10.1080/17442508008833146
[8] 
Liu, M., Wang, K.: Persistence and extinction in stochastic non-autonomous logistic systems. J. Math. Anal. Appl. 375, 443–457 (2011). MR2735535. https://doi.org/10.1016/j.jmaa.2010.09.058
[9] 
Qiu, H., Lv, J., Wang, K.: Two types of permanence of a stochastic mutualism model. Adv. Differ. Equ. 2013, 37 (2013). http://www.advancesindifferenceequations.com/content/2013/1/37. MR3033721. https://doi.org/10.1186/1687-1847-2013-37

Copyright
© 2020 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Stochastic mutualism model, global solution, stochastic ultimate boundedness, stochastic permanence, extinction, nonpersistence in the mean, strong persistence in the mean

MSC2010
92D25, 60H10, 60H30
