1 Introduction
The deterministic predator–prey model with a modified Leslie–Gower term and a Holling-type II functional response is studied in [1]. This model has the form
where ${x_{1}}(t)$ and ${x_{2}}(t)$ are the prey and predator population densities at time t, respectively. The positive constants a, b, c, r, f, ${m_{1}}$, ${m_{2}}$ are defined as follows: a is the growth rate of prey ${x_{1}}$; b measures the strength of competition among individuals of species ${x_{1}}$; c is the maximum value of the per capita reduction rate of ${x_{1}}$ due to ${x_{2}}$; ${m_{1}}$ and ${m_{2}}$ measure the extent to which the environment provides protection to the prey ${x_{1}}$ and to the predator ${x_{2}}$, respectively; r is the growth rate of the predator ${x_{2}}$, and f has a similar meaning to c. In [1] the authors study boundedness and global stability of the positive equilibrium of the model (1).
(1)
\[\begin{array}{r}\displaystyle d{x_{1}}(t)={x_{1}}(t)\left(a-b{x_{1}}(t)-\frac{c{x_{2}}(t)}{{m_{1}}+{x_{1}}(t)}\right)dt,\\ {} \displaystyle d{x_{2}}(t)={x_{2}}(t)\left(r-\frac{f{x_{2}}(t)}{{m_{2}}+{x_{1}}(t)}\right)dt,\end{array}\]
In the papers [6, 7, 9] the stochastic version of model (1) is considered in the form
where ${w_{1}}(t)$ and ${w_{2}}(t)$ are mutually independent Wiener processes in [6, 7], while in [9] the processes ${w_{1}}(t)$ and ${w_{2}}(t)$ are correlated. In [6] the authors proved that there is a unique positive solution to the system (2) and obtained sufficient conditions for extinction and persistence in the mean of the predator and prey. In [7] it is shown that, under appropriate conditions, there is a stationary distribution of the solution to the system (2), which is ergodic. In [9] the authors prove that, under appropriate conditions, the densities of the distributions of the solution to the system (2) either converge in ${L^{1}}$ to an invariant density or converge weakly to a singular measure.
(2)
\[\begin{array}{r}\displaystyle d{x_{1}}(t)={x_{1}}(t)\left(a-b{x_{1}}(t)-\frac{c{x_{2}}(t)}{{m_{1}}+{x_{1}}(t)}\right)dt+\alpha {x_{1}}(t)d{w_{1}}(t),\\ {} \displaystyle d{x_{2}}(t)={x_{2}}(t)\left(r-\frac{f{x_{2}}(t)}{{m_{2}}+{x_{1}}(t)}\right)dt+\beta {x_{2}}(t)d{w_{2}}(t),\end{array}\]
Population systems may suffer abrupt environmental perturbations, such as epidemics, fires, earthquakes, etc. So it is natural to introduce Poisson noises into the population model to describe such discontinuous effects.
In this paper, we consider the nonautonomous predator–prey model with a modified Leslie–Gower term and a Holling-type II functional response, disturbed by white noise and by jumps generated by centered and noncentered Poisson measures. Thus we take into account not only “small” jumps, corresponding to the centered Poisson measure, but also “large” jumps, corresponding to the noncentered Poisson measure. This model is driven by the system of stochastic differential equations
where ${x_{1}}(t)$ and ${x_{2}}(t)$ are the prey and predator population densities at time t, respectively, ${b_{2}}(t)\equiv 0$, ${w_{i}}(t),i=1,2$, are independent standard one-dimensional Wiener processes, ${\nu _{i}}(t,A),i=1,2$, are independent Poisson measures, which are independent of ${w_{i}}(t),i=1,2$, ${\tilde{\nu }_{1}}(t,A)={\nu _{1}}(t,A)-t{\Pi _{1}}(A)$, $E[{\nu _{i}}(t,A)]=t{\Pi _{i}}(A)$, $i=1,2$, and ${\Pi _{i}}(A),i=1,2$, are finite measures on the Borel sets A in $\mathbb{R}$.
(3)
\[\begin{array}{r}\displaystyle d{x_{i}}(t)={x_{i}}(t)\left[{a_{i}}(t)-{b_{i}}(t){x_{i}}(t)-\frac{{c_{i}}(t){x_{2}}(t)}{m(t)+{x_{1}}(t)}\right]dt+{\sigma _{i}}(t){x_{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }{\gamma _{i}}(t,z){x_{i}}(t){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }{\delta _{i}}(t,z){x_{i}}(t){\nu _{2}}(dt,dz),\\ {} \displaystyle {x_{i}}(0)={x_{i0}}>0,\hspace{2.5pt}i=1,2,\end{array}\]
To the best of our knowledge, there have been no papers devoted to the dynamical properties of the stochastic predator–prey model (3), even in the case of centered Poisson noise. It is worth noting that the impact of centered and noncentered Poisson noises on the stochastic nonautonomous logistic model and on the stochastic two-species mutualism model is studied in the papers [2–4].
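Although no numerical experiments are carried out in this paper, the structure of the noise terms in (3) may be easier to see from a simulation sketch. The following minimal Euler-type scheme is purely illustrative and assumes constant coefficients and a single jump size for each Poisson measure; all parameter values are hypothetical, and the positivity floor is a crude numerical safeguard rather than a feature of the model.

```python
import numpy as np

# Minimal Euler-type simulation sketch of system (3), assuming constant
# coefficients and gamma_i(t,z), delta_i(t,z) constant on the (finite)
# supports of Pi_1, Pi_2.  All numerical values are hypothetical.
rng = np.random.default_rng(1)

a    = np.array([1.0, 0.6])     # a_i(t) = a_i
b    = np.array([0.5, 0.0])     # b_1(t) = b_1 > 0, b_2(t) = 0
c    = np.array([0.4, 0.3])     # c_i(t) = c_i
m    = 1.0                      # m(t) = m
sig  = np.array([0.2, 0.2])     # sigma_i(t) = sigma_i
gam  = np.array([0.10, 0.10])   # jump sizes driven by the centered measure
dlt  = np.array([-0.05, 0.05])  # jump sizes driven by the noncentered measure
lam1, lam2 = 1.0, 0.5           # Pi_1(R), Pi_2(R): jump intensities

dt, T = 1e-3, 50.0
n = int(T / dt)
x = np.empty((n + 1, 2))
x[0] = [0.5, 0.5]

for k in range(n):
    x1, x2 = x[k]
    drift = np.array([
        x1 * (a[0] - b[0] * x1 - c[0] * x2 / (m + x1)),
        x2 * (a[1] - b[1] * x2 - c[1] * x2 / (m + x1)),
    ])
    drift -= gam * x[k] * lam1            # compensator of the centered measure
    dw = rng.normal(0.0, np.sqrt(dt), 2)  # independent Wiener increments
    xe = x[k] + drift * dt + sig * x[k] * dw
    n1 = rng.poisson(lam1 * dt)           # jumps of nu_1 on (t, t + dt]
    n2 = rng.poisson(lam2 * dt)           # jumps of nu_2 on (t, t + dt]
    x[k + 1] = np.maximum(xe * (1 + gam) ** n1 * (1 + dlt) ** n2, 1e-12)

print("time-average of x_1, x_2:", x.mean(axis=0))
```

Here the compensated measure ${\tilde{\nu }_{1}}$ contributes both the multiplicative jumps and the drift correction $-{x_{i}}(t){\textstyle\int _{\mathbb{R}}}{\gamma _{i}}(t,z){\Pi _{1}}(dz)dt$, while the noncentered measure ${\nu _{2}}$ contributes jumps only.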
In the following we will use the notations $X(t)=({x_{1}}(t),{x_{2}}(t))$, ${X_{0}}=({x_{10}},{x_{20}})$, $|X(t)|=\sqrt{{x_{1}^{2}}(t)+{x_{2}^{2}}(t)}$, ${\mathbb{R}_{+}^{2}}=\{X\in {\mathbb{R}^{2}}:\hspace{2.5pt}{x_{1}}>0,{x_{2}}>0\}$,
\[\begin{array}{r}\displaystyle {\alpha _{i}}(t)={a_{i}}(t)+{\int _{\mathbb{R}}}{\delta _{i}}(t,z){\Pi _{2}}(dz),\\ {} \displaystyle {\beta _{i}}(t)\hspace{-0.1667em}=\hspace{-0.1667em}\frac{{\sigma _{i}^{2}}(t)}{2}\hspace{-0.1667em}+\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }[{\gamma _{i}}(t,z)\hspace{-0.1667em}-\hspace{-0.1667em}\ln (1\hspace{-0.1667em}+\hspace{-0.1667em}{\gamma _{i}}(t,z))]{\Pi _{1}}(dz)\hspace{-0.1667em}-\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1\hspace{-0.1667em}+\hspace{-0.1667em}{\delta _{i}}(t,z)){\Pi _{2}}(dz),\end{array}\]
$i=1,2$. For a bounded, continuous function $f(t)$, $t\in [0,+\infty )$, let us denote
\[ {f_{\sup }}=\underset{t\ge 0}{\sup }f(t),\hspace{1em}{f_{\inf }}=\underset{t\ge 0}{\inf }f(t),\hspace{1em}{\bar{f}^{\ast }}=\underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}f(s)ds.\]
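It is worth noting (a brief check, for orientation) that ${\beta _{i}}(t)$ is exactly the correction term produced by the Itô formula for jump diffusions when passing from ${x_{i}}(t)$ in (3) to $\ln {x_{i}}(t)$:
\[\begin{array}{r}\displaystyle d\ln {x_{i}}(t)=\left[{a_{i}}(t)-{b_{i}}(t){x_{i}}(t)-\frac{{c_{i}}(t){x_{2}}(t)}{m(t)+{x_{1}}(t)}-{\beta _{i}}(t)\right]dt+{\sigma _{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(t,z)){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\tilde{\nu }_{2}}(dt,dz),\end{array}\]
where the noncentered integral has been rewritten through ${\tilde{\nu }_{2}}(dt,dz)={\nu _{2}}(dt,dz)-{\Pi _{2}}(dz)dt$; this identity reappears as the system (4) in the proof of Theorem 1 below.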
We prove that the system (3) has a unique, positive, global (no explosion in a finite time) solution for any positive initial value, and that this solution is stochastically ultimately bounded. Sufficient conditions for stochastic permanence, nonpersistence in the mean, weak persistence in the mean and extinction of the solution are derived.
The rest of this paper is organized as follows. In Section 2, we prove the existence of the unique global positive solution to the system (3) and derive some auxiliary results. In Section 3, we prove the stochastic ultimate boundedness of the solution to the system (3) and obtain conditions under which the solution is stochastically permanent. Sufficient conditions for nonpersistence in the mean, weak persistence in the mean and extinction of the solution are also derived.
2 Existence of global solution and some auxiliary lemmas
Let $(\Omega ,\mathcal{F},\mathrm{P})$ be a probability space, let ${w_{i}}(t),i=1,2$, $t\ge 0$, be independent standard one-dimensional Wiener processes on $(\Omega ,\mathcal{F},\mathrm{P})$, and let ${\nu _{i}}(t,A),i=1,2$, be independent Poisson measures defined on $(\Omega ,\mathcal{F},\mathrm{P})$ and independent of ${w_{i}}(t),i=1,2$. Here $\mathrm{E}[{\nu _{i}}(t,A)]=t{\Pi _{i}}(A)$, $i=1,2$, ${\tilde{\nu }_{i}}(t,A)={\nu _{i}}(t,A)-t{\Pi _{i}}(A)$, $i=1,2$, and ${\Pi _{i}}(\cdot ),i=1,2$, are finite measures on the Borel sets in $\mathbb{R}$. On the probability space $(\Omega ,\mathcal{F},\mathrm{P})$ we consider an increasing, right-continuous family of complete sub-σ-algebras ${\{{\mathcal{F}_{t}}\}_{t\ge 0}}$, where ${\mathcal{F}_{t}}=\sigma \{{w_{i}}(s),{\nu _{i}}(s,A),s\le t,i=1,2\}$.
We need the following assumption.
Assumption 1.
It is assumed that ${a_{i}}(t)$, ${b_{1}}(t)$, ${c_{i}}(t),{\sigma _{i}}(t),{\gamma _{i}}(t,z),{\delta _{i}}(t,z),i=1,2$, and $m(t)$ are bounded functions, continuous in t, that ${a_{i}}(t)>0$, $i=1,2$, ${b_{1\inf }}>0$, ${c_{i\inf }}>0$, $i=1,2$, ${m_{\inf }}={\inf _{t\ge 0}}m(t)>0$, that $\ln (1+{\gamma _{i}}(t,z))$ and $\ln (1+{\delta _{i}}(t,z))$, $i=1,2$, are bounded, and that ${\Pi _{i}}(\mathbb{R})<\infty $, $i=1,2$.
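For example (a purely illustrative choice, not taken from the paper), the coefficients
\[ {a_{i}}(t)={a_{i}}(1+0.5\sin t),\hspace{1em}{b_{1}}(t)\equiv {b_{1}},\hspace{1em}{c_{i}}(t)\equiv {c_{i}},\hspace{1em}{\gamma _{i}}(t,z)\equiv {\gamma _{i}}>-1,\hspace{1em}{\delta _{i}}(t,z)\equiv {\delta _{i}}>-1,\]
with positive constants ${a_{i}}$, ${b_{1}}$, ${c_{i}}$, bounded continuous ${\sigma _{i}}(t)$, a bounded continuous $m(t)$ with ${m_{\inf }}>0$, and finite measures ${\Pi _{1}}$, ${\Pi _{2}}$, satisfy Assumption 1.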
In what follows we will assume that Assumption 1 holds.
Theorem 1.
There exists a unique global solution $X(t)$ to the system (3) for any initial value $X(0)={X_{0}}\in {\mathbb{R}_{+}^{2}}$, and $\mathrm{P}\{X(t)\in {\mathbb{R}_{+}^{2}}\}=1$.
Proof.
Let us consider the system of stochastic differential equations
The coefficients of the system (4) are locally Lipschitz continuous. So, for any initial value $({\xi _{1}}(0),{\xi _{2}}(0))$ there exists a unique local solution $\Xi (t)=({\xi _{1}}(t),{\xi _{2}}(t))$ on $[0,{\tau _{e}})$, where ${\tau _{e}}$ is the explosion time (cf. Theorem 6, p. 246, [5]). Therefore, by the Itô formula, the process $X(t)=(\exp \{{\xi _{1}}(t)\},\exp \{{\xi _{2}}(t)\})$ is the unique positive local solution to the system (3). To show that this solution is global, we need to show that ${\tau _{e}}=+\infty $ a.s. Let ${n_{0}}\in \mathbb{N}$ be so large that ${x_{i0}}\in [1/{n_{0}},{n_{0}}]$, $i=1,2$. For any $n\ge {n_{0}}$ we define the stopping time
Let us consider the function $f(t,{x_{1}},{x_{2}})=\phi (t,{x_{1}})+\psi (t,{x_{1}},{x_{2}})$, ${x_{1}}>0$, ${x_{2}}>0$, where
Taking expectations in (7), and using the fact that the expectations of the stochastic integrals are equal to zero, we derive
(8)
\[ \mathrm{E}[V(X(T\wedge {\tau _{n}}))]\le V({X_{0}})+L({k_{1}},{k_{2}})T.\]
Set ${\Omega _{n}}=\{{\tau _{n}}\le T\}$ for $n\ge {n_{1}}$. Then by (5), $\mathrm{P}({\Omega _{n}})=\mathrm{P}\{{\tau _{n}}\le T\}>\varepsilon $, $\forall n\ge {n_{1}}$. Note that for every $\omega \in {\Omega _{n}}$ at least one of ${x_{1}}({\tau _{n}},\omega )$ and ${x_{2}}({\tau _{n}},\omega )$ equals either n or $1/n$. So
From (8) it follows
(4)
\[\begin{array}{r}\displaystyle d{\xi _{i}}(t)=\left[{a_{i}}(t)-{b_{i}}(t)\exp \{{\xi _{i}}(t)\}-\frac{{c_{i}}(t)\exp \{{\xi _{2}}(t)\}}{m(t)+\exp \{{\xi _{1}}(t)\}}-{\beta _{i}}(t)\right]dt\\ {} \displaystyle +{\sigma _{i}}(t)d{w_{i}}(t)+\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(t,z)){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(t,z)){\tilde{\nu }_{2}}(dt,dz),\\ {} \displaystyle {\xi _{i}}(0)=\ln {x_{i0}},\hspace{2.5pt}i=1,2.\end{array}\]
\[ {\tau _{n}}=\inf \left\{t\in [0,{\tau _{e}}):\hspace{2.5pt}X(t)\notin \left(\frac{1}{n},n\right)\times \left(\frac{1}{n},n\right)\right\}.\]
It is easy to see that ${\tau _{n}}$ is increasing as $n\to +\infty $. Denote ${\tau _{\infty }}={\lim \nolimits_{n\to \infty }}{\tau _{n}}$, whence ${\tau _{\infty }}\le {\tau _{e}}$ a.s. If we prove that ${\tau _{\infty }}=\infty $ a.s., then ${\tau _{e}}=\infty $ a.s. and $X(t)\in {\mathbb{R}_{+}^{2}}$ a.s. for all $t\in [0,+\infty )$. So we need to show that ${\tau _{\infty }}=\infty $ a.s. If it is not true, there are constants $T>0$ and $\varepsilon \in (0,1)$, such that $\mathrm{P}\{{\tau _{\infty }}<T\}>\varepsilon $. Hence, there is ${n_{1}}\ge {n_{0}}$ such that
(5)
\[ \mathrm{P}\{{\tau _{n}}\le T\}>\varepsilon ,\hspace{1em}\forall n\ge {n_{1}}.\]
For the nonnegative function $V(X)={\textstyle\sum \limits_{i=1}^{2}}{k_{i}}({x_{i}}-1-\ln {x_{i}})$, $X=({x_{1}},{x_{2}})$, ${x_{i}}>0$, ${k_{i}}>0$, $i=1,2$, by the Itô formula we obtain
(6)
\[\begin{array}{r}\displaystyle dV(X(t))={\sum \limits_{i=1}^{2}}{k_{i}}\Bigg\{({x_{i}}(t)-1)\left[{a_{i}}(t)-{b_{i}}(t){x_{i}}(t)-\frac{{c_{i}}(t){x_{2}}(t)}{m(t)+{x_{1}}(t)}\right]\\ {} \displaystyle +{\beta _{i}}(t)+\underset{\mathbb{R}}{\int }{\delta _{i}}(t,z){x_{i}}(t){\Pi _{2}}(dz)\Bigg\}dt+{\sum \limits_{i=1}^{2}}{k_{i}}\Bigg\{({x_{i}}(t)-1){\sigma _{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\hspace{-0.1667em}[{\gamma _{i}}(t,z){x_{i}}(t)-\ln (1+{\gamma _{i}}(t,z))]{\tilde{\nu }_{1}}(dt,dz)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }[{\delta _{i}}(t,z){x_{i}}(t)-\ln (1+{\delta _{i}}(t,z))]{\tilde{\nu }_{2}}(dt,dz)\Bigg\}.\end{array}\]
\[\begin{array}{r}\displaystyle \phi (t,{x_{1}})=-{k_{1}}{b_{1}}(t){x_{1}^{2}}+{k_{1}}\Big({\alpha _{1}}(t)+{b_{1}}(t)\Big){x_{1}}+{k_{1}}{\beta _{1}}(t)+{k_{2}}{\beta _{2}}(t)\\ {} \displaystyle -{k_{1}}{a_{1}}(t)-{k_{2}}{a_{2}}(t),\\ {} \displaystyle \psi (t,{x_{1}},{x_{2}})={(m(t)+{x_{1}})^{-1}}\Big[-{k_{2}}{c_{2}}(t){x_{2}^{2}}+\Big({k_{2}}{\alpha _{2}}(t)-{k_{1}}{c_{1}}(t)\Big){x_{1}}{x_{2}}\\ {} \displaystyle +\Big({k_{2}}{\alpha _{2}}(t)m(t)+{k_{1}}{c_{1}}(t)+{k_{2}}{c_{2}}(t)\Big){x_{2}}\Big].\end{array}\]
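A direct computation (grouping the terms with and without the factor ${(m(t)+{x_{1}}(t))^{-1}}$ and using ${b_{2}}(t)\equiv 0$) shows that the drift coefficient in (6) is exactly $f(t,{x_{1}}(t),{x_{2}}(t))=\phi (t,{x_{1}}(t))+\psi (t,{x_{1}}(t),{x_{2}}(t))$; it is this identity that allows the bounds on φ and ψ obtained below to be applied to (6).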
Under Assumption 1 there is a constant ${L_{1}}({k_{1}},{k_{2}})>0$, such that
\[ \phi (t,{x_{1}})\le {k_{1}}\left[-{b_{1\inf }}{x_{1}^{2}}+\left({\alpha _{1\sup }}+{b_{1\sup }}\right){x_{1}}\right]+{\beta _{\max }}({k_{1}}+{k_{2}})\le {L_{1}}({k_{1}},{k_{2}}).\]
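For instance, since $-{b_{1\inf }}{x^{2}}+({\alpha _{1\sup }}+{b_{1\sup }})x\le \frac{{({\alpha _{1\sup }}+{b_{1\sup }})^{2}}}{4{b_{1\inf }}}$ for all $x>0$, one admissible (by no means unique) choice is
\[ {L_{1}}({k_{1}},{k_{2}})=\frac{{k_{1}}{({\alpha _{1\sup }}+{b_{1\sup }})^{2}}}{4{b_{1\inf }}}+({k_{1}}+{k_{2}})|{\beta _{\max }}|.\]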
If ${\alpha _{2\sup }}\le 0$, then for the function $\psi (t,{x_{1}},{x_{2}})$ we have, for some constant ${L_{2}}({k_{1}},{k_{2}})>0$,
\[ \psi (t,{x_{1}},{x_{2}})\le \frac{-{k_{2}}{c_{2\inf }}{x_{2}^{2}}+({k_{1}}+{k_{2}}){c_{\max }}{x_{2}}}{m(t)+{x_{1}}}\le {L_{2}}({k_{1}},{k_{2}}).\]
If ${\alpha _{2\sup }}>0$, then for ${k_{2}}={k_{1}}\frac{{c_{1\inf }}}{{\alpha _{2\sup }}}$ there is a constant ${L_{3}}({k_{1}},{k_{2}})>0$, such that
\[\begin{array}{r}\displaystyle \psi (t,{x_{1}},{x_{2}})\le \Bigg\{-{k_{2}}{c_{2\inf }}{x_{2}^{2}}+({k_{2}}{\alpha _{2\sup }}-{k_{1}}{c_{1\inf }}){x_{1}}{x_{2}}+\Bigg[{k_{2}}{\alpha _{2\sup }}{m_{\sup }}\\ {} \displaystyle +({k_{1}}+{k_{2}}){c_{\max }}\Bigg]{x_{2}}\Bigg\}{(m(t)+{x_{1}})^{-1}}=\frac{{k_{1}}}{m(t)+{x_{1}}}\bigg\{-\frac{{c_{1\inf }}{c_{2\inf }}}{{\alpha _{2\sup }}}{x_{2}^{2}}\\ {} \displaystyle +\bigg[{c_{1\inf }}{m_{\sup }}+\left(1+\frac{{c_{1\inf }}}{{\alpha _{2\sup }}}\right){c_{\max }}\bigg]{x_{2}}\bigg\}\le {L_{3}}({k_{1}},{k_{2}}).\end{array}\]
Therefore, there is a constant $L({k_{1}},{k_{2}})>0$, such that $f(t,{x_{1}},{x_{2}})\le L({k_{1}},{k_{2}})$. So, from (6) we obtain by integrating
(7)
\[\begin{array}{r}\displaystyle V(X(T\wedge {\tau _{n}}))\le V({X_{0}})+L({k_{1}},{k_{2}})(T\wedge {\tau _{n}})\\ {} \displaystyle +{\sum \limits_{i=1}^{2}}{k_{i}}\Bigg\{{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}({x_{i}}(t)-1){\sigma _{i}}(t)d{w_{i}}(t)+{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\Big[{\gamma _{i}}(t,z){x_{i}}(t)-\ln (1\\ {} \displaystyle +{\gamma _{i}}(t,z))\Big]{\tilde{\nu }_{1}}(dt,dz)+{\underset{0}{\overset{T\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }[{\delta _{i}}(t,z){x_{i}}(t)-\ln (1+{\delta _{i}}(t,z))]{\tilde{\nu }_{2}}(dt,dz)\Bigg\}.\end{array}\]
\[\begin{array}{r}\displaystyle V({X_{0}})+L({k_{1}},{k_{2}})T\ge \mathrm{E}[{\mathbf{1}_{{\Omega _{n}}}}V(X({\tau _{n}}))]\\ {} \displaystyle \ge \varepsilon \min \{{k_{1}},{k_{2}}\}\min \{n-1-\ln n,\frac{1}{n}-1+\ln n\},\end{array}\]
where ${\mathbf{1}_{{\Omega _{n}}}}$ is the indicator function of ${\Omega _{n}}$. Letting $n\to \infty $ leads to the contradiction $\infty >V({X_{0}})+L({k_{1}},{k_{2}})T=\infty $. This completes the proof of the theorem.  □
Lemma 1.
For any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ the solution $X(t)$ to the system (3) has the property
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{1}}(t)}{\ln t}\le 1\hspace{1em}\text{a.s.}\]
Proof.
By the Itô formula for the process ${e^{t}}\ln (m+{x_{1}}(t))$ we have
For $0<\kappa \le 1$, let us denote the process
Choosing $T=k\tau $, $k\in \mathbb{N}$, $\tau >0$, $\kappa =\varepsilon {e^{-k\tau }}$, $\beta =\theta {e^{k\tau }}{\varepsilon ^{-1}}\ln k$, $0<\varepsilon <1$, $\theta >1$, we get
By using the inequality ${x^{r}}\le 1+r(x-1)$, $\forall x\ge 0$, $0\le r\le 1$, with $x=1+\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}$, $r=\varepsilon {e^{s-k\tau }}$, and then with $x=1+\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}$, $r=\varepsilon {e^{s-k\tau }}$, we derive the estimates
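(For completeness: the inequality ${x^{r}}\le 1+r(x-1)$ for $x\ge 0$, $0\le r\le 1$, follows from the concavity of $x\mapsto {x^{r}}$, whose graph lies below its tangent line at $x=1$.)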
From (10), by using (12)–(14) we get
It is easy to see that, under Assumption 1, there exists a constant $L>0$, independent of k, s, ${x_{1}}(s)$ and ${x_{2}}(s)$, such that the expression in the braces on the right-hand side of (15) does not exceed L.
So, from (15) for any $(k-1)\tau \le t\le k\tau $ we have (a.s.)
(10)
\[\begin{array}{r}\displaystyle {e^{t}}\ln (m+{x_{1}}(t))-\ln (m+{x_{10}})={\int _{0}^{t}}{e^{s}}\Bigg\{\ln (m+{x_{1}}(s))\\ {} \displaystyle +\hspace{-0.1667em}\frac{{x_{1}}(s)}{m+{x_{1}}(s)}\bigg[{a_{1}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}{b_{1}}(s){x_{1}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{c_{1}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\bigg]\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{\sigma _{1}^{2}}(s){x_{1}^{2}}(s)}{2{(m+{x_{1}}(s))^{2}}}\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\bigg[\ln \bigg(1+\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg)-\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg]{\Pi _{1}}(dz)\Bigg\}ds\\ {} \displaystyle +{\int _{0}^{t}}{e^{s}}\frac{{\sigma _{1}}(s){x_{1}}(s)}{m+{x_{1}}(s)}d{w_{1}}(s)+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}\ln \bigg(1+\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}\ln \bigg(1+\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg){\nu _{2}}(ds,dz).\end{array}\]
\[\begin{array}{r}\displaystyle {\zeta _{\kappa }}(t)={\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{e^{s}}\frac{{\sigma _{1}}(s){x_{1}}(s)}{m+{x_{1}}(s)}d{w_{1}}(s)\hspace{-0.1667em}+\hspace{-0.1667em}{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\ln \bigg(1+\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +\hspace{-0.1667em}{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\ln \bigg(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg)\hspace{-0.1667em}{\nu _{2}}(ds,dz)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{\kappa }{2}\hspace{-0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{e^{2s}}{\sigma _{1}^{2}}(s){\left(\frac{{x_{1}}(s)}{m+{x_{1}}(s)}\right)^{2}}\hspace{-0.1667em}\hspace{-0.1667em}ds\\ {} \displaystyle -\frac{1}{\kappa }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\left[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\kappa {e^{s}}}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}-\hspace{-0.1667em}1\hspace{-0.1667em}-\hspace{-0.1667em}\kappa {e^{s}}\ln \left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)\right]{\Pi _{1}}(dz)ds\\ {} \displaystyle -\frac{1}{\kappa }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\left[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\kappa {e^{s}}}}-1\right]{\Pi _{2}}(dz)ds.\end{array}\]
By virtue of the exponential inequality ([3], Lemma 2.2) for any $T>0$, $0<\kappa \le 1$, $\beta >0$, we have
(11)
\[ \mathrm{P}\{\underset{0\le t\le T}{\sup }{\zeta _{\kappa }}(t)>\beta \}\le {e^{-\kappa \beta }}.\]
\[ \mathrm{P}\{\underset{0\le t\le T}{\sup }{\zeta _{\kappa }}(t)>\theta {e^{k\tau }}{\varepsilon ^{-1}}\ln k\}\le \frac{1}{{k^{\theta }}}.\]
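For clarity, with these choices one has
\[ \kappa \beta =\varepsilon {e^{-k\tau }}\cdot \theta {e^{k\tau }}{\varepsilon ^{-1}}\ln k=\theta \ln k,\hspace{1em}{e^{-\kappa \beta }}={k^{-\theta }},\]
and ${\textstyle\sum _{k\ge 1}}{k^{-\theta }}<\infty $ for $\theta >1$, which is what the Borel–Cantelli argument below requires.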
By the Borel–Cantelli lemma, for almost all $\omega \in \Omega $, there is a random integer ${k_{0}}(\omega )$, such that, for all $k\ge {k_{0}}(\omega )$ and $0\le t\le k\tau $, we have
(12)
\[\begin{array}{r}\displaystyle {\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}{e^{s}}\frac{{\sigma _{1}}(s){x_{1}}(s)}{m+{x_{1}}(s)}d{w_{1}}(s)\hspace{-0.1667em}+\hspace{-0.1667em}{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\ln \bigg(1+\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +\hspace{-0.1667em}{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\ln \bigg(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\bigg)\hspace{-0.1667em}{\nu _{2}}(ds,dz)\le \\ {} \displaystyle \frac{\varepsilon }{2{e^{k\tau }}}\hspace{-0.1667em}\hspace{-0.1667em}{\int _{0}^{t}}\hspace{-0.1667em}\hspace{-0.1667em}{e^{2s}}{\left(\frac{{\sigma _{1}}(s){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{2}}\hspace{-0.1667em}\hspace{-0.1667em}ds+\frac{{e^{k\tau }}}{\varepsilon }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\bigg[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\varepsilon {e^{s-k\tau }}}}\\ {} \displaystyle -1-\varepsilon {e^{s-k\tau }}\ln \left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)\bigg]{\Pi _{1}}(dz)ds\\ {} \displaystyle +\frac{{e^{k\tau }}}{\varepsilon }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\left[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\varepsilon {e^{s-k\tau }}}}-1\right]{\Pi _{2}}(dz)ds+\frac{\theta {e^{k\tau }}\ln k}{\varepsilon }.\end{array}\](13)
\[\begin{array}{r}\displaystyle \frac{{e^{k\tau }}}{\varepsilon }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\bigg[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\varepsilon {e^{s-k\tau }}}}-1\\ {} \displaystyle -\varepsilon {e^{s-k\tau }}\ln \left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)\bigg]{\Pi _{1}}(dz)ds\\ {} \displaystyle \le {\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\left[\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}-\ln \left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\gamma _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)\right]{\Pi _{1}}(dz)ds,\end{array}\](14)
\[\begin{array}{r}\displaystyle \frac{{e^{k\tau }}}{\varepsilon }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\left[{\left(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}\right)^{\varepsilon {e^{s-k\tau }}}}-1\right]{\Pi _{2}}(dz)ds\\ {} \displaystyle \le {\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{e^{s}}\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}{\Pi _{2}}(dz)ds.\end{array}\](15)
\[\begin{array}{r}\displaystyle {e^{t}}\ln (m+{x_{1}}(t))\le \ln (m+{x_{10}})+{\int _{0}^{t}}{e^{s}}\Bigg\{\ln (m+{x_{1}}(s))\\ {} \displaystyle +\frac{{x_{1}}(s)}{m+{x_{1}}(s)}\bigg[{a_{1}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}{b_{1}}(s){x_{1}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{c_{1}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\bigg]\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{\sigma _{1}^{2}}(s){x_{1}^{2}}(s)}{2{(m+{x_{1}}(s))^{2}}}\\ {} \displaystyle \times \left(1-\varepsilon {e^{s-k\tau }}\right)+\underset{\mathbb{R}}{\int }\frac{{\delta _{1}}(s,z){x_{1}}(s)}{m+{x_{1}}(s)}{\Pi _{2}}(dz)\Bigg\}ds+\frac{\theta {e^{k\tau }}\ln k}{\varepsilon },\hspace{2.5pt}\text{a.s.}\end{array}\]
\[ \frac{\ln (m+{x_{1}}(t))}{\ln t}\le {e^{-t}}\frac{\ln (m+{x_{10}})}{\ln t}+\frac{L}{\ln t}(1-{e^{-t}})+\frac{\theta {e^{k\tau }}\ln k}{\varepsilon {e^{(k-1)\tau }}\ln (k-1)\tau }.\]
Therefore,
\[ \underset{t\to \infty }{\limsup }\frac{\ln (m+{x_{1}}(t))}{\ln t}\le \frac{\theta {e^{\tau }}}{\varepsilon },\hspace{2.5pt}\forall \theta >1,\tau >0,\varepsilon \in (0,1),\hspace{1em}\text{a.s.}\]
If $\theta \downarrow 1$, $\tau \downarrow 0$, $\varepsilon \uparrow 1$, we obtain
\[ \underset{t\to \infty }{\limsup }\frac{\ln (m+{x_{1}}(t))}{\ln t}\le 1\hspace{1em}\text{a.s.}\]
So, since ${x_{1}}(t)\le m+{x_{1}}(t)$,
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{1}}(t)}{\ln t}\le 1\hspace{1em}\text{a.s.}\]
□
Lemma 2.
For any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ the solution $X(t)$ to the system (3) has the property
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{2}}(t)}{t}\le 0\hspace{1em}\text{a.s.}\]
Proof.
Making use of the Itô formula we get
where
for all sufficiently large $k\ge {k_{0}}(\omega )$ and $0\le t\le k\tau $.
(16)
\[\begin{array}{r}\displaystyle {e^{t}}\ln {x_{2}}(t)-\ln {x_{20}}={\int _{0}^{t}}{e^{s}}\Bigg\{\ln {x_{2}}(s)+{a_{2}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{c_{2}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{\sigma _{2}^{2}}(s)}{2}\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\bigg[\ln \Big(1+{\gamma _{2}}(s,z)\Big)-{\gamma _{2}}(s,z)\bigg]{\Pi _{1}}(dz)\Bigg\}ds+\psi (t),\end{array}\]
\[\begin{array}{r}\displaystyle \psi (t)={\int _{0}^{t}}{e^{s}}{\sigma _{2}}(s)d{w_{2}}(s)+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}\ln \Big(1+{\gamma _{2}}(s,z)\Big){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}\ln \Big(1+{\delta _{2}}(s,z)\Big){\nu _{2}}(ds,dz).\end{array}\]
By virtue of the exponential inequality (11) we have
\[ \mathrm{P}\{\underset{0\le t\le T}{\sup }{\zeta _{\kappa }}(t)>\beta \}\le {e^{-\kappa \beta }},\forall 0<\kappa \le 1,\beta >0,\]
where
\[\begin{array}{r}\displaystyle {\zeta _{\kappa }}(t)=\psi (t)-\frac{\kappa }{2}{\underset{0}{\overset{t}{\int }}}{e^{2s}}{\sigma _{2}^{2}}(s)ds-\frac{1}{\kappa }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\Big[{(1+{\gamma _{2}}(s,z))^{\kappa {e^{s}}}}-1\\ {} \displaystyle -\kappa {e^{s}}\ln (1+{\gamma _{2}}(s,z))\Big]{\Pi _{1}}(dz)ds-\frac{1}{\kappa }{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\left[{(1+{\delta _{2}}(s,z))^{\kappa {e^{s}}}}-1\right]{\Pi _{2}}(dz)ds.\end{array}\]
Choosing $T=k\tau $, $k\in \mathbb{N}$, $\tau >0$, $\kappa ={e^{-k\tau }}$, $\beta =\theta {e^{k\tau }}\ln k$, $\theta >1$, we get
\[ \mathrm{P}\{\underset{0\le t\le T}{\sup }{\zeta _{\kappa }}(t)>\theta {e^{k\tau }}\ln k\}\le \frac{1}{{k^{\theta }}}.\]
By the same arguments as in the proof of Lemma 1, using the Borel–Cantelli lemma, we derive from (16)
(17)
\[\begin{array}{r}\displaystyle {e^{t}}\ln {x_{2}}(t)\le \ln {x_{20}}+{\int _{0}^{t}}{e^{s}}\bigg\{\ln {x_{2}}(s)+{a_{2}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{c_{2}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\\ {} \displaystyle -\frac{{\sigma _{2}^{2}}(s)}{2}\left(1-{e^{s-k\tau }}\right)+\underset{\mathbb{R}}{\int }{\delta _{2}}(s,z){\Pi _{2}}(dz)\bigg\}ds+\theta {e^{k\tau }}\ln k,\hspace{1em}\text{a.s.,}\end{array}\]
Using the inequality $\ln x-cx\le -\ln c-1$, $\forall x>0$, $c>0$ (the function $x\mapsto \ln x-cx$ attains its maximum $-\ln c-1$ at $x=1/c$), with $x={x_{2}}(s)$, $c=\frac{{c_{2}}(s)}{m(s)+{x_{1}}(s)}$, we derive from (17) the estimate
\[ {e^{t}}\ln {x_{2}}(t)\le \ln {x_{20}}+{\underset{0}{\overset{t}{\int }}}{e^{s}}\ln \Big({m_{\sup }}+{x_{1}}(s)\Big)ds+L({e^{t}}-1)+\theta {e^{k\tau }}\ln k,\]
for some constant $L>0$. So, for $(k-1)\tau \le t\le k\tau $, $k\ge {k_{0}}(\omega )$, we have
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{2}}(t)}{t}\le \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{e^{s-t}}\ln \Big({m_{\sup }}+{x_{1}}(s)\Big)ds\le 0,\]
by virtue of Lemma 1.  □
Lemma 3.
Let $p>0$. Then for any initial value ${x_{10}}>0$, the pth-moment of the prey population density ${x_{1}}(t)$ obeys
(18)
\[ \underset{t\to \infty }{\limsup }\mathrm{E}[{x_{1}^{p}}(t)]\le {K_{1}}(p),\]
where ${K_{1}}(p)>0$ is independent of ${x_{10}}$. Moreover, for any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$,
(19)
\[ \underset{t\to \infty }{\limsup }\mathrm{E}[{x_{2}}(t)]\le {K_{2}},\]
where ${K_{2}}>0$ is independent of ${X_{0}}$.
Proof.
Let ${\tau _{n}}$ be the stopping time defined in Theorem 1. Applying the Itô formula to the process $V(t,{x_{1}}(t))={e^{t}}{x_{1}^{p}}(t)$, $p>0$, we obtain
(20)
\[\begin{array}{r}\displaystyle V(t\wedge {\tau _{n}},{x_{1}}(t\wedge {\tau _{n}}))={x_{10}^{p}}+{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}{e^{s}}{x_{1}^{p}}(s)\Bigg\{1+p\bigg[{a_{1}}(s)-{b_{1}}(s){x_{1}}(s)\\ {} \displaystyle -\frac{{c_{1}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\bigg]+\frac{p(p-1){\sigma _{1}^{2}}(s)}{2}\hspace{-0.1667em}+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\gamma _{1}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\hspace{-0.1667em}-\hspace{-0.1667em}p{\gamma _{1}}(s,z)\right]{\Pi _{1}}(dz)\hspace{-0.1667em}\\ {} \displaystyle +\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\delta _{1}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\right]{\Pi _{2}}(dz)\Bigg\}ds+{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}p{e^{s}}{x_{1}^{p}}(s){\sigma _{1}}(s)d{w_{1}}(s)\\ {} \displaystyle +{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}{x_{1}^{p}}(s)\left[{(1+{\gamma _{1}}(s,z))^{p}}-1\right]{\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t\wedge {\tau _{n}}}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{e^{s}}{x_{1}^{p}}(s)\left[{(1+{\delta _{1}}(s,z))^{p}}-1\right]{\tilde{\nu }_{2}}(ds,dz).\end{array}\]Under Assumption 1 there is a constant ${K_{1}}(p)>0$, such that
From (20) and (21), taking expectations and letting $n\to \infty $, we obtain
(22)
\[ \mathrm{E}[{x_{1}^{p}}(t)]\le {x_{10}^{p}}{e^{-t}}+{K_{1}}(p)\left(1-{e^{-t}}\right),\]
which yields the estimate (18).
(21)
\[\begin{array}{r}\displaystyle {e^{s}}{x_{1}^{p}}\Bigg\{1+p\left[{a_{1}}(s)-{b_{1}}(s){x_{1}}-\frac{{c_{1}}(s){x_{2}}}{m(s)+{x_{1}}}\right]\hspace{-0.1667em}+\hspace{-0.1667em}\frac{p(p-1){\sigma _{1}^{2}}(s)}{2}+\\ {} \displaystyle +\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1\hspace{-0.1667em}+\hspace{-0.1667em}{\gamma _{1}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\hspace{-0.1667em}-\hspace{-0.1667em}p{\gamma _{1}}(s,z)\right]{\Pi _{1}}(dz)\hspace{-0.1667em}+\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}\hspace{-0.1667em}\left[{(1+{\delta _{1}}(s,z))^{p}}\hspace{-0.1667em}-\hspace{-0.1667em}1\right]{\Pi _{2}}(dz)\Bigg\}\\ {} \displaystyle \le {K_{1}}(p){e^{s}}.\end{array}\]Let us prove the estimate (19). Applying the Itô formula to the process $U(t,X(t))={e^{t}}[{k_{1}}{x_{1}}(t)+{k_{2}}{x_{2}}(t)]$, ${k_{i}}>0$, $i=1,2$, we obtain
For the function
From (25) we have (19). □
(23)
\[\begin{array}{r}\displaystyle dU(t,X(t))={e^{t}}\Bigg\{{k_{1}}{x_{1}}(t)+{k_{2}}{x_{2}}(t)+{k_{1}}\bigg[{a_{1}}(t){x_{1}}(t)-{b_{1}}(t){x_{1}^{2}}(t)\\ {} \displaystyle -\frac{{c_{1}}(t){x_{1}}(t){x_{2}}(t)}{m(t)+{x_{1}}(t)}\bigg]+{k_{2}}\bigg[{a_{2}}(t){x_{2}}(t)-\frac{{c_{2}}(t){x_{2}^{2}}(t)}{m(t)+{x_{1}}(t)}\bigg]\\ {} \displaystyle +\hspace{-0.1667em}{\sum \limits_{i=1}^{2}}{k_{i}}\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{x_{i}}(t){\delta _{i}}(t,z){\Pi _{2}}(dz)\Bigg\}dt+{e^{t}}\Bigg\{{\sum \limits_{i=1}^{2}}{k_{i}}\bigg[{x_{i}}(t){\sigma _{i}}(t)d{w_{i}}(t)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }{x_{i}}(t){\gamma _{i}}(t,z){\tilde{\nu }_{1}}(dt,dz)+\underset{\mathbb{R}}{\int }{x_{i}}(t){\delta _{i}}(t,z){\tilde{\nu }_{2}}(dt,dz)\bigg]\Bigg\}.\end{array}\]
\[\begin{array}{r}\displaystyle f(t,{x_{1}},{x_{2}})\hspace{-0.1667em}=\hspace{-0.1667em}\frac{1}{m(t)+{x_{1}}}\bigg\{{k_{1}}\Big[-{b_{1}}(t){x_{1}^{3}}\hspace{-0.1667em}+\Big(1\hspace{-0.1667em}+\hspace{-0.1667em}{a_{1}}(t)\hspace{-0.1667em}+\hspace{-0.1667em}{\bar{\delta }_{1}}(t)\hspace{-0.1667em}-\hspace{-0.1667em}{b_{1}}(t)m(t)\Big){x_{1}^{2}}\\ {} \displaystyle +m(t)\Big(1\hspace{-0.1667em}+\hspace{-0.1667em}{a_{1}}(t)\hspace{-0.1667em}+\hspace{-0.1667em}{\bar{\delta }_{1}}(t)\Big){x_{1}}\Big]+\Big[{k_{2}}\Big(1+{a_{2}}(t)+{\bar{\delta }_{2}}(t)\Big)-{k_{1}}{c_{1}}(t)\Big]{x_{1}}{x_{2}}\\ {} \displaystyle +{k_{2}}\Big[-{c_{2}}(t){x_{2}^{2}}+m(t)\Big(1+{a_{2}}(t)+{\bar{\delta }_{2}}(t)\Big){x_{2}}\Big]\bigg\},\\ {} \displaystyle \text{where}\hspace{2.5pt}{\bar{\delta }_{i}}(t)=\underset{\mathbb{R}}{\int }\hspace{-0.1667em}{\delta _{i}}(t,z){\Pi _{2}}(dz),\hspace{2.5pt}i=1,2,\end{array}\]
we have
\[ f(t,{x_{1}},{x_{2}})\le \frac{{\phi _{1}}({x_{1}},{x_{2}})+{\phi _{2}}({x_{2}})}{m(t)+{x_{1}}},\]
where
\[\begin{array}{r}\displaystyle {\phi _{1}}({x_{1}},{x_{2}})={k_{1}}\hspace{-0.1667em}\left[\hspace{-0.1667em}-{b_{1\inf }}{x_{1}^{3}}\hspace{-0.1667em}+\hspace{-0.1667em}\Big({d_{1}}-\hspace{-0.1667em}{b_{1\inf }}{m_{\inf }}\Big){x_{1}^{2}}+{m_{\sup }}{d_{1}}{x_{1}}\right]\\ {} \displaystyle +\Big[{k_{2}}{d_{2}}-{k_{1}}{c_{1\inf }}\Big]{x_{1}}{x_{2}},\\ {} \displaystyle {\phi _{2}}({x_{2}})={k_{2}}\left[-{c_{2\inf }}{x_{2}^{2}}+{m_{\sup }}{d_{2}}{x_{2}}\right],\hspace{2.5pt}{d_{i}}=1\hspace{-0.1667em}+\hspace{-0.1667em}{a_{i\sup }}+|{\bar{\delta }_{i}}{|_{\sup }},\hspace{2.5pt}i=1,2.\end{array}\]
For ${k_{2}}={k_{1}}{c_{1\inf }}/{d_{2}}$ there is a constant ${L^{\prime }}>0$, such that ${\phi _{1}}({x_{1}},{x_{2}})\le {L^{\prime }}{k_{1}}$ and ${\phi _{2}}({x_{2}})\le {L^{\prime }}{k_{1}}$. So, there is a constant $L>0$, such that
(24)
\[ f(t,{x_{1}},{x_{2}})\le L{k_{1}}.\]
From (23) and (24) by integrating and taking expectation, we derive
\[ \mathrm{E}[U(t\wedge {\tau _{n}},X(t\wedge {\tau _{n}}))]\le {k_{1}}\left[{x_{10}}+\frac{{c_{1\inf }}}{{d_{2}}}{x_{20}}+L{e^{t}}\right].\]
Letting $n\to \infty $ leads to the estimate
\[ {e^{t}}\mathrm{E}\left[{x_{1}}(t)+\frac{{c_{1\inf }}}{{d_{2}}}{x_{2}}(t)\right]\le {x_{10}}+\frac{{c_{1\inf }}}{{d_{2}}}{x_{20}}+L{e^{t}}.\]
So,
(25)
\[ \mathrm{E}[{x_{2}}(t)]\le \left(\frac{{d_{2}}}{{c_{1\inf }}}{x_{10}}+{x_{20}}\right){e^{-t}}+\frac{{d_{2}}}{{c_{1\inf }}}L.\]
Lemma 4.
If ${p_{2\inf }}>0$, where ${p_{2}}(t)={a_{2}}(t)-{\beta _{2}}(t)$, then for any initial value ${x_{20}}>0$, the predator population density ${x_{2}}(t)$ satisfies
(26)
\[ \underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{2}}(t)}\right)^{\theta }}\right]\le K(\theta )\]
for a sufficiently small $\theta \in (0,1)$ and a constant $K(\theta )>0$.
Proof.
For the process $U(t)=1/{x_{2}}(t)$ by the Itô formula we derive
where ${I_{j,stoch}}(t),j=\overline{1,3}$, are the corresponding stochastic integrals in (27). Under Assumption 1 there exist constants $|{K_{1}}(\theta )|<\infty $, $|{K_{2}}(\theta )|<\infty $, such that for the process $J(t)$ we have the estimate
By the Itô formula and (28) we have
\[\begin{array}{r}\displaystyle U(t)=U(0)+{\underset{0}{\overset{t}{\int }}}U(s)\Bigg[\frac{{c_{2}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}-{a_{2}}(s)+{\sigma _{2}^{2}}(s)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\frac{{\gamma _{2}^{2}}(s,z)}{1+{\gamma _{2}}(s,z)}{\Pi _{1}}(dz)\Bigg]ds-{\underset{0}{\overset{t}{\int }}}U(s){\sigma _{2}}(s)d{w_{2}}(s)\\ {} \displaystyle -{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }U(s)\frac{{\gamma _{2}}(s,z)}{1+{\gamma _{2}}(s,z)}{\tilde{\nu }_{1}}(ds,dz)-{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }U(s)\frac{{\delta _{2}}(s,z)}{1+{\delta _{2}}(s,z)}{\nu _{2}}(ds,dz).\end{array}\]
Then, by applying the Itô formula, we derive, for $0<\theta <1$,
(27)
\[\begin{array}{r}\displaystyle {(1+U(t))^{\theta }}={(1+U(0))^{\theta }}+{\underset{0}{\overset{t}{\int }}}\theta {(1+U(s))^{\theta -2}}\Bigg\{(1+U(s))U(s)\\ {} \displaystyle \times \bigg[\frac{{c_{2}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}-{a_{2}}(s)+{\sigma _{2}^{2}}(s)\hspace{-0.1667em}+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\frac{{\gamma _{2}^{2}}(s,z)}{1+{\gamma _{2}}(s,z)}{\Pi _{1}}(dz)\bigg]\\ {} \displaystyle +\frac{\theta -1}{2}{U^{2}}(s){\sigma _{2}^{2}}(s)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }\bigg[{(1+U(s))^{2}}\bigg({\bigg(\frac{1+U(s)+{\gamma _{2}}(s,z)}{(1+{\gamma _{2}}(s,z))(1+U(s))}\bigg)^{\theta }}-1\bigg)\\ {} \displaystyle +\theta (1+U(s))\frac{U(s){\gamma _{2}}(s,z)}{1+{\gamma _{2}}(s,z)}\bigg]{\Pi _{1}}(dz)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }{(1+U(s))^{2}}\bigg[{\bigg(\frac{1+U(s)+{\delta _{2}}(s,z)}{(1+{\delta _{2}}(s,z))(1+U(s))}\bigg)^{\theta }}-1\bigg]{\Pi _{2}}(dz)\Bigg\}ds\\ {} \displaystyle -{\underset{0}{\overset{t}{\int }}}\theta {(1+U(s))^{\theta -1}}U(s){\sigma _{2}}(s)d{w_{2}}(s)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\bigg[{\bigg(1+\frac{U(s)}{1+{\gamma _{2}}(s,z)}\bigg)^{\theta }}-{(1+U(s))^{\theta }}\bigg]{\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\bigg[{\bigg(1+\frac{U(s)}{1+{\delta _{2}}(s,z)}\bigg)^{\theta }}-{(1+U(s))^{\theta }}\bigg]{\tilde{\nu }_{2}}(ds,dz)\\ {} \displaystyle ={(1+U(0))^{\theta }}+{\underset{0}{\overset{t}{\int }}}\theta {(1+U(s))^{\theta -2}}J(s)ds\\ {} \displaystyle -{I_{1,stoch}}(t)+{I_{2,stoch}}(t)+{I_{3,stoch}}(t),\end{array}\]
\[\begin{array}{r}\displaystyle J(t)\le (1+U(t))U(t)\Bigg[-{a_{2}}(t)+\frac{{c_{2\sup }}{U^{-1}}(t)}{{m_{inf}}}+{\sigma _{2}^{2}}(t)\hspace{-0.1667em}\hspace{-0.1667em}\phantom{\underset{\mathbb{R}}{\int }}\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\frac{{\gamma _{2}^{2}}(s,z)}{1+{\gamma _{2}}(s,z)}{\Pi _{1}}(dz)\Bigg]+\frac{\theta -1}{2}{U^{2}}(s){\sigma _{2}^{2}}(s)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }\Bigg[{(1+U(s))^{2}}\Bigg({\bigg(\frac{1}{1+{\gamma _{2}}(s,z)}+\frac{1}{1+U(s)}\bigg)^{\theta }}-1\Bigg)\\ {} \displaystyle +\theta (1+U(s))\frac{U(s){\gamma _{2}}(s,z)}{1+{\gamma _{2}}(s,z)}\bigg]{\Pi _{1}}(dz)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }{(1+U(s))^{2}}\Bigg[{\bigg(\frac{1}{1+{\delta _{2}}(s,z)}+\frac{1}{1+U(s)}\bigg)^{\theta }}-1\Bigg]{\Pi _{2}}(dz)\\ {} \displaystyle \le {U^{2}}(t)\Bigg[-{a_{2}}(t)+\frac{{\sigma _{2}^{2}}(t)}{2}+\underset{\mathbb{R}}{\int }{\gamma _{2}}(t,z){\Pi _{1}}(dz)+\frac{\theta }{2}{\sigma _{2}^{2}}(t)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\gamma _{2}}(t,z))^{-\theta }}-1]{\Pi _{1}}(dz)+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\delta _{2}}(t,z))^{-\theta }}-1]{\Pi _{2}}(dz)\Bigg]\\ {} \displaystyle +{K_{1}}(\theta )U(t)+{K_{2}}(\theta )=-{K_{0}}(t,\theta ){U^{2}}(t)+{K_{1}}(\theta )U(t)+{K_{2}}(\theta ),\end{array}\]
where we used the inequality ${(x+y)^{\theta }}\le {x^{\theta }}+\theta {x^{\theta -1}}y$, $0<\theta <1$, $x,y>0$. Due to
\[\begin{array}{r}\displaystyle \underset{\theta \to 0+}{\lim }\Bigg[\frac{\theta }{2}{\sigma _{2}^{2}}(t)+\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\gamma _{2}}(t,z))^{-\theta }}-1]{\Pi _{1}}(dz)\\ {} \displaystyle +\frac{1}{\theta }\underset{\mathbb{R}}{\int }[{(1+{\delta _{2}}(t,z))^{-\theta }}-1]{\Pi _{2}}(dz)+\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{2}}(t,z)){\Pi _{1}}(dz)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\ln (1+{\delta _{2}}(t,z)){\Pi _{2}}(dz)\Bigg]=\underset{\theta \to 0+}{\lim }\Delta (\theta )=0,\end{array}\]
and the condition ${p_{2\inf }}>0$ we can choose a sufficiently small $0<\theta <1$ so that
\[ {K_{0}}(\theta )=\underset{t\ge 0}{\inf }{K_{0}}(t,\theta )=\underset{t\ge 0}{\inf }[{p_{2}}(t)-\Delta (\theta )]={p_{2\inf }}-\Delta (\theta )>0\]
is satisfied. So, from (27) and the estimate for $J(t)$ we derive
(28)
\[\begin{array}{r}\displaystyle d\big[{(1+U(t))^{\theta }}\big]\le \theta {(1+U(t))^{\theta -2}}[-{K_{0}}(\theta ){U^{2}}(t)+{K_{1}}(\theta )U(t)+{K_{2}}(\theta )]dt\\ {} \displaystyle -\theta {(1+U(t))^{\theta -1}}U(t){\sigma _{2}}(t)d{w_{2}}(t)+\underset{\mathbb{R}}{\int }\bigg[{\Big(1+\frac{U(t)}{1+{\gamma _{2}}(t,z)}\Big)^{\theta }}\\ {} \displaystyle -{(1\hspace{-0.1667em}+\hspace{-0.1667em}U(t))^{\theta }}\bigg]{\tilde{\nu }_{1}}(dt,dz)\hspace{-0.1667em}+\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\bigg[{\bigg(1\hspace{-0.1667em}+\hspace{-0.1667em}\frac{U(t)}{1+{\delta _{2}}(t,z)}\bigg)^{\theta }}\hspace{-0.1667em}\hspace{-0.1667em}-\hspace{-0.1667em}{(1\hspace{-0.1667em}+\hspace{-0.1667em}U(t))^{\theta }}\bigg]{\tilde{\nu }_{2}}(dt,dz).\end{array}\](29)
\[\begin{array}{r}\displaystyle d\left[{e^{\lambda t}}{(1+U(t))^{\theta }}\right]=\lambda {e^{\lambda t}}{(1+U(t))^{\theta }}dt+{e^{\lambda t}}d\left[{(1+U(t))^{\theta }}\right]\\ {} \displaystyle \le {e^{\lambda t}}\theta {(1+U(t))^{\theta -2}}\bigg[-\bigg({K_{0}}(\theta )-\frac{\lambda }{\theta }\bigg){U^{2}}(t)+\bigg({K_{1}}(\theta )+\frac{2\lambda }{\theta }\bigg)U(t)\\ {} \displaystyle +{K_{2}}(\theta )+\frac{\lambda }{\theta }\bigg]dt-\theta {e^{\lambda t}}{(1+U(t))^{\theta -1}}U(t){\sigma _{2}}(t)d{w_{2}}(t)\\ {} \displaystyle +{e^{\lambda t}}\underset{\mathbb{R}}{\int }\bigg[{\bigg(1+\frac{U(t)}{1+{\gamma _{2}}(t,z)}\bigg)^{\theta }}-{(1+U(t))^{\theta }}\bigg]{\tilde{\nu }_{1}}(dt,dz)\\ {} \displaystyle +{e^{\lambda t}}\underset{\mathbb{R}}{\int }\bigg[{\bigg(1+\frac{U(t)}{1+{\delta _{2}}(t,z)}\bigg)^{\theta }}-{(1+U(t))^{\theta }}\bigg]{\tilde{\nu }_{2}}(dt,dz).\end{array}\]Let us choose $\lambda =\lambda (\theta )>0$, such that ${K_{0}}(\theta )-\lambda /\theta >0$. Then there is a constant $K>0$, such that
Let ${\tau _{n}}$ be the stopping time defined in Theorem 1. Then by integrating (29), using (30) and taking the expectation we obtain
From (31) we obtain
(30)
\[\begin{array}{r}\displaystyle {(1+U(t))^{\theta -2}}\bigg[-\bigg({K_{0}}(\theta )-\frac{\lambda }{\theta }\bigg){U^{2}}(t)\\ {} \displaystyle +\bigg({K_{1}}(\theta )+\frac{2\lambda }{\theta }\bigg)U(t)+{K_{2}}(\theta )+\frac{\lambda }{\theta }\bigg]\le K.\end{array}\]
\[ \mathrm{E}\left[{e^{\lambda (t\wedge {\tau _{n}})}}{(1+U(t\wedge {\tau _{n}}))^{\theta }}\right]\le {\left(1+\frac{1}{{x_{20}}}\right)^{\theta }}+\frac{\theta }{\lambda }K\left({e^{\lambda t}}-1\right).\]
Letting $n\to \infty $ leads to the estimate
(31)
\[ {e^{\lambda t}}\mathrm{E}\left[{(1+U(t))^{\theta }}\right]\le {\left(1+\frac{1}{{x_{20}}}\right)^{\theta }}+\frac{\theta }{\lambda }K\left({e^{\lambda t}}-1\right).\]
\[\begin{array}{r}\displaystyle \underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{2}}(t)}\right)^{\theta }}\right]=\underset{t\to \infty }{\limsup }\mathrm{E}\left[{U^{\theta }}(t)\right]\\ {} \displaystyle \le \underset{t\to \infty }{\limsup }\mathrm{E}\left[{(1+U(t))^{\theta }}\right]\le \frac{\theta }{\lambda (\theta )}K,\end{array}\]
this implies (26).  □
3 The long-time behaviour
Definition 1 ([8]).
The solution $X(t)$ to the system (3) is said to be stochastically ultimately bounded if for any $\varepsilon \in (0,1)$ there is a positive constant $\chi =\chi (\varepsilon )$, such that for any initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ the solution to the system (3) has the property that
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{|X(t)|>\chi \}<\varepsilon .\]
In what follows in this section we will assume that Assumption 1 holds.
Theorem 2.
The solution $X(t)$ to the system (3) with the initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ is stochastically ultimately bounded.
Proof.
From Lemma 3 (with $p=1$ for ${x_{1}}(t)$) we have the estimate
(32)
\[ \underset{t\to \infty }{\limsup }\mathrm{E}[{x_{i}}(t)]\le {K_{i}},\hspace{1em}i=1,2,\]
where the constants ${K_{i}}>0$ do not depend on ${X_{0}}$.
For $X=({x_{1}},{x_{2}})\in {\mathbb{R}_{+}^{2}}$ we have $|X|\le {x_{1}}+{x_{2}}$; therefore, from (32), ${\limsup _{t\to \infty }}\mathrm{E}[|X(t)|]\le L={K_{1}}+{K_{2}}$. For any $\varepsilon \in (0,1)$, let $\chi >L/\varepsilon $. Then applying the Chebyshev inequality yields
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{|X(t)|>\chi \}\le \underset{t\to \infty }{\limsup }\frac{\mathrm{E}[|X(t)|]}{\chi }\le \frac{L}{\chi }<\varepsilon ,\]
so that $\underset{t\to \infty }{\liminf }\mathrm{P}\{|X(t)|\le \chi \}\ge 1-\varepsilon $.
□
The property of stochastic permanence is important since it means long-time survival of the population.
Definition 2.
The population density $x(t)$ is said to be stochastically permanent if for any $\varepsilon >0$, there are positive constants $H=H(\varepsilon )$, $h=h(\varepsilon )$ such that
\[ \underset{t\to \infty }{\liminf }\mathrm{P}\{x(t)\le H\}\ge 1-\varepsilon ,\hspace{1em}\underset{t\to \infty }{\liminf }\mathrm{P}\{x(t)\ge h\}\ge 1-\varepsilon ,\]
for any initial value ${x_{0}}>0$.
Theorem 3.
If ${p_{2\inf }}>0$, where ${p_{2}}(t)={a_{2}}(t)-{\beta _{2}}(t)$, then for any initial value ${x_{20}}>0$, the predator population density ${x_{2}}(t)$ is stochastically permanent.
Proof.
From Lemma 3 we have the estimate $\underset{t\to \infty }{\limsup }\mathrm{E}[{x_{2}}(t)]\le K$ for some constant $K>0$.
Thus, for any given $\varepsilon >0$, setting $H=K/\varepsilon $, by virtue of Chebyshev’s inequality we derive that
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{{x_{2}}(t)\ge H\}\le \frac{1}{H}\underset{t\to \infty }{\limsup }E[{x_{2}}(t)]\le \varepsilon .\]
Consequently, $\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{2}}(t)\le H\}\ge 1-\varepsilon $.
From Lemma 4 we have the estimate
\[ \underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{2}}(t)}\right)^{\theta }}\right]\le K(\theta ),\hspace{2.5pt}0<\theta <1.\]
For any given $\varepsilon >0$, let $h={(\varepsilon /K(\theta ))^{1/\theta }}$, then by Chebyshev’s inequality, we have
\[\begin{array}{r}\displaystyle \underset{t\to \infty }{\limsup }\mathrm{P}\{{x_{2}}(t)<h\}\le \underset{t\to \infty }{\limsup }\mathrm{P}\left\{{\left(\frac{1}{{x_{2}}(t)}\right)^{\theta }}>{h^{-\theta }}\right\}\\ {} \displaystyle \le {h^{\theta }}\underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{2}}(t)}\right)^{\theta }}\right]\le \varepsilon .\end{array}\]
Consequently, $\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{2}}(t)\ge h\}\ge 1-\varepsilon $.  □
Theorem 4.
If the predator is absent, i.e. ${x_{2}}(t)=0$ a.s., and ${p_{1\inf }}>0$, where ${p_{1}}(t)={a_{1}}(t)-{\beta _{1}}(t)$, then for any initial value ${x_{10}}>0$, the prey population density ${x_{1}}(t)$ is stochastically permanent.
Proof.
From Lemma 3 we have the estimate $\underset{t\to \infty }{\limsup }\mathrm{E}[{x_{1}}(t)]\le K$ for some constant $K>0$.
Thus, for any given $\varepsilon >0$, setting $H=K/\varepsilon $, by virtue of Chebyshev’s inequality we derive that
\[ \underset{t\to \infty }{\limsup }\mathrm{P}\{{x_{1}}(t)\ge H\}\le \frac{1}{H}\underset{t\to \infty }{\limsup }E[{x_{1}}(t)]\le \varepsilon .\]
Consequently, $\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{1}}(t)\le H\}\ge 1-\varepsilon $.
For the process $U(t)=1/{x_{1}}(t)$, by the Itô formula we have
\[\begin{array}{r}\displaystyle U(t)=U(0)+{\underset{0}{\overset{t}{\int }}}U(s)\Bigg[{b_{1}}(s){x_{1}}(s)-{a_{1}}(s)+{\sigma _{1}^{2}}(s)\\ {} \displaystyle +\underset{\mathbb{R}}{\int }\frac{{\gamma _{1}^{2}}(s,z)}{1+{\gamma _{1}}(s,z)}{\Pi _{1}}(dz)\Bigg]ds-{\underset{0}{\overset{t}{\int }}}U(s){\sigma _{1}}(s)d{w_{1}}(s)\\ {} \displaystyle -{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }U(s)\frac{{\gamma _{1}}(s,z)}{1+{\gamma _{1}}(s,z)}{\tilde{\nu }_{1}}(ds,dz)-{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }U(s)\frac{{\delta _{1}}(s,z)}{1+{\delta _{1}}(s,z)}{\nu _{2}}(ds,dz).\end{array}\]
Then, using the same arguments as in the proof of Lemma 4 we can derive the estimate
\[ \underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{1}}(t)}\right)^{\theta }}\right]\le K(\theta ),\hspace{2.5pt}0<\theta <1.\]
For any given $\varepsilon >0$, let $h={(\varepsilon /K(\theta ))^{1/\theta }}$. Then by Chebyshev’s inequality, we have
\[\begin{array}{r}\displaystyle \underset{t\to \infty }{\limsup }\mathrm{P}\{{x_{1}}(t)<h\}=\underset{t\to \infty }{\limsup }\mathrm{P}\left\{{\left(\frac{1}{{x_{1}}(t)}\right)^{\theta }}>{h^{-\theta }}\right\}\\ {} \displaystyle \le {h^{\theta }}\underset{t\to \infty }{\limsup }\mathrm{E}\left[{\left(\frac{1}{{x_{1}}(t)}\right)^{\theta }}\right]\le \varepsilon .\end{array}\]
Consequently, $\underset{t\to \infty }{\liminf }\mathrm{P}\{{x_{1}}(t)\ge h\}\ge 1-\varepsilon $.  □
Remark 1.
If the predator is absent, i.e. ${x_{2}}(t)=0$ a.s., then the equation for the prey ${x_{1}}(t)$ has the logistic form. So, Theorem 4 gives sufficient conditions for the stochastic permanence of the solution to the stochastic nonautonomous logistic equation disturbed by white noise and by centered and noncentered Poisson noises.
Definition 3.
The solution $X(t)=({x_{1}}(t),{x_{2}}(t))$, $t\ge 0$, to the system (3) is said to be extinct if for every initial value ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ we have ${\lim \nolimits_{t\to \infty }}{x_{i}}(t)=0$ almost surely (a.s.), $i=1,2$.
Theorem 5.
If
\[ {\bar{p}_{i}^{\ast }}=\underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{i}}(s)ds<0,\hspace{2.5pt}\textit{where}\hspace{2.5pt}{p_{i}}(t)={a_{i}}(t)-{\beta _{i}}(t),\hspace{2.5pt}i=1,2,\]
then the solution $X(t)$ to equation (3) with the initial condition ${X_{0}}\in {\mathbb{R}_{+}^{2}}$ will be extinct.
Proof.
By the Itô formula, we have
where the martingale
has quadratic variation
(33)
\[\begin{array}{r}\displaystyle d\ln {x_{i}}(t)=\left[{a_{i}}(t)-{b_{i}}(t){x_{i}}(t)-\frac{{c_{i}}(t){x_{2}}(t)}{m(t)+{x_{1}}(t)}-{\beta _{i}}(t)\right]dt+d{M_{i}}(t)\\ {} \displaystyle \le [{a_{i}}(t)-{\beta _{i}}(t)]dt+d{M_{i}}(t),\hspace{2.5pt}i=1,2,\end{array}\](34)
\[\begin{array}{r}\displaystyle {M_{i}}(t)={\underset{0}{\overset{t}{\int }}}{\sigma _{i}}(s)d{w_{i}}(s)+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\gamma _{i}}(s,z)){\tilde{\nu }_{1}}(ds,dz)\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }\ln (1+{\delta _{i}}(s,z)){\tilde{\nu }_{2}}(ds,dz),\hspace{2.5pt}i=1,2,\end{array}\]
\[\begin{array}{r}\displaystyle \langle {M_{i}},{M_{i}}\rangle (t)={\underset{0}{\overset{t}{\int }}}{\sigma _{i}^{2}}(s)ds+{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{\ln ^{2}}(1+{\gamma _{i}}(s,z)){\Pi _{1}}(dz)ds\\ {} \displaystyle +{\underset{0}{\overset{t}{\int }}}\hspace{-0.1667em}\hspace{-0.1667em}\underset{\mathbb{R}}{\int }{\ln ^{2}}(1+{\delta _{i}}(s,z)){\Pi _{2}}(dz)ds\le Kt,\hspace{2.5pt}i=1,2.\end{array}\]
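Note that, by the last display, $\langle {M_{i}},{M_{i}}\rangle (t)/t\le K$ for all $t>0$, i.e. the quadratic variations grow at most linearly; this is precisely the condition under which the strong law of large numbers for local martingales is applied in the next step.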
Then the strong law of large numbers for local martingales ([10]) yields ${\lim \nolimits_{t\to \infty }}{M_{i}}(t)/t=0$, $i=1,2$, a.s. Therefore, from (33) we obtain
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{i}}(t)}{t}\le \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{i}}(s)ds<0,\hspace{1em}\text{a.s.}\]
So, ${\lim \nolimits_{t\to \infty }}{x_{i}}(t)=0$, $i=1,2$, a.s.  □
Definition 4 ([11]).
The population density $x(t)$ is said to be nonpersistent in the mean if ${\bar{x}^{\ast }}=\underset{t\to \infty }{\limsup }\frac{1}{t}{\textstyle\int _{0}^{t}}x(s)ds=0$ a.s.
Theorem 6.
If ${\bar{p}_{1}^{\ast }}=0$, then the prey population density ${x_{1}}(t)$ with the initial condition ${x_{10}}>0$ will be nonpersistent in the mean.
Proof.
From the first equality in (33) for $i=1$ we have
where the martingale ${M_{1}}(t)$ is defined in (34). From the definition of ${\bar{p}_{1}^{\ast }}$ and the strong law of large numbers for ${M_{1}}(t)$ it follows that $\forall \varepsilon >0$, $\exists {t_{0}}\ge 0$, and $\exists {\Omega _{\varepsilon }}\subset \Omega $, with $\mathrm{P}({\Omega _{\varepsilon }})\ge 1-\varepsilon $, such that
Let ${y_{1}}(t)={\textstyle\int _{0}^{t}}{x_{1}}(s)ds$, then from (36) we have
(35)
\[ \ln {x_{1}}(t)\le \ln {x_{10}}+{\underset{0}{\overset{t}{\int }}}{p_{1}}(s)ds-{b_{1\inf }}{\underset{0}{\overset{t}{\int }}}{x_{1}}(s)ds+{M_{1}}(t),\]
\[ \frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{1}}(s)ds\le {\bar{p}_{1}^{\ast }}+\frac{\varepsilon }{2},\hspace{2.5pt}\frac{{M_{1}}(t)}{t}\le \frac{\varepsilon }{2},\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\]
So, from (35) we derive
(36)
\[\begin{array}{r}\displaystyle \ln {x_{1}}(t)-\ln {x_{10}}\le t({\bar{p}_{1}^{\ast }}+\varepsilon )-{b_{1\inf }}{\underset{0}{\overset{t}{\int }}}{x_{1}}(s)ds\\ {} \displaystyle =t\varepsilon -{b_{1\inf }}{\underset{0}{\overset{t}{\int }}}{x_{1}}(s)ds,\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\end{array}\]
\[\begin{array}{r}\displaystyle \ln \left(\frac{d{y_{1}}(t)}{dt}\right)\le \varepsilon t-{b_{1\inf }}{y_{1}}(t)+\ln {x_{10}}\\ {} \displaystyle \Rightarrow {e^{{b_{1\inf }}{y_{1}}(t)}}\frac{d{y_{1}}(t)}{dt}\le {x_{10}}{e^{\varepsilon t}},\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\end{array}\]
By integrating the last inequality from ${t_{0}}$ to t we obtain
\[ {e^{{b_{1\inf }}{y_{1}}(t)}}\le \frac{{b_{1\inf }}{x_{10}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)+{e^{{b_{1\inf }}{y_{1}}({t_{0}})}},\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\]
So,
\[ {y_{1}}(t)\le \frac{1}{{b_{1\inf }}}\ln \left[{e^{{b_{1\inf }}{y_{1}}({t_{0}})}}+\frac{{b_{1\inf }}{x_{10}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)\right],\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }},\]
and therefore
\[ \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{1}}(s)ds\le \frac{\varepsilon }{{b_{1\inf }}},\hspace{2.5pt}\forall \omega \in {\Omega _{\varepsilon }}.\]
Since $\varepsilon >0$ is arbitrary and ${x_{1}}(t)>0$ a.s., we have
\[ \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{1}}(s)ds=0\hspace{1em}\text{a.s.}\]
 □
Theorem 7.
If ${\bar{p}_{2}^{\ast }}=0$ and ${\bar{p}_{1}^{\ast }}<0$, then the predator population density ${x_{2}}(t)$ with the initial condition ${x_{20}}>0$ will be nonpersistent in the mean.
Proof.
From the first equality in (33) with $i=2$ we have, for $c={c_{2\inf }}/{m_{\sup }}$,
where the martingale ${M_{2}}(t)$ is defined in (34). From Theorem 5, the definition of ${\bar{p}_{2}^{\ast }}$ and the strong law of large numbers for ${M_{2}}(t)$ it follows that $\forall \varepsilon >0$, $\exists {t_{0}}\ge 0$, and $\exists {\Omega _{\varepsilon }}\subset \Omega $ with $\mathrm{P}({\Omega _{\varepsilon }})\ge 1-\varepsilon $, such that
Let ${y_{2}}(t)={\textstyle\int _{{t_{0}}}^{t}}{x_{2}}(s)ds$. Then from (38) we have
(37)
\[\begin{array}{r}\displaystyle \ln {x_{2}}(t)\le \ln {x_{20}}+{\underset{0}{\overset{t}{\int }}}{p_{2}}(s)ds-{c_{2\inf }}{\underset{0}{\overset{t}{\int }}}\frac{{x_{2}}(s)}{m(s)+{x_{1}}(s)}ds+{M_{2}}(t)\\ {} \displaystyle =\ln {x_{20}}\hspace{-0.1667em}+\hspace{-0.1667em}{\underset{0}{\overset{t}{\int }}}{p_{2}}(s)ds\hspace{-0.1667em}-\hspace{-0.1667em}{c_{2\inf }}{\underset{0}{\overset{t}{\int }}}\frac{1}{m(s)}\left[{x_{2}}(s)\hspace{-0.1667em}-\hspace{-0.1667em}\frac{{x_{1}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}\right]ds\hspace{-0.1667em}+\hspace{-0.1667em}{M_{2}}(t)\\ {} \displaystyle \le \ln {x_{20}}+{\underset{0}{\overset{t}{\int }}}{p_{2}}(s)ds-c{\underset{0}{\overset{t}{\int }}}{x_{2}}(s)ds+c{\underset{0}{\overset{t}{\int }}}\frac{{x_{1}}(s){x_{2}}(s)}{{m_{\sup }}+{x_{1}}(s)}ds+{M_{2}}(t),\end{array}\]
\[ \frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{2}}(s)ds\le {\bar{p}_{2}^{\ast }}+\frac{\varepsilon }{2},\hspace{2.5pt}\frac{{M_{2}}(t)}{t}\le \frac{\varepsilon }{2},\hspace{2.5pt}\frac{{x_{1}}(t)}{{m_{\sup }}+{x_{1}}(t)}\le \varepsilon ,\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\]
So, from (37) we derive
(38)
\[\begin{array}{r}\displaystyle \ln {x_{2}}(t)-\ln {x_{20}}\le t({\bar{p}_{2}^{\ast }}+\varepsilon )-c(1-\varepsilon ){\underset{{t_{0}}}{\overset{t}{\int }}}{x_{2}}(s)ds\\ {} \displaystyle =t\varepsilon -c(1-\varepsilon ){\underset{{t_{0}}}{\overset{t}{\int }}}{x_{2}}(s)ds,\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\end{array}\]
\[\begin{array}{r}\displaystyle \ln \left(\frac{d{y_{2}}(t)}{dt}\right)\le \varepsilon t-c(1-\varepsilon ){y_{2}}(t)+\ln {x_{20}}\\ {} \displaystyle \Rightarrow {e^{c(1-\varepsilon ){y_{2}}(t)}}\frac{d{y_{2}}(t)}{dt}\le {x_{20}}{e^{\varepsilon t}},\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\end{array}\]
By integrating the last inequality from ${t_{0}}$ to t we obtain
\[ {e^{c(1-\varepsilon ){y_{2}}(t)}}\le \frac{c(1-\varepsilon ){x_{20}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)+1,\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }}.\]
So,
\[ {y_{2}}(t)\le \frac{1}{c(1-\varepsilon )}\ln \left[1+\frac{c(1-\varepsilon ){x_{20}}}{\varepsilon }\left({e^{\varepsilon t}}-{e^{\varepsilon {t_{0}}}}\right)\right],\hspace{2.5pt}\forall t\ge {t_{0}},\hspace{2.5pt}\omega \in {\Omega _{\varepsilon }},\]
and therefore
\[ \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{2}}(s)ds\le \frac{\varepsilon }{c(1-\varepsilon )},\hspace{2.5pt}\forall \omega \in {\Omega _{\varepsilon }}.\]
Since $\varepsilon >0$ is arbitrary and ${x_{2}}(t)>0$ a.s., we have
\[ \underset{t\to \infty }{\limsup }\frac{1}{t}{\underset{0}{\overset{t}{\int }}}{x_{2}}(s)ds=0\hspace{1em}\text{a.s.}\]
 □
Definition 5 ([11]).
The population density $x(t)$ is said to be weakly persistent in the mean if ${\bar{x}^{\ast }}=\underset{t\to \infty }{\limsup }\frac{1}{t}{\textstyle\int _{0}^{t}}x(s)ds>0$ a.s.
Theorem 8.
If ${\bar{p}_{2}^{\ast }}>0$, then the predator population density ${x_{2}}(t)$ with the initial condition ${x_{20}}>0$ will be weakly persistent in the mean.
Proof.
If the assertion of the theorem is not true, then $\mathrm{P}\{{\bar{x}_{2}^{\ast }}=0\}>0$. From the first equality in (33) with $i=2$ we get
\[\begin{array}{r}\displaystyle \frac{1}{t}(\ln {x_{2}}(t)-\ln {x_{20}})=\frac{1}{t}{\int _{0}^{t}}{p_{2}}(s)ds-\frac{1}{t}{\int _{0}^{t}}\frac{{c_{2}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}ds+\frac{{M_{2}}(t)}{t}\\ {} \displaystyle \ge \frac{1}{t}{\int _{0}^{t}}{p_{2}}(s)ds-\frac{{c_{2\sup }}}{{m_{\inf }}t}{\int _{0}^{t}}{x_{2}}(s)ds+\frac{{M_{2}}(t)}{t},\end{array}\]
where the martingale ${M_{2}}(t)$ is defined in (34). For every $\omega \in \{\omega \in \Omega |\hspace{2.5pt}{\bar{x}_{2}^{\ast }}=0\}$, by virtue of the strong law of large numbers for the martingale ${M_{2}}(t)$ we have
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{2}}(t)}{t}\ge {\bar{p}_{2}^{\ast }}>0.\]
Therefore,
\[ \mathrm{P}\left\{\omega \in \Omega |\hspace{2.5pt}\underset{t\to \infty }{\limsup }\frac{\ln {x_{2}}(t)}{t}>0\right\}>0.\]
But from Lemma 2 we have
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{2}}(t)}{t}\le 0\hspace{1em}\text{a.s.}\]
This is a contradiction.  □
Theorem 9.
If ${\bar{p}_{1}^{\ast }}>0$ and ${\bar{p}_{2}^{\ast }}<0$, then the prey population density ${x_{1}}(t)$ with the initial condition ${x_{10}}>0$ will be weakly persistent in the mean.
Proof.
Assume, on the contrary, that $\mathrm{P}\{{\bar{x}_{1}^{\ast }}=0\}>0$. From the first equality in (33) with $i=1$ we get
where the martingale ${M_{1}}(t)$ is defined in (34). From the definition of ${\bar{p}_{1}^{\ast }}$, the strong law of large numbers for the martingale ${M_{1}}(t)$ and the argument of Theorem 5 applied to ${x_{2}}(t)$ (which uses only ${\bar{p}_{2}^{\ast }}<0$), we have $\forall \varepsilon >0$, $\exists {t_{0}}\ge 0$, $\exists {\Omega _{\varepsilon }}\subset \Omega $ with $\mathrm{P}({\Omega _{\varepsilon }})\ge 1-\varepsilon $, such that
(39)
\[\begin{array}{r}\displaystyle \frac{1}{t}(\ln {x_{1}}(t)-\ln {x_{10}})=\frac{1}{t}{\int _{0}^{t}}{p_{1}}(s)ds-\frac{1}{t}{\int _{0}^{t}}{b_{1}}(s){x_{1}}(s)ds\\ {} \displaystyle -\frac{1}{t}{\int _{0}^{t}}\frac{{c_{1}}(s){x_{2}}(s)}{m(s)+{x_{1}}(s)}ds+\frac{{M_{1}}(t)}{t}\\ {} \displaystyle \ge \frac{1}{t}{\int _{0}^{t}}{p_{1}}(s)ds-\frac{{b_{1\sup }}}{t}{\int _{0}^{t}}{x_{1}}(s)ds-\frac{{c_{1\sup }}}{{m_{\inf }}t}{\int _{0}^{t}}{x_{2}}(s)ds+\frac{{M_{1}}(t)}{t}\end{array}\]
\[ \frac{1}{t}{\underset{0}{\overset{t}{\int }}}{p_{1}}(s)ds\ge {\bar{p}_{1}^{\ast }}-\frac{\varepsilon }{3},\hspace{2.5pt}\frac{{M_{1}}(t)}{t}\ge -\frac{\varepsilon }{3},\hspace{2.5pt}\frac{1}{t}{\int _{0}^{t}}{x_{2}}(s)ds\le \frac{\varepsilon {m_{\inf }}}{3{c_{1\sup }}},\forall t\ge {t_{0}},\omega \in {\Omega _{\varepsilon }}.\]
So, from (39) we get for $\omega \in \{\omega \in \Omega |{\bar{x}_{1}^{\ast }}=0\}\cap {\Omega _{\varepsilon }}$
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{1}}(t)}{t}\ge {\bar{p}_{1}^{\ast }}-\varepsilon >0\]
for a sufficiently small $\varepsilon >0$. Therefore,
\[ \mathrm{P}\left\{\omega \in \Omega |\hspace{2.5pt}\underset{t\to \infty }{\limsup }\frac{\ln {x_{1}}(t)}{t}>0\right\}>0.\]
But from Corollary 1,
\[ \underset{t\to \infty }{\limsup }\frac{\ln {x_{1}}(t)}{t}\le 0\hspace{1em}\text{a.s.}\]
Therefore we have a contradiction.  □