Introduction
The Cox–Ingersoll–Ross (CIR) process, first introduced and studied by Cox, Ingersoll and Ross in the papers [4–6], can be defined as the process $X=\{{X_{t}},t\ge 0\}$ that satisfies the stochastic differential equation

(1)
\[ d{X_{t}}=a(b-{X_{t}})dt+\sigma \sqrt{{X_{t}}}d{W_{t}},\hspace{1em}{X_{0}}>0,\]

where $W=\{{W_{t}},t\ge 0\}$ is a Wiener process.
The CIR process was originally proposed as a model for the evolution of interest rates in time, and in this framework the parameters a, b and σ have the following interpretation: b is the “mean” that characterizes the level around which the trajectories of X evolve in the long run; a is the “speed of adjustment” that measures the velocity of regrouping around b; σ is the instantaneous volatility that shows the amplitude of randomness entering the system. Another common application is stochastic volatility modeling in the Heston model, which was proposed in [10] (an extensive bibliography on the subject can be found in [11] and [12]).
For the sake of simplicity, we shall use another parametrization of equation (1), namely

(2)
\[ d{X_{t}}=(k-a{X_{t}})dt+\sigma \sqrt{{X_{t}}}d{W_{t}},\hspace{1em}{X_{0}}>0,\]

where $k=ab$.
According to [9], if the condition $2k\ge {\sigma ^{2}}$ (sometimes referred to as the Feller condition) holds, then the CIR process is strictly positive. It is also well known that this process is ergodic and has a stationary distribution.
However, in reality the dynamics of financial markets is characterized by the so-called “memory phenomenon”, which cannot be captured by the standard CIR model (for more details on financial markets with memory, see [1, 2, 7, 21]). Therefore, it is reasonable to introduce a fractional Cox–Ingersoll–Ross process, i.e. to modify equation (2) by replacing (in some sense) the standard Wiener process with a fractional Brownian motion ${B^{H}}=\{{B_{t}^{H}},t\ge 0\}$, i.e. a centered Gaussian process with the covariance function $\mathbb{E}[{B_{t}^{H}}{B_{s}^{H}}]=\frac{1}{2}({t^{2H}}+{s^{2H}}-|t-s{|^{2H}})$, $H\in (0,1)$.
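As a small aside (not part of the paper), the covariance function above immediately yields stationary increments with $\mathbb{E}{({B_{t}^{H}}-{B_{s}^{H}})^{2}}=|t-s{|^{2H}}$, which can be checked numerically; the values of H, t, s below are arbitrary.

```python
def fbm_cov(t, s, H):
    """Covariance E[B^H(t) B^H(s)] of fractional Brownian motion."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# E(B^H(t) - B^H(s))^2 = cov(t,t) - 2 cov(t,s) + cov(s,s) = |t - s|^{2H}
H, t, s = 0.3, 1.7, 0.4
var_inc = fbm_cov(t, t, H) - 2 * fbm_cov(t, s, H) + fbm_cov(s, s, H)
assert abs(var_inc - abs(t - s) ** (2 * H)) < 1e-12
```

The cancellation is purely algebraic, so it holds for every Hurst index $H\in (0,1)$.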
There are several approaches to the definition of the fractional Cox–Ingersoll–Ross process. The paper [15] introduces the so-called “rough-path” approach; in [13, 14] it is defined as a time-changed standard CIR process with an inverse stable subordinator; another definition is presented in [8] as part of a discussion on rough Heston models.
A simpler pathwise approach is presented in [17] and [18]. There, the fractional Cox–Ingersoll–Ross process was defined as the square of the solution of the SDE

(3)
\[ d{Y_{t}}=\frac{1}{2}\bigg(\frac{k}{{Y_{t}}}-a{Y_{t}}\bigg)dt+\frac{\sigma }{2}d{B_{t}^{H}},\hspace{1em}{Y_{0}}>0,\]

until the first moment of hitting zero, and as zero after this moment (the latter condition is necessary because in the case of $k>0$ the existence of the solution of (3) cannot be guaranteed after the first moment of reaching zero). The reason for such a definition lies in the fact that the fractional Cox–Ingersoll–Ross process X defined in this way satisfies pathwise (until the first moment of hitting zero) the equation
\[ d{X_{t}}=(k-a{X_{t}})dt+\sigma \sqrt{{X_{t}}}\circ d{B_{t}^{H}},\]
where the integral with respect to the fractional Brownian motion is considered as the pathwise Stratonovich integral.
It was shown that if $k>0$ and $H>\frac{1}{2}$, such a process is strictly positive and never hits zero; if $k>0$ and $H<\frac{1}{2}$, the probability that there is no zero hitting on a fixed interval $[0,T]$, $T>0$, tends to 1 as $k\to \infty $.
The special case $k=0$ was considered in [17]. In this situation Y is the well-known fractional Ornstein–Uhlenbeck process (for more details see [3]); the fractional Cox–Ingersoll–Ross process hits zero almost surely if $a\ge 0$, while for $a<0$ the probability of hitting zero is strictly between 0 and 1.
However, such a definition has a significant disadvantage: according to it, the process remains at zero after reaching it, and if $H<1/2$ this case cannot be excluded. In this paper we generalize the approach presented in [18] and [17] in order to resolve this issue.
We define the fractional CIR process on ${\mathbb{R}_{+}}$ for $H<1/2$ as the square of Y which is the pointwise limit as $\varepsilon \downarrow 0$ of the processes ${Y_{\varepsilon }}$ that satisfy the SDE of the following type:
\[ d{Y_{\varepsilon }}(t)=\frac{1}{2}\bigg(\frac{k}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }-a{Y_{\varepsilon }}(t)\bigg)dt+\frac{\sigma }{2}d{B^{H}}(t),\hspace{1em}{Y_{\varepsilon }}(0)={Y_{0}}>0,\]
where a, k, $\sigma >0$. We prove that this limit indeed exists, is nonnegative a.s. and is positive a.e. with respect to the Lebesgue measure a.s. Moreover, Y is continuous and satisfies an equation of the type
\[ Y(t)=Y(\alpha )+\frac{1}{2}{\int _{\alpha }^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\]
for all $t\in [\alpha ,\beta ]$, where $(\alpha ,\beta )$ is any interval of positiveness of Y. The possibility of obtaining an equation of the form (3) on the whole of ${\mathbb{R}_{+}}$ is restricted by the obscure structure of the set $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)=0\}$, which is connected to the structure of the level sets of the fractional Brownian motion. For some results on the latter see, for example, [19].
This paper is organised as follows.
In Section 1 we give preliminary remarks on some results concerning the existence and uniqueness of solutions of SDEs driven by an additive fractional Brownian motion, and explain the connection of this paper to [17] and [18].
In Section 2 we construct the square root process as a limit of approximations. In particular, this section contains a variant of the comparison lemma and a uniform bound for all moments of the prelimit processes.
In Section 3 we consider properties of the square root process. We prove that Y is nonnegative and positive a.e., and also continuous.
In Section 4 we give some remarks concerning the equation for the limit square root process on ${\mathbb{R}_{+}}$. We obtain the equation for this process with the noise in the form of a sum of increments of the fractional Brownian motion over the intervals of positiveness of Y.
Section 5 is fully devoted to the pathwise definition and equation of the fractional Cox–Ingersoll–Ross process. We prove that on each interval of positiveness the process satisfies the CIR SDE with the integral with respect to a fractional Brownian motion, considered as the pathwise Stratonovich integral. The equation on an arbitrary finite interval is also obtained, with the integral with respect to a fractional Brownian motion considered as the sum of pathwise Stratonovich integrals on intervals of positiveness.
1 Fractional Cox–Ingersoll–Ross process until the first moment of zero hitting
Let ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ be a fractional Brownian motion with an arbitrary Hurst index $H\in (0,1)$. Consider the process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$ such that

(4)
\[ Y(t)={Y_{0}}+\frac{1}{2}{\int _{0}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}(t),\hspace{1em}t\ge 0,\]

where ${Y_{0}},k,a,\sigma >0$.

Note that, according to [20], sufficient conditions that guarantee the existence and the uniqueness of the strong solution of the equation
\[ dZ(t)=f\big(t,Z(t)\big)dt+d{B^{H}}(t),\hspace{1em}t\in [0,T],\]
are as follows:

(i) for $H<1/2$: the linear growth condition

(5)
\[ \big|f(t,z)\big|\le C\big(1+|z|\big),\]

where C is a positive constant;

(ii) for $H>1/2$: Hölder continuity,

(6)
\[ \big|f({t_{1}},{z_{1}})-f({t_{2}},{z_{2}})\big|\le C\big(|{z_{1}}-{z_{2}}{|^{\alpha }}+|{t_{1}}-{t_{2}}{|^{\gamma }}\big),\]

where C is a positive constant, ${z_{1}},{z_{2}}\in \mathbb{R}$, ${t_{1}},{t_{2}}\in [0,T]$, $1>\alpha >1-1/2H$, $\gamma >H-\frac{1}{2}$.

The function
\[ f(t,y)=\frac{1}{2}\bigg(\frac{k}{y}-ay\bigg)\]
satisfies both (i) and (ii) for all $y\in (\delta ,+\infty )$, where $\delta \in (0,{Y_{0}})$ is arbitrary. Therefore, for all $H\in (0,1)$ the unique strong solution of (4) on $[0,T]$ exists until the first moment of hitting the level $\delta \in (0,{Y_{0}})$ and, since δ is arbitrary, it exists until the first moment of hitting zero.
Let $\tau :=\sup \{t>0\hspace{2.5pt}|\hspace{2.5pt}\forall s\in [0,t):Y(s)>0\}$ be the first moment of hitting zero by Y. The fractional Cox–Ingersoll–Ross process was defined in [18] as follows.
Definition 1.1.
The fractional Cox–Ingersoll–Ross (CIR) process is the process $X=\{X(t),\hspace{2.5pt}t\ge 0\}$ such that

(7)
\[ X(t)={Y^{2}}(t){1_{\{t<\tau \}}},\hspace{1em}t\ge 0,\]

where Y is the solution of the equation (4).
It is known ([18], Theorem 2) that if $H>\frac{1}{2}$, the process Y is strictly positive a.s., therefore the fractional Cox–Ingersoll–Ross process is simply ${Y^{2}}(t)$, $t\ge 0$.
However, in the case of $H<\frac{1}{2}$ the process Y may hit zero. In such a situation, according to (7), the corresponding trajectories of the fractional Cox–Ingersoll–Ross process remain at zero after this moment, which is an undesirable property for financial applications.
Our goal is to modify the definition of the fractional CIR process in order to remove the problem mentioned above.
2 Construction of the square root process on ${\mathbb{R}_{+}}$ as a limit of ε-approximations
Consider the process ${Y_{\varepsilon }}=\{{Y_{\varepsilon }}(t),t\in [0,T]\}$ that satisfies the equation of the form

(8)
\[ {Y_{\varepsilon }}(t)={Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds-a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t),\hspace{1em}t\ge 0,\]

where ${Y_{0}},k,a,\sigma >0$ and ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ is a fractional Brownian motion with $H\in (0,1/2)$.

Remark 2.1.
We will sometimes call ${Y_{\varepsilon }}$ an ε-approximation of the square root process Y.
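For illustration only (this sketch is not part of the paper), an ε-approximation can be simulated with an explicit Euler scheme for (8), sampling the fractional Brownian motion exactly on the grid via a Cholesky factorization of its covariance matrix. The function names and all parameter values below are our own arbitrary choices.

```python
import numpy as np

def fbm_cholesky(n, H, T, rng):
    """Exact sampling of fBm on the grid t_i = i*T/n, i = 0..n, via a
    Cholesky factorization of the covariance matrix (O(n^3), fine for small n)."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (u ** (2 * H) + s ** (2 * H) - np.abs(u - s) ** (2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate(([0.0], path))  # B^H(0) = 0

def euler_eps_approximation(Y0, k, a, sigma, eps, H, T, n, rng):
    """Explicit Euler scheme for the eps-approximation (8):
    Y_eps(t) = Y0 + int_0^t k / (Y_eps * 1_{Y_eps > 0} + eps) ds
               - a * int_0^t Y_eps ds + sigma * B^H(t)."""
    dt = T / n
    B = fbm_cholesky(n, H, T, rng)
    Y = np.empty(n + 1)
    Y[0] = Y0
    for i in range(n):
        drift = k / (Y[i] * (Y[i] > 0) + eps) - a * Y[i]
        Y[i + 1] = Y[i] + drift * dt + sigma * (B[i + 1] - B[i])
    return Y

rng = np.random.default_rng(0)
Y = euler_eps_approximation(Y0=1.0, k=1.0, a=1.0, sigma=0.5,
                            eps=0.1, H=0.3, T=1.0, n=200, rng=rng)
```

The indicator in the drift reproduces the truncation in (8); the Cholesky approach keeps the sketch short but scales cubically, so circulant-embedding methods would be preferable for fine grids.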
For any $T>0$, the function ${f_{\varepsilon }}:\mathbb{R}\to \mathbb{R}$ such that
\[ {f_{\varepsilon }}(y)=\frac{k}{y{1_{\{y>0\}}}+\varepsilon }-ay\]
satisfies the conditions (5) and (6); therefore, the strong solution of (8) exists and is unique.
The goal of this section is to prove that ${Y_{\varepsilon }}$ has a pointwise limit as $\varepsilon \to 0$.
First, let us prove an analogue of the comparison lemma.
Lemma 2.1.
Assume that continuous random processes ${Y_{1}}=\{{Y_{1}}(t),t\ge 0\}$ and ${Y_{2}}=\{{Y_{2}}(t),t\ge 0\}$ satisfy the equations of the form

(9)
\[ \displaystyle {Y_{i}}(t)={Y_{0}}+{\int _{0}^{t}}{f_{i}}\big({Y_{i}}(s)\big)ds+{\int _{0}^{t}}{\alpha _{i}}(s)ds+\sigma {B^{H}}(t),\hspace{1em}t\ge 0,\hspace{2.5pt}i=1,2,\]

where ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ is a fractional Brownian motion, ${Y_{0}}$ and $\sigma >0$ are constants, ${\alpha _{i}}=\{{\alpha _{i}}(t),t\ge 0\}$, $i=1,2$, are continuous random processes, and ${f_{1}},{f_{2}}:\mathbb{R}\to \mathbb{R}$ are continuous functions such that:

(i) for all $y\in \mathbb{R}$: ${f_{1}}(y)<{f_{2}}(y)$;

(ii) for all $\omega \in \varOmega $ and all $t\ge 0$: ${\alpha _{1}}(t,\omega )\le {\alpha _{2}}(t,\omega )$.

Then, for all $\omega \in \varOmega $ and $t>0$: ${Y_{1}}(t,\omega )<{Y_{2}}(t,\omega )$.
Proof.
Let $\omega \in \varOmega $ be fixed (we will omit ω in brackets in what follows).
Denote $\delta (t):={Y_{2}}(t)-{Y_{1}}(t)$, $t\ge 0$. The function δ is differentiable, $\delta (0)=0$ and
\[ {\delta ^{\prime }}(t)={f_{2}}\big({Y_{2}}(t)\big)-{f_{1}}\big({Y_{1}}(t)\big)+{\alpha _{2}}(t)-{\alpha _{1}}(t);\]
in particular, ${\delta ^{\prime }_{+}}(0)={f_{2}}({Y_{0}})-{f_{1}}({Y_{0}})+{\alpha _{2}}(0)-{\alpha _{1}}(0)>0$.
It is clear that $\delta (t)={\delta ^{\prime }_{+}}(0)t+o(t)$, $t\to 0+$, so there exists the maximal interval $(0,{t^{\ast }})\subset (0,\infty )$ such that $\delta (t)>0$ for all $t\in (0,{t^{\ast }})$. It is also clear that if ${t^{\ast }}=\infty $, the proof is complete.
Assume that ${t^{\ast }}<\infty $. According to the definition of ${t^{\ast }}$, $\delta ({t^{\ast }})=0$. Hence ${Y_{1}}({t^{\ast }})={Y_{2}}({t^{\ast }})={Y^{\ast }}$ and
\[ {\delta ^{\prime }}\big({t^{\ast }}\big)={f_{2}}\big({Y^{\ast }}\big)-{f_{1}}\big({Y^{\ast }}\big)+{\alpha _{2}}\big({t^{\ast }}\big)-{\alpha _{1}}\big({t^{\ast }}\big)>0.\]
As $\delta (t)={\delta ^{\prime }}({t^{\ast }})(t-{t^{\ast }})+o(t-{t^{\ast }})$, $t\to {t^{\ast }}$, there exists $\varepsilon >0$ such that $\delta (t)<0$ for all $t\in ({t^{\ast }}-\varepsilon ,{t^{\ast }})$, which contradicts the definition of ${t^{\ast }}$.  □
Remark 2.2.
It is obvious that Lemma 2.1 still holds if we replace the index set $[0,+\infty )$ by $[a,b]$, $a<b$, or if we consider the case ${Y_{1}}(0)<{Y_{2}}(0)$.
Moreover, the condition (i) can be replaced by ${f_{1}}(y)\le {f_{2}}(y)$. In this case it can be obtained that ${Y_{1}}(t,\omega )\le {Y_{2}}(t,\omega )$ for all $\omega \in \varOmega $ and $t\ge 0$.
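Lemma 2.1 is a pathwise statement: it holds for every fixed realization of the additive noise. A minimal numerical sketch (not from the paper; the drifts, parameters and seed are arbitrary) integrates two equations of the form (9) with a common noise path:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 1000, 1.0
dt = T / n
# One shared continuous noise path; the comparison argument is pathwise,
# so any fixed path works (a Brownian path is used here for simplicity).
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))

def euler(f, Y0, sigma, B, dt):
    """Euler scheme for Y(t) = Y0 + int_0^t f(Y(s)) ds + sigma * B(t)."""
    Y = np.empty(len(B))
    Y[0] = Y0
    for i in range(len(B) - 1):
        Y[i + 1] = Y[i] + f(Y[i]) * dt + sigma * (B[i + 1] - B[i])
    return Y

Y1 = euler(lambda y: -y, 1.0, 0.5, B, dt)       # f1(y) = -y
Y2 = euler(lambda y: 1.0 - y, 1.0, 0.5, B, dt)  # f2(y) = 1 - y > f1(y)
# Ordering predicted by Lemma 2.1; the noise cancels in the difference, and
# delta = Y2 - Y1 follows the deterministic recursion delta' = 1 - delta.
assert np.all(Y2[1:] > Y1[1:])
```

For this particular pair of drifts the difference is deterministic and approaches $1-{e^{-t}}$, which makes the strict ordering easy to verify.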
According to Lemma 2.1, for any ${\varepsilon _{1}}>{\varepsilon _{2}}>0$ and for all $t\in (0,\infty )$:
\[ {Y_{{\varepsilon _{1}}}}(t)<{Y_{{\varepsilon _{2}}}}(t).\]
Now let us show that the pointwise limit $\underset{\varepsilon \downarrow 0}{\lim }{Y_{\varepsilon }}(t)$, $t\ge 0$, exists.
We will need an auxiliary result, presented in [16].
Let a, k, $\sigma >0$, $H\in (0,1)$.
Theorem 2.1.
For all $H\in (0,1)$, $T>0$ and $r\ge 1$ there are non-random constants ${C_{1}}={C_{1}}(T,r,{Y_{0}},a,k)>0$ and ${C_{2}}={C_{2}}(T,r,a,\sigma )>0$ such that for all $\varepsilon >0$ and for all $t\in [0,T]$:
\[ {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {C_{1}}+{C_{2}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Proof.
Let an arbitrary $\omega \in \varOmega $, $r\ge 1$ and $T>0$ be fixed (we will omit ω in what follows).
Let us prove that for all $\varepsilon >0$ and for all $t\in [0,T]$:

(13)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)\\ {} \displaystyle +{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]

Consider the moment
\[ {\tau _{1}}(\varepsilon ):=\sup \bigg\{s\in [0,T]\hspace{2.5pt}|\hspace{2.5pt}\forall u\in [0,s]:{Y_{\varepsilon }}(u)\ge \frac{{Y_{0}}}{2}\bigg\}.\]

Note that ${Y_{\varepsilon }}$ is continuous, so $0<{\tau _{1}}(\varepsilon )\le T$ and, moreover, for all $t\in [0,{\tau _{1}}(\varepsilon )]$: ${Y_{\varepsilon }}(t)\ge \frac{{Y_{0}}}{2}>0$. In order to make the further proof more convenient for the reader, we divide it into three steps. In Steps 1 and 2 we separately show that (13) holds for all $t\in [0,{\tau _{1}}(\varepsilon )]$ and for all $t\in ({\tau _{1}}(\varepsilon ),T]$, and in Step 3 we obtain the final result.
Step 1. Assume that $t\in [0,{\tau _{1}}(\varepsilon )]$. Then,

(14)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}={\Bigg|{Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds-a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t)\Bigg|^{r}}\\ {} \displaystyle \le {\Bigg({Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds+a{\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds+\sigma \big|{B^{H}}(t)\big|\Bigg)^{r}}.\end{array}\]

If $r>1$, by applying the Hölder inequality to the right-hand side of (14), we obtain that

(15)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {4^{r-1}}\Bigg({Y_{0}^{r}}+{\Bigg({\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\\ {} \displaystyle +{a^{r}}{\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}+{\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}\Bigg).\end{array}\]

Note that (15) is also true for $r=1$, as in this case it simply coincides with the right-hand side of (14).

For all $t\in [0,{\tau _{1}}(\varepsilon )]$,
\[ {\Bigg({\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\le {\Bigg({\int _{0}^{t}}\frac{2k}{{Y_{0}}}ds\Bigg)^{r}}\le {\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}.\]
For $r\ge 1$, from Jensen’s inequality,
\[ {\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {t^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\le {T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\]
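This Jensen-type step, ${({\int _{0}^{t}}|f|ds)^{r}}\le {t^{r-1}}{\int _{0}^{t}}|f{|^{r}}ds$, also holds exactly for Riemann sums, which gives a quick numerical check (an illustration only; the integrand and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n, r = 2.0, 10000, 3.0
ds = t / n
f = rng.uniform(0.0, 5.0, n)  # arbitrary nonnegative integrand on [0, t]
lhs = (f.sum() * ds) ** r
rhs = t ** (r - 1) * (f ** r).sum() * ds
# discrete Jensen: (t * mean(f))^r <= t^r * mean(f^r)
assert lhs <= rhs
```

The inequality is an instance of the power-mean inequality, so no discretization error can break it.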
Finally, for all $t\in [0,T]$:
\[ {\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}\le {\sigma ^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Hence, for all $t\in [0,{\tau _{1}}(\varepsilon )]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {4^{r-1}}\Bigg({Y_{0}^{r}}+{\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}+{a^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds+{\sigma ^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\Bigg)\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{8kT}{{Y_{0}}}\bigg)^{r}}+{(4\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(4a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Step 2. Assume that ${\tau _{1}}(\varepsilon )<T$, i.e. the interval $({\tau _{1}}(\varepsilon ),T]$ is non-empty. From the definition of ${\tau _{1}}(\varepsilon )$ and the continuity of ${Y_{\varepsilon }}$, ${Y_{\varepsilon }}({\tau _{1}}(\varepsilon ))=\frac{{Y_{0}}}{2}$. For all $t\in ({\tau _{1}}(\varepsilon ),T]$, denote
\[ {\tau _{2}}(\varepsilon ,t):=\sup \bigg\{s\in ({\tau _{1}}(\varepsilon ),t]\hspace{2.5pt}|\hspace{2.5pt}\big|{Y_{\varepsilon }}(s)\big|<\frac{{Y_{0}}}{2}\bigg\}.\]
Note that for all $t\in ({\tau _{1}}(\varepsilon ),T]$: ${\tau _{1}}(\varepsilon )<{\tau _{2}}(\varepsilon ,t)\le t$ and $|{Y_{\varepsilon }}({\tau _{2}}(\varepsilon ,t))|\le \frac{{Y_{0}}}{2}$, so
(16)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}={\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)+{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\\ {} \displaystyle \le {2^{r-1}}\big({\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{\big|{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\big)\\ {} \displaystyle \le {2^{r-1}}\bigg({\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{\bigg(\frac{{Y_{0}}}{2}\bigg)^{r}}\bigg)\\ {} \displaystyle \le {2^{r-1}}{\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{Y_{0}^{r}}.\end{array}\]In addition, if ${\tau _{2}}(\varepsilon ,t)=t$, then
\[ {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}=0,\]
and if ${\tau _{2}}(\varepsilon ,t)<t$, then
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}=\Bigg|{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle -a{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big){\Bigg|^{r}}\\ {} \displaystyle \le \Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds+a{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds+\sigma \big|{B^{H}}(t)\big|\\ {} \displaystyle +\sigma \big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|\Bigg){^{r}}\\ {} \displaystyle \le {4^{r-1}}\Bigg[{\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}+{a^{r}}{\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\\ {} \displaystyle +{\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}+{\sigma ^{r}}{\big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\Bigg].\end{array}\]
In this case, from the definition of ${\tau _{2}}(\varepsilon ,t)$, for all $s\in [{\tau _{2}}(\varepsilon ,t),t]$: ${Y_{\varepsilon }}(s)\ge \frac{{Y_{0}}}{2}$, so
\[ {\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\le {\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}.\]
Furthermore, from Jensen’s inequality,
\[\begin{array}{c}\displaystyle {\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {t^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le {T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Next,
\[ {\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}+{\sigma ^{r}}{\big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\le 2{\sigma ^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Hence,
(17)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\\ {} \displaystyle \le {\bigg(\frac{8kT}{{Y_{0}}}\bigg)^{r}}+{(4a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds+{(4\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\end{array}\]Finally, from (16) and (17), for all $t\in ({\tau _{1}}(\varepsilon ),T]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\\ {} \displaystyle \le \bigg({Y_{0}^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)\\ {} \displaystyle +{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Therefore, (13) indeed holds for all $t\in [0,T]$.
Step 3. From (13), by applying the Grönwall inequality, we obtain that for all $t\in [0,T]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg){e^{{(8aT)^{r}}}}\\ {} \displaystyle =:{C_{1}}+{C_{2}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}},\end{array}\]
where
\[ {C_{1}}:=\bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}\bigg){e^{{(8aT)^{r}}}},\hspace{2em}{C_{2}}:={(8\sigma )^{r}}{e^{{(8aT)^{r}}}},\]
which ends the proof.  □

Proof.
Let $\omega \in \varOmega $ and $t\in (0,T]$ be fixed (if $t=0$, the result is trivial).
Proof.
First, note that the trajectories of Y are measurable as they are the pointwise limits of continuous functions.
Let $t\in {\mathbb{R}_{+}}$ be fixed. Due to Tonelli's theorem, for any $r\ge 1$ there is a constant C such that
therefore
□
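The Grönwall step used in Theorem 2.1 can be sanity-checked on synthetic data (an illustration only; the constants A, B and the randomized sequence are arbitrary): any nonnegative u with $u(t)\le A+B{\int _{0}^{t}}u(s)ds$ must satisfy $u(t)\le A{e^{Bt}}$.

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, dt, n = 2.0, 1.5, 0.001, 1000
u = np.empty(n)
acc = 0.0  # running Riemann sum approximating int_0^t u(s) ds
for i in range(n):
    # any choice with u_i <= A + B * int_0^{t_i} u ds satisfies the hypothesis
    u[i] = (A + B * dt * acc) * rng.uniform(0.5, 1.0)
    acc += u[i]
t = dt * np.arange(1, n + 1)
assert np.all(u <= A * np.exp(B * t))  # the Gronwall bound
```

The discrete version of the argument gives $u_{i}\le A{(1+B\,dt)^{i}}\le A{e^{B\,i\,dt}}$, so the assertion holds regardless of the random factors chosen in the loop.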
Remark 2.4.
Later it will be shown that Y is also Riemann integrable. Until then, all integrals should be understood as Lebesgue integrals.
Remark 2.5.
We will sometimes refer to the limit process Y as the square root process. It will be shown that it coincides with the square root process presented in Section 1 until the first moment of hitting zero by the latter.
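As a reproducible numerical illustration (not part of the paper) of the monotone limit: with $\sigma =0$ each ε-approximation solves a deterministic ODE, so both the ordering from Lemma 2.1 and the shrinking gaps as $\varepsilon \downarrow 0$ can be checked without simulating the noise; all parameter values are arbitrary.

```python
import numpy as np

def Y_eps(eps, Y0=1.0, k=1.0, a=1.0, T=1.0, n=4000):
    """Explicit Euler scheme for the eps-approximation of (8) with sigma = 0:
    Y' = k / (Y * 1_{Y > 0} + eps) - a * Y."""
    dt = T / n
    Y = np.empty(n + 1)
    Y[0] = Y0
    for i in range(n):
        Y[i + 1] = Y[i] + (k / (Y[i] * (Y[i] > 0) + eps) - a * Y[i]) * dt
    return Y

Y_coarse, Y_mid, Y_fine = Y_eps(0.1), Y_eps(0.05), Y_eps(0.01)
# Trajectories increase monotonically as eps decreases (comparison lemma) ...
assert np.all(Y_mid[1:] >= Y_coarse[1:])
assert np.all(Y_fine[1:] >= Y_mid[1:])
# ... and successive gaps shrink, consistent with a pointwise limit.
assert np.max(Y_fine - Y_mid) < np.max(Y_mid - Y_coarse)
```

Here each trajectory relaxes towards the equilibrium of its own drift, and those equilibria converge as $\varepsilon \downarrow 0$, which is what the shrinking gaps reflect.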
3 Properties of ε-approximations and the square root process
Now let us prove several properties of both the square root process and its ε-approximations.
Proof.
Let $\omega \in \varOmega $ be fixed (we will omit ω in what follows).
From the definition of Y and Remark 2.3, for any $t\in [0,T]$ the left-hand side of
\[ {Y_{\varepsilon }}(t)-{Y_{0}}+a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds-\sigma {B^{H}}(t)={\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\]
converges to
\[ Y(t)-{Y_{0}}+a{\int _{0}^{t}}Y(s)ds-\sigma {B^{H}}(t)\]
as $\varepsilon \to 0$. Therefore there exists a finite limit

(19)
\[ \underset{\varepsilon \to 0}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds<\infty ,\hspace{1em}t\in [0,T].\]
Assume that there exist a sequence $\{{\varepsilon _{n}}:n\ge 1\}$, ${\varepsilon _{n}}\downarrow 0$, and $\delta >0$ such that for all $n\ge 1$:
\[ \lambda \big\{t\in [0,T]\hspace{2.5pt}|\hspace{2.5pt}{Y_{{\varepsilon _{n}}}}(t)\le 0\big\}\ge \delta ,\]
where λ denotes the Lebesgue measure. In such a case, as
\[\begin{array}{c}\displaystyle {\int _{0}^{T}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle ={\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)\le 0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle +{\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)>0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle \ge {\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)\le 0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\ge \frac{k\delta }{{\varepsilon _{n}}},\end{array}\]
it is clear that
\[ {\int _{0}^{T}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\to \infty ,\hspace{1em}n\to \infty ,\]
that contradicts (19).  □

Corollary 3.1.
For any $T>0$, $Y(t)>0$ almost everywhere on $[0,T]$ a.s. and hence $Y(t)>0$ almost everywhere on ${\mathbb{R}_{+}}$ a.s.
For the next result, we will require the following well-known property of the fractional Brownian motion (see, for example, [16]).
Lemma 3.2.
Let $\{{B^{H}}(t),t\ge 0\}$ be a fractional Brownian motion with the Hurst index H. Then there is such ${\varOmega ^{\prime }}\subset \varOmega $, $\mathbb{P}({\varOmega ^{\prime }})=1$, that for all $\omega \in {\varOmega ^{\prime }}$, $T>0$, $\gamma >0$ and $0\le s\le t\le T$ there is a positive $C=C(\omega ,T,\gamma )$ for which
\[ \big|{B^{H}}(t)-{B^{H}}(s)\big|\le C|t-s{|^{H-\gamma }}.\]
Proof.
Let an arbitrary $\omega \in {\varOmega ^{\prime }}$ from Lemma 3.2 be fixed and assume that there is such $\tau >0$ that $Y(\tau )\le 0$. Then, due to the strict monotonicity of ${Y_{\varepsilon }}$ with respect to ε, for all $\varepsilon >0$: ${Y_{\varepsilon }}(\tau )<0$.
Let an arbitrary $\varepsilon >0$ be fixed. Denote
\[\begin{array}{c}\displaystyle {\tau _{-}}(\varepsilon ):=\sup \big\{t\in (0,\tau )\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)>0\big\},\\ {} \displaystyle {\tau _{+}}(\varepsilon ):=\inf \big\{t\in (\tau ,\infty )\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)>0\big\}.\end{array}\]
Note that, due to the continuity of ${Y_{\varepsilon }}$ and Lemma 3.1, $0<{\tau _{-}}(\varepsilon )<\tau <{\tau _{+}}(\varepsilon )<\infty $. It is clear that for all $t\in ({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$: ${Y_{\varepsilon }}(t)<0$, therefore ${1_{\{{Y_{\varepsilon }}(t)>0\}}}=0$ and, due to Lemma 3.2, there is such $C>0$ that for all $t\in ({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$:

(20)
\[\begin{array}{l}\displaystyle 0>{Y_{\varepsilon }}(t)={Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{-}}(\varepsilon )\big)\\ {} \displaystyle ={\int _{{\tau _{-}}(\varepsilon )}^{t}}\frac{k}{\varepsilon }ds-a{\int _{{\tau _{-}}(\varepsilon )}^{t}}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}\big({\tau _{-}}(\varepsilon )\big)\big)\\ {} \displaystyle \ge \frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}.\end{array}\]

It is sufficient to prove that
\[ F(\varepsilon ):=\underset{t\in ({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))}{\min }\bigg(\frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}\bigg)\to 0,\hspace{1em}\varepsilon \to 0.\]

Indeed,
(21)
\[ {\bigg(\frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}\bigg)^{\prime }_{t}}=\frac{k}{\varepsilon }-\frac{CH\sigma }{2}{\big(t-{\tau _{-}}(\varepsilon )\big)^{\frac{H-2}{2}}}.\]

Equating the right-hand side of (21) to 0 and solving the resulting equation with respect to t, we obtain
\[ {t_{\ast }}={\tau _{-}}(\varepsilon )+{C_{1}}{\varepsilon ^{\frac{2}{2-H}}},\]
where ${C_{1}}={\left(\frac{CH\sigma }{2k}\right)^{\frac{2}{2-H}}}$. It is easy to check that the second derivative of the right-hand side of (20) is positive on $({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$, so ${t_{\ast }}$ is indeed its point of minimum. Therefore
\[\begin{array}{l}\displaystyle F(\varepsilon )=\frac{k}{\varepsilon }\big({t_{\ast }}-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big({t_{\ast }}-{\tau _{-}}(\varepsilon )\big)^{H/2}}=\frac{k}{\varepsilon }{C_{1}}{\varepsilon ^{\frac{2}{2-H}}}-C\sigma {\big({C_{1}}{\varepsilon ^{\frac{2}{2-H}}}\big)^{H/2}}\\ {} \displaystyle =k{C_{1}}{\varepsilon ^{\frac{H}{2-H}}}-C{C_{1}^{H/2}}\sigma {\varepsilon ^{\frac{H}{2-H}}}\to 0,\hspace{1em}\varepsilon \to 0.\end{array}\]
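As a numerical aside (not part of the proof): the analytic minimizer of the lower bound $u\mapsto \frac{k}{\varepsilon }u-C\sigma {u^{H/2}}$, $u=t-{\tau _{-}}(\varepsilon )>0$, can be checked against a grid search, and the magnitude of its minimal value indeed shrinks as $\varepsilon \to 0$; all the constants below are arbitrary.

```python
import numpy as np

k, C, sigma, H = 1.0, 1.0, 1.0, 0.3

def g(u, eps):
    """Lower bound from (20) as a function of u = t - tau_-(eps) > 0."""
    return (k / eps) * u - C * sigma * u ** (H / 2)

mins = []
for eps in (1e-1, 1e-2, 1e-3):
    u_star = (C * H * sigma * eps / (2 * k)) ** (2 / (2 - H))  # analytic minimizer
    grid = u_star * np.linspace(0.5, 2.0, 2001)
    # g is strictly convex on (0, inf), so u_star is the global minimum
    assert g(u_star, eps) <= g(grid, eps).min() + 1e-12
    mins.append(abs(g(u_star, eps)))
assert mins[0] > mins[1] > mins[2]  # |F(eps)| decreases as eps -> 0
```

The decay rate matches the computation above: the minimal value scales like ${\varepsilon ^{H/(2-H)}}$.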
□

Proof.
Indeed, denote
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Z_{+}}(t)\le {Y_{{\varepsilon ^{\ast }}}}(t).\]
It is clear that for all $t\ge 0$, ${Y_{\varepsilon }}(t)\uparrow {Z_{+}}(t)$, $\varepsilon \downarrow {\varepsilon ^{\ast }}$, so for any $t>0$:
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds={\int _{0}^{t}}{Z_{+}}(s)ds.\]
Moreover, for all $\varepsilon >{\varepsilon ^{\ast }}$:
\[ \frac{1}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }\le \frac{1}{{\varepsilon ^{\ast }}},\]
therefore
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{1}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{1}{{Z_{+}}(s){1_{\{{Z_{+}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds\]
and hence
(22)
\[\begin{array}{l}\displaystyle {Z_{+}}(t)=\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)\\ {} \displaystyle ={Y_{0}}+\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle -a\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t)\\ {} \displaystyle ={Y_{0}}+{\int _{0}^{t}}\frac{k}{{Z_{+}}(s){1_{\{{Z_{+}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds-a{\int _{0}^{t}}{Z_{+}}(s)ds+\sigma {B^{H}}(t).\end{array}\]It is known that ${Y_{{\varepsilon ^{\ast }}}}$ is the unique solution of the equation (8) with $\varepsilon ={\varepsilon ^{\ast }}$, therefore for all $t\ge 0$:
\[ {Z_{+}}(t)={Y_{{\varepsilon ^{\ast }}}}(t).\]
Next, denote
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Z_{-}}(t)\ge {Y_{{\varepsilon ^{\ast }}}}(t).\]
For all $t\ge 0$, ${Y_{\varepsilon }}(t)\downarrow {Z_{-}}(t)$, $\varepsilon \uparrow {\varepsilon ^{\ast }}$, so
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds={\int _{0}^{t}}{Z_{-}}(s)ds\]
and for all $\varepsilon \in (\frac{{\varepsilon ^{\ast }}}{2},{\varepsilon ^{\ast }})$:
\[ \frac{1}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }\le \frac{2}{{\varepsilon ^{\ast }}},\]
so
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{k}{{Z_{-}}(s){1_{\{{Z_{-}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds.\]
Therefore, similarly to (22), ${Z_{-}}$ satisfies the same equation as ${Y_{{\varepsilon ^{\ast }}}}$, so for all $t\ge 0$:
\[ {Z_{-}}(t)={Y_{{\varepsilon ^{\ast }}}}(t).\]
 □
Theorem 3.1.
Let $Y=\{Y(t),t\ge 0\}$ be the process defined by the formula (18). Then
1) the set $\{t>0|Y(t)>0\}$ is open in the natural topology on $\mathbb{R}$;
2) Y is continuous on $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$.
Proof.
We shall divide the proof into 3 steps.
Step 1. Let $\omega \in \varOmega $ be fixed. Consider an arbitrary ${t^{\ast }}\in \{t>0|Y(t)>0\}$. As ${Y_{\varepsilon }}({t^{\ast }})\to Y({t^{\ast }})$, $\varepsilon \to 0$, there exists such ${\varepsilon ^{\ast }}>0$ that for all $\varepsilon <{\varepsilon ^{\ast }}$: ${Y_{\varepsilon }}({t^{\ast }})>0$. From the continuity of ${Y_{\varepsilon }}$ with respect to t and their monotonicity with respect to ε, it follows that there exists such ${\delta ^{\ast }}={\delta ^{\ast }}({t^{\ast }})>0$ that for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ and all $\varepsilon <{\varepsilon ^{\ast }}$: ${Y_{\varepsilon }}(t)>0$.
Hence for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$: $Y(t)>0$, and therefore the set $\{t>0|Y(t)>0\}$ is open.
Step 2. Let us prove that
\[ \underset{t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})}{\sup }\big(Y(t)-{Y_{\varepsilon }}(t)\big)\to 0,\hspace{1em}\varepsilon \to 0,\]
and therefore Y is continuous on the interval $({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$. It is enough to prove that for any $\theta >0$ there exists such ${\varepsilon _{0}}={\varepsilon _{0}}(\theta )>0$ that for all $\varepsilon <{\varepsilon _{0}}$:
Indeed, let us fix an arbitrary $\theta >0$. From the definition of Y it follows that ${Y_{\varepsilon }}({t^{\ast }}-{\delta ^{\ast }})\to Y({t^{\ast }}-{\delta ^{\ast }})$ as $\varepsilon \to 0$, so there is such ${\varepsilon ^{\ast \ast }}<{\varepsilon ^{\ast }}$ that for all $\varepsilon <{\varepsilon ^{\ast \ast }}$ the following inequality holds:
\[ Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)<\theta .\]
Denote
\[\begin{aligned}{}{\varepsilon _{1}}:=\min \big\{\theta ,\hspace{2.5pt}\sup \big\{{\varepsilon ^{\ast \ast }}\in \big(0,{\varepsilon ^{\ast }}\big)\hspace{2.5pt}|\hspace{2.5pt}\forall \varepsilon \in \big(& 0,{\varepsilon ^{\ast \ast }}\big):\\ {} & Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)<\theta \big\}\big\}.\end{aligned}\]
As ${\varepsilon _{1}}\le \theta $, there exists such $C\in (0,1]$ that ${\varepsilon _{1}}=C\theta $. From the continuity with respect to ε,
\[ Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\le \theta \]
and, from the monotonicity with respect to ε, for all $\varepsilon <{\varepsilon _{1}}$:
\[ Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)\le \theta .\]
It is obvious that ${Y_{\varepsilon }}({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})\downarrow 0$ as $\varepsilon \uparrow {\varepsilon _{1}}$, so let us denote
\[\begin{aligned}{}{\varepsilon _{0}}:=\sup \bigg\{\varepsilon \in (0,{\varepsilon _{1}})\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)& -{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\\ {} & \ge \frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\bigg\}.\end{aligned}\]
It is obvious that
\[ {Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)=\frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\]
and therefore
\[\begin{array}{c}\displaystyle Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\\ {} \displaystyle =\big(Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\big)-\big({Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\big)\\ {} \displaystyle =\frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\le \frac{\theta }{2}.\end{array}\]
Moreover, for all $\varepsilon <{\varepsilon _{0}}$:
Now consider an arbitrary $\varepsilon <{\varepsilon _{0}}$ and assume that there is ${\tau _{0}}\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ such that
Denote
\[ \tau :=\inf \big\{t\in \big({t^{\ast }}-{\delta ^{\ast }},{\tau _{0}}\big)\hspace{2.5pt}|\hspace{2.5pt}\forall s\in (t,{\tau _{0}}):\hspace{2.5pt}{Y_{\varepsilon }}(s)-{Y_{{\varepsilon _{0}}}}(s)>\theta \big\}<{\tau _{0}}.\]
From the definition of ${\tau _{0}}$ and τ, for all $t\in (\tau ,{\tau _{0}})$:
However, as for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ and for all $\varepsilon <{\varepsilon ^{\ast }}$ it is true that ${1_{\{{Y_{\varepsilon }}(t)>0\}}}=1$,
(24)
\[\begin{array}{l}\displaystyle {\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)^{\prime }}\\ {} \displaystyle =\bigg(\frac{k}{{Y_{\varepsilon }}(\tau ){1_{\{{Y_{\varepsilon }}(\tau )>0\}}}+\varepsilon }-\frac{k}{{Y_{{\varepsilon _{0}}}}(\tau ){1_{\{{Y_{{\varepsilon _{0}}}}(\tau )>0\}}}+{\varepsilon _{0}}}\bigg)\\ {} \displaystyle -a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)\\ {} \displaystyle =\bigg(\frac{k}{{Y_{\varepsilon }}(\tau )+\varepsilon }-\frac{k}{{Y_{{\varepsilon _{0}}}}(\tau )+{\varepsilon _{0}}}\bigg)-a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)\\ {} \displaystyle =\frac{k({Y_{{\varepsilon _{0}}}}(\tau )-{Y_{\varepsilon }}(\tau ))+k({\varepsilon _{0}}-\varepsilon )}{({Y_{\varepsilon }}(\tau )+\varepsilon )({Y_{{\varepsilon _{0}}}}(\tau )+{\varepsilon _{0}})}-a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big).\end{array}\]
From the continuity of ${Y_{{\varepsilon _{0}}}}(t)-{Y_{\varepsilon }}(t)$ with respect to t and the definition of τ, it is clear that ${Y_{{\varepsilon _{0}}}}(\tau )-{Y_{\varepsilon }}(\tau )=-\theta $ and, as $0<\varepsilon <{\varepsilon _{0}}<{\varepsilon _{1}}=C\theta $, where $C\in (0,1]$,
Therefore
Hence, as
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)=\theta +{\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)^{\prime }}(t-\tau )+o(t-\tau ),\hspace{1em}t\to \tau ,\]
there exists an interval $(\tau ,{\tau _{1}})\subset (\tau ,{\tau _{0}})$ such that for all $t\in (\tau ,{\tau _{1}})$:
which contradicts (23). So, for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$:
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)\le \theta ,\]
and hence, as for all $\varepsilon <{\varepsilon _{0}}$ and $t\in \mathbb{R}$ it holds that ${Y_{{\varepsilon _{0}}}}(t)<{Y_{\varepsilon }}(t)<Y(t)$,
Step 3. In order to prove that
it is enough to notice that for any $\tilde{\varepsilon }>0$ there is $\tilde{t}>0$ such that for all $\varepsilon <\tilde{\varepsilon }$: ${Y_{\varepsilon }}(t)>\frac{{Y_{0}}}{2}$, $t\in [0,\tilde{t}]$.
Hence, for all $\varepsilon <\tilde{\varepsilon }$ and $t\in [0,\tilde{t}]$
\[ \frac{k}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }=\frac{k}{{Y_{\varepsilon }}(t)+\varepsilon }\le \frac{k}{{Y_{\varepsilon }}(t)}\le \frac{2k}{{Y_{0}}}\]
and so
\[ \underset{\varepsilon \to 0}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{k}{Y(s)}ds,\]
hence, for all $t\in [0,\tilde{t}]$:
This equation has a unique continuous solution, therefore Y is continuous on $[0,\tilde{t}]$. □
Remark 3.2.
From Theorem 3.1 it is easy to see that the limit square root process Y satisfies the equation of the form (4) until the first moment τ of zero hitting. Indeed, on each compact set $[0,\tilde{t}]\subset [0,\tau )$, ${Y_{\varepsilon }}$ converges to Y uniformly as $\varepsilon \to 0$ due to Dini’s theorem. Hence there is $\tilde{\varepsilon }>0$ such that for all $\varepsilon <\tilde{\varepsilon }$: ${Y_{\varepsilon }}(t)>\frac{{\min _{s\in [0,\tilde{t}]}}Y(s)}{2}>0$ for all $t\in [0,\tilde{t}]$, and, similarly to Step 3 of Theorem 3.1, it can be shown that Y satisfies equation (4) on $[0,\tilde{t}]$.
Corollary 3.3.
The trajectories of the process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$ are continuous a.e. on ${\mathbb{R}_{+}}$ a.s. and therefore are Riemann integrable a.s.
The set $\{t>0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$ is a.s. open in the natural topology on $\mathbb{R}$, so it can a.s. be represented as a finite or countable union of non-intersecting intervals, i.e.
\[ \big\{t>0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\big\}={\bigcup \limits_{i=0}^{N}}({\alpha _{i}},{\beta _{i}}),\hspace{1em}N\in \mathbb{N}\cup \{\infty \},\]
where $({\alpha _{i}},{\beta _{i}})\cap ({\alpha _{j}},{\beta _{j}})=\varnothing $, $i\ne j$. Moreover, the set $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$ can be a.s. represented as
(25)
\[ \big\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\big\}=[{\alpha _{0}},{\beta _{0}})\cup \Bigg({\bigcup \limits_{i=1}^{N}}({\alpha _{i}},{\beta _{i}})\Bigg),\]
where ${\alpha _{0}}=0$, ${\beta _{0}}$ is the first moment of zero hitting by the square root process Y, $({\alpha _{i}},{\beta _{i}})\cap ({\alpha _{j}},{\beta _{j}})=\varnothing $, $i\ne j$, and $({\alpha _{i}},{\beta _{i}})\cap [{\alpha _{0}},{\beta _{0}})=\varnothing $, $i\ne 0$.
Theorem 3.2.
Let $({\alpha _{i}},{\beta _{i}})$, $i\ge 1$, be an arbitrary interval from the representation (25). Then
1) ${\lim \nolimits_{t\to {\alpha _{i}}+}}Y(t)=0$, ${\lim \nolimits_{t\to {\beta _{i}}-}}Y(t)=0$ a.s.;
Proof.
Let ${\varOmega ^{\prime }}$ be from Lemma 3.2 and an arbitrary $\omega \in {\varOmega ^{\prime }}$ be fixed.
1) The proofs for the left and right endpoints of the interval are similar, so we give the proof for the left endpoint only.
Y is positive on $({\alpha _{i}},{\beta _{i}})$, so it is sufficient to prove that ${\overline{\lim }_{t\to {\alpha _{i}}+}}Y(t)=0$.
Assume that ${\overline{\lim }_{t\to {\alpha _{i}}+}}Y(t)=x>0$. Then for any $\delta >0$ there exists $\tau \in ({\alpha _{i}},{\alpha _{i}}+\delta )$ such that $Y(\tau )\in (\frac{3x}{4},\frac{5x}{4})$.
Let δ and such $\tau \in ({\alpha _{i}},{\alpha _{i}}+\delta )$ be fixed. ${Y_{\varepsilon }}(\tau )\uparrow Y(\tau )$ as $\varepsilon \to 0$, so there is $\varepsilon =\varepsilon (\tau )$ such that ${Y_{\varepsilon }}(\tau )\in (\frac{x}{2},\frac{5x}{4})$. It is clear that ${Y_{\varepsilon }}({\alpha _{i}})<0$, therefore there is a moment ${\tau _{1}}\in ({\alpha _{i}},\tau )$ such that
From continuity of ${Y_{\varepsilon }}$, ${Y_{\varepsilon }}({\tau _{1}})=\frac{x}{4}$, so ${Y_{\varepsilon }}(\tau )-{Y_{\varepsilon }}({\tau _{1}})>\frac{x}{4}$. On the other hand, from definitions of τ and ${\tau _{1}}$, for all $t\in [{\tau _{1}},\tau ]$: ${Y_{\varepsilon }}(t)\in (\frac{x}{4},\frac{5x}{4})$. That, together with Lemma 3.2, gives:
\[\begin{array}{c}\displaystyle \frac{x}{4}<{Y_{\varepsilon }}(\tau )-{Y_{\varepsilon }}({\tau _{1}})\\ {} \displaystyle ={\int _{{\tau _{1}}}^{\tau }}\frac{k}{{Y_{\varepsilon }}(s)+\varepsilon }ds-a{\int _{{\tau _{1}}}^{\tau }}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(\tau )-{B^{H}}({\tau _{1}})\big)\\ {} \displaystyle \le \bigg(\frac{2k}{x}+\frac{ax}{4}\bigg)(\tau -{\tau _{1}})+C{(\tau -{\tau _{1}})^{H/2}},\end{array}\]
i.e. for any $\delta >0$
which is not possible. Therefore
2) From Theorem 3.1, Y is continuous on each segment $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]\subset [{\alpha _{i}},{\beta _{i}}]$, so, due to Dini’s theorem, ${Y_{\varepsilon }}$ converges uniformly to Y on $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$ as $\varepsilon \to 0$. Moreover, there is $\delta >0$ such that for all $t\in [{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$: $Y(t)>\delta $, therefore it is easy to see that $\frac{k}{{Y_{\varepsilon }}(\cdot ){1_{\{{Y_{\varepsilon }}(\cdot )>0\}}}+\varepsilon }$ converges uniformly to $\frac{k}{Y(\cdot )}$ on $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$ as $\varepsilon \to 0$.
Hence, for all $t\in [{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$:
\[\begin{array}{c}\displaystyle {\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{Y(s)}ds=\underset{\varepsilon \to 0}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle =\underset{\varepsilon \to 0}{\lim }\Bigg({Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\alpha _{i}^{\ast }}\big)+a{\int _{{\alpha _{i}^{\ast }}}^{t}}{Y_{\varepsilon }}(s)ds-\sigma \big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big)\Bigg)\\ {} \displaystyle =Y(t)-Y\big({\alpha _{i}^{\ast }}\big)+a{\int _{{\alpha _{i}^{\ast }}}^{t}}Y(s)ds-\sigma \big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big),\end{array}\]
or
The right-hand side of (27) is right continuous with respect to ${\alpha _{i}^{\ast }}$ due to the previous part of the proof, i.e.
\[\begin{array}{c}\displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }Y\big({\alpha _{i}^{\ast }}\big)=Y({\alpha _{i}});\hspace{2em}\underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{Y(s)}ds={\int _{{\alpha _{i}}}^{t}}\frac{k}{Y(s)}ds;\\ {} \displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}Y(s)ds={\int _{{\alpha _{i}}}^{t}}Y(s)ds;\\ {} \displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }\big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big)=\big({B^{H}}(t)-{B^{H}}({\alpha _{i}})\big).\end{array}\]
Due to Lemma 3.3, $Y({\alpha _{i}})=0$, therefore (26) holds for an arbitrary $t\in [{\alpha _{i}},{\beta _{i}})$.
To get the result for $t={\beta _{i}}$, it is sufficient to pass to the limit as $t\to {\beta _{i}}$. □
Remark 3.4.
The choice of ε-approximations may be different. For example, instead of (8), it is possible to consider the equation of the form
By following the proofs of the results above, it can be verified that all of them hold for the resulting limit process $\tilde{Y}$. Furthermore, if $k,a>0$, it coincides with Y on $[0,+\infty )$.
Indeed, let $\omega \in \varOmega $ be fixed. Consider the difference
\[\begin{array}{c}\displaystyle {\Delta _{\varepsilon }}(t):=\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)\\ {} \displaystyle ={\int _{0}^{t}}\bigg(\frac{k}{\max \{\tilde{{Y_{\varepsilon }}}(s),\varepsilon \}}-\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }\bigg)ds\\ {} \displaystyle -a{\int _{0}^{t}}\big(\tilde{{Y_{\varepsilon }}}(s)-{Y_{\varepsilon }}(s)\big)ds.\end{array}\]
As $\frac{k}{\max \{x,\varepsilon \}}\ge \frac{k}{x{1_{\{x>0\}}}+\varepsilon }$ for all $x\in \mathbb{R}$, it is easy to see from Lemma 2.1 and Remark 2.2 that ${\Delta _{\varepsilon }}(t)=\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)\ge 0$. Furthermore, ${\Delta _{\varepsilon }}$ is differentiable on $(0,+\infty )$ and ${\Delta _{\varepsilon }}(0)=0$.
Assume that there is $\tau >0$ such that ${\Delta _{\varepsilon }}(\tau )\ge 2\varepsilon $ and denote
Note that, due to continuity of $\tilde{{Y_{\varepsilon }}}$ and ${Y_{\varepsilon }}$, ${\Delta _{\varepsilon }}({\tau _{\varepsilon }})=\varepsilon $ and therefore for all $t\in ({\tau _{\varepsilon }},\tau )$:
so, as ${\Delta _{\varepsilon }}$ is differentiable at ${\tau _{\varepsilon }}$,
However, as
\[\begin{array}{c}\displaystyle \max \big\{\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }}),\varepsilon \big\}=\max \big\{{Y_{\varepsilon }}({\tau _{\varepsilon }})+\varepsilon ,\varepsilon \big\}=\max \big\{{Y_{\varepsilon }}({\tau _{\varepsilon }}),0\big\}+\varepsilon \\ {} \displaystyle ={Y_{\varepsilon }}({\tau _{\varepsilon }}){1_{\{{Y_{\varepsilon }}({\tau _{\varepsilon }})>0\}}}+\varepsilon ,\end{array}\]
it is easy to see that
\[\begin{array}{c}\displaystyle {\Delta ^{\prime }_{\varepsilon }}({\tau _{\varepsilon }})=\frac{k}{\max \{\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }}),\varepsilon \}}-\frac{k}{{Y_{\varepsilon }}({\tau _{\varepsilon }}){1_{\{{Y_{\varepsilon }}({\tau _{\varepsilon }})>0\}}}+\varepsilon }-a\big(\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }})-{Y_{\varepsilon }}({\tau _{\varepsilon }})\big)\\ {} \displaystyle =-a\varepsilon <0.\end{array}\]
The obtained contradiction shows that for all $t\ge 0$: $\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)<2\varepsilon $ and therefore, by letting $\varepsilon \to 0$, it is easy to verify that for all $t\ge 0$: $\tilde{Y}(t)=Y(t)$.
Simulations illustrate that the processes indeed coincide (see Fig. 1). Euler approximations of ${Y_{\varepsilon }}$ and $\tilde{{Y_{\varepsilon }}}$ on $[0,5]$ were used with $\varepsilon =0.01$, $H=0.3$, ${Y_{0}}=a=k=\sigma =1$. The mesh of the partition was $\Delta t=0.0001$.
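For illustration, the comparison behind Fig. 1 can be sketched in a few lines of Python. This is only a minimal sketch, not the code used for the simulations above: the helper names (`fbm_cholesky`, `euler`) are ours, the grid is much coarser than the paper’s $\Delta t=0.0001$, and the fractional Brownian path is generated by a plain Cholesky factorization of its covariance matrix.

```python
import numpy as np

def fbm_cholesky(n, T, H, rng):
    """Sample B^H(t_1), ..., B^H(t_n) on t_i = i*T/n via a Cholesky
    factorization of the fBm covariance (exact, but O(n^3))."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))  # prepend B^H(0) = 0

def euler(drift, bh, T, y0, sigma):
    """Explicit Euler scheme: Y_{i+1} = Y_i + drift(Y_i)*dt + sigma*(fBm increment)."""
    n = len(bh) - 1
    dt = T / n
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        y[i + 1] = y[i] + drift(y[i]) * dt + sigma * (bh[i + 1] - bh[i])
    return y

k = a = sigma = y0 = 1.0
eps, H, T, n = 0.01, 0.3, 5.0, 2000   # same parameters as above, coarser grid
rng = np.random.default_rng(0)
bh = fbm_cholesky(n, T, H, rng)       # one fBm path driving both equations

# the indicator regularization of the drift, as in equation (8) ...
drift_ind = lambda y: k / (y * (y > 0) + eps) - a * y
# ... and the max-regularization of Remark 3.4
drift_max = lambda y: k / max(y, eps) - a * y

y_ind = euler(drift_ind, bh, T, y0, sigma)
y_max = euler(drift_max, bh, T, y0, sigma)
print(np.max(np.abs(y_max - y_ind)))  # sup-distance between the two regularized paths
```

The Cholesky method is exact but cubic in the number of grid points, which is why the sketch uses a coarser grid; for meshes as fine as $\Delta t=0.0001$ a circulant-embedding (FFT) sampler would be the usual choice.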
4 Equation for the square root process Y for $t\ge 0$
The equation for $Y(t)$ in the case $t\in [0,{\beta _{0}}]$ has already been obtained in Remark 3.3. In order to get the equation for an arbitrary $t\in {\mathbb{R}_{+}}$, consider the following procedure.
Let $\mathcal{I}$ be the set of all non-intersecting open intervals from representation (25), i.e.
Consider an arbitrary interval $(\alpha ,\beta )\in \mathcal{I}$ and assume that $t\in [\alpha ,\beta ]$. Then, from Theorem 3.2,
Consider all intervals $({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}})\in \mathcal{I}$, $j=1,\dots ,M$, $M\in \mathbb{N}\cup \{\infty \}$, such that ${\tilde{\alpha }_{j}}<\alpha $.
For each $j=1,\dots ,M$,
\[ 0=Y({\tilde{\beta }_{j}})={\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\frac{k}{Y(s)}ds-a{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}Y(s)ds+\sigma \big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big).\]
Moreover,
\[ 0=Y({\beta _{0}})={Y_{0}}+{\int _{0}^{{\beta _{0}}}}\frac{k}{Y(s)}ds-a{\int _{0}^{{\beta _{0}}}}Y(s)ds+\sigma {B^{H}}({\beta _{0}}).\]
Therefore
\[\begin{array}{c}\displaystyle Y(t)=Y({\beta _{0}})+{\sum \limits_{j=1}^{M}}Y({\tilde{\beta }_{j}})+Y(t)\\ {} \displaystyle ={Y_{0}}+\Bigg({\int _{0}^{{\beta _{0}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\sum \limits_{j=1}^{M}}{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +{\int _{\alpha }^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg)\\ {} \displaystyle ={Y_{0}}+{\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg)\\ {} \displaystyle ={Y_{0}}+{\int _{0}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg).\end{array}\]
The question of whether Y satisfies an equation of the form (28) on the whole of ${\mathbb{R}_{+}}$ remains open due to the obscure structure of its zero set. Simulation studies do not contradict this conjecture either.
Indeed, consider the process $Z=\{Z(t),t\ge 0\}$ that satisfies the SDE
where Y is, as usual, the pointwise limit of ${Y_{\varepsilon }}$, and ${Y_{0}}$, k, a, σ are positive constants. By explicitly following the proof of Proposition A.1 in [3], it can be shown that
Now assume that Y satisfies the equation of the form (28) for all $t\ge 0$. Then Y admits the representation of the form (29), i.e.
(30)
\[ Y(t)={e^{-at}}\Bigg({Y_{0}}+{\int _{0}^{t}}\frac{k{e^{as}}}{Y(s)}ds+\sigma {\int _{0}^{t}}{e^{as}}d{B^{H}}(s)\Bigg),\]
so the paths of Y and the paths obtained by direct simulation of the right-hand side of (30) must coincide. In other words, if we simulated the trajectory of Y, applied the transform given by the right-hand side of (30) to it, and obtained a trajectory that significantly differs from the initial one, this would be evidence that Y does not satisfy the equation of the form (28) for all $t\ge 0$.
To simulate the left-hand side of (30), the Euler approximations of the process ${Y_{\varepsilon }}$ with $\varepsilon =0.01$ are used. They are compared to the right-hand side of (30) simulated explicitly using the same ${Y_{\varepsilon }}$. The mesh of the partition is $\Delta t=0.0001$, $T=5$, $H=0.4$, ${Y_{0}}=k=a=\sigma =1$ (see Fig. 2).
Fig. 2.
Comparison of right-hand and left-hand sides of (30)
As we see, the Euler approximation of Y (of ${Y_{\varepsilon }}$ with small ε, black) indeed almost coincides with its transformation given by the right-hand side of (30) (red), so no contradiction is obtained.
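The transform on the right-hand side of (30) can be sketched as follows. This is an illustration only, not the code behind Fig. 2: the function name `transform_rhs` and the left-point discretization of both integrals are our choices.

```python
import numpy as np

def transform_rhs(y, bh, t, k, a, sigma, y0):
    """Right-hand side of (30) along a grid t:
    e^{-at} * ( Y0 + int_0^t k e^{as}/Y(s) ds + sigma int_0^t e^{as} dB^H(s) ),
    with both integrals approximated by left-point (Riemann-Stieltjes) sums."""
    dt = np.diff(t)
    eas = np.exp(a * t)
    drift = np.concatenate(([0.0], np.cumsum(k * eas[:-1] / y[:-1] * dt)))
    stoch = np.concatenate(([0.0], np.cumsum(eas[:-1] * np.diff(bh))))
    return np.exp(-a * t) * (y0 + drift + sigma * stoch)

# Deterministic sanity check: with the noise switched off (bh = 0) and
# k = a*y0**2, the constant path Y = y0 solves Y' = k/Y - a*Y, so the
# transform must return y0 itself up to O(dt) quadrature error.
t = np.linspace(0.0, 5.0, 1001)
a, y0 = 1.0, 1.0
k = a * y0**2
y = np.full_like(t, y0)
bh = np.zeros_like(t)
z = transform_rhs(y, bh, t, k, a, sigma=1.0, y0=y0)
print(np.max(np.abs(z - y0)))  # small, of order a*dt/2
```

In the stochastic case one would feed an Euler path of ${Y_{\varepsilon }}$ and the same simulated fBm path into `transform_rhs` and compare the output with the input path, as in Fig. 2.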
5 Fractional Cox–Ingersoll–Ross process on ${\mathbb{R}_{+}}$
Consider a set of random processes $\{{Y_{\varepsilon }},\varepsilon >0\}$ which satisfy the equations of the form
\[ {Y_{\varepsilon }}(t)={Y_{0}}+\frac{1}{2}{\int _{0}^{t}}\bigg(\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }-a{Y_{\varepsilon }}(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}(t),\]
driven by the same fractional Brownian motion, with the same parameters k, a, $\sigma >0$ and the same starting point ${Y_{0}}>0$. Let the process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$ be such that for all $t\ge 0$:
Let us show that this definition is indeed a generalization of the original Definition 1.1.
First, we will require the following definition.
Definition 5.2.
Let $\{{U_{t}},t\ge 0\}$, $\{{V_{t}},t\ge 0\}$ be random processes. The pathwise Stratonovich integral ${\int _{0}^{T}}{U_{s}}\circ d{V_{s}}$ is a pathwise limit of the following sums
\[ {\sum \limits_{k=1}^{n}}\frac{{U_{{t_{k}}}}+{U_{{t_{k-1}}}}}{2}({V_{{t_{k}}}}-{V_{{t_{k-1}}}}),\]
as the mesh of the partition $0={t_{0}}<{t_{1}}<{t_{2}}<\cdots <{t_{n-1}}<{t_{n}}=T$ tends to zero, provided that this limit exists. Taking into account the results of the previous sections and Theorem 1 in [18], for all $t\in [0,{\beta _{0}}]$:
A similar result holds for all $t\in [{\alpha _{i}},{\beta _{i}}]$, $i\ge 1$.
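Definition 5.2 is straightforward to evaluate along a fixed partition. The following sketch (our illustration; a random walk stands in for a sampled path) also checks that for $U=V$ the Stratonovich sum telescopes exactly to $\frac{1}{2}({V_{T}^{2}}-{V_{0}^{2}})$ on every partition, which is the same difference-of-squares computation exploited in the proof of Theorem 5.1 below.

```python
import numpy as np

def stratonovich_sum(u, v):
    """The sum from Definition 5.2 along one fixed partition:
    sum_k (U_{t_k} + U_{t_{k-1}})/2 * (V_{t_k} - V_{t_{k-1}})."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(v)))

rng = np.random.default_rng(1)
v = np.cumsum(rng.standard_normal(1000))  # any (rough) discrete path will do
lhs = stratonovich_sum(v, v)
rhs = 0.5 * (v[-1]**2 - v[0]**2)
print(abs(lhs - rhs))  # zero up to floating-point rounding
```

Note that this identity holds pathwise for any partition and any path, which is exactly why the Stratonovich sums of $\sqrt{X}$ against ${B^{H}}$ below converge without any martingale structure.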
Theorem 5.1.
Let $({\alpha _{i}},{\beta _{i}})$, $i\ge 1$, be an arbitrary interval from the representation (25). Consider the fractional Cox–Ingersoll–Ross process $X=\{X(t),\hspace{2.5pt}t\in [{\alpha _{i}},{\beta _{i}}]\}$ from Definition 5.1.
Then, for ${\alpha _{i}}\le t\le {\beta _{i}}$ the process X a.s. satisfies the SDE
where the integral with respect to the fractional Brownian motion is defined as the pathwise Stratonovich integral.
Proof.
We shall follow the proof of Theorem 1 from [18].
Let us fix an $\omega \in \varOmega $ and consider an arbitrary $t\in [{\alpha _{i}},{\beta _{i}}]$.
It is clear that
Consider an arbitrary partition of the interval $[{\alpha _{i}},t]$:
Taking into account that $X({\alpha _{i}})=0$ and using (5), we obtain
\[\begin{aligned}{}X(t)& ={\sum \limits_{j=1}^{n}}\big(X({t_{j}})-X({t_{j-1}})\big)\\ {} & ={\sum \limits_{j=1}^{n}}\Bigg({\Bigg[\frac{1}{2}{\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}({t_{j}})-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg]^{2}}\\ {} & \hspace{1em}-{\Bigg[\frac{1}{2}{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}({t_{j-1}})-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg]^{2}}\Bigg)\end{aligned}\]
Factoring each summand as the difference of squares, we get:
\[\begin{aligned}{}X(t)& ={\sum \limits_{j=1}^{n}}\Bigg[\frac{1}{2}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} & \hspace{1em}+\frac{\sigma }{2}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big)-\sigma {B^{H}}({\alpha _{i}})\Bigg]\\ {} & \hspace{2em}\times \Bigg[\frac{1}{2}{\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\Bigg].\end{aligned}\]
Expanding the brackets in the last expression, we obtain:
(34)
\[\begin{array}{l}\displaystyle X(t)=\frac{1}{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle -\frac{\sigma }{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\Bigg({\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times \big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle +\frac{{\sigma ^{2}}}{4}{\sum \limits_{j=1}^{n}}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big({B^{H}}({t_{j}})+{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle -\frac{{\sigma ^{2}}}{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\big(\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big).\end{array}\]
Let the mesh $\Delta t$ of the partition tend to zero. The first three summands converge as follows:
(35)
\[\begin{array}{l}\displaystyle \frac{1}{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times {\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle -\frac{\sigma }{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\Bigg({\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \to {\int _{{\alpha _{i}}}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)\\ {} \displaystyle \times \Bigg(\frac{1}{2}{\int _{{\alpha _{i}}}^{s}}\bigg(\frac{k}{Y(u)}-aY(u)\bigg)du+\frac{\sigma }{2}{B^{H}}(s)-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg)ds\\ {} \displaystyle ={\int _{{\alpha _{i}}}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)Y(s)ds={\int _{{\alpha _{i}}}^{t}}\big(k-aX(s)\big)ds,\hspace{1em}\Delta t\to 0,\end{array}\]
and the last three summands converge as follows:
(36)
\[\begin{array}{l}\displaystyle \frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times \big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle +\frac{{\sigma ^{2}}}{4}{\sum \limits_{j=1}^{n}}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big({B^{H}}({t_{j}})+{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle -\frac{{\sigma ^{2}}}{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\big(\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big)\\ {} \displaystyle \to \sigma {\int _{{\alpha _{i}}}^{t}}\Bigg(\frac{1}{2}{\int _{{\alpha _{i}}}^{s}}\bigg(\frac{k}{Y(u)}-aY(u)\bigg)du\\ {} \displaystyle +\frac{\sigma }{2}{B^{H}}(s)-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg)\circ d{B^{H}}(s)\\ {} \displaystyle =\sigma {\int _{{\alpha _{i}}}^{t}}Y(s)\circ d{B^{H}}(s)=\sigma {\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s),\hspace{1em}\Delta t\to 0.\end{array}\]
Note that the left-hand side of (34) does not depend on the partition and the limit in (35) exists as a pathwise Riemann integral; therefore the corresponding pathwise Stratonovich integral in (36) exists as well, and the passage to the limit is correct.
Thus, the process X satisfies the SDE of the form
(37)
\[ \displaystyle X(t)={\int _{{\alpha _{i}}}^{t}}\big(k-aX(s)\big)ds+\sigma {\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s),\hspace{1em}t\in [{\alpha _{i}},{\beta _{i}}],\]
where ${\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s)$ is a pathwise Stratonovich integral. □
Consider an arbitrary $(\alpha ,\beta )\in \mathcal{I}$, where $\mathcal{I}$ is the set of all open intervals from representation (25) (see Section 4). Let ${\beta _{0}}$ be the first moment of zero hitting by Y, and let $({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}})\in \mathcal{I}$, $j=1,\dots ,M$, $M\in \mathbb{N}\cup \{\infty \}$, be all the intervals such that ${\tilde{\alpha }_{j}}<\alpha $.
Theorem 5.2.
Let $t\in [\alpha ,\beta ]$. Then
\[\begin{array}{c}\displaystyle X(t)=X(0)+{\int _{0}^{t}}\big(k-aX(s)\big)ds\\ {} \displaystyle +\sigma {\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\sqrt{X(s)}\circ d{B^{H}}(s),\end{array}\]
where
\[\begin{array}{c}\displaystyle {\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\sqrt{X(s)}\circ d{B^{H}}(s)\\ {} \displaystyle :={\int _{0}^{{\beta _{0}}}}\sqrt{X(s)}\circ d{B^{H}}(s)+{\sum \limits_{j=1}^{M}}{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\sqrt{X(s)}\circ d{B^{H}}(s)\\ {} \displaystyle +{\int _{\alpha }^{t}}\sqrt{X(s)}\circ d{B^{H}}(s).\end{array}\]