Modern Stochastics: Theory and Applications

Fractional Cox–Ingersoll–Ross process with small Hurst indices
Volume 6, Issue 1 (2019), pp. 13–39
Yuliya Mishura   Anton Yurchenko-Tytarenko  

https://doi.org/10.15559/18-VMSTA126
Pub. online: 21 December 2018      Type: Research Article      Open Access

Received: 27 August 2018
Revised: 3 December 2018
Accepted: 3 December 2018
Published: 21 December 2018

Abstract

In this paper the fractional Cox–Ingersoll–Ross process on ${\mathbb{R}_{+}}$ for $H<1/2$ is defined as the square of the pointwise limit, as $\varepsilon \downarrow 0$, of the processes ${Y_{\varepsilon }}$ satisfying the SDE of the form $d{Y_{\varepsilon }}(t)=(\frac{k}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }-a{Y_{\varepsilon }}(t))dt+\sigma d{B^{H}}(t)$. Properties of the limit process are considered. SDEs for both the limit process and the fractional Cox–Ingersoll–Ross process are obtained.

Introduction

The Cox–Ingersoll–Ross (CIR) process, which was first introduced and studied by Cox, Ingersoll and Ross in the papers [4–6], can be defined as the process $X=\{{X_{t}},t\ge 0\}$ that satisfies the stochastic differential equation of the form
(1)
\[ d{X_{t}}=a(b-{X_{t}})dt+\sigma \sqrt{{X_{t}}}d{W_{t}},\hspace{1em}{X_{0}},a,b,\sigma >0,\]
where $W=\{{W_{t}},t\ge 0\}$ is the Wiener process.
The CIR process was originally proposed as a model for the evolution of interest rates in time, and in this framework the parameters a, b and σ are interpreted as follows: b is the “mean”, i.e. the level around which the trajectories of X evolve in the long run; a is the “speed of adjustment”, which measures how quickly the trajectories regroup around b; σ is the instantaneous volatility, which shows the amplitude of the randomness entering the system. Another common application is stochastic volatility modeling in the Heston model, which was proposed in [10] (an extensive bibliography on the subject can be found in [11] and [12]).
For the sake of simplicity, we shall use another parametrization of the equation (1), namely
(2)
\[ d{X_{t}}=(k-a{X_{t}})dt+\sigma \sqrt{{X_{t}}}d{W_{t}},\hspace{1em}{X_{0}},k,a,\sigma >0.\]
According to [9], if the condition $2k\ge {\sigma ^{2}}$, sometimes referred to as the Feller condition, holds, then the CIR process is strictly positive. It is also well-known that this process is ergodic and has a stationary distribution.
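To illustrate the dynamics (2) numerically, here is a minimal Euler–Maruyama sketch in Python (it is not part of the original article; the truncation inside the square root and all parameter values are illustrative choices only):

```python
import numpy as np

def simulate_cir(x0=1.0, k=1.0, a=1.0, sigma=1.0, T=5.0, n=50_000, seed=0):
    """Euler-Maruyama scheme for dX = (k - a X) dt + sigma sqrt(X) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dw = rng.normal(0.0, np.sqrt(dt), size=n)   # Wiener increments
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        # full truncation: the exact process is strictly positive under 2k >= sigma^2,
        # but the discretized path may still dip slightly below zero
        x[i + 1] = x[i] + (k - a * x[i]) * dt + sigma * np.sqrt(max(x[i], 0.0)) * dw[i]
    return x

path = simulate_cir()
print(path.min(), path.mean())
```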
However, in reality the dynamics of financial markets is characterized by the so-called “memory phenomenon”, which cannot be captured by the standard CIR model (for more details on financial markets with memory, see [1, 2, 7, 21]). Therefore, it is reasonable to introduce a fractional Cox–Ingersoll–Ross process, i.e. to modify the equation (2) by replacing (in some sense) the standard Wiener process with a fractional Brownian motion ${B^{H}}=\{{B_{t}^{H}},t\ge 0\}$, that is, a centered Gaussian process with the covariance function $\mathbb{E}[{B_{t}^{H}}{B_{s}^{H}}]=\frac{1}{2}({t^{2H}}+{s^{2H}}-|t-s{|^{2H}})$.
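Since this covariance function determines ${B^{H}}$ completely, the process can be sampled on a finite grid by a Cholesky factorization of its covariance matrix. The following Python sketch (again not part of the original article, and practical only for grids of a few thousand points because of the cubic cost of the factorization) does exactly that; a variant of it is used in the simulation sketch accompanying Fig. 1 below.

```python
import numpy as np

def fbm_cholesky(H=0.3, T=5.0, n=1000, seed=0):
    """Sample B^H on the grid t_i = i*T/n from the covariance
    E[B^H(t)B^H(s)] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)                      # B^H(0) = 0 is prepended below
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt ** (2 * H) + ss ** (2 * H) - np.abs(tt - ss) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for numerical stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

print(fbm_cholesky()[:5])
```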
There are several approaches to the definition of the fractional Cox–Ingersoll–Ross process. The paper [15] introduces the so-called “rough-path” approach; in [13, 14] it is defined as a time-changed standard CIR process with inverse stable subordinator; another definition is presented in [8] as a part of discussion on rough Heston models.
A simpler pathwise approach is presented in [17] and [18]. There, the fractional Cox–Ingersoll–Ross process was defined as the square of the solution of the SDE
(3)
\[ d{Y_{t}}=\frac{1}{2}\bigg(\frac{k}{{Y_{t}}}-a{Y_{t}}\bigg)dt+\frac{\sigma }{2}d{B_{t}^{H}},\hspace{1em}{Y_{0}}>0,\]
until the first moment of hitting zero, and as zero after this moment (the latter convention was necessary since, in the case $k>0$, the existence of a solution of (3) cannot be guaranteed after the first moment of reaching zero).
The reason for such a definition is that the fractional Cox–Ingersoll–Ross process X defined in this way satisfies pathwise (until the first moment of hitting zero) the equation
\[ d{X_{t}}=(k-a{X_{t}})dt+\sigma \sqrt{{X_{t}}}\circ d{B_{t}^{H}},\hspace{1em}{X_{0}}={Y_{0}^{2}}>0,\]
where the integral with respect to the fractional Brownian motion is considered as the pathwise Stratonovich integral.
It was shown that if $k>0$ and $H>\frac{1}{2}$, such a process is strictly positive and never hits zero; if $k>0$ and $H<\frac{1}{2}$, the probability that there is no zero hitting on a fixed interval $[0,T]$, $T>0$, tends to 1 as $k\to \infty $.
The special case of $k=0$ was considered in [17]. In this situation Y is the well-known fractional Ornstein–Uhlenbeck process (for more details see [3]); if $a\ge 0$, the fractional Cox–Ingersoll–Ross process hits zero almost surely, while if $a<0$ the probability of hitting zero is strictly between 0 and 1.
However, such a definition has a significant disadvantage: according to it, the process remains at zero after reaching this level, and if $H<1/2$ this case cannot be excluded. In this paper we generalize the approach presented in [18] and [17] in order to solve this issue.
We define the fractional CIR process on ${\mathbb{R}_{+}}$ for $H<1/2$ as the square of Y which is the pointwise limit as $\varepsilon \downarrow 0$ of the processes ${Y_{\varepsilon }}$ that satisfy the SDE of the following type:
\[ d{Y_{\varepsilon }}(t)=\frac{1}{2}\bigg(\frac{k}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }-a{Y_{\varepsilon }}(t)\bigg)dt+\frac{\sigma }{2}d{B^{H}}(t),\hspace{1em}{Y_{\varepsilon }}(0)={Y_{0}}>0,\]
where a, k, $\sigma >0$.
We prove that this limit indeed exists, is nonnegative a.s. and is positive a.e. with respect to the Lebesgue measure a.s. Moreover, Y is continuous and satisfies an equation of the form
\[ Y(t)=Y(\alpha )+\frac{1}{2}{\int _{\alpha }^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\]
for all $t\in [\alpha ,\beta ]$ where $(\alpha ,\beta )$ is any interval of Y’s positiveness.
The possibility of getting the equation of the form (3) is restricted due to the obscure structure of the set $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)=0\}$ which is connected to the structure of the level sets of the fractional Brownian motion. For some results on the latter see, for example, [19].
This paper is organised as follows.
In Section 1 we give preliminary remarks on some results concerning the existence and uniqueness of solutions of SDEs driven by an additive fractional Brownian motion, as well as explain the connection of this paper to [17] and [18].
In Section 2 we construct the square root process as the limit of approximations. In particular, this section contains a variant of the comparison lemma and a uniform bound for all moments of the prelimit processes.
In Section 3 we consider properties of the square root process. We prove that Y is nonnegative and positive a.e., and also continuous.
In Section 4 we give some remarks concerning the equation for the limit square root process on ${\mathbb{R}_{+}}$. We obtain the equation for this process with the noise in the form of a sum of increments of the fractional Brownian motion over the intervals of Y’s positiveness.
Section 5 is fully devoted to the pathwise definition and equation of the fractional Cox–Ingersoll–Ross process. We prove that on each interval of positiveness the process satisfies the CIR SDE with the integral with respect to a fractional Brownian motion, considered as the pathwise Stratonovich integral. The equation on an arbitrary finite interval is also obtained, with the integral with respect to a fractional Brownian motion considered as the sum of pathwise Stratonovich integrals on intervals of positiveness.

1 Fractional Cox–Ingersoll–Ross process until the first moment of zero hitting

Let ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ be a fractional Brownian motion with an arbitrary Hurst index $H\in (0,1)$. Consider the process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$, such that
(4)
\[ dY(t)=\frac{1}{2}\bigg(\frac{k}{Y(t)}-aY(t)\bigg)dt+\frac{\sigma }{2}d{B^{H}}(t),\hspace{1em}t\ge 0,\hspace{2.5pt}{Y_{0}},a,k,\sigma >0.\]
Note that, according to [20], the sufficient conditions that guarantee the existence and the uniqueness of the strong solution of the equation
\[ Z(t)=z+{\int _{0}^{t}}f\big(s,Z(s)\big)ds+{B^{H}}(t),\hspace{1em}t\in [0,T],\]
are as follows:
(i) for $H<1/2$: linear growth condition:
(5)
\[ \big|f(t,z)\big|\le C\big(1+|z|\big),\]
where C is a positive constant;
(ii) for $H>1/2$: Hölder continuity:
(6)
\[ \big|f({t_{1}},{z_{1}})-f({t_{2}},{z_{2}})\big|\le C\big(|{z_{1}}-{z_{2}}{|^{\alpha }}+|{t_{1}}-{t_{2}}{|^{\gamma }}\big),\]
where C is a positive constant, ${z_{1}},{z_{2}}\in \mathbb{R}$, ${t_{1}},{t_{2}}\in [0,T]$, $1>\alpha >1-\frac{1}{2H}$, $\gamma >H-\frac{1}{2}$.
The function
\[ f(y)=\frac{k}{y}-ay\]
satisfies both (i) and (ii) for all $y\in (\delta ,+\infty )$, where $\delta \in (0,{Y_{0}})$ is arbitrary. Therefore for all $H\in (0,1)$ the unique strong solution of (4) on $[0,T]$ exists until the first moment of hitting the level $\delta \in (0,{Y_{0}})$ and, from the fact that δ is arbitrary, it exists until the first moment of zero hitting.
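Indeed, for any fixed $\delta \in (0,{Y_{0}})$ and all $y,{y_{1}},{y_{2}}\in (\delta ,+\infty )$ one has
\[ \big|f(y)\big|\le \frac{k}{\delta }+a|y|\le \bigg(\frac{k}{\delta }+a\bigg)\big(1+|y|\big),\hspace{2em}\big|f({y_{1}})-f({y_{2}})\big|=\bigg(\frac{k}{{y_{1}}{y_{2}}}+a\bigg)|{y_{1}}-{y_{2}}|\le \bigg(\frac{k}{{\delta ^{2}}}+a\bigg)|{y_{1}}-{y_{2}}|,\]
so the linear growth condition (5) holds on $(\delta ,+\infty )$ with $C=\frac{k}{\delta }+a$, and f is Lipschitz continuous there.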
Let $\tau :=\sup \{t>0\hspace{2.5pt}|\hspace{2.5pt}\forall s\in [0,t):Y(s)>0\}$ be the first moment of zero hitting by Y. The fractional Cox–Ingersoll–Ross process was defined in [18] as follows.
Definition 1.1.
The fractional Cox–Ingersoll–Ross (CIR) process is the process $X=\{X(t),\hspace{2.5pt}t\ge 0\}$, such that
(7)
\[ X(t)={Y^{2}}(t){1_{\{t<\tau \}}},\]
where Y is the solution of the equation (4).
Remark 1.1.
In what follows, Y will sometimes be referred to as the square root process, as it is indeed the square root of the fractional CIR process.
It is known ([18], Theorem 2) that if $H>\frac{1}{2}$, the process Y is strictly positive a.s., therefore the fractional Cox–Ingersoll–Ross process is simply ${Y^{2}}(t)$, $t\ge 0$.
However, in the case of $H<\frac{1}{2}$, the process Y may hit zero. In such a situation, according to (7), the corresponding trajectories of the fractional Cox–Ingersoll–Ross process remain at zero after this moment, which is an undesirable property for financial applications.
Our goal is to modify the definition of the fractional CIR process in order to remove the problem mentioned above.

2 Construction of the square root process on ${\mathbb{R}_{+}}$ as a limit of ε-approximations

Consider the process ${Y_{\varepsilon }}=\{{Y_{\varepsilon }}(t),t\in [0,T]\}$ that satisfies the equation of the form
(8)
\[ {Y_{\varepsilon }}(t)={Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds-a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t),\hspace{1em}t\ge 0,\]
where ${Y_{0}},k,a,\sigma >0$ and ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ is a fractional Brownian motion with $H\in (0,1/2)$.
Remark 2.1.
We will sometimes call ${Y_{\varepsilon }}$ an ε-approximation of the square-root process Y.
For any $T>0$, the function ${f_{\varepsilon }}$: $\mathbb{R}\to \mathbb{R}$ such that
\[ {f_{\varepsilon }}(y)=\frac{k}{y{1_{\{y>0\}}}+\varepsilon }-ay\]
satisfies the conditions (5) and (6), therefore the strong solution of (8) exists and is unique.
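Indeed, no cutoff away from zero is needed for ${f_{\varepsilon }}$: for all $y\in \mathbb{R}$,
\[ \big|{f_{\varepsilon }}(y)\big|\le \frac{k}{y{1_{\{y>0\}}}+\varepsilon }+a|y|\le \frac{k}{\varepsilon }+a|y|\le \bigg(\frac{k}{\varepsilon }+a\bigg)\big(1+|y|\big),\]
so the linear growth condition (5), which is the relevant condition for $H\in (0,1/2)$, holds on the whole real line with $C=\frac{k}{\varepsilon }+a$.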
The goal of this section is to prove that there is a pointwise limit of ${Y_{\varepsilon }}$ as $\varepsilon \to 0$.
First, let us prove the analogue of the comparison Lemma.
Lemma 2.1.
Assume that continuous random processes ${Y_{1}}=\{{Y_{1}}(t),t\ge 0\}$ and ${Y_{2}}=\{{Y_{2}}(t),t\ge 0\}$ satisfy the equations of the form
(9)
\[ \displaystyle {Y_{i}}(t)={Y_{0}}+{\int _{0}^{t}}{f_{i}}\big({Y_{i}}(s)\big)ds+{\int _{0}^{t}}{\alpha _{i}}(s)ds+\sigma {B^{H}}(t),\hspace{1em}t\ge 0,\hspace{2.5pt}i=1,2,\]
where ${B^{H}}=\{{B^{H}}(t),t\ge 0\}$ is a fractional Brownian motion, ${Y_{0}}$, σ>0 are constants, ${\alpha _{i}}=\{{\alpha _{i}}(t),t\ge 0\}$, $i=1,2$, are continuous random processes and ${f_{1}}$, ${f_{2}}$: $\mathbb{R}\to \mathbb{R}$ are continuous functions, such that:
(i) for all $y\in \mathbb{R}$: ${f_{1}}(y)<{f_{2}}(y)$;
(ii) for all $\omega \in \varOmega $, for all $t\ge 0$: ${\alpha _{1}}(t,\omega )\le {\alpha _{2}}(t,\omega )$.
Then, for all $\omega \in \varOmega $, $t\ge 0$: ${Y_{1}}(t,\omega )<{Y_{2}}(t,\omega )$.
Proof.
Let $\omega \in \varOmega $ be fixed (we will omit ω in brackets in what follows).
Denote
\[\begin{array}{c}\displaystyle \delta (t)={Y_{2}}(t)-{Y_{1}}(t)\\ {} \displaystyle ={\int _{0}^{t}}\big({f_{2}}\big({Y_{2}}(s)\big)-{f_{1}}\big({Y_{1}}(s)\big)\big)ds+{\int _{0}^{t}}\big({\alpha _{2}}(s)-{\alpha _{1}}(s)\big)ds,\hspace{1em}t\ge 0.\end{array}\]
The function δ is differentiable, $\delta (0)=0$ and
\[ {\delta ^{\prime }_{+}}(0)=\big({f_{2}}({Y_{0}})-{f_{1}}({Y_{0}})\big)+\big({\alpha _{2}}(0)-{\alpha _{1}}(0)\big)>0.\]
It is clear that $\delta (t)={\delta ^{\prime }_{+}}(0)t+o(t)$, $t\to 0+$, so there exists the maximal interval $(0,{t^{\ast }})\subset (0,\infty )$ such that $\delta (t)>0$ for all $t\in (0,{t^{\ast }})$. It is also clear that
\[ {t^{\ast }}=\sup \big\{t>0\hspace{2.5pt}|\hspace{2.5pt}\forall s\in (0,t):\delta (s)>0\big\}.\]
Assume that ${t^{\ast }}<\infty $. According to the definition of ${t^{\ast }}$, $\delta ({t^{\ast }})=0$. Hence ${Y_{1}}({t^{\ast }})={Y_{2}}({t^{\ast }})={Y^{\ast }}$ and
\[ {\delta ^{\prime }}\big({t^{\ast }}\big)=\big({f_{2}}\big({Y^{\ast }}\big)-{f_{1}}\big({Y^{\ast }}\big)\big)+\big({\alpha _{2}}\big({t^{\ast }}\big)-{\alpha _{1}}\big({t^{\ast }}\big)\big)>0.\]
As $\delta (t)={\delta ^{\prime }}({t^{\ast }})(t-{t^{\ast }})+o(t-{t^{\ast }})$, $t\to {t^{\ast }}$, there exists $\varepsilon >0$ such that $\delta (t)<0$ for all $t\in ({t^{\ast }}-\varepsilon ,{t^{\ast }})$, which contradicts the definition of ${t^{\ast }}$.
Therefore, $\forall t>0$:
(10)
\[ {Y_{1}}(t)<{Y_{2}}(t).\]
 □
Remark 2.2.
It is obvious that Lemma 2.1 still holds if we replace the index set $[0,+\infty )$ by $[a,b]$, $a<b$, or if we consider the case ${Y_{1}}(0)<{Y_{2}}(0)$.
Moreover, the condition (i) can be replaced by ${f_{1}}(y)\le {f_{2}}(y)$. In this case it can be obtained that ${Y_{1}}(t,\omega )\le {Y_{2}}(t,\omega )$ for all $\omega \in \varOmega $ and $t\ge 0$.
According to Lemma 2.1, for any ${\varepsilon _{1}}>{\varepsilon _{2}}$ and for all $t\in (0,\infty )$:
\[ {Y_{{\varepsilon _{1}}}}(t)<{Y_{{\varepsilon _{2}}}}(t).\]
Now, let us show that there exists the limit
(11)
\[ \underset{\varepsilon \to 0}{\lim }{Y_{\varepsilon }}(t)=Y(t)<\infty .\]
We will need an auxiliary result, presented in [16].
Lemma 2.2.
For all $r\ge 1$:
\[ \mathbb{E}\Big[\underset{t\in [0,T]}{\sup }{\big|{B^{H}}(t)\big|^{r}}\Big]<\infty .\]
Let a, k, $\sigma >0$, $H\in (0,1)$.
Theorem 2.1.
For all $H\in (0,1)$, $T>0$ and $r\ge 1$ there are non-random constants ${C_{1}}={C_{1}}(T,r,{Y_{0}},a,k)>0$ and ${C_{2}}={C_{2}}(T,r,a,\sigma )>0$ such that for all $\varepsilon >0$ and for all $t\in [0,T]$:
(12)
\[ {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {C_{1}}+{C_{2}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Proof.
Let an arbitrary $\omega \in \varOmega $, $r\ge 1$ and $T>0$ be fixed (we will omit ω in what follows).
Let us prove that for all $\varepsilon >0$ and for all $t\in [0,T]$:
(13)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)\\ {} \displaystyle +{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Consider the moment
\[ {\tau _{1}}(\varepsilon ):=\sup \bigg\{s\in [0,T]\hspace{2.5pt}|\hspace{2.5pt}\forall u\in [0,s]:{Y_{\varepsilon }}(u)\ge \frac{{Y_{0}}}{2}\bigg\}.\]
Note that ${Y_{\varepsilon }}$ is continuous, so $0<{\tau _{1}}(\varepsilon )\le T$ and, moreover, for all $t\in [0,{\tau _{1}}(\varepsilon )]$: ${Y_{\varepsilon }}(t)\ge \frac{{Y_{0}}}{2}>0$.
In order to make the further proof more convenient for the reader, we shall divide it into 3 steps. In Steps 1 and 2, we will separately show that (13) holds for all $t\in [0,{\tau _{1}}(\varepsilon )]$ and for all $t\in ({\tau _{1}}(\varepsilon ),T]$, and in Step 3 we will obtain the final result.
Step 1. Assume that $t\in [0,{\tau _{1}}(\varepsilon )]$. Then,
(14)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}={\Bigg|{Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds-a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t)\Bigg|^{r}}\\ {} \displaystyle \le {\Bigg({Y_{0}}+{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds+a{\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds+\sigma \big|{B^{H}}(t)\big|\Bigg)^{r}}.\end{array}\]
If $r>1$, by applying the Hölder inequality to the right-hand side of (14), we obtain that
(15)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {4^{r-1}}\Bigg({Y_{0}^{r}}+{\Bigg({\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\\ {} \displaystyle +{a^{r}}{\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}+{\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}\Bigg).\end{array}\]
Note that (15) is also true for $r=1$, as in this case it simply coincides with the right-hand side of (14).
For all $t\in [0,{\tau _{1}}(\varepsilon )]$
\[ {\Bigg({\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\le {\Bigg({\int _{0}^{t}}\frac{2k}{{Y_{0}}}ds\Bigg)^{r}}\le {\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}.\]
For $r\ge 1$, from Jensen’s inequality,
\[ {\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {t^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\le {T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\]
Finally, for all $t\in [0,{\tau _{1}}(\varepsilon )]$:
\[ {\big|{B^{H}}(t)\big|^{r}}\le \underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Hence, for all $t\in [0,{\tau _{1}}(\varepsilon )]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le {4^{r-1}}\Bigg({Y_{0}^{r}}+{\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}+{a^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds+{\sigma ^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\Bigg)\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{8kT}{{Y_{0}}}\bigg)^{r}}+{(4\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(4a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Note that the inequality (13) holds for all $t\in [0,T]$, if ${\tau _{1}}(\varepsilon )=T$.
Step 2. Assume that ${\tau _{1}}(\varepsilon )<T$, i.e. the interval $({\tau _{1}}(\varepsilon ),T]$ is non-empty. From the definition of ${\tau _{1}}(\varepsilon )$ and the continuity of ${Y_{\varepsilon }}$, ${Y_{\varepsilon }}({\tau _{1}}(\varepsilon ))=\frac{{Y_{0}}}{2}$ and, for all $t\in ({\tau _{1}}(\varepsilon ),T]$:
\[ \bigg\{s\in ({\tau _{1}}(\varepsilon ),t]\hspace{2.5pt}|\hspace{2.5pt}\big|{Y_{\varepsilon }}(s)\big|<\frac{{Y_{0}}}{2}\bigg\}\ne \varnothing .\]
Denote
\[ {\tau _{2}}(\varepsilon ,t):=\sup \bigg\{s\in ({\tau _{1}}(\varepsilon ),t]\hspace{2.5pt}|\hspace{2.5pt}\big|{Y_{\varepsilon }}(s)\big|<\frac{{Y_{0}}}{2}\bigg\}.\]
Note that for all $t\in ({\tau _{1}}(\varepsilon ),T]$: ${\tau _{1}}(\varepsilon )<{\tau _{2}}(\varepsilon ,t)\le t$ and $|{Y_{\varepsilon }}({\tau _{2}}(\varepsilon ,t))|\le \frac{{Y_{0}}}{2}$, so
(16)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}={\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)+{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\\ {} \displaystyle \le {2^{r-1}}\big({\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{\big|{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\big)\\ {} \displaystyle \le {2^{r-1}}\bigg({\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{\bigg(\frac{{Y_{0}}}{2}\bigg)^{r}}\bigg)\\ {} \displaystyle \le {2^{r-1}}{\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}+{Y_{0}^{r}}.\end{array}\]
In addition, if ${\tau _{2}}(\varepsilon ,t)=t$, then
\[ {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}=0,\]
and if ${\tau _{2}}(\varepsilon ,t)<t$, then
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}=\Bigg|{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle -a{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big){\Bigg|^{r}}\\ {} \displaystyle \le \Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds+a{\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds+\sigma \big|{B^{H}}(t)\big|\\ {} \displaystyle +\sigma \big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|\Bigg){^{r}}\\ {} \displaystyle \le {4^{r-1}}\Bigg[{\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}+{a^{r}}{\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\\ {} \displaystyle +{\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}+{\sigma ^{r}}{\big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\Bigg].\end{array}\]
In this case, from definition of ${\tau _{2}}(\varepsilon ,t)$, for all $s\in [{\tau _{2}}(\varepsilon ,t),t]$: ${Y_{\varepsilon }}(s)\ge \frac{{Y_{0}}}{2}$, so
\[ {\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\Bigg)^{r}}\le {\bigg(\frac{2kT}{{Y_{0}}}\bigg)^{r}}.\]
Furthermore, from Jensen’s inequality,
\[\begin{array}{c}\displaystyle {\Bigg({\int _{{\tau _{2}}(\varepsilon ,t)}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {\Bigg({\int _{0}^{t}}\big|{Y_{\varepsilon }}(s)\big|ds\Bigg)^{r}}\le {t^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le {T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Next,
\[ {\sigma ^{r}}{\big|{B^{H}}(t)\big|^{r}}+{\sigma ^{r}}{\big|{B^{H}}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\le 2{\sigma ^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
Hence,
(17)
\[\begin{array}{l}\displaystyle {\big|{Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{2}}(\varepsilon ,t)\big)\big|^{r}}\\ {} \displaystyle \le {\bigg(\frac{8kT}{{Y_{0}}}\bigg)^{r}}+{(4a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds+{(4\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\end{array}\]
Finally, from (16) and (17), for all $t\in ({\tau _{1}}(\varepsilon ),T]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\\ {} \displaystyle \le \bigg({Y_{0}^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)+{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds\\ {} \displaystyle \le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg)\\ {} \displaystyle +{(8a)^{r}}{T^{r-1}}{\int _{0}^{t}}{\big|{Y_{\varepsilon }}(s)\big|^{r}}ds.\end{array}\]
Therefore, (13) indeed holds for all $t\in [0,T]$.
Step 3. From (13), by applying the Grönwall inequality, we obtain that for all $t\in [0,T]$:
\[\begin{array}{c}\displaystyle {\big|{Y_{\varepsilon }}(t)\big|^{r}}\le \bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}+{(8\sigma )^{r}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}\bigg){e^{{(8aT)^{r}}}}\\ {} \displaystyle =:{C_{1}}+{C_{2}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}},\end{array}\]
where
\[\begin{array}{l}\displaystyle {C_{1}}={e^{{(8aT)^{r}}}}\bigg({(4{Y_{0}})^{r}}+{\bigg(\frac{16kT}{{Y_{0}}}\bigg)^{r}}\bigg),\\ {} \displaystyle {C_{2}}={(8\sigma )^{r}}{e^{{(8aT)^{r}}}},\end{array}\]
which ends the proof.  □
Corollary 2.1.
For all $T>0$ and $r\ge 1$ there exists $C=C(T,r,{Y_{0}},k,a,\sigma ,H)<\infty $ such that for all $\varepsilon >0$ and $t\in [0,T]$:
\[ \mathbb{E}{\big|{Y_{\varepsilon }}(t)\big|^{r}}<C.\]
Proof.
The proof immediately follows from Lemma 2.2 and Theorem 2.1.  □
Corollary 2.2.
Let $r\ge 1$ and ${C_{1}}$, ${C_{2}}$ be constants from Theorem 2.1, and
(18)
\[ Y(t,\omega )=\underset{\varepsilon \to 0}{\lim }{Y_{\varepsilon }}(t,\omega ),\hspace{1em}t\in [0,T],\hspace{2.5pt}\omega \in \varOmega .\]
Then
\[ {\big|Y(t)\big|^{r}}<{C_{1}}+{C_{2}}\underset{s\in [0,T]}{\sup }{\big|{B^{H}}(s)\big|^{r}}.\]
In particular,
\[ Y(t)<\infty \hspace{1em}a.s.\]
Proof.
Let $\omega \in \varOmega $ and $t\in (0,T]$ be fixed (if $t=0$, the result is trivial).
From Lemma 2.1, if ${\varepsilon _{1}}>{\varepsilon _{2}}$, then ${Y_{{\varepsilon _{1}}}}(t)<{Y_{{\varepsilon _{2}}}}(t)$, therefore the limit in (18) exists.
The upper bound for $|Y(t){|^{r}}$ immediately follows from Theorem 2.1 as the right-hand side of (12) does not depend on ε. The a.s. finiteness of Y follows from the a.s. finiteness of ${\sup _{s\in [0,T]}}|{B^{H}}(s)|$, which is a direct consequence of Lemma 2.2.  □
Corollary 2.3.
The process Y is Lebesgue integrable on an arbitrary interval $[0,t]$ a.s.
Proof.
First, note that the trajectories of Y are measurable as they are the pointwise limits of continuous functions.
Let $t\in {\mathbb{R}_{+}}$ be fixed and let $T\ge t$. Due to Jensen’s inequality, Tonelli’s theorem and Corollary 2.1, for any $r\ge 1$ there is a constant C such that
\[ \mathbb{E}\Bigg[{\Bigg|{\int _{0}^{t}}Y(s)ds\Bigg|^{r}}\Bigg]\le {T^{r-1}}\mathbb{E}\Bigg[{\int _{0}^{t}}{\big|Y(s)\big|^{r}}ds\Bigg]={T^{r-1}}{\int _{0}^{t}}\mathbb{E}{\big|Y(s)\big|^{r}}ds\le C{T^{r}},\]
therefore
\[ {\int _{0}^{t}}Y(s)ds<\infty \hspace{1em}\mathrm{a}.\mathrm{s}.\]
 □
Remark 2.3.
For all $t>0$:
\[ \underset{\varepsilon \to 0}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds={\int _{0}^{t}}Y(s)ds\hspace{1em}\mathrm{a}.\mathrm{s}.\]
Proof.
It follows immediately from the monotonicity of ${Y_{\varepsilon }}$ with respect to ε and the monotone convergence theorem.  □
Remark 2.4.
Later it will be shown that Y is Riemann integrable as well. Until that, all integrals should be considered as the Lebesgue integrals.
Remark 2.5.
We will sometimes refer to the limit process Y as the square root process. It will be shown that it coincides with the square root process presented in Section 1 until the first zero hitting by the latter.
Remark 2.6.
In what follows, we consider only finite and integrable paths of Y.

3 Properties of ε-approximations and the square root process

Now let us prove several properties of both the square root process and its ε-approximations.
Lemma 3.1.
Let $T>0$ and λ be the Lebesgue measure on $[0,T]$. Then
\[ \lambda \big\{t\in [0,T]\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)\le 0\big\}\to 0,\hspace{1em}\varepsilon \to 0,\hspace{2.5pt}a.s.\]
Proof.
Let $\omega \in \varOmega $ be fixed (we will omit ω in what follows).
From the definition of Y and Remark 2.3, for any $t\in [0,T]$ the left-hand side of
\[ {Y_{\varepsilon }}(t)-{Y_{0}}+a{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds-\sigma {B^{H}}(t)={\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\]
converges to
\[ Y(t)-{Y_{0}}+a{\int _{0}^{t}}Y(s)ds-\sigma {B^{H}}(t)<\infty ,\]
as $\varepsilon \to 0$. Therefore there exists a limit
(19)
\[ \underset{\varepsilon \to 0}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds<\infty .\]
Assume that there exists a sequence $\{{\varepsilon _{n}}:n\ge 1\}$, ${\varepsilon _{n}}\downarrow 0$, and $\delta >0$ such that for all $n\ge 1$:
\[ \lambda \big\{t\in [0,T]\hspace{2.5pt}|\hspace{2.5pt}{Y_{{\varepsilon _{n}}}}(t)\le 0\big\}\ge \delta >0.\]
In such case, as
\[\begin{array}{c}\displaystyle {\int _{0}^{T}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle ={\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)\le 0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle +{\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)>0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\\ {} \displaystyle \ge {\int _{\{t\in [0,T]|{Y_{{\varepsilon _{n}}}}(t)\le 0\}}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\ge \frac{k\delta }{{\varepsilon _{n}}},\end{array}\]
it is clear that
\[ {\int _{0}^{T}}\frac{k}{{Y_{{\varepsilon _{n}}}}(t){1_{\{{Y_{{\varepsilon _{n}}}}(t)>0\}}}+{\varepsilon _{n}}}dt\to \infty ,\hspace{1em}n\to \infty ,\]
that contradicts (19).  □
Corollary 3.1.
For any $T>0$, $Y(t)>0$ almost everywhere on $[0,T]$ a.s. and hence $Y(t)>0$ almost everywhere on ${\mathbb{R}_{+}}$ a.s.
Corollary 3.2.
Let $T>0$ be arbitrary. Then, for all $t\in [0,T]$:
\[ {\int _{0}^{t}}\frac{k}{Y(s)}ds<\infty .\]
Proof.
According to Fatou’s lemma,
\[ {\int _{0}^{t}}\frac{k}{Y(s)}ds\le \underset{\varepsilon \to 0}{\underline{\lim }}{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds<\infty .\]
 □
For the next result, we will require the following well-known property of the fractional Brownian motion (see, for example, [16]).
Lemma 3.2.
Let $\{{B^{H}}(t),t\ge 0\}$ be a fractional Brownian motion with the Hurst index H. Then there is such ${\varOmega ^{\prime }}\subset \varOmega $, $\mathbb{P}({\varOmega ^{\prime }})=1$, that for all $\omega \in {\varOmega ^{\prime }}$, $T>0$ and $\gamma >0$ there is a positive $C=C(\omega ,T,\gamma )$ such that for all $0\le s\le t\le T$:
\[ \big|{B^{H}}(t)-{B^{H}}(s)\big|\le C|t-s{|^{H-\gamma }}.\]
Lemma 3.3.
The process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$ is non-negative a.s., so
\[ \big\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)\le 0\big\}=\big\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)=0\big\}\hspace{1em}a.s.\]
Proof.
Let an arbitrary $\omega \in {\varOmega ^{\prime }}$ from Lemma 3.2 be fixed and assume that there is such $\tau >0$ that $Y(\tau )\le 0$. Then, for all $\varepsilon >0$:
\[ {Y_{\varepsilon }}(\tau )<Y(\tau )\le 0.\]
Let an arbitrary $\varepsilon >0$ be fixed. Denote
\[\begin{array}{c}\displaystyle {\tau _{-}}(\varepsilon ):=\sup \big\{t\in (0,\tau )\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)>0\big\},\\ {} \displaystyle {\tau _{+}}(\varepsilon ):=\inf \big\{t\in (\tau ,\infty )\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)>0\big\}.\end{array}\]
Note that, due to continuity of ${Y_{\varepsilon }}$ and Lemma 3.1, $0<{\tau _{-}}(\varepsilon )<\tau <{\tau _{+}}(\varepsilon )<\infty $.
It is clear that for all $t\in ({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$: ${Y_{\varepsilon }}(t)<0$, therefore ${1_{\{{Y_{\varepsilon }}(t)>0\}}}=0$ and ${Y_{\varepsilon }}({\tau _{-}}(\varepsilon ))=0$ by continuity. Due to Lemma 3.2 (applied with $\gamma =H/2$), there is such $C>0$ that for all $t\in ({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$:
(20)
\[\begin{array}{l}\displaystyle 0>{Y_{\varepsilon }}(t)={Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\tau _{-}}(\varepsilon )\big)\\ {} \displaystyle ={\int _{{\tau _{-}}(\varepsilon )}^{t}}\frac{k}{\varepsilon }ds-a{\int _{{\tau _{-}}(\varepsilon )}^{t}}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}\big({\tau _{-}}(\varepsilon )\big)\big)\\ {} \displaystyle \ge \frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}.\end{array}\]
It is sufficient to prove that
\[ F(\varepsilon ):=\underset{t\in [{\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon )]}{\min }\bigg(\frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}\bigg)\to 0,\hspace{1em}\varepsilon \to 0,\]
since then $Y(\tau )=\underset{\varepsilon \to 0}{\lim }{Y_{\varepsilon }}(\tau )\ge \underset{\varepsilon \to 0}{\lim }F(\varepsilon )=0$, i.e. $Y(\tau )=0$.
Indeed,
(21)
\[ {\bigg(\frac{k}{\varepsilon }\big(t-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big(t-{\tau _{-}}(\varepsilon )\big)^{H/2}}\bigg)^{\prime }_{t}}=\frac{k}{\varepsilon }-\frac{CH\sigma }{2}{\big(t-{\tau _{-}}(\varepsilon )\big)^{\frac{H-2}{2}}}.\]
Equating the right-hand side of (21) to 0 and solving the equation with respect to t, we obtain
\[ {t_{\ast }}={\tau _{-}}(\varepsilon )+{C_{1}}{\varepsilon ^{\frac{2}{2-H}}},\]
where ${C_{1}}={\left(\frac{CH\sigma }{2k}\right)^{\frac{2}{2-H}}}$.
It is easy to check that the second derivative of $\frac{k}{\varepsilon }(t-{\tau _{-}}(\varepsilon ))-C\sigma {(t-{\tau _{-}}(\varepsilon ))^{H/2}}$ with respect to t is positive on $({\tau _{-}}(\varepsilon ),{\tau _{+}}(\varepsilon ))$, so ${t_{\ast }}$ is indeed its point of minimum. Therefore
\[\begin{array}{l}\displaystyle F(\varepsilon )=\frac{k}{\varepsilon }\big({t_{\ast }}-{\tau _{-}}(\varepsilon )\big)-C\sigma {\big({t_{\ast }}-{\tau _{-}}(\varepsilon )\big)^{H/2}}=\frac{k}{\varepsilon }{C_{1}}{\varepsilon ^{\frac{2}{2-H}}}-C\sigma {\big({C_{1}}{\varepsilon ^{\frac{2}{2-H}}}\big)^{H/2}}\\ {} \displaystyle =k{C_{1}}{\varepsilon ^{\frac{H}{2-H}}}-C{C_{1}^{H/2}}\sigma {\varepsilon ^{\frac{H}{2-H}}}\to 0,\hspace{1em}\varepsilon \to 0.\end{array}\]
 □
Remark 3.1.
It is clear that for all $t\in [0,T]$:
\[ Y(t)\ge {Y_{0}}+{\int _{0}^{t}}\frac{k}{Y(s)}ds-a{\int _{0}^{t}}Y(s)ds+\sigma {B^{H}}(t).\]
Lemma 3.4.
For any ${\varepsilon ^{\ast }}>0$ and all $t>0$:
\[ \underset{\varepsilon \to {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Y_{{\varepsilon ^{\ast }}}}(t).\]
Proof.
Indeed, denote
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Z_{+}}(t)\le {Y_{{\varepsilon ^{\ast }}}}(t).\]
It is clear that for all $t\ge 0$, ${Y_{\varepsilon }}(t)\uparrow {Z_{+}}(t)$, $\varepsilon \downarrow {\varepsilon ^{\ast }}$, so for any $t>0$:
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds={\int _{0}^{t}}{Z_{+}}(s)ds.\]
Moreover, for all $\varepsilon >{\varepsilon ^{\ast }}$:
\[ \frac{1}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }\le \frac{1}{{\varepsilon ^{\ast }}},\]
therefore
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{1}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{1}{{Z_{+}}(s){1_{\{{Z_{+}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds\]
and hence
(22)
\[\begin{array}{l}\displaystyle {Z_{+}}(t)=\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)\\ {} \displaystyle ={Y_{0}}+\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle -a\underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds+\sigma {B^{H}}(t)\\ {} \displaystyle ={Y_{0}}+{\int _{0}^{t}}\frac{k}{{Z_{+}}(s){1_{\{{Z_{+}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds-a{\int _{0}^{t}}{Z_{+}}(s)ds+\sigma {B^{H}}(t).\end{array}\]
It is known that ${Y_{{\varepsilon ^{\ast }}}}$ is the unique solution to the equation (8) with $\varepsilon ={\varepsilon ^{\ast }}$, which is exactly the equation (22) satisfied by ${Z_{+}}$, therefore for all $t\ge 0$:
\[ \underset{\varepsilon \downarrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Y_{{\varepsilon ^{\ast }}}}(t).\]
Next, denote
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Z_{-}}(t)\ge {Y_{{\varepsilon ^{\ast }}}}(t).\]
For all $t\ge 0$, ${Y_{\varepsilon }}(t)\downarrow {Z_{-}}(t)$, $\varepsilon \uparrow {\varepsilon ^{\ast }}$, so
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}{Y_{\varepsilon }}(s)ds={\int _{0}^{t}}{Z_{-}}(s)ds\]
and for all $\varepsilon \in (\frac{{\varepsilon ^{\ast }}}{2},{\varepsilon ^{\ast }})$:
\[ \frac{1}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }\le \frac{2}{{\varepsilon ^{\ast }}},\]
so
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{\int _{0}^{t}}\frac{1}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{1}{{Z_{-}}(s){1_{\{{Z_{-}}(s)>0\}}}+{\varepsilon ^{\ast }}}ds.\]
Therefore, similarly to (22), ${Z_{-}}$ satisfies the same equation as ${Y_{{\varepsilon ^{\ast }}}}$, so
\[ \underset{\varepsilon \uparrow {\varepsilon ^{\ast }}}{\lim }{Y_{\varepsilon }}(t)={Y_{{\varepsilon ^{\ast }}}}(t).\]
 □
Theorem 3.1.
Let $Y=\{Y(t),t\ge 0\}$ be the process defined by the formula (18). Then
1) the set $\{t>0|Y(t)>0\}$ is open in the natural topology on $\mathbb{R}$;
2) Y is continuous on $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$.
Proof.
We shall divide the proof into 3 steps.
Step 1. Let $\omega \in \varOmega $ be fixed. Consider an arbitrary ${t^{\ast }}\in \{t>0|Y(t)>0\}$. As ${Y_{\varepsilon }}({t^{\ast }})\to Y({t^{\ast }})$, $\varepsilon \to 0$, there exists such ${\varepsilon ^{\ast }}>0$ that for all $\varepsilon <{\varepsilon ^{\ast }}$: ${Y_{\varepsilon }}({t^{\ast }})>0$. From continuity of ${Y_{\varepsilon }}$ with respect to t and their monotonicity with respect to ε, it follows that there exists such ${\delta ^{\ast }}={\delta ^{\ast }}({t^{\ast }})>0$ that
\[ \forall \varepsilon <{\varepsilon ^{\ast }},\hspace{2.5pt}\forall t\in \big({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }}\big):\hspace{1em}{Y_{\varepsilon }}(t)>0.\]
Hence for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$: $Y(t)>0$ and therefore the set $\{t>0|Y(t)>0\}$ is open.
Step 2. Let us prove that
\[ \underset{t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})}{\sup }\big(Y(t)-{Y_{\varepsilon }}(t)\big)\to 0,\hspace{1em}\varepsilon \to 0,\]
and therefore Y is continuous on the interval $({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$.
It is enough to prove that for any $\theta >0$ there exists such ${\varepsilon _{0}}={\varepsilon _{0}}(\theta )>0$ that for all $\varepsilon <{\varepsilon _{0}}$:
\[ \underset{t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})}{\sup }\big(Y(t)-{Y_{\varepsilon }}(t)\big)\le \theta .\]
Indeed, let us fix an arbitrary $\theta >0$. From the definition of Y it follows that ${Y_{\varepsilon }}({t^{\ast }}-{\delta ^{\ast }})\to Y({t^{\ast }}-{\delta ^{\ast }})$ as $\varepsilon \to 0$, so there is such ${\varepsilon ^{\ast \ast }}<{\varepsilon ^{\ast }}$ that for all $\varepsilon <{\varepsilon ^{\ast \ast }}$ the following inequality holds:
\[ Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)<\theta .\]
Denote
\[\begin{aligned}{}{\varepsilon _{1}}:=\min \big\{\theta ,\hspace{2.5pt}\sup \big\{{\varepsilon ^{\ast \ast }}\in \big(0,{\varepsilon ^{\ast }}\big)\hspace{2.5pt}|\hspace{2.5pt}\forall \varepsilon \in \big(& 0,{\varepsilon ^{\ast \ast }}\big):\\ {} & Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)<\theta \big\}\big\}.\end{aligned}\]
As ${\varepsilon _{1}}\le \theta $, there exists such $C\in (0,1]$ that ${\varepsilon _{1}}=C\theta $.
From the continuity with respect to ε,
\[ Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\le \theta \]
and, from the monotonicity with respect to ε, for all $\varepsilon <{\varepsilon _{1}}$:
\[ 0\le {Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\le \theta .\]
It is obvious that ${Y_{\varepsilon }}({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})\downarrow 0$ as $\varepsilon \uparrow {\varepsilon _{1}}$, so let us denote
\[\begin{aligned}{}{\varepsilon _{0}}:=\sup \bigg\{\varepsilon \in (0,{\varepsilon _{1}})\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)& -{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\\ {} & \ge \frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\bigg\}.\end{aligned}\]
It is obvious that
\[ {Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)=\frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\]
and therefore
\[\begin{array}{c}\displaystyle Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\\ {} \displaystyle =\big(Y\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\big)-\big({Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{1}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\big)\\ {} \displaystyle =\frac{Y({t^{\ast }}-{\delta ^{\ast }})-{Y_{{\varepsilon _{1}}}}({t^{\ast }}-{\delta ^{\ast }})}{2}\le \frac{\theta }{2}.\end{array}\]
Moreover, for all $\varepsilon <{\varepsilon _{0}}$:
\[ {Y_{\varepsilon }}\big({t^{\ast }}-{\delta ^{\ast }}\big)-{Y_{{\varepsilon _{0}}}}\big({t^{\ast }}-{\delta ^{\ast }}\big)\le \frac{\theta }{2}.\]
Now consider an arbitrary $\varepsilon <{\varepsilon _{0}}$ and assume that there is such ${\tau _{0}}\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ that
\[ {Y_{\varepsilon }}({\tau _{0}})-{Y_{{\varepsilon _{0}}}}({\tau _{0}})>\theta .\]
Denote
\[ \tau :=\inf \big\{t\in \big({t^{\ast }}-{\delta ^{\ast }},{\tau _{0}}\big)\hspace{2.5pt}|\hspace{2.5pt}\forall s\in (t,{\tau _{0}}):\hspace{2.5pt}{Y_{\varepsilon }}(s)-{Y_{{\varepsilon _{0}}}}(s)>\theta \big\}<{\tau _{0}}.\]
From the definition of ${\tau _{0}}$ and τ, for all $t\in (\tau ,{\tau _{0}})$:
(23)
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)>\theta .\]
However, as for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ and for all $\varepsilon <{\varepsilon ^{\ast }}$ it is true that ${1_{\{{Y_{\varepsilon }}(t)>0\}}}=1$,
(24)
\[\begin{array}{l}\displaystyle {\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)^{\prime }}\\ {} \displaystyle =\bigg(\frac{k}{{Y_{\varepsilon }}(\tau ){1_{\{{Y_{\varepsilon }}(\tau )>0\}}}+\varepsilon }-\frac{k}{{Y_{{\varepsilon _{0}}}}(\tau ){1_{\{{Y_{{\varepsilon _{0}}}}(\tau )>0\}}}+{\varepsilon _{0}}}\bigg)\\ {} \displaystyle -a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)\\ {} \displaystyle =\bigg(\frac{k}{{Y_{\varepsilon }}(\tau )+\varepsilon }-\frac{k}{{Y_{{\varepsilon _{0}}}}(\tau )+{\varepsilon _{0}}}\bigg)-a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)\\ {} \displaystyle =\frac{k({Y_{{\varepsilon _{0}}}}(\tau )-{Y_{\varepsilon }}(\tau ))+k({\varepsilon _{0}}-\varepsilon )}{({Y_{\varepsilon }}(\tau )+\varepsilon )({Y_{{\varepsilon _{0}}}}(\tau )+{\varepsilon _{0}})}-a\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big).\end{array}\]
From the continuity of ${Y_{{\varepsilon _{0}}}}(t)-{Y_{\varepsilon }}(t)$ with respect to t and definition of τ, it is clear that ${Y_{{\varepsilon _{0}}}}(\tau )-{Y_{\varepsilon }}(\tau )=-\theta $ and, as $0<\varepsilon <{\varepsilon _{0}}<{\varepsilon _{1}}=C\theta $, where $C\in (0,1]$,
\[ k\big({Y_{{\varepsilon _{0}}}}(\tau )-{Y_{\varepsilon }}(\tau )\big)+k({\varepsilon _{0}}-\varepsilon )<k(-\theta +C\theta )=k(C-1)\theta \le 0.\]
Therefore
\[ {\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)^{\prime }}<\frac{k(C-1)\theta }{({Y_{\varepsilon }}(\tau )+\varepsilon )({Y_{{\varepsilon _{0}}}}(\tau )+{\varepsilon _{0}})}-a\theta <0.\]
Hence, as
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)=\theta +{\big({Y_{\varepsilon }}(\tau )-{Y_{{\varepsilon _{0}}}}(\tau )\big)^{\prime }}(t-\tau )+o(t-\tau ),\hspace{1em}t\to \tau ,\]
there exists such interval $(\tau ,{\tau _{1}})\subset (\tau ,{\tau _{0}})$ that for all $t\in (\tau ,{\tau _{1}})$:
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)<\theta ,\]
which contradicts (23).
So, for all $t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})$ and all $\varepsilon <{\varepsilon _{0}}$:
\[ {Y_{\varepsilon }}(t)-{Y_{{\varepsilon _{0}}}}(t)\le \theta .\]
Letting $\varepsilon \to 0$, we get $Y(t)-{Y_{{\varepsilon _{0}}}}(t)\le \theta $ and hence, as for all $\varepsilon <{\varepsilon _{0}}$ and $t\in \mathbb{R}$ it holds that ${Y_{{\varepsilon _{0}}}}(t)<{Y_{\varepsilon }}(t)<Y(t)$,
\[ \underset{t\in ({t^{\ast }}-{\delta ^{\ast }},{t^{\ast }}+{\delta ^{\ast }})}{\sup }\big(Y(t)-{Y_{\varepsilon }}(t)\big)\le \theta ,\hspace{1em}\forall \varepsilon <{\varepsilon _{0}}.\]
Step 3. In order to prove that
\[ \underset{t\to 0+}{\lim }Y(t)=Y(0)={Y_{0}},\]
it is enough to notice that for any $\tilde{\varepsilon }>0$ there is such $\tilde{t}>0$ that for all $\varepsilon <\tilde{\varepsilon }$: ${Y_{\varepsilon }}(t)>\frac{{Y_{0}}}{2}$, $t\in [0,\tilde{t}]$.
Hence, for all $\varepsilon <\tilde{\varepsilon }$ and $t\in [0,\tilde{t}]$
\[ \frac{k}{{Y_{\varepsilon }}(t){1_{\{{Y_{\varepsilon }}(t)>0\}}}+\varepsilon }=\frac{k}{{Y_{\varepsilon }}(t)+\varepsilon }\le \frac{k}{{Y_{\varepsilon }}(t)}\le \frac{2k}{{Y_{0}}}\]
and so
\[ \underset{\varepsilon \to 0}{\lim }{\int _{0}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds={\int _{0}^{t}}\frac{k}{Y(s)}ds,\]
hence, for all $t\in [0,\tilde{t}]$:
\[ Y(t)={Y_{0}}+{\int _{0}^{t}}\frac{k}{Y(s)}ds-a{\int _{0}^{t}}Y(s)ds+\sigma {B^{H}}(t).\]
This equation has a unique continuous solution, therefore Y is continuous on $[0,\tilde{t}]$.  □
Remark 3.2.
From Theorem 3.1 it is easy to see that the limit square root process Y satisfies the equation of the form (4) until the first moment τ of zero hitting. Indeed, on each compact set $[0,\tilde{t}]\subset [0,\tau )$ ${Y_{\varepsilon }}$ converges to Y uniformly as $\varepsilon \to 0$ due to Dini’s theorem. Hence there is such $\tilde{\varepsilon }>0$ that for all $\varepsilon <\tilde{\varepsilon }$: ${Y_{\varepsilon }}(t)>\frac{{\min _{s\in [0,\tilde{t}]}}Y(s)}{2}>0$ for all $t\in [0,\tilde{t}]$, and, similarly to Step 3 of Theorem 3.1, it can be shown that Y satisfies equation (4) on $[0,\tilde{t}]$.
Corollary 3.3.
The trajectories of the process $Y=\{Y(t),\hspace{2.5pt}t\ge 0\}$ are continuous a.e. on ${\mathbb{R}_{+}}$ a.s. and therefore are Riemann integrable a.s.
Proof.
The claim follows directly from Theorem 3.1 and Corollary 3.1.  □
The set $\{t>0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$ is open in the natural topology on $\mathbb{R}$ a.s., so it can be a.s. represented as the finite or countable union of non-intersecting intervals, i.e.
\[ \big\{t>0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\big\}={\bigcup \limits_{i=0}^{N}}({\alpha _{i}},{\beta _{i}}),\hspace{1em}N\in \mathbb{N}\cup \{\infty \},\]
where $({\alpha _{i}},{\beta _{i}})\cap ({\alpha _{j}},{\beta _{j}})=\varnothing $, $i\ne j$.
Moreover, the set $\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\}$ can be a.s. represented as
(25)
\[ \big\{t\ge 0\hspace{2.5pt}|\hspace{2.5pt}Y(t)>0\big\}=[{\alpha _{0}},{\beta _{0}})\cup \Bigg({\bigcup \limits_{i=1}^{N}}({\alpha _{i}},{\beta _{i}})\Bigg),\]
where ${\alpha _{0}}=0$, ${\beta _{0}}$ is the first moment of zero hitting by the square root process Y, $({\alpha _{i}},{\beta _{i}})\cap ({\alpha _{j}},{\beta _{j}})=\varnothing $, $i\ne j$, and $({\alpha _{i}},{\beta _{i}})\cap [{\alpha _{0}},{\beta _{0}})=\varnothing $, $i\ne 0$.
Theorem 3.2.
Let $({\alpha _{i}},{\beta _{i}})$, $i\ge 1$, be an arbitrary interval from the representation (25). Then
1) ${\lim \nolimits_{t\to {\alpha _{i}}+}}Y(t)=0$, ${\lim \nolimits_{t\to {\beta _{i}}-}}Y(t)=0$ a.s.;
2) for any $t\in [{\alpha _{i}},{\beta _{i}}]$:
(26)
\[ Y(t)={\int _{{\alpha _{i}}}^{t}}\frac{k}{Y(s)}ds-a{\int _{{\alpha _{i}}}^{t}}Y(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}({\alpha _{i}})\big)\hspace{1em}\textit{a.s.}\]
Proof.
Let ${\varOmega ^{\prime }}$ be from Lemma 3.2 and an arbitrary $\omega \in {\varOmega ^{\prime }}$ be fixed.
1) Proofs for both left and right ends of the segment are similar, so we shall give a proof for the left end only.
Y is positive on $({\alpha _{i}},{\beta _{i}})$, so it is sufficient to prove that ${\overline{\lim }_{t\to {\alpha _{i}}+}}Y(t)=0$.
Assume that ${\overline{\lim }_{t\to {\alpha _{i}}+}}Y(t)=x>0$. Then for any $\delta >0$ there exists such $\tau \in ({\alpha _{i}},{\alpha _{i}}+\delta )$ that $Y(\tau )\in (\frac{3x}{4},\frac{5x}{4})$.
Let δ and such $\tau \in ({\alpha _{i}},{\alpha _{i}}+\delta )$ be fixed. ${Y_{\varepsilon }}(\tau )\uparrow Y(\tau )$ as $\varepsilon \to 0$, so there is such $\varepsilon =\varepsilon (\tau )$ that ${Y_{\varepsilon }}(\tau )\in (\frac{x}{2},\frac{5x}{4})$. It is clear that ${Y_{\varepsilon }}({\alpha _{i}})<0$, therefore there is such a moment ${\tau _{1}}\in ({\alpha _{i}},\tau )$ that
\[ {\tau _{1}}=\sup \bigg\{t\in ({\alpha _{i}},\tau )\hspace{2.5pt}|\hspace{2.5pt}{Y_{\varepsilon }}(t)=\frac{x}{4}\bigg\}.\]
From continuity of ${Y_{\varepsilon }}$, ${Y_{\varepsilon }}({\tau _{1}})=\frac{x}{4}$, so ${Y_{\varepsilon }}(\tau )-{Y_{\varepsilon }}({\tau _{1}})>\frac{x}{4}$. On the other hand, from definitions of τ and ${\tau _{1}}$, for all $t\in [{\tau _{1}},\tau ]$: ${Y_{\varepsilon }}(t)\in (\frac{x}{4},\frac{5x}{4})$. That, together with Lemma 3.2, gives:
\[\begin{array}{c}\displaystyle \frac{x}{4}<{Y_{\varepsilon }}(\tau )-{Y_{\varepsilon }}({\tau _{1}})\\ {} \displaystyle ={\int _{{\tau _{1}}}^{\tau }}\frac{k}{{Y_{\varepsilon }}(s)+\varepsilon }ds-a{\int _{{\tau _{1}}}^{\tau }}{Y_{\varepsilon }}(s)ds+\sigma \big({B^{H}}(\tau )-{B^{H}}({\tau _{1}})\big)\\ {} \displaystyle \le \bigg(\frac{4k}{x}+\frac{ax}{4}\bigg)(\tau -{\tau _{1}})+C{(\tau -{\tau _{1}})^{H/2}},\end{array}\]
i.e. for any $\delta >0$
\[ 0<\frac{x}{4}\le \bigg(\frac{4k}{x}+\frac{ax}{4}\bigg)(\tau -{\tau _{1}})+C{(\tau -{\tau _{1}})^{H/2}}\le \bigg(\frac{4k}{x}+\frac{ax}{4}\bigg)\delta +C{\delta ^{H/2}},\]
which is not possible.
Therefore
\[ \underset{t\to {\alpha _{i}}+}{\overline{\lim }}Y(t)=\underset{t\to {\alpha _{i}}+}{\underline{\lim }}Y(t)=\underset{t\to {\alpha _{i}}+}{\lim }Y(t)=0.\]
2) From Theorem 3.1, Y is continuous on each segment $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]\subset ({\alpha _{i}},{\beta _{i}})$, so, due to Dini’s theorem, ${Y_{\varepsilon }}$ converges uniformly to Y on $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$ as $\varepsilon \to 0$. Moreover, there is $\delta >0$ such that for all $t\in [{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$: $Y(t)>\delta $, therefore it is easy to see that $\frac{k}{{Y_{\varepsilon }}(\cdot ){1_{\{{Y_{\varepsilon }}(\cdot )>0\}}}+\varepsilon }$ converges uniformly to $\frac{k}{Y(\cdot )}$ on $[{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$ as $\varepsilon \to 0$.
Hence, for all $t\in [{\alpha _{i}^{\ast }},{\beta _{i}^{\ast }}]$:
\[\begin{array}{c}\displaystyle {\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{Y(s)}ds=\underset{\varepsilon \to 0}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }ds\\ {} \displaystyle =\underset{\varepsilon \to 0}{\lim }\Bigg({Y_{\varepsilon }}(t)-{Y_{\varepsilon }}\big({\alpha _{i}^{\ast }}\big)+a{\int _{{\alpha _{i}^{\ast }}}^{t}}{Y_{\varepsilon }}(s)ds-\sigma \big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big)\Bigg)\\ {} \displaystyle =Y(t)-Y\big({\alpha _{i}^{\ast }}\big)+a{\int _{{\alpha _{i}^{\ast }}}^{t}}Y(s)ds-\sigma \big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big),\end{array}\]
or
(27)
\[ Y(t)=Y\big({\alpha _{i}^{\ast }}\big)+{\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{Y(s)}ds-a{\int _{{\alpha _{i}^{\ast }}}^{t}}Y(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big).\]
The right-hand side of (27) is right continuous with respect to ${\alpha _{i}^{\ast }}$ due to part 1) of the proof, i.e.
\[\begin{array}{c}\displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }Y\big({\alpha _{i}^{\ast }}\big)=Y({\alpha _{i}});\hspace{2em}\underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}\frac{k}{Y(s)}ds={\int _{{\alpha _{i}}}^{t}}\frac{k}{Y(s)}ds;\\ {} \displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }{\int _{{\alpha _{i}^{\ast }}}^{t}}Y(s)ds={\int _{{\alpha _{i}}}^{t}}Y(s)ds;\\ {} \displaystyle \underset{{\alpha _{i}^{\ast }}\to {\alpha _{i}}+}{\lim }\big({B^{H}}(t)-{B^{H}}\big({\alpha _{i}^{\ast }}\big)\big)=\big({B^{H}}(t)-{B^{H}}({\alpha _{i}})\big).\end{array}\]
Due to Lemma 3.3, $Y({\alpha _{i}})=0$, therefore (26) holds for an arbitrary $t\in [{\alpha _{i}},{\beta _{i}})$.
To get the result for $t={\beta _{i}}$, it is sufficient to pass to the limit as $t\to {\beta _{i}}$.  □
Remark 3.3.
Similarly to Theorem 3.2, it is easy to prove that
\[ \underset{t\to {\beta _{0}}-}{\lim }Y(t)=Y({\beta _{0}})=0,\]
and therefore, taking into account Remark 3.2, for all $t\in [0,{\beta _{0}}]$:
(28)
\[ Y(t)={Y_{0}}+{\int _{0}^{t}}\frac{k}{Y(s)}ds-a{\int _{0}^{t}}Y(s)ds+\sigma {B^{H}}(t).\]
Remark 3.4.
The choice of ε-approximations may be different. For example, instead of (8), it is possible to consider the equation of the form
\[ \tilde{{Y_{\varepsilon }}}(t)={Y_{0}}+{\int _{0}^{t}}\frac{k}{\max \{\tilde{{Y_{\varepsilon }}}(s),\varepsilon \}}ds-a{\int _{0}^{t}}\tilde{{Y_{\varepsilon }}}(s)ds+\sigma {B^{H}}(t).\]
By following the proofs of the results above, it can be verified that all of them hold for the resulting limit process $\tilde{Y}$. Furthermore, if $k,a>0$, it coincides with Y on $[0,+\infty )$.
Indeed, let $\omega \in \varOmega $ be fixed. Consider the difference
\[\begin{array}{c}\displaystyle {\Delta _{\varepsilon }}(t):=\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)\\ {} \displaystyle ={\int _{0}^{t}}\bigg(\frac{k}{\max \{\tilde{{Y_{\varepsilon }}}(s),\varepsilon \}}-\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }\bigg)ds\\ {} \displaystyle -a{\int _{0}^{t}}\big(\tilde{{Y_{\varepsilon }}}(s)-{Y_{\varepsilon }}(s)\big)ds.\end{array}\]
As $\frac{k}{\max \{x,\varepsilon \}}\ge \frac{k}{x{1_{\{x>0\}}}+\varepsilon }$ for all $x\in \mathbb{R}$, it is easy to see from Lemma 2.1 and Remark 2.2 that ${\Delta _{\varepsilon }}(t)=\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)\ge 0$. Furthermore, ${\Delta _{\varepsilon }}$ is differentiable on $(0,+\infty )$ and ${\Delta _{\varepsilon }}(0)=0$.
Assume that there is such $\tau >0$ that ${\Delta _{\varepsilon }}(\tau )\ge 2\varepsilon $ and denote
\[ {\tau _{\varepsilon }}:=\inf \big\{t\in (0,\tau )\hspace{2.5pt}|\hspace{2.5pt}\forall s\in (t,\tau ]:{\Delta _{\varepsilon }}(s)>\varepsilon \big\}.\]
Note that, due to continuity of $\tilde{{Y_{\varepsilon }}}$ and ${Y_{\varepsilon }}$, ${\Delta _{\varepsilon }}({\tau _{\varepsilon }})=\varepsilon $ and therefore for all $t\in ({\tau _{\varepsilon }},\tau )$:
\[ {\Delta _{\varepsilon }}(t)-{\Delta _{\varepsilon }}({\tau _{\varepsilon }})>0,\]
so, as ${\Delta _{\varepsilon }}$ is differentiable in ${\tau _{\varepsilon }}$,
\[ {\Delta ^{\prime }_{\varepsilon }}({\tau _{\varepsilon }})={({\Delta _{\varepsilon }})^{\prime }_{+}}({\tau _{\varepsilon }})=\underset{t\to {\tau _{\varepsilon }}+}{\lim }\frac{{\Delta _{\varepsilon }}(t)-{\Delta _{\varepsilon }}({\tau _{\varepsilon }})}{t-{\tau _{\varepsilon }}}\ge 0.\]
However, as
\[\begin{array}{c}\displaystyle \max \big\{\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }}),\varepsilon \big\}=\max \big\{{Y_{\varepsilon }}({\tau _{\varepsilon }})+\varepsilon ,\varepsilon \big\}=\max \big\{{Y_{\varepsilon }}({\tau _{\varepsilon }}),0\big\}+\varepsilon \\ {} \displaystyle ={Y_{\varepsilon }}({\tau _{\varepsilon }}){1_{\{{Y_{\varepsilon }}({\tau _{\varepsilon }})>0\}}}+\varepsilon ,\end{array}\]
it is easy to see that
\[\begin{array}{c}\displaystyle {\Delta ^{\prime }_{\varepsilon }}({\tau _{\varepsilon }})=\frac{k}{\max \{\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }}),\varepsilon \}}-\frac{k}{{Y_{\varepsilon }}({\tau _{\varepsilon }}){1_{\{{Y_{\varepsilon }}({\tau _{\varepsilon }})>0\}}}+\varepsilon }-a\big(\tilde{{Y_{\varepsilon }}}({\tau _{\varepsilon }})-{Y_{\varepsilon }}({\tau _{\varepsilon }})\big)\\ {} \displaystyle =-a\varepsilon <0.\end{array}\]
The obtained contradiction shows that for all $t\ge 0$: $\tilde{{Y_{\varepsilon }}}(t)-{Y_{\varepsilon }}(t)<2\varepsilon $ and therefore, by letting $\varepsilon \to 0$, it is easy to verify that for all $t\ge 0$: $\tilde{Y}(t)=Y(t)$.
Simulations illustrate that the processes indeed coincide (see Fig. 1 and the sketch below). Euler approximations of ${Y_{\varepsilon }}$ and $\tilde{{Y_{\varepsilon }}}$ on $[0,5]$ were used with $\varepsilon =0.01$, $H=0.3$, ${Y_{0}}=a=k=\sigma =1$, and the mesh of the partition $\Delta t=0.0001$.
vmsta-6-1-vmsta126-g001.jpg
Fig. 1.
Comparison of ${Y_{\varepsilon }}$ (black) and $\tilde{{Y_{\varepsilon }}}$ (red), $\varepsilon =0.01$. The two paths are so close to each other that they cannot be distinguished in the figure
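The experiment behind Fig. 1 can be reproduced, for instance, with the following Python sketch. It is not the authors' code: the function names (fbm_cholesky, euler_pair) are illustrative, the fractional Brownian motion is generated by the exact Cholesky method, and a coarser mesh than in the paper is used, since for $\Delta t=0.0001$ a faster exact generator (for example, Davies–Harte) would be preferable.

import numpy as np

def fbm_cholesky(n, dt, H, rng):
    """Exact sample B^H(t_1),...,B^H(t_n) on a uniform grid via the Cholesky method."""
    t = dt * np.arange(1, n + 1)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def euler_pair(bh, dt, y0, k, a, sigma, eps):
    """Euler schemes for Y_eps (indicator drift) and tilde-Y_eps (max drift),
    driven by the same fBm path bh = (B^H(t_1),...,B^H(t_n))."""
    n = len(bh)
    db = np.diff(np.concatenate(([0.0], bh)))       # fBm increments, B^H(0) = 0
    y = np.empty(n + 1); y[0] = y0                  # Y_eps
    yt = np.empty(n + 1); yt[0] = y0                # tilde-Y_eps
    for i in range(n):
        y[i + 1] = y[i] + (k / (y[i] * (y[i] > 0) + eps) - a * y[i]) * dt + sigma * db[i]
        yt[i + 1] = yt[i] + (k / max(yt[i], eps) - a * yt[i]) * dt + sigma * db[i]
    return y, yt

rng = np.random.default_rng(0)
dt, T, H, eps = 0.001, 5.0, 0.3, 0.01               # coarser mesh than in the paper
n = int(T / dt)
bh = fbm_cholesky(n, dt, H, rng)
y, yt = euler_pair(bh, dt, y0=1.0, k=1.0, a=1.0, sigma=1.0, eps=eps)
# the comparison argument above gives |tilde-Y_eps - Y_eps| < 2*eps for the exact processes
print("max |tilde Y_eps - Y_eps| =", np.max(np.abs(yt - y)))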

4 Equation for the square root process Y for $t\ge 0$

The equation for $Y(t)$ in the case $t\in [0,{\beta _{0}}]$ has already been obtained in Remark 3.3. In order to obtain the equation for an arbitrary $t\in {\mathbb{R}_{+}}$, consider the following procedure.
Let $\mathcal{I}$ be the set of all non-intersecting open intervals from representation (25), i.e.
\[ \mathcal{I}=\big\{({\alpha _{i}},{\beta _{i}}),\hspace{2.5pt}i=1,\dots ,N\big\},\hspace{1em}N\in \mathbb{N}\cup \{\infty \}.\]
Consider an arbitrary interval $(\alpha ,\beta )\in \mathcal{I}$ and assume that $t\in [\alpha ,\beta ]$. Then, from Theorem 3.2,
\[ Y(t)={\int _{\alpha }^{t}}\frac{k}{Y(s)}ds-a{\int _{\alpha }^{t}}Y(s)ds+\sigma \big({B^{H}}(t)-{B^{H}}(\alpha )\big).\]
Consider all intervals $({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}})\in \mathcal{I}$, $j=1,\dots ,M$, $M\in \mathbb{N}\cup \{\infty \}$, such that ${\tilde{\alpha }_{j}}<\alpha $.
For each $j=1,\dots ,M$,
\[ 0=Y({\tilde{\beta }_{j}})={\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\frac{k}{Y(s)}ds-a{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}Y(s)ds+\sigma \big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big).\]
Moreover,
\[ 0=Y({\beta _{0}})={Y_{0}}+{\int _{0}^{{\beta _{0}}}}\frac{k}{Y(s)}ds-a{\int _{0}^{{\beta _{0}}}}Y(s)ds+\sigma {B^{H}}({\beta _{0}}).\]
Therefore
\[\begin{array}{c}\displaystyle Y(t)=Y({\beta _{0}})+{\sum \limits_{j=1}^{M}}Y({\tilde{\beta }_{j}})+Y(t)\\ {} \displaystyle ={Y_{0}}+\Bigg({\int _{0}^{{\beta _{0}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\sum \limits_{j=1}^{M}}{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +{\int _{\alpha }^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg)\\ {} \displaystyle ={Y_{0}}+{\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg)\\ {} \displaystyle ={Y_{0}}+{\int _{0}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\sigma \Bigg({B^{H}}({\beta _{0}})+{\sum \limits_{j=1}^{M}}\big({B^{H}}({\tilde{\beta }_{j}})-{B^{H}}({\tilde{\alpha }_{j}})\big)+\big({B^{H}}(t)-{B^{H}}(\alpha )\big)\Bigg).\end{array}\]
The question of whether Y satisfies an equation of the form (28) on the whole of ${\mathbb{R}_{+}}$ remains open due to the obscure structure of its zero set; however, simulation studies do not contradict this conjecture.
Indeed, consider the process $Z=\{Z(t),t\ge 0\}$ that satisfies the SDE
\[ Z(t)={Y_{0}}-a{\int _{0}^{t}}Z(s)ds+{\int _{0}^{t}}\frac{k}{Y(s)}ds+\sigma {B^{H}}(t),\]
where Y is, as usual, the pointwise limit of ${Y_{\varepsilon }}$, and ${Y_{0}}$, k, a, σ are positive constants. By following the proof of Proposition A.1 in [3] explicitly, it can be shown that
(29)
\[ Z(t)={e^{-at}}\Bigg({Y_{0}}+{\int _{0}^{t}}\frac{k{e^{as}}}{Y(s)}ds+\sigma {\int _{0}^{t}}{e^{as}}d{B^{H}}(s)\Bigg).\]
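For completeness, here is a formal check (a sketch only, assuming that all the integrals involved are well defined pathwise and that the usual rules of pathwise calculus apply; the notation V is auxiliary and introduced only here) that the right-hand side of (29) solves the equation for Z. Writing $Z(t)={e^{-at}}V(t)$ with $V(t)={Y_{0}}+{\int _{0}^{t}}\frac{k{e^{as}}}{Y(s)}ds+\sigma {\int _{0}^{t}}{e^{as}}d{B^{H}}(s)$, the product rule gives
\[\begin{aligned}{}dZ(t)&=-a{e^{-at}}V(t)dt+{e^{-at}}dV(t)\\ {} &=-aZ(t)dt+\frac{k}{Y(t)}dt+\sigma d{B^{H}}(t),\end{aligned}\]
and integration from 0 to t together with $Z(0)={Y_{0}}$ returns the equation for Z stated above.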
Now assume that Y satisfies an equation of the form (28) for all $t\ge 0$. Then Y admits a representation of the form (29), i.e.
(30)
\[ Y(t)={e^{-at}}\Bigg({Y_{0}}+{\int _{0}^{t}}\frac{k{e^{as}}}{Y(s)}ds+\sigma {\int _{0}^{t}}{e^{as}}d{B^{H}}(s)\Bigg),\]
so the paths of Y and paths obtained by direct simulation of the right-hand side of (30) must coincide.
In other words, if we simulate the trajectory of Y, apply the transform given on the right-hand side of (30) to it, and obtain a trajectory that differs significantly from the initial one, this would be evidence that Y does not satisfy an equation of the form (28) for all $t\ge 0$.
To simulate the left-hand side of (30), Euler approximations of the process ${Y_{\varepsilon }}$ with $\varepsilon =0.01$ are used. They are compared to the right-hand side of (30) simulated explicitly from the same ${Y_{\varepsilon }}$. The mesh of the partition is $\Delta t=0.0001$, $T=5$, $H=0.4$, ${Y_{0}}=k=a=\sigma =1$ (see Fig. 2 and the sketch at the end of this section).
vmsta-6-1-vmsta126-g002.jpg
Fig. 2.
Comparison of right-hand and left-hand sides of (30)
As we see, the Euler approximation of Y (that is, of ${Y_{\varepsilon }}$ with small ε, black) indeed almost coincides with its transform given by the right-hand side of (30) (red), so no contradiction is obtained.
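The check behind Fig. 2 can be sketched in the same illustrative setting (the code below is ours, not the authors'; it is intended to be run after the previous sketch, whose functions and variables it reuses, and the name rhs_of_30 is introduced here). Both integrals on the right-hand side of (30) are approximated by left-point Riemann sums over the simulation grid, using the same fBm increments that drive the Euler scheme, and the result is compared with the Euler path itself.

import numpy as np

def rhs_of_30(y, bh, dt, y0, k, a, sigma):
    """Right-hand side of (30) along the simulated path y, with both integrals
    approximated by left-point Riemann sums on the grid t_j = j*dt."""
    n = len(bh)
    t = dt * np.arange(n + 1)
    db = np.diff(np.concatenate(([0.0], bh)))       # fBm increments
    drift = np.concatenate(([0.0], np.cumsum(k * np.exp(a * t[:-1]) / y[:-1] * dt)))
    stoch = np.concatenate(([0.0], np.cumsum(np.exp(a * t[:-1]) * db)))
    return np.exp(-a * t) * (y0 + drift + sigma * stoch)

bh = fbm_cholesky(n, dt, H=0.4, rng=rng)            # Fig. 2 uses H = 0.4
y, _ = euler_pair(bh, dt, y0=1.0, k=1.0, a=1.0, sigma=1.0, eps=eps)
z = rhs_of_30(y, bh, dt, y0=1.0, k=1.0, a=1.0, sigma=1.0)
# the two curves should nearly coincide, up to the discretization error
print("max |Y - RHS of (30)| =", np.max(np.abs(y - z)))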

5 Fractional Cox–Ingersoll–Ross process on ${\mathbb{R}_{+}}$

Consider a set of random processes $\{{Y_{\varepsilon }},\varepsilon >0\}$ which satisfy the equations of the form
\[ {Y_{\varepsilon }}(t)={Y_{0}}+\frac{1}{2}{\int _{0}^{t}}\bigg(\frac{k}{{Y_{\varepsilon }}(s){1_{\{{Y_{\varepsilon }}(s)>0\}}}+\varepsilon }-a{Y_{\varepsilon }}(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}(t),\]
driven by the same fractional Brownian motion with the same parameters k, a, $\sigma >0$ and the same starting point ${Y_{0}}>0$.
Let the process $Y=\{Y(t),t\ge 0\}$ be such that for all $t\ge 0$:
(31)
\[ Y(t)=\underset{\varepsilon \to 0}{\lim }{Y_{\varepsilon }}(t).\]
Definition 5.1.
The fractional Cox–Ingersoll–Ross process is the process $X=\{X(t),t\ge 0\}$, such that
\[ X(t)={Y^{2}}(t),\hspace{1em}\forall t\ge 0.\]
Let us show that this definition is indeed a generalization of the original Definition 1.1.
First, we will require the following definition.
Definition 5.2.
Let $\{{U_{t}},t\ge 0\}$, $\{{V_{t}},t\ge 0\}$ be random processes. The pathwise Stratonovich integral ${\int _{0}^{T}}{U_{s}}\circ d{V_{s}}$ is the pathwise limit of the sums
\[ {\sum \limits_{k=1}^{n}}\frac{{U_{{t_{k}}}}+{U_{{t_{k-1}}}}}{2}({V_{{t_{k}}}}-{V_{{t_{k-1}}}}),\]
as the mesh of the partition $0={t_{0}}<{t_{1}}<{t_{2}}<\cdots <{t_{n-1}}<{t_{n}}=T$ tends to zero, provided that this limit exists.
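Definition 5.2 is straightforward to implement numerically: given sampled paths of U and V on a common partition, the Stratonovich sums are trapezoidal in U. The following sketch (the function name is ours, not from the paper) illustrates this; for smooth paths such as $U=V=t$ on $[0,1]$ the sums reproduce the exact value $1/2$ on any partition.

import numpy as np

def stratonovich_sum(u, v):
    """Sum over the partition of (U(t_k) + U(t_{k-1}))/2 * (V(t_k) - V(t_{k-1}))."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(v))

t = np.linspace(0.0, 1.0, 1001)
print(stratonovich_sum(t, t))    # 0.5, the exact value of the integral of t with respect to t over [0, 1]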
Taking into account the results of the previous sections and Theorem 1 in [18], for all $t\in [0,{\beta _{0}}]$:
\[ X(t)={X_{0}}+{\int _{0}^{t}}\big(k-aX(s)\big)ds+\sigma {\int _{0}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s)\hspace{2.5pt}a.s.\]
A similar result holds for all $t\in [{\alpha _{i}},{\beta _{i}}]$, $i\ge 1$.
Theorem 5.1.
Let $({\alpha _{i}},{\beta _{i}})$, $i\ge 1$, be an arbitrary interval from the representation (25). Consider the fractional Cox–Ingersoll–Ross process $X=\{X(t),\hspace{2.5pt}t\in [{\alpha _{i}},{\beta _{i}}]\}$ from Definition 5.1.
Then, for ${\alpha _{i}}\le t\le {\beta _{i}}$ the process X a.s. satisfies the SDE
(32)
\[ \displaystyle dX(t)=\big(k-aX(t)\big)dt+\sigma \sqrt{X(t)}\circ d{B^{H}}(t),\hspace{1em}X({\alpha _{i}})=0,\]
where the integral with respect to the fractional Brownian motion is defined as the pathwise Stratonovich integral.
Proof.
We shall follow the proof of Theorem 1 from [18].
Let us fix an $\omega \in \varOmega $ and consider an arbitrary $t\in [{\alpha _{i}},{\beta _{i}}]$.
It is clear that
(33)
\[\begin{array}{l}\displaystyle X(t)={Y^{2}}(t)\\ {} \displaystyle ={\Bigg(\frac{1}{2}{\int _{{\alpha _{i}}}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}(t)-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg)^{2}}.\end{array}\]
Consider an arbitrary partition of the interval $[{\alpha _{i}},t]$:
\[ {\alpha _{i}}={t_{0}}<{t_{1}}<{t_{2}}<\cdots <{t_{n-1}}<{t_{n}}=t.\]
Taking into account that $X({\alpha _{i}})=0$ and using (33), we obtain
\[\begin{aligned}{}X(t)& ={\sum \limits_{j=1}^{n}}\big(X({t_{j}})-X({t_{j-1}})\big)\\ {} & ={\sum \limits_{j=1}^{n}}\Bigg({\Bigg[\frac{1}{2}{\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}({t_{j}})-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg]^{2}}\\ {} & \hspace{1em}-{\Bigg[\frac{1}{2}{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}{B^{H}}({t_{j-1}})-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg]^{2}}\Bigg)\end{aligned}\]
Factoring each summand as the difference of squares, we get:
\[\begin{aligned}{}X(t)& ={\sum \limits_{j=1}^{n}}\Bigg[\frac{1}{2}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} & \hspace{1em}+\frac{\sigma }{2}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big)-\sigma {B^{H}}({\alpha _{i}})\Bigg]\\ {} & \hspace{2em}\times \Bigg[\frac{1}{2}{\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+\frac{\sigma }{2}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\Bigg].\end{aligned}\]
Expanding the brackets in the last expression, we obtain:
(34)
\[\begin{array}{l}\displaystyle X(t)=\frac{1}{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle -\frac{\sigma }{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\Bigg({\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times \big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle +\frac{{\sigma ^{2}}}{4}{\sum \limits_{j=1}^{n}}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big({B^{H}}({t_{j}})+{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle -\frac{{\sigma ^{2}}}{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\big(\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big).\end{array}\]
Let the mesh $\Delta t$ of the partition tend to zero. The first three summands
(35)
\[\begin{array}{l}\displaystyle \frac{1}{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times {\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle +\frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\big({B_{{t_{j}}}^{H}}+{B_{{t_{j-1}}}^{H}}\big){\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\\ {} \displaystyle -\frac{\sigma }{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\Bigg({\int _{{t_{j-1}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \to {\int _{{\alpha _{i}}}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)\\ {} \displaystyle \times \Bigg(\frac{1}{2}{\int _{{\alpha _{i}}}^{s}}\bigg(\frac{k}{Y(u)}-aY(u)\bigg)du+\frac{\sigma }{2}{B^{H}}(s)-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg)ds\\ {} \displaystyle ={\int _{{\alpha _{i}}}^{t}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)Y(s)ds={\int _{{\alpha _{i}}}^{t}}\big(k-aX(s)\big)ds,\hspace{1em}\Delta t\to 0,\end{array}\]
and the last three summands
(36)
\[\begin{array}{l}\displaystyle \frac{\sigma }{4}{\sum \limits_{j=1}^{n}}\Bigg({\int _{{\alpha _{i}}}^{{t_{j}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds+{\int _{{\alpha _{i}}}^{{t_{j-1}}}}\bigg(\frac{k}{Y(s)}-aY(s)\bigg)ds\Bigg)\\ {} \displaystyle \times \big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle +\frac{{\sigma ^{2}}}{4}{\sum \limits_{j=1}^{n}}\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big({B^{H}}({t_{j}})+{B^{H}}({t_{j-1}})\big)\\ {} \displaystyle -\frac{{\sigma ^{2}}}{2}{B^{H}}({\alpha _{i}}){\sum \limits_{j=1}^{n}}\big(\big({B^{H}}({t_{j}})-{B^{H}}({t_{j-1}})\big)\big)\\ {} \displaystyle \to \sigma {\int _{{\alpha _{i}}}^{t}}\Bigg(\frac{1}{2}{\int _{{\alpha _{i}}}^{s}}\bigg(\frac{k}{Y(u)}-aY(u)\bigg)du\\ {} \displaystyle +\frac{\sigma }{2}{B^{H}}(s)-\frac{\sigma }{2}{B^{H}}({\alpha _{i}})\Bigg)\circ d{B^{H}}(s)\\ {} \displaystyle =\sigma {\int _{{\alpha _{i}}}^{t}}Y(s)\circ d{B^{H}}(s)=\sigma {\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s),\hspace{1em}\Delta t\to 0.\end{array}\]
Note that the left-hand side of (34) does not depend on the partition and the limit in (35) exists as the pathwise Riemann integral; therefore the corresponding pathwise Stratonovich integral exists and the passage to the limit in (36) is correct.
Thus, the process X satisfies the SDE of the form
(37)
\[ \displaystyle X(t)={\int _{{\alpha _{i}}}^{t}}\big(k-aX(s)\big)ds+\sigma {\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s),\hspace{1em}t\in [{\alpha _{i}},{\beta _{i}}],\]
where ${\int _{{\alpha _{i}}}^{t}}\sqrt{X(s)}\circ d{B^{H}}(s)$ is a pathwise Stratonovich integral.  □
Finally, similarly to Section 4, the next result follows from Theorem 5.1.
Consider an arbitrary $(\alpha ,\beta )\in \mathcal{I}$, where $\mathcal{I}$ is the set of all open intervals from the representation (25) (see Section 4). Let ${\beta _{0}}$ be the first moment when Y hits zero, and let $({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}})\in \mathcal{I}$, $j=1,\dots ,M$, $M\in \mathbb{N}\cup \{\infty \}$, be all intervals such that ${\tilde{\alpha }_{j}}<\alpha $.
Theorem 5.2.
Let $t\in [\alpha ,\beta ]$. Then
\[\begin{array}{c}\displaystyle X(t)=X(0)+{\int _{0}^{t}}\big(k-aX(s)\big)ds\\ {} \displaystyle +\sigma {\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\sqrt{X(s)}\circ d{B^{H}}(s),\end{array}\]
where
\[\begin{array}{c}\displaystyle {\int _{[0,{\beta _{0}})\cup ({\textstyle\bigcup _{j=1}^{M}}({\tilde{\alpha }_{j}},{\tilde{\beta }_{j}}))\cup [\alpha ,t)}}\sqrt{X(s)}\circ d{B^{H}}(s)\\ {} \displaystyle :={\int _{0}^{{\beta _{0}}}}\sqrt{X(s)}\circ d{B^{H}}(s)+{\sum \limits_{j=1}^{M}}{\int _{{\tilde{\alpha }_{j}}}^{{\tilde{\beta }_{j}}}}\sqrt{X(s)}\circ d{B^{H}}(s)\\ {} \displaystyle +{\int _{\alpha }^{t}}\sqrt{X(s)}\circ d{B^{H}}(s).\end{array}\]

Acknowledgments

We are deeply grateful to the anonymous referees, whose valuable comments helped us to improve the manuscript significantly.

References

[1] 
Anh, V., Inoue, A.: Financial Markets with Memory I: Dynamic Models. Stoch. Anal. Appl. 23(2), 275–300 (2005). MR2130350. https://doi.org/10.1081/SAP-200050096
[2] 
Bollerslev, T., Mikkelsen, H.O.: Modelling and pricing long memory in stock market volatility. J. Econom. 73(1), 151–184 (1996)
[3] 
Cheridito, P., Kawaguchi, H., Maejima, M.: Fractional Ornstein-Uhlenbeck processes. Electron. J. Probab. 8(1), 1–14 (2003). MR1961165. https://doi.org/10.1214/EJP.v8-125
[4] 
Cox, J.C., Ingersoll, J.E., Ross, S.A.: A re-examination of traditional hypotheses about the term structure of interest rates. J. Finance 36, 769–799 (1981)
[5] 
Cox, J.C., Ingersoll, J.E., Ross, S.A.: An intertemporal general equilibrium model of asset prices. Econometrica 53(1), 363–384 (1985). MR0785474. https://doi.org/10.2307/1911241
[6] 
Cox, J.C., Ingersoll, J.E., Ross, S.A.: A theory of the term structure of interest rates. Econometrica 53(2), 385–407 (1985). MR0785475. https://doi.org/10.2307/1911242
[7] 
Ding, Z., Granger, C.W., Engle, R.F.: A long memory property of stock market returns and a new model. J. Empir. Finance 1(1), 83–106 (1993)
[8] 
El Euch, O., Rosenbaum, M.: The characteristic function of rough Heston models. https://arxiv.org/pdf/1609.02108.pdf. Accessed 18 Aug 2018. arXiv:1609.02108
[9] 
Feller, W.: Two singular diffusion problems. Ann. Math. 54, 173–182 (1951). MR0054814. https://doi.org/10.2307/1969318
[10] 
Heston, S.L.: A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 6(2), 327–343 (1993)
[11] 
Kuchuk-Iatsenko, S., Mishura, Y.: Pricing the European call option in the model with stochastic volatility driven by Ornstein-Uhlenbeck process. Exact formulas. Mod. Stoch. Theory Appl. 2(3), 233–249 (2015). MR3407504. https://doi.org/10.15559/15-VMSTA36CNF
[12] 
Kuchuk-Iatsenko, S., Mishura, Y., Munchak, Y.: Application of Malliavin calculus to exact and approximate option pricing under stochastic volatility. Theory Probab. Math. Stat. 94, 93–115 (2016). MR3553457. https://doi.org/10.1090/tpms/1012
[13] 
Leonenko, N., Meerschaert, M., Sikorskii, A.: Correlation structure of fractional Pearson diffusion. Comput. Math. Appl. 66(5), 737–745 (2013). MR3089382. https://doi.org/10.1016/j.camwa.2013.01.009
[14] 
Leonenko, N., Meerschaert, M., Sikorskii, A.: Fractional Pearson diffusion. J. Math. Anal. Appl. 403(2), 532–546 (2013). MR3037487. https://doi.org/10.1016/j.jmaa.2013.02.046
[15] 
Marie, N.: A generalized mean-reverting equation and applications. ESAIM Probab. Stat. 18, 799–828 (2014). MR3334015. https://doi.org/10.1051/ps/2014002
[16] 
Mishura, Y.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, Berlin (2008). MR2378138. https://doi.org/10.1007/978-3-540-75873-0
[17] 
Mishura, Y., Piterbarg, V., Ralchenko, K., Yurchenko-Tytarenko, A.: Stochastic representation and pathwise properties of fractional Cox-Ingersoll-Ross process (in Ukrainian). Theory Probab. Math. Stat. 97, 157–170 (2017). Available in English at: https://arxiv.org/pdf/1708.02712.pdf. MR3746006
[18] 
Mishura, Y., Yurchenko-Tytarenko, A.: Fractional Cox-Ingersoll-Ross process with non-zero “mean”. Mod. Stoch. Theory Appl. 5(1), 99–111 (2018). MR3784040. https://doi.org/10.15559/18-vmsta97
[19] 
Mukeru, S.: The zero set of fractional Brownian motion is a Salem set. J. Fourier Anal. Appl. 24(4), 957–999 (2018). MR3843846. https://doi.org/10.1007/s00041-017-9551-9
[20] 
Nualart, D., Ouknine, Y.: Regularization of differential equations by fractional noise. Stoch. Process. Appl. 102, 103–116 (2002). MR1934157. https://doi.org/10.1016/S0304-4149(02)00155-2
[21] 
Yamasaki, K., Muchnik, L., Havlin, S., Bunde, A., Stanley, H.E.: Scaling and memory in volatility return intervals in financial markets. Proc. Natl. Acad. Sci. USA 102(26), 9424–9428 (2005)