1 Introduction
The problem of convergence of discrete-time financial models to continuous-time models is well developed; see, e.g., [6, 7, 9, 11, 14, 17, 19]. The reason for such an interest can be explained as follows: from the analytical point of view, it is much simpler to deal with continuous-time models, although all real-world models operate in discrete time. As for the rate of convergence, there are different approaches to its estimation; some of these approaches are established in [6, 7, 23–27]. In this paper, we consider the Cox–Ingersoll–Ross process and its approximation on a finite time interval. The CIR process was originally proposed by Cox, Ingersoll, and Ross [8] as a model for short-term interest rates. Nowadays, this model is widely used in financial modeling, for example, as the volatility process in the Heston model [16]. The strong global approximation of the CIR process has been studied in several articles. Strong convergence (without a rate or with a logarithmic rate) of several discretization schemes is shown in [1, 4, 12, 15, 18]. In [1], a general framework for the analysis of strong approximation of the CIR process is presented along with extensive simulation studies. Nonlogarithmic convergence rates are obtained in [2]. In [10], the author extends the CIR model of the short interest rate by assuming a stochastic reversion level, which better reflects the time dependence caused by the cyclical nature of the economy or by expectations concerning the future impact of monetary policies; in this framework, the convergence of the long-term return is studied by using the theory of generalized Bessel-square processes. In [28], the authors propose an empirical method that utilizes the conditional density of the state variables to estimate and test a term structure model with a known price formula using data on both discount and coupon bonds. The method is applied to an extension of a two-factor model due to Cox, Ingersoll, and Ross.
Their results show that estimates based solely on bills imply unreasonably large price errors for longer maturities. The process is also discussed in [5].
In this article, we focus on the regime where the CIR process does not hit zero and study the weak approximation of this process in two settings. In the first one, the sequence of prelimit markets is modeled by a sequence of discrete-time additive stochastic processes, whereas in the second one, a sequence of multiplicative stochastic processes is modeled. The additive scheme is widely used, for example, in the papers [1, 4, 13]. The papers [10, 28] are recent examples of modeling a stochastic interest rate by the multiplicative model of the CIR process. In [10], the author says that the model has the “strong convergence property,” whereas models are referred to as having the “weak convergence property” when the returns converge to a constant that generally depends on the current economic environment and may change in a stochastic fashion over time. We construct a discrete approximation scheme for the price of an asset that is modeled by the Cox–Ingersoll–Ross process. In order to construct these additive and multiplicative processes, we take the Euler approximations of the CIR process itself but replace the increments of the Wiener process with iid bounded vanishing symmetric random variables. We introduce a “truncated” CIR process and use it to prove the weak convergence of asset prices.
The paper is organized as follows. In Section 2, we present the complete and “truncated” CIR processes and establish that the “truncated” CIR process can be described as the unique strong solution of the corresponding stochastic differential equation. We establish that this “truncated” process does not hit zero under the same condition as for the original nontruncated process. In Section 3, we present discrete approximation schemes for both these processes and prove the weak convergence of asset prices for the additive model. In the next section, we prove the weak convergence of asset prices for the multiplicative model. The Appendix contains additional technical results.
2 Original and “truncated” Cox–Ingersoll–Ross processes and some of their properties
Let $\varOmega _{\mathcal{F}}=(\varOmega ,\mathcal{F},(\mathcal{F}_{t},t\ge 0),\operatorname{\mathsf{P}})$ be a complete filtered probability space, and $W=\{W_{t},\mathcal{F}_{t},t\ge 0\}$ be an adapted Wiener process. Consider a Cox–Ingersoll–Ross process with constant parameters on this space. This process is described as the unique strong solution of the following stochastic differential equation:
(1)
\[dX_{t}=(b-X_{t})dt+\sigma \sqrt{X_{t}}dW_{t},\hspace{1em}X_{0}=x_{0}>0,\hspace{0.2778em}t\ge 0,\]
where $b>0$, $\sigma >0$. The integral form of the process X is as follows:
(2)
\[X_{t}=x_{0}+\underset{0}{\overset{t}{\int }}(b-X_{s})ds+\sigma \underset{0}{\overset{t}{\int }}\sqrt{X_{s}}dW_{s}.\]
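For readers who wish to experiment numerically, the dynamics (1) admit a straightforward Euler discretization. The following sketch is our own illustration (function and parameter names are ours, not from the paper); the guard $\max(x,0)$ under the square root is an implementation safeguard against the small negative excursions an Euler path can produce and is not part of equation (1).

```python
import numpy as np

def simulate_cir_euler(x0, b, sigma, T, n, rng):
    """Euler scheme for dX = (b - X) dt + sigma * sqrt(X) dW on [0, T].

    The guard max(x, 0) under the square root protects the discretization
    against small negative excursions; it is not part of the SDE itself.
    """
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + (b - x[k]) * dt + sigma * np.sqrt(max(x[k], 0.0)) * dw
    return x
```

A quick consistency check: for $x_{0}=b$ the mean of $X_{t}$ equals $b$ for every $t$, so the Monte Carlo average of terminal values should stay close to $b$.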
According to the paper [8], the condition ${\sigma }^{2}\le 2b$ is necessary and sufficient for the process X to stay positive and never hit zero. In what follows, we assume that this condition is satisfied.

For the proof of functional limit theorems, we need a modification of the Cox–Ingersoll–Ross process with bounded coefficients. This process is called a truncated Cox–Ingersoll–Ross process. Let $C>0$. Consider the following stochastic differential equation with the same coefficients b and σ as in (1):
(3)
\[d{X_{t}^{C}}=\big(b-{X_{t}^{C}}\wedge C\big)dt+\sigma \sqrt{{X_{t}^{C}}\wedge C}dW_{t},\hspace{1em}{X_{0}^{C}}=x_{0}>0,\hspace{0.2778em}t\ge 0.\]
Remark 2.1.
Denote $\sigma _{-\epsilon }=\inf \{t:{X_{t}^{C}}=-\epsilon \}$ with $\epsilon >0$ such that $-\epsilon +b>0$. Suppose that $\operatorname{\mathsf{P}}(\sigma _{-\epsilon }<\infty )>0$. Then for any $r<\sigma _{-\epsilon }$ such that ${X_{t}^{C}}<0$ for $t\in (r,\sigma _{-\epsilon })$, we would have, with positive probability,
\[d{X_{t}^{C}}=\big(b-{X_{t}^{C}}\big)dt>0\]
on the interval $(r,\sigma _{-\epsilon })$ (note that ${X_{t}^{C}}\wedge C={X_{t}^{C}}$ for ${X_{t}^{C}}<0$), and hence $t\to {X_{t}^{C}}$ would increase on this interval. This is obviously impossible. Therefore, ${X_{t}^{C}}$ is nonnegative and can be written in integral form as
\[{X_{t}^{C}}=x_{0}+\underset{0}{\overset{t}{\int }}\big(b-{X_{s}^{C}}\wedge C\big)ds+\sigma \underset{0}{\overset{t}{\int }}\sqrt{{X_{s}^{C}}\wedge C}dW_{s}.\]
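As a numerical sanity check (our own illustration, not part of the original argument), the truncated dynamics can be compared pathwise with the nontruncated ones when both Euler iterations are driven by the same noise: the two recursions are bitwise identical as long as the path never exceeds the truncation level C.

```python
import numpy as np

def simulate_cir_pair(x0, b, sigma, C, T, n, rng):
    """Drive the plain and the truncated Euler iterations with the SAME noise:

        plain:     x_{k+1} = x_k + (b - x_k) dt + sigma * sqrt(x_k) dW_k
        truncated: uses b - min(x_k, C) and sqrt(min(x_k, C)) instead.

    Returns both terminal values and the largest pathwise gap; the gap is
    exactly zero whenever the path stays below the truncation level C.
    """
    dt = T / n
    x = xc = x0
    max_gap = 0.0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + (b - x) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
        xc = xc + (b - min(xc, C)) * dt + sigma * np.sqrt(max(min(xc, C), 0.0)) * dw
        max_gap = max(max_gap, abs(x - xc))
    return x, xc, max_gap
```

With a large level such as $C=100$ and moderate parameters, the path stays far below C, so the truncation is invisible, which is exactly the idea exploited later when passing from the truncated to the complete scheme.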
Lemma 2.2.
Let $2b\ge {\sigma }^{2}$ and $C>b\vee 1$. Then the trajectories of the process ${X}^{C}$ are positive with probability 1.
Proof.
In order to prove that the process ${X}^{C}$ is positive, we use a proof similar to that given in [22, p. 308] for the complete Cox–Ingersoll–Ross process, with corresponding modifications. Note that the coefficients $g(x):=\sigma \sqrt{x\wedge C}$ and $f(x):=b-x\wedge C$ of (3) are continuous and ${g}^{2}(x)>0$ for $x\in (0,\infty )$. Fix α and β such that $0<\alpha <x_{0}<\beta $. Due to the nonsingularity of g on $[\alpha ,\beta ]$, there exists a unique solution $F(x)$ of the ordinary differential equation
\[f(x){F^{\prime }}(x)+\frac{1}{2}{g}^{2}(x){F^{\prime\prime }}(x)=-1,\hspace{1em}\alpha <x<\beta ,\]
with boundary conditions $F(\alpha )=F(\beta )=0$; this solution is nonnegative, which follows from its representation through a nonnegative Green function given in [21, p. 343]. Define the stopping times
\[\tau _{\alpha }=\inf \big\{t\ge 0:{X_{t}^{C}}\le \alpha \big\}\hspace{1em}\text{and}\hspace{1em}\tau _{\beta }=\inf \big\{t\ge 0:{X_{t}^{C}}\ge \beta \big\}.\]
By the Itô formula, for any $t>0$,
(4)
\[\operatorname{\mathsf{E}}F\big({X}^{C}(t\wedge \tau _{\alpha }\wedge \tau _{\beta })\big)=F(x_{0})-\operatorname{\mathsf{E}}(t\wedge \tau _{\alpha }\wedge \tau _{\beta }).\]
This formula and the nonnegativity of F imply that
\[\operatorname{\mathsf{E}}(t\wedge \tau _{\alpha }\wedge \tau _{\beta })\le F(x_{0})\]
and, as $t\to \infty $,
\[\operatorname{\mathsf{E}}(\tau _{\alpha }\wedge \tau _{\beta })\le F(x_{0})<\infty .\]
This means that ${X}^{C}$ exits from every compact interval $[\alpha ,\beta ]\subset (0,\infty )$ in finite time. It follows from the boundary conditions and the equality $\operatorname{\mathsf{P}}(\tau _{\alpha }\wedge \tau _{\beta }<\infty )=1$ that $\lim _{t\to \infty }\operatorname{\mathsf{E}}F({X}^{C}(t\wedge \tau _{\alpha }\wedge \tau _{\beta }))=0$, and then from (4) we have
\[\operatorname{\mathsf{E}}(\tau _{\alpha }\wedge \tau _{\beta })=F(x_{0}).\]
Let us now define the function
\[V(x)=\underset{1}{\overset{x}{\int }}\exp \bigg\{-\underset{1}{\overset{y}{\int }}\frac{2f(z)}{{g}^{2}(z)}dz\bigg\}dy,\hspace{1em}x\in (0,\infty ),\]
which has a continuous strictly positive derivative ${V^{\prime }}(x)$; the second derivative ${V^{\prime\prime }}(x)$ exists and satisfies ${V^{\prime\prime }}(x)=-\frac{2f(x)}{{g}^{2}(x)}{V^{\prime }}(x)$. The Itô formula shows that, for any $t>0$,
\[V\big({X}^{C}(t\wedge \tau _{\alpha }\wedge \tau _{\beta })\big)=V(x_{0})+\underset{0}{\overset{t\wedge \tau _{\alpha }\wedge \tau _{\beta }}{\int }}{V^{\prime }}\big({X_{u}^{C}}\big)g\big({X_{u}^{C}}\big)dW(u)\]
and
\[\operatorname{\mathsf{E}}V\big({X}^{C}(t\wedge \tau _{\alpha }\wedge \tau _{\beta })\big)=V(x_{0}).\]
Taking the limit as $t\to \infty $, we get
\[V(x_{0})=\operatorname{\mathsf{E}}V\big({X}^{C}(\tau _{\alpha }\wedge \tau _{\beta })\big)=V(\alpha )\operatorname{\mathsf{P}}(\tau _{\alpha }<\tau _{\beta })+V(\beta )\operatorname{\mathsf{P}}(\tau _{\beta }<\tau _{\alpha }),\]
and hence
(5)
\[\operatorname{\mathsf{P}}(\tau _{\alpha }<\tau _{\beta })=\frac{V(\beta )-V(x_{0})}{V(\beta )-V(\alpha )}\hspace{1em}\text{and}\hspace{1em}\operatorname{\mathsf{P}}(\tau _{\beta }<\tau _{\alpha })=\frac{V(x_{0})-V(\alpha )}{V(\beta )-V(\alpha )}.\]
Consider the integral
\[\begin{array}{r@{\hskip0pt}l}\displaystyle V(x)& \displaystyle =\underset{1}{\overset{x}{\int }}\exp \bigg\{-\underset{1}{\overset{y}{\int }}\frac{2(b-z\wedge C)}{{\sigma }^{2}(z\wedge C)}dz\bigg\}dy.\end{array}\]
First, consider the case $x<1$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle V(x)& \displaystyle =\underset{1}{\overset{x}{\int }}\exp \bigg\{-\underset{1}{\overset{y}{\int }}\frac{2(b-z)}{{\sigma }^{2}z}dz\bigg\}dy=\underset{1}{\overset{x}{\int }}{y}^{-\frac{2b}{{\sigma }^{2}}}\exp \bigg\{\frac{2(y-1)}{{\sigma }^{2}}\bigg\}dy,\end{array}\]
and if ${\sigma }^{2}\le 2b$, then
\[\underset{x\downarrow 0}{\lim }V(x)=-\infty .\]
Now let x increase and tend to infinity. Denote $C_{1}={\int _{1}^{C}}\exp \{\frac{2(y-1)}{{\sigma }^{2}}\}{y}^{-\frac{2b}{{\sigma }^{2}}}dy$. Then, for $x>C$,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle V(x)& \displaystyle =\underset{1}{\overset{C}{\int }}\exp \bigg\{-\underset{1}{\overset{y}{\int }}\frac{2(b-z)}{{\sigma }^{2}z}dz\bigg\}dy\\{} & \displaystyle \hspace{1em}+\underset{C}{\overset{x}{\int }}\exp \bigg\{-\underset{1}{\overset{C}{\int }}\frac{2(b-z)}{{\sigma }^{2}z}dz-\underset{C}{\overset{y}{\int }}\frac{2(b-C)}{{\sigma }^{2}C}dz\bigg\}dy\\{} & \displaystyle =\underset{1}{\overset{C}{\int }}\exp \bigg\{\frac{2(y-1)}{{\sigma }^{2}}\bigg\}{y}^{-\frac{2b}{{\sigma }^{2}}}dy+{C}^{-\frac{2b}{{\sigma }^{2}}}\exp \bigg\{\frac{2(C-1)}{{\sigma }^{2}}\bigg\}\\{} & \displaystyle \hspace{1em}\times \underset{C}{\overset{x}{\int }}\exp \bigg\{-\frac{2(b-C)}{{\sigma }^{2}C}(y-C)\bigg\}dy\\{} & \displaystyle =C_{1}+{C}^{-\frac{2b}{{\sigma }^{2}}+1}\frac{{\sigma }^{2}}{2(C-b)}\exp \bigg\{\frac{2(C-1)}{{\sigma }^{2}}\bigg\}\\{} & \displaystyle \hspace{1em}\times \bigg(\exp \bigg\{\frac{2(C-b)}{{\sigma }^{2}C}(x-C)\bigg\}-1\bigg),\end{array}\]
and thus $\lim _{x↑\infty }V(x)=\infty $. Define
\[\tau _{0}=\underset{\alpha \downarrow 0}{\lim }\tau _{\alpha }\hspace{1em}\text{and}\hspace{1em}\tau _{\infty }=\underset{\beta ↑\infty }{\lim }\tau _{\beta }\]
and put $\tau =\tau _{0}\wedge \tau _{\infty }$. From (5) we get
\[\operatorname{\mathsf{P}}\Big(\underset{0\le t<\tau }{\inf }{X_{t}^{C}}\le \alpha \Big)\ge \operatorname{\mathsf{P}}(\tau _{\alpha }<\tau _{\beta })=\frac{1-V(x_{0})/V(\beta )}{1-V(\alpha )/V(\beta )},\]
and, as $\beta ↑\infty $, we get that, for any $\alpha >0$, $\operatorname{\mathsf{P}}(\inf _{0\le t<\tau }{X_{t}^{C}}\le \alpha )=1$, whence, finally, $\operatorname{\mathsf{P}}(\inf _{0\le t<\tau }{X_{t}^{C}}=0)=1$. Similarly, $\operatorname{\mathsf{P}}(\sup _{0\le t<\tau }{X_{t}^{C}}=\infty )=1$. Assume now that $\operatorname{\mathsf{P}}(\tau <\infty )>0$. Then
\[\operatorname{\mathsf{P}}\Big(\underset{t\to \tau }{\lim }{X_{t}^{C}}\hspace{2.5pt}\text{exists and equals}\hspace{2.5pt}0\hspace{2.5pt}\text{or}\hspace{2.5pt}\infty \Big)>0.\]
So the events $\{\inf _{0\le t<\tau }{X_{t}^{C}}=0\}$ and $\{\sup _{0\le t<\tau }{X_{t}^{C}}=\infty \}$ cannot both have probability 1. This contradiction shows that $\operatorname{\mathsf{P}}(\tau <\infty )=0$, whence
\[\operatorname{\mathsf{P}}(\tau =\infty )=\operatorname{\mathsf{P}}\Big(\underset{0\le t<\tau }{\inf }{X_{t}^{C}}=0\Big)=\operatorname{\mathsf{P}}\Big(\underset{0\le t<\tau }{\sup }{X_{t}^{C}}=\infty \Big)=1\]
if $2b\ge {\sigma }^{2}$.  □

Now, let $T>0$ be fixed.
Proof.
Obviously, it suffices to show that
\[\operatorname{\mathsf{P}}\Big\{\underset{t\in [0,T]}{\sup }|X_{t}|\ge C\Big\}\to 0\hspace{1em}\text{as}\hspace{0.2778em}C\to \infty .\]
It is well known (see, e.g., [29]) that $\frac{4}{{\sigma }^{2}(1-{e}^{-t})}X_{t}$ follows a noncentral ${\chi }^{2}$ distribution with (in general) noninteger number of degrees of freedom $\frac{4b}{{\sigma }^{2}}$ and noncentrality parameter $\frac{4}{{\sigma }^{2}(1-{e}^{-t})}x_{0}{e}^{-t}$. The first and second moments for any $t\ge 0$ are given by
\[\operatorname{\mathsf{E}}X_{t}=x_{0}{e}^{-t}+b\big(1-{e}^{-t}\big)\]
and
\[\operatorname{\mathsf{E}}{(X_{t})}^{2}=x_{0}\big(2b+{\sigma }^{2}\big){e}^{-t}+\big({x_{0}^{2}}-x_{0}{\sigma }^{2}-2x_{0}b\big){e}^{-2t}+\bigg(\frac{b{\sigma }^{2}}{2}+{b}^{2}\bigg){\big(1-{e}^{-t}\big)}^{2}.\]
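The noncentral ${\chi }^{2}$ representation is easy to check by simulation. The sketch below is our own illustration (names and parameter choices are ours): it samples $X_t$ exactly via NumPy's noncentral chi-square generator and compares the sample mean and variance with the standard closed-form moments of the CIR process with unit mean-reversion speed.

```python
import numpy as np

def cir_moments_via_ncx2(x0, b, sigma, t, n_samples=200_000, seed=0):
    """Sample X_t exactly through its noncentral chi-square representation:
    4 X_t / (sigma^2 (1 - e^{-t})) ~ noncentral chi^2 with
    df = 4b / sigma^2 and noncentrality 4 x0 e^{-t} / (sigma^2 (1 - e^{-t}))."""
    rng = np.random.default_rng(seed)
    scale = sigma**2 * (1.0 - np.exp(-t)) / 4.0
    df = 4.0 * b / sigma**2
    nonc = x0 * np.exp(-t) / scale
    x_t = scale * rng.noncentral_chisquare(df, nonc, size=n_samples)
    return x_t.mean(), x_t.var()

# Closed-form moments of the CIR process with unit mean-reversion speed:
#   E X_t   = x0 e^{-t} + b (1 - e^{-t})
#   Var X_t = x0 sigma^2 (e^{-t} - e^{-2t}) + (b sigma^2 / 2)(1 - e^{-t})^2
```

The sample moments agree with the closed forms up to Monte Carlo error, which also cross-checks the second-moment formula displayed above.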
Therefore, there exists a constant $B>0$ such that $\operatorname{\mathsf{E}}{X_{t}^{2}}\le B$, whence $\operatorname{\mathsf{E}}X_{t}\le {B}^{1/2}$, $0\le t\le T$.

Using the Doob inequality, we estimate
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \operatorname{\mathsf{P}}\Big\{\underset{t\in [0,T]}{\sup }|X_{t}|\ge C\Big\}\le \frac{1}{{C}^{2}}\operatorname{\mathsf{E}}\underset{t\in [0,T]}{\sup }{X_{t}^{2}}\\{} & \displaystyle \hspace{1em}=\frac{1}{{C}^{2}}\operatorname{\mathsf{E}}\underset{t\in [0,T]}{\sup }\bigg\{{\bigg(X_{0}+\underset{0}{\overset{t}{\int }}(b-X_{s})ds+\sigma \underset{0}{\overset{t}{\int }}\sqrt{X_{s}}dW_{s}\bigg)}^{2}\bigg\}\\{} & \displaystyle \hspace{1em}\le \frac{3}{{C}^{2}}\bigg\{{X_{0}^{2}}+T\operatorname{\mathsf{E}}{\bigg(\underset{0}{\overset{T}{\int }}|b-X_{s}|ds\bigg)}^{2}+{\sigma }^{2}\operatorname{\mathsf{E}}\underset{t\in [0,T]}{\sup }{\bigg(\underset{0}{\overset{t}{\int }}\sqrt{X_{s}}dW_{s}\bigg)}^{2}\bigg\}\\{} & \displaystyle \hspace{1em}\le \frac{3}{{C}^{2}}\bigg\{{X_{0}^{2}}+T\operatorname{\mathsf{E}}\underset{0}{\overset{T}{\int }}{(b-X_{s})}^{2}ds+4{\sigma }^{2}\operatorname{\mathsf{E}}\underset{0}{\overset{T}{\int }}X_{s}ds\bigg\}\le \frac{B_{1}}{{C}^{2}}\end{array}\]
for some constant $B_{1}>0$. The lemma is proved.  □
3 Discrete approximation schemes for complete and “truncated” Cox–Ingersoll–Ross processes
Consider the following discrete approximation scheme for the process X. Assume that we have a sequence of probability spaces $({\varOmega }^{(n)},{\mathcal{F}}^{(n)},{\operatorname{\mathsf{P}}}^{(n)})$, $n\ge 1$. Let $\{{q_{k}^{(n)}},n\ge 1$, $0\le k\le n\}$ be a sequence of symmetric iid random variables defined on the corresponding probability space and taking values $\pm \sqrt{\frac{T}{n}}$, that is, ${\operatorname{\mathsf{P}}}^{(n)}({q_{k}^{(n)}}=\pm \sqrt{\frac{T}{n}})=\frac{1}{2}$. Let further $n>T$. We construct discrete approximation schemes for the stochastic processes X and ${X}^{C}$ as follows. Consider the following approximation for the complete process:
(6)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {X_{0}^{(n)}}=x_{0}>0,\hspace{2em}{X_{k}^{(n)}}={X_{k-1}^{(n)}}+\frac{(b-{X_{k-1}^{(n)}})T}{n}+\sigma {q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n)}}},\\{} & \displaystyle {Q_{k}^{(n)}}:={X_{k}^{(n)}}-{X_{k-1}^{(n)}}=\frac{(b-{X_{k-1}^{(n)}})T}{n}+\sigma {q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n)}}},\hspace{1em}1\le k\le n,\end{array}\]
and the corresponding approximations for ${X}^{C}$ given by
(7)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {X_{0}^{(n,C)}}& \displaystyle =x_{0}>0,\\{} \displaystyle {X_{k}^{(n,C)}}& \displaystyle ={X_{k-1}^{(n,C)}}+\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}+\sigma {q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n,C)}}\wedge C},\\{} \displaystyle {Q_{k}^{(n,C)}}& \displaystyle :={X_{k}^{(n,C)}}-{X_{k-1}^{(n,C)}}\\{} & \displaystyle =\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}+\sigma {q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n,C)}}\wedge C},\hspace{1em}1\le k\le n.\end{array}\]
The following lemma confirms the correctness of the construction of these approximations.
Proof.
$1)$ We apply the method of mathematical induction. When $k=1$,
\[{X_{1}^{(n)}}=x_{0}\bigg(1-\frac{T}{n}\bigg)+\frac{bT}{n}+\sigma {q_{1}^{(n)}}\sqrt{x_{0}}.\]
Let us show that
(9)
\[x_{0}\bigg(1-\frac{T}{n}\bigg)+\frac{bT}{n}-\sigma \sqrt{\frac{T}{n}}\sqrt{x_{0}}>0.\]
We denote $\alpha :=\sqrt{x_{0}}$ and reduce (9) to the quadratic inequality
\[\bigg(1-\frac{T}{n}\bigg){\alpha }^{2}-\sigma \sqrt{\frac{T}{n}}\alpha +\frac{bT}{n}>0,\]
which obviously holds because the discriminant $D=\frac{{\sigma }^{2}T}{n}-\frac{4bT}{n}(1-\frac{T}{n})<0$ when ${\sigma }^{2}\le 2b$ and $n>2T$. So, ${X_{1}^{(n)}}>0$. Assume now that ${X_{k}^{(n)}}>0$. It can be shown by applying the same transformation that when ${\sigma }^{2}\le 2b$ and $n>2T$, we also have ${X_{k+1}^{(n)}}>0$.
It can be proved similarly that the values given by (7) are positive.
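The induction argument can be illustrated numerically: with ${\sigma }^{2}\le 2b$ and $n>2T$, every realization of scheme (6) stays strictly positive, whatever signs the $q_{k}$ take. A minimal sketch of our own (names are ours, not from the paper):

```python
import numpy as np

def discrete_cir_scheme(x0, b, sigma, T, n, rng):
    """One path of scheme (6):
    X_k = X_{k-1} + (b - X_{k-1}) T/n + sigma * q_k * sqrt(X_{k-1}),
    where the q_k are iid symmetric with values +-sqrt(T/n)."""
    q = rng.choice([-1.0, 1.0], size=n) * np.sqrt(T / n)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        x[k] = x[k - 1] + (b - x[k - 1]) * T / n + sigma * q[k - 1] * np.sqrt(x[k - 1])
    return x

# Positivity check: sigma^2 = 1 <= 2b = 2 and n = 50 > 2T = 2, so every
# value must be strictly positive, even starting close to zero.
rng = np.random.default_rng(42)
for _ in range(100):
    path = discrete_cir_scheme(x0=0.05, b=1.0, sigma=1.0, T=1.0, n=50, rng=rng)
    assert (path > 0).all()
```

The assertion never fails because the quadratic inequality above holds for every value of $\sqrt{{X_{k-1}^{(n)}}}$ once its discriminant is negative.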
2) ${X_{k}^{(n)}}$ can be represented as
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {X_{k}^{(n)}}& \displaystyle =x_{0}+\sum \limits_{i=1}^{k}\big(b-{X_{i-1}^{(n)}}\big)\frac{T}{n}+\sigma \sum \limits_{i=1}^{k}{q_{i}^{(n)}}\sqrt{{X_{i-1}^{(n)}}}\\{} & \displaystyle ={X_{k-1}^{(n)}}+\frac{(b-{X_{k-1}^{(n)}})T}{n}+\sigma {q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n)}}}.\end{array}\]
Compute
(11)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \operatorname{\mathsf{E}}{\big({X_{i}^{(n)}}\big)}^{2}& \displaystyle =\operatorname{\mathsf{E}}{\bigg({X_{i-1}^{(n)}}\bigg(1-\frac{T}{n}\bigg)+\frac{bT}{n}+\sigma \sqrt{{X_{i-1}^{(n)}}}{q_{i}^{(n)}}\bigg)}^{2}\\{} & \displaystyle =\operatorname{\mathsf{E}}{\bigg({X_{i-1}^{(n)}}\bigg(1-\frac{T}{n}\bigg)+\frac{bT}{n}\bigg)}^{2}+\frac{{\sigma }^{2}T}{n}\operatorname{\mathsf{E}}{X_{i-1}^{(n)}}\\{} & \displaystyle ={\bigg(\frac{bT}{n}\bigg)}^{2}+\bigg[\frac{{\sigma }^{2}T}{n}+\frac{2bT}{n}\bigg(1-\frac{T}{n}\bigg)\bigg]\operatorname{\mathsf{E}}{X_{i-1}^{(n)}}\\{} & \displaystyle \hspace{1em}+{\bigg(1-\frac{T}{n}\bigg)}^{2}\operatorname{\mathsf{E}}{\big({X_{i-1}^{(n)}}\big)}^{2}.\end{array}\]
Assume that $\operatorname{\mathsf{E}}{({X_{j}^{(n)}})}^{2}\le {\beta }^{2}$, $1\le j\le i-1$, for some $\beta >0$. Then $\operatorname{\mathsf{E}}{X_{j}^{(n)}}\le \beta $, $1\le j\le i-1$. We get the quadratic inequality of the form
\[{\bigg(1-\frac{T}{n}\bigg)}^{2}{\beta }^{2}+\bigg[\frac{{\sigma }^{2}T}{n}+\frac{2bT}{n}\bigg(1-\frac{T}{n}\bigg)\bigg]\beta +{\bigg(\frac{bT}{n}\bigg)}^{2}<{\beta }^{2}\]
or, equivalently,
\[\bigg({\bigg(1-\frac{T}{n}\bigg)}^{2}-1\bigg){\beta }^{2}+\bigg[\frac{{\sigma }^{2}T}{n}+\frac{2bT}{n}\bigg(1-\frac{T}{n}\bigg)\bigg]\beta +{\bigg(\frac{bT}{n}\bigg)}^{2}<0,\]
which obviously holds when $\beta >\frac{2}{3}\big({\sigma }^{2}+2b+\sqrt{{\sigma }^{4}+4b{\sigma }^{2}+8{b}^{2}}\big)$. So, for all $1\le i\le n$, $\operatorname{\mathsf{E}}{X_{i}^{(n)}}\le \frac{2}{3}\big({\sigma }^{2}+2b+\sqrt{{\sigma }^{4}+4b{\sigma }^{2}+8{b}^{2}}\big)\vee x_{0}=:\gamma $.

Using the Burkholder inequality, we estimate
\[\begin{array}{r@{\hskip0pt}l}\displaystyle 0\le \operatorname{\mathsf{E}}\underset{0\le k\le n}{\sup }{\big({X_{k}^{(n)}}\big)}^{2}& \displaystyle \le 2{(x_{0}+bT)}^{2}+2{\sigma }^{2}\operatorname{\mathsf{E}}\underset{0\le k\le n}{\sup }{\Bigg(\sum \limits_{i=1}^{n}{q_{i}^{(n)}}\sqrt{{X_{i-1}^{(n)}}}\Bigg)}^{2}\\{} & \displaystyle \le 2{(x_{0}+bT)}^{2}+8{\sigma }^{2}\operatorname{\mathsf{E}}{\Bigg(\sum \limits_{i=1}^{n}{q_{i}^{(n)}}\sqrt{{X_{i-1}^{(n)}}}\Bigg)}^{2}\\{} & \displaystyle \le 2{(x_{0}+bT)}^{2}+8{\sigma }^{2}\gamma T.\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \operatorname{\mathsf{P}}\big\{\exists k,0\le k\le n:{X_{k}^{(n)}}\ne {X_{k}^{(n,C)}}\big\}& \displaystyle =\operatorname{\mathsf{P}}\Big\{\underset{0\le k\le n}{\sup }{X_{k}^{(n)}}\ge C\Big\}\le {C}^{-2}\operatorname{\mathsf{E}}\underset{0\le k\le n}{\sup }{\big({X_{k}^{(n)}}\big)}^{2}\\{} & \displaystyle \le 2{C}^{-2}{(x_{0}+bT)}^{2}+8{\sigma }^{2}{C}^{-2}\gamma T,\end{array}\]
whence the proof follows.  □

Consider the sequences of step processes corresponding to these schemes:
\[{X_{t}^{(n)}}={X_{[\frac{nt}{T}]}^{(n)}},\hspace{1em}t\in [0,T],\]
and
\[{X_{t}^{(n,C)}}={X_{[\frac{nt}{T}]}^{(n,C)}},\hspace{1em}t\in [0,T].\]
Thus, the trajectories of the processes ${X}^{(n)}$ and ${X}^{(n,C)}$ have jumps at the points $kT/n\hspace{0.2778em},k=0,\dots ,n$, and are constant on the interior intervals. Consider the filtrations ${\mathcal{F}_{k}^{n}}=\sigma ({X_{t}^{(n)}},\hspace{0.1667em}t\le \frac{kT}{n})$. The processes ${X}^{(n,C)}$ are adapted with respect to them. Therefore, we can consider the same filtrations for all discrete approximation schemes. So, we can identify ${\mathcal{F}_{t}^{n}}$ with ${\mathcal{F}_{k}^{n}}$ for $\frac{kT}{n}\le t<\frac{(k+1)T}{n}$.
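Concretely, such step paths can be produced from the discrete values as follows (a small helper of our own, not code from the paper):

```python
import numpy as np

def step_process_path(values, T, t_grid):
    """Evaluate the step process X^{(n)}_t = X^{(n)}_{[nt/T]} on a grid of times:
    the path jumps at the points kT/n and is constant in between."""
    values = np.asarray(values)          # values[k] = X_k^{(n)}, k = 0..n
    n = len(values) - 1
    idx = np.minimum((np.asarray(t_grid) * n / T).astype(int), n)
    return values[idx]
```

For example, with $n=2$, $T=1$, and values $(1,2,3)$, the path equals 1 on $[0,1/2)$, 2 on $[1/2,1)$, and 3 at $t=1$.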
Denote by $\mathbb{Q}$ and ${\mathbb{Q}}^{n},n\ge 1$, the measures corresponding to the processes X and ${X}^{(n)},n\ge 1$, respectively, and by ${\mathbb{Q}}^{C}$ and ${\mathbb{Q}}^{n,C},n\ge 1$, the measures corresponding to the processes ${X}^{C}$ and ${X}^{(n,C)},n\ge 1$, respectively. Denote by $\stackrel{W}{\longrightarrow }$ the weak convergence of measures corresponding to stochastic processes. We apply Theorem 3.2 from [23] to prove the weak convergence of measures ${\mathbb{Q}}^{n,C}$ to the measure ${\mathbb{Q}}^{C}$. This theorem can be formulated as follows.
Theorem 3.1.
Assume that the following conditions are satisfied:
-
(i) $\sup _{1\le k\le n}|{Q_{k}^{(n,C)}}|\stackrel{\operatorname{\mathsf{P}}}{\longrightarrow }0,\hspace{0.2778em}n\to \infty $;
-
(ii) For any $\epsilon >0$ and $a\in (0,1]$,\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{2em}-\underset{0}{\overset{t}{\int }}\big(b-{X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)=0;\end{array}\]
-
(iii) For any $\epsilon >0$ and $a\in (0,1]$,\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\Big(\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{2em}-{\big(\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\big)}^{2}\Big)-{\sigma }^{2}\underset{0}{\overset{t}{\int }}\big({X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)=0;\end{array}\]
Then ${\mathbb{Q}}^{n,C}\stackrel{W}{\longrightarrow }{\mathbb{Q}}^{C}$.
Using Theorem 3.1, we prove the following result.
Theorem 3.2.
${\mathbb{Q}}^{n,C}\stackrel{W}{\longrightarrow }{\mathbb{Q}}^{C}$ as $n\to \infty $.
Proof.
According to Theorem 3.1, we need to check conditions (i)–(iii). Relation (7) implies that $\sup _{0\le k\le n}|{Q_{k}^{(n,C)}}|\le \frac{(b+C)T}{n}+\sigma \sqrt{\frac{TC}{n}}$. Hence, there exists a constant $C_{2}>0$ such that $\sup _{0\le k\le n}|{Q_{k}^{(n,C)}}|\le \frac{C_{2}}{\sqrt{n}}$. This means that condition (i) is satisfied.
Furthermore, in order to establish (ii), we consider any fixed $a>0$ and $n\ge 1$ such that $\frac{C_{2}}{\sqrt{n}}\le a$, that is, $n\ge {(\frac{C_{2}}{a})}^{2}$. For such n,
(12)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)& \displaystyle =\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle =\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}+\sigma \operatorname{\mathsf{E}}{q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n,C)}}\wedge C}\\{} & \displaystyle =\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}.\end{array}\]
For any $\epsilon >0$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)-\underset{0}{\overset{t}{\int }}\big(b-\big({X_{s}^{(n,C)}}\wedge C\big)\big)ds\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}=\underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}-\sum \limits_{0\le k\le [\frac{nt}{T}]-1}\big(b-\big({X_{k}^{(n,C)}}\wedge C\big)\big)\frac{T}{n}\\{} & \displaystyle \hspace{2em}-\big(b-\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}=\underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\big(b-\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg|\ge \epsilon \bigg)=0,\end{array}\]
and hence condition (ii) is satisfied. Now let us check condition (iii). We have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}|{\mathcal{F}_{k-1}^{n}}\big)=\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{1em}={\bigg(\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\bigg)}^{2}+2\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\sigma \operatorname{\mathsf{E}}{q_{k}^{(n)}}\sqrt{{X_{k-1}^{(n,C)}}\wedge C}\\{} & \displaystyle \hspace{2em}+{\sigma }^{2}\operatorname{\mathsf{E}}{\big({q_{k}^{(n)}}\big)}^{2}\big({X_{k-1}^{(n,C)}}\wedge C\big)\\{} & \displaystyle \hspace{1em}={\bigg(\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\bigg)}^{2}+\hspace{0.1667em}{\sigma }^{2}\frac{T}{n}\big({X_{k-1}^{(n,C)}}\wedge C\big).\end{array}\]
Therefore, for any $\epsilon >0$,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\Big(\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{2em}-{\big(\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\big)}^{2}\Big)-{\sigma }^{2}\underset{0}{\overset{t}{\int }}\big({X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}=\underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\bigg({\bigg(\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\bigg)}^{2}+{\sigma }^{2}\frac{T}{n}\big({X_{k-1}^{(n,C)}}\wedge C\big)\\{} & \displaystyle \hspace{2em}-{\bigg(\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\bigg)}^{2}\bigg)-\sum \limits_{0\le k\le [\frac{nt}{T}]-1}\bigg({\sigma }^{2}\frac{T}{n}\big({X_{k}^{(n,C)}}\wedge C\big)\bigg)\\{} & \displaystyle \hspace{2em}-{\sigma }^{2}\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}=\underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg({\sigma }^{2}\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg)\ge \epsilon \bigg)=0.\end{array}\]
The theorem is proved.  □
Theorem 3.3.
${\mathbb{Q}}^{n}\stackrel{W}{\longrightarrow }\mathbb{Q}$ as $n\to \infty $.
Proof.
According to Theorem A.1 and Theorem 3.2, it suffices to prove that
\[\underset{C\to \infty }{\lim }\overline{\underset{n\to \infty }{\lim }}\operatorname{\mathsf{P}}\Big\{\underset{0\le t\le T}{\sup }\big|{X_{t}^{(n)}}-{X_{t}^{(n,C)}}\big|\ge \epsilon \Big\}=0.\]
However, due to Remark 3.1,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{C\to \infty }{\lim }\overline{\underset{n\to \infty }{\lim }}\operatorname{\mathsf{P}}\Big\{\underset{0\le t\le T}{\sup }\big|{X_{t}^{(n)}}-{X_{t}^{(n,C)}}\big|\ge \epsilon \Big\}\\{} & \displaystyle \hspace{1em}\le \underset{C\to \infty }{\lim }\overline{\underset{n\to \infty }{\lim }}\operatorname{\mathsf{P}}\big\{\exists t,t\in [0,T]:{X_{t}^{(n)}}\ne {X_{t}^{(n,C)}}\big\}=0.\end{array}\]
□
4 Multiplicative scheme for Cox–Ingersoll–Ross process
In this section, we construct a multiplicative discrete approximation scheme for the process ${e}^{X_{t}}$, $t\in [0,T]$, where $X_{t}$ is the CIR process given by (2). We construct the following multiplicative process based on the discrete approximation scheme (6)–(7). We introduce limit and prelimit processes as follows:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {S_{t}^{n,C}}=\exp \{x_{0}\}\prod \limits_{1\le k\le [\frac{tn}{T}]}\big(1+{Q_{k}^{(n,C)}}\big),\hspace{1em}t\in \mathbb{T},\hspace{2em}\\{} & \displaystyle {S_{t}^{C}}=\exp \bigg\{{X_{t}^{C}}-\frac{{\sigma }^{2}}{2}\underset{0}{\overset{t}{\int }}\big({X_{s}^{C}}\wedge C\big)ds\bigg\},\hspace{1em}t\in \mathbb{T},\hspace{2em}\\{} & \displaystyle {S_{t}^{n}}=\exp \{x_{0}\}\prod \limits_{1\le k\le [\frac{tn}{T}]}\big(1+{Q_{k}^{(n)}}\big),\hspace{1em}t\in \mathbb{T},\hspace{2em}\\{} & \displaystyle S_{t}=\exp \bigg\{X_{t}-\frac{{\sigma }^{2}}{2}\underset{0}{\overset{t}{\int }}X_{s}ds\bigg\},\hspace{1em}t\in \mathbb{T},\hspace{2em}\\{} & \displaystyle {\widetilde{S}_{t}^{n}}=\exp \{x_{0}\}\prod \limits_{1\le k\le [\frac{tn}{T}]}\bigg[\big(1+{Q_{k}^{(n)}}\big)\exp \bigg\{\frac{{\sigma }^{2}T}{2n}{X_{k}^{(n)}}\bigg\}\bigg],\hspace{1em}t\in \mathbb{T},\hspace{2em}\\{} \displaystyle \text{and}\hspace{2em}\hspace{2em}& \hspace{2em}\\{} & \displaystyle \widetilde{S}_{t}=\exp \{X_{t}\},\hspace{1em}t\in \mathbb{T}.\hspace{2em}\end{array}\]
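The relation between the multiplicative and additive schemes can be sanity-checked numerically: since $\log (1+Q)=Q-{Q}^{2}/2+O({Q}^{3})$ and $\sum {({Q_{k}^{(n)}})}^{2}\approx {\sigma }^{2}\frac{T}{n}\sum {X_{k-1}^{(n)}}$, the logarithm of the product $\prod (1+{Q_{k}^{(n)}})$ tracks ${X_{T}^{(n)}}-\frac{{\sigma }^{2}}{2}\frac{T}{n}\sum {X_{k-1}^{(n)}}$, the discrete analogue of $X_{T}-\frac{{\sigma }^{2}}{2}{\int _{0}^{T}}X_{s}ds$. The sketch below is our own illustration, not code from the paper:

```python
import numpy as np

def log_multiplicative_vs_additive(x0, b, sigma, T, n, rng):
    """Return (log S^n_T, X^n_T - correction), where S^n_T = e^{x0} prod(1 + Q_k),
    X^n is scheme (6), and correction = (sigma^2 / 2) (T/n) sum_k X_{k-1}.
    Requires n large enough that every 1 + Q_k stays positive."""
    q = rng.choice([-1.0, 1.0], size=n) * np.sqrt(T / n)
    x = x0
    log_s = x0          # log of the multiplicative scheme, starting from exp{x0}
    correction = 0.0    # discrete analogue of (sigma^2 / 2) * int_0^T X_s ds
    for k in range(n):
        Q = (b - x) * T / n + sigma * q[k] * np.sqrt(x)
        log_s += np.log1p(Q)
        correction += 0.5 * sigma**2 * x * T / n
        x += Q
    return log_s, x - correction
```

For large n the two quantities differ by a term of order ${n}^{-1/2}$ along a single path, consistent with the identification of the weak limit $S_{t}$ above.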
Denote by ${\mathbb{G}}^{C}$, ${\mathbb{G}}^{n,C}$, $\mathbb{G}$, ${\mathbb{G}}^{n}$, $\widetilde{\mathbb{G}}$, and ${\widetilde{\mathbb{G}}}^{n}$, $n\ge 1$, the measures corresponding to the processes ${S_{t}^{C}}$, ${S_{t}^{n,C}}$, $S_{t}$, ${S_{t}^{n}}$, $\widetilde{S}_{t}$, and ${\widetilde{S}_{t}^{n}}$, $n\ge 1$, respectively. We apply Theorem 3.3 from [23] to prove the weak convergence of measures. This theorem can be formulated as follows.
Theorem 4.1.
Let the following conditions hold:
-
(i) $\sup _{1\le k\le n}|{Q_{k}^{(n,C)}}|\stackrel{\operatorname{\mathsf{P}}}{\longrightarrow }0,\hspace{0.2778em}n\to \infty $;
-
(ii) For any $a\in (0,1]$,
-
(iii) For any $a\in (0,1]$,
-
(iv) For any $\epsilon >0$ and $a\in (0,1]$,\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\operatorname{\mathsf{E}}\big({Q_{k}^{(n,C)}}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{2em}-\underset{0}{\overset{t}{\int }}\big(b-{X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)=0;\end{array}\]
-
(v) For any $\epsilon >0$ and $a\in (0,1]$,\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{2em}-{\sigma }^{2}\underset{0}{\overset{t}{\int }}\big({X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)=0.\end{array}\]
Then ${\mathbb{G}}^{n,C}\stackrel{W}{\longrightarrow }{\mathbb{G}}^{C}$.
We prove the following result using Theorem 4.1.
Theorem 4.2.
${\mathbb{G}}^{n,C}\stackrel{W}{\longrightarrow }{\mathbb{G}}^{C}$ as $n\to \infty $.
Proof.
According to Theorem 4.1, we need to check conditions (i)–(v). It was established in the proof of Theorem 3.2 that conditions (i) and (iv) are satisfied. Let us show that condition (ii) holds. It was also established in the proof of Theorem 3.2 that $\sup _{0\le k\le n}|{Q_{k}^{(n,C)}}|\le \frac{C_{2}}{\sqrt{n}}$. So, for all $a\in (0,1]$, starting from some number n, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{1\le k\le n}\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)\\{} & \displaystyle \hspace{1em}=\sum \limits_{1\le k\le n}\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\big|{\mathcal{F}_{k-1}^{n}}\big)\le \sum \limits_{1\le k\le n}\frac{{C_{2}^{2}}}{n}\le {C_{2}^{2}},\end{array}\]
whence condition (ii) holds. Now, (12) implies that, for all $a\in (0,1]$, starting from some number n, we have
whence condition (iii) holds.

Let us check condition (v). For any $\epsilon >0$ and $a\in (0,1]$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\operatorname{\mathsf{E}}\big({\big({Q_{k}^{(n,C)}}\big)}^{2}\mathbb{I}_{|{Q_{k}^{(n,C)}}|\le a}\big|{\mathcal{F}_{k-1}^{n}}\big)-{\sigma }^{2}\underset{0}{\overset{t}{\int }}\big({X_{s}^{(n,C)}}\wedge C\big)ds\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}=\underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg|\sum \limits_{1\le k\le [\frac{nt}{T}]}\bigg({\bigg(\frac{(b-({X_{k-1}^{(n,C)}}\wedge C))T}{n}\bigg)}^{2}+{\sigma }^{2}\frac{T}{n}\big({X_{k-1}^{(n,C)}}\wedge C\big)\bigg)\\{} & \displaystyle \hspace{2em}-\sum \limits_{0\le k\le [\frac{nt}{T}]-1}\bigg({\sigma }^{2}\frac{T}{n}\big({X_{k}^{(n,C)}}\wedge C\big)\bigg)-{\sigma }^{2}\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg|\ge \epsilon \bigg)\\{} & \displaystyle \hspace{1em}\le \underset{n}{\lim }{\operatorname{\mathsf{P}}}^{n}\bigg(\underset{t\in \mathbb{T}}{\sup }\bigg(\frac{{(|b|+C)}^{2}Tt}{n}+{\sigma }^{2}\big({X_{[\frac{nt}{T}]}^{(n,C)}}\wedge C\big)\bigg(t-\frac{[\frac{nt}{T}]T}{n}\bigg)\bigg)\ge \epsilon \bigg)=0.\end{array}\]
The theorem is proved.  □

Proof.
The proof immediately follows from Theorem A.1, Theorem 4.2, and Remark 3.1. Indeed,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{C\to \infty }{\lim }\overline{\underset{n\to \infty }{\lim }}\operatorname{\mathsf{P}}\Big\{\underset{0\le t\le T}{\sup }\big|{X_{t}^{(n)}}-{X_{t}^{(n,C)}}\big|\ge \epsilon \Big\}\\{} & \displaystyle \hspace{1em}\le \underset{C\to \infty }{\lim }\overline{\underset{n\to \infty }{\lim }}\operatorname{\mathsf{P}}\big\{\exists t,t\in [0,T]:{X_{t}^{(n)}}\ne {X_{t}^{(n,C)}}\big\}=0.\end{array}\]
□