1 Introduction
Statistical inference for diffusion models has by now been studied thoroughly; see the books [4, 6–8, 10] and the references therein.
In this paper, we consider the homogeneous diffusion process given by the stochastic differential equation
\[dX_{t}=\theta a(X_{t})\hspace{0.1667em}dt+b(X_{t})\hspace{0.1667em}dW_{t},\]
where $W_{t}$ is a standard Wiener process, and θ is an unknown parameter.
The standard maximum likelihood estimator for the parameter θ constructed from the observations of X on the interval $[0,T]$ has the form (see, for instance, [7, Example 1.37] and [9])
\[\hat{\theta }_{T}=\frac{{\textstyle\int _{0}^{T}}\frac{a(X_{t})}{b{(X_{t})}^{2}}\hspace{0.1667em}dX_{t}}{{\textstyle\int _{0}^{T}}\frac{a{(X_{t})}^{2}}{b{(X_{t})}^{2}}\hspace{0.1667em}dt};\]
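This formula arises as the maximizer of the log-likelihood. Indeed, by the Girsanov theorem, the likelihood ratio of the observed path with respect to the driftless process ($\theta =0$) equals
\[L_{T}(\theta )=\exp \Bigg\{\theta {\int _{0}^{T}}\frac{a(X_{t})}{b{(X_{t})}^{2}}\hspace{0.1667em}dX_{t}-\frac{{\theta }^{2}}{2}{\int _{0}^{T}}\frac{a{(X_{t})}^{2}}{b{(X_{t})}^{2}}\hspace{0.1667em}dt\Bigg\},\]
which is quadratic in θ, so $\hat{\theta }_{T}$ is the unique root of $\frac{\partial }{\partial \theta }\log L_{T}(\theta )=0$ whenever ${\int _{0}^{T}}\frac{a{(X_{t})}^{2}}{b{(X_{t})}^{2}}\hspace{0.1667em}dt>0$.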
If the equation has a weak solution, the coefficient a is not identically zero, and the functions $\frac{1}{{b}^{2}}$, $\frac{{a}^{2}}{{b}^{2}}$, $\frac{{a}^{2}}{{b}^{4}}$ are locally integrable, then this estimator is strongly consistent [9, Thm. 3.3]. Moreover, if the model is ergodic, then this estimator is asymptotically normal [7, Ex. 1.37]. Note that in the nonergodic case the maximum likelihood estimator $\hat{\theta }_{T}$ may have different limit distributions; some examples can be found in [7, Sect. 3.5].

If the data are the observations of the trajectory $\{X_{t},t\ge 0\}$ at discrete time moments $t_{1},t_{2},\dots \hspace{0.1667em}$, we obtain a discrete-time version of the model. Parameter estimation in such models has been studied since the mid-1980s; see [2, 3, 11]. A review of this problem and many references can be found in [5] and [13]. For recent results, see [6, 9, 12].
In this paper, we are interested in the observation scheme called a “rapidly increasing experimental design.” The process X is observed at time moments $t_{i}=i\Delta _{n}$, $i=0,\dots ,n$, such that $\Delta _{n}\to 0$ and $n\Delta _{n}\to \infty $ as $n\to \infty $. One possible approach to parameter estimation is to consider a discretized version of the continuous-time MLE $\hat{\theta }_{T}$. The most general results in this direction were obtained by Yoshida [14], who proved the consistency and asymptotic normality of the discretized MLE in a model where the process is multidimensional, the drift coefficient depends on θ nonlinearly, and the diffusion coefficient also contains an unknown parameter.
Assume that we observe the process X at discrete time moments ${t_{k}^{n}}=k/n$, $0\le k\le {n}^{1+\alpha }$, where $0<\alpha <\frac{1}{2}$. In this scheme, Mishura [9] proposed the following discretized version of the maximum likelihood estimator:
\[\hat{\theta }_{n}=\frac{{\textstyle\sum _{k=0}^{{n}^{1+\alpha }}}a(X_{\frac{k}{n}})(X_{\frac{k+1}{n}}-X_{\frac{k}{n}})/b{(X_{\frac{k}{n}})}^{2}}{{n}^{-1}{\textstyle\sum _{k=0}^{{n}^{1+\alpha }}}a{(X_{\frac{k}{n}})}^{2}/b{(X_{\frac{k}{n}})}^{2}}.\]
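Numerically, $\hat{\theta }_{n}$ is simply a ratio of two sums over the observed values. A minimal Python sketch (the helper name and array conventions are ours, not taken from [9]) could look as follows:

```python
import numpy as np

def mle_discretized(X, n, a, b):
    """Discretized MLE of theta from observations X[k] = X_{k/n}.

    X    : 1-d array of observations at times 0, 1/n, 2/n, ...
    n    : grid parameter (the time step is 1/n)
    a, b : vectorized drift and diffusion coefficient functions
    """
    Xk = X[:-1]                  # values at the left endpoints X_{k/n}
    dX = np.diff(X)              # increments X_{(k+1)/n} - X_{k/n}
    c = a(Xk) / b(Xk) ** 2       # c(x) = a(x) / b(x)^2
    d = a(Xk) ** 2 / b(Xk) ** 2  # d(x) = a(x)^2 / b(x)^2
    return np.sum(c * dX) / (np.sum(d) / n)
```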
Mishura [9] proved the strong consistency of this estimator in the case where the coefficients a and b are bounded. The aim of this paper is to establish its asymptotic normality. In addition, we assume that the model is ergodic, but we do not require the coefficients to be bounded. In comparison with the general results of Yoshida [14], our assumptions are less restrictive: we assume polynomial growth of the function $1/b$ instead of the condition $\inf _{x}b{(x)}^{2}>0$, and we assume neither smoothness of the coefficients nor polynomial growth of their derivatives; any Lipschitz continuous $a(x)$ and $b(x)$ are admissible.

2 Model description and main result
Let $(\varOmega ,\mathfrak{F})$ be a measurable space. Assume that $\theta \in \mathbb{R}$ is fixed but unknown. Consider a probability measure $\mathbf{P}_{\theta }$ such that $\mathfrak{F}$ is $\mathbf{P}_{\theta }$-complete.
Let X solve the equation
(1)
\[X_{t}=x_{0}+\theta {\int _{0}^{t}}a(X_{s})\hspace{0.1667em}ds+{\int _{0}^{t}}b(X_{s})\hspace{0.1667em}dW_{s},\]
where $x_{0}\in \mathbb{R}$, $a,b:\mathbb{R}\to \mathbb{R}$ are measurable functions, and $\{W_{t},t\ge 0\}$ is a standard Wiener process on $(\varOmega ,\mathfrak{F},\mathbf{P}_{\theta })$.
Denote $c(x)=\frac{a(x)}{b{(x)}^{2}}$, $d(x)=\frac{a{(x)}^{2}}{b{(x)}^{2}}$, $\varphi _{\theta }(x)=\exp \{-2\theta {\int _{0}^{x}}c(y)\hspace{0.1667em}dy\}$, and $\varPhi _{\theta }(x)={\int _{0}^{x}}\varphi _{\theta }(y)\hspace{0.1667em}dy$.
Assume that the following conditions hold:
(A1) the functions $a(x)$ and $b(x)$ are Lipschitz continuous, that is, there exists $L>0$ such that
\[\big|a(x)-a(y)\big|+\big|b(x)-b(y)\big|\le L|x-y|\hspace{1em}\text{for all}\hspace{2.5pt}x,y\in \mathbb{R};\]
(A2) $\varPhi _{\theta }(x)\to \pm \infty $ as $x\to \pm \infty $;
(A3) $G_{\theta }:={\int _{-\infty }^{+\infty }}\frac{dy}{b{(y)}^{2}\varphi _{\theta }(y)}<\infty $.
It is well known that under assumption (A1) the stochastic differential equation (1) has a unique strong solution. This assumption also implies that the functions $a(x)$ and $b(x)$ satisfy the linear growth condition, that is,
(2)
\[a{(x)}^{2}+b{(x)}^{2}\le M\big(1+{x}^{2}\big)\]
for some $M>0$ and for all $x\in \mathbb{R}$.
Assume additionally that
(A4) there exist $M_{1}>0$ and $p\ge 0$ such that $\frac{1}{|b(x)|}\le M_{1}\big(1+|x{|}^{p}\big)$ for all $x\in \mathbb{R}$.
Then, for some $M_{2}>0$ and for any $x\in \mathbb{R}$,
(3)
\[\big|c(x)\big|\le M_{2}\big(1+|x{|}^{2p+1}\big),\hspace{2em}d(x)\le M_{2}\big(1+|x{|}^{2p+2}\big).\]
Under assumptions (A2)–(A3), the diffusion process X is positive recurrent; see, for example, [7, Prop. 1.15]. In this case, it has ergodic properties with the invariant density given by
\[\mu _{\theta }(x)=\frac{1}{G_{\theta }b{(x)}^{2}\varphi _{\theta }(x)},\hspace{1em}x\in \mathbb{R};\]
see [7, Thm. 1.16]. Let $\xi _{\theta }$ denote a random variable with density $\mu _{\theta }(x)$. Then, for any measurable function h such that $\mathbf{E}_{\theta }|h(\xi _{\theta })|<\infty $,
(4)
\[\frac{1}{T}{\int _{0}^{T}}h(X_{t})\hspace{0.1667em}dt\to {\int _{-\infty }^{+\infty }}h(x)\mu _{\theta }(x)\hspace{0.1667em}dx\equiv \mathbf{E}_{\theta }h(\xi _{\theta })\hspace{1em}\text{a.s. as}\hspace{2.5pt}T\to \infty .\]
Moreover, according to [1, Sect. II.37], the convergence (4) holds also in $L_{1}$, that is,
(5)
\[\mathbf{E}_{\theta }\bigg|\frac{1}{T}{\int _{0}^{T}}h(X_{t})\hspace{0.1667em}dt-\mathbf{E}_{\theta }h(\xi _{\theta })\bigg|\to 0\hspace{1em}\text{as}\hspace{2.5pt}T\to \infty .\]
Assume that the invariant distribution satisfies the condition
(A5) $\mathbf{E}_{\theta }|\xi _{\theta }{|}^{m}<\infty $ for all $m\in \mathbb{N}$.
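For instance, in the Ornstein–Uhlenbeck-type case $a(x)=-x$, $b(x)\equiv 1$ (with $\theta >0$), we have $c(x)=-x$, $\varphi _{\theta }(x)={e}^{\theta {x}^{2}}$, and
\[\mu _{\theta }(x)=\frac{1}{G_{\theta }}{e}^{-\theta {x}^{2}},\hspace{1em}G_{\theta }={\int _{-\infty }^{+\infty }}{e}^{-\theta {y}^{2}}\hspace{0.1667em}dy=\sqrt{\pi /\theta },\]
so that $\xi _{\theta }\sim N(0,\frac{1}{2\theta })$; in particular, all moments of $\xi _{\theta }$ are finite, and condition (A5) holds.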
Let $0<\alpha <1$. Suppose that we observe the process X at discrete time moments ${t_{k}^{n}}=k/n$, $0\le k\le {n}^{1+\alpha }$. Consider the estimator
\[\hat{\theta }_{n}=\frac{{\textstyle\sum _{k=1}^{{n}^{1+\alpha }}}c(X_{\frac{k-1}{n}})\Delta {X_{k}^{n}}}{{n}^{-1}{\textstyle\sum _{k=1}^{{n}^{1+\alpha }}}d(X_{\frac{k-1}{n}})},\]
where $\Delta {X_{k}^{n}}=X_{\frac{k}{n}}-X_{\frac{k-1}{n}}$.
Assume also that
(A6) $a(x)\ne 0$ on a set of positive Lebesgue measure.
Then $\mathbf{E}_{\theta }d(\xi _{\theta })>0$. Note also that by (3) and (A5), $\mathbf{E}_{\theta }d(\xi _{\theta })<\infty $. Now we are ready to formulate the main result.
Theorem 2.1.
Let assumptions (A1)–(A6) hold. Then the estimator $\hat{\theta }_{n}$ is consistent and asymptotically normal, namely
\[{n}^{\alpha /2}(\hat{\theta }_{n}-\theta )\stackrel{d}{\to }N\big(0,{\big(\mathbf{E}_{\theta }d(\xi _{\theta })\big)}^{-1}\big)\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
The proof is given in Section 4.
The following result gives sufficient conditions for consistency and asymptotic normality in the case where the parameter θ is positive.
Corollary 2.2.
If the coefficients are bounded, then the consistency and asymptotic normality of $\hat{\theta }_{n}$ can be obtained without assumption (A5).
Corollary 2.3.
Sketch of proof.
This result can be proved similarly to Theorem 2.1, using the boundedness of $a(x)$, $b(x)$, $c(x)$, $d(x)$ instead of the growth conditions (2), (3), and (A4) together with the boundedness of moments of the invariant density. In this case, (8) implies the inequality
\[\mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\le C(m,\theta ){n}^{-m}\]
for all $m\in \mathbb{N}$ and $t\in [\frac{k-1}{n},\frac{k}{n}]$, $k=1,2,\dots ,{n}^{1+\alpha }$. This estimate is used in the proof instead of Lemmas 4.1–4.2. □
3 Some simulation results
In this section, we illustrate the quality of the estimator by simulation experiments. We consider the diffusion process (1) with drift parameter $\theta =2$ and initial value $x_{0}=1$ in the following three cases:
(i) $a(x)=1-x$, $b(x)=2+\sin x$;
(ii) $a(x)=-\arctan x$, $b(x)=1$;
(iii) $a(x)=-\frac{x}{1+{x}^{2}}$, $b(x)=1$.
Using the Milstein method, we simulate 100 sample paths of each process and find the estimate $\hat{\theta }_{n}$ for different values of n and α. The average values of $\hat{\theta }_{n}$ and the corresponding standard deviations are presented in Tables 1–3.
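A minimal Python sketch of one cell of this experiment is given below (the function names, the fixed seed, and the shortcut of simulating directly on the observation grid ${t_{k}^{n}}=k/n$ are our assumptions, not details taken from the paper); the estimator computation repeats the helper from the Introduction.

```python
import numpy as np

rng = np.random.default_rng(12345)  # fixed seed for reproducibility (our choice)

def milstein_path(theta, x0, a, b, db, dt, n_steps):
    """Simulate equation (1) by the Milstein scheme with step dt."""
    X = np.empty(n_steps + 1)
    X[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        X[i + 1] = (X[i] + theta * a(X[i]) * dt + b(X[i]) * dW
                    + 0.5 * b(X[i]) * db(X[i]) * (dW ** 2 - dt))
    return X

def experiment(theta, x0, a, b, db, n, alpha, n_paths=100):
    """Mean and std. dev. of the estimator over n_paths simulated paths."""
    n_steps = int(round(n ** (1 + alpha)))
    estimates = np.empty(n_paths)
    for j in range(n_paths):
        X = milstein_path(theta, x0, a, b, db, 1.0 / n, n_steps)
        Xk, dX = X[:-1], np.diff(X)
        c = a(Xk) / b(Xk) ** 2
        d = a(Xk) ** 2 / b(Xk) ** 2
        estimates[j] = np.sum(c * dX) / (np.sum(d) / n)
    return estimates.mean(), estimates.std()

# Case of Table 1: a(x) = 1 - x, b(x) = 2 + sin(x), so b'(x) = cos(x)
mean, std = experiment(theta=2.0, x0=1.0,
                       a=lambda x: 1.0 - x,
                       b=lambda x: 2.0 + np.sin(x),
                       db=lambda x: np.cos(x),
                       n=500, alpha=0.5)
print(mean, std)
```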
Table 1.
$a(x)=1-x$, $b(x)=2+\sin x$

| | | $n=50$ | $n=100$ | $n=500$ | $n=1000$ | $n=2000$ | $n=5000$ |
|---|---|---|---|---|---|---|---|
| $\alpha =0.1$ | Mean | 3.05812 | 2.97626 | 2.73973 | 2.58453 | 2.55888 | 2.53879 |
| | Std. dev. | 2.06388 | 2.00007 | 1.43273 | 1.34689 | 1.26920 | 1.22077 |
| $\alpha =0.5$ | Mean | 2.11065 | 2.15066 | 2.08157 | 2.05626 | 2.03686 | 2.03479 |
| | Std. dev. | 0.62613 | 0.56038 | 0.31621 | 0.28909 | 0.22875 | 0.18187 |
| $\alpha =0.9$ | Mean | 2.02509 | 2.01702 | 2.02024 | 2.01308 | 2.00626 | 2.00289 |
| | Std. dev. | 0.27874 | 0.19589 | 0.09995 | 0.06918 | 0.04850 | 0.03028 |
Table 2.
$a(x)=-\arctan x$, $b(x)=1$

| | | $n=50$ | $n=100$ | $n=500$ | $n=1000$ | $n=2000$ | $n=5000$ |
|---|---|---|---|---|---|---|---|
| $\alpha =0.1$ | Mean | 2.69321 | 2.66637 | 2.65053 | 2.66356 | 2.59903 | 2.46685 |
| | Std. dev. | 2.03142 | 2.06075 | 1.82903 | 1.73034 | 1.68212 | 1.50186 |
| $\alpha =0.5$ | Mean | 2.12190 | 2.10459 | 2.01048 | 1.99535 | 2.01712 | 1.99517 |
| | Std. dev. | 0.85304 | 0.69484 | 0.48803 | 0.37807 | 0.31746 | 0.25846 |
| $\alpha =0.9$ | Mean | 1.95538 | 1.97446 | 1.98035 | 1.99565 | 2.00266 | 2.00290 |
| | Std. dev. | 0.35057 | 0.26796 | 0.12235 | 0.09050 | 0.06496 | 0.04533 |
Table 3.
$a(x)=-\frac{x}{1+{x}^{2}}$, $b(x)=1$

| | | $n=50$ | $n=100$ | $n=500$ | $n=1000$ | $n=2000$ | $n=5000$ |
|---|---|---|---|---|---|---|---|
| $\alpha =0.1$ | Mean | 1.99507 | 1.99813 | 1.97122 | 1.99255 | 1.98366 | 1.94811 |
| | Std. dev. | 2.44248 | 2.53060 | 2.17322 | 2.13403 | 2.05527 | 1.80128 |
| $\alpha =0.5$ | Mean | 1.87038 | 1.87897 | 1.89022 | 1.92593 | 1.94964 | 1.96624 |
| | Std. dev. | 1.01932 | 0.89315 | 0.54811 | 0.49005 | 0.41787 | 0.33855 |
| $\alpha =0.9$ | Mean | 1.90341 | 1.92162 | 2.00240 | 2.00068 | 2.00491 | 1.99347 |
| | Std. dev. | 0.47656 | 0.33693 | 0.18136 | 0.13173 | 0.09595 | 0.07033 |
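In all three cases, both the bias and the standard deviation of $\hat{\theta }_{n}$ decrease as n and α grow, and the improvement with α is consistent with the convergence rate ${n}^{-\alpha /2}$ of Theorem 2.1.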
4 Proof of Theorem 2.1
In this section, we prove the main theorem and some auxiliary lemmas. In what follows, $C,C_{1},C_{2},\dots \hspace{0.1667em}$ are positive generic constants that may vary from line to line. If they depend on some arguments, we will write $C(\theta )$, $C(m,\theta )$, and so on.
By (1),
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \Delta {X_{k}^{n}}& \displaystyle =\theta {\int _{\frac{k-1}{n}}^{\frac{k}{n}}}a(X_{t})\hspace{0.1667em}dt+{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}b(X_{t})\hspace{0.1667em}dW_{t}\\{} & \displaystyle =\theta a(X_{\frac{k-1}{n}})\frac{1}{n}+\theta {\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(a(X_{t})-a(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dt+b(X_{\frac{k-1}{n}})\Delta {W_{k}^{n}}\\{} & \displaystyle \hspace{1em}+{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(b(X_{t})-b(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dW_{t}.\end{array}\]
Therefore,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hat{\theta }_{n}& \displaystyle =\theta +\sum \limits_{k=1}^{{n}^{1+\alpha }}\Bigg(c(X_{\frac{k-1}{n}})\theta {\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(a(X_{t})-a(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dt+\frac{a(X_{\frac{k-1}{n}})}{b(X_{\frac{k-1}{n}})}\Delta {W_{k}^{n}}\\{} & \displaystyle \hspace{1em}+c(X_{\frac{k-1}{n}}){\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(b(X_{t})-b(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dW_{t}\Bigg)\Big/\Bigg(\frac{1}{n}\sum \limits_{k=1}^{{n}^{1+\alpha }}d(X_{\frac{k-1}{n}})\Bigg).\end{array}\]
Then
\[{n}^{\alpha /2}(\hat{\theta }_{n}-\theta )=\frac{{n}^{-\alpha /2}(A_{n}+B_{n}+E_{n})}{D_{n}},\]
where
\[\begin{array}{r@{\hskip0pt}l}\displaystyle D_{n}& \displaystyle ={n}^{-1-\alpha }\sum \limits_{k=1}^{{n}^{1+\alpha }}d(X_{\frac{k-1}{n}}),\\{} \displaystyle A_{n}& \displaystyle =\sum \limits_{k=1}^{{n}^{1+\alpha }}c(X_{\frac{k-1}{n}})\theta {\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(a(X_{t})-a(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dt,\\{} \displaystyle B_{n}& \displaystyle =\sum \limits_{k=1}^{{n}^{1+\alpha }}\frac{a(X_{\frac{k-1}{n}})}{b(X_{\frac{k-1}{n}})}\Delta {W_{k}^{n}},\\{} \displaystyle E_{n}& \displaystyle =\sum \limits_{k=1}^{{n}^{1+\alpha }}c(X_{\frac{k-1}{n}}){\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big(b(X_{t})-b(X_{\frac{k-1}{n}})\big)\hspace{0.1667em}dW_{t}.\end{array}\]
Thus, it suffices to show that $D_{n}$ converges to $\mathbf{E}_{\theta }d(\xi _{\theta })$ (Lemma 4.4), that ${n}^{-\alpha /2}A_{n}$ and ${n}^{-\alpha /2}E_{n}$ are negligible (Lemma 4.5), and that ${n}^{-\alpha /2}B_{n}$ is asymptotically normal (Lemma 4.6).
Lemma 4.1.
Let assumptions (A1)–(A3) and (A5) be fulfilled. Then for every $m\in \mathbb{N}$, there exists a constant $C(m,\theta )>0$ such that
\[\mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\le C(m,\theta ){n}^{-m+1+\alpha }\]
for all $n\in \mathbb{N}$, $1\le k\le {n}^{1+\alpha }$, and $t\in [\frac{k-1}{n},\frac{k}{n}]$.
Proof.
By (1) and the inequality ${(a+b)}^{2m}\le {2}^{2m-1}({a}^{2m}+{b}^{2m})$,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\\{} & \displaystyle \hspace{1em}\le {2}^{2m-1}\Bigg({\theta }^{2m}\mathbf{E}_{\theta }{\Bigg({\int _{\frac{k-1}{n}}^{t}}a(X_{s})\hspace{0.1667em}ds\Bigg)}^{2m}+\mathbf{E}_{\theta }{\Bigg({\int _{\frac{k-1}{n}}^{t}}b(X_{s})\hspace{0.1667em}dW_{s}\Bigg)}^{2m}\Bigg).\end{array}\]
Using the Burkholder–Davis–Gundy inequality, we obtain
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\\{} & \displaystyle \hspace{1em}\le {2}^{2m-1}\Bigg({\theta }^{2m}\mathbf{E}_{\theta }{\Bigg({\int _{\frac{k-1}{n}}^{t}}a(X_{s})\hspace{0.1667em}ds\Bigg)}^{2m}+C(m)\mathbf{E}_{\theta }{\Bigg({\int _{\frac{k-1}{n}}^{t}}b{(X_{s})}^{2}\hspace{0.1667em}ds\Bigg)}^{m}\Bigg).\end{array}\]
By Jensen’s inequality,
(8)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}& \displaystyle \le {2}^{2m-1}\Bigg({\theta }^{2m}{\big(t-\frac{k-1}{n}\big)}^{2m-1}\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}a{(X_{s})}^{2m}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}+C(m){\big(t-\frac{k-1}{n}\big)}^{m-1}\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}b{(X_{s})}^{2m}\hspace{0.1667em}ds\Bigg).\end{array}\]
Further, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}& \displaystyle \le {2}^{2m-1}\Bigg({\theta }^{2m}{n}^{1-2m}\mathbf{E}_{\theta }{\int _{0}^{{n}^{\alpha }}}a{(X_{s})}^{2m}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}+C(m){n}^{1-m}\mathbf{E}_{\theta }{\int _{0}^{{n}^{\alpha }}}b{(X_{s})}^{2m}\hspace{0.1667em}ds\Bigg).\end{array}\]
Now it remains to note that by (2) and (5) the integrals ${n}^{-\alpha }{\int _{0}^{{n}^{\alpha }}}a{(X_{s})}^{2m}\hspace{0.1667em}ds$ and ${n}^{-\alpha }{\int _{0}^{{n}^{\alpha }}}b{(X_{s})}^{2m}\hspace{0.1667em}ds$ have bounded expectations. □
Lemma 4.2.
Under assumption (A1), for every $m\in \mathbb{N}$, there exists a constant $C(m,\theta )>0$ such that
\[\mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\le C(m,\theta ){n}^{-m}\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{2m}\big)\]
for all $n\in \mathbb{N}$, $1\le k\le {n}^{1+\alpha }$, and $t\in [\frac{k-1}{n},\frac{k}{n}]$.
Proof.
By (8),
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}\\{} & \displaystyle \hspace{1em}\le C_{1}(m,\theta ){n}^{1-m}\Bigg(\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}a{(X_{s})}^{2m}\hspace{0.1667em}ds+\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}b{(X_{s})}^{2m}\hspace{0.1667em}ds\Bigg).\end{array}\]
Using assumption (A1) and (2), we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}a{(X_{s})}^{2m}\hspace{0.1667em}ds& \displaystyle \le {2}^{2m-1}\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}\big({\big(a(X_{s})-a(X_{\frac{k-1}{n}})\big)}^{2m}+a{(X_{\frac{k-1}{n}})}^{2m}\big)\hspace{0.1667em}ds\\{} & \displaystyle \le {2}^{2m-1}{L}^{2m}\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}{(X_{s}-X_{\frac{k-1}{n}})}^{2m}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}+C(m)\big(t-\frac{k-1}{n}\big)\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{2m}\big).\end{array}\]
The same estimate holds for $\mathbf{E}_{\theta }{\int _{\frac{k-1}{n}}^{t}}b{(X_{s})}^{2m}\hspace{0.1667em}ds$. Therefore,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2m}& \displaystyle \le C_{2}(m,\theta ){n}^{1-m}{\int _{\frac{k-1}{n}}^{t}}\mathbf{E}_{\theta }{(X_{s}-X_{\frac{k-1}{n}})}^{2m}\hspace{0.1667em}ds\\{} & \displaystyle \hspace{1em}+C_{2}(m,\theta ){n}^{-m}\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{2m}\big),\end{array}\]
and the result follows from the Gronwall lemma. □
Lemma 4.3.
Let assumptions (A1)–(A3) and (A5) be fulfilled. Then for every $m\in \mathbb{N}$ and all integers $1\le i\le 2m$ and $j\ge 0$, there exists a constant $C(m,\theta )>0$ such that
\[\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }\big(|X_{\frac{k-1}{n}}-X_{t}{|}^{i}|X_{t}{|}^{j}\big)\hspace{0.1667em}dt\le C(m,\theta ){n}^{\alpha -\frac{(m-\alpha )i}{2m+2}}.\]
Proof.
Applying the Hölder inequality and Lemma 4.1, we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }\big(|X_{\frac{k-1}{n}}-X_{t}{|}^{i}|X_{t}{|}^{j}\big)& \displaystyle \le {\big(\mathbf{E}_{\theta }|X_{\frac{k-1}{n}}-X_{t}{|}^{2m+2}\big)}^{\frac{i}{2m+2}}{\big(\mathbf{E}_{\theta }|X_{t}{|}^{\frac{j(2m+2)}{2m+2-i}}\big)}^{\frac{2m+2-i}{2m+2}}\\{} & \displaystyle \le C_{1}(m,\theta ){n}^{-\frac{(m-\alpha )i}{2m+2}}{\big(\mathbf{E}_{\theta }|X_{t}{|}^{\frac{j(2m+2)}{2m+2-i}}\big)}^{\frac{2m+2-i}{2m+2}}.\end{array}\]
Then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }\big(|X_{\frac{k-1}{n}}-X_{t}{|}^{i}|X_{t}{|}^{j}\big)\hspace{0.1667em}dt\\{} & \displaystyle \hspace{1em}\le C_{1}(m,\theta ){n}^{-\frac{(m-\alpha )i}{2m+2}}{\int _{0}^{{n}^{\alpha }}}{\big(\mathbf{E}_{\theta }|X_{t}{|}^{\frac{j(2m+2)}{2m+2-i}}\big)}^{\frac{2m+2-i}{2m+2}}\hspace{0.1667em}dt.\end{array}\]
By Jensen’s inequality we have
\[{\int _{0}^{{n}^{\alpha }}}{\big(\mathbf{E}_{\theta }|X_{t}{|}^{\frac{j(2m+2)}{2m+2-i}}\big)}^{\frac{2m+2-i}{2m+2}}\hspace{0.1667em}dt\le {n}^{\alpha }{\Bigg({n}^{-\alpha }{\int _{0}^{{n}^{\alpha }}}\mathbf{E}_{\theta }|X_{t}{|}^{\frac{j(2m+2)}{2m+2-i}}\hspace{0.1667em}dt\Bigg)}^{\frac{2m+2-i}{2m+2}}.\]
By (5) the expression in brackets is bounded. This completes the proof. □
Lemma 4.4.
Let assumptions (A1)–(A5) be fulfilled. Then, for every integer $m\ge 0$, as $n\to \infty $,
(i) ${n}^{-1-\alpha }{\textstyle\sum _{k=1}^{{n}^{1+\alpha }}}{X_{\frac{k-1}{n}}^{2m}}\stackrel{L_{1}}{\to }\mathbf{E}_{\theta }{\xi _{\theta }^{2m}}$;
(ii) $D_{n}\stackrel{L_{1}}{\to }\mathbf{E}_{\theta }d(\xi _{\theta })$.
Proof.
(i) In the case $m=0$, the result is trivial. Let $m\ge 1$. By (5), applied with $h(x)={x}^{2m}$ and $T={n}^{\alpha }$, we have
\[{n}^{-\alpha }{\int _{0}^{{n}^{\alpha }}}{X_{t}^{2m}}\hspace{0.1667em}dt\stackrel{L_{1}}{\to }\mathbf{E}_{\theta }{\xi _{\theta }^{2m}}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
Hence, it suffices to prove that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle F_{n}& \displaystyle :=\mathbf{E}_{\theta }\bigg|{n}^{-\alpha }{\int _{0}^{{n}^{\alpha }}}{X_{t}^{2m}}\hspace{0.1667em}dt-{n}^{-1-\alpha }\sum \limits_{k=1}^{{n}^{1+\alpha }}{X_{\frac{k-1}{n}}^{2m}}\bigg|\\{} & \displaystyle ={n}^{-\alpha }\mathbf{E}_{\theta }\bigg|\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\big({X_{t}^{2m}}-{X_{\frac{k-1}{n}}^{2m}}\big)\hspace{0.1667em}dt\bigg|\end{array}\]
converges to zero as $n\to \infty $. By the inequality $|x|\le |x-y|+|y|$,
(9)
\[\big|{x}^{2m}-{y}^{2m}\big|\le |x-y|\sum \limits_{i=0}^{2m-1}|x{|}^{i}|y{|}^{2m-1-i}\le \sum \limits_{i=1}^{2m}C_{i}|x-y{|}^{i}|y{|}^{2m-i}.\]
Therefore,
\[F_{n}\le \sum \limits_{i=1}^{2m}C_{i}{n}^{-\alpha }\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }\big(|X_{\frac{k-1}{n}}-X_{t}{|}^{i}|X_{t}{|}^{2m-i}\big)\hspace{0.1667em}dt,\]
and, by Lemma 4.3, $F_{n}\to 0$ as $n\to \infty $.
(ii) For arbitrary x and y,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle d(x)-d(y)& \displaystyle =\frac{a{(x)}^{2}}{b{(x)}^{2}}-\frac{a{(y)}^{2}}{b{(y)}^{2}}\\{} & \displaystyle =\big(a(x)-a(y)\big)\bigg(\frac{a(x)}{b{(x)}^{2}}+\frac{a(y)}{b(x)b(y)}\bigg)\\{} & \displaystyle \hspace{1em}-\big(b(x)-b(y)\big)\bigg(\frac{a(x)a(y)}{b{(x)}^{2}b(y)}+\frac{a(x)a(y)}{b(x)b{(y)}^{2}}\bigg).\end{array}\]
By (A1), (A4), and (2),
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|d(x)-d(y)\big|& \displaystyle \le C|x-y|\big(1+|x{|}^{2p+1}+\big(1+|x{|}^{p}\big)\big(1+|y{|}^{p+1}\big)\\{} & \displaystyle \hspace{1em}+\big(1+|x{|}^{2p+1}\big)\big(1+|y{|}^{p+1}\big)+\big(1+|x{|}^{p+1}\big)\big(1+|y{|}^{2p+1}\big)\big).\end{array}\]
The rest of the proof can be done similarly to part (i) using estimate (10) instead of (9). □
Lemma 4.5.
Let assumptions (A1)–(A5) be fulfilled. Then ${n}^{-\alpha /2}A_{n}\stackrel{L_{1}}{\to }0$ and ${n}^{-\alpha /2}E_{n}\stackrel{L_{2}}{\to }0$ as $n\to \infty $.
Proof.
(i) By the Cauchy–Schwarz inequality we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }|A_{n}|& \displaystyle \le |\theta |\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }\big|c(X_{\frac{k-1}{n}})\big(a(X_{t})-a(X_{\frac{k-1}{n}})\big)\big|\hspace{0.1667em}dt\\{} & \displaystyle \le |\theta |\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}{\big(\mathbf{E}_{\theta }c{(X_{\frac{k-1}{n}})}^{2}\big)}^{\frac{1}{2}}{\big(\mathbf{E}_{\theta }{\big(a(X_{t})-a(X_{\frac{k-1}{n}})\big)}^{2}\big)}^{\frac{1}{2}}\hspace{0.1667em}dt.\end{array}\]
Using (A1), (3), and Lemma 4.2, we get
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}_{\theta }|A_{n}|& \displaystyle \le C_{1}|\theta |\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}{\big(\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{4p+2}\big)\big)}^{\frac{1}{2}}{\big(\mathbf{E}_{\theta }{(X_{t}-X_{\frac{k-1}{n}})}^{2}\big)}^{\frac{1}{2}}\hspace{0.1667em}dt\\{} & \displaystyle \le C_{2}(\theta ){n}^{-1/2}\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}{\big(\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{4p+4}\big)\big)}^{1/2}\hspace{0.1667em}dt\\{} & \displaystyle =C_{2}(\theta ){n}^{-3/2}\sum \limits_{k=1}^{{n}^{1+\alpha }}{\big(\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{4p+4}\big)\big)}^{1/2}\\{} & \displaystyle \le C_{2}(\theta ){n}^{\alpha -1/2}{\Bigg({n}^{-1-\alpha }\sum \limits_{k=1}^{{n}^{1+\alpha }}\mathbf{E}_{\theta }\big(1+|X_{\frac{k-1}{n}}{|}^{4p+4}\big)\Bigg)}^{1/2}.\end{array}\]
By Lemma 4.4 the expression ${n}^{-1-\alpha }{\sum _{k=1}^{{n}^{1+\alpha }}}\mathbf{E}_{\theta }(1+|X_{\frac{k-1}{n}}{|}^{4p+4})$ is bounded. Therefore, $\mathbf{E}_{\theta }|A_{n}|\le C_{3}(\theta ){n}^{\alpha -1/2}$, whence ${n}^{-\alpha /2}A_{n}\stackrel{L_{1}}{\to }0$ as $n\to \infty $.
(ii) We have
\[E_{n}={\int _{0}^{{n}^{\alpha }}}h(t,\omega )\hspace{0.1667em}dW_{t},\]
where
\[h(t,\omega )=\sum \limits_{k=1}^{{n}^{1+\alpha }}c(X_{\frac{k-1}{n}})\big(b(X_{t})-b(X_{\frac{k-1}{n}})\big)\mathbf{1}_{[\frac{k-1}{n},\frac{k}{n})}(t).\]
Then
\[\mathbf{E}_{\theta }{E_{n}^{2}}=\mathbf{E}_{\theta }{\int _{0}^{{n}^{\alpha }}}h{(t,\omega )}^{2}\hspace{0.1667em}dt=\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }\big(c{(X_{\frac{k-1}{n}})}^{2}{\big(b(X_{t})-b(X_{\frac{k-1}{n}})\big)}^{2}\big)\hspace{0.1667em}dt.\]
Similarly to (i), we can estimate $\mathbf{E}_{\theta }{E_{n}^{2}}\le C_{4}(\theta ){n}^{\alpha -1}$. Therefore, ${n}^{-\alpha /2}E_{n}\stackrel{L_{2}}{\to }0$ as $n\to \infty $. □
Lemma 4.6.
Let assumptions (A1)–(A5) be fulfilled. Then ${n}^{-\alpha }B_{n}\stackrel{L_{2}}{\to }0$ and ${n}^{-\alpha /2}B_{n}\stackrel{d}{\to }N\big(0,\mathbf{E}_{\theta }d(\xi _{\theta })\big)$ as $n\to \infty $.
Proof.
(i) Let us prove the convergence in $L_{2}$. We have $B_{n}={B^{\prime }_{n}}+{B^{\prime\prime }_{n}}$, where
\[{B^{\prime }_{n}}={\int _{0}^{{n}^{\alpha }}}\frac{a(X_{t})}{b(X_{t})}\hspace{0.1667em}dW_{t},\hspace{2em}{B^{\prime\prime }_{n}}=\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\bigg(\frac{a(X_{\frac{k-1}{n}})}{b(X_{\frac{k-1}{n}})}-\frac{a(X_{t})}{b(X_{t})}\bigg)\hspace{0.1667em}dW_{t}.\]
Then by (5) we have
(11)
\[\mathbf{E}_{\theta }{\big({n}^{-\alpha /2}{B^{\prime }_{n}}\big)}^{2}={n}^{-\alpha }\mathbf{E}_{\theta }{\int _{0}^{{n}^{\alpha }}}\frac{a{(X_{t})}^{2}}{b{(X_{t})}^{2}}\hspace{0.1667em}dt={n}^{-\alpha }\mathbf{E}_{\theta }{\int _{0}^{{n}^{\alpha }}}d(X_{t})\hspace{0.1667em}dt\to \mathbf{E}_{\theta }d(\xi _{\theta })\]
as $n\to \infty $. Hence, ${n}^{-\alpha }{B^{\prime }_{n}}\stackrel{L_{2}}{\to }0$ as $n\to \infty $.
Arguing as in the proof of Lemma 4.5 (ii), we obtain
\[\mathbf{E}_{\theta }{\big({B^{\prime\prime }_{n}}\big)}^{2}=\sum \limits_{k=1}^{{n}^{1+\alpha }}{\int _{\frac{k-1}{n}}^{\frac{k}{n}}}\mathbf{E}_{\theta }{\bigg(\frac{a(X_{\frac{k-1}{n}})}{b(X_{\frac{k-1}{n}})}-\frac{a(X_{t})}{b(X_{t})}\bigg)}^{2}\hspace{0.1667em}dt.\]
Further,
\[\frac{a(x)}{b(x)}-\frac{a(y)}{b(y)}=\frac{a(x)}{b(x)b(y)}\big(b(y)-b(x)\big)+\frac{1}{b(y)}\big(a(x)-a(y)\big).\]
Therefore, by (2) and assumption (A4),
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\bigg(\frac{a(x)}{b(x)}-\frac{a(y)}{b(y)}\bigg)}^{2}& \displaystyle \le \frac{2a{(x)}^{2}}{b{(x)}^{2}b{(y)}^{2}}{\big(b(y)-b(x)\big)}^{2}+\frac{2}{b{(y)}^{2}}{\big(a(x)-a(y)\big)}^{2}\\{} & \displaystyle \le C{(x-y)}^{2}\big(1+|x{|}^{2p+2}|y{|}^{2p}+|y{|}^{2p}\big).\end{array}\]
Similarly to the proof of Lemma 4.4, we get the convergence