1 Introduction
Consider the stochastic differential equation
(1)
\[ \hspace{0.1667em}dX(t)=a\big(X(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW(t),\hspace{1em}t\geqslant 0,\]
where a is a locally integrable function and W is a Wiener process.
The aim of this paper is to study convergence in distribution of the sequence of processes $\{X(nt)/\sqrt{n},\hspace{2.5pt}t\geqslant 0\}$ as $n\to \infty $.
Observe that
\[ \hspace{0.1667em}dX_{n}(t)=\sqrt{n}a\big(\sqrt{n}X_{n}(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW_{n}(t),\hspace{1em}t\geqslant 0,\]
where $W_{n}(t)=W(nt)/\sqrt{n}$, $t\geqslant 0$, is a Wiener process, and $X_{n}(t)=X(nt)/\sqrt{n}$, $t\geqslant 0$. Hence, to study the sequence $\{X(nt)/\sqrt{n}\}$, it suffices to investigate the SDEs
\[ \hspace{0.1667em}dX_{n}(t)=a_{n}\big(X_{n}(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW(t),\hspace{1em}t\geqslant 0,\]
where $a_{n}(x)=na(nx)$. If $a\in L_{1}(\mathbb{R})$, then $a_{n}$ converges in the generalized sense to $\alpha \delta _{0}$, where $\delta _{0}$ is the Dirac delta function at zero and $\alpha =\int _{\mathbb{R}}a(x)\hspace{0.1667em}dx$. It is well known that in this case the sequence $\{X_{n}\}$ converges weakly to a skew Brownian motion with parameter $\gamma =\tanh (\alpha )=\frac{{e}^{\alpha }-{e}^{-\alpha }}{{e}^{\alpha }+{e}^{-\alpha }}$; see, for example, [14, 10]. Recall [5, 10] that the skew Brownian motion $W_{\gamma }(t)$ with parameter γ, $|\gamma |\leqslant 1$, is the unique (strong) solution to the SDE
\[ \hspace{0.1667em}dW_{\gamma }(t)=\hspace{0.1667em}dW(t)+\gamma \hspace{0.1667em}d{L_{W_{\gamma }}^{0}}(t),\]
where ${L_{W_{\gamma }}^{0}}(t)=\lim _{\varepsilon \to 0+}{(2\varepsilon )}^{-1}{\int _{0}^{t}}\mathbb{1}_{|W_{\gamma }(s)|\leqslant \varepsilon }\hspace{0.1667em}ds$ is the local time of the process $W_{\gamma }$ at 0. The process $W_{\gamma }$ is a continuous Markov process with transition probability density function $p_{t}(x,y)=\varphi _{t}(x-y)+\gamma \operatorname{sgn}(y)\hspace{0.1667em}\varphi _{t}(|x|+|y|)$, $x,y\in \mathbb{R}$, where $\varphi _{t}(x)=\frac{1}{\sqrt{2\pi t}}{e}^{-{x}^{2}/2t}$ is the density of the normal distribution $N(0,t)$. Note also that $W_{\gamma }$ can be obtained from the excursions of a Wiener process by pointing them (independently of each other) up and down with probabilities $p=(1+\gamma )/2$ and $q=(1-\gamma )/2$, respectively.
Kulinich et al. [8, 7] considered limit theorems in the case where a is a nonintegrable function such that
(2)
\[ \underset{x\to \pm \infty }{\lim }\frac{1}{x}{\int _{0}^{x}}\big|va(v)-c_{\pm }\big|\hspace{0.1667em}dv=0,\hspace{1em}\big|xa(x)\big|\leqslant C,\]
where $c_{\pm }>-1/2$ are constants. In this case, $a_{n}(x)$ converges in some sense to $\frac{c_{-}\mathbb{1}_{x<0}+c_{+}\mathbb{1}_{x\geqslant 0}}{x}$ as $n\to \infty $.
For instance, if $a(x)=c_{\pm }/x\hspace{2.5pt}\text{for}\hspace{2.5pt}\pm x>x_{0}$, then, for $c_{-}<1/2<c_{+}$, the sequence $X_{n}$ converges weakly to a Bessel process. If $c_{-}=c_{+}>-1/2$, then $|X_{n}|$ also converges weakly to a Bessel process. The problem of weak convergence of $X_{n}$ for (e.g.) $c_{-}=c_{+}>-1/2$ or $c_{-}<c_{+}\leqslant 1/2$ was not considered.
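To illustrate the convergence to a skew Brownian motion numerically, the following sketch (not part of the original argument; the drift $a(x)=\alpha \mathbb{1}_{[0,1]}(x)$, the value of n, and all discretization parameters are arbitrary illustrative choices) simulates $dX=a(X)\hspace{0.1667em}dt+dW$, $X(0)=0$, by the Euler–Maruyama scheme and compares the empirical value of $P(X(nT)>0)$ with $(1+\gamma )/2$, $\gamma =\tanh \alpha $, which is what the skew Brownian limit started at 0 predicts.

```python
# Illustrative sketch (assumptions: a = alpha*1_{[0,1]}, Euler-Maruyama discretization).
# For integrable a, X(n t)/sqrt(n) is approximately a skew Brownian motion with
# gamma = tanh(alpha); started at 0, it is positive at a fixed time with probability
# p = (1 + gamma)/2, which we compare with a Monte Carlo estimate of P(X(nT) > 0).
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0                       # alpha = int_R a(x) dx

def a(x):
    return alpha * ((x >= 0.0) & (x <= 1.0))

n, T, dt, paths = 50.0, 1.0, 0.01, 10_000
x = np.zeros(paths)               # X(0) = 0
for _ in range(int(n * T / dt)):  # Euler-Maruyama for dX = a(X) dt + dW up to time nT
    x += a(x) * dt + np.sqrt(dt) * rng.standard_normal(paths)

print("empirical P(X(nT) > 0):", np.mean(x > 0.0))
print("skew BM prediction    :", (1.0 + np.tanh(alpha)) / 2.0)
```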
In this paper, we generalize the results of [14, 8] to the case
\[ a(x)=\widetilde{a}(x)+\frac{\bar{c}(x)}{x},\]
where $\widetilde{a}$ is integrable on $(-\infty ,\infty )$, and
\[ \bar{c}(x)=c_{+}\mathbb{1}_{x\geqslant 1}+c_{-}\mathbb{1}_{x\leqslant -1}.\]
We consider all possible limit processes (depending on $c_{+}$ and $c_{-}$). In particular, we show that, for $c_{+}=c_{-}<1/2$, the limit process is a skew Bessel process (see Section 2).
2 Bessel process. Skew Bessel process. Definition, properties
We recall the definition and some properties of Bessel processes.
Let $\delta \geqslant 0$ and $x_{0}\in \mathbb{R}$. Consider the SDE
(3)
\[ \hspace{0.1667em}dZ(t)=2\sqrt{\big|Z(t)\big|}\hspace{0.1667em}dW(t)+\delta \hspace{0.1667em}dt,\hspace{1em}Z(0)={x_{0}^{2}},\]
where W is a Wiener process.
It is known (see [15], XI.1, (1.1)) that there exists a unique strong solution $Z({x_{0}^{2}},\cdot )$ of (3). This solution is called the squared δ-dimensional Bessel process. The process $Z({x_{0}^{2}},\cdot )$ is nonnegative a.s.
Definition 1.
The process $B_{c}(x_{0},t)=\sqrt{Z({x_{0}^{2}},t)}$ with $x_{0}\geqslant 0$ is called the (nonnegative) Bessel process with parameter $c=(\delta -1)/2$.
We will call the process ${B_{c}^{-}}(x_{0},t)=-B_{c}(x_{0},t)=-\sqrt{Z({x_{0}^{2}},t)}$ with $x_{0}\leqslant 0$ the nonpositive Bessel process.
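As a quick sanity check of Definition 1 (an illustrative sketch rather than anything from the paper; the parameter values and the Euler discretization are arbitrary), one can simulate the squared Bessel SDE (3) and verify the elementary identity $\mathrm{E}\hspace{0.1667em}Z({x_{0}^{2}},t)={x_{0}^{2}}+\delta t$, which follows from (3) because the stochastic integral has zero expectation; $B_{c}(x_{0},t)=\sqrt{Z({x_{0}^{2}},t)}$ is then the Bessel process.

```python
# Illustrative sketch (assumed parameters): Euler scheme for the squared Bessel SDE
# dZ = 2*sqrt(|Z|) dW + delta dt, Z(0) = x0^2, with a Monte Carlo check of
# E[Z(t)] = x0^2 + delta*t; the Bessel process itself is B = sqrt(Z).
import numpy as np

rng = np.random.default_rng(1)
delta, x0, t_end, dt, paths = 1.5, 0.7, 1.0, 1e-3, 20_000

z = np.full(paths, x0 ** 2)
for _ in range(int(t_end / dt)):
    z += delta * dt + 2.0 * np.sqrt(np.abs(z)) * np.sqrt(dt) * rng.standard_normal(paths)
    z = np.maximum(z, 0.0)        # the exact solution is nonnegative; clip the Euler scheme

print("Monte Carlo E[Z(t)]:", z.mean())
print("x0^2 + delta*t     :", x0 ** 2 + delta * t_end)
```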
Recall the following properties of the Bessel process (see [15, Chap. XI]).
The Bessel process $\xi (t)=B_{c}(x_{0},t)$ satisfies the SDE
\[ \hspace{0.1667em}d\xi _{t}=\hspace{0.1667em}dW_{t}+\frac{c}{\xi _{t}}\hspace{0.1667em}dt,\hspace{1em}t<T_{0},\]
where $T_{0}$ is the first hitting time of 0. If $\delta \geqslant 2$ (i.e., $c\geqslant 1/2$), then with probability 1 the Bessel process does not hit 0. If $0<\delta <2$ (i.e., $-1/2<c<1/2$), then with probability 1 the Bessel process hits 0 but spends zero time at 0. In particular, if $\delta =1$ (i.e., $c=0$), then the Bessel process is a reflecting Brownian motion.
If $\delta =0$ (i.e., $c=-1/2$), then with probability 1 the process attains 0 and remains there forever.
The scale function of the Bessel process $B_{c}$ equals
(4)
\[ \psi _{c}(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}-{x}^{-2c+1}\hspace{1em}& \text{if}\hspace{2.5pt}\hspace{2.5pt}c>1/2,\\{} \ln x\hspace{1em}& \text{if}\hspace{2.5pt}\hspace{2.5pt}c=1/2,\\{} {x}^{-2c+1}\hspace{1em}& \text{if}\hspace{2.5pt}\hspace{2.5pt}c<1/2,\end{array}\right.\]
that is,
\[ P_{x}(T_{a}<T_{b})=\frac{\psi _{c}(b)-\psi _{c}(x)}{\psi _{c}(b)-\psi _{c}(a)}\hspace{1em}\hspace{2.5pt}\text{for any}\hspace{2.5pt}\hspace{2.5pt}0<a<x<b,\]
where $T_{y}=\inf \{t\geqslant 0:\hspace{2.5pt}B_{c}(t)=y\}$. The transition density for $c>-1/2$, $x,y>0$, and $t>0$ equals
\[ {p_{t}^{c}}(x,y)={t}^{-1}{(y/x)}^{\nu }y\exp \big(-\big({x}^{2}+{y}^{2}\big)/2t\big)I_{\nu }(xy/t),\]
where $I_{\nu }$ is the modified Bessel function of the first kind of index $\nu =c-1/2$. Let $c\in (-1/2,1/2)$, and let ${p_{t}^{0,c}}(x,y)$ be the transition density of the Bessel process $B_{c}$ killed at 0.
Set
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {p_{t}^{\mathit{skew}}}(x,y)& \displaystyle ={p_{t}^{0,c}}\big(|x|,|y|\big)\cdot \mathbb{1}_{xy>0}\\{} & \displaystyle \hspace{1em}+\frac{1+\gamma \operatorname{sgn}(y)}{2}\big({p_{t}^{c}}\big(|x|,|y|\big)-{p_{t}^{0,c}}\big(|x|,|y|\big)\big),\hspace{1em}x,y\in \mathbb{R}.\end{array}\]
It is easy to verify that this function satisfies the Chapman–Kolmogorov equation, is nonnegative, and $\int _{\mathbb{R}}{p_{t}^{\mathit{skew}}}(x,y)\hspace{0.1667em}dy=1,\hspace{2.5pt}x\in \mathbb{R}$.
Definition 2.
A time-homogeneous Markov process with the transition density ${p_{t}^{\mathit{skew}}}$ is called the skew Bessel process ${B_{c,\gamma }^{\mathit{skew}}}$ with parameters c and $\gamma \in [-1,1]$.
Remark 1.
We do not consider the skew Bessel process for $c\geqslant 1/2$ because $B_{c}(x_{0},\cdot )$ does not hit 0 if $x_{0}\ne 0$.
Remark 2.
The skew Bessel process ${B}^{\mathit{skew}}$ can be obtained from a nonnegative Bessel process by pointing its excursions up with probability $p=\frac{1+\gamma }{2}$ and down with probability $q=\frac{1-\gamma }{2}$, similarly to the case of a skew Brownian motion; see arguments in [1], Section 2.
Thus, the scale function of the skew Bessel process equals
(5)
\[ \psi _{\mathit{skew}}(x)=(q\mathbb{1}_{x\geqslant 0}-p\mathbb{1}_{x<0})|x{|}^{-2c+1},\hspace{1em}x\in \mathbb{R}.\]
For other properties of the skew Bessel process, we refer to [2].
Remark 3.
If $x_{0}>0$ and $p=1$ (i.e., $\gamma =1$), then ${B_{c,\gamma }^{\mathit{skew}}}(x_{0},\cdot )$ is a (nonnegative) Bessel process $B_{c}(x_{0},\cdot )$ with parameter c: ${B_{c,1}^{\mathit{skew}}}(x_{0},\cdot )\stackrel{d}{=}B_{c}(x_{0},\cdot )$.
Also, the absolute value of the skew Bessel process $|{B_{c,\gamma }^{\mathit{skew}}}|$ is a (nonnegative) Bessel process with parameter c: $|{B_{c,\gamma }^{\mathit{skew}}}(x_{0},\cdot )|\stackrel{d}{=}B_{c}(|x_{0}|,\cdot )$.
If $c=0$, then ${B_{c,\gamma }^{\mathit{skew}}}$ is a skew Brownian motion: ${B_{0,\gamma }^{\mathit{skew}}}(\cdot )\stackrel{d}{=}W_{\gamma }(\cdot )$.
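Before turning to the main results, here is a quick numerical check (purely illustrative; the values of c, a, b, x and the time step are arbitrary choices) of the exit probability formula above: simulate the Bessel SDE $d\xi =dW+(c/\xi )\hspace{0.1667em}dt$ on an interval away from 0 and compare the Monte Carlo estimate of $P_{x}(T_{a}<T_{b})$ with the scale-function prediction from (4).

```python
# Illustrative sketch (assumed parameters): Monte Carlo estimate of P_x(T_a < T_b)
# for the Bessel SDE d(xi) = dW + (c/xi) dt, compared with the scale-function
# formula (psi_c(b)-psi_c(x)) / (psi_c(b)-psi_c(a)), psi_c(y) = y**(1-2c) for c < 1/2.
import numpy as np

rng = np.random.default_rng(2)
c, a, b, x0 = 0.25, 0.5, 2.0, 1.0
dt, paths = 1e-4, 5_000

xi = np.full(paths, x0)
alive = np.ones(paths, dtype=bool)       # paths that have hit neither a nor b yet
hit_a = np.zeros(paths, dtype=bool)
while alive.any():
    y = xi[alive] + (c / xi[alive]) * dt + np.sqrt(dt) * rng.standard_normal(alive.sum())
    xi[alive] = y
    new_a, new_b = alive.copy(), alive.copy()
    new_a[alive], new_b[alive] = (y <= a), (y >= b)
    hit_a |= new_a
    alive &= ~(new_a | new_b)

def psi(y):                              # scale function (4) for c < 1/2
    return y ** (1.0 - 2.0 * c)

print("Monte Carlo P_x(T_a < T_b):", hit_a.mean())
print("scale-function prediction :", (psi(b) - psi(x0)) / (psi(b) - psi(a)))
```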
3 Main results
Let
\[ a(x)=\widetilde{a}(x)+\frac{\bar{c}(x)}{x},\hspace{1em}x\in \mathbb{R},\]
where $\widetilde{a}\in L_{1}(\mathbb{R})$ and
\[ \bar{c}(x)=c_{+}\mathbb{1}_{x\geqslant 1}+c_{-}\mathbb{1}_{x\leqslant -1}.\]
Let $X_{n}(t),t\geqslant 0$, be the solution of the SDE
\[ \left\{\begin{array}{l}\hspace{0.1667em}dX_{n}(t)=na\big(nX_{n}(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW(t)\\{} \hspace{2em}\hspace{2em}\hspace{2.5pt}=\bigg(n\widetilde{a}\big(nX_{n}(t)\big)+\displaystyle \frac{\bar{c}\big(nX_{n}(t)\big)}{X_{n}(t)}\bigg)\hspace{0.1667em}dt+\hspace{0.1667em}dW(t),\hspace{1em}t\geqslant 0,\\{} X_{n}(0)=x_{0}.\end{array}\right.\]
The existence and uniqueness of a strong solution of this SDE follow from [3, Thm. 4.53].
Theorem 1.
If $c_{+}>-1/2$ and $c_{-}>-1/2$, then the sequence of processes $\{X_{n}\}$ converges weakly to a limit process $X_{\infty }$. In particular:
-
A3. If $x_{0}<0$, $c_{-}<1/2$, and $c_{-}<c_{+}$, then the limiting process evolves as ${B_{c_{-}}^{-}}$ until hitting 0 and then proceeds as ${B_{c_{+}}^{+}}$ indefinitely, that is,\[ X_{\infty }(t)={B_{c_{-}}^{-}}(x_{0},t)\cdot \mathbb{1}_{t\leqslant \tau }+{B_{c_{+}}^{+}}(0,t-\tau )\cdot \mathbb{1}_{t>\tau },\hspace{1em}t\geqslant 0,\]where $\tau =\inf \{t:X_{\infty }(t)\geqslant 0\}$ and ${B_{c_{\pm }}^{\pm }}$ are independent (positive and negative) Bessel processes.
-
A5. If $c_{+}=c_{-}=:c<1/2$, then, for any $x_{0}$, the limit process is a skew Bessel process, $X_{\infty }(\cdot )\stackrel{d}{=}{B_{c,\gamma }^{\mathit{skew}}}(x_{0},\cdot )$, where $\gamma =\tanh ({\int _{-\infty }^{+\infty }}\widetilde{a}(z)\hspace{0.1667em}dz)=\frac{1-\exp \{-2{\int _{-\infty }^{+\infty }}\widetilde{a}(z)\hspace{0.1667em}dz\}}{1+\exp \{-2{\int _{-\infty }^{+\infty }}\widetilde{a}(z)\hspace{0.1667em}dz\}}$.
-
A6. Finally, if $x_{0}=0$, $c_{+}\geqslant 1/2$, and $c_{-}\geqslant 1/2$, then the distribution of the limit process $X_{\infty }$ equals that of $\mathbb{1}_{\varOmega _{-}}{B_{c_{-}}^{-}}(0,\cdot )+\mathbb{1}_{\varOmega _{+}}{B_{c_{+}}^{+}}(0,\cdot )$, where the σ-algebra $\{\varnothing ,\varOmega _{-},\varOmega _{+},\varOmega \}$ is independent of $\sigma ({B_{c_{\pm }}^{\pm }}(0,t),t\geqslant 0)$, $P(\varOmega _{+})=p$, $P(\varOmega _{-})=1-p$, and
(6)
\[ p=\frac{{\textstyle\int _{0}^{\infty }}A(-y){(y\vee 1)}^{-2c_{-}}\hspace{0.1667em}dy}{{\textstyle\int _{0}^{\infty }}(A(-y){(y\vee 1)}^{-2c_{-}}+A(y){(y\vee 1)}^{-2c_{+}})\hspace{0.1667em}dy},\]
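As an illustration of Theorem 1 (a hypothetical example, not taken from the paper), the probability p in (6) can be computed numerically for a concrete choice of the data, say $\widetilde{a}(x)=\beta \mathbb{1}_{[0,1]}(x)$ and $c_{-},c_{+}>1/2$; for this $\widetilde{a}$, the function $A(y)=\exp \{-2\beta \min (\max (y,0),1)\}$ is explicit, and the two integrals are evaluated by quadrature.

```python
# Hypothetical worked example: numerical evaluation of p from (6) for
# a_tilde = beta*1_{[0,1]} and c_-, c_+ > 1/2 (all parameter values are assumptions).
import numpy as np
from scipy.integrate import quad

beta, c_minus, c_plus = 1.0, 1.0, 0.75

def A(y):
    # A(y) = exp(-2 * int_0^y a_tilde(z) dz), explicit for a_tilde = beta*1_{[0,1]}
    return np.exp(-2.0 * beta * min(max(y, 0.0), 1.0))

def integral(f):
    # integrate f over [0, infinity), splitting at y = 1 where (y v 1) has a kink
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]

num = integral(lambda y: A(-y) * max(y, 1.0) ** (-2.0 * c_minus))
den = num + integral(lambda y: A(y) * max(y, 1.0) ** (-2.0 * c_plus))
print("p =", num / den)
```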
4 Proof
It follows from [9, Section 3] or [11, Section 3.7] that if A1 is satisfied, then, for any $\alpha >0$, we have the convergence
\[ X_{n}\big(\cdot \wedge {\tau }^{n,\alpha }\big)\hspace{1em}\Rightarrow \hspace{1em}{B_{c_{+}}^{+}}\big(x_{0},\cdot \wedge {\tau }^{0,\alpha }\big),\hspace{1em}n\to \infty ,\]
where ${\tau }^{n,\alpha }=\inf \{t\geqslant 0:X_{n}(t)\leqslant \alpha \}$ and ${\tau }^{0,\alpha }=\inf \{t\geqslant 0:{B_{c_{+}}^{+}}(x_{0},t)\leqslant \alpha \}$. Since the process ${B_{c_{+}}^{+}}(x_{0},\cdot )$ does not hit 0, this yields the proof. Case A2 is considered similarly.
Let $\{{\xi }^{(n)},n\geqslant 0\}$ be a sequence of continuous homogeneous strong Markov processes. For $\alpha >0$, set
\[ {\tau }^{n,\alpha }=\inf \big\{t\geqslant 0:\hspace{2.5pt}\big|{\xi }^{(n)}(t)\big|\leqslant \alpha \big\},\hspace{2em}{\sigma }^{n,\alpha }=\inf \big\{t\geqslant 0:\hspace{2.5pt}\big|{\xi }^{(n)}(t)\big|\geqslant \alpha \big\}.\]
We denote by ${\xi _{x_{0}}^{(n)}}$ a process that has the distribution of ${\xi }^{(n)}$ conditioned by ${\xi }^{(n)}(0)=x_{0}$.
The next statement is a particular case of Theorem 2 of [13].
Lemma 1.
Assume that the sequence $\{{\xi }^{(n)},n\geqslant 0\}$ satisfies the following conditions:
(7)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\xi }^{(n)}(0)\hspace{1em}\Rightarrow \hspace{1em}{\xi }^{(0)}(0);\end{array}\]
(8)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \forall T>0\hspace{2.5pt}\forall \varepsilon >0\hspace{2.5pt}\exists \delta >0\hspace{2.5pt}\exists n_{0}\hspace{2.5pt}\forall n\geqslant n_{0}\\{} & \displaystyle \hspace{1em}\mathrm{P}\Big(\underset{\begin{array}{c} |s-t|<\delta ,\\{}s,t\in [0,T]\end{array}}{\sup }\big|{\xi }^{(n)}(t)-{\xi }^{(n)}(s)\big|\geqslant \varepsilon \Big)\leqslant \varepsilon ;\end{array}\]
Assume that, for any $\alpha >0$, $x_{0}\in \mathbb{R}$, and any sequence $\{x_{n}\}$ such that $\lim _{n\to \infty }x_{n}=x_{0}$, we have
(11)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big({\xi _{x_{n}}^{(n)}}\big(\cdot \wedge {\tau }^{n,\alpha }\big),{\tau }^{n,\alpha }\big)& \displaystyle \hspace{1em}\Rightarrow \hspace{1em}\big({\xi _{x_{0}}^{(0)}}\big(\cdot \wedge {\tau }^{0,\alpha }\big),{\tau }^{0,\alpha }\big),\hspace{1em}n\to \infty ;\end{array}\]
Then ${\xi }^{(n)}\Rightarrow {\xi }^{(0)}$ in $C([0,\infty ))$ as $n\to \infty $.
We apply this lemma for ${\xi }^{(n)}=X_{n},n\geqslant 1$, and ${\xi }^{(0)}=X_{\infty }$ in cases A1–A5 of the theorem. Case A6 will be considered separately.
Remark 5.
Condition (12) is the only condition that is not true in case A6. It fails if $x_{0}=0$. Indeed, for any $x>0$, the process ${B_{c_{+}}^{+}}(x,\cdot )$ does not hit 0. So, we may select a sequence $\{x_{n}\}\subset (0,\infty )$ that converges to 0 sufficiently slowly and such that, given $X_{n}(0)=x_{n}$, we have $X_{n}(\cdot )\Rightarrow {B_{c_{+}}^{+}}(0,\cdot )$ and $\lim _{n\to \infty }P(\exists t\ge 0:\hspace{2.5pt}X_{n}(t)=0)=0$. The concrete selection of $\{x_{n}\}$ can be done using formulas (15) and (16). Since ${B_{c_{+}}^{+}}(0,{\sigma }^{0,\alpha })=\alpha $ a.s., we get $X_{n}({\sigma }^{n,\alpha })\Rightarrow \alpha $. However, if all $x_{n}$ were negative, then the limit might be $-\alpha $.
The convergence
(13)
\[ \forall \alpha >0\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{\xi _{x_{n}}^{(n)}}\big(\cdot \wedge {\tau }^{n,\alpha }\big)\hspace{1em}\Rightarrow \hspace{1em}{\xi _{x_{0}}^{(0)}}\big(\cdot \wedge {\tau }^{0,\alpha }\big),\hspace{1em}n\to \infty ,\]
follows from [9, Section 3] or [11, Section 3.7]. Since
\[ P\big(\forall \varepsilon >0\hspace{2.5pt}\exists t\in \big({\tau }^{0,\alpha },{\tau }^{0,\alpha }+\varepsilon \big):\hspace{2.5pt}\big|{\xi _{x_{0}}^{(0)}}(t)\big|<\alpha \hspace{2.5pt}|\hspace{2.5pt}{\tau }^{0,\alpha }<\infty \big)=1,\]
convergence (13) yields the convergence of pairs (11).
Let us check condition (8). Set
\[\begin{array}{r@{\hskip0pt}l}\displaystyle A(y)& \displaystyle =\exp \Bigg\{-2{\int _{0}^{y}}\widetilde{a}(z)\hspace{0.1667em}dz\Bigg\},\\{} \displaystyle A_{n}(y)& \displaystyle =\exp \Bigg\{-2{\int _{0}^{y}}n\widetilde{a}(nz)\hspace{0.1667em}dz\Bigg\}=A(ny),\hspace{1em}y\in \mathbb{R},\\{} \displaystyle \varPhi _{n}(x)& \displaystyle ={\int _{0}^{x}}A_{n}(y)\hspace{0.1667em}dy,\hspace{1em}x\in \mathbb{R}.\end{array}\]
Observe that $\varPhi _{n}:\mathbb{R}\to \mathbb{R}$ is a bijection, $\varPhi _{n}(0)=0$, ${\varPhi ^{\prime }_{n}}(x)=A_{n}(x)$, and
\[ \exists L>0\hspace{2.5pt}\hspace{2.5pt}\forall n\hspace{2.5pt}\hspace{2.5pt}\forall x,y\in \mathbb{R}\hspace{2em}{L}^{-1}|x-y|\leqslant \big|\varPhi _{n}(x)-\varPhi _{n}(y)\big|\leqslant L|x-y|;\]
for example, one may take $L=\exp \{2{\int _{\mathbb{R}}}|\widetilde{a}(z)|\hspace{0.1667em}dz\}$ because ${L}^{-1}\leqslant A(y)\leqslant L$ for all $y\in \mathbb{R}$.
Itô’s formula yields
\[ \hspace{0.1667em}d\varPhi _{n}\big(X_{n}(t)\big)=A\big(nX_{n}(t)\big)\bigg(\frac{\bar{c}(nX_{n}(t))}{X_{n}(t)}\hspace{0.1667em}dt+\hspace{0.1667em}dW(t)\bigg).\]
So
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|X_{n}(t)-X_{n}(s)\big|& \displaystyle \leqslant L\big|\varPhi _{n}\big(X_{n}(t)\big)-\varPhi _{n}\big(X_{n}(s)\big)\big|\\{} & \displaystyle \leqslant L\Bigg|{\int _{s}^{t}}A\big(nX_{n}(z)\big)\frac{\bar{c}(nX_{n}(z))}{X_{n}(z)}\hspace{0.1667em}dz\Bigg|+L\Bigg|{\int _{s}^{t}}A\big(nX_{n}(z)\big)\hspace{0.1667em}dW(z)\Bigg|.\end{array}\]
Let $|s-t|<\delta $, and let $\varDelta >0$ be fixed. Denote $f_{n}(t):={\int _{0}^{t}}A(nX_{n}(z))\hspace{0.1667em}dW(z)$.
a) Assume that $|X_{n}(z)|>\varDelta ,z\in [s,t]$. Then
\[ \Bigg|{\int _{s}^{t}}A\big(nX_{n}(z)\big)\frac{\bar{c}(nX_{n}(z))}{X_{n}(z)}\hspace{0.1667em}dz\Bigg|\leqslant C\delta /\varDelta ,\]
where $C=\| A\| _{\infty }\max (|c_{-}|,|c_{+}|)<\infty $. Hence, we have the estimate
\[ \big|X_{n}(t)-X_{n}(s)\big|\leqslant LC\delta /\varDelta +L\omega _{f_{n}}(\delta ),\]
where $\omega _{f}(\delta )=\sup _{|s-t|<\delta ,\hspace{2.5pt}s,t\in [0,T]}|f(t)-f(s)|$ is the modulus of continuity.
b) Assume that $|X_{n}(z_{0})|\leqslant \varDelta $ for some $z_{0}\in [s,t]$.
Denote $\tau :=\inf \{z\geqslant s:|X_{n}(z)|\leqslant \varDelta \}$ and $\sigma :=\sup \{z\leqslant t:|X_{n}(z)|\leqslant \varDelta \}$. Then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|X_{n}(t)-X_{n}(s)\big|& \displaystyle \leqslant \big|X_{n}(\tau )-X_{n}(s)\big|+\big|X_{n}(\sigma )-X_{n}(\tau )\big|+\big|X_{n}(t)-X_{n}(\sigma )\big|\\{} & \displaystyle \leqslant 2LC\delta /\varDelta +2\varDelta +2L\omega _{f_{n}}(\delta ).\end{array}\]
Thus, in any case, we have the following estimate of the modulus of continuity:
\[ \omega _{X_{n}}(\delta )\leqslant 2LC\delta /\varDelta +2\varDelta +2L\omega _{f_{n}}(\delta ).\]
Let $\varDelta \leqslant \varepsilon /6$. Then, for $\delta \leqslant \frac{\varepsilon \varDelta }{6LC}$, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \underset{n}{\sup }\mathrm{P}\big(\omega _{X_{n}}(\delta )\geqslant \varepsilon \big)& \displaystyle \leqslant \underset{n}{\sup }\mathrm{P}\big(\varepsilon /3+2L\omega _{f_{n}}(\delta )+\varepsilon /3\geqslant \varepsilon \big)\\{} & \displaystyle =\underset{n}{\sup }\mathrm{P}\big(\omega _{f_{n}}(\delta )\geqslant \varepsilon /6L\big)\to 0,\hspace{1em}\delta \to 0+.\end{array}\]
The last convergence follows from the fact that the sequence of distributions of $\{f_{n}(\cdot )={\int _{0}^{\cdot }}A(nX_{n}(z))\hspace{0.1667em}dW(z)\}_{n\geqslant 1}$ in the space of continuous functions is weakly relatively compact because the function A is bounded.
Let $|x|<\alpha $. It is easy to see that $P_{x}({\sigma }^{n,\alpha }<\infty )=1,\hspace{2.5pt}n\in \mathbb{N}\cup \{\infty \}$. Since the process $X_{n}$ is continuous, we have $|X_{n}({\sigma }^{n,\alpha })|=\alpha $ a.s.
By ${p_{x}^{n}}=P_{x}(X_{n}({\sigma }^{n,\alpha })=\alpha ),\hspace{2.5pt}n\in \mathbb{N}\cup \{\infty \}$, we denote the probability to reach α before reaching $-\alpha $ when starting from x.
Using formulas (4) and (5) for the scale functions of a Bessel process and a skew Bessel process, it is easy to check that
(14)
\[ {p_{x}^{\infty }}=\left\{\begin{array}{l@{\hskip10.0pt}l}\mathbb{1}_{x\geqslant 0}+\big(1-\frac{\psi _{c_{-}}(-x)}{\psi _{c_{-}}(\alpha )}\big)\mathbb{1}_{x<0}\hspace{1em}& \text{in cases A1, A3},\\{} \frac{\psi _{c_{+}}(x)}{\psi _{c_{+}}(\alpha )}\cdot \mathbb{1}_{x>0}\hspace{1em}& \text{in cases A2, A4},\\{} \frac{\psi _{c}(|x|)}{\psi _{c}(\alpha )}(q\mathbb{1}_{x\geqslant 0}-p\mathbb{1}_{x<0})+p\hspace{1em}& \text{in case A5},\end{array}\right.\]
where $\psi _{c}$ is given in (4).
For $n\in \mathbb{N}$, we have (see [4, Section 15] and [15])
(15)
\[ {p_{x}^{n}}=\frac{\varphi _{n}(x)-\varphi _{n}(-\alpha )}{\varphi _{n}(\alpha )-\varphi _{n}(-\alpha )},\]
where
(16)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \varphi _{n}(x)& \displaystyle ={\int _{0}^{x}}\exp \Bigg\{-2{\int _{0}^{y}}a_{n}(z)\hspace{0.1667em}dz\Bigg\}\hspace{0.1667em}dy={\int _{0}^{x}}\exp \Bigg\{-2{\int _{0}^{y}}na(nz)\hspace{0.1667em}dz\Bigg\}\hspace{0.1667em}dy\\{} & \displaystyle =\frac{1}{n}{\int _{0}^{nx}}\exp \Bigg\{-2{\int _{0}^{y}}a(z)\hspace{0.1667em}dz\Bigg\}\hspace{0.1667em}dy=\frac{1}{n}\varphi (nx),\\{} \displaystyle \varphi (x)& \displaystyle :={\int _{0}^{x}}\exp \Bigg\{-2{\int _{0}^{y}}a(z)\hspace{0.1667em}dz\Bigg\}\hspace{0.1667em}dy.\end{array}\]
The function φ is increasing. It follows from the definition of a that φ is bounded from above (below) iff $c_{+}>1/2$ ($c_{-}>1/2$). The function φ has the following asymptotic behavior:
(17)
\[ \varphi (x)\sim \left\{\begin{array}{l@{\hskip10.0pt}l}\pm A(\pm \infty )\frac{|x{|}^{1-2c_{\pm }}}{1-2c_{\pm }}\hspace{1em}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\hspace{2.5pt}c_{\pm }<1/2,\\{} \pm A(\pm \infty )\ln |x|\hspace{1em}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\hspace{2.5pt}c_{\pm }=1/2,\end{array}\right.\hspace{1em}x\to \pm \infty ,\]
where $A(\pm \infty )=\underset{x\to \pm \infty }{\lim }A(x)=\exp \{-2{\int _{0}^{\pm \infty }}\widetilde{a}(z)\hspace{0.1667em}dz\}$, and
(18)
\[ \underset{x\to \pm \infty }{\lim }\varphi (x)=\varphi (\pm \infty )=\pm {\int _{0}^{\infty }}A(\pm y){(y\vee 1)}^{-2c_{\pm }}\hspace{0.1667em}dy<\infty \hspace{1em}\text{if}\hspace{2.5pt}\hspace{2.5pt}c_{\pm }>1/2.\]
Condition (12) follows from (14), (15), (16), (17), and (18) in cases A1–A5 (and in case A6 if $x_{n}=0,\hspace{2.5pt}n\geqslant 0$).
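The convergence of the exit probabilities used here can also be seen numerically. The sketch below (illustrative only; the concrete drift $a(x)=\beta \mathbb{1}_{[0,1]}(x)+\frac{c}{x}\mathbb{1}_{|x|\geqslant 1}$ with $c_{+}=c_{-}=c<1/2$ and all numerical values are assumptions) evaluates ${p_{x}^{n}}$ from (15)–(16) by quadrature for growing n and compares it with ${p_{x}^{\infty }}$ from (14) in case A5.

```python
# Illustrative check (assumed drift and parameters) of p_x^n -> p_x^infty:
# a(x) = beta*1_{[0,1]}(x) + (c/x)*1_{|x|>=1}, so exp(-2*int_0^y a) is explicit.
import numpy as np
from scipy.integrate import quad

beta, c, alpha_lvl, x = 1.0, 0.25, 1.0, -0.3     # case A5: c_+ = c_- = c < 1/2

def scale_density(y):
    # exp(-2 * int_0^y a(z) dz) for the drift above
    if y >= 0.0:
        return np.exp(-2.0 * beta * min(y, 1.0)) * max(y, 1.0) ** (-2.0 * c)
    return max(-y, 1.0) ** (-2.0 * c)

def phi(u):
    # phi(u) = int_0^u exp(-2*int_0^y a) dy, cf. (16); quad handles u < 0 with a sign
    return quad(scale_density, 0.0, u, limit=500)[0]

def p_n(n):
    # p_x^n from (15); phi_n(x) = phi(n x)/n, and the factors 1/n cancel
    return (phi(n * x) - phi(-n * alpha_lvl)) / (phi(n * alpha_lvl) - phi(-n * alpha_lvl))

gamma = np.tanh(beta)                            # tanh(int a_tilde) for this drift
p, q = (1.0 + gamma) / 2.0, (1.0 - gamma) / 2.0

def psi(y):                                      # scale function (4), c < 1/2
    return y ** (1.0 - 2.0 * c)

p_inf = psi(abs(x)) / psi(alpha_lvl) * (q if x >= 0 else -p) + p   # (14), case A5
for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}:  p_x^n = {p_n(n):.4f}")
print(f"limit (14): p_x^inf = {p_inf:.4f}")
```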
Set $\tau _{n}=\inf \{t\geqslant 0:|X_{n}(t)|\geqslant 1\}$.
Lemma 2.
Assume that
(19)
\[ \underset{\varepsilon \to 0+}{\lim }\underset{|x|\leqslant 1}{\sup }\underset{n}{\sup }\mathrm{E}_{x}{\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt=0.\]
Then (9) is satisfied, that is, for every $T>0$,
\[ \underset{\varepsilon \to 0+}{\lim }\underset{n}{\sup }\mathrm{E}{\int _{0}^{T}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt=0.\]
Proof.
Introduce the notations
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {S_{n,\pm }^{0}}& \displaystyle :=0,\hspace{2em}{T_{n,\pm }^{k}}:=\inf \big\{t\ge {S_{n,\pm }^{k-1}}:\hspace{2.5pt}X_{n}(t)=\pm 1\big\},\\{} \displaystyle {S_{n,\pm }^{k}}& \displaystyle :=\inf \big\{t\ge {T_{n,\pm }^{k}}:\hspace{2.5pt}X_{n}(t)=\pm \varepsilon \big\},\\{} \displaystyle {\tilde{T}_{n,\pm }^{k}}& \displaystyle :=\inf \big\{t\ge {S_{n,\pm }^{k}}:\hspace{2.5pt}\big|X_{n}(t)\big|=1\big\},\\{} \displaystyle {\beta _{n,\pm }^{k}}& \displaystyle :={\int _{{S_{n,\pm }^{k}}}^{{\tilde{T}_{n,\pm }^{k}}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt,\hspace{2em}{\alpha _{n,\pm }^{k}}:={S_{n,\pm }^{k}}-{T_{n,\pm }^{k}},\hspace{1em}k\geqslant 1.\end{array}\]
Then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\int _{0}^{T}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt\\{} & \displaystyle \hspace{1em}\leqslant {\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt+\sum \limits_{k}\big({\beta _{n,+}^{1}}+\cdots +{\beta _{n,+}^{k}}\big)\mathbb{1}_{{\alpha _{n,+}^{1}}<T,\dots ,{\alpha _{n,+}^{k}}<T,{\alpha _{n,+}^{k+1}}\geqslant T}\\{} & \displaystyle \hspace{2em}+\sum \limits_{k}\big({\beta _{n,-}^{1}}+\cdots +{\beta _{n,-}^{k}}\big)\mathbb{1}_{{\alpha _{n,-}^{1}}<T,\dots ,{\alpha _{n,-}^{k}}<T,{\alpha _{n,-}^{k+1}}\geqslant T}\hspace{0.1667em}.\end{array}\]
It follows from the strong Markov property that
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{k}\mathrm{E}\big({\beta _{n,+}^{1}}+\cdots +{\beta _{n,+}^{k}}\big)\mathbb{1}_{{\alpha _{n,+}^{1}}<T,\dots ,{\alpha _{n,+}^{k}}<T,{\alpha _{n,+}^{k+1}}\geqslant T}\\{} & \displaystyle \hspace{1em}=\sum \limits_{k}k\mathrm{E}_{\varepsilon }{\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt\hspace{0.1667em}{(1-p_{n,+})}^{k}p_{n,+}={(p_{n,+})}^{-1}\mathrm{E}_{\varepsilon }{\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt,\end{array}\]
where $p_{n,+}=\mathrm{P}_{1}({S_{n,+}^{1}}\geqslant T)$. Considering the last term similarly, we get the inequality
\[ \mathrm{E}{\int _{0}^{T}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt\leqslant \big(1+{(p_{n,+})}^{-1}+{(p_{n,-})}^{-1}\big)\underset{|x|\leqslant 1}{\sup }\underset{n}{\sup }\mathrm{E}_{x}{\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt.\]
It is not difficult to see that $\sup _{n}{(p_{n,\pm })}^{-1}<\infty $. The lemma is proved. □
Let us verify (19). It is known [6, Chap. 4.3] that the function
\[ u_{n,\varepsilon }(x):=\mathrm{E}_{x}{\int _{0}^{\tau _{n}}}\mathbb{1}_{|X_{n}(t)|\leqslant \varepsilon }\hspace{0.1667em}dt\]
is of the form
(20)
\[ u_{n,\varepsilon }(x)={\int _{-1}^{1}}G_{n}(x,y)\mathbb{1}_{|y|\leqslant \varepsilon }\hspace{0.1667em}m_{n}(dy),\]
where Green's function $G_{n}$ equals
\[ G_{n}(x,y)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{(\varphi _{n}(x)-\varphi _{n}(-1))(\varphi _{n}(1)-\varphi _{n}(y))}{\varphi _{n}(1)-\varphi _{n}(-1)},\hspace{1em}& x\leqslant y,\\{} G_{n}(y,x),\hspace{2.5pt}\hspace{1em}& x\geqslant y,\end{array}\right.\]
with $\varphi _{n}$ given by formula (16), and $m_{n}(dy)=2\exp \{2{\int _{0}^{y}}a_{n}(z)\hspace{0.1667em}dz\}\hspace{0.1667em}dy$ is the speed measure of $X_{n}$.
The function $u_{n,\varepsilon }(x)$ is a generalized solution (because $a_{n}$ may be discontinuous) of the equation
\[ 1/2\hspace{0.1667em}{u^{\prime\prime }_{n,\varepsilon }}(x)+a_{n}(x){u^{\prime }_{n,\varepsilon }}(x)=-\mathbb{1}_{|x|\leqslant \varepsilon }(x),\hspace{1em}|x|\leqslant 1,\]
with boundary conditions $u_{n,\varepsilon }(\pm 1)=0$.
A direct verification of the condition $\lim _{\varepsilon \to 0+}\sup _{|x|\leqslant 1}\sup _{n}u_{n,\varepsilon }(x)=0$ is possible but cumbersome. We prove the corresponding convergence using the comparison theorem. We consider only the case where $a_{n}$ satisfies the Lipschitz condition. The general case follows by approximation.
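Although the text proceeds by comparison, the direct verification mentioned above can at least be carried out numerically. The following sketch (illustrative only; it reuses the concrete drift from the previous sketch, and the speed measure $m_{n}(dy)=2\exp \{2{\int _{0}^{y}}a_{n}(z)\hspace{0.1667em}dz\}\hspace{0.1667em}dy$ is the standard one for the generator $\frac{1}{2}{u^{\prime\prime }}+a_{n}{u^{\prime }}$ and is an assumption about the normalization used here) evaluates $u_{n,\varepsilon }(0)$ from (20) and shows that it becomes small as $\varepsilon \to 0$, uniformly over the listed values of n.

```python
# Illustrative numerical evaluation (assumed drift and normalization) of u_{n,eps}(x)
# from (20): the expected time X_n spends in [-eps, eps] before leaving [-1, 1].
# Here a(x) = beta*1_{[0,1]}(x) + (c/x)*1_{|x|>=1}, G_n is the Green's function above,
# and m_n(dy) = 2*exp(2*int_0^y a_n(z) dz) dy is taken as the speed measure.
import numpy as np
from scipy.integrate import quad

beta, c = 1.0, 0.25

def scale_density(v):
    # exp(-2 * int_0^v a(z) dz) for the drift above
    if v >= 0.0:
        return np.exp(-2.0 * beta * min(v, 1.0)) * max(v, 1.0) ** (-2.0 * c)
    return max(-v, 1.0) ** (-2.0 * c)

def u(n, eps, x):
    def phi(y):                              # phi_n(y), cf. (16)
        return quad(lambda w: scale_density(n * w), 0.0, y, limit=300)[0]
    phi_m1, phi_p1 = phi(-1.0), phi(1.0)
    def green(y):                            # Green's function from (20)
        lo, hi = (x, y) if x <= y else (y, x)
        return (phi(lo) - phi_m1) * (phi_p1 - phi(hi)) / (phi_p1 - phi_m1)
    def speed(y):                            # density of m_n(dy)
        return 2.0 / scale_density(n * y)
    return quad(lambda y: green(y) * speed(y), -eps, eps, limit=300)[0]

for eps in (0.1, 0.01, 0.001):
    print("eps =", eps, [round(u(n, eps, 0.0), 5) for n in (1, 10, 100)])
```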
It follows from the Itô–Tanaka formula that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \hspace{0.1667em}d\big|X_{n}(t)\big|& \displaystyle =\operatorname{sgn}\big(X_{n}(t)\big)\hspace{0.1667em}a_{n}\big(X_{n}(t)\big)\hspace{0.1667em}dt+\operatorname{sgn}\big(X_{n}(t)\big)\hspace{0.1667em}dW(t)+\hspace{0.1667em}dl_{n}(t)\\{} & \displaystyle =\operatorname{sgn}\big(X_{n}(t)\big)\hspace{0.1667em}a_{n}\big(X_{n}(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW_{n}(t)+\hspace{0.1667em}dl_{n}(t),\end{array}\]
where $W_{n}$ is a new Wiener process, and $l_{n}$ is the local time of $X_{n}$ at zero.
Let $-1/2<c<\min (c_{-},c_{+},0)$. It follows from the arguments of [12] on comparison of reflecting SDEs that $|X_{n}(t)|\geqslant Y_{n}(t),\hspace{2.5pt}t\geqslant 0$, where $Y_{n}$ satisfies the following SDE with reflection at zero:
\[ \hspace{0.1667em}dY_{n}(t)=\bar{a}_{n}\big(Y_{n}(t)\big)\hspace{0.1667em}dt+\hspace{0.1667em}dW_{n}(t)+\hspace{0.1667em}d\tilde{l}_{n}(t).\]
Here $W_{n}(t)={\int _{0}^{t}}\operatorname{sgn}(X_{n}(s))\hspace{0.1667em}dW(s)$ is a Wiener process, $\tilde{l}_{n}$ is the local time of $Y_{n}$ at 0, $\bar{a}_{n}(x)=n\bar{a}(nx),\hspace{2.5pt}\bar{a}(x)=-(|a(x)|+|a(-x)|)-\frac{c}{x}\mathbb{1}_{|x|>1}+r(x)$, and r is any nonpositive function such that $\bar{a}$ satisfies the Lipschitz condition. We will also assume that $\int _{\mathbb{R}}|r(x)|\hspace{0.1667em}dx\leqslant \int _{\mathbb{R}}|b(x)|\hspace{0.1667em}dx$. The Lipschitz property is used only for the application of the comparison theorem.
To prove (19), it suffices to verify that
\[ \underset{\varepsilon \to 0}{\lim }\underset{x\in [0,1]}{\sup }\underset{n}{\sup }\bar{u}_{n,\varepsilon }(x):=\underset{\varepsilon \to 0}{\lim }\underset{x\in [0,1]}{\sup }\underset{n}{\sup }\mathrm{E}_{x}{\int _{0}^{\bar{\tau }_{n}}}\mathbb{1}_{Y_{n}(s)\in [0,\varepsilon ]}\hspace{0.1667em}ds=0,\]
where $\bar{\tau }_{n}$ is the entry time of $Y_{n}$ into $[1,\infty )$. It is known [6] that
\[ \bar{u}_{n,\varepsilon }(x)=2{\int _{x}^{1}}\exp \Bigg\{-2{\int _{1}^{y}}\bar{a}_{n}(w)\hspace{0.1667em}dw\Bigg\}{\int _{0}^{y}}\mathbb{1}_{[0,\varepsilon ]}(z)\exp \Bigg\{2{\int _{1}^{z}}\bar{a}_{n}(w)\hspace{0.1667em}dw\Bigg\}\hspace{0.1667em}dz\hspace{0.1667em}dy\]
is a (generalized) solution of the equation
\[ 1/2\hspace{0.1667em}{\bar{u}^{\prime\prime }_{n,\varepsilon }}(x)+\bar{a}_{n}(x){\bar{u}^{\prime }_{n,\varepsilon }}(x)=-\mathbb{1}_{[0,\varepsilon ]},\hspace{1em}x\in [0,1],\]
with boundary conditions ${\bar{u}^{\prime }_{n,\varepsilon }}(0)=0,\hspace{2.5pt}\bar{u}_{n,\varepsilon }(1)=0$. So
(21)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{x\in [0,1]}{\sup }\underset{n}{\sup }\bar{u}_{n,\varepsilon }(x)\\{} & \displaystyle \hspace{1em}=\underset{n}{\sup }\bar{u}_{n,\varepsilon }(0)\\{} & \displaystyle \hspace{1em}=\underset{n}{\sup }2{\int _{0}^{1}}\exp \Bigg\{-2{\int _{1}^{y}}\bar{a}_{n}(w)\hspace{0.1667em}dw\Bigg\}{\int _{0}^{y}}\mathbb{1}_{[0,\varepsilon ]}(z)\exp \Bigg\{2{\int _{1}^{z}}\bar{a}_{n}(w)\hspace{0.1667em}dw\Bigg\}\hspace{0.1667em}dz\hspace{0.1667em}dy\\{} & \displaystyle \hspace{1em}\leqslant K{\int _{0}^{1}}{y}^{-2c}{\int _{0}^{y}}\mathbb{1}_{[0,\varepsilon ]}(z)\hspace{0.1667em}{z}^{2c}\hspace{0.1667em}dz\hspace{0.1667em}dy,\end{array}\]
where K is a constant that depends only on $\int _{\mathbb{R}}|b(x)|\hspace{0.1667em}dx$ and c (and is independent of n). By our choice, $c\in (-1/2,0)$, so the right-hand side of (21) tends to 0 as $\varepsilon \to 0+$ by the Lebesgue dominated convergence theorem.
Consider case A6. Note that conditions (7)–(11) are satisfied for ${\xi }^{(n)}=X_{n}$, $n\geqslant 1$, and ${\xi }^{(0)}=X_{\infty }$, where $X_{\infty }$ is given in the theorem. In particular, this implies that the sequence of distributions of stochastic processes $\{X_{n}\}$ in the space of continuous functions is weakly relatively compact. Choosing an arbitrary convergent subsequence, without loss of generality, we may assume that $\{X_{n}\}$ itself converges weakly to a continuous process X. Let $\delta >0$, and let ${\sigma }^{n,\delta }=\inf \{t\geqslant 0:|X_{n}(t)|=\delta \},\hspace{2.5pt}{\sigma }^{\delta }=\inf \{t\geqslant 0:|X(t)|=\delta \}$. It follows from formulas for the scale function of the processes $\{X_{n}\}$ that $\lim _{n\to \infty }P(X_{n}({\sigma }^{n,\delta })=\delta )=p,\hspace{2.5pt}\lim _{n\to \infty }P(X_{n}({\sigma }^{n,\delta })=-\delta )=1-p$, where p is given by (6). Conditions (9) and (11) imply that the limit process exits from the interval $[-\delta ,\delta ]$ with probability 1.
Observe that, for almost all $\delta >0$ with respect to the Lebesgue measure, the distribution of $X_{n}({\sigma }^{n,\delta }+\cdot )$ converges weakly as $n\to \infty $ to the distribution of $X({\sigma }^{\delta }+\cdot )$. Indeed, by the Skorokhod theorem on a single probability space (see [16]), without loss of generality, we may assume that the sequence $\{X_{n}\}$ converges to X uniformly on compact sets with probability 1. For simplicity, we will assume that the convergence holds for all ω and that also ${\sigma }^{n,\delta },{\sigma }^{\delta }<\infty $ for all $\omega ,n,\delta >0$. So we show convergence
(22)
\[ X_{n}\big({\sigma }^{n,\delta }+\cdot \big)\to X\big({\sigma }^{\delta }+\cdot \big)\hspace{1em}\text{in}\hspace{2.5pt}C\big([0,\infty )\big),\hspace{1em}n\to \infty ,\]
if we prove that
(23)
\[ \underset{n\to \infty }{\lim }{\sigma }^{n,\delta }={\sigma }^{\delta }.\]
Convergence (23) may fail only if ${\sigma }^{\delta }$ is a point of a local maximum of $|X|$. It follows from the definition that ${\sigma }^{\delta }$ is a point of a strict local maximum of $|X|$ from the left. The set of points of local maxima that are strict maxima from the left is at most countable. This yields that, for almost all ω and almost all $\delta >0$ with respect to the Lebesgue measure, we have convergence (23) and hence (22).
On the other hand, the distribution of $X_{n}({\sigma }^{n,\delta }+\hspace{0.1667em}\cdot )$ converges weakly as $n\to \infty $ to the distribution of the process $\mathbb{1}_{\varOmega _{-}}{B_{c_{-}}^{-}}(-\delta ,\cdot )+\mathbb{1}_{\varOmega _{+}}{B_{c_{+}}^{+}}(\delta ,\cdot )$, where $P(\varOmega _{-})=1-p,\hspace{2.5pt}P(\varOmega _{+})=p$, and the σ-algebra $\{\varnothing ,\varOmega _{-},\varOmega _{+},\varOmega \}$ is independent of $\sigma ({B_{c_{\pm }}^{\pm }}(\pm \delta ,t),t\geqslant 0)$.
Recall that the assumptions of the theorem yield $X_{n}(0)=x_{0}=0$ for all n and hence $X(0)=0$ a.s.
It follows from (9) that
\[ \underset{\delta \to 0+}{\lim }{\sigma }^{\delta }=0\hspace{1em}\text{a.s.}\]
Thus, we have the almost sure convergence in $C([0,\infty ))$
\[ X\big({\sigma }^{\delta }+\cdot \big)\to X(\cdot ),\hspace{1em}\delta \to 0+.\]
The processes $\mathbb{1}_{\varOmega _{-}}{B_{c_{-}}^{-}}(-\delta ,\cdot )+\mathbb{1}_{\varOmega _{+}}{B_{c_{+}}^{+}}(\delta ,\cdot )$ converge in distribution, as $\delta \to 0+$, to
\[ \mathbb{1}_{\varOmega _{-}}{B_{c_{-}}^{-}}(0,\cdot )+\mathbb{1}_{\varOmega _{+}}{B_{c_{+}}^{+}}(0,\cdot ),\]
where the σ-algebras $\{\varnothing ,\varOmega _{-},\varOmega _{+},\varOmega \}$ and $\sigma ({B_{c_{\pm }}^{\pm }}(0,t),t\geqslant 0)$ are independent.
This completes the proof of Theorem 1.