We investigate the convergence of hitting times for jump-diffusion processes. Specifically, we study a sequence of stochastic differential equations with jumps. Under reasonable assumptions, we establish the convergence of solutions to the equations and of the moments when the solutions hit certain sets.
In this article, we consider a sequence of stochastic differential equations with jumps
Xn(t)=Xn(0)+∫0tan(s,Xn(s))ds+∫0tbn(s,Xn(s))dW(s)+∫0t∫Rmcn(s,Xn(s−),θ)ν˜(dθ,ds),t≥0,n≥0.
Here W is a standard Wiener process, ν˜ is a compensated Poisson random measure, and Xn(0) is nonrandom (see Section 2 for precise assumptions). Assuming that an→a0, bn→b0, cn→c0, and Xn(0)→X0(0) as n→∞ in an appropriate sense, we are interested in convergence of hitting times τn→τ0, n→∞, where
τn=inf{t≥0:φn(t,Xn(t))≥0}
is the first time when the process Xn hits the set Gtn={x:φn(t,x)≥0}.
The study is motivated by the following observation. Jump-diffusion processes are commonly used to model prices of financial assets. When the parameters of a jump-diffusion process are estimated with the help of statistical methods, there is an estimation error. Thus, it is natural to investigate whether the optimal exercise strategies are close for two jump-diffusion processes with close parameters. Moreover, we should study particular hitting times since, in the Markovian setting, the optimal stopping time is the hitting time of the optimal stopping set.
There is a lot of literature devoted to jump-diffusion processes and their applications in finance. The book [1] gives an extensive list of references on the subject. The convergence of stopping times for diffusion and jump-diffusion processes was studied in [2, 3, 6]. All these papers are devoted to the one-dimensional case, and the techniques are different from ours. Here we generalize these results to the multidimensional case and also relax the assumptions on the convergence of coefficients. As an auxiliary result of independent interest, we prove the convergence of solutions under very mild assumptions on the convergence of coefficients.
Preliminaries and notation
Let (Ω,F,F,P) be a standard stochastic basis with filtration F={Ft,t≥0} satisfying the usual assumptions. Let {W(t)=(W1(t),…,Wk(t)),t≥0} be a standard Wiener process in Rk, and let ν(dθ,dt) be a Poisson random measure on Rm×[0,∞). We assume that W and ν are compatible with the filtration F, that is, for any t>s≥0 and any A∈B(Rm) and B∈B([s,t]), the increment W(t)−W(s) and the value ν(A×B) are Ft-measurable and independent of Fs.
Assume in addition that ν(dθ,dt) is homogeneous, that is, for all A∈B(Rm) and B∈B([0,∞)), E[ν(A×B)]=μ(A)λ(B), where λ is the Lebesgue measure, μ is a σ-finite measure on Rm having no atom at zero. Denote by ν˜ the corresponding compensated measure, that is, ν˜(A×B)=ν(A×B)−μ(A)λ(B) for all A∈B(Rm),B∈B([0,∞)).
For each integer n≥0, consider a stochastic differential equation in Rd:
Xin(t)=Xin(0)+∫0t ain(s,Xn(s))ds+∑j=1k ∫0t bijn(s,Xn(s))dWj(s)+∫0t∫Rm cin(s,Xn(s−),θ)ν˜(dθ,ds), t≥0, i=1,…,d. (1)
In this equation, the initial condition Xn(0)∈Rd is nonrandom, and the coefficients ain,bijn:[0,∞)×Rd→R, cin:[0,∞)×Rd×Rm→R, i=1,…,d, j=1,…,k, are nonrandom and measurable.
In what follows, we abbreviate Eq. (1) as
Xn(t)=Xn(0)+∫0t an(s,Xn(s))ds+∫0t bn(s,Xn(s))dW(s)+∫0t∫Rm cn(s,Xn(s−),θ)ν˜(dθ,ds), t≥0. (2)
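Although the article is purely theoretical, Eq. (2) is straightforward to simulate in the finite-activity case μ(Rm)<∞. The following sketch (ours, not from the paper; all names are illustrative assumptions) runs a scalar Euler scheme in which jumps arrive at rate lam and the compensator enters through the drift correction lam·mean_c(t,x), where mean_c(t,x) stands for the expectation of c(t,x,θ) over the jump-size law:

```python
import numpy as np

def simulate_jump_diffusion(x0, a, b, c, T, n, lam, sample_theta, mean_c, rng):
    """One Euler path of a scalar version of Eq. (2) with finite jump activity.

    The Poisson random measure is modeled as: jumps arrive at rate `lam`,
    with sizes theta drawn by `sample_theta(rng)`.  The compensated integral
    then contributes the jumps minus the correction lam * mean_c(t, x) * dt.
    All names here are illustrative, not notation from the paper.
    """
    dt = T / n
    x = float(x0)
    path = [x]
    for k in range(n):
        t = k * dt
        dw = rng.normal(0.0, np.sqrt(dt))
        # drift + diffusion, with the compensator of the jump part subtracted
        x += (a(t, x) - lam * mean_c(t, x)) * dt + b(t, x) * dw
        # jump part: a Poisson number of jumps on [t, t + dt)
        for _ in range(rng.poisson(lam * dt)):
            x += c(t, x, sample_theta(rng))
        path.append(x)
    return np.array(path)
```

For instance, a Merton-type model would take a(t,x)=αx, b(t,x)=σx, and c(t,x,θ)=x(e^θ−1) with Gaussian θ.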
For the rest of the article, we adhere to the following notation. By |·| we denote the absolute value of a number, the norm of a vector, or the operator norm of a matrix, and by (x,y) the scalar product of vectors x and y; Bk(r)={x∈Rk:|x|≤r}. The symbol C means a generic constant whose value is not important and may change from line to line; a constant dependent on parameters a,b,c,… will be denoted by Ca,b,c,….
The following assumptions guarantee that Eq. (2) has a unique strong solution.
For all n≥0, T>0, t∈[0,T], and x∈Rd,
|an(t,x)|2+|bn(t,x)|2+∫Rm|cn(t,x,θ)|2μ(dθ)≤CT(1+|x|2).
For all n≥0, T>0, t∈[0,T], R>0, and x,y∈Bd(R),
|an(t,x)−an(t,y)|2+|bn(t,x)−bn(t,y)|2+∫Rm|cn(t,x,θ)−cn(t,y,θ)|2μ(dθ)≤CT,R|x−y|2.
Moreover, under these assumptions, for any T≥0, we have the following estimate:
E[supt∈[0,T]|Xn(t)|2]≤CT(1+|Xn(0)|2) (3)
(see, e.g., [5, Section 3.1]). Combining this estimate with Eq. (2), it is easy to see that for all t,s∈[0,T],
E[|Xn(t)−Xn(s)|2]≤CT(1+|Xn(0)|2)|t−s|. (4)
Now we state the assumptions on the convergence of coefficients of (2).
For all t≥0 and x∈Rd,
an(t,x)→a0(t,x),bn(t,x)→b0(t,x),∫Rm|cn(t,x,θ)−c0(t,x,θ)|2μ(dθ)→0,n→∞.
Xn(0)→X0(0), n→∞.
Convergence of solutions to stochastic differential equations with jumps
First, we establish a result on convergence of solutions to stochastic differential equations.
Let the coefficients of Eq. (2) satisfy assumptions (A1), (A2), (C1), and (C2). Then, for any T>0, we have the convergence in probability
supt∈[0,T]|Xn(t)−X0(t)|⟶P0, n→∞.
If, additionally, the constant in assumption (A2) is independent of R, then for any T>0,
E[supt∈[0,T]|Xn(t)−X0(t)|2]→0, n→∞.
Denote Δn(t)=sups∈[0,t]|Xn(s)−X0(s)|, asn,m=an(s,Xm(s)), bsn,m=bn(s,Xm(s)), csn,m(θ)=cn(s,Xm(s−),θ),
Ian(t)=∫0tasn,nds,Ibn(t)=∫0tbsn,ndW(s),Icn(t)=∫0t∫Rmcsn,n(θ)ν˜(dθ,ds).
It is easy to see that Ibn and Icn are martingales.
Write
Δn(t)2≤C(|Xn(0)−X0(0)|2+sups∈[0,t]|Ian(s)−Ia0(s)|2+sups∈[0,t]|Ibn(s)−Ib0(s)|2+sups∈[0,t]|Icn(s)−Ic0(s)|2).
For N≥1, define
σNn=inf{t≥0:|X0(t)|∨|Xn(t)|≥N}
and denote 1t=1t≤σNn. Then
E[Δn(t)21t]≤E[Δn(t∧σNn)2]≤C(|Xn(0)−X0(0)|2+∑x∈{a,b,c}E[sups∈[0,t∧σNn]|Ixn(s)−Ix0(s)|2]).
We estimate
E[sups∈[0,t∧σNn]|Ian(s)−Ia0(s)|2]≤E[sups∈[0,t](∫0s|aun,n−au0,0|1udu)2]≤E[(∫0t|aun,n−au0,0|1udu)2]≤t∫0tE[|aun,n−au0,0|21u]du≤Ct∫0t(E[|aun,n−aun,0|21u]+E[|aun,0−au0,0|21u])du.
In turn,
∫0tE[|aun,n−aun,0|21u]du=∫0tE[|an(u,Xn(u))−an(u,X0(u))|21u]du≤CN,t∫0tE[|Xn(u)−X0(u)|21u]du≤CN,t∫0tE[Δn(u)21u]du.
By the Doob inequality and Itô isometry we obtain
E[sups∈[0,t∧σNn]|Ibn(s)−Ib0(s)|2]≤CE[|Ibn(t∧σNn)−Ib0(t∧σNn)|2]=C∫0tE[|bsn,n−bs0,0|21s]ds.
Estimating as in (5), we arrive at
∫0tE[|bsn,n−bs0,0|21s]ds≤CN,t∫0tE[Δn(s)21s]ds+C∫0tE[|bsn,0−bs0,0|21s]ds.
Finally, the Doob inequality yields
E[sups∈[0,t∧σNn]|Icn(s)−Ic0(s)|2]≤CE[|Icn(t∧σNn)−Ic0(t∧σNn)|2]=C∫0t∫RmE[|csn,n(θ)−cs0,0(θ)|21s]μ(dθ)ds≤C∫0t∫Rm(E[|csn,n(θ)−csn,0(θ)|21s]+E[|csn,0(θ)−cs0,0(θ)|21s])μ(dθ)ds.
By (A2) we have
C∫0t∫RmE[|csn,n(θ)−csn,0(θ)|21s]μ(dθ)ds≤CN,t∫0tE[|Xn(s)−X0(s)|21s]ds≤CN,t∫0tE[Δn(s)21s]ds.
Collecting all estimates, we arrive at the estimate
E[Δn(t)21t]≤C|Xn(0)−X0(0)|2+CN,t∫0tE[Δn(s)21s]ds+Ct∫0tE[|asn,0−as0,0|21s]ds+C∫0tE[|bsn,0−bs0,0|21s]ds+C∫0t∫RmE[|csn,0(θ)−cs0,0(θ)|21s]μ(dθ)ds,
where we can assume without loss of generality that the constants are nondecreasing in t. The application of the Gronwall lemma leads to
E[Δn(T)21T]≤CN,T(|Xn(0)−X0(0)|2+∫0TE[|asn,0−as0,0|21s]ds+∫0TE[|bsn,0−bs0,0|21s]ds+∫0T∫RmE[|csn,0(θ)−cs0,0(θ)|21s]μ(dθ)ds).
We claim that the right-hand side of the latter inequality vanishes as n→∞. Indeed, the integrands are bounded by CT(1+|X0(s)|2) due to (A1) and vanish pointwise due to (C1), so the convergence of the integrals follows from the dominated convergence theorem. The first term vanishes due to (C2); thus,
E[Δn(T)21T]→0,n→∞.
Now, to prove the first statement, for any ε>0, write
P(Δn(T)>ε)≤(1/ε2)E[Δn(T)21T]+P(σNn<T)≤(1/ε2)E[Δn(T)21T]+P(supt∈[0,T]|Xn(t)|≥N)+P(supt∈[0,T]|X0(t)|≥N).
This implies
lim supn→∞ P(Δn(T)>ε)≤2 supn≥0 P(supt∈[0,T]|Xn(t)|≥N).
By the Chebyshev inequality we have
lim supn→∞ P(Δn(T)>ε)≤(2/N2) supn≥0 E[supt∈[0,T]|Xn(t)|2].
Therefore, using (3) and letting N→∞, we get
lim supn→∞ P(Δn(T)>ε)=0,
as desired.
In order to prove the second statement, we repeat the previous arguments with σNn≡T, getting the estimate
E[Δn(T)2]≤CT(|Xn(0)−X0(0)|2+∫0TE[|asn,0−as0,0|2]ds+∫0TE[|bsn,0−bs0,0|2]ds+∫0T∫RmE[|csn,0(θ)−cs0,0(θ)|2]μ(dθ)ds).
Hence, we get the required convergence as before, using the dominated convergence theorem. □
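The key step of the proof above is the Gronwall lemma. Its mechanism can be illustrated (ours, with hypothetical constants A, B standing for the constants in the estimate before the lemma is applied) by the discrete equality case u_k = A + B·dt·Σ_{j<k} u_j, which solves to u_k = A(1+B·dt)^k and stays below the Gronwall bound A·exp(B·k·dt):

```python
import math

def gronwall_worst_case(A, B, dt, n):
    """Run the equality case u_k = A + B*dt*sum_{j<k} u_j of the discrete
    Gronwall inequality and return the sequence u_0, ..., u_n."""
    u, s = [], 0.0
    for _ in range(n + 1):
        uk = A + B * dt * s  # the inequality saturated
        u.append(uk)
        s += uk
    return u
```

Any sequence satisfying the inequality is dominated by this worst case, hence by A·exp(B·k·dt); this is exactly how the bound on E[Δn(T)21T] is closed.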
Convergence of hitting times
For each n≥0, define the stopping time
τn=inf{t≥0:φn(t,Xn(t))≥0} (6)
with the convention inf∅=+∞; φn is a function satisfying certain assumptions to be specified later. In this section, we study the convergence τn→τ0 as n→∞.
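On a discretized trajectory, the hitting time (6) is naturally approximated by the first grid point at which φn becomes nonnegative. A small helper (ours, purely illustrative):

```python
def hitting_time(phi, ts, xs):
    """First grid time t with phi(t, x) >= 0 along a discrete path (ts, xs);
    returns float('inf') if the set is never hit (inf over the empty set)."""
    for t, x in zip(ts, xs):
        if phi(t, x) >= 0:
            return t
    return float("inf")
```

The results below say, roughly, that applying such a functional to paths of Xn and X0 with close coefficients produces close values.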
The motivation to study stopping times of the form (6) comes from financial modeling. Specifically, let a financial market model be driven by the process Xn solving Eq. (2), and let q>0 be a constant discount factor. Consider the problem of optimal exercise of an American-type contingent claim with payoff function f and maturity T, that is, the maximization problem
E[e−qτf(Xn(τ))]→max,
where τ is a stopping time taking values in [0,T]. Define the value function
vn(t,x)=supτ∈[t,T]E[e−q(τ−t)f(Xn(τ))∣Xn(t)=x]
as the maximal expected discounted payoff provided that the price process Xn starts from x at the moment t; the supremum is taken over all stopping times with values in [t,T].
Then it is well known that the minimal optimal stopping time is given as
τ∗,n=inf{t≥0:vn(t,Xn(t))=f(Xn(t))},
that is, it is the first time when the process Xn hits the so-called optimal stopping set
Gn={(t,x)∈[0,T]×Rd:vn(t,x)=f(x)}.
Note that τ∗,n≤T since vn(T,x)=f(x). Since, obviously, vn(t,x)≥f(x), we may represent τ∗,n in the form (6) with φn(t,x)=f(x)−vn(t,x).
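The relation between the value function and the optimal stopping set is easiest to see in a discrete toy setting. The sketch below (ours; a binomial tree stands in for the jump-diffusion (2), and all parameter names are illustrative) computes the value by backward induction, so that the minimal optimal stopping time can be read off as the first node where the value equals the payoff:

```python
import numpy as np

def american_value_binomial(s0, u, d, r, payoff, n_steps):
    """Backward induction for an American claim on an n-step binomial tree
    with one-period rate r and risk-neutral probability q = (1+r-d)/(u-d).
    Returns lists of price and value arrays, one per time level, so the
    stopping set {v = payoff} can be read off node by node."""
    q = (1.0 + r - d) / (u - d)
    disc = 1.0 / (1.0 + r)
    prices = [s0 * u ** np.arange(t, -1, -1) * d ** np.arange(0, t + 1)
              for t in range(n_steps + 1)]
    values = [None] * (n_steps + 1)
    values[n_steps] = payoff(prices[n_steps])
    for t in range(n_steps - 1, -1, -1):
        cont = disc * (q * values[t + 1][:-1] + (1 - q) * values[t + 1][1:])
        values[t] = np.maximum(payoff(prices[t]), cont)
    return prices, values
```

For an at-the-money put with s0=K=100, u=1.1, d=0.9, r=0 on two steps, the down node at time 1 satisfies v=f, so the minimal optimal stopping time stops there.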
Convergence of hitting times for finite horizon
Let T>0 be a fixed number playing the role of the finite maturity of an American contingent claim. Let also the stopping times τn, n≥0, be given by (6) with φn:[0,T]×Rd→R satisfying the following assumptions.
φ0∈C1([0,T)×Rd), and the derivative Dxφ0 is locally Lipschitz continuous in x, that is, for all t∈[0,T), R>0, s∈[0,t], and x,y∈Bd(R),
|Dxφ0(s,x)−Dxφ0(s,y)|≤Ct,R|x−y|.
For all n≥0 and x∈Rd, φn(T,x)=0.
For all t∈[0,T) and x∈Rd,
|b0(t,x)⊤Dxφ0(t,x)|>0. (7)
Here by b0(t,x)⊤Dxφ0(t,x) we denote the vector in Rk with jth coordinate equal to
∑i=1dbij0(t,x)∂xiφ0(t,x),j=1,…,k.
Assumption (7) means that the diffusion acts strongly enough toward the boundary of the set Gt0:={x∈Rd:φ0(t,x)≤0}. The question of when this assumption holds will be studied elsewhere; here we only remark that it is more delicate than it might seem. For example, consider the optimal stopping problem described at the beginning of this section with n=0 in (2). Then, under suitable assumptions (see, e.g., [4, 7]), we have the smooth fit principle: ∂xv0(t,x)=∂xf(x) on the boundary of the optimal stopping set. This means that we cannot set φ0(t,x)=f(x)−v0(t,x) in order for (7) to hold, contrary to what was proposed at the beginning of the section.
We will also assume the locally uniform convergence φn→φ0.
For all t∈[0,T) and R>0,
sup(s,x)∈[0,t]×Bd(R)|φn(s,x)−φ0(s,x)|→0,n→∞.
The convergence of value functions in optimal stopping problems usually holds under fairly mild assumptions on the convergence of coefficients and payoffs. However, as we explained in Remark 5.1, we cannot use the value function to define φn. This means that we should find a function φn defining Gn that is different from f(x)−vn(t,x) but still satisfies the convergence assumption (G4).
The question in which cases such functions exist and the convergence assumption (G4) takes place will be a subject of our future research.
In the case where ν has infinite activity, that is, μ(Rm)=∞, we will also need some additional assumptions on the components of Eq. (2).
For each r>0, μ(Rm∖Bm(r))<∞.
For all t≥0, x∈Rd, and θ∈Rm,
|c0(t,x,θ)|≤h(t,x)g(θ),
where the functions g,h are locally bounded, g(0)=0, and g(θ)→0, θ→0.
Assumption (A3) means that only small jumps of ν can accumulate on a finite interval; assumption (A4) means that small jumps of ν are translated by Eq. (2) into small jumps of Xn. An important and natural example of a situation where these assumptions are satisfied is the equation
X0(t)=X0(0)+∫0ta0(s,X0(s))ds+∫0tb0(s,X0(s))dW(s)+∫0th0(s,X0(s−))dZ(s),t≥0,
driven by a Lévy process Z(t)=∫0t∫Rmθν˜(dθ,ds).
Now we are in a position to state the main result of this section.
Assume (A1)–(A4), (C1), (C2), and (G1)–(G4). Then we have the following convergence in probability:
τn⟶Pτ0, n→∞.
Let ε,δ be small positive numbers. We are to show that for all n large enough,
P(|τn−τ0|>ε)<δ. (8)
Using estimate (3) and the Chebyshev inequality, we obtain that for some R>0,
P(supt∈[0,T]|X0(t)|≥R)<δ/4.
Denote K=[0,T−ε/2]×Bd(R+2),
M=1+R+CT,R+2+CT+CT−ε/2,R+2+sup(t,x)∈K(|a0(t,x)|+|b0(t,x)|+|∂tφ0(t,x)|+|Dxφ0(t,x)|+|b0(t,x)⊤Dxφ0(t,x)|−1),
where, with some abuse of notation, CT,R+2 is the constant from (A2) corresponding to T and R+2, CT is the sum of constants from (A1) and (4), and CT−ε/2,R+2 is the constant from (G1) corresponding to T−ε/2 and R+2.
Let ϰ∈(0,M] be a number, which we will specify later. Now we claim that there exists a function φ∈C1,2([0,T)×Rd) such that
sup(t,x)∈K|φ(t,x)−φ0(t,x)|<ϰ/2
and, moreover,
supt∈[0,T−ε/2]x∈Bd(R+1)(|∂tφ(t,x)|+|Dxφ(t,x)|+|Dxx2φ(t,x)|+|b0(t,x)⊤Dxφ(t,x)|−1)≤CT−ε/2,R+2+sup(t,x)∈K(|∂tφ0(t,x)|+|Dxφ0(t,x)|+|b0(t,x)⊤Dxφ0(t,x)|−1)≤M.
Indeed, we can take the convolution φ(t,x)=(φ0(t,·)⋆ψ)(x) with a delta-like smooth function ψ, supported on a ball of radius less than 1.
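This mollification step is standard; a one-dimensional numerical sketch of it (ours, grid-based, with boundary effects ignored) convolves the function with a compactly supported bump ψ, which smooths it while changing its values by at most the modulus of continuity over the support:

```python
import numpy as np

def mollify(f_vals, xs, eps):
    """Smooth f (given by values f_vals on the uniform grid xs) by discrete
    convolution with the standard bump psi supported on [-eps, eps].
    A sketch of the construction phi = phi0 * psi used in the proof."""
    h = xs[1] - xs[0]
    m = int(round(eps / h))
    k = np.arange(-m, m + 1) * h
    # bump psi(k) ~ exp(-1/(1-(k/eps)^2)) on |k| < eps, zero outside
    w = np.exp(-1.0 / np.clip(1.0 - (k / eps) ** 2, 1e-12, None))
    w[np.abs(k) >= eps] = 0.0
    w /= w.sum()  # normalize so the weights integrate to 1
    return np.convolve(f_vals, w, mode="same")
```

Away from the grid boundary, the output agrees with the input up to O(eps) for Lipschitz inputs, which is how the bound sup|φ−φ0|<ϰ/2 is achieved for eps small.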
Further, by (G4) there exists n1≥1 such that for all n≥n1,
sup(t,x)∈K|φn(t,x)−φ0(t,x)|<ϰ/2.
On the other hand, by Theorem 3.1 there exists n2≥1 such that for all n≥n2,
P(supt∈[0,T]|Xn(t)−X0(t)|≥ϰ/M)<δ/4.
In what follows, we consider n≥n1∨n2.
Define the stopping time
σn=inf{t≥0:|Xn(t)−X0(t)|≥ϰ/M or |X0(t)|≥R}∧T.
Write
P(|τn−τ0|>ε)≤P(|τn−τ0|>ε, σn>T−ε/2)+P(supt∈[0,T]|X0(t)|≥R)+P(supt∈[0,T]|Xn(t)−X0(t)|≥ϰ/M)<P(|τn−τ0|>ε, σn>T−ε/2)+δ/2. (11)
For any t≤σn,
|Xn(t)|≤|X0(t)|+ϰ/M<R+1,
and hence,
|φn(t,Xn(t))−φ(t,X0(t))|≤|φn(t,Xn(t))−φ(t,Xn(t))|+|φ(t,Xn(t))−φ(t,X0(t))|≤ϰ+M|Xn(t)−X0(t)|≤2ϰ.
Now take some η∈(0,ε/2] whose exact value will be specified later and write the obvious inequality
P(τ0+ε<τn, σn>T−ε/2)≤P(τ0<T−ε, τ0+η<τn, σn>T−ε/2).
Assume that τ0<T−ε, τ0+η<τn, and σn>T−ε/2. Then, for all t∈[τ0,τ0+η]=:Iη,
|φn(t,Xn(t))−φ(t,X0(t))|≤2ϰ, φn(t,Xn(t))<0.
Therefore, in view of the inequality φ(τ0,X0(τ0))≥0, we obtain
inft∈Iηφ(t,X0(t))≥φ(τ0,X0(τ0))−2ϰ.
Further, we will work with the expression φ(t,X0(t))−φ(τ0,X0(τ0)) for t∈Iη. For convenience, we will abbreviate fs=f(s,X0(s)); for example, φs=φ(s,X0(s)).
Let r>0 be a number, which we will specify later, and assume that ν has no jumps greater than r on Iη, that is, ν((Rm∖Bm(r))×Iη)=0. Write, using the Itô formula,
φ(t,X0(t))−φ(τ0,X0(τ0))=∫τ0tLsφsds+∫τ0t(Dxφs,bs0dW(s))+∫τ0t∫Bm(r)Δs(θ)ν˜(dθ,ds)=:I1(t)+I2(t)+I3(t),
where
Ltφt=∂tφt+(Dxφt,at0)+(1/2)tr(bt0(bt0)⊤Dxx2φt)+∫Bm(r)(Δt(θ)−(Dxφt,c0(t,X0(t−),θ)))μ(dθ), Δt(θ)=φ(t,X0(t−)+c0(t,X0(t−),θ))−φ(t,X0(t−)).
Start with estimating I1(t). Since t≤σn∧(T−ε/2) for any t∈Iη, by the definition of M and σn we have
|∂tφt+(Dxφt,at0)+12tr(bt0(bt0)⊤Dxx2φt)|≤M+M2+M3≤3M3.
Further, by (A4), for t∈Iη and θ∈Bm(r), |c0(t,X0(t−),θ)|≤h(t,X0(t−))g(θ)≤K1mr, where K1=supt∈[0,T],|x|≤R h(t,x) and mr=supθ∈Bm(r) g(θ). Since mr→0 as r→0, we can assume that r is such that mr≤1/K1. Then, for t∈Iη, by the Taylor formula,
|∫Bm(r)(Δt(θ)−(Dxφt,c0(t,X0(t−),θ)))μ(dθ)|≤(1/2)sup(u,x)∈[0,T]×Bd(R+1)|Dxx2φ(u,x)| ∫Bm(r)|c0(t,X0(t−),θ)|2μ(dθ)≤(1/2)M2(1+|X0(t)|2)≤(1/2)M2(1+R2)≤M4.
Summing up the estimates, we get
|I1(t)|≤(3M3+M4)η≤4M4η.
Now proceed to I3(t). By the Doob inequality, for any a>00$]]>,
P(supt∈Iη|I3(t)|≥a, σn>T−ε/2)≤P(supt∈[τ0,(τ0+η)∧σn]|I3(t)|≥a)
≤Ca−2E[(∫0T∫Bm(r)Δs(θ)1[τ0,(τ0+η)∧σn](s)ν˜(dθ,ds))2]
=Ca−2∫0T∫Bm(r)E[Δs(θ)21[τ0,(τ0+η)∧σn](s)]μ(dθ)ds
≤Ca−2M2∫0T∫Bm(r)E[|c0(s,X0(s−),θ)|21[τ0,(τ0+η)∧σn](s)]μ(dθ)ds
≤Ca−2M2∫0T∫Bm(r)E[K12mr21[τ0,(τ0+η)∧σn](s)]μ(dθ)ds≤K2a−2mr2η
with some constant K2. Further, we fix a=δ2η1/2 and some r>0 such that mr2≤δ5/(16K2) and mr≤1/K1. Then
P(supt∈Iη|I3(t)|≥δ2η1/2, σn>T−ε/2)≤δ/16.
Write I2(t)=J1(t)+J2(t)+J3(t), where
J1(t)=∫τ0t(Dxφs−Dxφτ0,bs0dW(s)),J2(t)=∫τ0t(Dxφτ0,(bs0−bτ00)dW(s)),J3(t)=(Dxφτ0,bτ00(W(t)−W(τ0)))=(uτ0,W(t)−W(τ0));us=b0(s,X0(s))⊤Dxφ(s,X0(s)).
Taking into account that (s,X0(s))∈K for s≤σn, we estimate with the help of Doob’s inequality
E[supt∈IηJ1(t)21σn>T−ε/2]≤E[supt∈[τ0,(τ0+η)∧σn]J1(t)2]
≤CE[(∫τ0(τ0+η)∧σn(Dxφs−Dxφτ0,bs0dW(s)))2]
≤CE[∫τ0(τ0+η)∧σn|Dxφs−Dxφτ0|2|bs0|2ds]
≤CM3E[∫τ0τ0+η|X0(s)−X0(τ0)|2ds]
≤CM4(1+|X0(0)|2)η2≤CM4(1+R2)η2≤CM6η2.
Similarly, using (A2), we get
E[supt∈IηJ2(t)21σn>T−ε/2]≤CM6η2.
The Chebyshev inequality yields
P(supt∈Iη(|J1(t)|+|J2(t)|)≥η2/3, σn>T−ε/2)≤K3M6η2/3
with some constant K3. Assume further that
η≤η2:=(δ/(16K3M6))3/2,
in which case the right-hand side of the last inequality does not exceed δ/16, and that
η≤η3:=1/(125M9),
so that η2/3≥5ηM3. Hence, in view of (15), we obtain
P(τ0+ε<τn, σn>T−ε/2)≤P(inft∈IηJ3(t)≥−5ηM3−η2/3−δ2η1/2, σn>T−ε)+3δ/16≤P(inft∈IηJ3(t)≥−2η2/3−δ2η1/2, (τ0,X0(τ0))∈K)+3δ/16. (16)
Further, due to the strong Markov property of W,
P(inft∈IηJ3(t)≥−2η2/3−δ2η1/2,(τ0,X0(τ0))∈K)=E[1K(τ0,X0(τ0))P(inft∈IηJ3(t)≥−2η2/3−δ2η1/2∣Fτ0)]=E[1K(τ0,X0(τ0))×P(infz∈[0,η](u(s,x),W(s+z)−W(s))≥−2η2/3−δ2η1/2)|(s,x)=(τ0,X0(τ0))],
where u(s,x)=b0(s,x)⊤Dxφ(s,x). Observe now that {(u(s,x),W(z+s)−W(s)),z≥0} is a standard Wiener process multiplied by |u(s,x)|. Therefore,
P(infz∈[0,η](u(s,x),W(s+z)−W(s))≥−2η2/3−δ2η1/2)=1−2P((u(s,x),W(s+η)−W(s))<−2η2/3−δ2η1/2)=1−2Φ(−(2η2/3+δ2η1/2)/(|u(s,x)|η1/2))=1−2Φ(−(2η1/6+δ2)/|u(s,x)|),
where Φ is the standard normal distribution function. Thus,
P(inft∈IηJ3(t)≥−2η2/3−δ2η1/2, (τ0,X0(τ0))∈K)≤E[1K(τ0,X0(τ0))(1−2Φ(−(2η1/6+δ2)/|u(τ0,X0(τ0))|))]≤1−2Φ(−M(2η1/6+δ2))≤M√(2/π)(2η1/6+δ2).
Note that the definition of M does not depend on δ. Thus, we can assume without loss of generality that δ≤√π/(32M√2). Finally, if
η≤η4:=(δ√π/(64M√2))6,
then
P(inft∈IηJ3(t)≥−2η2/3−δ2η1/2, (τ0,X0(τ0))∈K)≤δ/16. (17)
Now we can fix η=min{ε/2,η1,η2,η3,η4}, so that all previous estimates hold. Combining (16) with (17), we arrive at
P(τ0+ε<τn, σn>T−ε/2)≤δ/4.
Similarly,
P(τn+ε<τ0, σn>T−ε/2)≤δ/4,
and hence
P(|τn−τ0|>ε, σn>T−ε/2)≤δ/2.
Plugging this estimate into (11), we arrive at the desired inequality (8). □
It is easy to modify the proof for the case where (7) holds for all (t,x)∈G0:={(t,x)∈[0,T)×Rd:φ0(t,x)=0}. Indeed, by continuity, (7) then holds in some neighborhood of G0, which is sufficient for the argument.
As we have already mentioned, assumptions (A3) and (A4) are not needed in the case μ(Rm)<∞. Indeed, we can set r=0 in the previous argument and skip the estimation of I3(t). Nevertheless, these assumptions do not seem very restrictive, as we pointed out in Remark 5.3.
Convergence of hitting times for infinite horizon
Here we extend the results of the previous subsection to the case of infinite time horizon. Let, as before, the stopping times τn, n≥0, be given by (6). We impose the following assumptions.
φ0∈C1([0,∞)×Rd), and Dxφ0 is locally Lipschitz continuous in x, that is, for all T>0, R>0, t∈[0,T], and x,y∈Bd(R),
|Dxφ0(t,x)−Dxφ0(t,y)|≤CT,R|x−y|.
τ0<∞ a.s.
For all t≥0 and x∈Rd,
|b0(t,x)⊤Dxφ0(t,x)|>0.
For all t≥0 and R>0,
sup(s,x)∈[0,t]×Bd(R)|φn(s,x)−φ0(s,x)|→0, n→∞.
Assume (A1), (A2), (C1), (C2), and (H1)–(H4). Then we have the following convergence in probability:
τn⟶Pτ0, n→∞.
Fix arbitrary ε∈(0,1) and δ>0. Since τ0<∞ a.s., P(τ0>T−1)≤δ for some T>1. For n≥0, t∈[0,T], and x∈Rd, define φ˜n(t,x)=φn(t,x)1[0,T)(t) and τTn=τn∧T. Then the functions φ˜n, n≥0, satisfy (G1)–(G3), and τTn=inf{t≥0:φ˜n(t,Xn(t))≥0}. Therefore, in view of Theorem 5.1,
P(|τTn−τT0|>ε)→0, n→∞.
We estimate
P(|τn−τ0|>ε)≤P(|τTn−τT0|>ε)+P(τ0>T−1)≤P(|τTn−τT0|>ε)+δ.
Hence,
lim supn→∞ P(|τn−τ0|>ε)≤δ.
Letting δ→0, we arrive at the desired convergence. □
Let d=k=m=1 and for all t≥0, x,θ∈R, an(t,x)=an, bn(t,x)=bn, cn(t,x,θ)=cnθ, where an,bn,cn∈R. Then we have a sequence of Lévy processes
Xn(t)=Xn(0)+ant+bnW(t)+cn∫0t∫Rθν˜(ds,dθ).
Consider the following times:
τn=inf{t≥0:Xn(t)≥hn(t)}∧T,n≥0,
of crossing curves hn∈C1([0,T)).
Assume that an→a0, bn→b0≠0, cn→c0, and Xn(0)→X0(0) as n→∞ and that, for any t∈[0,T), sups∈[0,t]|hn(s)−h0(s)|→0 as n→∞. Then τn⟶Pτ0, n→∞. Indeed, setting φn(t,x)=(hn(t)−x)1[0,T)(t), we can check that all assumptions of Theorem 5.1 are in force.
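This example is easy to probe numerically. The sketch below (ours; jumps and time-varying curves are omitted for brevity, so hn is a constant level h) computes the grid hitting time of a level for the drifted path X(t)=X(0)+at+bW(t), reusing the same pre-drawn Brownian increments for different parameter sets so that the coupling of τn and τ0 is visible:

```python
import numpy as np

def level_hitting_time(a, b, x0, h, T, n, dw):
    """Grid approximation of tau = inf{t : X(t) >= h} for
    X(t) = x0 + a*t + b*W(t), with W built from the pre-drawn increments
    dw (so different parameters can share one noise path).  Returns T if
    the level is not reached before maturity, mimicking tau ^ T."""
    dt = T / n
    x, t = x0, 0.0
    for k in range(n):
        if x >= h:
            return t
        x += a * dt + b * dw[k]
        t += dt
    return T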
Let d=k=m=1. Suppose that the coefficients an, bn, cn satisfy (A1) and (A2) and that the convergences (C1) and (C2) take place. Assume that b0(t,x)>0 for all t≥0 and x∈R. Define
τn=inf{t≥0:Xn(t)∉(ln,rn)},n≥0.
It is not hard to check that, due to the nondegeneracy of b0, τ0<∞ a.s. Assume that ln→l0 and rn→r0 as n→∞. Then, setting φn(t,x)=(x−ln)(rn−x) and using Theorem 5.2, we get the convergence τn⟶Pτ0, n→∞.
Acknowledgments
The author would like to thank the anonymous referee whose remarks led to a substantial improvement of the manuscript.
References

[1] Cont, R., Tankov, P.: Financial Modelling with Jump Processes. Chapman and Hall/CRC, Boca Raton (2004). MR2042661
[2] Mishura, Y.S., Tomashyk, V.V.: Convergence of exit times for diffusion processes. 88, 139–149 (2014)
[3] Moroz, A.G., Tomashyk, V.V.: Convergence of solutions and their exit times in diffusion models with jumps. 50(2), 288–296 (2014). MR3276037. doi:10.1007/s10559-014-9616-6
[4] Pham, H.: Optimal stopping, free boundary, and American option in a jump-diffusion model. 35(2), 145–164 (1997). MR1424787. doi:10.1007/s002459900042
[5] Situ, R.: Theory of Stochastic Differential Equations with Jumps and Applications. Springer, New York (2005). MR2160585
[6] Tomashyk, V.V., Shevchenko, G.M.: Convergence of hitting times in diffusion models with jumps and non-Lipschitz diffusion. 2014(2), 32–38 (2014)
[7] Zhang, X.L.: Valuation of American options in a jump-diffusion model. In: Publ. Newton Inst., pp. 93–114. Cambridge Univ. Press, Cambridge (1997). MR1470511