1 Introduction
This paper can be considered as a continuation of the research started in [5, 4, 7–9]. Namely, we are interested in representations of a random variable ξ in the form of a stochastic integral
\[
\xi=\int_0^1\psi(t)\,dB^H(t) \qquad (1)
\]
with respect to a fractional Brownian motion B^H with Hurst parameter H>1/2; the integrand ψ is assumed to be adapted to the natural filtration generated by B^H on the interval [0,1]. The motivation comes from financial mathematics, where the capitals of self-financing strategies are given by stochastic integrals with respect to asset price processes.
The main representation results reported in [5, 4, 7–9] rely on the following assumption about ξ: it is the value at time 1 of an adapted Hölder-continuous process. Moreover, it was shown in [4] (see Remark 2.5) that such an assumption cannot be avoided within the methods used in the cited papers.
In this paper, we generalize the existing results by showing the existence of representation (1) under the weaker assumption that ξ is the value at 1 of an adapted log-Hölder continuous process. In order to establish such a representation, we extend the definition of the fractional integral introduced in [10].
The paper is organized as follows. Section 2 gives all necessary prerequisites about the fractional Brownian motion and the definition of the extended fractional integral. In Section 3, we prove the main representation result. The Appendix contains auxiliary results concerning the extended fractional integral.
2 Preliminaries
2.1 General conventions
Let (Ω, F, F = {F_t}_{t∈[0,1]}, P) be a standard stochastic basis. The adaptedness of processes will be understood with respect to the filtration F.
Throughout the paper, the symbol C will denote a generic constant whose value may change from line to line. When the dependence on some parameter(s) is important, this will be indicated by superscripts. A finite random variable whose particular value is of no importance will be denoted by C(ω).
2.2 Fractional Brownian motion
Our main object of interest in this paper is a fractional Brownian motion (fBm) with Hurst index H∈(1/2,1) on (Ω, F, P), that is, an F-adapted centered Gaussian process B^H = {B^H(t)}_{t≥0} with the covariance function
\[
E\bigl[B^H(t)B^H(s)\bigr]=\frac12\bigl(t^{2H}+s^{2H}-|t-s|^{2H}\bigr).
\]
By the Kolmogorov–Chentsov theorem, B^H has a continuous modification, so in what follows we assume that B^H is continuous. Moreover, we will need the following statement on the uniform modulus of continuity of B^H (see, e.g., [3]).
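Before moving on, the covariance formula and the Hölder-type behaviour of the paths can be illustrated numerically. The following Python sketch (ours, not part of the paper's argument; the parameters H = 0.7 and the grid size are arbitrary choices) samples B^H by a Cholesky factorization of the covariance matrix and reports the largest ratio |ΔB^H|/(Δt)^H over consecutive grid points, the quantity that the uniform modulus of continuity controls up to a logarithmic factor.

```python
import numpy as np

# Illustration only: sample a fractional Brownian motion on [0, 1] from its
# covariance  E[B^H(t) B^H(s)] = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2
# via a Cholesky factorization, and inspect consecutive increment ratios.

H, n = 0.7, 500                                   # Hurst index > 1/2, grid size (our choice)
rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, n + 1)[1:]              # exclude t = 0, where B^H(0) = 0
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
path = np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)))
grid = np.concatenate(([0.0], t))

ratios = np.abs(np.diff(path)) / np.diff(grid) ** H
print("max |increment| / (mesh)^H over the grid:", ratios.max())
```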
2.3 Small deviations of sums of squared increments of fractional Brownian motion
First, we state a small deviation estimate for sums of squares of Gaussian random variables (see, e.g., [2]).
We will also need the following asymptotics of the covariance of a fractional Brownian motion. Its proof is given in the Appendix.
2.4 Extended fractional integral
To integrate with respect to a fractional Brownian motion, we use the fractional integral introduced in [10], but the definition is modified according to our purposes. For functions f,g:[a,b]→R and α∈(0,1), define the fractional Riemann–Liouville derivatives (in Weyl form)
\[
\bigl(D^{\alpha}_{a+}f\bigr)(x)=\frac{1}{\Gamma(1-\alpha)}\left(\frac{f(x)}{(x-a)^{\alpha}}+\alpha\int_a^x\frac{f(x)-f(u)}{(x-u)^{\alpha+1}}\,du\right),
\]
\[
\bigl(D^{1-\alpha}_{b-}g\bigr)(x)=\frac{e^{-i\pi\alpha}}{\Gamma(\alpha)}\left(\frac{g(x)}{(b-x)^{1-\alpha}}+(1-\alpha)\int_x^b\frac{g(x)-g(u)}{(u-x)^{2-\alpha}}\,du\right).
\]
Then the fractional integral ∫_a^b f(x) dg(x) can be defined as
\[
\int_a^b f(x)\,dg(x)=e^{i\pi\alpha}\int_a^b\bigl(D^{\alpha}_{a+}f\bigr)(x)\,\bigl(D^{1-\alpha}_{b-}g_{b-}\bigr)(x)\,dx,
\qquad g_{b-}(x)=g(x)-g(b-),
\]
provided that the last integral is finite. However, in order for this integral to have good properties (independence of α, additivity), we need to assume more than just its finiteness; see the Appendix for details.

Now we turn to the integration with respect to a fractional Brownian motion. We restrict our exposition to the interval [0,1], which suffices for our purposes. Fix a number α∈(1−H,1/2). Note that from Theorem 1 it is easy to derive the following estimate:
For some μ>1/2, define the weight
Let a function f: [0,1] → R be such that D^α_{0+}f ∈ L^1([0,1],ρ); this will be our class of admissible integrands. Then the extended fractional integral
\[
\int_0^1 f(x)\,dB^H(x)=e^{i\pi\alpha}\int_0^1\bigl(D^{\alpha}_{0+}f\bigr)(x)\,\bigl(D^{1-\alpha}_{1-}B^H_{1-}\bigr)(x)\,dx \qquad (3)
\]
is well defined (see the Appendix). In particular, it is possible to take f with D^α_{0+}f ∈ L^1[0,1], and for such integrands this definition agrees with the definition of the fractional integral given in [10]; see the Remark on p. 340 therein.
Furthermore, it is shown in the Appendix that if f satisfies the above assumption for a different value of α, then the value of the extended fractional integral is the same. The following estimate is obvious:
\[
\Bigl|\int_0^1 f(x)\,dB^H(x)\Bigr|\le C(\omega)\int_0^1\bigl|\bigl(D^{\alpha}_{0+}f\bigr)(x)\bigr|\,\rho(x)\,dx
=C(\omega)\,\bigl\|D^{\alpha}_{0+}f\bigr\|_{L^1([0,1],\rho)}.
\]
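The definition of the fractional integral can be checked numerically for smooth functions, where it must coincide with the classical integral. The following Python sketch (ours) discretizes the two Weyl derivatives by midpoint rules and compares the result with ∫_a^b f(x)g′(x) dx; the complex constants are convention dependent, and in the sketch the two factors are assumed to combine into an overall −1, the normalization under which the formula reproduces the classical integral for smooth f and g. Treat that constant, the test functions, and the grid sizes as assumptions of this illustration.

```python
import numpy as np
from math import gamma

# Illustration only: evaluate the fractional integral of f against a smooth g
# through the Weyl derivatives D^alpha_{a+} f and D^{1-alpha}_{b-} g_{b-},
# and compare with the classical integral of f g'.  Real-valued convention:
# the two complex factors of the definition are assumed to combine into -1.

a, b, alpha = 0.0, 1.0, 0.3
f = lambda x: np.sin(2 * np.pi * x)               # test integrand, f(a) = 0
g = lambda x: x ** 2                              # smooth "integrator"
dg = lambda x: 2 * x

def d_left(x, m=400):
    """Midpoint-rule approximation of (D^alpha_{a+} f)(x)."""
    u = np.linspace(a, x, m + 1)
    u = 0.5 * (u[1:] + u[:-1])
    inner = np.sum((f(x) - f(u)) / (x - u) ** (1 + alpha)) * (x - a) / m
    return (f(x) / (x - a) ** alpha + alpha * inner) / gamma(1 - alpha)

def d_right(x, m=400):
    """Midpoint-rule approximation of (D^{1-alpha}_{b-} g_{b-})(x), complex factor dropped."""
    u = np.linspace(x, b, m + 1)
    u = 0.5 * (u[1:] + u[:-1])
    inner = np.sum((g(x) - g(u)) / (u - x) ** (2 - alpha)) * (b - x) / m
    return ((g(x) - g(b)) / (b - x) ** (1 - alpha) + (1 - alpha) * inner) / gamma(alpha)

n = 1000
x = np.linspace(a, b, n + 1)[1:-1]                # interior points avoid the endpoint singularities
h = (b - a) / n
frac = -h * np.sum([d_left(xx) * d_right(xx) for xx in x])   # assumed overall factor -1
classical = h * np.sum(f(x) * dg(x))
print(frac, classical)                            # the two values should be close
```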
For each t∈(0,1), we define the integral ∫_0^t f(x) dB^H(x) by a similar formula, understanding it in the sense of [10], since D^α_{0+}f ∈ L^1[0,t] and D^{1−α}_{t−}B^H_{t−} ∈ L^∞[0,t]. Under the additional assumption that D^α_{t+}f ∈ L^1([t,1],ρ), we can define the integral ∫_t^1 f(x) dB^H(x) similarly to (3), and the additivity holds:
\[
\int_0^1 f(x)\,dB^H(x)=\int_0^t f(x)\,dB^H(x)+\int_t^1 f(x)\,dB^H(x).
\]
Note that the additivity
\[
\int_0^t f(x)\,dB^H(x)=\int_0^s f(x)\,dB^H(x)+\int_s^t f(x)\,dB^H(x)
\]
for s<t follows from the results of [10].
Finally, it is worth adding that for f∈C^γ[0,1] with γ>1−H, the extended fractional integral is well defined, since the derivative D^α_{0+}f is bounded for any α<γ, and thus we can take α∈(1−H,γ) in the definition. The value of the integral agrees with the so-called Young integral, which is given by a limit of integral sums. An important example is f(x)=g(B^H(x)), where g is Lipschitz continuous. In this case, the following change-of-variable formula holds:
\[
\int_0^t g\bigl(B^H(x)\bigr)\,dB^H(x)=G\bigl(B^H(t)\bigr)-G\bigl(B^H(0)\bigr),\qquad t\in[0,1],
\]
where G(x)=∫_0^x g(y) dy. The formula turns out to be valid (with the integral defined in the sense of [10]) even for functions g of locally bounded variation; see [1]. However, Lipschitz continuity (or even continuous differentiability) will suffice for our purposes.
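For H > 1/2, the change-of-variable formula can also be observed numerically, since forward Riemann–Stieltjes (Young) sums of g(B^H) against B^H converge pathwise. The small Python sketch below (ours; the choices g = cos, G = sin and the grid size are arbitrary) compares such a sum with G(B^H(1)) − G(B^H(0)).

```python
import numpy as np

# Illustration only: for a sampled path of B^H with H > 1/2, a forward Riemann-
# Stieltjes sum of g(B^H) dB^H should be close to G(B^H(1)) - G(B^H(0)), G' = g.

H, n = 0.7, 1000
rng = np.random.default_rng(1)

t = np.linspace(0.0, 1.0, n + 1)[1:]
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
B = np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)))

g, G = np.cos, np.sin                             # G' = g
young_sum = np.sum(g(B[:-1]) * np.diff(B))        # sum of g(B(t_k)) (B(t_{k+1}) - B(t_k))
print(young_sum, G(B[-1]) - G(B[0]))              # the two values should be close
```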
3 Main result
In this section, for a given F_1-measurable random variable ξ, we construct an F-adapted process ψ = {ψ(t)}_{t∈[0,1]} such that (1) holds almost surely under the following “log-Hölder” assumption on ξ.
Remark 1.
Obviously, the process Z satisfies
for any b>0. So (4) is weaker than (5), which is the assumption made in [5].
In [5], the following example of a random variable not satisfying (5) was given. Assume that F = {F_t = σ(B^H_s, s∈[0,t])}_{t∈[0,1]}, and let ξ = ∫_{1/2}^1 g(t) dW_t, where g(t) = (1−t)^{−1/2}|log(1−t)|^{−1} and W is a Wiener process whose natural filtration coincides with F. Using the same argument as in [5], it is possible to show that ξ does not satisfy (4) either. However, if we take the same construction with g(t) = (1−t)^{−1/2}|log(1−t)|^{−d}, d>1, then the corresponding random variable satisfies (4) but not (5).
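For orientation, here is a short variance computation (ours, not taken from [5]) that shows where the logarithmic rate in this example comes from; we use the martingale Z(t) = E[ξ | F_t], t ∈ [1/2,1], as the approximating process. Substituting v = −log(1−s),
\[
E\bigl[(\xi-Z(t))^2\bigr]=\int_t^1 g(s)^2\,ds=\int_t^1\frac{ds}{(1-s)\,|\log(1-s)|^{2d}}
=\int_{|\log(1-t)|}^{\infty}v^{-2d}\,dv=\frac{|\log(1-t)|^{1-2d}}{2d-1},\qquad d>\tfrac12 .
\]
Thus the distance of ξ from its conditional expectations decays only logarithmically in 1−t and never at a power rate, which is the mechanism behind the dichotomy between (4) and (5); of course, (4) and (5) are pathwise conditions, so this L²-computation is only a heuristic, and the rigorous argument is the one of [5] referred to above.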
Next, we state a helpful lemma from [5], used in our construction of ψ.
We can now proceed with the main result.
Proof.
The proof is divided into three parts.
Construction of ψ. Let κ∈(2, 2^a). Put t_n = 1 − e^{−κ^{n/a}}, n≥1, and let Δ_n = t_{n+1} − t_n. It is easy to see that
Denote for brevity ξ_n = Z(t_n). Then, by Assumption 1, |ξ_n − ξ| ≤ C(ω)κ^{−n}, so that |ξ_n − ξ| ≤ 2^{−n} for all n large enough, say, for n ≥ N(ω). In particular, we have
for all n≥N(ω)+1.
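The grid is tailored to the logarithmic rate in Assumption 1. As a short side computation (ours; we use the log-Hölder shape |Z(t)−ξ| ≤ C(ω)|log(1−t)|^{−a} purely for illustration), note that
\[
1-t_n=e^{-\kappa^{n/a}}\quad\Longrightarrow\quad\bigl|\log(1-t_n)\bigr|^{-a}=\kappa^{-n},
\]
so the assumption evaluated along the grid gives exactly the bound |ξ_n − ξ| ≤ C(ω)κ^{−n} used above; since κ > 2, this is at most 2^{−n} as soon as (κ/2)^n ≥ C(ω), which explains the threshold N(ω).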
The integrand ψ is constructed in an inductive way between the points {t_n, n≥1}. Set first ψ(t)=0, t∈[0,t_1]. Assuming that ψ(t) is defined on [0,t_n), let V(t) = ∫_0^t ψ(s) dB^H(s), t∈[0,t_n]. The construction of the integrand on [t_n, t_{n+1}) depends on whether V(t_n) = ξ_{n−1} or not.
Case I.
V(t_n) ≠ ξ_{n−1}. In this case, thanks to Lemma 4, there exists an adapted process {φ(t), t∈[t_n,t_{n+1}]} such that ∫_{t_n}^t φ(s) dB^H(s) → +∞ as t → t_{n+1}−. Define the stopping time
and the process
It is obvious that ∫_{t_n}^{t_{n+1}} ψ(s) dB^H(s) = ξ_n − V(t_n) and V(t_{n+1}) = ξ_n.
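One concrete choice of the stopping time and of the process, consistent with the two identities just stated (our illustration; the notation τ_n is introduced here and need not coincide with the formulas used in the paper), is
\[
\tau_n=\inf\Bigl\{t\ge t_n:\int_{t_n}^{t}\varphi(s)\,dB^H(s)=|\xi_n-V(t_n)|\Bigr\}\wedge t_{n+1},\qquad
\psi(t)=\operatorname{sign}\bigl(\xi_n-V(t_n)\bigr)\,\varphi(t)\,\mathbf{1}_{[t_n,\tau_n]}(t),\quad t\in[t_n,t_{n+1});
\]
since the integral of φ tends to +∞ as t → t_{n+1}−, the level |ξ_n − V(t_n)| is reached before t_{n+1} almost surely, which yields ∫_{t_n}^{t_{n+1}} ψ(s) dB^H(s) = ξ_n − V(t_n) and hence V(t_{n+1}) = ξ_n.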
The construction approaches ξ. Our aim now is to prove that V(t_n) = ξ_{n−1} for all n large enough. By construction, it suffices to show that Case II happens for all n large enough. Equivalently, we need to show that σ_n < t_{n+1} for all n large enough. In view of (8), the latter inequality holds if
Thus, in view of the Borel–Cantelli lemma and inequality (7), it suffices to verify the convergence of the series
or, equivalently, that
The latter follows from Lemma 3 through the self-similarity and the stationarity of increments of an fBm. Thus, for all n large enough, say, for n ≥ N_2(ω), V(t_n) = ξ_{n−1}.
Integrability of ψ. It is easy to see that ψ is integrable w.r.t. B^H on any interval [0,t_N]. It remains to verify that the integral ∫_{t_N}^1 ψ(s) dB^H(s) is well defined and vanishes as N→∞ (note that we have not established the continuity of the integral as a function of its upper limit). For some μ>1/2 (which will be specified later), define
Clearly, it suffices to show that ‖D^α_{t_N+}ψ‖_{L^1([t_N,1],ρ)} → 0 as N→∞. Let N ≥ N_2(ω). Write
\begin{align*}
\int_{t_N}^1\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\rho(s)\,ds
&=\sum_{n=N}^{\infty}\int_{t_n}^{t_{n+1}}\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\rho(s)\,ds\\
&\le C\sum_{n=N}^{\infty}\Delta_n^{H+\alpha-1}|\log\Delta_n|^{\mu}\int_{t_n}^{t_{n+1}}\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\,ds,
\end{align*}
where we have used (6) and the fact that ρ is decreasing in a left neighborhood of 1. Now we estimate
\begin{align*}
\int_{t_n}^{t_{n+1}}\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\,ds
&\le\int_{t_n}^{t_{n+1}}\left(\frac{|\psi(s)|}{(s-t_N)^{\alpha}}+\int_{t_N}^{s}\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}\,du\right)ds\\
&\le C(\omega)\,a_n\Delta_n^{1-\alpha}\delta_n^{H}|\log\delta_n|^{1/2}
+\int_{t_n}^{t_{n+1}}\int_{t_N}^{s}\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}\,du\,ds,
\end{align*}
where we have used Theorem 1 to estimate ψ. Consider the second term. It equals the sum of three terms, I_1 + I_2 + I_3, which we estimate separately.
Start with I_1, observing that ψ vanishes on (σ_n, t_{n+1}]:
\begin{align*}
I_1&\le\int_{t_n}^{t_{n+1}}\sum_{j=N}^{n}\int_{t_{j-1}}^{t_j}\frac{|\psi(s)|+|\psi(u)|}{|s-u|^{1+\alpha}}\,du\,ds\\
&\le C(\omega)\left(a_n\delta_n^{H}|\log\delta_n|^{1/2}\int_{t_n}^{t_{n+1}}(s-t_n)^{-\alpha}\,ds
+\sum_{j=N}^{n-1}a_j\delta_j^{H}|\log\delta_j|^{1/2}\int_{t_n}^{t_{n+1}}(s-t_{j+1})^{-\alpha}\,ds\right)\\
&\le C(\omega)\left(a_n\Delta_n^{1-\alpha}\delta_n^{H}|\log\delta_n|^{1/2}
+\Delta_n^{1-\alpha}\sum_{j=N}^{n-1}a_j\delta_j^{H}|\log\delta_j|^{1/2}\right).
\end{align*}
Proceed with I_2:
\begin{align*}
I_2&\le C(\omega)\,a_n\delta_n^{H}|\log\delta_n|^{1/2}\sum_{k=1}^{n}\int_{s_{n,k-1}}^{s_{n,k}}\int_{t_n}^{s_{n,k-1}}|s-u|^{-1-\alpha}\,du\,ds\\
&\le C(\omega)\,a_n\delta_n^{H}|\log\delta_n|^{1/2}\sum_{k=1}^{n}\int_{s_{n,k-1}}^{s_{n,k}}(s-s_{n,k-1})^{-\alpha}\,ds\\
&\le C(\omega)\,a_n n\,\delta_n^{H+1-\alpha}|\log\delta_n|^{1/2}
=C(\omega)\,a_n\Delta_n\delta_n^{H-\alpha}|\log\delta_n|^{1/2}.
\end{align*}
Finally, assuming that σ_n ∈ [s_{n,l−1}, s_{n,l}), we have
\begin{align*}
I_3&\le C(\omega)\sum_{k=1}^{l-1}\int_{s_{n,k-1}}^{s_{n,k}}\int_{s_{n,k-1}}^{s}\frac{a_n(s-u)^{H}|\log(s-u)|^{1/2}}{(s-u)^{1+\alpha}}\,du\,ds
+\int_{s_{n,l-1}}^{\sigma_n}\int_{s_{n,l-1}}^{s}\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}\,du\,ds\\
&\quad+\int_{\sigma_n}^{s_{n,l}}\int_{s_{n,l-1}}^{\sigma_n}\frac{|\psi(s)-\psi(u)|}{|s-u|^{1+\alpha}}\,du\,ds\\
&\le C(\omega)\,a_n\sum_{k=1}^{n}\int_{s_{n,k-1}}^{s_{n,k}}(s-s_{n,k-1})^{H-\alpha}|\log(s-s_{n,k-1})|^{1/2}\,ds
+C(\omega)\,a_n\delta_n^{H}|\log\delta_n|^{1/2}\int_{\sigma_n}^{s_{n,l}}\int_{s_{n,l-1}}^{\sigma_n}\frac{du\,ds}{|s-u|^{1+\alpha}}\\
&\le C(\omega)\,a_n\bigl(n\,\delta_n^{H+1-\alpha}|\log\delta_n|^{1/2}+\delta_n^{H+1-\alpha}|\log\delta_n|^{1/2}\bigr)
\le C(\omega)\,a_n n\,\delta_n^{H+1-\alpha}|\log\delta_n|^{1/2}
=C(\omega)\,a_n\Delta_n\delta_n^{H-\alpha}|\log\delta_n|^{1/2}.
\end{align*}
Gathering all estimates, we get
\[
\int_{t_N}^1\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\rho(s)\,ds
\le C(\omega)\sum_{n=N}^{\infty}\left(a_n\Delta_n^{H}\delta_n^{H}|\log\delta_n|^{1/2}|\log\Delta_n|^{\mu}
+a_n\Delta_n^{H+\alpha}\delta_n^{H-\alpha}|\log\delta_n|^{1/2}|\log\Delta_n|^{\mu}
+\Delta_n^{H}|\log\Delta_n|^{\mu}\sum_{j=N}^{n-1}a_j\delta_j^{H}|\log\delta_j|^{1/2}\right).
\]
Consider the last sum. After changing the order of summation, we get
\begin{align*}
\sum_{n=N}^{\infty}\sum_{j=N}^{n-1}a_j\delta_j^{H}|\log\delta_j|^{1/2}\Delta_n^{H}|\log\Delta_n|^{\mu}
&=\sum_{j=N}^{\infty}a_j\delta_j^{H}|\log\delta_j|^{1/2}\sum_{n=j+1}^{\infty}\Delta_n^{H}|\log\Delta_n|^{\mu}\\
&\le C\sum_{j=N}^{\infty}a_j\delta_j^{H}|\log\delta_j|^{1/2}\Delta_j^{H}|\log\Delta_j|^{\mu}\sum_{n=j+1}^{\infty}e^{H(\kappa^{j/a}-\kappa^{n/a})}\kappa^{\mu(n-j)/a}\\
&\le C\sum_{j=N}^{\infty}a_j\delta_j^{H}|\log\delta_j|^{1/2}\Delta_j^{H}|\log\Delta_j|^{\mu}.
\end{align*}
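The inner series in the middle line is bounded uniformly in j; a one-line justification (ours): since κ^{n/a} − κ^{j/a} = κ^{j/a}(κ^{(n−j)/a} − 1) ≥ κ^{(n−j)/a} − 1 for j ≥ 1, we have
\[
\sum_{n=j+1}^{\infty}e^{H(\kappa^{j/a}-\kappa^{n/a})}\kappa^{\mu(n-j)/a}
\le\sum_{m=1}^{\infty}e^{-H(\kappa^{m/a}-1)}\kappa^{\mu m/a}<\infty,
\]
because the super-exponential decay in m beats the geometric growth of κ^{μm/a}.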
Consequently, noting that Δ_n^H δ_n^H ≤ Δ_n^{H+α} δ_n^{H−α} (because δ_n ≤ Δ_n), we have
\begin{align*}
\int_{t_N}^1\bigl|\bigl(D^{\alpha}_{t_N+}\psi\bigr)(s)\bigr|\rho(s)\,ds
&\le C(\omega)\sum_{n=N}^{\infty}a_n\Delta_n^{H+\alpha}\delta_n^{H-\alpha}|\log\delta_n|^{1/2}|\log\Delta_n|^{\mu}\\
&\le C(\omega)\sum_{n=N}^{\infty}2^{-n}n^{-1}\Delta_n^{H+\alpha}\delta_n^{-H-\alpha}|\log\delta_n|^{\mu+1/2}\\
&\le C(\omega)\sum_{n=N}^{\infty}2^{-n}n^{H+\alpha-1}\bigl(|\log\Delta_n|^{\mu+1/2}+n^{\mu+1/2}\bigr)\\
&\le C(\omega)\sum_{n=N}^{\infty}2^{-n}n^{H+\alpha-1}\bigl(C+\kappa^{(\mu+1/2)n/a}+n^{\mu+1/2}\bigr).
\end{align*}
Now, the dominant term of the last series is 2^{−n} n^{H+α−1} κ^{(μ+1/2)n/a}, so for the series to converge it is sufficient that κ^{(μ+1/2)/a} < 2, that is, (μ+1/2)/a < log 2/log κ or, equivalently, μ < a log 2/log κ − 1/2. Since a log 2/log κ > 1 by our choice of κ, it is possible to take some μ > 1/2 satisfying this requirement, thus finishing the proof. □