1 Introduction and motivation
The quadratic variation, or the pathwise volatility, of stochastic processes is of paramount importance in mathematical finance. Indeed, it was the major discovery of the celebrated article by Black and Scholes [8] that the prices of financial derivatives depend only on the volatility of the underlying asset. In the Black–Scholes model of geometric Brownian motion, the volatility simply means the variance. Later, the Brownian model was extended to more general semimartingale models, and Delbaen and Schachermayer [10, 11] gave the final word on the pricing of financial derivatives with semimartingales. In all these models, the volatility simply meant the variance or the semimartingale quadratic variation. Now, due to the important article by Föllmer [13], it is clear that the variance is not the volatility. Instead, one should consider the pathwise quadratic variation. This revelation and its implications for mathematical finance have been studied, for example, in [6, 23].
An important class of pricing models is the mixed Brownian–fractional Brownian model. This is a model where the quadratic variation is determined by the Brownian part and the correlation structure is determined by the fractional Brownian part. Thus, this is a pricing model that captures the long-range dependence while leaving the Black–Scholes pricing formulas intact. The mixed Brownian–fractional Brownian model has been studied in the pricing context, for example, in [1, 5, 7].
By the hedging paradigm, the prices and hedges of financial derivatives depend only on the pathwise quadratic variation of the underlying process. Consequently, the statistical estimation of the quadratic variation is an important problem. One way to estimate the quadratic variation is to use its definition directly via the so-called realized quadratic variation. Although the consistency result (see Section 2.1) does not depend on a specific choice of the sampling scheme, the asymptotic distribution does. There are numerous articles that study the asymptotic behavior of the realized quadratic variation; see [4, 3, 16, 14, 15] and references therein. Another approach, suggested by Dzhaparidze and Spreij [12], is to use the randomized periodogram estimator. In [12], the case of semimartingales was studied. In [2], the randomized periodogram estimator was studied for the mixed Brownian–fractional Brownian model, and the weak consistency of the estimator was proved. This article investigates the asymptotic normality of the randomized periodogram estimator for the mixed Brownian–fractional Brownian model.
The rest of the paper is organized as follows. In Section 2, we briefly introduce the two estimators of the quadratic variation mentioned above. In Section 3, we introduce the stochastic analysis for Gaussian processes needed for our results. In particular, we introduce the Föllmer pathwise calculus and Malliavin calculus. Section 4 contains our main results: the central limit theorem for the randomized periodogram estimator and an associated Berry–Esseen bound. Finally, some technical calculations are deferred to Appendix A and Appendix B.
2 Two methods for estimating quadratic variation
2.1 Using discrete observations: realized quadratic variation
It is well known (see [22, Chapter 6]) that for a semimartingale X the bracket $[X,X]$ can be identified with
\[[X,X]_{t}=\mathbb{P}-\underset{|\pi |\to 0}{\lim }\sum \limits_{t_{k}\in \pi }{(X_{t_{k}}-X_{t_{k-1}})}^{2},\]
where $\pi =\{t_{k}:0=t_{0}<t_{1}<\cdots <t_{n}=t\}$ is a partition of the interval $[0,t]$, $|\pi |=\max \{t_{k}-t_{k-1}:t_{k}\in \pi \}$, and $\mathbb{P}-\lim $ denotes convergence in probability. Statistically speaking, the sum of squared increments (the realized quadratic variation) is a consistent estimator of the bracket as the number of observations tends to infinity. Barndorff-Nielsen and Shephard [3] studied the precision of the realized quadratic variation estimator for a special class of continuous semimartingales and showed that it can be a rather noisy estimator. Hence, one should seek new estimators of the quadratic variation.
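To make the discussion concrete, the following small simulation (an illustrative sketch only, assuming a NumPy environment; the grid sizes are arbitrary) computes the realized quadratic variation of a simulated standard Brownian path over refining uniform partitions, for which $[X,X]_{t}=t$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

def realized_qv(path):
    """Sum of squared increments over the sampling grid."""
    return np.sum(np.diff(path) ** 2)

for n in (10, 100, 1000, 10000):
    # standard Brownian path sampled on a uniform partition of [0, T]
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))))
    print(n, realized_qv(W))   # approaches [W, W]_T = T = 1 as the partition is refined
```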
2.2 Using continuous observations: randomized periodogram
Dzhaparidze and Spreij [12] suggested another characterization of the bracket $[X,X]$. Let ${\mathbb{F}}^{X}$ be the filtration of X, and let τ be a finite stopping time. For $\lambda \in \mathbb{R}$, define the periodogram $I_{\tau }(X;\lambda )$ of X at τ by
(1)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I_{\tau }(X;\lambda ):& \displaystyle =\bigg|{\int _{0}^{\tau }}{e}^{i\lambda s}\mathrm{d}X_{s}{\bigg|}^{2}\\{} & \displaystyle =2\hspace{2.5pt}\textbf{Re}{\int _{0}^{\tau }}{\int _{0}^{t}}{e}^{i\lambda (t-s)}\mathrm{d}X_{s}\mathrm{d}X_{t}+[X,X]_{\tau }\hspace{1em}(\text{by the Itô formula}).\end{array}\]
Let ξ be a symmetric random variable independent of the filtration ${\mathbb{F}}^{X}$ with density $g_{\xi }$ and real characteristic function $\varphi _{\xi }$. For given $L>0$, define the randomized periodogram by
(2)
\[\mathbb{E}_{\xi }I_{\tau }(X;L\xi )=\int _{\mathbb{R}}I_{\tau }(X;Lx)g_{\xi }(x)\mathrm{d}x.\]
If the characteristic function $\varphi _{\xi }$ is of bounded variation, then Dzhaparidze and Spreij have shown the following characterization of the bracket as $L\to \infty $:
(3)
\[\mathbb{E}_{\xi }I_{\tau }(X;L\xi )\stackrel{\mathbb{P}}{\longrightarrow }[X,X]_{\tau }.\]
Recently, the convergence (3) was extended in [2] to a class of stochastic processes that contains, in general, nonsemimartingales. Let $W=\{W_{t}\}_{t\in [0,T]}$ be a standard Brownian motion, and let ${B}^{H}=\{{B_{t}^{H}}\}_{t\in [0,T]}$ be a fractional Brownian motion with Hurst parameter $H\in (\frac{1}{2},1)$, independent of the Brownian motion W. Define the mixed Brownian–fractional Brownian motion $X_{t}$ by
\[X_{t}=W_{t}+{B_{t}^{H}},\hspace{1em}t\in [0,T].\]
Remark 1.
It is known (see [9]) that the process X is an $({\mathbb{F}}^{X},\mathbb{P})$-semimartingale if $H\in (\frac{3}{4},1)$, whereas for $H\in (\frac{1}{2},\frac{3}{4}]$, X is not a semimartingale with respect to its own filtration ${\mathbb{F}}^{X}$. Moreover, in both cases the quadratic variation of X is determined by the Brownian part: for any sequence of partitions ${\pi }^{(n)}$ of $[0,t]$ with $|{\pi }^{(n)}|\to 0$, we have
(4)
\[[X,X]_{t}=\mathbb{P}-\underset{|{\pi }^{(n)}|\to 0}{\lim }\sum \limits_{t_{k}\in {\pi }^{(n)}}{(X_{t_{k}}-X_{t_{k-1}})}^{2}=t.\]
If the partitions in (4) are nested, that is, for each n, we have ${\pi }^{(n)}\subset {\pi }^{(n+1)}$, then the convergence can be strengthened to almost sure convergence. Hereafter, we always assume that the sequences of partitions are nested.
Given $\lambda \in \mathbb{R}$, define the periodogram of X at T as (1), that is,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle I_{T}(X;\lambda )& \displaystyle =\bigg|{\int _{0}^{T}}{e}^{i\lambda t}\mathrm{d}X_{t}{\bigg|}^{2}\\{} & \displaystyle =\bigg|{e}^{i\lambda T}X_{T}-i\lambda {\int _{0}^{T}}X_{t}{e}^{i\lambda t}\mathrm{d}t{\bigg|}^{2}\\{} & \displaystyle ={X_{T}^{2}}+X_{T}{\int _{0}^{T}}i\lambda \big({e}^{i\lambda (T-t)}-{e}^{-i\lambda (T-t)}\big)X_{t}\mathrm{d}t+{\lambda }^{2}\bigg|{\int _{0}^{T}}{e}^{i\lambda t}X_{t}\mathrm{d}t{\bigg|}^{2}.\end{array}\]
Let $(\tilde{\varOmega },\tilde{\mathcal{F}},\tilde{\mathbb{P}})$ be another probability space. We identify the σ-algebra $\mathcal{F}$ with $\mathcal{F}\otimes \{\emptyset ,\tilde{\varOmega }\}$ on the product space $(\varOmega \times \tilde{\varOmega },\mathcal{F}\otimes \tilde{\mathcal{F}},\mathbb{P}\otimes \tilde{\mathbb{P}})$. Let $\xi :\tilde{\varOmega }\to \mathbb{R}$ be a real symmetric random variable with density $g_{\xi }$ and independent of the filtration ${\mathbb{F}}^{X}$. For any positive real number L, define the randomized periodogram $\mathbb{E}_{\xi }I_{T}(X;L\xi )$ as in (2) by
(5)
\[\mathbb{E}_{\xi }I_{T}(X;L\xi )=\int _{\mathbb{R}}I_{T}(X;Lx)g_{\xi }(x)\mathrm{d}x,\]
where the term $I_{T}(X;Lx)$ is understood as before.
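The following sketch (an illustration only, assuming NumPy; the Hurst index, grid size, Monte Carlo size, and the choice $\xi \sim \mathcal{N}(0,1)$ are arbitrary, the fractional path is generated by a Cholesky factorization of the increment covariance, and the stochastic integrals are replaced by left-point Riemann sums) approximates the randomized periodogram estimator (5) for a simulated mixed path and compares it with the bracket $[X,X]_{T}=T$.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, n = 1.0, 0.7, 2000          # horizon, Hurst index, grid size (illustrative)
dt = T / n
t = np.linspace(0.0, T, n + 1)

# fractional Gaussian noise via Cholesky of its stationary covariance, then cumulate
k = np.arange(n)
gamma = 0.5 * dt ** (2 * H) * (np.abs(k + 1) ** (2 * H)
                               - 2.0 * np.abs(k) ** (2 * H)
                               + np.abs(k - 1) ** (2 * H))
cov = gamma[np.abs(k[:, None] - k[None, :])]
BH = np.concatenate(([0.0], np.cumsum(np.linalg.cholesky(cov) @ rng.standard_normal(n))))

# independent standard Brownian motion and the mixed process X = W + B^H
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
X = W + BH
dX = np.diff(X)

def periodogram(lam):
    """Left-point approximation of I_T(X; lam) = |int_0^T e^{i lam s} dX_s|^2."""
    return np.abs(np.sum(np.exp(1j * lam * t[:-1]) * dX)) ** 2

def randomized_periodogram(L, m=2000):
    """Monte Carlo approximation of E_xi I_T(X; L xi) with xi ~ N(0, 1)."""
    xi = rng.standard_normal(m)
    return np.mean([periodogram(L * x) for x in xi])

for L in (10.0, 100.0, 1000.0):
    # decreases slowly toward [X, X]_T = T = 1 as L grows,
    # up to discretization and Monte Carlo error
    print(L, randomized_periodogram(L))
```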
Azmoodeh and Valkeila [2] proved the following theorem.
Theorem 1.
Assume that X is a mixed Brownian–fractional Brownian motion, that $\mathbb{E}_{\xi }I_{T}(X;L\xi )$ is the randomized periodogram given by (5), and that
Then, as $L\to \infty $, we have
3 Stochastic analysis for Gaussian processes
3.1 Pathwise Itô formula
Föllmer [13] obtained a pathwise calculus for continuous functions with finite quadratic variation. The next theorem is essentially due to Föllmer. For a nice exposition and its use in finance, see Sondermann [24].
Theorem 2 ([24]).
Let $X:[0,T]\to \mathbb{R}$ be a continuous process with continuous quadratic variation $[X,X]_{t}$, and let $F\in {C}^{2}(\mathbb{R})$. Then for any $t\in [0,T]$, the limit of the Riemann–Stieltjes sums
\[{\int _{0}^{t}}{F^{\prime }}(X_{s})\mathrm{d}X_{s}:=\underset{|\pi |\to 0}{\lim }\sum \limits_{t_{k}\in \pi }{F^{\prime }}(X_{t_{k-1}})(X_{t_{k}}-X_{t_{k-1}})\]
exists almost surely. Moreover, we have
\[F(X_{t})=F(X_{0})+{\int _{0}^{t}}{F^{\prime }}(X_{s})\mathrm{d}X_{s}+\frac{1}{2}{\int _{0}^{t}}{F^{\prime \prime }}(X_{s})\mathrm{d}[X,X]_{s}.\]
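As a quick numerical illustration of Theorem 2 (a sketch only, assuming NumPy; the Brownian sample path and the test function $F(x)={x}^{3}$ are arbitrary choices), the left-point Riemann–Stieltjes sum plus the pathwise quadratic-variation term reproduces $F(X_{t})-F(X_{0})$ as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.0
F = lambda x: x ** 3          # test function F in C^2
dF = lambda x: 3 * x ** 2     # F'
d2F = lambda x: 6 * x         # F''

for n in (100, 1000, 10000, 100000):
    dt = T / n
    X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    dX = np.diff(X)
    # left-point Riemann-Stieltjes sum plus the pathwise quadratic-variation term
    ito = np.sum(dF(X[:-1]) * dX) + 0.5 * np.sum(d2F(X[:-1]) * dX ** 2)
    print(n, F(X[-1]) - F(X[0]), ito)   # the two columns should agree as the mesh shrinks
```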
The rest of the section contains the essential elements of Gaussian analysis and Malliavin calculus that are used in this paper. See, for instance, Refs. [17, 18] for further details. In what follows, we assume that all the random objects are defined on a complete probability space $(\varOmega ,\mathcal{F},\mathbb{P})$.
3.2 Isonormal Gaussian processes derived from covariance functions
Let $X=\{X_{t}\}_{t\in [0,T]}$ be a centered continuous Gaussian process on the interval $[0,T]$ with $X_{0}=0$ and continuous covariance function $R_{X}(s,t)$. We assume that $\mathcal{F}$ is generated by X. Denote by $\mathcal{E}$ the set of real-valued step functions on $[0,T]$, and let $\mathfrak{H}$ be the Hilbert space defined as the closure of $\mathcal{E}$ with respect to the scalar product
\[\langle \mathbf{1}_{[0,t]},\mathbf{1}_{[0,s]}\rangle _{\mathfrak{H}}=R_{X}(t,s),\hspace{1em}s,t\in [0,T].\]
For example, when X is a Brownian motion, $\mathfrak{H}$ reduces to the Hilbert space ${L}^{2}([0,T],\mathrm{d}t)$. However, in general, $\mathfrak{H}$ is not a space of functions, for example, when X is a fractional Brownian motion with Hurst parameter $H\in (\frac{1}{2},1)$ (see [21]). The mapping $\mathbf{1}_{[0,t]}\longmapsto X_{t}$ can be extended to a linear isometry between $\mathfrak{H}$ and the Gaussian space $\mathcal{H}_{1}$ spanned by the Gaussian process X. We denote this isometry by $\varphi \longmapsto X(\varphi )$; then $\{X(\varphi );\hspace{0.1667em}\varphi \in \mathfrak{H}\}$ is an isonormal Gaussian process in the sense of [18, Definition 1.1.1], that is, it is a Gaussian family with covariance function
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big(X(\varphi _{1})X(\varphi _{2})\big)& \displaystyle =\langle \varphi _{1},\varphi _{2}\rangle _{\mathfrak{H}}\\{} & \displaystyle =\int _{{[0,T]}^{2}}\varphi _{1}(s)\varphi _{2}(t)\mathrm{d}R_{X}(s,t),\hspace{1em}\varphi _{1},\varphi _{2}\in \mathcal{E},\end{array}\]
where $\mathrm{d}R_{X}(s,t)=R_{X}(\mathrm{d}s,\mathrm{d}t)$ stands for the measure induced by the covariance function $R_{X}$ on ${[0,T]}^{2}$. Let $\mathcal{S}$ be the space of smooth and cylindrical random variables of the form
(7)
\[F=f\big(X(\varphi _{1}),\dots ,X(\varphi _{n})\big),\hspace{1em}\varphi _{1},\dots ,\varphi _{n}\in \mathfrak{H},\hspace{0.2778em}n\ge 1,\]
where $f\in {C_{b}^{\infty }}({\mathbb{R}}^{n})$ (f and all its partial derivatives are bounded). For a random variable F of the form (7), we define its Malliavin derivative as the $\mathfrak{H}$-valued random variable
\[DF=\sum \limits_{i=1}^{n}\frac{\partial f}{\partial x_{i}}\big(X(\varphi _{1}),\dots ,X(\varphi _{n})\big)\varphi _{i}.\]
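For instance, for $F=\sin (X(\varphi ))$ with $\varphi \in \mathfrak{H}$, this definition gives $DF=\cos (X(\varphi ))\hspace{0.1667em}\varphi $, an $\mathfrak{H}$-valued random variable.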
By iteration, the mth derivative ${D}^{m}F\in {L}^{2}(\varOmega ;{\mathfrak{H}}^{\otimes m})$ is defined for every $m\ge 2$. For $m\ge 1$, ${\mathbb{D}}^{m,2}$ denotes the closure of $\mathcal{S}$ with respect to the norm $\| \cdot \| _{m,2}$, defined by the relation
\[\| F{\| _{m,2}^{2}}\hspace{0.2778em}=\hspace{0.2778em}\mathbb{E}\big[|F{|}^{2}\big]+\sum \limits_{i=1}^{m}\mathbb{E}\big(\| {D}^{i}F{\| _{{\mathfrak{H}}^{\otimes i}}^{2}}\big).\]
Let δ be the adjoint of the operator D, also called the divergence operator. A random element $u\in {L}^{2}(\varOmega ,\mathfrak{H})$ belongs to the domain of δ, denoted $\mathrm{Dom}(\delta )$, if and only if it satisfies
\[\big|\mathbb{E}\langle DF,u\rangle _{\mathfrak{H}}\big|\le c_{u}\hspace{0.1667em}\| F\| _{{L}^{2}}\]
for any $F\in {\mathbb{D}}^{1,2}$, where $c_{u}$ is a constant depending only on u. If $u\in \mathrm{Dom}(\delta )$, then the random variable $\delta (u)$ is defined by the duality relationship
\[\mathbb{E}\big(F\delta (u)\big)=\mathbb{E}\langle DF,u\rangle _{\mathfrak{H}},\]
which holds for every $F\in {\mathbb{D}}^{1,2}$. The divergence operator δ is also called the Skorokhod integral because when the Gaussian process X is a Brownian motion, it coincides with the anticipating stochastic integral introduced by Skorokhod [18]. We denote $\delta (u)={\int _{0}^{T}}u_{t}\delta X_{t}$.
For every $q\ge 1$, the symbol $\mathcal{H}_{q}$ stands for the qth Wiener chaos of X, defined as the closed linear subspace of ${L}^{2}(\varOmega )$ generated by the family $\{H_{q}(X(h)):h\in \mathfrak{H},\| h\| _{\mathfrak{H}}=1\}$, where $H_{q}$ is the qth Hermite polynomial defined as
(9)
\[H_{q}(x)={(-1)}^{q}{e}^{\frac{{x}^{2}}{2}}\frac{{d}^{q}}{d{x}^{q}}\big({e}^{-\frac{{x}^{2}}{2}}\big).\]
We write by convention $\mathcal{H}_{0}=\mathbb{R}$. For any $q\ge 1$, the mapping ${I_{q}^{X}}({h}^{\otimes q})=H_{q}(X(h))$ can be extended to a linear isometry between the symmetric tensor product ${\mathfrak{H}}^{\odot q}$ (equipped with the modified norm $\sqrt{q!}\| \cdot \| _{{\mathfrak{H}}^{\otimes q}}$) and the qth Wiener chaos $\mathcal{H}_{q}$. For $q=0$, we write by convention ${I_{0}^{X}}(c)=c$, $c\in \mathbb{R}$. For any $h\in {\mathfrak{H}}^{\odot q}$, the random variable ${I_{q}^{X}}(h)$ is called a multiple Wiener–Itô integral of order q. A crucial fact is that if $\mathfrak{H}={L}^{2}(A,\mathcal{A},\nu )$, where ν is a σ-finite and nonatomic measure on the measurable space $(A,\mathcal{A})$, then ${\mathfrak{H}}^{\odot q}={L_{s}^{2}}({\nu }^{q})$, where ${L_{s}^{2}}({\nu }^{q})$ stands for the subspace of ${L}^{2}({\nu }^{q})$ composed of the symmetric functions. Moreover, for every $h\in {\mathfrak{H}}^{\odot q}={L_{s}^{2}}({\nu }^{q})$, the random variable ${I_{q}^{X}}(h)$ coincides with the q-fold multiple Wiener–Itô integral of h with respect to the centered Gaussian measure (with control ν) generated by X (see [18]). We will also use the following central limit theorem for sequences living in a fixed Wiener chaos (see [20, 19]).
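For instance, $H_{1}(x)=x$, $H_{2}(x)={x}^{2}-1$, and $H_{3}(x)={x}^{3}-3x$; thus, for $h\in \mathfrak{H}$ with $\| h\| _{\mathfrak{H}}=1$, the random variable ${I_{2}^{X}}({h}^{\otimes 2})=H_{2}(X(h))=X{(h)}^{2}-1$ is a typical element of the second Wiener chaos $\mathcal{H}_{2}$.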
Theorem 3.
Let $\{F_{n}\}_{n\ge 1}$ be a sequence of random variables in the qth Wiener chaos, $q\ge 2$, such that $\lim _{n\to \infty }\mathbb{E}({F_{n}^{2}})={\sigma }^{2}$. Then, as $n\to \infty $, the following asymptotic statements are equivalent:
To obtain a Berry–Esseen-type estimate, we shall use the following result from [17, Corollary 5.2.10].
Theorem 4.
Let $\{F_{n}\}_{n\ge 1}$ be a sequence of elements in the second Wiener chaos such that $\mathbb{E}({F_{n}^{2}})\to {\sigma }^{2}$ and $\operatorname{Var}\| DF_{n}{\| _{\mathfrak{H}}^{2}}\to 0$ as $n\to \infty $. Then, $F_{n}\stackrel{\textit{law}}{\to }Z\sim \mathcal{N}(0,{\sigma }^{2})$, and
3.3 Isonormal Gaussian process associated with two Gaussian processes
In this subsection, we briefly describe how two Gaussian processes can be embedded into an isonormal Gaussian process. Let $X_{1}$ and $X_{2}$ be two independent centered continuous Gaussian processes with $X_{1}(0)=X_{2}(0)=0$ and continuous covariance functions $R_{X_{1}}$ and $R_{X_{2}}$, respectively. Let $\mathfrak{H}_{1}$ and $\mathfrak{H}_{2}$ denote the associated Hilbert spaces as explained in Section 3.2. The appropriate set $\tilde{\mathcal{E}}$ of elementary functions is the set of functions that can be written as $\varphi (t,i)=\delta _{1i}\varphi _{1}(t)+\delta _{2i}\varphi _{2}(t)$ for $(t,i)\in [0,T]\times \{1,2\}$, where $\varphi _{1},\varphi _{2}\in \mathcal{E}$, and $\delta _{ij}$ is the Kronecker delta. On the set $\tilde{\mathcal{E}}$, we define the inner product
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \langle \varphi ,\psi \rangle _{\tilde{\mathfrak{H}}}:& \displaystyle =\big\langle \varphi (\cdot ,1),\psi (\cdot ,1)\big\rangle _{\mathfrak{H}_{1}}+\big\langle \varphi (\cdot ,2),\psi (\cdot ,2)\big\rangle _{\mathfrak{H}_{2}}\\{} & \displaystyle =\int _{{[0,T]}^{2}}\varphi (s,1)\psi (t,1)\mathrm{d}R_{X_{1}}(s,t)+\int _{{[0,T]}^{2}}\varphi (s,2)\psi (t,2)\mathrm{d}R_{X_{2}}(s,t),\end{array}\]
where $\mathrm{d}R_{X_{i}}(s,t)=R_{X_{i}}(\mathrm{d}s,\mathrm{d}t),\hspace{2.5pt}i=1,2$.
Let $\mathfrak{H}$ denote the Hilbert space that is the completion of $\tilde{\mathcal{E}}$ with respect to the inner product (10). Notice that $\mathfrak{H}\cong \mathfrak{H}_{1}\oplus \mathfrak{H}_{2}$, where $\mathfrak{H}_{1}\oplus \mathfrak{H}_{2}$ is the direct sum of the Hilbert spaces $\mathfrak{H}_{1}$ and $\mathfrak{H}_{2}$, that is, the Hilbert space consisting of ordered pairs $(h_{1},h_{2})\in \mathfrak{H}_{1}\times \mathfrak{H}_{2}$ equipped with the inner product $\langle (h_{1},h_{2}),(g_{1},g_{2})\rangle _{\mathfrak{H}_{1}\oplus \mathfrak{H}_{2}}:=\langle h_{1},g_{1}\rangle _{\mathfrak{H}_{1}}+\langle h_{2},g_{2}\rangle _{\mathfrak{H}_{2}}$.
Now, for any $\varphi \in \tilde{\mathcal{E}}$, we define $X(\varphi ):=X_{1}(\varphi (\cdot ,1))+X_{2}(\varphi (\cdot ,2))$. Using the independence of $X_{1}$ and $X_{2}$, we infer that $\mathbb{E}(X(\varphi )X(\psi ))=\langle \varphi ,\psi \rangle _{\mathfrak{H}}$ for all $\varphi ,\psi \in \tilde{\mathcal{E}}$. Hence, the mapping X can be extended to an isometry on $\mathfrak{H}$, and therefore $\{X(h),\hspace{0.1667em}h\in \mathfrak{H}\}$ defines an isonormal Gaussian process associated to the Gaussian processes $X_{1}$ and $X_{2}$.
3.4 Malliavin calculus with respect to (mixed Brownian) fractional Brownian motion
The fractional Brownian motion ${B}^{H}=\{{B_{t}^{H}}\}_{t\in \mathbb{R}}$ with Hurst parameter $H\in (0,1)$ is a zero-mean Gaussian process with covariance function
(11)
\[\mathbb{E}\big({B_{t}^{H}}{B_{s}^{H}}\big)=R_{H}(s,t)=\frac{1}{2}\big(|t{|}^{2H}+|s{|}^{2H}-|t-s{|}^{2H}\big).\]
Let $\mathfrak{H}$ denote the Hilbert space associated to the covariance function $R_{H}$; see Section 3.2. It is well known that for $H=\frac{1}{2}$, we have $\mathfrak{H}={L}^{2}([0,T])$, whereas for $H>\frac{1}{2}$, we have ${L}^{2}([0,T])\subset {L}^{\frac{1}{H}}([0,T])\subset |\mathfrak{H}|\subset \mathfrak{H}$, where $|\mathfrak{H}|$ is defined as the linear space of measurable functions φ on $[0,T]$ such that
\[\| \varphi {\| _{|\mathfrak{H}|}^{2}}:=\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\big|\varphi (s)\big|\big|\varphi (t)\big||t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t<\infty ,\]
where $\alpha _{H}=H(2H-1)$.
Proposition 1 ([18], Chapter 5).
Let $\mathfrak{H}$ denote the Hilbert space associated to the covariance function $R_{H}$ for $H\in (0,1)$. If $H=\frac{1}{2}$, that is, ${B}^{H}$ is a Brownian motion, then for any $\varphi ,\psi \in \mathfrak{H}={L}^{2}([0,T],\mathrm{d}t)$, the inner product of $\mathfrak{H}$ is given by the well-known Itô isometry
\[\mathbb{E}\big({B}^{\frac{1}{2}}(\varphi ){B}^{\frac{1}{2}}(\psi )\big)=\langle \varphi ,\psi \rangle _{\mathfrak{H}}={\int _{0}^{T}}\varphi (t)\psi (t)\mathrm{d}t.\]
If $H>\frac{1}{2}$, then for any $\varphi ,\psi \in |\mathfrak{H}|$, we have
\[\mathbb{E}\big({B}^{H}(\varphi ){B}^{H}(\psi )\big)=\langle \varphi ,\psi \rangle _{\mathfrak{H}}=\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi (s)\psi (t)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t.\]
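As a small numerical check of the last formula (a sketch only, assuming NumPy; the values of H, t, s, and the grid size are arbitrary), for indicator functions the double integral should reproduce the covariance, since $\langle \mathbf{1}_{[0,t]},\mathbf{1}_{[0,s]}\rangle _{\mathfrak{H}}=R_{H}(s,t)$.

```python
import numpy as np

H = 0.7
alpha = H * (2 * H - 1)                  # alpha_H = H(2H - 1)
t, s, N = 0.8, 0.5, 2000                 # evaluation points and grid size (illustrative)

# midpoint-rule approximation of alpha_H * int_0^t int_0^s |u - v|^{2H-2} du dv
u = (np.arange(N) + 0.5) * t / N
v = (np.arange(N) + 0.5) * s / N
kernel = np.abs(u[:, None] - v[None, :]) ** (2 * H - 2)
lhs = alpha * kernel.sum() * (t / N) * (s / N)

# R_H(s, t) = (|t|^{2H} + |s|^{2H} - |t - s|^{2H}) / 2
rhs = 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

print(lhs, rhs)   # should agree approximately (the midpoint rule is crude near u = v)
```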
The following proposition establishes the link between pathwise integral and Skorokhod integral in Malliavin calculus associated to fractional Brownian motion and will play an important role in our analysis.
Proposition 2 ([18]).
Let $u=\{u_{t}\}_{t\in [0,T]}$ be a stochastic process in the space ${\mathbb{D}}^{1,2}(|\mathfrak{H}|)$ such that almost surely
\[{\int _{0}^{T}}{\int _{0}^{T}}|D_{s}u_{t}||t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t<\infty .\]
Then u is pathwise integrable, and we have
\[{\int _{0}^{T}}u_{t}\mathrm{d}{B_{t}^{H}}={\int _{0}^{T}}u_{t}\delta {B_{t}^{H}}+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}D_{s}u_{t}|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t.\]
For further use, we also need the following ancillary facts related to the isonormal Gaussian process derived from the covariance function of the mixed Brownian–fractional Brownian motion. Assume that $X=W+{B}^{H}$ stands for a mixed Brownian–fractional Brownian motion with $H>\frac{1}{2}$. We denote by $\mathfrak{H}$ the Hilbert space associated to the covariance function of the process X with inner product $\langle \cdot ,\cdot \rangle _{\mathfrak{H}}$. Then a direct application of relation (10) and Proposition 1 yields the following facts. We recall that in what follows the notations ${I_{1}^{X}}$ and ${I_{2}^{X}}$ stand for multiple Wiener integrals of orders 1 and 2 with respect to the isonormal Gaussian process X; see Section 3.2.
Lemma 1.
For any $\varphi ,\psi ,\varphi _{1},\varphi _{2},\psi _{1},\psi _{2}\in {L}^{2}([0,T])$, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({I_{1}^{X}}(\varphi ){I_{1}^{X}}(\psi )\big)& \displaystyle =\langle \varphi ,\psi \rangle _{\mathfrak{H}}\\{} & \displaystyle ={\int _{0}^{T}}\varphi (t)\psi (t)\mathrm{d}t+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi (s)\psi (t)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t.\end{array}\]
Moreover,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \mathbb{E}\big({I_{2}^{X}}(\varphi _{1}\otimes \varphi _{2}){I_{2}^{X}}(\psi _{1}\otimes \psi _{2})\big)\\{} & & \displaystyle \hspace{1em}=2\langle \varphi _{1}\otimes \varphi _{2},\psi _{1}\otimes \psi _{2}\rangle _{{\mathfrak{H}}^{\otimes 2}}\\{} & & \displaystyle \hspace{1em}=\int _{{[0,T]}^{2}}\varphi _{1}(s_{1})\psi _{1}(s_{1})\varphi _{2}(s_{2})\psi _{2}(s_{2})\mathrm{d}s_{1}\mathrm{d}s_{2}\\{} & & \displaystyle \hspace{2em}+\hspace{0.1667em}\alpha _{H}\int _{{[0,T]}^{3}}\varphi _{1}(s_{1})\psi _{1}(s_{1})\varphi _{2}(s_{2})\psi _{2}(t_{2})|t_{2}-s_{2}{|}^{2H-2}\mathrm{d}s_{1}\mathrm{d}s_{2}\mathrm{d}t_{2}\\{} & & \displaystyle \hspace{2em}+\hspace{0.1667em}\alpha _{H}\int _{{[0,T]}^{3}}\varphi _{1}(s_{1})\psi _{1}(t_{1})\varphi _{2}(s_{2})\psi _{2}(s_{2})|t_{1}-s_{1}{|}^{2H-2}\mathrm{d}s_{1}\mathrm{d}t_{1}\mathrm{d}s_{2}\\{} & & \displaystyle \hspace{2em}+\hspace{0.1667em}{\alpha _{H}^{2}}\int _{{[0,T]}^{4}}\varphi _{1}(s_{1})\psi _{1}(t_{1})\varphi _{2}(s_{2})\psi _{2}(t_{2})\\{} & & \displaystyle \hspace{2em}\hspace{0.1667em}\times |t_{1}-s_{1}{|}^{2H-2}|t_{2}-s_{2}{|}^{2H-2}\mathrm{d}s_{1}\mathrm{d}t_{1}\mathrm{d}s_{2}\mathrm{d}t_{2}.\end{array}\]
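For instance, taking $\varphi =\psi =\mathbf{1}_{[0,T]}$ in the first formula of Lemma 1 gives $\mathbb{E}({X_{T}^{2}})=T+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t=T+{T}^{2H}$, in accordance with the independence of W and ${B}^{H}$.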
4 Main results
Throughout this section, we assume that $X=W+{B}^{H}$ is a mixed Brownian–fractional Brownian motion with $H>\frac{1}{2}$, unless otherwise stated. We denote by $\mathfrak{H}$ the Hilbert space associated to process X with inner product $\langle \cdot ,\cdot \rangle _{\mathfrak{H}}$.
4.1 Central limit theorem
We start with the following fact, which is one of our key ingredients.
Lemma 2 ([2]).
Let $\mathbb{E}{\xi }^{2}<\infty $. Then the randomized periodogram of the mixed Brownian–fractional Brownian motion X given by (5) satisfies
(13)
\[\mathbb{E}_{\xi }I_{T}(X;L\hspace{0.1667em}\xi )=[X,X]_{T}+2{\int _{0}^{T}}{\int _{0}^{t}}\varphi _{\xi }\big(L(t-s)\big)\mathrm{d}X_{s}\mathrm{d}X_{t},\]
where $\varphi _{\xi }$ is the characteristic function of ξ, and the iterated stochastic integral on the right-hand side is understood pathwise, that is, as the limit of Riemann–Stieltjes sums; see Section 3.1.
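For instance, if ξ is a standard normal random variable, then $\varphi _{\xi }(x)={e}^{-{x}^{2}/2}$, and the correction term in (13) involves the kernel ${e}^{-{L}^{2}{(t-s)}^{2}/2}$, which concentrates on the diagonal $s=t$ as $L\to \infty $; this indicates why the randomized periodogram recovers the bracket $[X,X]_{T}$ in the limit.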
Our first aim is to transform the pathwise integral in (13) into a Skorokhod integral. This is the topic of the next lemma.
Lemma 3.
Let $u_{t}={\int _{0}^{t}}\varphi _{\xi }(L(t-s))\mathrm{d}X_{s}$, where $\varphi _{\xi }$ denotes the characteristic function of a symmetric random variable ξ. Then $u\in \mathrm{Dom}(\delta )$, and
\[{\int _{0}^{T}}u_{t}\mathrm{d}X_{t}={\int _{0}^{T}}u_{t}\delta X_{t}+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}{D_{s}^{({B}^{H})}}u_{t}|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t,\]
where the stochastic integral in the right-hand side is the Skorokhod integral with respect to mixed Brownian–fractional Brownian motion X, and ${D}^{({B}^{H})}$ denotes the Malliavin derivative operator with respect to the fractional Brownian motion ${B}^{H}$.
Proof.
First, note that
\[u_{t}={u_{t}^{W}}+{u_{t}^{{B}^{H}}}={\int _{0}^{t}}\varphi _{\xi }\big(L(t-s)\big)\mathrm{d}W_{s}+{\int _{0}^{t}}\varphi _{\xi }\big(L(t-s)\big)\mathrm{d}{B_{s}^{H}}.\]
Moreover, $\mathbb{E}({\int _{0}^{T}}{u_{t}^{2}}\mathrm{d}t)<\infty $, so that $u_{t}\in {\mathbb{D}}^{1,2}$ for almost all $t\in [0,T]$ and $\mathbb{E}(\int _{{[0,T]}^{2}}{(D_{s}u_{t})}^{2}\mathrm{d}s\mathrm{d}t)<\infty $. Hence, $u\in \mathrm{Dom}(\delta )$ by [18, Proposition 1.3.1]. On the other hand,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\int _{0}^{T}}u_{t}\mathrm{d}X_{t}& \displaystyle ={\int _{0}^{T}}u_{t}\mathrm{d}W_{t}+{\int _{0}^{T}}u_{t}\mathrm{d}{B_{t}^{H}}\\{} & \displaystyle ={\int _{0}^{T}}{u_{t}^{W}}\mathrm{d}W_{t}+{\int _{0}^{T}}{u_{t}^{{B}^{H}}}\mathrm{d}W_{t}+{\int _{0}^{T}}{u_{t}^{W}}\mathrm{d}{B_{t}^{H}}+{\int _{0}^{T}}{u_{t}^{{B}^{H}}}\mathrm{d}{B_{t}^{H}}\\{} & \displaystyle ={\int _{0}^{T}}{u_{t}^{W}}\delta W_{t}+{\int _{0}^{T}}{u_{t}^{{B}^{H}}}\delta W_{t}+{\int _{0}^{T}}{u_{t}^{W}}\delta {B_{t}^{H}}+{\int _{0}^{T}}{u_{t}^{{B}^{H}}}\delta {B_{t}^{H}}\\{} & \displaystyle \hspace{1em}+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}{D_{s}^{({B}^{H})}}{u_{t}^{{B}^{H}}}|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\\{} & \displaystyle ={\int _{0}^{T}}u_{t}\delta W_{t}+{\int _{0}^{T}}u_{t}\delta {B_{t}^{H}}+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}{D_{s}^{({B}^{H})}}u_{t}|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t,\end{array}\]
where we have used the independence of W and ${B}^{H}$, Proposition 2, and the fact that for adapted integrands, the Skorokhod integral coincides with the Itô integral. To finish the proof, we use the very definition of the Skorokhod integral and relation (8) to obtain that ${\int _{0}^{T}}u_{t}\delta W_{t}+{\int _{0}^{T}}u_{t}\delta {B_{t}^{H}}={\int _{0}^{T}}u_{t}\delta X_{t}$.  □
We will also impose the following assumption on the characteristic function $\varphi _{\xi }$ of a symmetric random variable ξ.
Remark 2.
Note that Assumption 1 is satisfied for many distributions. In particular, if the characteristic function $\varphi _{\xi }$ is positive and the density function $g_{\xi }(x)$ is differentiable, then Assumption 1 can be verified by applying Fubini’s theorem and integration by parts.
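For instance, the standard normal and the standard Cauchy distributions, with characteristic functions $\varphi _{\xi }(t)={e}^{-{t}^{2}/2}$ and $\varphi _{\xi }(t)={e}^{-|t|}$, have positive characteristic functions and differentiable densities; moreover, for both of them, the integrals ${\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(x)\mathrm{d}x$ and ${\int _{0}^{\infty }}\varphi _{\xi }(x){x}^{2H-2}\mathrm{d}x$ appearing below are finite for every $H\in (\frac{1}{2},1)$.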
We continue with the following technical lemma, which in fact provides a correct normalization for our central limit theorems.
Lemma 4.
Consider the symmetric two-variable function $\psi _{L}(s,t):=\varphi _{\xi }(L|t-s|)$ on $[0,T]\times [0,T]$. Then $\psi _{L}\in {\mathfrak{H}}^{\otimes 2}$, and moreover, as $L\to \infty $, we have
(14)
\[\underset{L\to \infty }{\lim }L\| \psi _{L}{\| _{{\mathfrak{H}}^{\otimes 2}}^{2}}={\sigma _{T}^{2}}<\infty ,\]
where ${\sigma _{T}^{2}}:=2\hspace{0.1667em}T{\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(x)\mathrm{d}x$ is independent of the Hurst parameter H.
Proof.
Throughout the proof, C denotes an unimportant constant depending on T and H, which may vary from line to line. First, note that $\psi _{L}\in {\mathfrak{H}}^{\otimes 2}$ clearly holds since $\psi _{L}$ is a bounded function. In order to prove (14), we show that, as $L\to \infty $, $\| \psi _{L}{\| _{{\mathfrak{H}}^{\otimes 2}}^{2}}\sim {\sigma _{T}^{2}}/L$. Next, by applying Lemma 1 we obtain $\| \psi _{L}{\| _{{\mathfrak{H}}^{\otimes 2}}^{2}}=A_{1}+A_{2}+A_{3}$, where
(15)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle A_{1}& \displaystyle :=\int _{{[0,T]}^{2}}{\varphi _{\xi }^{2}}\big(L|t-s|\big)\mathrm{d}t\mathrm{d}s,\end{array}\]
and $A_{2}$ and $A_{3}$ denote the corresponding mixed and purely fractional terms in Lemma 1. First, we show that $A_{1}\sim \frac{1}{L}$. By the change of variables $y=\frac{L}{T}s$ and $x=\frac{L}{T}t$ we obtain
\[A_{1}=\frac{{T}^{2}}{{L}^{2}}{\int _{0}^{L}}{\int _{0}^{L}}{\varphi _{\xi }^{2}}\big(T|x-y|\big)\mathrm{d}x\mathrm{d}y.\]
Now, by applying L’Hôpital’s rule and some elementary computations we obtain that
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \underset{L\to \infty }{\lim }{L}^{-1}{\int _{0}^{L}}{\int _{0}^{L}}{\varphi _{\xi }^{2}}\big(T|x-y|\big)\mathrm{d}x\mathrm{d}y& \displaystyle =\underset{L\to \infty }{\lim }2{\int _{0}^{L}}{\varphi _{\xi }^{2}}\big(T(L-x)\big)\mathrm{d}x\\{} & \displaystyle =\frac{2}{T}{\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(y)\mathrm{d}y,\end{array}\]
which is finite by Assumption 1. Consequently, we get
\[\underset{L\to \infty }{\lim }LA_{1}=2T{\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(y)\mathrm{d}y,\]
or, in other words, $A_{1}\sim {L}^{-1}$. To complete the proof, it is shown in Appendix B that $\lim _{L\to \infty }L(A_{2}+A_{3})=0$.  □
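The normalization in Lemma 4 can also be checked numerically in the simplest case (a sketch only, assuming NumPy; the standard normal choice of ξ is illustrative): for $\varphi _{\xi }(x)={e}^{-{x}^{2}/2}$, the limit $2T{\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(x)\mathrm{d}x$ equals $T\sqrt{\pi }$.

```python
import numpy as np

T = 1.0
phi2 = lambda x: np.exp(-x ** 2)                 # phi_xi^2 for a standard normal xi
target = T * np.sqrt(np.pi)                      # 2T * int_0^infty e^{-x^2} dx = T * sqrt(pi)

N = 200000
du = T / N
u = (np.arange(N) + 0.5) * du                    # midpoint grid on [0, T]

for L in (1.0, 10.0, 100.0, 1000.0):
    # A_1 = int_{[0,T]^2} phi_xi^2(L|t-s|) dt ds = 2 * int_0^T (T - u) phi_xi^2(L u) du
    A1 = 2.0 * np.sum((T - u) * phi2(L * u)) * du
    print(L, L * A1, target)                     # L * A_1 approaches T * sqrt(pi)
```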
We also apply the following proposition, whose proof is rather technical and is postponed to Appendix A.
Proposition 3.
Our main theorem is the following.
Theorem 5.
Proof.
First, by applying Lemmas 2 and 3 we can write
\[\mathbb{E}_{\xi }I_{T}(X;L\xi )-[X,X]_{T}={I_{2}^{X}}(\psi _{L})+\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t.\]
Consequently, we obtain
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sqrt{L}\big(\mathbb{E}_{\xi }I_{T}(X;L\xi )-[X,X]_{T}\big)\\{} & \displaystyle \hspace{1em}=\sqrt{L}\hspace{0.1667em}{I_{2}^{X}}(\psi _{L})+\sqrt{L}\hspace{0.1667em}\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\\{} & \displaystyle \hspace{1em}:=A_{1}+A_{2}.\end{array}\]
Now, thanks to Proposition 3, for any $H\in (\frac{1}{2},1)$, we have
\[A_{1}=\sqrt{L}\hspace{0.1667em}\| \psi _{L}\| _{{\mathfrak{H}}^{\otimes 2}}{I_{2}^{X}}(\tilde{\psi }_{L})\stackrel{\text{law}}{\to }\mathcal{N}\big(0,{\sigma _{T}^{2}}\big),\]
where ${\sigma _{T}^{2}}$ is given by (14). Hence, it remains to study the term $A_{2}$. Using the change of variables $y=\frac{L}{T}s$ and $x=\frac{L}{T}t$, we obtain
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\\{} & \displaystyle \hspace{1em}={T}^{2H}{L}^{-2H}{\int _{0}^{L}}{\int _{0}^{L}}\varphi _{\xi }\big(T|x-y|\big)|x-y{|}^{2H-2}\mathrm{d}x\mathrm{d}y,\end{array}\]
where by L’Hôpital’s rule we obtain
\[\underset{L\to \infty }{\lim }{L}^{-1}{\int _{0}^{L}}{\int _{0}^{L}}\varphi _{\xi }\big(T|x-y|\big)|x-y{|}^{2H-2}\mathrm{d}x\mathrm{d}y=2{T}^{1-2H}{\int _{0}^{\infty }}\varphi _{\xi }(x){x}^{2H-2}\mathrm{d}x.\]
Note also that the integral on the right-hand side of the last identity is finite by Assumption 1. Consequently, we obtain
(18)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{L\to \infty }{\lim }{L}^{2H-1}\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\\{} & \displaystyle \hspace{1em}=2\alpha _{H}T{\int _{0}^{\infty }}\varphi _{\xi }(x){x}^{2H-2}\mathrm{d}x=\mu .\end{array}\]
Therefore,
\[A_{2}=\sqrt{L}\hspace{0.1667em}\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\sim \mu \hspace{0.1667em}{L}^{\frac{3}{2}-2H},\]
which converges to zero for $H\in (\frac{3}{4},1)$, and item 1 of the claim is proved. Similarly, for $H=\frac{3}{4}$, we obtain
\[A_{2}\longrightarrow \mu \hspace{1em}\text{as}\hspace{2.5pt}L\to \infty ,\]
which proves item 2 of the claim. Finally, for item 3, from (18) we infer that, as $L\to \infty $,
\[{L}^{2H-1}\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\longrightarrow \mu .\]
Furthermore, for the term ${I_{2}^{X}}(\psi _{L})$, we obtain
\[{L}^{2H-1}\hspace{0.1667em}{I_{2}^{X}}(\psi _{L})={L}^{2H-\frac{3}{2}}\times \sqrt{L}\hspace{0.1667em}{I_{2}^{X}}(\psi _{L})\stackrel{\mathbb{P}}{\to }0\]
as $L\to \infty $. This is because $H<\frac{3}{4}$ implies $2H-\frac{3}{2}<0$, and moreover $\sqrt{L}\hspace{0.1667em}{I_{2}^{X}}(\psi _{L})\stackrel{\text{law}}{\to }\mathcal{N}(0,{\sigma _{T}^{2}})$ and ${L}^{2H-\frac{3}{2}}\to 0$.  □
Corollary 1.
When $X=W$ is a standard Brownian motion, that is, when the fractional Brownian motion part is absent, then by arguments similar to those in the proof of Theorem 5 we obtain
\[\sqrt{L}\big(\mathbb{E}_{\xi }I_{T}(X;L\hspace{0.1667em}\xi )-[X,X]_{T}\big)\stackrel{\textit{law}}{\longrightarrow }\mathcal{N}\big(0,{\sigma _{T}^{2}}\big),\]
where ${\sigma _{T}^{2}}=2T{\int _{0}^{\infty }}{\varphi _{\xi }^{2}}(x)\mathrm{d}x$, and $\varphi _{\xi }$ is the characteristic function of ξ.
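For instance, if ξ is standard normal, then $\varphi _{\xi }(x)={e}^{-{x}^{2}/2}$ and ${\sigma _{T}^{2}}=2T{\int _{0}^{\infty }}{e}^{-{x}^{2}}\mathrm{d}x=T\sqrt{\pi }$, whereas if ξ is standard Cauchy, then $\varphi _{\xi }(x)={e}^{-|x|}$ and ${\sigma _{T}^{2}}=2T{\int _{0}^{\infty }}{e}^{-2x}\mathrm{d}x=T$.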
Remark 4.
Note that the proof of Theorem 5 reveals that in the case $H\in (\frac{1}{2},\frac{3}{4})$, for any $\epsilon >\frac{3}{2}-2H$, we have that, as $L\to \infty $,
and, moreover,
4.2 The Berry–Esseen estimates
As a consequence of the proof of Theorem 5, we also obtain the following Berry–Esseen bound for the semimartingale case.
Proposition 4.
Let all the assumptions of Theorem 5 hold, and let $H\in (\frac{3}{4},1)$. Furthermore, let $Z\sim \mathcal{N}(0,{\sigma _{T}^{2}})$, where the variance ${\sigma _{T}^{2}}$ is given by (14). Then there exists a constant C (independent of $L$) such that for sufficiently large L, we have
where
Proof.
By the proof of Theorem 5, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sqrt{L}\big(\mathbb{E}_{\xi }I_{T}(X;L\xi )-[X,X]_{T}\big)\\{} & \displaystyle \hspace{1em}=\sqrt{L}\hspace{0.1667em}{I_{2}^{X}}(\psi _{L})+\sqrt{L}\hspace{0.1667em}\alpha _{H}{\int _{0}^{T}}{\int _{0}^{T}}\varphi _{\xi }\big(L|t-s|\big)|t-s{|}^{2H-2}\mathrm{d}s\mathrm{d}t\\{} & \displaystyle \hspace{1em}=:A_{1}+A_{2},\end{array}\]
where
\[A_{1}=\sqrt{2L}\| \psi _{L}\| _{{\mathfrak{H}}^{\otimes 2}}\hspace{0.1667em}{I_{2}^{X}}(\tilde{\psi }_{L}).\]
Now, we know that the deterministic term $A_{2}$ converges to zero with rate ${L}^{\frac{3}{2}-2H}$ and the term $A_{1}\stackrel{\text{law}}{\to }\mathcal{N}(0,{\sigma _{T}^{2}})$. Hence, in order to complete the proof, it is sufficient to show that
Now, by using the proof of Proposition 3 in Appendix A we have
\[\sqrt{\operatorname{Var}\| DF_{L}{\| _{\mathfrak{H}}^{2}}}\le {L}^{-\frac{1}{2}}\le {L}^{\frac{3}{2}-2H}.\]
Finally, using the notation of the proof of Lemma 4, we have
\[\mathbb{E}\big({F_{L}^{2}}\big)=L\hspace{0.1667em}\| \psi _{L}{\| _{{\mathfrak{H}}^{\otimes 2}}^{2}}=L\times (A_{1}+A_{2}+A_{3}),\]
where $A_{2}+A_{3}\le C{L}^{-2H}$. Consequently,
To complete the proof, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle LA_{1}& \displaystyle =\frac{{T}^{2}}{L}{\int _{0}^{L}}{\int _{0}^{L}}{\varphi _{\xi }^{2}}\big(T|x-y|\big)\mathrm{d}y\mathrm{d}x=\frac{{T}^{2}}{L}{\int _{0}^{L}}{\int _{-x}^{L-x}}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}z\mathrm{d}x\\{} & \displaystyle =\frac{{T}^{2}}{L}{\int _{-L}^{L}}{\int _{-z}^{L-z}}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}x\mathrm{d}z={T}^{2}{\int _{-L}^{L}}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}z\\{} & \displaystyle =2{T}^{2}{\int _{0}^{L}}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}z.\end{array}\]
This gives us
Now, the claim follows by an application of Theorem 4.  □
Remark 5.
In many cases of interest, the leading term in $\rho (L)$ is the polynomial term ${L}^{\frac{3}{2}-2H}$, which reveals that the particular choice of $\varphi _{\xi }$ affects only the constant. In particular, if $\varphi _{\xi }$ admits an exponential decay, that is, $|\varphi _{\xi }(t)|\le C_{1}{e}^{-C_{2}t}$ for some constants $C_{1},C_{2}>0$, then ${\int _{L}^{\infty }}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}z\le C_{3}{e}^{-C_{4}L}\le C{L}^{\frac{3}{2}-2H}$ for some constants $C_{3},C_{4},C>0$. As examples, this is the case if ξ is a standard normal random variable with characteristic function $\varphi _{\xi }(t)={e}^{-\frac{{t}^{2}}{2}}$ or if ξ is a standard Cauchy random variable with characteristic function $\varphi _{\xi }(t)={e}^{-|t|}$.
Remark 6.
Consider the case $X=W$, that is, X is a standard Brownian motion. In this case, the correction term $A_{2}$ in the proof of Theorem 5 disappears, and we have
\[\mathbb{E}\big({F_{L}^{2}}\big)-{\sigma _{T}^{2}}=2{T}^{2}{\int _{L}^{\infty }}{\varphi _{\xi }^{2}}(Tx)\mathrm{d}x.\]
Furthermore, by applying L’Hôpital’s rule twice and some elementary computations it can be shown that
\[\mathbb{E}{\big[\| DF_{L}{\| _{\mathfrak{H}}^{2}}-\mathbb{E}\| DF_{L}{\| _{\mathfrak{H}}^{2}}\big]}^{2}\le \big|\varphi _{\xi }(TL)\big|\hspace{0.1667em}{L}^{-1}.\]
Consequently, in this case, we obtain the Berry–Esseen bound
\[\underset{x\in \mathbb{R}}{\sup }\big|\mathbb{P}\big(\sqrt{L}\big(\mathbb{E}_{\xi }I_{T}(X;L\hspace{0.1667em}\xi )-[X,X]_{T}\big)<x\big)-\mathbb{P}(Z<x)\big|\le C\rho (L),\]
where
\[\rho (L)=\max \Bigg\{\sqrt{\big|\varphi _{\xi }(TL)\big|{L}^{-1}},{\int _{L}^{\infty }}{\varphi _{\xi }^{2}}(Tz)\mathrm{d}z\Bigg\},\]
which is in fact better in many cases of interest. For example, if $\varphi _{\xi }$ admits an exponential decay, then we obtain $\rho (L)\le {e}^{-cL}$ for some constant c.