1 Introduction
A fractional Brownian motion $\{{B}^{H}(t),t\ge 0\}$ is a centered Gaussian stochastic process with covariance function
\[\mathbf{E}\big[{B}^{H}(t){B}^{H}(u)\big]=\frac{1}{2}\big({t}^{2H}+{u}^{2H}-|t-u{|}^{2H}\big),\hspace{1em}t,u\ge 0,\]
where $H\in (0,1)$ is the Hurst index. The fractional Brownian motion is a self-similar process with index H, that is, for any $a>0$,
\[\big\{{B}^{H}(at),t\ge 0\big\}\stackrel{d}{=}\big\{{a}^{H}{B}^{H}(t),t\ge 0\big\},\]
where $\stackrel{d}{=}$ means the equality of finite-dimensional distributions. Due to self-similarity, we have that, for all $T>0$,
\[\big\{{B}^{H}(t),t\in [0,T]\big\}\stackrel{d}{=}\big\{{T}^{H}{B}^{H}\big({T}^{-1}t\big),t\in [0,T]\big\}=\big\{{T}^{H}{B}^{H}(s),s\in [0,1]\big\}.\]
Because of this invariance of distributions, it suffices to investigate the properties of the fractional Brownian motion over the time interval $[0,1]$. In this paper, we consider the behavior of the maximum functional $\max _{t\in [0,1]}{B}^{H}(t)$ for small values of the Hurst index.
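As a side check (ours, not from the paper), the covariance function above reproduces the self-similarity scaling exactly, since $\mathbf{E}[{B}^{H}(at){B}^{H}(au)]={a}^{2H}\mathbf{E}[{B}^{H}(t){B}^{H}(u)]$; the function name below is ours:

```python
import math

def fbm_cov(t: float, u: float, h: float) -> float:
    """Covariance E[B^H(t) B^H(u)] of fractional Brownian motion."""
    return 0.5 * (t**(2*h) + u**(2*h) - abs(t - u)**(2*h))

# Self-similarity at the covariance level:
# E[B^H(at) B^H(au)] = a^{2H} * E[B^H(t) B^H(u)] for any a > 0.
h, a = 0.1, 3.7
for t, u in [(0.2, 0.9), (1.0, 1.0), (0.5, 2.5)]:
    lhs = fbm_cov(a * t, a * u, h)
    rhs = a**(2*h) * fbm_cov(t, u, h)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```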
It should be noted that the fractional Brownian motion with $H=1/2$ is the Wiener process $\{W(t),t\ge 0\}$. The distribution of $\max _{t\in [0,1]}W(t)$ is known. Namely,
\[\mathbf{P}\Big(\underset{t\in [0,1]}{\max }W(t)\le x\Big)=2\varPhi (x)-1,\hspace{1em}x\ge 0,\]
and, therefore,
\[\mathbf{E}\underset{t\in [0,1]}{\max }W(t)=\sqrt{2/\pi }.\]
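The value $\sqrt{2/\pi }\approx 0.7979$ is easy to confirm by a crude simulation (our own sketch, not from the paper; the discrete grid slightly underestimates the continuous maximum):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 4000, 4096

# Wiener increments on a grid of mesh 1/n_steps, then cumulative sums.
increments = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
paths = np.cumsum(increments, axis=1)

estimate = paths.max(axis=1).mean()
exact = np.sqrt(2 / np.pi)      # E max_{[0,1]} W(t) = sqrt(2/pi)
```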
Many papers are devoted to the distribution of the maximum functional of the fractional Brownian motion, where usually asymptotic properties for large values of the time horizon T are considered. For example, Molchan [5] found the asymptotic behavior of small-ball probabilities for the maximum of the fractional Brownian motion. Talagrand [8] obtained lower bounds for the expected maximum of the fractional Brownian motion. In several works, the distribution of the maximum is investigated when the Hurst index H is close to $1/2$. In particular, this case was considered by Sinai [6] and recently by Delorme and Wiese [4].
Currently, an analytical expression for the distribution of the maximum of the fractional Brownian motion remains unknown. Moreover, the exact value of the expectation of such a functional is unknown too.
From the paper of Borovkov et al. [1] we know the following bounds:
(1)
\[\frac{1}{2\sqrt{H\pi e\ln 2}}\le \mathbf{E}\underset{t\in [0,1]}{\max }{B}^{H}(t)<\frac{16.3}{\sqrt{H}}.\]
On the other hand, we may get an approximate value of the expected maximum using Monte Carlo simulations. That is, for sufficiently large N,
(2)
\[\mathbf{E}\underset{t\in [0,1]}{\max }{B}^{H}(t)\approx \mathbf{E}\underset{i=\overline{1,N}}{\max }{B}^{H}(i/N).\]
The authors of [1] also obtained an upper bound for the error $\varDelta _{N}$ of approximation (2), valid for $N\ge {2}^{1/H}$.
The implementation of approximation (2) has technical limitations. Due to modern computer capabilities, we assume that $N\le {2}^{20}\approx {10}^{6}$. Under such conditions, inequality (4) is true when $H\ge 0.05$, and $\varDelta _{N}<11.18$.
In this article, we make Monte Carlo simulations and estimate $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$. Also, we investigate the behavior of $\varDelta _{N}$ for small values of the Hurst index H and show that, for a fixed N, the approximation error $\varDelta _{N}\to +\infty $ as $H\to 0$. For the rate of this convergence, when $N={2}^{20}$, we prove the inequality $\varDelta _{N}>c_{1}{H}^{-1/2}-c_{2}$, $H\in (0,1)$, where the constants $c_{1}=0.2055$ and $c_{2}=3.4452$ are calculated numerically. Thus, for small values of H, approximation (2) is not appropriate for evaluating $\mathbf{E}\max _{t\in [0,1]}{B}^{H}(t)$.
The article is organized as follows. Section 2 presents the computational methodology. Section 3 presents the results of computing the expected maximum of the fractional Brownian motion. In Section 4, we obtain a lower bound for the error $\varDelta _{N}$ and calculate the constants $c_{1}$ and $c_{2}$.
2 Methods of approximate calculations
2.1 Simulation of a vector $({B}^{H}(i/N))_{1\le i\le N}$
Let us consider briefly the method proposed by Wood and Chan [9]. Let G be the autocovariance matrix of $({B}^{H}(1/N),\dots ,{B}^{H}(N/N))$. Embed G in a circulant $m\times m$ matrix C.
Proposition 1.
Let $m={2}^{1+\nu }$, where ${2}^{\nu }$ is the minimum power of 2 not less than N. Then the matrix C allows a representation $C=QJ{Q}^{T}$, where J is a diagonal matrix of eigenvalues of the matrix C, and Q is the unitary matrix with elements
\[(Q)_{j,k}=\frac{1}{\sqrt{m}}c_{j}\exp \bigg(-2i\pi \frac{jk}{m}\bigg),\hspace{1em}j,k=\overline{0,m-1}.\]
The eigenvalues $\lambda _{k}$ of the matrix C are equal to
\[\lambda _{k}={\sum _{j=0}^{m-1}}(C)_{0,j}\exp \bigg(-2i\pi \frac{jk}{m}\bigg),\hspace{1em}k=\overline{0,m-1}.\]
Since Q is unitary, we can set $Y=Q{J}^{1/2}{Q}^{T}Z$, where $Z\sim N(0,I_{m})$. Therefore, we get $Y\sim N(0,C)$. Thus, the distributions of the vectors $(Y_{0},Y_{0}+Y_{1},\dots ,Y_{0}+\cdots +Y_{N-1})$ and $({B}^{H}(1/N),\dots ,{B}^{H}(N/N))$ coincide.
The method of Wood and Chan is exact and has complexity $O(N\log N)$. A more detailed description of the algorithm, a comparison with other methods of simulating the fractional Brownian motion, and program code are contained in the paper [3]. For computational efficiency, the simulations in the present paper are performed by the method of Wood and Chan.
The estimate of the mean value $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$ is a sample mean over a sample of size n; hence, the total complexity of the algorithm is $O(nN\log N)$.
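The construction can be sketched as follows. This is our own simplified FFT implementation in the spirit of Wood and Chan, operating on the increments of ${B}^{H}$ (fractional Gaussian noise) rather than on G directly; the function name and parameters are ours:

```python
import numpy as np

def fbm_sample(n_paths: int, N: int, H: float, seed: int = 0) -> np.ndarray:
    """Sample paths of (B^H(1/N), ..., B^H(N/N)) via circulant embedding."""
    # Autocovariance of unit-spaced fractional Gaussian noise.
    k = np.arange(N + 1)
    r = 0.5 * (np.abs(k - 1)**(2*H) - 2 * k**(2*H) + (k + 1)**(2*H))
    # First row of the circulant matrix C of size m = 2N.
    c = np.concatenate([r, r[-2:0:-1]])
    lam = np.fft.fft(c).real
    lam = np.maximum(lam, 0.0)       # guard against tiny negative round-off
    m = c.size
    rng = np.random.default_rng(seed)
    zeta = (rng.standard_normal((n_paths, m))
            + 1j * rng.standard_normal((n_paths, m))) / np.sqrt(2)
    y = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * zeta, axis=1)
    fgn = np.sqrt(2) * y.real[:, :N]  # N(0, C) restricted to the first N coords
    # Increments at mesh 1/N scale by N^{-H}; partial sums give B^H(i/N).
    return np.cumsum(fgn, axis=1) / N**H

paths = fbm_sample(n_paths=2000, N=256, H=0.3, seed=42)
var_end = paths[:, -1].var()          # should be close to Var B^H(1) = 1
```

The sampled endpoint $B^{H}(1)$ has unit variance by construction, which gives a cheap correctness check.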
2.2 Clark’s method
Instead of generating samples and computing sample means, one can use the method of Clark [2] for approximating the expected maximum.
In this method, the first four moments of the random variable $\max \{\xi _{1},\dots ,\xi _{N}\}$, where $(\xi _{1},\dots ,\xi _{N})$ is a Gaussian vector, are calculated approximately. Since the fractional Brownian motion is a Gaussian process, we put $(\xi _{1},\dots ,\xi _{N})=({B}^{H}(1/N),\dots ,{B}^{H}(N/N))$ and apply Clark’s method to the approximate computation of $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$.
Let us illustrate the basic idea of Clark’s method by calculating $\mathbf{E}\max \{\xi ,\eta ,\tau \}$, where $\xi ,\eta ,\tau $ are Gaussian random variables.
Proposition 2.
Let $\xi ,\eta $ be jointly Gaussian random variables. Put ${a}^{2}=\mathbf{Var}(\xi )+\mathbf{Var}(\eta )-2\mathbf{Cov}(\xi ,\eta )$ and let $a>0$. Denote $\alpha :=(\mathbf{E}\xi -\mathbf{E}\eta )/a$. Then we have
(5)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{E}\max \{\xi ,\eta \}& \displaystyle =\varPhi (\alpha )\mathbf{E}\xi +\varPhi (-\alpha )\mathbf{E}\eta +a\varphi (\alpha );\\{} \displaystyle \mathbf{E}{\big(\max \{\xi ,\eta \}\big)}^{2}& \displaystyle =\varPhi (\alpha )\mathbf{E}{\xi }^{2}+\varPhi (-\alpha )\mathbf{E}{\eta }^{2}+a\varphi (\alpha )(\mathbf{E}\xi +\mathbf{E}\eta ),\end{array}\]
where $\varphi (x)=\frac{1}{\sqrt{2\pi }}\exp (-\frac{{x}^{2}}{2})$ and $\varPhi (x)={\int _{-\infty }^{x}}\varphi (t)dt$.
So, the exact values of the first two moments of $\max \{\xi ,\eta \}$ are obtained from this proposition.
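Formulas (5) translate directly into code. The sketch below (ours) assumes the standard Clark parameterization ${a}^{2}=\mathbf{Var}(\xi )+\mathbf{Var}(\eta )-2\mathbf{Cov}(\xi ,\eta )$:

```python
import math

def phi(x: float) -> float:
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def clark_max2(m1, v1, m2, v2, cov):
    """First two moments of max{xi, eta} for jointly Gaussian (xi, eta)."""
    a = math.sqrt(v1 + v2 - 2 * cov)
    alpha = (m1 - m2) / a
    e1 = Phi(alpha) * m1 + Phi(-alpha) * m2 + a * phi(alpha)
    e2 = (Phi(alpha) * (m1 * m1 + v1) + Phi(-alpha) * (m2 * m2 + v2)
          + (m1 + m2) * a * phi(alpha))
    return e1, e2

# Independent standard normals: E max{xi, eta} = 1/sqrt(pi).
e1, e2 = clark_max2(0.0, 1.0, 0.0, 1.0, 0.0)
```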
Proposition 3.
Let $\xi ,\eta ,\tau $ be Gaussian random variables, and let $\mathbf{Corr}(\tau ,\xi )$ and $\mathbf{Corr}(\tau ,\eta )$ be known. Then
\[\mathbf{Corr}\big(\tau ,\max \{\xi ,\eta \}\big)=\frac{\sqrt{\mathbf{Var}(\xi )}\hspace{0.1667em}\mathbf{Corr}(\tau ,\xi )\varPhi (\alpha )+\sqrt{\mathbf{Var}(\eta )}\hspace{0.1667em}\mathbf{Corr}(\tau ,\eta )\varPhi (-\alpha )}{\sqrt{\mathbf{E}{(\max \{\xi ,\eta \})}^{2}-{(\mathbf{E}\max \{\xi ,\eta \})}^{2}}}.\]
To compute $\mathbf{E}\max \{\xi ,\eta ,\tau \}=\mathbf{E}\max \{\tau ,\max \{\xi ,\eta \}\}$ approximately, we assume that $\max \{\xi ,\eta \}$ has a Gaussian distribution. In fact, this is not true, but this assumption allows us to apply formulas (5) to the random variables τ and $\max \{\xi ,\eta \}$. Iterating this step, we can approximately calculate the expected maximum of any finite number of Gaussian random variables.
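The full iteration can be sketched as follows (our own code; the correlation update for the running maximum is Clark's formula, stated here as an assumption, and all function names are ours):

```python
import math

def phi(x): return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def clark_expected_max(means, variances, corr):
    """Approximate E max of a Gaussian vector by Clark's iterative method.

    corr[i][j] is the correlation matrix.  The running maximum is treated
    as Gaussian (an approximation); its correlation with each remaining
    variable follows Clark's update formula.
    """
    m1, v1 = means[0], variances[0]
    rho = list(corr[0])   # rho[k]: correlation of the running max with variable k
    for j in range(1, len(means)):
        m2, v2 = means[j], variances[j]
        cov = rho[j] * math.sqrt(v1) * math.sqrt(v2)
        a = math.sqrt(max(v1 + v2 - 2 * cov, 1e-15))
        alpha = (m1 - m2) / a
        e1 = Phi(alpha) * m1 + Phi(-alpha) * m2 + a * phi(alpha)
        e2 = (Phi(alpha) * (m1 * m1 + v1) + Phi(-alpha) * (m2 * m2 + v2)
              + (m1 + m2) * a * phi(alpha))
        v_new = max(e2 - e1 * e1, 1e-15)
        for k in range(j + 1, len(means)):
            rho[k] = (math.sqrt(v1) * rho[k] * Phi(alpha)
                      + math.sqrt(v2) * corr[j][k] * Phi(-alpha)) / math.sqrt(v_new)
        m1, v1 = e1, v_new
    return m1

# Three i.i.d. N(0,1): the exact value is 3/(2*sqrt(pi)) ~ 0.8463.
eye = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
approx = clark_expected_max([0.0] * 3, [1.0] * 3, eye)
```

For three i.i.d. standard normals, the iteration gives roughly 0.848 against the exact 0.8463, illustrating the quality of the Gaussian approximation of the running maximum.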
3 Computing the expected maximum
In this section, we present the results of the approximate computation of $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$ by generating random samples and by applying Clark’s method, and we compare the results obtained by these two methods.
The values of the Hurst index are taken from the set $\{{10}^{-4}(1+4i),\hspace{0.1667em}i=\overline{0,24}\}\cup \{{10}^{-2}i,\hspace{0.1667em}i=\overline{1,9}\}$. The values of N are chosen from the set $\{{2}^{j},j=\overline{8,19}\}$. The values of $({B}^{H}(1/N),{B}^{H}(2/N),\dots ,{B}^{H}(N/N))$ are simulated by the method of Wood and Chan for each pair $N,H$ with the sample size $n=1000$. For each element in the sample, we calculate the following functionals:
(6)
\[\underset{i=\overline{1,N}}{\max }{B}^{H}(i/N),\]
(7)
\[\frac{1}{N}{\sum _{i=0}^{N}}{B}^{H}(i/N).\]
3.1 Approximation error of $\frac{1}{N}{\sum _{i=0}^{N}}{B}^{H}(i/N)$
The sample moments of (7) for $H\in \{{10}^{-4}(1+4i),\hspace{0.1667em}i=\overline{0,24}\}$ and $N\in \{{2}^{j},\hspace{0.1667em}j=\overline{8,19}\}$ are presented in Figs. 1 and 2. In the figures, the lines indicate the theoretical moments and the confidence intervals corresponding to the reliability of 95%. The data confirm the correctness of the calculations of (7) with the reliability of 95% even for small values of H.
3.2 Computing the functional $\max _{i=\overline{1,N}}{B}^{H}(i/N)$
For each pair $N,H$, we obtain the sample of values of the maximum functional (6) with sample size 1000. For some values of H, the sample means and approximate values of the expected maximum, obtained by Clark’s method, are presented in Table 1.
Table 1.
The approximate values of the expected maximum
Sample means of (6) | Values due to Clark’s method | |||||||
$N\diagdown H$ | 0.0900 | 0.0100 | 0.0013 | 0.0001 | 0.0900 | 0.0100 | 0.0013 | 0.0001 |
$2^{8}$ | 1.7017 | 2.0019 | 1.9897 | 1.9769 | 1.1738 | 1.8691 | 1.9696 | 1.9839
$2^{9}$ | 1.7693 | 2.0875 | 2.1602 | 2.1360 | 1.1903 | 1.9991 | 2.1194 | 2.1366
$2^{10}$ | 1.9487 | 2.2504 | 2.3047 | 2.2854 | 1.1971 | 2.1193 | 2.2604 | 2.2806
$2^{11}$ | 2.0138 | 2.4203 | 2.4446 | 2.4184 | 1.1966 | 2.2310 | 2.3939 | 2.4174
$2^{12}$ | 2.0886 | 2.5086 | 2.5948 | 2.5334 | 1.1910 | 2.3351 | 2.5208 | 2.5476
$2^{13}$ | 2.1938 | 2.6396 | 2.6934 | 2.6885 | 1.1822 | 2.4327 | 2.6420 | 2.6723
$2^{14}$ | 2.2591 | 2.7612 | 2.7829 | 2.7940 | 1.1714 | 2.5242 | 2.7579 | 2.7919
$2^{15}$ | 2.3327 | 2.8837 | 2.9452 | 2.9258 | 1.1586 | 2.6104 | 2.8693 | 2.9070
$2^{16}$ | 2.4050 | 2.9973 | 3.0526 | 3.0464 | 1.1436 | 2.6917 | 2.9765 | 3.0181
$2^{17}$ | 2.4620 | 3.0791 | 3.1386 | 3.1121 | 1.1263 | 2.7685 | 3.0798 | 3.1256
$2^{18}$ | 2.5328 | 3.1900 | 3.2102 | 3.2421 | 1.1068 | 2.8412 | 3.1798 | 3.2297
$2^{19}$ | 2.5597 | 3.3481 | 3.3487 | 3.3663 | 1.0855 | 2.9101 | 3.2766 | 3.3307
Comparing the data obtained by the different methods, we see that the approximate values obtained by Clark’s algorithm differ from the sample means by at most 57.6% when $H=0.09$, by 13.08% when $H=0.01$, by 2.85% when $H=0.0013$, and by 1.06% when $H=0.0001$. Thus, when $H\le 0.0013$, the values of the expected maximum obtained by these two completely different methods are numerically very close. This indicates that the sample mean is approximately equal to $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$.
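For instance, the figure of 1.06% for $H=0.0001$ can be recomputed directly from the two corresponding columns of Table 1 (data transcribed from the table, relative difference taken with respect to the sample mean):

```python
# H = 0.0001 columns of Table 1: sample means vs. Clark's method, N = 2^8..2^19.
sample = [1.9769, 2.1360, 2.2854, 2.4184, 2.5334, 2.6885,
          2.7940, 2.9258, 3.0464, 3.1121, 3.2421, 3.3663]
clark  = [1.9839, 2.1366, 2.2806, 2.4174, 2.5476, 2.6723,
          2.7919, 2.9070, 3.0181, 3.1256, 3.2297, 3.3307]

# Largest relative discrepancy between the two methods over all N.
max_rel_diff = max(abs(s - c) / s for s, c in zip(sample, clark))
```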
4 Bounds for the approximation error
In this section, we find bounds for the error of approximation (2). As noted before, $\mathbf{E}\max _{t\in [0,1]}{B}^{H}(t)\ge {(4H\pi e\ln 2)}^{-1/2}$. It is expected that the obtained sample means of the maximum functional (6) also satisfy this constraint. In Fig. 3, the sample means and the values of ${(4H\pi e\ln 2)}^{-1/2}$ are presented. As one can see, the inequality $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)\ge {(4H\pi e\ln 2)}^{-1/2}$ fails for small values of H.
There are two possible explanations of this fact: either there is a significant error in calculations, or the approximation error $\varDelta _{N}$ grows rapidly as $H\to 0$. Let us verify these two explanations.
From [1, Theorem 4.2] we get that the expectation of the maximum functional (6) grows as $H\to 0$ and has the limit
(8)
\[\underset{H\to 0}{\lim }\mathbf{E}\underset{i=\overline{1,N}}{\max }{B}^{H}(i/N)=\frac{1}{\sqrt{2}}\mathbf{E}{\Big(\underset{i=\overline{1,N}}{\max }\xi _{i}\Big)}^{+},\]
where $\xi _{1},\dots ,\xi _{N}$ are i.i.d. r.v.s, $\xi _{1}\sim N(0,1)$, and ${x}^{+}:=\max \{0,x\}$.
Moreover, the rate of convergence in (8) is also obtained in [1]:
(9)
\[0\le \frac{1}{\sqrt{2}}\mathbf{E}{\Big(\underset{i=\overline{1,N}}{\max }\xi _{i}\Big)}^{+}-\mathbf{E}\underset{i=\overline{1,N}}{\max }{B}^{H}(i/N)\le 1-\frac{1}{{N}^{2H}}.\]
The right-hand side of (9) does not exceed 0.1 when $N={2}^{20}$ and $H<0.0038$.
We apply two approaches to calculate $\frac{1}{\sqrt{2}}\mathbf{E}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$. The first one is Monte Carlo simulation.
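The numerical claim about the right-hand side of (9) is easy to verify (our own check, not part of the paper's computations):

```python
# Right-hand side of (9) for N = 2^20: 1 - N^{-2H} = 1 - 2^{-40H}.
def rhs(H: float, log2_N: int = 20) -> float:
    return 1 - 2.0 ** (-2 * H * log2_N)

assert rhs(0.0038) < 0.1   # just below the stated threshold
assert rhs(0.0039) > 0.1   # just above it
```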
The sample means of $\frac{1}{\sqrt{2}}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$ are presented in Table 2 for several sample sizes n.
As we see, when the sample size increases by a factor of 20, the sample means change by at most 0.33% for each N. Therefore, to ensure the accuracy of calculations, it suffices to take $n=1000$. Under such conditions, technical resources allow us to calculate the sample means for larger values of N. In Table 3, the values of the sample means are presented for $N\in \{{2}^{20},{2}^{21},{2}^{22},{2}^{23},{2}^{24},{2}^{25}\}$.
Table 2.
Values of limit (8)
Sample means of $\frac{1}{\sqrt{2}}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$ | $N{\int _{0.5}^{1}}{\mathrm{erf}}^{(-1)}(2z-1){z}^{N-1}dz$ | |||||
$N\diagdown n$ | 1000 | 5000 | 10000 | 15000 | 20000 | |
$2^{8}$ | 1.9908 | 1.9908 | 1.9957 | 1.9965 | 1.9961 | 1.9989
$2^{9}$ | 2.1462 | 2.1506 | 2.1520 | 2.1526 | 2.1525 | 2.1524
$2^{10}$ | 2.3071 | 2.3033 | 2.3006 | 2.3004 | 2.2994 | 2.2969
$2^{11}$ | 2.4409 | 2.4362 | 2.4360 | 2.4371 | 2.4351 | 2.4337
$2^{12}$ | 2.5712 | 2.5657 | 2.5648 | 2.5635 | 2.5643 | 2.5640
$2^{13}$ | 2.6824 | 2.6847 | 2.6877 | 2.6874 | 2.6867 | 2.6887
$2^{14}$ | 2.8150 | 2.8066 | 2.8065 | 2.8060 | 2.8078 | 2.8082
$2^{15}$ | 2.9190 | 2.9259 | 2.9235 | 2.9248 | 2.9244 | 2.9232
$2^{16}$ | 3.0301 | 3.0372 | 3.0353 | 3.0340 | 3.0348 | 3.0343
$2^{17}$ | 3.1387 | 3.1372 | 3.1424 | 3.1418 | 3.1414 | 3.1417
$2^{18}$ | 3.2394 | 3.2456 | 3.2469 | 3.2460 | 3.2461 | 3.2458
$2^{19}$ | 3.3402 | 3.3442 | 3.3450 | 3.3458 | 3.3460 | 3.3469
Table 3.
Values of limit (8) for $N\ge {2}^{20}$
N | $2^{20}$ | $2^{21}$ | $2^{22}$ | $2^{23}$ | $2^{24}$ | $2^{25}$ |
Sample means of $\frac{1}{\sqrt{2}}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$ | 3.4516 | 3.536 | 3.627 | 3.724 | 3.816 | 4.073 |
$N{\int _{0.5}^{1}}{\mathrm{erf}}^{(-1)}(2z-1){z}^{N-1}dz$ | 3.4452 | 3.541 | 3.634 | 3.726 | 3.815 | 3.902 |
Instead of generating random samples, we may calculate the value of $\frac{1}{\sqrt{2}}\mathbf{E}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$ as an integral.
Proposition 4.
Let $\xi _{1},\dots ,\xi _{N}$ be i.i.d. r.v.s, $\xi _{1}\sim N(0,1)$. Then
(10)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{1}{\sqrt{2}}\mathbf{E}{\Big(\underset{i=\overline{1,N}}{\max }\xi _{i}\Big)}^{+}& \displaystyle =\frac{N}{\sqrt{2}}{\int _{1/2}^{1}}{\varPhi }^{(-1)}(z){z}^{N-1}dz\\{} & \displaystyle =N{\int _{1/2}^{1}}{\mathrm{erf}}^{(-1)}(2z-1){z}^{N-1}dz,\end{array}\]
where ${\varPhi }^{(-1)}$ is the inverse function of $\varPhi (x)={\int _{-\infty }^{x}}\frac{{e}^{-{y}^{2}/2}}{\sqrt{2\pi }}dy$, $x\in \mathbb{R}$, and ${\mathrm{erf}}^{(-1)}$ is the inverse function of the error function $\mathrm{erf}$.
We immediately get the following corollary.
The integrand in (11) is not an elementary function, but its values are tabulated, and there exist methods for its numerical computation. For the present paper, the integral $N{\int _{0.5}^{1}}{\mathrm{erf}}^{(-1)}(2z-1){z}^{N-1}dz$ is calculated numerically, and the corresponding values are presented in Tables 2 and 3. While maintaining the accuracy of calculations, the maximum possible value of N is ${2}^{31}$, for which the value of the integral reaches 4.390.
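One convenient way to evaluate this integral (our own sketch, using the substitution $z=\varPhi (x)$, which turns (10) into $\frac{N}{\sqrt{2}}{\int _{0}^{\infty }}x\varPhi {(x)}^{N-1}\varphi (x)dx$ and removes the endpoint singularity of ${\mathrm{erf}}^{(-1)}$ at $z=1$):

```python
import numpy as np
from math import erf, sqrt, pi

def limit_value(N: int, x_max: float = 12.0, n_grid: int = 200_001) -> float:
    """Evaluate (1/sqrt(2)) * E (max_i xi_i)^+ by the substitution z = Phi(x)."""
    x = np.linspace(0.0, x_max, n_grid)
    Phi = 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))
    phi = np.exp(-x**2 / 2.0) / sqrt(2.0 * pi)
    y = x * Phi**(N - 1) * phi
    dx = x[1] - x[0]
    integral = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
    return N / sqrt(2.0) * integral

val = limit_value(2**8)   # Table 2 reports 1.9989 for N = 2^8
```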
The values of $\frac{1}{\sqrt{2}}\mathbf{E}{(\max _{i=\overline{1,N}}\xi _{i})}^{+}$ obtained by the two methods differ by at most 0.44% when $N\le {2}^{24}$. When $N={2}^{20}$, the absolute error of the numerical computation of (10) is less than $1.3\times {10}^{-5}$. Therefore, for $N={2}^{20}$, inequality (11) becomes
Let us return to the lower bound for $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$. By Sudakov’s inequality [1, 7] we have
(13)
\[\mathbf{E}\underset{i=\overline{1,N}}{\max }{B}^{H}(i/N)\ge {\bigg(\frac{\ln (N+1)}{{N}^{2H}2\pi \ln 2}\bigg)}^{1/2}.\]
Moreover, the maximum of the right-hand side of (13) equals ${(4H\pi e\ln 2)}^{-1/2}$ and is reached when $N=[{e}^{1/2H}]$. The values of the lower bound are presented in Table 4.
Table 4.
Lower bounds
H | 0.5000 | 0.0900 | 0.0100 | 0.0013 | 0.0001
${(2\sqrt{H\pi e\ln 2})}^{-1}$ | 0.5811 | 1.3696 | 4.1089 | 11.396 | 41.089
${e}^{1/2H}$ | 2.7183 | 258.67 | 5.18 × ${10}^{21}$ | 1.1 × ${10}^{167}$ | 2.97 × ${10}^{2171}$
${(\frac{\ln (N+1)}{{N}^{2H}2\pi \ln 2})}^{1/2}$, $N={2}^{8}$ | 0.0705 | 0.6853 | 1.0679 | 1.1207 | 1.1282
$N={2}^{9}$ | 0.0529 | 0.6828 | 1.1246 | 1.1873 | 1.1963
$N={2}^{10}$ | 0.0394 | 0.6761 | 1.1772 | 1.2503 | 1.2608
$N={2}^{11}$ | 0.0292 | 0.6662 | 1.2260 | 1.3101 | 1.3222
$N={2}^{12}$ | 0.0216 | 0.6537 | 1.2717 | 1.3671 | 1.3808
$N={2}^{13}$ | 0.0159 | 0.6393 | 1.3145 | 1.4217 | 1.4371
$N={2}^{14}$ | 0.0117 | 0.6233 | 1.3547 | 1.4740 | 1.4913
$N={2}^{15}$ | 0.0085 | 0.6061 | 1.3925 | 1.5244 | 1.5435
$N={2}^{16}$ | 0.0062 | 0.5881 | 1.4283 | 1.5729 | 1.5940
$N={2}^{17}$ | 0.0045 | 0.5696 | 1.4620 | 1.6199 | 1.6429
$N={2}^{18}$ | 0.0033 | 0.5507 | 1.4940 | 1.6653 | 1.6905
$N={2}^{19}$ | 0.0024 | 0.5315 | 1.5244 | 1.7094 | 1.7367
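The Sudakov-bound rows of Table 4 can be reproduced directly (our own check of two cells against the table):

```python
import math

def sudakov_bound(N: int, H: float) -> float:
    """Lower bound (ln(N+1) / (N^{2H} * 2*pi*ln 2))^{1/2} for E max_i B^H(i/N)."""
    return math.sqrt(math.log(N + 1) / (N**(2 * H) * 2 * math.pi * math.log(2)))

# Compare against two entries of Table 4.
assert abs(sudakov_bound(2**8, 0.5) - 0.0705) < 5e-4
assert abs(sudakov_bound(2**19, 0.0001) - 1.7367) < 5e-4
```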
Combining Tables 1, 2, and 4, we get that all obtained sample means for $\mathbf{E}\max _{i=\overline{1,N}}{B}^{H}(i/N)$ satisfy the constraint
Therefore, even for small values of the parameter H, the simulation does not lead to a contradiction.
Now let us find a lower bound for the approximation error $\varDelta _{N}$. We prove the following proposition.
Proposition 5.
Let $N={2}^{20}$. Then, for all $H\in (0,1)$,
\[\varDelta _{N}>c_{1}{H}^{-1/2}-c_{2},\]
where $c_{1}=0.2055$ and $c_{2}=3.4452$.
From this it follows that, for a fixed N, the approximation error $\varDelta _{N}\to +\infty $ as $H\to 0$. We also have the following evident corollaries.
• if $H<0.00022$, then the relative error $\delta _{H}\ge 75\% $ and $\varDelta _{N}>10.34$;
• if $H<0.00089$, then the relative error $\delta _{H}\ge 50\% $ and $\varDelta _{N}>3.45$;
• if $H<0.0020$, then the relative error $\delta _{H}\ge 25\% $ and $\varDelta _{N}>1.15$;
• if $H<0.0028$, then the relative error $\delta _{H}\ge 10\% $ and $\varDelta _{N}>0.38$;
• if $H<0.0032$, then the relative error $\delta _{H}\ge 5\% $ and $\varDelta _{N}>0.18$;
• if $H<0.0035$, then the relative error $\delta _{H}\ge 1\% $ and $\varDelta _{N}>0.03$.
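These thresholds follow by elementary arithmetic with $c_{1}=0.2055$ and $c_{2}=3.4452$, taking the relative error $\delta _{H}$ to be $\varDelta _{N}$ divided by ${c_{1}}{H}^{-1/2}$ (our reading of the corollaries; the code below is our own sketch of that arithmetic):

```python
import math

c1, c2 = 0.2055, 3.4452

def delta_bound(H: float) -> float:
    """Lower bound on the approximation error: Delta_N > c1*H^{-1/2} - c2."""
    return c1 / math.sqrt(H) - c2

def h_threshold(rel: float) -> float:
    """Largest H for which delta_bound(H) / (c1*H^{-1/2}) >= rel."""
    return ((1 - rel) * c1 / c2) ** 2

# Reproduce two of the corollaries above.
assert abs(h_threshold(0.50) - 0.00089) < 1e-5
assert abs(h_threshold(0.25) - 0.0020) < 1e-4
assert delta_bound(h_threshold(0.50)) > 3.44
```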
Thus, we conclude that the estimation of $\mathbf{E}\max _{t\in [0,1]}{B}^{H}(t)$ by Monte Carlo simulations leads to significant errors for small values of the parameter H.