Modern Stochastics: Theory and Applications

Drift parameter estimation for tempered fractional Ornstein–Uhlenbeck processes based on discrete observations
Olha Prykhodko, Kostiantyn Ralchenko
https://doi.org/10.15559/25-VMSTA289
Pub. online: 18 November 2025      Type: Research Article      Open Access

Received: 25 June 2025
Revised: 6 November 2025
Accepted: 8 November 2025
Published: 18 November 2025

Abstract

The problem of estimating the drift parameter is considered for an Ornstein–Uhlenbeck-type process driven by a tempered fractional Brownian motion (tfBm) or tempered fractional Brownian motion of the second kind (tfBmII). Unlike most existing studies, which assume continuous-time observations, a more realistic setting of discrete-time data is in focus. The strong consistency of a discretized least squares estimator is established under an asymptotic regime where the observation interval tends to zero while the total time horizon increases. A key step in the analysis involves deriving almost sure upper bounds for the increments of both tfBm and tfBmII.

1 Introduction

Over the past several decades, fractional Brownian motion (fBm) has remained a widely used stochastic process for modeling various phenomena that exhibit memory, as well as long- or short-range dependence. Initially proposed as a model for turbulence, it now finds applications in diverse fields such as mathematical finance (e.g., modeling stochastic volatility), hydrology, telecommunications (e.g., network traffic with self-similarity), image processing, and physics (e.g., anomalous diffusion). A comprehensive mathematical foundation and stochastic calculus for fBm and related processes can be found in the monograph [25].
To enhance its modeling capabilities in specific contexts and better adapt it to particular problems, numerous generalizations and related processes have been proposed. These include multifractional Brownian motion [31], mixed fractional Brownian motion [34], bifractional Brownian motion [15], subfractional Brownian motion [5], mixed subfractional Brownian motion [34], submixed fractional Brownian motion [14], and generalized fractional Brownian motions [35], among others. A comprehensive description of various types of mixed fractional Gaussian processes can be found in [30].
In recent years, there has been growing interest in tempered versions of fractional Brownian motion. Several types of tempered fractional Brownian motions have been introduced in the literature. In this work, we focus on two prominent examples: tempered fractional Brownian motion (tfBm) and tempered fractional Brownian motion of the second kind (tfBmII), introduced in [23] and [32], respectively. Both processes are based on the Mandelbrot–van Ness representation of fBm, modified by incorporating exponential tempering.
The incorporation of exponential tempering into fBm is primarily motivated by the need to overcome the limitation of classical fBm, which exhibits long-range dependence for $H\gt 1/2$. In many applications, such persistent correlations at all time scales are not realistic: one often observes strong dependence at short or moderate lags, but a much faster decay at longer horizons. Tempered versions of fBm provide a natural remedy, as the exponential factor effectively truncates long memory, yielding processes that are short-range dependent for all $H\gt 0$. At the same time, they retain long-memory-like features at shorter scales and are therefore sometimes described as exhibiting semi-long-range dependence. This intermediate structure makes tfBm and tfBmII attractive in diverse modeling contexts, including finance, where asset returns may display rough or persistent short-term dynamics but limited long-term memory, and in physics or hydrology, where tempered fractional models arise in anomalous diffusion and geophysical time series. The second kind of tempered fBm further broadens this framework by offering alternative covariance structures and scaling limits, which can be useful when fitting models to empirical data.
In addition to tfBm and tfBmII, other related processes include tempered fractional Brownian motion of the third kind [21], tempered multifractional Brownian motion, mixed tempered fractional Brownian motion, tempered fractional Brownian motion with two indices [20], and the general tempered fractional Brownian motion [24].
Due to tempering, tfBm and tfBmII exhibit several additional advantages over classical fBm. In particular, they are well defined for all positive values of the Hurst parameter H, whereas fBm is restricted to the interval $(0,1)$. Tempering modifies the global properties of the process while preserving local sample-path regularity and p-variation. As a result, the asymptotic behavior of the variance changes: for tfBm, the variance converges to a finite limit as $t\to \infty $, while for tfBmII, it grows linearly with t, see [1]. Furthermore, tempering also affects the structure of central limit theorems, leading to modifications of higher-order cumulants such as those appearing in the Breuer–Major theorem. The increments of tfBm and tfBmII, known as tempered fractional Gaussian noise, are stationary and exhibit correlation decay that is significantly faster than in the classical case, which has important consequences for statistical inference.
Additional motivation for studying these processes comes from their successful applications in modeling empirical data. For example, [6] demonstrates that tfBm provides a better fit for turbulent supercritical flow in the Red Cedar River than fBm, and [23] argues for the suitability of tempered fractional Gaussian noise in modeling wind speed data. These real-world applications highlight the importance of parameter estimation and hypothesis testing. In this context, [6] proposes wavelet-based estimators for tfBm parameters and introduces a statistical test to distinguish between fBm and tfBm.
In this paper, we study Langevin-type equations driven by either tfBm or tfBmII, and we develop statistical estimators for the drift parameter of these models.
The Langevin equation has a rich history of investigation (see the Appendix in [8]), and various types of stochastic processes have been considered as driving noise. In particular, [8] proposed a version of the Langevin equation driven by fBm, leading to the so-called fractional Ornstein–Uhlenbeck (fOU) process.
Today, this model has numerous applications and has been well studied from a statistical perspective. In particular, a large body of literature is devoted to the estimation of the drift parameter; see the survey [26] and the monograph [18] for comprehensive overviews. Estimation methods for the Hurst parameter and diffusion coefficient of the fOU process are also available, see, for example, [4, 7, 17, 18].
In the present paper, we estimate the drift parameter of tempered fractional Ornstein–Uhlenbeck processes using a least squares approach. We begin by analyzing an estimator based on continuous-time observations of the entire trajectory of the process. The main focus of our study, however, is on its discrete-time counterpart.
In the continuous-time setting, we adapt the least squares estimator originally proposed in [3] for the fOU process, and later generalized to Ornstein–Uhlenbeck processes driven by more general noise structures in [12, 13, 29]. We then consider a discretized version of this estimator, building upon the approach introduced in [19] and further developed in [18].
The primary contribution of this work is the establishment of strong consistency for both the continuous-time and discrete-time estimators. To prove the strong consistency of the continuous-time estimator, we utilize almost sure bounds for the growth rate of tfBm and tfBmII, recently developed in [27]. These results extend the corresponding bounds for fBm, initially established in [16]. To establish the strong consistency of the discretized least squares estimator, we derive almost sure upper bounds for the increments of both tfBm and tfBmII, employing techniques from [12].
The structure of the paper is as follows. Section 2 introduces the definitions and essential properties of tfBm and tfBmII. In Section 3, we establish almost sure upper bounds for the increments of these processes. The main theoretical results concerning the strong consistency of drift parameter estimators for tempered fractional Ornstein–Uhlenbeck processes are presented in Section 4. Finally, Section 5 is devoted to numerical simulations.

2 Preliminaries

In this section, we recall the definitions and basic properties of tempered fractional Brownian motion (tfBm) and tempered fractional Brownian motion of the second kind (tfBmII), which are required for the subsequent analysis. Both processes are constructed by modifying the Mandelbrot–van Ness representation [22] of fractional Brownian motion (fBm) through the introduction of exponential tempering factors.
In contrast to classical fBm ${B_{H}}=\{{B_{H}}(t),t\ge 0\}$, their tempered counterparts, ${B_{H,\lambda }^{I}}=\{{B_{H,\lambda }^{I}}(t),t\ge 0\}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}=\{{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),t\ge 0\}$, are well defined for all positive values of the Hurst index H. These processes also depend on a tempering parameter $\lambda \gt 0$; for $H\in (0,1)$, setting $\lambda =0$ recovers fBm up to a normalizing constant. In particular,
\[ {B_{H,0}^{I}}\stackrel{\triangle }{=}{B_{H,0}^{I\hspace{-0.1667em}I}}\stackrel{\triangle }{=}{C_{H}}{B_{H}},\hspace{1em}\text{where}\hspace{3.33333pt}{C_{H}}=\frac{\Gamma (H+\frac{1}{2})}{\sqrt{2H\sin (\pi H)\Gamma (2H)}},\]
see, e.g., [27, Remark 1].
Let $W=\{{W_{t}},t\in \mathbb{R}\}$ be a two-sided Wiener process.
Definition 1 ([23]).
The tempered fractional Brownian motion (tfBm) is the stochastic process ${B_{H,\lambda }^{I}}=\{{B_{H,\lambda }^{I}}(t),t\ge 0\}$, defined by the Wiener integral
(1)
\[ {B_{H,\lambda }^{I}}(t):={\int _{-\infty }^{t}}{g_{H,\lambda ,t}^{I}}(s)\hspace{0.1667em}d{W_{s}},\]
where
\[ {g_{H,\lambda ,t}^{I}}(s):={e^{-\lambda {(t-s)_{+}}}}{(t-s)_{+}^{H-\frac{1}{2}}}-{e^{-\lambda {(-s)_{+}}}}{(-s)_{+}^{H-\frac{1}{2}}},\hspace{1em}s\in \mathbb{R}.\]
Definition 2 ([32]).
The tempered fractional Brownian motion of the second kind (tfBmII) is the stochastic process ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}=\{{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),t\ge 0\}$, defined by the Wiener integral
(2)
\[ {B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t):={\int _{-\infty }^{t}}{g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)\hspace{0.1667em}d{W_{s}},\]
where
\[\begin{aligned}{}{g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)& :={e^{-\lambda {(t-s)_{+}}}}{(t-s)_{+}^{H-\frac{1}{2}}}-{e^{-\lambda {(-s)_{+}}}}{(-s)_{+}^{H-\frac{1}{2}}}\\ {} & \hspace{1em}+\lambda {\int _{0}^{t}}{(u-s)_{+}^{H-\frac{1}{2}}}{e^{-\lambda {(u-s)_{+}}}}\hspace{0.1667em}du,\hspace{1em}s\in \mathbb{R}.\end{aligned}\]
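For intuition, the Wiener integrals in Definitions 1 and 2 can be approximated numerically by truncating the integration range and replacing $d{W_{s}}$ with independent Gaussian increments. The following R sketch is illustrative only (the parameter values, the truncation level M, and the step ds are arbitrary choices, and this is not the simulation method used in Section 5); it approximates a tfBm path for a value $H\gt 1/2$, where the kernel has no singularity at $s=t$.

```r
# A crude numerical illustration of Definition 1 (not the method used in Section 5):
# the Wiener integral (1) is approximated by a Riemann sum over s in [-M, t].
# The parameter values, the truncation level M, and the step ds are arbitrary choices.
set.seed(1)
H <- 0.7; lambda <- 0.5                  # H > 1/2 keeps the kernel nonsingular at s = t
ds <- 0.005; M <- 20; Tmax <- 5
s  <- seq(-M, Tmax, by = ds)             # integration grid for the kernel
dW <- rnorm(length(s), sd = sqrt(ds))    # increments of the two-sided Wiener process

pos <- function(x) pmax(x, 0)            # the truncation x_+ = max(x, 0)

gI <- function(t, s) {                   # kernel g^I_{H,lambda,t}(s) of Definition 1
  exp(-lambda * pos(t - s)) * pos(t - s)^(H - 0.5) -
    exp(-lambda * pos(-s)) * pos(-s)^(H - 0.5)
}

tgrid <- seq(0, Tmax, by = 0.05)
BI <- sapply(tgrid, function(t) sum(gI(t, s) * (s < t) * dW))
# BI approximates a tfBm path on [0, Tmax]; a tfBmII path is obtained analogously
# from the kernel g^II of Definition 2, which adds the extra integral term.
```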
Both processes, ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, are centered Gaussian processes with stationary increments and share the following scaling property: for any scaling factor $c\gt 0$,
(3)
\[ \{{X_{H,\lambda }}(ct),t\ge 0\}\stackrel{\triangle }{=}\{{c^{H}}{X_{H,c\lambda }}(t),t\ge 0\},\]
where ${X_{H,\lambda }}$ denotes either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, and $\stackrel{\triangle }{=}$ denotes equality in all finite-dimensional distributions.
The covariance functions of the tempered processes take the following forms.
  • For tfBm (1) with parameters $H\gt 0$ and $\lambda \gt 0$, the covariance function is given by
    (4)
    \[ \operatorname{Cov}\left[{B_{H,\lambda }^{I}}(t),{B_{H,\lambda }^{I}}(s)\right]=\frac{1}{2}\left[{({C_{t}^{I}})^{2}}|t{|^{2H}}+{({C_{s}^{I}})^{2}}|s{|^{2H}}-{({C_{t-s}^{I}})^{2}}|t-s{|^{2H}}\right],\]
    where
    \[ {({C_{t}^{I}})^{2}}:=\frac{2\Gamma (2H)}{{(2\lambda |t|)^{2H}}}-\frac{2\Gamma (H+\frac{1}{2})}{\sqrt{\pi }}\frac{1}{{(2\lambda |t|)^{H}}}{K_{H}}(\lambda |t|),\hspace{1em}t\gt 0,\]
    and ${K_{\nu }}(z)$ denotes the modified Bessel function of the second kind.
  • For tfBmII (2) with parameters $H\gt 0$ and $\lambda \gt 0$, the covariance function is
    (5)
    \[ \operatorname{Cov}\left[{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(s)\right]=\frac{1}{2}\left[{({C_{t}^{I\hspace{-0.1667em}I}})^{2}}|t{|^{2H}}+{({C_{s}^{I\hspace{-0.1667em}I}})^{2}}|s{|^{2H}}-{({C_{t-s}^{I\hspace{-0.1667em}I}})^{2}}|t-s{|^{2H}}\right],\]
    where
    \[\begin{aligned}{}{({C_{t}^{I\hspace{-0.1667em}I}})^{2}}& :=\frac{(1-2H)\Gamma (H+\frac{1}{2})\Gamma (H)}{\sqrt{\pi }{(\lambda t)^{2H}}}\\ {} & \hspace{1em}\hspace{0.2778em}\times \left[1{-_{2}}{F_{3}}\left(\left\{1,-\frac{1}{2}\right\},\left\{1-H,\frac{1}{2},1\right\},\frac{{\lambda ^{2}}{t^{2}}}{4}\right)\right]\\ {} & \hspace{1em}\hspace{0.2778em}+\frac{\Gamma (1-H)\Gamma (H+\frac{1}{2})}{\sqrt{\pi }H{2^{2H}}}{\hspace{0.1667em}_{2}}{F_{3}}\left(\left\{1,H-\frac{1}{2}\right\},\left\{1,H+1,H+\frac{1}{2}\right\},\frac{{\lambda ^{2}}{t^{2}}}{4}\right)\hspace{-0.1667em},\end{aligned}\]
    and ${_{2}}{F_{3}}$ denotes the generalized hypergeometric function.
The behavior of the covariance functions (4) and (5) was investigated in [1], where it was shown that, unlike fBm, both tfBm and tfBmII exhibit short-range dependence for all $H\gt 0$.
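As a sanity check, the covariance (4) can be evaluated numerically with a few lines of R; besselK() is the modified Bessel function of the second kind available in base R, and the function names below are illustrative.

```r
# Sketch: evaluate the tfBm covariance (4) numerically. besselK() is the modified
# Bessel function of the second kind K_nu from base R; function names are illustrative.
Ct2_I <- function(t, H, lambda) {        # (C_t^I)^2 from (4), for t > 0
  2 * gamma(2 * H) / (2 * lambda * abs(t))^(2 * H) -
    2 * gamma(H + 0.5) / sqrt(pi) * besselK(lambda * abs(t), H) / (2 * lambda * abs(t))^H
}

cov_tfbm <- function(t, s, H, lambda) {  # Cov[B^I(t), B^I(s)] according to (4)
  term <- function(u) if (u == 0) 0 else Ct2_I(u, H, lambda) * abs(u)^(2 * H)
  0.5 * (term(t) + term(s) - term(t - s))
}

cov_tfbm(2, 1, H = 0.7, lambda = 0.5)    # e.g. Cov[B^I_{H,lambda}(2), B^I_{H,lambda}(1)]
```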
With regard to sample path properties, it is known from [27] that the trajectories of both tfBm and tfBmII are γ-Hölder continuous for any $\gamma \in (0,H)$ if $H\in (0,1]$, and are continuously differentiable if and only if $H\gt 1$. Furthermore, [27] establishes almost sure upper bounds for the asymptotic growth of these processes.
  • For tfBm, it is shown in [27, Theorem 3] that for any $\delta \gt 0$, there exists a nonnegative random variable $\xi =\xi (\delta )$ such that for all $t\gt 0$,
    (6)
    \[ \underset{s\in [0,t]}{\sup }|{B_{H,\lambda }^{I}}(s)|\le ({t^{\delta }}\vee 1)\hspace{0.1667em}\xi \hspace{1em}\text{a.s.},\]
    and there exist constants ${C_{1}}={C_{1}}(\delta )\gt 0$, ${C_{2}}={C_{2}}(\delta )\gt 0$ such that for all $u\gt 0$,
    (7)
    \[ \mathsf{P}(\xi \gt u)\le {C_{1}}{e^{-{C_{2}}{u^{2}}}}.\]
  • For tfBmII, it is shown in [27, Theorem 5] that for any $p\gt 1$, there exists a nonnegative random variable $\xi =\xi (p)$ such that for all $t\gt 0$,
    (8)
    \[ \underset{s\in [0,t]}{\sup }|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(s)|\le \left({t^{\frac{1}{2}}}{({\log ^{+}}t)^{p}}\vee 1\right)\xi \hspace{1em}\text{a.s.},\]
    and the tail bound (7) holds for some constants ${C_{1}},{C_{2}}\gt 0$, which may depend on p.
These growth bounds (6) and (8) have been applied in [27, 28] to statistical problems related to the estimation of the drift parameter in tempered Vasicek-type models under continuous-time observations.
To handle discrete-time observations, however, we also require almost sure bounds for the increments of the tempered fractional processes in addition to the bounds for the processes themselves. This issue will be addressed in Section 3.
We conclude this section with auxiliary results that provide lower and upper bounds for the variance of tfBm and tfBmII on the interval $[0,1]$. These results will be used in Section 3 to establish inequalities for the increments of these processes.
Lemma 1.
For all $H\gt 0$, $\lambda \gt 0$, and $t\in [0,1]$,
\[ \mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I}}(t)\big)^{2}}\right]\ge C{t^{2H}}\hspace{1em}\textit{and}\hspace{1em}\mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)\big)^{2}}\right]\ge C{t^{2H}},\]
where $C={e^{-2\lambda }}/(2H)$.
Proof.
By Definition 1, for tfBm we have
\[\begin{aligned}{}\mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I}}(t)\big)^{2}}\right]& ={\int _{-\infty }^{0}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}-{e^{-\lambda (-s)}}{(-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \hspace{1em}+{\int _{0}^{t}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \ge {\int _{0}^{t}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \ge {e^{-2\lambda t}}{\int _{0}^{t}}{(t-s)^{2H-1}}ds\ge \frac{{e^{-2\lambda }}}{2H}\hspace{0.1667em}{t^{2H}}.\end{aligned}\]
For tfBmII, Definition 2 yields a similar representation, namely,
\[\begin{aligned}{}& \mathsf{E}\left[{\left({B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)\right)^{2}}\right]={\int _{-\infty }^{0}}{\left({g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)\right)^{2}}ds\\ {} & \hspace{2em}+{\int _{0}^{t}}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}+\lambda {\int _{0}^{t}}{(u-s)^{H-\frac{1}{2}}}{e^{-\lambda (u-s)}}\hspace{0.1667em}du\right)^{2}}ds,\end{aligned}\]
and applying the same estimate to the second integral gives the same lower bound.  □
Lemma 2.
Let ${X_{H,\lambda }}$ denote either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$. For all $H\gt 0$, $\lambda \gt 0$, and $\kappa \in (0,2)$, there exist constants ${C_{1}},{C_{2}}\gt 0$ such that for all $t\in [0,1]$,
\[\begin{aligned}{}{C_{1}}{t^{2H}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2H}},\hspace{1em}H\in (0,1),\\ {} {C_{1}}{t^{2}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{1,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2-\kappa }},\hspace{1em}H=1,\\ {} {C_{1}}{t^{2}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2}},\hspace{1em}H\gt 1.\end{aligned}\]
Proof.
According to [27, Lemmas 1 and 3], the variances of both processes are continuous on $[0,1]$ and exhibit similar behavior as $t\downarrow 0$, namely,
(9)
\[ \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\sim \left\{\begin{array}{l@{\hskip10.0pt}l}C{t^{2H}},\hspace{1em}& H\in (0,1),\\ {} C{t^{2}}\left|\log t\right|,\hspace{1em}& H=1,\\ {} C{t^{2}},\hspace{1em}& H\gt 1,\end{array}\right.\]
where
\[ C=C(H,\lambda )=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\Gamma {(H+\frac{1}{2})^{2}}}{2H\sin (\pi H)\Gamma (2H)},\hspace{1em}& H\in (0,1),\\ {} \frac{1}{4},\hspace{1em}& H=1,\\ {} \frac{\Gamma (2H)}{{2^{2H+1}}{\lambda ^{2H-2}}(H-1)},\hspace{1em}& H\gt 1,\hspace{3.33333pt}{X_{H,\lambda }}={B_{H,\lambda }^{I}},\\ {} \frac{(2H-1)\Gamma (2H)}{{2^{2H+1}}{\lambda ^{2H-2}}(H-1)},\hspace{1em}& H\gt 1,\hspace{3.33333pt}{X_{H,\lambda }}={B_{H,\lambda }^{I\hspace{-0.1667em}I}}.\end{array}\right.\]
Moreover, by Lemma 1, the variance $\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(1))^{2}}\right]$ is strictly positive.
Define
\[ {f_{1}}(t)=\frac{{t^{2(H\wedge 1)}}}{\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]},\hspace{2em}{f_{2}}(t)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(t))^{2}}\right]}{{t^{2(H\wedge 1)}}},\hspace{1em}& H\gt 0,\hspace{3.33333pt}H\ne 1,\\ {} \frac{\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(t))^{2}}\right]}{{t^{2-\kappa }}},\hspace{1em}& H=1.\end{array}\right.\]
Both ${f_{1}}$ and ${f_{2}}$ are nonnegative and continuous on $(0,1]$. Using the asymptotic relation (9), they can be extended continuously to $[0,1]$. Hence, ${f_{1}}$ and ${f_{2}}$ are bounded on $[0,1]$, which yields the claim.  □
Taking into account the stationarity of increments of both tfBm and tfBmII, Lemma 2 immediately yields the following result, which can be regarded as an extension of [1, Theorem 2.7], where only the case $H\in (0,1)$ was considered.
Corollary 1.
Let ${X_{H,\lambda }}$ denote either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ with $H\gt 0$ and $\lambda \gt 0$, and let $\kappa \in (0,1)$. Then there exist constants ${K_{1}},{K_{2}}\gt 0$ such that, for all $t,s\in {\mathbb{R}_{+}}$ with $\left|t-s\right|\le 1$, the following two-sided bounds hold:
\[\begin{array}{r@{\hskip0pt}l@{\hskip0pt}r@{\hskip0pt}l}\displaystyle {K_{1}}{\left|t-s\right|^{H}}& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}{\left|t-s\right|^{H}},\hspace{2em}& & \displaystyle H\in (0,1),\\ {} \displaystyle {K_{1}}\left|t-s\right|& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}{\left|t-s\right|^{1-\kappa }},\hspace{2em}& & \displaystyle H=1,\\ {} \displaystyle {K_{1}}\left|t-s\right|& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}\left|t-s\right|,\hspace{2em}& & \displaystyle H\gt 1.\end{array}\]
Remark 1.
The bounds established in Corollary 1 imply that both tempered fractional Brownian motions of the first and second kind, ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, possess Hölder continuous trajectories of any order $\gamma \lt H\wedge 1$. This follows from standard arguments based on the Kolmogorov–Čentsov theorem or, more generally, from the necessary and sufficient conditions for Hölder continuity of Gaussian processes given in [2].
Furthermore, it is known that ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ have the same local regularity despite their different kernel structures; see [23, Theorem 5.1 and Remark 5.2] and [32, Remark 2.3]. Indeed, the latter paper shows that
\[ {B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)={B_{H,\lambda }^{I}}(t)+\lambda {\int _{0}^{t}}{B_{H,\lambda }^{I}}(s)\hspace{0.1667em}ds+\lambda \hspace{0.1667em}{\eta _{H,\lambda }}t,\]
with ${\eta _{H,\lambda }}={\textstyle\int _{-\infty }^{0}}{(-s)^{H-\frac{1}{2}}}{e^{-\lambda (-s)}}\hspace{0.1667em}d{W_{s}}$, so ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ differs from ${B_{H,\lambda }^{I}}$ only by smoother additive terms (an integrated process and a linear function of t). Consequently, both processes share the same local Hölder exponent.

3 Almost sure asymptotic growth of increments of tfBm and tfBmII

In this section, we establish almost sure upper bounds for the increments of tfBm and tfBmII over small intervals. These bounds are essential for proving the strong consistency of discretized estimators for Ornstein–Uhlenbeck-type processes, which will be studied in Section 4.
The derivation is based on verifying the assumptions of general theorems from [12, 18], which provide analogous upper bounds for a broad class of Gaussian processes. These assumptions are formulated in terms of the variogram of the underlying process and are summarized in a convenient form in the appendix for ease of reference.
Following [12], consider an increasing sequence $\{{b_{k}},k\ge 0\}$ with ${b_{0}}=0$ and increments satisfying ${b_{k+1}}-{b_{k}}\ge 1$ for all k. Let $a:[0,\infty )\to (0,\infty )$ be an increasing continuous function such that $a(t)\to \infty $ as $t\to \infty $, and define the sequence $\{{a_{k}},k\ge 0\}$ by ${a_{k}}=a({b_{k}})$.
For $\Delta \in (0,1]$, define
(10)
\[ {\mathbf{T}_{\Delta }}=\left\{\mathbf{t}=({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:{t_{1}}-\Delta \le {t_{2}}\le {t_{1}}\right\},\]
(11)
\[ {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}=\left\{\mathbf{t}=({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:{b_{k}}\le {t_{1}}\le {b_{k+1}},\hspace{3.33333pt}{t_{1}}-\Delta \le {t_{2}}\le {t_{1}}\right\},\]
and introduce the metric
(12)
\[ d(\mathbf{t},\mathbf{s})=\max \left\{|{t_{1}}-{s_{1}}|,|{t_{2}}-{s_{2}}|\right\},\hspace{1em}\mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}.\]
In this section, we analyze both processes simultaneously. Let
\[ X={X_{H,\lambda }}\hspace{2.5pt}\text{be either}\hspace{2.5pt}{B_{H,\lambda }^{I}}\hspace{2.5pt}\text{or}\hspace{2.5pt}{B_{H,\lambda }^{I\hspace{-0.1667em}I}},\]
and define the increment of ${X_{H,\lambda }}$ as
\[ Z(\mathbf{t}):={X_{H,\lambda }}({t_{1}})-{X_{H,\lambda }}({t_{2}}),\hspace{1em}\mathbf{t}=({t_{1}},{t_{2}})\in {\mathbf{T}_{\Delta }}.\]
We begin with auxiliary results leading to the main theorem of this section.
Proposition 1.
Assume the following conditions hold:
(13)
\[ {L_{1}}:={\sum \limits_{k=0}^{\infty }}\frac{1}{{a_{k}}}\lt \infty ,\hspace{1em}{L_{2}}:={\sum \limits_{k=0}^{\infty }}\frac{\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\lt \infty .\]
  • (i) If $H\ne 1$, then for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, $\Delta \in (0,1]$, and $\rho \gt 0$, the following inequality holds:
    \[\begin{aligned}{}\mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}& \le \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\},\end{aligned}\]
    where $\beta =H\wedge 1$.
  • (ii) If $H=1$, $\beta \in (0,1)$ is arbitrary, then for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, $\kappa \in (0,1)$, $\Delta \in (0,1]$, and $\rho \gt 0$, the following inequality holds:
    \[\begin{aligned}{}\mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}& \le \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{{K_{2}}{\Delta ^{\beta -1}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\}.\end{aligned}\]
Here ${K_{1}}$ and ${K_{2}}$ are the constants from Corollary 1.
Proof.
We verify that $Z(\mathbf{t})$ satisfies the conditions of [18, Theorem B.34] (see Theorem 4 in the Appendix).
(i) By Corollary 1 (applied with $\kappa =1-\beta $ for $H=1$),
(14)
\[ {m_{k}}:=\underset{\mathbf{t}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}{\sup }{\left(\mathsf{E}Z{(\mathbf{t})^{2}}\right)^{1/2}}\le {K_{2}}{\Delta ^{\beta }}.\]
Hence, the condition (i) of Theorem 4 is satisfied.
(ii) Using Minkowski’s inequality and Corollary 1, for any $h\in (0,1)$,
\[\begin{aligned}{}& \underset{\substack{d(\mathbf{t},\mathbf{s})\le h\\ {} \mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}}{\sup }{\left(\mathsf{E}{\left|Z(\mathbf{t})-Z(\mathbf{s})\right|^{2}}\right)^{1/2}}\\ {} & \hspace{1em}\le \hspace{-0.1667em}\hspace{-0.1667em}\underset{\substack{d(\mathbf{t},\mathbf{s})\le h\\ {} \mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}}{\sup }\hspace{-0.1667em}\hspace{-0.1667em}\left[{\left(\mathsf{E}{\left|{X_{H,\lambda }}({t_{1}})-{X_{H,\lambda }}({s_{1}})\right|^{2}}\right)^{\frac{1}{2}}}+{\left(\mathsf{E}{\left|{X_{H,\lambda }}({t_{2}})-{X_{H,\lambda }}({s_{2}})\right|^{2}}\right)^{\frac{1}{2}}}\right]\\ {} & \hspace{1em}\le 2{K_{2}}{h^{\beta }}.\end{aligned}\]
Thus, $Z(\mathbf{t})$ satisfies the condition (ii) of Theorem 4 with ${c_{k}}=2{K_{2}}$, $k\ge 0$.
(iii) The convergence of the first two series in condition (iii) of Theorem 4 follows from the upper bound (14), together with the summability assumptions in (13):
(15)
\[ A:={\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}}{{a_{k}}}\le {K_{2}}{L_{1}}{\Delta ^{\beta }}\lt \infty ,\hspace{2em}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\le {K_{2}}{L_{2}}{\Delta ^{\beta }}\lt \infty .\]
Choosing $\gamma :=\beta /2$ and using the fact that ${c_{k}}=2{K_{2}}$, $k\ge 0$, we obtain that the third series in condition (iii) can be written as
(16)
\[ {\sum \limits_{k=0}^{\infty }}\frac{{m_{k}^{1-2\gamma /\beta }}\hspace{0.1667em}{c_{k}^{2\gamma /\beta }}}{{a_{k}}}={\sum \limits_{k=0}^{\infty }}\frac{{c_{k}}}{{a_{k}}}=2{K_{2}}{\sum \limits_{k=0}^{\infty }}\frac{1}{{a_{k}}}=2{K_{2}}{L_{1}}\lt \infty .\]
Thus, all assumptions of Theorem 4 are satisfied. Consequently, applying this theorem we obtain that, for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$,
\[ \mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{A^{2}}}{2{(1-\vartheta )^{2}}}\right\}\cdot {A_{0}}\hspace{-0.1667em}\left(\vartheta ,\frac{\beta }{2},\varepsilon \right),\]
where
\[\begin{aligned}{}{A_{0}}\hspace{-0.1667em}\left(\vartheta ,\frac{\beta }{2},\varepsilon \right)& =\frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{1}{A}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{2{\Delta ^{\beta }}}{\beta A{\left(1-\varepsilon /\beta \right)^{\beta /\varepsilon }}\vartheta \hspace{0.1667em}{4^{\beta }}}{\sum \limits_{k=0}^{\infty }}\frac{{c_{k}}}{{a_{k}}}\right\}.\end{aligned}\]
Next, using the bounds (15), the identity (16), and estimating A in the denominators from below as
\[ A={\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}}{{a_{k}}}\ge {K_{1}}{L_{1}}{\Delta ^{H\wedge 1}},\]
we arrive at the inequality
\[\begin{aligned}{}& \mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{2em}\times \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{K_{2}}{L_{2}}{\Delta ^{\beta -H\wedge 1}}}{{K_{1}}{L_{1}}}\right\}\exp \left\{\frac{4{K_{2}}{L_{1}}{\Delta ^{\beta -H\wedge 1}}}{\beta {K_{1}}{L_{1}}{\left(1-\varepsilon /\beta \right)^{\beta /\varepsilon }}\vartheta \hspace{0.1667em}{4^{\beta }}}\right\}.\end{aligned}\]
Finally, since $\beta -H\wedge 1=0$ for $H\ne 1$ and $\beta -H\wedge 1=\beta -1$ for $H=1$, we recover the bounds stated in the proposition.  □
Now, introduce a strictly decreasing sequence $\{{d_{k}},k\ge 0\}$ with ${d_{0}}=1$ and ${d_{k}}\to 0$ as $k\to \infty $, a continuous function $g:(0,1]\to (0,\infty )$, and a sequence $\{{g_{k}},k\ge 0\}$ such that $0\lt {g_{k}}\le {\min _{{d_{k+1}}\le t\le {d_{k}}}}g(t)$.
The following proposition and corollary are derived from Proposition 1 in the same manner as Theorem B.57 and Corollary B.58 in [18], respectively.
Proposition 2.
Assume the conditions of Proposition 1.
  • (i) Let $H\ne 1$ and set $\beta =H\wedge 1$. Assume additionally that
    (17)
    \[ {D_{1}}:={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}|\log {d_{k}}|}{{g_{k}}}\lt \infty .\]
    Then for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$,
    \[ \mathsf{E}\exp \left\{\rho \underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ),\]
    where $B={\textstyle\sum _{k=0}^{\infty }}\frac{{d_{k}^{\beta }}}{{g_{k}}}$,
    \[ {A_{1}}(\vartheta ,\varepsilon )={2^{\frac{2}{\varepsilon }+1}}\exp \left\{\frac{{D_{1}}}{B}+\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\}.\]
  • (ii) Let $H=1$ and $\beta \in (0,1)$. Assume that the stronger condition
    (18)
    \[ {D_{2}}:={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{2\beta -1}}}{{g_{k}}}\lt \infty \]
    is satisfied. Then for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$,
    \[ \mathsf{E}\exp \left\{\rho \underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ),\]
    where
    \[ {A_{1}}(\vartheta ,\varepsilon )={2^{\frac{2}{\varepsilon }+1}}\exp \left\{\frac{{D_{1}}}{B}+\frac{{K_{2}}{D_{2}}}{{K_{1}}B}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\}.\]
Proof.
Define
\[ {\mathbf{T}^{(k)}}=\left\{({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:{d_{k+1}}\lt {t_{1}}-{t_{2}}\le {d_{k}}\right\}.\]
Clearly, ${\mathbf{T}^{(k)}}\subset {\mathbf{T}_{{d_{k}}}}$, and
\[ \left\{({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1\right\}={\bigcup \limits_{k=0}^{\infty }}{\mathbf{T}^{(k)}}.\]
Hence,
\[\begin{aligned}{}\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}& \le {\sum \limits_{k=0}^{\infty }}\underset{\mathbf{t}\in {\mathbf{T}^{(k)}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\\ {} & \le {\sum \limits_{k=0}^{\infty }}\frac{1}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}^{(k)}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\\ {} & \le {\sum \limits_{k=0}^{\infty }}\frac{1}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}.\end{aligned}\]
Let ${\{{r_{k}}\}_{k\ge 0}}$ be positive numbers such that ${\textstyle\sum _{k=0}^{\infty }}\frac{1}{{r_{k}}}=1$. Then
\[\begin{aligned}{}I(\rho )& :=\mathsf{E}\exp \left\{\rho \underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\\ {} & \le \mathsf{E}\exp \left\{{\sum \limits_{k=0}^{\infty }}\frac{\rho }{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le {\prod \limits_{k=0}^{\infty }}{\left(\mathsf{E}\exp \left\{\frac{\rho {r_{k}}}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\right)^{1/{r_{k}}}}.\end{aligned}\]
We now distinguish two cases: $H\ne 1$ and $H=1$.
(i) Case $H\ne 1$. By Proposition 1(i), we obtain
\[\begin{aligned}{}I(\rho )& \le {\prod \limits_{k=0}^{\infty }}{\left(\frac{{2^{2/\varepsilon +2}}}{{d_{k}}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{r_{k}^{2}}{K_{2}^{2}}{L_{1}^{2}}{d_{k}^{2\beta }}}{2{(1-\vartheta )^{2}}{g_{k}^{2}}}\right\}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\right)^{1/{r_{k}}}}\\ {} & ={2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}}{2{(1-\vartheta )^{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{r_{k}}{d_{k}^{2\beta }}}{{g_{k}^{2}}}\right\}\exp \hspace{-0.1667em}\left\{-{\sum \limits_{k=0}^{\infty }}\frac{\log {d_{k}}}{{r_{k}}}\right\},\end{aligned}\]
where
\[ {A_{2}}(\vartheta ,\varepsilon )=\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right).\]
Setting ${r_{k}}=\frac{B{g_{k}}}{{d_{k}^{\beta }}}$, we obtain
\[\begin{aligned}{}I(\rho )& \le {2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}\exp \hspace{-0.1667em}\left\{-\frac{1}{B}{\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}\log {d_{k}}}{{g_{k}}}\right\}\\ {} & ={2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}\exp \left\{\frac{{D_{1}}}{B}\right\}\\ {} & =\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ).\end{aligned}\]
(ii) Case $H=1$. By Proposition 1(ii), we have
\[\begin{aligned}{}I(\rho )& \le {\prod \limits_{k=0}^{\infty }}{\left(\frac{{2^{2/\varepsilon +2}}}{{d_{k}}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{d_{k}^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\exp \left\{{d_{k}^{\beta -1}}{A_{2}}(\vartheta ,\varepsilon )\right\}\right)^{1/{r_{k}}}}\\ {} & ={2^{2/\varepsilon +2}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}}{2{(1-\vartheta )^{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{r_{k}}{d_{k}^{2\beta }}}{{g_{k}^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \hspace{-0.1667em}\left\{-{\sum \limits_{k=0}^{\infty }}\frac{\log {d_{k}}}{{r_{k}}}\right\}\exp \hspace{-0.1667em}\left\{{A_{2}}(\vartheta ,\varepsilon ){\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta -1}}}{{r_{k}}}\right\}.\end{aligned}\]
Finally, by choosing ${r_{k}}=\frac{B{g_{k}}}{{d_{k}^{\beta }}}$, the conclusion follows as in case (i).  □
Corollary 2.
Under the assumptions of Proposition 2, for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $u\gt 0$,
(19)
\[ \mathsf{P}\left\{\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\gt u\right\}\le \exp \left\{-\frac{{u^{2}}{(1-\vartheta )^{2}}}{2{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ).\]
Proof.
By Chebyshev’s inequality, we have
\[\begin{aligned}{}\mathsf{P}& \left\{\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{\left|Z(\mathbf{t})\right|}{a({t_{1}})g({t_{1}}-{t_{2}})}\gt u\right\}\\ {} & \hspace{2em}\le \frac{1}{{e^{\rho u}}}\mathsf{E}\exp \left\{\rho \hspace{0.1667em}\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{\left|Z(\mathbf{t})\right|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\\ {} & \hspace{2em}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}-\rho u\right\}{A_{1}}(\vartheta ,\varepsilon ).\end{aligned}\]
Choosing $\rho =\frac{u{(1-\vartheta )^{2}}}{{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}$ yields (19).  □
Propositions 1–2 and Corollary 2 allow us to derive the asymptotic bound for the increments of tfBm and tfBmII. The following theorem is the main result of this section.
Theorem 1.
Let $\beta =H\wedge 1$ if $H\ne 1$, and let $\beta \in (0,1)$ be arbitrary if $H=1$. For any $\delta \gt 0$ and $p\gt 2$, there exists a nonnegative random variable $\eta =\eta (\delta ,p)$ such that, for all $0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1$,
(20)
\[ |Z(\mathbf{t})|\le ({t_{1}^{\delta }}\vee 1){({t_{1}}-{t_{2}})^{\beta }}\big(|\log ({t_{1}}-{t_{2}}){|^{p}}\vee 1\big)\hspace{0.1667em}\eta ,\hspace{1em}\textit{a.s.},\]
and there exist constants ${C_{1}}={C_{1}}(\delta ,p)\gt 0$ and ${C_{2}}={C_{2}}(\delta ,p)\gt 0$ such that, for all $u\gt 0$,
(21)
\[ \mathsf{P}(\eta \gt u)\le {C_{1}}\exp (-{C_{2}}{u^{2}}).\]
Proof.
The proof consists of three steps, corresponding to the application of Proposition 1, Proposition 2, and Corollary 2, respectively.
Step 1. Let ${b_{0}}=0$, ${b_{k}}={e^{k}}$ for $k\ge 1$, and define $a(t)={t^{\delta }}\vee 1$. Then ${a_{k}}=a({b_{k}})={e^{k\delta }}$.
We verify the conditions in (13) from Proposition 1:
\[\begin{aligned}{}{L_{1}}={\sum \limits_{k=0}^{\infty }}\frac{1}{{a_{k}}}& ={\sum \limits_{k=0}^{\infty }}\frac{1}{{e^{k\delta }}}\lt \infty ,\\ {} {L_{2}}={\sum \limits_{k=0}^{\infty }}\frac{\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}& =1+{\sum \limits_{k=1}^{\infty }}\frac{k}{{e^{k\delta }}}+{\sum \limits_{k=1}^{\infty }}\frac{\log (e-1)}{{e^{k\delta }}}\lt \infty ,\end{aligned}\]
where the convergence of all series is verified by the ratio test.
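As a quick numerical illustration, the partial sums of ${L_{1}}$ and ${L_{2}}$ for this choice of ${b_{k}}$ and ${a_{k}}$ stabilize rapidly; the value of δ in the R snippet below is an arbitrary illustrative choice.

```r
# Numeric check of the summability conditions (13) for b_0 = 0, b_k = e^k,
# a(t) = max(t^delta, 1); the value of delta is an arbitrary illustrative choice.
delta <- 0.5
k  <- 0:500
ak <- exp(k * delta)                     # a_k = a(b_k) = e^(k*delta), with a_0 = 1
bk <- c(0, exp(1:501))                   # b_0 = 0, b_k = e^k for k >= 1
L1 <- sum(1 / ak)                        # partial sum approximating L_1
L2 <- sum(log(diff(bk)) / ak)            # partial sum approximating L_2
c(L1 = L1, L2 = L2)                      # both are finite, so (13) holds
```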
Step 2. Let ${d_{k}}={e^{-k}}$ and define $g(t)={t^{\beta }}|\log t{|^{p}}$, $t\in (0,1]$. Define
\[ {g_{0}}={e^{-\beta }},\hspace{1em}{g_{k}}={d_{k+1}^{\beta }}|\log {d_{k}}{|^{p}}={e^{-(k+1)\beta }}{k^{p}},\hspace{1em}k\ge 1.\]
Note that ${g_{k}}\le {\min _{{d_{k+1}}\le t\le {d_{k}}}}g(t)$. We now verify the remaining assumptions of Proposition 2.
(i) Case $H\ne 1$. In this case, condition (17) holds:
\[ {D_{1}}={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}|\log {d_{k}}|}{{g_{k}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{1}{{k^{p-1}}}\lt \infty \hspace{1em}\text{for any}\hspace{2.5pt}p\gt 2.\]
(ii) Case $H=1$. Here we apply Propositions 1–2 and Corollary 2 with ${\beta ^{\prime }}=(1+\beta )/2$ in place of β (recall that in these results $\beta \in (0,1)$ may be chosen arbitrarily when $H=1$). Then the conditions (17)–(18) are satisfied for any $p\gt 1$:
\[\begin{aligned}{}{D_{1}}& ={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{{\beta ^{\prime }}}}|\log {d_{k}}|}{{g_{k}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{{e^{-({\beta ^{\prime }}-\beta )k}}}{{k^{p-1}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{{e^{-(1-\beta )k/2}}}{{k^{p-1}}}\lt \infty ,\\ {} {D_{2}}& ={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{2{\beta ^{\prime }}-1}}}{{g_{k}}}={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}}{{g_{k}}}={e^{\beta }}\left(1+{\sum \limits_{k=1}^{\infty }}{k^{-p}}\right)\lt \infty .\end{aligned}\]
Step 3. Define
\[ \eta :=\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{({t_{1}^{\delta }}\vee 1){({t_{1}}-{t_{2}})^{\beta }}\left(|\log ({t_{1}}-{t_{2}}){|^{p}}\vee 1\right)}.\]
Then the inequality (20) holds a.s., and by Corollary 2, the random variable η has Gaussian-type tail estimates (21).  □

4 Strongly consistent estimation of the drift parameter in Ornstein–Uhlenbeck-type processes

4.1 Statistical model description

We consider the tempered fractional processes (tfBm and tfBmII) as the driving noise in a Langevin-type stochastic differential equation of the form
(22)
\[ {Y_{t}}={y_{0}}+\theta {\int _{0}^{t}}{Y_{s}}ds+\sigma {X_{t}},\hspace{1em}t\ge 0,\]
where ${y_{0}}\in \mathbb{R}$, $\theta \gt 0$, and $\sigma \gt 0$ are constants.
In our analysis, we set $X={X_{H,\lambda }}$ to be either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$.
Similarly to the case of the Langevin equation driven by fBm, equation (22) admits a solution $Y=\{{Y_{t}},t\ge 0\}$, which can be represented as
(23)
\[ {Y_{t}}={y_{0}}{e^{\theta t}}+\sigma {\int _{0}^{t}}{e^{\theta (t-s)}}d{X_{s}}.\]
The stochastic integral with respect to tfBm or tfBmII is defined pathwise in the Riemann–Stieltjes sense, since the integrand ${e^{\theta (t-s)}}$ is Lipschitz continuous and the sample paths of both processes are Hölder continuous of any order up to $H\wedge 1$.
By applying the integration-by-parts formula, the solution (23) can be equivalently rewritten as
(24)
\[ {Y_{t}}={y_{0}}{e^{\theta t}}+\sigma \theta {e^{\theta t}}{\int _{0}^{t}}{e^{-\theta s}}{X_{s}}ds+\sigma {X_{t}}.\]
The integral ${\textstyle\int _{0}^{\infty }}{e^{-\theta s}}{X_{s}}\hspace{0.1667em}ds$ is well defined and almost surely finite. For a detailed analysis, we refer to Lemma 4 and inequalities (33) and (34) for the case of tfBm, and to Lemma 6 and inequality (41) for the case of tfBmII.
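Representation (24) also suggests a simple way to turn a sampled noise path into a path of Y. The following R sketch is a minimal construction (the function name and the discretization are illustrative; the inner integral is approximated by a left-point Riemann sum), and a similar construction is reused in the simulation sketch of Section 5.

```r
# Minimal sketch: build a path of Y from a sampled noise path X via representation (24),
# approximating the inner integral by a left-point Riemann sum on a uniform grid.
# The function name and its arguments are illustrative.
ou_from_noise <- function(tgrid, X, theta, sigma, y0) {
  dt <- tgrid[2] - tgrid[1]              # uniform mesh assumed
  I  <- cumsum(c(0, exp(-theta * head(tgrid, -1)) * head(X, -1) * dt))
  y0 * exp(theta * tgrid) + sigma * theta * exp(theta * tgrid) * I + sigma * X
}
```

For example, Y <- ou_from_noise(tgrid, BI, theta = 2, sigma = 1, y0 = 1) applies this construction to the approximate tfBm path from the sketch in Section 2.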
Throughout this section, we assume that the parameters σ, H, and λ are known and fixed, while the drift parameter $\theta \gt 0$ is unknown and subject to estimation. The goal is to estimate θ based on observations of the process Y. We consider two observational settings.
  • Continuous-time observation: the trajectory of Y is observed continuously on the interval $[0,T]$.
  • Discrete-time observation: the process is observed only at discrete time points.

4.2 Continuous-time estimator

In this section, we consider the model (22) and assume that a trajectory $\{{Y_{t}},0\le t\le T\}$ is observed. We introduce the continuous-time estimator
(25)
\[ {\widehat{\theta }_{T}}=\frac{{Y_{T}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}.\]
Remark 2.
The estimator ${\widehat{\theta }_{T}}$ can be interpreted as the least squares estimator that formally minimizes the functional $F(\theta )={\textstyle\int _{0}^{T}}{\left({\dot{Y}_{t}}-\theta {Y_{t}}\right)^{2}}\hspace{0.1667em}dt$. It is straightforward to verify that the minimizer of this functional is given by ${\widehat{\theta }_{T}}=\frac{{\textstyle\int _{0}^{T}}{Y_{t}}d{Y_{t}}}{{\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}$. If the process Y is γ-Hölder continuous on $[0,T]$ for some $\gamma \gt \frac{1}{2}$, then the integral ${\textstyle\int _{0}^{T}}{Y_{t}}\hspace{0.1667em}d{Y_{t}}$ is well defined in the Young sense and satisfies ${\textstyle\int _{0}^{T}}{Y_{t}}\hspace{0.1667em}d{Y_{t}}=\frac{1}{2}\left({Y_{T}^{2}}-{Y_{0}^{2}}\right)$. However, for $H\lt \frac{1}{2}$, the required Hölder continuity of order greater than $\frac{1}{2}$ cannot be guaranteed. Despite this, the estimator defined by (25) remains well defined for all $H\gt 0$ and, as we show below, is strongly consistent.
Theorem 2.
The estimator ${\widehat{\theta }_{T}}$ is strongly consistent as $T\to \infty $.
Proof.
According to [27, Lemma 6], the following a.s. convergences hold as $T\to \infty $:
(26)
\[ {e^{-\theta T}}{Y_{T}}\to {\zeta _{\infty }},\]
(27)
\[ {e^{-2\theta T}}{\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt\to \frac{1}{2\theta }{\zeta _{\infty }^{2}},\]
where
\[ {\zeta _{\infty }}:={y_{0}}+\sigma \theta {\int _{0}^{\infty }}{e^{-\theta s}}{X_{s}}\hspace{0.1667em}ds.\]
The random variable ${\zeta _{\infty }}$ is Gaussian and satisfies $0\lt |{\zeta _{\infty }}|\lt \infty $ almost surely.
Multiplying the numerator and denominator of (25) by ${e^{-2\theta T}}$, and applying the limits from (26) and (27), we obtain
\[ {\widehat{\theta }_{T}}=\frac{{Y_{T}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}=\frac{{({e^{-\theta T}}{Y_{T}})^{2}}-{e^{-2\theta T}}{y_{0}^{2}}}{2{e^{-2\theta T}}{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}\to \theta \hspace{1em}\text{a.s., as}\hspace{2.5pt}T\to \infty .\]
 □

4.3 Discrete-time estimator

We now introduce a discrete-time counterpart of the estimator (25). To this end, consider a partition of the interval $[0,{n^{m-1}}]$ with mesh size ${n^{-1}}$, where $n\gt 1$ and $m\gt 1$ is fixed. This yields a grid of points defined by ${t_{k,n}}=k{n^{-1}}$ for $0\le k\le {n^{m}}$. From the continuous-time process $\{{Y_{t}}:0\le t\le {n^{m-1}}\}$, we observe only the values at these discrete points and, for simplicity, denote ${Y_{k,n}}:={Y_{{t_{k,n}}}}$.
Replacing the integrals in (25) with appropriate Riemann sums, we define the estimator
(28)
\[ {\widetilde{\theta }_{n}}(m)=\frac{n\left({Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}\right)}{2{\textstyle\textstyle\sum _{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}}.\]
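In code, the estimator (28) reduces to a one-line formula applied to the vector of observations. The following R sketch uses an illustrative function name and assumes that ${n^{m}}$ is (rounded to) an integer, so that the grid contains ${n^{m}}+1$ points.

```r
# Sketch of the discretized least squares estimator (28); the function name is
# illustrative. 'Y' contains the observations Y_{0,n}, ..., Y_{n^m,n} at t_{k,n} = k/n,
# so it is assumed that n^m is (rounded to) an integer.
theta_tilde <- function(Y, n, y0) {
  N <- length(Y) - 1                     # N = n^m observation intervals
  n * (Y[N + 1]^2 - y0^2) / (2 * sum(Y[1:N]^2))
}
```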
We analyze the asymptotic behavior of ${\widetilde{\theta }_{n}}(m)$ as $n\to \infty $. The main result of this section is stated below.
Theorem 3.
The estimator ${\widetilde{\theta }_{n}}(m)$ is strongly consistent as $n\to \infty $.
The proof of this theorem follows the general scheme proposed in [19]. We begin with the following auxiliary result.
Lemma 3.
For the process Y, the following bounds hold: for any $t\gt 0$,
(29)
\[ \underset{0\le s\le t}{\sup }|{Y_{s}}|\le |{y_{0}}|{e^{\theta t}}+\sigma \theta {e^{\theta t}}{\int _{0}^{t}}{e^{-\theta s}}\underset{0\le u\le s}{\sup }|{X_{u}}|\hspace{0.1667em}ds+\sigma \underset{0\le s\le t}{\sup }|{X_{s}}|;\]
for any $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$,
(30)
\[\begin{aligned}{}& \underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|\\ {} & \hspace{1em}\le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+\sigma \theta {e^{\theta u}}{\int _{0}^{u}}{e^{-\theta v}}\underset{0\le z\le v}{\sup }|{X_{z}}|dv+\sigma \underset{0\le z\le u}{\sup }|{X_{z}}|\right)du\\ {} & \hspace{2em}+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|.\end{aligned}\]
Proof.
Inequality (29) follows directly from the explicit solution (24) of the model under consideration.
To derive inequality (30), we proceed in two steps. First, using equation (22), we express the difference ${Y_{u}}-{Y_{k,n}}$ as
\[ {Y_{u}}-{Y_{k,n}}=\theta {\int _{\frac{k}{n}}^{u}}{Y_{s}}\hspace{0.1667em}ds+\sigma ({X_{u}}-{X_{k,n}}).\]
Then, we apply the bound from (29):
\[\begin{aligned}{}& \underset{\frac{k}{n}\le u\le s}{\sup }\left|{Y_{u}}-{Y_{k,n}}\right|\le \theta \underset{\frac{k}{n}\le u\le s}{\sup }{\int _{\frac{k}{n}}^{u}}\left|{Y_{v}}\right|dv+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|\\ {} & \hspace{1em}\le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+\sigma \theta {e^{\theta u}}{\int _{0}^{u}}{e^{-\theta v}}\underset{0\le z\le v}{\sup }|{X_{z}}|dv+\sigma \underset{0\le z\le u}{\sup }|{X_{z}}|\right)du\\ {} & \hspace{2em}+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|.\end{aligned}\]
 □
We now proceed to analyze the two cases separately: tfBm and tfBmII.

4.4 Proof of Theorem 3 in the case of tfBm

Throughout this subsection, we assume that the model (22) is driven by tfBm, i.e., we consider the case $X={B_{H,\lambda }^{I}}$.
To simplify the notation, we denote by Ξ the class of nonnegative random variables ξ for which there exist constants ${C_{1}}\gt 0$ and ${C_{2}}\gt 0$ such that the tail bound (7) holds for all $u\gt 0$.
Remark 3.
The class Ξ is closed under positive scaling, addition of constants, and finite sums: if $\xi ,\eta \in \Xi $ and $C\gt 0$, then $C\xi \in \Xi $, $\xi +C\in \Xi $, $\xi +\eta \in \Xi $. The claim follows from elementary tail inequalities; for instance,
\[\begin{array}{l}\displaystyle \mathsf{P}(\xi +\eta \gt u)\le \mathsf{P}(\xi \gt u/2)+\mathsf{P}(\eta \gt u/2),\\ {} \displaystyle \mathsf{P}(\xi +C\gt u)\le \mathsf{P}(\xi \gt u-C)\le {C_{1}}{e^{-{C_{2}}{(u-C)^{2}}}}\le {C^{\prime }_{1}}{e^{-{C^{\prime }_{2}}{u^{2}}}}\end{array}\]
for suitable constants ${C^{\prime }_{1}},{C^{\prime }_{2}}\gt 0$.
Recall the notation: $\beta =H\wedge 1$ if $H\ne 1$, and $\beta \in (0,1)$ is arbitrary if $H=1$.
Lemma 4.
For any $\delta \gt 0$, there exists $\zeta \in \Xi $ such that:
(31)
\[ \underset{0\le u\le s}{\sup }|{Y_{u}}|\le \left({e^{\theta s}}+{s^{\delta }}\right)\zeta ,\]
and for any fixed $\alpha \in [0,\beta )$ and $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$,
(32)
\[ \underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|\le {n^{-\alpha }}\left({e^{\theta s}}+{s^{\delta }}\right)\zeta .\]
Proof.
It follows from (6) that for any $s\gt 0$, the following inequality holds:
(33)
\[ \underset{u\in [0,s]}{\sup }\left|{B_{H,\lambda }^{I}}(u)\right|\le \left({s^{\delta }}+1\right)\xi (\delta )\hspace{1em}\text{a.s.},\]
where the random variable $\xi =\xi (\delta )$ is nonnegative and belongs to Ξ for any $\delta \gt 0$.
Hence, we have the bound
(34)
\[ {\int _{0}^{t}}{e^{-\theta s}}\underset{0\le u\le s}{\sup }\left|{B_{H,\lambda }^{I}}(u)\right|ds\le {\int _{0}^{t}}{e^{-\theta s}}\left({s^{\delta }}+1\right)\xi (\delta )\hspace{0.1667em}ds\le C\xi (\delta ).\]
Substituting (33) and (34) into (29) yields
(35)
\[ \underset{0\le u\le s}{\sup }\left|{Y_{u}}\right|\le \left|{y_{0}}\right|{e^{\theta s}}+C\sigma \theta {e^{\theta s}}\xi (\delta )+\sigma \left({s^{\delta }}+1\right)\xi (\delta ).\]
Inequality (31) then follows by observing that in the last term one can bound ${s^{\delta }}+1\le {s^{\delta }}+{e^{\theta s}}$, and by setting $\zeta :=|{y_{0}}|+C\sigma \theta \xi (\delta )+\sigma \xi (\delta )\in \Xi $.
Now, let $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$, so that $0\lt s-\frac{k}{n}\le \frac{1}{n}\lt 1$. Set $p=3$, ${t_{1}}=s$, and ${t_{2}}=\frac{k}{n}$ in (20). Then,
(36)
\[ \left|{B_{H,\lambda }^{I}}\left(s\right)-{B_{H,\lambda }^{I}}\left(\frac{k}{n}\right)\right|\le \left({s^{\delta }}+1\right){\left(s-\frac{k}{n}\right)^{\beta }}\left({\left|\log \left(s-\frac{k}{n}\right)\right|^{3}}+1\right)\eta .\]
In order to bound the right-hand side of (36), note that for any $r\gt 0$, the function $f(t)={t^{r}}|\log t{|^{3}}$, $t\in (0,1)$, is bounded and attains its maximum at $t={e^{-3/r}}$. Therefore, for any $r\gt 0$, there exists a constant C such that ${t^{r}}|\log t{|^{3}}\le C$. Multiplying both sides by ${t^{\beta -r}}$ gives ${t^{\beta }}|\log t{|^{3}}\le C{t^{\beta -r}}$. Since ${t^{\beta }}$ as a function of β is strictly decreasing for $t\in (0,1)$, we also have ${t^{\beta }}\lt {t^{\beta -r}}$ for any $r\gt 0$. Hence,
\[ {t^{\beta }}|\log t{|^{3}}+{t^{\beta }}\le C{t^{\beta -r}},\hspace{1em}t\in (0,1).\]
Now, restrict r to the interval $0\lt r\le \beta $, so that $\alpha :=\beta -r\in [0,\beta )$. Then
\[ {\left(s-\frac{k}{n}\right)^{\beta -r}}\le {n^{-\beta +r}}={n^{-\alpha }},\]
for all $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$. Substituting this into (36) yields
(37)
\[ \left|{B_{H,\lambda }^{I}}\left(s\right)-{B_{H,\lambda }^{I}}\left(\frac{k}{n}\right)\right|\le C{n^{-\alpha }}\left({s^{\delta }}+1\right)\eta .\]
Now apply (33), (34), and (37) to (30). This gives
(38)
\[\begin{aligned}{}\underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|& \le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+C\sigma \theta {e^{\theta u}}\xi (\delta )+\sigma ({s^{\delta }}+1)\xi (\delta )\right)du\\ {} & \hspace{1em}+C\sigma {n^{-\alpha }}({s^{\delta }}+1)\eta ,\\ {} & \le \theta {n^{-1}}\left(\left|{y_{0}}\right|{e^{\theta s}}+C\sigma \theta {e^{\theta s}}\xi (\delta )+\sigma ({s^{\delta }}+1)\xi (\delta )\right)\\ {} & \hspace{1em}+C\sigma {n^{-\alpha }}({s^{\delta }}+1)\eta .\end{aligned}\]
This inequality implies the bound (32), noting that ${n^{-1}}\le {n^{-\alpha }}$ (since $\alpha \lt \beta \le 1$), using the estimate ${s^{\delta }}+1\le {s^{\delta }}+{e^{\theta s}}$, and defining
\[ \zeta =\theta |{y_{0}}|+C\sigma {\theta ^{2}}\xi (\delta )+\sigma \theta \xi (\delta )+C\sigma \eta ,\]
which belongs to Ξ by Remark 3.
Note that the two bounds, (31) and (32), can be achieved with the same $\zeta \in \Xi $ if we define
\[ \zeta =(1+\theta )\big(|{y_{0}}|+C\sigma \theta \xi (\delta )+\sigma \xi (\delta )\big)+C\sigma \eta .\]
 □
Lemma 5.
For any $\alpha \in (0,\beta )$, there exist $\zeta \in \Xi $ and a positive integer ${n_{0}}$ such that, for all $n\ge {n_{0}}$, the following inequality holds:
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le \frac{{\zeta ^{2}}}{{n^{\alpha }}}{e^{2\theta {n^{m-1}}}}.\]
Proof.
We begin by observing that
(39)
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le {\int _{0}^{{n^{m-1}}}}{\sum \limits_{k=0}^{{n^{m}}-1}}\left|{\varphi _{n,k}}(s)\right|\hspace{0.1667em}ds,\]
where the summands are given by
\[ {\varphi _{n,k}}(s):=\left({Y_{s}^{2}}-{Y_{k,n}^{2}}\right){𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s),\hspace{1em}0\le k\le {n^{m}}-1.\]
Using the bounds established in Lemma 4, we estimate each term as follows:
\[\begin{aligned}{}\left|{\varphi _{n,k}}(s)\right|& \le \left|{Y_{s}}-{Y_{k,n}}\right|\left(\left|{Y_{s}}\right|+\left|{Y_{k,n}}\right|\right){𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\\ {} & \le 2\underset{\frac{k}{n}\le u\le s}{\sup }\left|{Y_{u}}-{Y_{k,n}}\right|\cdot \underset{0\le u\le s}{\sup }\left|{Y_{u}}\right|{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\\ {} & \le 2{n^{-\alpha }}{\left({e^{\theta s}}+{s^{\delta }}\right)^{2}}{\zeta ^{2}}{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\le 4{n^{-\alpha }}\left({e^{2\theta s}}+{s^{2\delta }}\right){\zeta ^{2}}{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s).\end{aligned}\]
Substituting this bound into (39) and integrating, we obtain
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le 4\hspace{0.1667em}{n^{-\alpha }}\left(\frac{{e^{2\theta {n^{m-1}}}}-1}{2\theta }+\frac{{n^{(2\delta +1)(m-1)}}}{2\delta +1}\right){\zeta ^{2}}.\]
Finally, note that for large n, the term in parentheses is $O({e^{2\theta {n^{m-1}}}})$, and any positive constants can be absorbed into the generic random variable $\zeta \in \Xi $ in accordance with Remark 3.  □
Corollary 3.
There exist a random variable $\zeta \in \Xi $ and a positive integer ${n_{0}}$ such that for all $n\ge {n_{0}}$,
(40)
\[ \frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}={\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds+{R_{n}},\]
where the error term satisfies
\[ \left|{R_{n}}\right|\le \frac{{\zeta ^{2}}}{{n^{\alpha }}}{e^{2\theta {n^{m-1}}}}.\]
Proof of Theorem 3.
Combining (40) with estimate (25), we obtain
\[\begin{aligned}{}{\widetilde{\theta }_{n}}(m)& =\frac{n\left({Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}\right)}{2{\textstyle\textstyle\sum _{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}}=\frac{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds+2{R_{n}}}\\ {} & ={\left(\frac{2{\textstyle\textstyle\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}+2\cdot \frac{{R_{n}}}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}\right)^{-1}}\\ {} & ={\left(\frac{1}{{\widehat{\theta }_{{n^{m-1}}}}}+\frac{2{R_{n}}}{{e^{2\theta {n^{m-1}}}}}\cdot \frac{{e^{2\theta {n^{m-1}}}}}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}\right)^{-1}}\to \theta ,\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\end{aligned}\]
Here, the convergence follows from Theorem 2, Corollary 3 (which yields $|2{R_{n}}|\hspace{0.1667em}{e^{-2\theta {n^{m-1}}}}\le 2{\zeta ^{2}}{n^{-\alpha }}\to 0$ a.s.), and the convergence (26).  □

4.5 Sketch of the proof of Theorem 3 for the tfBmII case

The proof of Theorem 3 for the tfBmII process follows the same strategy as in the case of the tfBm process. Lemma 6 below is established in a manner analogous to Lemma 4 for the process ${B_{H,\lambda }^{I}}$. The key difference is that inequality (8) is used in place of (6).
Applying (8) with $p=2$, we obtain the following bound, which replaces (33):
\[ \underset{u\in [0,s]}{\sup }|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(u)|\le \left({s^{\frac{1}{2}}}{({\log ^{+}}s)^{2}}+1\right)\xi ,\hspace{1em}\text{a.s.},\]
where $\xi \in \Xi $.
Furthermore, using the elementary inequality
\[ {\log ^{+}}s\le {C_{\epsilon }}{s^{\epsilon }},\hspace{1em}\epsilon \gt 0,\]
and setting $\delta =\frac{1}{2}+2\epsilon $, while incorporating the resulting deterministic constant into the random variable $\xi \in \Xi $, we arrive at
(41)
\[ \underset{u\in [0,s]}{\sup }\left|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(u)\right|\le \left({s^{\delta }}+1\right)\xi (\delta ),\hspace{1em}\text{a.s.}\]
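For the reader's convenience, the elementary step behind (41) can be written out as
\[ {s^{\frac{1}{2}}}{({\log ^{+}}s)^{2}}+1\le {C_{\epsilon }^{2}}\hspace{0.1667em}{s^{\frac{1}{2}+2\epsilon }}+1\le \left({C_{\epsilon }^{2}}\vee 1\right)\left({s^{\delta }}+1\right),\hspace{1em}\delta =\frac{1}{2}+2\epsilon ,\]
so that (41) follows after absorbing the deterministic factor ${C_{\epsilon }^{2}}\vee 1$ into $\xi (\delta )\in \Xi $.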
Unlike (33), the bound in (41) holds only for $\delta \gt \frac{1}{2}$. However, this restriction does not affect the subsequent proofs. An analogue of Lemma 4 can therefore be formulated as follows.
Lemma 6.
For any $\delta \gt \frac{1}{2}$, there exists a random variable $\zeta \in \Xi $ such that
(42)
\[ \underset{0\le u\le s}{\sup }|{Y_{u}}|\le \left({e^{\theta s}}+{s^{\delta }}\right)\zeta ,\]
and for any fixed $\alpha \in [0,\beta )$ and $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$,
(43)
\[ \underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|\le {n^{-\alpha }}\left({e^{\theta s}}+{s^{\delta }}\right)\zeta .\]
The remaining arguments for the tfBmII case proceed in the same way as for the tfBm process.

5 Simulation

To verify and illustrate the theoretical results, we investigate the behavior of the estimator ${\widetilde{\theta }_{n}}(m)$ for the drift parameter in the Ornstein–Uhlenbeck process driven by tfBm and tfBmII. All simulations were conducted using the R programming language.
The generation of tfBm and tfBmII trajectories was based on the Davies–Harte method, which employs circulant matrix embedding and is widely used for exact simulation of fBm. This method was originally introduced in [9] and subsequently refined in [11] and [33]. A user-friendly and comprehensive implementation can be found in [10]. To adapt this algorithm from fBm to tfBm and tfBmII, we applied several modifications, including the use of different covariance matrices (see (4) and (5)) and the incorporation of scaling factors arising from the scaling properties of tempered fractional Brownian motions (see (3)).
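For illustration, the core circulant-embedding step can be sketched in R as follows. The sketch takes the standard route of simulating the stationary increment sequence of the process on a uniform grid; it assumes that a vector acov containing the autocovariances of that sequence has already been computed from the covariance functions (4) or (5) together with the scaling relation (3). The function names are illustrative and do not reproduce our actual implementation.

simulate_stationary_gaussian <- function(acov) {
  # acov = (gamma(0), ..., gamma(N-1)): autocovariances of the stationary
  # increment sequence on the simulation grid
  N <- length(acov)
  M <- 2 * (N - 1)
  c_row <- c(acov, acov[(N - 1):2])        # first row of the circulant embedding
  lambda <- Re(fft(c_row))                 # eigenvalues of the circulant matrix
  if (any(lambda < -1e-10)) stop("circulant embedding is not nonnegative definite")
  lambda <- pmax(lambda, 0)
  z <- complex(real = rnorm(M), imaginary = rnorm(M))
  x <- fft(sqrt(lambda / M) * z)           # complex Gaussian vector whose real part
  Re(x)[1:N]                               # has the prescribed Toeplitz covariance
}

simulate_path <- function(acov) {
  # cumulative sums of the increments give a discretized trajectory on the grid
  c(0, cumsum(simulate_stationary_gaussian(acov)))
}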
For equation (22), we fixed the parameters $\theta =2$ and $\sigma =1$. For each type of tempered fractional Brownian motion (tfBm and tfBmII), we considered 18 combinations of parameters, with $\lambda \in \{0.05,0.25,0.5\}$ and $H\in \{0.2,0.4,0.6,0.8,1.2,1.4\}$.
For each parameter set, 1000 sample paths were simulated over the interval $[0,10]$. We considered two choices of the parameter m: $m=4/3$ and $m=5/4$.
In each case, we computed 1000 estimates ${\widetilde{\theta }_{n}}$ using formula (28). For the resulting collection of estimates, we calculated the bias (the sample mean minus the true value $\theta =2$) and the standard deviation. The results are summarized in Tables 1–4.
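A condensed R sketch of one Monte Carlo cell is given below. Two ingredients are simplifying assumptions made here for illustration rather than a transcript of our code: the path of Y is produced by an Euler scheme for an equation of the form $d{Y_{t}}=\theta {Y_{t}}\hspace{0.1667em}dt+\sigma \hspace{0.1667em}dB(t)$ with ${Y_{0}}={y_{0}}$ (our reading of (22)), and simulate_driver_increments(N, 1/n) is a placeholder for the increments of the tempered driving noise on the grid, e.g. obtained from the circulant-embedding routine sketched above. The estimator itself is computed exactly as in the first equality of the display in the proof of Theorem 3.

theta_true <- 2; sigma <- 1; y0 <- 0          # y0 = 0 is an illustrative choice
n <- 1000; m <- 4/3
N <- round(n^m)                               # number of steps; time horizon N/n = n^(m-1)

estimate_theta <- function(Y, n) {
  # discretized least squares estimator: n (Y_N^2 - y_0^2) / (2 sum_{k=0}^{N-1} Y_{k,n}^2)
  N <- length(Y) - 1
  n * (Y[N + 1]^2 - Y[1]^2) / (2 * sum(Y[1:N]^2))
}

one_replication <- function() {
  dB <- simulate_driver_increments(N, 1 / n)  # placeholder: tfBm or tfBmII increments
  Y <- numeric(N + 1); Y[1] <- y0
  for (k in 1:N) Y[k + 1] <- Y[k] + theta_true * Y[k] / n + sigma * dB[k]
  estimate_theta(Y, n)
}

est <- replicate(1000, one_replication())
c(bias = mean(est) - theta_true, sd = sd(est))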
The simulation outcomes support the theoretical findings. The bias of the estimator decreases with increasing n. Furthermore, for a given value of n, the biases exhibit minor fluctuations around specific values: approximately 0.0385 and 0.0020 for $n=100$ and $n=1000$, respectively, in the case $m=4/3$; and 0.0403, 0.0038, and 0.0002 for $n=100$, $n=1000$, and $n=10000$, respectively, in the case $m=5/4$. The standard deviations are consistently small and close to zero, which is typical for nonergodic models driven by fractional noises. In most scenarios, the standard deviation decreases as n increases.
Table 1.
Bias of ${\widetilde{\theta }_{n}}$ for $m=4/3$
H n tfBm: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$ tfBmII: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$
0.2 100 0.038459904 0.038459886 0.038459903 0.038459898 0.038459898 0.038459958
1000 0.002000002 0.001999986 0.002000002 0.001999996 0.001999997 0.002000053
0.4 100 0.038459892 0.038459896 0.038459903 0.038459902 0.038459236 0.038459906
1000 0.001999991 0.001999995 0.002000002 0.002000001 0.001999385 0.002000005
0.6 100 0.038459902 0.038459900 0.038459901 0.038459899 0.038459900 0.038459902
1000 0.002000001 0.001999999 0.002000000 0.001999999 0.001999999 0.002000001
0.8 100 0.038459902 0.038459903 0.038459900 0.038459904 0.038459902 0.038459901
1000 0.002000001 0.002000002 0.001999999 0.002000003 0.002000001 0.002000000
1.2 100 0.038459895 0.038459896 0.038459901 0.038459908 0.038459905 0.038459899
1000 0.001999994 0.001999995 0.002000000 0.002000007 0.002000003 0.001999998
1.4 100 0.038459885 0.038459900 0.038459902 0.038459894 0.038459923 0.038459906
1000 0.001999985 0.001999999 0.002000001 0.001999993 0.002000021 0.002000005
Table 2.
Standard deviation of ${\widetilde{\theta }_{n}}$ for $m=4/3$
H n tfBm: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$ tfBmII: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$
0.2 100 $1.0850\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $3.6756\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.2962\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.0551\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.1776\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.7373\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$
1000 $1.0981\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $3.3239\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.3093\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.0221\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.1270\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.6145\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$
0.4 100 $2.5119\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.4829\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0550\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.4634\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.1149\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-5}}$ $1.2389\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $2.3847\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.4046\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0164\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.2696\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.9552\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-5}}$ $1.1813\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
0.6 100 $4.4196\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.9572\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.9885\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.5798\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.3487\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $6.9116\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1000 $4.2560\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.7832\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.6578\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.4102\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.2977\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $6.6270\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
0.8 100 $3.9826\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.3919\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.2389\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.0509\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $4.6020\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.6810\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1000 $3.8418\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.2792\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.1287\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.0135\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $4.4359\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.5867\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1.2 100 $1.6268\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.4903\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5265\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.4087\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.6554\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.9973\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1000 $1.5693\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.4377\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.4725\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.3233\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5969\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.7856\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1.4 100 $4.8088\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.1822\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.8087\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.2764\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.4250\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.6412\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $4.6385\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.1051\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.7446\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $3.1606\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.2330\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5831\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
Table 3.
Bias of ${\widetilde{\theta }_{n}}$ for $m=5/4$
H n tfBm: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$ tfBmII: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$
0.2 100 0.040330582 0.040330576 0.040330592 0.040330554 0.040330575 0.040330443
1000 0.003804569 0.003804564 0.003804579 0.003804543 0.003804563 0.003804407
10000 0.000200005 0.000200001 0.000200015 0.000199980 0.000199999 0.000199840
0.4 100 0.040330593 0.040330573 0.040330578 0.040330580 0.040330582 0.040330569
1000 0.003804580 0.003804561 0.003804566 0.003804568 0.003804570 0.003804557
10000 0.000200016 0.000199997 0.000200002 0.000200004 0.000200006 0.000199993
0.6 100 0.040330587 0.040330574 0.040330574 0.040330576 0.040330580 0.040330584
1000 0.003804574 0.003804562 0.003804562 0.003804565 0.003804568 0.003804572
10000 0.000200011 0.000199999 0.000199998 0.000200001 0.000200004 0.000200008
0.8 100 0.040330574 0.040330575 0.040330577 0.040330577 0.040330598 0.040330576
1000 0.003804563 0.003804563 0.003804565 0.003804565 0.003804585 0.003804564
10000 0.000199999 0.000199999 0.000200001 0.000200001 0.000200021 0.000200000
1.2 100 0.040330579 0.040330578 0.040330576 0.040330575 0.040330577 0.040330567
1000 0.003804567 0.003804566 0.003804564 0.003804564 0.003804565 0.003804556
10000 0.000200003 0.000200002 0.000200000 0.000200000 0.000200001 0.000199992
1.4 100 0.040330564 0.040330574 0.040330576 0.040330505 0.040330569 0.040330574
1000 0.003804553 0.003804562 0.003804565 0.003804496 0.003804557 0.003804563
10000 0.000199989 0.000199998 0.000200001 0.000199932 0.000199994 0.000199999
Table 4.
Standard deviation of ${\widetilde{\theta }_{n}}$ for $m=5/4$
H n tfBm: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$ tfBmII: $\lambda =0.05$ $\lambda =0.25$ $\lambda =0.5$
0.2 100 $1.2076\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.8049\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.8835\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $6.9047\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.8000\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $3.4982\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$
1000 $1.1527\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.5964\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.5472\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $6.5957\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.8392\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $4.2790\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$
10000 $1.1419\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $5.5662\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.5446\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $6.4627\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.8377\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $4.3842\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$
0.4 100 $7.1789\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $8.0580\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.1488\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.5452\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.0327\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.1210\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $6.9960\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $7.7201\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.9059\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.1132\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.9951\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0461\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
10000 $6.9327\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $7.6983\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.8546\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.0857\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.9895\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0428\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
0.6 100 $3.4083\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $7.5283\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.2984\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $8.0141\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $6.2212\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.6325\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $3.2785\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $7.2589\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.1538\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.7401\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.9969\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.5024\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
10000 $3.2664\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $7.2358\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.1403\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.7070\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.9765\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.4914\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
0.8 100 $3.3002\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.2949\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.2554\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.3186\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.3405\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.2509\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1000 $3.1831\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.2007\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.1749\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.1645\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.2854\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.1748\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
10000 $3.1717\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.1920\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.1670\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.1495\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.2803\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.1670\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$
1.2 100 $8.0339\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.9244\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.1974\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.7352\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.6121\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.9655\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $7.7497\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.7172\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.1550\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.4610\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.4483\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.8957\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
10000 $7.7218\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $5.6967\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.1509\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $7.4342\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $4.4323\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $1.8889\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1.4 100 $5.1044\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5895\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.4863\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.1536\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.1307\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0808\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
1000 $4.9236\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5332\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.3984\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.0777\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.0552\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0425\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$
10000 $4.9059\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.5277\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $2.3898\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ $2.0702\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ $2.0479\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ $1.0388\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$

A Appendix: asymptotic growth of a centered Gaussian process

Let $\{{b_{k}},k\ge 0\}$ be an increasing sequence such that ${b_{0}}=0$, ${b_{k+1}}-{b_{k}}\ge 1$, and ${b_{k}}\to \infty $ as $k\to \infty $. Let $a(t)\gt 0$ be a continuous, increasing function, and denote ${a_{k}}:=a({b_{k}})$.
For a fixed $\Delta \gt 0$, let the sets ${\mathbf{T}_{\Delta }}$, ${\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}$, and the metric $d(\mathbf{t},\mathbf{s})$ be defined as in equations (10), (11), and (12), respectively.
Theorem 4 ([18, Theorem B.34]).
Let $0\lt \Delta \le 2({b_{k+1}}-{b_{k}})$ for all $k\ge 0$, and let $X=\{X(\mathbf{t}),\mathbf{t}\in {\mathbf{T}_{\Delta }}\}$ be a centered Gaussian process satisfying the following conditions:
  • (i) The maximal standard deviation over each segment is finite:
    \[ {m_{k}}:=m({\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }})=\underset{\mathbf{t}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}{\sup }{\left(\mathsf{E}[X{(\mathbf{t})^{2}}]\right)^{1/2}}\lt \infty .\]
  • (ii) There exist constants $\beta \in (0,1]$ and ${c_{k}}\gt 0$ such that
    \[ \underset{\substack{d(\mathbf{t},\mathbf{s})\le h\\ {} \mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}}{\sup }{\left(\mathsf{E}\left[{(X(\mathbf{t})-X(\mathbf{s}))^{2}}\right]\right)^{1/2}}\le {c_{k}}{h^{\beta }}.\]
  • (iii) The following series are convergent:
    \[ A:={\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}}{{a_{k}}}\lt \infty ,\hspace{2em}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\lt \infty ,\]
    and for some $\gamma \in (0,1]$,
    \[ {\sum \limits_{k=0}^{\infty }}\frac{{m_{k}^{1-2\gamma /\beta }}\hspace{0.1667em}{c_{k}^{2\gamma /\beta }}}{{a_{k}}}\lt \infty .\]
Then for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$, we have
\[ \mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|X(\mathbf{t})|}{a({t_{1}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{A^{2}}}{2{(1-\vartheta )^{2}}}\right\}\cdot {A_{0}}(\vartheta ,\gamma ,\varepsilon ),\]
where
\[\begin{aligned}{}{A_{0}}(\vartheta ,\gamma ,\varepsilon )& =\frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{1}{A}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{{\Delta ^{2\gamma }}}{\gamma A{\left(1-\frac{\varepsilon }{\beta }\right)^{2\gamma /\varepsilon }}{\vartheta ^{2\gamma /\beta }}{4^{2\gamma }}}{\sum \limits_{k=0}^{\infty }}\frac{{c_{k}^{2\gamma /\beta }}{m_{k}^{1-2\gamma /\beta }}}{{a_{k}}}\right\}.\end{aligned}\]

Acknowledgement

The authors express their sincere gratitude to the reviewers for their careful consideration of the manuscript and for their insightful comments, which have significantly enhanced its rigor, clarity, and readability.

References

[1] 
Azmoodeh, E., Mishura, Y., Sabzikar, F.: How does tempering affect the local and global properties of fractional Brownian motion? J. Theor. Probab. 35(1), 484–527 (2022). MR4379473. https://doi.org/10.1007/s10959-020-01068-z
[2] 
Azmoodeh, E., Sottinen, T., Viitasaari, L., Yazigi, A.: Necessary and sufficient conditions for Hölder continuity of Gaussian processes. Stat. Probab. Lett. 94, 230–235 (2014). MR3257384. https://doi.org/10.1016/j.spl.2014.07.030
[3] 
Belfadli, R., Es-Sebaiy, K., Ouknine, Y.: Parameter estimation for fractional Ornstein–Uhlenbeck processes: non-ergodic case. Front. Sci. Eng. 1(1), 1–16 (2011).
[4] 
Berzin, C., Latour, A., León, J.R.: Inference on the Hurst Parameter and the Variance of Diffusions Driven by Fractional Brownian Motion. Lecture Notes in Statistics, vol. 216. Springer, Cham (2014). MR3289986. https://doi.org/10.1007/978-3-319-07875-5
[5] 
Bojdecki, T., Gorostiza, L.G., Talarczyk, A.: Sub-fractional Brownian motion and its relation to occupation times. Stat. Probab. Lett. 69(4), 405–419 (2004). MR2091760. https://doi.org/10.1016/j.spl.2004.06.035
[6] 
Boniece, B.C., Didier, G., Sabzikar, F.: Tempered fractional Brownian motion: wavelet estimation, modeling and testing. Appl. Comput. Harmon. Anal. 51, 461–509 (2021). MR4196450. https://doi.org/10.1016/j.acha.2019.11.004
[7] 
Brouste, A., Iacus, S.M.: Parameter estimation for the discretely observed fractional Ornstein-Uhlenbeck process and the Yuima R package. Comput. Stat. 28(4), 1529–1547 (2013). MR3120827. https://doi.org/10.1007/s00180-012-0365-6
[8] 
Cheridito, P., Kawaguchi, H., Maejima, M.: Fractional Ornstein-Uhlenbeck processes. Electron. J. Probab. 8, 3–14 (2003). MR1961165. https://doi.org/10.1214/EJP.v8-125
[9] 
Davies, R.B., Harte, D.S.: Tests for Hurst effect. Biometrika 74(1), 95–101 (1987). MR0885922. https://doi.org/10.1093/biomet/74.1.95
[10] 
Dieker, T.: Simulation of fractional Brownian motion. Master’s thesis, Department of Mathematical Sciences, University of Twente (2004).
[11] 
Dietrich, C.R., Newsam, G.N.: Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM J. Sci. Comput. 18(4), 1088–1107 (1997). MR1453559. https://doi.org/10.1137/S1064827592240555
[12] 
Dozzi, M., Kozachenko, Y., Mishura, Y., Ralchenko, K.: Asymptotic growth of trajectories of multifractional Brownian motion, with statistical applications to drift parameter estimation. Stat. Inference Stoch. Process. 21(1), 21–52 (2018). MR3769831. https://doi.org/10.1007/s11203-016-9147-z
[13] 
El Machkouri, M., Es-Sebaiy, K., Ouknine, Y.: Least squares estimator for non-ergodic Ornstein–Uhlenbeck processes driven by Gaussian processes. J. Korean Stat. Soc. 45(3), 329–341 (2016). MR3527650. https://doi.org/10.1016/j.jkss.2015.12.001
[14] 
El-Nouty, C., Zili, M.: On the sub-mixed fractional Brownian motion. Appl. Math. J. Chin. Univ. Ser. B 30(1), 27–43 (2015). MR3319622. https://doi.org/10.1007/s11766-015-3198-6
[15] 
Houdré, C., Villa, J.: An example of infinite dimensional quasi-helix. In: Stochastic Models, Mexico City, 2002. Contemp. Math., vol. 336, pp. 195–201. Amer. Math. Soc., Providence, RI (2003). MR2037165. https://doi.org/10.1090/conm/336/06034
[16] 
Kozachenko, Y., Melnikov, A., Mishura, Y.: On drift parameter estimation in models with fractional Brownian motion. Statistics 49(1), 35–62 (2015). MR3304366. https://doi.org/10.1080/02331888.2014.907294
[17] 
Kubilius, K., Melichov, D.: On comparison of the estimators of the Hurst index of the solutions of stochastic differential equations driven by the fractional Brownian motion. Informatica (Vilnius) 22(1), 97–114 (2011). MR2885661. https://doi.org/10.15388/Informatica.2011.316
[18] 
Kubilius, K., Mishura, Y., Ralchenko, K.: Parameter Estimation in Fractional Diffusion Models. Bocconi & Springer Series, vol. 8. Bocconi University Press, Milan; Springer, Cham (2017).
[19] 
Kubilius, K., Mishura, Y., Ralchenko, K., Seleznjev, O.: Consistency of the drift parameter estimator for the discretized fractional Ornstein–Uhlenbeck process with Hurst index $H\in (0,\frac{1}{2})$. Electron. J. Stat. 9(2), 1799–1825 (2015). MR3391120. https://doi.org/10.1214/15-EJS1062
[20] 
Lim, S., Eab, C.H.: Tempered fractional Brownian motion revisited via fractional Ornstein–Uhlenbeck processes. arXiv preprint arXiv:1907.08974 (2019).
[21] 
Macioszek, K., Sabzikar, F., Burnecki, K.: Testing of tempered fractional Brownian motions. arXiv preprint arXiv:2504.11906 (2025).
[22] 
Mandelbrot, B.B., Van Ness, J.W.: Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10, 422–437 (1968). MR0242239. https://doi.org/10.1137/1010093
[23] 
Meerschaert, M.M., Sabzikar, F.: Tempered fractional Brownian motion. Stat. Probab. Lett. 83(10), 2269–2275 (2013). MR3093813. https://doi.org/10.1016/j.spl.2013.06.016
[24] 
Meerschaert, M.M., Sabzikar, F.: Stochastic integration for tempered fractional Brownian motion. Stoch. Process. Appl. 124(7), 2363–2387 (2014). MR3192500. https://doi.org/10.1016/j.spa.2014.03.002
[25] 
Mishura, Y.S.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Lecture Notes in Mathematics, vol. 1929. Springer (2008). MR2378138. https://doi.org/10.1007/978-3-540-75873-0
[26] 
Mishura, Y., Ralchenko, K.: Drift parameter estimation in the models involving fractional Brownian motion. In: Modern Problems of Stochastic Analysis and Statistics. Springer Proc. Math. Stat., vol. 208, pp. 237–268. Springer (2017). MR3747669. https://doi.org/10.1007/978-3-319-65313-6_10
[27] 
Mishura, Y., Ralchenko, K.: Asymptotic growth of sample paths of tempered fractional Brownian motions, with statistical applications to Vasicek-type models. Fractal Fract. 8(2), 79 (2024). https://doi.org/10.3390/fractalfract8020079
[28] 
Mishura, Y., Ralchenko, K., Dehtiar, O.: Asymptotic properties of parameter estimators in Vasicek model driven by tempered fractional Brownian motion. Austrian J. Stat. 54, 61–81 (2025). https://doi.org/10.17713/ajs.v54i1.1966
[29] 
Mishura, Y., Ralchenko, K., Shklyar, S.: Gaussian Volterra processes: asymptotic growth and statistical estimation. Theory Probab. Math. Stat. 108, 149–167 (2023). MR4588243. https://doi.org/10.1090/tpms/1190
[30] 
Mishura, Y., Zili, M.: Stochastic Analysis of Mixed Fractional Gaussian Processes. ISTE Press, London; Elsevier Ltd, Oxford (2018).
[31] 
Peltier, R.-F., Lévy Véhel, J.: Multifractional Brownian motion: definition and preliminary results. INRIA research report, vol. 2645 (1995).
[32] 
Sabzikar, F., Surgailis, D.: Tempered fractional Brownian and stable motions of second kind. Stat. Probab. Lett. 132, 17–27 (2018). MR3718084. https://doi.org/10.1016/j.spl.2017.08.015
[33] 
Wood, A.T.A., Chan, G.: Simulation of stationary Gaussian processes in ${[0,1]^{d}}$. J. Comput. Graph. Stat. 3(4), 409–432 (1994). MR1323050. https://doi.org/10.2307/1390903
[34] 
Zili, M.: On the mixed fractional Brownian motion. J. Appl. Math. Stoch. Anal., Art. ID 32435, 9 pp. (2006). MR2253522. https://doi.org/10.1155/JAMSA/2006/32435
[35] 
Zili, M.: Generalized fractional Brownian motion. Mod. Stoch. Theory Appl. 4(1), 15–24 (2017). MR3633929. https://doi.org/10.15559/16-VMSTA71