
On long-range dependent time series driven by pseudo-Poisson type processes
Eslah Azzo
https://doi.org/10.15559/25-VMSTA290
Pub. online: 9 December 2025. Type: Research Article

Received: 20 August 2025
Revised: 14 November 2025
Accepted: 14 November 2025
Published: 9 December 2025

Abstract

In many practical systems, the load changes at the moments when random events occur, which are often modeled as arrivals in a Poisson process independent of the current load state. This modeling approach is widely applicable in areas such as telecommunications, queueing theory, and reliability engineering. This motivates the development of models that combine family-wise scaling with non-Gaussian driving mechanisms, capturing discontinuities or jump-type behavior. In this paper, a stationary time series is formed from increments of a family-wise scaling process defined on the positive real line. This family-wise scaling process is expressed as an integral of a pseudo-Poisson type process. It is established that this stationary time series exhibits long-range dependence, as indicated by an autocovariance function that decays following a power law with a slowly varying component, and a spectral density that displays a power-law divergence at low frequencies. The autocovariances are not summable, indicating strong correlations over long time intervals. This framework extends the classical results on fractional Gaussian noise as well as on series driven by Poisson-type or Lévy-type noise. Additionally, it provides a versatile methodology for the spectral analysis of one-sided long-memory stochastic processes.

Introduction

Self-similar processes are crucial for modeling complex systems that exhibit scale-invariant or fractal-like behavior. These processes have been studied extensively due to their theoretical significance and practical applications in various fields, including network traffic modeling, financial time series analysis, geophysics, and hydrology. Significant contributions to this area include the works of Mandelbrot and Van Ness [14], Samorodnitsky and Taqqu [19], and Embrechts and Maejima [9]. A key feature of self-similar processes is their capacity to capture long-range dependence, where correlations persist over large time intervals.
On the other hand, processes exhibiting family-wise scaling are vital in modern stochastic modeling because they offer a more flexible and realistic framework for systems where scaling behavior is not simple and fixed. While traditional self-similarity describes a system that looks statistically the same at all scales, family-wise scaling accounts for a deeper, more complex relationship where the very nature of the process changes when it is scaled, all while remaining part of the same “family” of processes. Jørgensen, Martínez and Demétrio [12] introduced a framework for self-similarity and the Lamperti convergence in one-parameter families of processes. Also, Borgnat, Amblard and Flandrin [5] discussed scale invariances and Lamperti transformations as tools for understanding how time rescaling interacts with parameter changes. These perspectives justify distinguishing “family-wise scaling” from the standard notion of self-similarity of a fixed-parameter process. Processes exhibiting family-wise scaling are of both theoretical importance and practical benefit. From a theoretical point of view, studying processes with family-wise scaling expands the scope of stochastic analysis beyond the well-understood self-similar processes like fractional Brownian motion. It allows researchers to develop new mathematical tools and theories to understand a broader range of complex, nonstandard behaviors. This contributes to the development of more sophisticated models that can handle a wider array of real-world data. For instance, in financial markets, volatility and other statistical properties of asset returns can change over different time scales. A traditional self-similar model might miss these nuances. A family-wise scaling model, however, can capture this by allowing a parameter (e.g., related to volatility) to change with the time scale, providing a more accurate and robust model for risk assessment. These processes also enable us to deal with non-Gaussian phenomena: many natural and human processes, such as network data traffic and seismic activity, exhibit sudden jumps or shocks that cannot be modeled by Gaussian processes.
Long-memory and scaling phenomena play a central role in stochastic processes and time series. Classical models often rely on self-similar processes and their stationary increments, such as fractional Brownian motion and fractional Gaussian noise. Extensions driven by Poisson-type or Lévy-type noises provide a richer framework for modeling discontinuities and heavy-tailed behavior.
Stationary long-range dependent (LRD; this abbreviation will also stand for ‘long-range dependence’) series can be broadly classified into three main types, each with distinct mechanisms for LRD, jumps, and scaling. Lévy-based series, including fractional Lévy noise and Lévy-driven moving averages, exhibit LRD via slowly decaying kernel correlations, with jumps intrinsic to the Lévy process, and show approximate scaling: $X(ct)\sim {c^{H}}X(t)$ for large c [6, 7]. Time-changed Gaussian processes generate LRD through memory in the stochastic time-change $\tau (t)$; if $\tau (t)$ is self-similar with index ν, then $X(ct)\stackrel{d}{=}{c^{\nu /2}}X(t)$ [8]. Counting-based Poisson-type series, such as fractional Poisson and mixed Poisson models, produce LRD through heavy-tailed interarrival times or correlated random intensity, with jumps intrinsic to the counting mechanism, but generally lack scaling except in certain asymptotic regimes [2, 13, 15].
In this paper, we introduce a family-type scaling process, where scaling the time variable by a factor $a\gt 0$ produces a scaled version of another process from the same family, rather than the same process. That is, the process parameters themselves are affected by scaling, which distinguishes this construction from classical self-similar processes. Based on this process, we construct, under certain conditions, a stationary time series exhibiting long-range dependence. This framework extends classical results on fractional Gaussian noise and on Lévy- or Poisson-type series, providing a versatile tool for the spectral analysis of one-sided long-memory processes.
The paper is organized as follows. In Section 1, we review the concepts of self-similarity and long-range dependence, followed by an overview of classical models such as fractional Gaussian noise (fGn). We then introduce the notion of pseudo-Poisson type processes, which serve as the driving process in our construction. In Section 2, we introduce our approach to family-type scaling processes, and then present the construction of such a process as the integral of a pseudo-Poisson type process. The gamma distribution of the random intensity in the underlying Poisson process provides a natural mechanism for heavy-tailed behavior and significantly influences the statistical properties of the model. We refer to this process as the fractional PSI-process. Its covariance structure, as given in Proposition 1, reflects long-range dependence with a power-law decay governed by the exponent parameter $H\in \left(\frac{1}{2},1\right)$. This implies persistent correlations over long time intervals, making the model suitable for capturing memory effects observed in various real-world phenomena such as network traffic, financial time series, and environmental data.
The fractional PSI-process ${\Psi _{\theta }}(t),t\in {\mathbb{R}_{+}}$, inherits several important characteristics from both the underlying fractional Brownian motion and the pseudo-Poisson time change. In Section 3, we study a time series defined on ${\mathbb{Z}_{+}}$, constructed from the stationary increments of a fractional PSI-process. The main contribution of this paper is to establish the long-memory behavior of the resulting stationary time series. Specifically, we show that its autocovariance function exhibits a power-law decay of the form
\[ {\gamma _{X}}(h)=L(h)\hspace{0.1667em}{h^{2d-1}},\hspace{1em}d\in (0,1/2),\]
where $L(h)$ is a slowly varying function at infinity. Also, we prove the functional central limit theorem for LRD series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ in the Skorokhod space $\mathcal{D}[0,1]$. Finally, in Section 4, we derive the spectral density and discuss its implications for the long-memory behavior. Although a stationary time series defined on ${\mathbb{Z}_{+}}$ is not strictly causal, its one-sided domain allows for the application of the unilateral Fourier transform to derive its spectral density. Nevertheless, the lack of symmetry and causality introduces additional technical challenges in both the spectral representation and its interpretation. By using distributional Fourier analysis, we show that the resulting spectral density is, in general, a complex-valued and nonsymmetric function. As a result, the corresponding spectral measure is not classical but must be understood in the sense of a generalized spectral measure, specifically, as a tempered distribution.

1 Preliminaries and known results

In this section, we will briefly review key concepts and results that are essential for our construction and analysis. All concepts and definitions presented here can be found in [16]. If a different source is referenced, it will be indicated explicitly.
Definition 1 (stationary increments).
A stochastic process ${\left\{X(t)\right\}_{t\in \mathbb{R}}}$ has stationary increments if, for any $t\in \mathbb{R}$,
\[ X(t+k)-X(k)\stackrel{d}{=}X(t)-X(0),\hspace{1em}\forall k\in \mathbb{R}.\]
Typically, one selects $k=1$ when examining the properties of the stationary time series ${Y_{n}}=X(n+1)-X(n)$.
Definition 2 (Self-similar process).
A stochastic process ${\left\{X(t)\right\}_{t\in \mathbb{R}}}$ is called self-similar (SS), or H-SS, if there exists $H\gt 0$ such that, for all $c\gt 0$,
\[ X(ct)\stackrel{d}{=}{c^{H}}X(t),\hspace{1em}\forall t\in \mathbb{R}.\]
The parameter H is called the Hurst parameter.
As is known, the most famous self-similar process with stationary increments is fractional Brownian motion (fBm). The fBm, denoted by ${B_{H}}={\left\{{B_{H}}(t)\right\}_{t\in \mathbb{R}}}$, is a Gaussian H-SSSI process (self-similar with stationary increments) with Hurst parameter $H\in (0,1)$, zero mean, and covariance function, for $s,t\in \mathbb{R}$, given by
\[ \mathbb{E}{B_{H}}(t){B_{H}}(s)=\frac{\mathbb{E}\left[{B_{H}}{(1)^{2}}\right]}{2}\left(|t{|^{2H}}+|s{|^{2H}}-|t-s{|^{2H}}\right).\]
A stochastic process ${\left\{X(t)\right\}_{t\in T}}$ is called weakly (or second-order) stationary if, for all $s,t\in T$,
\[ \mathbb{E}X(t)=\mathbb{E}X(0),\hspace{1em}\text{and}\hspace{1em}\mathrm{Cov}(X(t),X(s))=\mathrm{Cov}(X(t-s),X(0)),\]
where $T=\mathbb{R}$ or $\mathbb{Z}$.
The autocovariance function of a weakly stationary time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ is defined as
\[ {\gamma _{X}}(h)=\mathrm{Cov}({X_{n+h}},{X_{n}}),\hspace{1em}h\in \mathbb{Z}.\]
By weak stationarity, ${\gamma _{X}}(h)$ depends only on the lag h. The autocovariance function serves as a fundamental measure of dependence in time series analysis and it has a symmetry property, i.e., ${\gamma _{X}}(h)={\gamma _{X}}(-h),\hspace{0.1667em}h\in \mathbb{Z}$.
The spectral density function is a fundamental tool in analyzing weakly stationary time series, providing a frequency-domain perspective of their dependence structure. It is especially important for detecting and characterizing long-range dependence, as it reveals how the variance of the series is distributed over low-frequency components. This insight helps identify persistent correlations over long time horizons, which are crucial for accurate modeling and forecasting in many fields of application.
Definition 3 (Spectral density).
Let λ denote the spectral frequency. The spectral density of a weakly stationary time series $X={\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ is defined as the Fourier transform of its autocovariance function ${\gamma _{X}}(h)$, that is,
(1)
\[ {f_{X}}(\lambda )=\frac{1}{2\pi }{\sum \limits_{h=-\infty }^{\infty }}{\gamma _{X}}(h){e^{-ih\lambda }},\hspace{1em}\lambda \in (-\pi ,\pi ].\]
Definition 4 (Long-range dependence).
A weakly stationary time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ is called long-range dependent (LRD) with parameter $d\in (0,1/2)$ if one of the following nonequivalent conditions holds.
  • I. The auto-covariance ${\gamma _{X}}(h)$ of the time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ satisfies
    (2)
    \[ {\gamma _{X}}(h)={L_{1}}(h)\hspace{0.1667em}{h^{2d-1}},\hspace{1em}h=0,1,2,\dots ,\]
where ${L_{1}}$ is a slowly varying function at infinity.
  • II. The time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ has a spectral density satisfying
    (3)
    \[ {f_{X}}(\lambda )={L_{2}}(|\lambda |)|\lambda {|^{-2d}},\hspace{1em}\lambda \in (-\pi ,\pi ],\]
    where ${L_{2}}$ is a slowly varying function at zero.
  • III. The time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ satisfies
    (4)
    \[ \mathrm{Var}\left({\sum \limits_{k=1}^{N}}{X_{k}}\right)={L_{3}}(N)\hspace{3.33333pt}{N^{2d+1}},\hspace{1em}N=1,2,3,\dots ,\]
    where ${L_{3}}$ is a slowly varying function at infinity.
  • IV. The auto-covariances of the time series ${\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$ are not absolutely summable, i.e.,
    (5)
    \[ {\sum \limits_{h=-\infty }^{\infty }}|{\gamma _{X}}(h)|=\infty .\]
The parameter $d\in (0,1/2)$ is called a long-range dependence parameter.
Thus, although the four conditions capture different perspectives – covariance decay, spectral density, variance of partial sums, and nonsummability – they are tied together through a network of implications, providing a unified but flexible framework for analyzing long-memory behavior. This is shown in Table 1.
Table 1.
Relations between different conditions of LRD according to Pipiras and Taqqu [16, pp. 21–30]
Condition Formulation and relation
(I) Covariance decay ${\gamma _{X}}(h)={L_{1}}(h){h^{2d-1}},\hspace{0.2778em}h\ge 0,{L_{1}}$ slowly varying at ∞.
I ⇒ IV.
I ⇒ III, where the function ${L_{3}}$ is related to function ${L_{1}}$.
I ⇒ II if the slowly varying function ${L_{1}}$ is quasi-monotone.a
(II) Spectral density ${f_{X}}(\lambda )={L_{2}}(|\lambda |)|\lambda {|^{-2d}},\hspace{0.2778em}\lambda \in (-\pi ,\pi ]$. ${L_{2}}$ slowly varying at 0.
II ⇒ I if ${L_{2}}(1/u)$ is a quasi-monotone slowly varying function on $[1/\pi ,\infty )$.
II ⇒ III, IV.
(III) Variance of partial sums $\mathrm{Var}\hspace{-0.1667em}\left({\textstyle\sum \limits_{k=1}^{N}}{X_{k}}\right)={L_{3}}(N){N^{2d+1}},\hspace{0.2778em}N\ge 1$.
${L_{3}}$ slowly varying at ∞.
(IV) Nonsummability of covariances ${\textstyle\sum \limits_{h=-\infty }^{\infty }}|{\gamma _{X}}(h)|=\infty $.
a A slowly varying function L on $[0,\infty )$ is called quasi-monotone if it is of bounded variation on any compact interval of $[0,\infty )$ and if, for some $\delta \gt 0$,
\[ {\underset{0}{\overset{x}{\int }}}{u^{\delta }}|\mathrm{d}L(u)|=\mathrm{O}({x^{\delta }}L(x)),\hspace{1em}x\to \infty .\]
Definition 5.
Let ${B_{H}}={\left\{{B_{H}}(t)\right\}_{t\in \mathbb{R}}}$ be a fractional Brownian motion. Then the stationary series
\[ {X_{n}}={B_{H}}(n)-{B_{H}}(n-1),\hspace{1em}n\in \mathbb{Z},\]
is called a fractional Gaussian noise (fGn).
The fractional Gaussian noise is a zero-mean stationary process with variance
\[ \mathbb{E}\hspace{0.1667em}{X_{n}^{2}}=\mathbb{E}[{B_{H}}{(1)^{2}}],\]
and the autocovariance function given by
(6)
\[\begin{aligned}{}\gamma (k)& =\frac{\mathbb{E}[{B_{H}}{(1)^{2}}]}{2}\left(|k+1{|^{2H}}+|k-1{|^{2H}}-2|k{|^{2H}}\right)\end{aligned}\]
(7)
\[\begin{aligned}{}& \sim \mathbb{E}[{({B_{H}}(1))^{2}}]\cdot H(2H-1)\hspace{0.1667em}{k^{2H-2}},\hspace{1em}\text{as}\hspace{2.5pt}k\to \infty .\end{aligned}\]
According to [16, 20], the spectral density of fGn is given by
\[ {f_{X}}(\lambda )={c_{H}}|1-{e^{-i\lambda }}{|^{2}}{\sum \limits_{k=-\infty }^{\infty }}|2\pi k+\lambda {|^{-1-2H}},\hspace{1em}\text{for}\hspace{2.5pt}\lambda \in (-\pi ,\pi ],\]
where ${c_{H}}=\frac{\mathbb{E}[{B_{H}}{(1)^{2}}]}{2\pi }\Gamma (2H+1)\sin (\pi H)$.
Moreover, the spectral density exhibits a power-law divergence near the origin and behaves asymptotically as
\[ {f_{X}}(\lambda )\sim {c_{H}}{\lambda ^{1-2H}},\hspace{1em}\text{as}\hspace{2.5pt}\lambda \to 0,\hspace{3.33333pt}\text{for}\hspace{2.5pt}H\gt \frac{1}{2}.\]
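As a quick numerical illustration (ours, not part of the original exposition), the following sketch evaluates the autocovariance (6) against its asymptote (7), and the truncated spectral-density series against ${c_{H}}{\lambda ^{1-2H}}$; the normalization $\mathbb{E}[{B_{H}}{(1)^{2}}]=1$ (var1 below) is an assumption of the sketch.

```python
# Sanity-check sketch (ours): fGn autocovariance (6) vs. its asymptote (7),
# and the truncated spectral-density series vs. c_H * lambda^(1 - 2H).
# Assumed normalization: E[B_H(1)^2] = 1 (var1 below).
import math
import numpy as np

H = 0.85
var1 = 1.0  # E[B_H(1)^2], assumed

def gamma_fgn(k):
    # fGn autocovariance, formula (6)
    return 0.5 * var1 * ((k + 1) ** (2 * H) + abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

for k in [10, 100, 1000]:
    asym = var1 * H * (2 * H - 1) * k ** (2 * H - 2)
    print(k, gamma_fgn(k) / asym)  # ratio tends to 1 as k grows

c_H = var1 / (2 * math.pi) * math.gamma(2 * H + 1) * math.sin(math.pi * H)

def f_fgn(lam, n_terms=100_000):
    # truncated version of the fGn spectral-density series above
    k = np.arange(-n_terms, n_terms + 1)
    return c_H * abs(1 - np.exp(-1j * lam)) ** 2 * np.sum(np.abs(2 * np.pi * k + lam) ** (-1.0 - 2 * H))

for lam in [0.5, 0.1, 0.02]:
    print(lam, f_fgn(lam) / (c_H * lam ** (1 - 2 * H)))  # ratio tends to 1 as lam -> 0
```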
In his foundational monograph [11], Feller introduced the concept of a pseudo-Poisson process, through what he called Poissonian randomization of time. This approach involves subordinating the index set of a sequence of random variables ${({\xi _{n}})_{n\ge 0}}$ using a Poisson process, typically assumed to be independent of the underlying sequence, which itself exhibits the Markov property.
In this paper, we consider a generalization of this construction in which the index time of the sequence ${({\xi _{n}})_{n\ge 0}}$ is subordinated by a Poisson process ${\Pi _{\lambda (\omega )}}$ with a random intensity $\lambda (\omega )$. This type of model has been studied by Rusakov, Yakubovich and Baev [17], as well as by Rusakov, Yakubovich, and Laskin [18].
Definition 6.
A Poisson stochastic index process (PSI-process) is defined by
(8)
\[ \psi (s)={\xi _{{\Pi _{\lambda (\omega )}}(s)}},\hspace{1em}s\in {\mathbb{R}_{+}},\]
where the sequence ${({\xi _{n}})_{n\ge 0}}$, the Poisson process ${\Pi _{\lambda (\omega )}}$, and the random variable $\lambda (\omega )$ are assumed to be mutually independent and defined on a common probability space $(\Omega ,\mathcal{F},\mathbb{P})$.
Note that the Poisson process with random rate $\lambda (\omega )$ can be obtained by a random time-scaling of the unit-rate process:
\[ {\Pi _{\lambda (\omega )}}(s)\stackrel{d}{=}{\Pi _{1}}\big(\lambda (\omega )s\big).\]
Throughout this work, we assume that ${({\xi _{n}})_{n\ge 0}}$ is a sequence of independent and identically distributed (i.i.d.) random variables with zero mean and finite variance $\mathbb{E}{\xi _{0}^{2}}={\sigma ^{2}}\lt \infty $ and that $\lambda (\omega )$ is a positive random variable. As shown in [17, 18], the mean and covariance function of the PSI-process are given by
(9)
\[ \mathbb{E}\hspace{0.1667em}\psi (s)=\mathbb{E}\hspace{0.1667em}{\xi _{0}}=0,\hspace{1em}\mathrm{Cov}(\psi (s),\psi (t))=\mathbb{E}\left[{\xi _{0}^{2}}\right]{L_{\lambda }}(|t-s|),\hspace{1em}s,t\in {\mathbb{R}_{+}},\]
where ${L_{\lambda }}(\cdot )$ denotes the Laplace transform of the random variable $\lambda (\omega )$.
For simplicity, we denote $\lambda (\omega )$ by λ.
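Before turning to the family-wise scaling construction, a short Monte Carlo sketch (ours) checks the covariance identity (9), using the fact that $\psi (s)=\psi (t)$ exactly when the Poisson process has no jump in $(s,t]$; the choices ${\xi _{n}}$ i.i.d. $N(0,{\sigma ^{2}})$ and $\lambda \sim \Gamma (\kappa ,1)$ are illustrative assumptions.

```python
# Monte Carlo sketch (ours) of the covariance identity (9); illustrative
# assumptions: xi_n iid N(0, sigma^2) and lambda ~ Gamma(kappa, 1), whose
# Laplace transform is L_lambda(u) = (1 + u)^(-kappa).
import numpy as np

rng = np.random.default_rng(0)
kappa, sigma = 0.3, 1.0
s, t, n_rep = 1.0, 2.5, 500_000

lam = rng.gamma(shape=kappa, scale=1.0, size=n_rep)
# psi(t) = psi(s) exactly when Pi_lambda has no jump in (s, t]
no_jump = rng.poisson(lam * (t - s)) == 0
xi_s = rng.normal(0.0, sigma, size=n_rep)
xi_t = np.where(no_jump, xi_s, rng.normal(0.0, sigma, size=n_rep))

print("MC covariance:      ", np.mean(xi_s * xi_t))
print("sigma^2 L(|t - s|): ", sigma**2 * (1 + (t - s)) ** (-kappa))
```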

2 A family-wise scaling stochastic model driven by a pseudo-Poisson type process

In this paper, we introduce a new class of stochastic models that extends the classical notion of self-similarity, referred to as family-wise scaling processes. Up to now, there has been no rigorous or unified definition for processes whose scaling property involves not only time rescaling but also a simultaneous rescaling of an additional parameter. The existing literature mainly provides precise definitions for self-similar processes (characterized by a single Hurst exponent). However, none of these frameworks captures the family-wise relation that links the dynamics of the process with the simultaneous transformation of its family parameter. Therefore, the definition we introduce here provides a formulation of such family-wise scaling processes.
Definition 7.
Let ${\left\{{X_{\theta }}(t)\right\}_{t\ge 0,\theta \gt 0}}$ be a family of stochastic processes indexed by a scale parameter θ. We say that ${\left\{{X_{\theta }}(t)\right\}_{t\ge 0,\theta \gt 0}}$ is a family-wise scaling process with exponent H if, for every scaling factor $a\gt 0$, the following scaling relation holds:
(10)
\[ {X_{\theta }}(at)\stackrel{d}{=}{a^{H}}{X_{a\theta }}(t).\]
Instead of the classical self-similarity condition, here the scaling also acts on the parameter θ.
In this section, we introduce a stochastic model that combines two important features: family-wise scaling and randomness in the time index through a pseudo-Poisson type process. This model, introduced by Azzo and Rusakov in [1], is defined by
(11)
\[ {\Psi _{\theta }}(t)={\int _{0}^{t}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\hspace{0.1667em}\mathrm{d}s,\hspace{1em}t\in [0,T],\]
where $\theta \in (0,\infty )$ and $\kappa \in (0,1)$.
The process ${\Psi _{\theta }}(t)$ can be interpreted as a randomly time-changed integral of a scaled discrete-time noise sequence ${\{{\xi _{n}}\}_{n\ge 0}}$, where the time distortion is governed by a Poisson process ${\Pi _{\theta \lambda }}(s)$ with random intensity λ, scaled by the parameter θ. The parameter $\theta \gt 0$ controls the degree of aggregation or dispersion in the time deformation, while $\kappa \in (0,1)$ determines the scaling behavior of the integrand. Moreover, since the Poisson process and the noise sequence are independent, the process ${\Psi _{\theta }}(t)$ is well defined and integrable under suitable assumptions on ${\xi _{0}}$ and λ.
This model can be applied to the analysis of stochastic systems in which random events generate workload or transmit information over time. Examples include telecommunications networks, financial systems, and queueing models. The process reflects the total load, that is, the accumulated effect of the events occurring within a specified period, offering important insights into system efficiency, resource allocation, and risk evaluation.
Moreover, the stochastic process defined in (11), indexed by parameter θ, exhibits a family-wise scaling property with exponent parameter $H\in (\frac{1}{2},1)$, as opposed to the classical H-self-similarity. Specifically, a linear scaling of time by a factor a corresponds, in the distributional sense, to a deterministic rescaling of another process from the same family, indexed by $a\theta $, i.e., for any $a\gt 0$,
(12)
\[ {\Psi _{\theta }}(at)\stackrel{d}{=}{a^{H}}{\Psi _{a\theta }}(t),\hspace{1em}t\in {\mathbb{R}_{+}}.\]
This property can be verified through a straightforward calculation:
\[\begin{aligned}{}{\Psi _{\theta }}(at)& ={\int _{0}^{at}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\hspace{0.1667em}\mathrm{d}s\stackrel{d}{=}{\int _{0}^{at}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{1}}(\theta \lambda s)}}\hspace{0.1667em}\mathrm{d}s=a{\int _{0}^{t}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{1}}(\theta \lambda au)}}\hspace{0.1667em}\mathrm{d}u\\ {} & ={a^{1-\kappa /2}}{\int _{0}^{t}}{(a\theta )^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{1}}(a\theta \lambda u)}}\hspace{0.1667em}\mathrm{d}u={a^{H}}{\Psi _{a\theta }}(t),\end{aligned}\]
where the scaling exponent $H=1-\frac{\kappa }{2}\in \left(\frac{1}{2},1\right)$.
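A Monte Carlo sketch (ours) of the scaling relation (12): it simulates ${\Psi _{\theta }}(at)$ and ${\Psi _{a\theta }}(t)$ by integrating the piecewise-constant integrand exactly along each path and compares the sample variance of the former with ${a^{2H}}$ times that of the latter; ${\xi _{n}}$ i.i.d. $N(0,1)$ and $\lambda \sim \Gamma (\kappa ,1)$ are illustrative assumptions.

```python
# Monte Carlo sketch (ours) of the scaling relation (12): compare the sample
# variance of Psi_theta(a*t) with a^(2H) * Var(Psi_{a*theta}(t)).
# Illustrative assumptions: xi_n iid N(0,1), lambda ~ Gamma(kappa, 1).
import numpy as np

rng = np.random.default_rng(1)

def sample_psi(theta, t, kappa):
    """One draw of Psi_theta(t) from (11): exact integral of the piecewise-constant path."""
    lam = rng.gamma(kappa, 1.0)
    n = rng.poisson(theta * lam * t)                  # number of Poisson jumps on [0, t]
    jumps = np.sort(rng.uniform(0.0, t, size=n))      # jump times, uniform given n
    edges = np.concatenate(([0.0], jumps, [t]))       # constancy intervals of the integrand
    xi = rng.normal(size=n + 1)                       # one xi value per interval
    return theta ** (kappa / 2) * np.sum(xi * np.diff(edges))

kappa, theta, t, a = 0.4, 2.0, 1.0, 3.0
H = 1 - kappa / 2
lhs = np.var([sample_psi(theta, a * t, kappa) for _ in range(20_000)])
rhs = a ** (2 * H) * np.var([sample_psi(a * theta, t, kappa) for _ in range(20_000)])
print(lhs, rhs)  # should agree up to Monte Carlo error
```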
We are interested in analyzing the covariance structure of the process defined in (11), under the assumption that the random intensity λ of the Poisson process follows a gamma distribution with unit scale and shape parameter $\kappa \in (0,1)$.
Denote the covariance function of ${\Psi _{\theta }}(t)$ by ${K_{{\Psi _{\theta }}}}(s,t):=\mathrm{Cov}\left({\Psi _{\theta }}(s),{\Psi _{\theta }}(t)\right)$.
Proposition 1.
Let $\mathbb{E}\hspace{0.1667em}{\xi _{0}}=0,\hspace{0.1667em}\mathbb{E}\hspace{0.1667em}{\xi _{0}^{2}}={\sigma ^{2}}\lt \infty ,\hspace{0.1667em}\theta \gt 0$ and suppose that $\lambda \sim \Gamma (\kappa ,1)$ with $\kappa \in (0,1)$. Then the process ${\Psi _{\theta }}(t)$, defined by (11), has the following covariance function for all $s,t\in {\mathbb{R}_{+}}$:
(13)
\[\begin{aligned}{}{K_{{\Psi _{\theta }}}}(s,t)& =\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(t+m\right)^{2H}}+{\left(s+m\right)^{2H}}-{\left(|t-s|+m\right)^{2H}}-{m^{2H}}\right.\\ {} & \hspace{1em}\left.-4H\min \{s,t\}{m^{2H-1}}\right),\end{aligned}\]
where $m=\frac{1}{\theta }$ and $H=1-\frac{\kappa }{2}\in \left(\frac{1}{2},1\right)$.
Proof.
It follows from (9) that the process has zero expected value and the following covariance function for all $s,t\in {\mathbb{R}_{+}}$:
(14)
\[ {K_{{\Psi _{\theta }}}}(s,t)=\mathrm{Cov}\left({\Psi _{\theta }}(s),{\Psi _{\theta }}(t)\right)={\theta ^{\kappa }}\hspace{3.33333pt}\mathbb{E}\left[{\xi _{0}^{2}}\right]\hspace{3.33333pt}{\underset{0}{\overset{t}{\int }}}{\underset{0}{\overset{s}{\int }}}{L_{\theta \lambda }}(|x-y|)\mathrm{d}y\mathrm{d}x.\]
Since $\lambda \sim \Gamma (\kappa ,1)$ with $\kappa \in (0,1)$, it follows that $\theta \lambda \sim \Gamma (\kappa ,\theta )$, where θ denotes the scale parameter. The Laplace transform of the distribution $\Gamma (\kappa ,\theta )$ is well known and is given by
\[ {L_{\theta \lambda }}(s)={(1+\theta s)^{-\kappa }}.\]
For $\theta \gt 0$ and $\kappa \in (0,1)$, note that ${(1+\theta s)^{-\kappa }}$ is continuous on $[0,\infty )$. Hence, ${(1+\theta |x-y|)^{-\kappa }}$ is measurable and nonnegative on $[0,t]\times [0,s]$, so we can apply Tonelli’s theorem. Moreover, since
\[ 0\le {(1+\theta |x-y|)^{-\kappa }}\le 1,\]
it follows that ${L_{\theta \lambda }}(|x-y|)$ is integrable on $[0,t]\times [0,s]$. Therefore, Fubini’s theorem also applies.
Thus, we obtain
(15)
\[\begin{aligned}{}{\underset{0}{\overset{t}{\int }}}{\underset{0}{\overset{s}{\int }}}{L_{\theta \lambda }}(|x-y|)\mathrm{d}y\mathrm{d}x& ={\underset{0}{\overset{t}{\int }}}{\underset{0}{\overset{s}{\int }}}\frac{\mathrm{d}y\hspace{3.33333pt}\mathrm{d}x}{{(1+\theta |x-y|)^{\kappa }}}=\frac{1}{{\theta ^{\kappa }}}{\underset{0}{\overset{t}{\int }}}{\underset{0}{\overset{s}{\int }}}\left(\frac{\mathrm{d}y\hspace{3.33333pt}\mathrm{d}x}{{(\frac{1}{\theta }+|x-y|)^{\kappa }}}\right)\\ {} & =\frac{1}{{\theta ^{\kappa }}}\left({\underset{0}{\overset{s}{\int }}}{\underset{y}{\overset{t}{\int }}}\frac{\mathrm{d}x\hspace{3.33333pt}\mathrm{d}y}{{(x-y+\frac{1}{\theta })^{\kappa }}}+{\underset{0}{\overset{s}{\int }}}{\underset{x}{\overset{s}{\int }}}\frac{\mathrm{d}y\hspace{3.33333pt}\mathrm{d}x}{{(y-x+\frac{1}{\theta })^{\kappa }}}\right).\end{aligned}\]
Denote $m=\frac{1}{\theta }$. Calculate the first integral for $0\le s\lt t$:
(16)
\[\begin{aligned}{}{\underset{0}{\overset{s}{\int }}}{\underset{y}{\overset{t}{\int }}}\frac{\mathrm{d}x\hspace{3.33333pt}\mathrm{d}y}{{(x-y+m)^{\kappa }}}& ={\underset{0}{\overset{s}{\int }}}{\underset{y}{\overset{t}{\int }}}{(x-y+m)^{-\kappa }}\mathrm{d}x\hspace{3.33333pt}\mathrm{d}y=\frac{1}{1-\kappa }{\underset{0}{\overset{s}{\int }}}{(x-y+m)^{1-\kappa }}{\Big|_{y}^{t}}\mathrm{d}y\\ {} & =\frac{1}{1-\kappa }{\underset{0}{\overset{s}{\int }}}\left({(t-y+m)^{1-\kappa }}-{m^{1-\kappa }}\right)\mathrm{d}y\\ {} & =\frac{1}{1-\kappa }\left(\frac{1}{\kappa -2}{(t-y+m)^{2-\kappa }}{\Big|_{0}^{s}}-y\hspace{3.33333pt}{m^{1-\kappa }}{\Big|_{0}^{s}}\right)\\ {} & =\frac{1}{1-\kappa }\left(\frac{1}{\kappa -2}{(t-s+m)^{2-\kappa }}-\frac{1}{\kappa -2}{(t+m)^{2-\kappa }}-s\hspace{3.33333pt}{m^{1-\kappa }}\right)\\ {} & =\frac{1}{(1-\kappa )(2-\kappa )}\left({(t+m)^{2-\kappa }}\hspace{-0.1667em}-\hspace{-0.1667em}{(t\hspace{-0.1667em}-\hspace{-0.1667em}s+m)^{2-\kappa }}\hspace{-0.1667em}-\hspace{-0.1667em}(2-\kappa )\hspace{3.33333pt}s\hspace{3.33333pt}{m^{1-\kappa }}\right).\end{aligned}\]
In the same way, we calculate the second integral for $0\le s\lt t$ and obtain
(17)
\[\begin{aligned}{}{\underset{0}{\overset{s}{\int }}}{\underset{x}{\overset{s}{\int }}}\frac{\mathrm{d}y\hspace{3.33333pt}\mathrm{d}x}{{(y-x+m)^{\kappa }}}& ={\underset{0}{\overset{s}{\int }}}{\underset{x}{\overset{s}{\int }}}{(y-x+m)^{-\kappa }}\mathrm{d}y\hspace{3.33333pt}\mathrm{d}x=\frac{1}{1-\kappa }{\underset{0}{\overset{s}{\int }}}{(y-x+m)^{1-\kappa }}{\Big|_{x}^{s}}\mathrm{d}x\\ {} & =\frac{1}{1-\kappa }{\underset{0}{\overset{s}{\int }}}\left({(s-x+m)^{1-\kappa }}-{m^{1-\kappa }}\right)\mathrm{d}x\\ {} & =\frac{1}{1-\kappa }\left(\frac{1}{\kappa -2}{(s-x+m)^{2-\kappa }}{\Big|_{0}^{s}}-x\hspace{3.33333pt}{m^{1-\kappa }}{\Big|_{0}^{s}}\right)\\ {} & =\frac{1}{1-\kappa }\left(\frac{1}{\kappa -2}{m^{2-\kappa }}-\frac{1}{\kappa -2}{(s+m)^{2-\kappa }}-s\hspace{3.33333pt}{m^{1-\kappa }}\right)\\ {} & =\frac{1}{(1-\kappa )(2-\kappa )}\left({(s+m)^{2-\kappa }}-{m^{2-\kappa }}-(2-\kappa )\hspace{3.33333pt}s\hspace{3.33333pt}{m^{1-\kappa }}\right).\end{aligned}\]
Substituting (16) and (17) into (15), we obtain, for $0\le s\lt t$:
(18)
\[\begin{aligned}{}{\underset{0}{\overset{t}{\int }}}{\underset{0}{\overset{s}{\int }}}{L_{\theta \lambda }}(|x-y|)\mathrm{d}y\mathrm{d}x& =\frac{1}{{\theta ^{\kappa }}(1-\kappa )(2-\kappa )}\left({\left(t+m\right)^{2-\kappa }}+{\left(s+m\right)^{2-\kappa }}\right.\\ {} & \hspace{1em}-{\left(t-s+m\right)^{2-\kappa }}-\left.{\left(m\right)^{2-\kappa }}-2(2-\kappa )\hspace{3.33333pt}s\hspace{3.33333pt}{\left(m\right)^{1-\kappa }}\right).\end{aligned}\]
In the final step, we substitute (18) into (14) and assume that $\mathbb{E}\hspace{0.1667em}{\xi _{0}^{2}}={\sigma ^{2}}$, thus we obtain, for all $s,t\in {\mathbb{R}_{+}}$,
\[\begin{aligned}{}{K_{{\Psi _{\theta }}}}(s,t)& =\frac{{\sigma ^{2}}}{(1-\kappa )(2-\kappa )}\left({\left(t+m\right)^{2-\kappa }}+{\left(s+m\right)^{2-\kappa }}-{\left(|t-s|+m\right)^{2-\kappa }}\right.\\ {} & \hspace{1em}-\left.{m^{2-\kappa }}-2(2-\kappa )\hspace{3.33333pt}\min \{s,t\}{m^{1-\kappa }}\right).\end{aligned}\]
We have $H=1-\frac{\kappa }{2}$, so $\kappa =2-2H$, and then obtain
\[\begin{aligned}{}{K_{{\Psi _{\theta }}}}(s,t)& =\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(t+m\right)^{2H}}+{\left(s+m\right)^{2H}}-{\left(|t-s|+m\right)^{2H}}-{m^{2H}}\right.\\ {} & \hspace{1em}-\left.4H\min \{s,t\}{m^{2H-1}}\right).\end{aligned}\]
 □
Observe that for $H\in \left(\frac{1}{2},1\right)$, the constant factor $\frac{{\sigma ^{2}}}{2H(2H-1)}$ is strictly positive. Consequently, we refer to the process ${\Psi _{\theta }}(t),\hspace{3.33333pt}t\in {\mathbb{R}_{+}}$, defined in (11) and satisfying the assumptions of Proposition 1, as the fractional PSI-process with exponent parameter $H=1-\frac{\kappa }{2}\in \left(\frac{1}{2},1\right)$.
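The closed form (13) can also be cross-checked deterministically; the sketch below (ours) evaluates the double integral in (14) by a midpoint rule and compares it with (13), for illustrative parameter values.

```python
# Deterministic sketch (ours): the closed form (13) versus midpoint-rule
# quadrature of the double integral in (14); no randomness involved.
import numpy as np

sigma, theta, kappa = 1.0, 2.0, 0.4
H, m = 1 - kappa / 2, 1 / theta

def K_closed(s, t):
    """Covariance formula (13)."""
    return sigma**2 / (2 * H * (2 * H - 1)) * (
        (t + m) ** (2 * H) + (s + m) ** (2 * H)
        - (abs(t - s) + m) ** (2 * H) - m ** (2 * H)
        - 4 * H * min(s, t) * m ** (2 * H - 1))

def K_quad(s, t, n=2000):
    """Covariance via (14): theta^kappa * sigma^2 * double integral of the Laplace transform."""
    x = (np.arange(n) + 0.5) * (t / n)               # midpoints in [0, t]
    y = (np.arange(n) + 0.5) * (s / n)               # midpoints in [0, s]
    vals = (1 + theta * np.abs(x[:, None] - y[None, :])) ** (-kappa)
    return theta**kappa * sigma**2 * vals.sum() * (t / n) * (s / n)

print(K_closed(1.0, 2.0), K_quad(1.0, 2.0))  # should agree to quadrature accuracy
```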
Lemma 1.
As $\theta \to \infty $, the asymptotic covariance of the fractional PSI-process is proportional to the covariance of the fractional Brownian motion ${\left\{{B_{H}}(t)\right\}_{t\in {\mathbb{R}_{+}}}}$ with
(19)
\[ \mathbb{E}\left[{B_{H}}{(1)^{2}}\right]=\frac{{\sigma ^{2}}}{H(2H-1)}.\]
Proof.
It is clear from (13) that, when $\theta \to \infty $, it follows that $m\to 0$, and we have
\[\begin{aligned}{}& \frac{{\sigma ^{2}}}{2H(2H\hspace{-0.1667em}-\hspace{-0.1667em}1)}\left({\left(t+m\right)^{2H}}+{\left(s+m\right)^{2H}}\hspace{-0.1667em}-{\left(|t-s|+m\right)^{2H}}\hspace{-0.1667em}-{m^{2H}}\hspace{-0.1667em}-4H\min \{s,t\}{m^{2H-1}}\right)\\ {} & \to \frac{{\sigma ^{2}}}{2H(2H-1)}\left({t^{2H}}+{s^{2H}}-{\left(|t-s|\right)^{2H}}\right)=\frac{\mathbb{E}\left[{B_{H}}{(1)^{2}}\right]}{2}\left({t^{2H}}+{s^{2H}}-|t-s{|^{2H}}\right)\\ {} & =\mathbb{E}{B_{H}}(t){B_{H}}(s),\end{aligned}\]
where $\mathbb{E}\left[{B_{H}}{(1)^{2}}\right]=\frac{{\sigma ^{2}}}{H(2H-1)}$.  □
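A quick numeric illustration of Lemma 1 (ours): evaluating (13) at a fixed pair $(s,t)$ for increasing θ shows the covariance approaching the fBm covariance with $\mathbb{E}[{B_{H}}{(1)^{2}}]={\sigma ^{2}}/(H(2H-1))$.

```python
# Numeric sketch (ours) of Lemma 1: the covariance (13) at fixed (s, t)
# approaches the fBm covariance with E[B_H(1)^2] = sigma^2 / (H(2H-1)).
sigma, kappa = 1.0, 0.4
H = 1 - kappa / 2
s, t = 1.0, 2.0

def K(theta):
    m = 1 / theta
    return sigma**2 / (2 * H * (2 * H - 1)) * (
        (t + m) ** (2 * H) + (s + m) ** (2 * H) - (abs(t - s) + m) ** (2 * H)
        - m ** (2 * H) - 4 * H * min(s, t) * m ** (2 * H - 1))

fbm = sigma**2 / (2 * H * (2 * H - 1)) * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))
for theta in [1, 10, 100, 1000]:
    print(theta, K(theta), fbm)  # K(theta) -> fbm as theta grows
```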
Proposition 2.
The fractional PSI-process ${\Psi _{\theta }}(t),\hspace{3.57777pt}t\in {\mathbb{R}_{+}}$, satisfies the family-wise scaling property with stationary increments.
Proof.
To show that a process has stationary increments, it suffices, by Definition 1, to verify that $X(t+h)-X(t)\stackrel{d}{=}X(h)-X(0)$ for all $t\ge 0,h\gt 0$. For the fractional PSI-process, we have
\[ {\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)={\int _{0}^{t+h}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s-{\int _{0}^{t}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s={\int _{t}^{t+h}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s.\]
Note that the PSI-process ${\xi _{{\Pi _{\theta \lambda }}(s)}}$ was constructed based on jumps of the Poisson process. In other words, the random variable ξ is replaced by an independent copy when the leading Poisson process ${\Pi _{\theta \lambda }}$ has a jump on the interval $[t,t+h]$. Let ${N_{h}}$ be the number of Poisson jumps in the interval $[t,t+h]$; then ${N_{h}}:={\Pi _{\theta \lambda }}(t+h)-{\Pi _{\theta \lambda }}(t)\sim \mathrm{Poi}(\theta \lambda h)$.
Suppose ${\Pi _{\theta \lambda }}$ jumps at times ${T_{1}}\lt {T_{2}}\lt \cdots \lt {T_{{N_{h}}}}$ within $[t,t+h]$. For convenience, we set ${T_{0}}=t$ and ${T_{{N_{h}}+1}}=t+h$. Thus, for $s\in [{T_{k-1}},{T_{k}})$, we have ${\xi _{{\Pi _{\theta \lambda }}(s)}}={\xi _{k}}$, $k=1,\dots ,{N_{h}}+1$.
Since the integrand is constant on each interval between jumps, the increment of the fractional PSI-process can be written as the sum of the contributions from these intervals:
\[ {\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)={\int _{t}^{t+h}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s={\sum \limits_{k=1}^{{N_{h}}+1}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{k}}\hspace{0.1667em}{\Delta _{k}},\]
where ${\Delta _{k}}={T_{k}}-{T_{k-1}}$.
It follows that, for all $u\in \mathbb{R}$, the characteristic function of the increments of the fractional PSI-process is
\[\begin{aligned}{}{\varphi _{{\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)}}(u)& =\mathbb{E}\exp \left\{iu{\int _{t}^{t+h}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s\right\}\\ {} & ={\mathbb{E}_{\lambda }}\left[\mathbb{E}\exp \left\{iu{\int _{t}^{t+h}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{{\Pi _{\theta \lambda }}(s)}}\mathrm{d}s\right\}\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[\mathbb{E}\exp \left\{iu{\sum \limits_{k=1}^{{N_{h}}+1}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{k}}\hspace{0.1667em}{\Delta _{k}}\right\}\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[{\sum \limits_{n=0}^{\infty }}\mathbb{E}\left[\exp \left\{iu{\sum \limits_{k=1}^{n+1}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{k}}\hspace{0.1667em}{\Delta _{k}}\right\}{𝟙_{\{{N_{h}}=n\}}}\right]\Big|\lambda \right].\end{aligned}\]
Note that ${\textstyle\sum _{k=1}^{n+1}}{\Delta _{k}}=h$ and the normalized vector $\left(\frac{{\Delta _{1}}}{h},\frac{{\Delta _{2}}}{h},\dots ,\frac{{\Delta _{n+1}}}{h}\right)$ has the Dirichlet $(1,1,\dots ,1)$ law. So the average spacing is $\mathbb{E}\left[{\Delta _{k}}\mid {N_{h}}=n\right]=\frac{h}{n+1}$, and for large n the spacings concentrate around this mean. Thus, treating n and $n+1$ interchangeably for large n, we approximate ${\Delta _{k}}\approx \frac{h}{n}$, $k=1,\dots ,n$. Let ${\varphi _{\xi }}$ be the characteristic function of ξ; since all random objects are independent, we obtain
(20)
\[\begin{aligned}{}{\varphi _{{\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)}}(u)& ={\mathbb{E}_{\lambda }}\left[{\sum \limits_{n=0}^{\infty }}\mathbb{E}\left[\exp \left\{iu{\sum \limits_{k=1}^{n}}{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{k}}\hspace{0.1667em}\frac{h}{n}\right\}{𝟙_{\{{N_{h}}=n\}}}\right]\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[{\sum \limits_{n=0}^{\infty }}\mathbb{E}\left[{\prod \limits_{k=1}^{n}}\exp \left\{iu{\theta ^{\kappa /2}}\hspace{0.1667em}{\xi _{k}}\hspace{0.1667em}\frac{h}{n}\right\}{𝟙_{\{{N_{h}}=n\}}}\right]\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[{\sum \limits_{n=0}^{\infty }}{\left({\varphi _{\xi }}\left(u{\theta ^{\kappa /2}}\frac{h}{n}\right)\right)^{n}}\mathbb{P}\{{N_{h}}=n\}\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[{\sum \limits_{n=0}^{\infty }}{\left({\varphi _{\xi }}\left(u{\theta ^{\kappa /2}}\frac{h}{n}\right)\right)^{n}}{e^{-\theta \lambda h}}\frac{{(\theta \lambda h)^{n}}}{n!}\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[\exp \left\{\theta \lambda h\hspace{0.1667em}{\varphi _{\xi }}\left(uh{\theta ^{\kappa /2}}\right)-\theta \lambda h\right\}\Big|\lambda \right]\\ {} & ={\mathbb{E}_{\lambda }}\left[\exp \left\{-\lambda \left(\theta h-\theta h\hspace{0.1667em}{\varphi _{\xi }}\left(uh{\theta ^{\kappa /2}}\right)\right)\right\}\Big|\lambda \right]\\ {} & ={\left(1+\theta h-\theta h\hspace{0.1667em}{\varphi _{\xi }}\left(uh{\theta ^{\kappa /2}}\right)\right)^{-\kappa }}.\end{aligned}\]
Note that the distribution of the increment ${\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)$ is determined by the jumps of the process ${\Pi _{\theta \lambda }}$ that occur within the interval $[t,t+h]$. In the same way, the distribution of the increment ${\Psi _{\theta }}(h)-{\Psi _{\theta }}(0)={\Psi _{\theta }}(h)$, where ${\Psi _{\theta }}(0)=0$, is determined by the jumps that occur within the interval $[0,h]$. Following the same steps as before, we get
\[ {\varphi _{{\Psi _{\theta }}(h)}}(u)={\left(1+\theta h-\theta h\hspace{0.1667em}{\varphi _{\xi }}\left(uh{\theta ^{\kappa /2}}\right)\right)^{-\kappa }},\]
which coincides with the characteristic function (20) of ${\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)$. Hence the fractional PSI-process has stationary increments, since ${\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)\stackrel{d}{=}{\Psi _{\theta }}(h)-{\Psi _{\theta }}(0)$.  □
Remark 1.
Note from equation (20) that the stationary increments property of the fractional PSI-process does not depend on the distribution of the Poisson process’s random intensity λ. Therefore, the stochastic model defined by (11) is a family-wise scaling process with stationary increments.
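A Monte Carlo sketch (ours) supporting Proposition 2: it simulates ${\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)$ and ${\Psi _{\theta }}(h)$ along independent paths and compares their even moments, which should agree up to sampling error; ${\xi _{n}}$ i.i.d. $N(0,1)$ and $\lambda \sim \Gamma (\kappa ,1)$ are illustrative assumptions (the proposition itself does not require them).

```python
# Monte Carlo sketch (ours) of Proposition 2: Psi_theta(t+h) - Psi_theta(t)
# and Psi_theta(h) should share the same distribution; we compare even moments.
# Illustrative assumptions: xi_n iid N(0,1), lambda ~ Gamma(kappa, 1).
import numpy as np

rng = np.random.default_rng(2)
theta, kappa, t, h = 2.0, 0.4, 1.5, 1.0

def increment(t0, t1):
    """One draw of Psi_theta(t1) - Psi_theta(t0) along a single path."""
    lam = rng.gamma(kappa, 1.0)
    n = rng.poisson(theta * lam * t1)
    jumps = np.sort(rng.uniform(0.0, t1, size=n))      # jump times of Pi on [0, t1]
    edges = np.concatenate(([0.0], jumps, [t1]))       # endpoints of constancy intervals
    xi = rng.normal(size=n + 1)                        # one xi value per interval
    lo = np.clip(edges[:-1], t0, t1)                   # overlap of each interval
    hi = np.clip(edges[1:], t0, t1)                    # with [t0, t1]
    return theta ** (kappa / 2) * np.sum(xi * (hi - lo))

inc = np.array([increment(t, t + h) for _ in range(50_000)])
base = np.array([increment(0.0, h) for _ in range(50_000)])
for p in (2, 4):
    print(p, np.mean(inc**p), np.mean(base**p))  # should agree up to sampling error
```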
The following corollary follows directly from the stationary increments property of the fractional PSI-process and highlights that the increment process is weakly stationary, with constant variance and an autocovariance function depending solely on the time lag.
Corollary 1.
Let ${\Psi _{\theta }}(t),\hspace{3.57777pt}t\in {\mathbb{R}_{+}}$, be the fractional PSI-process. For any fixed $h\gt 0$, the increment process ${\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)$ is weakly stationary. Its autocovariance depends only on the lag $|t-s|$ and the fixed increment length h.
Proof.
The variance of the fractional PSI-process follows from (13) and is given by
(21)
\[ \mathrm{Var}\big({\Psi _{\theta }}(t)\big)=\frac{{\sigma ^{2}}}{H(2H-1)}\left({\left(t+m\right)^{2H}}-{m^{2H}}-2Ht{m^{2H-1}}\right).\]
Moreover, from the stationary increments of the fractional PSI-process, we have, for any fixed $h\gt 0$ and $t\ge 0$,
\[\begin{aligned}{}\mathrm{Var}\left({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t)\right)& =\mathrm{Var}\big({\Psi _{\theta }}(t+h)\big)+\hspace{-0.1667em}\mathrm{Var}\big({\Psi _{\theta }}(t)\big)\hspace{-0.1667em}-2\hspace{0.1667em}\mathrm{Cov}\big({\Psi _{\theta }}(t),{\Psi _{\theta }}(t+h)\big)\\ {} & =\frac{{\sigma ^{2}}}{H(2H-1)}\left({\left(h+m\right)^{2H}}-{m^{2H}}-2Hh{m^{2H-1}}\right)\\ {} & =\mathrm{Var}\big({\Psi _{\theta }}(h)\big).\end{aligned}\]
Therefore, for any $u,v\ge 0$ we may write
\[ \mathrm{Var}\big({\Psi _{\theta }}(u)-{\Psi _{\theta }}(v)\big)=g(|u-v|)\]
for some function g.
Denote $U={\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t),V={\Psi _{\theta }}(s+h)-{\Psi _{\theta }}(s)$ and ${K_{uv}}=\mathbb{E}[{\Psi _{\theta }}(u){\Psi _{\theta }}(v)]$. Since the fractional PSI-process is centered, we obtain
\[ \mathrm{Cov}(U,V)=\hspace{0.2778em}{K_{t+h,s+h}}-{K_{t+h,s}}-{K_{t,s+h}}+{K_{t,s}},\]
and use the polarization identity
\[ \mathrm{Var}({\Psi _{\theta }}(u)-{\Psi _{\theta }}(v))={K_{uu}}+{K_{vv}}-2{K_{uv}},\]
so that
\[ {K_{uv}}=\frac{1}{2}\big({K_{uu}}+{K_{vv}}-\mathrm{Var}({\Psi _{\theta }}(u)-{\Psi _{\theta }}(v))\big).\]
Substituting this expression for each ${K_{\cdot \cdot }}$ into the autocovariance above, we obtain
\[\begin{aligned}{}2\mathrm{Cov}(U,V)& =\big({K_{t+h,t+h}}+{K_{s+h,s+h}}-\mathrm{Var}({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(s+h))\big)\\ {} & \hspace{1em}-\big({K_{t+h,t+h}}+{K_{s,s}}-\mathrm{Var}({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(s))\big)\\ {} & \hspace{1em}-\big({K_{t,t}}+{K_{s+h,s+h}}-\mathrm{Var}({\Psi _{\theta }}(t)-{\Psi _{\theta }}(s+h))\big)\\ {} & \hspace{1em}+\big({K_{t,t}}+{K_{s,s}}-\mathrm{Var}({\Psi _{\theta }}(t)-{\Psi _{\theta }}(s))\big)\\ {} & =-\mathrm{Var}({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(s+h))+\mathrm{Var}({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(s))\\ {} & \hspace{2em}+\mathrm{Var}({\Psi _{\theta }}(t)-{\Psi _{\theta }}(s+h))-\mathrm{Var}({\Psi _{\theta }}(t)-{\Psi _{\theta }}(s)).\end{aligned}\]
Thus, we get
\[ \mathrm{Cov}\big({\Psi _{\theta }}(t+h)-{\Psi _{\theta }}(t),\hspace{0.1667em}{\Psi _{\theta }}(s+h)-{\Psi _{\theta }}(s)\big)=\frac{1}{2}\Big(g(|t-s+h|)+g(|t-s-h|)-2g(|t-s|)\Big).\]
Since g depends only on the pairwise time differences, the right-hand side depends solely on $|t-s|$ (and the fixed increment length h). Moreover, because the increments have zero mean, the increment process is weakly stationary with autocovariance depending only on the lag.  □
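The final identity in the proof can also be verified in closed form; the sketch below (ours) compares the direct evaluation of $\mathrm{Cov}(U,V)$ via the covariance (13) with the right-hand side built from the increment variance g of (21), and the two coincide up to floating-point error.

```python
# Closed-form sketch (ours): the increment-covariance identity above, with g
# taken from (21), versus direct evaluation through the covariance (13).
sigma, theta, kappa = 1.0, 2.0, 0.4
H, m = 1 - kappa / 2, 1 / theta

def K(s, t):
    """Covariance (13)."""
    return sigma**2 / (2 * H * (2 * H - 1)) * (
        (t + m) ** (2 * H) + (s + m) ** (2 * H) - (abs(t - s) + m) ** (2 * H)
        - m ** (2 * H) - 4 * H * min(s, t) * m ** (2 * H - 1))

def g(u):
    """Increment variance (21) as a function of the increment length."""
    u = abs(u)
    return sigma**2 / (H * (2 * H - 1)) * (
        (u + m) ** (2 * H) - m ** (2 * H) - 2 * H * u * m ** (2 * H - 1))

s, t, h = 0.7, 2.0, 0.5
direct = K(t + h, s + h) - K(t + h, s) - K(t, s + h) + K(t, s)
via_g = 0.5 * (g(t - s + h) + g(t - s - h) - 2 * g(t - s))
print(direct, via_g)  # coincide up to floating-point error
```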

3 Fractional PSI-increment series and their memory properties

The construction of the fractional PSI-process captures essential features of long-range dependent systems by incorporating both a nontrivial time structure and a non-Gaussian or Lévy source of randomness. Since the fractional PSI-process ${\Psi _{\theta }}(t),t\in {\mathbb{R}_{+}}$, has stationary increments, it is well suited to the analysis of LRD phenomena on ${\mathbb{Z}_{+}}$. In practice, many empirical time series (e.g., in network traffic, finance, and climate data) are naturally indexed by nonnegative time. For instance, fractional ARIMA processes, which are classical examples of LRD models, are typically defined on ${\mathbb{Z}_{+}}$. Accordingly, we define the stationary increment series as
(22)
\[ {X_{n}}={\Psi _{\theta }}(n+1)-{\Psi _{\theta }}(n),\hspace{1em}n\in {\mathbb{Z}_{+}}.\]
The resulting stationary time series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ has key properties, as detailed below.
Proposition 3.
The stationary time series $X={\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$, defined in (22), satisfies the following properties.
  • $\mathbb{E}\hspace{0.1667em}{X_{n}}=0$.
  • $\mathbb{E}\hspace{0.1667em}{X_{n}^{2}}=\mathbb{E}[{\Psi _{\theta }}{(1)^{2}}]$.
  • The autocovariance function for $h\in {\mathbb{Z}_{+}}$ is given by
(23)
\[\begin{aligned}{}{\gamma _{X}}(h)& =\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(h+1+m\right)^{2H}}+{\left(h-1+m\right)^{2H}}-2{\left(h+m\right)^{2H}}\right)\end{aligned}\]
(24)
\[\begin{aligned}{}& \sim {\sigma ^{2}}{\left(h+m\right)^{2H-2}},\hspace{1em}\textit{as}\hspace{2.5pt}h\to \infty .\end{aligned}\]
Proof.
Since the fractional PSI-process is centered, the mean of the series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ is zero.
The expression for the variance of the series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ follows immediately from the fact that the increment process is weakly stationary (Corollary 1), thus
\[ \mathbb{E}{X^{2}}(n)=\mathrm{Var}\big({\Psi _{\theta }}(n+1)-{\Psi _{\theta }}(n)\big)=\mathrm{Var}\big({\Psi _{\theta }}(1)\big)=\mathbb{E}\left[{\Psi _{\theta }}{(1)^{2}}\right].\]
By Corollary 1, since the increment process is weakly stationary, the autocovariance function of the increments depends only on the lag time. An explicit formula can be derived from (13) and ${\Psi _{\theta }}(0)=0$:
\[\begin{aligned}{}{\gamma _{X}}(h)& =\mathrm{Cov}(X(n+h),X(n))=\mathrm{Cov}(X(h),X(0))=\mathbb{E}\left[X(h)X(0)\right]\\ {} & =\mathbb{E}\left[\left({\Psi _{\theta }}(h+1)-{\Psi _{\theta }}(h)\right)\left({\Psi _{\theta }}(1)-{\Psi _{\theta }}(0)\right)\right]\\ {} & =\mathbb{E}\left[{\Psi _{\theta }}(h+1){\Psi _{\theta }}(1)\right]-\mathbb{E}\left[{\Psi _{\theta }}(h){\Psi _{\theta }}(1)\right]\\ {} & =\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(h+1+m\right)^{2H}}+{\left(1+m\right)^{2H}}\hspace{-0.1667em}-{\left(h+m\right)^{2H}}\hspace{-0.1667em}-{m^{2H}}\hspace{-0.1667em}-4H{m^{2H-1}}\right)\\ {} & -\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(h+m\right)^{2H}}+{\left(1+m\right)^{2H}}\hspace{-0.1667em}-{\left(h-1+m\right)^{2H}}\hspace{-0.1667em}-{m^{2H}}\hspace{-0.1667em}-4H{m^{2H-1}}\right)\\ {} & =\frac{{\sigma ^{2}}}{2H(2H-1)}\left({\left(h+1+m\right)^{2H}}+{\left(h-1+m\right)^{2H}}-2{\left(h+m\right)^{2H}}\right).\end{aligned}\]
Finally, for the asymptotic autocovariance as $h\to \infty $, we denote $h+m=x$ and define $f(x)={x^{2H}}$. Therefore, we can write
\[\begin{aligned}{}& \frac{{\sigma ^{2}}}{2H(2H-1)}\left({(1+x)^{2H}}+{(x-1)^{2H}}-2{x^{2H}}\right)\\ {} & \hspace{1em}=\frac{{\sigma ^{2}}}{2H(2H-1)}{x^{2H-2}}\cdot \frac{f\left(1+\frac{1}{x}\right)-2f(1)+f\left(1-\frac{1}{x}\right)}{\frac{1}{{x^{2}}}}\\ {} & \hspace{1em}\sim \frac{{\sigma ^{2}}}{2H(2H-1)}{x^{2H-2}}\cdot {f^{\prime\prime }}(1),\hspace{1em}x\to \infty \\ {} & \hspace{1em}={\sigma ^{2}}{\left(h+m\right)^{2H-2}}.\end{aligned}\]
 □
Remark 2.
Note that since $2H-2\in (-1,0)$, dependencies decay slowly. This means that, as $h\to \infty $, the asymptotic autocovariance of the stationary series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ corresponds to the asymptotic autocovariance of the fGn, defined by (7), with shift parameter $m=\frac{1}{\theta }$, such that ${\sigma ^{2}}=\mathbb{E}\left[{B_{H}}{(1)^{2}}\right]H(2H-1)$.
Moreover, as $\theta \to \infty $, the autocovariance function of ${\{{X_{n}}\}_{n\in {\mathbb{Z}_{+}}}}$ satisfies
\[ {\gamma _{X}}(h)\to \frac{{\sigma ^{2}}}{2H(2H-1)}\left({(h+1)^{2H}}+{(h-1)^{2H}}-2{h^{2H}}\right).\]
This expression corresponds to the autocovariance function of fGn, defined by (6), with $\mathbb{E}[{B_{H}}{(1)^{2}}]=\frac{{\sigma ^{2}}}{H(2H-1)}$, a result which is consistent with the statement in Lemma 1.
Figure 1 illustrates that when θ is sufficiently large, the effect of $m=\frac{1}{\theta }$ in the autocovariance of the series derived by the fractional PSI-process becomes negligible, and it matches the fGn autocovariance. Consequently, for large θ the studied series behaves very similarly to fGn, with $\mathbb{E}[{B_{H}}{(1)^{2}}]=\frac{{\sigma ^{2}}}{H(2H-1)}$, regarding long-range correlations.
Fig. 1. Autocovariance comparison of the studied series and fGn, for $\sigma =1,\theta =1000$ and $H=0.85$: the autocovariance of the series ${X_{n}}$ (blue) and the fGn autocovariance (red, dashed) closely overlap as they decay in the lag h.
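The comparison in Fig. 1 is reproducible in closed form; the sketch below (ours) evaluates ${\gamma _{X}}(h)$ from (23) and the fGn autocovariance (6) with $\mathbb{E}[{B_{H}}{(1)^{2}}]={\sigma ^{2}}/(H(2H-1))$ at the parameters of the figure.

```python
# Sketch (ours) reproducing the Fig. 1 comparison in closed form: gamma_X(h)
# from (23) versus the fGn autocovariance (6) with E[B_H(1)^2] = sigma^2/(H(2H-1)).
import numpy as np

sigma, theta, H = 1.0, 1000.0, 0.85
m = 1 / theta
c = sigma**2 / (2 * H * (2 * H - 1))

h = np.arange(1, 51, dtype=float)
gamma_X = c * ((h + 1 + m) ** (2 * H) + (h - 1 + m) ** (2 * H) - 2 * (h + m) ** (2 * H))
gamma_fgn = c * ((h + 1) ** (2 * H) + (h - 1) ** (2 * H) - 2 * h ** (2 * H))
print(np.max(np.abs(gamma_X - gamma_fgn)))  # negligible for large theta
```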
Remark 3.
Since the time series constructed from the increments of the fractional PSI-process is defined only on ${\mathbb{Z}_{+}}$, its autocovariance function is defined only for $h\ge 0$. Consequently, the symmetry property ${\gamma _{X}}(-h)={\gamma _{X}}(h)$, which typically holds for two-sided stationary series indexed over $\mathbb{Z}$, does not apply in this context. We also observe that its autocovariance function is positive.
Corollary 2.
The series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$, constructed from the increments of the fractional PSI-process ${\Psi _{\theta }}$, exhibits long-range dependence in the sense of condition I, defined by (2), in Definition 4. From (24), we have
\[ {\gamma _{X}}(h)\sim {\sigma ^{2}}{\left(h+m\right)^{2H-2}}={\sigma ^{2}}{\left(1+\frac{m}{h}\right)^{2H-2}}{h^{2H-2}}={L_{1}}(h)\hspace{0.1667em}{h^{2d-1}},\hspace{1em}d\in \left(0,\frac{1}{2}\right),\]
where $d=H-\frac{1}{2}$ and ${L_{1}}(h)={\sigma ^{2}}{\left(1+\frac{m}{h}\right)^{2H-2}}$ is a slowly varying function at infinity.
Furthermore, as shown in Table 1, condition I implies condition IV. Therefore, its autocovariance sum diverges for $H\in \left(\frac{1}{2},1\right)$, i.e., ${\textstyle\sum \limits_{h=0}^{\infty }}{\gamma _{X}}(h)=\infty $.
For the following result, we are interested in verifying that the stationary series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ satisfies condition III of Definition 4. From Proposition 3, we have
\[\begin{aligned}{}\mathrm{Var}\left({\sum \limits_{k=1}^{N}}{X_{k}}\right)& ={\sum \limits_{k=1}^{N}}\mathrm{Var}({X_{k}})+2\sum \limits_{1\le i\lt j\le N}\mathrm{Cov}({X_{i}},{X_{j}})\\ {} & =N{\gamma _{X}}(0)+2{\sum \limits_{h=1}^{N-1}}(N-h){\gamma _{X}}(h)\\ {} & \sim {\sigma ^{2}}N{m^{2H-2}}+2{\sigma ^{2}}{\sum \limits_{h=1}^{N-1}}(N-h){\left(h+m\right)^{2H-2}}\\ {} & ={\sigma ^{2}}\hspace{3.33333pt}{N^{2H-1}}{\left(\frac{m}{N}\right)^{2H-2}}+2{\sigma ^{2}}{N^{2H-1}}{\sum \limits_{h=1}^{N-1}}\left(1-\frac{h}{N}\right){\left(\frac{h}{N}+\frac{m}{N}\right)^{2H-2}}\\ {} & ={N^{2H}}{\sigma ^{2}}{N^{-1}}\left({\left(\frac{m}{N}\right)^{2H-2}}+2{\sum \limits_{h=1}^{N-1}}\left(1-\frac{h}{N}\right){\left(\frac{h}{N}+\frac{m}{N}\right)^{2H-2}}\right)\\ {} & ={N^{2H}}{L_{3}}(N)\end{aligned}\]
where
(25)
\[ {L_{3}}(N)={\sigma ^{2}}{N^{-1}}\left({\left(\frac{m}{N}\right)^{2H-2}}+2{\sum \limits_{h=1}^{N-1}}\left(1-\frac{h}{N}\right){\left(\frac{h}{N}+\frac{m}{N}\right)^{2H-2}}\right),\hspace{1em}H\in (\frac{1}{2},1).\]
Note that ${L_{3}}(N)$, defined in (25), is a slowly varying function at infinity. To analyze this, define
\[ S(N)={\left(\frac{m}{N}\right)^{2H-2}}+2{\sum \limits_{h=1}^{N-1}}\left(1-\frac{h}{N}\right){\left(\frac{h}{N}+\frac{m}{N}\right)^{2H-2}}.\]
Let $x=\frac{h}{N}$, so that $h=Nx$. Then
\[ S(N)\approx {\left(\frac{m}{N}\right)^{2H-2}}+2N{\underset{0}{\overset{1}{\int }}}(1-x){\left(x+\frac{m}{N}\right)^{2H-2}}\mathrm{d}x.\]
Since $2H-2\in (-1,0)$, as $N\to \infty $ we have ${\left(\frac{m}{N}\right)^{2H-2}}={m^{2H-2}}\hspace{0.1667em}{N^{2-2H}}=\mathrm{o}(N)$ and ${\left(x+\frac{m}{N}\right)^{2H-2}}\to {x^{2H-2}}\hspace{1em}\text{for}\hspace{3.33333pt}x\gt 0$. So
\[ S(N)\sim 2N{\underset{0}{\overset{1}{\int }}}(1-x){x^{2H-2}}\mathrm{d}x=2N\frac{\Gamma (2H-1)\Gamma (2)}{\Gamma (2H+1)}=2N\frac{\Gamma (2H-1)}{\Gamma (2H+1)}.\]
Thus
(26)
\[ {L_{3}}(N)\approx 2{\sigma ^{2}}\frac{\Gamma (2H-1)}{\Gamma (2H+1)}=\text{const}\gt 0\]
so ${L_{3}}(N)$ converges to a positive constant as $N\to \infty $. Thus the function ${L_{3}}(N)$, defined by (25), is slowly varying at infinity for $H\in \left(\frac{1}{2},1\right)$.
Then the stationary series ${\{{X_{n}}\}_{n\in {\mathbb{Z}_{+}}}}$ satisfies condition III of Definition 4, i.e.,
(27)
\[ \mathrm{Var}\left({\sum \limits_{k=1}^{N}}{X_{k}}\right)={N^{2d+1}}{L_{3}}(N),\]
where $d=H-\frac{1}{2}$, with $d\in \left(0,\frac{1}{2}\right)$, and ${L_{3}}(N)$ is the slowly varying function defined in (25).
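Condition III can also be checked numerically; the sketch below (ours) computes $\mathrm{Var}({\textstyle\sum _{k\le N}}{X_{k}})$ exactly from the autocovariance (23), with ${\gamma _{X}}(0)$ taken from (21), and compares ${N^{-(2d+1)}}\mathrm{Var}({\textstyle\sum _{k\le N}}{X_{k}})$ with the constant limit (26), for illustrative parameter values.

```python
# Numerical sketch (ours) for condition III: N^{-(2d+1)} Var(sum_{k<=N} X_k),
# computed exactly from (23) (with gamma_X(0) from (21)), versus the limit (26).
import math
import numpy as np

sigma, theta, H = 1.0, 10.0, 0.85
m, d = 1 / theta, H - 0.5
c = sigma**2 / (2 * H * (2 * H - 1))
gamma0 = sigma**2 / (H * (2 * H - 1)) * (
    (1 + m) ** (2 * H) - m ** (2 * H) - 2 * H * m ** (2 * H - 1))  # Var(X_n), from (21)

def gamma_X(h):
    """Autocovariance (23), for lags h >= 1."""
    return c * ((h + 1 + m) ** (2 * H) + (h - 1 + m) ** (2 * H) - 2 * (h + m) ** (2 * H))

limit = 2 * sigma**2 * math.gamma(2 * H - 1) / math.gamma(2 * H + 1)
for N in [100, 1000, 10_000, 100_000]:
    h = np.arange(1.0, N)
    var_sum = N * gamma0 + 2 * np.sum((N - h) * gamma_X(h))
    print(N, var_sum / N ** (2 * d + 1), limit)  # middle column approaches the limit
```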
Now, we consider the following model and then state and prove the functional central limit theorem for the LRD series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$, constructed from the increments of a fractional PSI-process. The convergence is understood in distribution in the Skorokhod space $\mathcal{D}[0,1]$, towards a fractional Brownian motion. The assumptions below follow the classical framework of [21].
Assumption 1 (Gaussian driving sequence and long memory).
Let ${({\eta _{n}})_{n\in \mathbb{Z}}}$ be a stationary Gaussian sequence with mean 0, unit variance, and covariance function
(28)
\[ r(h)=\mathrm{Cov}({\eta _{0}},{\eta _{h}})\sim L(h)\hspace{0.1667em}{h^{2d-1}},\hspace{1em}h\to \infty ,\]
where $d\in (0,1/2)$ and $L(\cdot )$ is a slowly varying function. Then the corresponding Hurst index is $H:=d+\frac{1}{2}\in \Big(\frac{1}{2},1\Big)$.
Assumption 2 (Functional transform and Hermite rank).
Let $G:\mathbb{R}\to \mathbb{R}$ be a square-integrable function with respect to the standard Gaussian measure, and define the stationary sequence
\[ {X_{n}}:=G({\eta _{n}}),\hspace{1em}n\in {\mathbb{Z}_{+}}.\]
The function G admits the Hermite expansion
\[ G(x)-\mathbb{E}[G({\eta _{0}})]={\sum \limits_{q=m}^{\infty }}{c_{q}}{H_{q}}(x),\]
where ${H_{q}}$ denotes the q-th Hermite polynomial, ${c_{m}}\ne 0$, and $m\ge 1$ is the Hermite rank of G. In this paper, we focus on the important case $m=1$ (i.e., ${c_{1}}\ne 0$).
For $N\ge 1$ and $t\in [0,1]$, define the partial-sum process
\[ {S_{N}}(t):={\sum \limits_{k=1}^{\lfloor Nt\rfloor }}\big({X_{k}}-\mathbb{E}[{X_{k}}]\big),\hspace{2em}{\widetilde{S}_{N}}(t):=\frac{{S_{N}}(t)}{\sqrt{\operatorname{Var}\big({\textstyle\textstyle\sum _{k=1}^{N}}{X_{k}}\big)}}.\]
Theorem 1 (Functional CLT for the LRD Series).
Under Assumptions 1 and 2 with Hermite rank $m=1$, let ${\{{X_{n}}\}_{n\in {\mathbb{Z}_{+}}}}$ be an LRD series defined by (22), satisfying Condition III. Then, as $N\to \infty $,
\[ {\widetilde{S}_{N}}(\cdot )\xrightarrow{d}{B_{H}}(\cdot )\hspace{1em}\textit{in}\hspace{2.5pt}\mathcal{D}[0,1],\]
where ${B_{H}}$ is a fractional Brownian motion with Hurst parameter $H=d+\frac{1}{2}$.
Proof.
By Assumption 2 and the classical Hermite expansion, we can express the centered observations as
\[ {X_{k}}-\mathbb{E}[{X_{k}}]={\sum \limits_{q=1}^{\infty }}{c_{q}}{H_{q}}({\eta _{k}}).\]
Hence,
\[ {S_{N}}(t)={\sum \limits_{q=1}^{\infty }}{c_{q}}{S_{N,q}}(t),\hspace{2em}{S_{N,q}}(t):={\sum \limits_{k=1}^{\lfloor Nt\rfloor }}{H_{q}}({\eta _{k}}).\]
Because the Hermite rank is $m=1$ (that is, ${c_{1}}\ne 0$), we will show that, after the normalization determined by the variance of ${S_{N}}(1)$, the term corresponding to $q=1$ dominates, while all higher-order chaos components are asymptotically negligible. For any fixed $q\ge 1$, by the product formula for Hermite polynomials (see [19, Section 1.1]), we have
\[ \mathrm{Cov}\big({H_{q}}({\eta _{0}}),{H_{q}}({\eta _{h}})\big)=q!\hspace{0.1667em}r{(h)^{q}}.\]
Under the regular variation condition $r(h)\sim L(h){h^{2d-1}}$, this gives
\[ \mathrm{Cov}\big({H_{q}}({\eta _{0}}),{H_{q}}({\eta _{h}})\big)\sim q!\hspace{0.1667em}L{(h)^{q}}{h^{q(2d-1)}}.\]
The standard Karamata–Abelian arguments (see [19, Section 1.3]) yield
\[ \mathrm{Var}\big({S_{N,q}}(1)\big)={\sum \limits_{h=-(N-1)}^{N-1}}(N-|h|)\hspace{0.1667em}\mathrm{Cov}\big({H_{q}}({\eta _{0}}),{H_{q}}({\eta _{h}})\big)\sim {C_{q}}L{(N)^{q}}{N^{2-q(1-2d)}},\]
where ${C_{q}}=\frac{2q!}{(1-\alpha )(2-\alpha )}$ and $\alpha =q(1-2d)$. Since the Hermite rank is $m=1$, for $q=1$ we have $\alpha =1-2d$,
(29)
\[ {C_{1}}=\frac{1}{d(1+2d)}.\]
Thus
(30)
\[ \mathrm{Var}\big({S_{N,1}}(1)\big)\sim {C_{1}}\hspace{0.1667em}L(N)\hspace{0.1667em}{N^{1+2d}},\]
and consequently
\[ \sqrt{\mathrm{Var}\big({S_{N,1}}(1)\big)}\sim \sqrt{{C_{1}}\hspace{0.1667em}L(N)}\hspace{0.1667em}{N^{d+\frac{1}{2}}}.\]
Thus, the standard deviation of the $q=1$ sum grows like ${N^{d+\frac{1}{2}}}$. For $q\ge 2$, the exponent $2-q(1-2d)$ is strictly smaller than $2-(1-2d)$, and therefore the higher-order chaos terms grow at a slower rate.
Consequently,
\[ \mathrm{Var}\hspace{-0.1667em}\left({\sum \limits_{k=1}^{N}}{X_{k}}\right)=\sum \limits_{q\ge 1}{c_{q}^{2}}\hspace{0.1667em}\mathrm{Var}\big({S_{N,q}}(1)\big)={c_{1}^{2}}\hspace{0.1667em}\mathrm{Var}\big({S_{N,1}}(1)\big)\big(1+o(1)\big),\hspace{1em}N\to \infty .\]
Note that for each fixed $t\in [0,1]$, $\mathrm{Var}\big({S_{N,1}}(t)\big)\sim {C_{1}}\hspace{0.1667em}L(\lfloor Nt\rfloor )\hspace{0.1667em}{\lfloor Nt\rfloor ^{\hspace{0.1667em}1+2d}}$, so asymptotically,
\[ \sqrt{\mathrm{Var}\big({S_{N,1}}(t)\big)}\sim {t^{d+\frac{1}{2}}}\hspace{0.1667em}\sqrt{\mathrm{Var}\big({S_{N,1}}(1)\big)}.\]
Hence, the normalization used in ${\widetilde{S}_{N}}$ is asymptotically proportional to
\[ |{c_{1}}|\hspace{0.1667em}\sqrt{{C_{1}}\hspace{0.1667em}L(N)}\hspace{0.1667em}{N^{d+\frac{1}{2}}}.\]
Now, we verify the finite-dimensional convergence. We fix finite time points $0\le {t_{1}}\lt \cdots \lt {t_{m}}\le 1$, and then show that
\[ \big({\widetilde{S}_{N}}({t_{1}}),\dots ,{\widetilde{S}_{N}}({t_{m}})\big)\xrightarrow{d}\big({B_{H}}({t_{1}}),\dots ,{B_{H}}({t_{m}})\big),\hspace{1em}N\to \infty .\]
Since the higher-order chaos components are negligible after normalization, it suffices to consider the leading term corresponding to $q=1$. Note that
\[ {c_{1}}{S_{N,1}}(t)={c_{1}}{\sum \limits_{k=1}^{\lfloor Nt\rfloor }}{\eta _{k}}\]
is linear in the Gaussian sequence $\{{\eta _{k}}\}$. Hence, the vector
\[ \big({\widetilde{S}_{N}}({t_{1}}),\dots ,{\widetilde{S}_{N}}({t_{m}})\big)=\frac{{c_{1}}}{\sqrt{\mathrm{Var}\hspace{-0.1667em}\left({\textstyle\sum \limits_{k=1}^{N}}{X_{k}}\right)}}\big({S_{N,1}}({t_{1}}),\dots ,{S_{N,1}}({t_{m}})\big)\]
is Gaussian for each N. Therefore, if the covariances converge, the limit distribution will also be multivariate normal. By the variance asymptotics, defined by (30), for $i\le j$, the covariance of the normalized vector can be written as
\[\begin{aligned}{}\mathrm{Cov}\big({\widetilde{S}_{N}}({t_{i}}),{\widetilde{S}_{N}}({t_{j}})\big)& =\frac{{c_{1}^{2}}}{\mathrm{Var}\left({\textstyle\sum \limits_{k=1}^{N}}{X_{k}}\right)}\mathbb{E}{S_{N,1}}({t_{i}}){S_{N,1}}({t_{j}})\\ {} & \sim \frac{1}{{C_{1}}\hspace{0.1667em}{N^{1+2d}}L(N)}\mathbb{E}{S_{N,1}}({t_{i}}){S_{N,1}}({t_{j}})\end{aligned}\]
where ${C_{1}}$ is defined by (29). Using the polarization identity
\[ \mathbb{E}\big[{S_{N,1}}({t_{i}}){S_{N,1}}({t_{j}})\big]=\frac{1}{2}\Big(\mathbb{E}\big[{S_{N,1}}{({t_{i}})^{2}}\big]+\mathbb{E}\big[{S_{N,1}}{({t_{j}})^{2}}\big]-\mathbb{E}\big[{({S_{N,1}}({t_{j}})-{S_{N,1}}({t_{i}}))^{2}}\big]\Big),\]
and the variance asymptotics
\[ \mathbb{E}\big[{S_{N,1}}{({t_{i}})^{2}}\big]=\mathrm{Var}\big({S_{N,1}}({t_{i}})\big)\sim {C_{1}}\hspace{0.1667em}L(\lfloor N{t_{i}}\rfloor )\hspace{0.1667em}{\lfloor N{t_{i}}\rfloor ^{1+2d}},\]
we get
\[ \frac{\mathbb{E}\big[{S_{N,1}}{({t_{i}})^{2}}\big]}{{C_{1}}\hspace{0.1667em}{N^{1+2d}}L(N)}\sim \frac{{\lfloor N{t_{i}}\rfloor ^{1+2d}}L(\lfloor N{t_{i}}\rfloor )}{{N^{1+2d}}L(N)}\longrightarrow {t_{i}^{1+2d}}.\]
Combining these three relations yields
\[ \mathrm{Cov}\big({\widetilde{S}_{N}}({t_{i}}),{\widetilde{S}_{N}}({t_{j}})\big)\longrightarrow \frac{1}{2}\big({t_{i}^{2H}}+{t_{j}^{2H}}-|{t_{j}}-{t_{i}}{|^{2H}}\big),\hspace{1em}N\to \infty ,\]
which coincides with the covariance function of fractional Brownian motion with Hurst parameter $H=d+\frac{1}{2}$.
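The convergence of these covariances can also be observed numerically. The sketch below is our illustration, not part of the proof, again with the simplifying choices $L\equiv 1$, ${c_{1}}=1$ and $r(h)={(1+|h|)^{2d-1}}$: it computes the normalized covariances of the partial sums exactly for a moderate N and compares them with the fBm covariance.
```python
# Exact normalized covariances of partial sums vs. the fBm limit; toy setup
# with L == 1, c_1 = 1 and r(h) = (1 + |h|)^(2d - 1). Ours, for illustration.
import numpy as np

d, N = 0.2, 2000
H = d + 0.5
C1 = 1.0 / (d * (1 + 2 * d))

def r(h):
    return (1.0 + np.abs(h)) ** (2 * d - 1)

def cov_norm(ti, tj):
    a, b = int(N * ti), int(N * tj)
    k = np.arange(1, a + 1)[:, None]
    l = np.arange(1, b + 1)[None, :]
    return np.sum(r(k - l)) / (C1 * N ** (1 + 2 * d))

for ti, tj in [(0.3, 0.3), (0.3, 0.8), (0.5, 1.0)]:
    fbm = 0.5 * (ti ** (2 * H) + tj ** (2 * H) - abs(tj - ti) ** (2 * H))
    # values approximately match; slowly varying corrections vanish as N grows
    print((ti, tj), round(cov_norm(ti, tj), 3), round(fbm, 3))
```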
To conclude the functional central limit theorem, it remains to establish the tightness of ${\{{\widetilde{S}_{N}}(t)\}_{t\in [0,1]}}$ in $\mathcal{D}[0,1]$. We first establish a uniform ${L^{2}}$-bound for increments.
By stationarity and the variance asymptotics proved above, there exists ${N_{0}}$ and a constant $M\gt 0$ such that, for all $N\ge {N_{0}}$ and all $0\le s\lt t\le 1$,
\[ \mathrm{Var}\big({\widetilde{S}_{N}}(t)-{\widetilde{S}_{N}}(s)\big)=\frac{\mathrm{Var}\big({\textstyle\textstyle\sum _{k=\lfloor Ns\rfloor +1}^{\lfloor Nt\rfloor }}{X_{k}}\big)}{\mathrm{Var}\big({\textstyle\textstyle\sum _{k=1}^{N}}{X_{k}}\big)}\le M\hspace{0.1667em}|t-s{|^{2H}}.\]
Indeed, the numerator is of order ${(\lfloor Nt\rfloor -\lfloor Ns\rfloor )^{1+2d}}L(\lfloor Nt\rfloor -\lfloor Ns\rfloor )$ and the denominator is of order ${N^{1+2d}}L(N)$; by regular variation this yields the stated uniform bound for large N.
Fix $\varepsilon \gt 0$ and $\delta \gt 0$. For partitions $0={t_{0}}\lt {t_{1}}\lt \cdots \lt {t_{r}}=1$ with mesh $\le \delta $, define
\[ {w^{\prime }_{{\widetilde{S}_{N}}}}(\delta ):=\underset{\{{t_{i}}\}}{\inf }\underset{1\le i\le r}{\max }\underset{u,v\in [{t_{i-1}},{t_{i}})}{\sup }|{\widetilde{S}_{N}}(u)-{\widetilde{S}_{N}}(v)|.\]
By Chebyshev’s inequality and the ${L^{2}}$-bound,
\[ \mathbb{P}\big(|{\widetilde{S}_{N}}(t)-{\widetilde{S}_{N}}(s)|\gt \varepsilon \big)\le \frac{\mathrm{Var}({\widetilde{S}_{N}}(t)-{\widetilde{S}_{N}}(s))}{{\varepsilon ^{2}}}\le \frac{M|t-s{|^{2H}}}{{\varepsilon ^{2}}}.\]
Hence, for any partition with mesh $\le \delta $,
\[\begin{aligned}{}\mathbb{P}\big({w^{\prime }_{{\widetilde{S}_{N}}}}(\delta )\gt \varepsilon \big)& \le {\sum \limits_{i=1}^{r}}\mathbb{P}\big(\underset{u,v\in [{t_{i-1}},{t_{i}})}{\sup }|{\widetilde{S}_{N}}(u)-{\widetilde{S}_{N}}(v)|\gt \varepsilon \big)\le {\sum \limits_{i=1}^{r}}\frac{M{({t_{i}}-{t_{i-1}})^{2H}}}{{\varepsilon ^{2}}}\\ {} & \le \frac{M}{{\varepsilon ^{2}}}\hspace{0.1667em}{\delta ^{2H-1}},\end{aligned}\]
because the number of intervals $r\le 1/\delta $ and ${\textstyle\sum _{i}}{({t_{i}}-{t_{i-1}})^{2H}}\le r{\delta ^{2H}}\le {\delta ^{2H-1}}$. Since $H\gt 1/2$ we have $2H-1\gt 0$, so the right-hand side tends to 0 as $\delta \downarrow 0$, uniformly in $N\ge {N_{0}}$. Finally ${\widetilde{S}_{N}}(0)=0$ for all N, so by Billingsley’s tightness criterion ([3, Thm. 13.2]) the sequence $\{{\widetilde{S}_{N}}\}$ is tight in $\mathcal{D}[0,1]$.
Since both the convergence of the finite-dimensional distributions and the tightness condition hold, we conclude that
\[ {\widetilde{S}_{N}}(\cdot )\xrightarrow{d}{B_{H}}(\cdot )\hspace{1em}\text{in}\hspace{2.5pt}\mathcal{D}[0,1],\]
where ${B_{H}}$ is a fractional Brownian motion with Hurst parameter $H=d+\frac{1}{2}$.  □

4 Spectral density

The (classical) spectral density of a weakly stationary time series $X={\left\{{X_{n}}\right\}_{n\in \mathbb{Z}}}$, defined by (1), is always real and nonnegative. This is because the autocovariance function ${\gamma _{X}}(h)$ of a stationary process is always symmetric, satisfying ${\gamma _{X}}(h)={\gamma _{X}}(-h)$. Consequently, the Fourier representation reduces to a cosine series, and no imaginary component appears.
In contrast, in our framework we consider the nonsymmetric autocovariance function of the stationary series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ constructed from the increment process of the fractional PSI-process ${\Psi _{\theta }}(t)$, $t\in {\mathbb{R}_{+}}$. Since the series is defined only on ${\mathbb{Z}_{+}}$ and the autocovariance function is defined only for $h\ge 0$, it is nonsymmetric. Hence, the spectral density can be defined using a unilateral (one-sided) Fourier series
(31)
\[ {f_{X}}(\lambda )=\frac{1}{2\pi }{\sum \limits_{h=0}^{\infty }}{\gamma _{X}}(h){e^{-ih\lambda }},\hspace{1em}\lambda \in (-\pi ,\pi ],\]
which is not the classical spectral density but rather a one-sided distributional extension.
By Corollary 2, since the autocovariance series diverges for $H\in \left(\frac{1}{2},1\right)$, the unilateral Fourier series of ${\gamma _{X}}(h)$ may not converge pointwise. Nevertheless, the spectral density can still be rigorously defined as a tempered distribution.
To obtain a rigorous representation of the spectral density ${f_{X}}(\lambda )$, defined by (31), in terms of a unilateral Fourier series, one must establish the convergence of the unilateral Fourier transform of ${\gamma _{X}}(h)$ in the distributional sense within the space of tempered distributions ${\mathcal{S}^{\prime }}(\mathbb{R})$.
Let $\mathcal{S}(\mathbb{R})$ denote the Schwartz space of rapidly decreasing smooth functions, i.e.,
\[ \mathcal{S}(\mathbb{R}):=\Big\{\varphi \in {C^{\infty }}(\mathbb{R}):\underset{x\in \mathbb{R}}{\sup }|{x^{m}}{\varphi ^{(n)}}(x)|\lt \infty ,\hspace{0.2778em}\forall m,n\ge 0\Big\}.\]
Its topological dual, denoted by ${\mathcal{S}^{\prime }}(\mathbb{R})$, is the space of tempered distributions. A tempered distribution $T\in {\mathcal{S}^{\prime }}(\mathbb{R})$ is a continuous linear functional on $\mathcal{S}(\mathbb{R})$, i.e., for each $\varphi \in \mathcal{S}(\mathbb{R})$,
\[ T:\varphi \mapsto \langle T,\varphi \rangle \in \mathbb{C}.\]
Thus, the spectral density ${f_{X}}(\lambda )$, defined by (31), can be rigorously defined as a tempered distribution by
\[ \langle {f_{X}},\varphi \rangle :=\frac{1}{2\pi }{\sum \limits_{h=0}^{\infty }}{\gamma _{X}}(h)\hspace{0.1667em}\widehat{\varphi }(h),\hspace{1em}\forall \varphi \in \mathcal{S}(\mathbb{R}),\]
where $\widehat{\varphi }(h)={\textstyle\int _{\mathbb{R}}}{e^{-ih\lambda }}\varphi (\lambda )\hspace{0.1667em}d\lambda $ is the Fourier transform of a test function φ. This ensures that ${f_{X}}\in {\mathcal{S}^{\prime }}(\mathbb{R})$ and provides a rigorous basis for the unilateral Fourier transform in the distributional sense.
Furthermore, by Corollary 2, the autocovariance function ${\gamma _{X}}(h)$ exhibits long-range dependence of the form
(32)
\[ {\gamma _{X}}(h)\sim {L_{1}}(h)\hspace{0.1667em}{h^{2d-1}},\hspace{1em}d\in \left(0,\frac{1}{2}\right),\]
where ${L_{1}}$ is a slowly varying function at infinity. Estrada and Kanwal [10] prove that if the coefficients ${\gamma _{X}}(h)$ grow at most polynomially, then ${f_{X}}(\lambda )$ defines a tempered distribution (via its partial sums). Accordingly, we state the following result.
Lemma 2.
Let ${\{{\gamma _{X}}(h)\}_{h\ge 0}}$ be a sequence satisfying (32). Define the unilateral Fourier series
\[ {f_{N}}(x)={\sum \limits_{h=0}^{N}}{\gamma _{X}}(h){e^{-ihx}}.\]
Then the sequence ${\{{f_{N}}\}_{N\ge 1}}$ converges in the topology of tempered distributions, that is,
\[ \underset{N\to \infty }{\lim }\langle {f_{N}},\varphi \rangle =\langle {f_{X}},\varphi \rangle ,\hspace{1em}\forall \varphi \in \mathcal{S}(\mathbb{R}),\]
where ${f_{X}}\in {\mathcal{S}^{\prime }}(\mathbb{R})$ is the tempered distribution defined by (31).
Proof.
Since ${\gamma _{X}}(h)$ satisfies (32), there exists $M\gt 0$ such that
\[ |{\gamma _{X}}(h)|\le M\hspace{0.1667em}|{L_{1}}(h)|\hspace{0.1667em}{h^{2d-1}}\]
for sufficiently large h. By Karamata’s characterization of slowly varying functions (see [4, Chapters 1–2]), for any $\varepsilon \gt 0$ we have $|{L_{1}}(h)|\le {M_{\varepsilon }}{h^{\varepsilon }}$ when h is large. Thus
\[ |{\gamma _{X}}(h)|=O({h^{2d-1+\varepsilon }}),\]
which is of polynomial growth since $2d-1\lt 0$. Hence the hypotheses of the distributional Fourier series theorem apply for ${\gamma _{X}}(h)$. The convergence in ${\mathcal{S}^{\prime }}(\mathbb{R})$ follows.  □
Lemma 2 shows that the unilateral Fourier expansion of ${\gamma _{X}}(h)$ is well defined in ${\mathcal{S}^{\prime }}(\mathbb{R})$, and its spectral interpretation remains valid.
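For a concrete view of the pairing in Lemma 2, the following sketch (ours, with the simplifying choice ${\gamma _{X}}(0)=1$ and ${\gamma _{X}}(h)={h^{2d-1}}$ for $h\ge 1$, i.e. ${L_{1}}\equiv 1$) evaluates $\langle {f_{N}},\varphi \rangle $ for the Gaussian test function $\varphi (\lambda )={e^{-{\lambda ^{2}}/2}}$, whose Fourier transform $\widehat{\varphi }(h)=\sqrt{2\pi }\hspace{0.1667em}{e^{-{h^{2}}/2}}$ decays faster than any power.
```python
# Convergence of <f_N, phi> from Lemma 2; toy coefficients with L_1 == 1 and
# a Gaussian test function with closed-form Fourier transform. Our sketch.
import numpy as np

d = 0.2
gamma = lambda h: 1.0 if h == 0 else h ** (2 * d - 1)
phihat = lambda h: np.sqrt(2 * np.pi) * np.exp(-h ** 2 / 2.0)

for N in [2, 5, 10, 20, 40]:
    pairing = sum(gamma(h) * phihat(h) for h in range(N + 1)) / (2 * np.pi)
    print(N, pairing)   # stabilizes after a few terms: phihat kills the tail
```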
Now, we present a typical result used in the study of long-memory processes and fractional behavior.
Proposition 4.
Let ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ be a stationary time series constructed from the increments of a fractional PSI-process with exponent parameter $H\in (\frac{1}{2},1)$. Then for $\lambda \in (-\pi ,\pi ]$ the spectral density is given by
(33)
\[ {f_{X}}(\lambda )\approx {C_{H}}\left(2(\cos \lambda -1)\right){e^{im\lambda }}{(i\lambda )^{-(2H+1)}},\]
and satisfies the asymptotic behavior
(34)
\[ {f_{X}}(\lambda )\sim {C^{\prime }_{H}}{(i\lambda )^{1-2H}}\hspace{1em}\textit{as}\hspace{3.57777pt}\lambda \to 0,\]
where the constants ${C_{H}}$ and ${C^{\prime }_{H}}$ are given by
(35)
\[ {C_{H}}=\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1,im\lambda \right)\hspace{1em}\textit{and}\hspace{1em}{C^{\prime }_{H}}=\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1\right).\]
Proof.
We have
\[\begin{aligned}{}{f_{X}}(\lambda )& =\frac{1}{2\pi }{\sum \limits_{h=0}^{\infty }}{\gamma _{X}}(h){e^{-ih\lambda }}\\ {} & =\frac{{\sigma ^{2}}}{4\pi H(2H-1)}{\sum \limits_{h=0}^{\infty }}\left({\left(h+1+m\right)^{2H}}+{\left(h-1+m\right)^{2H}}-2{\left(h+m\right)^{2H}}\right){e^{-ih\lambda }}.\end{aligned}\]
Denote $C=\frac{{\sigma ^{2}}}{4\pi H(2H-1)}$ and $g(h)={\left(h+m\right)^{2H}}$. So, we have
\[\begin{aligned}{}{f_{X}}(\lambda )& =C{\sum \limits_{h=0}^{\infty }}\left(g(h+1)+g(h-1)-2g(h)\right){e^{-ih\lambda }}\\ {} & =C\left({\sum \limits_{h=0}^{\infty }}g(h+1){e^{-ih\lambda }}+{\sum \limits_{h=0}^{\infty }}g(h-1){e^{-ih\lambda }}-{\sum \limits_{h=0}^{\infty }}2g(h){e^{-ih\lambda }}\right)\\ {} & =C\left({e^{i\lambda }}{\sum \limits_{h=0}^{\infty }}g(h){e^{-ih\lambda }}+{e^{-i\lambda }}{\sum \limits_{h=0}^{\infty }}g(h){e^{-ih\lambda }}-2{\sum \limits_{h=0}^{\infty }}g(h){e^{-ih\lambda }}\right)\\ {} & =C\left({e^{i\lambda }}+{e^{-i\lambda }}-2\right){\sum \limits_{h=0}^{\infty }}g(h){e^{-ih\lambda }}=C\left(2\cos \lambda -2\right){\sum \limits_{h=0}^{\infty }}g(h){e^{-ih\lambda }}\\ {} & =C\left(2(\cos \lambda -1)\right){\sum \limits_{h=0}^{\infty }}{\left(h+m\right)^{2H}}{e^{-ih\lambda }}.\end{aligned}\]
Since $2H\gt 1$, the sum ${\textstyle\sum \limits_{h=0}^{\infty }}{\left(h+m\right)^{2H}}{e^{-ih\lambda }}$ defines a tempered distribution rather than a regular function. By approximating this sum with an integral and applying distributional Fourier analysis, we obtain, for $2H\in (1,2)$,
\[\begin{aligned}{}{\sum \limits_{h=0}^{\infty }}{\left(h+m\right)^{2H}}{e^{-ih\lambda }}& \approx {\underset{0}{\overset{\infty }{\int }}}{\left(x+m\right)^{2H}}{e^{-ix\lambda }}\mathrm{d}x={e^{im\lambda }}{\underset{m}{\overset{\infty }{\int }}}{y^{2H}}{e^{-iy\lambda }}\mathrm{d}y\\ {} & ={e^{im\lambda }}\Gamma \left(2H+1,im\lambda \right){(i\lambda )^{-(2H+1)}},\end{aligned}\]
where $\Gamma (s,z)$ is the upper incomplete Gamma function.
So, we get
\[ {f_{X}}(\lambda )\approx {C_{H}}\left(2(\cos \lambda -1)\right){e^{im\lambda }}{(i\lambda )^{-(2H+1)}},\]
where ${C_{H}}=C\hspace{3.33333pt}\Gamma \left(2H+1,im\lambda \right)=\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1,im\lambda \right)$.
As $\lambda \to 0$, we have $\cos \lambda -1\sim -\frac{{\lambda ^{2}}}{2}$ and $\Gamma \left(2H+1,im\lambda \right)\to \Gamma (2H+1)$. Therefore, as $\lambda \to 0$,
\[\begin{aligned}{}{f_{X}}(\lambda )& \sim \frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1\right)2\left(-\frac{{\lambda ^{2}}}{2}\right){(i\lambda )^{-(2H+1)}}\\ {} & =\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1\right){(i\lambda )^{2}}{(i\lambda )^{-(2H+1)}}\\ {} & =\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1\right){(i\lambda )^{1-2H}}={C^{\prime }_{H}}{(i\lambda )^{1-2H}},\end{aligned}\]
where ${C^{\prime }_{H}}=\frac{{\sigma ^{2}}}{4\pi H(2H-1)}\Gamma \left(2H+1\right)$.  □
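The integral in the proof exists only in a regularized, distributional sense. As an independent sanity check (ours, not part of the paper), the damped version of the identity, ${\textstyle\int _{m}^{\infty }}{y^{2H}}{e^{-sy}}\hspace{0.1667em}\mathrm{d}y={s^{-(2H+1)}}\Gamma (2H+1,sm)$ for $\operatorname{Re}s\gt 0$, can be verified numerically with mpmath; letting $s=\varepsilon +i\lambda $ with $\varepsilon \downarrow 0$ recovers the expression used above.
```python
# Check of int_m^inf y^(2H) e^(-s*y) dy = s^(-(2H+1)) * Gamma(2H+1, s*m)
# for Re s > 0, the damped version of the identity in the proof. Needs mpmath.
from mpmath import mp, mpc, gammainc, quad, exp

mp.dps = 20
H, m = 0.8, 1.0
s = mpc(0.5, 0.7)                                 # eps + i*lambda, eps > 0

lhs = quad(lambda y: y ** (2 * H) * exp(-s * y), [m, mp.inf])
rhs = s ** (-(2 * H + 1)) * gammainc(2 * H + 1, s * m)
print(lhs)
print(rhs)   # the two complex values agree to working precision
```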
Remark 4.
The incomplete Gamma function representation in the formula for the spectral density (33) arises naturally from the integral approximation of the unilateral Fourier sum. Since the argument of the incomplete Gamma function is purely imaginary, the resulting expression for ${f_{X}}(\lambda )$ is, in general, a complex-valued and nonsymmetric function. Moreover, the imaginary part reflects the one-sided nature of the representation, not any causal structure of the series.
Remark 5.
The spectral density exhibits a power-law divergence near zero frequency, which is characteristic of long-range dependence. The term ${(i\lambda )^{1-2H}}$ governs the behavior at low frequencies. This asymptotic form generalizes classical results for fractional Gaussian noise, making it a crucial tool in the spectral analysis of fractional processes.
Corollary 3.
Let ${\gamma _{X}}(h)$ satisfy (32) and ${f_{X}}(\lambda )$ satisfy (34). Then the Abelian/Tauberian correspondence linking ${f_{X}}(\lambda )$ and ${\gamma _{X}}(h)$ holds.
Proof.
By the classical Abelian/Tauberian theory (see [4]), the power-law asymptotics of ${\gamma _{X}}(h)$ is directly linked to the singular behavior of the spectral density near the origin:
\[ {\gamma _{X}}(h)\sim {L_{1}}(h)\hspace{0.1667em}{h^{2d-1}}\Longleftrightarrow {f_{X}}(\lambda )\sim {C_{d}}\hspace{0.1667em}{L_{1}}(1/|\lambda |)\hspace{0.1667em}|\lambda {|^{-2d}},\hspace{1em}\lambda \to 0,\hspace{3.33333pt}d\in \big(0,\frac{1}{2}\big),\]
for some explicit constant ${C_{d}}\gt 0$. In our setting, according to (34), for $d=H-\frac{1}{2}$ we obtain
\[\begin{aligned}{}{(i\lambda )^{1-2H}}& ={(i\lambda )^{-2d}}=|\lambda {|^{-2d}}\hspace{0.1667em}{e^{-i\pi d\hspace{0.1667em}\mathrm{sgn}(\lambda )}}=|\lambda {|^{-2d}}\left(\cos (\pi d)-i\hspace{0.1667em}\mathrm{sgn}(\lambda )\hspace{0.1667em}\sin (\pi d)\right)\\ {} & \sim |\lambda {|^{-2d}}\cos (\pi d),\hspace{1em}\lambda \to 0,\end{aligned}\]
where ${L_{1}}(1/|\lambda |)\equiv 1$, and hence ${C_{d}}={C^{\prime }_{H}}\cos (\pi d)$. Thus, our result confirms the validity of the classical Abelian/Tauberian theory in this framework.  □
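The branch computation above is elementary but easy to get wrong; a short numerical check (ours) confirms it:
```python
# Check of (i*lam)^(-2d) = |lam|^(-2d) * (cos(pi*d) - i*sgn(lam)*sin(pi*d)),
# using the principal branch of the complex power.
import math

d = 0.3
for lam in (0.4, -0.4):
    lhs = (1j * lam) ** (-2 * d)
    sgn = math.copysign(1.0, lam)
    rhs = abs(lam) ** (-2 * d) * complex(math.cos(math.pi * d),
                                         -sgn * math.sin(math.pi * d))
    print(lhs, rhs)   # equal up to rounding
```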
Thus, a key distinction between general stationary LRD models and our model lies in the availability of closed-form covariance functions and spectral distributions. As we have shown, our LRD model admits a closed-form covariance function and tractable one-sided spectral distributions, whereas other LRD models typically do not:
  • Fractional Lévy noise / Lévy-driven moving averages: The covariance is usually implicit, given by an integral involving a kernel, and only the asymptotic behavior (e.g., ${\gamma _{X}}(h)\sim C{h^{2H-2}}$) is explicit. The spectral density is defined via the bilateral Fourier transform, but no closed-form expression exists in general, and one-sided spectral distributions are not tractable.
  • Time-changed Gaussian models: The covariance may be expressed explicitly only for special subordinators (e.g., inverse stable), but is otherwise implicit. The spectral density, also defined via the bilateral Fourier transform, is rarely closed-form, though asymptotics near the origin often characterize long memory. One-sided spectral distributions are not tractable in general.
  • Fractional Poisson-based models: The covariance often admits closed forms in terms of Mittag-Leffler functions. The spectral density can sometimes be written explicitly, and one-sided spectral distributions are tractable in special cases.
  • Mixed Poisson-based models: The covariance is closed-form, depending directly on the variance of the random intensity in the underlying Poisson process. The spectral density and one-sided spectral distributions are often explicit and analytically tractable.
Figure 2 illustrates the spectral structure of the stationary time series constructed from the increments of the fractional process. As $\lambda \to \pm \pi $, the spectral density decays smoothly, indicating a diminishing contribution from high-frequency components. Notably, the magnitude $|{f_{X}}(\lambda )|$ peaks sharply at zero frequency, a characteristic signature of LRD processes, where low frequencies dominate the spectrum. Additionally, while the imaginary part of ${f_{X}}(\lambda )$ is small for large θ (i.e., small $m=\frac{1}{\theta }$), it demonstrates the asymmetry of the spectral density caused by the exponential factor ${e^{i\lambda /\theta }}$. This reflects the lack of symmetry in the autocovariance function. In spectral analysis and in many applications, we are interested in how much energy or variation is present at each frequency; this is represented by $|{f_{X}}(\lambda )|$, the spectral energy at that frequency. The upper panel of Figure 3 illustrates the effect of the Hurst parameter H for $H=0.6,0.7,0.9$ with fixed $\theta =1000$. The spectral energy $|{f_{X}}(\lambda )|$ shows a clear peak at zero frequency, which sharpens and increases in height as H rises.
Fig. 2.
Spectral density ${f_{X}}(\lambda )$ of the process for $H=0.6$ and $\theta =1000$. The plot shows the real part (blue), imaginary part (orange dashed), and magnitude (green dotted) of the complex-valued spectral density
Fig. 3.
The upper panel illustrates the effect of varying the Hurst parameter H (with fixed $\theta =1000$) on $|{f_{X}}(\lambda )|$. The lower panel shows the effect of varying θ with fixed $H=0.8$ on $|{f_{X}}(\lambda )|$
The lower panel of Figure 3 shows the effect of varying $\theta =0.3,0.5,10,100$ on $|{f_{X}}(\lambda )|$, with fixed $H=0.8$. As θ increases, the spectral energy decays more smoothly near $\lambda =\pm \pi $, indicating reduced contributions from high-frequency components. Furthermore, large values of θ produce a spectrum that closely resembles that of classical fractional Gaussian noise.
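Figures 2 and 3 can be reproduced, up to plotting details, directly from the closed-form approximation (33). The sketch below is our reconstruction, with $\sigma =1$ and $m=1/\theta $ as assumed inputs; the incomplete Gamma function with imaginary argument is evaluated with mpmath.
```python
# Our reconstruction of f_X(lambda) from (33); sigma = 1 and m = 1/theta are
# assumptions. Requires numpy, mpmath and matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from mpmath import gammainc

def f_X(lam, H=0.6, theta=1000.0, sigma=1.0):
    m = 1.0 / theta
    C = sigma ** 2 / (4 * np.pi * H * (2 * H - 1))
    return complex(C * complex(gammainc(2 * H + 1, 1j * m * lam))
                   * 2 * (np.cos(lam) - 1)
                   * np.exp(1j * m * lam)
                   * (1j * lam) ** (-(2 * H + 1)))

lams = np.linspace(-np.pi, np.pi, 801)
lams = lams[np.abs(lams) > 1e-3]          # avoid the singularity at lambda = 0
vals = np.array([f_X(l) for l in lams])

plt.plot(lams, vals.real, label="Re $f_X$")
plt.plot(lams, vals.imag, "--", label="Im $f_X$")
plt.plot(lams, np.abs(vals), ":", label="$|f_X|$")
plt.xlabel(r"$\lambda$"); plt.legend(); plt.show()
```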
Remark 6.
Since the unilateral series (31) does not converge to an ordinary function, ${f_{X}}$ is not a classical (real, nonnegative) spectral density on $(-\pi ,\pi ]$, and the associated spectral measure is not absolutely continuous with respect to Lebesgue measure in the classical sense. Therefore, the spectral measure in this case is a generalized spectral measure given by
\[ {F_{X}}(\mathrm{d}\lambda )={f_{X}}(\lambda )\hspace{0.1667em}\mathrm{d}\lambda \approx {C_{H}}\left(2(\cos \lambda -1)\right){e^{i\frac{\lambda }{\theta }}}{(i\lambda )^{-(2H+1)}}\hspace{0.1667em}\mathrm{d}\lambda .\]
Thus, the spectral representation of the stationary time series defined as the increments of a fractional PSI-process is to be interpreted in the sense of generalized spectral measure. This approach leads to the framework of distribution theory and the Fourier transforms of distributions.

5 Numerical simulation of the studied series

In this simulation, we generate and analyze a stationary time series constructed from the increments of a fractional PSI-process with $\kappa =0.4$ and $\theta =50$ over a fixed total duration of 1000, i.e., the full time interval on which the process is observed and the series is constructed. Figure 4 shows a sample path of the simulated series ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$ over the first 500 observations. The amplitude of changes between successive points is governed by the fractional PSI-process parameters, in particular $\theta =50$ and $H=1-\frac{\kappa }{2}=0.8$. The upper panel of Figure 5 displays the empirical autocorrelation function of ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$. The slow decay of correlations across lags indicates the presence of long-range dependence, consistent with the exponent parameter $H=0.8$. The lower panel of Figure 5 presents the low-frequency spectral density of the sequence. The spectral power attains its highest values near the origin $(\lambda \approx 0)$, indicating that most of the energy of the process is concentrated at low frequencies. This feature reflects the dominance of slowly varying components in the signal, which evolve gradually over time. Such a concentration of spectral power near zero frequency is a well-known signature of long-range dependence and is consistent with the slow decay observed in the empirical autocorrelation function shown in the upper panel. Since the spectral density decays smoothly at higher frequencies with no additional dominant peaks (Figure 2), only the low-frequency range is shown in Figure 5, highlighting the slowly varying components and the long-memory behavior of the sequence.
Fig. 4.
Sample path of the simulated series ${\{{X_{n}}\}_{n\in {\mathbb{Z}_{+}}}}$ over the first 500 observations for $H=0.8$ and $\theta =50$
Fig. 5.
Upper panel – empirical autocorrelation of ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$, lower panel – low-frequency spectral density of ${\left\{{X_{n}}\right\}_{n\in {\mathbb{Z}_{+}}}}$, both for $H=0.8$
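The exact simulation requires the pseudo-Poisson machinery of Sections 2 and 3, which we do not reproduce here. A rough second-order surrogate, however, can be drawn from the closed-form autocovariance read off from the proof of Proposition 4, ${\gamma _{X}}(h)=\frac{{\sigma ^{2}}}{2H(2H-1)}\big({(h+1+m)^{2H}}+|h-1+m{|^{2H}}-2{(h+m)^{2H}}\big)$. The sketch below is ours and matches only this second-order structure (the actual series is non-Gaussian); the absolute value in the $h=0$ term and the eigenvalue clipping are our numerical conventions.
```python
# Gaussian second-order surrogate of the studied series: matches only the
# closed-form autocovariance; the true series is non-Gaussian. Our sketch.
import numpy as np
import matplotlib.pyplot as plt

H, theta, sigma, n = 0.8, 50.0, 1.0, 500
m = 1.0 / theta
h = np.arange(n, dtype=float)
gamma = (sigma ** 2 / (2 * H * (2 * H - 1))) * (
    (h + 1 + m) ** (2 * H)
    + np.abs(h - 1 + m) ** (2 * H)      # |.| handles the h = 0 term
    - 2 * (h + m) ** (2 * H)
)

idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma = gamma[idx]                       # stationary covariance matrix
w, V = np.linalg.eigh(Sigma)             # clip tiny negative eigenvalues
rng = np.random.default_rng(0)
X = V @ (np.sqrt(np.clip(w, 0.0, None)) * rng.standard_normal(n))

acf = np.array([np.mean((X[:n - k] - X.mean()) * (X[k:] - X.mean()))
                for k in range(60)]) / X.var()
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(X); ax1.set_title("surrogate sample path")
ax2.stem(acf); ax2.set_title("empirical ACF: slow decay")
plt.tight_layout(); plt.show()
```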

Acknowledgments

The author dedicates this paper to the memory of Oleg Rusakov, who passed away recently. His works have been a source of inspiration for this research.

Footnotes

1 The spectral density ${f_{X}}(\lambda )$ is $2\pi $-periodic, i.e.,
\[ {f_{X}}(\lambda )={f_{X}}(\lambda +2\pi k),\hspace{1em}\forall k\in \mathbb{Z}.\]
2 The function ${L_{1}}$ is a slowly varying function at infinity if it is positive on $[b,\infty )$ for some $b\ge 0$, and for any $a\gt 0$, $\underset{x\to \infty }{\lim }\frac{{L_{1}}(ax)}{{L_{1}}(x)}=1$.
3 Here and below ∼ denotes asymptotic equivalence, i.e., $f(x)\sim g(x)$ if $\underset{x\to \infty }{\lim }\frac{f(x)}{g(x)}=1$.

References

[1] 
Azzo, E., Rusakov, O.: On a variant of a pseudo-Poisson process with random intensity and free-distribution self-similarity. In: Proceedings of the All-Russian Conference on Sciences and Humanities, Nauka SPbU, Saint Petersburg, Russian Federation, pp. 54–60 (2020)
[2] 
Beghin, L., Orsingher, E.: Fractional Poisson processes and related planar random motions. Electron. J. Probab. 14, 1790–1826 (2009) MR2535014. https://doi.org/10.1214/EJP.v14-675
[3] 
Billingsley, P.: Convergence of Probability Measures, 2nd edn. Wiley, New York (1999) MR1700749. https://doi.org/10.1002/9780470316962
[4] 
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1987) MR0898871. https://doi.org/10.1017/CBO9780511721434
[5] 
Borgnat, P., Amblard, P.-O., Flandrin, P.: Scale invariances and Lamperti transformations for stochastic processes. J. Phys. A, Math. Gen. 38(10), 2081–2101 (2005) MR2126506. https://doi.org/10.1088/0305-4470/38/10/002
[6] 
Brockwell, P.J., Schlemm, E.: Linear fractional stable motion and fractional Lévy noise. Stoch. Process. Appl. 123(2), 653–677 (2013)
[7] 
Brockwell, P.J., Schlemm, E.: Long-range dependent Lévy-driven moving averages and their properties. arXiv:1702.02794 (2017)
[8] 
Chen, W., Meerschaert, M.M., Scheffler, H.P.: Fractional Gaussian processes and time-changed Brownian motion. J. Stat. Phys. 140(6), 1025–1040 (2010)
[9] 
Embrechts, P., Maejima, M.: Selfsimilar Processes. Princeton University Press (2002) MR1920153
[10] 
Estrada, R., Kanwal, R.P.: A Distributional Approach to Asymptotics: Theory and Applications, 2nd edn. Birkhäuser, Boston (2002) MR1882228. https://doi.org/10.1007/978-0-8176-8130-2
[11] 
Feller, W.: An Introduction to Probability Theory and Its Applications vol. 2, 2nd edn. John Wiley & Sons (1971) MR0270403
[12] 
Jørgensen, B., Martínez, J.R., Demétrio, C.G.B.: Self-similarity and Lamperti convergence for families of stochastic processes. Braz. J. Probab. Stat. 24(2), 154–183 (2010) MR2832230. https://doi.org/10.1007/s10986-011-9131-7
[13] 
Laskin, N.: Fractional Poisson process. Commun. Nonlinear Sci. Numer. Simul. 8(3–4), 201–213 (2003) MR2007003. https://doi.org/10.1016/S1007-5704(03)00037-6
[14] 
Mandelbrot, B.B., Van Ness, J.W.: Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10(4), 422–437 (1968) MR0242239. https://doi.org/10.1137/1010093
[15] 
Meerschaert, M., Nane, E., Vellaisamy, P.: The fractional Poisson process and the inverse stable subordinator. Electron. J. Probab. 16, 1600–1620 (2011) MR2835248. https://doi.org/10.1214/EJP.v16-920
[16] 
Pipiras, V., Taqqu, M.S.: Long-Range Dependence and Self-Similarity. Cambridge University Press (2017) MR3729426
[17] 
Rusakov, O.V., Yakubovich, Y.V., Baev, A.: On some local asymptotic properties of sequences with a random index. Vestn. St. Petersbg. Univ., Math. 53(3), 308–319 (2020) MR4165885. https://doi.org/10.21638/spbu01.2020.308
[18] 
Rusakov, O.V., Yakubovich, Y.V., Laskin, M.: Self-similarity for information flows with a random load free on distribution: The long memory case. In: Proceedings of the 2nd European Conference on Electrical Engineering and Computer Science (EECS), pp. 183–189 (2018)
[19] 
Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman and Hall, New York (1994) MR1280932
[20] 
Sinai, Y.G.: Self-similar probability distributions. Theory Probab. Appl. 21(1), 64–80 (1976) MR0407959
[21] 
Taqqu, M.S.: Weak convergence to fractional Brownian motion and to the Rosenblatt process. Z. Wahrscheinlichk.theor. Verw. Geb. 31, 287–302 (1975) MR0400329. https://doi.org/10.1007/BF00532868
Copyright
© 2025 The Author(s). Published by VTeX. Open access article under the CC BY license.

Keywords
Family-wise scaling process, long-range dependence, pseudo-Poisson process, spectral density

MSC2020
60G18, 60G22, 60G55, 62M10, 62M15
