Modern Stochastics: Theory and Applications

Combinatorial approach to the calculation of projection coefficients for the simplest Gaussian-Volterra process
Volume 11, Issue 4 (2024), pp. 403–419
Iryna Bodnarchuk, Yuliya Mishura

https://doi.org/10.15559/24-VMSTA252
Pub. online: 9 April 2024      Type: Research Article      Open Access

Received: 25 February 2024
Revised: 19 March 2024
Accepted: 20 March 2024
Published: 9 April 2024

Abstract

The Gaussian-Volterra process with a linear kernel is considered. Its properties are established and the projection coefficients are calculated explicitly; that is, one of the possible prediction problems related to Gaussian processes is solved.

1 Introduction

Starting with the famous fractional Brownian motion (fBm), models in which the noise is represented by a Gaussian-Volterra process (GVp) have become very popular and remain so. Such processes are non-Markov and therefore, in some sense, have a memory, while the phenomenon of memory is observed in almost all real processes: in economics, finance, cellular and other types of communication, in neural networks and other areas. In this connection, GVp's were studied in many papers, including [2, 6, 8, 10–12, 16, 20], where the properties of GVp's themselves were investigated, and [1–5, 7, 15, 19], where stochastic differential equations driven by such processes were studied.
In general, GVp is a process of the form
\[ {Y_{t}}={\int _{0}^{t}}K(t,s)\hspace{0.1667em}d{W_{s}},\hspace{1em}t\ge 0,\]
where W is a Wiener process, $K(t,s):{\mathbb{R}^{+}}\times [0,t]\to \mathbb{R}$, $t\ge 0$, is a measurable Volterra kernel, such that
\[ {\int _{0}^{t}}{K^{2}}(t,s)\hspace{0.1667em}ds\lt \infty ,\hspace{1em}t\ge 0.\]
With this setting, Y is a well-defined square-integrable stochastic process. If Y is an fBm, ${Y_{t}}={B_{t}^{H}}$, then ([17])
(1)
\[\begin{array}{l}\displaystyle K(t,s)={C_{H}}\left[{\left(\frac{t}{s}\right)^{H-1/2}}{(t-s)^{H-1/2}}\right.\\ {} \displaystyle \left.-\left(H-\frac{1}{2}\right){s^{1/2-H}}{\int _{s}^{t}}{u^{H-3/2}}{(u-s)^{H-1/2}}du\right],\end{array}\]
where $H\in (0,1)$ is the Hurst index. For $H\in (1/2,1)$ the kernel $K(t,s)$ is simplified to
(2)
\[ K(t,s)=(H-\frac{1}{2}){C_{H}}{s^{1/2-H}}{\int _{s}^{t}}{u^{H-1/2}}{(u-s)^{H-3/2}}du,\]
where constant ${C_{H}}$ in (1) and (2) equals
\[ {C_{H}}={\left(\frac{2H\Gamma (\frac{3}{2}-H)}{\Gamma (H+\frac{1}{2})\Gamma (2-2H)}\right)^{1/2}},\]
and $\Gamma (\cdot )$ is the Euler gamma function. Representations (1) and (2) have the following advantages and disadvantages. The advantage is that fBm is a process with stationary increments. As for the disadvantages: first, such kernels are very particular, so it is natural to consider their generalizations and to investigate the properties of the resulting generalized GVp's. This was done, in particular, in the papers [16, 12, 10, 15].
Second, the kernels (1), (2) are comparatively complicated. This leads to a complicated form of the covariance function
\[ \mathsf{E}{B_{s}^{H}}{B_{t}^{H}}=\frac{1}{2}({t^{2H}}+{s^{2H}}-|t-s{|^{2H}}),\hspace{1em}t,\hspace{0.1667em}s\ge 0,\]
and this circumstance, in turn, leads to a complicated form of the covariance matrix of a vector built from the values or from the increments of fBm. In particular, there are still two unsolved problems related to the determinant of the covariance matrix of fBm-vectors; they are considered in [9, 13] and [14]. For example, the following problem is still open: let ${\Delta _{i}^{H}}={B_{i}^{H}}-{B_{i-1}^{H}}$, $i\ge 1$, and consider the projection $\mathsf{E}({\Delta _{1}^{H}}|{\Delta _{2}^{H}},\dots ,{\Delta _{n}^{H}})$ for any $n\ge 2$. According to the theorem of normal correlation, there exist real-valued coefficients ${c_{2}},\dots ,{c_{n}}$ such that $\mathsf{E}({\Delta _{1}^{H}}|{\Delta _{2}^{H}},\dots ,{\Delta _{n}^{H}})={\textstyle\sum _{i=2}^{n}}{c_{i}}{\Delta _{i}^{H}}$. The hypothesis that for $H\gt \frac{1}{2}$ all the coefficients are strictly positive was checked numerically in [13], and analytically for small n, but it has not been proved in general. If solved, this problem would give the key to studying the properties of the matrices in the Cholesky decomposition of the covariance matrices of fBm and its increments; however, as we said, it remains open. Therefore our idea in the present paper is to consider a very simple GVp of the form
(3)
\[ {X_{t}}={\int _{0}^{t}}(t-s)\hspace{0.1667em}d{W_{s}},\]
and establish the properties of this process and of the coefficients of the respective projection, i.e. to consider the corresponding prediction problem and to try to understand which properties of the kernel determine particular properties of the prediction coefficients. In order to distinguish this process from other Gaussian-Volterra processes, we call it the simplest Gaussian-Volterra process (the simplest GVp), although, of course, this name is somewhat arbitrary. We establish that X mimics several properties of fBm with $H\gt 1/2$: it is self-similar, non-Markov, has a long memory, and its increments over nonoverlapping intervals are positively correlated. However, unlike fBm, its increments are not stationary, and we conjecture that it is precisely this property that determines the signs of the projection coefficients: in our case the coefficients are not all positive; moreover, their signs alternate. Note that this is apparently one of the few cases when the coefficients can be calculated explicitly; we use a combinatorial approach, solving the system of linear equations and calculating all the determinants by a recurrence and combinatorics. We can thus state that the form of the kernel itself does not make it possible to predict the properties of the projection coefficients.
The paper is organized as follows. In Section 2 the main properties of the simplest Gaussian-Volterra process are established; in Section 3 we present the combinatorial method for calculating the determinant of the covariance matrix of the increments of the simplest GVp, together with supplementary numerics; and in Section 4 we give explicit formulas for the projection coefficients and establish that their signs alternate. At every step, we check our calculations by several methods.
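The process (3) is also easy to simulate, which is convenient for checking the formulas below: integration by parts gives ${X_{t}}={\textstyle\int _{0}^{t}}{W_{s}}\hspace{0.1667em}ds$ (representation (4) of Proposition 1), so a path of X is the running integral of a Brownian path. A minimal sketch in Python/NumPy (the function name and the discretization are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_simplest_gvp(T=1.0, n_steps=1000):
    """One path of X_t = int_0^t (t - s) dW_s on [0, T].
    By integration by parts, X_t = int_0^t W_s ds, so a path of X is
    the running Riemann sum of a simulated Brownian path."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    W = np.concatenate(([0.0], np.cumsum(dW)))            # Brownian path on the grid
    X = np.concatenate(([0.0], np.cumsum(W[:-1] * dt)))   # left-point Riemann sums
    return X

X = simulate_simplest_gvp()
```

With this discretization the sample variance of ${X_{1}}$ over many paths is close to $1/3$, in agreement with the covariance function established in Proposition 1.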

2 The main properties of the simplest Gaussian-Volterra process

Let $(\Omega ,\mathcal{F},\mathsf{P})$ be a complete probability space and $W=\{{W_{t}},t\ge 0\}$ a Wiener process on this space, of which we consider a continuous modification. Denote by $\mathbb{F}=\{{\mathcal{F}_{t}}={\mathcal{F}_{t}^{W}},t\ge 0\}$ the filtration generated by W. Consider the simplest GVp of the form (3) and establish its basic properties.
Proposition 1.
  • 1) $X=\{{X_{t}},t\ge 0\}$ is a zero-mean Gaussian process with the covariance function
    \[ \mathsf{E}{X_{t}}{X_{s}}=\frac{{(t\wedge s)^{2}}}{6}(3\hspace{0.1667em}t\vee s-t\wedge s),\hspace{1em}s,\hspace{0.1667em}t\ge 0.\]
  • 2) Process X admits a representation
    (4)
    \[ {X_{t}}={\int _{0}^{t}}{W_{s}}ds,\hspace{1em}t\ge 0,\]
    it is continuously differentiable, and
    \[ {\mathcal{F}_{t}^{X}}={\mathcal{F}_{t}^{W}},\hspace{1em}t\ge 0.\]
  • 3) Process X is non-Markov.
  • 4) Process X is α-self-similar with exponent $\alpha =3/2$, is nonstationary and has nonstationary increments.
  • 5) Increments of X in the case of nonoverlapping intervals are positively correlated.
  • 6) Process X has a long memory in the sense that $\mathsf{E}{X_{s}}({X_{t}}-{X_{s}})\to \infty $ as $t\to \infty $.
Proof.
1) Obviously,
(5)
\[\begin{array}{l}\displaystyle \mathsf{E}{X_{t}}{X_{s}}={\int _{0}^{t\wedge s}}(t-u)(s-u)du={\int _{0}^{t\wedge s}}(ts-tu-su+{u^{2}})du\\ {} \displaystyle =ts(t\wedge s)-(t+s)\frac{{(t\wedge s)^{2}}}{2}+\frac{{(t\wedge s)^{3}}}{3}=\frac{{(t\wedge s)^{2}}}{6}(3\hspace{0.1667em}t\vee s-t\wedge s).\end{array}\]
2) Representation (4) follows immediately from (3) by integration by parts. Continuous differentiability is also evident, since we consider a continuous modification of the Wiener process. Furthermore,
\[ {W_{t}}=\underset{\Delta t\downarrow 0}{\lim }\frac{{X_{t}}-{X_{t-\Delta t}}}{\Delta t}={({X_{-}})^{\prime }_{t}},\]
with the left-hand derivative of X on the right-hand side, whence ${\mathcal{F}_{t}^{W}}\subset {\mathcal{F}_{t}^{X}}$. The inverse inclusion follows from (4).
3) On the one hand, according to (4),
(6)
\[ \mathsf{E}({X_{t}}|{\mathcal{F}_{s}^{X}})={X_{s}}+{W_{s}}(t-s)\hspace{1em}\text{for any}\hspace{2.5pt}0\le s\le t.\]
On the other hand, according to the theorem of normal correlation,
\[ \mathsf{E}({X_{t}}|{X_{s}})=a{X_{s}},\]
where
(7)
\[ a=\frac{\mathsf{E}{X_{s}}{X_{t}}}{\mathsf{E}{X_{s}^{2}}}=\frac{\frac{{s^{2}}}{6}(3t-s)}{\frac{{s^{3}}}{3}}=\frac{3t-s}{2s}.\]
Equalities (6) and (7) imply non-Markov property of X.
4) Self-similarity and nonstationarity of the process itself immediately follow from (5), while nonstationarity of its increments follows from the equalities (here $s\lt t$)
(8)
\[\begin{array}{l}\displaystyle \mathsf{E}{({X_{t}}-{X_{s}})^{2}}=\mathsf{E}{X_{t}^{2}}-2\mathsf{E}{X_{t}}{X_{s}}+\mathsf{E}{X_{s}^{2}}=\frac{{t^{3}}}{3}-\frac{{s^{2}}}{3}(3t-s)+\frac{{s^{3}}}{3}\\ {} \displaystyle =\frac{{t^{3}}}{3}-t{s^{2}}+\frac{2{s^{3}}}{3},\end{array}\]
while
\[\begin{array}{l}\displaystyle \mathsf{E}{X_{t-s}^{2}}=\frac{{(t-s)^{3}}}{3}=\frac{{t^{3}}}{3}-t{s^{2}}+\frac{2{s^{3}}}{3}+2t{s^{2}}-{t^{2}}s-{s^{3}}\\ {} \displaystyle =\mathsf{E}{({X_{t}}-{X_{s}})^{2}}-s{(t-s)^{2}}\ne \mathsf{E}{({X_{t}}-{X_{s}})^{2}}.\end{array}\]
5) If the increments are adjacent, i.e. $0\le u\lt s\lt t$, then
\[ \mathsf{E}({X_{s}}-{X_{u}})({X_{t}}-{X_{s}})=\frac{1}{2}({s^{2}}-{u^{2}})(t-s)\gt 0.\]
In the general case, when $0\le u\lt v\le s\lt t$, we have that
(9)
\[ \mathsf{E}({X_{v}}-{X_{u}})({X_{t}}-{X_{s}})=\frac{1}{2}({v^{2}}-{u^{2}})(t-s)\gt 0.\]
6) Indeed,
\[ \mathsf{E}{X_{s}}({X_{t}}-{X_{s}})=\frac{{s^{2}}}{6}(3t-s)-\frac{{s^{3}}}{3}=\frac{{s^{2}}}{2}(t-s)\to \infty ,\hspace{1em}t\to \infty .\]
 □
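As an independent check of (5), the covariance can be recomputed from the Itô-isometry integral $\mathsf{E}{X_{t}}{X_{s}}={\textstyle\int _{0}^{t\wedge s}}(t-u)(s-u)\hspace{0.1667em}du$; since the integrand is quadratic in u, one-panel Simpson quadrature evaluates this integral exactly. A sketch (function names are ours):

```python
def cov_X(t, s):
    """Covariance from Proposition 1: E[X_t X_s] = (t^s)^2/6 * (3 (t v s) - (t ^ s))."""
    m, M = min(t, s), max(t, s)
    return m ** 2 / 6.0 * (3.0 * M - m)

def cov_X_isometry(t, s):
    """E[X_t X_s] = int_0^{t^s} (t-u)(s-u) du by the Ito isometry for K(t,s) = t - s.
    The integrand is quadratic in u, so one-panel Simpson quadrature is exact."""
    m = min(t, s)
    f = lambda u: (t - u) * (s - u)
    return m / 6.0 * (f(0.0) + 4.0 * f(m / 2.0) + f(m))

for t, s in [(1.0, 1.0), (2.0, 0.5), (3.0, 2.0), (5.0, 4.5)]:
    assert abs(cov_X(t, s) - cov_X_isometry(t, s)) < 1e-12
```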
Remark 1.
Formula (9) means that, in some sense, the increments of the simplest GVp over nonoverlapping intervals are stationary with respect to the right-hand interval, because the value depends only on its length $t-s$, but are not stationary with respect to the left-hand interval.

3 Calculation of the determinant of covariance matrix of increments of the simplest GVp. Combinatorial approach

Let ${\Delta _{i}}={X_{i}}-{X_{i-1}}$, $i\ge 1$. According to formula (9), for $l\gt k$,
(10)
\[ \mathsf{E}{\Delta _{k}}{\Delta _{l}}=\frac{1}{2}(l-(l-1))({k^{2}}-{(k-1)^{2}})=\frac{1}{2}(2k-1)=k-\frac{1}{2}\]
and this value does not depend on l, see Remark 1.
Also, according to (8),
(11)
\[ \mathsf{E}{\Delta _{k}^{2}}=\mathsf{E}{({X_{k}}-{X_{k-1}})^{2}}=\frac{{k^{3}}}{3}-k{(k-1)^{2}}+\frac{2{(k-1)^{3}}}{3}=k-\frac{2}{3}.\]
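Formulas (10) and (11) can be verified in exact rational arithmetic from the covariance function, using the bilinearity $\mathsf{E}{\Delta _{k}}{\Delta _{l}}=R(k,l)-R(k,l-1)-R(k-1,l)+R(k-1,l-1)$ with $R(t,s)=\mathsf{E}{X_{t}}{X_{s}}$. A sketch (helper names are ours):

```python
from fractions import Fraction as F

def R(t, s):
    """E[X_t X_s] = (t^s)^2/6 * (3 (t v s) - (t ^ s)), exact for integer arguments."""
    m, M = min(t, s), max(t, s)
    return F(m * m, 6) * (3 * M - m)

def cov_incr(k, l):
    """E[Delta_k Delta_l] for Delta_i = X_i - X_{i-1}, by bilinearity of covariance."""
    return R(k, l) - R(k, l - 1) - R(k - 1, l) + R(k - 1, l - 1)

for k in range(1, 7):
    assert cov_incr(k, k) == F(3 * k - 2, 3)          # (11): k - 2/3
    for l in range(k + 1, 9):
        assert cov_incr(k, l) == F(2 * k - 1, 2)      # (10): k - 1/2
```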
Let us write down the covariance matrix of the increments ${\Delta _{1}},\dots ,{\Delta _{n}}$:
\[\begin{array}{l}\displaystyle {A_{1,n}}={(\mathsf{E}{\Delta _{k}}{\Delta _{l}})_{k,l=1}^{n}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\mathsf{E}{\Delta _{1}^{2}}& \mathsf{E}{\Delta _{1}}{\Delta _{2}}& \dots & \mathsf{E}{\Delta _{1}}{\Delta _{n}}\\ {} \mathsf{E}{\Delta _{2}}{\Delta _{1}}& \mathsf{E}{\Delta _{2}^{2}}& \dots & \mathsf{E}{\Delta _{2}}{\Delta _{n}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} \mathsf{E}{\Delta _{n}}{\Delta _{1}}& \mathsf{E}{\Delta _{n}}{\Delta _{2}}& \dots & \mathsf{E}{\Delta _{n}^{2}}\end{array}\right)\\ {} \displaystyle =\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}1-\frac{2}{3}& 1-\frac{1}{2}& 1-\frac{1}{2}& \cdots & 1-\frac{1}{2}& 1-\frac{1}{2}\\ {} 1-\frac{1}{2}& 2-\frac{2}{3}& 2-\frac{1}{2}& \cdots & 2-\frac{1}{2}& 2-\frac{1}{2}\\ {} 1-\frac{1}{2}& 2-\frac{1}{2}& 3-\frac{2}{3}& \cdots & 3-\frac{1}{2}& 3-\frac{1}{2}\\ {} \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {} 1-\frac{1}{2}& 2-\frac{1}{2}& 3-\frac{1}{2}& \cdots & (n-1)-\frac{2}{3}& (n-1)-\frac{1}{2}\\ {} 1-\frac{1}{2}& 2-\frac{1}{2}& 3-\frac{1}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{2}{3},\end{array}\right),\end{array}\]
and denote by ${D_{1,n}}$ its determinant. Also denote
\[ {p_{n}}={\left(2+\sqrt{3}\right)^{n}}-{\left(2-\sqrt{3}\right)^{n}}.\]
Theorem 1.
1) The determinant ${D_{1,n}}=\det ({A_{1,n}})$ equals
(12)
\[ {D_{1,1}}=\frac{1}{3},\hspace{1em}{D_{1,n}}=\frac{\sqrt{3}}{{6^{n+1}}}(7{p_{n-1}}-2{p_{n-2}}),\hspace{1em}n\ge 2.\]
2) The determinant ${D_{1,n}}$ decreases in n and satisfies the inequalities
(13)
\[ \frac{5}{42}{D_{1,n-1}}\lt \frac{5\sqrt{3}}{{6^{n+1}}}{p_{n-1}}\lt {D_{1,n}}\lt \frac{2}{3}{D_{1,n-1}},\hspace{1em}n\ge 2.\]
Proof.
Evidently, the matrix ${A_{1,n}}$ and, respectively, its determinant ${D_{1,n}}$ can be transformed as follows:
\[ {D_{1,n}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{1}{3}& \frac{1}{2}& \frac{1}{2}& \cdots & \frac{1}{2}& \frac{1}{2}\\ {} \frac{1}{2}& \frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{3}{2}\\ {} \frac{1}{2}& \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& \frac{5}{2}\\ {} \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {} \frac{1}{2}& \frac{3}{2}& \frac{5}{2}& \cdots & \frac{3(n-1)-2}{3}& \frac{2(n-1)-1}{2}\\ {} \frac{1}{2}& \frac{3}{2}& \frac{5}{2}& \cdots & \frac{2(n-1)-1}{2}& \frac{3n-2}{3}\end{array}\right|.\]
Write the determinant ${D_{1,n}}$ in the recurrent form. Subtracting the penultimate column from the last column, we have
\[ {D_{1,n}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{1}{3}& \frac{1}{2}& \frac{1}{2}& \cdots & \frac{1}{2}& 0\\ {} \frac{1}{2}& \frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& 0\\ {} \frac{1}{2}& \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& 0\\ {} \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {} \frac{1}{2}& \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{2}{3}& \frac{1}{6}\\ {} \frac{1}{2}& \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& \frac{5}{6}\end{array}\right|=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}& & & 0\\ {} & {D_{1,n-1}}& & \vdots \\ {} & & & \frac{1}{6}\\ {} \frac{1}{2}& \dots & (n-1)-\frac{1}{2}& \frac{5}{6}\end{array}\right|.\]
Next, we subtract the penultimate row from the last row and then expand the determinant over the last column, arriving at
\[ {D_{1,n}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}& & & 0\\ {} & {D_{1,n-1}}& & \vdots \\ {} & & & \frac{1}{6}\\ {} 0& \dots & \frac{1}{6}& \frac{2}{3}\end{array}\right|=\frac{2}{3}{D_{1,n-1}}-\frac{1}{6}\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{1}{3}& \frac{1}{2}& \frac{1}{2}& \cdots & \frac{1}{2}\\ {} \frac{1}{2}& \frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}\\ {} \frac{1}{2}& \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}\\ {} \vdots & \vdots & \vdots & \ddots & \vdots \\ {} 0& 0& 0& \cdots & \frac{1}{6}\end{array}\right|.\]
The last determinant has size $(n-1)\times (n-1)$. We expand it on the last row and obtain the recurrent form
(14)
\[ {D_{1,n}}=\frac{2}{3}{D_{1,n-1}}-\frac{1}{36}{D_{1,n-2}}.\]
This recurrent formula demonstrates that the determinant ${D_{1,n}}$ decreases in n.
Now we derive a direct closed-form formula for the determinant ${D_{1,n}}$ from the recurrence (14).
A sequence with such a recurrence is considered in [18, Example 1.34, pp. 86–89]. Namely,
\[ {a_{1}}=a,\hspace{1em}{a_{2}}=b,\hspace{1em}{a_{n}}=(x+y){a_{n-1}}-xy{a_{n-2}},\hspace{1em}n\ge 3,\hspace{2.5pt}x\ne y.\]
In this case, according to [18, p. 87] the direct formula has the form
(15)
\[ {a_{n}}=b\frac{{x^{n-1}}-{y^{n-1}}}{x-y}-axy\frac{{x^{n-2}}-{y^{n-2}}}{x-y}.\]
Find x, y for our case. We have the following system
\[ \left\{\begin{array}{l}x+y=\frac{2}{3}\hspace{1em}\\ {} xy=\frac{1}{36}\hspace{1em}\end{array}\right.\Leftrightarrow \hspace{1em}\left\{\begin{array}{l}x=\frac{2}{3}-y\hspace{1em}\\ {} {y^{2}}-\frac{2}{3}y+\frac{1}{36}=0\hspace{1em}\end{array}\right.\Leftrightarrow \hspace{1em}\left\{\begin{array}{l}{x_{1,2}}=\frac{2\mp \sqrt{3}}{6}\hspace{1em}\\ {} {y_{1,2}}=\frac{2\pm \sqrt{3}}{6}\hspace{1em}\end{array}\right..\]
We choose the pair $x=\frac{2+\sqrt{3}}{6}$, $y=\frac{2-\sqrt{3}}{6}$ (so that $x-y\gt 0$). Taking into account that ${D_{1,1}}=\frac{1}{3}$, ${D_{1,2}}=\frac{7}{36}$, and $x-y=\frac{1}{\sqrt{3}}$, we get
\[ {D_{1,n}}=\frac{7}{36}\sqrt{3}\cdot \frac{{p_{n-1}}}{{6^{n-1}}}-\frac{1}{3}\frac{1}{36}\sqrt{3}\cdot \frac{{p_{n-2}}}{{6^{n-2}}}.\]
Thus, we establish (12).
From (12) we get strict positivity of ${D_{1,n}}$ and the bound $\frac{5\sqrt{3}}{{6^{n+1}}}{p_{n-1}}\lt {D_{1,n}}$ because for any $0\lt b\lt 1\lt a$ the function ${a^{x}}-{b^{x}}$ increases in $x\gt 0$.
Furthermore,
\[\begin{array}{l}\displaystyle \frac{5\sqrt{3}}{{6^{n+1}}}{p_{n-1}}\gt \frac{5\sqrt{3}}{{6^{n+1}}}{p_{n-2}}=\frac{5\sqrt{3}}{{6^{n+1}}7}7{p_{n-2}}\gt \frac{5\sqrt{3}}{{6^{n+1}}7}(7{p_{n-2}}-2{p_{n-3}})\\ {} \displaystyle =\frac{5}{42}{D_{1,n-1}},\hspace{1em}n\ge 3.\end{array}\]
For $n=2$ we have the same estimate, i.e.
\[ {D_{1,2}}=\frac{7\sqrt{3}}{{6^{3}}}2\sqrt{3}=\frac{7}{36}\gt \frac{5}{42}\cdot \frac{1}{3}=\frac{5}{42}{D_{1,1}}.\]
Finally, inequality ${D_{1,n}}\lt \frac{2}{3}{D_{1,n-1}}$ follows directly from (14) and positivity of ${D_{1,n-2}},n\ge 3$. If $n=2$ then ${D_{1,2}}=\frac{7}{36}\lt \frac{2}{3}\frac{1}{3}=\frac{2}{3}{D_{1,1}}$.  □
Remark 2.
Even in our simple case it is not trivial to establish directly that the matrix ${A_{1,n}}$ is nondegenerate. However, equality (12) immediately gives us this result.
Remark 3.
It is instructive to compare the direct calculation of the determinant ${D_{1,n}}$ with calculations by formulas (14) and (12). For $n=1,2,3,4,5$ direct calculation gives the following values:
(16)
\[ {D_{1,1}}=\frac{1}{3},\hspace{2.5pt}{D_{1,2}}=\frac{7}{36},\hspace{2.5pt}{D_{1,3}}=\frac{13}{108},\hspace{2.5pt}{D_{1,4}}=\frac{97}{1296},\hspace{2.5pt}{D_{1,5}}=\frac{181}{3888}.\]
Now, by the recurrent formula (14)
\[\begin{array}{l}\displaystyle {D_{1,3}}=\frac{2}{3}{D_{1,2}}-\frac{1}{36}{D_{1,1}}=\frac{13}{108},\hspace{2em}{D_{1,4}}=\frac{2}{3}{D_{1,3}}-\frac{1}{36}{D_{1,2}}=\frac{97}{1296},\\ {} \displaystyle {D_{1,5}}=\frac{2}{3}{D_{1,4}}-\frac{1}{36}{D_{1,3}}=\frac{181}{3888}.\end{array}\]
Finally, we check the same values using the closed-form formula (12).
In particular, for $n=2$, ${D_{1,2}}=\frac{\sqrt{3}}{{6^{3}}}(7\cdot 2\sqrt{3})=\frac{7}{{6^{2}}}=\frac{7}{36}$.
Now, calculate the determinant by (12) for $n=3,4,5$ and compare with (16):
\[\begin{array}{l}\displaystyle {D_{1,3}}=\frac{\sqrt{3}}{{6^{4}}}\left[7{p_{2}}-2{p_{1}}\right]=\frac{\sqrt{3}}{{6^{4}}}\left[7\cdot 8\sqrt{3}-4\sqrt{3}\right]=\frac{13}{108},\\ {} \displaystyle {D_{1,4}}=\frac{\sqrt{3}}{{6^{5}}}\left[7{p_{3}}-2{p_{2}}\right]=\frac{\sqrt{3}}{{6^{5}}}\left[7\left(\left(2+\sqrt{3}\right)-\left(2-\sqrt{3}\right)\right)\right.\\ {} \displaystyle \times \left.\left({\left(2+\sqrt{3}\right)^{2}}+\left(2+\sqrt{3}\right)\left(2-\sqrt{3}\right)+{\left(2-\sqrt{3}\right)^{2}}\right)-2\cdot 8\sqrt{3}\right]\\ {} \displaystyle =\frac{\sqrt{3}}{{6^{5}}}\left[7\cdot 2\sqrt{3}\cdot 15-2\cdot 8\sqrt{3}\right]=\frac{97}{{6^{4}}}=\frac{97}{1296},\\ {} \displaystyle {D_{1,5}}=\frac{\sqrt{3}}{{6^{6}}}\left[7{p_{4}}-2{p_{3}}\right]=\frac{\sqrt{3}}{{6^{6}}}\left[7\cdot 112\sqrt{3}-2\cdot 30\sqrt{3}\right]=\frac{181}{3888},\end{array}\]
where we took into account that
\[\begin{array}{l}\displaystyle {\left(2+\sqrt{3}\right)^{2}}+{\left(2-\sqrt{3}\right)^{2}}=2\left({2^{2}}+{\sqrt{3}^{2}}\right)=14,\hspace{1em}\left(2+\sqrt{3}\right)\left(2-\sqrt{3}\right)=1,\\ {} \displaystyle {\left(2+\sqrt{3}\right)^{4}}-{\left(2-\sqrt{3}\right)^{4}}=\left({\left(2+\sqrt{3}\right)^{2}}+{\left(2-\sqrt{3}\right)^{2}}\right){p_{2}}=112\sqrt{3}.\end{array}\]
Thus, all three methods of calculation gave the same results, which also confirms the validity of the formulas.
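The comparison of Remark 3 can be pushed to larger n by machine: the sketch below (helper names are ours) builds ${A_{1,n}}$ from (10) and (11), takes its determinant exactly in rational arithmetic, and compares the result with the recurrence (14) and, in floating point, with the closed form (12).

```python
from fractions import Fraction as F

def A1(n):
    """Covariance matrix A_{1,n}: entries k - 2/3 on the diagonal (11),
    (k ^ l) - 1/2 off the diagonal (10), as exact rationals."""
    return [[F(3 * k - 2, 3) if k == l else F(2 * min(k, l) - 1, 2)
             for l in range(1, n + 1)] for k in range(1, n + 1)]

def det(M):
    """Exact determinant via Gaussian elimination over Fractions."""
    M = [row[:] for row in M]
    n, d = len(M), F(1)
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(n)]
    return d

# Recurrence (14), started from D_{1,1} = 1/3, D_{1,2} = 7/36:
D = {1: F(1, 3), 2: F(7, 36)}
for n in range(3, 9):
    D[n] = F(2, 3) * D[n - 1] - F(1, 36) * D[n - 2]

p = lambda m: (2 + 3 ** 0.5) ** m - (2 - 3 ** 0.5) ** m  # p_m from the text

for n in range(1, 9):
    assert det(A1(n)) == D[n]        # direct calculation vs recurrence (14)
    if n >= 2:                       # vs closed form (12), in floating point
        assert abs(float(D[n])
                   - 3 ** 0.5 / 6 ** (n + 1) * (7 * p(n - 1) - 2 * p(n - 2))) < 1e-12
```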
Remark 4.
Suppose that we need to calculate the determinant ${D_{2,n+1}}=\det ({A_{2,n+1}})$, where ${A_{2,n+1}}={(\mathsf{E}{\Delta _{i}}{\Delta _{k}})_{2\le i,k\le n+1}}$.
Similarly to (12), we can establish a direct formula for ${D_{2,n+1}}=\mathrm{det}({A_{2,n+1}})$ with the help of the recurrence (14). Since ${A_{2,n+1}}$ is obtained from ${A_{1,n+1}}={({a_{kl}})_{k,l=1}^{n+1}}$, ${a_{kl}}=\mathsf{E}{\Delta _{k}}{\Delta _{l}}$, by deleting the row and column of the element ${a_{11}}=\mathsf{E}{\Delta _{1}^{2}}$, the matrix ${A_{2,n+1}}$ has dimension n and the same form as the main covariance matrix of increments, but without the first row and the first column. Therefore the determinant ${D_{2,n+1}}=\mathrm{det}({A_{2,n+1}})$, $n\ge 3$, can be calculated by the same recurrent formula as ${D_{1,n}}=\mathrm{det}({A_{1,n}})$, $n\ge 3$, but with different initial values, namely,
(17)
\[ {D_{2,2}}=\frac{4}{3},\hspace{2.5pt}\hspace{2.5pt}{D_{2,3}}=\left|\begin{array}{c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}\end{array}\right|=\frac{31}{36},\hspace{2.5pt}\hspace{2.5pt}{D_{2,n+1}}=\frac{2}{3}{D_{2,n}}-\frac{1}{36}{D_{2,n-1}},\hspace{2.5pt}\hspace{2.5pt}n\ge 3.\]
Applying formula (15) for the same $x=\frac{2+\sqrt{3}}{6}$, $y=\frac{2-\sqrt{3}}{6}$ and $a=\frac{4}{3}$, $b=\frac{31}{36}$ we arrive at
(18)
\[ {D_{2,n+1}}=\frac{31}{36}\sqrt{3}\frac{{p_{n-1}}}{{6^{n-1}}}-\frac{4}{3}\frac{1}{36}\sqrt{3}\frac{{p_{n-2}}}{{6^{n-2}}}=\frac{\sqrt{3}}{{6^{n+1}}}\left[31{p_{n-1}}-8{p_{n-2}}\right],\hspace{2.5pt}\hspace{2.5pt}n\ge 3.\]
Let us check the latter formula for $n=3,4,5$:
\[ {D_{2,4}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \frac{5}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \frac{10}{3}\end{array}\right|=\frac{29}{54},\hspace{2.5pt}{D_{2,5}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \frac{3}{2}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \frac{5}{2}& \frac{5}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \frac{10}{3}& \frac{7}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \frac{7}{2}& \frac{13}{3}\end{array}\right|=\frac{433}{1296},\hspace{2.5pt}{D_{2,6}}=\frac{101}{486}.\]
At the same time, formula (18) gives the equalities
\[\begin{array}{l}\displaystyle {D_{2,4}}=\frac{\sqrt{3}}{{6^{4}}}\left[31{p_{2}}-8{p_{1}}\right]=\frac{\sqrt{3}}{{6^{4}}}\left[31\cdot 8\sqrt{3}-8\cdot 2\sqrt{3}\right]=\frac{29}{54},\\ {} \displaystyle {D_{2,5}}=\frac{\sqrt{3}}{{6^{5}}}\left[31{p_{3}}-8{p_{2}}\right]=\frac{\sqrt{3}}{{6^{5}}}\left[31\cdot 2\sqrt{3}\cdot 15-8\cdot 8\sqrt{3}\right]=\frac{433}{1296},\\ {} \displaystyle {D_{2,6}}=\frac{\sqrt{3}}{{6^{6}}}\left[31{p_{4}}-8{p_{3}}\right]=\frac{\sqrt{3}}{{6^{6}}}\left[31\cdot 8\sqrt{3}\cdot 14-2\cdot 15\cdot 8\sqrt{3}\right]=\frac{101}{486},\end{array}\]
while the recurrent formula gives the same equalities:
\[\begin{array}{l}\displaystyle {D_{2,4}}=\frac{2}{3}{D_{2,3}}-\frac{1}{36}{D_{2,2}}=\frac{2}{3}\frac{31}{36}-\frac{1}{36}\frac{4}{3}=\frac{29}{54},\\ {} \displaystyle {D_{2,5}}=\frac{2}{3}{D_{2,4}}-\frac{1}{36}{D_{2,3}}=\frac{2}{3}\frac{29}{54}-\frac{1}{36}\frac{31}{36}=\frac{{2^{4}}\cdot 29-31}{{6^{4}}}=\frac{433}{1296},\\ {} \displaystyle {D_{2,6}}=\frac{2}{3}{D_{2,5}}-\frac{1}{36}{D_{2,4}}=\frac{2}{3}\frac{433}{1296}-\frac{1}{36}\frac{29}{54}=\frac{433-31}{{6^{3}}\cdot {3^{2}}}=\frac{101}{486}.\end{array}\]
That is, the values coincide, and therefore the formulas (17) and (18) have been confirmed.
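The same kind of check confirms (17) and (18) for larger n; a sketch with exact rational arithmetic (helper names are ours):

```python
from fractions import Fraction as F

def a(k, l):
    """E[Delta_k Delta_l]: k - 2/3 if k == l, (k ^ l) - 1/2 otherwise."""
    return F(3 * k - 2, 3) if k == l else F(2 * min(k, l) - 1, 2)

def det(M):
    """Exact determinant over Fractions (Gaussian elimination)."""
    M = [row[:] for row in M]
    n, d = len(M), F(1)
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [M[r][c] - f * M[i][c] for c in range(n)]
    return d

def D2(n):
    """D_{2,n+1} = det (E[Delta_i Delta_k])_{2 <= i,k <= n+1}, an n x n determinant."""
    return det([[a(i, k) for k in range(2, n + 2)] for i in range(2, n + 2)])

# Initial values and recurrence (17):
assert D2(1) == F(4, 3) and D2(2) == F(31, 36)
for n in range(3, 8):
    assert D2(n) == F(2, 3) * D2(n - 1) - F(1, 36) * D2(n - 2)

# Closed form (18), in floating point:
p = lambda m: (2 + 3 ** 0.5) ** m - (2 - 3 ** 0.5) ** m
for n in range(3, 8):
    assert abs(float(D2(n))
               - 3 ** 0.5 / 6 ** (n + 1) * (31 * p(n - 1) - 8 * p(n - 2))) < 1e-12
```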

4 Projection problem and respective coefficients for the simplest GVp

Now we consider the problem of projecting ${\Delta _{1}}={X_{1}}-{X_{0}}={X_{1}}$ onto the n subsequent increments ${\Delta _{i}}={X_{i}}-{X_{i-1}}$, $2\le i\le n+1$. More precisely, applying the theorem of normal correlation, we obtain the representation
(19)
\[ \mathsf{E}({\Delta _{1}}|{\Delta _{2}},\dots ,{\Delta _{n+1}})={\sum \limits_{i=2}^{n+1}}{c_{n}^{(i)}}{\Delta _{i}},\]
where ${c_{n}^{(i)}}\in \mathbb{R}$, $2\le i\le n+1$, are the respective projection coefficients.
Lemma 1.
The coefficients $\{{c_{n}^{(k)}},\hspace{2.5pt}2\le k\le n+1\}$ form the unique solution of the system of linear equations
(20)
\[ {\sum \limits_{i=2}^{n+1}}{x_{i}}\mathsf{E}{\Delta _{i}}{\Delta _{k}}=\mathsf{E}{\Delta _{1}}{\Delta _{k}},\hspace{1em}2\le k\le n+1.\]
Proof.
We multiply both sides of (19) by ${\Delta _{k}},\hspace{2.5pt}2\le k\le n+1$; taking expectations and noting that ${\Delta _{k}}$ is measurable with respect to the σ-field generated by $\{{\Delta _{2}},\dots ,{\Delta _{n+1}}\}$, we obtain the system. The solution exists and is unique because the main determinant of the system is ${D_{2,n+1}}=\det ({A_{2,n+1}})$, which, according to Theorem 1 and Remark 4, is strictly positive.  □
Corollary 1.
Obviously, Lemma 1 implies that coefficients $\{{c_{n}^{(k)}},\hspace{2.5pt}2\le k\le n+1\}$ are ${c_{n}^{(k)}}=\frac{{D_{2,n+1}^{(k-1)}}}{{D_{2,n+1}}}$, where ${D_{2,n+1}}=\det {(\mathsf{E}{\Delta _{i}}{\Delta _{j}})_{i,j=2}^{n+1}}$ and ${D_{2,n+1}^{(k-1)}}$ is the determinant obtained from ${D_{2,n+1}}$ by replacing the $(k-1)$th column with a vector ${b_{2,n+1}}={(\frac{1}{2},\dots ,\frac{1}{2})^{\mathsf{T}}}$.
Theorem 2.
The coefficients $\{{c_{n}^{(k)}},\hspace{2.5pt}2\le k\le n+1\}$ can be found by the following formulas:
1) $n=1$, $\hspace{2.5pt}{c_{1}^{(2)}}=\frac{3}{8}$;
2) $n=2$,
\[\begin{array}{l}\displaystyle {c_{2}^{(2)}}=3\frac{5{p_{1}}-{p_{0}}}{31{p_{1}}-8{p_{0}}}=\frac{30\sqrt{3}}{31{p_{1}}-8{p_{0}}}=\frac{15}{31},\\ {} \displaystyle {c_{2}^{(3)}}=\frac{-6\sqrt{3}}{31{p_{1}}-8{p_{0}}}=-\frac{3}{31};\end{array}\]
3) $n\ge 3$,
\[\begin{array}{l}\displaystyle {c_{n}^{(n+2-m)}}=\frac{{(-1)^{n-m}}\sqrt{3}}{2\cdot {6^{n}}\cdot {D_{2,n+1}}}\left[5{p_{m-1}}-{p_{m-2}}\right]\\ {} \displaystyle ={(-1)^{n-m}}3\frac{5{p_{m-1}}-{p_{m-2}}}{31{p_{n-1}}-8{p_{n-2}}},\hspace{1em}3\le m\le n,\\ {} \displaystyle {c_{n}^{(n)}}=\frac{{(-1)^{n}}5}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n}}30\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}},\\ {} \displaystyle {c_{n}^{(n+1)}}=\frac{{(-1)^{n-1}}}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n-1}}6\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}}.\end{array}\]
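The formulas of Theorem 2 can also be cross-checked by solving the system (20) numerically; such a check confirms, in particular, that the sign of ${c_{n}^{(k)}}$ is ${(-1)^{k}}$, i.e. the coefficients alternate. A sketch in Python/NumPy (helper names are ours):

```python
import numpy as np

def a(k, l):
    """Entry E[Delta_k Delta_l] of the covariance matrix of increments:
    k - 2/3 on the diagonal (formula (11)), (k ^ l) - 1/2 off it (formula (10))."""
    return k - 2.0 / 3.0 if k == l else min(k, l) - 0.5

def proj_coeffs(n):
    """Solve system (20): A_{2,n+1} c = (1/2, ..., 1/2)^T for the
    projection coefficients c_n^{(2)}, ..., c_n^{(n+1)}."""
    A = np.array([[a(i, k) for k in range(2, n + 2)] for i in range(2, n + 2)])
    return np.linalg.solve(A, np.full(n, 0.5))

# Cases n = 1 and n = 2 of Theorem 2: c_1^{(2)} = 3/8, (c_2^{(2)}, c_2^{(3)}) = (15/31, -3/31).
assert abs(proj_coeffs(1)[0] - 3.0 / 8.0) < 1e-12
c2 = proj_coeffs(2)
assert abs(c2[0] - 15.0 / 31.0) < 1e-12 and abs(c2[1] + 3.0 / 31.0) < 1e-12

# The signs alternate: c_n^{(k)} has the sign of (-1)^k.
for n in range(1, 10):
    for idx, ck in enumerate(proj_coeffs(n)):  # ck is c_n^{(idx+2)}
        assert (-1) ** (idx + 2) * ck > 0
```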
Proof.
1) If $n=1$, then $k=2$ and we have the single equation ${c_{1}^{(2)}}\mathsf{E}{\Delta _{2}^{2}}=\mathsf{E}{\Delta _{1}}{\Delta _{2}}$. By (10) and (11) we obtain ${c_{1}^{(2)}}\cdot \frac{4}{3}=\frac{1}{2}$, whence ${c_{1}^{(2)}}=\frac{3}{8}$.
2), 3) The system of the linear equations (20) is equivalent to the following matrix equation
\[ {A_{2,n+1}}{c_{n}}={b_{2,n+1}},\]
where ${A_{2,n+1}}$ has dimension $n\times n$ and is the submatrix of ${A_{1,n+1}}={({a_{kl}})_{k,l=1}^{n+1}}$, ${a_{kl}}=\mathsf{E}{\Delta _{k}}{\Delta _{l}}$, obtained by deleting the row and column of the element ${a_{11}}=\mathsf{E}{\Delta _{1}^{2}}$; ${c_{n}}={({c_{n}^{(2)}},\dots ,{c_{n}^{(n+1)}})^{\mathsf{T}}}$ is the vector of unknown coefficients of the projection (19); and ${b_{2,n+1}}={({a_{12}},\dots ,{a_{1(n+1)}})^{\mathsf{T}}}={(\mathsf{E}{\Delta _{1}}{\Delta _{2}},\dots ,\mathsf{E}{\Delta _{1}}{\Delta _{n+1}})^{\mathsf{T}}}={(\frac{1}{2},\dots ,\frac{1}{2})^{\mathsf{T}}}$.
Thus, we solve the equation
\[\begin{array}{l}\displaystyle \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{a_{22}}& {a_{23}}& \cdots & {a_{2n}}& {a_{2(n+1)}}\\ {} {a_{32}}& {a_{33}}& \cdots & {a_{3n}}& {a_{3(n+1)}}\\ {} \vdots & \ddots & \vdots & \vdots & \vdots \\ {} {a_{n2}}& {a_{n3}}& \cdots & {a_{nn}}& {a_{n(n+1)}}\\ {} {a_{(n+1)2}}& {a_{(n+1)3}}& \cdots & {a_{(n+1)n}}& {a_{(n+1)(n+1)}}\end{array}\right)\left(\begin{array}{c}{c_{n}^{(2)}}\\ {} {c_{n}^{(3)}}\\ {} \vdots \\ {} {c_{n}^{(n)}}\\ {} {c_{n}^{(n+1)}}\end{array}\right)=\left(\begin{array}{c}{a_{12}}\\ {} {a_{13}}\\ {} \vdots \\ {} {a_{1n}}\\ {} {a_{1(n+1)}}\end{array}\right)\\ {} \displaystyle \Leftrightarrow \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \frac{5}{2}& \cdots & \frac{5}{2}& \frac{5}{2}\\ {} \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{3}{2}& \frac{5}{2}& \frac{7}{2}& \cdots & n-\frac{2}{3}& n-\frac{1}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \frac{7}{2}& \cdots & n-\frac{1}{2}& (n+1)-\frac{2}{3}\end{array}\right)\left(\begin{array}{c}{c_{n}^{(2)}}\\ {} {c_{n}^{(3)}}\\ {} \vdots \\ {} {c_{n}^{(n)}}\\ {} {c_{n}^{(n+1)}}\end{array}\right)=\left(\begin{array}{c}\frac{1}{2}\\ {} \frac{1}{2}\\ {} \vdots \\ {} \frac{1}{2}\\ {} \frac{1}{2}\end{array}\right).\end{array}\]
According to Corollary 1, we apply Cramer's rule and find the solution in the form
\[ {c_{n}^{(k)}}=\frac{{D_{2,n+1}^{(k-1)}}}{{D_{2,n+1}}},\hspace{1em}k=\overline{2,n+1}.\]
Here ${D_{2,n+1}^{(k-1)}}$ is the determinant obtained from ${D_{2,n+1}}$ by replacing the $(k-1)$th column with the vector ${b_{2,n+1}}={(\frac{1}{2},\dots ,\frac{1}{2})^{\mathsf{T}}}$. The determinants ${D_{2,n+1}}$ were obtained in Remark 4, see (17) and (18).
Therefore, it remains to find ${D_{2,n+1}^{(k-1)}}$, $2\le k\le n+1$. We first write it in recurrent form and then derive the direct formula.
First, let $k=2$, then
\[ {D_{2,n+1}^{(1)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{1}{2}& \frac{3}{2}& \frac{3}{2}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{3}{2}& \frac{3}{2}\\ {} \frac{1}{2}& \frac{7}{3}& \frac{5}{2}& \frac{5}{2}& \cdots & \frac{5}{2}& \frac{5}{2}& \frac{5}{2}\\ {} \frac{1}{2}& \frac{5}{2}& \frac{10}{3}& \frac{7}{2}& \cdots & \frac{7}{2}& \frac{7}{2}& \frac{7}{2}\\ {} \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{1}{2}& \frac{5}{2}& \frac{7}{2}& \frac{9}{2}& \cdots & (n-1)-\frac{2}{3}& (n-1)-\frac{1}{2}& (n-1)-\frac{1}{2}\\ {} \frac{1}{2}& \frac{5}{2}& \frac{7}{2}& \frac{9}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{2}{3}& n-\frac{1}{2}\\ {} \frac{1}{2}& \frac{5}{2}& \frac{7}{2}& \frac{9}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{1}{2}& (n+1)-\frac{2}{3}\end{array}\right|\]
can be calculated by the same recurrent formula as ${D_{1,n}}$, $n\ge 3$, but with different initial values. Namely,
(21)
\[ {D_{2,2}^{(1)}}=\frac{1}{2},\hspace{0.2222em}{D_{2,3}^{(1)}}=\frac{5}{12}=\frac{5}{6}{D_{2,2}^{(1)}},\hspace{0.2222em}{D_{2,n+1}^{(1)}}=\frac{2}{3}{D_{2,n}^{(1)}}-\frac{1}{36}{D_{2,n-1}^{(1)}},\hspace{2.5pt}n\ge 3.\]
Applying formula (15) for the same $x=\frac{2+\sqrt{3}}{6}$, $y=\frac{2-\sqrt{3}}{6}$ and $a=\frac{1}{2}$, $b=\frac{5}{12}$ we arrive at
(22)
\[\begin{array}{l}\displaystyle {D_{2,n+1}^{(1)}}=\frac{5}{12}\sqrt{3}\frac{{p_{n-1}}}{{6^{n-1}}}-\frac{1}{2}\frac{1}{36}\sqrt{3}\frac{{p_{n-2}}}{{6^{n-2}}}=\frac{\sqrt{3}}{72}\left(30\frac{{p_{n-1}}}{{6^{n-1}}}-\frac{{p_{n-2}}}{{6^{n-2}}}\right)\\ {} \displaystyle =\frac{\sqrt{3}}{{6^{n}}}\left[5{p_{n-1}}-{p_{n-2}}\right]\hspace{-0.1667em}.\end{array}\]
Let us check the latter formula for $n=3$:
\[ {D_{2,4}^{(1)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{1}{2}& \frac{3}{2}& \frac{3}{2}\\ {} \frac{1}{2}& \frac{7}{3}& \frac{5}{2}\\ {} \frac{1}{2}& \frac{5}{2}& \frac{10}{3}\end{array}\right|=\frac{19}{2\cdot {6^{2}}}=\frac{19}{72}.\]
By (22) we get
\[ {D_{2,4}^{(1)}}=\frac{\sqrt{3}}{2\cdot {6^{3}}}\left[5{p_{2}}-{p_{1}}\right]=\frac{\sqrt{3}}{2\cdot {6^{3}}}\left[5\cdot 8\sqrt{3}-2\sqrt{3}\right]=\frac{2\cdot 3}{2\cdot {6^{3}}}\left[20-1\right]=\frac{19}{72}.\]
At the same time, the recurrent formula gives the equalities
\[ {D_{2,4}^{(1)}}=\frac{2}{3}{D_{2,3}^{(1)}}-\frac{1}{36}{D_{2,2}^{(1)}}=\frac{2}{3}\frac{5}{12}-\frac{1}{36}\frac{1}{2}=\frac{19}{72}.\]
Thus, all three methods give the same result.
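The hand check above extends to any n with a few lines of Python. The sketch below (the names `p` and `D` are ours) iterates the recurrence (21) and compares it with the closed form with the factor $2\cdot {6^{n}}$, as in the $n=3$ check:

```python
from math import sqrt, isclose

s3 = sqrt(3.0)

def p(k):
    # p_k = (2+√3)^k − (2−√3)^k, so p_1 = 2√3, p_2 = 8√3, p_3 = 30√3
    return (2 + s3) ** k - (2 - s3) ** k

# Recurrence (21): D^(1)_{2,2} = 1/2, D^(1)_{2,3} = 5/12,
# D^(1)_{2,n+1} = (2/3) D^(1)_{2,n} − (1/36) D^(1)_{2,n−1}, n ≥ 3.
D = {2: 1 / 2, 3: 5 / 12}
for n in range(3, 20):
    D[n + 1] = (2 / 3) * D[n] - D[n - 1] / 36

# Closed form: D^(1)_{2,n+1} = √3/(2·6^n) · (5 p_{n−1} − p_{n−2}).
for n in range(3, 20):
    closed = s3 / (2 * 6 ** n) * (5 * p(n - 1) - p(n - 2))
    assert isclose(D[n + 1], closed, rel_tol=1e-9)

print(D[4])  # 19/72 = 0.2638…
```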
In the case $k\ge 3$, a slightly modified approach should be used. For convenience, we denote $j=k-1$ and consider $j\ge 2$, $n\ge 2$. Here j is the number of the column of the matrix ${A_{2,n+1}}$ that is replaced by the vector ${b_{2,n+1}}$ to obtain the determinant ${D_{2,n+1}^{(j)}}$.
First of all we investigate the case $j=n$. We have
\[ {D_{2,n+1}^{(n)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{3}{2}& \frac{1}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& \frac{5}{2}& \frac{1}{2}\\ {} \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{2}{3}& \frac{1}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{1}{2}& \frac{1}{2}\end{array}\right|.\]
We subtract the penultimate row from the last row, then expand the determinant along the last row and get
\[\begin{array}{l}\displaystyle {D_{2,n+1}^{(n)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{3}{2}& \frac{1}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& \frac{5}{2}& \frac{1}{2}\\ {} \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& n-\frac{2}{3}& \frac{1}{2}\\ {} 0& 0& \cdots & 0& \frac{1}{6}& 0\end{array}\right|\\ {} \displaystyle =-\frac{1}{6}{D_{2,n}^{(n-1)}}=\cdots =\frac{{(-1)^{n-1}}}{{6^{n-1}}}{D_{2,2}^{(1)}}.\end{array}\]
Thus,
(23)
\[ {D_{2,n+1}^{(n)}}=\frac{{(-1)^{n-1}}}{2\cdot {6^{n-1}}}.\]
Further, consider $j=n-1$. We have
\[ {D_{2,n+1}^{(n-1)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{1}{2}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& \frac{1}{2}& \frac{5}{2}\\ {} \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& \frac{1}{2}& n-\frac{1}{2}\\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& \frac{1}{2}& (n+1)-\frac{2}{3}\end{array}\right|.\]
Now we analogously subtract the penultimate row from the last row and expand the determinant along the last row. That is,
\[ {D_{2,n+1}^{(n-1)}}=\left|\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{4}{3}& \frac{3}{2}& \cdots & \frac{3}{2}& \frac{1}{2}& \frac{3}{2}\\ {} \frac{3}{2}& \frac{7}{3}& \cdots & \frac{5}{2}& \frac{1}{2}& \frac{5}{2}\\ {} \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ {} \frac{3}{2}& \frac{5}{2}& \cdots & (n-1)-\frac{1}{2}& \frac{1}{2}& n-\frac{1}{2}\\ {} 0& 0& \cdots & 0& 0& \frac{5}{6}\end{array}\right|=\frac{5}{6}{D_{2,n}^{(n-1)}}.\]
By (23) we arrive at
(24)
\[ {D_{2,n+1}^{(n-1)}}=\frac{5}{6}\frac{{(-1)^{n-2}}}{2\cdot {6^{n-2}}}=\frac{5{(-1)^{n}}}{2\cdot {6^{n-1}}}.\]
Note that (23) and (24) imply the relation ${D_{2,n+1}^{(n-1)}}=-5{D_{2,n+1}^{(n)}}$. Moreover, ${D_{2,n+2}^{(n)}}=\frac{5}{6}{D_{2,n+1}^{(n)}}$.
Next, we study the case $1\le j\le n-2$. Let us fix such j and consider the determinant ${D_{2,j+m}^{(j)}}$ for $m\ge 3$. The recurrent formula (14) is also valid for this determinant, that is,
\[ {D_{2,j+m}^{(j)}}=\frac{2}{3}{D_{2,j+m-1}^{(j)}}-\frac{1}{36}{D_{2,j+m-2}^{(j)}},\hspace{1em}m\ge 3.\]
Here by equalities (23) and (24) we have ${D_{2,j+1}^{(j)}}=\frac{{(-1)^{j-1}}}{2\cdot {6^{j-1}}}$ and ${D_{2,j+2}^{(j)}}=\frac{5{(-1)^{j-1}}}{2\cdot {6^{j}}}$ respectively, and ${D_{2,j+2}^{(j)}}=\frac{5}{6}{D_{2,j+1}^{(j)}}$.
Therefore, we obtain by (15)
\[\begin{array}{l}\displaystyle {D_{2,j+m}^{(j)}}={D_{2,j+2}^{(j)}}\frac{\sqrt{3}}{{6^{m-1}}}{p_{m-1}}-{D_{2,j+1}^{(j)}}\frac{\sqrt{3}}{36\cdot {6^{m-2}}}{p_{m-2}}\\ {} \displaystyle ={D_{2,j+1}^{(j)}}\sqrt{3}\left[\frac{5}{6}\frac{{p_{m-1}}}{{6^{m-1}}}-\frac{1}{36}\frac{{p_{m-2}}}{{6^{m-2}}}\right]=\frac{{(-1)^{j-1}}\sqrt{3}}{2\cdot {6^{m+j-1}}}\left[5{p_{m-1}}-{p_{m-2}}\right].\end{array}\]
Thus, setting $n=j+m-1$, we can rewrite the last equality in the form
(25)
\[ {D_{2,n+1}^{(n+1-m)}}=\frac{{(-1)^{n-m}}\sqrt{3}}{2\cdot {6^{n}}}\left[5{p_{m-1}}-{p_{m-2}}\right].\]
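The identities (23)–(25) admit an exact numerical spot-check. The sketch below is ours: the builder `D2` encodes the $n\times n$ matrix with entries $a_{ii}=i+\frac{1}{3}$, $a_{ij}=\min(i,j)+\frac{1}{2}$ for $i\ne j$, read off from the displayed determinants, with column j replaced by the vector $(\frac{1}{2},\dots ,\frac{1}{2})^{T}$; the integer sequence `q` satisfies $p_{k}=\sqrt{3}\,{q_{k}}$ (our auxiliary notation), which makes all checked values rational.

```python
from fractions import Fraction as F

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n, d = len(M), F(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def D2(n, j):
    # n×n matrix with a_ii = i + 1/3, a_ij = min(i,j) + 1/2 (i ≠ j),
    # column j replaced by (1/2, …, 1/2)^T
    M = [[F(i) + F(1, 3) if i == jj else F(min(i, jj)) + F(1, 2)
          for jj in range(1, n + 1)] for i in range(1, n + 1)]
    for i in range(n):
        M[i][j - 1] = F(1, 2)
    return det(M)

def q(k):  # p_k = q_k·√3: q_0 = 0, q_1 = 2, q_k = 4 q_{k−1} − q_{k−2}
    a, b = 0, 2
    for _ in range(k):
        a, b = b, 4 * b - a
    return a

for n in range(2, 8):
    assert D2(n, n) == F((-1) ** (n - 1), 2 * 6 ** (n - 1))    # (23)
    assert D2(n, n - 1) == F(5 * (-1) ** n, 2 * 6 ** (n - 1))  # (24)
    for m in range(2, n + 1):                                  # (25)
        assert D2(n, n + 1 - m) == F(3 * (-1) ** (n - m) * (5 * q(m - 1) - q(m - 2)), 2 * 6 ** n)
```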
Now we use formulas (18) and (21)–(25) to find the projection coefficients. Namely,
\[\begin{array}{l}\displaystyle {c_{n}^{(n+1)}}=\frac{{D_{2,n+1}^{(n)}}}{{D_{2,n+1}}}=\frac{{(-1)^{n-1}}}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n-1}}6\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}},\\ {} \displaystyle {c_{n}^{(n)}}=\frac{{(-1)^{n}}5}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n}}30\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}},\end{array}\]
and for $3\le m\le n$,
\[ {c_{n}^{(n+2-m)}}={(-1)^{n-m}}3\frac{5{p_{n-1}}-{p_{n-2}}}{31{p_{n-1}}-8{p_{n-2}}}.\]
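Since $p_{k}=\sqrt{3}\,{q_{k}}$ with integers ${q_{0}}=0$, ${q_{1}}=2$, ${q_{k}}=4{q_{k-1}}-{q_{k-2}}$ (our auxiliary notation), the coefficients above are rational and can be tabulated exactly. A minimal sketch of the closed formulas for $n\ge 3$ (the helper names are ours):

```python
from fractions import Fraction as F

def q(k):
    # p_k = q_k·√3, with q_0 = 0, q_1 = 2, q_k = 4 q_{k−1} − q_{k−2}
    a, b = 0, 2
    for _ in range(k):
        a, b = b, 4 * b - a
    return a

def coeffs(n):
    """Projection coefficients {c_n^{(k)}, 2 ≤ k ≤ n+1}, n ≥ 3."""
    den = 31 * q(n - 1) - 8 * q(n - 2)     # = (31 p_{n−1} − 8 p_{n−2})/√3
    c = {n + 1: F(6 * (-1) ** (n - 1), den),
         n: F(30 * (-1) ** n, den)}
    for m in range(3, n + 1):              # k = n + 2 − m
        c[n + 2 - m] = F(3 * (-1) ** (n - m) * (5 * q(m - 1) - q(m - 2)), den)
    return c

print(coeffs(3))  # {4: Fraction(3, 116), 3: Fraction(-15, 116), 2: Fraction(57, 116)}
```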
If $n=2$, we have a particular case of the previous results for $k=n$ and $k=n+1$. Note that in this case equality (25) (with $m=n=2$) also holds and coincides with (24) and with the second equality of (21). Indeed,
\[ {D_{2,3}^{(1)}}=\frac{\sqrt{3}}{2\cdot {6^{2}}}5\left(\left(2+\sqrt{3}\right)-\left(2-\sqrt{3}\right)\right)=\frac{5}{12}.\]
And on the other hand,
\[ {D_{2,3}^{(1)}}=\frac{5}{2\cdot 6}=\frac{5}{12}.\]
Consequently, by the second equality of (17),
\[ {c_{2}^{(2)}}=\frac{5}{12}\frac{36}{31}=\frac{15}{31}.\]
For completeness, we also get
\[ {c_{2}^{(3)}}=\frac{{D_{2,3}^{(2)}}}{{D_{2,3}}}=\frac{-1}{12}\frac{36}{31}=-\frac{3}{31}.\]
 □
Remark 5.
In order to check the obtained results, we calculate the projection coefficients for some values of n by another method.
By formulas from Theorem 2 we obtain, for $n=3$,
\[\begin{array}{l}\displaystyle {c_{3}^{(2)}}=3\frac{5{p_{2}}-{p_{1}}}{31{p_{2}}-8{p_{1}}}=3\frac{5\cdot 8\sqrt{3}-2\sqrt{3}}{31\cdot 8\sqrt{3}-8\cdot 2\sqrt{3}}=\frac{57}{116},\\ {} \displaystyle {c_{3}^{(3)}}=\frac{{(-1)^{3}}30\sqrt{3}}{31{p_{2}}-8{p_{1}}}=\frac{-30\sqrt{3}}{29\cdot 8\sqrt{3}}=-\frac{15}{116},\hspace{2.5pt}{c_{3}^{(4)}}=\frac{{(-1)^{2}}6\sqrt{3}}{29\cdot 8\sqrt{3}}=\frac{3}{116};\end{array}\]
and for $n=4$,
\[\begin{array}{l}\displaystyle {c_{4}^{(2)}}=3\frac{5{p_{3}}-{p_{2}}}{31{p_{3}}-8{p_{2}}}=3\frac{5\cdot 30\sqrt{3}-8\sqrt{3}}{31\cdot 30\sqrt{3}-8\cdot 8\sqrt{3}}=\frac{213}{433},\\ {} \displaystyle {c_{4}^{(3)}}=-3\frac{5{p_{2}}-{p_{1}}}{433\cdot 2\sqrt{3}}=-\frac{57}{433},\hspace{1em}{c_{4}^{(4)}}=\frac{{(-1)^{4}}30\sqrt{3}}{433\cdot 2\sqrt{3}}=\frac{15}{433},\\ {} \displaystyle {c_{4}^{(5)}}=\frac{{(-1)^{3}}6\sqrt{3}}{433\cdot 2\sqrt{3}}=-\frac{3}{433}.\end{array}\]
On the other hand, we can solve the system of equations (20) using standard methods (Gaussian elimination). Thus, we get the following results.
If $n=3$ then
\[ \left\{\begin{array}{l}8{c_{3}^{(2)}}+9{c_{3}^{(3)}}+9{c_{3}^{(4)}}=3\hspace{1em}\\ {} 9{c_{3}^{(2)}}+14{c_{3}^{(3)}}+15{c_{3}^{(4)}}=3\hspace{1em}\\ {} 9{c_{3}^{(2)}}+15{c_{3}^{(3)}}+20{c_{3}^{(4)}}=3\hspace{1em}\end{array}\right.\Rightarrow \left\{\begin{array}{l}{c_{3}^{(2)}}=\frac{57}{116}\hspace{1em}\\ {} {c_{3}^{(3)}}=-\frac{15}{116}\hspace{1em}\\ {} {c_{3}^{(4)}}=\frac{3}{116}\hspace{1em}\end{array}\right..\]
In the case $n=4$,
\[ \left\{\begin{array}{l}8{c_{4}^{(2)}}+9{c_{4}^{(3)}}+9{c_{4}^{(4)}}+9{c_{4}^{(5)}}=3\hspace{1em}\\ {} 9{c_{4}^{(2)}}+14{c_{4}^{(3)}}+15{c_{4}^{(4)}}+15{c_{4}^{(5)}}=3\hspace{1em}\\ {} 9{c_{4}^{(2)}}+15{c_{4}^{(3)}}+20{c_{4}^{(4)}}+21{c_{4}^{(5)}}=3\hspace{1em}\\ {} 9{c_{4}^{(2)}}+15{c_{4}^{(3)}}+21{c_{4}^{(4)}}+26{c_{4}^{(5)}}=3\hspace{1em}\end{array}\right.\Rightarrow \left\{\begin{array}{l}{c_{4}^{(2)}}=\frac{213}{433}\hspace{1em}\\ {} {c_{4}^{(3)}}=-\frac{57}{433}\hspace{1em}\\ {} {c_{4}^{(4)}}=\frac{15}{433}\hspace{1em}\\ {} {c_{4}^{(5)}}=-\frac{3}{433}\hspace{1em}\end{array}\right..\]
As we see, the results coincide.
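The elimination step can also be scripted. The sketch below (the helper `solve` is ours) solves both displayed systems exactly over the rationals:

```python
from fractions import Fraction as F

def solve(A, b):
    """Gauss–Jordan elimination over the rationals; returns the unique solution."""
    n = len(A)
    M = [[F(x) for x in row] + [F(b[i])] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]  # normalize the pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * v for a, v in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# The n = 3 and n = 4 systems displayed above, right-hand side 3
print(solve([[8, 9, 9], [9, 14, 15], [9, 15, 20]], [3, 3, 3]))
# [Fraction(57, 116), Fraction(-15, 116), Fraction(3, 116)]
print(solve([[8, 9, 9, 9], [9, 14, 15, 15], [9, 15, 20, 21], [9, 15, 21, 26]],
            [3, 3, 3, 3]))
# [Fraction(213, 433), Fraction(-57, 433), Fraction(15, 433), Fraction(-3, 433)]
```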

References

[1] 
Bayer, C., Breneis, S.: Markovian approximations of stochastic Volterra equations with the fractional kernel. Quant. Finance 23(1), 53–70 (2023). MR4521278. https://doi.org/10.1080/14697688.2022.2139193
[2] 
Benth, F.E., Harang, F.A.: Infinite dimensional pathwise Volterra processes driven by Gaussian noise–Probabilistic properties and applications. Electron. J. Probab. 26, 1–42 (2021). MR4309972. https://doi.org/10.1214/21-ejp683
[3] 
Cass, T., Lim, N.: Skorohod and rough integration for stochastic differential equations driven by Volterra processes. Ann. Inst. Henri Poincaré B, Probab. Stat. 57, 132–168 (2021). MR4255171. https://doi.org/10.1214/20-aihp1074
[4] 
Coupek, P., Maslowski, B.: Stochastic evolution equations with Volterra noise. Stoch. Process. Appl. 127(3), 877–900 (2017). MR3605714. https://doi.org/10.1016/j.spa.2016.07.003
[5] 
Duncan, T.E., Maslowski, B., Pasik-Duncan, B.: Linear stochastic differential equations driven by Gauss-Volterra processes and related linear-quadratic control problems. Appl. Math. Optim. 80, 369–389 (2019). MR4008669. https://doi.org/10.1007/s00245-017-9468-3
[6] 
El Omari, M.: On the Gaussian Volterra processes with power-type kernels. Stoch. Models 40(1), 152–165 (2024). MR4694530. https://doi.org/10.1080/15326349.2023.2212763
[7] 
Fan, X., Jiang, X.: Stochastic control problem for distribution dependent SDE driven by a Gauss Volterra process. Discrete Contin. Dyn. Syst., Ser. S 16(5), 1041–1061 (2023). MR4571171. https://doi.org/10.3934/dcdss.2023031
[8] 
Gehringer, J., Li, X.M., Sieber, J.: Functional limit theorems for Volterra processes and applications to homogenization. Nonlinearity 35(4), 1521 (2022). MR4399494. https://doi.org/10.1088/1361-6544/ac4818
[9] 
Malyarenko, A., Mishura, Y., Ralchenko, K., Shklyar, S.: Entropy and alternative entropy functionals of fractional Gaussian noise as the functions of Hurst index. Fract. Calc. Appl. Anal. 26(3), 1052–1081 (2023). MR4599227. https://doi.org/10.1007/s13540-023-00155-2
[10] 
Mishura, Y., Shklyar, S.: Gaussian Volterra processes with power-type kernels. Part II. Mod. Stoch. Theory Appl. 9(4), 431–452 (2022). MR4510382. https://doi.org/10.15559/22-VMSTA211
[11] 
Mishura, Y., Ottaviano, S., Vargiolu, T.: Gaussian Volterra processes as models of electricity markets. arXiv:2311.09384. https://doi.org/10.48550/arXiv.2311.09384
[12] 
Mishura, Y., Shklyar, S.: Gaussian Volterra processes with power-type kernels. Part I. Mod. Stoch. Theory Appl. 9(3), 313–338 (2022). MR4462026. https://doi.org/10.15559/22-VMSTA205
[13] 
Mishura, Y., Ralchenko, K., Schilling, R.L.: Analytical and computational problems related to fractional Gaussian noise. Fractal Fract. 6(11), 1–22 (2022). https://doi.org/10.3390/fractalfract6110620
[14] 
Mishura, Y., Ralchenko, K., Shklyar, S.: General conditions of weak convergence of discrete-time multiplicative scheme to asset price with memory. Risks 8(1), 1–29 (2020). https://doi.org/10.3390/risks8010011
[15] 
Mishura, Y., Ralchenko, K., Shklyar, S.: Gaussian Volterra processes: asymptotic growth and statistical estimation. Theory Probab. Math. Stat. 108, 149–167 (2023). MR4588243. https://doi.org/10.1090/tpms/1190
[16] 
Mishura, Y., Shevchenko, G., Shklyar, S.: Gaussian processes with Volterra kernels. In: Malyarenko, A., Ni, Y., Rančić, M., Silvestrov, S. (eds.) Stochastic Processes, Statistical Methods, and Engineering Mathematics. SPAS 2019. Springer Proceedings in Mathematics & Statistics, vol. 408, pp. 249–276. Springer, Cham (2022). MR4607849. https://doi.org/10.1007/978-3-031-17820-7_13
[17] 
Norros, I., Valkeila, E., Virtamo, J.: An elementary approach to a Girsanov formula and other analytical results on fractional Brownian motions. Bernoulli 5(4), 571–587 (1999). MR1704556. https://doi.org/10.2307/3318691
[18] 
Perestyuk, M., Vyshenskyi, V.: Combinatorics: First Steps. Nova Science Publishers, Incorporated (2021). https://doi.org/10.52305/FIZC1542
[19] 
Ross, M., Smith, M.T., Álvarez, M.: Learning nonparametric Volterra kernels with Gaussian processes. In: Beygelzimer, A., Dauphin, Y., Liang, P., Wortman Vaughan, J. (eds.) Advances in Neural Information Processing Systems. NeurIPS 2021, vol. 34, pp. 24099–24110. Curran Associates, Inc. (2021).
[20] 
Valdivia, A.: Information loss on Gaussian Volterra process. Electron. Commun. Probab. 22, 1–5 (2017). MR3724558. https://doi.org/10.1214/17-ECP79
Table of contents
  • 1 Introduction
  • 2 The main properties of the simplest Gaussian-Volterra process
  • 3 Calculation of the determinant of covariance matrix of increments of the simplest GVp. Combinatorial approach
  • 4 Projection problem and respective coefficients for the simplest GVp
  • References

Copyright
© 2024 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Gaussian-Volterra process, covariance matrix, projection problem, combinatorial approach

MSC2010
60G15, 60G25, 05-08

Funding
The second author was supported by The Swedish Foundation for Strategic Research, grant No. UKR22-0017 and by the ToppForsk project No. 274410 of the Research Council of Norway with the title “STORM: Stochastics for Time-Space Risk Models.”


Theorem 1.
1) The determinant ${D_{1,n}}=\det ({A_{1,n}})$ equals
(12)
\[ {D_{1,1}}=\frac{1}{3},\hspace{1em}{D_{1,n}}=\frac{\sqrt{3}}{{6^{n+1}}}(7{p_{n-1}}-2{p_{n-2}}),\hspace{1em}n\ge 2.\]
2) The determinant ${D_{1,n}}$ decreases in n and satisfies the inequalities
(13)
\[ \frac{5}{42}{D_{1,n-1}}\lt \frac{5\sqrt{3}}{{6^{n+1}}}{p_{n-1}}\lt {D_{1,n}}\lt \frac{2}{3}{D_{1,n-1}},\hspace{1em}n\ge 2.\]
Theorem 2.
Coefficients $\{{c_{n}^{(k)}},\hspace{2.5pt}2\le k\le n+1\}$ can be found by the following formulas:
1) $n=1$, $\hspace{2.5pt}{c_{1}^{(2)}}=\frac{3}{8}$;
2) $n=2$,
\[\begin{array}{l}\displaystyle {c_{2}^{(2)}}=3\frac{5{p_{n-1}}-{p_{n-2}}}{31{p_{n-1}}-8{p_{n-2}}}=\frac{{(-1)^{n}}30\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}}=\frac{15}{31},\\ {} \displaystyle {c_{2}^{(3)}}=\frac{{(-1)^{n-1}}6\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}}=-\frac{3}{31};\end{array}\]
3) $n\ge 3$,
\[\begin{array}{l}\displaystyle {c_{n}^{(n+2-m)}}=\frac{{(-1)^{n-m}}\sqrt{3}}{2\cdot {6^{n}}\cdot {D_{2,n+1}}}\left[5{p_{m-1}}-{p_{m-2}}\right]\\ {} \displaystyle ={(-1)^{n-m}}3\frac{5{p_{m-1}}-{p_{m-2}}}{31{p_{n-1}}-8{p_{n-2}}},\hspace{1em}3\le m\le n,\\ {} \displaystyle {c_{n}^{(n)}}=\frac{{(-1)^{n}}5}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n}}30\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}},\\ {} \displaystyle {c_{n}^{(n+1)}}=\frac{{(-1)^{n-1}}}{2\cdot {6^{n-1}}}\cdot \frac{1}{{D_{2,n+1}}}=\frac{{(-1)^{n-1}}6\sqrt{3}}{31{p_{n-1}}-8{p_{n-2}}}.\end{array}\]
