1 Introduction
The field of temporal and contemporaneous aggregations of independent stationary stochastic processes is an important and very active research area in empirical and theoretical statistics and in other areas as well. The scheme of contemporaneous (also called cross-sectional) aggregation of random-coefficient autoregressive processes of order 1 was first proposed by Robinson [16] and Granger [4] in order to obtain long memory phenomena in aggregated time series. For surveys on papers dealing with the aggregation of different kinds of stochastic processes, see, e.g., Pilipauskaitė and Surgailis [13], Jirak [8, page 512] or the arXiv version of Barczy et al. [2].
In this paper we study the limit behaviour of temporal (time) and contemporaneous (space) aggregations of independent copies of a strictly stationary multitype Galton–Watson branching process with immigration in the so-called iterated and simultaneous cases, respectively. According to our knowledge, the aggregation of general multitype Galton–Watson branching processes with immigration has not been considered in the literature so far. To motivate the fact that the aggregation of branching processes could be an important topic, we now present an interesting and relevant example, where the phenomenon of aggregation of this kind of processes may come into play. A usual Integer-valued AutoRegressive (INAR) process of order 1, $(X_{k})_{k\geqslant 0}$, can be used to model migration, which is quite a big issue nowadays all over the world. More precisely, given a camp, for all $k\geqslant 0$, the random variable $X_{k}$ can be interpreted as the number of migrants present in the camp at time k; every migrant stays in the camp with probability $\alpha \in (0,1)$ independently of the others (i.e., with probability $1-\alpha $ each migrant leaves the camp), and at any time $k\geqslant 1$ new migrants may come to the camp. Given several camps in a country, we may suppose that the corresponding INAR processes of order 1 share the same parameter α and are independent. The temporal and contemporaneous aggregation of these INAR processes of order 1 then gives the total usage of the camps, in terms of the number of migrants, in the given country over a given time period, and this quantity may be worth studying.
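To make the camp example concrete, the following minimal simulation sketch generates such INAR(1) paths, assuming binomial thinning (each migrant stays with probability α independently) and, purely for illustration, Poisson-distributed arrivals; all parameter values are arbitrary.

```python
import numpy as np

def simulate_inar1(x0, alpha, imm_mean, n_steps, rng):
    """INAR(1) path: each of the X_{k-1} migrants stays with probability
    alpha; a Poisson(imm_mean) number of new migrants arrives at step k."""
    path = [x0]
    for _ in range(n_steps):
        stayers = rng.binomial(path[-1], alpha)  # binomial thinning
        arrivals = rng.poisson(imm_mean)         # new immigration
        path.append(stayers + arrivals)
    return np.array(path)

rng = np.random.default_rng(0)
# five camps sharing alpha, with independent (idiosyncratic) immigration
camps = np.stack([simulate_inar1(10, 0.7, 3.0, 100, rng) for _ in range(5)])
# temporal and contemporaneous aggregation: total usage over camps and time
total_usage = camps.sum()
```

With these illustrative values the stationary mean of each camp is $\mathbb{E}(\varepsilon )/(1-\alpha )=10$, so `total_usage` is of the order of $5\cdot 101\cdot 10$.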
The present paper is organized as follows. In Section 2 we formulate our main results, namely the iterated and simultaneous limit behaviour of time- and space-aggregated independent stationary p-type Galton–Watson branching processes with immigration is described (where $p\geqslant 1$), see Theorems 1 and 2. The limit distributions in these limit theorems coincide: the common limit is a p-dimensional zero mean Brownian motion with a covariance function depending on the expectations and covariances of the offspring and immigration distributions. In the course of the proofs of our results, in Lemma 1, we prove that for a subcritical, positively regular multitype Galton–Watson branching process with nontrivial immigration, its unique stationary distribution admits finite ${\alpha }^{\mathrm{th}}$ moments provided that the branching and immigration distributions have finite ${\alpha }^{\mathrm{th}}$ moments, where $\alpha \in \{1,2,3\}$. For $\alpha \in \{1,2\}$, this result is contained in Quine [14]. For $\alpha =3$, however, we have not found a precise proof in the literature; it seems to be a folklore result, so we decided to write down a detailed proof. As a by-product, we obtain an explicit formula for the third moment in question. Section 3 is devoted to the special case of generalized INAR processes, especially to single-type Galton–Watson branching processes with immigration. All the proofs can be found in Section 4.
2 Aggregation of multitype Galton–Watson branching processes with immigration
Let $\mathbb{Z}_{+}$, $\mathbb{N}$, $\mathbb{R}$, $\mathbb{R}_{+}$, and $\mathbb{C}$ denote the set of non-negative integers, positive integers, real numbers, non-negative real numbers, and complex numbers, respectively. For all $d\in \mathbb{N}$, the $d\times d$ identity matrix is denoted by $\boldsymbol{I}_{d}$. The standard basis in ${\mathbb{R}}^{d}$ is denoted by $\{\boldsymbol{e}_{1},\dots ,\boldsymbol{e}_{d}\}$. For $\boldsymbol{v}\in {\mathbb{R}}^{d}$, the Euclidean norm is denoted by $\| \boldsymbol{v}\| $, and for $\boldsymbol{A}\in {\mathbb{R}}^{d\times d}$, the induced matrix norm is denoted by $\| \boldsymbol{A}\| $ as well (with a little abuse of notation). All the random variables will be defined on a probability space $(\varOmega ,\mathcal{F},\mathbb{P})$.
Let $(\boldsymbol{X}_{k}={[X_{k,1},\dots ,X_{k,p}]}^{\top })_{k\in \mathbb{Z}_{+}}$ be a p-type Galton–Watson branching process with immigration. For each $k,\ell \in \mathbb{Z}_{+}$ and $i,j\in \{1,\dots ,p\}$, the number of j-type individuals in the ${k}^{\mathrm{th}}$ generation will be denoted by $X_{k,j}$, the number of j-type offspring produced by the ${\ell }^{\mathrm{th}}$ individual belonging to type i of the ${(k-1)}^{\mathrm{th}}$ generation will be denoted by ${\xi _{k,\ell }^{(i,j)}}$, and the number of immigrants of type i in the ${k}^{\mathrm{th}}$ generation will be denoted by ${\varepsilon _{k}^{(i)}}$. Then we have
(1)
\[ \boldsymbol{X}_{k}={\sum \limits_{\ell =1}^{X_{k-1,1}}}\left[\begin{array}{c}{\xi _{k,\ell }^{(1,1)}}\\{} \vdots \\{} {\xi _{k,\ell }^{(1,p)}}\end{array}\right]+\cdots +{\sum \limits_{\ell =1}^{X_{k-1,p}}}\left[\begin{array}{c}{\xi _{k,\ell }^{(p,1)}}\\{} \vdots \\{} {\xi _{k,\ell }^{(p,p)}}\end{array}\right]+\left[\begin{array}{c}{\varepsilon _{k}^{(1)}}\\{} \vdots \\{} {\varepsilon _{k}^{(p)}}\end{array}\right]=:{\sum \limits_{i=1}^{p}}{\sum \limits_{\ell =1}^{X_{k-1,i}}}{\boldsymbol{\xi }_{k,\ell }^{(i)}}+\boldsymbol{\varepsilon }_{k}\]
for every $k\in \mathbb{N}$, where we define ${\sum _{\ell =1}^{0}}:=\mathbf{0}$. Here $\{\boldsymbol{X}_{0},{\boldsymbol{\xi }_{k,\ell }^{(i)}},\boldsymbol{\varepsilon }_{k}:k,\ell \in \mathbb{N},i\in \{1,\dots ,p\}\}$ are supposed to be independent ${\mathbb{Z}_{+}^{p}}$-valued random vectors. Note that we do not assume independence among the components of these vectors. Moreover, for all $i\in \{1,\dots ,p\}$, $\{{\boldsymbol{\xi }}^{(i)},{\boldsymbol{\xi }_{k,\ell }^{(i)}}:k,\ell \in \mathbb{N}\}$ and $\{\boldsymbol{\varepsilon },\boldsymbol{\varepsilon }_{k}:k\in \mathbb{N}\}$ are supposed to consist of identically distributed random vectors, respectively.
Let us introduce the notations $\boldsymbol{m}_{\boldsymbol{\varepsilon }}:=\mathbb{E}(\boldsymbol{\varepsilon })\in {\mathbb{R}_{+}^{p}}$, along with $\boldsymbol{M}_{\boldsymbol{\xi }}:=\mathbb{E}([{\boldsymbol{\xi }}^{(1)},\dots ,{\boldsymbol{\xi }}^{(p)}])\in {\mathbb{R}_{+}^{p\times p}}$ and
\[ \boldsymbol{v}_{(i,j)}:={\big[\operatorname{Cov}\big({\xi }^{(1,i)},{\xi }^{(1,j)}\big),\dots ,\operatorname{Cov}\big({\xi }^{(p,i)},{\xi }^{(p,j)}\big),\operatorname{Cov}\big({\varepsilon }^{(i)},{\varepsilon }^{(j)}\big)\big]}^{\top }\in {\mathbb{R}}^{(p+1)\times 1}\]
for $i,j\in \{1,\dots ,p\}$, provided that the expectations and covariances in question are finite. For further application, we define the matrix
(3)
\[ \boldsymbol{V}:=\big(V_{i,j}\big)_{i,j=1}^{p},\hspace{1em}V_{i,j}:={\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} 1\end{array}\right],\hspace{1em}i,j\in \{1,\dots ,p\},\]
provided that the covariances in question are finite. Let $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})$ denote the spectral radius of $\boldsymbol{M}_{\boldsymbol{\xi }}$, i.e., the maximum of the modulus of the eigenvalues of $\boldsymbol{M}_{\boldsymbol{\xi }}$. The process $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$ is called subcritical, critical or supercritical if $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})$ is smaller than 1, equal to 1 or larger than 1, respectively. The matrix $\boldsymbol{M}_{\boldsymbol{\xi }}$ is called primitive if there is a positive integer $n\in \mathbb{N}$ such that all the entries of ${\boldsymbol{M}_{\boldsymbol{\xi }}^{n}}$ are positive. The process $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$ is called positively regular if $\boldsymbol{M}_{\boldsymbol{\xi }}$ is primitive. In what follows, we suppose that
(2)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)& \displaystyle \in {\mathbb{R}_{+}^{p}},\hspace{1em}i\in \{1,\dots ,p\},\hspace{1em}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\in {\mathbb{R}_{+}^{p}}\setminus \{\mathbf{0}\},\\{} \displaystyle \varrho (\boldsymbol{M}_{\boldsymbol{\xi }})& \displaystyle <1,\hspace{1em}\boldsymbol{M}_{\boldsymbol{\xi }}\hspace{5pt}\text{is primitive.}\end{array}\]
Remark 1.
Note that the matrix ${(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}$, which appears in (3) and throughout the paper, exists. Indeed, $\lambda \in \mathbb{C}$ is an eigenvalue of $\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }}$ if and only if $1-\lambda $ is that of $\boldsymbol{M}_{\boldsymbol{\xi }}$. Therefore, since $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, all eigenvalues of $\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }}$ are non-zero. This means that $\det (\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})\ne 0$, so ${(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}$ does exist. One could also refer to Corollary 5.6.16 and Lemma 5.6.10 in Horn and Johnson [6]. □
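Remark 1 (together with the Neumann-series identity used later in the proofs) can be sanity-checked numerically; the sketch below assumes NumPy and uses an arbitrary illustrative nonnegative mean matrix with $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$.

```python
import numpy as np

# illustrative nonnegative mean matrix with spectral radius 0.5 < 1
M = np.array([[0.3, 0.2],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(M)))
assert rho < 1

inv = np.linalg.inv(np.eye(2) - M)            # exists since rho(M) < 1
neumann = sum(np.linalg.matrix_power(M, k) for k in range(200))
assert np.allclose(inv, neumann)              # (I - M)^{-1} = sum_k M^k
```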
Remark 2.
Note that $\boldsymbol{V}$ is symmetric and positive semidefinite, since $\boldsymbol{v}_{(i,j)}=\boldsymbol{v}_{(j,i)}$, $i,j\in \{1,\dots ,p\}$, and for all $\boldsymbol{x}\in {\mathbb{R}}^{p}$,
\[ {\boldsymbol{x}}^{\top }\boldsymbol{V}\boldsymbol{x}={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}V_{i,j}x_{i}x_{j}=\Bigg({\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}x_{i}x_{j}{\boldsymbol{v}_{(i,j)}^{\top }}\Bigg)\left[\begin{array}{c}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} 1\end{array}\right],\]
where
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}x_{i}x_{j}{\boldsymbol{v}_{(i,j)}^{\top }}\\{} & \displaystyle \hspace{1em}=\big[{\boldsymbol{x}}^{\top }\operatorname{Cov}\big({\boldsymbol{\xi }}^{(1)},{\boldsymbol{\xi }}^{(1)}\big)\boldsymbol{x},\dots ,{\boldsymbol{x}}^{\top }\operatorname{Cov}\big({\boldsymbol{\xi }}^{(p)},{\boldsymbol{\xi }}^{(p)}\big)\boldsymbol{x},{\boldsymbol{x}}^{\top }\operatorname{Cov}(\boldsymbol{\varepsilon },\boldsymbol{\varepsilon })\boldsymbol{x}\big].\end{array}\]
Here ${\boldsymbol{x}}^{\top }\operatorname{Cov}({\boldsymbol{\xi }}^{(i)},{\boldsymbol{\xi }}^{(i)})\boldsymbol{x}\geqslant 0$, $i\in \{1,\dots ,p\}$, ${\boldsymbol{x}}^{\top }\operatorname{Cov}(\boldsymbol{\varepsilon },\boldsymbol{\varepsilon })\boldsymbol{x}\geqslant 0$, and ${(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\in {\mathbb{R}_{+}^{p}}$, since ${(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}$ is precisely the expectation vector of the unique stationary distribution of $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$, see the discussion below and formula (14). □
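The symmetry and positive semidefiniteness of $\boldsymbol{V}$ can also be verified numerically on a toy example. In the sketch below (assuming NumPy) the covariance matrices and mean vectors are arbitrary illustrative choices, and, following the expansion in Remark 2, $\boldsymbol{V}$ is assembled as $V_{i,j}={\boldsymbol{v}_{(i,j)}^{\top }}\boldsymbol{w}$ with $\boldsymbol{w}=[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }};1]$.

```python
import numpy as np

# illustrative offspring/immigration covariance matrices (PSD) and means
C = [np.array([[0.25, 0.05], [0.05, 0.16]]),   # Cov(xi^(1))
     np.array([[0.36, 0.10], [0.10, 0.20]])]   # Cov(xi^(2))
Ceps = np.array([[2.0, 0.5], [0.5, 1.0]])      # Cov(eps)
M = np.array([[0.3, 0.2], [0.1, 0.4]])         # M_xi, rho(M) = 0.5 < 1
m_eps = np.array([1.0, 2.0])

w = np.append(np.linalg.solve(np.eye(2) - M, m_eps), 1.0)
# V_{ij} = v_{(i,j)}^T w with v_{(i,j)} = [C[0][i,j], C[1][i,j], Ceps[i,j]],
# which amounts to the nonnegative combination below
V = w[0] * C[0] + w[1] * C[1] + w[2] * Ceps

assert np.allclose(V, V.T)                     # symmetric
assert np.linalg.eigvalsh(V).min() >= -1e-12   # positive semidefinite
```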
Under (2), by the Theorem in Quine [14], there is a unique stationary distribution π for $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$. Indeed, under (2), $\boldsymbol{M}_{\boldsymbol{\xi }}$ is irreducible due to the primitivity of $\boldsymbol{M}_{\boldsymbol{\xi }}$, see Definition 8.5.0 and Theorem 8.5.2 in Horn and Johnson [6]. For the definition of irreducibility, see Horn and Johnson [6, Definitions 6.2.21 and 6.2.22]. Further, $\boldsymbol{M}_{\boldsymbol{\xi }}$ is aperiodic, since this is equivalent to the primitivity of $\boldsymbol{M}_{\boldsymbol{\xi }}$, see Kesten and Stigum [10, page 314] and Kesten and Stigum [9, Section 3]. For the definition of aperiodicity (also called acyclicity), see, e.g., the Introduction of Danka and Pap [3]. Finally, since $\boldsymbol{m}_{\boldsymbol{\varepsilon }}\in {\mathbb{R}_{+}^{p}}\setminus \{\mathbf{0}\}$, the probability generating function of $\boldsymbol{\varepsilon }$ at $\mathbf{0}$ is less than 1, and
\[ \mathbb{E}\Bigg(\log \Bigg({\sum \limits_{i=1}^{p}}{\varepsilon }^{(i)}\Bigg)\mathbb{1}_{\{\boldsymbol{\varepsilon }\ne \mathbf{0}\}}\Bigg)\leqslant \mathbb{E}\Bigg({\sum \limits_{i=1}^{p}}{\varepsilon }^{(i)}\mathbb{1}_{\{\boldsymbol{\varepsilon }\ne \mathbf{0}\}}\Bigg)\leqslant \mathbb{E}\Bigg({\sum \limits_{i=1}^{p}}{\varepsilon }^{(i)}\Bigg)={\sum \limits_{i=1}^{p}}\mathbb{E}\big({\varepsilon }^{(i)}\big),\]
which is finite, so one can apply the Theorem in Quine [14].
For each $\alpha \in \mathbb{N}$, we say that the ${\alpha }^{\mathrm{th}}$ moment of a random vector is finite if all of its mixed moments of order α are finite.
Lemma 1.
Let us assume (2). For each $\alpha \in \{1,2,3\}$, the unique stationary distribution π has a finite ${\alpha }^{\mathrm{th}}$ moment, provided that the ${\alpha }^{\mathrm{th}}$ moments of ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ are finite.
In what follows, we suppose (2) and that the distribution of $\boldsymbol{X}_{0}$ is the unique stationary distribution π, hence the Markov chain $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$ is strictly stationary. Recall that, by (2.1) in Quine and Durham [15], for any measurable function $f:{\mathbb{R}}^{p}\to \mathbb{R}$ satisfying $\mathbb{E}(|f(\boldsymbol{X}_{0})|)<\infty $, we have
(4)
\[ \frac{1}{n}{\sum \limits_{k=1}^{n}}f(\boldsymbol{X}_{k})\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }\mathbb{E}\big(f(\boldsymbol{X}_{0})\big)\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
First we consider a simple aggregation procedure. For each $N\in \mathbb{N}$, consider the stochastic process ${\boldsymbol{S}}^{(N)}=({\boldsymbol{S}_{k}^{(N)}})_{k\in \mathbb{Z}_{+}}$ given by
\[ {\boldsymbol{S}_{k}^{(N)}}:={\sum \limits_{j=1}^{N}}\big({\boldsymbol{X}_{k}^{(j)}}-\mathbb{E}\big({\boldsymbol{X}_{k}^{(j)}}\big)\big),\hspace{1em}k\in \mathbb{Z}_{+},\]
where ${\boldsymbol{X}}^{(j)}=({\boldsymbol{X}_{k}^{(j)}})_{k\in \mathbb{Z}_{+}}$, $j\in \mathbb{N}$, is a sequence of independent copies of the strictly stationary p-type Galton–Watson process $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$ with immigration. Here we point out that we consider so-called idiosyncratic immigrations, i.e., the immigrations belonging to ${\boldsymbol{X}}^{(j)}$, $j\in \mathbb{N}$, are independent.
We will use $\stackrel{\mathcal{D}_{\mathrm{f}}}{\longrightarrow }$ or $\mathcal{D}_{\mathrm{f}}$-lim for weak convergence of finite dimensional distributions, and $\stackrel{\mathcal{D}}{\longrightarrow }$ for weak convergence in $D(\mathbb{R}_{+},{\mathbb{R}}^{p})$ of stochastic processes with càdlàg sample paths, where $D(\mathbb{R}_{+},{\mathbb{R}}^{p})$ denotes the space of ${\mathbb{R}}^{p}$-valued càdlàg functions defined on $\mathbb{R}_{+}$.
Proposition 1.
If all entries of the vectors ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ have finite second moments, then
\[ {N}^{-\frac{1}{2}}{\boldsymbol{S}}^{(N)}\stackrel{\mathcal{D}_{\mathrm{f}}}{\longrightarrow }\pmb{\mathcal{X}}\hspace{1em}\textit{as}\hspace{2.5pt}N\to \infty ,\]
where $\pmb{\mathcal{X}}=(\pmb{\mathcal{X}}_{k})_{k\in \mathbb{Z}_{+}}$ is a stationary p-dimensional zero mean Gaussian process with covariances
\[ \mathbb{E}\big(\pmb{\mathcal{X}}_{0}{\pmb{\mathcal{X}}_{k}^{\top }}\big)=\operatorname{Cov}(\boldsymbol{X}_{0},\boldsymbol{X}_{k})=\operatorname{Var}(\boldsymbol{X}_{0}){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{k},\hspace{1em}k\in \mathbb{Z}_{+},\]
where $\operatorname{Var}(\boldsymbol{X}_{0})$ denotes the covariance matrix of the unique stationary distribution π.
We note that using formula (16) presented later on, one could give an explicit formula for $\operatorname{Var}(\boldsymbol{X}_{0})$ (not containing an infinite series).
Proposition 2.
If all entries of the vectors ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ have finite third moments, then
\[ \Bigg({n}^{-\frac{1}{2}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\boldsymbol{S}_{k}^{(1)}}\Bigg)_{t\in \mathbb{R}_{+}}=\Bigg({n}^{-\frac{1}{2}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big({\boldsymbol{X}_{k}^{(1)}}-\mathbb{E}\big({\boldsymbol{X}_{k}^{(1)}}\big)\big)\Bigg)_{t\in \mathbb{R}_{+}}\stackrel{\mathcal{D}}{\longrightarrow }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{B}\]
as $n\to \infty $, where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$.
Note that Propositions 1 and 2 are about the scalings of the space-aggregated process ${\boldsymbol{S}}^{(N)}$ and the time-aggregated process $({\sum _{k=1}^{\lfloor nt\rfloor }}{\boldsymbol{S}_{k}^{(1)}})_{t\in \mathbb{R}_{+}}$, respectively.
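The space- and time-aggregated processes can be formed directly from simulated paths. Below is a sketch for the single-type case $p=1$, assuming Bernoulli offspring (binomial thinning with mean α) and Poisson(λ) immigration; for this classical INAR(1) special case the stationary distribution is Poisson$(\lambda /(1-\alpha ))$, so the independent copies can be started in stationarity. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, T = 4, 20, 3.0        # number of copies, time scale, horizon
alpha, lam = 0.5, 1.0       # offspring mean and immigration mean (p = 1)
mu = lam / (1 - alpha)      # stationary mean, (I_p - M_xi)^{-1} m_eps for p = 1

def stationary_path(steps):
    x = rng.poisson(mu)     # stationary start: Poisson(lam / (1 - alpha))
    out = np.empty(steps, dtype=np.int64)
    for k in range(steps):
        x = rng.binomial(x, alpha) + rng.poisson(lam)
        out[k] = x
    return out

steps = int(n * T)
paths = np.stack([stationary_path(steps) for _ in range(N)])  # N independent copies
# t -> S_t^{(N,n)}: sum over copies, cumulative sum over k = 1, ..., floor(nt)
S = (paths - mu).sum(axis=0).cumsum()
scaled = S / np.sqrt(n * N)   # the normalization appearing in the limit theorems
```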
For each $N,n\in \mathbb{N}$, consider the stochastic process ${\boldsymbol{S}}^{(N,n)}=({\boldsymbol{S}_{t}^{(N,n)}})_{t\in \mathbb{R}_{+}}$ given by
\[ {\boldsymbol{S}_{t}^{(N,n)}}:={\sum \limits_{j=1}^{N}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big({\boldsymbol{X}_{k}^{(j)}}-\mathbb{E}\big({\boldsymbol{X}_{k}^{(j)}}\big)\big),\hspace{1em}t\in \mathbb{R}_{+}.\]
Theorem 1.
If all entries of the vectors ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ have finite second moments, then
(7)
\[ \mathcal{D}_{\mathrm{f}}\textit{-}\hspace{-2.84526pt}\underset{n\to \infty }{\lim }\hspace{0.1667em}\mathcal{D}_{\mathrm{f}}\textit{-}\hspace{-2.84526pt}\underset{N\to \infty }{\lim }\hspace{0.1667em}{(nN)}^{-\frac{1}{2}}{\boldsymbol{S}}^{(N,n)}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{B},\]
where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$. If all entries of the vectors ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ have finite third moments, then
(8)
\[ \mathcal{D}_{\mathrm{f}}\textit{-}\hspace{-2.84526pt}\underset{N\to \infty }{\lim }\hspace{0.1667em}\mathcal{D}_{\mathrm{f}}\textit{-}\hspace{-2.84526pt}\underset{n\to \infty }{\lim }\hspace{0.1667em}{(nN)}^{-\frac{1}{2}}{\boldsymbol{S}}^{(N,n)}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{B},\]
where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$.
Theorem 2.
If all entries of the vectors ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ have finite third moments, then
(9)
\[ {(nN)}^{-\frac{1}{2}}{\boldsymbol{S}}^{(N,n)}\stackrel{\mathcal{D}}{\longrightarrow }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{B},\]
if both n and N converge to infinity (at any rate), where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$.
A key ingredient of the proofs is the fact that $(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k}))_{k\in \mathbb{Z}_{+}}$ can be rewritten as a stable first order vector autoregressive process with coefficient matrix $\boldsymbol{M}_{\boldsymbol{\xi }}$ and with heteroscedastic innovations, see (24).
3 A special case: aggregation of GINAR processes
We devote this section to the analysis of aggregation of Generalized Integer-valued AutoRegressive processes of order $p\in \mathbb{N}$ (GINAR(p) processes), which are special cases of the p-type Galton–Watson branching processes with immigration introduced in (1). As a historical remark, we note that GINAR(p) processes were introduced by Latour [11] as generalizations of INAR(p) processes. This class of processes has become popular in modelling integer-valued time series data such as the daily number of claims at an insurance company. In fact, a GINAR(1) process is a (general) single-type Galton–Watson branching process with immigration.
Let $(Z_{k})_{k\geqslant -p+1}$ be a GINAR(p) process. Namely, for each $k,\ell \in \mathbb{Z}_{+}$ and $i\in \{1,\dots ,p\}$, the number of individuals in the ${k}^{\mathrm{th}}$ generation will be denoted by $Z_{k}$, the number of offspring produced by the ${\ell }^{\mathrm{th}}$ individual belonging to the ${(k-i)}^{\mathrm{th}}$ generation will be denoted by ${\xi _{k,\ell }^{(i,1)}}$, and the number of immigrants in the ${k}^{\mathrm{th}}$ generation will be denoted by ${\varepsilon _{k}^{(1)}}$. Here the 1s in the superscripts of ${\xi _{k,\ell }^{(i,1)}}$ and ${\varepsilon _{k}^{(1)}}$ are displayed in order to allow an easier comparison with (1). Then we have
\[ Z_{k}={\sum \limits_{\ell =1}^{Z_{k-1}}}{\xi _{k,\ell }^{(1,1)}}+\cdots +{\sum \limits_{\ell =1}^{Z_{k-p}}}{\xi _{k,\ell }^{(p,1)}}+{\varepsilon _{k}^{(1)}},\hspace{1em}k\in \mathbb{N}.\]
Here $\{Z_{0},Z_{-1},\dots ,Z_{-p+1},{\xi _{k,\ell }^{(i,1)}},{\varepsilon _{k}^{(1)}}:k,\ell \in \mathbb{N},i\in \{1,\dots ,p\}\}$ are supposed to be independent nonnegative integer-valued random variables. Moreover, for all $i\in \{1,\dots ,p\}$, $\{{\xi }^{(i,1)},{\xi _{k,\ell }^{(i,1)}}:k,\ell \in \mathbb{N}\}$ and $\{{\varepsilon }^{(1)},{\varepsilon _{k}^{(1)}}:k\in \mathbb{N}\}$ are supposed to consist of identically distributed random variables, respectively.
A GINAR(p) process can be embedded in a p-type Galton–Watson branching process with immigration $(\boldsymbol{X}_{k}={[Z_{k},\dots ,Z_{k-p+1}]}^{\top })_{k\in \mathbb{Z}_{+}}$ with the corresponding p-dimensional random vectors
\[ {\boldsymbol{\xi }_{k,\ell }^{(1)}}\hspace{0.1667em}=\left[\begin{array}{c}{\xi _{k,\ell }^{(1,1)}}\\{} 1\\{} 0\\{} \vdots \\{} 0\end{array}\right],\hspace{1em}\dots ,\hspace{1em}{\boldsymbol{\xi }_{k,\ell }^{(p-1)}}\hspace{0.1667em}=\left[\begin{array}{c}{\xi _{k,\ell }^{(p-1,1)}}\\{} 0\\{} \vdots \\{} 0\\{} 1\end{array}\right],\hspace{1em}{\boldsymbol{\xi }_{k,\ell }^{(p)}}\hspace{0.1667em}=\left[\begin{array}{c}{\xi _{k,\ell }^{(p,1)}}\\{} 0\\{} 0\\{} \vdots \\{} 0\end{array}\right],\hspace{1em}\boldsymbol{\varepsilon }_{k}\hspace{0.1667em}=\left[\begin{array}{c}{\varepsilon _{k}^{(1)}}\\{} 0\\{} 0\\{} \vdots \\{} 0\end{array}\right]\]
for any $k,\ell \in \mathbb{N}$.
In what follows, we reformulate the classification of GINAR(p) processes in terms of the expectations of the offspring distributions.
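The embedding above can be sanity-checked on a simulated path. In the sketch below (assuming NumPy) we take $p=2$ with Bernoulli offspring variables and Poisson immigration, purely for illustration, and verify that $\boldsymbol{X}_{k}={[Z_{k},Z_{k-1}]}^{\top }$ follows the vector recursion built from the displayed ξ-vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 2, 50
alpha = [0.4, 0.3]            # offspring means E(xi^(i,1)); Bernoulli thinning
lam = 2.0                     # Poisson immigration mean

Z = [3, 5]                    # Z_{-1}, Z_0
X = [np.array([Z[1], Z[0]])]  # X_0 = [Z_0, Z_{-1}]^T
for k in range(n):
    xi1 = rng.binomial(1, alpha[0], size=Z[-1])  # offspring of the Z_{k-1} individuals
    xi2 = rng.binomial(1, alpha[1], size=Z[-2])  # offspring of the Z_{k-2} individuals
    eps = rng.poisson(lam)
    Z.append(xi1.sum() + xi2.sum() + eps)        # GINAR(2) recursion
    # embedded 2-type recursion: xi^(1) = [xi, 1]^T, xi^(2) = [xi, 0]^T, eps = [eps, 0]^T
    x_prev = X[-1]
    X.append(np.array([xi1.sum(), x_prev[0]])    # x_prev[0] = Z_{k-1} summands contribute the 1s
             + np.array([xi2.sum(), 0])
             + np.array([eps, 0]))

assert all(X[k][0] == Z[k + 1] for k in range(n + 1))  # first coordinate is Z_k
assert all(X[k][1] == Z[k] for k in range(n + 1))      # second coordinate is Z_{k-1}
```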
Remark 3.
In case of a GINAR(p) process, one can show that φ, the characteristic polynomial of the matrix $\boldsymbol{M}_{\boldsymbol{\xi }}$, has the form
\[ \varphi (\lambda ):=\det (\lambda \boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})={\lambda }^{p}-\mathbb{E}\big({\xi }^{(1,1)}\big){\lambda }^{p-1}-\cdots -\mathbb{E}\big({\xi }^{(p-1,1)}\big)\lambda -\mathbb{E}\big({\xi }^{(p,1)}\big)\]
for any $\lambda \in \mathbb{C}$. Recall that $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})$ denotes the spectral radius of $\boldsymbol{M}_{\boldsymbol{\xi }}$, i.e., the maximum of the modulus of the eigenvalues of $\boldsymbol{M}_{\boldsymbol{\xi }}$. If $\mathbb{E}({\xi }^{(p,1)})>0$, then, by the proof of Proposition 2.2 in Barczy et al. [1], the characteristic polynomial φ has just one positive root, $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})>0$, the nonnegative matrix $\boldsymbol{M}_{\boldsymbol{\xi }}$ is irreducible, $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})$ is an eigenvalue of $\boldsymbol{M}_{\boldsymbol{\xi }}$, and ${\sum _{i=1}^{p}}\mathbb{E}({\xi }^{(i,1)})\varrho {(\boldsymbol{M}_{\boldsymbol{\xi }})}^{-i}=1$. Further,
\[ \hspace{50.0pt}\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})\hspace{0.1667em}\left\{\begin{array}{l}<\hspace{1em}\\{} =\hspace{1em}\\{} >\hspace{1em}\end{array}\right.1\hspace{2em}\Longleftrightarrow \hspace{2em}{\sum \limits_{i=1}^{p}}\mathbb{E}\big({\xi }^{(i,1)}\big)\hspace{0.1667em}\left\{\begin{array}{l}<\hspace{1em}\\{} =\hspace{1em}\\{} >\hspace{1em}\end{array}\right.1.\hspace{50.0pt}\hspace{1em}\square \]
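The equivalence in Remark 3 is easy to check numerically. In the sketch below (assuming NumPy) the offspring means are arbitrary illustrative values, and the companion-form mean matrix is built as described by the embedding above.

```python
import numpy as np

def companion(means):
    """Mean matrix M_xi of the p-type process embedding a GINAR(p)."""
    p = len(means)
    M = np.zeros((p, p))
    M[0, :] = means              # first row: E(xi^(i,1)), i = 1, ..., p
    M[1:, :-1] = np.eye(p - 1)   # subdiagonal of ones
    return M

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

sub = rho(companion([0.3, 0.2, 0.1]))   # sum of means 0.6 < 1
sup = rho(companion([0.5, 0.4, 0.3]))   # sum of means 1.2 > 1
assert sub < 1 and sup > 1
# the positive root satisfies sum_i E(xi^(i,1)) rho^{-i} = 1
assert abs(sum(m * sub ** -(i + 1) for i, m in enumerate([0.3, 0.2, 0.1])) - 1) < 1e-8
```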
Next, we specialize the matrix $\boldsymbol{V}$, defined in (3), in case of a subcritical GINAR(p) process.
Remark 4.
In case of a GINAR(p) process, the vectors
\[ \boldsymbol{v}_{(i,j)}={\big[\operatorname{Cov}\big({\xi }^{(1,i)},{\xi }^{(1,j)}\big),\dots ,\operatorname{Cov}\big({\xi }^{(p,i)},{\xi }^{(p,j)}\big),\operatorname{Cov}\big({\varepsilon }^{(i)},{\varepsilon }^{(j)}\big)\big]}^{\top }\in {\mathbb{R}}^{(p+1)\times 1}\]
for $i,j\in \{1,\dots ,p\}$ are all zero vectors except for the case $i=j=1$. Therefore, in case of $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, the matrix $\boldsymbol{V}$, defined in (3), reduces to
(10)
\[ \boldsymbol{V}={\boldsymbol{v}_{(1,1)}^{\top }}\left[\begin{array}{c}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} 1\end{array}\right]\boldsymbol{e}_{1}{\boldsymbol{e}_{1}^{\top }}.\]
□
Finally, we specialize the limit distribution in Theorems 1 and 2 in case of a subcritical GINAR(p) process.
Remark 5.
Let us note that in case of $p=1$ and $\mathbb{E}({\xi }^{(1,1)})<1$ (yielding that the corresponding GINAR(1) process is subcritical), the limit process in Theorems 1 and 2 can be written as
\[ \frac{1}{1-\mathbb{E}({\xi }^{(1,1)})}\sqrt{\frac{\mathbb{E}({\varepsilon }^{(1)})\operatorname{Var}({\xi }^{(1,1)})+(1-\mathbb{E}({\xi }^{(1,1)}))\operatorname{Var}({\varepsilon }^{(1)})}{1-\mathbb{E}({\xi }^{(1,1)})}}W,\]
where $W=(W_{t})_{t\in \mathbb{R}_{+}}$ is a standard 1-dimensional Brownian motion. Indeed, this holds, since in this special case $\boldsymbol{M}_{\boldsymbol{\xi }}=\mathbb{E}({\xi }^{(1,1)})$ yielding that ${(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}={(1-\mathbb{E}({\xi }^{(1,1)}))}^{-1}$, and, by (10),
\[ \hspace{11.0pt}\boldsymbol{V}={\left[\begin{array}{c}\operatorname{Cov}({\xi }^{(1,1)},{\xi }^{(1,1)})\\{} \operatorname{Cov}({\varepsilon }^{(1)},{\varepsilon }^{(1)})\end{array}\right]}^{\top }\left[\begin{array}{c}\frac{\mathbb{E}({\varepsilon }^{(1)})}{1-\mathbb{E}({\xi }^{(1,1)})}\\{} 1\end{array}\right]=\frac{\operatorname{Var}({\xi }^{(1,1)})\mathbb{E}({\varepsilon }^{(1)})}{1-\mathbb{E}({\xi }^{(1,1)})}+\operatorname{Var}\big({\varepsilon }^{(1)}\big).\hspace{3.0pt}\hspace{1em}\square \]
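For a quick numerical cross-check of Remark 5, one can compare the value of $\boldsymbol{V}$ computed from (10) in the case $p=1$ with the expression under the square root above; the parameter values below (a Bernoulli offspring law and a Poisson immigration law) are illustrative.

```python
import numpy as np

m_xi, var_xi = 0.5, 0.25    # E(xi^(1,1)), Var(xi^(1,1)): e.g. Bernoulli(1/2)
m_eps, var_eps = 2.0, 2.0   # E(eps^(1)), Var(eps^(1)): e.g. Poisson(2)

# V = v_(1,1)^T [ (1 - m_xi)^{-1} m_eps ; 1 ]  in the case p = 1
V = var_xi * m_eps / (1 - m_xi) + var_eps
# the quantity under the square root in Remark 5 equals V
num = m_eps * var_xi + (1 - m_xi) * var_eps
assert np.isclose(V, num / (1 - m_xi))
```

Hence the limit process $(1-\mathbb{E}({\xi }^{(1,1)}))^{-1}\sqrt{\boldsymbol{V}}\hspace{0.1667em}W$ matches the display above.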
4 Proofs
Proof of Lemma 1.
Let $(\boldsymbol{Z}_{k})_{k\in \mathbb{Z}_{+}}$ be a p-type Galton–Watson branching process without immigration, with the same offspring distribution as the process $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$, and with $\boldsymbol{Z}_{0}\stackrel{\mathcal{D}}{=}\boldsymbol{\varepsilon }$. Then the stationary distribution π of $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$ admits a representation as the distribution of ${\sum _{n=0}^{\infty }}{\boldsymbol{Z}_{n}^{(n)}}$, where $({\boldsymbol{Z}_{k}^{(n)}})_{k\in \mathbb{Z}_{+}}$, $n\in \mathbb{Z}_{+}$, are independent copies of $(\boldsymbol{Z}_{k})_{k\in \mathbb{Z}_{+}}$. This is a consequence of formula (16) for the probability generating function of π in Quine [14]. It is convenient to calculate moments of Kronecker powers of random vectors. We will use the notation $\boldsymbol{A}\otimes \boldsymbol{B}$ for the Kronecker product of the matrices $\boldsymbol{A}$ and $\boldsymbol{B}$, and we put ${\boldsymbol{A}}^{\otimes 2}:=\boldsymbol{A}\otimes \boldsymbol{A}$ and ${\boldsymbol{A}}^{\otimes 3}:=\boldsymbol{A}\otimes \boldsymbol{A}\otimes \boldsymbol{A}$. For each $\alpha \in \{1,2,3\}$, by the monotone convergence theorem, we have
\[ \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes \alpha }\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})=\mathbb{E}\Bigg[{\Bigg({\sum \limits_{r=0}^{\infty }}{\boldsymbol{Z}_{r}^{(r)}}\Bigg)}^{\otimes \alpha }\Bigg]=\underset{n\to \infty }{\lim }\mathbb{E}\Bigg[{\Bigg({\sum \limits_{r=0}^{n-1}}{\boldsymbol{Z}_{r}^{(r)}}\Bigg)}^{\otimes \alpha }\Bigg].\]
For each $n\in \mathbb{Z}_{+}$, we have
\[ {\sum \limits_{r=0}^{n-1}}{\boldsymbol{Z}_{r}^{(r)}}\stackrel{\mathcal{D}}{=}\boldsymbol{Y}_{n},\]
where $(\boldsymbol{Y}_{k})_{k\in \mathbb{Z}_{+}}$ is a Galton–Watson branching process with the same offspring and immigration distributions as $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$, and with $\boldsymbol{Y}_{0}=\mathbf{0}$. This can be checked comparing their probability generating functions taking into account formula (3) in Quine [14] as well. Consequently, we conclude
(11)
\[ \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes \alpha }\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})=\underset{n\to \infty }{\lim }\mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes \alpha }}\big).\]
For each $n\in \mathbb{N}$, using (1), we obtain
(12)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big(\boldsymbol{Y}_{n}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)& \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)+\mathbb{E}\big(\boldsymbol{\varepsilon }_{n}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}Y_{n-1,i}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)+\mathbb{E}(\boldsymbol{\varepsilon })\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big){\boldsymbol{e}_{i}^{\top }}\boldsymbol{Y}_{n-1}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}=\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{Y}_{n-1}+\boldsymbol{m}_{\boldsymbol{\varepsilon }},\end{array}\]
where ${\mathcal{F}_{n-1}^{\boldsymbol{Y}}}:=\sigma (\boldsymbol{Y}_{0},\dots ,\boldsymbol{Y}_{n-1})$, $n\in \mathbb{N}$, and $Y_{n-1,i}:={\boldsymbol{e}_{i}^{\top }}\boldsymbol{Y}_{n-1}$, $i\in \{1,\dots ,p\}$. Taking the expectation, we get
(13)
\[ \mathbb{E}(\boldsymbol{Y}_{n})=\boldsymbol{M}_{\boldsymbol{\xi }}\mathbb{E}(\boldsymbol{Y}_{n-1})+\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}n\in \mathbb{N}.\]
Taking into account $\boldsymbol{Y}_{0}=\mathbf{0}$, we obtain
\[ \mathbb{E}(\boldsymbol{Y}_{n})={\sum \limits_{k=1}^{n}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{n-k}}\boldsymbol{m}_{\boldsymbol{\varepsilon }}={\sum \limits_{\ell =0}^{n-1}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\ell }}\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}n\in \mathbb{N}.\]
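The recursion (13) and the closed-form expression for $\mathbb{E}(\boldsymbol{Y}_{n})$ can be sanity-checked numerically; a sketch (assuming NumPy) with an arbitrary subcritical mean matrix:

```python
import numpy as np

M = np.array([[0.3, 0.2], [0.1, 0.4]])   # illustrative M_xi with rho(M) = 0.5 < 1
m_eps = np.array([1.0, 2.0])

EY = np.zeros(2)                          # E(Y_0) = 0
for n in range(1, 101):
    EY = M @ EY + m_eps                   # recursion (13)

closed = sum(np.linalg.matrix_power(M, l) @ m_eps for l in range(100))
limit = np.linalg.solve(np.eye(2) - M, m_eps)    # (I - M)^{-1} m_eps

assert np.allclose(EY, closed)            # closed-form solution of (13)
assert np.allclose(EY, limit)             # convergence to the stationary mean
```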
For each $n\in \mathbb{N}$, we have $(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }}){\sum _{\ell =0}^{n-1}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\ell }}=\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{n}}$. By the condition $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, the matrix $\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }}$ is invertible and ${\sum _{\ell =0}^{\infty }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\ell }}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}$, see Corollary 5.6.16 and Lemma 5.6.10 in Horn and Johnson [6]. Consequently, by (11), the first moment of π is finite, and
(14)
\[ \int _{{\mathbb{R}}^{p}}\boldsymbol{x}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})=\underset{n\to \infty }{\lim }\mathbb{E}(\boldsymbol{Y}_{n})={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}.\]
Now we suppose that the second moments of ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ are finite. For each $n\in \mathbb{N}$, using again (1), we obtain
Using also (13), we obtain
Since
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 2}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)& \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}{\sum \limits_{{i^{\prime }}=1}^{p}}{\sum \limits_{{j^{\prime }}=1}^{Y_{n-1,{i^{\prime }}}}}\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\otimes {\boldsymbol{\xi }_{n,{j^{\prime }}}^{({i^{\prime }})}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\otimes \boldsymbol{\varepsilon }_{n}+\boldsymbol{\varepsilon }_{n}\otimes {\boldsymbol{\xi }_{n,j}^{(i)}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)+\mathbb{E}\big({\boldsymbol{\varepsilon }_{n}^{\otimes 2}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{\underset{{i^{\prime }}\ne i}{{i^{\prime }}=1}}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}(Y_{n-1,i}-1){\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }+\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 
2}\big]-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big\}+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)\\{} & \displaystyle ={(\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{Y}_{n-1})}^{\otimes 2}+\boldsymbol{A}_{2,1}\boldsymbol{Y}_{n-1}+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big).\end{array}\]
with
\[ \boldsymbol{A}_{2,1}:={\sum \limits_{i=1}^{p}}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}{\boldsymbol{e}_{i}^{\top }}\in {\mathbb{R}}^{{p}^{2}\times p}.\]
Indeed, using the mixed-product property $(\boldsymbol{A}\otimes \boldsymbol{B})(\boldsymbol{C}\otimes \boldsymbol{D})=(\boldsymbol{A}\boldsymbol{C})\otimes (\boldsymbol{B}\boldsymbol{D})$, which holds whenever the matrix products $\boldsymbol{A}\boldsymbol{C}$ and $\boldsymbol{B}\boldsymbol{D}$ are defined, we have
\[ Y_{n-1,i}Y_{n-1,{i^{\prime }}}=Y_{n-1,i}\otimes Y_{n-1,{i^{\prime }}}=\big({\boldsymbol{e}_{i}^{\top }}\boldsymbol{Y}_{n-1}\big)\otimes \big({\boldsymbol{e}_{{i^{\prime }}}^{\top }}\boldsymbol{Y}_{n-1}\big)=\big({\boldsymbol{e}_{i}^{\top }}\otimes {\boldsymbol{e}_{{i^{\prime }}}^{\top }}\big){\boldsymbol{Y}_{n-1}^{\otimes 2}},\]
hence
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\big]\big({\boldsymbol{e}_{i}^{\top }}\otimes {\boldsymbol{e}_{{i^{\prime }}}^{\top }}\big){\boldsymbol{Y}_{n-1}^{\otimes 2}}\\{} & \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}\big[\big(\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big){\boldsymbol{e}_{i}^{\top }}\big)\otimes \big(\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big){\boldsymbol{e}_{{i^{\prime }}}^{\top }}\big)\big]{\boldsymbol{Y}_{n-1}^{\otimes 2}}={\Bigg({\sum \limits_{i=1}^{p}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big){\boldsymbol{e}_{i}^{\top }}\Bigg)}^{\otimes 2}{\boldsymbol{Y}_{n-1}^{\otimes 2}}\\{} & \displaystyle ={(\boldsymbol{M}_{\boldsymbol{\xi }})}^{\otimes 2}{\boldsymbol{Y}_{n-1}^{\otimes 2}}={(\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{Y}_{n-1})}^{\otimes 2}.\end{array}\]
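As a numerical sanity check (not part of the proof), the mixed-product property and the identity $Y_{n-1,i}Y_{n-1,{i^{\prime }}}=({\boldsymbol{e}_{i}^{\top }}\otimes {\boldsymbol{e}_{{i^{\prime }}}^{\top }}){\boldsymbol{Y}_{n-1}^{\otimes 2}}$ can be verified with NumPy; the matrices and the vector $Y$ below are random stand-ins of illustrative size:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3

# Mixed-product property: (A (x) B)(C (x) D) = (AC) (x) (BD),
# whenever the products AC and BD are defined.
A, B = rng.random((p, p)), rng.random((p, p))
C, D = rng.random((p, 1)), rng.random((p, 1))
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# The scalar identity Y_i Y_{i'} = (e_i^T (x) e_{i'}^T) Y^{(x)2},
# with Y^{(x)2} realised as kron(Y, Y).
Y = rng.random(p)
i, ip = 0, 2
e_i, e_ip = np.eye(p)[i], np.eye(p)[ip]
assert np.isclose(Y[i] * Y[ip], np.kron(e_i, e_ip) @ np.kron(Y, Y))
```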
Consequently, we obtain
\[ \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 2}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)={\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}{\boldsymbol{Y}_{n-1}^{\otimes 2}}+\boldsymbol{A}_{2,1}\boldsymbol{Y}_{n-1}+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big),\hspace{1em}n\in \mathbb{N}.\]
Taking the expectation, we get
(15)
\[ \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 2}}\big)={\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\mathbb{E}\big({\boldsymbol{Y}_{n-1}^{\otimes 2}}\big)+\boldsymbol{A}_{2,1}\mathbb{E}(\boldsymbol{Y}_{n-1})+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big),\hspace{1em}n\in \mathbb{N}.\]
Hence
\[ \left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 2}})\end{array}\right]=\boldsymbol{A}_{2}\left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n-1})\\{} \mathbb{E}({\boldsymbol{Y}_{n-1}^{\otimes 2}})\end{array}\right]+\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\end{array}\right],\hspace{1em}n\in \mathbb{N},\]
with
\[ \boldsymbol{A}_{2}:=\left[\begin{array}{c@{\hskip10.0pt}c}\boldsymbol{M}_{\boldsymbol{\xi }}& \mathbf{0}\\{} \boldsymbol{A}_{2,1}& {\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\end{array}\right]\in {\mathbb{R}}^{(p+{p}^{2})\times (p+{p}^{2})}.\]
Taking into account $\boldsymbol{Y}_{0}=\mathbf{0}$, we obtain
\[ \left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 2}})\end{array}\right]={\sum \limits_{k=1}^{n}}{\boldsymbol{A}_{2}^{n-k}}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\end{array}\right]={\sum \limits_{\ell =0}^{n-1}}{\boldsymbol{A}_{2}^{\ell }}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\end{array}\right],\hspace{1em}n\in \mathbb{N}.\]
We have $\varrho (\boldsymbol{A}_{2})=\max \{\varrho (\boldsymbol{M}_{\boldsymbol{\xi }}),\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})\}$, where $\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})={[\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})]}^{2}$. Taking into account $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, we conclude that $\varrho (\boldsymbol{A}_{2})=\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, and, by (11), the second moment of π is finite, and
(16)
\[ \left[\begin{array}{c}\int _{{\mathbb{R}}^{p}}\boldsymbol{x}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})\\{} \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes 2}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})\end{array}\right]={(\boldsymbol{I}_{p+{p}^{2}}-\boldsymbol{A}_{2})}^{-1}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\end{array}\right].\]
Since
\[ {(\boldsymbol{I}_{p+{p}^{2}}-\boldsymbol{A}_{2})}^{-1}=\left[\begin{array}{c@{\hskip10.0pt}c}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}& \mathbf{0}\\{} {(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})}^{-1}\boldsymbol{A}_{2,1}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}& {(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})}^{-1}\end{array}\right],\]
we get that $\int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes 2}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})$ equals
\[ {\big(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\big)}^{-1}\boldsymbol{A}_{2,1}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}+{\big(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\big)}^{-1}\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big).\]
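As a numerical illustration (not part of the proof), one can check with random stand-ins for $\boldsymbol{M}_{\boldsymbol{\xi }}$, $\boldsymbol{A}_{2,1}$, $\boldsymbol{m}_{\boldsymbol{\varepsilon }}$ and $\mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})$ that $\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})={[\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})]}^{2}$, that the block lower-triangular $\boldsymbol{A}_{2}$ has spectral radius $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})$, and that iterating the recursion (15) from zero initial moments converges to the fixed point given by (16):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 2

# Random stand-ins for the quantities in the text (subcritical case).
M = rng.random((p, p))
M *= 0.8 / max(abs(np.linalg.eigvals(M)))   # force rho(M) = 0.8 < 1
A21 = rng.random((p * p, p))                # stand-in for A_{2,1}
m_eps = rng.random(p)                       # stand-in for m_eps
E_eps2 = rng.random(p * p)                  # stand-in for E(eps^{(x)2})

# rho(M^{(x)2}) = rho(M)^2, and the spectrum of the block lower-triangular
# A_2 is the union of the spectra of its diagonal blocks.
rho = max(abs(np.linalg.eigvals(M)))
M2 = np.kron(M, M)
assert np.isclose(max(abs(np.linalg.eigvals(M2))), rho**2)
A2 = np.block([[M, np.zeros((p, p * p))], [A21, M2]])
assert np.isclose(max(abs(np.linalg.eigvals(A2))), max(rho, rho**2))

# Iterate E(Y_n), E(Y_n^{(x)2}) from Y_0 = 0 via the recursion (15).
EY, EY2 = np.zeros(p), np.zeros(p * p)
for _ in range(2000):
    EY, EY2 = M @ EY + m_eps, M2 @ EY2 + A21 @ EY + E_eps2

# The limits agree with the stationary-moment formulas.
x1 = np.linalg.solve(np.eye(p) - M, m_eps)
x2 = np.linalg.solve(np.eye(p * p) - M2, A21 @ x1 + E_eps2)
assert np.allclose(EY, x1)
assert np.allclose(EY2, x2)
```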
Now we suppose that the third moments of ${\boldsymbol{\xi }}^{(i)}$, $i\in \{1,\dots ,p\}$, and $\boldsymbol{\varepsilon }$ are finite. For each $n\in \mathbb{N}$, using again (1), we obtain
\[ \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 3}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)=S_{n,1}+S_{n,2}+S_{n,3}+\mathbb{E}\big({\boldsymbol{\varepsilon }_{n}^{\otimes 3}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\]
with
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,1}& \displaystyle :={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}{\sum \limits_{{i^{\prime }}=1}^{p}}{\sum \limits_{{j^{\prime }}=1}^{Y_{n-1,{i^{\prime }}}}}{\sum \limits_{{i^{\prime\prime }}=1}^{p}}{\sum \limits_{{j^{\prime\prime }}=1}^{Y_{n-1,{i^{\prime\prime }}}}}\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\otimes {\boldsymbol{\xi }_{n,{j^{\prime }}}^{({i^{\prime }})}}\otimes {\boldsymbol{\xi }_{n,{j^{\prime\prime }}}^{({i^{\prime\prime }})}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big),\\{} \displaystyle S_{n,2}& \displaystyle :={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}{\sum \limits_{{i^{\prime }}=1}^{p}}{\sum \limits_{{j^{\prime }}=1}^{Y_{n-1,{i^{\prime }}}}}\big\{\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\otimes {\boldsymbol{\xi }_{n,{j^{\prime }}}^{({i^{\prime }})}}\otimes \boldsymbol{\varepsilon }_{n}+{\boldsymbol{\xi }_{n,j}^{(i)}}\otimes \boldsymbol{\varepsilon }_{n}\otimes {\boldsymbol{\xi }_{n,{j^{\prime }}}^{({i^{\prime }})}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}+\mathbb{E}\big(\boldsymbol{\varepsilon }_{n}\otimes {\boldsymbol{\xi }_{n,j}^{(i)}}\otimes {\boldsymbol{\xi }_{n,{j^{\prime }}}^{({i^{\prime }})}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)\big\},\\{} \displaystyle S_{n,3}& \displaystyle :={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{Y_{n-1,i}}}\mathbb{E}\big({\boldsymbol{\xi }_{n,j}^{(i)}}\otimes {\boldsymbol{\varepsilon }_{n}^{\otimes 2}}+\boldsymbol{\varepsilon }_{n}\otimes {\boldsymbol{\xi }_{n,j}^{(i)}}\otimes \boldsymbol{\varepsilon }_{n}+{\boldsymbol{\varepsilon }_{n}^{\otimes 2}}\otimes {\boldsymbol{\xi }_{n,j}^{(i)}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big).\end{array}\]
We have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,1}& \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{\underset{{i^{\prime }}\ne i}{{i^{\prime }}=1}}^{p}}{\sum \limits_{\underset{{i^{\prime\prime }}\notin \{i,{i^{\prime }}\}}{{i^{\prime\prime }}=1}}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}Y_{n-1,{i^{\prime\prime }}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime\prime }})}\big)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}{\sum \limits_{\underset{{i^{\prime }}\ne i}{{i^{\prime }}=1}}^{p}}Y_{n-1,i}(Y_{n-1,i}-1)Y_{n-1,{i^{\prime }}}\\{} & \displaystyle \hspace{2em}\times \big\{{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{2em}\hspace{1em}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}{\sum \limits_{\underset{{i^{\prime }}\ne i}{{i^{\prime }}=1}}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\\{} & \displaystyle \hspace{2em}\times \big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes {\boldsymbol{\xi }}^{({i^{\prime }})}\otimes {\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{2em}\hspace{1em}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}(Y_{n-1,i}-1)(Y_{n-1,i}-2){\big[\mathbb{E}\big({\boldsymbol{\xi 
}}^{(i)}\big)\big]}^{\otimes 3}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 3}\big]\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}(Y_{n-1,i}-1)\\{} & \displaystyle \hspace{2em}\times \big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)+\mathbb{E}\big({\boldsymbol{\xi }_{1,1}^{(i)}}\otimes {\boldsymbol{\xi }_{1,2}^{(i)}}\otimes {\boldsymbol{\xi }_{1,1}^{(i)}}\big)\\{} & \displaystyle \hspace{2em}\hspace{1em}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\big\},\end{array}\]
which can be written in the form
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,1}& \displaystyle =\hspace{0.1667em}{\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}{\sum \limits_{{i^{\prime\prime }}=1}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}Y_{n-1,{i^{\prime\prime }}}\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime\prime }})}\big)\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes {\boldsymbol{\xi }}^{({i^{\prime }})}\otimes {\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{1em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle \hspace{1em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{1em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}-\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 3}\big]-\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 
2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)-\mathbb{E}\big({\boldsymbol{\xi }_{1,1}^{(i)}}\otimes {\boldsymbol{\xi }_{1,2}^{(i)}}\otimes {\boldsymbol{\xi }_{1,1}^{(i)}}\big)\\{} & \displaystyle \hspace{1em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]+2{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 3}\big\}.\end{array}\]
Hence
(17)
\[ S_{n,1}={\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}{\boldsymbol{Y}_{n-1}^{\otimes 3}}+{\boldsymbol{A}_{3,2}^{(1)}}{\boldsymbol{Y}_{n-1}^{\otimes 2}}+{\boldsymbol{A}_{3,1}^{(1)}}\boldsymbol{Y}_{n-1}\]
with
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\boldsymbol{A}_{3,2}^{(1)}}& \displaystyle :={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes {\boldsymbol{\xi }}^{({i^{\prime }})}\otimes {\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2.5pt}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2.5pt}\hspace{2.5pt}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)-\mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2.5pt}\hspace{2.5pt}\times \big({\boldsymbol{e}_{i}^{\top }}\otimes {\boldsymbol{e}_{{i^{\prime }}}^{\top }}\big)\in {\mathbb{R}}^{{p}^{3}\times {p}^{2}},\\{} \displaystyle {\boldsymbol{A}_{3,1}^{(1)}}& \displaystyle :={\sum \limits_{i=1}^{p}}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 3}\big]-\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)-\mathbb{E}\big({\boldsymbol{\xi }_{1,1}^{(i)}}\otimes {\boldsymbol{\xi }_{1,2}^{(i)}}\otimes {\boldsymbol{\xi }_{1,1}^{(i)}}\big)\\{} & \displaystyle \hspace{2em}\hspace{1em}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]+2{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 
3}\big\}{\boldsymbol{e}_{i}^{\top }}\in {\mathbb{R}}^{{p}^{3}\times p}.\end{array}\]
Moreover,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,2}& \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{\underset{{i^{\prime }}\ne i}{{i^{\prime }}=1}}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}(Y_{n-1,i}-1)\big\{{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{2.5pt}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\big\},\end{array}\]
where $\mathbb{E}({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)})$ is finite, since there exists a permutation matrix $\boldsymbol{P}\in {\mathbb{R}}^{{p}^{2}\times {p}^{2}}$ such that $\boldsymbol{u}\otimes \boldsymbol{v}=\boldsymbol{P}(\boldsymbol{v}\otimes \boldsymbol{u})$ for all $\boldsymbol{u},\boldsymbol{v}\in {\mathbb{R}}^{p}$ (see, e.g., Henderson and Searle [5, formula (6)]), hence
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)& \displaystyle =\mathbb{E}\big(\big[\boldsymbol{P}\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)\big]\otimes {\boldsymbol{\xi }}^{(i)}\big)=\mathbb{E}\big(\big[\boldsymbol{P}\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)\big]\otimes \big(\boldsymbol{I}_{p}{\boldsymbol{\xi }}^{(i)}\big)\big)\\{} & \displaystyle =\mathbb{E}\big((\boldsymbol{P}\otimes \boldsymbol{I}_{p})\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\otimes {\boldsymbol{\xi }}^{(i)}\big)\big)\\{} & \displaystyle =(\boldsymbol{P}\otimes \boldsymbol{I}_{p})\big(\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\big).\end{array}\]
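The permutation (commutation) matrix $\boldsymbol{P}$ used here can be constructed explicitly, and both the identity $\boldsymbol{u}\otimes \boldsymbol{v}=\boldsymbol{P}(\boldsymbol{v}\otimes \boldsymbol{u})$ and the rearrangement step above can be checked numerically; this is an illustrative sketch with random vectors, not part of the argument:

```python
import numpy as np

def commutation_matrix(p: int) -> np.ndarray:
    """P of size p^2 x p^2 with  u (x) v = P (v (x) u)  for all u, v in R^p."""
    P = np.zeros((p * p, p * p))
    for i in range(p):
        for j in range(p):
            # coordinate (i, j) of u (x) v  is coordinate (j, i) of v (x) u
            P[i * p + j, j * p + i] = 1.0
    return P

rng = np.random.default_rng(3)
p = 3
P = commutation_matrix(p)
u, v = rng.random(p), rng.random(p)
assert np.allclose(np.kron(u, v), P @ np.kron(v, u))

# The step used in the display: xi (x) eps (x) xi = (P (x) I_p)(eps (x) xi (x) xi).
xi, eps = rng.random(p), rng.random(p)
lhs = np.kron(np.kron(xi, eps), xi)
rhs = np.kron(P, np.eye(p)) @ np.kron(np.kron(eps, xi), xi)
assert np.allclose(lhs, rhs)
```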
Thus
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,2}& \displaystyle ={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}Y_{n-1,i}Y_{n-1,{i^{\prime }}}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\big\}\\{} & \displaystyle \hspace{1em}+{\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2.5pt}-\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}.\end{array}\]
Hence
(18)
\[ S_{n,2}={\boldsymbol{A}_{3,2}^{(2)}}{\boldsymbol{Y}_{n-1}^{\otimes 2}}+{\boldsymbol{A}_{3,1}^{(2)}}\boldsymbol{Y}_{n-1}\]
with
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\boldsymbol{A}_{3,2}^{(2)}}& \displaystyle :={\sum \limits_{i=1}^{p}}{\sum \limits_{{i^{\prime }}=1}^{p}}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\\{} & \displaystyle \hspace{2em}\hspace{2em}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{({i^{\prime }})}\big)\big\}\big({\boldsymbol{e}_{i}^{\top }}\otimes {\boldsymbol{e}_{{i^{\prime }}}^{\top }}\big),\end{array}\]
and
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\boldsymbol{A}_{3,1}^{(2)}}& \displaystyle :={\sum \limits_{i=1}^{p}}\big\{\mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}+\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\big)+\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big[{\big({\boldsymbol{\xi }}^{(i)}\big)}^{\otimes 2}\big]\\{} & \displaystyle \hspace{1em}\hspace{2em}-{\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}-\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\\{} & \displaystyle \hspace{1em}\hspace{2em}-\boldsymbol{m}_{\boldsymbol{\varepsilon }}\otimes {\big[\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big]}^{\otimes 2}\big\}{\boldsymbol{e}_{i}^{\top }},\end{array}\]
which are matrices in ${\mathbb{R}}^{{p}^{3}\times {p}^{2}}$ and ${\mathbb{R}}^{{p}^{3}\times p}$, respectively. Further,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle S_{n,3}& \displaystyle ={\sum \limits_{i=1}^{p}}Y_{n-1,i}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)+\mathbb{E}\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\big)+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big\}\\{} & \displaystyle ={\boldsymbol{A}_{3,1}^{(3)}}\boldsymbol{Y}_{n-1}\end{array}\]
with
\[ {\boldsymbol{A}_{3,1}^{(3)}}:={\sum \limits_{i=1}^{p}}\big\{\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)+\mathbb{E}\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\big)+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)\otimes \mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\big\}{\boldsymbol{e}_{i}^{\top }}\in {\mathbb{R}}^{{p}^{3}\times p},\]
where $\mathbb{E}(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon })$ is finite, since
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big(\boldsymbol{\varepsilon }\otimes {\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\big)& \displaystyle =\mathbb{E}\big(\big[\boldsymbol{P}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\big)\big]\otimes \boldsymbol{\varepsilon }\big)=\mathbb{E}\big(\big[\boldsymbol{P}\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\big)\big]\otimes (\boldsymbol{I}_{p}\boldsymbol{\varepsilon })\big)\\{} & \displaystyle =\mathbb{E}\big((\boldsymbol{P}\otimes \boldsymbol{I}_{p})\big({\boldsymbol{\xi }}^{(i)}\otimes \boldsymbol{\varepsilon }\otimes \boldsymbol{\varepsilon }\big)\big)=(\boldsymbol{P}\otimes \boldsymbol{I}_{p})\big(\mathbb{E}\big({\boldsymbol{\xi }}^{(i)}\big)\otimes \mathbb{E}\big[{\boldsymbol{\varepsilon }}^{\otimes 2}\big]\big).\end{array}\]
Consequently, we have
\[ \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 3}}\mid {\mathcal{F}_{n-1}^{\boldsymbol{Y}}}\big)={\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}{\boldsymbol{Y}_{n-1}^{\otimes 3}}+\boldsymbol{A}_{3,2}{\boldsymbol{Y}_{n-1}^{\otimes 2}}+\boldsymbol{A}_{3,1}\boldsymbol{Y}_{n-1}+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 3}\big)\]
with $\boldsymbol{A}_{3,2}:={\boldsymbol{A}_{3,2}^{(1)}}+{\boldsymbol{A}_{3,2}^{(2)}}$ and $\boldsymbol{A}_{3,1}:={\boldsymbol{A}_{3,1}^{(1)}}+{\boldsymbol{A}_{3,1}^{(2)}}+{\boldsymbol{A}_{3,1}^{(3)}}$. Taking the expectation, we get
(19)
\[ \mathbb{E}\big({\boldsymbol{Y}_{n}^{\otimes 3}}\big)={\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}\mathbb{E}\big({\boldsymbol{Y}_{n-1}^{\otimes 3}}\big)+\boldsymbol{A}_{3,2}\mathbb{E}\big({\boldsymbol{Y}_{n-1}^{\otimes 2}}\big)+\boldsymbol{A}_{3,1}\mathbb{E}(\boldsymbol{Y}_{n-1})+\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 3}\big).\]
Hence
\[ \left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 2}})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 3}})\end{array}\right]=\boldsymbol{A}_{3}\left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n-1})\\{} \mathbb{E}({\boldsymbol{Y}_{n-1}^{\otimes 2}})\\{} \mathbb{E}({\boldsymbol{Y}_{n-1}^{\otimes 3}})\end{array}\right]+\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 3})\end{array}\right],\hspace{1em}n\in \mathbb{N},\]
with
\[ \boldsymbol{A}_{3}:=\left[\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\boldsymbol{M}_{\boldsymbol{\xi }}& \mathbf{0}& \mathbf{0}\\{} \boldsymbol{A}_{2,1}& {\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}& \mathbf{0}\\{} \boldsymbol{A}_{3,1}& \boldsymbol{A}_{3,2}& {\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}\end{array}\right]\in {\mathbb{R}}^{(p+{p}^{2}+{p}^{3})\times (p+{p}^{2}+{p}^{3})}.\]
Taking into account $\boldsymbol{Y}_{0}=\mathbf{0}$, we obtain
\[ \left[\begin{array}{c}\mathbb{E}(\boldsymbol{Y}_{n})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 2}})\\{} \mathbb{E}({\boldsymbol{Y}_{n}^{\otimes 3}})\end{array}\right]={\sum \limits_{k=1}^{n}}{\boldsymbol{A}_{3}^{n-k}}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 3})\end{array}\right]={\sum \limits_{\ell =0}^{n-1}}{\boldsymbol{A}_{3}^{\ell }}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 3})\end{array}\right],\hspace{1em}n\in \mathbb{N}.\]
We have $\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})={[\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})]}^{2}$ and $\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}})={[\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})]}^{3}$, and hence $\varrho (\boldsymbol{A}_{3})=\max \{\varrho (\boldsymbol{M}_{\boldsymbol{\xi }}),\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}),\varrho ({\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}})\}$. Taking into account $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, we conclude that $\varrho (\boldsymbol{A}_{3})=\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, and, by (11), the third moment of π is finite, and
(20)
\[ \left[\begin{array}{c}\int _{{\mathbb{R}}^{p}}\boldsymbol{x}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})\\{} \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes 2}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})\\{} \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes 3}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})\end{array}\right]={(\boldsymbol{I}_{p+{p}^{2}+{p}^{3}}-\boldsymbol{A}_{3})}^{-1}\left[\begin{array}{c}\boldsymbol{m}_{\boldsymbol{\varepsilon }}\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 2})\\{} \mathbb{E}({\boldsymbol{\varepsilon }}^{\otimes 3})\end{array}\right].\]
Since
\[ {(\boldsymbol{I}_{p+{p}^{2}+{p}^{3}}-\boldsymbol{A}_{3})}^{-1}=\left[\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}& \mathbf{0}& \mathbf{0}\\{} \boldsymbol{B}_{2,1}& {(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}})}^{-1}& \mathbf{0}\\{} \boldsymbol{B}_{3,1}& \boldsymbol{B}_{3,2}& {(\boldsymbol{I}_{{p}^{3}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}})}^{-1}\end{array}\right],\]
where
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \boldsymbol{B}_{2,1}={\big(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\big)}^{-1}\boldsymbol{A}_{2,1}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1},\\{} & \displaystyle \boldsymbol{B}_{3,1}={\big(\boldsymbol{I}_{{p}^{3}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}\big)}^{-1}\big(\boldsymbol{A}_{3,1}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}+\boldsymbol{A}_{3,2}\boldsymbol{B}_{2,1}\big),\\{} & \displaystyle \boldsymbol{B}_{3,2}={\big(\boldsymbol{I}_{{p}^{3}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}\big)}^{-1}\boldsymbol{A}_{3,2}{\big(\boldsymbol{I}_{{p}^{2}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 2}}\big)}^{-1},\end{array}\]
we have
\[ \int _{{\mathbb{R}}^{p}}{\boldsymbol{x}}^{\otimes 3}\hspace{0.1667em}\pi (\mathrm{d}\boldsymbol{x})=\boldsymbol{B}_{3,1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}+\boldsymbol{B}_{3,2}\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 2}\big)+{\big(\boldsymbol{I}_{{p}^{3}}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\otimes 3}}\big)}^{-1}\mathbb{E}\big({\boldsymbol{\varepsilon }}^{\otimes 3}\big).\]
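The block form of ${(\boldsymbol{I}_{p+{p}^{2}+{p}^{3}}-\boldsymbol{A}_{3})}^{-1}$ can likewise be verified numerically; the matrices below are random stand-ins of the appropriate sizes for $\boldsymbol{M}_{\boldsymbol{\xi }}$, $\boldsymbol{A}_{2,1}$, $\boldsymbol{A}_{3,1}$ and $\boldsymbol{A}_{3,2}$, and this is only a sanity check of the algebra:

```python
import numpy as np

rng = np.random.default_rng(4)
p = 2
M = rng.random((p, p))
M *= 0.7 / max(abs(np.linalg.eigvals(M)))   # subcritical: rho(M) < 1
M2, M3 = np.kron(M, M), np.kron(np.kron(M, M), M)
A21 = rng.random((p**2, p))                 # stand-ins for A_{2,1}, A_{3,1}, A_{3,2}
A31 = rng.random((p**3, p))
A32 = rng.random((p**3, p**2))

I1, I2, I3 = np.eye(p), np.eye(p**2), np.eye(p**3)
A3 = np.block([
    [M, np.zeros((p, p**2)), np.zeros((p, p**3))],
    [A21, M2, np.zeros((p**2, p**3))],
    [A31, A32, M3],
])

# The B blocks as defined in the text.
B21 = np.linalg.solve(I2 - M2, A21 @ np.linalg.inv(I1 - M))
B31 = np.linalg.solve(I3 - M3, A31 @ np.linalg.inv(I1 - M) + A32 @ B21)
B32 = np.linalg.solve(I3 - M3, A32 @ np.linalg.inv(I2 - M2))

inv_blocks = np.block([
    [np.linalg.inv(I1 - M), np.zeros((p, p**2)), np.zeros((p, p**3))],
    [B21, np.linalg.inv(I2 - M2), np.zeros((p**2, p**3))],
    [B31, B32, np.linalg.inv(I3 - M3)],
])
n = p + p**2 + p**3
assert np.allclose(inv_blocks, np.linalg.inv(np.eye(n) - A3))
```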
□

Proof of Proposition 1.
Similarly to (12), we have
\[ \mathbb{E}\big(\boldsymbol{X}_{k}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)=\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{X}_{k-1}+\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}k\in \mathbb{N},\]
where ${\mathcal{F}_{k}^{\boldsymbol{X}}}:=\sigma (\boldsymbol{X}_{0},\dots ,\boldsymbol{X}_{k})$, $k\in \mathbb{Z}_{+}$. Consequently,
(21)
\[ \mathbb{E}(\boldsymbol{X}_{k})=\boldsymbol{M}_{\boldsymbol{\xi }}\mathbb{E}(\boldsymbol{X}_{k-1})+\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}k\in \mathbb{N},\]
and, by (14),
(22)
\[ \mathbb{E}(\boldsymbol{X}_{0})={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{m}_{\boldsymbol{\varepsilon }}.\]
Put
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \boldsymbol{U}_{k}& \displaystyle :=\boldsymbol{X}_{k}-\mathbb{E}\big(\boldsymbol{X}_{k}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)=\boldsymbol{X}_{k}-(\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{X}_{k-1}+\boldsymbol{m}_{\boldsymbol{\varepsilon }})\\{} & \displaystyle \hspace{2.5pt}={\sum \limits_{i=1}^{p}}{\sum \limits_{\ell =1}^{X_{k-1,i}}}\big({\boldsymbol{\xi }_{k,\ell }^{(i)}}-\mathbb{E}\big({\boldsymbol{\xi }_{k,\ell }^{(i)}}\big)\big)+\big(\boldsymbol{\varepsilon }_{k}-\mathbb{E}(\boldsymbol{\varepsilon }_{k})\big),\hspace{1em}k\in \mathbb{N}.\end{array}\]
Then $\mathbb{E}(\boldsymbol{U}_{k}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}})=\mathbf{0}$, $k\in \mathbb{N}$, and using the independence of $\{{\boldsymbol{\xi }_{k,\ell }^{(i)}},\boldsymbol{\varepsilon }_{k}:k,\ell \in \mathbb{N},i\in \{1,\dots ,p\}\}$, we have
(23)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big(U_{k,i}U_{k,j}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)& \displaystyle ={\sum \limits_{q=1}^{p}}X_{k-1,q}\operatorname{Cov}\big({\xi _{k,1}^{(q,i)}},{\xi _{k,1}^{(q,j)}}\big)+\operatorname{Cov}\big({\varepsilon _{k}^{(i)}},{\varepsilon _{k}^{(j)}}\big)\\{} & \displaystyle ={\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}\boldsymbol{X}_{k-1}\\{} 1\end{array}\right]\end{array}\]
(24)
for $i,j\in \{1,\dots ,p\}$ and $k\in \mathbb{N}$, where ${[U_{k,1},\dots ,U_{k,p}]}^{\top }:=\boldsymbol{U}_{k}$, $k\in \mathbb{N}$. For each $k\in \mathbb{N}$, using $\boldsymbol{X}_{k}=\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{X}_{k-1}+\boldsymbol{m}_{\boldsymbol{\varepsilon }}+\boldsymbol{U}_{k}$ and (21), we obtain
Consequently,
Here $\lim _{N\to \infty }{\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\operatorname{Var}(\boldsymbol{X}_{0}){({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{N}=\mathbf{0}\in {\mathbb{R}}^{p\times p}$. Indeed, by the Gelfand formula $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})=\lim _{k\to \infty }\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}{\| }^{1/k}$, see, e.g., Horn and Johnson [6, Corollary 5.6.14]. Hence there exists $k_{0}\in \mathbb{N}$ such that
since $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$. Thus, for all $N\geqslant k_{0}$,
\[ \boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})=\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big)+\boldsymbol{U}_{k},\hspace{1em}k\in \mathbb{N}.\]
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\big(\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big){\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big)}^{\top }\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\\{} & \displaystyle \hspace{1em}=\mathbb{E}\big(\big(\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big)+\boldsymbol{U}_{k}\big){\big(\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big)+\boldsymbol{U}_{k}\big)}^{\top }\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\\{} & \displaystyle \hspace{1em}=\mathbb{E}\big(\boldsymbol{U}_{k}{\boldsymbol{U}_{k}^{\top }}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)+\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big){\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big)}^{\top }{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\end{array}\]
for all $k\in \mathbb{N}$. Taking the expectation, by (22) and (23), we conclude that
\[ \operatorname{Var}(\boldsymbol{X}_{k})=\mathbb{E}\big(\boldsymbol{U}_{k}{\boldsymbol{U}_{k}^{\top }}\big)+\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{k-1}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}=\boldsymbol{V}+\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{k-1}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\]
for all $k\in \mathbb{N}$. Under the conditions of the proposition, by Lemma 1, the unique stationary distribution π has a finite second moment, hence, using again the stationarity of $(\boldsymbol{X}_{k})_{k\in \mathbb{Z}_{+}}$, for each $N\in \mathbb{N}$, we get
(25)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \operatorname{Var}(\boldsymbol{X}_{0})& \displaystyle =\boldsymbol{V}+\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{0}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\\{} & \displaystyle ={\sum \limits_{k=0}^{N-1}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\boldsymbol{V}{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{k}+{\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\operatorname{Var}(\boldsymbol{X}_{0}){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{N}.\end{array}\](26)
\[ {\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| }^{1/k}\leqslant \varrho (\boldsymbol{M}_{\boldsymbol{\xi }})+\frac{1-\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})}{2}=\frac{1+\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})}{2}<1\hspace{1em}\text{for all}\hspace{2.5pt}k\geqslant k_{0},\]
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\operatorname{Var}(\boldsymbol{X}_{0}){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{N}\big\| & \displaystyle \hspace{0.1667em}\leqslant \hspace{0.1667em}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\big\| \big\| \operatorname{Var}(\boldsymbol{X}_{0})\big\| \big\| {\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{N}\big\| \\{} & \displaystyle \hspace{0.1667em}=\hspace{0.1667em}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\big\| \big\| \operatorname{Var}(\boldsymbol{X}_{0})\big\| \big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\big\| \hspace{0.1667em}\leqslant \hspace{0.1667em}{\bigg(\frac{1\hspace{0.1667em}+\hspace{0.1667em}\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})}{2}\bigg)}^{2N}\big\| \operatorname{Var}(\boldsymbol{X}_{0})\big\| ,\end{array}\]
hence $\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{N}}\operatorname{Var}(\boldsymbol{X}_{0}){({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{N}\| \to 0$ as $N\to \infty $. Consequently, $\operatorname{Var}(\boldsymbol{X}_{0})={\sum _{k=0}^{\infty }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\boldsymbol{V}{({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{k}$, yielding (6). Moreover, by (24),
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\big(\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big){\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big)}^{\top }\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\\{} & \displaystyle \hspace{1em}=\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)\mathbb{E}\big({\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big)}^{\top }\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\\{} & \displaystyle \hspace{1em}=\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big){\big(\boldsymbol{X}_{k-1}-\mathbb{E}(\boldsymbol{X}_{k-1})\big)}^{\top }{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }},\hspace{1em}k\in \mathbb{N}.\end{array}\]
Taking the expectation, we conclude
\[ \operatorname{Cov}(\boldsymbol{X}_{0},\boldsymbol{X}_{k})=\operatorname{Cov}(\boldsymbol{X}_{0},\boldsymbol{X}_{k-1}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }},\hspace{1em}k\in \mathbb{N}.\]
Hence, by induction, we obtain the formula for $\operatorname{Cov}(\boldsymbol{X}_{0},\boldsymbol{X}_{k})$.

The statement will follow from the multidimensional central limit theorem. Due to the continuous mapping theorem, it is sufficient to show the convergence ${N}^{-1/2}({\boldsymbol{S}_{0}^{(N)}},{\boldsymbol{S}_{1}^{(N)}},\dots ,{\boldsymbol{S}_{k}^{(N)}})\stackrel{\mathcal{D}}{\longrightarrow }(\pmb{\mathcal{X}}_{0},\pmb{\mathcal{X}}_{1},\dots ,\pmb{\mathcal{X}}_{k})$ as $N\to \infty $ for all $k\in \mathbb{Z}_{+}$. For all $k\in \mathbb{Z}_{+}$, the random vectors ${({({\boldsymbol{X}_{0}^{(j)}}-\mathbb{E}({\boldsymbol{X}_{0}^{(j)}}))}^{\top },{({\boldsymbol{X}_{1}^{(j)}}-\mathbb{E}({\boldsymbol{X}_{1}^{(j)}}))}^{\top },\dots ,{({\boldsymbol{X}_{k}^{(j)}}-\mathbb{E}({\boldsymbol{X}_{k}^{(j)}}))}^{\top })}^{\top }$, $j\in \mathbb{N}$, are independent and identically distributed, having zero mean vector and covariances
\[ \operatorname{Cov}\big({\boldsymbol{X}_{\ell _{1}}^{(j)}},{\boldsymbol{X}_{\ell _{2}}^{(j)}}\big)=\operatorname{Cov}\big({\boldsymbol{X}_{0}^{(j)}},{\boldsymbol{X}_{\ell _{2}-\ell _{1}}^{(j)}}\big)=\operatorname{Var}(\boldsymbol{X}_{0}){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\ell _{2}-\ell _{1}}\]
for $j\in \mathbb{N}$, $\ell _{1},\ell _{2}\in \{0,1,\dots ,k\}$, $\ell _{1}\leqslant \ell _{2}$, following from the strict stationarity of ${\boldsymbol{X}}^{(j)}$ and from (5). □

Proof of Proposition 2.
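Before the argument, a small numerical aside (a sketch with illustrative toy parameters, not part of the proof): the stationary covariance $\operatorname{Var}(\boldsymbol{X}_{0})$ and the matrix $\boldsymbol{V}$, which drives the limit process below, are linked by the Lyapunov-type identities (5), (6) and (25) established above, and these can be checked as follows.

```python
import numpy as np

# Toy parameters (assumptions for illustration): offspring-mean matrix M with
# rho(M) < 1 and a symmetric positive semidefinite covariance matrix V.
M = np.array([[0.4, 0.2],
              [0.1, 0.3]])
V = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# Stationary covariance as the series  Sigma = sum_k M^k V (M^T)^k,  cf. (6).
Sigma = np.zeros((2, 2))
Mk = np.eye(2)
for _ in range(200):        # geometric decay of the terms => fast convergence
    Sigma += Mk @ V @ Mk.T
    Mk = Mk @ M

# Fixed-point (Lyapunov) equation  Sigma = V + M Sigma M^T,  cf. (25).
assert np.allclose(Sigma, V + M @ Sigma @ M.T)

# Autocovariances Cov(X_0, X_k) = Sigma (M^T)^k, cf. (5), satisfy the recursion
# Cov(X_0, X_k) = Cov(X_0, X_{k-1}) M^T derived above.
cov1 = Sigma @ M.T
assert np.allclose(cov1 @ M.T, Sigma @ M.T @ M.T)
print("ok")
```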
It is known that
\[ \boldsymbol{U}_{k}=\boldsymbol{X}_{k}-\mathbb{E}\big(\boldsymbol{X}_{k}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)=\boldsymbol{X}_{k}-\boldsymbol{M}_{\boldsymbol{\xi }}\boldsymbol{X}_{k-1}-\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}k\in \mathbb{N},\]
are martingale differences with respect to the filtration $({\mathcal{F}_{k}^{\boldsymbol{X}}})_{k\in \mathbb{Z}_{+}}$. The functional martingale central limit theorem can be applied, see, e.g., Jacod and Shiryaev [7, Theorem VIII.3.33]. Indeed, using (23) and the fact that the first moment of $\boldsymbol{X}_{0}$ exists and is finite, by (4), for each $t\in \mathbb{R}_{+}$, and $i,j\in \{1,\dots ,p\}$, we have
\[ \frac{1}{n}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big(U_{k,i}U_{k,j}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }{\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}\mathbb{E}(\boldsymbol{X}_{0})\\{} 1\end{array}\right]t=V_{i,j}t\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
and hence the convergence holds in probability as well. Moreover, the conditional Lindeberg condition holds, namely, for all $\delta >0$,
(27)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \frac{1}{n}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big(\| \boldsymbol{U}_{k}{\| }^{2}\mathbf{1}_{\{\| \boldsymbol{U}_{k}\| >\delta \sqrt{n}\}}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)& \displaystyle \leqslant \frac{1}{\delta {n}^{3/2}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big(\| \boldsymbol{U}_{k}{\| }^{3}\mid {\mathcal{F}_{k-1}^{\boldsymbol{X}}}\big)\\{} & \displaystyle \hspace{0.2778em}\leqslant \frac{C_{3}{(p+1)}^{3}}{\delta {n}^{3/2}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\left\| \left[\begin{array}{c}\boldsymbol{X}_{k-1}\\{} 1\end{array}\right]\right\| }^{3}\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }0\end{array}\]
with $C_{3}:=\max \{\mathbb{E}(\| {\boldsymbol{\xi }}^{(i)}-\mathbb{E}({\boldsymbol{\xi }}^{(i)}){\| }^{3}),i\in \{1,\dots ,p\},\mathbb{E}(\| \boldsymbol{\varepsilon }-\mathbb{E}(\boldsymbol{\varepsilon }){\| }^{3})\}$, where the last inequality follows by Proposition 3.3 of Nedényi [12], and the almost sure convergence is a consequence of (4), since, under the third order moment assumptions in Proposition 2, by Lemma 1 and (4),
\[ \frac{1}{n}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\left\| \left[\begin{array}{c}\boldsymbol{X}_{k-1}\\{} 1\end{array}\right]\right\| }^{3}\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }t\mathbb{E}\left({\left\| \left[\begin{array}{c}\boldsymbol{X}_{0}\\{} 1\end{array}\right]\right\| }^{3}\right)\hspace{1em}\text{as}\hspace{5pt}n\to \infty \text{.}\]
Hence we obtain
\[ \Bigg(\frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\boldsymbol{U}_{k}\Bigg)_{t\in \mathbb{R}_{+}}\stackrel{\mathcal{D}}{\longrightarrow }\boldsymbol{B}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$. Using (24), we have
\[ \boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})={\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)+{\sum \limits_{j=1}^{k}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k-j}}\boldsymbol{U}_{j},\hspace{1em}k\in \mathbb{N}.\]
Consequently, for each $n\in \mathbb{N}$ and $t\in \mathbb{R}_{+}$,
(28)
\[ \begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big)\\{} & \displaystyle \hspace{1em}=\frac{1}{\sqrt{n}}\Bigg[\Bigg({\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\Bigg)\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)+{\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\sum \limits_{j=1}^{k}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k-j}}\boldsymbol{U}_{j}\Bigg]\\{} & \displaystyle \hspace{1em}=\frac{1}{\sqrt{n}}\Bigg[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)+{\sum \limits_{j=1}^{\lfloor nt\rfloor }}\Bigg({\sum \limits_{k=j}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k-j}}\Bigg)\boldsymbol{U}_{j}\Bigg]\\{} & \displaystyle \hspace{1em}=\frac{1}{\sqrt{n}}\Bigg[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)\\{} & \displaystyle \hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{1em}\hspace{2em}+{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}{\sum \limits_{j=1}^{\lfloor nt\rfloor }}\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -j+1}}\big)\boldsymbol{U}_{j}\Bigg],\end{array}\]
which implies the statement using Slutsky’s lemma, since $\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$. Indeed, $\lim _{n\to \infty }{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}=\mathbf{0}$ by (26), hence
\[ \frac{1}{\sqrt{n}}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)\big(\boldsymbol{X}_{0}-\mathbb{E}(\boldsymbol{X}_{0})\big)\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }\mathbf{0}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
Moreover, ${n}^{-1/2}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}{\sum _{j=1}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -j+1}}\boldsymbol{U}_{j}$ converges in $L_{1}$ and hence in probability to $\mathbf{0}$ as $n\to \infty $, since by (23),
(29)
\[ \mathbb{E}\big(|U_{k,j}|\big)\leqslant \sqrt{\mathbb{E}\big({U_{k,j}^{2}}\big)}=\sqrt{{\boldsymbol{v}_{(j,j)}^{\top }}\left[\begin{array}{c}\mathbb{E}(\boldsymbol{X}_{0})\\{} 1\end{array}\right]}=\sqrt{V_{j,j}}\]
for $j\in \{1,\dots ,p\}$ and $k\in \mathbb{N}$, and hence
(30)
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\Bigg(\Bigg\| \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\boldsymbol{U}_{k}\Bigg\| \Bigg)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big(\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\boldsymbol{U}_{k}\big\| \big)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\big\| \mathbb{E}\big(\| \boldsymbol{U}_{k}\| \big)\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\big\| {\sum \limits_{j=1}^{p}}\mathbb{E}\big(|U_{k,j}|\big)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\big\| {\sum \limits_{j=1}^{p}}\sqrt{V_{j,j}}\to 0\hspace{1em}\text{as}\hspace{5pt}n\to \infty \text{,}\end{array}\]
since, applying (26) for $\lfloor nt\rfloor \geqslant k_{0}$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -k+1}}\big\| \\{} & \displaystyle \hspace{1em}={\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| ={\sum \limits_{k=1}^{k_{0}-1}}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| +{\sum \limits_{k=k_{0}}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| \\{} & \displaystyle \hspace{1em}\leqslant {\sum \limits_{k=1}^{k_{0}-1}}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| +{\sum \limits_{k=k_{0}}^{\lfloor nt\rfloor }}{\bigg(\frac{1+\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})}{2}\bigg)}^{k}\leqslant {\sum \limits_{k=1}^{k_{0}-1}}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big\| +{\sum \limits_{k=k_{0}}^{\infty }}{\bigg(\frac{1+\varrho (\boldsymbol{M}_{\boldsymbol{\xi }})}{2}\bigg)}^{k},\end{array}\]
which is finite. Consequently, by Slutsky’s lemma,
\[ \Bigg({n}^{-\frac{1}{2}}{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\big(\boldsymbol{X}_{k}-\mathbb{E}(\boldsymbol{X}_{k})\big)\Bigg)_{t\in \mathbb{R}_{+}}\stackrel{\mathcal{D}}{\longrightarrow }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{B}\hspace{0.1667em}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$, as desired. □

Proof of Theorem 1.
First, we prove (8). For all $N,m\in \mathbb{N}$ and all $t_{1},\dots ,t_{m}\in \mathbb{R}_{+}$, by Proposition 2 and the continuity theorem, we have
\[ \frac{1}{\sqrt{n}}\big({\boldsymbol{S}_{t_{1}}^{(N,n)}},\dots ,{\boldsymbol{S}_{t_{m}}^{(N,n)}}\big)\stackrel{\mathcal{D}}{\longrightarrow }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}{\sum \limits_{\ell =1}^{N}}\big({\boldsymbol{B}_{t_{1}}^{(\ell )}},\dots ,{\boldsymbol{B}_{t_{m}}^{(\ell )}}\big)\]
as $n\hspace{0.1667em}\to \hspace{0.1667em}\infty $, where ${\boldsymbol{B}}^{(\ell )}\hspace{0.1667em}=\hspace{0.1667em}({\boldsymbol{B}_{t}^{(\ell )}})_{t\in \mathbb{R}_{+}}$, $\ell \hspace{0.1667em}\in \hspace{0.1667em}\{1,\dots ,N\}$, are independent p-dimensional zero mean Brownian motions satisfying $\operatorname{Var}({\boldsymbol{B}_{1}^{(\ell )}})=\boldsymbol{V}$, $\ell \in \{1,\dots ,N\}$. Since
\[ \frac{1}{\sqrt{N}}{\sum \limits_{\ell =1}^{N}}\big({\boldsymbol{B}_{t_{1}}^{(\ell )}},\dots ,{\boldsymbol{B}_{t_{m}}^{(\ell )}}\big)\stackrel{\mathcal{D}}{=}(\boldsymbol{B}_{t_{1}},\dots ,\boldsymbol{B}_{t_{m}}),\hspace{1em}N\in \mathbb{N},\hspace{2.5pt}m\in \mathbb{N},\]
we obtain the convergence (8).

Now we turn to the proof of (7). For all $n\in \mathbb{N}$ and for all $t_{1},\dots ,t_{m}\in \mathbb{R}_{+}$ with $t_{1}<\cdots <t_{m}$, $m\in \mathbb{N}$, by Proposition 1 and by the continuous mapping theorem, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{\sqrt{N}}{\big({\big({\boldsymbol{S}_{t_{1}}^{(N,n)}}\big)}^{\top },\dots ,{\big({\boldsymbol{S}_{t_{m}}^{(N,n)}}\big)}^{\top }\big)}^{\top }\stackrel{\mathcal{D}}{\longrightarrow }{\Bigg({\sum \limits_{k=1}^{\lfloor nt_{1}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }},\dots ,{\sum \limits_{k=1}^{\lfloor nt_{m}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }}\Bigg)}^{\top }\\{} & \displaystyle \stackrel{\mathcal{D}}{=}\mathcal{N}_{pm}\Bigg(\mathbf{0},\operatorname{Var}\Bigg({\Bigg({\sum \limits_{k=1}^{\lfloor nt_{1}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }},\dots ,{\sum \limits_{k=1}^{\lfloor nt_{m}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }}\Bigg)}^{\top }\Bigg)\Bigg)\hspace{1em}\text{as}\hspace{5pt}N\to \infty \text{,}\end{array}\]
where $(\pmb{\mathcal{X}}_{k})_{k\in \mathbb{Z}_{+}}$ is the p-dimensional zero mean stationary Gaussian process given in Proposition 1 and, by (5),
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \operatorname{Var}\Bigg({\Bigg({\sum \limits_{k=1}^{\lfloor nt_{1}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }},\dots ,{\sum \limits_{k=1}^{\lfloor nt_{m}\rfloor }}{\pmb{\mathcal{X}}_{k}^{\top }}\Bigg)}^{\top }\Bigg)\\{} & \displaystyle \hspace{1em}={\Bigg(\operatorname{Cov}\Bigg({\sum \limits_{k=1}^{\lfloor nt_{i}\rfloor }}\pmb{\mathcal{X}}_{k},{\sum \limits_{k=1}^{\lfloor nt_{j}\rfloor }}\pmb{\mathcal{X}}_{k}\Bigg)\Bigg)_{i,j=1}^{m}}\\{} & \displaystyle \hspace{1em}={\Bigg({\sum \limits_{k=1}^{\lfloor nt_{i}\rfloor }}{\sum \limits_{\ell =1}^{\lfloor nt_{j}\rfloor }}\operatorname{Cov}(\pmb{\mathcal{X}}_{k},\pmb{\mathcal{X}}_{\ell })\Bigg)_{i,j=1}^{m}}\\{} & \displaystyle \hspace{1em}=\Bigg({\sum \limits_{k=1}^{\lfloor nt_{i}\rfloor }}{\sum \limits_{\ell =1}^{(k-1)\wedge \lfloor nt_{j}\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k-\ell }}\operatorname{Var}(\boldsymbol{X}_{0})+\big(\lfloor nt_{i}\rfloor \wedge \lfloor nt_{j}\rfloor \big)\operatorname{Var}(\boldsymbol{X}_{0})\\{} & \displaystyle \hspace{2.5pt}\hspace{2em}+\operatorname{Var}(\boldsymbol{X}_{0}){\sum \limits_{k=1}^{\lfloor nt_{i}\rfloor }}{\sum \limits_{\ell =k+1}^{\lfloor nt_{j}\rfloor }}{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\ell -k}\Bigg){_{i,j=1}^{m}},\end{array}\]
where ${\sum _{\ell =q_{1}}^{q_{2}}}:=0$ for all $q_{2}<q_{1}$, $q_{1},q_{2}\in \mathbb{N}$. By the continuity theorem, for all $\boldsymbol{\theta }_{1},\dots ,\boldsymbol{\theta }_{m}\in {\mathbb{R}}^{p}$, $m\in \mathbb{N}$, we conclude that
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{N\to \infty }{\lim }\mathbb{E}\Bigg(\exp \Bigg\{\mathrm{i}{\sum \limits_{j=1}^{m}}{\boldsymbol{\theta }_{j}^{\top }}{n}^{-1/2}{N}^{-1/2}{\boldsymbol{S}_{t_{j}}^{(N,n)}}\Bigg\}\Bigg)\\{} & \displaystyle \hspace{1em}=\exp \Bigg\{-\frac{1}{2n}{\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{m}}{\boldsymbol{\theta }_{i}^{\top }}\Bigg[{\sum \limits_{k=1}^{\lfloor nt_{i}\rfloor }}{\sum \limits_{\ell =1}^{\lfloor nt_{j}\rfloor }}\operatorname{Cov}(\pmb{\mathcal{X}}_{k},\pmb{\mathcal{X}}_{\ell })\Bigg]\boldsymbol{\theta }_{j}\Bigg\}\\{} & \displaystyle \hspace{1em}\to \exp \Bigg\{-\frac{1}{2}{\sum \limits_{i=1}^{m}}{\sum \limits_{j=1}^{m}}(t_{i}\wedge t_{j}){\boldsymbol{\theta }_{i}^{\top }}\big[\boldsymbol{M}_{\boldsymbol{\xi }}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})\\{} & \displaystyle \hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\hspace{1em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big]\boldsymbol{\theta }_{j}\Bigg\}\end{array}\]
as $n\to \infty $. Indeed, for all $s,t\in \mathbb{R}_{+}$ with $s<t$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{n}{\sum \limits_{k=1}^{\lfloor ns\rfloor }}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}\operatorname{Cov}(\pmb{\mathcal{X}}_{k},\pmb{\mathcal{X}}_{\ell })\\{} & \displaystyle \hspace{1em}=\frac{1}{n}{\sum \limits_{k=1}^{\lfloor ns\rfloor }}{\sum \limits_{\ell =1}^{k-1}}{\boldsymbol{M}_{\boldsymbol{\xi }}^{k-\ell }}\operatorname{Var}(\boldsymbol{X}_{0})+\frac{\lfloor ns\rfloor }{n}\operatorname{Var}(\boldsymbol{X}_{0})+\frac{1}{n}\operatorname{Var}(\boldsymbol{X}_{0}){\sum \limits_{k=1}^{\lfloor ns\rfloor }}{\sum \limits_{\ell =k+1}^{\lfloor nt\rfloor }}{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\ell -k}\\{} & \displaystyle \hspace{1em}=\frac{1}{n}{\sum \limits_{k=1}^{\lfloor ns\rfloor }}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\big){(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})+\frac{\lfloor ns\rfloor }{n}\operatorname{Var}(\boldsymbol{X}_{0})\\{} & \displaystyle \hspace{2em}+\frac{1}{n}\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}{\sum \limits_{k=1}^{\lfloor ns\rfloor }}\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}-{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\lfloor nt\rfloor -k+1}\big)\\{} & \displaystyle \hspace{1em}=\frac{1}{n}\big(\lfloor ns\rfloor \boldsymbol{M}_{\boldsymbol{\xi }}-\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor ns\rfloor }}\big){(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big){(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})\\{} & \displaystyle \hspace{2em}+\frac{\lfloor ns\rfloor }{n}\operatorname{Var}(\boldsymbol{X}_{0})+\frac{1}{n}\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\\{} & 
\displaystyle \hspace{1em}\hspace{2em}\times \big(\lfloor ns\rfloor {\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}-{\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\big(\boldsymbol{I}_{p}-{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\lfloor ns\rfloor }\big){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\lfloor nt\rfloor -\lfloor ns\rfloor +1}\big)\\{} & \displaystyle \hspace{1em}=\frac{\lfloor ns\rfloor }{n}\big(\boldsymbol{M}_{\boldsymbol{\xi }}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)\\{} & \displaystyle \hspace{2em}-\frac{1}{n}\big(\boldsymbol{M}_{\boldsymbol{\xi }}\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor ns\rfloor }}\big){(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-2}\operatorname{Var}(\boldsymbol{X}_{0})\\{} & \displaystyle \hspace{2em}\hspace{2em}+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-2}\big(\boldsymbol{I}_{p}-{\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\lfloor ns\rfloor }\big){\big({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{\lfloor nt\rfloor -\lfloor ns\rfloor +1}\big)\\{} & \displaystyle \hspace{1em}\to s\big(\boldsymbol{M}_{\boldsymbol{\xi }}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)\end{array}\]
as $n\to \infty $, since $\lim _{n\to \infty }{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor ns\rfloor }}=\mathbf{0}$, $\lim _{n\to \infty }{({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{\lfloor ns\rfloor }=\mathbf{0}$ and $\lim _{n\to \infty }{({\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{\lfloor nt\rfloor -\lfloor ns\rfloor +1}=\mathbf{0}$ by (26). It remains to show that
(31)
\[ \begin{array}{r@{\hskip0pt}l}& \displaystyle \boldsymbol{M}_{\boldsymbol{\xi }}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\\{} & \displaystyle \hspace{1em}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{V}{\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}.\end{array}\]
We have
(32)
\[ \begin{array}{r@{\hskip0pt}l}\displaystyle \boldsymbol{M}_{\boldsymbol{\xi }}{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}& \displaystyle =\big(\boldsymbol{I}_{p}-(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})\big){(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\\{} & \displaystyle ={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}-\boldsymbol{I}_{p},\end{array}\]
and hence ${(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{-1}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}={(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }})}^{-1}-\boldsymbol{I}_{p}$, thus the left-hand side of equation (31) can be written as
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \big({(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}-\boldsymbol{I}_{p}\big)\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0})\big({\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}-\boldsymbol{I}_{p}\big)\\{} & \displaystyle \hspace{1em}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})-\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}.\end{array}\]
By (25), we have $\boldsymbol{V}=\operatorname{Var}(\boldsymbol{X}_{0})-\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{0}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}$, hence, by (32), the right-hand side of the equation (31) can be written as
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\operatorname{Var}(\boldsymbol{X}_{0})-\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{0}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\\{} & \displaystyle \hspace{1em}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\\{} & \displaystyle \hspace{2em}-{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{M}_{\boldsymbol{\xi }}\operatorname{Var}(\boldsymbol{X}_{0}){\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}{\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\\{} & \displaystyle \hspace{1em}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}\\{} & \displaystyle \hspace{2em}-\big({(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}-\boldsymbol{I}_{p}\big)\operatorname{Var}(\boldsymbol{X}_{0})\big({\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1}-\boldsymbol{I}_{p}\big)\\{} & \displaystyle \hspace{1em}={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\operatorname{Var}(\boldsymbol{X}_{0})-\operatorname{Var}(\boldsymbol{X}_{0})+\operatorname{Var}(\boldsymbol{X}_{0}){\big(\boldsymbol{I}_{p}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\top }}\big)}^{-1},\end{array}\]
and we conclude (31). This implies the convergence (7). □

Proof of Theorem 2.
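Before the argument, we note that the matrix identity (31) established above also admits a quick numerical confirmation (toy matrices chosen for illustration only; $\Sigma$ plays the role of $\operatorname{Var}(\boldsymbol{X}_{0})$):

```python
import numpy as np

# Illustrative parameters (assumptions): M with rho(M) < 1, V symmetric psd.
M = np.array([[0.4, 0.2],
              [0.1, 0.3]])
V = np.array([[1.0, 0.3],
              [0.3, 2.0]])
I = np.eye(2)

# Sigma approximates Var(X_0), the solution of  Sigma = V + M Sigma M^T.
Sigma = np.zeros((2, 2))
Mk = np.eye(2)
for _ in range(200):
    Sigma += Mk @ V @ Mk.T
    Mk = Mk @ M

# Identity (31): M(I-M)^{-1} Sigma + Sigma + Sigma (I-M^T)^{-1} M^T
#              = (I-M)^{-1} V (I-M^T)^{-1}.
lhs = M @ np.linalg.inv(I - M) @ Sigma + Sigma + Sigma @ np.linalg.inv(I - M.T) @ M.T
rhs = np.linalg.inv(I - M) @ V @ np.linalg.inv(I - M.T)
assert np.allclose(lhs, rhs)
```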
As n and N converge to infinity simultaneously, (9) is equivalent to ${(nN_{n})}^{-\frac{1}{2}}{\boldsymbol{S}}^{(N_{n},n)}\stackrel{\mathcal{D}}{\longrightarrow }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\hspace{0.1667em}\boldsymbol{B}$ as $n\to \infty $ for any sequence $(N_{n})_{n\in \mathbb{N}}$ of positive integers such that $\lim _{n\to \infty }N_{n}=\infty $. As we have seen in the proof of Proposition 2, for each $j\in \mathbb{N}$,
\[ {\boldsymbol{U}_{k}^{(j)}}:={\boldsymbol{X}_{k}^{(j)}}-\mathbb{E}\big({\boldsymbol{X}_{k}^{(j)}}\mid {\mathcal{F}_{k-1}^{{\boldsymbol{X}}^{(j)}}}\big)={\boldsymbol{X}_{k}^{(j)}}-\boldsymbol{M}_{\boldsymbol{\xi }}{\boldsymbol{X}_{k-1}^{(j)}}-\boldsymbol{m}_{\boldsymbol{\varepsilon }},\hspace{1em}k\in \mathbb{N},\]
are martingale differences with respect to the filtration $({\mathcal{F}_{k}^{{\boldsymbol{X}}^{(j)}}})_{k\in \mathbb{Z}_{+}}$. We are going to apply the functional martingale central limit theorem, see, e.g., Jacod and Shiryaev [7, Theorem VIII.3.33], for the triangular array consisting of the random vectors
\[ \big({\boldsymbol{V}_{k}^{(n)}}\big)_{k\in \mathbb{N}}:={(nN_{n})}^{-\frac{1}{2}}\big({\boldsymbol{U}_{1}^{(1)}},\dots ,{\boldsymbol{U}_{1}^{(N_{n})}}\hspace{-0.1667em},{\boldsymbol{U}_{2}^{(1)}},\dots ,{\boldsymbol{U}_{2}^{(N_{n})}}\hspace{-0.1667em},{\boldsymbol{U}_{3}^{(1)}},\dots ,{\boldsymbol{U}_{3}^{(N_{n})}}\hspace{-0.1667em},\dots \big)\]
in the ${n}^{\mathrm{th}}$ row for each $n\in \mathbb{N}$ with the filtration $({\mathcal{F}_{k}^{(n)}})_{k\in \mathbb{Z}_{+}}$ given by ${\mathcal{F}_{k}^{(n)}}:={\mathcal{F}_{k}^{{\boldsymbol{Y}}^{(n)}}}=\sigma ({\boldsymbol{Y}_{0}^{(n)}},\dots ,{\boldsymbol{Y}_{k}^{(n)}})$, where
\[ \big({\boldsymbol{Y}_{k}^{(n)}}\big)_{k\in \mathbb{Z}_{+}}:=\big(\big({\boldsymbol{X}_{0}^{(1)}},\dots ,{\boldsymbol{X}_{0}^{(N_{n})}}\big),{\boldsymbol{X}_{1}^{(1)}},\dots ,{\boldsymbol{X}_{1}^{(N_{n})}},{\boldsymbol{X}_{2}^{(1)}},\dots ,{\boldsymbol{X}_{2}^{(N_{n})}},\dots \big).\]
Hence ${\mathcal{F}_{0}^{(n)}}=\sigma ({\boldsymbol{X}_{0}^{(1)}},\dots ,{\boldsymbol{X}_{0}^{(N_{n})}})$, and for each $k=\ell N_{n}+r$ with $\ell \in \mathbb{Z}_{+}$ and $r\in \{1,\dots ,N_{n}\}$, we have
\[ {\mathcal{F}_{k}^{(n)}}=\sigma \big(\big({\cup _{j=1}^{r}}{\mathcal{F}_{\ell +1}^{{\boldsymbol{X}}^{(j)}}}\big)\cup \big({\cup _{j=r+1}^{N_{n}}}{\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(j)}}}\big)\big),\]
where ${\cup _{j=N_{n}+1}^{N_{n}}}:=\varnothing $. Moreover, ${\boldsymbol{Y}_{0}^{(n)}}=({\boldsymbol{X}_{0}^{(1)}},\dots ,{\boldsymbol{X}_{0}^{(N_{n})}})$, and for $k=\ell N_{n}+r$ with $\ell \in \mathbb{Z}_{+}$ and $r\in \{1,\dots ,N_{n}\}$, we have ${\boldsymbol{Y}_{k}^{(n)}}={\boldsymbol{X}_{\ell +1}^{(r)}}$ and ${\boldsymbol{V}_{k}^{(n)}}={(nN_{n})}^{-\frac{1}{2}}{\boldsymbol{U}_{\ell +1}^{(r)}}$.

Next we check that for each $n\in \mathbb{N}$, $({\boldsymbol{V}_{k}^{(n)}})_{k\in \mathbb{N}}$ is a sequence of martingale differences with respect to $({\mathcal{F}_{k}^{(n)}})_{k\in \mathbb{Z}_{+}}$. We will use the equality $\mathbb{E}(\boldsymbol{\xi }\mid \sigma (\mathcal{G}_{1}\cup \mathcal{G}_{2}))=\mathbb{E}(\boldsymbol{\xi }\mid \mathcal{G}_{1})$ for a random vector $\boldsymbol{\xi }$ and for σ-algebras $\mathcal{G}_{1}\subset \mathcal{F}$ and $\mathcal{G}_{2}\subset \mathcal{F}$ such that $\sigma (\sigma (\boldsymbol{\xi })\cup \mathcal{G}_{1})$ and $\mathcal{G}_{2}$ are independent and $\mathbb{E}(\| \boldsymbol{\xi }\| )<\infty $. For each $k=\ell N_{n}+1$ with $\ell \in \mathbb{Z}_{+}$, we have $\mathbb{E}({\boldsymbol{V}_{k}^{(n)}}\mid {\mathcal{F}_{k-1}^{(n)}})={(nN_{n})}^{-\frac{1}{2}}\mathbb{E}({\boldsymbol{U}_{\ell +1}^{(1)}}\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(1)}}})=\mathbf{0}$, since
\[ \mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(1)}}\mid {\mathcal{F}_{k-1}^{(n)}}\big)=\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(1)}}\mid \sigma \big({\cup _{j=1}^{N_{n}}}{\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(j)}}}\big)\big)=\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(1)}}\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(1)}}}\big)=\mathbf{0}.\]
In a similar way, for each $k=\ell N_{n}+r$ with $\ell \in \mathbb{Z}_{+}$ and $r\in \{2,\dots ,N_{n}\}$, we have $\mathbb{E}({\boldsymbol{V}_{k}^{(n)}}\mid {\mathcal{F}_{k-1}^{(n)}})={(nN_{n})}^{-\frac{1}{2}}\mathbb{E}({\boldsymbol{U}_{\ell +1}^{(r)}}\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(r)}}})=\mathbf{0}$, since
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(r)}}\mid {\mathcal{F}_{k-1}^{(n)}}\big)& \displaystyle =\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(r)}}\mid \sigma \big(\big({\cup _{j=1}^{r-1}}{\mathcal{F}_{\ell +1}^{{\boldsymbol{X}}^{(j)}}}\big)\cup \big({\cup _{j=r}^{N_{n}}}{\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(j)}}}\big)\big)\big)\\{} & \displaystyle =\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(r)}}\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(r)}}}\big)=\mathbf{0}.\end{array}\]
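(The reindexing used above, writing each $k\in \mathbb{N}$ uniquely as $k=\ell N_{n}+r$ with $\ell \in \mathbb{Z}_{+}$ and $r\in \{1,\dots ,N_{n}\}$, can be verified with the following short, purely illustrative script; the function name is ours, not from the paper.)

```python
def decompose(k, N):
    """Write k >= 1 uniquely as k = ell*N + r with ell >= 0 and 1 <= r <= N."""
    ell, r = divmod(k - 1, N)  # shift by 1 so that r ranges over 1..N instead of 0..N-1
    return ell, r + 1

# every k in 1..12 with N = 3 is recovered from its (ell, r) pair
for k in range(1, 13):
    ell, r = decompose(k, 3)
    assert k == ell * 3 + r and 0 <= ell and 1 <= r <= 3
```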
We want to obtain a functional central limit theorem for the sequence
\[ \Bigg({\sum \limits_{k=1}^{\lfloor nt\rfloor N_{n}}}{\boldsymbol{V}_{k}^{(n)}}\Bigg)_{t\in \mathbb{R}_{+}}=\Bigg(\frac{1}{\sqrt{nN_{n}}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{\ell }^{(r)}}\Bigg)_{t\in \mathbb{R}_{+}},\hspace{1em}n\in \mathbb{N}.\]
First, we calculate the conditional variance matrix of ${\boldsymbol{V}_{k}^{(n)}}$. If $k=\ell N_{n}+1$ with $\ell \in \mathbb{Z}_{+}$, then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}\big({\boldsymbol{V}_{k}^{(n)}}{\big({\boldsymbol{V}_{k}^{(n)}}\big)}^{\top }\mid {\mathcal{F}_{k-1}^{(n)}}\big)& \displaystyle ={(nN_{n})}^{-1}\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(1)}}{\big({\boldsymbol{U}_{\ell +1}^{(1)}}\big)}^{\top }\mid \sigma \big({\cup _{j=1}^{N_{n}}}{\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(j)}}}\big)\big)\\{} & \displaystyle ={(nN_{n})}^{-1}\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(1)}}{\big({\boldsymbol{U}_{\ell +1}^{(1)}}\big)}^{\top }\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(1)}}}\big).\end{array}\]
In a similar way, if $k=\ell N_{n}+r$ with $\ell \in \mathbb{Z}_{+}$ and $r\in \{2,\dots ,N_{n}\}$, then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\big({\boldsymbol{V}_{k}^{(n)}}{\big({\boldsymbol{V}_{k}^{(n)}}\big)}^{\top }\mid {\mathcal{F}_{k-1}^{(n)}}\big)\\{} & \displaystyle \hspace{1em}={(nN_{n})}^{-1}\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(r)}}{\big({\boldsymbol{U}_{\ell +1}^{(r)}}\big)}^{\top }\mid \sigma \big(\big({\cup _{j=1}^{r-1}}{\mathcal{F}_{\ell +1}^{{\boldsymbol{X}}^{(j)}}}\big)\cup \big({\cup _{j=r}^{N_{n}}}{\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(j)}}}\big)\big)\big)\\{} & \displaystyle \hspace{1em}={(nN_{n})}^{-1}\mathbb{E}\big({\boldsymbol{U}_{\ell +1}^{(r)}}{\big({\boldsymbol{U}_{\ell +1}^{(r)}}\big)}^{\top }\mid {\mathcal{F}_{\ell }^{{\boldsymbol{X}}^{(r)}}}\big).\end{array}\]
Consequently, for each $n\in \mathbb{N}$ and $t\in \mathbb{R}_{+}$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\sum \limits_{k=1}^{\lfloor nt\rfloor N_{n}}}\mathbb{E}\big({\boldsymbol{V}_{k}^{(n)}}{\big({\boldsymbol{V}_{k}^{(n)}}\big)}^{\top }\mid {\mathcal{F}_{k-1}^{(n)}}\big)\\{} & \displaystyle \hspace{1em}={\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}\mathbb{E}\big({\boldsymbol{V}_{(\ell -1)N_{n}+r}^{(n)}}{\big({\boldsymbol{V}_{(\ell -1)N_{n}+r}^{(n)}}\big)}^{\top }\mid {\mathcal{F}_{(\ell -1)N_{n}+r-1}^{(n)}}\big)\\{} & \displaystyle \hspace{1em}=\frac{1}{nN_{n}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}\mathbb{E}\big({\boldsymbol{U}_{\ell }^{(r)}}{\big({\boldsymbol{U}_{\ell }^{(r)}}\big)}^{\top }\mid {\mathcal{F}_{\ell -1}^{{\boldsymbol{X}}^{(r)}}}\big).\end{array}\]
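As a purely illustrative numerical check of these conditional moment calculations (not part of the proof), one may simulate the single-type case $p=1$ with Bernoulli$(\alpha)$ offspring and Poisson$(\lambda)$ immigration — an INAR(1) process, for which $\boldsymbol{M}_{\boldsymbol{\xi }}=\alpha $ and $\boldsymbol{m}_{\varepsilon }=\lambda $; the parameter values below are arbitrary — and verify that the martingale differences $U_{k}=X_{k}-\alpha X_{k-1}-\lambda $ average to approximately zero:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, n = 0.5, 2.0, 100_000  # offspring mean, immigration mean, sample size

X = np.empty(n + 1)
X[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean lam/(1-alpha)
for k in range(n):
    # INAR(1) step: Bernoulli(alpha) thinning of X[k] plus Poisson(lam) immigration
    X[k + 1] = rng.binomial(int(X[k]), alpha) + rng.poisson(lam)

U = X[1:] - alpha * X[:-1] - lam  # martingale differences U_k
print(abs(U.mean()))  # small: U_k has conditional mean 0
```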
Next, we show that for each $t\in \mathbb{R}_{+}$ and $i,j\in \{1,\dots ,p\}$, we have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \frac{1}{nN_{n}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}\mathbb{E}\big({U_{\ell ,i}^{(r)}}{U_{\ell ,j}^{(r)}}\mid {\mathcal{F}_{\ell -1}^{{\boldsymbol{X}}^{(r)}}}\big)& \displaystyle =\frac{1}{nN_{n}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}{\boldsymbol{X}_{\ell -1}^{(r)}}\\{} 1\end{array}\right]\\{} & \displaystyle \hspace{0.2778em}\stackrel{\mathbb{P}}{\longrightarrow }{\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}\mathbb{E}(\boldsymbol{X}_{0})\\{} 1\end{array}\right]t=V_{i,j}t\end{array}\]
as $n\to \infty $. Indeed, the equality follows by (23), and for the convergence in probability, note that $\lim _{n\to \infty }\frac{\lfloor nt\rfloor }{n}=t$, $t\in \mathbb{R}_{+}$, and, by the Cauchy–Schwarz inequality,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \mathbb{E}\Bigg({\Bigg(\frac{1}{\lfloor nt\rfloor N_{n}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{v}_{(i,j)}^{\top }}\left[\begin{array}{c}{\boldsymbol{X}_{\ell -1}^{(r)}}-\mathbb{E}(\boldsymbol{X}_{0})\\{} 0\end{array}\right]\Bigg)}^{2}\Bigg)\\{} & \displaystyle \hspace{1em}=\frac{1}{{\lfloor nt\rfloor }^{2}{N_{n}^{2}}}\mathbb{E}\Bigg(\Bigg({\boldsymbol{v}_{(i,j)}^{\top }}{\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{r_{1}=1}^{N_{n}}}\left[\begin{array}{c}{\boldsymbol{X}_{\ell _{1}-1}^{(r_{1})}}-\mathbb{E}(\boldsymbol{X}_{0})\\{} 0\end{array}\right]\Bigg)\\{} & \displaystyle \hspace{1em}\hspace{2em}\hspace{2em}\hspace{2em}\times \Bigg({\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}{\sum \limits_{r_{2}=1}^{N_{n}}}{\left[\begin{array}{c}{\boldsymbol{X}_{\ell _{2}-1}^{(r_{2})}}-\mathbb{E}(\boldsymbol{X}_{0})\\{} 0\end{array}\right]}^{\top }\boldsymbol{v}_{(i,j)}\Bigg)\Bigg)\\{} & \displaystyle \hspace{1em}=\frac{1}{{\lfloor nt\rfloor }^{2}{N_{n}^{2}}}\\{} & \displaystyle \hspace{2em}\times {\boldsymbol{v}_{(i,j)}^{\top }}{\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}{\sum \limits_{r_{1}=1}^{N_{n}}}{\sum \limits_{r_{2}=1}^{N_{n}}}\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}(({\boldsymbol{X}_{\ell _{1}-1}^{(r_{1})}}-\mathbb{E}(\boldsymbol{X}_{0})){({\boldsymbol{X}_{\ell _{2}-1}^{(r_{2})}}-\mathbb{E}(\boldsymbol{X}_{0}))}^{\top })& \mathbf{0}\\{} \mathbf{0}& 0\end{array}\right]\boldsymbol{v}_{(i,j)}\\{} & \displaystyle \hspace{1em}=\frac{1}{{\lfloor nt\rfloor }^{2}N_{n}}{\boldsymbol{v}_{(i,j)}^{\top }}{\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}\left[\begin{array}{c@{\hskip10.0pt}c}\mathbb{E}((\boldsymbol{X}_{\ell _{1}-1}-\mathbb{E}(\boldsymbol{X}_{0})){(\boldsymbol{X}_{\ell _{2}-1}-\mathbb{E}(\boldsymbol{X}_{0}))}^{\top })& \mathbf{0}\\{} \mathbf{0}& 0\end{array}\right]\boldsymbol{v}_{(i,j)}\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{{\lfloor nt\rfloor }^{2}N_{n}}\| \boldsymbol{v}_{(i,j)}{\| }^{2}{\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}\mathbb{E}\big(\big\| \big(\boldsymbol{X}_{\ell _{1}-1}-\mathbb{E}(\boldsymbol{X}_{0})\big){\big(\boldsymbol{X}_{\ell _{2}-1}-\mathbb{E}(\boldsymbol{X}_{0})\big)}^{\top }\big\| \big)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{{\lfloor nt\rfloor }^{2}N_{n}}\| \boldsymbol{v}_{(i,j)}{\| }^{2}\\{} & \displaystyle \hspace{2em}\times {\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}{\sum \limits_{m_{1}=1}^{p}}{\sum \limits_{m_{2}=1}^{p}}\mathbb{E}\big(\big|\big(X_{\ell _{1}-1,m_{1}}-\mathbb{E}(X_{0,m_{1}})\big)\big(X_{\ell _{2}-1,m_{2}}-\mathbb{E}(X_{0,m_{2}})\big)\big|\big)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{{\lfloor nt\rfloor }^{2}N_{n}}\| \boldsymbol{v}_{(i,j)}{\| }^{2}{\sum \limits_{\ell _{1}=1}^{\lfloor nt\rfloor }}{\sum \limits_{\ell _{2}=1}^{\lfloor nt\rfloor }}{\sum \limits_{m_{1}=1}^{p}}{\sum \limits_{m_{2}=1}^{p}}\sqrt{\operatorname{Var}(X_{\ell _{1}-1,m_{1}})\operatorname{Var}(X_{\ell _{2}-1,m_{2}})}\\{} & \displaystyle \hspace{1em}=\frac{1}{N_{n}}\| \boldsymbol{v}_{(i,j)}{\| }^{2}{\sum \limits_{m_{1}=1}^{p}}{\sum \limits_{m_{2}=1}^{p}}\sqrt{\operatorname{Var}(X_{0,m_{1}})\operatorname{Var}(X_{0,m_{2}})}\to 0\hspace{1em}\text{as}\hspace{5pt}n\to \infty \text{,}\end{array}\]
where we used that $\| \boldsymbol{Q}\| \leqslant {\sum _{i=1}^{p}}{\sum _{j=1}^{p}}|q_{i,j}|$ for any matrix $\boldsymbol{Q}={(q_{i,j})_{i,j=1}^{p}}\in {\mathbb{R}}^{p\times p}$.

Moreover, in a similar way, the conditional Lindeberg condition holds, namely, for all $\delta >0$,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\sum \limits_{k=1}^{\lfloor nt\rfloor N_{n}}}\mathbb{E}\big({\big\| {\boldsymbol{V}_{k}^{(n)}}\big\| }^{2}\mathbb{1}_{\{\| {\boldsymbol{V}_{k}^{(n)}}\| >\delta \}}\mid {\mathcal{F}_{k-1}^{(n)}}\big)\\{} & \displaystyle \hspace{1em}=\frac{1}{nN_{n}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}\mathbb{E}\big({\big\| {\boldsymbol{U}_{\ell }^{(r)}}\big\| }^{2}\mathbb{1}_{\{\| {\boldsymbol{U}_{\ell }^{(r)}}\| >\delta \sqrt{nN_{n}}\}}\mid {\mathcal{F}_{\ell -1}^{{\boldsymbol{X}}^{(r)}}}\big)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\delta {n}^{3/2}{N_{n}^{1/2}}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}\mathbb{E}\big({\big\| {\boldsymbol{U}_{\ell }^{(1)}}\big\| }^{3}\mid {\mathcal{F}_{\ell -1}^{{\boldsymbol{X}}^{(1)}}}\big)\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }0\hspace{1em}\text{as}\hspace{5pt}n\to \infty \text{,}\end{array}\]
where the almost sure convergence follows by (27). Hence we obtain
\[ \Bigg(\frac{1}{\sqrt{nN_{n}}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{\ell }^{(r)}}\Bigg)_{t\in \mathbb{R}_{+}}=\Bigg({\sum \limits_{k=1}^{\lfloor nt\rfloor N_{n}}}{\boldsymbol{V}_{k}^{(n)}}\Bigg)_{t\in \mathbb{R}_{+}}\stackrel{\mathcal{D}}{\longrightarrow }\boldsymbol{B}\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
where $\boldsymbol{B}=(\boldsymbol{B}_{t})_{t\in \mathbb{R}_{+}}$ is a p-dimensional zero mean Brownian motion satisfying $\operatorname{Var}(\boldsymbol{B}_{1})=\boldsymbol{V}$. Using (28), for each $n\in \mathbb{N}$ and $t\in \mathbb{R}_{+}$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \frac{1}{\sqrt{nN_{n}}}{\sum \limits_{\ell =1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}\big({\boldsymbol{X}_{\ell }^{(r)}}-\mathbb{E}\big({\boldsymbol{X}_{\ell }^{(r)}}\big)\big)\\{} & \displaystyle \hspace{1em}=\frac{1}{\sqrt{n}}\Bigg[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}\big({\boldsymbol{X}_{0}^{(r)}}-\mathbb{E}\big({\boldsymbol{X}_{0}^{(r)}}\big)\big)\Bigg]\\{} & \displaystyle \hspace{1em}\hspace{1em}-\frac{1}{\sqrt{n}}\Bigg[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{m}^{(r)}}\Bigg]\\{} & \displaystyle \hspace{1em}\hspace{1em}+{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\frac{1}{\sqrt{nN_{n}}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{m}^{(r)}},\end{array}\]
which implies the statement by Slutsky’s lemma, since $\rho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$. Indeed, $\lim _{n\to \infty }{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}=\mathbf{0}$ by (26), thus
\[ \underset{n\to \infty }{\lim }{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)={(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\boldsymbol{M}_{\boldsymbol{\xi }},\]
and, by Proposition 1,
\[ \frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}\big({\boldsymbol{X}_{0}^{(r)}}-\mathbb{E}\big({\boldsymbol{X}_{0}^{(r)}}\big)\big)\stackrel{\mathcal{D}}{\longrightarrow }\mathcal{N}_{p}\big(\mathbf{0},\operatorname{Var}(\boldsymbol{X}_{0})\big)\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
where $\mathcal{N}_{p}(\mathbf{0},\operatorname{Var}(\boldsymbol{X}_{0}))$ denotes a p-dimensional normal distribution with zero mean and covariance matrix $\operatorname{Var}(\boldsymbol{X}_{0})$, and then Slutsky’s lemma yields that
\[ \frac{1}{\sqrt{n}}\Bigg[{(\boldsymbol{I}_{p}-\boldsymbol{M}_{\boldsymbol{\xi }})}^{-1}\big(\boldsymbol{M}_{\boldsymbol{\xi }}-{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor +1}}\big)\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}\big({\boldsymbol{X}_{0}^{(r)}}-\mathbb{E}\big({\boldsymbol{X}_{0}^{(r)}}\big)\big)\Bigg]\stackrel{\mathbb{P}}{\longrightarrow }\mathbf{0}\]
as $n\to \infty $. Further,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \Bigg\| \mathbb{E}\Bigg(\frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}{\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{m}^{(r)}}\Bigg)\Bigg\| \\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\mathbb{E}\Bigg(\Bigg\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{m}^{(r)}}\Bigg\| \Bigg)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\big\| \mathbb{E}\Bigg(\Bigg\| \frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{\boldsymbol{U}_{m}^{(r)}}\Bigg\| \Bigg)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\big\| {\sum \limits_{j=1}^{p}}\mathbb{E}\Bigg(\Bigg|\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{U_{m,j}^{(r)}}\Bigg|\Bigg)\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\big\| {\sum \limits_{j=1}^{p}}\sqrt{\mathbb{E}\Bigg({\Bigg(\frac{1}{\sqrt{N_{n}}}{\sum \limits_{r=1}^{N_{n}}}{U_{m,j}^{(r)}}\Bigg)}^{2}\Bigg)}\\{} & \displaystyle \hspace{1em}=\frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\big\| {\sum \limits_{j=1}^{p}}\sqrt{\mathbb{E}\big({\big({U_{m,j}^{(1)}}\big)}^{2}\big)}\\{} & \displaystyle \hspace{1em}\leqslant \frac{1}{\sqrt{n}}{\sum \limits_{m=1}^{\lfloor nt\rfloor }}\big\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{\lfloor nt\rfloor -m+1}}\big\| {\sum \limits_{j=1}^{p}}\sqrt{V_{j,j}}\to 0\hspace{1em}\text{as}\hspace{5pt}n\to \infty \text{,}\end{array}\]
by (30), where the last inequality follows from (29). This completes the proof. □
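(The final estimate rests on (30): since $\rho (\boldsymbol{M}_{\boldsymbol{\xi }})<1$, the series ${\sum _{k=1}^{\infty }}\| {\boldsymbol{M}_{\boldsymbol{\xi }}^{k}}\| $ converges, so the displayed sum is of order ${n}^{-1/2}$. The following purely illustrative numerical sketch, with an arbitrarily chosen $2\times 2$ matrix of spectral radius below 1, shows this decay; it is not part of the proof.)

```python
import numpy as np

M = np.array([[0.4, 0.2], [0.1, 0.3]])  # arbitrary example with spectral radius < 1
assert max(abs(np.linalg.eigvals(M))) < 1

def bound(n):
    # (1/sqrt(n)) * sum_{m=1}^{n} ||M^{n-m+1}|| = (1/sqrt(n)) * sum_{k=1}^{n} ||M^k||
    return sum(np.linalg.norm(np.linalg.matrix_power(M, k), 2)
               for k in range(1, n + 1)) / np.sqrt(n)

# the partial sums of ||M^k|| stay bounded, so the prefactor drives the bound to 0
print([round(bound(n), 4) for n in (10, 100, 1000)])
```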