Modern Stochastics: Theory and Applications


Skorokhod M1 convergence of maxima of multivariate linear processes with heavy-tailed innovations and random coefficients
Danijel Krizmanić

https://doi.org/10.15559/25-VMSTA271
Pub. online: 28 January 2025      Type: Research Article      Open Access

Received: 30 August 2024
Revised: 11 January 2025
Accepted: 11 January 2025
Published: 28 January 2025

Abstract

In this paper, functional convergence is derived for the partial maxima stochastic processes of multivariate linear processes with weakly dependent heavy-tailed innovations and random coefficients. The convergence takes place in the space of ${\mathbb{R}^{d}}$-valued càdlàg functions on $[0,1]$ endowed with the weak Skorokhod ${M_{1}}$ topology.

1 Introduction

Let ${({X_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of random variables, and denote by ${M_{n}}=\max \{{X_{1}},{X_{2}},\dots ,{X_{n}}\}$, $n\ge 1$, its partial maxima. The asymptotic distributional behavior of ${M_{n}}$ is one of the main objects of interest of classical extreme value theory. When $({X_{i}})$ is an i.i.d. sequence and there exist constants ${a_{n}}\gt 0$ and ${b_{n}}$ such that
(1)
\[ \operatorname{P}\bigg(\frac{{M_{n}}-{b_{n}}}{{a_{n}}}\le x\bigg)\to G(x)\hspace{1em}\mathrm{as}\hspace{2.5pt}n\to \infty ,\]
with a nondegenerate limit G, the limit belongs to the class of extreme value distributions; see [13]. This result generalizes to weak convergence of partial maxima processes in the space of càdlàg functions. More precisely, relation (1) implies
(2)
\[ {a_{n}^{-1}}({M_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})-{b_{n}}):={a_{n}^{-1}}\bigg({\underset{i=1}{\overset{\lfloor n\hspace{0.1667em}\cdot \rfloor }{\bigvee }}}{X_{i}}-{b_{n}}\bigg)\xrightarrow{d}Y(\hspace{0.1667em}\cdot \hspace{0.1667em})\]
in the space $D([0,1],\mathbb{R})$ of real-valued càdlàg functions on $[0,1]$ endowed with the Skorokhod ${J_{1}}$ topology, with Y being an extremal process generated by G (see [11], and Proposition 4.20 in [13]). Simplifying notation, we sometimes omit brackets and write ${a_{n}^{-1}}({M_{n}}-{b_{n}})\xrightarrow{d}Y$. The convergence in relation (2) also holds for a special class of weakly dependent random variables, namely linear (or moving average) processes with i.i.d. heavy-tailed innovations and deterministic coefficients (see Proposition 4.28 in [13]).
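As a numerical illustration of (2) (a hypothetical sketch, not part of the paper): for i.i.d. Pareto(α) variables one may take ${b_{n}}=0$ and ${a_{n}}={n^{1/\alpha }}$, and the rescaled running maximum is a nondecreasing càdlàg step function approximating the extremal process Y. All names and parameter values below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.5, 10_000
X = rng.uniform(size=n) ** (-1.0 / alpha)   # Pareto: P(X > x) = x^{-alpha}, x >= 1
a_n = n ** (1.0 / alpha)                    # then n * P(X > a_n) = 1

def maxima_path(X, a_n, t_grid):
    """Evaluate the cadlag step function t -> a_n^{-1} max_{i <= floor(nt)} X_i
    (for t < 1/n we use the value X_1 / a_n)."""
    n = len(X)
    running = np.maximum.accumulate(X)
    idx = np.maximum(np.floor(n * np.asarray(t_grid)).astype(int), 1)
    return running[idx - 1] / a_n

path = maxima_path(X, a_n, np.linspace(0.0, 1.0, 101))
# the path is nondecreasing, with terminal value M_n / a_n
```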
Recently, it was shown in [9] that the functional convergence in (2) holds for linear processes with i.i.d. heavy-tailed innovations and random coefficients. In this paper we aim to generalize this result in two directions, the first one by studying linear processes with weakly dependent innovations (and random coefficients), and the second one by extending this theory to the multivariate setting. Due to possible clustering of large values, the ${J_{1}}$ topology becomes inappropriate, and therefore we will use the weaker Skorokhod ${M_{1}}$ topology. This topology works well if all extremes within each cluster of large values have the same sign.
The paper is organized as follows. In Section 2 we introduce basic notions about regular variation, linear processes, point processes and Skorokhod topologies. In Section 3 we derive the weak ${M_{1}}$ convergence of the partial maxima stochastic process for finite order multivariate linear processes with weakly dependent heavy-tailed innovations and random coefficients. In Section 4 we extend this result to infinite order multivariate linear processes, and give an example which shows that the convergence in the weak ${M_{1}}$ topology in general cannot be replaced by the standard ${M_{1}}$ convergence.

2 Preliminaries

We use superscripts in parentheses to designate vector components and coordinate functions, i.e. $a=({a^{(1)}},\dots ,{a^{(d)}})\in {\mathbb{R}^{d}}$ and $x=({x^{(1)}},\dots ,{x^{(d)}}):[0,1]\to {\mathbb{R}^{d}}$. For two vectors $a=({a^{(1)}},\dots ,{a^{(d)}})$, $b=({b^{(1)}},\dots ,{b^{(d)}})\in {\mathbb{R}^{d}}$, $a\le b$ means ${a^{(k)}}\le {b^{(k)}}$ for all $k=1,\dots ,d$. The vector $({a^{(1)}},\dots ,{a^{(d)}},{b^{(1)}},\dots ,{b^{(d)}})$ will be denoted by $(a,b)$, and the vector $({a^{(1)}},{b^{(1)}},{a^{(2)}},{b^{(2)}},\dots ,{a^{(d)}},{b^{(d)}})$ will be denoted by ${({a^{(i)}},{b^{(i)}})_{i=1,\dots ,d}^{\ast }}$. Denote by $a\vee b$ the vector $({a^{(1)}}\vee {b^{(1)}},\dots ,{a^{(d)}}\vee {b^{(d)}})$, where for real numbers $u,v$ we put $u\vee v=\max \{u,v\}$. Sometimes for convenience we will denote the vector a by ${({a^{(i)}})_{i=1,\dots ,d}}$. For a real number c we write $ca=(c{a^{(1)}},\dots ,c{a^{(d)}})$.

2.1 Regular variation

The ${\mathbb{R}^{d}}$-valued random vector ξ is (multivariate) regularly varying if there exist $\alpha \gt 0$ and a random vector Θ on the unit sphere ${\mathbb{S}^{d-1}}=\{x\in {\mathbb{R}^{d}}:\| x\| =1\}$ in ${\mathbb{R}^{d}}$, such that for every $u\gt 0$,
(3)
\[ \frac{\operatorname{P}(\| \xi \| \gt ux,\hspace{0.1667em}\xi /\| \xi \| \in \cdot \hspace{0.1667em})}{\operatorname{P}(\| \xi \| \gt x)}\xrightarrow{w}{u^{-\alpha }}\operatorname{P}(\Theta \in \cdot \hspace{0.1667em})\hspace{1em}\mathrm{as}\hspace{2.5pt}x\to \infty ,\]
where the arrow “$\xrightarrow{w}$” denotes the weak convergence of finite measures and $\| \cdot \| $ denotes the max-norm on ${\mathbb{R}^{d}}$. This definition does not depend on the choice of the norm, since if (3) holds for some norm on ${\mathbb{R}^{d}}$, it holds for all norms (of course, with different distributions of Θ). The number α is called the index of regular variation of ξ, and the probability measure $\operatorname{P}(\Theta \in \cdot \hspace{0.1667em})$ is called the spectral measure of ξ with respect to the norm $\| \hspace{0.1667em}\cdot \| $. In the one-dimensional case regular variation is characterized by $\operatorname{P}(|\xi |\gt x)={x^{-\alpha }}L(x)$, $x\gt 0$, for some slowly varying function L and the tail balance condition
\[ \underset{x\to \infty }{\lim }\frac{\operatorname{P}(\xi \gt x)}{\operatorname{P}(|\xi |\gt x)}=p,\hspace{2em}\underset{x\to \infty }{\lim }\frac{\operatorname{P}(\xi \lt -x)}{\operatorname{P}(|\xi |\gt x)}=q,\]
where $p\in [0,1]$ and $p+q=1$.
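As a quick empirical illustration (a hypothetical simulation, not from the paper), the tail balance constants p and q can be estimated for a symmetric heavy-tailed sample, where $p=q=1/2$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n = 2.0, 200_000
# Symmetric Pareto-type sample: heavy tails of equal weight on both sides.
signs = rng.choice([-1.0, 1.0], size=n)
xi = signs * rng.uniform(size=n) ** (-1.0 / alpha)

x = 10.0  # a high threshold; P(|xi| > x) = x^{-alpha} = 0.01 here
denom = np.mean(np.abs(xi) > x)
p_hat = np.mean(xi > x) / denom
q_hat = np.mean(xi < -x) / denom
# p_hat and q_hat are both close to 1/2, and p_hat + q_hat = 1
```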
A strictly stationary ${\mathbb{R}^{d}}$-valued random process ${({\xi _{n}})_{n\in \mathbb{Z}}}$ is regularly varying with index $\alpha \gt 0$ if for any positive integer k the $kd$-dimensional random vector $\xi =({\xi _{1}},\dots ,{\xi _{k}})$ is multivariate regularly varying with index α. According to [2] the regular variation property of the sequence $({\xi _{n}})$ is equivalent to the existence of a process ${({Y_{n}})_{n\in \mathbb{Z}}}$ which satisfies $\operatorname{P}(\| {Y_{0}}\| \gt y)={y^{-\alpha }}$ for $y\ge 1$, and
(4)
\[ \big({({x^{-1}}\hspace{2.5pt}{\xi _{n}})_{n\in \mathbb{Z}}}\hspace{0.1667em}\big|\hspace{0.1667em}\| {\xi _{0}}\| \gt x\big)\xrightarrow{\text{fidi}}{({Y_{n}})_{n\in \mathbb{Z}}}\hspace{1em}\mathrm{as}\hspace{2.5pt}x\to \infty ,\]
where “$\xrightarrow{\text{fidi}}$” denotes convergence of finite-dimensional distributions. The process $({Y_{n}})$ is called the tail process of $({\xi _{n}})$.

2.2 Linear processes

Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of random vectors in ${\mathbb{R}^{d}}$, and assume ${Z_{1}}$ is multivariate regularly varying with index $\alpha \gt 0$. We study multivariate linear processes with random coefficients, defined by
(5)
\[ {X_{i}}={\sum \limits_{j=0}^{\infty }}{C_{j}}{Z_{i-j}},\hspace{1em}i\in \mathbb{Z},\]
where ${({C_{j}})_{j\ge 0}}$ is a sequence of $d\times d$ matrices (with real-valued random variables as entries) independent of $({Z_{i}})$ such that the above series is a.s. convergent. One sufficient condition for that is ${\textstyle\sum _{j=0}^{\infty }}\mathrm{E}\| {C_{j}}{\| ^{\delta }}\lt \infty $ for some $\delta \lt \alpha $, $0\lt \delta \le 1$ (see Section 4.5 in [13]), where for a $d\times d$ matrix $C=({C_{i,j}})$, $\| C\| $ denotes the operator norm
\[ \| C\| =\sup \{\| Cx\| :x\in {\mathbb{R}^{d}},\| x\| =1\}=\underset{i=1,\dots ,d}{\max }{\sum \limits_{j=1}^{d}}|{C_{i,j}}|.\]
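For the max-norm the operator norm reduces to the maximum absolute row sum, which can be checked by brute force over sign vectors (a small sketch with illustrative values):

```python
import numpy as np
from itertools import product

def op_norm_maxnorm(C):
    """Operator norm of C induced by the max-norm: max absolute row sum."""
    return np.max(np.sum(np.abs(C), axis=1))

# Check against a direct evaluation of sup{||Cx||_inf : ||x||_inf = 1};
# for this norm the supremum is attained at a sign vector x in {-1, +1}^d
# (take x_j = sign(C_{i,j}) for the maximizing row i).
rng = np.random.default_rng(2)
C = rng.normal(size=(3, 3))
brute = max(np.max(np.abs(C @ np.array(x))) for x in product([-1.0, 1.0], repeat=3))
# brute equals op_norm_maxnorm(C) up to floating-point rounding
```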

2.3 Skorokhod topologies

Denote by ${D^{d}}\equiv D([0,1],{\mathbb{R}^{d}})$ the space of all right-continuous ${\mathbb{R}^{d}}$-valued functions on $[0,1]$ with left limits. For $x\in {D^{d}}$ the completed (thick) graph of x is defined as
\[ {G_{x}}=\{(t,z)\in [0,1]\times {\mathbb{R}^{d}}:z\in [[x(t-),x(t)]]\},\]
where $x(t-)$ is the left limit of x at t and $[[a,b]]$ is the product segment, i.e. $[[a,b]]=[{a^{(1)}},{b^{(1)}}]\times \cdots \times [{a^{(d)}},{b^{(d)}}]$ for $a=({a^{(1)}},\dots ,{a^{(d)}})$, $b=({b^{(1)}},\dots ,{b^{(d)}})\in {\mathbb{R}^{d}}$, where $[{a^{(i)}},{b^{(i)}}]$ coincides with the closed interval $[{a^{(i)}}\wedge {b^{(i)}},{a^{(i)}}\vee {b^{(i)}}]$, with $u\wedge v=\min \{u,v\}$ for real numbers $u,v$. On the graph ${G_{x}}$ we define an order by saying that $({t_{1}},{z_{1}})\le ({t_{2}},{z_{2}})$ if either (i) ${t_{1}}\lt {t_{2}}$, or (ii) ${t_{1}}={t_{2}}$ and $|{x^{(j)}}({t_{1}}-)-{z_{1}^{(j)}}|\le |{x^{(j)}}({t_{2}}-)-{z_{2}^{(j)}}|$ for all $j=1,2,\dots ,d$. A weak parametric representation of the graph ${G_{x}}$ is a continuous nondecreasing function $(r,u)$ mapping $[0,1]$ into ${G_{x}}$, with r being the time component and u the spatial component, such that $r(0)=0$, $r(1)=1$ and $u(1)=x(1)$. Let ${\Pi _{w}}(x)$ denote the set of weak parametric representations of ${G_{x}}$. For ${x_{1}},{x_{2}}\in {D^{d}}$ define
\[ {d_{w}}({x_{1}},{x_{2}})=\inf \{\| {r_{1}}-{r_{2}}{\| _{[0,1]}}\vee \| {u_{1}}-{u_{2}}{\| _{[0,1]}}:({r_{i}},{u_{i}})\in {\Pi _{w}}({x_{i}}),i=1,2\},\]
where $\| x{\| _{[0,1]}}=\sup \{\| x(t)\| :t\in [0,1]\}$. Now we say that a sequence ${({x_{n}})_{n}}$ converges to x in ${D^{d}}$ in the weak Skorokhod ${M_{1}}$ topology if ${d_{w}}({x_{n}},x)\to 0$ as $n\to \infty $. If we replace the graph ${G_{x}}$ with the completed (thin) graph
\[ {\Gamma _{x}}=\{(t,z)\in [0,1]\times {\mathbb{R}^{d}}:z=\lambda x(t-)+(1-\lambda )x(t)\hspace{2.5pt}\text{for some}\hspace{2.5pt}\lambda \in [0,1]\},\]
and weak parametric representations with strong parametric representations, that is continuous nondecreasing functions $(r,u)$ mapping $[0,1]$ onto ${\Gamma _{x}}$, then we obtain the standard (or strong) Skorokhod ${M_{1}}$ topology. This topology is induced by the metric
\[ {d_{{M_{1}}}}({x_{1}},{x_{2}})=\inf \{\| {r_{1}}-{r_{2}}{\| _{[0,1]}}\vee \| {u_{1}}-{u_{2}}{\| _{[0,1]}}:({r_{i}},{u_{i}})\in {\Pi _{s}}({x_{i}}),i=1,2\},\]
where ${\Pi _{s}}(x)$ is the set of strong parametric representations of the graph ${\Gamma _{x}}$. Since ${\Pi _{s}}(x)\subseteq {\Pi _{w}}(x)$ for all $x\in {D^{d}}$, the weak ${M_{1}}$ topology is weaker than the standard ${M_{1}}$ topology on ${D^{d}}$, but they coincide for $d=1$. The weak ${M_{1}}$ topology coincides with the topology induced by the metric
(6)
\[ {d_{p}}({x_{1}},{x_{2}})=\max \{{d_{{M_{1}}}}({x_{1}^{(j)}},{x_{2}^{(j)}}):j=1,\dots ,d\}\]
for ${x_{i}}=({x_{i}^{(1)}},\dots ,{x_{i}^{(d)}})\in {D^{d}}$ and $i=1,2$. The metric ${d_{p}}$ induces the product topology on ${D^{d}}$.
If in the parametric representations we require only the time component r, rather than the pair $(r,u)$, to be nondecreasing, we obtain Skorokhod’s weak and strong ${M_{2}}$ topologies. The metric
\[ {d_{{M_{2}}}}({x_{1}},{x_{2}})=\bigg(\underset{a\in {\Gamma _{{x_{1}}}}}{\sup }\underset{b\in {\Gamma _{{x_{2}}}}}{\inf }d(a,b)\bigg)\vee \bigg(\underset{a\in {\Gamma _{{x_{2}}}}}{\sup }\underset{b\in {\Gamma _{{x_{1}}}}}{\inf }d(a,b)\bigg),\]
where $d(a,b)=\max \{|{a^{(i)}}-{b^{(i)}}|:i=1,\dots ,d+1\}$ for $a=({a^{(1)}},\dots ,{a^{(d+1)}})$, $b=({b^{(1)}},\dots ,{b^{(d+1)}})\in {\mathbb{R}^{d+1}}$, induces the strong ${M_{2}}$ topology, which is weaker than the ${M_{1}}$ topology. For more details and discussion on the ${M_{1}}$ and ${M_{2}}$ topologies we refer to Sections 12.3-5 and 12.10-11 in [17]. Since the sample paths of the partial maxima processes in (2) are nondecreasing, we will restrict our attention to the subspace ${D_{\uparrow }^{d}}$ of functions x in ${D^{d}}$ for which the coordinate functions ${x^{(i)}}$ are nondecreasing for all $i=1,\dots ,d$.

2.4 Point processes

Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$. Assume the elements of this sequence are pairwise asymptotically (or extremally) independent in the sense that
(7)
\[ \underset{x\to \infty }{\lim }\frac{\operatorname{P}(\| {Z_{i}}\| \gt x,\| {Z_{j}}\| \gt x)}{\operatorname{P}(\| {Z_{1}}\| \gt x)}=0\hspace{1em}\text{for all}\hspace{2.5pt}i\ne j.\]
We also assume asymptotic independence of the components of each ${Z_{i}}$:
(8)
\[ \underset{x\to \infty }{\lim }\operatorname{P}(|{Z_{i}^{(j)}}|\gt x\hspace{0.1667em}\big|\hspace{0.1667em}|{Z_{i}^{(k)}}|\gt x)=0\hspace{1em}\text{for all}\hspace{2.5pt}j,k\in \{1,\dots ,d\},j\ne k.\]
Condition (7) implies that the sequence $({Z_{i}})$ is regularly varying with index α (Proposition 2.1.8 in [10]) with the same tail process as in the i.i.d. case, that is, ${Y_{i}}=0$ for $i\ne 0$ and $\operatorname{P}(\| {Y_{0}}\| \gt y)={y^{-\alpha }}$ for $y\ge 1$. Relation (8) implies that, almost surely, ${Y_{0}}$ has at most one nonzero component.
Define the time-space point processes
\[ {N_{n}}={\sum \limits_{i=1}^{n}}{\delta _{(i/n,\hspace{0.1667em}{Z_{i}}/{a_{n}})}}\hspace{1em}\text{for all}\hspace{2.5pt}n\in \mathbb{N},\]
with $({a_{n}})$ being a sequence of positive real numbers such that
(9)
\[ n\operatorname{P}(\| {Z_{1}}\| \gt {a_{n}})\to 1\hspace{1em}\mathrm{as}\hspace{2.5pt}n\to \infty .\]
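To make the normalization (9) concrete, a small deterministic sketch (with a hypothetical model in which ${Z_{1}}$ has two i.i.d. Pareto components) computes ${a_{n}}$ by bisection and checks that $n\operatorname{P}(\| {Z_{1}}\| \gt {a_{n}})$ is numerically 1:

```python
def tail(x, alpha=1.0, d=2):
    # P(||Z_1|| > x) under the max-norm when Z_1 has d i.i.d. Pareto(alpha)
    # components with P(Z^(j) > x) = x^{-alpha}, x >= 1 (a hypothetical model)
    return 1.0 - (1.0 - x ** (-alpha)) ** d

def a_n(n, alpha=1.0, d=2):
    # bisection for the (1/n)-upper quantile of ||Z_1||; tail is decreasing in x
    lo, hi = 1.0, 1e12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail(mid, alpha, d) > 1.0 / n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# n * tail(a_n(n)) is 1 up to numerical error, and a_n(n) grows like
# (d * n)^{1/alpha}, since tail(x) ~ d * x^{-alpha} as x -> infinity
```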
The point process convergence for the sequence $({N_{n}})$ on the space $[0,1]\times {\mathbb{E}^{d}}$, where ${\mathbb{E}^{d}}={[-\infty ,\infty ]^{d}}\setminus \{0\}$, was obtained in [3] under the following two weak dependence conditions.
Condition 2.1 (Mixing condition).
There exists a sequence of positive integers $({r_{n}})$ such that ${r_{n}}\to \infty $ and ${r_{n}}/n\to 0$ as $n\to \infty $ and such that for every nonnegative continuous function f on $[0,1]\times {\mathbb{E}^{d}}$ with compact support, denoting ${k_{n}}=\lfloor n/{r_{n}}\rfloor $, as $n\to \infty $,
\[ \operatorname{E}\bigg[\exp \bigg\{-{\sum \limits_{i=1}^{n}}f\bigg(\frac{i}{n},\frac{{Z_{i}}}{{a_{n}}}\bigg)\bigg\}\bigg]-{\prod \limits_{k=1}^{{k_{n}}}}\operatorname{E}\bigg[\exp \bigg\{-{\sum \limits_{i=1}^{{r_{n}}}}f\bigg(\frac{k{r_{n}}}{n},\frac{{Z_{i}}}{{a_{n}}}\bigg)\bigg\}\bigg]\to 0.\]
Condition 2.2 (Anticlustering condition).
There exists a sequence of positive integers $({r_{n}})$ such that ${r_{n}}\to \infty $ and ${r_{n}}/n\to 0$ as $n\to \infty $ and such that for every $u\gt 0$,
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\operatorname{P}\bigg(\underset{m\le |i|\le {r_{n}}}{\max }\| {Z_{i}}\| \gt u{a_{n}}\hspace{0.1667em}\bigg|\hspace{0.1667em}\| {Z_{0}}\| \gt u{a_{n}}\bigg)=0.\]
The sequences $({r_{n}})$ in these two conditions are assumed to be the same. It can be shown that Condition 2.1 holds for strongly mixing random sequences (see [5, 7]). Now we show that Condition 2.2 holds under condition (7). Let
\[ {x_{k,n}}:={\sum \limits_{|i|=1}^{k}}\frac{\mathrm{P}(\| {Z_{i}}\| \gt u{a_{n}},\| {Z_{0}}\| \gt u{a_{n}})}{\mathrm{P}(\| {Z_{0}}\| \gt u{a_{n}})}\]
for $k,n\in \mathbb{N}$. For every $k\in \mathbb{N}$ condition (7) implies that ${x_{k,n}}\to 0$ as $n\to \infty $. An application of the triangular argument (Lemma A.1.3 in [10]) yields that there exists a nondecreasing sequence of positive integers ${({s_{n}})_{n}}$ such that ${s_{n}}\to \infty $ and ${x_{{s_{n}},n}}\to 0$ as $n\to \infty $. Denote
(10)
\[ {r_{n}}:=\min \{{s_{n}},\lfloor \sqrt{n}\rfloor \},\hspace{1em}n\in \mathbb{N}.\]
Then ${r_{n}}\to \infty $ and ${r_{n}}/n\to 0$ as $n\to \infty $. For a fixed $m\in \mathbb{N}$ and large ${r_{n}}$ (such that ${r_{n}}\ge m$) it holds that
\[\begin{aligned}{}\mathrm{P}\bigg(\underset{m\le |i|\le {r_{n}}}{\max }\| {Z_{i}}\| \gt u{a_{n}}\hspace{0.1667em}\bigg|\hspace{0.1667em}\| {Z_{0}}\| \gt u{a_{n}}\bigg)& \le {\sum \limits_{|i|=m}^{{r_{n}}}}\frac{\mathrm{P}(\| {Z_{i}}\| \gt u{a_{n}},\| {Z_{0}}\| \gt u{a_{n}})}{\mathrm{P}(\| {Z_{0}}\| \gt u{a_{n}})}\\ {} & \le {x_{{s_{n}},n}},\end{aligned}\]
and letting $n\to \infty $ we obtain
\[ \underset{n\to \infty }{\lim }\mathrm{P}\bigg(\underset{m\le |i|\le {r_{n}}}{\max }\| {Z_{i}}\| \gt u{a_{n}}\hspace{0.1667em}\bigg|\hspace{0.1667em}\| {Z_{0}}\| \gt u{a_{n}}\bigg)=0\]
for every $m\in \mathbb{N}$. Hence, letting $m\to \infty $, we see that Condition 2.2 holds. Condition 2.2 also holds under Leadbetter’s condition ${D^{\prime }}$:
(11)
\[ \underset{k\to \infty }{\lim }\underset{n\to \infty }{\limsup }\hspace{2.5pt}n{\sum \limits_{i=1}^{\lfloor n/k\rfloor }}\operatorname{P}(\| {Z_{0}}\| \gt x{a_{n}},\| {Z_{i}}\| \gt x{a_{n}})=0\hspace{1em}\text{for all}\hspace{2.5pt}x\gt 0.\]
The asymptotic independence condition (7) also holds under condition ${D^{\prime }}$. For more discussion of these weak dependence conditions, in the context of partial sums, we refer to Section 9.1 in [12] and to [16].
In the sequel, whenever we assume that Condition 2.1 holds, we suppose that the sequence $({r_{n}})$ appearing in this condition is the same as in (10); this ensures that Conditions 2.1 and 2.2 are satisfied by the same sequence $({r_{n}})$. Under Condition 2.1, by Theorem 3.1 in [3], as $n\to \infty $,
(12)
\[ {N_{n}}\xrightarrow{d}N=\sum \limits_{i}\sum \limits_{j}{\delta _{({T_{i}},{P_{i}}{\eta _{ij}})}}\]
in $[0,1]\times {\mathbb{E}^{d}}$, where
  • (i) ${\textstyle\sum _{i=1}^{\infty }}{\delta _{({T_{i}},{P_{i}})}}$ is a Poisson process on $[0,1]\times (0,\infty )$ with intensity measure $\mathit{Leb}\times \nu $, with $\nu (\mathrm{d}x)=\theta \alpha {x^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}x$ and $\theta =\operatorname{P}({\sup _{i\le -1}}\| {Y_{i}}\| \le 1)$.
  • (ii) ${({\textstyle\sum _{j=1}^{\infty }}{\delta _{{\eta _{ij}}}})_{i}}$ is an i.i.d. sequence of point processes in ${\mathbb{E}^{d}}$ independent of ${\textstyle\sum _{i}}{\delta _{({T_{i}},{P_{i}})}}$ and with common distribution equal to the distribution of the point process ${\textstyle\sum _{j}}{\delta _{{\widetilde{Y}_{j}}/L(\widetilde{Y})}}$, where $L(\widetilde{Y})={\sup _{j\in \mathbb{Z}}}\| {\widetilde{Y}_{j}}\| $ and ${\textstyle\sum _{j}}{\delta _{{\widetilde{Y}_{j}}}}$ is distributed as $({\textstyle\sum _{j\in \mathbb{Z}}}{\delta _{{Y_{j}}}}\hspace{0.1667em}|\hspace{0.1667em}{\sup _{i\le -1}}\| {Y_{i}}\| \le 1)$.
Taking into account the form of the tail process $({Y_{i}})$, it holds that $\theta =1$ and $N={\textstyle\sum _{i}}{\delta _{({T_{i}},{P_{i}}{\eta _{i0}})}}$ with $\| {\eta _{i0}}\| =1$. Hence, denoting ${Q_{i}}={\eta _{i0}}$, the limiting point process in relation (12) reduces to
(13)
\[ N=\sum \limits_{i}{\delta _{({T_{i}},{P_{i}}{Q_{i}})}}.\]
Since the sequence $({Q_{i}})$ is independent of the Poisson process ${\textstyle\sum _{i=1}^{\infty }}{\delta _{({T_{i}},{P_{i}})}}$, an application of Proposition 5.3 in [14] yields that ${\textstyle\sum _{i}}{\delta _{({T_{i}},{P_{i}},{Q_{i}})}}$ is a Poisson process on $[0,1]\times (0,\infty )\times {\mathbb{E}^{d}}$ with intensity measure $\mathit{Leb}\times \nu \times F$, where F is the common probability distribution of ${Q_{i}}$.
For $x\in \mathbb{R}$ let ${x^{+}}=|x|{1_{\{x\gt 0\}}}$ and ${x^{-}}=|x|{1_{\{x\lt 0\}}}$. Define the maximum functional $\Phi :{\mathbf{M}_{p}}([0,1]\times {\mathbb{E}^{d}})\to {D_{\uparrow }^{2{d^{2}}}}$ by
(14)
\[ \Phi \Big(\sum \limits_{i}{\delta _{({t_{i}},({x_{i}^{(1)}},\dots ,{x_{i}^{(d)}}))}}\Big)(t)={\bigg(\Big(\underset{{t_{i}}\le t}{\bigvee }{x_{i}^{(j)+}},\underset{{t_{i}}\le t}{\bigvee }{x_{i}^{(j)-}}{\Big)_{j=1,\dots ,d}^{\ast }}\bigg)_{k=1,\dots ,d}}\]
for $t\in [0,1]$ (with the convention $\vee \varnothing =0$), where the space ${\mathbf{M}_{p}}([0,1]\times {\mathbb{E}^{d}})$ of Radon point measures on $[0,1]\times {\mathbb{E}^{d}}$ is equipped with the vague topology (see Chapter 3 in [13]). Note that on the right-hand side in (14) we repeat the $2d$ coordinates of the vector $\Big({\textstyle\bigvee _{{t_{i}}\le t}}{x_{i}^{(j)+}},{\textstyle\bigvee _{{t_{i}}\le t}}{x_{i}^{(j)-}}{\Big)_{j=1,\dots ,d}^{\ast }}$ consecutively d times. Let
\[\begin{aligned}{}\Lambda =\{& \eta \in {\mathbf{M}_{p}}([0,1]\times {\mathbb{E}^{d}}):\eta (\{0,1\}\times {\mathbb{E}^{d}})=0\hspace{2.5pt}\mathrm{and}\\ {} & \eta ([0,1]\times \{({x^{(1)}},\dots ,{x^{(d)}}):|{x^{(i)}}|=\infty \hspace{2.5pt}\text{for some}\hspace{2.5pt}i\})=0\}.\end{aligned}\]
Then Proposition 3.1 in [9] and the definition of the metric ${d_{p}}$ in (6) yield the continuity of the maximum functional Φ on the set Λ in the weak ${M_{1}}$ topology.
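To illustrate how Φ acts, here is a small hypothetical computation (not from the paper): we evaluate the running coordinatewise maxima of the positive and negative parts for a finite point measure with $d=2$, omitting the d-fold repetition of the $2d$ coordinates that appears in (14).

```python
import numpy as np

# A hypothetical finite point measure sum_i delta_{(t_i, x_i)} with d = 2.
t_pts = np.array([0.2, 0.5, 0.8])
x_pts = np.array([[1.0, -2.0],
                  [-3.0, 0.5],
                  [2.0, 4.0]])

def phi(t_pts, x_pts, t):
    """For each coordinate j, the running maxima of the positive and negative
    parts x_i^{(j)+}, x_i^{(j)-} over the points with t_i <= t, interleaved as
    (v_+^{(1)}, v_-^{(1)}, ..., v_+^{(d)}, v_-^{(d)})."""
    d = x_pts.shape[1]
    mask = t_pts <= t
    if not mask.any():                    # convention: max over empty set = 0
        return np.zeros(2 * d)
    pos = np.where(x_pts > 0, x_pts, 0.0)[mask]
    neg = np.where(x_pts < 0, -x_pts, 0.0)[mask]
    return np.column_stack([pos.max(axis=0), neg.max(axis=0)]).ravel()

phi(t_pts, x_pts, 0.6)   # gives [1.0, 3.0, 0.5, 2.0]
```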

3 Finite order linear processes

Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$. Fix $m\in \mathbb{N}$, and let
(15)
\[ {X_{i}}={\sum \limits_{j=0}^{m}}{C_{j}}{Z_{i-j}},\hspace{1em}i\in \mathbb{Z},\]
be a finite order linear process, where ${C_{0}},{C_{1}},\dots ,{C_{m}}$ are random $d\times d$ matrices independent of $({Z_{i}})$. Define the corresponding partial maxima process by
(16)
\[ {M_{n}}(t)=\left\{\begin{array}{l@{\hskip10.0pt}l}{a_{n}^{-1}}{\underset{i=1}{\overset{\lfloor nt\rfloor }{\displaystyle \bigvee }}}{X_{i}}={\Big({a_{n}^{-1}}{\underset{i=1}{\overset{\lfloor nt\rfloor }{\displaystyle \bigvee }}}{X_{i}^{(k)}}\Big)_{k=1,\dots ,d}},& t\ge \displaystyle \frac{1}{n},\\ {} {a_{n}^{-1}}{X_{1}}={a_{n}^{-1}}({X_{1}^{(1)}},\dots ,{X_{1}^{(d)}}),& t\lt \displaystyle \frac{1}{n},\end{array}\right.\]
for $t\in [0,1]$, with the normalizing sequence $({a_{n}})$ as in (9). For $k,j\in \{1,\dots ,d\}$ let
(17)
\[ {D_{+}^{k,j}}={\underset{i=0}{\overset{m}{\bigvee }}}{C_{i;k,j}^{+}}\hspace{1em}\mathrm{and}\hspace{1em}{D_{-}^{k,j}}={\underset{i=0}{\overset{m}{\bigvee }}}{C_{i;k,j}^{-}},\]
where ${C_{i;k,j}}$ is the $(k,j)$th entry of the matrix ${C_{i}}$, ${C_{i;k,j}^{+}}=|{C_{i;k,j}}|{1_{\{{C_{i;k,j}}\gt 0\}}}$ and ${C_{i;k,j}^{-}}=|{C_{i;k,j}}|{1_{\{{C_{i;k,j}}\lt 0\}}}$.
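The objects defined above can be sketched numerically; everything below (the innovation distribution, the Gaussian matrix entries, the parameter values) is a hypothetical choice for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, n, alpha = 2, 3, 500, 1.5

# Innovations Z_i in R^d with symmetric Pareto-type components, and one
# realization of the random coefficient matrices C_0, ..., C_m.
Z = rng.uniform(size=(n + m, d)) ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=(n + m, d))
C = rng.normal(size=(m + 1, d, d))

# X_i = sum_{j=0}^m C_j Z_{i-j} for i = 1, ..., n (stored 0-based)
X = np.stack([sum(C[j] @ Z[m + i - j] for j in range(m + 1)) for i in range(n)])
a_n = n ** (1.0 / alpha)

def M_n(t):
    """Coordinatewise partial maxima process (16); equals X_1/a_n for t < 1/n."""
    k = max(int(np.floor(n * t)), 1)
    return X[:k].max(axis=0) / a_n

# Entrywise positive/negative parts of the C_i, and the matrices from (17):
C_plus = np.where(C > 0, C, 0.0)
C_minus = np.where(C < 0, -C, 0.0)
D_plus, D_minus = C_plus.max(axis=0), C_minus.max(axis=0)
# Entrywise, D_+ v D_- recovers max_i |C_{i;k,j}|.
```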
First, we show in the proposition below that a particular process ${W_{n}}$, constructed from the sequence $({Z_{i}})$, converges in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology. Later, in the main result of this section, we show that the weak ${M_{1}}$ distance between processes ${M_{n}}$ and ${W_{n}}$ is asymptotically negligible (as $n\to \infty $), which will imply the functional convergence of the maxima process ${M_{n}}$. The limiting process will be described in terms of certain extremal processes derived from the point process $N={\textstyle\sum _{i}}{\delta _{({T_{i}},{P_{i}}{Q_{i}})}}$ in relation (13). Extremal processes can be derived from Poisson processes in the following way. Let $\xi ={\textstyle\sum _{k}}{\delta _{({t_{k}},{j_{k}})}}$ be a Poisson process on ${[0,\infty )\times [0,\infty )^{d}}$ with mean measure $\mathit{Leb}\times \mu $, where μ is a measure on ${[0,\infty )^{d}}$ satisfying $\mu {(\{x\in [0,\infty )^{d}}:\| x\| \gt \delta \})\lt \infty $ for any $\delta \gt 0$. The extremal process $G(\hspace{0.1667em}\cdot \hspace{0.1667em})$ generated by ξ is defined by $G(t)={\textstyle\bigvee _{{t_{k}}\le t}}{j_{k}}$ for $t\gt 0$. Then for ${x\in [0,\infty )^{d}}$, $x\ne 0$, and $t\gt 0$ it holds that $\operatorname{P}(G(t)\le x)=\exp (-t\mu ({[[0,x]]^{c}}))$ (cf. Section 5.6 in [14]). The measure μ is called the exponent measure.
Proposition 3.1.
Let $({X_{i}})$ be a linear process defined in (15), where ${({Z_{i}})_{i\in \mathbb{Z}}}$ is a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$ that satisfy (7) and (8), and ${C_{0}},{C_{1}},\dots ,{C_{m}}$ are random $d\times d$ matrices independent of $({Z_{i}})$. Assume Condition 2.1 holds. Let
\[ {W_{n}}(t):={\bigg({\underset{i=1}{\overset{\lfloor nt\rfloor }{\bigvee }}}{\underset{j=1}{\overset{d}{\bigvee }}}{a_{n}^{-1}}\Big({D_{+}^{k,j}}{Z_{i}^{(j)+}}\vee {D_{-}^{k,j}}{Z_{i}^{(j)-}}\Big)\bigg)_{k=1,\dots ,d}},\hspace{1em}t\in [0,1],\]
with ${D_{+}^{k,j}}$ and ${D_{-}^{k,j}}$ defined in (17). Then, as $n\to \infty $,
(18)
\[ {W_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})\xrightarrow{d}M(\hspace{0.1667em}\cdot \hspace{0.1667em}):={\bigg({\underset{j=1}{\overset{d}{\bigvee }}}\Big({\widetilde{D}_{+}^{k,j}}{M^{(j+)}}(\hspace{0.1667em}\cdot \hspace{0.1667em})\vee {\widetilde{D}_{-}^{k,j}}{M^{(j-)}}(\hspace{0.1667em}\cdot \hspace{0.1667em})\Big)\bigg)_{k=1,\dots ,d}}\]
in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology, where ${M^{(j+)}}$ and ${M^{(j-)}}$ are extremal processes with exponent measures ${\nu _{j+}}$ and ${\nu _{j-}}$ respectively, with
\[ {\nu _{j+}}(\mathrm{d}x)=\mathrm{E}{({Q_{1}^{(j)+}})^{\alpha }}\hspace{0.1667em}\alpha {x^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}x\hspace{1em}\mathit{and}\hspace{1em}{\nu _{j-}}(\mathrm{d}x)=\mathrm{E}{({Q_{1}^{(j)-}})^{\alpha }}\hspace{0.1667em}\alpha {x^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}x\]
for $x\gt 0$ $(j=1,\dots ,d)$, and ${({({\widetilde{D}_{+}^{k,j}},{\widetilde{D}_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}$ is a $2{d^{2}}$-dimensional random vector, independent of ${({M^{(j+)}},{M^{(j-)}})_{j=1,\dots ,d}}$, such that
\[ {({({\widetilde{D}_{+}^{k,j}},{\widetilde{D}_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}\stackrel{d}{=}{({({D_{+}^{k,j}},{D_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}.\]
Remark 3.2.
In Proposition 3.1, as well as in the sequel of this paper, we suppose ${M^{(j+)}}$ is an extremal process if $\mathrm{E}{({Q_{1}^{(j)+}})^{\alpha }}\gt 0$, and a zero process if this quantity is equal to zero. Analogously for ${M^{(j-)}}$.
Proof of Proposition 3.1.
As noted in Subsection 2.4, condition (7) implies that the sequence $({Z_{i}})$ is regularly varying with index α and that Condition 2.2 holds. This, with Condition 2.1, implies the point process convergence in (12) with the limiting point process N described in (13). Since N is a Poisson process, it almost surely belongs to the set Λ. Therefore, since the maximum functional Φ is continuous on Λ, the continuous mapping theorem (see, for instance, Theorem 3.1 in [14]) applied to the convergence in (12) yields $\Phi ({N_{n}})\xrightarrow{d}\Phi (N)$ in ${D_{\uparrow }^{2{d^{2}}}}$ under the weak ${M_{1}}$ topology, i.e.
(19)
\[\begin{aligned}{}{W_{n}^{\mathrm{\star }}}(\hspace{0.1667em}\cdot \hspace{0.1667em}):=& \bigg({\bigg({a_{n}^{-1}}{\underset{i=1}{\overset{\lfloor n\hspace{0.1667em}\cdot \rfloor }{\bigvee }}}{Z_{i}^{(j)+}},{a_{n}^{-1}}{\underset{i=1}{\overset{\lfloor n\hspace{0.1667em}\cdot \rfloor }{\bigvee }}}{Z_{i}^{(j)-}}{\bigg)_{j=1,\dots ,d}^{\ast }}\bigg)_{k=1,\dots ,d}}\\ {} \xrightarrow{d}& W(\hspace{0.1667em}\cdot \hspace{0.1667em}):=\bigg({\bigg(\underset{{T_{i}}\le \hspace{0.1667em}\cdot }{\bigvee }{P_{i}}{Q_{i}^{(j)+}},\underset{{T_{i}}\le \hspace{0.1667em}\cdot }{\bigvee }{P_{i}}{Q_{i}^{(j)-}}{\bigg)_{j=1,\dots ,d}^{\ast }}\bigg)_{k=1,\dots ,d}}.\end{aligned}\]
By the same arguments as in the proof of Proposition 3.2 in [9] we obtain that ${D_{\uparrow }^{2{d^{2}}}}$ with the weak ${M_{1}}$ topology is a Polish space, and hence by Corollary 5.18 in [4] we can find a random vector ${({({\widetilde{D}_{+}^{k,j}},{\widetilde{D}_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}$, independent of W, such that
\[ {({({\widetilde{D}_{+}^{k,j}},{\widetilde{D}_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}\stackrel{d}{=}{({({D_{+}^{k,j}},{D_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}.\]
This, relation (19) and the fact that ${({({D_{+}^{k,j}},{D_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}$ is independent of ${W_{n}^{\mathrm{\star }}}$, by an application of Theorem 3.29 in [4], imply that
(20)
\[ (B,{W_{n}^{\mathrm{\star }}})\xrightarrow{d}(\widetilde{B},W)\hspace{1em}\mathrm{as}\hspace{2.5pt}n\to \infty \]
in ${D_{\uparrow }^{4{d^{2}}}}$ with the product ${M_{1}}$ topology, where $B={({({B_{+}^{k,j}},{B_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}$ and $\widetilde{B}={({({\widetilde{B}_{+}^{k,j}},{\widetilde{B}_{-}^{k,j}})_{j=1,\dots ,d}^{\ast }})_{k=1,\dots ,d}}$ are random elements in ${D_{\uparrow }^{2{d^{2}}}}$ such that ${B_{+}^{k,j}}(t)={D_{+}^{k,j}}$, ${B_{-}^{k,j}}(t)={D_{-}^{k,j}}$, ${\widetilde{B}_{+}^{k,j}}(t)={\widetilde{D}_{+}^{k,j}}$ and ${\widetilde{B}_{-}^{k,j}}(t)={\widetilde{D}_{-}^{k,j}}$ for $t\in [0,1]$.
A multivariate version of Lemma 2.1 in [9] implies that the function $g:{D_{\uparrow }^{4{d^{2}}}}\to {D_{\uparrow }^{2{d^{2}}}}$ defined by
\[ g(x)=({x^{(1)}}{x^{(2{d^{2}}+1)}},{x^{(2)}}{x^{(2{d^{2}}+2)}},\dots ,{x^{(2{d^{2}})}}{x^{(4{d^{2}})}})\]
for $x=({x^{(1)}},\dots ,{x^{(4{d^{2}})}})\in {D_{\uparrow }^{4{d^{2}}}}$, is continuous in the weak ${M_{1}}$ topology on the set of all functions in ${D_{\uparrow }^{4{d^{2}}}}$ for which the first $2{d^{2}}$ component functions have no discontinuity points, and this yields $\operatorname{P}[(\widetilde{B},W)\in \mathrm{Disc}(g)]=0$, where $\mathrm{Disc}(g)$ denotes the set of discontinuity points of g. A multivariate version of Lemma 2.2 in [9] shows that the function $h:{D_{\uparrow }^{2{d^{2}}}}\to {D_{\uparrow }^{d}}$, defined by
\[ h(x)=\bigg({\underset{i=1}{\overset{2d}{\bigvee }}}{x^{(i)}},{\underset{i=2d+1}{\overset{4d}{\bigvee }}}{x^{(i)}},\dots ,{\underset{i=2(d-1)d+1}{\overset{2{d^{2}}}{\bigvee }}}{x^{(i)}}\bigg)\]
for $x={({x^{(i)}})_{i=1,\dots ,2{d^{2}}}}\in {D_{\uparrow }^{2{d^{2}}}}$, is continuous when both spaces ${D_{\uparrow }^{2{d^{2}}}}$ and ${D_{\uparrow }^{d}}$ are endowed with the weak ${M_{1}}$ topology. Therefore, the continuous mapping theorem applied to the convergence in (20) yields $(h\circ g)(B,{W_{n}^{\mathrm{\star }}})\xrightarrow{d}(h\circ g)(\widetilde{B},W)$ as $n\to \infty $, i.e.
\[\begin{aligned}{}& {\bigg({\underset{i=1}{\overset{\lfloor n\hspace{0.1667em}\cdot \rfloor }{\bigvee }}}{\underset{j=1}{\overset{d}{\bigvee }}}\frac{{D_{+}^{k,j}}{Z_{i}^{(j)+}}\vee {D_{-}^{k,j}}{Z_{i}^{(j)-}}}{{a_{n}}}\bigg)_{k=1,\dots ,d}}\\ {} & \hspace{1em}\xrightarrow{d}{\bigg({\underset{{T_{i}}\le \hspace{0.1667em}\cdot }{\overset{}{\bigvee }}}{\underset{j=1}{\overset{d}{\bigvee }}}({\widetilde{D}_{+}^{k,j}}{P_{i}}{Q_{i}^{(j)+}}\hspace{0.1667em}\vee \hspace{0.1667em}{\widetilde{D}_{-}^{k,j}}{P_{i}}{Q_{i}^{(j)-}})\bigg)_{k=1,\dots ,d}}\end{aligned}\]
in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology. Note that $(h\circ g)(B,{W_{n}^{\mathrm{\star }}})$ is equal to ${W_{n}}$.
To finish the proof, it remains to show that $(h\circ g)(\widetilde{B},W)$ is equal to the limiting process in relation (18). By an application of Propositions 5.2 and 5.3 in [14] we obtain that for every $j=1,\dots ,d$ the point process ${\textstyle\sum _{i}}{\delta _{({T_{i}},{P_{i}}{Q_{i}^{(j)+}})}}$ is a Poisson process with intensity measure $\mathit{Leb}\times {\nu _{j+}}$, and hence ${M^{(j+)}}(t):={\textstyle\bigvee _{{T_{i}}\le t}}{P_{i}}{Q_{i}^{(j)+}}$ is an extremal process with exponent measure ${\nu _{j+}}$ (see Section 4.3 in [13]; and [14], p. 161). Analogously, ${M^{(j-)}}(t):={\textstyle\bigvee _{{T_{i}}\le t}}{P_{i}}{Q_{i}^{(j)-}}$ is an extremal process with exponent measure ${\nu _{j-}}$, and hence
\[ (h\circ g)(\widetilde{B},W)(t)=\Big({\underset{j=1}{\overset{d}{\bigvee }}}{\Big({\widetilde{D}_{+}^{k,j}}{M^{(j+)}}(t)\vee {\widetilde{D}_{-}^{k,j}}{M^{(j-)}}(t)\Big)\Big)_{k=1,\dots ,d}},\hspace{1em}t\in [0,1].\]
 □
The proof of the next theorem relies on the proof of Theorem 3.3 in [9], where functional convergence of the partial maxima process is established for univariate linear processes with i.i.d. innovations and random coefficients. We omit the details of those parts of the proof that remain the same in our case, and show how to handle the parts that differ due to the multivariate setting and the weak dependence of the innovations.
Theorem 3.3.
Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$ that satisfy (7) and (8), and let ${C_{0}},{C_{1}},\dots ,{C_{m}}$ be random $d\times d$ matrices independent of $({Z_{i}})$. Assume Condition 2.1 holds. Then ${M_{n}}\xrightarrow{d}M$ as $n\to \infty $ in ${D_{\uparrow }^{d}}$ endowed with the weak ${M_{1}}$ topology.
Proof.
Let ${W_{n}}$ be as defined in Proposition 3.1. If we show that for every $\delta \gt 0$,
\[ \underset{n\to \infty }{\lim }\operatorname{P}[{d_{p}}({W_{n}},{M_{n}})\gt \delta ]=0,\]
then from Proposition 3.1 by an application of Slutsky’s theorem (see Theorem 3.4 in [14]) it will follow that ${M_{n}}\xrightarrow{d}M$ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology. Taking into account (6) we need to show
\[ \underset{n\to \infty }{\lim }\operatorname{P}[{d_{{M_{1}}}}({W_{n}^{(j)}},{M_{n}^{(j)}})\gt \delta ]=0,\]
for every $j=1,\dots ,d$, but it is enough to prove the last relation only for $j=1$ (since the proof is analogous for all coordinates j). In fact, it suffices to show
(21)
\[ \underset{n\to \infty }{\lim }\operatorname{P}[{d_{{M_{2}}}}({W_{n}^{(1)}},{M_{n}^{(1)}})\gt \delta ]=0,\]
since for $x,y\in {D_{\uparrow }^{1}}$ it holds that ${d_{{M_{2}}}}(x,y)={d_{{M_{1}}}^{\ast }}(x,y)$, where ${d_{{M_{1}}}^{\ast }}$ is a complete metric topologically equivalent to ${d_{{M_{1}}}}$ (see Remark 12.8.1 in [17]; and [9], page 247).
In order to show (21), fix $\delta \gt 0$ and let $n\in \mathbb{N}$ be large enough, i.e. $n\gt \max \{2m,2m/\delta \}$. By the definition of the metric ${d_{{M_{2}}}}$ we have
\[\begin{aligned}{}{d_{{M_{2}}}}({W_{n}^{(1)}},{M_{n}^{(1)}})& =\bigg(\underset{v\in {\Gamma _{{W_{n}^{(1)}}}}}{\sup }\underset{z\in {\Gamma _{{M_{n}^{(1)}}}}}{\inf }d(v,z)\bigg)\vee \bigg(\underset{v\in {\Gamma _{{M_{n}^{(1)}}}}}{\sup }\underset{z\in {\Gamma _{{W_{n}^{(1)}}}}}{\inf }d(v,z)\bigg)\\ {} & =:{R_{n}}\vee {T_{n}}.\end{aligned}\]
Hence
(22)
\[ \operatorname{P}[{d_{{M_{2}}}}({W_{n}^{(1)}},{M_{n}^{(1)}})\gt \delta ]\le \operatorname{P}({R_{n}}\gt \delta )+\operatorname{P}({T_{n}}\gt \delta ).\]
To estimate the first term on the right-hand side of (22), define
\[ {D_{n}}=\{\exists \hspace{0.1667em}v\in {\Gamma _{{W_{n}^{(1)}}}}\hspace{2.5pt}\text{such that}\hspace{2.5pt}d(v,z)\gt \delta \hspace{2.5pt}\text{for every}\hspace{2.5pt}z\in {\Gamma _{{M_{n}^{(1)}}}}\}.\]
Note that $\{{R_{n}}\gt \delta \}\subseteq {D_{n}}$. On the event ${D_{n}}$ it holds that $d(v,{\Gamma _{{M_{n}^{(1)}}}})\gt \delta $. Let $v=({t_{v}},{x_{v}})$. Then as in the proof of Theorem 3.3 in [9], for all $l=0,1,\dots ,m$ it holds that
(23)
\[ \Big|{W_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)-{M_{n}^{(1)}}\Big(\frac{{i^{\ast }}+l}{n}\Big)\Big|\ge d(v,{\Gamma _{{M_{n}^{(1)}}}})\gt \delta \]
with ${i^{\ast }}=\lfloor n{t_{v}}\rfloor $ or ${i^{\ast }}=\lfloor n{t_{v}}\rfloor -1$. Note that ${i^{\ast }}$ is a random index. Let $D={\textstyle\bigvee _{k,j=1,\dots ,d}}({D_{+}^{k,j}}\vee {D_{-}^{k,j}})$. This implies $|{C_{i;k,j}}|\le D$ for all $i\in \{0,\dots ,m\}$ and $k,j\in \{1,\dots ,d\}$. Denote ${\delta ^{\ast }}=\delta /[8(m+1)d]$. We claim that
(24)
\[ {D_{n}}\subseteq {H_{n,1}}\cup {H_{n,2}}\cup {H_{n,3}},\]
where
\[\begin{aligned}{}{H_{n,1}}=\bigg\{& \exists \hspace{0.1667em}l\in \{-m,\dots ,m\}\cup \{n-m+1,\dots ,n\}\hspace{2.5pt}\mathrm{s}.\mathrm{t}.\hspace{2.5pt}\frac{D\| {Z_{l}}\| }{{a_{n}}}\gt {\delta ^{\ast }}\bigg\},\\ {} {H_{n,2}}=\bigg\{& \exists \hspace{0.1667em}k\in \{1,\dots ,n\}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\exists \hspace{0.1667em}l\in \{k-m,\dots ,k+m\}\setminus \{k\}\hspace{2.5pt}\\ {} & \text{such that}\hspace{2.5pt}\frac{D\| {Z_{k}}\| }{{a_{n}}}\gt {\delta ^{\ast }}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\frac{D\| {Z_{l}}\| }{{a_{n}}}\gt {\delta ^{\ast }}\bigg\},\\ {} {H_{n,3}}=\bigg\{& \exists \hspace{0.1667em}k\in \{1,\dots ,n\},\hspace{2.5pt}\exists \hspace{0.1667em}{j_{0}}\in \{1,\dots ,d\}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\exists \hspace{0.1667em}p\in \{1,\dots ,d\}\setminus \{{j_{0}}\}\\ {} & \text{such that}\hspace{2.5pt}\frac{D|{Z_{k}^{({j_{0}})}}|}{{a_{n}}}\gt {\delta ^{\ast }}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\frac{D|{Z_{k}^{(p)}}|}{{a_{n}}}\gt {\delta ^{\ast }}\bigg\}.\end{aligned}\]
Note that relation (24) will be proven if we show that
\[ {\widehat{D}_{n}}:={D_{n}}\cap {({H_{n,1}}\cup {H_{n,2}}\cup {H_{n,3}})^{c}}=\varnothing .\]
Assume the event ${\widehat{D}_{n}}$ occurs. Then necessarily ${W_{n}^{(1)}}({i^{\ast }}/n)\gt {\delta ^{\ast }}$. Indeed, if ${W_{n}^{(1)}}({i^{\ast }}/n)\le {\delta ^{\ast }}$, that is
\[ {\underset{i=1}{\overset{{i^{\ast }}}{\bigvee }}}{\underset{j=1}{\overset{d}{\bigvee }}}{a_{n}^{-1}}\Big({D_{+}^{1,j}}{Z_{i}^{(j)+}}\vee {D_{-}^{1,j}}{Z_{i}^{(j)-}}\Big)={W_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)\le {\delta ^{\ast }},\]
then for every $s\in \{m+1,\dots ,{i^{\ast }}\}$ it holds that
(25)
\[\begin{aligned}{}\frac{{X_{s}^{(1)}}}{{a_{n}}}=& {\sum \limits_{r=0}^{m}}{\sum \limits_{j=1}^{d}}\frac{{C_{r;1,j}}{Z_{s-r}^{(j)}}}{{a_{n}}}\le {\sum \limits_{r=0}^{m}}{\sum \limits_{j=1}^{d}}\frac{{D_{+}^{1,j}}{Z_{s-r}^{(j)+}}\vee {D_{-}^{1,j}}{Z_{s-r}^{(j)-}}}{{a_{n}}}\\ {} \le & {\sum \limits_{r=0}^{m}}{\sum \limits_{j=1}^{d}}\frac{\delta }{8(m+1)d}=\frac{\delta }{8},\end{aligned}\]
since by the definition of ${D_{+}^{1,j}}$ and ${D_{-}^{1,j}}$ we have ${D_{+}^{1,j}}{Z_{s-r}^{(j)+}}\ge 0$, ${D_{-}^{1,j}}{Z_{s-r}^{(j)-}}\ge 0$ and
\[ {C_{r;1,j}}{Z_{s-r}^{(j)}}\le \left\{\begin{array}{c@{\hskip10.0pt}l}{D_{+}^{1,j}}{Z_{s-r}^{(j)+}},& \mathrm{if}\hspace{2.5pt}{C_{r;1,j}}\gt 0\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}{Z_{s-r}^{(j)}}\gt 0,\\ {} {D_{-}^{1,j}}{Z_{s-r}^{(j)-}},& \mathrm{if}\hspace{2.5pt}{C_{r;1,j}}\lt 0\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}{Z_{s-r}^{(j)}}\lt 0,\\ {} 0,& \mathrm{if}\hspace{2.5pt}{C_{r;1,j}}\cdot {Z_{s-r}^{(j)}}\le 0.\end{array}\right.\]
Since the event ${H_{n,1}^{c}}$ occurs, for every $s\in \{1,\dots ,m\}$ we also have
(26)
\[ \frac{|{X_{s}^{(1)}}|}{{a_{n}}}\le {\sum \limits_{r=0}^{m}}{\sum \limits_{j=1}^{d}}|{C_{r;1,j}}|\frac{|{Z_{s-r}^{(j)}}|}{{a_{n}}}\le {\sum \limits_{r=0}^{m}}{\sum \limits_{j=1}^{d}}\frac{D\| {Z_{s-r}}\| }{{a_{n}}}\le (m+1)d\hspace{0.1667em}{\delta ^{\ast }}=\frac{\delta }{8}.\]
Combining (25) and (26) we obtain
\[ -\frac{\delta }{8}\le \frac{{X_{1}^{(1)}}}{{a_{n}}}\le {M_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)={\underset{s=1}{\overset{{i^{\ast }}}{\bigvee }}}\frac{{X_{s}^{(1)}}}{{a_{n}}}\le \frac{\delta }{8},\]
and thus
\[ \Big|{W_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)-{M_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)\Big|\le \Big|{W_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)\Big|+\Big|{M_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)\Big|\le \frac{\delta }{8(m+1)d}+\frac{\delta }{8}\le \frac{\delta }{4},\]
which contradicts (23).
Therefore ${W_{n}^{(1)}}({i^{\ast }}/n)\gt {\delta ^{\ast }}$, and hence there exist $k\in \{1,\dots ,{i^{\ast }}\}$ and ${j_{0}}\in \{1,\dots ,d\}$ such that
\[ {W_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)={a_{n}^{-1}}\Big({D_{+}^{1,{j_{0}}}}{Z_{k}^{({j_{0}})+}}\vee {D_{-}^{1,{j_{0}}}}{Z_{k}^{({j_{0}})-}}\Big)\gt {\delta ^{\ast }}.\]
This implies
\[ \frac{D\| {Z_{k}}\| }{{a_{n}}}=\frac{D}{{a_{n}}}{\underset{j=1}{\overset{d}{\bigvee }}}|{Z_{k}^{(j)}}|\ge \frac{D}{{a_{n}}}|{Z_{k}^{({j_{0}})}}|\ge \frac{1}{{a_{n}}}\Big({D_{+}^{1,{j_{0}}}}{Z_{k}^{({j_{0}})+}}\vee {D_{-}^{1,{j_{0}}}}{Z_{k}^{({j_{0}})-}}\Big)\gt {\delta ^{\ast }}.\]
From this, since ${H_{n,1}^{c}}\cap {H_{n,2}^{c}}\cap {H_{n,3}^{c}}$ occurs, it follows that $m+1\le k\le n-m$,
(27)
\[ \frac{D\| {Z_{l}}\| }{{a_{n}}}\le {\delta ^{\ast }}\hspace{1em}\text{for all}\hspace{2.5pt}l\in \{k-m,\dots ,k+m\}\setminus \{k\},\]
and
(28)
\[ \frac{D|{Z_{k}^{(p)}}|}{{a_{n}}}\le {\delta ^{\ast }}\hspace{1em}\text{for all}\hspace{2.5pt}p\in \{1,\dots ,d\}\setminus \{{j_{0}}\}.\]
As in the proof of Theorem 3.3 in [9], one can show that ${M_{n}^{(1)}}({i^{\ast }}/n)={X_{j}^{(1)}}/{a_{n}}$ for some $j\in \{1,\dots ,{i^{\ast }}\}\setminus \{k,\dots ,k+m\}$. Now we have four cases:
  • (A1) all random vectors ${Z_{j-m}},\dots ,{Z_{j}}$ are “small”,
  • (A2) exactly one is “large” with exactly one “large” component,
  • (A3) exactly one is “large” with at least two “large” components,
  • (A4) at least two of them are “large”,
where we say Z is “large” if $D\| Z\| /{a_{n}}\gt {\delta ^{\ast }}$, otherwise it is “small”, and similarly the component ${Z^{(s)}}$ is “large” if $D|{Z^{(s)}}|/{a_{n}}\gt {\delta ^{\ast }}$.
Following the arguments from [9], adjusted to the multivariate setting, it can be shown that the cases (A1) and (A2) are not possible (see the arXiv preprint [6] for details). The case (A3) is not possible on the event ${H_{n,3}^{c}}$, and the case (A4) is not possible on the event ${H_{n,2}^{c}}$. Since none of the four cases (A1)–(A4) is possible, we conclude that ${\widehat{D}_{n}}=\varnothing $, and hence (24) holds.
The next step is to show that $\operatorname{P}({H_{n,k}})\to 0$ as $n\to \infty $ for $k=1,2,3$. By stationarity we have $\operatorname{P}({H_{n,1}})\le (3m+1)\operatorname{P}(D\| {Z_{1}}\| \gt {\delta ^{\ast }}{a_{n}})$, and therefore
(29)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({H_{n,1}})=0.\]
As for ${H_{n,2}}$ we have
\[\begin{aligned}{}\operatorname{P}({H_{n,2}}\cap \{D\le c\})\le & {\sum \limits_{k=1}^{n}}{\sum \limits_{\begin{array}{c}l=k-m\\ {} l\ne k\end{array}}^{k+m}}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\operatorname{P}\bigg(\frac{D\| {Z_{k}}\| }{{a_{n}}}\gt {\delta ^{\ast }},\hspace{0.1667em}\frac{D\| {Z_{l}}\| }{{a_{n}}}\gt {\delta ^{\ast }},\hspace{0.1667em}D\le c\bigg)\\ {} \le & 2n{\sum \limits_{i=1}^{m}}\operatorname{P}\bigg(\frac{\| {Z_{0}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c},\hspace{0.1667em}\frac{\| {Z_{i}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg)\\ {} \le & 2{\sum \limits_{i=1}^{m}}n\operatorname{P}\bigg(\frac{\| {Z_{0}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg)\frac{\operatorname{P}\Big(\frac{\| {Z_{0}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c},\hspace{0.1667em}\frac{\| {Z_{i}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\Big)}{\operatorname{P}\Big(\frac{\| {Z_{0}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\Big)}\end{aligned}\]
for an arbitrary $c\gt 0$. Therefore regular variation and the asymptotic independence condition (7) yield ${\lim \nolimits_{n\to \infty }}\operatorname{P}({H_{n,2}}\cap \{D\le c\})=0$, and this implies
\[ \underset{n\to \infty }{\limsup }\operatorname{P}({H_{n,2}})\le \underset{n\to \infty }{\limsup }\operatorname{P}({H_{n,2}}\cap \{D\gt c\})\le \operatorname{P}(D\gt c).\]
Letting $c\to \infty $ we conclude
(30)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({H_{n,2}})=0.\]
By the definition of the set ${H_{n,3}}$ and stationarity it holds that
\[\begin{aligned}{}& \operatorname{P}({H_{n,3}}\cap \{D\le c\})\le {\sum \limits_{k=1}^{n}}{\sum \limits_{\begin{array}{c}l,s=1\\ {} l\ne s\end{array}}^{d}}\operatorname{P}\bigg(\frac{D|{Z_{k}^{(l)}}|}{{a_{n}}}\gt {\delta ^{\ast }},\hspace{0.1667em}\frac{D|{Z_{k}^{(s)}}|}{{a_{n}}}\gt {\delta ^{\ast }},\hspace{0.1667em}D\le c\bigg)\\ {} & \hspace{1em}\le {\sum \limits_{\begin{array}{c}l,s=1\\ {} l\ne s\end{array}}^{d}}n\operatorname{P}\bigg(\frac{|{Z_{1}^{(s)}}|}{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg)\operatorname{P}\bigg(\frac{|{Z_{1}^{(l)}}|}{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\hspace{0.1667em}\bigg|\hspace{0.1667em}\frac{|{Z_{1}^{(s)}}|}{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg)\\ {} & \hspace{1em}\le {\sum \limits_{\begin{array}{c}l,s=1\\ {} l\ne s\end{array}}^{d}}n\operatorname{P}\bigg(\frac{\| {Z_{1}}\| }{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg)\operatorname{P}\bigg(\frac{|{Z_{1}^{(l)}}|}{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\hspace{0.1667em}\bigg|\hspace{0.1667em}\frac{|{Z_{1}^{(s)}}|}{{a_{n}}}\gt \frac{{\delta ^{\ast }}}{c}\bigg),\end{aligned}\]
and hence regular variation and condition (8) yield
(31)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({H_{n,3}})=0.\]
Now from relations (24) and (29)–(31) we obtain ${\lim \nolimits_{n\to \infty }}\operatorname{P}({D_{n}})=0$, and since $\{{R_{n}}\gt \delta \}\subseteq {D_{n}}$, we conclude that
(32)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({R_{n}}\gt \delta )=0.\]
Interchanging the roles of ${M_{n}^{(1)}}$ and ${W_{n}^{(1)}}$ in handling the event ${D_{n}}$, and using the arguments from the proof of Theorem 3.3 in [9], adjusted to the multivariate setting, we can show
(33)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({T_{n}}\gt \delta )=0\]
(for details see the arXiv preprint [6]). Now from (22), (32) and (33) we obtain (21), which means that ${M_{n}}\xrightarrow{d}M$ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology.  □

4 Infinite order linear processes

Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$, and ${({C_{i}})_{i\ge 0}}$ a sequence of random $d\times d$ matrices independent of $({Z_{i}})$ such that the series defining the linear process
(34)
\[ {X_{i}}={\sum \limits_{j=0}^{\infty }}{C_{j}}{Z_{i-j}},\hspace{1em}i\in \mathbb{Z},\]
is a.s. convergent. For $k,j\in \{1,\dots ,d\}$ let
\[ {D_{+}^{k,j}}=\max \{{C_{i;k,j}^{+}}:i\ge 0\}\hspace{1em}\mathrm{and}\hspace{1em}{D_{-}^{k,j}}=\max \{{C_{i;k,j}^{-}}:i\ge 0\},\]
where ${C_{i;k,j}}$ is the $(k,j)$th entry of the matrix ${C_{i}}$. Let ${M_{n}}$ be the partial maxima process as defined in (16), and M the limiting process from Proposition 3.1.
To obtain functional convergence of the partial maxima process for infinite order linear processes, we first approximate them by a sequence of finite order linear processes, for which Theorem 3.3 holds, and then show that the approximation error is negligible in the limit with respect to the weak ${M_{1}}$ topology. In this case, besides the conditions from Theorem 3.3 for finite order linear processes, we will also need some moment conditions on the sequence of coefficients.
Theorem 4.1.
Let ${({Z_{i}})_{i\in \mathbb{Z}}}$ be a strictly stationary sequence of regularly varying ${\mathbb{R}^{d}}$-valued random vectors with index $\alpha \gt 0$ that satisfy (7) and (8), and let ${({C_{i}})_{i\ge 0}}$ be a sequence of random $d\times d$ matrices independent of $({Z_{i}})$. Assume Condition 2.1 holds and suppose
(35)
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}{\textstyle\textstyle\sum _{j=0}^{\infty }}\mathrm{E}(\| {C_{j}}{\| ^{\delta }}+\| {C_{j}}{\| ^{\gamma }})\lt \infty ,& \mathit{if}\hspace{2.5pt}\alpha \in (0,1),\\ {} {\textstyle\textstyle\sum _{j=0}^{\infty }}\mathrm{E}(\| {C_{j}}{\| ^{\delta }}+\| {C_{j}}\| )\lt \infty ,& \mathit{if}\hspace{2.5pt}\alpha =1,\\ {} {\textstyle\textstyle\sum _{j=0}^{\infty }}\mathrm{E}\| {C_{j}}\| \lt \infty ,& \mathit{if}\hspace{2.5pt}\alpha \gt 1,\end{array}\right.\]
for some $\delta \in (0,\alpha )$ and $\gamma \in (\alpha ,1)$. Then ${M_{n}}\xrightarrow{d}M$ as $n\to \infty $ in ${D_{\uparrow }^{d}}$ endowed with the weak ${M_{1}}$ topology.
Proof.
For $m\in \mathbb{N}$, $m\ge 2$, define
\[ {X_{i}^{m}}={\sum \limits_{j=0}^{m-2}}{C_{j}}{Z_{i-j}}+{C^{(m,\vee )}}{Z_{i-m+1}}+{C^{(m,\wedge )}}{Z_{i-m}},\hspace{1em}i\in \mathbb{Z},\]
and
\[ {M_{n,m}}(t)=\left\{\begin{array}{c@{\hskip10.0pt}c}{a_{n}^{-1}}{\underset{i=1}{\overset{\lfloor nt\rfloor }{\displaystyle \bigvee }}}{X_{i}^{m}},& t\in \Big[\displaystyle \frac{1}{n},1\Big],\\ {} {a_{n}^{-1}}{X_{1}^{m}},& t\in \Big[0,\displaystyle \frac{1}{n}\Big),\end{array}\right.\]
where ${C^{(m,\vee )}}=\max \{{C_{i}}:i\ge m-1\}$ and ${C^{(m,\wedge )}}=\min \{{C_{i}}:i\ge m-1\}$, with the maximum and minimum of matrices interpreted componentwise, i.e. the $(k,j)$th entry of the matrix ${C^{(m,\vee )}}$ is ${C_{k,j}^{(m,\vee )}}=\max \{{C_{i;k,j}}:i\ge m-1\}$, and the $(k,j)$th entry of the matrix ${C^{(m,\wedge )}}$ is ${C_{k,j}^{(m,\wedge )}}=\min \{{C_{i;k,j}}:i\ge m-1\}$.
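For concreteness, the componentwise maximum ${C^{(m,\vee )}}$ and minimum ${C^{(m,\wedge )}}$ can be illustrated with a short numerical sketch (not part of the proof; the helper name and the two sample matrices are hypothetical, and the infinite family $\{{C_{i}}:i\ge m-1\}$ is truncated to finitely many matrices purely for illustration):

```python
import numpy as np

def componentwise_max_min(matrices):
    """Return the entrywise maximum and minimum over a list of d x d matrices."""
    stack = np.stack(matrices)            # shape (num_matrices, d, d)
    return stack.max(axis=0), stack.min(axis=0)

# Hypothetical tail coefficients C_{m-1}, C_m (truncated tail)
tail = [np.array([[1.0, -2.0], [0.0, 0.5]]),
        np.array([[0.5,  1.0], [-1.0, 0.25]])]
C_vee, C_wedge = componentwise_max_min(tail)
# C_vee  = [[1.0, 1.0], [0.0, 0.5]]  (entrywise max)
# C_wedge = [[0.5, -2.0], [-1.0, 0.25]]  (entrywise min)
```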
For $k,j\in \{1,\dots ,d\}$ define
\[ {D_{+}^{m,k,j}}=\bigg({\underset{i=0}{\overset{m-2}{\bigvee }}}{C_{i;k,j}^{+}}\bigg)\vee {C_{k,j}^{(m,\vee )+}}\vee {C_{k,j}^{(m,\wedge )+}}\]
and
\[ {D_{-}^{m,k,j}}=\Big({\underset{i=0}{\overset{m-2}{\bigvee }}}{C_{i;k,j}^{-}}\Big)\vee {C_{k,j}^{(m,\vee )-}}\vee {C_{k,j}^{(m,\wedge )-}}.\]
Then ${D_{+}^{m,k,j}}={D_{+}^{k,j}}$ and ${D_{-}^{m,k,j}}={D_{-}^{k,j}}$, and therefore, applying Theorem 3.3 to the sequence of finite order linear processes ${({X_{i}^{m}})_{i}}$, we obtain ${M_{n,m}}\xrightarrow{d}M$ as $n\to \infty $ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology.
If we show that for every $\epsilon \gt 0$
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\operatorname{P}[{d_{p}}({M_{n}},{M_{n,m}})\gt \epsilon ]=0,\]
then by a generalization of Slutsky’s theorem (see Theorem 3.5 in [14]) it will follow that ${M_{n}}\xrightarrow{d}M$ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology. Taking into account (6) and the fact that the metric ${d_{{M_{1}}}}$ on ${D_{\uparrow }^{1}}$ is bounded above by the uniform metric, it suffices to show that
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\operatorname{P}\bigg(\underset{0\le t\le 1}{\sup }|{M_{n}^{(j)}}(t)-{M_{n,m}^{(j)}}(t)|\gt \epsilon \bigg)=0,\]
for every $j=1,\dots ,d$, and further, as in the proof of Theorem 3.3, it is enough to show the last relation only for $j=1$. Denote by ${J_{n,m}}$ the probability in the last relation (for $j=1$). Now we treat separately the cases $\alpha \in (0,1)$ and $\alpha \in [1,\infty )$.
Case $\alpha \in (0,1)$. Recalling the definitions, we have
(36)
\[ {J_{n,m}}\le \operatorname{P}\bigg({\underset{i=1}{\overset{n}{\bigvee }}}\frac{|{X_{i}^{(1)}}-{X_{i}^{m(1)}}|}{{a_{n}}}\gt \epsilon \bigg)\le \operatorname{P}\bigg({\sum \limits_{i=1}^{n}}\frac{|{X_{i}^{(1)}}-{X_{i}^{m(1)}}|}{{a_{n}}}\gt \epsilon \bigg).\]
As in the univariate case treated in [9], we obtain
\[\begin{aligned}{}{X_{i}^{(1)}}-{X_{i}^{m(1)}}=& {\sum \limits_{j=1}^{d}}\bigg({\sum \limits_{k=m+1}^{\infty }}{C_{k;1,j}}{Z_{i-k}^{(j)}}+({C_{m-1;1,j}}-{C_{1,j}^{(m,\vee )}}){Z_{i-m+1}^{(j)}}\\ {} & +({C_{m;1,j}}-{C_{1,j}^{(m,\wedge )}}){Z_{i-m}^{(j)}}\bigg),\end{aligned}\]
and
\[\begin{aligned}{}& {\sum \limits_{i=1}^{n}}|{X_{i}^{(1)}}-{X_{i}^{m(1)}}|\\ {} & \hspace{1em}\le {\sum \limits_{j=1}^{d}}\bigg[{\sum \limits_{i=-\infty }^{0}}|{Z_{i-m}^{(j)}}|{\sum \limits_{s=1}^{n}}\| {C_{m-i+s}}\| +\bigg(2{\sum \limits_{l=m-1}^{\infty }}\| {C_{l}}\| \bigg){\sum \limits_{i=1}^{n+1}}|{Z_{i-m}^{(j)}}|\bigg].\end{aligned}\]
Therefore from (36) by applying condition (35) and the multivariate generalization of Lemma 3.2 in [8] (for the proof of this generalization see the arXiv preprint [6]) it follows that ${\lim \nolimits_{m\to \infty }}{\limsup _{n\to \infty }}{J_{n,m}}=0$, which means that ${M_{n}}\xrightarrow{d}M$ as $n\to \infty $ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology.
Case $\alpha \in [1,\infty )$. Define
\[ {A_{k,j}}=\left\{\begin{array}{l@{\hskip10.0pt}l}{C_{k;1,j}}-{C_{1,j}^{(m,\vee )}},& \mathrm{if}\hspace{2.5pt}k=m-1,\\ {} {C_{k;1,j}}-{C_{1,j}^{(m,\wedge )}},& \mathrm{if}\hspace{2.5pt}k=m,\\ {} {C_{k;1,j}},& \mathrm{if}\hspace{2.5pt}k\ge m+1,\end{array}\right.\]
for $k\ge m-1$ and $j\in \{1,\dots ,d\}$. Then using the representation of ${X_{i}^{(1)}}-{X_{i}^{m(1)}}$ obtained in the previous case we get
\[ |{M_{n}^{(1)}}(t)-{M_{n,m}^{(1)}}(t)|\le {\underset{i=1}{\overset{n}{\bigvee }}}\frac{|{X_{i}^{(1)}}-{X_{i}^{m(1)}}|}{{a_{n}}}={\underset{i=1}{\overset{n}{\bigvee }}}{\sum \limits_{j=1}^{d}}\bigg|{\sum \limits_{k=m-1}^{\infty }}{A_{k,j}}\frac{{Z_{i-k}^{(j)}}}{{a_{n}}}\bigg|\]
for every $t\in [0,1]$. This, (35) and Lemma 5.2 in the arXiv preprint [6] yield ${\lim \nolimits_{m\to \infty }}{\limsup _{n\to \infty }}{J_{n,m}}=0$. Thus in this case also ${M_{n}}\xrightarrow{d}M$ as $n\to \infty $ in ${D_{\uparrow }^{d}}$ with the weak ${M_{1}}$ topology.  □
Remark 4.2.
When the sequence of coefficients $({C_{i}})$ is deterministic, the limiting process M in Theorem 4.1 has the representation
\[ M(t)=\underset{{T_{i}}\le t}{\bigvee }{P_{i}}{S_{i}},\hspace{1em}t\in [0,1],\]
where ${S_{i}}=({S_{i}^{(1)}},\dots ,{S_{i}^{(d)}})$, with ${S_{i}^{(k)}}={\textstyle\bigvee _{j=1}^{d}}({D_{+}^{k,j}}{Q_{i}^{(j)+}}\vee {D_{-}^{k,j}}{Q_{i}^{(j)-}})$ for $k=1,\dots ,d$. It is an extremal process with exponent measure ρ, where for ${x\in [0,\infty )^{d}}$, $x\ne 0$,
\[ \rho ({[[0,x]]^{c}})={\int _{0}^{\infty }}\operatorname{P}\bigg(y{\underset{k=1}{\overset{d}{\bigvee }}}\frac{{S_{1}^{(k)}}}{{x^{(k)}}}\gt 1\bigg)\hspace{0.1667em}\alpha {y^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}y.\]
Remark 4.3.
A special case of multivariate linear processes studied in this paper is
\[ {X_{i}}={\sum \limits_{j=0}^{\infty }}{B_{j}}{Z_{i-j}},\hspace{1em}i\in \mathbb{Z},\]
where ${({B_{i}})_{i\ge 0}}$ is a sequence of random variables independent of $({Z_{i}})$. To obtain this linear process from the general one in (34) take
\[ {C_{i;k,j}}=\left\{\begin{array}{l@{\hskip10.0pt}l}{B_{i}},& \mathrm{if}\hspace{2.5pt}k=j,\\ {} 0,& \mathrm{if}\hspace{2.5pt}k\ne j,\end{array}\right.\]
for $i\ge 0$ and $k,j\in \{1,\dots ,d\}$. In this case, under the conditions from Theorem 4.1 the limiting process M reduces to
\[\begin{aligned}{}M(t)=& {\Big({\widetilde{D}_{+}^{k,k}}{M^{(k+)}}(t)\vee {\widetilde{D}_{-}^{k,k}}{M^{(k-)}}(t)\Big)_{k=1,\dots ,d}}\\ {} =& {\Big({\widetilde{B}_{+}}{M^{(k+)}}(t)\vee {\widetilde{B}_{-}}{M^{(k-)}}(t)\Big)_{k=1,\dots ,d}}\end{aligned}\]
for $t\in [0,1]$, where $({\widetilde{B}_{+}},{\widetilde{B}_{-}})$ is a two-dimensional random vector, independent of ${({M^{(k+)}},{M^{(k-)}})_{k=1,\dots ,d}^{\ast }}$, such that $({\widetilde{B}_{+}},{\widetilde{B}_{-}})\stackrel{d}{=}({\textstyle\bigvee _{i\ge 0}}{B_{i}^{+}},{\textstyle\bigvee _{i\ge 0}}{B_{i}^{-}})$. By an application of Propositions 5.2 and 5.3 in [14] we can represent M in the form
\[ M(t)={\widetilde{B}_{+}}{M_{+}}(t)\vee {\widetilde{B}_{-}}{M_{-}}(t),\hspace{1em}t\in [0,1],\]
where ${M_{+}}(t)={({M^{(k+)}})_{k=1,\dots ,d}}$ and ${M_{-}}(t)={({M^{(k-)}})_{k=1,\dots ,d}}$ are extremal processes with exponent measures ${\nu _{+}}$ and ${\nu _{-}}$ respectively, where for ${x\in [0,\infty )^{d}}$, $x\ne 0$,
\[ {\nu _{+}}({[[0,x]]^{c}})={\int _{0}^{\infty }}\operatorname{P}\bigg(y{\underset{k=1}{\overset{d}{\bigvee }}}\frac{{Q_{1}^{(k)+}}}{{x^{(k)}}}\gt 1\bigg)\hspace{0.1667em}\alpha {y^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}y\]
and
\[ {\nu _{-}}({[[0,x]]^{c}})={\int _{0}^{\infty }}\operatorname{P}\bigg(y{\underset{k=1}{\overset{d}{\bigvee }}}\frac{{Q_{1}^{(k)-}}}{{x^{(k)}}}\gt 1\bigg)\hspace{0.1667em}\alpha {y^{-\alpha -1}}\hspace{0.1667em}\mathrm{d}y.\]
In the following example we show that the functional convergence in the weak ${M_{1}}$ topology in Theorems 3.3 and 4.1 in general cannot be replaced by convergence in the stronger standard ${M_{1}}$ topology.
Example 4.4.
Let ${({T_{i}})_{i\in \mathbb{Z}}}$ be a sequence of i.i.d. unit Fréchet random variables, i.e. $\operatorname{P}({T_{i}}\le x)={e^{-1/x}}$ for $x\gt 0$. Take a sequence of positive real numbers $({a_{n}})$ such that $n\operatorname{P}({T_{1}}\gt {a_{n}})\to 1/2$ as $n\to \infty $; for instance, we can take ${a_{n}}=2n$. Let
\[ {Z_{i}}=({T_{2i-1}},{T_{2i}}),\hspace{1em}i\in \mathbb{Z}.\]
Then it follows easily that $n\operatorname{P}(\| {Z_{1}}\| \gt {a_{n}})\to 1$ as $n\to \infty $. It is straightforward to see that the random process ${({Z_{i}})_{i\in \mathbb{Z}}}$ satisfies all conditions of Theorem 3.3, and hence the partial maxima processes ${M_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ of the linear process
\[ {X_{i}}={C_{0}}{Z_{i}}+{C_{1}}{Z_{i-1}},\hspace{1em}i\in \mathbb{Z},\]
with
\[ {C_{0}}=\left(\begin{array}{c@{\hskip10.0pt}c}1& 1\\ {} 0& 0\end{array}\right)\hspace{1em}\mathrm{and}\hspace{1em}{C_{1}}=\left(\begin{array}{c@{\hskip10.0pt}c}0& 0\\ {} 1& 1\end{array}\right),\]
converge in distribution in ${D_{\uparrow }^{2}}$ with the weak ${M_{1}}$ topology.
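The normalization claims above can be checked numerically from the exact Fréchet distribution function (a sanity check, not part of the argument; the variable names are illustrative):

```python
import math

def frechet_tail(x):
    """P(T > x) for a unit Frechet variable, with P(T <= x) = exp(-1/x)."""
    return 1.0 - math.exp(-1.0 / x)

n = 10**6
a_n = 2 * n
# n * P(T_1 > a_n) should be close to 1/2 for a_n = 2n
val1 = n * frechet_tail(a_n)
# ||Z_1|| = max(T_1, T_2) with T_1, T_2 independent, so
# P(||Z_1|| > a_n) = 1 - P(T <= a_n)^2, and n * P(||Z_1|| > a_n) -> 1
val2 = n * (1.0 - math.exp(-1.0 / a_n) ** 2)
```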
Next we show that ${M_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ do not converge in distribution under the standard ${M_{1}}$ topology on ${D_{\uparrow }^{2}}$. This shows that the weak ${M_{1}}$ topology in Theorems 3.3 and 4.1 in general cannot be replaced by the standard ${M_{1}}$ topology. Let
\[ {V_{n}}(t)={M_{n}^{(1)}}(t)-{M_{n}^{(2)}}(t),\hspace{1em}t\in [0,1],\]
where
\[ {M_{n}^{(1)}}(t)={\underset{i=1}{\overset{\lfloor nt\rfloor }{\bigvee }}}\frac{{Z_{i}^{(1)}}+{Z_{i}^{(2)}}}{{a_{n}}}={\underset{i=1}{\overset{\lfloor nt\rfloor }{\bigvee }}}\frac{{T_{2i-1}}+{T_{2i}}}{{a_{n}}}\]
and
\[ {M_{n}^{(2)}}(t)={\underset{i=1}{\overset{\lfloor nt\rfloor }{\bigvee }}}\frac{{Z_{i-1}^{(1)}}+{Z_{i-1}^{(2)}}}{{a_{n}}}={\underset{i=1}{\overset{\lfloor nt\rfloor }{\bigvee }}}\frac{{T_{2i-3}}+{T_{2i-2}}}{{a_{n}}}.\]
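To see concretely how the difference ${V_{n}}={M_{n}^{(1)}}-{M_{n}^{(2)}}$ oscillates, here is a toy sketch with hypothetical numbers: a single large value at an even index ${i^{\prime }}$ makes ${V_{n}}$ spike at ${i^{\ast }}={i^{\prime }}/2$ and drop back to zero one grid step later, which is exactly the behavior exploited below.

```python
def V_n(T, a_n):
    """V_n(i/n) on the grid i = 1, ..., n, for T given as a dict {j: T_j};
    indices missing from T (such as T_{-1}, T_0) count as 0 in this toy."""
    n = max(T) // 2
    t = lambda j: T.get(j, 0.0)
    M1 = M2 = float("-inf")
    path = []
    for i in range(1, n + 1):
        M1 = max(M1, (t(2 * i - 1) + t(2 * i)) / a_n)      # M_n^(1)(i/n)
        M2 = max(M2, (t(2 * i - 3) + t(2 * i - 2)) / a_n)  # M_n^(2)(i/n)
        path.append(M1 - M2)
    return path

# n = 4, a_n = 2n = 8, small noise 0.1 everywhere, one large value T_4 = 10
T = {j: 0.1 for j in range(1, 9)}
T[4] = 10.0
path = V_n(T, 8.0)   # spike at i* = 2, back to 0 at i* + 1 = 3
```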
The first step is to show that ${V_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ does not converge in distribution in ${D^{1}}$ endowed with the standard ${M_{1}}$ topology. According to [15] (see also Proposition 2 in [1], where the term “weak ${M_{1}}$ convergence” is used for convergence in distribution in the standard ${M_{1}}$ topology) it suffices to show that
(37)
\[ \underset{\delta \to 0}{\lim }\underset{n\to \infty }{\limsup }\operatorname{P}({\omega _{\delta }}({V_{n}})\gt \epsilon )\gt 0\]
for some $\epsilon \gt 0$, where
\[ {\omega _{\delta }}(x)=\underset{\substack{{t_{1}}\le t\le {t_{2}}\\ {} 0\le {t_{2}}-{t_{1}}\le \delta }}{\sup }M(x({t_{1}}),x(t),x({t_{2}}))\]
($x\in {D^{1}}$, $\delta \gt 0)$ and
\[ M({x_{1}},{x_{2}},{x_{3}})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \mathrm{if}\hspace{2.5pt}{x_{2}}\in [{x_{1}},{x_{3}}],\\ {} \min \{|{x_{2}}-{x_{1}}|,|{x_{3}}-{x_{2}}|\},& \mathrm{otherwise}.\end{array}\right.\]
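The oscillation functionals above can be sketched in code; this discrete version evaluates the path only at grid points $k/n$ and, following the standard convention, reads $[{x_{1}},{x_{3}}]$ as the segment between ${x_{1}}$ and ${x_{3}}$ regardless of their order (an assumption on our part, since the displayed formula does not spell this out):

```python
def osc(x1, x2, x3):
    """M(x1, x2, x3): 0 if x2 lies between x1 and x3,
    otherwise the distance from x2 to the nearer endpoint."""
    lo, hi = min(x1, x3), max(x1, x3)     # segment between x1 and x3
    if lo <= x2 <= hi:
        return 0.0
    return min(abs(x2 - x1), abs(x3 - x2))

def omega_delta(values, delta, n):
    """omega_delta for a path sampled at t_k = k/n (a discrete sketch)."""
    best = 0.0
    for i in range(len(values)):
        for k in range(i + 2, len(values)):
            if (k - i) / n > delta:
                break
            for j in range(i + 1, k):
                best = max(best, osc(values[i], values[j], values[k]))
    return best
```

For a path with a single isolated spike, such as `[0, 0, 1, 0, 0]` on a grid with `n = 4`, the oscillation stays bounded away from zero as the window shrinks to two grid steps, mirroring the behavior of ${V_{n}}$ established below.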
Denote by ${i^{\prime }}={i^{\prime }}(n)$ the index at which ${\max _{1\le i\le n-1}}{T_{i}}$ is attained. Fix $\epsilon \gt 0$ and let ${A_{n,\epsilon }}=\{{T_{{i^{\prime }}}}\gt \epsilon {a_{n}}\}$ and
\[ {B_{n,\epsilon }}=\{{T_{{i^{\prime }}}}\gt \epsilon {a_{n}}\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}\exists \hspace{0.1667em}k\in \{-{i^{\prime }}-1,\dots ,3\}\setminus \{0\}\hspace{2.5pt}\text{such that}\hspace{2.5pt}{T_{{i^{\prime }}+k}}\gt \epsilon {a_{n}}/8\}.\]
The regular variation property of ${T_{1}}$ yields $n\operatorname{P}({T_{1}}\gt c{a_{n}})\to {(2c)^{-1}}$ as $n\to \infty $ for $c\gt 0$, and this, together with the fact that $({T_{i}})$ is a sequence of i.i.d. random variables, yields
(38)
\[ \underset{n\to \infty }{\lim }\operatorname{P}({A_{n,\epsilon }})=1-\underset{n\to \infty }{\lim }{\bigg(1-\frac{n\operatorname{P}({T_{1}}\gt \epsilon {a_{n}})}{n}\bigg)^{n-1}}=1-{e^{-{(2\epsilon )^{-1}}}}\]
and
(39)
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }\operatorname{P}({B_{n,\epsilon }})\le \underset{n\to \infty }{\limsup }{\sum \limits_{i=1}^{n-1}}{\sum \limits_{\begin{array}{c}k=-n\\ {} k\ne 0\end{array}}^{3}}\operatorname{P}({T_{i}}\gt \epsilon {a_{n}},{T_{i+k}}\gt \epsilon {a_{n}}/8)\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }\hspace{0.1667em}(n-1)(n+3)\operatorname{P}({T_{1}}\gt \epsilon {a_{n}})\operatorname{P}({T_{1}}\gt \epsilon {a_{n}}/8)=2{\epsilon ^{-2}}.\end{aligned}\]
Note that on the event ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ it holds that ${T_{{i^{\prime }}}}\gt \epsilon {a_{n}}$ and ${T_{{i^{\prime }}+k}}\le \epsilon {a_{n}}/8$ for every $k\in \{-{i^{\prime }}-1,\dots ,3\}\setminus \{0\}$. Now we distinguish two cases.
(i) ${i^{\prime }}$ is an even number. Then ${i^{\prime }}=2{i^{\ast }}$ for some integer ${i^{\ast }}$. Observe that on the set ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ we have
\[ {M_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)=\frac{{T_{{i^{\prime }}-1}}+{T_{{i^{\prime }}}}}{{a_{n}}}\gt \epsilon \hspace{1em}\mathrm{and}\hspace{1em}{M_{n}^{(2)}}\Big(\frac{{i^{\ast }}}{n}\Big)={\underset{i=1}{\overset{{i^{\ast }}}{\bigvee }}}\frac{{T_{2i-3}}+{T_{2i-2}}}{{a_{n}}}\le \frac{\epsilon }{4},\]
and similarly
\[ {M_{n}^{(1)}}\Big(\frac{{i^{\ast }}-1}{n}\Big)\le \frac{\epsilon }{4}\hspace{1em}\mathrm{and}\hspace{1em}{M_{n}^{(2)}}\Big(\frac{{i^{\ast }}-1}{n}\Big)\le \frac{\epsilon }{4}.\]
This implies
\[ {V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big)={M_{n}^{(1)}}\Big(\frac{{i^{\ast }}}{n}\Big)-{M_{n}^{(2)}}\Big(\frac{{i^{\ast }}}{n}\Big)\gt \frac{3\epsilon }{4},\]
and
\[ {V_{n}}\Big(\frac{{i^{\ast }}-1}{n}\Big)={M_{n}^{(1)}}\Big(\frac{{i^{\ast }}-1}{n}\Big)-{M_{n}^{(2)}}\Big(\frac{{i^{\ast }}-1}{n}\Big)\in \Big[-\frac{\epsilon }{4},\frac{\epsilon }{4}\Big].\]
Further, on the set ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ it holds that
\[ {M_{n}^{(1)}}\Big(\frac{{i^{\ast }}+1}{n}\Big)=\frac{{T_{{i^{\prime }}-1}}+{T_{{i^{\prime }}}}}{{a_{n}}}\hspace{1em}\mathrm{and}\hspace{1em}{M_{n}^{(2)}}\Big(\frac{{i^{\ast }}+1}{n}\Big)=\frac{{T_{{i^{\prime }}-1}}+{T_{{i^{\prime }}}}}{{a_{n}}},\]
which yields
\[ {V_{n}}\Big(\frac{{i^{\ast }}+1}{n}\Big)=0.\]
(ii) ${i^{\prime }}$ is an odd number. Then ${i^{\prime }}=2{i^{\ast }}-1$ for some integer ${i^{\ast }}$. Similarly as in the case (i) on the event ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ one obtains
\[ {V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big)\gt \frac{3\epsilon }{4},\hspace{1em}{V_{n}}\Big(\frac{{i^{\ast }}-1}{n}\Big)\in \Big[-\frac{\epsilon }{4},\frac{\epsilon }{4}\Big]\hspace{1em}\mathrm{and}\hspace{1em}{V_{n}}\Big(\frac{{i^{\ast }}+1}{n}\Big)=0.\]
Hence from (i) and (ii) we conclude that on the set ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ it holds that
(40)
\[ \Big|{V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big)-{V_{n}}\Big(\frac{{i^{\ast }}-1}{n}\Big)\Big|\gt \frac{3\epsilon }{4}-\frac{\epsilon }{4}=\frac{\epsilon }{2}\]
and
(41)
\[ \Big|{V_{n}}\Big(\frac{{i^{\ast }}+1}{n}\Big)-{V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big)\Big|\gt \frac{3\epsilon }{4}.\]
Note that on the set ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$ one also has
\[ {V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big)\notin \Big[{V_{n}}\Big(\frac{{i^{\ast }}-1}{n}\Big),{V_{n}}\Big(\frac{{i^{\ast }}+1}{n}\Big)\Big],\]
and therefore taking into account (40) and (41) we obtain
\[ {\omega _{2/n}}({V_{n}})\ge M\Big({V_{n}}\Big(\frac{{i^{\ast }}-1}{n}\Big),{V_{n}}\Big(\frac{{i^{\ast }}}{n}\Big),{V_{n}}\Big(\frac{{i^{\ast }}+1}{n}\Big)\Big)\gt \frac{\epsilon }{2}\]
on the event ${A_{n,\epsilon }}\setminus {B_{n,\epsilon }}$. Therefore, since ${\omega _{\delta }}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ is nondecreasing in δ, it holds that
(42)
\[\begin{aligned}{}\underset{n\to \infty }{\liminf }\operatorname{P}({A_{n,\epsilon }}\setminus {B_{n,\epsilon }})\le & \underset{n\to \infty }{\liminf }\operatorname{P}({\omega _{2/n}}({V_{n}})\gt \epsilon /2)\\ {} \le & \underset{\delta \to 0}{\lim }\underset{n\to \infty }{\limsup }\operatorname{P}({\omega _{\delta }}({V_{n}})\gt \epsilon /2).\end{aligned}\]
Since ${x^{2}}(1-{e^{-{(2x)^{-1}}}})$ tends to infinity as $x\to \infty $, we can find $\epsilon \gt 0$ such that ${\epsilon ^{2}}(1-{e^{-{(2\epsilon )^{-1}}}})\gt 2$, that is $1-{e^{-{(2\epsilon )^{-1}}}}\gt 2{\epsilon ^{-2}}$. For this ϵ, by relations (38) and (39), we have
\[ \underset{n\to \infty }{\lim }\operatorname{P}({A_{n,\epsilon }})\gt \underset{n\to \infty }{\limsup }\operatorname{P}({B_{n,\epsilon }}),\]
i.e.
\[ \underset{n\to \infty }{\liminf }\operatorname{P}({A_{n,\epsilon }}\setminus {B_{n,\epsilon }})\ge \underset{n\to \infty }{\lim }\operatorname{P}({A_{n,\epsilon }})-\underset{n\to \infty }{\limsup }\operatorname{P}({B_{n,\epsilon }})\gt 0.\]
This and (42) imply (37), and hence ${V_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ does not converge in distribution in ${D^{1}}$ with the standard ${M_{1}}$ topology.
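The existence of an $\epsilon $ with ${\epsilon ^{2}}(1-{e^{-{(2\epsilon )^{-1}}}})\gt 2$ used above can also be checked numerically: since ${x^{2}}(1-{e^{-{(2x)^{-1}}}})\sim x/2$ as $x\to \infty $, moderate values already suffice. A quick illustrative check (the function name `g` is not from the paper):

```python
import math

def g(x):
    # g(x) = x^2 * (1 - exp(-1/(2x))); behaves like x/2 for large x
    return x * x * (1.0 - math.exp(-1.0 / (2.0 * x)))

# epsilon = 5 already satisfies g(epsilon) > 2, equivalently
# 1 - exp(-1/(2*eps)) > 2 / eps**2, the form used in the proof.
eps = 5.0
assert g(eps) > 2.0
assert 1.0 - math.exp(-1.0 / (2.0 * eps)) > 2.0 / eps**2
```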
To conclude, if ${M_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ converged in distribution in the standard ${M_{1}}$ topology on ${D_{\uparrow }^{2}}$, and hence also on ${D^{2}}$, then, since linear combinations of the coordinates are continuous in this topology (see Theorems 12.7.1 and 12.7.2 in [17]), the continuous mapping theorem would yield that ${V_{n}}(\hspace{0.1667em}\cdot \hspace{0.1667em})={M_{n}^{(1)}}(\hspace{0.1667em}\cdot \hspace{0.1667em})-{M_{n}^{(2)}}(\hspace{0.1667em}\cdot \hspace{0.1667em})$ converges in distribution in ${D^{1}}$ with the standard ${M_{1}}$ topology, which is impossible, as shown above.

Acknowledgments

The author would like to thank the referee for valuable comments and suggestions, which helped to improve the paper.

References

[1] 
Avram, F., Taqqu, M.: Weak convergence of sums of moving averages in the α-stable domain of attraction. Ann. Probab. 20, 483–503 (1992) MR1143432
[2] 
Basrak, B., Segers, J.: Regularly varying multivariate time series. Stoch. Process. Appl. 119, 1055–1080 (2009) MR2508565. https://doi.org/10.1016/j.spa.2008.05.004
[3] 
Basrak, B., Tafro, A.: A complete convergence theorem for stationary regularly varying multivariate time series. Extremes 19, 549–560 (2016) MR3535966. https://doi.org/10.1007/s10687-016-0253-5
[4] 
Kallenberg, O.: Foundations of Modern Probability. Springer, New York (1997) MR1464694
[5] 
Krizmanić, D.: Functional limit theorems for weakly dependent regularly varying time series. Ph.D. dissertation, University of Zagreb, Croatia, https://www.math.uniri.hr/~dkrizmanic/DKthesis.pdf. Accessed 28 June 2024.
[6] 
Krizmanić, D.: Skorokhod ${M_{1}}$ convergence of maxima of multivariate linear processes with heavy-tailed innovations and random coefficients. arXiv preprint, https://arxiv.org/abs/2208.04054, 2022. Accessed 28 June 2024.
[7] 
Krizmanić, D.: Functional weak convergence of partial maxima processes. Extremes 19, 7–23 (2016) MR3454028. https://doi.org/10.1007/s10687-015-0236-y
[8] 
Krizmanić, D.: Functional convergence for moving averages with heavy tails and random coefficients. ALEA Lat. Am. J. Probab. Math. Stat. 16, 729–757 (2019) MR3949276. https://doi.org/10.30757/alea.v16-26
[9] 
Krizmanić, D.: Maxima of linear processes with heavy-tailed innovations and random coefficients. J. Time Ser. Anal. 43, 238–262 (2022) MR4400293. https://doi.org/10.1111/jtsa.12610
[10] 
Kulik, R., Soulier, P.: Heavy-Tailed Time Series. Springer, New York (2020) MR4174389. https://doi.org/10.1007/978-1-0716-0737-4
[11] 
Lamperti, J.: On extreme order statistics. Ann. Math. Stat. 35, 1726–1737 (1964) MR0170371. https://doi.org/10.1214/aoms/1177700395
[12] 
Mikosch, T., Wintenberger, O.: Extreme Value Theory for Time Series. Models with Power-Law Tails. Springer, New York (2024) MR4823721. https://doi.org/10.1007/978-3-031-59156-3
[13] 
Resnick, S.I.: Extreme Values, Regular Variation, and Point Processes. Springer, New York (1987) MR0900810. https://doi.org/10.1007/978-0-387-75953-1
[14] 
Resnick, S.I.: Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer, New York (2007) MR2271424
[15] 
Skorohod, A.V.: Limit theorems for stochastic processes. Theory Probab. Appl. 1, 261–290 (1956) MR0084897
[16] 
Tyran-Kamińska, M.: Convergence to Lévy stable processes under some weak dependence conditions. Stoch. Process. Appl. 120, 1629–1650 (2010) MR2673968. https://doi.org/10.1016/j.spa.2010.05.010
[17] 
Whitt, W.: Stochastic-Process Limits. Springer, New York (2002) MR1876437

Copyright
© 2025 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Functional limit theorem, multivariate linear process, regular variation, extremal process, M1 topology

MSC2010
60F17, 60G70

Funding
This work has been supported by University of Rijeka research grant uniri-iskusni-prirod-23-98.


