1 Introduction
For each dimension $d\in \mathbb{N}$, consider a random walk ${({S_{i}^{(d)}})_{i=0,1,\dots }}$ in ${\mathbb{R}^{d}}$ defined by
\[ {S_{0}^{(d)}}=0,\hspace{1em}{S_{n}^{(d)}}={X_{1}^{(d)}}+{X_{2}^{(d)}}+\cdots +{X_{n}^{(d)}},\hspace{1em}n\in \mathbb{N},\]
where ${X_{i}^{(d)}}=({X_{i,1}^{(d)}},\dots ,{X_{i,d}^{(d)}})$, $i\ge 1$, are independent identically distributed random vectors in ${\mathbb{R}^{d}}$, and denote ${S_{i}^{(d)}}=({S_{i,1}^{(d)}},\dots ,{S_{i,d}^{(d)}})$. The values of this random walk form a finite metric space ${\mathcal{Z}_{n}^{(d)}}=\{{S_{0}^{(d)}},{S_{1}^{(d)}},\dots ,{S_{n}^{(d)}}\}$, which is embedded in ${\mathbb{R}^{d}}$ with the induced Euclidean metric.
In the regime where the dimension d and the distribution of ${X_{1}^{(d)}}$ are fixed, with $\mathbf{E}{X_{1}^{(d)}}=0$ and identity covariance matrix $\operatorname{Cov}({X_{1}^{(d)}})={I_{d}}$, Donsker’s invariance principle implies that, after rescaling by ${n^{-1/2}}$, the random set ${\mathcal{Z}_{n}^{(d)}}$ converges in distribution to the path of the d-dimensional standard Brownian motion on $[0,1]$.
Let ${\ell _{2}}$ be the space of square-summable real sequences with norm denoted by $\| \cdot {\| _{2}}$. For every $d\in \mathbb{N}$, we identify ${\mathbb{R}^{d}}$ with the d-dimensional coordinate subspace of ${\ell _{2}}$, which leads to the embedding of ${\mathbb{R}^{d}}$ into ${\ell _{2}}$. This identification allows us to view all random walks as subsets of a common ambient space.
When both n and d tend to infinity, under square integrability and several further assumptions listed in [5], the random metric space $({n^{-1/2}}{\mathcal{Z}_{n}^{(d)}},\| \cdot {\| _{2}})$ converges in probability to the Wiener spiral with respect to the Gromov–Hausdorff distance. The latter space is defined as the set of indicator functions ${\mathbf{1}_{[0,t]}}$, $t\in [0,1]$, viewed as a subset of the Hilbert space ${L^{2}}([0,1])$ and endowed with the metric induced by the ${L^{2}}$-norm. With respect to this metric, the space is isometric to the interval $[0,1]$ equipped with the distance $r(t,s)=\sqrt{|t-s|}$. Later, Jin [4] verified that the bridge variant of this random metric space also converges in probability, in the Gromov–Hausdorff sense, to a deterministic limit, namely $[0,1]$ equipped with the pseudo-metric $\sqrt{|t-s|(1-|t-s|)}$. Furthermore, the case of random walks with heavy-tailed increments is considered in [6].
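Behind this isometry is the elementary computation $\| {\mathbf{1}_{[0,t]}}-{\mathbf{1}_{[0,s]}}\| _{{L^{2}}}^{2}={\textstyle\int _{0}^{1}}|{\mathbf{1}_{[0,t]}}(x)-{\mathbf{1}_{[0,s]}}(x){|^{2}}\hspace{0.1667em}dx=|t-s|$. A minimal numerical sketch (plain Python; the function name and the discretization are our illustrative choices):

```python
import math

def l2_dist_indicators(t, s, grid=100_000):
    """Approximate L^2([0,1]) distance between 1_[0,t] and 1_[0,s] by a
    Riemann sum over a uniform grid of midpoints."""
    h = 1.0 / grid
    # |1_[0,t](x) - 1_[0,s](x)|^2 equals 1 exactly on the symmetric difference
    hits = sum(
        1 for k in range(grid)
        if (((k + 0.5) * h) <= t) != (((k + 0.5) * h) <= s)
    )
    return math.sqrt(hits * h)

# the distance depends on t and s only through |t - s|
print(l2_dist_indicators(0.9, 0.5), math.sqrt(0.4))
```

The printed values agree up to the discretization error, illustrating that the set of indicators is isometric to $([0,1],\sqrt{|t-s|})$.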
The Gromov–Hausdorff distance between metric spaces $\mathbb{X}=(X,{\rho _{X}})$ and $\mathbb{Y}=(Y,{\rho _{Y}})$ is defined as
\[ {d_{GH}}(\mathbb{X},\mathbb{Y})=\underset{i:X\hookrightarrow Z,j:Y\hookrightarrow Z}{\inf }{d_{H}}\big(i(X),j(Y)\big),\]
where the infimum is taken over all metric spaces $(Z,d)$ and all isometric embeddings $i:X\hookrightarrow Z$ and $j:Y\hookrightarrow Z$. The Hausdorff distance between sets F and H in $(Z,d)$ is defined as
\[ {d_{H}}(F,H)=\inf \{\varepsilon \gt 0:F\subset {H^{\varepsilon }}\hspace{3.33333pt}\text{and}\hspace{3.33333pt}H\subset {F^{\varepsilon }}\},\]
where ${F^{\varepsilon }}=\{x:d(x,F)\lt \varepsilon \}$ is the ε-neighbourhood of F, see [1, Chapter 7].

We replace the ${\ell _{2}}$-metric on the space of sequences with the ${\ell _{p}}$-metric for a general $p\in [1,\infty )$. The study of random metric spaces relies on identifying the spaces up to isometries. The following remarks explain why the proof for $p\ne 2$ is not a routine modification of the ${\ell _{2}}$ case. Contrary to the ${\ell _{2}}$ setting, which admits a large group of rotations as isometries, the isometry group of ${\ell _{p}}$ for $p\ne 2$ is far more constrained, see [7] and [2, Theorem 7.4.1]. Furthermore, while we have the identity $\| x+y{\| _{2}^{2}}=\| x{\| _{2}^{2}}+\| y{\| _{2}^{2}}+2\langle x,y\rangle $, no analogous simple expression exists for $\| x+y{\| _{p}^{p}}$ with $p\ne 2$. This complicates the analysis of the paths of the random walk.
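For finite subsets of a common ambient space — the situation of the rescaled random walk ranges — the Hausdorff distance above reduces to a finite max–min computation. A minimal sketch (plain Python; the ${\ell _{p}}$ ambient metric and the example sets are our illustrative choices):

```python
import math

def lp_dist(x, y, p):
    """ell_p distance between two points given as equal-length tuples."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def hausdorff(F, H, p=2.0):
    """Hausdorff distance between finite sets F, H in (R^d, ell_p):
    d_H(F, H) = max(max_f min_h d(f,h), max_h min_f d(f,h))."""
    d_fh = max(min(lp_dist(f, h, p) for h in H) for f in F)
    d_hf = max(min(lp_dist(f, h, p) for f in F) for h in H)
    return max(d_fh, d_hf)

F = [(0.0, 0.0), (1.0, 0.0)]
H = [(0.0, 1.0)]
print(hausdorff(F, H))  # the farthest point of F from H is (1, 0), at sqrt(2)
```

Note that this computes the Hausdorff distance inside a fixed ambient space; the Gromov–Hausdorff distance additionally optimizes over embeddings, which is what makes it a metric on isometry classes.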
It should be noted that Kabluchko and Marynych [5] established that for subsets of ${\ell _{2}}$, convergence in the Gromov–Hausdorff sense is equivalent to convergence in the Hausdorff distance up to isometries of ${\ell _{2}}$, that is, in the distance between two subsets defined by taking the infimum of the Hausdorff distance between their images under all possible isometries of ${\ell _{2}}$. However, this equivalence fails for compact subsets of ${\ell _{p}}$ when $p\ne 2$. For instance, consider the two-point metric spaces $F=\{(0,0,\dots ),(1,1,0,\dots )\}$ and $H=\{(0,0,\dots ),({a^{1/p}},{b^{1/p}},{(2-a-b)^{1/p}},0,\dots )\}$ for any $a,b\gt 0$ such that $a+b\lt 2$, $a+b\ne 1$ and $a,b\ne 1$. Equipped with the ${\ell _{p}}$-metric, these spaces are isometric for any $p\in [1,\infty )$, and thus the Gromov–Hausdorff distance between them vanishes, while it is impossible to map F to H using an isometry of ${\ell _{p}}$ if $p\ne 2$.
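The isometry claim for F and H reduces to one computation: each space consists of the origin and a single other point, and $\| (1,1,0,\dots )\| _{p}^{p}=2$ while $\| ({a^{1/p}},{b^{1/p}},{(2-a-b)^{1/p}},0,\dots )\| _{p}^{p}=a+b+(2-a-b)=2$. A quick numerical confirmation (plain Python; the values $p=3$, $a=0.4$, $b=0.5$ are an arbitrary admissible choice):

```python
def lp_norm(x, p):
    """ell_p norm of a finitely supported sequence given as a tuple."""
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

p, a, b = 3.0, 0.4, 0.5   # need a, b > 0, a + b < 2, a + b != 1, a, b != 1
f = (1.0, 1.0)                                            # nonzero point of F
h = (a ** (1 / p), b ** (1 / p), (2 - a - b) ** (1 / p))  # nonzero point of H

# both two-point spaces realize the same single distance 2^{1/p}
print(lp_norm(f, p), lp_norm(h, p))
```

Both norms equal ${2^{1/p}}$, so the two-point spaces are isometric as abstract metric spaces, even though no isometry of the ambient ${\ell _{p}}$ carries one onto the other when $p\ne 2$.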
Fix a $p\in [1,\infty )$ and impose a special structure on the increments of the random walk. Namely, we assume that

(1)
\[\begin{aligned}{}{X_{1}^{(d)}}& ={d^{-1/p}}({\xi _{1}},\dots ,{\xi _{d}}),\hspace{1em}{\xi _{1}},\dots ,{\xi _{d}}\hspace{3.33333pt}\text{share the same distribution},\\ {} \mathbf{E}{\xi _{1}}& =0,\hspace{1em}\mathbf{E}|{\xi _{1}}{|^{2p}}\lt \infty ,\hspace{1em}\operatorname{Cov}({\xi _{i}},{\xi _{j}})=0\hspace{1em}\text{for all}\hspace{3.33333pt}1\le i\ne j\le d.\end{aligned}\]
Denote $\mathbf{E}{\xi ^{2}}={\sigma ^{2}}$ and let ${M_{p}}$ be the p-th absolute moment of the standard normal distribution.

Theorem 1.
Let $p\in [1,\infty )$ and let $d=d(n)$ be an arbitrary sequence of positive integers such that $d(n)\to \infty $ as $n\to \infty $. Consider a random walk with increments given by (1). Then, as $n\to \infty $, the random metric space $({n^{-1/2}}{\mathcal{Z}_{n}^{(d)}},\| \cdot {\| _{p}})$ converges in probability to $\big([0,1],\sqrt{|t-s|}\sigma {M_{p}^{1/p}}\big)$ under the Gromov–Hausdorff distance.
Remark 1.
In the case $p=2$, we assume $\mathbf{E}{\xi ^{4}}\lt \infty $, while only a finite second moment of ξ is required in [5]. However, we do not see how the moment assumption could be weakened, since a different tool is applied in our proof.
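Theorem 1 can be probed numerically: simulate the walk with increments (1), rescale, and compare the ${\ell _{p}}$-distance between two time points with $\sqrt{t-s}\hspace{0.1667em}\sigma {M_{p}^{1/p}}$. A minimal Monte Carlo sketch (plain Python; we take ξ standard Gaussian, so $\sigma =1$, and the function name and all sizes are our illustrative choices):

```python
import math, random

def simulate_dist(n, d, p, t, s, rng):
    """n^{-1/2} ell_p-distance between S_{floor(nt)} and S_{floor(ns)} for the
    walk with increments d^{-1/p} (xi_1, ..., xi_d), xi standard normal."""
    steps = int(n * t) - int(n * s)
    coord_sums = [0.0] * d
    for _ in range(steps):            # only the increments between s and t matter
        for i in range(d):
            coord_sums[i] += rng.gauss(0.0, 1.0)
    scale = d ** (-1.0 / p)
    norm_p = sum(abs(scale * c) ** p for c in coord_sums) ** (1.0 / p)
    return norm_p / math.sqrt(n)

p, n, d = 1.0, 400, 400
M_p = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
limit = math.sqrt(0.75 - 0.25) * M_p ** (1 / p)   # sigma = 1
print(simulate_dist(n, d, p, 0.75, 0.25, random.Random(1)), limit)
```

For $p=1$ and $t-s=1/2$ the limit is $\sqrt{1/2}\hspace{0.1667em}{M_{1}}\approx 0.564$, and the simulated value should already be close for these moderate n and d.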
2 Moment convergence theorem
We need the following results.
Lemma 1 (Moment convergence theorem).
Let $\eta ,{\eta _{1}},{\eta _{2}},\dots $ be independent identically distributed random variables with $\mathbf{E}\eta =\mu $ and $\operatorname{Var}\eta ={\sigma ^{2}}$, and ${S_{n}}={\eta _{1}}+\cdots +{\eta _{n}}$, $n\ge 1$. Then
\[ \mathbf{E}\bigg|\frac{{S_{n}}-n\mu }{\sigma \sqrt{n}}{\bigg|^{p}}\to {M_{p}}={2^{p/2}}\frac{1}{\sqrt{\pi }}\Gamma \bigg(\frac{p+1}{2}\bigg),\]
for $p\in (0,2)$, and for $p\ge 2$ provided that $\mathbf{E}|\eta {|^{p}}\lt \infty $.
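The limit constant admits quick sanity checks: ${M_{p}}={2^{p/2}}\Gamma ((p+1)/2)/\sqrt{\pi }$ is the p-th absolute moment of the standard normal, with classical values ${M_{1}}=\sqrt{2/\pi }$, ${M_{2}}=1$ and ${M_{4}}=3$. A minimal verification of the closed form (plain Python; the function name is ours):

```python
import math

def M(p):
    """p-th absolute moment of the standard normal distribution."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

# classical values: E|N(0,1)| = sqrt(2/pi), E N(0,1)^2 = 1, E N(0,1)^4 = 3
print(M(1.0), M(2.0), M(4.0))
```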
Lemma 2 (Marcinkiewicz–Zygmund inequality. See Corollary 3.8.2 in [3]).
Let $p\ge 1$. Suppose that $X,{X_{1}},\dots ,{X_{n}}$ are independent, identically distributed random variables with mean 0 and $\mathbf{E}|X{|^{p}}\lt \infty $. Set ${S_{n}}={\textstyle\sum _{k=1}^{n}}{X_{k}}$, $n\ge 1$. Then there exists a constant ${B_{p}}$ depending only on p such that
\[ \mathbf{E}|{S_{n}}{|^{p}}\le {B_{p}}\max \big({n^{p/2}},n\big)\mathbf{E}|X{|^{p}}.\]
Theorem 2 (Bivariate moment convergence theorem).
Let $({X_{1}},{Y_{1}})$, …, $({X_{n}},{Y_{n}})$ be independent copies of a centered $2p$-integrable random vector $(X,Y)$ with the covariance matrix Σ. Denote ${S_{n}}={X_{1}}+\cdots +{X_{n}}$ and ${Z_{n}}={Y_{1}}+\cdots +{Y_{n}}$. Then
(2)
\[ \mathbf{E}{\big|{n^{-1/2}}{S_{n}}\big|^{p}}{\big|{n^{-1/2}}{Z_{n}}\big|^{p}}\to \mathbf{E}|{\eta _{1}}{\eta _{2}}{|^{p}}\hspace{1em}\textit{as}\hspace{3.57777pt}n\to \infty ,\]
where $({\eta _{1}},{\eta _{2}})\sim \mathcal{N}(0,\Sigma )$.

Proof.
By the central limit theorem,
\[ \big({n^{-1/2}}{S_{n}},{n^{-1/2}}{Z_{n}}\big)\stackrel{d}{\to }({\eta _{1}},{\eta _{2}})\hspace{1em}\text{as}\hspace{3.33333pt}n\to \infty .\]
Now we apply Skorokhod’s representation theorem, which allows us to replace the distributional convergence above with a.s. convergence on a new probability space carrying copies $({\overline{X}_{i}^{(d)}},{\overline{Y}_{i}^{(d)}})$, $i\ge 1$, of the increments and a random vector $({\overline{\eta }_{1}},{\overline{\eta }_{2}})\stackrel{d}{=}({\eta _{1}},{\eta _{2}})$.
Denote ${\overline{S}_{n}^{(d)}}={\overline{X}_{1}^{(d)}}+\cdots +{\overline{X}_{n}^{(d)}}$ and ${\overline{Z}_{n}^{(d)}}={\overline{Y}_{1}^{(d)}}+\cdots +{\overline{Y}_{n}^{(d)}}$. Then
\[ \big({n^{-1/2}}{\overline{S}_{n}^{(d)}},{n^{-1/2}}{\overline{Z}_{n}^{(d)}}\big)\stackrel{\text{a.s.}}{\to }({\overline{\eta }_{1}},{\overline{\eta }_{2}})\hspace{1em}\text{as}\hspace{3.33333pt}n\to \infty .\]
By Lemma 1, as $n\to \infty $,
\[\begin{aligned}{}\mathbf{E}\big(|{n^{-1/2}}{\overline{S}_{n}^{(d)}}{|^{2p}}+|{n^{-1/2}}{\overline{Z}_{n}^{(d)}}{|^{2p}}\big)& =\mathbf{E}\big(|{n^{-1/2}}{S_{n}^{(d)}}{|^{2p}}+|{n^{-1/2}}{Z_{n}^{(d)}}{|^{2p}}\big)\\ {} & \hspace{1em}\to \mathbf{E}(|{\eta _{1}}{|^{2p}}+|{\eta _{2}}{|^{2p}})=\mathbf{E}(|{\overline{\eta }_{1}}{|^{2p}}+|{\overline{\eta }_{2}}{|^{2p}}).\end{aligned}\]
Furthermore, by Pratt’s extension of the Lebesgue dominated convergence theorem, see [3, Theorem 5.5], and the inequality
\[ |xy{|^{p}}=|x{|^{p}}|y{|^{p}}\le \frac{|x{|^{2p}}+|y{|^{2p}}}{2},\hspace{1em}x,y\in \mathbb{R},\]
we conclude that, as $n\to \infty $,
\[ \mathbf{E}{\big|{n^{-1/2}}{S_{n}^{(d)}}\big|^{p}}{\big|{n^{-1/2}}{Z_{n}^{(d)}}\big|^{p}}=\mathbf{E}{\big|{n^{-1/2}}{\overline{S}_{n}^{(d)}}\big|^{p}}{\big|{n^{-1/2}}{\overline{Z}_{n}^{(d)}}\big|^{p}}\to \mathbf{E}|{\overline{\eta }_{1}}{\overline{\eta }_{2}}{|^{p}}=\mathbf{E}|{\eta _{1}}{\eta _{2}}{|^{p}}.\]
□

3 Convergence of the ${\ell _{p}}$-metric of random walks
The proof of Theorem 1 relies on the following theorems. While they follow the general scheme of [5], substantial adjustments are necessary to handle the ${\ell _{p}}$ case with $p\ne 2$.
Theorem 3.
Let $p\in [1,\infty )$. Consider a random walk with increments given by (1). Then
\[ {n^{-p/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}\stackrel{p}{\to }{t^{p/2}}{\sigma ^{p}}{M_{p}}\hspace{1em}\textit{as}\hspace{3.57777pt}n\to \infty \]
for all $t\in [0,1]$.
Proof.
Without loss of generality, let $t=1$. By the definition of convergence in probability, we need to verify that

(3)
\[ \mathbf{P}\bigg\{\bigg|{n^{-p/2}}{\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}-{\sigma ^{p}}{M_{p}}\bigg|\gt \varepsilon \bigg\}\to 0\hspace{2em}\text{as}\hspace{3.33333pt}n\to \infty .\]
Markov’s inequality implies that the probability in (3) is bounded above by
\[\begin{aligned}{}& {\varepsilon ^{-2}}\mathbf{E}{\bigg({n^{-p/2}}{\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}-{\sigma ^{p}}{M_{p}}\bigg)^{2}}\\ {} & \hspace{14.22636pt}={\varepsilon ^{-2}}\Bigg(\mathbf{E}{\bigg({n^{-p/2}}{\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}\bigg)^{2}}+{\sigma ^{2p}}{M_{p}^{2}}-2{\sigma ^{p}}{M_{p}}\mathbf{E}\bigg({n^{-p/2}}{\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}\bigg)\Bigg)\\ {} & \hspace{14.22636pt}={\varepsilon ^{-2}}\Bigg({n^{-p}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{2p}}+{n^{-p}}\sum \limits_{1\le i\ne j\le d}\mathbf{E}|{S_{n,i}^{(d)}}{|^{p}}|{S_{n,j}^{(d)}}{|^{p}}+{\sigma ^{2p}}{M_{p}^{2}}\\ {} & \hspace{213.39566pt}-2{n^{-p/2}}{\sigma ^{p}}{M_{p}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}\Bigg)\\ {} & \hspace{14.22636pt}={\varepsilon ^{-2}}({A_{1}}+{A_{2}}+{A_{3}}),\end{aligned}\]
where
\[ {A_{1}}={n^{-p}}d\hspace{0.1667em}\mathbf{E}|{S_{n,1}^{(d)}}{|^{2p}},\hspace{2em}{A_{2}}={n^{-p}}\sum \limits_{1\le i\ne j\le d}\operatorname{Cov}\big(|{S_{n,i}^{(d)}}{|^{p}},|{S_{n,j}^{(d)}}{|^{p}}\big)\]
and
\[ {A_{3}}={n^{-p}}\sum \limits_{1\le i\ne j\le d}\mathbf{E}|{S_{n,i}^{(d)}}{|^{p}}\hspace{0.1667em}\mathbf{E}|{S_{n,j}^{(d)}}{|^{p}}+{\sigma ^{2p}}{M_{p}^{2}}-2{n^{-p/2}}{\sigma ^{p}}{M_{p}}d\hspace{0.1667em}\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}.\]
Let $({\xi _{1}^{(k)}},\dots ,{\xi _{d}^{(k)}})$, $1\le k\le n$, be independent copies of $({\xi _{1}},\dots ,{\xi _{d}})$. Denote
\[ {b_{nij,p}}=\operatorname{Cov}\Big({n^{-p/2}}{\big|{\xi _{i}^{(1)}}+{\xi _{i}^{(2)}}+\cdots +{\xi _{i}^{(n)}}\big|^{p}},{n^{-p/2}}{\big|{\xi _{j}^{(1)}}+{\xi _{j}^{(2)}}+\cdots +{\xi _{j}^{(n)}}\big|^{p}}\Big)\]
for all $1\le i\ne j\le d$. Thus, by Lemma 2,
\[ {A_{1}}={d^{-1}}{n^{-p}}\mathbf{E}|{\xi _{1}^{(1)}}+{\xi _{1}^{(2)}}+\cdots +{\xi _{1}^{(n)}}{|^{2p}}\le {d^{-1}}{B_{2p}}\mathbf{E}|\xi {|^{2p}},\]
where ${B_{2p}}$ is a constant depending only on p, and hence the term ${A_{1}}$ converges to 0 as $d\to \infty $. Furthermore, ${A_{2}}={d^{-2}}{\textstyle\sum _{1\le i\ne j\le d}}{b_{nij,p}}$, and by Theorem 2 and Lemma 1, applied to the vector $({\xi _{i}},{\xi _{j}})$ with the diagonal covariance matrix $\Sigma =\operatorname{diag}({\sigma ^{2}},{\sigma ^{2}})$, each ${b_{nij,p}}$ converges to $\mathbf{E}|{\eta _{1}}{\eta _{2}}{|^{p}}-{\sigma ^{2p}}{M_{p}^{2}}=0$ as $n\to \infty $, since ${\eta _{1}}$ and ${\eta _{2}}$ are independent. Hence the term ${A_{2}}$ converges to 0 as $n\to \infty $.
Furthermore, the term ${A_{3}}$ is bounded above by
\[\begin{aligned}{}{n^{-p}}& {d^{2}}{\big(\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}\big)^{2}}-{n^{-p/2}}{\sigma ^{p}}{M_{p}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}+{\sigma ^{2p}}{M_{p}^{2}}-{n^{-p/2}}{\sigma ^{p}}{M_{p}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}\\ {} & ={n^{-p/2}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}\Big({n^{-p/2}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}-{\sigma ^{p}}{M_{p}}\Big)\\ {} & \hspace{156.49014pt}+{\sigma ^{p}}{M_{p}}\Big({\sigma ^{p}}{M_{p}}-{n^{-p/2}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}\Big)\\ {} & ={\Big({n^{-p/2}}d\mathbf{E}|{S_{n,1}^{(d)}}{|^{p}}-{\sigma ^{p}}{M_{p}}\Big)^{2}},\end{aligned}\]
which converges to 0 as $n\to \infty $ by the moment convergence theorem (Lemma 1). □

Theorem 4 (Uniform convergence of the ${\ell _{p}}$-norm of random walk).
Let $p\in [1,\infty )$. Consider a random walk with increments given by (1). Then
\[ \underset{t\in [0,1]}{\sup }\Big|{n^{-p/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}-{t^{p/2}}{\sigma ^{p}}{M_{p}}\Big|\stackrel{p}{\to }0\hspace{1em}\textit{as}\hspace{3.57777pt}n\to \infty .\]
Proof.
For all $p\ge 1$,
\[ \| {S_{n}^{(d)}}{\| _{p}^{p}}={\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}={T_{n}^{(d)}}+{Q_{n}^{(d)}},\]
where
(4)
\[\begin{aligned}{}{T_{n}^{(d)}}& ={\sum \limits_{i=1}^{d}}|{S_{n,i}^{(d)}}{|^{p}}-p{\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}{X_{j,i}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{p-2}},\\ {} {Q_{n}^{(d)}}& =p{\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}{X_{j,i}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{p-2}}.\end{aligned}\]
For all $n\in \mathbb{N}$,
\[\begin{aligned}{}{T_{n}^{(d)}}-{T_{n-1}^{(d)}}& ={\sum \limits_{i=1}^{d}}\Big(|{S_{n,i}^{(d)}}{|^{p}}-p{\sum \limits_{j=1}^{n}}{X_{j,i}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{p-2}}-|{S_{n-1,i}^{(d)}}{|^{p}}\\ {} & \hspace{142.26378pt}+p{\sum \limits_{j=1}^{n-1}}{X_{j,i}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{p-2}}\Big)\\ {} & ={\sum \limits_{i=1}^{d}}\Big(|{S_{n,i}^{(d)}}{|^{p}}-|{S_{n-1,i}^{(d)}}{|^{p}}-p{X_{n,i}^{(d)}}{S_{n-1,i}^{(d)}}|{S_{n-1,i}^{(d)}}{|^{p-2}}\Big)\\ {} & ={\sum \limits_{i=1}^{d}}\big(|{S_{n,i}^{(d)}}{|^{p}}-|{S_{n-1,i}^{(d)}}{|^{p}}-p({S_{n,i}^{(d)}}-{S_{n-1,i}^{(d)}}){S_{n-1,i}^{(d)}}|{S_{n-1,i}^{(d)}}{|^{p-2}}\big).\end{aligned}\]
If $p\gt 1$, then the generic term of this sum is
\[\begin{aligned}{}|{S_{n,i}^{(d)}}{|^{p}}-|{S_{n-1,i}^{(d)}}{|^{p}}& -p({S_{n,i}^{(d)}}-{S_{n-1,i}^{(d)}}){S_{n-1,i}^{(d)}}|{S_{n-1,i}^{(d)}}{|^{p-2}}\\ {} & =|{S_{n,i}^{(d)}}{|^{p}}-|{S_{n-1,i}^{(d)}}{|^{p}}-p{S_{n,i}^{(d)}}{S_{n-1,i}^{(d)}}|{S_{n-1,i}^{(d)}}{|^{p-2}}+p|{S_{n-1,i}^{(d)}}{|^{p}}\\ {} & =|{S_{n,i}^{(d)}}{|^{p}}+(p-1)|{S_{n-1,i}^{(d)}}{|^{p}}-p{S_{n,i}^{(d)}}{S_{n-1,i}^{(d)}}|{S_{n-1,i}^{(d)}}{|^{p-2}}\\ {} & \ge |{S_{n,i}^{(d)}}{|^{p}}+(p-1)|{S_{n-1,i}^{(d)}}{|^{p}}-p\bigg(\frac{|{S_{n,i}^{(d)}}{|^{p}}}{p}+\frac{|{S_{n-1,i}^{(d)}}{|^{p}}}{p/(p-1)}\bigg)=0,\end{aligned}\]
where the last inequality follows from Young’s inequality $xy\le \frac{{x^{p}}}{p}+\frac{{y^{q}}}{q}$ with $\frac{1}{p}+\frac{1}{q}=1$, valid for all $p,q\gt 1$ and $x,y\gt 0$, applied to $x=|{S_{n,i}^{(d)}}|$ and $y=|{S_{n-1,i}^{(d)}}{|^{p-1}}$.

If $p=1$, then
\[\begin{aligned}{}{T_{n}^{(d)}}-{T_{n-1}^{(d)}}& ={\sum \limits_{i=1}^{d}}\Big(|{S_{n,i}^{(d)}}|-|{S_{n-1,i}^{(d)}}|-{X_{n,i}^{(d)}}{S_{n-1,i}^{(d)}}{\big|{S_{n-1,i}^{(d)}}\big|^{-1}}\Big)\\ {} & ={\sum \limits_{i=1}^{d}}\left\{\begin{array}{l@{\hskip10.0pt}l}|{S_{n,i}^{(d)}}|-{S_{n,i}^{(d)}},\hspace{1em}& \text{if}\hspace{2.5pt}{S_{n-1,i}^{(d)}}\gt 0,\\ {} |{S_{n,i}^{(d)}}|+{S_{n,i}^{(d)}},\hspace{1em}& \text{if}\hspace{2.5pt}{S_{n-1,i}^{(d)}}\lt 0,\end{array}\right.\hspace{3.33333pt}\ge 0.\end{aligned}\]
Thus, the sequence ${T_{n}^{(d)}}$ is monotone increasing.

Next, ${Q_{n}^{(d)}}$ is a martingale, since
\[\begin{aligned}{}\mathbf{E}\Big({Q_{n}^{(d)}}-{Q_{n-1}^{(d)}}\hspace{3.33333pt}\Big|\hspace{3.33333pt}{\mathcal{F}_{n-1}^{(d)}}\Big)& =\mathbf{E}\Bigg({\sum \limits_{i=1}^{d}}p{X_{n,i}^{(d)}}{S_{n-1,i}^{(d)}}{\big|{S_{n-1,i}^{(d)}}\big|^{p-2}}\hspace{3.33333pt}\bigg|\hspace{3.33333pt}{\mathcal{F}_{n-1}^{(d)}}\Bigg)\\ {} & =p{\sum \limits_{i=1}^{d}}{S_{n-1,i}^{(d)}}{\big|{S_{n-1,i}^{(d)}}\big|^{p-2}}{d^{-1/p}}\mathbf{E}{\xi _{1}}=0,\end{aligned}\]
where ${\mathcal{F}_{n-1}^{(d)}}$ is the σ-algebra generated by ${X_{1}^{(d)}},\dots ,{X_{n-1}^{(d)}}$. Then, by Doob’s inequality, for all $x\gt 0$,
\[ \mathbf{P}\Big\{\underset{1\le k\le n}{\max }|{Q_{k}^{(d)}}|\ge x\Big\}\le {x^{-2}}\mathbf{E}{({Q_{n}^{(d)}})^{2}}.\]
We shall now estimate the second moment of ${Q_{n}^{(d)}}$ for $p\gt 1$. Firstly, by Lemma 1, for all $\delta \gt 0$, there exists an integer $N(\delta )$ such that for all $j\ge N(\delta )$,
\[ {M_{2p-2}}-\delta \le \mathbf{E}\Bigg|\frac{{\textstyle\textstyle\sum _{l=1}^{j-1}}{X_{l,1}^{(d)}}}{\sqrt{j-1}\sqrt{\mathbf{E}{({X_{1,1}^{(d)}})^{2}}}}{\Bigg|^{2p-2}}\le {M_{2p-2}}+\delta .\]
Using this inequality we infer
\[\begin{aligned}{}& \mathbf{E}{({Q_{n}^{(d)}})^{2}}={p^{2}}{\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}{\sum \limits_{{i^{\prime }}=1}^{d}}{\sum \limits_{{j^{\prime }}=1}^{n}}\mathbf{E}\Big({X_{j,i}^{(d)}}{X_{{j^{\prime }},{i^{\prime }}}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{p-2}}{S_{{j^{\prime }}-1,{i^{\prime }}}^{(d)}}|{S_{{j^{\prime }}-1,{i^{\prime }}}^{(d)}}{|^{p-2}}\Big)\\ {} & ={p^{2}}{\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}\mathbf{E}{({X_{j,i}^{(d)}})^{2}}\mathbf{E}|{S_{j-1,i}^{(d)}}{|^{2p-2}}\\ {} & ={p^{2}}d\mathbf{E}{({X_{1,1}^{(d)}})^{2}}{\sum \limits_{j=1}^{n}}{(\sqrt{j-1})^{2p-2}}{\Big(\mathbf{E}{({X_{1,1}^{(d)}})^{2}}\Big)^{p-1}}\mathbf{E}\Bigg|\frac{{\textstyle\textstyle\sum _{l=1}^{j-1}}{X_{l,1}^{(d)}}}{\sqrt{j-1}\sqrt{\mathbf{E}{({X_{1,1}^{(d)}})^{2}}}}{\Bigg|^{2p-2}}\\ {} & \le {p^{2}}d\mathbf{E}{({X_{1,1}^{(d)}})^{2}}{\sum \limits_{j=1}^{N(\delta )}}{(\sqrt{j-1})^{2p-2}}{\Big(\mathbf{E}{({X_{1,1}^{(d)}})^{2}}\Big)^{p-1}}\mathbf{E}\Bigg|\frac{{\textstyle\textstyle\sum _{l=1}^{j-1}}{X_{l,1}^{(d)}}}{\sqrt{j-1}\sqrt{\mathbf{E}{({X_{1,1}^{(d)}})^{2}}}}{\Bigg|^{2p-2}}\\ {} & +{p^{2}}d\mathbf{E}{({X_{1,1}^{(d)}})^{2}}{\sum \limits_{j=1}^{n}}{(\sqrt{j-1})^{2p-2}}{\Big(\mathbf{E}{({X_{1,1}^{(d)}})^{2}}\Big)^{p-1}}({M_{2p-2}}+\delta ).\end{aligned}\]
Hence,
\[\begin{aligned}{}\mathbf{E}{({Q_{n}^{(d)}})^{2}}& \le {p^{2}}d\mathbf{E}{({X_{1,1}^{(d)}})^{2}}{\sum \limits_{j=1}^{N(\delta )}}\mathbf{E}{\Big|{\sum \limits_{l=1}^{j-1}}{X_{l,1}^{(d)}}\Big|^{2p-2}}+{p^{2}}({M_{2p-2}}+\delta ){n^{p}}d{\big(\mathbf{E}{({X_{1,1}^{(d)}})^{2}}\big)^{p}}\\ {} & \le {p^{2}}d\mathbf{E}{({X_{1,1}^{(d)}})^{2}}N(\delta ){B_{2p-2}}\max (N{(\delta )^{p-1}},N(\delta ))\mathbf{E}|{X_{1,1}^{(d)}}{|^{2p-2}}\\ {} & \hspace{142.26378pt}+{p^{2}}({M_{2p-2}}+\delta ){n^{p}}d{\big(\mathbf{E}{({X_{1,1}^{(d)}})^{2}}\big)^{p}},\end{aligned}\]
where Lemma 2 is used to bound the first term and ${B_{2p-2}}$ is a constant which depends only on p. Since $\mathbf{E}{({X_{1,1}^{(d)}})^{2}}={d^{-2/p}}\mathbf{E}{\xi ^{2}}$ and $\mathbf{E}|{X_{1,1}^{(d)}}{|^{2p-2}}={d^{-(2p-2)/p}}\mathbf{E}|\xi {|^{2p-2}}$, the right-hand side is bounded above by
\[ {p^{2}}N(\delta ){B_{2p-2}}\max (N{(\delta )^{p-1}},N(\delta )){d^{-1}}\mathbf{E}{\xi ^{2}}\mathbf{E}|\xi {|^{2p-2}}+{p^{2}}({M_{2p-2}}+\delta ){n^{p}}{d^{-1}}{\big(\mathbf{E}{\xi ^{2}}\big)^{p}}.\]
For $p\ge 2$, a direct application of Lemma 2 yields the alternative bound
\[\begin{aligned}{}\mathbf{E}{({Q_{n}^{(d)}})^{2}}& ={p^{2}}{\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}\mathbf{E}{({X_{j,i}^{(d)}})^{2}}\mathbf{E}|{S_{j-1,i}^{(d)}}{|^{2p-2}}\\ {} & \le {p^{2}}dn\mathbf{E}{({X_{1,1}^{(d)}})^{2}}{B_{2p-2}}{n^{p-1}}\mathbf{E}|{X_{1,1}^{(d)}}{|^{2p-2}}={p^{2}}{n^{p}}{B_{2p-2}}{d^{-1}}\mathbf{E}{\xi ^{2}}\mathbf{E}|\xi {|^{2p-2}}.\end{aligned}\]
Secondly, for $p=1$,
\[\begin{aligned}{}& \mathbf{E}{({Q_{n}^{(d)}})^{2}}=\mathbf{E}{\Big({\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}{X_{j,i}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{-1}}\Big)^{2}}\\ {} & ={\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}{\sum \limits_{{i^{\prime }}=1}^{d}}{\sum \limits_{{j^{\prime }}=1}^{n}}\mathbf{E}\Big({X_{j,i}^{(d)}}{X_{{j^{\prime }},{i^{\prime }}}^{(d)}}{S_{j-1,i}^{(d)}}|{S_{j-1,i}^{(d)}}{|^{-1}}{S_{{j^{\prime }}-1,{i^{\prime }}}^{(d)}}|{S_{{j^{\prime }}-1,{i^{\prime }}}^{(d)}}{|^{-1}}\Big)\\ {} & ={\sum \limits_{i=1}^{d}}{\sum \limits_{j=1}^{n}}\mathbf{E}{({X_{j,i}^{(d)}})^{2}}=nd\mathbf{E}{({X_{1,1}^{(d)}})^{2}}=n{d^{-1}}\mathbf{E}{\xi ^{2}}.\end{aligned}\]
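The decomposition (4) and the monotonicity of ${T_{n}^{(d)}}$ established above are easy to observe on a simulated path. A minimal sketch (plain Python with Gaussian ξ; the function name and all parameters are our illustrative choices):

```python
import random

def decompose(p, n, d, rng):
    """Track T_k = ||S_k||_p^p - Q_k along one path of the walk, where Q_k is
    the martingale part from (4); increments are d^{-1/p} (xi_1, ..., xi_d)."""
    scale = d ** (-1.0 / p)
    S = [0.0] * d                  # coordinates of S_k^{(d)}
    Q, T_path = 0.0, []
    for _ in range(n):
        X = [scale * rng.gauss(0.0, 1.0) for _ in range(d)]
        # martingale increment uses S_{k-1}, i.e. the state before the step
        Q += p * sum(x * s * abs(s) ** (p - 2)
                     for x, s in zip(X, S) if s != 0.0)
        S = [s + x for s, x in zip(S, X)]
        T_path.append(sum(abs(s) ** p for s in S) - Q)
    return T_path

T = decompose(3.0, 200, 20, random.Random(7))
print(all(b >= a - 1e-8 for a, b in zip(T, T[1:])))   # T_k is nondecreasing
```

For $p=2$ the increment ${T_{n}^{(d)}}-{T_{n-1}^{(d)}}$ equals ${\textstyle\sum _{i}}{({X_{n,i}^{(d)}})^{2}}$, so the monotonicity is exact; for other p it holds up to floating-point error.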
Then we conclude that, for all $p\ge 1$, there exists a constant C, depending only on p, δ and the distribution of ξ, such that

(5)
\[ \mathbf{E}{({Q_{n}^{(d)}})^{2}}\le C{n^{p}}{d^{-1}},\]
and, by Doob’s inequality, since $d=d(n)\to \infty $,
\[ {n^{-p/2}}\underset{1\le k\le n}{\max }|{Q_{k}^{(d)}}|\stackrel{p}{\to }0\hspace{1em}\text{as}\hspace{3.33333pt}n\to \infty .\]
By Theorem 3,
\[ {n^{-p/2}}{T_{\lfloor nt\rfloor }^{(d)}}={n^{-p/2}}\big(\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}-{Q_{\lfloor nt\rfloor }^{(d)}}\big)\stackrel{p}{\to }{t^{p/2}}{\sigma ^{p}}{M_{p}}\hspace{2em}\text{as}\hspace{3.33333pt}n\to \infty ,\]
for all $t\in [0,1]$. By monotonicity of the function $t\mapsto {T_{\lfloor nt\rfloor }^{(d)}}$, Dini’s theorem yields that
\[ \underset{t\in [0,1]}{\sup }\Big|{n^{-p/2}}{T_{\lfloor nt\rfloor }^{(d)}}-{t^{p/2}}{\sigma ^{p}}{M_{p}}\Big|\stackrel{p}{\to }0\hspace{2em}\text{as}\hspace{3.33333pt}n\to \infty .\]
Therefore,
\[\begin{aligned}{}& \underset{t\in [0,1]}{\sup }\Big|{n^{-p/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}-{t^{p/2}}{\sigma ^{p}}{M_{p}}\Big|\le \underset{t\in [0,1]}{\sup }\Big|{n^{-p/2}}{T_{\lfloor nt\rfloor }^{(d)}}-{t^{p/2}}{\sigma ^{p}}{M_{p}}\Big|\\ {} & \hspace{199.16928pt}+{n^{-p/2}}\underset{t\in [0,1]}{\sup }|{Q_{\lfloor nt\rfloor }^{(d)}}|\stackrel{p}{\to }0,\end{aligned}\]
which completes the proof. □

Theorem 5 (Uniform convergence of the ${\ell _{p}}$-metric of differences).
Let $p\in [1,\infty )$. Consider a random walk with increments given by (1). Then
\[ \underset{0\le s\le t\le 1}{\sup }\Big|{n^{-1/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}-{S_{\lfloor ns\rfloor }^{(d)}}{\| _{p}}-\sqrt{t-s}\sigma {M_{p}^{1/p}}\Big|\stackrel{p}{\to }0\hspace{1em}\textit{as}\hspace{3.57777pt}n\to \infty .\]
Proof.
Take some $m\in \mathbb{N}$. By Theorem 3, for every $i=0,\dots ,m$,
\[ {n^{-1/2}}\| {S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}\stackrel{p}{\to }\sqrt{i/m}\sigma {M_{p}^{1/p}}.\]
Moreover, for every integer $0\le i\le j\le m$, by stationarity of the increments of random walks, we obtain that
\[ {n^{-1/2}}\| {S_{\lfloor n(j/m)\rfloor }^{(d)}}-{S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}\stackrel{p}{\to }\sqrt{(j-i)/m}\sigma {M_{p}^{1/p}}.\]
By the union bound, it follows that, for every fixed $m\in \mathbb{N}$,
\[ \underset{0\le i\le j\le m}{\max }\Big|{n^{-1/2}}\| {S_{\lfloor n(j/m)\rfloor }^{(d)}}-{S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}-\sqrt{(j-i)/m}\sigma {M_{p}^{1/p}}\Big|\stackrel{p}{\to }0.\]
If $0\le s\le t\le 1$ are such that $s\in [i/m,(i+1)/m)$ and $t\in [j/m,(j+1)/m)$, then by the triangle inequality,
\[\begin{aligned}{}& \Big|{n^{-1/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}-{S_{\lfloor ns\rfloor }^{(d)}}{\| _{p}}-{n^{-1/2}}\| {S_{\lfloor n(j/m)\rfloor }^{(d)}}-{S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}\Big|\\ {} & \hspace{1em}\le {n^{-1/2}}\underset{z\in [\frac{i}{m},\frac{i+1}{m}]}{\sup }\| {S_{\lfloor nz\rfloor }^{(d)}}-{S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}+{n^{-1/2}}\underset{z\in [\frac{j}{m},\frac{j+1}{m}]}{\sup }\| {S_{\lfloor nz\rfloor }^{(d)}}-{S_{\lfloor n(j/m)\rfloor }^{(d)}}{\| _{p}}.\end{aligned}\]
Consider the random variable
\[ {\varepsilon _{m,d}}={n^{-1/2}}\underset{i\in \{0,\dots ,m-1\}}{\max }\underset{z\in [\frac{i}{m},\frac{i+1}{m}]}{\sup }\| {S_{\lfloor nz\rfloor }^{(d)}}-{S_{\lfloor n(i/m)\rfloor }^{(d)}}{\| _{p}}.\]
To complete the proof, it suffices to show that, for every $\varepsilon \gt 0$,
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }\mathbf{P}\{{\varepsilon _{m,d}}\ge \varepsilon \}=0.\]
By the union bound, it follows that, for every fixed $m\in \mathbb{N}$,
\[ \mathbf{P}\{{\varepsilon _{m,d}}\ge \varepsilon \}\le m\mathbf{P}\bigg\{\underset{t\in [0,\frac{1}{m}]}{\sup }\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}\ge {n^{p/2}}{\varepsilon ^{p}}\bigg\}.\]
Since $\| {S_{\lfloor nt\rfloor }^{(d)}}{\| _{p}^{p}}={T_{\lfloor nt\rfloor }^{(d)}}+{Q_{\lfloor nt\rfloor }^{(d)}}$ with ${T_{\lfloor nt\rfloor }^{(d)}}$ and ${Q_{\lfloor nt\rfloor }^{(d)}}$ defined by (4), it suffices to show that
(6)
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }m\mathbf{P}\{{T_{\lfloor n/m\rfloor }^{(d)}}\ge {n^{p/2}}{\varepsilon ^{p}}/2\}=0,\]
(7)
\[ \underset{m\to \infty }{\lim }\underset{n\to \infty }{\limsup }m\mathbf{P}\{\underset{t\in [0,\frac{1}{m}]}{\sup }{Q_{\lfloor nt\rfloor }^{(d)}}\ge {n^{p/2}}{\varepsilon ^{p}}/2\}=0.\]
As in the proof of Theorem 4,
\[ {n^{-p/2}}{T_{\lfloor n/m\rfloor }^{(d)}}\stackrel{p}{\to }{m^{-p/2}}{\sigma ^{p}}{M_{p}}\hspace{1em}\text{as}\hspace{3.33333pt}n\to \infty .\]
Hence, (6) holds for every $m\gt {2^{2/p}}{\sigma ^{2}}{({M_{p}})^{2/p}}/{\varepsilon ^{2}}$. By Doob’s inequality,
\[ m\mathbf{P}\{\underset{t\in [0,\frac{1}{m}]}{\sup }{Q_{\lfloor nt\rfloor }^{(d)}}\ge {n^{p/2}}{\varepsilon ^{p}}/2\}\le m{n^{-p}}{({\varepsilon ^{p}}/2)^{-2}}\mathbf{E}{({Q_{\lfloor n/m\rfloor }^{(d)}})^{2}}\to 0,\]
where the last step is implied by (5); hence (7) holds. □

Proof of Theorem 1.
By Corollary 7.3.28 of [1], the Gromov–Hausdorff distance between $({n^{-1/2}}{\mathcal{Z}_{n}^{(d)}},\| \cdot {\| _{p}})$ and $\big([0,1],\sqrt{|t-s|}\sigma {M_{p}^{1/p}}\big)$ is bounded by
\[ 2\underset{0\le s\le t\le 1}{\sup }\Big|{n^{-1/2}}\| {S_{\lfloor nt\rfloor }^{(d)}}-{S_{\lfloor ns\rfloor }^{(d)}}{\| _{p}}-\sqrt{t-s}\sigma {M_{p}^{1/p}}\Big|.\]
The proof is completed by referring to Theorem 5. □