1 Introduction
A standard random walk formed by partial sums of independent identically distributed random vectors is one of the simplest, most classical and best-studied discrete-time random processes. Regarded as a Markov chain this process is time-homogeneous, meaning that its transition probabilities do not depend on time, and its increments are space-homogeneous, meaning that their distributions do not depend on the current state of the process. There are many generalizations of this basic model leading to interesting and far-reaching theories. A whole class of such extensions is provided by the notion of locally perturbed random walks. In a wide sense, a locally perturbed random walk is a Markov chain whose transition probabilities coincide with the transition probabilities of some standard random walk only outside a given subset of the state space. The first model of this flavor is due to Harrison and Shepp [5], who investigated the simple random walk on $\mathbb{Z}$ with a perturbation at state 0. That is, from a state $x\in \mathbb{Z}\setminus \{0\}$ the process jumps to $x\pm 1$ with probability $1/2$ each, whereas from the state 0 it jumps to $+1$ with probability $p$ and to $-1$ with probability $1-p$, where $p\in [0,1]\setminus \{1/2\}$. Their main result says that the scaling limit of this locally perturbed random walk is a skew Brownian motion, rather than the standard Brownian motion that arises for the simple symmetric random walk, that is, when $p=1/2$. Recent results and generalizations of this model can be found in [7, 9] and [10].
Another representative of the class of locally perturbed random walks is a planar random walk in a semi-infinite strip studied in [2]. This model deals with a two-dimensional random walk on ${\mathbb{Z}^{2}}$ which behaves as a standard random walk before hitting a horizontal line $\{(x,y)\in {\mathbb{Z}^{2}}:y=c\}$; afterwards the walk ‘sticks’ to that line and behaves as a one-dimensional random walk on it. Thus, the perturbation in that random walk is due to the presence of an impenetrable barrier that changes the behavior of the walk once it is reached. In [2] such a process appeared in the context of the statistical analysis of the low inventory problem and is, therefore, of a purely applied nature. The main focus in [2] was on the analysis of that model as $c\to \infty $. Motivated by the aforementioned study, we introduce and analyze in this note a new class of multidimensional locally perturbed random walks, which we call random walks with sticky barriers (RWSB) and which generalizes the model treated in [2].
Let $d\in \mathbb{N}$ denote the dimension of the underlying space; it will remain fixed throughout the paper. For a sequence ${({\mathbf{b}_{n}})_{n\in \mathbb{N}}}$ of ${\mathbb{N}^{d}}$-valued vectors, ${\mathbf{b}_{n}}:=({b_{1}^{(n)}},{b_{2}^{(n)}},\dots ,{b_{d}^{(n)}})$, introduce a sequence of lattice sets
\[ {\mathbf{R}_{n}}:=\left([0,\hspace{0.1667em}{b_{1}^{(n)}}]\times [0,\hspace{0.1667em}{b_{2}^{(n)}}]\times \cdots \times [0,\hspace{0.1667em}{b_{d}^{(n)}}]\right)\cap {\mathbb{N}_{0}^{d}},\]
where ${\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$. The random walk with sticky barriers ${({\mathbf{S}_{k}}(n))_{k\in {\mathbb{N}_{0}}}}:={({\mathbf{S}_{k}})_{k\in {\mathbb{N}_{0}}}}$ is a discrete-time ${\mathbb{N}_{0}^{d}}$-valued random process whose evolution can be described by the following informal rules (a formal description will be given below):
• it starts at the origin, that is, ${\mathbf{S}_{0}}:=\mathbf{0}$;
• while staying inside ${\mathbf{R}_{n}}$ the process $({\mathbf{S}_{k}})$ evolves as the usual random walk with independent identically distributed (i.i.d.) ${\mathbb{N}_{0}^{d}}$-valued increments;
• upon hitting one of the half-spaces\[ {\mathbf{H}_{i,\ge }^{(n)}}=\{({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{N}_{0}^{d}}:{x_{i}}\ge {b_{i}^{(n)}}\},\]for some $i=1,\dots ,d$, the i-th coordinate of the process is set to be equal to ${b_{i}^{(n)}}$ forever, that is, the process ‘sticks’ to the corresponding hyperplane\[{\mathbf{H}_{i}^{(n)}}:=\{({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{N}_{0}^{d}}\hspace{0.1667em}:\hspace{0.1667em}{x_{i}}={b_{i}^{(n)}}\},\]and evolves further as another random walk of a smaller dimension with i.i.d. increments in the hyperplane ${\mathbf{H}_{i}^{(n)}}$ until it hits the next barrier of the form ${\mathbf{H}_{i,\ge }^{(n)}}\cap {\mathbf{H}_{j,\ge }^{(n)}}$, for some $i\ne j$, and so on;
• the process terminates upon hitting the vertex $({b_{1}^{(n)}},{b_{2}^{(n)}},\dots ,{b_{d}^{(n)}})$.
Note that the notion of RWSB is genuinely multidimensional and becomes almost degenerate when $d=1$. Thus, the simplest nontrivial situation occurs in dimension two, and from now on we always assume that $d\ge 2$. As has already been mentioned, in the case $d=2$ the concept of RWSB (in a slightly simplified variant with ${b_{2}^{(n)}}=\infty $) was proposed in [2] for the statistical analysis of a shared inventory problem with two types of goods. This model, on the one hand, is rich enough to take into account the peculiarities of a finite inventory, but at the same time is simple enough to allow for closed forms of many important statistical quantities related to the model; see [2] for the details. Let us also stress that the model studied here, as well as its particular case treated in [2], can easily be generalized to the nonlattice setting. That is, one can drop the assumption that the increments are ${\mathbb{N}_{0}^{d}}$-valued and define a RWSB with state space ${\mathbb{R}^{d}}$. Most of the results obtained in this paper also hold in this more general setting after the necessary amendments.
The rest of the paper is organized as follows. In Section 2 we give a formal specification of the model by defining the process ${({\mathbf{S}_{k}})_{k\in {\mathbb{N}_{0}}}}$ and various related quantities. The main results are given in Section 3 and are divided into three parts: strong laws of large numbers, asymptotic expansions for expectations, and functional limit theorems. The proofs are given in Section 4. In the Appendix we collect some basic facts from renewal theory, which is the main toolbox for our analysis.
We shall also use the following notational conventions. For a vector $\mathbf{x}=({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{N}_{0}^{d}}$ we denote by $|\mathbf{x}|=|{x_{1}}|+|{x_{2}}|+\cdots +|{x_{d}}|$ its Manhattan norm. We shall also use the notation $\mathbf{x}\cdot \mathbf{y}$ for the coordinatewise product of vectors, that is, if $\mathbf{y}=({y_{1}},{y_{2}},\dots ,{y_{d}})\in {\mathbb{N}_{0}^{d}}$, then $\mathbf{x}\cdot \mathbf{y}=({x_{1}}{y_{1}},{x_{2}}{y_{2}},\dots ,{x_{d}}{y_{d}})$.
2 Formal description of the model
Let $n\in \mathbb{N}$ be a fixed integer. Denote by ${\mathbf{B}_{d}}:={\{0,1\}^{d}}$ the set of binary strings of length d. The vector $\mathbf{m}=({m_{1}},{m_{2}},\dots ,{m_{d}})\in {\mathbf{B}_{d}}$ represents the already reached barriers, that is, ${m_{i}}=0$, for $i=1,\dots ,d$, if and only if the RWSB, to be constructed below, has not yet reached the barrier ${b_{i}^{(n)}}$. Put ${\mathbf{B}^{\prime }_{d}}:={\{0,1\}^{d}}\setminus \{(1,1,\dots ,1)\}$. Let $(\Omega ,\mathcal{F},\mathbb{P})$ be a fixed probability space, which we assume to be rich enough to accommodate an array ${\mathbf{X}_{k}^{(\mathbf{m})}}=({X_{k,1}^{(\mathbf{m})}},{X_{k,2}^{(\mathbf{m})}},\dots ,{X_{k,d}^{(\mathbf{m})}})$, $\mathbf{m}\in {\mathbf{B}^{\prime }_{d}}$, $k\in \mathbb{N}$, of mutually independent ${\mathbb{N}_{0}^{d}}$-valued random vectors. For every fixed $\mathbf{m}\in {\mathbf{B}^{\prime }_{d}}$, the vectors ${({\mathbf{X}_{k}^{(\mathbf{m})}})_{k\in \mathbb{N}}}$ are assumed to be identically distributed and represent the increments of the RWSB once it has hit the barriers determined by m. A random walk with sticky barriers is a Markov chain ${\mathbf{S}_{k}}:=({S_{k,1}},{S_{k,2}},\dots ,{S_{k,d}})$, $k\in {\mathbb{N}_{0}}$, that depends on the parameter n, and which is defined formally by the recursive formula
(1)
\[ {\mathbf{S}_{0}}=\mathbf{0},\hspace{1em}{\mathbf{S}_{k+1}}:=\min \left\{{\mathbf{S}_{k}}+{\mathbf{X}_{k+1-{\tau _{|{\mathbf{I}_{k}}|}}(n)}^{({\mathbf{I}_{k}})}},{\mathbf{b}_{n}}\right\},\hspace{1em}k\in {\mathbb{N}_{0}},\]
where the minimum is taken coordinatewise, the binary vector
(2)
\[ {\mathbf{B}_{d}}\ni {\mathbf{I}_{k}}:={\mathbf{I}_{k}^{(n)}}=\left({\mathbf{1}_{\{{S_{k,1}}={b_{1}^{(n)}}\}}},{\mathbf{1}_{\{{S_{k,2}}={b_{2}^{(n)}}\}}},\dots ,{\mathbf{1}_{\{{S_{k,d}}={b_{d}^{(n)}}\}}}\right),\hspace{1em}k\in {\mathbb{N}_{0}},\]
represents the barriers already reached before the k-th step, and
(3)
\[ {\tau _{0}}(n):=0,\hspace{1em}{\tau _{j}}(n):=\inf \{k\in {\mathbb{N}_{0}}:|{\mathbf{I}_{k}^{(n)}}|\ge j\},\hspace{1em}j=1,\dots ,d,\]
are the moments at which $({\mathbf{S}_{k}})$ hits a new barrier. Note that, for every fixed $n\in \mathbb{N}$, $({\mathbf{S}_{k}})$ is indeed a Markov chain with the discrete state space ${\mathbf{R}_{n}}$ and the inhomogeneous (in space) transition kernel
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \mathbb{P}\{{\mathbf{S}_{k}}=\mathbf{y}|{\mathbf{S}_{k-1}}=\mathbf{x}\}=\mathbb{P}\{\min (\mathbf{x}+{\mathbf{X}^{(\mathbf{m}(\mathbf{x}))}},{\mathbf{b}_{n}})=\mathbf{y}\},\\ {} & & \displaystyle \hspace{1em}\text{where}\hspace{1em}\mathbf{m}(\mathbf{x}):=\left({\mathbf{1}_{\{{x_{1}}={b_{1}^{(n)}}\}}},{\mathbf{1}_{\{{x_{2}}={b_{2}^{(n)}}\}}},\dots ,{\mathbf{1}_{\{{x_{d}}={b_{d}^{(n)}}\}}}\right),\hspace{1em}\mathbf{x},\mathbf{y}\in {\mathbf{R}_{n}},\end{array}\]
and, for $\mathbf{m}\in {\mathbf{B}_{d}}$, ${\mathbf{X}^{(\mathbf{m})}}=({X_{1}^{(\mathbf{m})}},{X_{2}^{(\mathbf{m})}},\dots ,{X_{d}^{(\mathbf{m})}})$ is a generic copy of the i.i.d. vectors ${\mathbf{X}_{k}^{(\mathbf{m})}}$, $k\in \mathbb{N}$. By definition, ${\tau _{d}}(n)$ is the absorption time when the RWSB hits ${\mathbf{b}_{n}}$ and stays there forever. Furthermore, without loss of generality we can and do assume in what follows that, for all $k\in \mathbb{N}$ and $\mathbf{m}\in {\mathbf{B}_{d}}$,
(4)
\[ {X_{k,i}^{(\mathbf{m})}}=0\hspace{2.5pt}\text{a.s. whenever}\hspace{2.5pt}{m_{i}}=1,\hspace{1em}i=1,\dots ,d,\]
that is, the coordinates that have already stuck to their barriers receive zero increments.
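To make the recursion (1)–(3) concrete, the following minimal Python sketch simulates one trajectory of a RWSB. Everything specific in it is an assumption made for illustration only: the paper does not fix the increment laws, so Poisson increments with a common mean are used for every configuration m, and the function name simulate_rwsb is ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_rwsb(b, mean=2.0):
    """Simulate one path of the recursion (1)-(3).

    b    : the barrier vector (b_1, ..., b_d), positive integers;
    mean : mean of the (assumed) Poisson increments in the free coordinates.

    Returns the barrier-hitting times tau_1 <= ... <= tau_d.  In line with
    assumption (4), coordinates that have already reached their barriers
    receive zero increments.
    """
    b = np.asarray(b, dtype=int)
    d = len(b)
    S = np.zeros(d, dtype=int)           # S_0 = 0
    taus, k, reached = [], 0, 0          # reached = |I_k|
    while reached < d:
        m = (S == b)                     # the binary vector I_k from (2)
        X = np.where(m, 0, rng.poisson(mean, size=d))
        S = np.minimum(S + X, b)         # the recursion (1)
        k += 1
        while reached < (S == b).sum():  # a new barrier has been hit: record tau_j
            taus.append(k)
            reached += 1
    return taus

# one trajectory with barriers (50, 400); tau_1 should be roughly 50/2 = 25
print(simulate_rwsb([50, 400]))
\end{verbatim}
Using a single Poisson law for every configuration m is only a simplification of the sketch; in the model each $\mathbf{m}\in {\mathbf{B}^{\prime }_{d}}$ may carry its own increment distribution.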
Despite its formidable form, formula (1) has a simple interpretation that agrees with the informal description given in the introduction. Namely, the evolution of ${({\mathbf{S}_{k}})_{k\in {\mathbb{N}_{0}}}}$ before the absorption time ${\tau _{d}}(n)$ is divided into disjoint parts. For every integer $k<{\tau _{d}}(n)$ there is a unique index $i=0,\dots ,d-1$ such that $k\in [{\tau _{i}}(n),{\tau _{i+1}}(n))$. Thus, using that ${\mathbf{I}_{k}}={\mathbf{I}_{{\tau _{i}}(n)}}$ and $|{\mathbf{I}_{k}}|=i$ if $k\in [{\tau _{i}}(n),{\tau _{i+1}}(n))$, and denoting $l:=k-{\tau _{i}}(n)$, we can write
(5)
\[ {\mathbf{S}_{{\tau _{i}}(n)+l+1}}=\min \left\{{\mathbf{S}_{{\tau _{i}}(n)+l}}+{\mathbf{X}_{l+1}^{({\mathbf{I}_{{\tau _{i}}(n)}})}},{\mathbf{b}_{n}}\right\},\hspace{1em}0\le l\le {\tau _{i+1}}(n)-{\tau _{i}}(n)-1.\]
Moreover, by definition of ${\tau _{i+1}}(n)$ and in view of assumption (4), the minimum above can be dropped if $0\le l\le {\tau _{i+1}}(n)-{\tau _{i}}(n)-2$. Thus,
\[ {\mathbf{S}_{{\tau _{i}}(n)+l+1}}={\mathbf{S}_{{\tau _{i}}(n)+l}}+{\mathbf{X}_{l+1}^{({\mathbf{I}_{{\tau _{i}}(n)}})}},\hspace{1em}0\le l<{\tau _{i+1}}(n)-{\tau _{i}}(n)-1,\]
which means that, for every $i=0,\dots ,d-1$, the process $({\mathbf{S}_{{\tau _{i}}(n)+l}}-{\mathbf{S}_{{\tau _{i}}(n)}})$ is a standard zero-delayed random walk on the time interval $0\le l<{\tau _{i+1}}(n)-{\tau _{i}}(n)-1$. We also emphasize that in our definition of $({\mathbf{S}_{k}})$ the numeration of increments starts afresh from 1 on each time interval $[{\tau _{i}}(n),{\tau _{i+1}}(n))$. Another useful representation of $({\mathbf{S}_{k}})$ is
(6)
\[ {\mathbf{S}_{k}}=\min \left({\sum \limits_{i=0}^{d-1}}{\sum \limits_{j={\tau _{i}}(n)}^{\min ({\tau _{i+1}}(n),k)-1}}{\mathbf{X}_{j+1-{\tau _{i}}(n)}^{({\mathbf{I}_{{\tau _{i}}(n)}})}},{\mathbf{b}_{n}}\right),\hspace{1em}k\in {\mathbb{N}_{0}}.\]
Indeed, (1) yields, for $k\le {\tau _{d}}(n)$,
\[\begin{aligned}{}& \hspace{-28.45274pt}{\mathbf{S}_{k}}=\min \left\{{\sum \limits_{j=0}^{\infty }}{\mathbf{X}_{j+1-{\tau _{|{\mathbf{I}_{j}}|}}(n)}^{({\mathbf{I}_{j}})}}{\mathbb{1}_{\{j<k\}}},{\mathbf{b}_{n}}\right\}\\ {} & =\min \left\{{\sum \limits_{j=0}^{\infty }}{\sum \limits_{i=0}^{d-1}}{\mathbb{1}_{\{{\tau _{i}}(n)\le j<\min ({\tau _{i+1}}(n),k)\}}}{\mathbf{X}_{j+1-{\tau _{|{\mathbf{I}_{j}}|}}(n)}^{({\mathbf{I}_{j}})}},{\mathbf{b}_{n}}\right\}\\ {} & =\min \left\{{\sum \limits_{i=0}^{d-1}}{\sum \limits_{j=0}^{\infty }}{\mathbb{1}_{\{{\tau _{i}}(n)\le j<\min ({\tau _{i+1}}(n),k)\}}}{\mathbf{X}_{j+1-{\tau _{i}}(n)}^{({\mathbf{I}_{{\tau _{i}}(n)}})}},{\mathbf{b}_{n}}\right\},\end{aligned}\]
which is the right-hand side of (6).
Throughout the paper we shall make the following assumptions on the distribution of ${\mathbf{X}^{(\mathbf{m})}}$ and use the following notation. By $\mathbb{E}$ we denote expectation with respect to the probability measure $\mathbb{P}$. We suppose that, for every $\mathbf{m}\in {\mathbf{B}^{\prime }_{d}}$, it holds
\[ \mathbb{E}|{\mathbf{X}^{(\mathbf{m})}}{|^{2}}<\infty \]
and
(7)
\[\begin{array}{r}\displaystyle {\boldsymbol{\mu }^{(\mathbf{m})}}={({\mu _{1}^{(\mathbf{m})}},{\mu _{2}^{(\mathbf{m})}},\dots ,{\mu _{d}^{(\mathbf{m})}}):=\left(\mathbb{E}{X_{1}^{(\mathbf{m})}},\mathbb{E}{X_{2}^{(\mathbf{m})}},\dots ,\mathbb{E}{X_{d}^{(\mathbf{m})}}\right)\in [0,\infty )^{d}}\\ {} \displaystyle \hspace{2.5pt}\text{and}\hspace{2.5pt}\mathbb{E}{X_{i}^{(\mathbf{m})}}>0\hspace{2.5pt}\text{if}\hspace{2.5pt}{m_{i}}=0,\hspace{1em}i=1,\dots ,d;\end{array}\]
We now state the assumptions on the sequence of barriers $({\mathbf{b}_{n}})$. We assume that there exist pairwise distinct positive numbers ${\rho _{1}},\dots ,{\rho _{d}}$ such that, for every fixed $\lambda >0$,
(9)
\[ \underset{n\to \infty }{\lim }\frac{{b_{i}^{(\lfloor n\lambda \rfloor )}}}{{b_{i}^{(n)}}}={\lambda ^{{\rho _{i}}}},\hspace{1em}i=1,\dots ,d,\]
that is, the sequence ${({b_{i}^{(n)}})_{n\in \mathbb{N}}}$ is regularly varying with index ${\rho _{i}}$, for every $i=1,\dots ,d$. See [1] for comprehensive information on regularly varying functions and sequences. Note that without loss of generality we may assume that $0<{\rho _{1}}<{\rho _{2}}<\cdots <{\rho _{d}}$, since we can always permute the coordinates of all involved variables to achieve the desired order of the indices ${\rho _{i}}$. Thus, from now on we suppose additionally that
(10)
\[ 0<{\rho _{1}}<{\rho _{2}}<\cdots <{\rho _{d}}.\]
We shall also occasionally allow the sequence ${({\mathbf{b}_{n}})_{n\in \mathbb{N}}}$ to be random. In this case we shall assume that the barriers and the steps of the corresponding RWSB are independent and that the limit relation (9) holds almost surely for some deterministic ${\rho _{1}},\dots ,{\rho _{d}}$, which, therefore, can again be arranged in increasing order.
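As a simple illustration of (9) (the concrete choice below is ours and not taken from the paper): for constants ${c_{i}}>0$ one may take
\[ {b_{i}^{(n)}}=\lceil {c_{i}}{n^{{\rho _{i}}}}\rceil ,\hspace{1em}\text{so that}\hspace{1em}\frac{{b_{i}^{(\lfloor n\lambda \rfloor )}}}{{b_{i}^{(n)}}}=\frac{\lceil {c_{i}}{\lfloor n\lambda \rfloor ^{{\rho _{i}}}}\rceil }{\lceil {c_{i}}{n^{{\rho _{i}}}}\rceil }\underset{n\to \infty }{\longrightarrow }{\lambda ^{{\rho _{i}}}}\]
for every fixed $\lambda >0$, since $\lfloor n\lambda \rfloor /n\to \lambda $; slowly varying corrections such as ${b_{i}^{(n)}}=\lceil {n^{{\rho _{i}}}}\log n\rceil $ are equally admissible.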
3 Main results
Let $\{{\mathbf{e}_{1}},{\mathbf{e}_{2}},\dots ,{\mathbf{e}_{d}}\}$ denote the standard unit basis of ${\mathbb{R}^{d}}$ and put
\[ {\mathbf{s}_{0}}:=\mathbf{0},\hspace{1em}{\mathbf{s}_{j}}:={\mathbf{e}_{1}}+{\mathbf{e}_{2}}+\cdots +{\mathbf{e}_{j}},\hspace{1em}j=1,\dots ,d.\]
3.1 Strong laws of large numbers
Our first main result is a uniform strong law of large numbers for the differences ${\tau _{j}}-{\tau _{j-1}}$ and positions of the RWSB upon hitting the barriers.
Theorem 1.
Assume that ${({\mathbf{b}_{n}})_{n\in \mathbb{N}}}\subset {\mathbb{N}^{d}}$ is a (possibly random) sequence of barriers such that (9) holds for some deterministic ${\rho _{j}}$, $j=1,\dots ,d$, satisfying (10). Suppose also that ${({\mathbf{b}_{n}})_{n\in \mathbb{N}}}$ and $({\mathbf{X}_{k}^{(m)}})$ are independent. Then, for every $j=1,\dots ,d$ and $a>0$, it holds
(11)
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\tau _{j}}(\lfloor nt\rfloor )-{\tau _{j-1}}(\lfloor nt\rfloor )}{{b_{j}^{(n)}}}-\frac{{t^{{\rho _{j}}}}}{{\mu _{j}^{({\mathbf{s}_{j-1}})}}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0,\]
and
(12)
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\mathbf{S}_{{\tau _{j}}(\lfloor nt\rfloor )}}}{{b_{j}^{(n)}}}-\frac{{t^{{\rho _{j}}}}}{{\mu _{j}^{({\mathbf{s}_{j-1}})}}}({\mathbf{s}_{d}}-{\mathbf{s}_{j-1}})\cdot {\boldsymbol{\mu }^{({\mathbf{s}_{j-1}})}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
Furthermore, with probability one the barriers are eventually hit in the order of their indices, that is, for all $j=1,\dots ,d$,
(13)
\[ |{\mathbf{I}_{{\tau _{j}}(n)}^{(n)}}-{\mathbf{s}_{j}}|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
Fix $l=1,\dots ,d$. Replacing, for $j=1,\dots ,l-1$, the denominators in (11) by ${b_{l}^{(n)}}$ and the limits ${t^{{\rho _{j}}}}/{\mu _{j}^{({\mathbf{s}_{j-1}})}}$ by zeros, and summing over all $j=1,\dots ,l$ yields the following corollary, which, in particular, provides the uniform strong law of large numbers for the absorption time of the RWSB when $l=d$.
Corollary 1.
Under the assumptions of Theorem 1, for every $l=1,\dots ,d$ and $a>0$,
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\tau _{l}}(\lfloor nt\rfloor )}{{b_{l}^{(n)}}}-\frac{{t^{{\rho _{l}}}}}{{\mu _{l}^{({\mathbf{s}_{l-1}})}}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
In particular, the absorption time satisfies
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\tau _{d}}(\lfloor nt\rfloor )}{{b_{d}^{(n)}}}-\frac{{t^{{\rho _{d}}}}}{{\mu _{d}^{({\mathbf{s}_{d-1}})}}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
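The following small Monte Carlo sketch illustrates the last relation for $d=2$ at $t=1$; the increment law (Poisson with mean 2 in both phases) and the barrier choice ${b_{1}^{(n)}}=n$, ${b_{2}^{(n)}}={n^{2}}$ (so that ${\rho _{1}}=1<{\rho _{2}}=2$) are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def absorption_time(b1, b2, mean=2.0):
    """Steps until both coordinates of a 2-d RWSB have stuck to their barriers
    (Poisson increments, the same law before and after the first barrier)."""
    s1 = s2 = steps = 0
    while s1 < b1 or s2 < b2:
        if s1 < b1:
            s1 = min(s1 + rng.poisson(mean), b1)
        if s2 < b2:
            s2 = min(s2 + rng.poisson(mean), b2)
        steps += 1
    return steps

mean = 2.0
for n in (20, 50, 100):
    tau_d = absorption_time(n, n * n, mean)
    # the ratio tau_d(n) / b_2^{(n)} should approach 1 / mu_2 = 0.5
    print(n, tau_d / (n * n))
\end{verbatim}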
3.2 Expectations
The exact calculation of $\mathbb{E}{\tau _{j}}(n)$ and $\mathbb{E}{\mathbf{S}_{{\tau _{j}}(n)}}$ seems to be overly complicated. However, we have been able to prove the following result.
Proposition 1.
For $j=1,\dots ,d$, it holds
(15)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle \mathbb{E}\left({\tau _{j}}(n)-{\tau _{j-1}}(n)|{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\right)\\ {} & & \displaystyle \hspace{1em}=\frac{{b_{j}^{(n)}}-\mathbb{E}\left({S_{{\tau _{j-1}}(n),j}}|{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\right)}{{\mu _{j}^{({\mathbf{s}_{j-1}})}}}+O(1),\hspace{1em}n\to \infty ;\end{array}\]
and, for $l=j,\dots ,d$,
(16)
\[\begin{aligned}{}& \mathbb{E}\left({S_{{\tau _{j}}(n),l}}-{S_{{\tau _{j-1}}(n),l}}|{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\right)\\ {} & \hspace{1em}=\frac{{\mu _{l}^{({\mathbf{s}_{j-1}})}}\left({b_{j}^{(n)}}-\mathbb{E}\left({S_{{\tau _{j-1}}(n),j}}|{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\right)\right)}{{\mu _{j}^{({\mathbf{s}_{j-1}})}}}+O(1),\hspace{1em}n\to \infty .\end{aligned}\]
In particular,
(17)
\[ \mathbb{E}{\tau _{1}}(n)=\frac{{b_{1}^{(n)}}}{{\mu _{1}^{(\mathbf{0})}}}+O(1),\hspace{1em}n\to \infty ,\]
and, for $l=2,\dots ,d$,
(18)
\[ \mathbb{E}{S_{{\tau _{1}}(n),l}}=\frac{{\mu _{l}^{(\mathbf{0})}}}{{\mu _{1}^{(\mathbf{0})}}}{b_{1}^{(n)}}+O(1),\hspace{1em}n\to \infty .\]
Note that by (13),
\[ \mathbb{E}\left({S_{{\tau _{j-1}}(n),j}}|{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\right)\le \frac{\mathbb{E}{S_{{\tau _{j-1}}(n),j}}}{\mathbb{P}\{{\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}\}}\hspace{2.5pt}\sim \mathbb{E}{S_{{\tau _{j-1}}(n),j}},\hspace{1em}n\to \infty ,\]
and $\mathbb{E}{S_{{\tau _{j-1}}(n),j}}=O({b_{j-1}^{(n)}})=o({b_{j}^{(n)}})$, as $n\to \infty $, for $j=2,\dots ,d$, as can be easily checked by the stochastic comparison argument. Thus, we obtain the following.
3.3 Functional limit theorems
Fix $0<a<b<\infty $. In what follows we denote by $\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}$ the convergence in distribution in the Skorokhod space $D[a,b]$ endowed with the standard ${J_{1}}$-topology.
Theorem 2.
For every $j=1,\dots ,d$, the following holds true:
(21)
\[\begin{aligned}{}& {\left(\frac{{\tau _{j}}(\lfloor nt\rfloor )-{\tau _{j-1}}(\lfloor nt\rfloor )-\frac{1}{{\mu _{j}^{({\mathbf{s}_{j-1}})}}}({b_{j}^{(\lfloor nt\rfloor )}}-\mathbb{E}({S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}|{A_{n,j-1}}(a)))}{{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-3/2}}{({v_{j,j}^{({\mathbf{s}_{j-1}})}}{b_{j}^{(n)}})^{1/2}}}\right)_{t\in [a,\hspace{0.1667em}b]}}\\ {} & \hspace{227.62204pt}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(B({t^{{\rho _{j}}}}))_{t\in [a,\hspace{0.1667em}b]}},\end{aligned}\]
where $B={(B(t))_{t\ge 0}}$ is the standard Brownian motion and the events ${A_{n,j-1}}(a)$ are defined in (32) below. Moreover, if $2{\rho _{j-1}}<{\rho _{j}}$, then
(22)
\[ {\left(\frac{{\tau _{j}}(\lfloor nt\rfloor )-{\tau _{j-1}}(\lfloor nt\rfloor )-{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-1}}{b_{j}^{(\lfloor nt\rfloor )}}}{{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-3/2}}{({v_{j,j}^{({\mathbf{s}_{j-1}})}}{b_{j}^{(n)}})^{1/2}}}\right)_{t>0}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(B({t^{{\rho _{j}}}}))_{t>0}}\]
in the Skorokhod space $D(0,\infty )$ endowed with the standard ${J_{1}}$-topology.
Remark 3.
We do not know whether the conditional expectation
\[ \mathbb{E}({S_{{\tau _{j-1}}(n),j}}|{A_{n,j-1}}(a))\]
in the centering in (21) can be replaced by its unconditional counterpart without extra assumptions. Even though ${\lim \nolimits_{n\to \infty }}\mathbb{P}\{{A_{n,j-1}}(a)\}=1$, we do not even know whether
\[ \mathbb{E}({S_{{\tau _{j-1}}(n),j}}|{A_{n,j-1}}(a))\hspace{2.5pt}\sim \hspace{2.5pt}\mathbb{E}{S_{{\tau _{j-1}}(n),j}},\hspace{1em}n\to \infty ,\]
since we do not have a sufficient control over the behavior of ${S_{{\tau _{j-1}}(n)}}$ on the complement event ${A_{n,j-1}^{c}}(a)$.
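Before turning to the proofs, here is a quick numerical illustration of the normalization in (22) for $j=1$ and fixed $t=1$. The Poisson increments are again an assumed choice, and we read ${v_{1,1}^{(\mathbf{0})}}$ as the variance of ${X_{1}^{(\mathbf{0})}}$; for $j=1$, relation (22) then reduces to the classical central limit theorem for first-passage times of a renewal process.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
mu, b1, n_paths = 2.0, 10_000, 2000        # Poisson(mu): mean mu, variance mu

v11 = mu                                   # reading v_{1,1}^{(0)} as Var X_1^{(0)}
scale = mu ** (-1.5) * np.sqrt(v11 * b1)   # denominator in (22) for j = 1, t = 1

z = np.empty(n_paths)
for i in range(n_paths):
    incs = rng.poisson(mu, size=b1)        # far more increments than ever needed
    tau1 = int(np.argmax(np.cumsum(incs) >= b1)) + 1   # first passage over b_1^{(n)}
    z[i] = (tau1 - b1 / mu) / scale

print(round(z.mean(), 3), round(z.std(), 3))   # should be close to 0 and 1
\end{verbatim}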
4 Proofs
Let us introduce some additional notation. For $\mathbf{m}\in {\mathbf{B}^{\prime }_{d}}$, let
\[ {\mathbf{S}_{0}^{(\mathbf{m})}}=0,\hspace{1em}{\mathbf{S}_{k}^{(\mathbf{m})}}={\mathbf{S}_{k-1}^{(\mathbf{m})}}+{\mathbf{X}_{k}^{(\mathbf{m})}},\hspace{1em}k\in \mathbb{N},\]
be a standard zero-delayed random walk. Further, for $\mathbf{A}\subset {\mathbb{N}_{0}^{d}}$, set
\[ {\sigma ^{(\mathbf{m})}}(\mathbf{A}):=\left\{\begin{array}{l@{\hskip10.0pt}l}\inf \{k\in {\mathbb{N}_{0}}:{\mathbf{S}_{k}^{(\mathbf{m})}}\in \mathbf{A}\},\hspace{1em}& \text{if}\hspace{2.5pt}{\mathbf{S}_{k}^{(\mathbf{m})}}\in \mathbf{A}\hspace{2.5pt}\text{for some}\hspace{2.5pt}k\in {\mathbb{N}_{0}};\\ {} +\infty ,\hspace{1em}& \text{otherwise}.\end{array}\right.\]
For every fixed $\mathbf{m}\in {\mathbf{B}_{d}}$, let ${\pi _{\mathbf{m}}}:{\mathbb{R}^{d}}\mapsto {\mathbb{R}^{d-|\mathbf{m}|}}$ be the projection mapping which removes coordinates of $\mathbf{x}\in {\mathbb{R}^{d}}$ with indices $i=1,\dots ,d$ such that ${m_{i}}=1$. For example,
\[ {\pi _{{\mathbf{e}_{j}}}}({x_{1}},{x_{2}},\dots ,{x_{d}})=({x_{1}},\dots ,{x_{j-1}},{x_{j+1}},\dots ,{x_{d}}),\hspace{1em}({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{R}^{d}},\]
and
\[ {\pi _{{\mathbf{s}_{j}}}}({x_{1}},{x_{2}},\dots ,{x_{d}})=({x_{j+1}},\dots ,{x_{d}}),\hspace{1em}({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{R}^{d}},\]
for $j=0,\dots ,d$. The following simple observation lies at the core of all subsequent proofs.
Proposition 2.
Fix $j=1,\dots ,d$ and let the process ${({\mathbf{S}_{k}^{\to j}})_{k\in {\mathbb{N}_{0}}}}$ be defined by the equality
(23)
\[ {\mathbf{S}_{k}^{\to j}}:={\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}({\mathbf{S}_{k+{\tau _{j-1}}(n)}}-{\mathbf{S}_{{\tau _{j-1}}(n)}}),\hspace{1em}k\in {\mathbb{N}_{0}},\hspace{1em}j=1,\dots ,d.\]
Given that $\{{\mathbf{I}_{{\tau _{j-1}}(n)}}=\mathbf{y}\}$ for some $\mathbf{y}\in {\mathbf{B}^{\prime }_{d}}$, the process $({\mathbf{S}_{k}^{\to j}})$ is a random walk with (random) sticky barriers in the space ${\mathbb{Z}^{d-j+1}}$ with the barrier defined by
(24)
\[ {\mathbf{b}_{n}^{\to j}}:={\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}({\mathbf{b}_{n}}-{\mathbf{S}_{{\tau _{j-1}}(n)}}).\]
Proof.
Fix $j=1,\dots ,d$. We are going to show that $({\mathbf{S}_{k}^{\to j}})$ satisfies a version of (1). Substituting in (1) $k\mapsto k+{\tau _{j-1}}(n)$, subtracting ${\mathbf{S}_{{\tau _{j-1}}(n)}}$ from both sides and applying the linear mapping ${\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}$ yield
(25)
\[ {\mathbf{S}_{k+1}^{\to j}}=\min \left({\mathbf{S}_{k}^{\to j}}+{\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}\left({\mathbf{X}_{k+1+{\tau _{j-1}}(n)-{\tau _{|{\mathbf{I}_{k+{\tau _{j-1}}(n)}}|}}(n)}^{({\mathbf{I}_{k+{\tau _{j-1}}(n)}})}}\right),{\mathbf{b}_{n}^{\to j}}\right).\]
In order to simplify the above expression, write ${\mathbf{b}_{n}^{\to j}}=({b_{n,1}^{\to j}},\dots ,{b_{n,d-j+1}^{\to j}})$ and ${\mathbf{S}_{k}^{\to j}}=({S_{k,1}^{\to j}},\dots ,{S_{k,d-j+1}^{\to j}})$ for the coordinates, put
\[ {\mathbf{I}_{k}^{\to j}}=\left({\mathbf{1}_{\{{S_{k,1}^{\to j}}={b_{n,1}^{\to j}}\}}},{\mathbf{1}_{\{{S_{k,2}^{\to j}}={b_{n,2}^{\to j}}\}}},\dots ,{\mathbf{1}_{\{{S_{k,d-j+1}^{\to j}}={b_{n,d-j+1}^{\to j}}\}}}\right),\hspace{1em}k\in {\mathbb{N}_{0}},\]
and note that
(26)
\[ {\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}({\mathbf{I}_{k+{\tau _{j-1}}(n)}})={\mathbf{I}_{k}^{\to j}},\hspace{1em}k\in {\mathbb{N}_{0}},\]
(27)
\[ |{\mathbf{I}_{k+{\tau _{j-1}}(n)}}|=|{\mathbf{I}_{k}^{\to j}}|+(j-1),\hspace{1em}k\in {\mathbb{N}_{0}}.\]
Similarly to (3), put
\[ {\tau _{0}^{\to j}}(n):=0,\hspace{1em}{\tau _{i}^{\to j}}(n):=\inf \{k\in {\mathbb{N}_{0}}:|{\mathbf{I}_{k}^{\to j}}|\ge i\},\hspace{1em}i=1,\dots ,d-j+1.\]
In view of (26) it holds
(28)
\[\begin{aligned}{}{\tau _{i}^{\to j}}(n)& =\inf \{k\in {\mathbb{N}_{0}}:|{\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}({\mathbf{I}_{k+{\tau _{j-1}}(n)}})|\ge i\}\\ {} & =\inf \{k\in {\mathbb{N}_{0}}:|{\mathbf{I}_{k+{\tau _{j-1}}(n)}}|\ge i+j-1\}\stackrel{\text{(27)}}{=}{\tau _{i+j-1}}(n)-{\tau _{j-1}}(n).\end{aligned}\]
Putting things together we see that (25) can be recast as
\[ {\mathbf{S}_{k+1}^{\to j}}=\min \left({\mathbf{S}_{k}^{\to j}}+{\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}\left({\mathbf{X}_{k+1-{\tau _{|{\mathbf{I}_{k}^{\to j}}|}^{\to j}}(n)}^{({\mathbf{I}_{k}^{\to j}})}}\right),{\mathbf{b}_{n}^{\to j}}\right).\]
This shows that $({\mathbf{S}_{k}^{\to j}})$ is indeed a random walk with sticky barriers in the space ${\mathbb{Z}^{d-j+1}}$. It remains to note that given $\{{\mathbf{I}_{{\tau _{j-1}}(n)}}=\mathbf{y}\}$ the increments
(29)
\[ {\pi _{{\mathbf{I}_{{\tau _{j-1}}(n)}}}}\left({\mathbf{X}_{k+1-{\tau _{|{\mathbf{I}_{k}^{\to j}}|}^{\to j}}(n)}^{({\mathbf{I}_{k}^{\to j}})}}\right)={\pi _{\mathbf{y}}}\left({\mathbf{X}_{k+1-{\tau _{|{\mathbf{I}_{k}^{\to j}}|}^{\to j}}(n)}^{({\mathbf{I}_{k}^{\to j}})}}\right),\hspace{1em}k\in {\mathbb{N}_{0}},\]
are independent of the random barrier ${\mathbf{b}_{n}^{\to j}}$. □
We shall now briefly describe our proof strategy. In accordance with (28), for every $n\in \mathbb{N}$ and $j=1,\dots ,d$,
(30)
\[ {\tau _{j}}(n)-{\tau _{j-1}}(n)={\tau _{1}^{\to j}}(n)\]
and, further,
(31)
\[ {\mathbf{S}_{{\tau _{j}}(n)}}-{\mathbf{S}_{{\tau _{j-1}}(n)}}={\mathbf{S}_{{\tau _{1}^{\to j}}(n)}^{\to j}}.\]
The most important properties of the RWSB ${({\mathbf{S}_{k}^{\to j}})_{k\in {\mathbb{N}_{0}}}}$ are:
(i) a.s. regular variation of the barriers ${({\mathbf{b}_{n}^{\to j}})_{n\in \mathbb{N}}}$, which will be proved later on, see formula (38) below;
(ii) the barriers ${\mathbf{b}_{n}^{\to j}}$, conditional on the event $\{{\mathbf{I}_{{\tau _{j-1}}(n)}}=\mathbf{y}\}$, are independent of the increments (29) of ${({\mathbf{S}_{k}^{\to j}})_{k\in {\mathbb{N}_{0}}}}$.
In conjunction with the fact that the event $\{{\mathbf{I}_{{\tau _{j-1}}(n)}}=\mathbf{y}\}$ for a particular deterministic $\mathbf{y}={\mathbf{s}_{j-1}}$ has probability tending to one, see (32) and (33) below, equations (30) and (31) demonstrate how to reduce the analysis of ${\tau _{j}}(n)-{\tau _{j-1}}(n)$ and ${\mathbf{S}_{{\tau _{j}}(n)}}-{\mathbf{S}_{{\tau _{j-1}}(n)}}$, for arbitrary $j=1,\dots ,d$, to the analysis of the hitting time of the first barrier by the RWSB ${({\mathbf{S}_{k}^{\to j}})_{k\in {\mathbb{N}_{0}}}}$. In order to quantify the above statement, for every fixed $a>0$, $j=0,\dots ,d-1$ and $n\in \mathbb{N}$, define events
(32)
\[ {A_{n,j}}(a):=\{{\mathbf{I}_{{\tau _{j}}(\lfloor nt\rfloor )}^{(\lfloor nt\rfloor )}}={\mathbf{s}_{j}}\hspace{2.5pt}\text{for all}\hspace{2.5pt}t\ge a\}.\]
We shall see below that
(33)
\[ \underset{n\to \infty }{\lim }\mathbb{P}\{{A_{n,j}}(a)\}=1.\]
4.1 Proof of Theorem 1
We start by recalling the following consequence of (9), which is called the uniform convergence theorem for regularly varying functions, see Theorem 1.5.2 in [1]. If (9) holds for ${\rho _{i}}>0$, then, for every $T>0$,
(34)
\[ \underset{n\to \infty }{\lim }\underset{\lambda \in (0,T]}{\sup }\left|\frac{{b_{i}^{(\lfloor n\lambda \rfloor )}}}{{b_{i}^{(n)}}}-{\lambda ^{{\rho _{i}}}}\right|=0,\hspace{1em}i=1,\dots ,d.\]
In order to prove Theorem 1 we shall use Proposition 2 and induction on $j=1,\dots ,d$.
Base of induction. We shall prove (11), (12) and (13) for $j=1$. To this end, recall the notation
\[ {\mathbf{H}_{i,\ge }^{(n)}}=\{({x_{1}},{x_{2}},\dots ,{x_{d}})\in {\mathbb{N}_{0}^{d}}:{x_{i}}\ge {b_{i}^{(n)}}\},\hspace{1em}i=1,\dots ,d,\hspace{1em}n\in \mathbb{N},\]
and note that
(35)
\[ {\tau _{1}}(n)=\min \{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}}),{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{2,\ge }^{(n)}}),\dots ,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{d,\ge }^{(n)}})\},\hspace{1em}n\in \mathbb{N}.\]
By the functional strong law of large numbers for the ordinary renewal process applied to the projections of $({\mathbf{S}_{k}^{(\mathbf{0})}})$ onto the coordinate axes, see (52) below, in conjunction with (34),
(36)
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(\lfloor nt\rfloor )}})}{{b_{j}^{(n)}}}-\frac{{t^{{\rho _{j}}}}}{{\mu _{j}^{(\mathbf{0})}}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0,\hspace{1em}j=1,\dots ,d,\]
for all $a>0$. In view of (35), and taking into account (10), we obtain (11) for $j=1$. In order to check (13) we note that
\[ \{{\mathbf{I}_{{\tau _{1}}(n)}^{(n)}}={\mathbf{s}_{1}}\}=\{{\tau _{1}}(n)={\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}}),{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})<{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(n)}}),j=2,\dots ,d\}.\]
By (36), there exists a (random) ${n_{0}}\in \mathbb{N}$ and an event ${\Omega ^{\prime }}\subset \Omega $ such that $\mathbb{P}\{{\Omega ^{\prime }}\}=1$ and on ${\Omega ^{\prime }}$
\[ \frac{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})}{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(n)}})}\le 1/2,\hspace{1em}n\ge {n_{0}},\hspace{1em}j=2,\dots ,d.\]
Thus, (13) follows. For the proof of (12) in case $j=1$ we write
\[ {\mathbf{S}_{{\tau _{1}}(n)}}{\mathbb{1}_{\{{\mathbf{I}_{{\tau _{1}}(n)}^{(n)}}={\mathbf{s}_{1}}\}}}=\min \left({\mathbf{S}_{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})}^{(\mathbf{0})}}{\mathbb{1}_{\{{\mathbf{I}_{{\tau _{1}}(n)}^{(n)}}={\mathbf{s}_{1}}\}}},{\mathbf{b}_{n}}\right).\]
By the functional strong law of large numbers applied to $({\mathbf{S}_{k}^{(\mathbf{0})}})$, see (52) below, for every $T>0$,
\[ \underset{t\in [0,\hspace{0.1667em}T]}{\sup }\left|\frac{{\mathbf{S}_{\lfloor {b_{1}^{(n)}}t\rfloor }^{(\mathbf{0})}}}{{b_{1}^{(n)}}}-{\boldsymbol{\mu }^{(\mathbf{0})}}t\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
By taking superposition of this relation and (36) with $j=1$ we see that, for every $a>0$,
(37)
\[ \underset{t\in (0,\hspace{0.1667em}a]}{\sup }\left|\frac{{\mathbf{S}_{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}})}^{(\mathbf{0})}}}{{b_{1}^{(n)}}}-{\boldsymbol{\mu }^{(\mathbf{0})}}\frac{{t^{{\rho _{1}}}}}{{\mu _{1}^{(\mathbf{0})}}}\right|\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
Taking into account the already proved relation (13) for $j=1$, we see that (12) holds for $j=1$.
Step of induction. By the induction assumption we can pick a (random) ${n_{0,j}}\in \mathbb{N}$ such that ${\mathbf{I}_{{\tau _{j-1}}(n)}}={\mathbf{s}_{j-1}}$ for all $n\ge {n_{0,j}}$. According to Proposition 2 for the passage from $j-1$ to j we only need to check that the coordinates of ${\mathbf{b}_{n}^{\to j}}$ are a.s. regularly varying with deterministic indices of regular variation forming an increasing sequence. For $n\ge {n_{0,j}}$,
\[ {\mathbf{b}_{n}^{\to j}}=({b_{j}^{(n)}}-{S_{{\tau _{j-1}}(n),j}},\dots ,{b_{d}^{(n)}}-{S_{{\tau _{j-1}}(n),d}}).\]
Furthermore, by (12) and using the induction hypothesis, we obtain
(38)
\[\begin{array}{r}\displaystyle \frac{{b_{k}^{(\lfloor n\lambda \rfloor )}}-{S_{{\tau _{j-1}}(\lfloor \lambda n\rfloor ),k}}}{{b_{k}^{(n)}}-{S_{{\tau _{j-1}}(n),k}}}=\frac{{b_{k}^{(\lfloor n\lambda \rfloor )}}-{S_{{\tau _{j-1}}(\lfloor \lambda n\rfloor ),k}}}{{b_{k}^{(n)}}}\frac{{b_{k}^{(n)}}}{{b_{k}^{(n)}}-{S_{{\tau _{j-1}}(n),k}}}\\ {} \displaystyle \stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}{\lambda ^{{\rho _{k}}}},\hspace{1em}k=j,\dots ,d,\end{array}\]
where we have used that
\[ \frac{{S_{{\tau _{j-1}}(n),k}}}{{b_{k}^{(n)}}}\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0,\hspace{1em}k=j,\dots ,d,\]
in view of (10). The proof of Theorem 1 is complete.
4.2 Proof of Proposition 1
We start with an auxiliary lemma.
Lemma 1.
Let ${({\xi _{k}},{\eta _{k}})_{k\in \mathbb{N}}}$ be a sequence of independent copies of a random vector $(\xi ,\eta )$ with finite second moments and such that $\mathbb{E}\xi >0$. Let f be an arbitrary function such that ${\lim \nolimits_{n\to \infty }}f(n)=+\infty $. Then
\[ {\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\xi _{1}}+\cdots +{\xi _{k}}\le n,\hspace{0.2778em}{\eta _{1}}+\cdots +{\eta _{k}}\ge nf(n)\}=O(1),\hspace{1em}n\to \infty .\]
Proof.
Pick $\delta >0$ so small that $\mathbb{E}\xi -\delta \mathbb{E}\eta >0$. There exists ${n_{0}}\in \mathbb{N}$ such that $\delta f(n)\ge 1$, for all $n\ge {n_{0}}$. Thus,
\[\begin{aligned}{}& {\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\xi _{1}}+\cdots +{\xi _{k}}\le n,{\eta _{1}}+\cdots +{\eta _{k}}\ge nf(n)\}\\ {} & ={\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\xi _{1}}+\cdots +{\xi _{k}}\le n,(-\delta {\eta _{1}})+\cdots +(-\delta {\eta _{k}})\le -n\delta f(n)\}\\ {} & \le {\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\xi _{1}}+\cdots +{\xi _{k}}\le n,(-\delta {\eta _{1}})+\cdots +(-\delta {\eta _{k}})\le -n\}\\ {} & \le {\sum \limits_{k=1}^{\infty }}\mathbb{P}\{({\xi _{1}}-\delta {\eta _{1}})+\cdots +({\xi _{k}}-\delta {\eta _{k}})\le 0\}.\end{aligned}\]
The last series is convergent by Theorem 1 in [8], since $\mathbb{E}{(\xi -\delta \eta )^{2}}<\infty $ and $\mathbb{E}\xi -\delta \mathbb{E}\eta >0$. □
Proof of Proposition 1.
In view of Proposition 2 we only need to check the case $j=1$. In this case we can write by (35)
\[\begin{aligned}{}\mathbb{E}{\tau _{1}}(n)& ={\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\tau _{1}}(n)\ge k\}\\ {} & ={\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})\ge k,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{2,\ge }^{(n)}})\ge k,\dots ,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{d,\ge }^{(n)}})\ge k\}\\ {} & =\mathbb{E}{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})\\ {} & -{\sum \limits_{k=1}^{\infty }}\mathbb{P}\{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})\ge k,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(n)}})<k\hspace{2.5pt}\text{for some}\hspace{2.5pt}j=2,\dots ,d\}.\end{aligned}\]
As has already been mentioned, the quantity ${\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})$ is the first-passage time to the set $[{b_{1}^{(n)}},+\infty )$ for an ordinary one-dimensional random walk. By Lorden’s inequality, see (51) below, we obtain
\[ \mathbb{E}{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})=\frac{{b_{1}^{(n)}}}{{\mu _{1}^{(\mathbf{0})}}}+O(1),\hspace{1em}n\to \infty .\]
The remaining sum can be estimated as
\[\begin{aligned}{}& \hspace{-28.45274pt}{\sum \limits_{k=0}^{\infty }}\mathbb{P}\{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})>k,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(n)}})\le k,\hspace{2.5pt}\text{for some}\hspace{2.5pt}j=2,\dots ,d\}\\ {} & \le {\sum \limits_{j=2}^{d}}{\sum \limits_{k=0}^{\infty }}\mathbb{P}\{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(n)}})>k,{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{j,\ge }^{(n)}})\le k\}\\ {} & ={\sum \limits_{j=2}^{d}}{\sum \limits_{k=0}^{\infty }}\mathbb{P}\{{S_{k,1}^{(\mathbf{0})}}<{b_{1}^{(n)}},{S_{k,j}^{(\mathbf{0})}}\ge {b_{j}^{(n)}}\}.\end{aligned}\]
By Lemma 1 the last sum is $O(1)$, since ${b_{j}^{(n)}}/{b_{1}^{(n)}}\to +\infty $, as $n\to \infty $, for $j=2,\dots ,d$.
For the calculation of $\mathbb{E}{\mathbf{S}_{{\tau _{1}}(n)}}$ we note that
\[ \mathbb{E}{\mathbf{S}_{{\tau _{1}}(n)}}=\mathbb{E}\min ({\mathbf{S}_{{\tau _{1}}(n)}^{(\mathbf{0})}},{\mathbf{b}_{n}})=\mathbb{E}{\mathbf{S}_{{\tau _{1}}(n)}^{(\mathbf{0})}}-\mathbb{E}\max ({\mathbf{S}_{{\tau _{1}}(n)}^{(\mathbf{0})}}-{\mathbf{b}_{n}},0).\]
In view of formula (35) ${\tau _{1}}(n)$ is a stopping time with respect to the filtration ${\mathcal{F}_{k}}:=\sigma (\{{\mathbf{X}_{1}^{(\mathbf{0})}},\dots ,{\mathbf{X}_{k}^{(\mathbf{0})}}\})$, $k\in {\mathbb{N}_{0}}$. Thus, by Wald’s first identity
(39)
\[ \mathbb{E}{\mathbf{S}_{{\tau _{1}}(n)}^{(\mathbf{0})}}={\boldsymbol{\mu }^{(\mathbf{0})}}\mathbb{E}{\tau _{1}}(n).\]
It remains to check that, for every $l=2,\dots ,d$,
(40)
\[ \mathbb{E}\max ({S_{{\tau _{1}}(n),l}^{(\mathbf{0})}}-{b_{l}^{(n)}},0)=O(1),\hspace{1em}n\to \infty .\]
Note that
\[\begin{aligned}{}\mathbb{E}\max ({S_{{\tau _{1}}(n),l}^{(\mathbf{0})}}-{b_{l}^{(n)}},0)& =\mathbb{E}({S_{{\tau _{1}}(n),l}^{(\mathbf{0})}}-{b_{l}^{(n)}}){\mathbb{1}_{\{{\tau _{1}}(n)={\sigma ^{(\mathbf{0})}}({H_{l,\ge }^{(n)}})\}}}\\ {} & \le \mathbb{E}({S_{{\sigma ^{(\mathbf{0})}}({H_{l,\ge }^{(n)}}),l}^{(\mathbf{0})}}-{b_{l}^{(n)}}).\end{aligned}\]
The term under the expectation is the overshoot of a standard one-dimensional random walk. Since we assume $\mathbb{E}{({X_{l}^{(\mathbf{0})}})^{2}}<\infty $, this expectation is bounded, see (54) below. Combining (39) and (40) we obtain (18). □
4.3 Proof of Theorem 2
Throughout the proof we shall frequently use the following simple observation. Assume that a sequence of events $({B_{n}})$ satisfies
\[ \underset{n\to \infty }{\lim }\mathbb{P}\{{B_{n}}\}=1.\]
Then for an arbitrary event A, it holds
\[ \mathbb{P}\{A\}=\mathbb{P}\{A\cap {B_{n}}\}+o(1)=\mathbb{P}\{A|{B_{n}}\}+o(1),\hspace{1em}n\to \infty ,\]
uniformly with respect to A.
We need the following auxiliary result.
Lemma 2.
Let $a(n)$ be an arbitrary sequence such that
\[ \underset{n\to \infty }{\lim }\frac{{b_{1}^{(n)}}}{a(n)}=0.\]
Then, for every $j=2,\dots ,d$, it holds
(41)
\[ {\left(\frac{{S_{{\tau _{1}}(\lfloor nt\rfloor ),j}}-\mathbb{E}{S_{{\tau _{1}}(\lfloor nt\rfloor ),j}}}{\sqrt{a(n)}}\right)_{t>0}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(\Theta (t))_{t>0}},\]
where $\Theta (t)\equiv 0$, for $t\ge 0$.
Proof.
According to (18) it is enough to check that
\[ {\left(\frac{{S_{{\tau _{1}}(\lfloor nt\rfloor ),j}}-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t>0}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(\Theta (t))_{t>0}}.\]
Fix $0<a<b$ and denote the quantity within the parentheses on the left-hand side of the last relation by ${\widetilde{S}_{1}}(n,j,t)$. We have
\[\begin{aligned}{}\mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot \}& =\mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot |{A_{n,1}}(a)\}\mathbb{P}\{{A_{n,1}}(a)\}\\ {} & \hspace{1em}+\mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot |{A_{n,1}^{c}}(a)\}\mathbb{P}\{{A_{n,1}^{c}}(a)\},\end{aligned}\]
where ${A_{n,1}}(a)$ is defined by (32) with $j=1$, and ${A_{n,1}^{c}}(a)$ denotes the complement of ${A_{n,1}}(a)$. Since
${\lim \nolimits_{n\to \infty }}\mathbb{P}\{{A_{n,1}}(a)\}=1$ in view of (33), we see that, as $n\to \infty $,
\[ \mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot \}=\mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot |{A_{n,1}}(a)\}+o(1).\]
Clearly,
\[\begin{aligned}{}& \mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot |{A_{n,1}}(a)\}\\ {} & =\mathbb{P}\left\{{\left(\frac{\min ({S_{{\tau _{1}}(\lfloor nt\rfloor ),j}^{(\mathbf{0})}},{b_{j}^{(\lfloor nt\rfloor )}})-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t\in [a,\hspace{0.1667em}b]}}\hspace{-5.69054pt}\in \cdot \Big|{A_{n,1}}(a)\right\}\\ {} & =\mathbb{P}\left\{{\left(\frac{\min ({S_{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}}),j}^{(\mathbf{0})}},{b_{j}^{(\lfloor nt\rfloor )}})-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t\in [a,\hspace{0.1667em}b]}}\hspace{-5.69054pt}\in \cdot \Big|{A_{n,1}}(a)\right\}.\end{aligned}\]
Thus, in view of (37),
\[\begin{aligned}{}& \mathbb{P}\{{({\widetilde{S}_{1}}(n,j,t))_{t\in [a,\hspace{0.1667em}b]}}\in \cdot |{A_{n,1}}(a)\}\\ {} & =\mathbb{P}\left\{{\left(\frac{{S_{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}}),j}^{(\mathbf{0})}}-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t\in [a,\hspace{0.1667em}b]}}\in \cdot \Big|{A_{n,1}}(a)\right\}+o(1)\\ {} & =\mathbb{P}\left\{{\left(\frac{{S_{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}}),j}^{(\mathbf{0})}}-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t\in [a,\hspace{0.1667em}b]}}\in \cdot \right\}+o(1).\end{aligned}\]
The claim now follows from the continuous mapping theorem. Indeed, as a consequence of Donsker’s invariance principle,
(42)
\[ {\left(\frac{{S_{\lfloor {b_{1}^{(n)}}u\rfloor ,j}^{(\mathbf{0})}}-{\mu _{j}^{(\mathbf{0})}}{b_{1}^{(n)}}u}{\sqrt{a(n)}}\right)_{u\ge 0}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(\Theta (u))_{u\ge 0}},\]
whereas, by formula (46) below,
(43)
\[ {\left(\frac{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}})-{({\boldsymbol{\mu }_{1}^{(\mathbf{0})}})^{-1}}{b_{1}^{(\lfloor nt\rfloor )}}}{\sqrt{a(n)}}\right)_{t>0}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(\Theta (t))_{t>0}}.\]
Applying the continuous mapping
\[ D(0,\infty )\times D(0,\infty )\times D(0,\infty )\ni (f,g,h)\mapsto f\circ h+g\in D(0,\infty )\]
to the relations (42), (43) and (36) (with $j=1$), we obtain (41). □
Proof of Theorem 2.
We shall first prove that
(44)
\[\begin{array}{r}\displaystyle {\left(\frac{{\tau _{j}}(\lfloor nt\rfloor )-{\tau _{j-1}}(\lfloor nt\rfloor )-{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-1}}({b_{j}^{(\lfloor nt\rfloor )}}-{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}})}{{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-3/2}}{({v_{j,j}^{({\mathbf{s}_{j-1}})}}{b_{j}^{(n)}})^{1/2}}}\right)_{t>0}}\\ {} \displaystyle \stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(B({t^{{\rho _{j}}}}))_{t>0}}\end{array}\]
with the random term ${S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}$ in the centering instead of its conditional expectation $\mathbb{E}({S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}|{A_{n,j-1}}(a))$. We argue as in the proof of Lemma 2. Fix $0<a<b$. Denote the left-hand side of (44) by ${\widetilde{\tau }_{n,j}}(t)$ and write
\[\begin{aligned}{}\mathbb{P}\{{({\widetilde{\tau }_{n,j}}(t))_{t\ge a}}\in \cdot \}& =\mathbb{P}\{{({\widetilde{\tau }_{n,j}}(t))_{t\ge a}}\in \cdot |{A_{n,j-1}}(a)\}\mathbb{P}\{{A_{n,j-1}}(a)\}\\ {} & \hspace{1em}+\mathbb{P}\{{({\widetilde{\tau }_{n,j}}(t))_{t\ge a}}\in \cdot |{A_{n,j-1}^{c}}(a)\}\mathbb{P}\{{A_{n,j-1}^{c}}(a)\},\end{aligned}\]
where ${A_{n,j-1}}(a)$ is defined by (32). By (33)
\[ \mathbb{P}\{{A_{n,j-1}^{c}}(a)\}=o(1),\hspace{1em}n\to \infty ,\]
and, thereupon,
\[\begin{aligned}{}& \mathbb{P}\{{({\widetilde{\tau }_{n,j}}(t))_{t\ge a}}\in \cdot \}\\ {} & =\mathbb{P}\left\{{\left(\frac{{\tau _{1}^{\to j}}(\lfloor nt\rfloor )-{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-1}}({b_{j}^{(\lfloor nt\rfloor )}}-{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}})}{{({\mu _{j}^{({\mathbf{s}_{j-1}})}})^{-3/2}}{({v_{j,j}^{({\mathbf{s}_{j-1}})}}{b_{j}^{(n)}})^{1/2}}}\right)_{t\ge a}}\in \cdot \Big|{A_{n,j-1}}(a)\right\}\\ {} & +o(1),\hspace{0.2778em}\hspace{0.2778em}n\to \infty .\end{aligned}\]
According to Proposition 2 and taking into account (38), only the case $j=1$ has to be checked, that is, we need to prove that
(45)
\[ {\left(\frac{{\tau _{1}}(\lfloor nt\rfloor )-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{b_{1}^{(\lfloor nt\rfloor )}}}{{({\mu _{1}^{(\mathbf{0})}})^{-3/2}}{({v_{1,1}^{(\mathbf{0})}}{b_{1}^{(n)}})^{1/2}}}\right)_{t\ge a}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(B({t^{{\rho _{1}}}}))_{t\ge a}}.\]
In view of (13), (45) is equivalent to
(46)
\[ {\left(\frac{{\sigma ^{(\mathbf{0})}}({\mathbf{H}_{1,\ge }^{(\lfloor nt\rfloor )}})-{({\mu _{1}^{(\mathbf{0})}})^{-1}}{b_{1}^{(\lfloor nt\rfloor )}}}{{({\mu _{1}^{(\mathbf{0})}})^{-3/2}}{({v_{1,1}^{(\mathbf{0})}}{b_{1}^{(n)}})^{1/2}}}\right)_{t\ge a}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(B({t^{{\rho _{1}}}}))_{t\ge a}},\]
which is a simple consequence of the continuity of the composition operation applied to the functional limit theorem for the renewal process, see (53) below, and the uniform convergence (34) for $j=1$. This finishes the proof of (44).
It remains to check that, for every $j=1,\dots ,d$,
(47)
\[ {\left(\frac{{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}-\mathbb{E}({S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}|{A_{n,j-1}}(a))}{\sqrt{{b_{j}^{(n)}}}}\right)_{t\ge a}}\stackrel{\mathrm{d}}{\underset{n\to \infty }{\Longrightarrow }}{(\Theta (t))_{t\ge a}}.\]
For $j=1$ this is trivial, since the numerator is identically zero in this case. Assume that $j\ge 2$ and write
(48)
\[\begin{aligned}{}& \frac{{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}-\mathbb{E}({S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}|{A_{n,j-1}}(a))}{\sqrt{{b_{j}^{(n)}}}}\\ {} & ={\sum \limits_{l=1}^{j-1}}\frac{({S_{{\tau _{l}}(\lfloor nt\rfloor ),j}}-{S_{{\tau _{l-1}}(\lfloor nt\rfloor ),j}})-\mathbb{E}\left(({S_{{\tau _{l}}(\lfloor nt\rfloor ),j}}-{S_{{\tau _{l-1}}(\lfloor nt\rfloor ),j}})\Big|{A_{n,j-1}}(a)\right)}{\sqrt{{b_{j}^{(n)}}}}\\ {} & ={\sum \limits_{l=1}^{j-1}}\frac{{S_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor ),j-l+1}^{\to l}}-\mathbb{E}\left({S_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor ),j-l+1}^{\to l}}\Big|{A_{n,j-1}}(a)\right)}{\sqrt{{b_{j}^{(n)}}}}.\end{aligned}\]
We shall show that every summand here converges to ${(\Theta (t))_{t\ge a}}$, as $n\to \infty $. Note that, for $j>l$,
(49)
\[ \frac{{b_{l}^{(n)}}-{S_{{\tau _{l-1}}(\lfloor nt\rfloor ),j}}}{{b_{j}^{(n)}}}\stackrel{\mathrm{a}.\mathrm{s}.}{\underset{n\to \infty }{\longrightarrow }}0.\]
For $j=2,\dots ,d$ and $l=1,\dots ,j-1$, we can write using (31)
\[ \mathbb{P}\{{({\mathbf{S}_{{\tau _{l}}(\lfloor nt\rfloor )}}-{\mathbf{S}_{{\tau _{l-1}}(\lfloor nt\rfloor )}})_{t\ge a}}\in \cdot \}=\mathbb{P}\{{({\mathbf{S}_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor )}^{\to l}})_{t\ge a}}\in \cdot \hspace{0.2778em},{A_{n,j-1}}(a)\}+o(1).\]
By Proposition 2 on the event ${A_{n,j-1}}(a)\subseteq {A_{n,l}}(a)$ the process ${({\mathbf{S}_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor )}^{\to l}})_{t\ge a}}$ is a RWSB with a structure specified therein. Since
\[ \frac{{b_{j}^{(n)}}}{{b_{l}^{(n)}}}\underset{n\to \infty }{\longrightarrow }+\infty ,\]
taking into account (49) we can apply Lemma 2 with $a(n)={b_{j}^{(n)}}$ to obtain, for $j>l$,
\[\begin{array}{r}\displaystyle \mathbb{P}\left\{{\left(\frac{{S_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor ),j-l+1}^{\to l}}-\mathbb{E}\left({S_{{\tau _{1}^{\to l}}(\lfloor nt\rfloor ),j-l+1}^{\to l}}\Big|{A_{n,j-1}}(a)\right)}{\sqrt{{b_{j}^{(n)}}}}\right)_{t\ge a}}\hspace{-5.69054pt}\in \cdot \Big|{A_{n,j-1}}(a)\right\}\\ {} \displaystyle \hspace{1em}\to \mathbb{P}\left\{{(\Theta (t))_{t\ge a}}\in \cdot \right\},\hspace{1em}n\to \infty .\end{array}\]
Note that the conditional expectation pops up because we apply Lemma 2 with respect to the conditional probability measure $\mathbb{P}\{\cdot |{A_{n,j-1}}(a)\}$. Therefore, all summands in the right-hand side of (48) converge to ${(\Theta (t))_{t\ge a}}$, as $n\to \infty $. This proves (47) and finishes the proof of (21). If $2{\rho _{j-1}}<{\rho _{j}}$, then (22) holds, since
\[ \underset{n\to \infty }{\lim }\underset{t\in [a,b]}{\sup }\frac{\mathbb{E}({S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}|{A_{n,j-1}}(a))}{\sqrt{{b_{j}^{(n)}}}}\le \underset{n\to \infty }{\lim }\underset{t\in [a,b]}{\sup }\frac{\mathbb{E}{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}}{\mathbb{P}\{{A_{n,j-1}}(a)\}\sqrt{{b_{j}^{(n)}}}}=0,\]
which follows from $\mathbb{E}{S_{{\tau _{j-1}}(\lfloor nt\rfloor ),j}}=O({b_{j-1}^{(n)}})$, as $n\to \infty $. The proof of Theorem 2 is complete. □