The laws of iterated and triple logarithms for extreme values of regenerative processes
Volume 7, Issue 1 (2020), pp. 61–78
Alexander Marynych, Ivan Matsak

https://doi.org/10.15559/20-VMSTA147
Pub. online: 17 February 2020      Type: Research Article      Open Access

Received
11 January 2020
Revised
1 February 2020
Accepted
1 February 2020
Published
17 February 2020

Abstract

We analyze almost sure asymptotic behavior of extreme values of a regenerative process. We show that under certain conditions a properly centered and normalized running maximum of a regenerative process satisfies a law of the iterated logarithm for the lim sup and a law of the triple logarithm for the lim inf. This complements a previously known result of Glasserman and Kou [Ann. Appl. Probab. 5(2) (1995), 424–445]. We apply our results to several queuing systems and a birth and death process.

1 Introduction and main results

Various problems related to the asymptotic behavior of extreme values of regenerative processes are of considerable practical interest and have attracted a lot of attention in the probability community. For example, extremes in queuing systems and of birth and death processes have been investigated in [2, 3, 6, 13, 20], to name but a few. The analysis carried out in the above papers is mostly based on the classical theory of extreme values for independent identically distributed (i.i.d.) random variables. A survey of early results in this direction can be found, among others, in the paper [3]. In the recent paper [22] a slightly different approach to the asymptotic analysis of extreme values of regenerative processes, based on nonlinear time transformations, has been proposed.
The aforementioned works were mostly aimed at the derivation of weak limit theorems for extremes of regenerative processes. In this article we are instead interested in the almost sure (a.s.) behavior of general regenerative processes and, in particular, of regenerative processes appearing in queuing and birth–death systems. Our main results, formulated in Theorems 1 and 2 below, provide the laws of iterated and triple logarithms for the running maximum of regenerative processes. A distinguishing feature of our results is a different scaling required for the lim sup and the lim inf. Under the assumption that the maximum of a regenerative process over its regeneration cycle has an exponential right tail, this type of behavior has already been observed in [11], see Proposition 3.2 therein. Our theorems generalize the aforementioned result and cover, for example, regenerative processes with Weibull-like tails of the maximum over a regeneration cycle. As in many other papers dealing with extremes of regenerative processes, our approach relies on analyzing the a.s. behavior of the running maximum of i.i.d. random variables. In this respect, let us also mention the papers [16, 17, 19] dealing with the a.s. growth rate of the running maximum, see Section 3.5 in [8] for a survey.
Before formulating the results we introduce the necessary definitions. Let us recall, see [4], that a positive measurable function U defined in some neighbourhood of $+\infty $ is called regularly varying at $+\infty $ with index $\kappa \in \mathbb{R}$ if $U(x)={x^{\kappa }}V(x)$, where the function V is slowly varying at $+\infty $, that is
\[ \underset{t\to +\infty }{\lim }\frac{V(tx)}{V(t)}=1\hspace{1em}\text{for all}\hspace{2.5pt}x>0.\]
Given a function $H:\mathbb{R}\to \mathbb{R}$ we denote by ${H^{-1}}$ its generalized inverse defined by
(1)
\[ {H^{-1}}(y)=\inf \left\{x\in \mathbb{R}:H(x)>y\right\},\hspace{1em}y\in \mathbb{R}.\]
The following definition is of crucial importance for our main results.
Definition 1.
We say that a function $H:\mathbb{R}\to \mathbb{R}$ satisfies condition $(\mathbb{U})$ if the following holds:
  • 1. ${\lim \nolimits_{x\to +\infty }}H(x)=+\infty $;
  • 2. the function H is eventually nondecreasing and differentiable;
  • 3. the derivative $h(x):={H^{\prime }}(x)$ is such that for some $\kappa \in \mathbb{R}$ the function
    \[ \hat{h}(x)={({H^{-1}}(x))^{\prime }}=\frac{1}{h({H^{-1}}(x))},\hspace{1em}x\in \mathbb{R},\]
    is regularly varying at $+\infty $ with index κ.
Note that the assumption of regular variation of $\hat{h}$ implies that h is eventually positive. Thus, H is eventually strictly increasing and the generalized inverse ${H^{-1}}$ defined by (1) eventually coincides with the usual inverse.
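To illustrate condition $(\mathbb{U})$, consider first $H(x)=\gamma x$ with $\gamma >0$ (an exponential-type tail); this elementary example is ours and serves only as an illustration. Then ${H^{-1}}(y)=y/\gamma $ and
\[ \hat{h}(x)={({H^{-1}}(x))^{\prime }}=\frac{1}{\gamma },\]
which is regularly varying at $+\infty $ with index $\kappa =0$, so condition $(\mathbb{U})$ holds. Similarly, for a Weibull-type function $H(x)=c{x^{\beta }}$ with $c,\beta >0$ we have ${H^{-1}}(y)={(y/c)^{1/\beta }}$ and
\[ \hat{h}(x)=\frac{1}{\beta {c^{1/\beta }}}{x^{1/\beta -1}},\]
which is regularly varying at $+\infty $ with index $\kappa =1/\beta -1$, so condition $(\mathbb{U})$ is again satisfied. These two cases correspond to the exponential and Weibull-like tails mentioned above.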
Let $X={(X(t))_{t\ge 0}}$ be a regenerative random process, that is,
\[ X(t)={\xi _{k}}(t-{S_{k-1}})\hspace{1em}\text{for}\hspace{2.5pt}t\in [{S_{k-1}},{S_{k}}),\hspace{2.5pt}k\in \mathbb{N},\]
where
\[ {S_{0}}=0,\hspace{2em}{S_{k}}={T_{1}}+\cdots +{T_{k}},\hspace{1em}k\in \mathbb{N},\]
and ${({T_{k}},{\xi _{k}}(\cdot ))_{k\in \mathbb{N}}}$ is a sequence of independent copies of a pair $(T,\xi (\cdot ))$, see, for example, [21, Part II, Chapter 2] and [9, Chapter 11, §8]. The points $({S_{k}})$ are called regeneration epochs and the interval $[{S_{k-1}},{S_{k}})$ is the k-th period of regeneration.
For $t\ge 0$, put
\[ \bar{X}(t)=\underset{0\le s<t}{\sup }X(s),\]
and note that $\bar{X}({T_{1}})$ is the maximum of the process X on the first period of regeneration. Let F be the distribution function of $\bar{X}({T_{1}})$, that is,
\[ F(x):=\mathbf{P}(\bar{X}({T_{1}})\le x).\]
Put
\[ R(x):=-\log (1-F(x)),\hspace{1em}x\in \mathbb{R},\]
and
\[ {\alpha _{T}}=\mathbf{E}{T_{1}}=\mathbf{E}T.\]
Note also that it is always possible to write a decomposition
(2)
\[ R(x)={R_{0}}(x)+{R_{1}}(x),\hspace{1em}x\in \mathbb{R},\]
where
(3)
\[ |{R_{1}}(x)|\le {C_{1}}<\infty ,\hspace{1em}x\in \mathbb{R}.\]
Here and hereafter we denote by $C,{C_{1}},{C_{2}}$ etc. some positive constants which may vary from place to place and may depend on parameters of the process $X(\cdot )$.
We are ready to formulate our first result.
Theorem 1.
Let ${(X(t))_{t\ge 0}}$ be a regenerative random process. Assume that there exists a decomposition (2) such that (3) holds and the function ${R_{0}}$ satisfies condition $(\mathbb{U})$. Suppose further that ${\alpha _{T}}<\infty $. Denote by ${r_{0}}$ the derivative of ${R_{0}}$, which exists for all large enough $x\in \mathbb{R}$. Then
(4)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\textit{a.s.},\]
and
(5)
\[ \underset{t\to \infty }{\liminf }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{3}}(t)}=-1\hspace{1em}\textit{a.s.},\]
where
\[ {A_{0}}(t)={R_{0}^{-1}}\left(\log \frac{t}{{\alpha _{T}}}\right),\hspace{2em}{L_{2}}(t)=\log \log t,\hspace{2em}{L_{3}}(t)=\log \log \log t.\]
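For orientation, let us record a special case of Theorem 1 (this reformulation is ours and is immediate from the statement). If ${R_{0}}(x)=\gamma x$ for some $\gamma >0$, then ${r_{0}}(x)=\gamma $, ${A_{0}}(t)={\gamma ^{-1}}\log (t/{\alpha _{T}})$, and (4), (5) become
\[ \underset{t\to \infty }{\limsup }\frac{\gamma \bar{X}(t)-\log t}{{L_{2}}(t)}=1\hspace{1em}\text{a.s.},\hspace{2em}\underset{t\to \infty }{\liminf }\frac{\gamma \bar{X}(t)-\log t}{{L_{3}}(t)}=-1\hspace{1em}\text{a.s.},\]
since the constant $\log {\alpha _{T}}$ is negligible compared to ${L_{2}}(t)$ and ${L_{3}}(t)$. This is exactly the form appearing in the queuing examples of Section 4.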
Our next result is a counterpart of Theorem 1 for discrete processes taking values in some lattice in $\mathbb{R}$. Such processes are important, among other fields, in queuing theory. Assume that
(6)
\[ \mathbf{P}(X(t)\in \{0,1,2,3,\dots \})=1,\hspace{1em}t\ge 0,\]
and, for $k=0,1,2,3,\dots $, put
\[ {R_{k}}:=-\log \mathbf{P}(\bar{X}({T_{1}})>k).\]
Similarly to (2) and (3) we can write a decomposition
(7)
\[ {R_{k}}={R_{0}}(k)+{R_{1}}(k),\hspace{1em}k=0,1,2,3,\dots ,\]
where ${R_{0}}:\mathbb{R}\to \mathbb{R}$ and ${R_{1}}:\mathbb{R}\to \mathbb{R}$ are real-valued functions and ${R_{1}}$ is such that
(8)
\[ |{R_{1}}(k)|\le {C_{1}}<\infty ,\hspace{1em}k=0,1,2,3,\dots .\]
Theorem 2.
Let ${(X(t))_{t\ge 0}}$ be a regenerative random process such that (6) holds. Assume that there exists a decomposition (7) such that (8) is fulfilled and the function ${R_{0}}$ satisfies condition $(\mathbb{U})$. Suppose also that ${\alpha _{T}}<\infty $.
  • (i) The asymptotic relation
    (9)
    \[ {r_{0}}({R_{0}^{-1}}(x))=o(\log x),\hspace{1em}x\to \infty ,\]
    entails
    (10)
    \[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\textit{a.s.}\]
  • (ii) The asymptotic relation
    (11)
    \[ {r_{0}}({R_{0}^{-1}}(x))=o(\log \log x),\hspace{1em}x\to \infty ,\]
    entails
    (12)
    \[ \underset{t\to \infty }{\liminf }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{3}}(t)}=-1\hspace{1em}\textit{a.s.}\]
    The functions ${A_{0}}$ and ${r_{0}}$ were defined in Theorem 1.
Remark 1.
In the discrete setting we assume that there exist extensions of the sequences $({R_{0}}(k))$ and $({R_{1}}(k))$ to functions defined on the whole real line, with the extension of ${R_{0}}$ being smooth. While such an assumption might look artificial, it keeps the exposition homogeneous and allows us to work in both the continuous and the discrete settings with the same class of functions satisfying condition $(\mathbb{U})$.
The article is organized as follows. In Section 2 we collect and prove some auxiliary results needed in the proofs of our main theorems. They are given in Section 3. In Section 4 we apply Theorems 1 and 2 to some queuing systems and birth–death processes.

2 Preliminaries

Let us consider a sequence ${({\xi _{k}})_{k\in \mathbb{N}}}$ of independent copies of a random variable ξ with the distribution function ${F_{\xi }}(x)=\mathbf{P}(\xi \le x)=:1-\exp (-{R_{\xi }}(x))$. Put
(13)
\[ {z_{n}}=\underset{1\le i\le n}{\max }{\xi _{i}}.\]
The following result was proved in [1], see Theorem 1 therein.
Lemma 1.
Assume that the distribution of ξ is such that ${R_{\xi }}$ satisfies condition $(\mathbb{U})$. With $a(n)={R_{\xi }^{-1}}(\log n)$ it holds
(14)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{\xi }}(a(n))({z_{n}}-a(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(15)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{\xi }}(a(n))({z_{n}}-a(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where, for large enough $x\in \mathbb{R}$,
\[ {r_{\xi }}(x):={R^{\prime }_{\xi }}(x)=\frac{{F^{\prime }_{\xi }}(x)}{1-{F_{\xi }}(x)}.\]
The proof of Lemma 1, given in [1], consists of two steps. Firstly, the claim is established for the standard exponential distribution, that is, assuming $\mathbf{P}(\xi \le x)=\mathbf{P}({\tau ^{e}}\le x)=1-\exp (-x)$, $x\ge 0$, where ${\tau ^{e}}$ denotes a random variable with the standard exponential distribution. In the second step the claim is proved for an arbitrary ${R_{\xi }}$ using regular variation and the representation
(16)
\[ {R_{\xi }^{-1}}({\tau ^{e}})\stackrel{d}{=}\xi \hspace{1em}\text{and}\hspace{1em}{R_{\xi }^{-1}}({z_{n}^{e}})\stackrel{d}{=}{z_{n}}.\]
Here and hereafter ${z_{n}^{e}}={\max _{1\le i\le n}}{\tau _{i}^{e}}$ and ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ are independent copies of ${\tau ^{e}}$.
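In particular, for the standard exponential distribution we have ${R_{\xi }}(x)=x$, ${r_{\xi }}(x)=1$ and $a(n)=\log n$, so Lemma 1 reduces to the classical relations
\[ \underset{n\to \infty }{\limsup }\frac{{z_{n}^{e}}-\log n}{{L_{2}}(n)}=1\hspace{1em}\text{a.s.},\hspace{2em}\underset{n\to \infty }{\liminf }\frac{{z_{n}^{e}}-\log n}{{L_{3}}(n)}=-1\hspace{1em}\text{a.s.}\]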
We need the following generalization of Lemma 1.
Lemma 2.
Assume that the law of ξ is such that the function ${R_{\xi }}$ possesses a decomposition (2) with ${R_{1}}$ satisfying (3) and ${R_{0}}$ satisfying condition $(\mathbb{U})$. Then
(17)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({z_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(18)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}({a_{0}}(n))({z_{n}}-{a_{0}}(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where ${a_{0}}(n)={R_{0}^{-1}}(\log n)$ and ${r_{0}}(x)={R^{\prime }_{0}}(x)$.
To prove Lemma 2 we need the following simple result, see Theorem 3.1 in [5].
Lemma 3.
Let H be a function regularly varying at $+\infty $ and let ${({c_{n}})_{n\in \mathbb{N}}}$ and ${({d_{n}})_{n\in \mathbb{N}}}$ be two sequences of real numbers such that ${\lim \nolimits_{n\to \infty }}{c_{n}}=+\infty $, ${\lim \nolimits_{n\to \infty }}{c_{n}}/{d_{n}}=1$. Then
\[ \underset{n\to \infty }{\lim }\frac{H({c_{n}})}{H({d_{n}})}=1.\]
Proof of Lemma 2.
Fix a sequence of standard exponential random variables ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ and assume without loss of generality that the sequence ${({z_{n}})_{n\in \mathbb{N}}}$ is constructed from ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ via formula (16). The subsequent proof is divided into two steps.
Step 1. Suppose additionally that the function ${R_{0}}$ is everywhere nondecreasing, differentiable, and ${R_{0}}(-\infty )=0$. Then ${F_{0}}(x):=1-\exp (-{R_{0}}(x))$ is a distribution function. Put ${\xi ^{\prime }_{i}}={R_{0}^{-1}}({\tau _{i}^{e}})$ for $i\in \mathbb{N}$ and let ${z^{\prime }_{n}}={\max _{1\le i\le n}}{\xi ^{\prime }_{i}}$. From Lemma 1 we infer
(19)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({z^{\prime }_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\text{a.s.}\]
Let ${C_{1}}$ be a constant such that (3) holds. From the definition of the function ${R_{\xi }^{-1}}$ and decomposition (2) we obtain
\[ {R_{0}^{-1}}(x-{C_{1}})\le {R_{\xi }^{-1}}(x)\le {R_{0}^{-1}}(x+{C_{1}}),\hspace{1em}x\in \mathbb{R},\]
and thereupon
\[ {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})\le {R_{\xi }^{-1}}({z_{n}^{e}})\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}}).\]
Hence, by monotonicity of ${R_{0}^{-1}}$, we have
(20)
\[ |{R_{\xi }^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})-{R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})=2{C_{1}}{\hat{r}_{0}}({z_{n}^{e}}+{C_{1}}(2{\theta _{n}}-1)),\]
where the equality follows from the mean value theorem for differentiable functions, ${\hat{r}_{0}}(x)={({R_{0}^{-1}}(x))^{\prime }}$ and $0\le {\theta _{n}}\le 1$.
It is known, see [10, Chapter 4, Example 4.3.3], that
\[ \underset{n\to \infty }{\lim }\frac{{z_{n}^{e}}}{\log n}=1\hspace{1em}\text{a.s.}\]
Thus, from Lemma 3 we deduce
\[ \underset{n\to \infty }{\lim }\frac{{\hat{r}_{0}}({z_{n}^{e}}+{C_{1}}(2{\theta _{n}}-1))}{{\hat{r}_{0}}(\log n)}=1\hspace{1em}\text{a.s.}\]
In conjunction with (20) this yields
(21)
\[ |{R_{\xi }^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|\le 2{C_{1}}{\hat{r}_{0}}(\log n)(1+o(1))=\frac{2{C_{1}}}{{r_{0}}({a_{0}}(n))}(1+o(1)).\]
Taking together relations (19), (21) we arrive at (17).
Similarly, from Lemma 1 we have
(22)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}({a_{0}}(n))({z^{\prime }_{n}}-{a_{0}}(n))}{{L_{3}}(n)}=-1\hspace{1em}\text{a.s.}\]
Therefore, (18) follows from (22) and (21).
Step 2. Let us now turn to the general case where the function ${R_{0}}$ is nondecreasing and differentiable on some interval $[{x_{0}},\infty )$ with ${x_{0}}>0$. Recall decomposition (2). Let ${\tilde{R}_{0}}:\mathbb{R}\to \mathbb{R}$ and ${\tilde{R}_{1}}:\mathbb{R}\to \mathbb{R}$ be arbitrary nondecreasing differentiable functions such that
\[\begin{aligned}{}{\tilde{R}_{0}}(x)& ={R_{0}}(x)\hspace{1em}\text{and}\hspace{1em}{\tilde{R}_{1}}(x)={R_{1}}(x)\hspace{1em}\text{for}\hspace{2.5pt}x\ge {x_{0}},\\ {} {\tilde{R}_{0}}(x)& ={\tilde{R}_{1}}(x)=0\hspace{1em}\text{for}\hspace{2.5pt}x\le 0.\end{aligned}\]
Put
\[ \tilde{R}(x):={\tilde{R}_{0}}(x)+{\tilde{R}_{1}}(x),\hspace{1em}x\in \mathbb{R}.\]
The functions ${\tilde{R}_{0}}$, ${\tilde{R}_{1}}$ and $\tilde{R}$ satisfy all the assumptions of Step 1. Thus, if we set
\[ {\tilde{\xi }_{i}}={\tilde{R}^{-1}}({\tau _{i}^{e}}),\hspace{2em}{\tilde{z}_{n}}=\underset{1\le i\le n}{\max }{\tilde{\xi }_{i}},\]
then the sequence ${({\tilde{z}_{n}})_{n\in \mathbb{N}}}$ satisfies (17) and (18) with the same normalizing functions ${r_{0}}({a_{0}}(n))$ and ${a_{0}}(n)$. The latter holds true since for sufficiently large $x>0$ we have ${\tilde{R}_{0}^{-1}}(x)={R_{0}^{-1}}(x)$.
It remains to note that the asymptotics of $({\tilde{z}_{n}})$ and $({z_{n}})$ are the same. Indeed, set
\[ {n_{0}}:=\min \{i\ge 1:{\tau _{i}^{e}}\ge {y_{0}}\},\]
where ${y_{0}}:=R({x_{0}})=\tilde{R}({x_{0}})$. Then ${z_{n}}={\tilde{z}_{n}}$ for $n\ge {n_{0}}$ and we see that both (17) and (18) hold for $({z_{n}})$ as well. This finishes the proof of Lemma 2.  □
The next lemma is a counterpart of Lemma 2 for discrete distributions. Assume that ξ has distribution
\[ \mathbf{P}(\xi =k)={p_{k}},\]
where ${p_{k}}\ge 0$ and ${\textstyle\sum _{k=0}^{\infty }}{p_{k}}=1$. Put
\[ q(k)=\sum \limits_{i>k}{p_{i}}=\exp (-{R_{\xi ,k}}).\]
Lemma 4.
Let ξ be a random variable taking values in $\{0,1,2,3,\dots \}$ and let ${R_{\xi ,k}}$ be such that there exists a decomposition (7) with ${R_{1}}$ satisfying (8) and ${R_{0}}$ satisfying condition $(\mathbb{U})$.
  • (i) if (9) holds, then $({z_{n}})$ satisfies equality (17);
  • (ii) if (11) holds, then $({z_{n}})$ also satisfies (18).
Proof.
Similarly to Lemma 2, the proof is divided into two steps. We provide the details only for the first step, leaving the second step to the interested reader. Thus, we put ${\xi ^{\prime }_{i}}={R_{0}^{-1}}({\tau _{i}^{e}})$ for $i\in \mathbb{N}$. Note that ${\xi ^{\prime }_{i}}$ are i.i.d. with the distribution function ${F_{0}}(x)=1-\exp (-{R_{0}}(x))$. Thus, for ${z^{\prime }_{n}}={\max _{1\le i\le n}}{\xi ^{\prime }_{i}}$, equality (19) holds.
Let us consider
\[ \lceil {R_{\xi ,k}^{-1}}(y)\rceil =\inf \left\{k=0,1,2,3,\dots :{R_{\xi ,k}}\ge y\right\},\]
then
\[ |{R_{\xi ,k}^{-1}}(y)-\lceil {R_{\xi ,k}^{-1}}(y)\rceil |\le 1,\]
for all $y\in \mathbb{R}$, and therefore
\[ |{R_{\xi ,k}^{-1}}({z_{n}^{e}})-\lceil {R_{\xi ,k}^{-1}}({z_{n}^{e}})\rceil |\le 1,\hspace{2em}|{R_{0}^{-1}}({z_{n}^{e}})-\lceil {R_{0}^{-1}}({z_{n}^{e}})\rceil |\le 1.\]
Further, condition (8) and monotonicity of the function $\lceil \cdot \rceil $ both imply
\[ \lceil {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})\rceil \le \lceil {R_{\xi ,k}^{-1}}({z_{n}^{e}})\rceil \le \lceil {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})\rceil .\]
Combining the above estimates, we derive
\[ {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})-2\le {R_{\xi ,k}^{-1}}({z_{n}^{e}})\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})+2.\]
This means
(23)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle |{R_{\xi ,k}^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|& \displaystyle \le & \displaystyle {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})-{R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})+4\\ {} & \displaystyle \le & \displaystyle \frac{2{C_{1}}}{{r_{0}}({a_{0}}(n))}(1+o(1))+4,\end{array}\]
see estimates (20), (21).
Assuming (9) we see that (17) holds. Similarly, condition (11) yields (18).  □
The next simple lemma is probably known; however, we prefer to give an elementary few-line proof.
Lemma 5.
For arbitrary $p>1$ and $b\in \mathbb{R}$ it holds
(24)
\[ {\Lambda _{n}}:={\sum \limits_{k=1}^{n}}\frac{{p^{k}}}{{k^{b}}}=\frac{{p^{n+1}}}{(p-1){n^{b}}}(1+o(1)),\hspace{1em}n\to \infty .\]
Proof.
By the Stolz–Cesàro theorem we have
\[\begin{aligned}{}\underset{n\to \infty }{\lim }\frac{(p-1){n^{b}}{\Lambda _{n}}}{{p^{n+1}}}& =\underset{n\to \infty }{\lim }\frac{{\Lambda _{n}}-{\Lambda _{n-1}}}{\frac{{p^{n+1}}}{(p-1){n^{b}}}-\frac{{p^{n}}}{(p-1){(n-1)^{b}}}}\\ {} & =\underset{n\to \infty }{\lim }\frac{\frac{{p^{n}}}{{n^{b}}}}{\frac{{p^{n+1}}}{(p-1){n^{b}}}-\frac{{p^{n}}}{(p-1){(n-1)^{b}}}}=\underset{n\to \infty }{\lim }\frac{p-1}{p-\frac{{n^{b}}}{{(n-1)^{b}}}}=1.\end{aligned}\]
The proof is complete.  □
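The asymptotics (24) is also easy to check numerically. The following Python snippet (ours, purely illustrative; the function names lambda_exact and lambda_leading_term are not part of the paper) compares the exact partial sum with the leading term of (24):

```python
# Numerical sanity check of Lemma 5:
# Lambda_n = sum_{k=1}^n p^k / k^b should behave like p^(n+1) / ((p-1) * n^b).

def lambda_exact(p: float, b: float, n: int) -> float:
    """Exact partial sum sum_{k=1}^n p^k / k^b."""
    return sum(p ** k / k ** b for k in range(1, n + 1))

def lambda_leading_term(p: float, b: float, n: int) -> float:
    """Leading term p^(n+1) / ((p-1) * n^b) from (24)."""
    return p ** (n + 1) / ((p - 1) * n ** b)

if __name__ == "__main__":
    for p, b in [(1.5, 2.0), (2.0, 0.5), (3.0, -1.0)]:
        for n in (10, 50, 200):
            ratio = lambda_exact(p, b, n) / lambda_leading_term(p, b, n)
            print(f"p={p}, b={b}, n={n}: Lambda_n / leading term = {ratio:.6f}")
    # The printed ratios approach 1 as n grows, in line with (24).
```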

3 Proofs of Theorems 1 and 2

Proof of Theorem 1.
Let us start with a proof of equality (4). To this end, we introduce the following notation
\[ {Y_{k}}=\underset{{S_{k-1}}\le t<{S_{k}}}{\sup }X(t),\hspace{2em}{Z_{n}}=\underset{1\le k\le n}{\max }{Y_{k}},\hspace{1em}k\in \mathbb{N}.\]
Since $({S_{k}})$ are the moments of regeneration of the process ${(X(t))_{t\ge 0}}$, $({Y_{k}})$ are i.i.d. random variables. Furthermore, it is clear that the sequence $({Y_{k}})$ satisfies the conditions of Lemma 2. Therefore,
(25)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({Z_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\text{a.s.}\]
Denote by N the counting process for the sequence $({S_{k}})$, that is,
\[ N(t)=\max \{k\ge 0:{S_{k}}\le t\},\hspace{1em}t\ge 0.\]
Since ${\lim \nolimits_{t\to \infty }}N(t)=+\infty $ a.s. and $N(t)$ runs through all positive integers, from (25) we obtain
(26)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log N(t)))({Z_{N(t)}}-{R_{0}^{-1}}(\log N(t)))}{{L_{2}}(N(t))}=1\hspace{1em}\text{a.s.}\]
By the strong law of large numbers for N we have
(27)
\[ \underset{t\to \infty }{\lim }\frac{N(t)}{t}=\frac{1}{{\alpha _{T}}}\hspace{1em}\text{a.s.},\]
whence, as $t\to \infty $,
\[ \log N(t)=\log \frac{t}{{\alpha _{T}}}+o(1)\hspace{1em}\text{a.s.}\]
In what follows $o(1)$ is a random function which converges to zero a.s. as $t\to \infty $. Plugging the above representations into (26), we obtain
(28)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}+o(1)))({Z_{N(t)}}-{R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}+o(1)))}{{L_{2}}(\frac{t}{{\alpha _{T}}}(1+o(1)))}=1\hspace{1em}\text{a.s.}\]
Further, by the slow variation of ${L_{2}}$ we can replace the denominator in (28) by ${L_{2}}(t)$.
Let us recall that under the assumptions of Theorem 1, the function ${\hat{r}_{0}}(x)={({R_{0}^{-1}}(x))^{\prime }}$ is regularly varying at infinity. So using once again Lemma 3, we obtain the equalities
\[ {\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg)={\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}\bigg)(1+o(1))=\frac{1+o(1)}{{r_{0}}({R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}))},\hspace{1em}t\to \infty ,\]
and
\[ {R_{0}^{-1}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg)-{R_{0}^{-1}}\bigg(\log \frac{t}{{\alpha _{T}}}\bigg)=o(1){\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg),\hspace{1em}t\to \infty .\]
Combining the above relations, from (28) we derive
(29)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))({Z_{N(t)}}-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\text{a.s.},\]
with ${A_{0}}(t)$ and ${r_{0}}(t)$ as in Theorem 1. The same argument with $N(t)$ replaced by $N(t)+1$ yields
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))({Z_{N(t)+1}}-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\text{a.s.}\]
It remains to note that
\[ {Z_{N(t)}}\le \bar{X}(t)\le {Z_{N(t)+1}}\hspace{1em}\text{a.s.}\]
Summarizing, we obtain equality (4). The proof of relation (5) utilizes equality (18) of Lemma 2 and is similar; we omit the details.  □
The derivation of Theorem 2 is based on Lemma 4 and basically repeats the proof of Theorem 1. We leave the details to the reader.
Suppose that under the assumptions of Theorem 1 the parameter t runs over a countable set $\mathcal{T}:=\{{t_{0}}=0<{t_{1}}<{t_{2}}<\cdots \hspace{0.1667em}\}$ such that ${\lim \nolimits_{i\to \infty }}{t_{i}}=+\infty $. The set $\mathcal{T}$ can be random and the process X can depend on $\mathcal{T}$. Assume that $\mathbf{P}({S_{i}}\in \mathcal{T})=1$ for all $i\in \mathbb{N}$.
Put ${X_{i}}:=X({t_{i}})$ and ${\bar{X}_{n}}={\max _{0\le i\le n}}{X_{i}}$. Assume that extreme values of the process X are attained at the points of the set $\mathcal{T}$. More precisely, for all $t\ge 0$,
(30)
\[ \underset{0\le s\le t}{\sup }X(s)=\underset{0\le {t_{i}}\le t}{\max }{X_{i}}\hspace{1em}\text{a.s.}\]
In what follows the next proposition will be useful.
Proposition 1.
Under the assumptions of Theorem 1 suppose that there exists a set $\mathcal{T}$ such that condition (30) holds and, further,
(31)
\[ \underset{n\to \infty }{\lim }\frac{{t_{n}}}{n}=\alpha \hspace{1em}\textit{a.s.}\]
Then
(32)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}(A(n))({\bar{X}_{n}}-A(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(33)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}(A(n))({\bar{X}_{n}}-A(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where
\[ A(n)={R_{0}^{-1}}\left(\log \frac{\alpha n}{{\alpha _{T}}}\right).\]
Proof.
The proof given below is similar to that of Theorem 1. From equations (27) and (31) we obtain, as $n\to \infty $,
(34)
\[ \frac{N({t_{n}})}{n}=\frac{N({t_{n}})}{{t_{n}}}\frac{{t_{n}}}{n}\to \frac{\alpha }{{\alpha _{T}}}\hspace{1em}\text{a.s.}\]
Further, replacing n by $N({t_{n}})$ in equality (25), which is possible because $N({t_{n}})$ diverges to infinity through all positive integers, we get
(35)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log N({t_{n}})))({Z_{N({t_{n}})}}-{R_{0}^{-1}}(\log N({t_{n}})))}{{L_{2}}(N({t_{n}}))}=1\hspace{1em}\text{a.s.}\]
Directly from (30) we derive
\[ {Z_{N({t_{n}})}}\le {\bar{X}_{n}}\le {Z_{N({t_{n}})+1}}\hspace{1em}\text{a.s.}\]
These inequalities and relations (34), (35) yield equality (32). The same argument can be applied for proving (33).  □

4 Applications

Example 1 (Queuing system $GI/G/1$).
Let us consider a single-channel queuing system with customers arriving at $0={t_{0}}<{t_{1}}<{t_{2}}<\cdots <{t_{i}}<\cdots \hspace{0.1667em}$. Let $0={W_{0}},{W_{1}},{W_{2}},\dots ,{W_{i}},\dots $ be the actual waiting times of the customers. Thus, at time $t=0$ the first customer arrives and its service starts immediately. Denote by ${\zeta _{i}}={t_{i}}-{t_{i-1}}$, $i\in \mathbb{N}$, the interarrival times between successive customers, and by ${\eta _{i}}$, $i\in \mathbb{N}$, the service time of the i-th customer. Suppose that $({\zeta _{i}})$ and $({\eta _{i}})$ are independent sequences of i.i.d. random variables. In the standard notation, this queuing system is of the type $GI/G/1$, see [12, 14].
Let $W(t)$ be the waiting time of the last customer in the queue at time $t\ge 0$, that is,
\[ W(t)={W_{\nu (t)}},\hspace{1em}\text{where}\hspace{2.5pt}\nu (t)=\max \{k\ge 0:{t_{k}}\le t\},\]
and
\[ W({t_{n}})=W({t_{n}}+)={W_{n}}.\]
Set
\[ \bar{W}(t)=\underset{0\le s\le t}{\sup }W(s)=\underset{0\le {t_{k}}\le t}{\max }{W_{k}},\]
then
\[ {\bar{W}_{n}}=\underset{1\le i\le n}{\max }{W_{i}}=\bar{W}({t_{n}}).\]
Denote $\mathbf{E}{\zeta _{i}}=a$, $\mathbf{E}{\eta _{i}}=b$ and assume that both expectations are finite. Further, we impose the following conditions on ${\zeta _{i}}$ and ${\eta _{i}}$:
(36)
\[ \rho :=\frac{b}{a}<1\]
and for some $\gamma >0$, it holds
(37)
\[ \mathbf{E}\exp (\gamma ({\eta _{i}}-{\zeta _{i}}))=1,\hspace{2em}\mathbf{E}({\eta _{i}}-{\zeta _{i}})\exp (\gamma ({\eta _{i}}-{\zeta _{i}}))<\infty .\]
Under these assumptions the evolution of the queuing system can be decomposed into busy periods, when a customer is in service, interleaved by idle periods, when the system is empty. Let us introduce the regeneration moments $({S_{k}})$ of the process W as follows: ${S_{0}}=0$ and, for $i\in \mathbb{N}$, ${S_{i}}$ is the arrival time of a new customer at the end of the i-th idle period. Let ${T_{i}}$ be the duration of the i-th regeneration cycle, and $\bar{W}({T_{1}})$ be the maximum waiting time during the first regeneration cycle. It is known, see [3] and [13], that under conditions (36) and (37), we have
\[ \mathbf{P}(\bar{W}({T_{1}})>x)=(C+o(1))\exp (-\gamma x),\hspace{1em}x\to \infty .\]
Condition (36) also guarantees that the average duration of the i-th regeneration cycle is finite, that is, ${\alpha _{T}}=\mathbf{E}{T_{i}}<\infty $, see [14, Chapter 14, §3, Theorem 3.2].
Thus, if we set $X(t)=W(t)$, ${R_{0}}(x)=\gamma x$, ${R_{1}}(x)=-\log C+o(1)$ and ${r_{0}}(x)=\gamma $, then from Theorem 1 and Proposition 1 we derive the following corollary.
Corollary 1.
Assume that the queuing system $GI/G/1$ satisfies conditions (36) and (37). Then
(38)
\[ \underset{t\to \infty }{\limsup }\frac{\gamma \bar{W}(t)-\log t}{{L_{2}}(t)}=\underset{n\to \infty }{\limsup }\frac{\gamma {\bar{W}_{n}}-\log n}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(39)
\[ \underset{t\to \infty }{\liminf }\frac{\gamma \bar{W}(t)-\log t}{{L_{3}}(t)}=\underset{n\to \infty }{\liminf }\frac{\gamma {\bar{W}_{n}}-\log n}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.}\]
Remark 2.
  • (i) Suppose that
    \[ \mathbf{P}({\zeta _{i}}\le x)=1-\exp (-\lambda x),\hspace{2em}\mathbf{P}({\eta _{i}}\le x)=1-\exp (-\mu x),\hspace{1em}x\ge 0,\]
    that is, we consider the queuing system $M/M/1$. Assume further that $\rho :=\lambda /\mu <1$. It is easy to check that conditions (36) and (37) are fulfilled, and therefore equalities (38) and (39) hold with $\gamma =\mu -\lambda =\mu (1-\rho )$; a simulation sketch illustrating this case is given after this remark.
  • (ii) Suppose that
    \[ \mathbf{P}({\zeta _{i}}\le x)=1-\exp (-\lambda x),\hspace{1em}x\ge 0,\]
    and $\mathbf{P}({\eta _{i}}=d)=1$ for some constant $d>0$. Assume further that $\rho :=\lambda d<1$. Then relations (36)–(39) hold with $\gamma ={x_{\rho }}/d$, where ${x_{\rho }}>0$ is the unique positive root of the equation
    \[ {e^{x}}=1+\frac{x}{\rho }.\]
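The following Monte Carlo sketch (ours; it is an illustration only and not part of any proof) simulates the waiting times of the $M/M/1$ queue from Remark 2(i) via the standard Lindley recursion ${W_{k}}=\max (0,{W_{k-1}}+{\eta _{k-1}}-{\zeta _{k}})$ and tracks the normalized quantity appearing in (38). The function name running_max_statistic and the chosen parameter values are ours.

```python
import math
import random

# Simulation sketch for Remark 2(i): M/M/1 queue with arrival rate lam,
# service rate mu, rho = lam/mu < 1 and gamma = mu - lam.  By (38) the
# quantity (gamma * max_{i<=n} W_i - log n) / log log n has limsup 1 a.s.

def running_max_statistic(lam: float, mu: float, n_total: int, seed: int = 1) -> None:
    rng = random.Random(seed)
    gamma = mu - lam
    w = 0.0        # current waiting time W_k (W_0 = 0)
    w_max = 0.0    # running maximum of the waiting times
    for k in range(1, n_total + 1):
        # Lindley recursion: W_k = max(0, W_{k-1} + eta_{k-1} - zeta_k).
        w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
        w_max = max(w_max, w)
        if k in (10 ** 4, 10 ** 5, 10 ** 6):
            stat = (gamma * w_max - math.log(k)) / math.log(math.log(k))
            print(f"n = {k}: (gamma * max W - log n) / log log n = {stat:+.3f}")

if __name__ == "__main__":
    running_max_statistic(lam=0.5, mu=1.0, n_total=10 ** 6)
```

Of course, a finite simulation can only illustrate, not verify, the a.s. statements (38) and (39).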
Example 2 (Queuing system $M/M/m$).
Let us now consider a queuing system with m servers in which customers arrive according to a Poisson process with intensity λ, and the service times are independent copies of a random variable η with the exponential distribution
\[ \mathbf{P}(\eta \le x)=1-\exp (-\mu x),\hspace{1em}x\ge 0.\]
In the standard notation, this queuing system has the type $M/M/m$, see [12, 14].
We impose the following assumption on the parameters λ and μ ensuring existence of the stationary regime:
(40)
\[ \rho :=\frac{\lambda }{m\mu }<1.\]
For $t\ge 0$, let $Q(t)$ denote the length of the queue at time t, that is, the total number of customers in service or pending. Set
\[ \bar{Q}(t)=\underset{0\le s<t}{\sup }Q(s),\hspace{1em}t\ge 0.\]
In the same way as in Example 1, one can introduce regeneration moments $({S_{k}})$ for the process Q: ${S_{0}}:=0$ and, for $i\in \mathbb{N}$, ${S_{i}}$ is the arrival time of a new customer after the i-th busy period. Let ${T_{i}}$ be the duration of the i-th regeneration cycle and $\bar{Q}({T_{1}})$ be the maximum length of the queue in the first regeneration cycle. Put
(41)
\[ \mathbf{P}(\bar{Q}({T_{1}})>x)=\exp (-R(x)).\]
In the recent paper [7] it was established that the function R in (41) satisfies conditions (7) and (8) with
\[ {R_{0}}(x)=-x\log \rho ,\hspace{2em}{r_{0}}(x)=-\log \rho .\]
Using Theorem 2 we infer the following.
Corollary 2.
Assume that for a queuing system $M/M/m$, $1\le m<\infty $, the condition (40) is fulfilled. Then
(42)
\[ \underset{t\to \infty }{\limsup }\frac{\bar{Q}(t)\log \frac{1}{\rho }-\log t}{{L_{2}}(t)}=1\hspace{1em}\textit{a.s.},\]
and
(43)
\[ \underset{t\to \infty }{\liminf }\frac{\bar{Q}(t)\log \frac{1}{\rho }-\log t}{{L_{3}}(t)}=-1,\hspace{1em}\textit{a.s.}\]
Remark 3.
Relations (42) and (43) have been proved in [7] by direct calculations. Let us note that in the case $m=\infty $, which has also been treated in [7], the asymptotics of $\bar{Q}(t)$ is of a completely different form, see also [18].
Example 3 (Birth and death processes).
Let $X={(X(t))_{t\ge 0}}$ be a birth and death process with parameters
(44)
\[ {\lambda _{n}}=\lambda n+a,\hspace{2em}{\mu _{n}}=\mu n,\hspace{1em}\lambda ,\mu ,a>0,\hspace{2.5pt}n=0,1,2,\dots ,\]
see [14, Ch. 7, §6]. That is, ${(X(t))_{t\ge 0}}$ is a time-homogeneous Markov process such that, for $t\ge 0$, given $\{X(t)=n\}$ the probability of a transition to state $n+1$ over a small period of time δ is $(\lambda n+a)\delta +o(\delta )$, $n=0,1,2,3,\dots $, and the probability of a transition to $n-1$ is $\mu n\delta +o(\delta )$, $n=1,2,3,\dots $. The parameter a can be interpreted as the infinitesimal intensity of population growth due to immigration. The birth–death process X is usually called a process with linear growth and immigration.
We assume that $X(0)=0$ and
(45)
\[ \rho :=\frac{\lambda }{\mu }<1.\]
Put
\[ {\theta _{0}}=1,\hspace{2em}{\theta _{k}}={\prod \limits_{i=1}^{k}}\frac{{\lambda _{i-1}}}{{\mu _{i}}},\hspace{1em}k\in \mathbb{N}.\]
It is not difficult to check that condition (45) ensures
(46)
\[ \sum \limits_{k\ge 1}{\theta _{k}}<\infty ,\]
and
(47)
\[ \sum \limits_{k\ge 1}\frac{1}{{\lambda _{k}}{\theta _{k}}}=\infty .\]
Under conditions (46) and (47), see [14] and [15], there exists a stationary regime, that is,
\[ \underset{t\to \infty }{\lim }\mathbf{P}(X(t)=k)={p_{k}},\]
with
(48)
\[ {p_{k}}={\theta _{k}}{p_{0}},\hspace{1em}k=0,1,2,3,\dots ,\hspace{1em}\text{where}\hspace{2.5pt}{p_{0}}={\Bigg({\sum \limits_{k=0}^{\infty }}{\theta _{k}}\Bigg)^{-1}}.\]
Further, X is a regenerative process with regeneration moments $({S_{k}})$, where ${S_{0}}=0$ and ${S_{i}}$, $i\in \mathbb{N}$, is the moment of the i-th return to state 0. It is known that
\[ \mathbf{E}{T_{k}}=\frac{1}{({\lambda _{0}}+{\mu _{0}}){p_{0}}}=\frac{1}{a{p_{0}}},\]
where ${T_{k}}={S_{k}}-{S_{k-1}}$ is the duration of the k-th regeneration cycle, see Eq. (32) in [22]. If (45) holds, then
\[ M(t):=\mathbf{E}X(t)\to \frac{a}{\mu -\lambda },\hspace{1em}t\to \infty ,\]
see [14]. We are interested in the asymptotic behavior of extreme values
\[ \bar{X}(t)=\underset{0\le s<t}{\sup }X(s),\hspace{1em}t\ge 0.\]
Let us show how to apply Theorem 2 to the asymptotic analysis of $\bar{X}(t)$. Firstly, we need an accurate evaluation of the sequence $(R(n))$ defined by
\[ q(n):=\mathbf{P}(\bar{X}({T_{1}})>n)=\exp (-R(n)).\]
It is known, see [3] or Eq. (34) in [22], that
(49)
\[ q(n)=\frac{1}{{\textstyle\textstyle\sum _{k=0}^{n}}{\alpha _{k}}},\]
where ${\alpha _{0}}=1$ and ${\alpha _{k}}={\textstyle\prod _{i=1}^{k}}\frac{{\mu _{i}}}{{\lambda _{i}}}$ for $k\in \mathbb{N}$.
Using equalities (44) and (45) we can rewrite ${\alpha _{k}}$ as follows:
(50)
\[ {\alpha _{k}}=\frac{{\beta _{k}}}{{\rho ^{k}}},\hspace{2em}{\beta _{k}}={\prod \limits_{i=1}^{k}}\left(1-\frac{1}{1+i\lambda /a}\right).\]
Further, using Taylor’s expansion
\[ \log (1+x)=x-\frac{{x^{2}}}{2}+\frac{{x^{3}}}{3}-\cdots \hspace{0.1667em},\hspace{1em}|x|<1,\]
we have
(51)
\[ \log {\beta _{k}}={\sum \limits_{i=1}^{k}}\log \left(1-\frac{1}{1+i\lambda /a}\right)=-{\sum \limits_{i=1}^{k}}\frac{1}{1+i\lambda /a}+{d_{k}},\]
where ${d_{k}}$ has a finite limit, as $k\to \infty $. Combining the relation
(52)
\[ \left|{\sum \limits_{i=1}^{k}}\frac{1}{1+i\lambda /a}-{\sum \limits_{i=1}^{k}}\frac{1}{i\lambda /a}\right|={\sum \limits_{i=1}^{k}}\frac{a}{\lambda i(1+i\lambda /a)}={C_{1}}+o(1),\hspace{1em}k\to \infty ,\]
and the fact
(53)
\[ \underset{n\to \infty }{\lim }{\sum \limits_{i=1}^{n}}\frac{1}{i}-\log n=\gamma ,\]
with $\gamma =0.577\dots $ being the Euler–Mascheroni constant, we conclude
\[ {\sum \limits_{i=1}^{k}}\log \left(1-\frac{1}{1+i\lambda /a}\right)=-\frac{a}{\lambda }\log k+{C_{2}}+o(1),\hspace{1em}k\to \infty .\]
Therefore,
(54)
\[ {\beta _{k}}=C{k^{-a/\lambda }}(1+o(1)),\]
where
(55)
\[ C={e^{{C_{2}}}}:=\underset{n\to \infty }{\lim }{n^{a/\lambda }}{\prod \limits_{i=1}^{n}}\bigg(1-\frac{1}{1+i\lambda /a}\bigg).\]
Now we can apply Lemma 5 to obtain
\[ {\Lambda _{n}}={\sum \limits_{k=1}^{n}}\frac{{\rho ^{-k}}}{{k^{a/\lambda }}}=\frac{{\rho ^{-n-1}}}{(\frac{1}{\rho }-1){n^{a/\lambda }}}(1+o(1)),\hspace{1em}n\to \infty .\]
Taking into account equality (54), we obtain
\[ {\sum \limits_{k=0}^{n}}{\alpha _{k}}=\frac{C{\rho ^{-n-1}}}{(1/\rho -1){n^{a/\lambda }}}(1+o(1)),\hspace{1em}n\to \infty .\]
Thus,
(56)
\[ q(n)=\frac{1/\rho -1}{C}{\rho ^{n+1}}{n^{a/\lambda }}(1+o(1)),\hspace{1em}n\to \infty ,\]
and we have the following representation
\[ R(n)=-\log q(n)={R_{0}}(n)+{R_{1}}(n),\]
where
\[ {R_{0}}(n)=-n\log \rho -\frac{a}{\lambda }\log n,\hspace{2em}{R_{1}}(n)=-\log \frac{1/\rho -1}{C}-\log \rho +o(1),\hspace{1em}n\to \infty .\]
The function ${R_{0}}(x)=-x\log \rho -\frac{a}{\lambda }\log x$ is increasing for $x\ge {x_{0}}=-\frac{a}{\lambda \log \rho }$ and, furthermore, satisfies condition $(\mathbb{U})$.
Since ${r_{0}}(x)=-\log \rho -\frac{a}{\lambda x}$, we have ${r_{0}}({R_{0}^{-1}}(x))\to -\log \rho $ as $x\to \infty $; in particular, ${r_{0}}({R_{0}^{-1}}(x))$ stays bounded and conditions (9), (11) of Theorem 2 hold.
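As a side remark, the tail asymptotics (56) can also be checked numerically. The following Python sketch (ours; the function names q_exact, q_asymptotic and the parameter values are illustrative only) evaluates $q(n)$ exactly via (49)–(50) and compares it with the right-hand side of (56), approximating the constant C from (55) by a truncated product:

```python
import math

# Numerical check of (56): q(n) = 1 / sum_{k=0}^n alpha_k with
# alpha_k = prod_{i=1}^k mu_i / lambda_i, versus
# ((1/rho - 1) / C) * rho^(n+1) * n^(a/lam), with C from (55).

def q_exact(lam: float, mu: float, a: float, n: int) -> float:
    alpha, total = 1.0, 1.0                      # alpha_0 = 1
    for i in range(1, n + 1):
        alpha *= (mu * i) / (lam * i + a)        # alpha_i = alpha_{i-1} * mu_i / lambda_i
        total += alpha
    return 1.0 / total

def q_asymptotic(lam: float, mu: float, a: float, n: int, trunc: int = 10 ** 5) -> float:
    rho = lam / mu
    # Approximate C = lim_m m^(a/lam) * prod_{i=1}^m (1 - 1/(1 + i*lam/a)), see (55).
    log_beta = sum(math.log(1.0 - 1.0 / (1.0 + i * lam / a)) for i in range(1, trunc + 1))
    c_const = math.exp((a / lam) * math.log(trunc) + log_beta)
    return (1.0 / rho - 1.0) / c_const * rho ** (n + 1) * n ** (a / lam)

if __name__ == "__main__":
    lam, mu, a = 1.0, 2.0, 0.7
    for n in (10, 30, 60):
        print(f"n = {n}: exact q(n) = {q_exact(lam, mu, a, n):.3e}, "
              f"asymptotic = {q_asymptotic(lam, mu, a, n):.3e}")
    # The two values should become close as n grows, in agreement with (56).
```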
It remains to find a simple formula for the function ${A_{0}}$ in (10) and (12). To this end, let us write
\[ \log \left(\frac{t}{{\alpha _{T}}}\right)={R_{0}}({A_{0}}(t))=-{A_{0}}(t)\log \rho -\frac{a}{\lambda }\log {A_{0}}(t)={A_{0}}(t)\left(-\log \rho +o(1)\right),\]
as $t\to \infty $. Upon taking logarithms we get
\[ \log {A_{0}}(t)=\log \log \left(\frac{t}{{\alpha _{T}}}\right)+O(1),\hspace{1em}t\to \infty ,\]
and plugging this back into the initial equality yields
\[ {A_{0}}(t)=\frac{\log t+\frac{a}{\lambda }{L_{2}}(t)+O(1)}{-\log \rho }.\]
Thus, from Theorem 2 we infer the following.
Corollary 3.
Let ${(X(t))_{t\ge 0}}$ be the birth and death process with parameters ${\lambda _{n}}=\lambda n+a$, ${\mu _{n}}=\mu n$, where $\lambda ,\mu ,a>0$, $n=0,1,2,3,\dots $. Suppose also that (45) holds. Then
(57)
\[ \underset{t\to \infty }{\limsup }\frac{\bar{X}(t)\log \frac{1}{\rho }-\log t}{{L_{2}}(t)}=1+\frac{a}{\lambda },\hspace{1em}\textit{a.s.},\]
and
(58)
\[ \underset{t\to \infty }{\liminf }\frac{\bar{X}(t)\log \frac{1}{\rho }-\log t-\frac{a}{\lambda }{L_{2}}(t)}{{L_{3}}(t)}=-1,\hspace{1em}\textit{a.s.}\]
Remark 4.
It is clear from the above calculations that the condition $X(0)=0$ can be removed, that is, equations (57) and (58) hold true for an arbitrary starting point $X(0)\in \{0,1,2,\dots \}$.
Let us finally mention, without proof, a statement which follows easily from equations (48), (56) and Theorem 2 in [22].
Corollary 4.
Let ${(X(t))_{t\ge 0}}$ be the birth and death process that satisfies all conditions of Corollary 3. Then
(59)
\[ \underset{n\to \infty }{\lim }\mathbf{P}(\bar{X}({t_{n}^{\ast }})\ge n)=1-\exp (-a{p_{0}}x),\hspace{1em}x>0,\]
where ${t_{n}^{\ast }}=C{\rho ^{-n}}{n^{-a/\lambda }}x/(1/\rho -1)$, while ${p_{0}}$ and C are defined by (48) and (55). This relation can also be recast in a more transparent way as follows:
(60)
\[ \underset{n\to \infty }{\lim }\mathbf{P}({C^{-1}}(1/\rho -1){\rho ^{n}}{n^{a/\lambda }}{X^{-1}}(n)>x)=\exp (-a{p_{0}}x),\]
where ${X^{-1}}(n)=\inf \{t\in \mathbb{R}:X(t)=n\}$ is the first time when ${(X(t))_{t\ge 0}}$ visits state $n\in \mathbb{N}$, that is, the distribution of ${\rho ^{n}}{n^{a/\lambda }}{X^{-1}}(n)$ converges to an exponential law.

Acknowledgments

We thank anonymous referees for a number of useful suggestions and remarks.

References

[1] 
Akbash, K.S., Matsak, I.K.: One improvement of the law of the iterated logarithm for the maximum scheme. Ukr. Math. J. 64(8), 1290–1296 (2013). MR3104866. https://doi.org/10.1007/s11253-013-0716-7
[2] 
Anderson, C.W.: Extreme value theory for a class of discrete distributions with applications to some stochastic processes. J. Appl. Probab. 7, 99–113 (1970). MR256441. https://doi.org/10.1017/s0021900200026978
[3] 
Asmussen, S.: Extreme value theory for queues via cycle maxima. Extremes 1(2), 137–168 (1998). MR1814621. https://doi.org/10.1023/A:1009970005784
[4] 
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Encyclopedia of Mathematics and its Applications, vol. 27, p. 494. Cambridge University Press, Cambridge (1989). MR1015093
[5] 
Buldygin, V.V., Klesov, O.I., Steinebach, J.G.: Properties of a subclass of Avakumović functions and their generalized inverses. Ukr. Math. J. 54(2), 149–169 (2002). MR1952816. https://doi.org/10.1023/A:1020178327423
[6] 
Cohen, J.W.: Extreme value distribution for the $M/G/1$ and the $G/M/1$ queueing systems. Ann. Inst. Henri Poincaré Sect. B (N.S.) 4, 83–98 (1968). MR0232466
[7] 
Dovgaĭ, B.V., Matsak, I.K.: Asymptotic behavior of the extreme values of the queue length in $M/M/m$ queuing systems. Kibern. Sist. Anal. 55(2), 171–179 (2019). MR3927561
[8] 
Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events: for Insurance and Finance. Springer (2013) MR1458613. https://doi.org/10.1007/978-3-642-33483-2
[9] 
Feller, W.: An Introduction to Probability Theory and Its Applications. Vol. II. Second edition, p. 669. John Wiley & Sons, Inc., New York-London-Sydney (1971). MR0270403
[10] 
Galambos, J.: The Asymptotic Theory of Extreme Order Statistics, p. 352. John Wiley & Sons, New York-Chichester-Brisbane (1978). Wiley Series in Probability and Mathematical Statistics. MR489334
[11] 
Glasserman, P., Kou, S.-G.: Limits of first passage times to rare sets in regenerative processes. Ann. Appl. Probab. 5(2), 424–445 (1995). MR1336877
[12] 
Gnedenko, B.V., Kovalenko, I.N.: Introduction to Queueing Theory, 2nd edn. Mathematical Modeling, vol. 5, p. 314. Birkhäuser Boston, Inc., Boston, MA (1989). https://doi.org/10.1007/978-1-4615-9826-8. Translated from the second Russian edition by Samuel Kotz. MR1035707
[13] 
Iglehart, D.L.: Extreme values in the $GI/G/1$ queue. Ann. Math. Stat. 43, 627–635 (1972). MR305498. https://doi.org/10.1214/aoms/1177692642
[14] 
Karlin, S.: A First Course in Stochastic Processes, p. 502. Academic Press, New York-London (1966). MR0208657
[15] 
Karlin, S., McGregor, J.: The classification of birth and death processes. Trans. Am. Math. Soc. 86, 366–400 (1957). MR94854. https://doi.org/10.2307/1993021
[16] 
Klass, M.: The minimal growth rate of partial maxima. Ann. Probab. 12(2), 380–389 (1984) MR0735844
[17] 
Klass, M.: The Robbins–Siegmund series criterion for partial maxima. Ann. Probab. 13(4), 1369–1370 (1985) MR0806233
[18] 
Matsak, I.K.: Limit theorem for extreme values of discrete random variables and its application. Teor. Imovir. Mat. Stat. 101, 189–202 (2019).
[19] 
Robbins, H., Siegmund, D.: On the law of the iterated logarithm for maxima and minima. In: Proc. 6th Berkeley Symp. Math. Stat. Prob, vol. 3, pp. 51–70 (1972) MR0400364
[20] 
Serfozo, R.F.: Extreme values of birth and death processes and queues. Stoch. Process. Appl. 27(2), 291–306 (1988). MR931033. https://doi.org/10.1016/0304-4149(87)90043-3
[21] 
Smith, W.L.: Renewal theory and its ramifications. J. R. Stat. Soc. Ser. B 20, 243–302 (1958). MR99090
[22] 
Zakusilo, O.K., Matsak, I.K.: On the extreme values of some regenerative processes. Teor. Imovir. Mat. Stat. 97, 58–71 (2017). MR3745999. https://doi.org/10.1090/tpms/1048