1 Introduction and main results
Various problems related to the asymptotic behavior of extreme values of regenerative processes are of considerable practical interest and have attracted a lot of attention in the probabilistic community. For example, extremes in queuing systems and of birth and death processes have been investigated in [2, 3, 6, 13, 20], to name but a few. The analysis carried out in the above papers is mostly based on the classical theory of extreme values for independent identically distributed (i.i.d.) random variables. A survey of early results in this direction can be found, among others, in [3]. In the recent paper [22] a slightly different approach to the asymptotic analysis of extreme values of regenerative processes, based on nonlinear time transformations, was proposed.
The aforementioned works were mostly aimed at the derivation of weak limit theorems for extremes of regenerative processes. In this article, instead, we are interested in the almost sure (a.s.) behavior of general regenerative processes and, in particular, of regenerative processes appearing in queuing and birth–death systems. Our main results, formulated in Theorems 1 and 2 below, provide laws of the iterated and triple logarithm for the running maximum of regenerative processes. A distinguishing feature of our results is the different scaling required for the lim sup and the lim inf. Under the assumption that the maximum of a regenerative process over its regeneration cycle has an exponential right tail, this type of behavior has already been observed in [11], see Proposition 3.2 therein. Our theorems generalize the aforementioned result and cover, for example, regenerative processes with Weibull-like tails of the maximum over a regeneration cycle. As in many other papers dealing with extremes of regenerative processes, our approach relies on analyzing the a.s. behavior of the running maximum of i.i.d. random variables. In this respect, let us also mention the papers [16, 17, 19] dealing with the a.s. growth rate of the running maximum, see Section 3.5 in [8] for a survey.
Before formulating the results we introduce necessary definitions. Let us recall, see [4], that a positive measurable function U defined in some neighbourhood of $+\infty $ is called regularly varying at $+\infty $ with index $\kappa \in \mathbb{R}$ if $U(x)={x^{\kappa }}V(x)$, and the function V is slowly varying at $+\infty $, that is
Given a function $H:\mathbb{R}\to \mathbb{R}$ we denote by ${H^{-1}}$ its generalized inverse defined by
The following definition is of crucial importance for our main results.
Definition 1.
Note that the assumption of regular variation of $\hat{h}$ implies that h is eventually positive. Thus, H is eventually strictly increasing and the generalized inverse ${H^{-1}}$ defined by (1) eventually coincides with the usual inverse.
Let $X={(X(t))_{t\ge 0}}$ be a regenerative random process, that is,
\[ X(t)={\xi _{k}}(t-{S_{k-1}})\hspace{1em}\text{for}\hspace{2.5pt}t\in [{S_{k-1}},{S_{k}}),\hspace{2.5pt}k\in \mathbb{N},\]
where
and ${({T_{k}},{\xi _{k}}(\cdot ))_{k\in \mathbb{N}}}$ is a sequence of independent copies of a pair $(T,\xi (\cdot ))$, see, for example, [21, Part II, Chapter 2] and [9, Chapter 11, §8]. The points $({S_{k}})$ are called regeneration epochs and the interval $[{S_{k-1}},{S_{k}})$ is the k-th period of regeneration. For $t\ge 0$, put
and note that $\bar{X}({T_{1}})$ is the maximum of the process X on the first period of regeneration. Let F be the distribution function of $\bar{X}({T_{1}})$, that is,
Put
and
Note also that it is always possible to write a decomposition
where
Here and hereafter we denote by $C,{C_{1}},{C_{2}}$ etc. some positive constants which may vary from place to place and may depend on parameters of the process $X(\cdot )$.
We are ready to formulate our first result.
Theorem 1.
Let ${(X(t))_{t\ge 0}}$ be a regenerative random process. Assume that there exists a decomposition (2) such that (3) holds and the function ${R_{0}}$ satisfies condition $(\mathbb{U})$. Suppose further that ${\alpha _{T}}<\infty $. For large enough $x\in \mathbb{R}$, let ${r_{0}}$ be the derivative of ${R_{0}}$. Then
(4)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\textit{a.s.},\]
and
(5)
\[ \underset{t\to \infty }{\liminf }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{3}}(t)}=-1\hspace{1em}\textit{a.s.},\]
where ${A_{0}}(t)={R_{0}^{-1}}(\log (t/{\alpha _{T}}))$, ${L_{2}}(t)=\log \log t$ and ${L_{3}}(t)=\log \log \log t$.
Our next result is a counterpart of Theorem 1 for discrete processes taking values in some lattice in $\mathbb{R}$. Such processes are important, among other fields, in queuing theory. Assume that
and, for $k=0,1,2,3,\dots $, put
Similarly to (2) and (3) we can write a decomposition
where ${R_{0}}:\mathbb{R}\to \mathbb{R}$ and ${R_{1}}:\mathbb{R}\to \mathbb{R}$ are real-valued functions and ${R_{1}}$ is such that
Theorem 2.
Let ${(X(t))_{t\ge 0}}$ be a regenerative random process such that (6) holds. Assume that there exists a decomposition (7) such that (8) is fulfilled and the function ${R_{0}}$ satisfies condition $(\mathbb{U})$. Suppose also that ${\alpha _{T}}<\infty $.
-
(ii) The asymptotic relation entails
(12)
\[ \underset{t\to \infty }{\liminf }\frac{{r_{0}}({A_{0}}(t))(\bar{X}(t)-{A_{0}}(t))}{{L_{3}}(t)}=-1\hspace{1em}\textit{a.s.}\]
Remark 1.
In the discrete setting we assume that there exist extensions of the sequences $({R_{0}}(k))$ and $({R_{1}}(k))$ to functions defined on the whole real line, with the extension of ${R_{0}}$ being smooth. While such an assumption might look artificial, it keeps the exposition homogeneous and allows us to work in both the continuous and the discrete settings with the same class of functions $\mathbb{U}$.
2 Preliminaries
Let us consider a sequence ${({\xi _{k}})_{k\in \mathbb{N}}}$ of independent copies of a random variable ξ with the distribution function ${F_{\xi }}(x)=\mathbf{P}(\xi \le x)=:1-\exp (-{R_{\xi }}(x))$. Put
The following result was proved in [1], see Theorem 1 therein.
Lemma 1.
Assume that the distribution of ξ is such that ${R_{\xi }}$ satisfies condition $(\mathbb{U})$. With $a(n)={R_{\xi }^{-1}}(\log n)$ it holds
(14)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{\xi }}(a(n))({z_{n}}-a(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(15)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{\xi }}(a(n))({z_{n}}-a(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where, for large enough $x\in \mathbb{R}$, ${r_{\xi }}(x)={R^{\prime }_{\xi }}(x)$.
The proof of Lemma 1, given in [1], consists of two steps. Firstly, the claim is established for the standard exponential distribution ${\tau ^{e}}$, that is, assuming $\mathbf{P}(\xi \le x)=\mathbf{P}({\tau ^{e}}\le x)=1-\exp (-x)$. In the second step the claim is proved for an arbitrary ${R_{\xi }}$ using regular variation and the representation
(16)
\[ {R_{\xi }^{-1}}({\tau ^{e}})\stackrel{d}{=}\xi \hspace{1em}\text{and}\hspace{1em}{R_{\xi }^{-1}}({z_{n}^{e}})\stackrel{d}{=}{z_{n}}.\]
Here and hereafter ${z_{n}^{e}}={\max _{1\le i\le n}}{\tau _{i}^{e}}$ and ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ are independent copies of ${\tau ^{e}}$.
We need the following generalization of Lemma 1.
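Before stating it, here is a quick numerical illustration of Lemma 1 in the simplest case ${R_{\xi }}(x)=x$, where $a(n)=\log n$ and ${r_{\xi }}\equiv 1$. The Python sketch below is our own illustration; the seed and sample sizes are arbitrary choices.

```python
import math
import random

# Running maximum z_n^e of i.i.d. standard exponential variables tau_i^e.
# Lemma 1 with R_xi(x) = x predicts z_n^e ~ log n, with upward fluctuations
# of order log log n and downward fluctuations of order log log log n.
random.seed(1)

checkpoints = {}
z = 0.0
for i in range(1, 200_001):
    z = max(z, random.expovariate(1.0))
    if i in (1_000, 10_000, 200_000):
        checkpoints[i] = z

for n, zn in sorted(checkpoints.items()):
    print(n, round(zn, 3), round(zn / math.log(n), 3))
```

The printed ratios ${z_{n}^{e}}/\log n$ should be close to 1, with fluctuations on the scales appearing in (14) and (15).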
Lemma 2.
Assume that the law of ξ is such that the function ${R_{\xi }}$ possesses a decomposition (2) with ${R_{1}}$ satisfying (3) and ${R_{0}}$ satisfying condition $(\mathbb{U})$. Then
(17)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({z_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(18)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}({a_{0}}(n))({z_{n}}-{a_{0}}(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where ${a_{0}}(n)={R_{0}^{-1}}(\log n)$ and ${r_{0}}(x)={R^{\prime }_{0}}(x)$.
Lemma 3.
Let H be a function regularly varying at $+\infty $ and let ${({c_{n}})_{n\in \mathbb{N}}}$ and ${({d_{n}})_{n\in \mathbb{N}}}$ be two sequences of real numbers such that ${\lim \nolimits_{n\to \infty }}{c_{n}}=+\infty $ and ${\lim \nolimits_{n\to \infty }}{c_{n}}/{d_{n}}=1$. Then
\[ \underset{n\to \infty }{\lim }\frac{H({c_{n}})}{H({d_{n}})}=1.\]
Proof of Lemma 2.
Fix a sequence of standard exponential random variables ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ and assume without loss of generality that the sequence ${({z_{n}})_{n\in \mathbb{N}}}$ is constructed from ${({\tau _{i}^{e}})_{i\in \mathbb{N}}}$ via formula (16). The subsequent proof is divided into two steps.
Step 1. Suppose additionally that the function ${R_{0}}$ is everywhere nondecreasing, differentiable, and ${R_{0}}(-\infty )=0$. Then ${F_{0}}(x):=1-\exp (-{R_{0}}(x))$ is a distribution function. Put ${\xi ^{\prime }_{i}}={R_{0}^{-1}}({\tau _{i}^{e}})$ for $i\in \mathbb{N}$ and let ${z^{\prime }_{n}}={\max _{1\le i\le n}}{\xi ^{\prime }_{i}}$. From Lemma 1 we infer
(19)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({z^{\prime }_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\text{a.s.}\]
Let ${C_{1}}$ be a constant such that (3) holds. From the definition of the function ${R_{\xi }^{-1}}$ and decomposition (2) we obtain
\[ {R_{0}^{-1}}(x-{C_{1}})\le {R_{\xi }^{-1}}(x)\le {R_{0}^{-1}}(x+{C_{1}}),\hspace{1em}x\in \mathbb{R},\]
and thereupon
\[ {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})\le {R_{\xi }^{-1}}({z_{n}^{e}})\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}}).\]
Hence, by monotonicity of ${R_{0}^{-1}}$, we have
(20)
\[ |{R_{\xi }^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})-{R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})=2{C_{1}}{\hat{r}_{0}}({z_{n}^{e}}+{C_{1}}(2{\theta _{n}}-1)),\]
where the equality follows from the mean value theorem for differentiable functions, ${\hat{r}_{0}}(x)={({R_{0}^{-1}}(x))^{\prime }}$ and $0\le {\theta _{n}}\le 1$.
It is known, see [10, Chapter 4, Example 4.3.3], that
\[ \underset{n\to \infty }{\lim }\frac{{z_{n}^{e}}}{\log n}=1\hspace{1em}\text{a.s.}\]
Thus, from Lemma 3 we deduce
\[ \underset{n\to \infty }{\lim }\frac{{\hat{r}_{0}}({z_{n}^{e}}+{C_{1}}(2{\theta _{n}}-1))}{{\hat{r}_{0}}(\log n)}=1\hspace{1em}\text{a.s.}\]
In conjunction with (20) this yields
(21)
\[ |{R_{\xi }^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|\le 2{C_{1}}{\hat{r}_{0}}(\log n)(1+o(1))=\frac{2{C_{1}}}{{r_{0}}({a_{0}}(n))}(1+o(1)).\]
Taking together relations (19), (21) we arrive at (17). Similarly, from Lemma 1 we have
(22)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}({a_{0}}(n))({z^{\prime }_{n}}-{a_{0}}(n))}{{L_{3}}(n)}=-1\hspace{1em}\text{a.s.}\]
Therefore, (18) follows from (22) and (21).
Step 2. Let us now turn to the general case where the function ${R_{0}}$ is nondecreasing and differentiable on some interval $[{x_{0}},\infty )$ with ${x_{0}}>0$. Recall decomposition (2). Let ${\tilde{R}_{0}}:\mathbb{R}\to \mathbb{R}$ and ${\tilde{R}_{1}}:\mathbb{R}\to \mathbb{R}$ be arbitrary nondecreasing differentiable functions such that
\[\begin{aligned}{}{\tilde{R}_{0}}(x)& ={R_{0}}(x)\hspace{1em}\text{and}\hspace{1em}{\tilde{R}_{1}}(x)={R_{1}}(x)\hspace{1em}\text{for}\hspace{2.5pt}x\ge {x_{0}},\\ {} {\tilde{R}_{0}}(x)& ={\tilde{R}_{1}}(x)=0\hspace{1em}\text{for}\hspace{2.5pt}x\le 0.\end{aligned}\]
Put
The functions ${\tilde{R}_{0}}$, ${\tilde{R}_{1}}$ and $\tilde{R}$ satisfy all the assumptions of Step 1. Thus, if we set
\[ {\tilde{\xi }_{i}}={\tilde{R}^{-1}}({\tau _{i}^{e}}),\hspace{2em}{\tilde{z}_{n}}=\underset{1\le i\le n}{\max }{\tilde{\xi }_{i}},\]
then the sequence ${({\tilde{z}_{n}})_{n\in \mathbb{N}}}$ satisfies (17) and (18) with the same normalizing functions ${r_{0}}({a_{0}}(n))$ and ${a_{0}}(n)$. The latter holds true since for sufficiently large $x>0$ we have ${\tilde{R}_{0}^{-1}}(x)={R_{0}^{-1}}(x)$. It remains to note that the asymptotics of $({\tilde{z}_{n}})$ and $({z_{n}})$ are the same. Indeed, set
where ${y_{0}}:=R({x_{0}})=\tilde{R}({x_{0}})$. Then ${z_{n}}={\tilde{z}_{n}}$ for $n\ge {n_{0}}$ and we see that both (17) and (18) hold for $({z_{n}})$ as well. This finishes the proof of Lemma 2. □
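The sandwich bound for inverses used in Step 1 can be checked numerically. In the sketch below (our own toy example, not taken from the paper) we take a Weibull-like ${R_{0}}(x)=\sqrt{x}$ and a bounded perturbation ${R_{1}}(x)=0.3\sin (\log (1+x))$, so that $|{R_{1}}|\le {C_{1}}=0.3$ and ${R_{\xi }}={R_{0}}+{R_{1}}$ is strictly increasing.

```python
import math

# R_xi = R_0 + R_1 with |R_1| <= C1 implies, for the inverses,
#   R_0^{-1}(y - C1) <= R_xi^{-1}(y) <= R_0^{-1}(y + C1),
# which is the key estimate behind (20).
C1 = 0.3

def R0(x):
    return math.sqrt(x)

def Rxi(x):
    # strictly increasing for x > 0: the perturbation has a small derivative
    return math.sqrt(x) + 0.3 * math.sin(math.log(1.0 + x))

def inv(f, y, lo=0.0, hi=1e12):
    # inverse of an increasing function via bisection
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for y in (5.0, 10.0, 20.0):
    assert inv(R0, y - C1) <= inv(Rxi, y) <= inv(R0, y + C1)
```

The bound holds for every level y, exactly as used in the derivation of (20).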
The next lemma is a counterpart of Lemma 2 for discrete distributions. Assume that ξ has distribution
where ${p_{k}}\ge 0$ and ${\textstyle\sum _{k=0}^{\infty }}{p_{k}}=1$. Put
Lemma 4.
Proof.
Similarly to Lemma 2, the proof is divided into two steps. We provide the details only for the first step, leaving the second to the interested reader. Thus, we put ${\xi ^{\prime }_{i}}={R_{0}^{-1}}({\tau _{i}^{e}})$ for $i\in \mathbb{N}$. Note that the ${\xi ^{\prime }_{i}}$ are i.i.d. with the distribution function ${F_{0}}(x)=1-\exp (-{R_{0}}(x))$. Thus, for ${z^{\prime }_{n}}={\max _{1\le i\le n}}{\xi ^{\prime }_{i}}$, equality (19) holds.
Let us consider
then
for all $y\in \mathbb{R}$, and therefore
\[ |{R_{\xi ,k}^{-1}}({z_{n}^{e}})-\lceil {R_{\xi ,k}^{-1}}({z_{n}^{e}})\rceil |\le 1,\hspace{2em}|{R_{0}^{-1}}({z_{n}^{e}})-\lceil {R_{0}^{-1}}({z_{n}^{e}})\rceil |\le 1.\]
Further, condition (8) and monotonicity of the function $\lceil \cdot \rceil $ both imply
\[ \lceil {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})\rceil \le \lceil {R_{\xi ,k}^{-1}}({z_{n}^{e}})\rceil \le \lceil {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})\rceil .\]
Combining the above estimates, we derive
\[ {R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})-2\le {R_{\xi ,k}^{-1}}({z_{n}^{e}})\le {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})+2.\]
This means
(23)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle |{R_{\xi ,k}^{-1}}({z_{n}^{e}})-{R_{0}^{-1}}({z_{n}^{e}})|& \displaystyle \le & \displaystyle {R_{0}^{-1}}({z_{n}^{e}}+{C_{1}})-{R_{0}^{-1}}({z_{n}^{e}}-{C_{1}})+4\\ {} & \displaystyle \le & \displaystyle \frac{2{C_{1}}}{{r_{0}}({a_{0}}(n))}(1+o(1))+4,\end{array}\]
see estimates (20), (21).
The next simple lemma is probably known; however, we prefer to give an elementary proof of a few lines.
Proof.
By the Stolz–Cesàro theorem we have
\[\begin{aligned}{}\underset{n\to \infty }{\lim }\frac{(p-1){n^{b}}{\Lambda _{n}}}{{p^{n+1}}}& =\underset{n\to \infty }{\lim }\frac{{\Lambda _{n}}-{\Lambda _{n-1}}}{\frac{{p^{n+1}}}{(p-1){n^{b}}}-\frac{{p^{n}}}{(p-1){(n-1)^{b}}}}\\ {} & =\underset{n\to \infty }{\lim }\frac{\frac{{p^{n}}}{{n^{b}}}}{\frac{{p^{n+1}}}{(p-1){n^{b}}}-\frac{{p^{n}}}{(p-1){(n-1)^{b}}}}=\underset{n\to \infty }{\lim }\frac{p-1}{p-\frac{{n^{b}}}{{(n-1)^{b}}}}=1.\end{aligned}\]
The proof is complete. □
3 Proofs of Theorems 1 and 2
Proof of Theorem 1.
Let us start with the proof of equality (4). To this end, we introduce the following notation:
\[ {Y_{k}}=\underset{{S_{k-1}}\le t<{S_{k}}}{\sup }X(t),\hspace{2em}{Z_{n}}=\underset{1\le k\le n}{\max }{Y_{k}},\hspace{1em}k\in \mathbb{N}.\]
Since $({S_{k}})$ are the moments of regeneration of the process ${(X(t))_{t\ge 0}}$, $({Y_{k}})$ are i.i.d. random variables. Furthermore, it is clear that the sequence $({Y_{k}})$ satisfies the conditions of Lemma 2. Therefore,
(25)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({a_{0}}(n))({Z_{n}}-{a_{0}}(n))}{{L_{2}}(n)}=1\hspace{1em}\text{a.s.}\]
Denote by N the counting process for the sequence $({S_{k}})$, that is,
\[ N(t)=\#\{k\in \mathbb{N}:{S_{k}}\le t\},\hspace{1em}t\ge 0.\]
Since ${\lim \nolimits_{t\to \infty }}N(t)=+\infty $ a.s. and $N(t)$ runs through all positive integers, from (25) we obtain
(26)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log N(t)))({Z_{N(t)}}-{R_{0}^{-1}}(\log N(t)))}{{L_{2}}(N(t))}=1\hspace{1em}\text{a.s.}\]
By the strong law of large numbers for N we have
(27)
\[ \underset{t\to \infty }{\lim }\frac{N(t)}{t}=\frac{1}{{\alpha _{T}}}\hspace{1em}\text{a.s.},\]
whence, as $t\to \infty $,
\[ \log N(t)=\log \frac{t}{{\alpha _{T}}}+o(1)\hspace{1em}\text{and}\hspace{1em}N(t)=\frac{t}{{\alpha _{T}}}(1+o(1))\hspace{1em}\text{a.s.}\]
In what follows $o(1)$ is a random function which converges to zero a.s. as $t\to \infty $. Plugging the above representations into (26), we obtain
(28)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}+o(1)))({Z_{N(t)}}-{R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}+o(1)))}{{L_{2}}(\frac{t}{{\alpha _{T}}}(1+o(1)))}=1\hspace{1em}\text{a.s.}\]
Further, by the slow variation of ${L_{2}}$ we can replace the denominator in (28) by ${L_{2}}(t)$. Let us recall that under the assumptions of Theorem 1, the function ${\hat{r}_{0}}(x)={({R_{0}^{-1}}(x))^{\prime }}$ is regularly varying at infinity. So using once again Lemma 3, we obtain the equalities
\[ {\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg)={\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}\bigg)(1+o(1))=\frac{1+o(1)}{{r_{0}}({R_{0}^{-1}}(\log \frac{t}{{\alpha _{T}}}))},\hspace{1em}t\to \infty ,\]
and
\[ {R_{0}^{-1}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg)-{R_{0}^{-1}}\bigg(\log \frac{t}{{\alpha _{T}}}\bigg)=o(1){\hat{r}_{0}}\bigg(\log \frac{t}{{\alpha _{T}}}+o(1)\bigg),\hspace{1em}t\to \infty .\]
Combining the above relations, from (28) we derive
(29)
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))({Z_{N(t)}}-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\text{a.s.},\]
with ${A_{0}}(t)$ and ${r_{0}}(t)$ as in Theorem 1. The same argument with $N(t)$ replaced by $N(t)+1$ yields
\[ \underset{t\to \infty }{\limsup }\frac{{r_{0}}({A_{0}}(t))({Z_{N(t)+1}}-{A_{0}}(t))}{{L_{2}}(t)}=1\hspace{1em}\text{a.s.}\]
It remains to note that
\[ {Z_{N(t)}}\le \bar{X}(t)\le {Z_{N(t)+1}},\hspace{1em}t\ge 0.\]
Summarizing, this gives equality (4). The proof of relation (5) utilizes equality (18) of Lemma 2 and is similar. We omit the details. □
The derivation of Theorem 2 is based on Lemma 4 and basically repeats the proof of Theorem 1. We leave the details to the reader.
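The renewal-theoretic ingredient (27) is easy to verify by simulation. A minimal sketch, assuming exponentially distributed cycle lengths with mean ${\alpha _{T}}=2$ (any positive integrable T would do; the seed and horizon are arbitrary):

```python
import random

# Check of (27): with S_k = T_1 + ... + T_k and E T = alpha_T, the counting
# process N(t) = #{k : S_k <= t} satisfies N(t)/t -> 1/alpha_T a.s.
random.seed(3)
alpha_T = 2.0
t_max = 100_000.0

S, N = 0.0, 0
while True:
    S += random.expovariate(1.0 / alpha_T)  # cycle length T_k
    if S > t_max:
        break
    N += 1

ratio = (N / t_max) * alpha_T  # should be close to 1
print(N, ratio)
```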
Suppose that under the assumptions of Theorem 1 the parameter t runs over a countable set $\mathcal{T}:=\{{t_{0}}=0<{t_{1}}<{t_{2}}<\cdots \hspace{0.1667em}\}$ such that ${\lim \nolimits_{i\to \infty }}{t_{i}}=+\infty $. The set $\mathcal{T}$ can be random and the process X can depend on $\mathcal{T}$. Assume that $\mathbf{P}({S_{i}}\in \mathcal{T})=1$ for all $i\in \mathbb{N}$.
Put ${X_{i}}:=X({t_{i}})$ and ${\bar{X}_{n}}={\max _{0\le i\le n}}{X_{i}}$. Assume that extreme values of the process X are attained at the points of the set $\mathcal{T}$. More precisely, for all $t\ge 0$,
In what follows the next proposition will be useful.
Proposition 1.
Under the assumptions of Theorem 1 suppose that there exists a set $\mathcal{T}$ such that condition (30) holds and, further,
(31)
\[ \underset{n\to \infty }{\lim }\frac{{t_{n}}}{n}=\alpha \hspace{1em}\textit{a.s. for some}\hspace{2.5pt}\alpha \in (0,\infty ).\]
Then
(32)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}(A(n))({\bar{X}_{n}}-A(n))}{{L_{2}}(n)}=1\hspace{1em}\textit{a.s.},\]
and
(33)
\[ \underset{n\to \infty }{\liminf }\frac{{r_{0}}(A(n))({\bar{X}_{n}}-A(n))}{{L_{3}}(n)}=-1\hspace{1em}\textit{a.s.},\]
where $A(n)={R_{0}^{-1}}(\log (\alpha n/{\alpha _{T}}))$.
Proof.
The proof given below is similar to the proof of Theorem 1. From equations (27) and (31) we obtain, as $n\to \infty $,
(34)
\[ \frac{N({t_{n}})}{n}=\frac{N({t_{n}})}{{t_{n}}}\frac{{t_{n}}}{n}\to \frac{\alpha }{{\alpha _{T}}}\hspace{1em}\text{a.s.}\]
Further, replacing n by $N({t_{n}})$ in equality (25), which is possible because $N({t_{n}})$ diverges to infinity through all positive integers, we get
(35)
\[ \underset{n\to \infty }{\limsup }\frac{{r_{0}}({R_{0}^{-1}}(\log N({t_{n}})))({Z_{N({t_{n}})}}-{R_{0}^{-1}}(\log N({t_{n}})))}{{L_{2}}(N({t_{n}}))}=1\hspace{1em}\text{a.s.}\]
Directly from (30) we derive
\[ {Z_{N({t_{n}})}}\le {\bar{X}_{n}}\le {Z_{N({t_{n}})+1}}.\]
These inequalities and relations (34), (35) yield equality (32). The same argument can be applied for proving (33). □
4 Applications
Example 1 (Queuing system $GI/G/1$).
Let us consider a single-channel queuing system with customers arriving at $0={t_{0}}<{t_{1}}<{t_{2}}<\cdots <{t_{i}}<\cdots \hspace{0.1667em}$. Let $0={W_{0}},{W_{1}},{W_{2}},\dots ,{W_{i}},\dots $ be the actual waiting times of the customers. Thus, at time $t=0$ the first customer arrives and its service starts immediately. Denote by ${\zeta _{i}}={t_{i}}-{t_{i-1}}$, for $i\in \mathbb{N}$, the interarrival times between successive customers, and by ${\eta _{i}}$, $i\in \mathbb{N}$, the service time of the i-th customer. Suppose that $({\zeta _{i}})$ and $({\eta _{i}})$ are independent sequences of i.i.d. random variables. In the standard notation, this queuing system has the type $GI/G/1$, see [12, 14].
Let $W(t)$ be the waiting time of the last customer in the queue at time $t\ge 0$, that is,
and
Set
then
Denote $\mathbf{E}{\zeta _{i}}=a$, $\mathbf{E}{\eta _{i}}=b$ and assume that both expectations are finite. Further, we impose the following conditions on ${\zeta _{i}}$ and ${\eta _{i}}$:
and for some $\gamma >0$, it holds
(37)
\[ \mathbf{E}\exp (\gamma ({\eta _{i}}-{\zeta _{i}}))=1,\hspace{2em}\mathbf{E}({\eta _{i}}-{\zeta _{i}})\exp (\gamma ({\eta _{i}}-{\zeta _{i}}))<\infty .\]
Under these assumptions the evolution of the queuing system can be decomposed into busy periods, when a customer is in service, interleaved with idle periods, when the system is empty. Let us introduce the regeneration moments $({S_{k}})$ of the process W as follows: ${S_{0}}=0$ and, for $i\in \mathbb{N}$, ${S_{i}}$ is the arrival time of a new customer at the end of the i-th idle period. Let ${T_{i}}$ be the duration of the i-th regeneration cycle, and $\bar{W}({T_{1}})$ be the maximum waiting time during the first regeneration cycle. It is known, see [3] and [13], that under conditions (36) and (37), we have
Condition (36) also guarantees that the average duration of the i-th regeneration cycle is finite, that is, ${\alpha _{T}}=\mathbf{E}{T_{i}}<\infty $, see [14, Chapter 14, §3, Theorem 3.2].
Thus, if we set $X(t)=W(t)$, ${R_{0}}(x)=\gamma x$, ${R_{1}}(x)=\log C+o(1)$ and ${r_{0}}(x)=\gamma $, then from Theorem 1 and Proposition 1 we derive the following corollary.
Corollary 1.
Remark 2.
-
(i) Suppose that\[ \mathbf{P}({\zeta _{i}}\le x)=1-\exp (-\lambda x),\hspace{2em}\mathbf{P}({\eta _{i}}\le x)=1-\exp (-\mu x),\hspace{1em}x\ge 0,\]that is, we consider the queuing system $M/M/1$. Assume further that $\rho :=\lambda /\mu <1$. It is easy to check that conditions (36) and (37) are fulfilled, and therefore equalities (38) and (39) hold with $\gamma =\mu -\lambda =\mu (1-\rho )$.
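Both claims of item (i) are easy to check numerically. The sketch below (our own illustration; the parameter values are arbitrary) first verifies that $\gamma =\mu -\lambda $ solves condition (37): for independent exponentials the expectation factorizes as $\mathbf{E}\exp (\gamma ({\eta _{i}}-{\zeta _{i}}))=\frac{\mu }{\mu -\gamma }\cdot \frac{\lambda }{\lambda +\gamma }$. It then runs the standard Lindley recursion ${W_{i}}=\max ({W_{i-1}}+{\eta _{i}}-{\zeta _{i}},0)$ for the waiting times and watches the running maximum grow on the scale $\log t/\gamma $.

```python
import math
import random

lam, mu = 1.0, 2.0          # arrival and service intensities, rho = 0.5
gamma = mu - lam            # the root of condition (37)

# E exp(gamma*(eta - zeta)) for eta ~ Exp(mu), zeta ~ Exp(lam), 0 < gamma < mu:
mgf = (mu / (mu - gamma)) * (lam / (lam + gamma))
print(mgf)  # -> 1.0

# Lindley recursion for the M/M/1 waiting times; the running maximum
# should be of order log(t)/gamma, the scale appearing in Corollary 1.
random.seed(7)
t, W, W_max = 0.0, 0.0, 0.0
while t < 300_000.0:
    zeta = random.expovariate(lam)   # interarrival time
    eta = random.expovariate(mu)     # service time
    W = max(W + eta - zeta, 0.0)
    W_max = max(W_max, W)
    t += zeta

print(W_max, math.log(t) / gamma)
```

The two printed numbers should be of the same order; their difference fluctuates on the slower $\log \log t$ and $\log \log \log t$ scales of (38) and (39).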
Example 2 (Queuing system $M/M/m$).
Let us now consider a queuing system with m servers, customers arriving according to a Poisson process with intensity λ, and service times that are independent copies of a random variable η with an exponential distribution
In the standard notation, this queuing system has the type $M/M/m$, see [12, 14].
We impose the following assumption on the parameters λ and μ ensuring existence of the stationary regime:
For $t\ge 0$, let $Q(t)$ denote the length of the queue at time t, that is, the total number of customers in service or pending. Set
In the same way as in Example 1, one can introduce regeneration moments $({S_{k}})$ for the process Q: ${S_{0}}:=0$ and, for $i\in \mathbb{N}$, ${S_{i}}$ is the arrival time of a new customer after the i-th busy period. Let ${T_{i}}$ be the duration of the i-th regeneration cycle and $\bar{Q}({T_{1}})$ be the maximum length of the queue in the first regeneration cycle. Put
In the recent paper [7] the authors established that the function R in (41) satisfies conditions (7) and (8) with
Using Theorem 2 we infer
Corollary 2.
Assume that for a queuing system $M/M/m$, $1\le m<\infty $, the condition (40) is fulfilled. Then
and
Example 3 (Birth and death processes).
Let $X={(X(t))_{t\ge 0}}$ be a birth and death process with parameters
(44)
\[ {\lambda _{n}}=\lambda n+a,\hspace{2em}{\mu _{n}}=\mu n,\hspace{1em}\lambda ,\mu ,a>0,\hspace{2.5pt}n=0,1,2,\dots ,\]
see [14, Ch. 7, $\mathrm{§}6$]. That is, ${(X(t))_{t\ge 0}}$ is a time-homogeneous Markov process such that, for $t\ge 0$, given $\{X(t)=n\}$ the probability of transition to state $n+1$ over a small period of time δ is $(\lambda n+a)\delta +o(\delta )$, $n=0,1,2,3,\dots $, and the probability of transition to $n-1$ is $\mu n\delta +o(\delta )$, $n=1,2,3,\dots $. The parameter a can be interpreted as the infinitesimal intensity of population growth due to immigration. The birth–death process X is usually called the process with linear growth and immigration.
We assume that $X(0)=0$ and
Put
\[ {\theta _{0}}=1,\hspace{2em}{\theta _{k}}={\prod \limits_{i=1}^{k}}\frac{{\lambda _{i-1}}}{{\mu _{i}}},\hspace{1em}k\in \mathbb{N}.\]
It is not difficult to check that condition (45) ensures
and
Under conditions (46) and (47), see [14] and [15], there exists a stationary regime, that is,
with
Further, X is a regenerative process with regeneration moments $({S_{k}})$, where ${S_{0}}=0$ and ${S_{i}}$, $i\in \mathbb{N}$, is the moment of the i-th return to state 0. It is known that
where ${T_{k}}={S_{k}}-{S_{k-1}}$ is the duration of the k-th regeneration cycle, see Eq. (32) in [22]. If (45) holds, then
see [14]. We are interested in the asymptotic behavior of extreme values
Let us show how to apply Theorem 2 to the asymptotic analysis of $\bar{X}(t)$. Firstly, we need to evaluate accurately the sequence $(R(n))$ defined by
It is known, see [3] or Eq. (34) in [22], that
where ${\alpha _{0}}=1$ and ${\alpha _{k}}={\textstyle\prod _{i=1}^{k}}\frac{{\mu _{i}}}{{\lambda _{i}}}$ for $k\in \mathbb{N}$.
Using equalities (44) and (45) we can rewrite ${\alpha _{k}}$ as follows:
(50)
\[ {\alpha _{k}}=\frac{{\beta _{k}}}{{\rho ^{k}}},\hspace{2em}{\beta _{k}}={\prod \limits_{i=1}^{k}}\left(1-\frac{1}{1+i\lambda /a}\right).\]
Further, using Taylor's expansion
\[ \log (1-x)=-x+O({x^{2}}),\hspace{1em}x\to 0,\]
we have
(51)
\[ \log {\beta _{k}}={\sum \limits_{i=1}^{k}}\log \left(1-\frac{1}{1+i\lambda /a}\right)=-{\sum \limits_{i=1}^{k}}\frac{1}{1+i\lambda /a}+{d_{k}},\]
where ${d_{k}}$ has a finite limit, as $k\to \infty $. Combining the relation
(52)
\[ \left|{\sum \limits_{i=1}^{k}}\frac{1}{1+i\lambda /a}-{\sum \limits_{i=1}^{k}}\frac{1}{i\lambda /a}\right|={\sum \limits_{i=1}^{k}}\frac{a}{\lambda i(1+i\lambda /a)}={C_{1}}+o(1),\hspace{1em}k\to \infty ,\]
and the fact
\[ {\sum \limits_{i=1}^{k}}\frac{1}{i}=\log k+\gamma +o(1),\hspace{1em}k\to \infty ,\]
with $\gamma =0.577\dots $ being the Euler–Mascheroni constant, we conclude
\[ {\sum \limits_{i=1}^{k}}\log \left(1-\frac{1}{1+i\lambda /a}\right)=-\frac{a}{\lambda }\log k+{C_{2}}+o(1),\hspace{1em}k\to \infty .\]
Therefore,
(54)
\[ {\alpha _{k}}=\frac{C}{{\rho ^{k}}{k^{a/\lambda }}}(1+o(1)),\hspace{1em}k\to \infty ,\]
where
(55)
\[ C={e^{{C_{2}}}}:=\underset{n\to \infty }{\lim }{n^{a/\lambda }}{\prod \limits_{i=1}^{n}}\bigg(1-\frac{1}{1+i\lambda /a}\bigg).\]
Now we can apply Lemma 5 to obtain
\[ {\Lambda _{n}}={\sum \limits_{k=1}^{n}}\frac{{\rho ^{-k}}}{{k^{a/\lambda }}}=\frac{{\rho ^{-n-1}}}{(\frac{1}{\rho }-1){n^{a/\lambda }}}(1+o(1)),\hspace{1em}n\to \infty .\]
Taking into account equality (54), we obtain
\[ {\sum \limits_{k=0}^{n}}{\alpha _{k}}=\frac{C{\rho ^{-n-1}}}{(1/\rho -1){n^{a/\lambda }}}(1+o(1)),\hspace{1em}n\to \infty .\]
Thus,
and we have the following representation
(56)
\[ R(n)={R_{0}}(n)+{R_{1}}(n),\]
where
\[ {R_{0}}(n)=-n\log \rho -\frac{a}{\lambda }\log n,\hspace{2em}{R_{1}}(n)=-\log \frac{1/\rho -1}{C}-\log \rho +o(1),\hspace{1em}n\to \infty .\]
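The constants in (50)–(55), the asymptotics of ${\textstyle\sum _{k\le n}}{\alpha _{k}}$, and the inversion of ${R_{0}}$ can all be verified numerically. A sketch with arbitrarily chosen parameters ($\lambda =2$, $a=1$, $\mu =3$, so $\rho =2/3$; ${\alpha _{T}}=1$ purely for illustration):

```python
import math

lam, a, mu = 2.0, 1.0, 3.0
rho = lam / mu

def beta(n):
    # beta_n from (50)
    p = 1.0
    for i in range(1, n + 1):
        p *= 1.0 - 1.0 / (1.0 + i * lam / a)
    return p

# (55): C = lim n^{a/lam} * beta_n; the sequence below should stabilize.
vals = [n ** (a / lam) * beta(n) for n in (100, 1_000, 10_000)]
print(vals)
C = vals[-1]

# Sum of alpha_k = beta_k / rho^k against the asymptotics derived above.
n = 200
s = 1.0 + sum(beta(k) / rho ** k for k in range(1, n + 1))  # alpha_0 = 1
approx = C * rho ** (-n - 1) / ((1.0 / rho - 1.0) * n ** (a / lam))
print(s / approx)  # should be close to 1

# Finally, invert R_0(x) = -x log(rho) - (a/lam) log(x) by bisection and
# compare with the leading-order formula A_0(t) ~ log(t/alpha_T)/(-log rho).
alpha_T = 1.0

def R0(x):
    return -x * math.log(rho) - (a / lam) * math.log(x)

def A0(t):
    y = math.log(t / alpha_T)
    lo, hi = 2.0, 1e9  # R_0 is increasing for x >= x_0 = -a/(lam*log(rho))
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if R0(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

t = 1e12
print(A0(t), math.log(t / alpha_T) / (-math.log(rho)))
```

The exact ${A_{0}}(t)$ exceeds the leading-order value by the lower-order term $\frac{a}{\lambda }\log {A_{0}}(t)/(-\log \rho )$, which is visible already at moderate t.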
The function ${R_{0}}(x)=-x\log \rho -\frac{a}{\lambda }\log x$ is increasing for $x\ge {x_{0}}=-\frac{a}{\lambda \log \rho }$ and, furthermore, satisfies condition $(\mathbb{U})$. It remains to find a simple formula for the function ${A_{0}}$ in (10) and (12). To this end, let us write
\[ \log \left(\frac{t}{{\alpha _{T}}}\right)={R_{0}}({A_{0}}(t))=-{A_{0}}(t)\log \rho -\frac{a}{\lambda }\log {A_{0}}(t)={A_{0}}(t)\left(-\log \rho +o(1)\right),\]
as $t\to \infty $. Upon taking logarithms we get
and plugging this back into the initial equality yields
Thus, from Theorem 2 we infer the following.
Corollary 3.
Let ${(X(t))_{t\ge 0}}$ be the birth and death process with parameters ${\lambda _{n}}=\lambda n+a$, ${\mu _{n}}=\mu n$, where $\lambda ,\mu ,a>0$, $n=0,1,2,3,\dots $. Suppose also that (45) holds. Then
and
Let us finally mention, without proof, a statement which follows easily from equations (48), (56) and Theorem 2 in [22].
Corollary 4.
Let ${(X(t))_{t\ge 0}}$ be the birth and death process that satisfies all conditions of Corollary 3. Then
where ${t_{n}^{\ast }}=C{\rho ^{-n}}{n^{-a/\lambda }}x/(1/\rho -1)$, while ${p_{0}}$ and C are defined by (48) and (55). This relation can also be recast in a more transparent way as follows:
where ${X^{-1}}(n)=\inf \{t\in \mathbb{R}:X(t)=n\}$ is the first time when ${(X(t))_{t\ge 0}}$ visits state $n\in \mathbb{N}$, that is, the distribution of ${\rho ^{n}}{n^{a/\lambda }}{X^{-1}}(n)$ converges to an exponential law.
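A Monte Carlo sketch of the last statement (our own illustration; the parameters, target level and sample size are arbitrary): for a moderately large n, the normalized first passage time ${\rho ^{n}}{n^{a/\lambda }}{X^{-1}}(n)$ should already look approximately exponential, so its empirical mean and standard deviation should nearly coincide, whatever the rate of the limiting exponential law is.

```python
import math
import random

random.seed(11)
lam, a, mu = 1.0, 1.0, 2.0   # birth rates lam*n + a, death rates mu*n
rho = lam / mu
n_target = 10
reps = 1_500

def first_passage(n):
    # Gillespie-type simulation of the birth-death process until it hits n
    x, t = 0, 0.0
    while x < n:
        up, down = lam * x + a, mu * x
        total = up + down
        t += random.expovariate(total)
        if random.random() < up / total:
            x += 1
        else:
            x -= 1
    return t

samples = [rho ** n_target * n_target ** (a / lam) * first_passage(n_target)
           for _ in range(reps)]
mean = sum(samples) / reps
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (reps - 1))
print(mean, std, std / mean)  # std/mean near 1 is the exponential signature
```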