1 Introduction
The theory of large deviations gives an asymptotic computation of small probabilities on an exponential scale; see [4] as a reference on this topic. The basic concept of this theory is the large deviation principle, which provides asymptotic bounds for a family of probability measures on the same topological space; these bounds are expressed in terms of a speed function (that tends to infinity) and a lower semicontinuous rate function defined on the topological space.
The term moderate deviations is used for a class of large deviation principles which fills the gap between a convergence to a constant (governed by a large deviation principle with a suitable speed function) and an asymptotic normality result. In view of the examples studied in this paper we explain the concept of moderate deviations in the next Assertion 1.1, and our presentation will be restricted to sequences of real random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$; thus the speed function is a sequence $\{{v_{n}}:n\ge 1\}$ such that ${v_{n}}\to \infty $ (as in the rest of the paper).
Assertion 1.1 ((Classical) moderate deviations).
Let $\{{C_{n}}:n\ge 1\}$ be a sequence of real random variables such that the following asymptotic regimes hold.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge 1\}$ converges in probability to zero, and this convergence is governed by a large deviation principle with speed ${v_{n}}$ and rate function ${I_{\mathrm{LD}}}$ (such that ${I_{\mathrm{LD}}}(x)=0$ if and only if $x=0$);
$\mathbf{R}\mathbf{2}$: $\sqrt{{v_{n}}}{C_{n}}$ converges weakly to a centered normal distribution with (positive) variance ${\sigma ^{2}}$.
Then we talk about moderate deviations when, for every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that
(1)
\[ {a_{n}}\to 0\hspace{1em}\text{and}\hspace{1em}{a_{n}}{v_{n}}\to \infty ,\]
the sequence of random variables $\{\sqrt{{a_{n}}{v_{n}}}{C_{n}}:n\ge 1\}$ satisfies the large deviation principle with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by
(2)
\[ {I_{\mathrm{MD}}}(x)=\frac{{x^{2}}}{2{\sigma ^{2}}}\hspace{1em}\text{for all}\hspace{2.5pt}x\in \mathbb{R}.\]
Moreover, one typically has
(3)
\[ {I^{\prime\prime }_{\mathrm{LD}}}(0)=\frac{1}{{\sigma ^{2}}}.\]
We have the following remarks.
Remark 1.1.
We can recover the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$ in Assertion 1.1 by setting ${a_{n}}=\frac{1}{{v_{n}}}$ and ${a_{n}}=1$ respectively; note that, in both cases, one of the conditions in (1) holds and the other one fails. So the class of large deviation principles in Assertion 1.1 is determined by a family of positive scaling factors $\{{a_{n}}:n\ge 1\}$ that fills the gap between the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$. Moreover, by (1), the speed $1/{a_{n}}$ for the random variables $\{\sqrt{{a_{n}}{v_{n}}}{C_{n}}:n\ge 1\}$ has a lower intensity than the speed ${v_{n}}$ (see $\mathbf{R}\mathbf{1}$).
Remark 1.2.
Concerning the random variables $\{{C_{n}}:n\ge 1\}$ in Assertion 1.1, one often has
\[ {C_{n}}={Z_{n}}-{z_{\infty }},\]
where $\{{Z_{n}}:n\ge 1\}$ is a sequence which converges in probability to ${z_{\infty }}$ for some ${z_{\infty }}\in \mathbb{R}$ (see, for instance, (23) concerning Example 6.1 where ${z_{\infty }}=t$); moreover this convergence is governed by a large deviation principle with some speed ${v_{n}}$, say, and rate function ${I_{Z}}$, and one has ${I_{Z}}(z)=0$ if and only if $z={z_{\infty }}$. Then, for the rate function ${I_{\mathrm{LD}}}$ in Assertion 1.1, one has
\[ {I_{\mathrm{LD}}}(x)={I_{Z}}(x+{z_{\infty }})\hspace{1em}\text{for all}\hspace{2.5pt}x\in \mathbb{R}.\]
Moreover, if we refer to the equality (3) at the end of Assertion 1.1, one typically has ${I^{\prime\prime }_{Z}}({z_{\infty }})=\frac{1}{{\sigma ^{2}}}$ because ${I^{\prime\prime }_{Z}}({z_{\infty }})={I^{\prime\prime }_{\mathrm{LD}}}(0)$.

Now we present a prototype example of the framework in Assertion 1.1, for which one can refer to the law of large numbers and the central limit theorem. Without loss of generality we restrict our attention to the case of centered random variables $\{{X_{n}}:n\ge 1\}$ in order to have a convergence to zero (as in the asymptotic regime $\mathbf{R}\mathbf{1}$ in Assertion 1.1).
Example 1.1.
We set
\[ {C_{n}}:=\frac{{X_{1}}+\cdots +{X_{n}}}{n}\hspace{1em}\text{for all}\hspace{2.5pt}n\ge 1,\]
where $\{{X_{n}}:n\ge 1\}$ is a sequence of i.i.d. centered real random variables. Moreover, we also assume that $\mathbb{E}[{e^{\theta {X_{1}}}}]<\infty $ in a neighborhood of the origin $\theta =0$, and therefore ${\sigma ^{2}}=\mathrm{Var}[{X_{1}}]$ is finite (we also assume that ${\sigma ^{2}}>0$ to avoid trivialities). In such a case the sequence $\{{C_{n}}:n\ge 1\}$ satisfies the large deviation principle with speed ${v_{n}}=n$ and rate function ${I_{\mathrm{LD}}}$ defined by
\[ {I_{\mathrm{LD}}}(x)=\underset{\theta \in \mathbb{R}}{\sup }\left\{\theta x-\log \mathbb{E}\left[{e^{\theta {X_{1}}}}\right]\right\}\hspace{1em}\text{for all}\hspace{2.5pt}x\in \mathbb{R}.\]
This is a consequence of the Cramér theorem (see, e.g., Theorem 2.2.3 in [4]), and one can check that ${I^{\prime\prime }_{\mathrm{LD}}}(0)=\frac{1}{{\sigma ^{2}}}$. Moreover, for every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds, the sequence of random variables $\{\sqrt{{a_{n}}n}{C_{n}}:n\ge 1\}$ satisfies the large deviation principle with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by (2); this is a consequence of Theorem 3.7.1 in [4] with $d=1$. Actually this prototype example can be presented for an arbitrary $d$ (i.e. for multivariate random variables) by taking into account the multivariate version of the Cramér theorem (see, e.g., Theorem 2.2.30 in [4]).

The aim of this paper is to present some examples of noncentral moderate deviations. We use this terminology to mean a class of large deviation principles which fills the gap between a convergence to a constant (governed by a large deviation principle with some speed ${v_{n}}$ and some rate function which uniquely attains the value zero at that constant) and a weak convergence to some non-Gaussian law. This should happen in the same spirit as Assertion 1.1 for some positive scalings $\{{a_{n}}:n\ge 1\}$ such that (1) holds.
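To make Example 1.1 concrete, here is a quick numerical sketch (an illustration, not part of the argument): for standard normal ${X_{i}}$ one has $\sqrt{{a_{n}}n}{C_{n}}\sim N(0,{a_{n}})$ exactly, so the moderate deviation estimate can be checked in closed form with ${\sigma ^{2}}=1$; the choice ${a_{n}}={n^{-1/2}}$ is just one admissible scaling satisfying (1).

```python
import math

# Gaussian case of Example 1.1: sqrt(a_n * n) * C_n is exactly N(0, a_n),
# so the tail probability is (1/2) * erfc(x / sqrt(2 * a_n)).
def scaled_log_tail(x, a_n):
    p = 0.5 * math.erfc(x / math.sqrt(2.0 * a_n))
    return a_n * math.log(p)  # should approach -x**2 / 2 (here sigma^2 = 1)

x = 1.5
vals = [scaled_log_tail(x, n ** -0.5) for n in (10**3, 10**4, 10**5)]
# vals increases towards -x**2/2 = -1.125 as n grows
```

Larger values of n would require working with the logarithm of the normal tail directly, since `math.erfc` underflows once the exponent passes roughly 700.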
The terminology of noncentral moderate deviations appears in three recent results: Proposition 3.3 in [13] (for possibly m-variate random variables), Proposition 4.3 in [15] and Proposition 3.3 in [2]. In the first two cases the weak convergence is trivial because one has a family of identically distributed random variables. Another result is Proposition 2.2 in [12], where the convergence in distribution is not trivial; however the term noncentral moderate deviations does not appear in that paper.
The common line of the examples studied in this paper can be summarized as follows.
Assertion 1.2 (Common line of several examples in this paper).
We consider a sequence of real random variables $\{{C_{n}}:n\ge 1\}$ such that the following asymptotic regimes hold.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge 1\}$ converges in probability to zero, and this convergence is governed by a large deviation principle with speed ${v_{n}}$ (we always have ${v_{n}}\to \infty $) and rate function ${I_{\mathrm{LD}}}$ (such that ${I_{\mathrm{LD}}}(x)=0$ if and only if $x=0$);
$\mathbf{R}\mathbf{2}$: ${v_{n}}{C_{n}}$ converges weakly to a non-Gaussian law.
Moreover, for every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds, the sequence of random variables $\{{a_{n}}{v_{n}}{C_{n}}:n\ge 1\}$ satisfies the large deviation principle with speed $1/{a_{n}}$ and a suitable rate function ${I_{\mathrm{MD}}}$.
Some interesting common features of the examples studied in this paper are given by the following equalities: ${I_{\mathrm{LD}}}(0)={I_{\mathrm{MD}}}(0)=0$ (as in Assertion 1.1),
\[ {I_{\mathrm{MD}}}(x)={I^{\prime }_{\mathrm{LD}}}(0+)x\hspace{2.5pt}\mathrm{or}\hspace{2.5pt}{I_{\mathrm{MD}}}(x)=\infty \hspace{1em}\text{if}\hspace{2.5pt}x>0\]
and
\[ {I_{\mathrm{MD}}}(x)={I^{\prime }_{\mathrm{LD}}}(0-)x\hspace{2.5pt}\mathrm{or}\hspace{2.5pt}{I_{\mathrm{MD}}}(x)=\infty \hspace{1em}\text{if}\hspace{2.5pt}x<0.\]
Not all the noncentral moderate deviation results have these common features; for instance, they do not appear in Proposition 3.3 in [2]. This variety explains why, in our opinion, noncentral moderate deviations deserve to be investigated. We also recall that the common features of the examples studied in this paper can be seen as the analogue of the equality ${I_{\mathrm{MD}}}(x)=\frac{{I^{\prime\prime }_{\mathrm{LD}}}(0)}{2}{x^{2}}$ stated in Assertion 1.1 for the classical moderate deviations (it is a consequence of (2) and (3)).

Now we present the outline of the paper with a very brief description of the examples studied in each section. We start with some preliminaries in Section 2. Section 3 is devoted to Example 3.1 which concerns minima of i.i.d. nonnegative random variables. In Section 4 we study Example 4.1, which is based on maxima of i.i.d. random variables in the maximum domain of attraction (MDA) of the Gumbel distribution (see, e.g., the family of distributions in Theorem 8.13.4 in [3] together with the well-known Fisher–Tippett theorem, e.g., Theorem 8.13.1 in [3]). In Section 5 we study Example 5.1 which concerns the classical occupancy problem (or the coupon collector’s problem); in particular, that is an example with a weak convergence to the Gumbel distribution (as Example 4.1). Finally, in Section 6, we consider Example 6.1 which is inspired by a recent replacement model for random lifetimes in the literature (see, e.g., [5]).
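The linear shape of ${I_{\mathrm{MD}}}$ described above can already be checked in closed form in a special case of Example 3.1 below: for Uniform(0,1) variables one has $F(x)=x$ near the origin, so ${I^{\prime }_{\mathrm{LD}}}(0+)=1$ and $P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\ge x)={(1-x/({a_{n}}n))^{n}}$ exactly. The following sketch (an illustration, not part of the results) uses the admissible scaling ${a_{n}}={n^{-1/2}}$.

```python
import math

# Uniform(0,1) minima: a_n * log P(a_n * n * min >= x)
#   = a_n * n * log(1 - x/(a_n * n))  -->  -x   (a linear rate on x > 0).
def scaled_log_tail(x, n):
    a_n = n ** -0.5
    return a_n * n * math.log(1.0 - x / (a_n * n))

x = 0.5
vals = [scaled_log_tail(x, n) for n in (10**2, 10**4, 10**8)]
# vals -> -x = -0.5
```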
2 Preliminaries
We start with the definition of large deviation principle (see, e.g., [4], pages 4–5). Let $(\mathcal{X},{\tau _{X}})$ be a topological space and let $\{{Z_{n}}:n\ge 1\}$ be a sequence of $\mathcal{X}$-valued random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$. A sequence $\{{v_{n}}:n\ge 1\}$ such that ${v_{n}}\to \infty $ (as $n\to \infty $) is called a speed function, and a lower semicontinuous function $I:\mathcal{X}\to [0,\infty ]$ is called a rate function. Then the sequence $\{{Z_{n}}:n\ge 1\}$ satisfies the large deviation principle (LDP from now on) with speed ${v_{n}}$ and rate function I if
(4)
\[ \underset{n\to \infty }{\limsup }\frac{1}{{v_{n}}}\log P({Z_{n}}\in C)\le -\underset{x\in C}{\inf }I(x)\hspace{1em}\text{for all closed sets}\hspace{2.5pt}C,\]
and
(5)
\[ \underset{n\to \infty }{\liminf }\frac{1}{{v_{n}}}\log P({Z_{n}}\in O)\ge -\underset{x\in O}{\inf }I(x)\hspace{1em}\text{for all open sets}\hspace{2.5pt}O.\]
The rate function I is said to be good if every level set $\{x\in \mathcal{X}:I(x)\le \eta \}$ (for $\eta \ge 0$) is compact.

The following Lemma 2.1 is quite a standard result (we briefly give some hints of the proof for the sake of completeness). All the moderate deviation results in this paper concern the situation presented in Assertion 1.2, and they will be proved by applying Lemma 2.1; more precisely, for every choice of the positive scalings $\{{a_{n}}:n\ge 1\}$ such that (1) holds (with a further condition for Example 6.1, i.e. (24)), we shall have ${s_{n}}=1/{a_{n}}$ and $I={I_{\mathrm{MD}}}$ for some rate function ${I_{\mathrm{MD}}}$. We shall apply Lemma 2.1 also to prove Proposition 6.1 (with ${s_{n}}={v_{n}}=n$ and $I={I_{\mathrm{LD}}}$ for a suitable rate function ${I_{\mathrm{LD}}}$), which provides the LDP for the asymptotic regime $\mathbf{R}\mathbf{1}$ concerning Example 6.1.
Lemma 2.1.
Let $\{{s_{n}}:n\ge 1\}$ be a speed and let $\{{C_{n}}:n\ge 1\}$ be a sequence of real random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$. Moreover let $I:\mathbb{R}\to [0,\infty ]$ be a rate function decreasing on $(-\infty ,0)$, increasing on $(0,\infty )$ and such that $I(x)=0$ if and only if $x=0$. Moreover, assume that:
(6)
\[ \underset{n\to \infty }{\limsup }\frac{1}{{s_{n}}}\log P({C_{n}}\ge x)\le -I(x)\hspace{1em}\textit{for all}\hspace{2.5pt}x>0;\]
(7)
\[ \underset{n\to \infty }{\limsup }\frac{1}{{s_{n}}}\log P({C_{n}}\le x)\le -I(x)\hspace{1em}\textit{for all}\hspace{2.5pt}x<0;\]
(8)
\[ \underset{n\to \infty }{\liminf }\frac{1}{{s_{n}}}\log P({C_{n}}\in O)\ge -I(x)\hspace{1em}\left.\begin{array}{l}\textit{for every}\hspace{2.5pt}x\in \{y\in \mathbb{R}:I(y)<\infty \}\textit{, and}\\ {} \textit{for all open sets}\hspace{2.5pt}O\hspace{2.5pt}\textit{such that}\hspace{2.5pt}x\in O\textit{.}\end{array}\right.\]
Then $\{{C_{n}}:n\ge 1\}$ satisfies the LDP with speed ${s_{n}}$ and rate function I.

Proof.
It is known that the final statement (8) yields the lower bound for open sets (5) (see, e.g., condition (b) with eq. (1.2.8) in [4]). Here we prove that (6) and (7) yield the upper bound for closed sets (4). Such an upper bound trivially holds if C is the empty set or if $0\in C$, and therefore in what follows we assume that C is nonempty and $0\notin C$; then at least one of the sets $C\cap (0,\infty )$ and $C\cap (-\infty ,0)$ is nonempty. For simplicity we assume that both sets $C\cap (0,\infty )$ and $C\cap (-\infty ,0)$ are nonempty (in fact, if one of them is empty, the proof can be readily adapted). Then we can find ${x_{1}}\in C\cap (0,\infty )$ and ${x_{2}}\in C\cap (-\infty ,0)$ such that $C\subset (-\infty ,{x_{2}}]\cup [{x_{1}},\infty )$, and we have
\[ P({C_{n}}\in C)\le P({C_{n}}\ge {x_{1}})+P({C_{n}}\le {x_{2}});\]
thus, by Lemma 1.2.15 in [4] (together with (6) and (7)), we have
\[\begin{aligned}{}\underset{n\to \infty }{\limsup }\frac{1}{{s_{n}}}\log P({C_{n}}\in C)& \le \underset{n\to \infty }{\limsup }\frac{1}{{s_{n}}}\log \{P({C_{n}}\ge {x_{1}})+P({C_{n}}\le {x_{2}})\}\\ {} & =\max \{-I({x_{1}}),-I({x_{2}})\}=-\min \{I({x_{1}}),I({x_{2}})\}.\end{aligned}\]
We conclude the proof noting that $\min \{I({x_{1}}),I({x_{2}})\}={\inf _{x\in C}}I(x)$ by the hypotheses. □

3 An example with minima of i.i.d. nonnegative random variables
In this section we consider the following example.
Example 3.1.
Let $\{{X_{n}}:n\ge 1\}$ be a sequence of i.i.d. real random variables, with a common distribution function F. We assume that F is strictly increasing on $({\alpha _{F}},{\omega _{F}})$, where
\[ {\alpha _{F}}:=\inf \{x\in \mathbb{R}:F(x)>0\}=0\hspace{2.5pt}\text{and}\hspace{2.5pt}{\omega _{F}}:=\sup \{x\in \mathbb{R}:F(x)<1\};\]
so $\{{X_{n}}:n\ge 1\}$ are nonnegative random variables. We also assume that $F(0)=0$, and that there exists
(9)
\[ {F^{\prime }}(0+):=\underset{x\to 0+}{\lim }\frac{F(x)}{x}\in (0,\infty ).\]
In what follows we use the notation ${P^{n}}(\cdot ):={(P(\cdot ))^{n}}$. Throughout this section we set
(10)
\[ {C_{n}}:=\min \{{X_{1}},\dots ,{X_{n}}\}\hspace{1em}\text{for all}\hspace{2.5pt}n\ge 1.\]
Assertion 3.1.
We can recover the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$ in Assertion 1.2 as follows.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge 1\}$ converges in probability to zero (actually it is an a.s. convergence); moreover, by Lemma 1 in [14], $\{{C_{n}}:n\ge 1\}$ satisfies the LDP with speed ${v_{n}}=n$ and rate function ${I_{\mathrm{LD}}}$ defined by
\[ {I_{\mathrm{LD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}-\log (1-F(x))& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\text{otherwise}.\end{array}\right.\]
$\mathbf{R}\mathbf{2}$: ${v_{n}}{C_{n}}=n\min \{{X_{1}},\dots ,{X_{n}}\}$ converges weakly to the exponential distribution with mean $1/{F^{\prime }}(0+)$; in fact, for every $x>0$, by (9), we get
\[\begin{aligned}{}P(n\min \{{X_{1}},\dots ,{X_{n}}\}\le x)& =1-P(n\min \{{X_{1}},\dots ,{X_{n}}\}>x)\\ {} & =1-{P^{n}}\left({X_{1}}>\frac{x}{n}\right)=1-{\left(1-F\left(\frac{x}{n}\right)\right)^{n}}\\ {} & =1-{\left(1-\frac{nF\left(\frac{x}{n}\right)}{n}\right)^{n}}\to 1-{e^{-{F^{\prime }}(0+)x}}\hspace{1em}\textit{as}\hspace{2.5pt}n\to \infty .\end{aligned}\]
Thus we are studying a sequence of random variables that is of interest for concrete models related to the maximum domain of attraction of the Weibull distribution (indeed, the exponential distribution is a particular case of the Weibull distribution). For instance, the random variables $\{{X_{n}}:n\ge 1\}$ could represent wind speed data (to determine the viability of windmills).
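The computation in $\mathbf{R}\mathbf{2}$ can also be illustrated numerically; the following sketch (an illustration, not part of the proof) uses Uniform(0,1) variables, for which ${F^{\prime }}(0+)=1$ and the distribution of $n\min \{{X_{1}},\dots ,{X_{n}}\}$ is available in closed form.

```python
import math

# Uniform(0,1): P(n * min(X_1,...,X_n) <= x) = 1 - (1 - x/n)**n for 0 <= x <= n,
# which converges to the Exp(1) cdf 1 - exp(-x), i.e. mean 1/F'(0+) = 1.
def cdf_scaled_min(x, n):
    return 1.0 - (1.0 - x / n) ** n

x = 2.0
vals = [cdf_scaled_min(x, n) for n in (10, 100, 10**4)]
limit = 1.0 - math.exp(-x)
# vals approaches limit (about 0.8647) as n grows
```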
Now we prove the moderate deviation result.
Proposition 3.1.
For every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds (i.e. ${a_{n}}\to 0$ and ${a_{n}}n\to \infty $), the sequence of random variables $\{{a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}:n\ge 1\}$ satisfies the LDP with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by
\[ {I_{\mathrm{MD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}{F^{\prime }}(0+)x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
Proof.
We apply Lemma 2.1 for every choice of $\{{a_{n}}:n\ge 1\}$ such that (1) holds, with ${s_{n}}=1/{a_{n}}$, $I={I_{\mathrm{MD}}}$ and $\{{C_{n}}:n\ge 1\}$ as in (10); thus it remains to verify conditions (6), (7) and (8).
Proof of (6).
We have to show that, for every $x>0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\ge x)\le -{F^{\prime }}(0+)x.\]
For every $\delta \in (0,x)$ we have
\[\begin{aligned}{}P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\ge x)& ={P^{n}}\left({X_{1}}\ge \frac{x}{{a_{n}}n}\right)\\ {} & ={\left(1-F\left(\frac{x}{{a_{n}}n}-\right)\right)^{n}}\le {\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}\end{aligned}\]
and, by the relation ${\lim \nolimits_{x\to 0+}}F(x)=0$ and the limit in (9),
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\ge x)\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }{a_{n}}n\log \left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\left(-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)=-{F^{\prime }}(0+)(x-\delta );\end{aligned}\]
so we obtain (6) by letting δ go to zero.

Proof of (7).
We have to show that, for every $x<0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\le x)\le -\infty .\]
This trivially holds (as $-\infty \le -\infty $) because the random variables $\{{C_{n}}:n\ge 1\}$ are nonnegative (and therefore $P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\le x)=0$ for every $n\ge 1$).

Proof of (8).
We want to show that, for every $x\ge 0$ and for every open set O such that $x\in O$, we have
\[ \underset{n\to \infty }{\liminf }{a_{n}}\log P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\in O)\ge -{F^{\prime }}(0+)x.\]
The case $x=0$ is immediate; indeed, since ${a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}$ converges in probability to zero by the Slutsky theorem (by ${a_{n}}\to 0$ and the weak convergence in $\mathbf{R}\mathbf{2}$ in Assertion 3.1), we trivially have $0\ge 0$. So, from now on, we suppose that $x>0$ and we take $\delta \in (0,x)$ small enough to have
\[ (x-\delta ,x+\delta )\subset O.\]
Then
\[\begin{aligned}{}& P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\in O)\ge P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\in (x-\delta ,x+\delta ))\\ {} & \hspace{1em}=P\left(\min \{{X_{1}},\dots ,{X_{n}}\}\in \left(\frac{x-\delta }{{a_{n}}n},\frac{x+\delta }{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}={P^{n}}\left({X_{1}}>\frac{x-\delta }{{a_{n}}n}\right)-{P^{n}}\left({X_{1}}\ge \frac{x+\delta }{{a_{n}}n}\right)\\ {} & \hspace{1em}={\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}-{\left(1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)\right)^{n}}\\ {} & \hspace{1em}={\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}\left(1-\frac{{\left(1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)\right)^{n}}}{{\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}}\right)\\ {} & \hspace{1em}={\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}\left\{1-\exp \left(n\log \left(\frac{1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\right)\right)\right\}.\end{aligned}\]
Now notice that
\[\begin{aligned}{}& \log \left(\frac{1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\right)\\ {} & \hspace{1em}=\log \left(1+\frac{F\left(\frac{x-\delta }{{a_{n}}n}\right)-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\right)\le \frac{F\left(\frac{x-\delta }{{a_{n}}n}\right)-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\\ {} & \hspace{1em}=-\frac{F\left(\frac{x+\delta }{{a_{n}}n}-\right)-F\left(\frac{x-\delta }{{a_{n}}n}\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\le -\frac{F\left(\frac{x}{{a_{n}}n}\right)-F\left(\frac{x-\delta }{{a_{n}}n}\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\end{aligned}\]
and, again by the relation ${\lim \nolimits_{x\to 0+}}F(x)=0$ and the limit in (9),
\[ n\log \left(\frac{1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\right)\le -\frac{{a_{n}}nF\left(\frac{x}{{a_{n}}n}\right)-{a_{n}}nF\left(\frac{x-\delta }{{a_{n}}n}\right)}{{a_{n}}\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)}\to -\infty \hspace{1em}\text{as}\hspace{2.5pt}n\to \infty ,\]
because ${a_{n}}\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)\to 0$ and
\[ {a_{n}}nF\left(\frac{x}{{a_{n}}n}\right)-{a_{n}}nF\left(\frac{x-\delta }{{a_{n}}n}\right)\to {F^{\prime }}(0+)(x-(x-\delta ))={F^{\prime }}(0+)\delta >0.\]
In conclusion, we get
\[\begin{aligned}{}& \underset{n\to \infty }{\liminf }{a_{n}}\log P({a_{n}}n\min \{{X_{1}},\dots ,{X_{n}}\}\in O)\\ {} & \hspace{1em}\ge \underset{n\to \infty }{\liminf }{a_{n}}\log {\left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}\\ {} & \hspace{2em}+\underset{n\to \infty }{\liminf }{a_{n}}\log \left(1-\exp \left(n\log \left(\frac{1-F\left(\frac{x+\delta }{{a_{n}}n}-\right)}{1-F\left(\frac{x-\delta }{{a_{n}}n}\right)}\right)\right)\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }{a_{n}}n\log \left(1-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }{a_{n}}n\left(-F\left(\frac{x-\delta }{{a_{n}}n}\right)\right)=-{F^{\prime }}(0+)(x-\delta );\end{aligned}\]
so we obtain (8) by letting δ go to zero. □

4 An example with maxima of i.i.d. random variables in the MDA of Gumbel distribution
In this section we consider the following example.
Example 4.1.
Let $\{{X_{n}}:n\ge 1\}$ be a sequence of i.i.d. real random variables, with a common distribution function F. Let ${\omega _{F}}$ be as in Example 3.1, and assume that ${\omega _{F}}=\infty $. We also assume that, for x large enough, F is strictly increasing with positive density f. Moreover let w be the function defined by
\[ w(x):=\frac{\bar{F}(x)}{f(x)}\hspace{1em}\text{(for}\hspace{2.5pt}x\hspace{2.5pt}\text{large enough), where}\hspace{2.5pt}\bar{F}(x):=1-F(x),\]
and assume that w is differentiable for x large enough, ${\lim \nolimits_{x\to \infty }}{w^{\prime }}(x)=0$ and w is a regularly varying function with exponent $1-\mu $ for some $\mu >0$, i.e. ${\lim \nolimits_{x\to \infty }}\frac{w(tx)}{w(x)}={t^{1-\mu }}$ for all $t>0$. So, it is well known that
(11)
\[ w(x)={x^{1-\mu }}L(x)\]
for a suitable slowly varying function L, i.e. a function such that
\[ \underset{x\to \infty }{\lim }\frac{L(tx)}{L(x)}=1\hspace{1em}\text{for all}\hspace{2.5pt}t>0\]
(this can be immediately checked by the definitions of regularly varying and slowly varying functions).
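As a small self-contained illustration (with a hypothetical choice of L, not one of the distributions treated in this paper), these definitions can be checked numerically, e.g. with $L(x)=\log x$ and $\mu =2$:

```python
import math

mu = 2.0

def L(x):
    return math.log(x)             # slowly varying (illustrative choice)

def w(x):
    return x ** (1.0 - mu) * L(x)  # then w is regularly varying, exponent 1 - mu

t = 3.0
ratios_L = [L(t * x) / L(x) for x in (1e2, 1e6, 1e12)]  # -> 1
ratios_w = [w(t * x) / w(x) for x in (1e2, 1e6, 1e12)]  # -> t**(1 - mu) = 1/3
```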
Here we list some particular cases in which w is a regularly varying function with exponent $1-\mu $ for some $\mu >0$: the standard normal distribution (see, e.g., [18] (pp. 48, 50–51, 88–90) for the asymptotic behavior of $w(x)$), and the Gamma, Weibull and logistic distributions.
We need several consequences of the assumptions in Example 4.1. We start with some results concerning the function L and the value μ (see (11)). In particular we provide an estimate for
(12)
\[ \ell :=\underset{x\to \infty }{\limsup }\left(-\frac{L(x)\log \bar{F}(x)}{{x^{\mu }}}\right).\]

Lemma 4.1.
Under the assumptions in Example 4.1 we have ${\lim \nolimits_{x\to \infty }}\frac{{x^{\mu }}}{L(x)}=\infty $.
Proof.
It is known (see, e.g., [8], Chapter VIII, Section 8, Lemma 2, p. 277) that, for every $\varepsilon >0$, there exists ${x_{\varepsilon }}>0$ such that
\[ {x^{-\varepsilon }} < L(x) < {x^{\varepsilon }}\hspace{1em}\text{for all}\hspace{2.5pt}x > {x_{\varepsilon }}.\]
Then, if we take $\varepsilon <\mu $, we get
\[ \frac{{x^{\mu }}}{L(x)}>{x^{\mu -\varepsilon }}\hspace{1em}\text{for all}\hspace{2.5pt}x>{x_{\varepsilon }},\]
and we immediately get the desired limit by letting x go to infinity. □

Lemma 4.2.
Under the assumptions in Example 4.1, the quantity ℓ in (12) satisfies $\ell \le \frac{1}{\mu }$.

Proof.
Recall from (11) that $\frac{\bar{F}(x)}{f(x)}={x^{1-\mu }}L(x)$; thus $L(x)$ does not vanish because we have $\bar{F}(x)\ne 0$ for all x since ${\omega _{F}}=\infty $. Therefore, for some ${x_{1}}$, we get
\[ \frac{f(x)}{\bar{F}(x)}=\frac{{x^{\mu -1}}}{L(x)}\hspace{1em}\text{for all}\hspace{2.5pt}x>{x_{1}},\]
which yields
\[ \underset{-\log \bar{F}(x)+\log \bar{F}({x_{1}})}{\underbrace{{\int _{{x_{1}}}^{x}}\frac{f(t)}{\bar{F}(t)}dt}}={\int _{{x_{1}}}^{x}}\frac{{t^{\mu -1}}}{L(t)}dt.\]
Recall also the so-called Potter bound (see, e.g., Theorem 1.5.6(i) in [3]): for every $A>1$ and $\delta >0$ there exists ${x_{0}}={x_{0}}(A,\delta )$ such that
\[ \frac{L(y)}{L(z)}\le A\max \left\{{\left(\frac{z}{y}\right)^{\delta }},{\left(\frac{y}{z}\right)^{\delta }}\right\}\hspace{1em}\text{for all}\hspace{2.5pt}y,z>{x_{0}}.\]
Then we have
\[ -\frac{L(x)\log \bar{F}(x)}{{x^{\mu }}}+\frac{L(x)\log \bar{F}({x_{1}})}{{x^{\mu }}}=\frac{L(x)}{{x^{\mu }}}{\int _{{x_{1}}}^{{x_{0}}}}\frac{{t^{\mu -1}}}{L(t)}dt+\frac{L(x)}{{x^{\mu }}}{\int _{{x_{0}}}^{x}}\frac{{t^{\mu -1}}}{L(t)}dt;\]
moreover, by the Potter bound and some computations, we can estimate the last term as
\[ \frac{L(x)}{{x^{\mu }}}{\int _{{x_{0}}}^{x}}\frac{{t^{\mu -1}}}{L(t)}dt\le \frac{A}{{x^{\mu }}}{\int _{{x_{0}}}^{x}}{\left(\frac{x}{t}\right)^{\delta }}{t^{\mu -1}}dt=\frac{A}{\mu -\delta }\left(1-{\left(\frac{{x_{0}}}{x}\right)^{\mu -\delta }}\right).\]
Then, by the definition of ℓ in (12) and by Lemma 4.1, we get $\ell \le \frac{A}{\mu -\delta }$ by letting x go to infinity; so we conclude by the arbitrariness of $A>1$ and $\delta \in (0,\mu )$. □

Remark 4.1 (A discussion on the inequality in Lemma 4.2).
If we consider the four distributions listed above for which the function w is regularly varying with exponent $1-\mu $ for some $\mu >0$, we have
(13)
\[ \underset{x\to \infty }{\lim }L(x)=L(\infty )\hspace{1em}\text{for some}\hspace{2.5pt}L(\infty )\in (0,\infty );\]
indeed we have $L(\infty )=1$ for the standard normal, Gamma and logistic distributions, and $L(\infty )=1/a$ for the Weibull distribution (with shape parameter a). Then, by the L’Hôpital rule we have
\[\begin{aligned}{}\underset{x\to \infty }{\lim }\frac{\log \bar{F}(x)}{{x^{\mu }}}& =\underset{x\to \infty }{\lim }\frac{-f(x)/\bar{F}(x)}{\mu {x^{\mu -1}}}\\ {} & =-\underset{x\to \infty }{\lim }\frac{1/w(x)}{\mu {x^{\mu -1}}}=-\underset{x\to \infty }{\lim }\frac{1}{\mu L(x)}=-\frac{1}{\mu L(\infty )},\end{aligned}\]
and therefore
\[ \underset{x\to \infty }{\lim }-\frac{L(x)\log \bar{F}(x)}{{x^{\mu }}}=-\underset{x\to \infty }{\lim }L(x)\underset{x\to \infty }{\lim }\frac{\log \bar{F}(x)}{{x^{\mu }}}=\frac{1}{\mu }.\]
In conclusion, we have shown that the limit of $-\frac{L(x)\log \bar{F}(x)}{{x^{\mu }}}$ as $x\to \infty $ exists if (13) holds (as it happens for the four distributions listed above). We are not aware of cases in which the inequality in Lemma 4.2 is strict.

Some further preliminaries are needed. Firstly, under the assumptions in Example 4.1, the following quantities are well defined for n large enough:
\[ {m_{n}}\hspace{2.5pt}\text{defined by}\hspace{2.5pt}\bar{F}({m_{n}})=\frac{1}{n},\]
and (in one of the following equalities we take into account (11))
(14)
\[ {h_{n}}:={m_{n}}nf({m_{n}})={m_{n}}\frac{f({m_{n}})}{\bar{F}({m_{n}})}=\frac{{m_{n}}}{w({m_{n}})}=\frac{{m_{n}}}{{m_{n}^{1-\mu }}L({m_{n}})}=\frac{{m_{n}^{\mu }}}{L({m_{n}})}.\]
The following lemmas provide some properties of the function w and the sequence $\{{h_{n}}:n\ge 1\}$.

Lemma 4.3.
Under the assumptions in Example 4.1 we have $w({x_{n}})\sim w({y_{n}})$ for all sequences ${x_{n}},{y_{n}}\to \infty $ such that ${x_{n}}\sim {y_{n}}$.
Proof.
By the well-known Karamata’s representation of slowly varying functions (see, e.g., Theorem 1.3.1 in [3]), there exists $b>0$ such that the function L introduced above can be written as
\[ L(x)={c_{1}}(x)\exp \left({\int _{b}^{x}}\frac{{c_{2}}(t)}{t}dt\right)\hspace{1em}\text{for all}\hspace{2.5pt}x\ge b,\]
where ${c_{1}}$ and ${c_{2}}$ are suitable functions such that ${c_{1}}(x)$ tends to some finite limit and ${c_{2}}(x)\to 0$ (as $x\to \infty $). Then we have to prove that
\[ \frac{w({x_{n}})}{w({y_{n}})}=\frac{{c_{1}}({x_{n}})\exp \left({\textstyle\textstyle\int _{b}^{{x_{n}}}}\frac{{c_{2}}(t)}{t}dt\right){x_{n}^{1-\mu }}}{{c_{1}}({y_{n}})\exp \left({\textstyle\textstyle\int _{b}^{{y_{n}}}}\frac{{c_{2}}(t)}{t}dt\right){y_{n}^{1-\mu }}}\to 1\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\]
By the hypotheses we only need to prove that
\[ \frac{\exp \left({\textstyle\textstyle\int _{b}^{{x_{n}}}}\frac{{c_{2}}(t)}{t}dt\right)}{\exp \left({\textstyle\textstyle\int _{b}^{{y_{n}}}}\frac{{c_{2}}(t)}{t}dt\right)}=\exp \left({\int _{{y_{n}}}^{{x_{n}}}}\frac{{c_{2}}(t)}{t}dt\right)\to 1,\]
which amounts to
\[ \underset{n\to \infty }{\lim }{\int _{{y_{n}}}^{{x_{n}}}}\frac{{c_{2}}(t)}{t}dt=0.\]
Let $\delta \in (0,1)$ and $\varepsilon \in (0,\frac{\delta }{1+\delta })$ be arbitrarily fixed. Then there exists ${n_{\varepsilon }}\ge 1$ such that $|{c_{2}}(t)|<\varepsilon $ for $t>{n_{\varepsilon }}$ and $1-\varepsilon <\frac{{x_{n}}}{{y_{n}}} < 1+\varepsilon $ for $n>{n_{\varepsilon }}$. So, for n large enough to have ${x_{n}}\wedge {y_{n}}>{n_{\varepsilon }}$, we get
\[ \left|{\int _{{y_{n}}}^{{x_{n}}}}\frac{{c_{2}}(t)}{t}dt\right|\le {\int _{{x_{n}}\wedge {y_{n}}}^{{x_{n}}\vee {y_{n}}}}\frac{|{c_{2}}(t)|}{t}dt\le \varepsilon {\int _{{x_{n}}\wedge {y_{n}}}^{{x_{n}}\vee {y_{n}}}}\frac{1}{t}dt=\varepsilon \log \frac{{x_{n}}\vee {y_{n}}}{{x_{n}}\wedge {y_{n}}}\]
and
\[ 1\le \frac{{x_{n}}\vee {y_{n}}}{{x_{n}}\wedge {y_{n}}}=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{x_{n}}}{{y_{n}}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}{x_{n}}\ge {y_{n}}\\ {} \frac{{y_{n}}}{{x_{n}}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}{x_{n}} < {y_{n}}\end{array}\right.\le \left\{\begin{array}{l@{\hskip10.0pt}l}1+\varepsilon & \hspace{2.5pt}\text{if}\hspace{2.5pt}{x_{n}}\ge {y_{n}}\\ {} \frac{1}{1-\varepsilon }& \hspace{2.5pt}\text{if}\hspace{2.5pt}{x_{n}} < {y_{n}}\end{array}\right.<1+\delta .\]
The proof is complete. □

Lemma 4.4.
Under the assumptions in Example 4.1 we have ${h_{n}}\to \infty $ and ${h_{n}}\sim \mu \log n$ (as $n\to \infty $).

Proof.
Firstly we recall that ${h_{n}}=\frac{{m_{n}^{\mu }}}{L({m_{n}})}$ (see (14)); then ${h_{n}}\to \infty $ by taking into account that ${m_{n}}\to \infty $ and by Lemma 4.1. Moreover, we have
\[\begin{aligned}{}{\int _{0}^{{m_{n}}}}\frac{1}{w(t)}dt& ={\int _{0}^{{m_{n}}}}\frac{f(t)}{\bar{F}(t)}dt\\ {} & =-{\int _{0}^{{m_{n}}}}\frac{d}{dt}\log \bar{F}(t)dt=-\log \bar{F}({m_{n}})+\log \bar{F}(0)=\log n+\log \bar{F}(0);\end{aligned}\]
then ${h_{n}}\sim \mu \log n$ if and only if ${h_{n}}\sim \mu {\textstyle\int _{0}^{{m_{n}}}}\frac{1}{w(t)}dt$, which amounts to
(15)
\[ \underset{n\to \infty }{\lim }\frac{\mu }{{h_{n}}}{\int _{0}^{{m_{n}}}}\frac{1}{w(t)}dt=1.\]
So we prove the lemma showing that (15) holds. We take $\eta >0$ and we write
\[ \frac{\mu }{{h_{n}}}{\int _{0}^{{m_{n}}}}\frac{1}{w(t)}dt=\underset{=:{I_{n}^{(1)}}(\eta )}{\underbrace{\frac{\mu }{{h_{n}}}{\int _{0}^{\eta {m_{n}}}}\frac{1}{w(t)}dt}}+\underset{=:{I_{n}^{(2)}}(\eta )}{\underbrace{\frac{\mu }{{h_{n}}}{\int _{\eta {m_{n}}}^{{m_{n}}}}\frac{1}{w(t)}dt}}.\]
Now we estimate ${I_{n}^{(1)}}(\eta )$ and ${I_{n}^{(2)}}(\eta )$ separately.

• We have (make the change of variable $r=\bar{F}(t)$)
\[\begin{aligned}{}{I_{n}^{(1)}}(\eta )& =\frac{\mu }{{h_{n}}}{\int _{0}^{\eta {m_{n}}}}\frac{f(t)}{\bar{F}(t)}dt\\ {} & =\frac{\mu }{{h_{n}}}{\int _{\bar{F}(\eta {m_{n}})}^{\bar{F}(0)}}\frac{dr}{r}=\frac{\mu }{{h_{n}}}(\log \bar{F}(0)-\log \bar{F}(\eta {m_{n}})),\end{aligned}\]
where $\bar{F}(0),\bar{F}(\eta {m_{n}})>0$ because ${\omega _{F}}=\infty $. Thus
\[ 0\le \underset{n\to \infty }{\liminf }{I_{n}^{(1)}}(\eta )\le \underset{n\to \infty }{\limsup }{I_{n}^{(1)}}(\eta )\le {\eta ^{\mu }}\mu \ell ,\]
because $\frac{\mu }{{h_{n}}}\log \bar{F}(0)\to 0$ (since ${h_{n}}\to \infty $), $-\frac{\mu }{{h_{n}}}\log \bar{F}(\eta {m_{n}})\ge 0$,
\[\begin{aligned}{}& -\frac{\mu }{{h_{n}}}\log \bar{F}(\eta {m_{n}})=-\frac{\mu }{{m_{n}^{\mu }}/L({m_{n}})}\log \bar{F}(\eta {m_{n}})\\ {} & \hspace{1em}=-\mu {\eta ^{\mu }}\frac{L({m_{n}})}{{(\eta {m_{n}})^{\mu }}}\log \bar{F}(\eta {m_{n}})=\mu {\eta ^{\mu }}\frac{L({m_{n}})}{L(\eta {m_{n}})}\left(-\frac{L(\eta {m_{n}})\log \bar{F}(\eta {m_{n}})}{{(\eta {m_{n}})^{\mu }}}\right),\end{aligned}\]
and by taking into account that ${m_{n}}\to \infty $, L is a slowly varying function, and Lemma 4.2.

• We have (make the change of variable $u=\frac{t}{{m_{n}}}$)
\[\begin{aligned}{}{I_{n}^{(2)}}(\eta )& =\frac{\mu }{{m_{n}^{\mu }}/L({m_{n}})}{\int _{\eta {m_{n}}}^{{m_{n}}}}\frac{1}{{t^{1-\mu }}L(t)}dt\\ {} & =\frac{\mu L({m_{n}})}{{m_{n}^{\mu }}}{\int _{\eta }^{1}}\frac{{m_{n}}du}{{({m_{n}}u)^{1-\mu }}L({m_{n}}u)}={\int _{\eta }^{1}}\frac{L({m_{n}})}{L({m_{n}}u)}\mu {u^{\mu -1}}du,\end{aligned}\]
and, since $\frac{L({m_{n}})}{L({m_{n}}u)}\to 1$ uniformly on compact subsets of $(0,\infty )$ (with respect to u) by Theorem 1.2.1 in [3] (here we again take into account that ${m_{n}}\to \infty $), for all $\rho >0$ there exists ${n_{0}}={n_{0}}(\eta ,\rho )$ such that
\[ 1-\rho \le \frac{L({m_{n}})}{L({m_{n}}u)}\le 1+\rho \hspace{1em}\text{for all}\hspace{2.5pt}u\in [\eta ,1]\]
for all $n>{n_{0}}$, and therefore
\[ (1-\rho )(1-{\eta ^{\mu }})\le {I_{n}^{(2)}}(\eta )\le (1+\rho )(1-{\eta ^{\mu }})\hspace{1em}\text{(note that}\hspace{2.5pt}{\textstyle\int _{\eta }^{1}}\mu {u^{\mu -1}}du=1-{\eta ^{\mu }}\text{)}.\]

Finally we combine the estimates for ${I_{n}^{(1)}}(\eta )$ and ${I_{n}^{(2)}}(\eta )$, and we get (15) by the arbitrariness of $\eta >0$ and $\rho >0$. This completes the proof. □
Throughout this section we set
(16)
\[ {C_{n}}:=\frac{{M_{n}}}{{m_{n}}}-1\hspace{1em}\text{for all}\hspace{2.5pt}n\ge {n_{0}}\text{, for some}\hspace{2.5pt}{n_{0}}\text{, where}\hspace{2.5pt}{M_{n}}:=\max \{{X_{1}},\dots ,{X_{n}}\};\]
here we have to consider ${n_{0}}$ large enough in order to have a well-defined ${m_{n}}$ for $n\ge {n_{0}}$.
Assertion 4.1.
We can recover the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$ in Assertion 1.2 as follows.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge {n_{0}}\}$ converges in probability to zero (actually the authors have checked the almost sure convergence with an argument based on Theorem 4.4.4 in [9], p. 268). Moreover, as an immediate consequence of Proposition 3.1 in [11], $\{{C_{n}}+1:n\ge {n_{0}}\}$ satisfies the LDP with speed ${v_{n}}=\mu \log n$ (or equivalently ${v_{n}}={h_{n}}$ by Lemma 4.4) and rate function J defined by
\[ J(y):=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{y^{\mu }}-1}{\mu }& \hspace{2.5pt}\textit{if}\hspace{2.5pt}y\ge 1,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
Then, since we deal with a sequence of shifted random variables, we deduce that $\{{C_{n}}:n\ge {n_{0}}\}$ satisfies the LDP with speed ${v_{n}}={h_{n}}$ and rate function ${I_{\mathrm{LD}}}$ defined by ${I_{\mathrm{LD}}}(x):=J(x+1)$, i.e.
\[ {I_{\mathrm{LD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{(1+x)^{\mu }}-1}{\mu }& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
$\mathbf{R}\mathbf{2}$: ${v_{n}}{C_{n}}={h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)$ converges weakly to the Gumbel distribution by a well-known result by von Mises (see, e.g., Theorem 8.13.7 in [3]).
Thus we are studying a sequence of random variables that is of interest for any concrete model related to the maximum domain of attraction of the Gumbel distribution. A possible example is when the random variables $\{{X_{n}}:n\ge 1\}$ represent monthly or annual maxima of daily rainfall or of river discharge volumes.
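Both asymptotic regimes can be glimpsed numerically. The sketch below is our own illustration with standard exponential data (a choice of ours, not from the paper): then $\bar{F}(t)={e^{-t}}$, so $\mu =1$, $L\equiv 1$ and ${m_{n}}={h_{n}}=\log n$, and one expects ${M_{n}}/{m_{n}}$ to concentrate at 1 while ${M_{n}}-\log n$ is approximately Gumbel.

```python
import math
import random

# Monte Carlo sketch (our illustration): for i.i.d. standard exponentials,
# bar F(t) = exp(-t), hence m_n = h_n = log n. The maximum of n standard
# exponentials can be sampled in one shot by inverting its distribution
# function F^n:  M_n = -log(1 - U^(1/n)),  U uniform on (0, 1).
random.seed(0)
n, reps = 100_000, 5_000
m_n = math.log(n)
maxima = [-math.log(1.0 - random.random() ** (1.0 / n)) for _ in range(reps)]
mean_ratio = sum(M / m_n for M in maxima) / reps      # should be near 1
mean_centered = sum(M - m_n for M in maxima) / reps   # Gumbel mean ~ 0.5772
print(round(mean_ratio, 3), round(mean_centered, 3))
```

The empirical mean of ${M_{n}}-\log n$ should be near Euler's constant $\gamma \approx 0.5772$, the mean of the standard Gumbel distribution.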
Now we prove the moderate deviation result. We shall see that, for this example, the rate functions ${I_{\mathrm{LD}}}$ and ${I_{\mathrm{MD}}}$ coincide when $\mu =1$.
Proposition 4.1.
For every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds (i.e. ${a_{n}}\to 0$ and ${a_{n}}{h_{n}}\to \infty $), the sequence of random variables $\{{a_{n}}{h_{n}}(\frac{{M_{n}}}{{m_{n}}}-1):n\ge 1\}$ satisfies the LDP with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by
\[ {I_{\mathrm{MD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
Proof.
We apply Lemma 2.1 for every choice of $\{{a_{n}}:n\ge 1\}$ such that (1) holds, with ${s_{n}}=1/{a_{n}}$, $I={I_{\mathrm{MD}}}$ and $\{{C_{n}}:n\ge {n_{0}}\}$ as in (16).
□
Proof of (6).
We have to show that, for every $x > 0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\ge x\right)\le -x.\]
For ${m^{\prime }_{n}}:=(\frac{x}{{a_{n}}{h_{n}}}+1){m_{n}}$ we get
\[\begin{aligned}{}& P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\ge x\right)=P\left({M_{n}}\ge {m^{\prime }_{n}}\right)\\ {} & \hspace{1em}=1-P\left({M_{n}} < {m^{\prime }_{n}}\right)=1-{F^{n}}\left({m^{\prime }_{n}}\right)=1-{\left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)^{n}}\\ {} & \hspace{1em}=1-\exp \left(n\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)\right)\le -n\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right).\end{aligned}\]
Thus
\[\begin{aligned}{}& {a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\ge x\right)\le {a_{n}}\log \left(-n\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)\right)\\ {} & \hspace{1em}={a_{n}}\log n+{a_{n}}\log \left(\frac{\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)}{-\bar{F}\left({m^{\prime }_{n}}\right)}\right)+{a_{n}}\log \bar{F}\left({m^{\prime }_{n}}\right)\end{aligned}\]
and, noting that
\[ \underset{n\to \infty }{\lim }{a_{n}}\log \left(\frac{\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)}{-\bar{F}\left({m^{\prime }_{n}}\right)}\right)=0,\]
because ${m^{\prime }_{n}}\to \infty $ (since ${m_{n}}\to \infty $), we obtain
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\ge x\right)\le \underset{n\to \infty }{\limsup }{a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right).\]
Now we apply the Lagrange theorem (also known as the mean value theorem) to the function $g(z)=\log \bar{F}(z)$; so there exists ${\xi _{n}}\in \left({m_{n}},{m^{\prime }_{n}}\right)$ such that
\[ \log \bar{F}\left({m^{\prime }_{n}}\right)=\log \bar{F}({m_{n}})-\frac{f({\xi _{n}})}{\bar{F}({\xi _{n}})}\left({m^{\prime }_{n}}-{m_{n}}\right).\]
Then, by the definitions of ${m_{n}}$, w and ${h_{n}}$, we get
\[\begin{aligned}{}\log \bar{F}\left({m^{\prime }_{n}}\right)& =-\log n-\frac{{m_{n}}x}{w({\xi _{n}}){a_{n}}{h_{n}}}=-\log n-\frac{x}{w({\xi _{n}}){a_{n}}nf({m_{n}})}\\ {} & =-\log n-\frac{w({m_{n}})x}{w({\xi _{n}}){a_{n}}n\bar{F}({m_{n}})}=-\log n-\frac{w({m_{n}})x}{w({\xi _{n}}){a_{n}}},\end{aligned}\]
and therefore
\[ {a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)=-\frac{xw({m_{n}})}{w({\xi _{n}})}.\]
So we get
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\ge x\right)\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }{a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)\le -x\end{aligned}\]
by Lemma 4.3 with ${x_{n}}={m_{n}}$ and ${y_{n}}={\xi _{n}}$ (indeed ${\xi _{n}}\sim {m_{n}}$ because
\[ 1\le \frac{{\xi _{n}}}{{m_{n}}}\le 1+\frac{x}{{a_{n}}{h_{n}}}\]
by construction, and $1+\frac{x}{{a_{n}}{h_{n}}}\to 1$). Thus (6) holds.
Proof of (7).
We have to show that, for every $x<0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\le x\right)\le -\infty .\]
For ${m^{\prime }_{n}}:=(\frac{x}{{a_{n}}{h_{n}}}+1){m_{n}}$ we get
\[ P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\le x\right)=P\left({M_{n}}\le {m^{\prime }_{n}}\right)={F^{n}}\left({m^{\prime }_{n}}\right)={\left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)^{n}}.\]
Thus
\[\begin{aligned}{}& {a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\le x\right)\le {a_{n}}n\log \left(1-\bar{F}\left({m^{\prime }_{n}}\right)\right)\sim -{a_{n}}n\bar{F}\left({m^{\prime }_{n}}\right)\\ {} & \hspace{1em}=-\exp \left(\log \left({a_{n}}n\bar{F}\left({m^{\prime }_{n}}\right)\right)\right)=-\exp \left(\log {a_{n}}+\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)\\ {} & \hspace{1em}=-\exp \left(\log {a_{n}}+\frac{{a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)}{{a_{n}}}\right).\end{aligned}\]
Now we can repeat the computations above in the proof of (6) with some slight changes (we mean the part with the application of the Lagrange theorem; the details are omitted), and we find
\[ \underset{n\to \infty }{\lim }{a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)=-x>0,\]
and therefore
\[ \underset{n\to \infty }{\lim }-\exp \left(\log {a_{n}}+\frac{{a_{n}}\left(\log n+\log \bar{F}\left({m^{\prime }_{n}}\right)\right)}{{a_{n}}}\right)=-\infty .\]
Thus (7) holds.
Proof of (8).
We want to show that, for every $x\ge 0$ and for every open set O such that $x\in O$, we have
\[ \underset{n\to \infty }{\liminf }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\in O\right)\ge -x.\]
The case $x=0$ is immediate; indeed, since ${a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)$ converges in probability to zero by the Slutsky theorem (by ${a_{n}}\to 0$ and the weak convergence in $\mathbf{R}\mathbf{2}$ in Assertion 4.1), the desired bound reduces to the trivial inequality $0\ge 0$. So, from now on, we suppose that $x>0$ and we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta ]\subseteq O.\]
Then, for ${m_{n}^{(\pm )}}:=\left(1+\frac{x\pm \delta }{{a_{n}}{h_{n}}}\right){m_{n}}$, we get
\[\begin{aligned}{}& P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\in O\right)\ge P\left(x-\delta < {a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\le x+\delta \right)\\ {} & \hspace{1em}=P\left({m_{n}^{(-)}} < {M_{n}}\le {m_{n}^{(+)}}\right)={F^{n}}\left({m_{n}^{(+)}}\right)-{F^{n}}\left({m_{n}^{(-)}}\right)\\ {} & \hspace{1em}={\left(1-\bar{F}\left({m_{n}^{(+)}}\right)\right)^{n}}-{\left(1-\bar{F}\left({m_{n}^{(-)}}\right)\right)^{n}}.\end{aligned}\]
Now we apply the Lagrange theorem to the function $g(z)={(1-\bar{F}(z))^{n}}$; so there exists ${\xi _{n}}\in \left({m_{n}^{(-)}},{m_{n}^{(+)}}\right)$ such that
\[\begin{aligned}{}& {\left(1-\bar{F}\left({m_{n}^{(+)}}\right)\right)^{n}}-{\left(1-\bar{F}\left({m_{n}^{(-)}}\right)\right)^{n}}\\ {} & \hspace{1em}=n{\left(1-\bar{F}\left({\xi _{n}}\right)\right)^{n-1}}f({\xi _{n}})\underset{=\frac{2\delta {m_{n}}}{{a_{n}}{h_{n}}}}{\underbrace{\left({m_{n}^{(+)}}-{m_{n}^{(-)}}\right)}}.\end{aligned}\]
Then, by the definition of ${h_{n}}$, we obtain
\[\begin{aligned}{}& {a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\in O\right)\\ {} & \hspace{1em}\ge {a_{n}}\left(\log n+(n-1)\log \left(1-\bar{F}\left({\xi _{n}}\right)\right)+\log f({\xi _{n}})+\log \frac{2\delta {m_{n}}}{{a_{n}}{h_{n}}}\right)\\ {} & \hspace{1em}={a_{n}}\left((n-1)\log \left(1-\bar{F}\left({\xi _{n}}\right)\right)+\log f({\xi _{n}})+\log (2\delta )-\log {a_{n}}-\log f({m_{n}})\right)\end{aligned}\]
and therefore, by the definition of the function w,
\[\begin{aligned}{}& \underset{n\to \infty }{\liminf }{a_{n}}\log P\left({a_{n}}{h_{n}}\left(\frac{{M_{n}}}{{m_{n}}}-1\right)\in O\right)\\ {} & \hspace{1em}\ge \underset{n\to \infty }{\liminf }{a_{n}}\left((n-1)\log \left(1-\bar{F}\left({\xi _{n}}\right)\right)+\log \frac{f({\xi _{n}})}{f({m_{n}})}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }{a_{n}}\left((n-1)\log \left(1-\bar{F}\left({\xi _{n}}\right)\right)+\log \frac{w({m_{n}})\bar{F}({\xi _{n}})}{\bar{F}({m_{n}})w({\xi _{n}})}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }{a_{n}}\left((n-1)\log (1-\bar{F}({\xi _{n}}))+\log \frac{w({m_{n}})}{w({\xi _{n}})}+\log \frac{\bar{F}({\xi _{n}})}{\bar{F}({m_{n}})}\right).\end{aligned}\]
Then, by letting δ go to zero, we get (8) if we show the relations
(17)
\[ \begin{array}{l}{\lim \nolimits_{n\to \infty }}{a_{n}}\log \frac{w({m_{n}})}{w({\xi _{n}})}=0\text{,}\hspace{2.5pt}{\liminf _{n\to \infty }}{a_{n}}\log \frac{\bar{F}({\xi _{n}})}{\bar{F}({m_{n}})}\ge -(x+\delta )\text{,}\\ {} {\lim \nolimits_{n\to \infty }}{a_{n}}(n-1)\log (1-\bar{F}({\xi _{n}}))=0\text{.}\end{array}\]
The first limit in (17) holds by Lemma 4.3 with ${x_{n}}={m_{n}}$ and ${y_{n}}={\xi _{n}}$ (indeed ${\xi _{n}}\sim {m_{n}}$ because
\[ 1+\frac{x-\delta }{{a_{n}}{h_{n}}}\le \frac{{\xi _{n}}}{{m_{n}}}\le 1+\frac{x+\delta }{{a_{n}}{h_{n}}}\]
by construction and $1+\frac{x\pm \delta }{{a_{n}}{h_{n}}}\to 1$). The inequality in (17) holds since
\[ {a_{n}}\log \frac{\bar{F}({\xi _{n}})}{\bar{F}({m_{n}})}\ge {a_{n}}\log \frac{\bar{F}({m_{n}^{(+)}})}{\bar{F}({m_{n}})},\]
by a new application of the Lagrange theorem to the function $g(z)=-\log \bar{F}(z)$, and by applying again Lemma 4.3 (we omit the details to avoid repetitions). Finally we prove the last limit in (17) if we show that
\[ \underset{n\to \infty }{\lim }{a_{n}}n\bar{F}({m_{n}^{(\pm )}})=0;\]
in fact
\[ {a_{n}}(n-1)\log (1-\bar{F}({\xi _{n}}))\sim -{a_{n}}n\bar{F}({\xi _{n}})\]
and
\[ {a_{n}}n\bar{F}({m_{n}^{(+)}})\le {a_{n}}n\bar{F}({\xi _{n}})\le {a_{n}}n\bar{F}({m_{n}^{(-)}}).\]
We have
\[ {a_{n}}n\bar{F}({m_{n}^{(\pm )}})={a_{n}}\frac{\bar{F}({m_{n}^{(\pm )}})}{\bar{F}({m_{n}})}=\exp \left(\frac{{a_{n}}(\log \bar{F}({m_{n}^{(\pm )}})-\log \bar{F}({m_{n}}))}{{a_{n}}}+\log {a_{n}}\right);\]
moreover,
\[ \underset{n\to \infty }{\lim }{a_{n}}(\log \bar{F}({m_{n}^{(\pm )}})-\log \bar{F}({m_{n}}))=-(x\pm \delta )<0\]
(this follows once more from an application of the Lagrange theorem to the function $g(z)=-\log \bar{F}(z)$, and by applying again Lemma 4.3; the details are omitted). Thus
\[ \underset{n\to \infty }{\lim }\frac{{a_{n}}(\log \bar{F}({m_{n}^{(\pm )}})-\log \bar{F}({m_{n}}))}{{a_{n}}}+\log {a_{n}}=-\infty ,\]
which yields the desired last limit in (17).
5 An example inspired by the classical occupancy problem
In this section we consider the following example.
Example 5.1.
Let $\{{T_{n}}:n\ge 1\}$ be a family of random variables such that
\[ {T_{n}}:={\sum \nolimits_{k=1}^{n}}{X_{n,k}},\]
where $\{{X_{n,k}}:n\ge 1,1\le k\le n\}$ is a triangular array of random variables (i.e. $\{{X_{n,k}}:1\le k\le n\}$ are independent random variables for each fixed n), and each random variable ${X_{n,k}}$ is geometric with parameter ${p_{n,k}}:=1-\frac{k-1}{n}$, i.e.
\[ P({X_{n,k}}=j)={p_{n,k}}{(1-{p_{n,k}})^{j-1}}\hspace{1em}\text{for all integers}\hspace{2.5pt}j\ge 1.\]
It is well known that the random variable ${T_{n}}$ can be seen as the number of balls required to fill n boxes with at least one ball when one puts balls in n boxes at random, and each ball is independently assigned to any fixed box with probability $\frac{1}{n}$; this is known in the literature as the classical occupancy problem. From a different point of view ${T_{n}}$ can also be related to the coupon collector’s problem: a coupon collector chooses at random and independently among n coupon types, and ${T_{n}}$ represents the number of coupons required to collect all the n coupon types. Throughout this section we set
(18)
\[ {C_{n}}:=\frac{{T_{n}}}{n\log n}-1\hspace{1em}\text{for all}\hspace{2.5pt}n\ge 2.\]
Assertion 5.1.
We can recover the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$ in Assertion 1.2 as follows.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge 2\}$ converges in probability to zero (see Example 2.2.7 in [7] presented for the coupon collector’s problem). Moreover, by Proposition 2.1 in [10], $\{{C_{n}}+1:n\ge 2\}$ satisfies the LDP with speed ${v_{n}}=\log n$ and rate function J defined by
\[ J(y):=\left\{\begin{array}{l@{\hskip10.0pt}l}y-1& \hspace{2.5pt}\textit{if}\hspace{2.5pt}y\ge 1,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
Then, since we deal with a sequence of shifted random variables, we deduce that $\{{C_{n}}:n\ge 2\}$ satisfies the LDP with speed ${v_{n}}=\log n$ and rate function ${I_{\mathrm{LD}}}$ defined by ${I_{\mathrm{LD}}}(x):=J(x+1)$, i.e.
\[ {I_{\mathrm{LD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
$\mathbf{R}\mathbf{2}$: ${v_{n}}{C_{n}}=\log n\left(\frac{{T_{n}}}{n\log n}-1\right)$ converges weakly to the Gumbel distribution (see, e.g., Example 3.6.11 in [7]).
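The convergence $\frac{{T_{n}}}{n\log n}\to 1$ can be glimpsed numerically. The sketch below is our own illustration (the sample sizes are arbitrary): it simulates ${T_{n}}$ through the geometric representation above, sampling each geometric variable by inversion.

```python
import math
import random

# Monte Carlo sketch (our illustration): simulate T_n as the sum of
# independent geometrics X_{n,k} with success probability 1 - (k-1)/n,
# sampling each geometric by inversion: X = ceil(log U / log(1 - p)).
def sample_T(n, rng):
    total = 0
    for k in range(1, n + 1):
        p = 1.0 - (k - 1) / n
        if p == 1.0:
            total += 1       # k = 1: the first ball always fills a box
        else:
            total += math.ceil(math.log(1.0 - rng.random()) / math.log(1.0 - p))
    return total

rng = random.Random(1)
n, reps = 2_000, 50
ratio = sum(sample_T(n, rng) for _ in range(reps)) / reps / (n * math.log(n))
print(round(ratio, 3))
```

For moderate n the ratio sits slightly above 1 (the finite-n bias is of order $\gamma /\log n$, with γ Euler's constant), consistent with the convergence in probability in $\mathbf{R}\mathbf{1}$.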
Now we prove the moderate deviation result. We shall see that, for this example, the rate functions ${I_{\mathrm{LD}}}$ and ${I_{\mathrm{MD}}}$ coincide. Several parts of the proof of the next Proposition 5.1 have some analogies with the one presented for Proposition 2.1 in [10]; so we shall omit some details to avoid repetitions.
Proposition 5.1.
For every family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds (i.e. ${a_{n}}\to 0$ and ${a_{n}}\log n\to \infty $), the sequence of random variables $\{{a_{n}}\log n(\frac{{T_{n}}}{n\log n}-1):n\ge 1\}$ satisfies the LDP with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by
\[ {I_{\mathrm{MD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{otherwise}.\end{array}\right.\]
Proof.
We apply Lemma 2.1 for every choice of $\{{a_{n}}:n\ge 1\}$ such that (1) holds, with ${s_{n}}=1/{a_{n}}$, $I={I_{\mathrm{MD}}}$ and $\{{C_{n}}:n\ge 2\}$ as in (18).
□
Proof of (6).
We have to show that, for every $x > 0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\ge x\right)\le -x.\]
For every $\delta \in (0,x)$ we have (here the last inequality holds by a well-known estimate; see, e.g., Exercise 3.10 in [17], page 58)
\[\begin{aligned}{}& P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\ge x\right)=P\left({T_{n}}\ge \left(\frac{x}{{a_{n}}\log n}+1\right)n\log n\right)\\ {} & \hspace{1em}\le P\left({T_{n}}>\left(\frac{x-\delta }{{a_{n}}\log n}+1\right)n\log n\right)\le {n^{1-(\frac{x-\delta }{{a_{n}}\log n}+1)}}={n^{-\frac{x-\delta }{{a_{n}}\log n}}}.\end{aligned}\]
Thus
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\ge x\right)\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }{a_{n}}\log {n^{-\frac{x-\delta }{{a_{n}}\log n}}}=-x+\delta ,\end{aligned}\]
and we obtain (6) by letting δ go to zero.
Proof of (7).
We have to show that, for every $x < 0$,
\[ \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\le x\right)\le -\infty .\]
We have $\frac{x}{{a_{n}}\log n}+1\in (0,1)$ eventually, and therefore (here the last inequality holds by a well-known estimate; for instance, it is a consequence of Theorem 5.10 and Corollary 5.11 in [16])
\[\begin{aligned}{}& P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\le x\right)=P\left({T_{n}}\le \left(\frac{x}{{a_{n}}\log n}+1\right)n\log n\right)\\ {} & \hspace{1em}\le 2{\left(1-\exp \left(-\frac{\left(\frac{x}{{a_{n}}\log n}+1\right)n\log n}{n}\right)\right)^{n}}=2{\left(1-{n^{-\left(\frac{x}{{a_{n}}\log n}+1\right)}}\right)^{n}}.\end{aligned}\]
Thus
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\le x\right)\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }{a_{n}}\log \left(2{\left(1-{n^{-\left(\frac{x}{{a_{n}}\log n}+1\right)}}\right)^{n}}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(1-{n^{-\left(\frac{x}{{a_{n}}\log n}+1\right)}}\right)=\underset{n\to \infty }{\limsup }-{a_{n}}n{n^{-\left(\frac{x}{{a_{n}}\log n}+1\right)}}\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }-{a_{n}}{n^{-\frac{x}{{a_{n}}\log n}}}=\underset{n\to \infty }{\limsup }-{a_{n}}{e^{-\frac{x}{{a_{n}}}}}=-\infty ,\end{aligned}\]
and therefore (7) holds.
Proof of (8).
We want to show that, for every $x\ge 0$ and for every open set O such that $x\in O$, we have
\[ \underset{n\to \infty }{\liminf }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\in O\right)\ge -x.\]
The case $x=0$ is immediate; indeed, since ${a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)$ converges in probability to zero by the Slutsky theorem (by ${a_{n}}\to 0$ and the weak convergence in $\mathbf{R}\mathbf{2}$ in Assertion 5.1), the desired bound reduces to the trivial inequality $0\ge 0$. So, from now on, we suppose that $x>0$ and we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta ]\subseteq O.\]
Moreover, we also introduce the notation ${F_{{T_{n}}}}(\cdot )=P({T_{n}}\le \cdot )$ for the distribution function of ${T_{n}}$. Then
\[\begin{aligned}{}& P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\in O\right)\\ {} & \hspace{1em}\ge P\left(x-\delta <{a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\le x+\delta \right)\\ {} & \hspace{1em}=P\left(\left(1+\frac{x-\delta }{{a_{n}}\log n}\right)n\log n<{T_{n}}\le \left(1+\frac{x+\delta }{{a_{n}}\log n}\right)n\log n\right)\\ {} & \hspace{1em}\ge {F_{{T_{n}}}}\left(\left[\left(1+\frac{x+\delta }{{a_{n}}\log n}\right)n\log n\right]\right)-{F_{{T_{n}}}}\left(\left[\left(1+\frac{x-\delta }{{a_{n}}\log n}\right)n\log n\right]+1\right).\end{aligned}\]
Thus, by adapting some computations in the proof of Proposition 2.1 in [10], we have
\[\begin{aligned}{}& {F_{{T_{n}}}}\left(\left[\left(1+\frac{x+\delta }{{a_{n}}\log n}\right)n\log n\right]\right)-{F_{{T_{n}}}}\left(\left[\left(1+\frac{x-\delta }{{a_{n}}\log n}\right)n\log n\right]+1\right)\\ {} & \hspace{1em}\ge {A_{n}}-({A_{n}^{(+)}}+{A_{n}^{(-)}})={A_{n}}\left(1-\frac{{A_{n}^{(+)}}+{A_{n}^{(-)}}}{{A_{n}}}\right)\end{aligned}\]
where ${A_{n}}$, ${A_{n}^{(+)}}$, ${A_{n}^{(-)}}$ are the following nonnegative quantities:
(19)
\[ {A_{n}}:={\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}-{\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}},\]
\[ {A_{n}^{(+)}}:=\sum \limits_{\mathrm{even}\hspace{2.5pt}k}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){e^{-\frac{k}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\left(1-{\left(1-\frac{k}{n}\right)^{\frac{k}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)\]
and
\[ {A_{n}^{(-)}}:=\sum \limits_{\mathrm{odd}\hspace{2.5pt}k}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){e^{-\frac{k}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\left(1-{\left(1-\frac{k}{n}\right)^{\frac{k}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right).\]
Thus, we can say that
\[\begin{aligned}{}& {a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\in O\right)\ge {a_{n}}\log {A_{n}}+{a_{n}}\log \left(1-\frac{{A_{n}^{(+)}}+{A_{n}^{(-)}}}{{A_{n}}}\right),\end{aligned}\]
and therefore
\[\begin{aligned}{}& \underset{n\to \infty }{\liminf }{a_{n}}\log P\left({a_{n}}\log n\left(\frac{{T_{n}}}{n\log n}-1\right)\in O\right)\\ {} & \hspace{1em}\ge \underset{n\to \infty }{\liminf }{a_{n}}\log {A_{n}}+\underset{n\to \infty }{\liminf }{a_{n}}\log \left(1-\frac{{A_{n}^{(+)}}+{A_{n}^{(-)}}}{{A_{n}}}\right).\end{aligned}\]
Then, by letting δ go to zero, we get (8) if we show the relations
(20)
\[ \underset{n\to \infty }{\liminf }{a_{n}}\log {A_{n}}\ge -(x+\delta )\]
and
(21)
\[ \underset{n\to \infty }{\lim }\frac{{A_{n}^{(+)}}}{{A_{n}}}=\underset{n\to \infty }{\lim }\frac{{A_{n}^{(-)}}}{{A_{n}}}=0.\]
Proof of (20).
Concerning ${A_{n}}$ in (19) we apply the Lagrange theorem to the function $g(z)={(1-{e^{-z}})^{n}}$; so there exists
\[ {\xi _{n}}\in \left(\frac{([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}{n},\frac{[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}{n}\right)\]
such that
\[\begin{aligned}{}& {A_{n}}=n{(1-{e^{-{\xi _{n}}}})^{n-1}}{e^{-{\xi _{n}}}}\\ {} & \hspace{2em}\times \left(\frac{[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}{n}-\frac{([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}{n}\right).\end{aligned}\]
Moreover,
\[\begin{aligned}{}& {(1-{e^{-{\xi _{n}}}})^{n-1}}{e^{-{\xi _{n}}}}\ge {\left(1-{e^{-\frac{([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}{n}}}\right)^{n-1}}{e^{-\frac{[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}{n}}}\\ {} & \hspace{1em}\ge {\left(1-{e^{-(1+\frac{x-\delta }{{a_{n}}\log n})\log n}}\right)^{n-1}}{e^{-(1+\frac{x+\delta }{{a_{n}}\log n})\log n}}={\left(1-\frac{{e^{-\frac{x-\delta }{{a_{n}}}}}}{n}\right)^{n-1}}\frac{{e^{-\frac{x+\delta }{{a_{n}}}}}}{n}\end{aligned}\]
and
\[\begin{aligned}{}& \frac{[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}{n}-\frac{([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}{n}\\ {} & \hspace{1em}=\frac{[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]-1-[(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]}{n}\\ {} & \hspace{1em}\ge \frac{(1+\frac{x+\delta }{{a_{n}}\log n})n\log n-1-1-(1+\frac{x-\delta }{{a_{n}}\log n})n\log n}{n}=-\frac{2}{n}+\frac{2\delta }{{a_{n}}}.\end{aligned}\]
Then
\[\begin{aligned}{}{a_{n}}\log {A_{n}}& \ge {a_{n}}\log \left({\left(1-\frac{{e^{-\frac{x-\delta }{{a_{n}}}}}}{n}\right)^{n-1}}{e^{-\frac{x+\delta }{{a_{n}}}}}\left(-\frac{2}{n}+\frac{2\delta }{{a_{n}}}\right)\right)\\ {} & ={a_{n}}(n-1)\log \left(1-\frac{{e^{-\frac{x-\delta }{{a_{n}}}}}}{n}\right)-(x+\delta )+{a_{n}}\log \left(-\frac{2}{n}+\frac{2\delta }{{a_{n}}}\right),\end{aligned}\]
whence we obtain (20) from
\[ \underset{n\to \infty }{\lim }{a_{n}}(n-1)\log \left(1-\frac{{e^{-\frac{x-\delta }{{a_{n}}}}}}{n}\right)=0\]
and
\[ \underset{n\to \infty }{\lim }{a_{n}}\log \left(-\frac{2}{n}+\frac{2\delta }{{a_{n}}}\right)=0.\]
Proof of (21).
We remark that
\[ \underset{n\to \infty }{\lim }{\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}=1\]
and
\[ \underset{n\to \infty }{\lim }{\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}}=1,\]
because
\[\begin{aligned}{}& 1=\underset{n\to \infty }{\liminf }{\left(1-\frac{{e^{-\frac{x+\delta }{{a_{n}}}+\frac{1}{n}}}}{n}\right)^{n}}\le \underset{n\to \infty }{\liminf }{\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}\\ {} & \hspace{1em}\le \underset{n\to \infty }{\limsup }{\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}\le \underset{n\to \infty }{\limsup }{\left(1-\frac{{e^{-\frac{x+\delta }{{a_{n}}}}}}{n}\right)^{n}}=1,\end{aligned}\]
and the second limit can be proved with a similar argument. Therefore
\[\begin{aligned}{}& {A_{n}}={\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}-{\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}}\to 1-1=0.\end{aligned}\]
Moreover,
\[\begin{aligned}{}& {A_{n}}={\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}}\\ {} & \hspace{2em}\times \left(\frac{{\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}}{{\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}}}-1\right)\\ {} & \hspace{1em}\sim \frac{{\left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)^{n}}}{{\left(1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right)^{n}}}-1\\ {} & \hspace{1em}=\exp \left(n\log \left(\frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right)\right)-1,\end{aligned}\]
where
(22)
\[ \underset{n\to \infty }{\lim }n\log \left(\frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right)=0.\]
Here we note that the limit (22) can be checked with some tedious computations:
\[\begin{aligned}{}& n\log \left(\frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right)\\ {} & \hspace{1em}=n\log \left(1+\frac{{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right)\end{aligned}\]
where
\[ \underset{n\to \infty }{\lim }{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}=0\]
and
\[ \underset{n\to \infty }{\lim }{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}=0,\]
and therefore
\[\begin{aligned}{}& n\log \left(\frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right)\\ {} & \hspace{1em}\sim n\left({e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)\\ {} & \hspace{1em}={e^{\log n-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\\ {} & \hspace{2em}\times \left(1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]+\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}\right).\end{aligned}\]
Moreover,
\[ \underset{n\to \infty }{\lim }{e^{\log n-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}=0\]
and
\[ \underset{n\to \infty }{\lim }{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]+\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}=0,\]
whence we obtain (22). Now, coming back to ${A_{n}}$, by (22) we can say that
\[ {A_{n}}\sim n\log \left(\frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}}\right);\]
moreover, by using some manipulations as above for
\[ \frac{1-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}}{1-{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}},\]
we get
\[\begin{aligned}{}& {A_{n}}\sim n\left({e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}-{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\right)\\ {} & \hspace{1em}=n{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\left({e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1-[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n])}}-1\right).\end{aligned}\]
Then
\[\begin{aligned}{}& {A_{n}}\sim n{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}\\ {} & \hspace{2em}\times \left({e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1-[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n])}}-1\right)\\ {} & \hspace{1em}\ge n{e^{-\frac{1}{n}(1+\frac{x+\delta }{{a_{n}}\log n})n\log n}}\left({e^{-\frac{1}{n}((1+\frac{x-\delta }{{a_{n}}\log n})n\log n+1-((1+\frac{x+\delta }{{a_{n}}\log n})n\log n-1))}}-1\right)\\ {} & \hspace{1em}=n\frac{1}{n}{e^{-\frac{x+\delta }{{a_{n}}}}}\left({e^{-\frac{2}{n}+\frac{2\delta }{{a_{n}}}}}-1\right)={e^{-\frac{x+\delta }{{a_{n}}}}}\left({e^{-\frac{2}{n}+\frac{2\delta }{{a_{n}}}}}-1\right)=:{\tilde{A}_{n}}.\end{aligned}\]
Now, by using the same computations as in the proof of Proposition 2.1 in [10], there exists a constant $C>0$ such that
\[\begin{aligned}{}0\le & {A_{n}^{(+)}}\le \frac{C}{n}\left[\left(1+\frac{x+\delta }{{a_{n}}\log n}\right)n\log n\right]\\ {} & \times {e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}}{(1+{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}})^{n-2}}\end{aligned}\]
and
\[\begin{aligned}{}& 0\le {A_{n}^{(-)}}\le \frac{C}{n}\left(\left[\left(1+\frac{x-\delta }{{a_{n}}\log n}\right)n\log n\right]+1\right)\\ {} & \hspace{2em}\times {e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}}{(1+{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}})^{n-2}}.\end{aligned}\]
Then, by observing that ${a_{n}}n={a_{n}}\log n\frac{n}{\log n}\to \infty $,
\[ \underset{n\to \infty }{\lim }{(1+{e^{-\frac{1}{n}[(1+\frac{x+\delta }{{a_{n}}\log n})n\log n]}})^{n-2}}=1\]
and
\[ \underset{n\to \infty }{\lim }{(1+{e^{-\frac{1}{n}([(1+\frac{x-\delta }{{a_{n}}\log n})n\log n]+1)}})^{n-2}}=1,\]
we get
\[\begin{aligned}{}0& \le \underset{n\to \infty }{\limsup }\frac{{A_{n}^{(+)}}}{{A_{n}}}\\ {} & \le \underset{n\to \infty }{\limsup }\frac{C(\log n+\frac{x+\delta }{{a_{n}}})\frac{1}{n}{e^{-\frac{x+\delta }{{a_{n}}}}}}{{\tilde{A}_{n}}}=\underset{n\to \infty }{\limsup }\frac{C(\log n+\frac{x+\delta }{{a_{n}}})\frac{1}{n}}{{e^{-\frac{2}{n}+\frac{2\delta }{{a_{n}}}}}-1}=0\end{aligned}\]
and
\[\begin{aligned}{}0& \le \underset{n\to \infty }{\limsup }\frac{{A_{n}^{(-)}}}{{A_{n}}}\\ {} & \le \underset{n\to \infty }{\limsup }\frac{C(\log n+\frac{x-\delta }{{a_{n}}}+\frac{1}{n})\frac{1}{n}{e^{-\frac{x-\delta }{{a_{n}}}}}}{{\tilde{A}_{n}}}=\underset{n\to \infty }{\limsup }\frac{C(\log n+\frac{x-\delta }{{a_{n}}}+\frac{1}{n})\frac{1}{n}}{{e^{-\frac{2\delta }{{a_{n}}}}}({e^{-\frac{2}{n}+\frac{2\delta }{{a_{n}}}}}-1)}=0.\end{aligned}\]
Thus the limits in (21) are checked.
6 An example inspired by the replacement model in [5]
In this section we consider the following example.
Example 6.1.
Let F and G be two continuous distribution functions on $\mathbb{R}$ such that $F(0)=G(0)=0$, and assume that they are strictly increasing on $[0,\infty )$. Moreover we assume that, for some $t>0$, there exist ${F^{\prime }}(t-)$ and ${G^{\prime }}(t+)$, i.e. the left derivative of $F(x)$ at $x=t$ and the right derivative of $G(x)$ at $x=t$, and that ${F^{\prime }}(t-),{G^{\prime }}(t+)>0$. Then let $\{{Z_{n}}:n\ge 1\}$ be a family of random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$ such that, for this $t>0$ and some $\beta \in (0,1)$, their distribution functions are defined by
\[ P({Z_{n}}\le x):=\left\{\begin{array}{l@{\hskip10.0pt}l}\beta {\left(\frac{F(x)}{F(t)}\right)^{n}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in [0,t],\\ {} 1-(1-\beta ){\left(\frac{1-G(x)}{1-G(t)}\right)^{n}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in (t,\infty ).\end{array}\right.\]
Note that, for every $n\ge 1$, the distribution function $P({Z_{n}}\le \cdot )$ is continuous. Moreover, if $\beta =F(t)$, after some easy computations one can check that
\[ P({Z_{1}}\le x):=\left\{\begin{array}{l@{\hskip10.0pt}l}F(x)& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in [0,t],\\ {} F(t)+\frac{1-F(t)}{1-G(t)}(G(x)-G(t))& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in (t,\infty ).\end{array}\right.\]
Then ${Z_{1}}$ is the random lifetime that appears in a replacement model recently studied in the literature (see eq. (5) in [5]) where, at a fixed time t, an item with lifetime distribution function F is replaced by another item having the same age and a lifetime distribution function G. In general, the distribution functions of the random variables $\{{Z_{n}}:n\ge 1\}$ are suitable modifications of the distribution function of ${Z_{1}}$, $P({Z_{n}}\le t)=\beta $ for every n, and the distribution of ${Z_{n}}$ becomes more and more concentrated around t as n increases.
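To make the replacement lifetime ${Z_{1}}$ tangible, here is an inverse-transform sampler. It is our own sketch: the exponential choices for F and G, the switch time $t=1$ and $\beta =F(t)$ are illustrative assumptions, not taken from [5].

```python
import math
import random

# Illustrative sampler (our assumptions): F(x) = 1 - exp(-x),
# G(x) = 1 - exp(-2x), switch time t = 1, beta = F(t). We invert the
# distribution function of Z_1 piecewise: on [0, t] it equals F, and on
# (t, infinity) it equals F(t) + (1 - F(t)) (G(x) - G(t)) / (1 - G(t)).
t = 1.0
F = lambda x: 1.0 - math.exp(-x)
G = lambda x: 1.0 - math.exp(-2.0 * x)
F_inv = lambda u: -math.log(1.0 - u)
G_inv = lambda u: -math.log(1.0 - u) / 2.0

def sample_Z1(rng):
    u = rng.random()
    if u <= F(t):
        return F_inv(u)   # the F-item fails before the switch time t
    # past t the item has been replaced by a G-item of the same age
    v = G(t) + (u - F(t)) * (1.0 - G(t)) / (1.0 - F(t))
    return G_inv(v)

rng = random.Random(7)
samples = [sample_Z1(rng) for _ in range(20_000)]
frac_before_t = sum(z <= t for z in samples) / len(samples)
print(round(frac_before_t, 3))
```

The empirical fraction of lifetimes not exceeding t should be close to $F(t)=1-{e^{-1}}\approx 0.632$, matching $P({Z_{1}}\le t)=\beta $.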
This example does not seem to have interesting applications to concrete models. However, it could give some interesting ideas for the construction of more advanced examples based on other replacement models (see, e.g., the replacement model in [6]). The study of some possible new examples could be the subject of future work.
Throughout this section we set
\[ {C_{n}}:={Z_{n}}-t\hspace{1em}\text{for all}\hspace{2.5pt}n\ge 1.\]
Assertion 6.1.
We can recover the asymptotic regimes $\mathbf{R}\mathbf{1}$ and $\mathbf{R}\mathbf{2}$ in Assertion 1.2 as follows.
$\mathbf{R}\mathbf{1}$: $\{{C_{n}}:n\ge 1\}$ converges in probability to zero because $\{{Z_{n}}:n\ge 1\}$ converges in probability to t; in fact, we have
\[ \underset{n\to \infty }{\lim }P({Z_{n}}\le x)=\left\{\begin{array}{l@{\hskip10.0pt}l}0& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\in [0,t),\\ {} \beta & \hspace{2.5pt}\textit{if}\hspace{2.5pt}x=t,\\ {} 1& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\in (t,\infty ).\end{array}\right.\]
Moreover, $\{{C_{n}}:n\ge 1\}$ satisfies a suitable LDP with speed ${v_{n}}=n$, which will be established in Proposition 6.1 below.
$\mathbf{R}\mathbf{2}$: ${v_{n}}{C_{n}}=n({Z_{n}}-t)$ converges weakly to a suitable asymmetric Laplace distribution with distribution function H (see below). In fact, we have
\[\begin{aligned}{}P(n({Z_{n}}-t)\le x)& =P\left({Z_{n}}\le t+\frac{x}{n}\right)\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}\beta {\left(\frac{F(t+\frac{x}{n})}{F(t)}\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\le 0,\\ {} 1-(1-\beta ){\left(\frac{1-G(t+\frac{x}{n})}{1-G(t)}\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x>0,\end{array}\right.\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}\beta {\left(\frac{F(t)+{F^{\prime }}(t-)\frac{x}{n}+o(1/n)}{F(t)}\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x<0,\\ {} \beta & \hspace{2.5pt}\textit{if}\hspace{2.5pt}x=0,\\ {} 1-(1-\beta ){\left(\frac{1-G(t)-{G^{\prime }}(t+)\frac{x}{n}+o(1/n)}{1-G(t)}\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x>0,\end{array}\right.\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}\beta {\left(1+\frac{{F^{\prime }}(t-)}{F(t)}\frac{x}{n}+o(1/n)\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x<0,\\ {} \beta & \hspace{2.5pt}\textit{if}\hspace{2.5pt}x=0,\\ {} 1-(1-\beta ){\left(1-\frac{{G^{\prime }}(t+)}{1-G(t)}\frac{x}{n}+o(1/n)\right)^{n}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x>0,\end{array}\right.\end{aligned}\]
and therefore
\[ \underset{n\to \infty }{\lim }P(n({Z_{n}}-t)\le x)=H(x)\hspace{1em}\textit{for every}\hspace{2.5pt}x\in \mathbb{R},\]
where H is defined by
\[ H(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}\beta \exp \left(\frac{{F^{\prime }}(t-)}{F(t)}x\right)& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\le 0,\\ {} 1-(1-\beta )\exp \left(-\frac{{G^{\prime }}(t+)}{1-G(t)}x\right)& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x>0.\end{array}\right.\]
Thus H is an asymmetric Laplace distribution: it places weight $1-\beta $ on an exponential distribution with mean $\frac{1-G(t)}{{G^{\prime }}(t+)}$ supported on $(0,\infty )$, and weight β on the mirror image of an exponential distribution with mean $\frac{F(t)}{{F^{\prime }}(t-)}$ supported on $(-\infty ,0)$.
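The convergence of $P(n({Z_{n}}-t)\le x)$ to $H(x)$ can be checked numerically. Again we use a hypothetical choice of our own (F = Exp(1), G = Exp(2), $t=1$, $\beta =1/2$), for which ${F^{\prime }}(t-)=e^{-1}$, $F(t)=1-e^{-1}$ and ${G^{\prime }}(t+)/(1-G(t))=2$:

```python
import math

# Hypothetical illustrative choice: F = Exp(1), G = Exp(2), t = 1, beta = 1/2.
T, BETA = 1.0, 0.5
F = lambda x: 1.0 - math.exp(-x)
G = lambda x: 1.0 - math.exp(-2.0 * x)

def cdf_scaled(x, n):
    """P(n(Z_n - t) <= x), from the two branches of the cdf of Z_n."""
    y = T + x / n
    if x <= 0:
        return BETA * (F(y) / F(T)) ** n if y >= 0 else 0.0
    return 1.0 - (1.0 - BETA) * ((1.0 - G(y)) / (1.0 - G(T))) ** n

def H(x):
    """Limit distribution built from F'(t-)/F(t) and G'(t+)/(1 - G(t))."""
    if x <= 0:
        return BETA * math.exp(math.exp(-1.0) / (1.0 - math.exp(-1.0)) * x)
    return 1.0 - (1.0 - BETA) * math.exp(-2.0 * x)  # G'(t+)/(1 - G(t)) = 2 here

for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(x, round(cdf_scaled(x, 10**6), 6), round(H(x), 6))
```

For an exponential G the positive branch is exact for every n; the negative branch differs from $H$ only by a term of order $1/n$.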
Now we prove the LDP concerning the convergence to zero in $\mathbf{R}\mathbf{1}$ of Assertion 6.1.
Proposition 6.1.
Let $\{{C_{n}}:n\ge 1\}$ be the sequence defined by (23). Then $\{{C_{n}}:n\ge 1\}$ satisfies the LDP with speed n and rate function ${I_{\mathrm{LD}}}$ defined by
\[ {I_{\mathrm{LD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}\infty & \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\in (-\infty ,-t],\\ {} -\log \frac{F(x+t)}{F(t)}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\in (-t,0],\\ {} -\log \frac{1-G(x+t)}{1-G(t)}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\in (0,\infty ).\end{array}\right.\]
Proof.
We apply Lemma 2.1 with ${s_{n}}=n$, $I={I_{\mathrm{LD}}}$ and $\{{C_{n}}:n\ge 1\}$ as in (23).
□
Proof of (6).
For every $x > 0$ we have
\[\begin{aligned}{}\underset{n\to \infty }{\limsup }\frac{1}{n}\log P({C_{n}}\ge x)& =\underset{n\to \infty }{\limsup }\frac{1}{n}\log (1-P({Z_{n}} < x+t))\\ {} & =\underset{n\to \infty }{\limsup }\frac{1}{n}\log \left((1-\beta ){\left(\frac{1-G(x+t)}{1-G(t)}\right)^{n}}\right)\\ {} & =\log \left(\frac{1-G(x+t)}{1-G(t)}\right)=-{I_{\mathrm{LD}}}(x).\end{aligned}\]
Proof of (7).
For every $x<0$ we have (when $x\le -t$ we use the convention $\log 0=-\infty $)
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }\frac{1}{n}\log P({C_{n}}\le x)=\underset{n\to \infty }{\limsup }\frac{1}{n}\log P({Z_{n}}\le x+t)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }\frac{1}{n}\log \left(\beta {\left(\frac{F(x+t)}{F(t)}\right)^{n}}\right)=\log \left(\frac{F(x+t)}{F(t)}\right)=-{I_{\mathrm{LD}}}(x).\end{aligned}\]
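The two tail limits above can be checked numerically for a concrete hypothetical choice (F = Exp(1), G = Exp(2), $t=1$, $\beta =1/2$; the function names are ours). Since $P({C_{n}}\ge x)$ decays geometrically, the computation is done in log-space to avoid floating-point underflow:

```python
import math

# Hypothetical illustrative choice: F = Exp(1), G = Exp(2), t = 1, beta = 1/2.
T, BETA = 1.0, 0.5
F = lambda x: 1.0 - math.exp(-x)
G = lambda x: 1.0 - math.exp(-2.0 * x)

def upper_tail_rate(x, n):
    """(1/n) log P(C_n >= x) for x > 0, computed in log-space."""
    return (math.log(1.0 - BETA) + n * math.log((1.0 - G(x + T)) / (1.0 - G(T)))) / n

def lower_tail_rate(x, n):
    """(1/n) log P(C_n <= x) for x in (-t, 0), computed in log-space."""
    return (math.log(BETA) + n * math.log(F(x + T) / F(T))) / n

def I_LD(x):
    """Rate function of Proposition 6.1, evaluated for x in (-t, infinity)."""
    if x <= 0:
        return -math.log(F(x + T) / F(T))
    return -math.log((1.0 - G(x + T)) / (1.0 - G(T)))

print(upper_tail_rate(0.5, 1000), -I_LD(0.5))
print(lower_tail_rate(-0.5, 1000), -I_LD(-0.5))
```

Both rates agree with $-{I_{\mathrm{LD}}}$ up to the vanishing term $\frac{1}{n}\log \beta $ (resp. $\frac{1}{n}\log (1-\beta )$).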
Proof of (8).
We want to show that, for every $x>-t$ and for every open set O such that $x\in O$, we have
\[ \underset{n\to \infty }{\liminf }\frac{1}{n}\log P({C_{n}}\in O)\ge \left\{\begin{array}{l@{\hskip10.0pt}l}\log \frac{F(x+t)}{F(t)}& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in (-t,0],\\ {} \log \frac{1-G(x+t)}{1-G(t)}& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\in (0,\infty ).\end{array}\right.\]
The case $x=0$ is immediate; indeed, since ${C_{n}}$ converges in probability to zero, we have $P({C_{n}}\in O)\to 1$ and the required bound reads $0\ge 0$. So, from now on, we suppose that $x>-t$ with $x\ne 0$, and we have two cases: $x\in (-t,0)$ and $x\in (0,\infty )$.

If $x\in (-t,0)$ we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta )\subset O\cap (-t,0).\]
Then
\[\begin{aligned}{}P({C_{n}}\in O)& \ge P({C_{n}}\in (x-\delta ,x+\delta ))\\ {} & =P({Z_{n}} < x+\delta +t)-P({Z_{n}}\le x-\delta +t)\\ {} & =\beta \left({\left(\frac{F(x+\delta +t)}{F(t)}\right)^{n}}-{\left(\frac{F(x-\delta +t)}{F(t)}\right)^{n}}\right)\\ {} & =\frac{\beta }{F(t)}\underset{ > 0}{\underbrace{(F(x+\delta +t)-F(x-\delta +t))}}\\ {} & \hspace{1em}\times \underset{\ge {\left(\frac{F(x+\delta +t)}{F(t)}\right)^{n-1}} > 0}{\underbrace{{\sum \limits_{k=0}^{n-1}}{\left(\frac{F(x+\delta +t)}{F(t)}\right)^{k}}{\left(\frac{F(x-\delta +t)}{F(t)}\right)^{n-1-k}}}},\end{aligned}\]
and therefore
\[ \underset{n\to \infty }{\liminf }\frac{1}{n}\log P({C_{n}}\in O)\ge \log \left(\frac{F(x+\delta +t)}{F(t)}\right).\]
Then we get (8) for $x\in (-t,0)$ by letting δ go to zero (by the continuity of F).

If $x\in (0,\infty )$ we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta )\subset O\cap (0,\infty ).\]
Then
\[\begin{aligned}{}& P({C_{n}}\in O)\ge P({C_{n}}\in (x-\delta ,x+\delta ))\\ {} & \hspace{1em}=P({Z_{n}} < x+\delta +t)-P({Z_{n}}\le x-\delta +t)\\ {} & \hspace{1em}=1-(1-\beta ){\left(\frac{1-G(x+\delta +t)}{1-G(t)}\right)^{n}}-\left(1-(1-\beta ){\left(\frac{1-G(x-\delta +t)}{1-G(t)}\right)^{n}}\right)\\ {} & \hspace{1em}=(1-\beta )\left({\left(\frac{1-G(x-\delta +t)}{1-G(t)}\right)^{n}}-{\left(\frac{1-G(x+\delta +t)}{1-G(t)}\right)^{n}}\right)\\ {} & \hspace{1em}=\underset{ > 0}{\underbrace{\frac{1-\beta }{1-G(t)}(G(x+\delta +t)-G(x-\delta +t))}}\\ {} & \hspace{2em}\times \underset{\ge {\left(\frac{1-G(x-\delta +t)}{1-G(t)}\right)^{n-1}}>0}{\underbrace{{\sum \limits_{k=0}^{n-1}}{\left(\frac{1-G(x-\delta +t)}{1-G(t)}\right)^{k}}{\left(\frac{1-G(x+\delta +t)}{1-G(t)}\right)^{n-1-k}}}}\end{aligned}\]
and therefore
\[ \underset{n\to \infty }{\liminf }\frac{1}{n}\log P({C_{n}}\in O)\ge \log \left(\frac{1-G(x-\delta +t)}{1-G(t)}\right).\]
Then we get (8) for $x\in (0,\infty )$ by letting δ go to zero (by the continuity of G).

Now we prove the moderate deviation result. We shall see that, for this example, the family of positive numbers $\{{a_{n}}:n\ge 1\}$ such that (1) holds (i.e. ${a_{n}}\to 0$ and ${a_{n}}n\to \infty $) must also satisfy the stricter condition
\[ {a_{n}}\log n\to 0.\]
Obviously (24) yields ${a_{n}}\to 0$ (because $\log n\to \infty $).
Proposition 6.2.
Let $\{{C_{n}}:n\ge 1\}$ be the sequence defined by (23), and let $\{{a_{n}}:n\ge 1\}$ be a family of positive numbers such that (1) and (24) hold. Then $\{{a_{n}}n({Z_{n}}-t):n\ge 1\}$ satisfies the LDP with speed $1/{a_{n}}$ and rate function ${I_{\mathrm{MD}}}$ defined by
\[ {I_{\mathrm{MD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}-\frac{{F^{\prime }}(t-)}{F(t)}x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\le 0,\\ {} \frac{{G^{\prime }}(t+)}{1-G(t)}x& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x>0.\end{array}\right.\]
Proof.
We apply Lemma 2.1 for every choice of $\{{a_{n}}:n\ge 1\}$ such that (1) and (24) hold, with ${s_{n}}=1/{a_{n}}$, $I={I_{\mathrm{MD}}}$ and $\{{C_{n}}:n\ge 1\}$ as in (23).
□
Proof of (6).
For every $x > 0$ we have
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P({a_{n}}n({Z_{n}}-t)\ge x)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}\log \left(1-P\left({Z_{n}} < t+\frac{x}{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}\log \left((1-\beta ){\left(\frac{1-G\left(t+\frac{x}{{a_{n}}n}\right)}{1-G(t)}\right)^{n}}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(\frac{1-G\left(t+\frac{x}{{a_{n}}n}\right)}{1-G(t)}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(\frac{1-G(t)-{G^{\prime }}(t+)\frac{x}{{a_{n}}n}+o(\frac{1}{{a_{n}}n})}{1-G(t)}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(1-\frac{{G^{\prime }}(t+)}{1-G(t)}\frac{x}{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}=-\frac{{G^{\prime }}(t+)}{1-G(t)}x=-{I_{\mathrm{MD}}}(x).\end{aligned}\]
Proof of (7).
For every $x<0$ we have (note that $t+\frac{x}{{a_{n}}n}>0$ for n large enough)
\[\begin{aligned}{}& \underset{n\to \infty }{\limsup }{a_{n}}\log P({a_{n}}n({Z_{n}}-t)\le x)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}\log P\left({Z_{n}}\le t+\frac{x}{{a_{n}}n}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}\log \left(\beta {\left(\frac{F\left(t+\frac{x}{{a_{n}}n}\right)}{F(t)}\right)^{n}}\right)=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(\frac{F\left(t+\frac{x}{{a_{n}}n}\right)}{F(t)}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(\frac{F(t)+{F^{\prime }}(t-)\frac{x}{{a_{n}}n}+o(1/({a_{n}}n))}{F(t)}\right)\\ {} & \hspace{1em}=\underset{n\to \infty }{\limsup }{a_{n}}n\log \left(1+\frac{{F^{\prime }}(t-)}{F(t)}\frac{x}{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right)\\ {} & \hspace{1em}=\frac{{F^{\prime }}(t-)}{F(t)}x=-{I_{\mathrm{MD}}}(x).\end{aligned}\]
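Both moderate-deviation tail limits can be illustrated numerically. We pick one admissible sequence of our own, ${a_{n}}=1/{(\log n)^{2}}$, which satisfies (1) and (24) (indeed ${a_{n}}\to 0$, ${a_{n}}n\to \infty $ and ${a_{n}}\log n=1/\log n\to 0$), together with the hypothetical choice F = Exp(1), G = Exp(2), $t=1$, $\beta =1/2$:

```python
import math

# Hypothetical illustrative choice: F = Exp(1), G = Exp(2), t = 1, beta = 1/2.
T, BETA = 1.0, 0.5
F = lambda x: 1.0 - math.exp(-x)
G = lambda x: 1.0 - math.exp(-2.0 * x)

def md_upper(x, n):
    """a_n log P(a_n n (Z_n - t) >= x) for x > 0, with a_n = 1/(log n)^2."""
    a = 1.0 / math.log(n) ** 2
    eps = x / (a * n)
    return a * (math.log(1.0 - BETA) + n * math.log((1.0 - G(T + eps)) / (1.0 - G(T))))

def md_lower(x, n):
    """a_n log P(a_n n (Z_n - t) <= x) for x < 0, with the same a_n."""
    a = 1.0 / math.log(n) ** 2
    eps = x / (a * n)
    return a * (math.log(BETA) + n * math.log(F(T + eps) / F(T)))

# Limits should approach -I_MD(x); for this choice
# G'(t+)/(1 - G(t)) = 2 and F'(t-)/F(t) = e^{-1}/(1 - e^{-1}).
c_pos = 2.0
c_neg = math.exp(-1.0) / (1.0 - math.exp(-1.0))
print(md_upper(1.0, 10**6), -c_pos * 1.0)
print(md_lower(-1.0, 10**6), c_neg * (-1.0))
```

The residual gap at $n={10^{6}}$ comes from the slowly vanishing prefactor ${a_{n}}\log \beta $ (resp. ${a_{n}}\log (1-\beta )$), consistent with the role of condition (24).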
Proof of (8).
We show that, for every $x\in \mathbb{R}$ and for every open set O such that $x\in O$, we have
\[ \underset{n\to \infty }{\liminf }{a_{n}}\log P({a_{n}}n({Z_{n}}-t)\in O)\ge \left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{F^{\prime }}(t-)}{F(t)}x& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\le 0,\\ {} -\frac{{G^{\prime }}(t+)}{1-G(t)}x& \hspace{2.5pt}\text{if}\hspace{2.5pt}x>0.\end{array}\right.\]
The case $x=0$ is immediate; indeed, since ${a_{n}}n({Z_{n}}-t)$ converges in probability to zero by the Slutsky theorem (by ${a_{n}}\to 0$ and the weak convergence in $\mathbf{R}\mathbf{2}$ in Assertion 6.1), we have $P({a_{n}}n({Z_{n}}-t)\in O)\to 1$ and the required bound reads $0\ge 0$. So, from now on, we suppose that $x\ne 0$, and we have two cases: $x\in (-\infty ,0)$ and $x\in (0,\infty )$.

If $x\in (-\infty ,0)$ we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta )\subset O\cap (-\infty ,0).\]
Then (note that $t+\frac{x\pm \delta }{{a_{n}}n}>0$ for n large enough)
\[\begin{aligned}{}& P({a_{n}}n({Z_{n}}-t)\in O)\ge P({a_{n}}n({Z_{n}}-t)\in (x-\delta ,x+\delta ))\\ {} & \hspace{1em}=P\left({Z_{n}} < t+\frac{x+\delta }{{a_{n}}n}\right)-P\left({Z_{n}}\le t+\frac{x-\delta }{{a_{n}}n}\right)\\ {} & \hspace{1em}=\frac{\beta }{{(F(t))^{n}}}\left({\left(F\left(t+\frac{x+\delta }{{a_{n}}n}\right)\right)^{n}}-{\left(F\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}\right)\\ {} & \hspace{1em}=\frac{\beta }{{(F(t))^{n}}}\underset{={F^{\prime }}(t-)\frac{2\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)}{\underbrace{\left(F\left(t+\frac{x+\delta }{{a_{n}}n}\right)-F\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)}}\\ {} & \hspace{2em}\times \underset{\ge {\left(F\left(t+\frac{x+\delta }{{a_{n}}n}\right)\right)^{n-1}}}{\underbrace{{\sum \limits_{k=0}^{n-1}}{\left(F\left(t+\frac{x+\delta }{{a_{n}}n}\right)\right)^{k}}{\left(F\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)^{n-1-k}}}},\end{aligned}\]
and therefore (in the final step we use the condition ${a_{n}}\log n\to 0$ in (24))
\[\begin{aligned}{}& \underset{n\to \infty }{\liminf }{a_{n}}\log P({a_{n}}n({Z_{n}}-t)\in O)\\ {} & \hspace{1em}\ge \underset{n\to \infty }{\liminf }{a_{n}}\log \left\{\left({F^{\prime }}(t-)\frac{2\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right){\left(\frac{F\left(t+\frac{x+\delta }{{a_{n}}n}\right)}{F(t)}\right)^{n-1}}\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{{a_{n}}\log \left({F^{\prime }}(t-)\frac{2\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right)+{a_{n}}(n-1)\log \left(\frac{F\left(t+\frac{x+\delta }{{a_{n}}n}\right)}{F(t)}\right)\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{-{a_{n}}\log n+{a_{n}}(n-1)\log \left(\frac{F(t)+{F^{\prime }}(t-)\frac{x+\delta }{{a_{n}}n}+o(1/({a_{n}}n))}{F(t)}\right)\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{-{a_{n}}\log n+{a_{n}}(n-1)\log \left(1+\frac{{F^{\prime }}(t-)}{F(t)}\frac{x+\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right)\right\}\\ {} & \hspace{1em}=\frac{{F^{\prime }}(t-)}{F(t)}(x+\delta ).\end{aligned}\]
So we obtain (8) for $x\in (-\infty ,0)$ by letting δ go to zero.

If $x\in (0,\infty )$ we take $\delta >0$ small enough to have
\[ (x-\delta ,x+\delta )\subset O\cap (0,\infty ).\]
Then
\[\begin{aligned}{}& P({a_{n}}n({Z_{n}}-t)\in O)\ge P({a_{n}}n({Z_{n}}-t)\in (x-\delta ,x+\delta ))\\ {} & \hspace{1em}=P\left({Z_{n}} < t+\frac{x+\delta }{{a_{n}}n}\right)-P\left({Z_{n}}\le t+\frac{x-\delta }{{a_{n}}n}\right)\\ {} & \hspace{1em}=\frac{1-\beta }{{(1-G(t))^{n}}}\left({\left(1-G\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)^{n}}-{\left(1-G\left(t+\frac{x+\delta }{{a_{n}}n}\right)\right)^{n}}\right)\\ {} & \hspace{1em}=\frac{1-\beta }{{(1-G(t))^{n}}}\underset{={G^{\prime }}(t+)\frac{2\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)}{\underbrace{\left(G\left(t+\frac{x+\delta }{{a_{n}}n}\right)-G\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)}}\\ {} & \hspace{2em}\times \underset{\ge {\left(1-G\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)^{n-1}}}{\underbrace{{\sum \limits_{k=0}^{n-1}}{\left(1-G\left(t+\frac{x-\delta }{{a_{n}}n}\right)\right)^{k}}{\left(1-G\left(t+\frac{x+\delta }{{a_{n}}n}\right)\right)^{n-1-k}}}},\end{aligned}\]
and therefore (in the final step we use the condition ${a_{n}}\log n\to 0$ in (24))
\[\begin{aligned}{}& \underset{n\to \infty }{\liminf }{a_{n}}\log P({a_{n}}n({Z_{n}}-t)\in O)\\ {} & \hspace{1em}\ge \underset{n\to \infty }{\liminf }{a_{n}}\log \left\{\left({G^{\prime }}(t+)\frac{2\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right){\left(\frac{1-G\left(t+\frac{x-\delta }{{a_{n}}n}\right)}{1-G(t)}\right)^{n-1}}\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{\hspace{-0.1667em}{a_{n}}\log \left(\hspace{-0.1667em}{G^{\prime }}(t+)\frac{2\delta }{{a_{n}}n}+o\left(\hspace{-0.1667em}\frac{1}{{a_{n}}n}\hspace{-0.1667em}\right)\hspace{-0.1667em}\right)+{a_{n}}(n-1)\log \left(\hspace{-0.1667em}\frac{1-G\left(\hspace{-0.1667em}t+\frac{x-\delta }{{a_{n}}n}\hspace{-0.1667em}\right)}{1-G(t)}\hspace{-0.1667em}\right)\hspace{-0.1667em}\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{-{a_{n}}\log n+{a_{n}}(n-1)\log \left(\frac{1-G(t)-{G^{\prime }}(t+)\frac{x-\delta }{{a_{n}}n}+o(\frac{1}{{a_{n}}n})}{1-G(t)}\right)\right\}\\ {} & \hspace{1em}=\underset{n\to \infty }{\liminf }\left\{-{a_{n}}\log n+{a_{n}}(n-1)\log \left(1-\frac{{G^{\prime }}(t+)}{1-G(t)}\frac{x-\delta }{{a_{n}}n}+o\left(\frac{1}{{a_{n}}n}\right)\right)\right\}\\ {} & \hspace{1em}=-\frac{{G^{\prime }}(t+)}{1-G(t)}(x-\delta ).\end{aligned}\]
So we obtain (8) for $x\in (0,\infty )$ by letting δ go to zero.
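To close, a small Monte Carlo sketch for the hypothetical choice F = Exp(1), G = Exp(2), $t=1$, $\beta =1/2$ (the sampler and all names are ours): ${Z_{n}}$ can be simulated by inverting its distribution function branch by branch, and the empirical mean of $n({Z_{n}}-t)$ is then close to the mean of H, namely $(1-\beta )\frac{1-G(t)}{{G^{\prime }}(t+)}-\beta \frac{F(t)}{{F^{\prime }}(t-)}$.

```python
import math
import random

T, BETA = 1.0, 0.5

def sample_Zn(n, rng):
    """Inverse-transform sampling of Z_n for F = Exp(1), G = Exp(2) (hypothetical choice)."""
    u = rng.random()
    if u <= BETA:
        # Solve beta * (F(x)/F(t))^n = u on [0, t], with F(x) = 1 - exp(-x).
        f = (1.0 - math.exp(-T)) * (u / BETA) ** (1.0 / n)
        return -math.log(1.0 - f)
    # Solve 1 - (1-beta) * ((1-G(x))/(1-G(t)))^n = u on (t, oo), with G(x) = 1 - exp(-2x).
    return T - math.log((1.0 - u) / (1.0 - BETA)) / (2.0 * n)

rng = random.Random(12345)
n, N = 10**4, 200_000
mean_hat = sum(n * (sample_Zn(n, rng) - T) for _ in range(N)) / N
# Mean of H: (1-beta)*(1-G(t))/G'(t+) - beta*F(t)/F'(t-) = 0.25 - 0.5*(e - 1) here.
mean_H = 0.25 - 0.5 * (math.e - 1.0)
print(round(mean_hat, 4), round(mean_H, 4))
```

The agreement reflects the weak convergence in $\mathbf{R}\mathbf{2}$ of Assertion 6.1; the discrepancy is of the order of the Monte Carlo standard error.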