1 Introduction
Let $X_{1},X_{2},\dots ,X_{n}$ be independent and identically distributed (iid) random variables with common distribution function (df) F and $M_{n}=\max (X_{1},X_{2},\dots ,X_{n})$, $n\ge 1$. Then F is said to belong to the max domain of attraction of a nondegenerate df H under power normalization (denoted by $F\in \mathcal{D}_{p}(H)$) if, for $n\ge 1$, there exist constants $\alpha _{n}>0$, $\beta _{n}>0$, such that
(1)
\[\underset{n\to \infty }{\lim }\hspace{0.2778em}P\bigg({\bigg|\frac{M_{n}}{\alpha _{n}}\bigg|}^{\frac{1}{\beta _{n}}}\hspace{0.2778em}\operatorname{sign}(M_{n})\le x\bigg)=H(x),\hspace{1em}x\in \mathcal{C}(H),\]
the set of continuity points of H, where $\operatorname{sign}(x)=-1$, 0, or 1 according as $x<0$, $=0$, or $>0$. The limit df H in (1) is called a p-max stable law, and we refer to [5] for details.
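For instance, if F is the uniform df on $(0,1)$, then (1) holds with $\alpha _{n}=1$ and $\beta _{n}=1/n$, and the limit is the uniform df itself, since $P({M_{n}^{n}}\le x)=P(M_{n}\le {x}^{1/n})=x$. A quick Monte Carlo sketch of this convergence (Python with NumPy; the tooling, sample sizes, and seed are illustrative assumptions, not part of the paper):

```python
import numpy as np

# Monte Carlo check of (1) for F uniform on (0,1): with alpha_n = 1 and
# beta_n = 1/n, the power-normalized maximum |M_n/alpha_n|^(1/beta_n) = M_n^n
# has df P(M_n^n <= x) = P(M_n <= x^(1/n)) = x, i.e. uniform on (0,1).
rng = np.random.default_rng(0)
n, N = 100, 50_000                      # sample size and number of replications
M = rng.random((N, n)).max(axis=1)      # N realizations of M_n
Z = M ** n                              # power-normalized maxima

# empirical df of Z at a few points versus the uniform df H(x) = x
for x in (0.25, 0.5, 0.75):
    print(x, np.mean(Z <= x))
```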
The p-max stable laws. Two dfs F and G are said to be of the same p-type if $F(x)=G(A|x{|}^{B}\operatorname{sign}(x))$, $x\in R$, for some positive constants A, B. The p-max stable laws are p-types of one of the following six laws with parameter $\alpha >0$:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle H_{1,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x\le 1,\\{} \exp \{-{(\log x)}^{-\alpha }\}& \text{if}\hspace{2.5pt}1<x;\end{array}\right.\\{} \displaystyle H_{2,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<0,\\{} \exp \{-{(-\log x)}^{\alpha }\}& \text{if}\hspace{2.5pt}0\le x<1,\\{} 1& \text{if}\hspace{2.5pt}1\le x;\end{array}\right.\\{} \displaystyle H_{3}(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x\le 0,\\{} {e}^{-\frac{1}{x}}& \text{if}\hspace{2.5pt}0<x;\end{array}\right.\\{} \displaystyle H_{4,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x\le -1,\\{} \exp \{-{(-\log (-x))}^{-\alpha }\}& \text{if}\hspace{2.5pt}-1<x<0,\\{} 1& \text{if}\hspace{2.5pt}0\le x;\end{array}\right.\\{} \displaystyle H_{5,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}\exp \{-{(\log (-x))}^{\alpha }\}& \text{if}\hspace{2.5pt}x<-1,\\{} 1& \text{if}\hspace{2.5pt}-1\le x;\end{array}\right.\\{} \displaystyle H_{6}(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}{e}^{x}& \text{if}\hspace{2.5pt}x\le 0,\\{} 1& \text{if}\hspace{2.5pt}0<x.\end{array}\right.\end{array}\]
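Each of these six laws H is p-max stable in the sense that ${H}^{n}(\alpha _{n}|x{|}^{\beta _{n}}\operatorname{sign}(x))=H(x)$ for suitable norming constants; for example, $H_{2,1}$ with $\alpha _{n}=1$, $\beta _{n}=1/n$, and $H_{3}$ with $\alpha _{n}=n$, $\beta _{n}=1$. A minimal numerical check (Python; an illustrative sketch, not part of the paper):

```python
import math

# p-max stability: H(a_n * |x|**b_n * sign(x))**n = H(x) for suitable a_n, b_n
def H21(x):                       # H_{2,1}: the uniform df on (0,1)
    return 0.0 if x < 0 else min(x, 1.0)

def H3(x):                        # H_3(x) = exp(-1/x), x > 0
    return 0.0 if x <= 0 else math.exp(-1.0 / x)

n = 7
for x in (0.1, 0.4, 0.9):
    # H_{2,1}: a_n = 1, b_n = 1/n, so H_{2,1}(x**(1/n))**n = x
    print(H21(x ** (1.0 / n)) ** n, x)
for x in (0.5, 2.0, 10.0):
    # H_3: a_n = n, b_n = 1, so H_3(n*x)**n = H_3(x)
    print(H3(n * x) ** n, H3(x))
```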
Note that $H_{2,1}(\cdot )$ is the uniform distribution over $(0,1)$. Necessary and sufficient conditions for a df F to belong to $\mathcal{D}_{p}(H)$ for each of the six p-types of p-max stable laws were given in [5] (see also [3]). As in [8], we define the generalized log-Pareto distribution (glogPd) as $W(x)=1+\log H(x)$ for x with $1/e\le H(x)\le 1$, where H is a p-max stable law, and the distribution functions W are given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle W_{1,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<e,\\{} 1-{(\log x)}^{-\alpha }& \text{if}\hspace{2.5pt}e\le x;\end{array}\right.\\{} \displaystyle W_{2,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<{e}^{-1},\\{} 1-{(-\log x)}^{\alpha }& \text{if}\hspace{2.5pt}{e}^{-1}\le x<1,\\{} 1& \text{if}\hspace{2.5pt}1\le x;\end{array}\right.\\{} \displaystyle W_{3}(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x\le 1,\\{} 1-\frac{1}{x}& \text{if}\hspace{2.5pt}1<x;\end{array}\right.\\{} \displaystyle W_{4,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<-{e}^{-1},\\{} 1-{(-\log (-x))}^{-\alpha }& \text{if}\hspace{2.5pt}-{e}^{-1}\le x<0,\\{} 1& \text{if}\hspace{2.5pt}0\le x;\end{array}\right.\\{} \displaystyle W_{5,\alpha }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<-e,\\{} 1-{(\log (-x))}^{\alpha }& \text{if}\hspace{2.5pt}-e\le x<-1,\\{} 1& \text{if}\hspace{2.5pt}-1\le x;\end{array}\right.\\{} \displaystyle W_{6}(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<-1,\\{} 1+x& \text{if}\hspace{2.5pt}-1\le x\le 0,\\{} 1& \text{if}\hspace{2.5pt}0<x;\end{array}\right.\end{array}\]
and the respective probability density functions (pdfs) are the following:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle w_{1,\alpha }(x)& \displaystyle =\frac{\alpha }{x}{(\log x)}^{-(\alpha +1)},\hspace{1em}x\ge e;\\{} \displaystyle w_{2,\alpha }(x)& \displaystyle =\frac{\alpha }{x}{(-\log x)}^{(\alpha -1)},\hspace{1em}{e}^{-1}\le x<1;\\{} \displaystyle w_{3}(x)& \displaystyle =\frac{1}{{x}^{2}},\hspace{1em}x>1;\\{} \displaystyle w_{4,\alpha }(x)& \displaystyle =\frac{-\alpha }{x}{\big(-\log (-x)\big)}^{-(\alpha +1)},\hspace{1em}-{e}^{-1}\le x<0;\\{} \displaystyle w_{5,\alpha }(x)& \displaystyle =\frac{-\alpha }{x}{\big(\log (-x)\big)}^{(\alpha -1)},\hspace{1em}-e\le x<-1;\\{} \displaystyle w_{6}(x)& \displaystyle =1,\hspace{1em}-1\le x\le 0;\end{array}\]
where the pdfs are equal to 0 for the remaining values of x. See also [1] and [9] for more details on generalized log-Pareto distributions. The von Mises-type sufficient conditions for p-max stable laws were obtained in [6].
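As a sanity check, each pdf above integrates to 1 over its support. A short numerical verification for a few representative cases with $\alpha =2$ (Python with SciPy quadrature; an illustrative sketch, not part of the paper):

```python
import math
import numpy as np
from scipy.integrate import quad

a = 2.0  # the parameter alpha > 0

# three of the densities listed above, restricted to their supports
w2 = lambda x: (a / x) * (-math.log(x)) ** (a - 1)   # on [1/e, 1)
w3 = lambda x: 1.0 / x ** 2                          # on (1, inf)
w5 = lambda x: (-a / x) * math.log(-x) ** (a - 1)    # on [-e, -1)

vals = [quad(w2, 1.0 / math.e, 1.0)[0],
        quad(w3, 1.0, np.inf)[0],
        quad(w5, -math.e, -1.0)[0],
        # w_{1,alpha}: substituting t = log x turns the integral over [e, inf)
        # into int_1^inf a * t**(-(a+1)) dt, avoiding the slowly decaying tail
        quad(lambda t: a * t ** (-(a + 1)), 1.0, np.inf)[0]]
print(vals)  # each entry should be close to 1
```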
Von Mises-type parameterization of generalized log-Pareto distributions. The von Mises-type parameterization for generalized log-Pareto distributions is given by
\[\begin{array}{r@{\hskip0pt}l}\displaystyle V_{1}(x)& \displaystyle =1-{\{1+\gamma \log x\}}^{-1/\gamma },\\{} & \displaystyle \hspace{1em}x>0,(1+\gamma \log x)>0,\hspace{2.5pt}\text{whenever}\hspace{2.5pt}\gamma \ge 0,\hspace{2.5pt}\text{and}\\{} \displaystyle V_{2}(x)& \displaystyle =1-{\big\{1-\gamma \log (-x)\big\}}^{-1/\gamma },\\{} & \displaystyle \hspace{1em}x<0,\big(1-\gamma \log (-x)\big)>0,\hspace{2.5pt}\text{whenever}\hspace{2.5pt}\gamma \le 0,\end{array}\]
where the case $\gamma =0$ is interpreted as the limit as $\gamma \to 0$. Let $v_{1}$ and $v_{2}$ denote the densities of $V_{1}$ and $V_{2}$, respectively. The dfs of generalized log-Pareto distributions can be regained from $V_{1}$ and $V_{2}$ by the following identities:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle W_{1,1/\gamma }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<e,\\{} V_{1}({e}^{-1/\gamma }{x}^{1/\gamma })& \text{if}\hspace{2.5pt}e\le x,\gamma >0;\end{array}\right.\\{} \displaystyle W_{2,-1/\gamma }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<{e}^{-1},\\{} V_{1}({e}^{-1/\gamma }{x}^{-1/\gamma })& \text{if}\hspace{2.5pt}{e}^{-1}\le x<1,\\{} 1& \text{if}\hspace{2.5pt}1<x,\gamma >0;\end{array}\right.\\{} \displaystyle W_{4,1/\gamma }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<-{e}^{-1},\\{} V_{2}(-{e}^{1/\gamma }{(-x)}^{1/\gamma })& \text{if}\hspace{2.5pt}-{e}^{-1}\le x<0,\\{} 1& \text{if}\hspace{2.5pt}0<x,\gamma <0;\end{array}\right.\\{} \displaystyle W_{5,-1/\gamma }(x)& \displaystyle =\left\{\begin{array}{l@{\hskip10.0pt}l}0& \text{if}\hspace{2.5pt}x<-e,\\{} V_{2}(-{e}^{1/\gamma }{(-x)}^{1/\gamma })& \text{if}\hspace{2.5pt}-e\le x<-1,\\{} 1& \text{if}\hspace{2.5pt}-1<x,\gamma <0.\end{array}\right.\end{array}\]
Note that $\lim _{\gamma \to 0}V_{1}(x)=W_{3}(x)$, $x>1$, and $\lim _{\gamma \to 0}V_{2}(x)=W_{6}(x)$, $x\in [-1,0]$.
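Both the recovery identity for $W_{1,1/\gamma }$ and the limit relations above are easy to verify numerically; a small sketch (Python; illustrative, not part of the paper), using $\gamma =1$, for which $W_{1,1}(x)=1-1/\log x$, $x\ge e$:

```python
import math

def V1(x, g):
    # von Mises-type df V_1 for gamma = g > 0 (requires x > 0, 1 + g*log(x) > 0)
    return 1.0 - (1.0 + g * math.log(x)) ** (-1.0 / g)

# identity W_{1,1/gamma}(x) = V_1(e^{-1/gamma} * x^{1/gamma}); with gamma = 1
# this reads W_{1,1}(x) = 1 - 1/log(x) for x >= e
g = 1.0
for x in (math.e, 5.0, 50.0):
    print(V1(math.exp(-1.0 / g) * x ** (1.0 / g), g), 1.0 - 1.0 / math.log(x))

# limit gamma -> 0: V_1(x) approaches W_3(x) = 1 - 1/x, x > 1
for x in (1.5, 3.0, 10.0):
    print(V1(x, 1e-8), 1.0 - 1.0 / x)
```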
Graphical representation of generalized log-Pareto pdfs. In Fig. 1, observe that the pdfs $v_{1}$ approach the standard Pareto pdf as $\gamma \downarrow 0$, and the pdfs $v_{2}$ approach the standard uniform pdf as $\gamma \uparrow 0$.
The Hellinger distance, also called the Bhattacharyya distance, quantifies the similarity between two probability distributions; it was defined in terms of the Hellinger integral introduced in [4]. In view of statistical applications, the distance between the exact and the limiting distributions is measured using the Hellinger distance. Inference procedures based on the Hellinger distance provide alternatives to likelihood-based methods. Minimum Hellinger distance estimation with inlier modification was studied in [7]. In [10], the weak convergence of distributions of extreme order statistics (defined later in Section 2) was examined.
In the next section, we study the variational and Hellinger distances between the exact and asymptotic distributions of power normalized partial maxima of a random sample. The results obtained here are similar to those in [10].
2 Hellinger and variational distances for sample maxima
We recall a few definitions for convenience.
Weak domain of attraction. If a df F satisfies (1) for some norming constants and nondegenerate df H, then F is said to belong to the weak domain of attraction of H.
Strong domain of attraction. A df F is said to belong to the strong domain of attraction of a nondegenerate df H if
\[\underset{n\to \infty }{\lim }\hspace{0.2778em}\underset{B}{\sup }\bigg|P\bigg({\bigg|\frac{M_{n}}{\alpha _{n}}\bigg|}^{1/\beta _{n}}\operatorname{sign}(M_{n})\in B\bigg)-H(B)\bigg|=0,\]
where the sup is taken over all Borel sets B on R.
Limit law for the kth largest order statistic [5]. Let $X_{1:n}\le \cdots \le X_{n:n}$ denote the order statistics from a random sample $X_{1},\dots ,X_{n}$, and for $i=1,\dots ,6$, let
\[\underset{n\to \infty }{\lim }P\bigg({\bigg|\frac{X_{n:n}}{\alpha _{n}}\bigg|}^{1/\beta _{n}}\operatorname{sign}(X_{n:n})\le x\bigg)=H_{i,\alpha }(x).\]
Then it is well known that, for integer $k\ge 1$,
(2)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \underset{n\to \infty }{\lim }P\bigg({\bigg|\frac{X_{n-k+1:n}}{\alpha _{n}}\bigg|}^{1/\beta _{n}}\operatorname{sign}(X_{n-k+1:n})\le x\bigg)& \displaystyle =H_{i,\alpha }(x)\sum \limits_{j=0}^{k-1}\frac{{(-\log H_{i,\alpha }(x))}^{j}}{j!}\\{} & \displaystyle =H_{i,\alpha ,k}(x),\hspace{1em}\text{say.}\end{array}\]
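The sum in (2) is a Poisson tail probability: with $\lambda =-\log H_{i,\alpha }(x)$, the right-hand side equals $P(\text{Poisson}(\lambda )\le k-1)$, i.e. the regularized upper incomplete gamma function $Q(k,\lambda )$. For $H_{2,1}$ this gives $H_{2,1,k}(x)=Q(k,-\log x)$, which can be checked numerically (Python with SciPy; an illustrative sketch, not part of the paper):

```python
import math
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(k, lam)

def H21k(x, k):
    # right-hand side of (2) specialized to H_{2,1}(x) = x on (0, 1)
    lam = -math.log(x)
    return x * sum(lam ** j / math.factorial(j) for j in range(k))

# the Poisson partial sum equals Q(k, lam), so H_{2,1,k}(x) = Q(k, -log x)
for k in (1, 2, 5):
    for x in (0.1, 0.5, 0.9):
        print(k, x, H21k(x, k), gammaincc(k, -math.log(x)))
```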
Hellinger distance [10]. Given dfs F and G with Lebesgue densities f and g, the Hellinger distance between F and G, denoted ${H}^{\ast }(F,G)$, is defined as
(3)
\[{H}^{\ast }(F,G)={\Bigg({\int _{-\infty }^{\infty }}{\big({f}^{1/2}(x)-{g}^{1/2}(x)\big)}^{2}dx\Bigg)}^{1/2}.\]
The results in this section will be proved for the p-max stable law $H_{2,1}(\cdot )$, and the other cases can be deduced by using the transformation $T(x)=T_{i,\alpha }(x)$ given by $T_{i,\alpha }(x)=({H_{i,\alpha }^{-1}}\circ H_{2,1})(x)={H_{i,\alpha }^{-1}}(x)$, $x\in (0,1)$, with $T_{1,\alpha }(x)=\exp ({(-\log x)}^{-1/\alpha })$, $T_{2,\alpha }(x)=\exp (-{(-\log x)}^{1/\alpha })$, $T_{3}(x)=-\frac{1}{\log x}$, $T_{4,\alpha }(x)=-\exp (-{(-\log x)}^{-1/\alpha })$, $T_{5,\alpha }(x)=-\exp ({(-\log x)}^{1/\alpha })$, and $T_{6}(x)=\log x$.
We assume that the underlying pdf f is of the form $f(x)=w(x){e}^{g(x)}$, where $g(x)\to 0$ as $x\to r(H)=\sup \{x:H(x)<1\}$, the right extremity of H. Equivalently, we may use the representation $f(x)=w(x)(1+{g}^{\ast }(x))$ by writing $f(x)=w(x){e}^{g(x)}=w(x)(1+({e}^{g(x)}-1))$, $g(x)\to 0$ as $x\to r(F)$. The following result is on the Hellinger distance, and its proof is similar to that of Theorem 5.2.5 of [10] and hence is omitted.
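As a concrete illustration of (3) (a generic example, not taken from the paper): for two unit-variance normal densities with means 0 and 1, $\int \sqrt{fg}\hspace{0.1667em}dx={e}^{-1/8}$, so ${H}^{\ast }(F,G)=\sqrt{2(1-{e}^{-1/8})}\approx 0.4848$; a numerical check (Python with SciPy):

```python
import math
import numpy as np
from scipy.integrate import quad

# Hellinger distance (3) between the N(0,1) and N(1,1) densities; the closed
# form is sqrt(2 * (1 - exp(-1/8))) since int sqrt(f*g) dx = exp(-1/8) here
f = lambda x: math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)
g = lambda x: math.exp(-(x - 1) ** 2 / 2) / math.sqrt(2 * math.pi)

h2, _ = quad(lambda x: (math.sqrt(f(x)) - math.sqrt(g(x))) ** 2, -np.inf, np.inf)
hstar = math.sqrt(h2)
print(hstar)  # about 0.4848
```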
Theorem 1.
Let H be a p-max stable law as in (1), and F be an absolutely continuous df with pdf f such that $f(x)>0$ for $x_{0}<x<r(F)$ and $f(x)=0$ otherwise. Assume that $r(F)=r(H)$. Then
(4)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {H}^{\ast }({F}^{n},H)& \displaystyle \le \Bigg\{{\int _{x_{0}}^{r(H)}}\bigg(\frac{nf(x)}{w(x)}-1-\log \bigg(\frac{nf(x)}{w(x)}\bigg)\bigg)dH(x)\\{} & \displaystyle \hspace{1em}+2H(x_{0})-H(x_{0})\log H(x_{0})\Bigg\}{}^{1/2}+\frac{c}{n},\end{array}\]
where $c>0$ is a universal constant.
Theorem 2.
Suppose that H is a p-max stable law as in (1), and let $w(x)$ and $T_{i,\alpha }(x)$ be the corresponding auxiliary functions with $w(x)=h(x)/H(x)$ and $T_{i,\alpha }(x)={H_{i,\alpha }^{-1}}(x)$, where h denotes the pdf of H. Let the pdf f of the df F have the representation $f(x)=w(x){e}^{g_{i}(x)}$, $T(x_{0})<x<r(H_{i})$, for some i, with $f(x)=0$ if $x>r(H_{i})$, where $0<x_{0}<1$, and let $g_{i}$ satisfy the condition
(5)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \big|g_{i}(x)\big|& \displaystyle \le \left\{\begin{array}{l@{\hskip10.0pt}l}L{(\log x)}^{-\alpha \delta }\hspace{1em}& \text{if}\hspace{2.5pt}i=1,\\{} L{(-\log x)}^{\alpha \delta }\hspace{1em}& \text{if}\hspace{2.5pt}i=2,\\{} L\frac{1}{{x}^{\delta }}\hspace{1em}& \text{if}\hspace{2.5pt}i=3,\\{} L{(-\log (-x))}^{-\alpha \delta }\hspace{1em}& \text{if}\hspace{2.5pt}i=4,\\{} L{(\log (-x))}^{\alpha \delta }\hspace{1em}& \text{if}\hspace{2.5pt}i=5,\\{} L|x{|}^{\delta }\hspace{1em}& \text{if}\hspace{2.5pt}i=6,\end{array}\right.\end{array}\]
where L, δ are positive constants. If $F_{n}(x)=F(A_{n}|x{|}^{B_{n}}\operatorname{sign}(x))$ with
\[A_{n}=\left\{\begin{array}{l@{\hskip10.0pt}l}1\hspace{1em}& \text{if}\hspace{2.5pt}i=1,2,3,4,\\{} n\hspace{1em}& \text{if}\hspace{2.5pt}i=5,\\{} 1/n\hspace{1em}& \text{if}\hspace{2.5pt}i=6,\end{array}\right.\hspace{1em}\text{and}\hspace{1em}B_{n}=\left\{\begin{array}{l@{\hskip10.0pt}l}{n}^{-1/\alpha }\hspace{1em}& \text{if}\hspace{2.5pt}i=2,4,\\{} {n}^{1/\alpha }\hspace{1em}& \text{if}\hspace{2.5pt}i=1,3,\\{} 1\hspace{1em}& \text{if}\hspace{2.5pt}i=5,6,\end{array}\right.\]
then
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {H}^{\ast }\big({F_{n}^{n}},H\big)& \displaystyle \le \left\{\begin{array}{l@{\hskip10.0pt}l}D{n}^{-\delta }\hspace{1em}& \text{if}\hspace{2.5pt}0<\delta \le 1,\\{} D{n}^{-1}\hspace{1em}& \text{if}\hspace{2.5pt}\delta >1,\end{array}\right.\end{array}\]
where D is a constant depending only on $x_{0},L$, and δ.
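Before turning to the proof, a numerical sketch of this rate (Python with SciPy; everything below is an illustrative assumption, not part of the paper): take $i=2$, $\alpha =1$ and the hypothetical density $f(x)=w_{2,1}(x){e}^{g_{2}(x)}$ with $g_{2}(x)={(\log x)}^{2}$, normalized on $({e}^{-\varLambda },1)$, so that $\delta =2$ and the predicted rate is $D{n}^{-1}$. The df of the power-normalized maximum is $F{({x}^{1/n})}^{n}$, and its Hellinger distance to $H_{2,1}$ (the uniform df) shrinks as n grows:

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical density in the class of Theorem 2 (i = 2, alpha = 1):
# f(x) = w_{2,1}(x) * exp(g2(x)) with g2(x) = (log x)^2, i.e. delta = 2,
# supported on (exp(-Lam), 1) with Lam chosen so that f integrates to 1.
Q = lambda s: quad(lambda t: math.exp(t * t), 0.0, s)[0]
Lam = brentq(lambda s: Q(s) - 1.0, 0.1, 1.0)

def F(y):                       # df of f: F(y) = 1 - Q(-log y)
    return 1.0 - Q(-math.log(y))

def f(y):                       # f(y) = exp((log y)^2) / y
    return math.exp(math.log(y) ** 2) / y

def hellinger_to_uniform(n):
    # Hellinger distance between the df of the power-normalized maximum,
    # F(x^(1/n))^n, and the uniform df H_{2,1} on (0,1)
    lo = math.exp(-n * Lam)     # left endpoint of the support of F(x^(1/n))^n
    def p(x):                   # pdf of F(x^(1/n))^n
        y = x ** (1.0 / n)
        return max(F(y), 0.0) ** (n - 1) * f(y) * x ** (1.0 / n - 1.0)
    h2, _ = quad(lambda x: (math.sqrt(p(x)) - 1.0) ** 2, lo, 1.0, limit=200)
    return math.sqrt(h2 + lo)   # the gap (0, lo) contributes lo to H*^2

rates = {n: hellinger_to_uniform(n) for n in (5, 10, 20, 40)}
for n, h in rates.items():
    print(n, h)                 # decreasing in n, roughly like 1/n
```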
Proof.
Without loss of generality, we may assume that $H=H_{2,1}$. The other cases can be deduced by using the transformation $T(x)=T_{i,\alpha }(x)$. We apply Theorem 1 with $x_{0,n}={x_{0}^{n}}$, $\frac{1}{2}<x_{0}<1$. Note that the term $2H_{2,1}({x_{0}^{n}})-H_{2,1}({x_{0}^{n}})\log H_{2,1}({x_{0}^{n}})={x_{0}^{n}}-{x_{0}^{n}}\log {x_{0}^{n}}$ tends to 0 exponentially fast and can be neglected. Putting $f_{n}(x)=f({x}^{1/n})/n$, since $g_{2}$ is bounded on $(x_{0},1)$, we have from (4)
\[{H}^{\ast }\big({F_{n}^{n}},H\big)\le {\Bigg\{{\int _{{x_{0}^{n}}}^{1}}\bigg(\frac{nf_{n}(x)}{w_{2,1}(x)}-1-\log \bigg(\frac{nf_{n}(x)}{w_{2,1}(x)}\bigg)\bigg)dH_{2,1}(x)\Bigg\}}^{1/2}+\frac{c}{n}.\]
Then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle {\int _{{x_{0}^{n}}}^{1}}\left(\frac{nf_{n}(x)}{w_{2,1}(x)}-1-\log \left(\frac{nf_{n}(x)}{w_{2,1}(x)}\right)\right)dH_{2,1}(x)\\{} & \displaystyle \hspace{1em}={\int _{{x_{0}^{n}}}^{1}}\bigg(\frac{f({x}^{1/n})}{w_{2,1}(x)}-1-\log \bigg(\frac{f({x}^{1/n})}{w_{2,1}(x)}\bigg)\bigg)dH_{2,1}(x)\\{} & \displaystyle \hspace{1em}={\int _{{x_{0}^{n}}}^{1}}\big({e}^{g_{2}({x}^{1/n})}-1-g_{2}\big({x}^{1/n}\big)\big)dx\\{} & \displaystyle \hspace{1em}\le \bigg(\frac{1}{2}+\frac{L(-\log {x_{0}^{1/n}})}{3!}+\cdots \hspace{0.1667em}\bigg){\int _{{x_{0}^{n}}}^{1}}{\big(g_{2}\big({x}^{1/n}\big)\big)}^{2}dx\\{} & \displaystyle \hspace{1em}\le {D}^{\ast }{\int _{0}^{1}}{\big(g_{2}\big({x}^{1/n}\big)\big)}^{2}dx\\{} & \displaystyle \hspace{1em}\le {D}^{\ast }\frac{{L}^{2}}{2}{\int _{0}^{1}}{\big(-\log {x}^{1/n}\big)}^{2\delta }dx\\{} & \displaystyle \hspace{1em}={D}^{\ast }\frac{{L}^{2}}{2}{n}^{-2\delta }{\int _{0}^{\infty }}{e}^{-y}{y}^{2\delta }dy\\{} & \displaystyle \hspace{1em}={D}^{\ast }\frac{{L}^{2}}{2}{n}^{-2\delta }\varGamma (2\delta +1),\end{array}\]
and ${H}^{\ast }({F_{n}^{n}},H)\le {({D}^{\ast }\frac{{L}^{2}}{2}{n}^{-2\delta }\varGamma (2\delta +1))}^{1/2}+\frac{c}{n}={(\frac{{D}^{\ast }}{2})}^{1/2}L{n}^{-\delta }{(\varGamma (2\delta +1))}^{1/2}+c{n}^{-1}$. Hence,
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {H}^{\ast }\big({F_{n}^{n}},H\big)& \displaystyle \le \left\{\begin{array}{l@{\hskip10.0pt}l}D{n}^{-\delta }\hspace{1em}& \text{if}\hspace{2.5pt}0<\delta \le 1,\\{} D{n}^{-1}\hspace{1em}& \text{if}\hspace{2.5pt}\delta >1,\end{array}\right.\end{array}\]
where $D={(\frac{{D}^{\ast }}{2})}^{1/2}L\sqrt{\varGamma (2\delta +1)}$ is a constant depending only on $x_{0}$, L, and δ, and Γ is the gamma function.  □
Theorem 4 below gives the variational distance between exact and approximate distributions of power normalized partial maxima. To prove the result, we use the next result, the proof of which is similar to that of Theorem 5.5.4 of [10] and hence is omitted.
Theorem 3.
Let $H_{j}$, $j=1,\dots ,6$, denote the p-max stable laws as in (1), and $H=H_{j,\alpha ,k}$ denote the limit laws of the power normalized kth largest order statistic as in (2). Let F be an absolutely continuous df with pdf f such that $f(x)>0$ for $x_{0}<x<r(F)$. Let $r(F)=r(H)$ and $w(x)=h(x)/H(x)$ on the support of H, where h is the pdf of H. Then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{B}{\sup }\big|P\big(\big(X_{n:n},\dots ,X_{n-k+1:n}\big)\in B\big)-H_{k}(B)\big|\\{} & \displaystyle \hspace{1em}\le \bigg(\sum \limits_{j=1}^{k}{\int _{x_{0}}^{r(H)}}\left(\frac{nf(x)}{w(x)}-1-\log \left(\frac{nf(x)}{w(x)}\right)\right)dH_{j}(x)+H_{k}(x_{0})+kH_{k-1}(x_{0})\\{} & \displaystyle \hspace{2em}+\sum \limits_{j=1}^{k-1}\int _{x_{j}>x_{0},x_{k}<x_{0}}\log \left(\frac{nf(x_{j})}{w(x_{j})}\right)dH_{k}(x){\bigg)}^{1/2}+\frac{ck}{n}.\end{array}\]
Theorem 4.
Let $H_{j}$, $j=1,\dots ,6$, denote the p-max stable laws as in (1), and let $w(x)$ and $T_{i,\alpha }$ be the corresponding auxiliary functions with $w(x)=h(x)/H(x)$ and $T_{i,\alpha }(x)={H_{i,\alpha }^{-1}}(x)$. Let the pdf f of the absolutely continuous df F satisfy the representation $f(x)=w(x){e}^{g_{i}(x)}$, $T(x_{0})<x<r(H)$, for some i, with $f(x)=0$ if $x>r(H)$, where $1/2<x_{0}<1$, and let $g_{i}$ satisfy condition (5). Then
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{B}{\sup }\bigg|P\bigg\{\bigg(\bigg|\frac{X_{n-j+1:n}}{A_{n}}{\bigg|}^{1/B_{n}}\operatorname{sign}(X_{n-j+1:n}){\bigg)_{j=1}^{k}}\in B\bigg\}-H_{k}(B)\bigg|\\{} & \displaystyle \hspace{1em}\le D\big({(k/n)}^{\delta }{k}^{1/2}+k/n\big),\end{array}\]
where D is a constant depending on $x_{0}$, L, and δ, and $A_{n}$ and $B_{n}$ are defined in Theorem 2.
Proof.
We prove the result for the particular case $H=H_{2,1}$. Applying Theorem 3 with $x_{0,n}={x_{0}^{n}}$, $\frac{1}{2}<x_{0}<1$, we get
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{B}{\sup }\big|P\big(\big(|X_{n:n}{|}^{n}\operatorname{sign}(X_{n:n}),\dots ,|X_{n-k+1:n}{|}^{n}\operatorname{sign}(X_{n-k+1:n})\big)\in B\big)-H_{k}(B)\big|\\{} & \displaystyle \hspace{1em}\le \bigg(\sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{r(H)}}\left(\frac{nf_{n}(x)}{w_{2,1}(x)}-1-\log \frac{nf_{n}(x)}{w_{2,1}(x)}\right)dH_{j}(x)+H_{k}({x_{0}^{n}})+kH_{k-1}({x_{0}^{n}})\\{} & \displaystyle \hspace{2em}+\sum \limits_{j=1}^{k-1}\int _{x_{j}>{x_{0}^{n}},x_{k}<{x_{0}^{n}}}\log \frac{nf_{n}(x_{j})}{w_{2,1}(x_{j})}dH_{k}(x){\bigg)}^{1/2}+\frac{ck}{n}.\end{array}\]
Note that $H_{k}(x)=O({(k/x)}^{m})$ uniformly in k and $0<x<1$ for every positive integer m. Moreover, since $g_{2}$ is bounded on $(x_{0},1)$, we have
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{r(H)}}\left(\frac{nf_{n}(x)}{w_{2,1}(x)}-1-\log \left(\frac{nf_{n}(x)}{w_{2,1}(x)}\right)\right)dH_{j}(x)\\{} & \displaystyle \hspace{1em}=\sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{1}}\bigg(\frac{f({x}^{1/n})}{w_{2,1}(x)}-1-\log \bigg(\frac{f({x}^{1/n})}{w_{2,1}(x)}\bigg)\bigg)h_{j}(x)dx\\{} & \displaystyle \hspace{1em}=\sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{1}}\big({e}^{g_{2}({x}^{1/n})}-1-\log \big({e}^{g_{2}({x}^{1/n})}\big)\big)h_{j}(x)dx\\{} & \displaystyle \hspace{1em}=\sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{1}}\big(1+g_{2}\big({x}^{1/n}\big)+\cdots -1-g_{2}\big({x}^{1/n}\big)\big)h_{j}(x)dx\\{} & \displaystyle \hspace{1em}\le \bigg(\frac{1}{2}+\frac{L(-\log {x_{0}^{1/n}})}{3!}+\cdots \hspace{0.1667em}\bigg)\sum \limits_{j=1}^{k}{\int _{{x_{0}^{n}}}^{1}}{\big(g_{2}\big({x}^{1/n}\big)\big)}^{2}\frac{{(-\log x)}^{j-1}}{(j-1)!}dx\\{} & \displaystyle \hspace{1em}\le \bigg(\frac{1}{2}+\frac{L(-\log {x_{0}^{1/n}})}{3!}+\cdots \hspace{0.1667em}\bigg)\sum \limits_{j=1}^{k}{\int _{0}^{1}}{\big(g_{2}\big({x}^{1/n}\big)\big)}^{2}\frac{{(-\log x)}^{j-1}}{(j-1)!}dx\\{} & \displaystyle \hspace{1em}\le \bigg(\frac{1}{2}+\frac{L(-\log {x_{0}^{1/n}})}{3!}+\cdots \hspace{0.1667em}\bigg)\sum \limits_{j=1}^{k}{\int _{0}^{1}}\frac{{L}^{2}{(-\log {x}^{1/n})}^{2\delta }}{2}\frac{{(-\log x)}^{j-1}}{(j-1)!}dx\\{} & \displaystyle \hspace{1em}=\bigg(\frac{1}{2}+\frac{L(-\log {x_{0}^{1/n}})}{3!}+\cdots \hspace{0.1667em}\bigg)\frac{{L}^{2}}{2}\sum \limits_{j=1}^{k}\frac{{n}^{-2\delta }}{\varGamma (j)}{\int _{0}^{1}}{(-\log x)}^{2\delta +j-1}dx\\{} & \displaystyle \hspace{1em}={D}^{\ast }\sum \limits_{j=1}^{k}\frac{{n}^{-2\delta }}{\varGamma (j)}{\int _{0}^{\infty }}{e}^{-y}{y}^{2\delta +j-1}dy\\{} & \displaystyle \hspace{1em}={D}^{\ast }{n}^{-2\delta }\sum \limits_{j=1}^{k}\frac{\varGamma (2\delta +j)}{\varGamma (j)},\end{array}\]
where Γ is the gamma function. Now, note that (see, e.g., [2], p. 47)
\[\sum \limits_{j=1}^{k}\frac{\varGamma (2\delta +j)}{\varGamma (j)}\le {D^{\prime }}\sum \limits_{j=1}^{k}{j}^{2\delta }.\]
Therefore,
\[{D}^{\ast }{n}^{-2\delta }\sum \limits_{j=1}^{k}\frac{\varGamma (2\delta +j)}{\varGamma (j)}\le {D}^{\ast }{D^{\prime }}{n}^{-2\delta }\sum \limits_{j=1}^{k}{j}^{2\delta }\le {D}^{\ast \ast }{n}^{-2\delta }{k}^{2\delta +1},\]
where ${D}^{\ast \ast }={D}^{\ast }{D^{\prime }}$. Hence,
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \underset{B}{\sup }\big|P\big(\big({X_{n:n}^{n}},\dots ,{X_{n-k+1:n}^{n}}\big)\in B\big)-H_{k}(B)\big|\\{} & \displaystyle \hspace{1em}\le {\big({D}^{\ast \ast }{n}^{-2\delta }{k}^{2\delta +1}\big)}^{1/2}+ck/n\le D\big({(k/n)}^{\delta }{k}^{1/2}+k/n\big),\end{array}\]
proving the result. □
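The gamma-ratio bound $\varGamma (2\delta +j)/\varGamma (j)\le {D^{\prime }}{j}^{2\delta }$ used above is easy to check numerically (Python with SciPy; an illustrative sketch, not part of the paper; ${D^{\prime }}=2$ suffices for the values tried here, and Wendel's inequality gives the normalized ratio $\to 1$ as $j\to \infty $):

```python
import math
from scipy.special import gammaln

# check Gamma(2*delta + j) / Gamma(j) <= D' * j**(2*delta), with D' = 2,
# computing the ratio via log-gamma for numerical stability
ratio = lambda d, j: math.exp(gammaln(2 * d + j) - gammaln(j))

worst = {d: max(ratio(d, j) / j ** (2 * d) for j in range(1, 51))
         for d in (0.5, 0.75, 1.0)}
print(worst)  # e.g. delta = 0.5 gives ratio = j exactly, so the max is 1
```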