1 Introduction
1.1 Definition of the model
Let ${({p_{k}})_{k\in \mathbb{N}}}$ be a discrete probability distribution with ${p_{k}}\gt 0$ for infinitely many k. The infinite occupancy scheme is defined by the independent allocation of balls over an infinite array of boxes $1,2,\dots $ , with probability ${p_{k}}$ of hitting box k. The scheme is usually called the Karlin occupancy scheme because of Karlin’s remarkable work [12]. We are aware of two articles, [1] and [5], which preceded [12]. A survey of the literature on infinite occupancy up to 2007 is given in [9]. An incomplete list of very recent contributions includes [3, 4, 6, 7]. Among other things, the authors of [9] discuss applications of the scheme to ecology, database query optimization and literature. Further possible applications can be found in Section 1.1 of [10].
There are deterministic and Poissonized versions of Karlin’s occupancy scheme. In a deterministic version the nth ball is thrown at time $n\in \mathbb{N}$. For $j,n\in \mathbb{N}$, denote by ${\mathcal{K}_{j}}(n)$ and ${\mathcal{K}_{j}^{\ast }}(n)$ the number of boxes hit by at least j balls and exactly j balls, respectively, up to and including time n. Observe that ${\mathcal{K}_{1}}(n)$ is the number of occupied boxes at time n. Sometimes the variables ${\mathcal{K}_{j}^{\ast }}(n)$, with j fixed, are referred to as small counts.
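For concreteness, the deterministic scheme is straightforward to simulate; the sketch below is ours, with an illustrative truncated geometric weight sequence, and returns realizations of ${\mathcal{K}_{j}}(n)$ and ${\mathcal{K}_{j}^{\ast }}(n)$.

```python
import random
from collections import Counter

def occupancy_counts(p, n, j, rng):
    """Throw n balls independently, sending each ball to box k with
    probability p[k]; return realizations of (K_j(n), K_j^*(n)), the
    numbers of boxes hit at least j times and exactly j times."""
    hits = Counter(rng.choices(range(len(p)), weights=p, k=n))
    at_least = sum(1 for c in hits.values() if c >= j)
    exactly = sum(1 for c in hits.values() if c == j)
    return at_least, exactly

# illustrative truncated geometric weights p_k = 2^{-k}
p = [2.0 ** -k for k in range(1, 40)]
print(occupancy_counts(p, 1000, 1, random.Random(0)))
```

Truncating the weight sequence is harmless here because the omitted boxes are hit with negligible probability.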
To define the other version of the scheme we need additional notation. Let ${({S_{k}})_{k\in \mathbb{N}}}$ denote a random walk with independent jumps having an exponential distribution of unit mean. The counting process $\pi :={(\pi (t))_{t\ge 0}}$ given by $\pi (t):=\mathrm{\# }\{k\in \mathbb{N}:{S_{k}}\le t\}$ for $t\ge 0$ is a Poisson process on $[0,\infty )$ of unit intensity.
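The process π can be sampled directly from this definition; the following minimal sketch (names are ours) accumulates the exponential jumps ${S_{k}}$ and counts those falling in $[0,t]$.

```python
import random

def poisson_count(t, rng):
    """pi(t) = #{k : S_k <= t} for a random walk S_k with independent
    unit-mean exponential jumps."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(1.0)  # next exponential jump
        if s > t:
            return n
        n += 1

rng = random.Random(1)
samples = [poisson_count(3.0, rng) for _ in range(4000)]
print(sum(samples) / len(samples))  # empirical mean of pi(3), about 3
```

Since $\pi (t)$ has the Poisson distribution with mean t, the empirical mean above should be close to 3.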
In a Poissonized version of Karlin’s occupancy scheme the nth ball is thrown at time ${S_{n}}$, $n\in \mathbb{N}$, and it is assumed that the allocation process is independent of ${({S_{k}})_{k\in \mathbb{N}}}$, hence of π. Thus, in the time interval $[0,t]$ there are $\pi (t)$ balls thrown in the Poissonized version and $\lfloor t\rfloor $ balls thrown in the deterministic version. While the occupancy counts of distinct boxes are dependent in the deterministic version, they are independent in the Poissonized version. This independence, which is justified by the thinning property of Poisson processes, is the principal advantage of the Poissonized version. For $j\in \mathbb{N}$ and $t\ge 0$, denote by ${K_{j}}(t)$ and ${K_{j}^{\ast }}(t)$ the number of boxes containing at least j balls and exactly j balls, respectively, in the Poissonized scheme at time t. The random variables
\[ {K_{j}}(t)=\sum \limits_{k\ge 1}{1_{\{\text{the box}\hspace{2.5pt}k\hspace{2.5pt}\text{contains at least}\hspace{2.5pt}j\hspace{2.5pt}\text{balls}\hspace{2.5pt}\text{at time}\hspace{2.5pt}t\}}}\]
and
(1)
\[ {K_{j}^{\ast }}(t)=\sum \limits_{k\ge 1}{1_{\{\text{the box}\hspace{2.5pt}k\hspace{2.5pt}\text{contains exactly}\hspace{2.5pt}j\hspace{2.5pt}\text{balls}\hspace{2.5pt}\text{at time}\hspace{2.5pt}t\}}}\]
are infinite sums of independent indicators. As a consequence, their analysis is much simpler than that of ${\mathcal{K}_{j}}(n)$ and ${\mathcal{K}_{j}^{\ast }}(n)$, which are infinite sums of dependent indicators.
1.2 Main results
Put
\[ \rho (t):=\mathrm{\# }\{k\in \mathbb{N}:1/{p_{k}}\le t\},\hspace{1em}t\gt 0,\]
and note that $\rho (t)=0$ for $t\in (0,1]$. Following Karlin [12] we assume that ρ varies regularly at ∞ of index $\alpha \in [0,1]$, that is, $\rho (t)\sim {t^{\alpha }}L(t)$ as $t\to \infty $ for some L slowly varying at ∞. An encyclopaedic treatment of slowly and regularly varying functions can be found in Section 1 of [2].
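Two standard examples may help fix ideas (they are easily checked from the definition of ρ and are not part of the assumptions above). For geometric weights ${p_{k}}=(1-q){q^{k-1}}$ with $q\in (0,1)$,
\[ \rho (t)\sim \frac{\log t}{\log (1/q)},\hspace{1em}t\to \infty ,\]
so that ρ is slowly varying at ∞ ($\alpha =0$), whereas for ${p_{k}}\sim c{k^{-1/\alpha }}$ as $k\to \infty $ with $\alpha \in (0,1)$ and $c\gt 0$,
\[ \rho (t)\sim {c^{\alpha }}{t^{\alpha }},\hspace{1em}t\to \infty ,\]
that is, ρ varies regularly at ∞ of index α.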
The function ρ is said to belong to the de Haan class Π if, for all $\lambda \gt 0$,
(2)
\[ \underset{t\to \infty }{\lim }\frac{\rho (\lambda t)-\rho (t)}{\ell (t)}=\log \lambda \]
for some ℓ slowly varying at ∞. The function ℓ is called auxiliary. According to Theorem 3.7.4 in [2], the class Π is a subclass of the class of slowly varying functions. Further detailed information regarding the class Π is given in Section 3 of [2] and in [8]. Denote by ${\Pi _{\ell ,\hspace{0.1667em}\infty }}$ the subclass of the de Haan class Π with the auxiliary functions ℓ satisfying ${\lim \nolimits_{t\to \infty }}\ell (t)=\infty $.
In the case $\alpha \in (0,1]$, according to Theorems 3, 5 and 5’ in [12], both ${K_{j}^{\ast }}(t)$ and ${\mathcal{K}_{j}^{\ast }}(n)$, centered by their means and normalized by their standard deviations, converge in distribution to a random variable with the standard normal distribution. In the case $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$, Corollary 1.6 in [11] provides functional central limit theorems for ${K_{j}^{\ast }}(t)$ and ${\mathcal{K}_{j}^{\ast }}(n)$, properly scaled. Our purpose is to prove laws of the iterated logarithm (LILs) for ${K_{j}^{\ast }}(t)$ as $t\to \infty $ and ${\mathcal{K}_{j}^{\ast }}(n)$ as $n\to \infty $. While doing so, we treat the three cases $\alpha =0$, $\alpha \in (0,1)$ and $\alpha =1$ separately. The reason is that the forms of the LILs are slightly or essentially different in these cases. If ρ is slowly varying at ∞ and satisfies an additional assumption, then the actual limit relation is either a law of the single logarithm or a LIL. However, to keep the presentation simple, we refer to all the limit relations involving upper or lower limits that appear in the paper as LILs.
In Theorems 1, 2 and 3 we present LILs for the Poissonized variables ${K_{j}^{\ast }}(t)$ as $t\to \infty $. Theorem 1 covers a subcase of the case $\alpha =0$ in which $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ with particular ℓ.
Theorem 1.
Assume that (2) holds. If ℓ in (2) satisfies
for some $\beta \gt 0$ and l slowly varying at ∞, then, for each $j\in \mathbb{N}$,
and
If ℓ in (2) satisfies
for some $\sigma \gt 0$ and $\lambda \in (0,1)$, then, for each $j\in \mathbb{N}$,
and
In both cases
and
(4)
\[ \underset{t\to \infty }{\limsup }\frac{{K_{j}^{\ast }}(t)-\mathbb{E}{K_{j}^{\ast }}(t)}{{(\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)\log \mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t))^{1/2}}}={\bigg(\frac{2}{\beta }\bigg)^{1/2}}\hspace{1em}\textit{a.s.}\]
(5)
\[ \underset{t\to \infty }{\liminf }\frac{{K_{j}^{\ast }}(t)-\mathbb{E}{K_{j}^{\ast }}(t)}{{(\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)\log \mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t))^{1/2}}}=-{\bigg(\frac{2}{\beta }\bigg)^{1/2}}\hspace{1em}\textit{a.s.}\]
(7)
\[ \underset{t\to \infty }{\limsup }\frac{{K_{j}^{\ast }}(t)-\mathbb{E}{K_{j}^{\ast }}(t)}{{(\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)\log \log \mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t))^{1/2}}}={\bigg(\frac{2}{\lambda }\bigg)^{1/2}}\hspace{1em}\textit{a.s.}\]
Remark 1.
Treatment of the situations in which ρ is slowly varying at ∞, yet $\rho \notin \Pi $, is beyond our reach. To reveal complications arising in this case we only mention that even the large-time asymptotics of $t\mapsto \operatorname{Var}{K_{j}^{\ast }}(t)$ is not known. To find the asymptotics, a second-order relation for ρ like (2) seems to be indispensable. If $\alpha \in (0,1]$, then the regular variation of ρ alone ensures that, for all $\lambda \gt 0$,
Thus, no extra conditions are needed in this case.
Remark 2.
Our present proof only works provided that, for some $a\gt 0$, $\rho (t)=O({(\ell (t))^{a}})$ as $t\to \infty $. In view of this, Theorem 1 does not cover the diverging slowly varying functions ℓ which grow slower than any positive power of the logarithm, for instance, $\ell (t)\sim \log \log t$ as $t\to \infty $. Indeed, it can be checked that ${\lim \nolimits_{t\to \infty }}\ell (t)=\infty $ entails ${\lim \nolimits_{t\to \infty }}(\rho (t)/\log t)=\infty $, whence trivially, for all $a\gt 0$, ${\lim \nolimits_{t\to \infty }}(\rho (t)/{(\ell (t))^{a}})=\infty $.
The following results are concerned with the cases $\alpha \in (0,1)$ and $\alpha =1$, respectively.
Theorem 2.
Assume that, for some $\alpha \in (0,1)$ and some L slowly varying at $+\infty $,
Then, for each $j\in \mathbb{N}$,
and
and
where Γ is the Euler gamma function and
(11)
\[ \underset{t\to \infty }{\limsup }\frac{{K_{j}^{\ast }}(t)-\mathbb{E}{K_{j}^{\ast }}(t)}{{(\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)\log \log \mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t))^{1/2}}}={2^{1/2}}\hspace{1em}\textit{a.s.}\]
Theorem 3.
Assume that, for some L slowly varying at $+\infty $,
Then, for each $j\ge 2$, relation (11) holds,
and
Assume that, for each small enough $\gamma \gt 0$,
where $\hat{L}(t):={\textstyle\int _{t}^{\infty }}{y^{-1}}L(y)\mathrm{d}y$, being well-defined for large t, is a function slowly varying at ∞ and satisfying
Then relation (11) holds with $j=1$. If (18) does not hold, then
and
In any event
(17)
\[ \underset{t\to \infty }{\lim }\frac{\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)}{tL(t)}=\frac{1}{j(j-1)}-\frac{(2j-2)!}{{2^{2j-1}}{(j!)^{2}}}={c_{j,\hspace{0.1667em}1}}.\]
(18)
\[ \underset{n\to \infty }{\lim }\frac{\hat{L}(\exp ({(n+1)^{1+\gamma }}))}{\hat{L}(\exp ({n^{1+\gamma }}))}=0,\]
(20)
\[ \underset{t\to \infty }{\limsup }\frac{{K_{1}^{\ast }}(t)-\mathbb{E}{K_{1}^{\ast }}(t)}{{(\mathrm{Var}\hspace{0.1667em}{K_{1}^{\ast }}(t)\log \log \mathrm{Var}\hspace{0.1667em}{K_{1}^{\ast }}(t))^{1/2}}}\le {2^{1/2}}\hspace{1em}\textit{a.s.}\]
Theorems 1, 2 and 3 will be deduced in Section 4 from the LIL for infinite sums of independent indicators given in Theorem 5.
Finally, we present LILs for the variables ${\mathcal{K}_{j}^{\ast }}(n)$.
Theorem 4.
Under the assumptions of Theorems 1, 2 or 3, for $j\in \mathbb{N}$, all the LILs stated there hold true with ${\mathcal{K}_{j}^{\ast }}(n)$, $\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)$ and $\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)$ replacing ${K_{j}^{\ast }}(t)$, $\mathbb{E}{K_{j}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}^{\ast }}(t)$, and $n\to \infty $ replacing $t\to \infty $.
A transfer of results available for the Poissonized version to the deterministic version is called de-Poissonization. Theorem 4 will be deduced in Section 4 from Theorems 1, 2 and 3 with the help of a de-Poissonization technique.
Remark 3.
Following the referee’s suggestion, for the sake of comparison, we now provide a verbal description of the LILs obtained in [3]. Under the assumptions of Theorems 1, 2 and 3, the limit relations (4), (5), (7), (8), (11), (12), (20) and (21) hold true with ${K_{j}}(t)$, $\mathbb{E}{K_{j}}(t)$ and $\operatorname{Var}{K_{j}}(t)$ replacing ${K_{j}^{\ast }}(t)$, $\mathbb{E}{K_{j}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}^{\ast }}(t)$. Also, these limit relations hold true with ${\mathcal{K}_{j}}(n)$, $\mathbb{E}{\mathcal{K}_{j}}(n)$ and $\operatorname{Var}{\mathcal{K}_{j}}(n)$ replacing ${K_{j}^{\ast }}(t)$, $\mathbb{E}{K_{j}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}^{\ast }}(t)$, and $n\to \infty $ replacing $t\to \infty $. Typically, the means and the variances of ${K_{j}^{\ast }}(t)$ and ${K_{j}}(t)$, and ${\mathcal{K}_{j}^{\ast }}(n)$ and ${\mathcal{K}_{j}}(n)$ exhibit the same rate of growth, up to a multiplicative constant. The only exception is that, under the assumptions of Theorem 1, for $j\in \mathbb{N}$,
\[ \mathbb{E}{K_{j}^{\ast }}(t)\sim \frac{\ell (t)}{j},\hspace{1em}t\to \infty ,\hspace{1em}\text{and}\hspace{1em}\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)\hspace{2.5pt}\sim \hspace{2.5pt}\frac{\ell (n)}{j},\hspace{1em}n\to \infty ,\]
whereas
\[ \mathbb{E}{K_{j}}(t)\sim \rho (t),\hspace{1em}t\to \infty ,\hspace{1em}\text{and}\hspace{1em}\mathbb{E}{\mathcal{K}_{j}}(n)\sim \rho (n),\hspace{1em}n\to \infty .\]
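The contrast between the growth rates of the two means can be illustrated numerically via the exact Poissonized formulas $\mathbb{E}{K_{1}}(t)={\textstyle\sum _{k\ge 1}}(1-{\mathrm{e}^{-{p_{k}}t}})$ and $\mathbb{E}{K_{1}^{\ast }}(t)={\textstyle\sum _{k\ge 1}}{\mathrm{e}^{-{p_{k}}t}}{p_{k}}t$. The sketch below is ours; the weights ${p_{k}}\propto {\mathrm{e}^{-\sqrt{k}}}$ and the truncation level are illustrative choices, for which it is easily checked that $\rho (t)\sim {(\log t)^{2}}$ and $\ell (t)\sim 2\log t$.

```python
import math

def poisson_pmf(j, lam):
    return math.exp(-lam) * lam ** j / math.factorial(j)

# illustrative weights p_k proportional to exp(-sqrt(k)); for them
# rho(t) ~ (log t)^2 and the auxiliary function ell(t) ~ 2 log t
K = 5000
w = [math.exp(-math.sqrt(k)) for k in range(1, K + 1)]
c = 1.0 / sum(w)
p = [c * x for x in w]

t = 1e6
mean_occupied = sum(1.0 - poisson_pmf(0, pk * t) for pk in p)  # E K_1(t)
mean_small = sum(poisson_pmf(1, pk * t) for pk in p)           # E K_1^*(t)
# E K_1^*(t) is of order ell(t), while E K_1(t) is of order rho(t)
print(mean_small, mean_occupied)
```

At $t={10^{6}}$ one has $\ell (t)\approx 28$ while $\rho (t)\approx 177$, and the two computed means are of exactly these orders.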
The latter relations are secured by Lemma 10 and the formula
which can be found in Lemma 1 of [9].
2 LIL for infinite sums of independent indicators
Let ${({A_{1}}(t))_{t\ge 0}}$, ${({A_{2}}(t))_{t\ge 0}},\dots $ be independent families of events defined on a common probability space $(\Omega ,\mathcal{F},\mathbb{P})$. Assume that ${\textstyle\sum _{k\ge 1}}\mathbb{P}({A_{k}}(t))\lt \infty $, for each $t\ge 0$, and then put
Since, for $t\ge 0$, $b(t):=\mathbb{E}X(t)={\textstyle\sum _{k\ge 1}}\mathbb{P}({A_{k}}(t))\lt \infty $, we infer $X(t)\lt \infty $ almost surely (a.s.) and further
Under the assumption that, for each $k\in \mathbb{N}$ and $0\le s\lt t$, ${A_{k}}(s)\subseteq {A_{k}}(t)$, a LIL for $X(t)$ can be found in Theorem 1.6 of [3]. As an application, LILs for ${K_{j}}(t)$ were proved in that paper; see Theorems 3.1, 3.3 and 3.4 therein. According to (1), the variable ${K_{j}^{\ast }}(t)$ is a particular instance of $X(t)$. However, for each $k\in \mathbb{N}$, the corresponding events ${({A_{k}}(t))_{t\ge 0}}$ are not monotone in t, and therefore a LIL for ${K_{j}^{\ast }}(t)$ cannot be deduced from Theorem 1.6 of [3]. This serves as the motivation for the present section. Here, dropping the monotonicity assumption, we provide sufficient conditions under which a LIL for $X(t)$ holds.
We shall prove a LIL for $X(t)$ under the assumptions (A1)–(A5), (B1) and either (B21) or (B22) given below. The lack of monotonicity only affects our proof of the upper bound for ${\limsup _{t\to \infty }}$, to be carried out under (A1)–(A5). In view of this, (A2)–(A5) are modified versions of the corresponding assumptions in [3]. (B1), (B21) and (B22) coincide with the corresponding assumptions in [3], under which the lower bound for ${\limsup _{t\to \infty }}$ was found in the cited article.
-
(A1) ${\lim \nolimits_{t\to \infty }}a(t)=\infty $.
-
(A2) There exist independent a.s. nondecreasing stochastic processes ${({\Phi _{1}}(t))_{t\ge 0}}$, ${({\Phi _{2}}(t))_{t\ge 0}},\dots $ taking values in $\{0,1,2,\dots ,M\}$ for some $M\in \mathbb{N}$ and satisfying
-
(A3) Under (A2), there exists ${\mu ^{\ast }}\ge 1$ such that $f(t)=O({(a(t))^{{\mu ^{\ast }}}})$ as $t\to \infty $. In view of (28) and $a(t)\le b(t)$ for $t\ge 0$, necessarily ${\mu ^{\ast }}\ge 1$. Put If $\mu =1$, we assume additionally that either f is eventually continuous or and that where ${z_{q}}(t):={(\log t)^{q}}\mathcal{L}(\log t)$ for some $q\ge 0$ and $\mathcal{L}$ is slowly varying at ∞ and, if $q\gt 0$, $f(t)/a(t)\ne O({z_{s}}(a(t)))$ for $s\in (0,q)$.
Before introducing our next assumption we need some preparation. In view of (A1) and $a(t)\le f(t)$ for $t\ge 0$, we infer ${\lim \nolimits_{t\to \infty }}f(t)=\infty $. For each $\varrho \in (0,1)$, put
(25)
\[ {\mu _{\varrho }}:=\mu +\varrho \hspace{1em}\text{if}\hspace{2.5pt}\mu \gt 1\hspace{1em}\text{and}\hspace{1em}{q_{\varrho }}:=q+\varrho \hspace{1em}\text{if}\hspace{2.5pt}\mu =1.\]
In other words, if $\mu \gt 1$, we work with ${\mu _{\varrho }}$, and if $\mu =1$, we work with ${q_{\varrho }}$. Assuming (A3), fix any $\kappa \in (0,1)$ and $\varrho \in (0,1)$ and put
(26)
\[ {t_{n}}={t_{n}}(\kappa ,\mu ):=\inf \big\{t\gt 0:f(t)\gt {v_{n}}(\kappa ,\mu )\big\}\]
for $n\in \mathbb{N}$, where ${v_{n}}(\kappa ,1)={v_{n}}(\kappa ,1,q,\varrho )=\exp ({n^{(1-\kappa )/({q_{\varrho }}+1)}})$ and ${v_{n}}(\kappa ,\mu )={v_{n}}(\kappa ,\mu ,\varrho )={n^{{\mu _{\varrho }}(1-\kappa )/({\mu _{\varrho }}-1)}}$ for $\mu \gt 1$. Plainly, the sequence ${({t_{n}})_{n\in \mathbb{N}}}$ is nondecreasing with ${\lim \nolimits_{n\to \infty }}{t_{n}}=+\infty $.
-
(A4) Fix any $\kappa \in (0,1)$ and $\varrho \in (0,1)$. There exists a function ${a_{0}}$ satisfying $a(t)\sim {a_{0}}(t)$ as $t\to \infty $, and, for each n large enough, there exists ${s_{n}}={s_{n}}(\kappa ,\mu )\in [{t_{n}}(\kappa ,\mu ),\hspace{0.1667em}{t_{n+1}}(\kappa ,\mu )]$ such that ${a_{0}}(t)\ge {a_{0}}({s_{n}})$ for all $t\in [{t_{n}},{t_{n+1}}]$.
-
(A5) For each n large enough, there exists $A\gt 1$ and a partition ${t_{n}}={t_{0,\hspace{0.1667em}n}}\lt {t_{1,\hspace{0.1667em}n}}\lt \cdots \lt {t_{j,\hspace{0.1667em}n}}={t_{n+1}}$ with $j={j_{n}}$ satisfying\[ 1\le f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}})\le A,\hspace{1em}1\le k\le j,\]and, for all $\varepsilon \gt 0$, $({j_{n}}\exp (-\varepsilon {(a({s_{n}}))^{1/2}}))$ is a summable sequence.
Remark 4.
(A2a) and (A2b) entail, for $0\le s\lt t$,
(27)
\[ |b(t)-b(s)|\le \mathbb{E}\sum \limits_{k\ge 1}|{1_{{A_{k}}(t)}}-{1_{{A_{k}}(s)}}|\le f(t)-f(s).\]
Inequality (27) with $s=0$ and (A2c) together imply that $b(t)\le f(t)$ for all $t\ge 0$.
Remark 5.
Here is an example of X satisfying (A2) which is motivated by a prospective application of the LIL for $X(t)$ to the variables ${K_{j}^{\ast }}(t)$. Let ${({B_{1}}(t))_{t\ge 0}}$, ${({B_{2}}(t))_{t\ge 0}},\dots $ and ${({C_{1}}(t))_{t\ge 0}}$, ${({C_{2}}(t))_{t\ge 0}},\dots $ be two families of independent events satisfying
For each $k\in \mathbb{N}$ and $t\ge 0$, put ${A_{k}}(t):={B_{k}}(t)\setminus {C_{k}}(t)$ and ${\Phi _{k}}(t):={1_{{B_{k}}(t)}}+{1_{{C_{k}}(t)}}$. The so defined ${\Phi _{k}}$ is a.s. nondecreasing. Since, for $0\le s\lt t$, ${1_{{C_{k}}(s)}}\le {1_{{C_{k}}(t)}}$ and ${1_{{B_{k}}(s)}}\le {1_{{B_{k}}(t)}}$ a.s. we conclude that
\[ |{1_{{A_{k}}(t)}}-{1_{{A_{k}}(s)}}|=|{1_{{B_{k}}(t)}}-{1_{{C_{k}}(t)}}-{1_{{B_{k}}(s)}}+{1_{{C_{k}}(s)}}|\le {\Phi _{k}}(t)-{\Phi _{k}}(s)\hspace{1em}\text{a.s.}\]
While (A2b) is a consequence of (iii), (A2c) is justified by $\mathbb{P}({A_{k}}(0))=\mathbb{P}({B_{k}}(0))-\mathbb{P}({C_{k}}(0))\le \mathbb{E}{\Phi _{k}}(0)$. Putting ${C_{k}}(t):=\varnothing $ for all $k\in \mathbb{N}$ and $t\ge 0$ we recover the case of families ${({A_{k}}(t))_{t\ge 0}}$ monotone in t treated in [3].
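Since the display above is a pointwise bound on four indicator values, it can be confirmed exhaustively; the following check (ours) enumerates all nondecreasing indicator configurations at two times $s\lt t$.

```python
from itertools import product

def majorant_holds():
    # 1_{B}(s) <= 1_{B}(t) and 1_{C}(s) <= 1_{C}(t): enumerate all
    # nondecreasing indicator configurations at two times s < t
    for bs, bt, cs, ct in product((0, 1), repeat=4):
        if bs > bt or cs > ct:
            continue  # keep only the monotone configurations
        lhs = abs((bt - ct) - (bs - cs))  # |1_{A}(t) - 1_{A}(s)| when C_k(t) lies in B_k(t)
        rhs = (bt + ct) - (bs + cs)       # Phi_k(t) - Phi_k(s)
        if lhs > rhs:
            return False
    return True

print(majorant_holds())  # True
```

The inequality is simply $|x-y|\le x+y$ for $x={1_{{B_{k}}(t)}}-{1_{{B_{k}}(s)}}\ge 0$ and $y={1_{{C_{k}}(t)}}-{1_{{C_{k}}(s)}}\ge 0$.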
Remark 6.
The assumption imposed in [3] meaning that, for each $k\in \mathbb{N}$, the family ${({A_{k}}(t))_{t\ge 0}}$ is nondecreasing in t simplifies significantly the analysis of (43). Indeed, for any $\theta \gt 0$, we then infer ${\sup _{v\in [0,\theta ]}}\hspace{0.1667em}|X(t+v)-X(t)|=X(t+\theta )-X(t)$ and ${\sup _{v\in [0,\theta ]}}\hspace{0.1667em}|b(t+v)-b(t)|=b(t+\theta )-b(t)$. In the absence of the monotonicity assumption, it is necessary to find some monotone majorant for $|X(t+v)-X(t)|$ which is sufficiently close to the true supremum.
One may expect that f behaving like $f(t)=O(b(t))$ as $t\to \infty $ should do the job. What is not trivial is that f satisfying ${\lim \nolimits_{t\to \infty }}(f(t)/b(t))=\infty $ may also be suitable. For instance, consider the setting of Theorem 1 and $X(t):={K_{j}^{\ast }}(t)$ for $j\in \mathbb{N}$. By (9), $b(t)\sim \mathrm{const}\hspace{0.1667em}\ell (t)$ as $t\to \infty $, and by (63) and (48), $f(t)\sim \mathrm{const}\hspace{0.1667em}\rho (t)$ as $t\to \infty $. Applying Lemma 9 we conclude that indeed ${\lim \nolimits_{t\to \infty }}(f(t)/b(t))=\infty $.
Remark 7.
A sufficient condition for (A4) is either eventual lower semi-continuity or eventual monotonicity of ${a_{0}}$. The former means that ${\liminf _{y\to x}}{a_{0}}(y)\ge {a_{0}}(x)$, for all large enough x.
Remark 8.
A sufficient condition for (A5) is that f is eventually strictly increasing and eventually continuous. Indeed, one can then choose a partition that satisfies, for large n, $f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}})=1$ for $k\in \mathbb{N}$, $k\le j-1$ and $f({t_{j,\hspace{0.1667em}n}})-f({t_{j-1,\hspace{0.1667em}n}})\in [1,2)$. As a consequence,
\[ {j_{n}}=\big\lfloor {v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu )\big\rfloor =o\big(a({s_{n}})\big),\hspace{1em}n\to \infty ,\]
by Lemma 2(b) below, so that the sequence $({j_{n}}\exp (-\varepsilon {(a({s_{n}}))^{1/2}}))$ is indeed summable.
Assuming (A1) and (A3), fix any $\gamma \gt 0$ and put
(29)
\[ {\tau _{n}}={\tau _{n}}(\gamma ,\mu ):=\inf \big\{t\gt 0:a(t)\gt {w_{n}}(\gamma ,\mu )\big\}\]
for large $n\in \mathbb{N}$ with μ as given in (23). Here, with q as given in (24), ${w_{n}}(\gamma ,1)={w_{n}}(\gamma ,1,q)=\exp ({n^{(1+\gamma )/(q+1)}})$ if $\mu =1$ and ${w_{n}}(\gamma ,\mu )={n^{(1+\gamma )/(\mu -1)}}$ if $\mu \gt 1$.
-
(B1) The function a is eventually continuous or ${\lim \nolimits_{t\to \infty }}(\log a(t-1)/\log a(t))=1$ if $\mu =1$ and ${\lim \nolimits_{t\to \infty }}(a(t-1)/a(t))=1$ if $\mu \gt 1$.
-
(B21) For sufficiently large $t\gt 0$ and each $\varsigma \gt 0$, let ${R_{\varsigma }}(t)$ denote a set of positive integers satisfying the following two conditions: for each $\varsigma \gt 0$ and each $\gamma \gt 0$, both close to 0, there exists ${n_{0}}={n_{0}}(\varsigma ,\gamma )\in \mathbb{N}$ such that the sets ${R_{\varsigma }}({\tau _{{n_{0}}}}(\gamma ,\mu ))$, ${R_{\varsigma }}({\tau _{{n_{0}}+1}}(\gamma ,\mu )),\dots $ are disjoint; and
-
(B22) For sufficiently large $t\gt 0$, let ${R_{0}}(t)$ denote a set of positive integers satisfying the following two conditions: for each $\gamma \gt 0$ close to 0 there exists ${n_{0}}={n_{0}}(\gamma )\in \mathbb{N}$ such that the sets ${R_{0}}({\tau _{{n_{0}}}}(\gamma ,\mu ))$, ${R_{0}}({\tau _{{n_{0}}+1}}(\gamma ,\mu )),\dots $ are disjoint; and
Now we are ready to present a LIL for infinite sums of independent indicators.
Theorem 5.
Suppose (A1)–(A5), (B1) and either (B21) or (B22). Then, with $\mu \ge 1$ and $q\ge 0$ as defined in (23) and (24), respectively,
\[ \underset{t\to \infty }{\limsup }\frac{X(t)-\mathbb{E}X(t)}{{(2(q+1)\mathrm{Var}\hspace{0.1667em}X(t)\log \log \mathrm{Var}\hspace{0.1667em}X(t))^{1/2}}}=1\hspace{1em}\textit{a.s.}\]
and
\[ \underset{t\to \infty }{\liminf }\frac{X(t)-\mathbb{E}X(t)}{{(2(q+1)\mathrm{Var}\hspace{0.1667em}X(t)\log \log \mathrm{Var}\hspace{0.1667em}X(t))^{1/2}}}=-1\hspace{1em}\textit{a.s.}\]
if $\mu =1$ and
\[ \underset{t\to \infty }{\limsup }\frac{X(t)-\mathbb{E}X(t)}{{(2(\mu -1)\mathrm{Var}\hspace{0.1667em}X(t)\log \mathrm{Var}\hspace{0.1667em}X(t))^{1/2}}}=1\hspace{1em}\textit{a.s.}\]
and
\[ \underset{t\to \infty }{\liminf }\frac{X(t)-\mathbb{E}X(t)}{{(2(\mu -1)\mathrm{Var}\hspace{0.1667em}X(t)\log \mathrm{Var}\hspace{0.1667em}X(t))^{1/2}}}=-1\hspace{1em}\textit{a.s.}\]
if $\mu \gt 1$.
3 Proof of Theorem 5
3.1 Auxiliary results
We start with a simple inequality which will be used in the last part of the proof of Proposition 2.
Lemma 1.
Suppose (A2) and put $Y(t):={\textstyle\sum _{k\ge 1}}{\Phi _{k}}(t)$ for $t\ge 0$. Then, for $\vartheta \in \mathbb{R}$ and $0\le s\lt t$,
\[ \mathbb{E}\exp \big(\vartheta \big(Y(t)-Y(s)\big)\big)\le \exp \big(\big({\mathrm{e}^{\vartheta M}}-1\big)\big(f(t)-f(s)\big)\big).\]
Proof.
For $k\in \mathbb{N}$ and $t\gt s\ge 0$, ${1_{\{{\Phi _{k}}(t)-{\Phi _{k}}(s)\gt 0\}}}={1_{\{{\Phi _{k}}(t)-{\Phi _{k}}(s)\ge 1\}}}\le {\Phi _{k}}(t)-{\Phi _{k}}(s)$ a.s. The equality stems from the fact that ${\Phi _{k}}$ only takes nonnegative integer values. Hence, for $\vartheta \in \mathbb{R}$ and $0\le s\lt t$,
\[\begin{aligned}{}\mathbb{E}\exp \big(\vartheta \big(Y(t)-Y(s)\big)\big)& =\prod \limits_{k\ge 1}\mathbb{E}\exp \big(\vartheta \big({\Phi _{k}}(t)-{\Phi _{k}}(s)\big)\big)\\ {} & =\prod \limits_{k\ge 1}\big(1+\mathbb{E}\big({\mathrm{e}^{\vartheta ({\Phi _{k}}(t)-{\Phi _{k}}(s))}}-1\big){1_{\{{\Phi _{k}}(t)-{\Phi _{k}}(s)\gt 0\}}}\big)\\ {} & \le \prod \limits_{k\ge 1}\big(1+\big({\mathrm{e}^{\vartheta M}}-1\big)\mathbb{E}{1_{\{{\Phi _{k}}(t)-{\Phi _{k}}(s)\gt 0\}}}\big)\\ {} & \le \exp \bigg(\big({\mathrm{e}^{\vartheta M}}-1\big)\sum \limits_{k\ge 1}\mathbb{E}\big({\Phi _{k}}(t)-{\Phi _{k}}(s)\big)\bigg)\\ {} & =\exp \big(\big({\mathrm{e}^{\vartheta M}}-1\big)\big(f(t)-f(s)\big)\big).\end{aligned}\]
□
For each $B\ge 0$ and each $D\gt 1$, put
(31)
\[ {g_{1,\hspace{0.1667em}B}}(t):=(B+1)\log \log t,\hspace{1em}t\gt \mathrm{e},\hspace{1em}\text{and}\hspace{1em}{g_{D}}(t):=(D-1)\log t,\hspace{1em}t\gt 1.\]
Lemma 2 does two things. First, it explains the choice of the sequences $({t_{n}})$ and $({v_{n}})$ and the functions ${g_{1,\hspace{0.1667em}{q_{\varrho }}}}$ and ${g_{{\mu _{\varrho }}}}$ (even though $({t_{n}})$ is not present in Lemma 2 explicitly, it is of crucial importance for defining the sequence $({s_{n}})$). Second, it secures a successful application of the Borel–Cantelli lemma in the proof of Proposition 2.
Lemma 2.
Suppose (A1), (A3) and (A4). Fix any $\varrho \in (0,1)$, any $\kappa \in (0,1)$ and let ${q_{\varrho }}$ and ${\mu _{\varrho }}$ be as defined in (25).
(a) If μ in (23) is equal to 1, then $\exp (-{g_{1,\hspace{0.1667em}{q_{\varrho }}}}(a({s_{n}}(\kappa ,1))))=O({n^{-(1-\kappa )}})$ as $n\to \infty $, and if $\mu \gt 1$, then $\exp (-{g_{{\mu _{\varrho }}}}(a({s_{n}}(\kappa ,\mu ))))=O({n^{-(1-\kappa )}})$.
(b) There exists an integer $r\ge 2$ such that $({(({v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu ))/a({s_{n}}))^{r}})$ is a summable sequence.
Proof.
(a) Using the definition of ${t_{n}}$, the fact that f is nondecreasing and (A3), we conclude that, as $n\to \infty $,
(32)
\[ \exp \big({n^{(1-\kappa )/({q_{\varrho }}+1)}}\big)\le f\big({t_{n}}(\kappa ,1)\big)\le f\big({s_{n}}(\kappa ,1)\big)=O\big(a\big({s_{n}}(\kappa ,1)\big){z_{q}}\big(a\big({s_{n}}(\kappa ,1)\big)\big)\big)\]
and, for $\mu \gt 1$,
(33)
\[ {n^{{\mu _{\varrho }}(1-\kappa )/({\mu _{\varrho }}-1)}}\le f\big({t_{n}}(\kappa ,\mu )\big)\le f\big({s_{n}}(\kappa ,\mu )\big)=O\big({\big(a\big({s_{n}}(\kappa ,\mu )\big)\big)^{{\mu _{\varrho }}}}\big),\hspace{1em}n\to \infty .\]
Since ${\lim \nolimits_{t\to \infty }}(\log {z_{q}}(t)/\log t)=0$, we infer
\[ \exp \big(-{g_{1,\hspace{0.1667em}{q_{\varrho }}}}\big(a\big({s_{n}}(\kappa ,1)\big)\big)\big)={\big(\log a\big({s_{n}}(\kappa ,1)\big)\big)^{-({q_{\varrho }}+1)}}=O\big({n^{-(1-\kappa )}}\big),\hspace{1em}n\to \infty .\]
Also, for $\mu \gt 1$,
\[ \exp \big(-{g_{{\mu _{\varrho }}}}\big(a\big({s_{n}}(\kappa ,\mu )\big)\big)\big)={\big(a\big({s_{n}}(\kappa ,\mu )\big)\big)^{-({\mu _{\varrho }}-1)}}=O\big({n^{-(1-\kappa )}}\big),\hspace{1em}n\to \infty .\]
(b) We start by proving that (A3) with $\mu =1$ entails
(34)
\[ \log a\big({s_{n}}(\kappa ,1)\big)=O\big({n^{(1-\kappa )/({q_{\varrho }}+1)}}\big),\hspace{1em}n\to \infty .\]
Assume that f is eventually continuous. Then $f({t_{n}}(\kappa ,1))={v_{n}}(\kappa ,1)$ for large enough n and thereupon $\log a({s_{n}}(\kappa ,1))\le \log f({s_{n}}(\kappa ,1))\le \log f({t_{n+1}}(\kappa ,1))={(n+1)^{(1-\kappa )/({q_{\varrho }}+1)}}$ for large n. Assuming that ${\liminf \nolimits_{t\to \infty }}(\log f(t-1)/\log f(t))\gt 0$, we obtain (34) as a consequence of $\log f({t_{n+1}}(\kappa ,1)-1)\le {(n+1)^{(1-\kappa )/({q_{\varrho }}+1)}}$ and $\log a({s_{n}}(\kappa ,1))\le \log f({s_{n}}(\kappa ,1))\le \log f({t_{n+1}}(\kappa ,1))$.
We proceed by noting that, as $n\to \infty $,
\[\begin{aligned}{}{v_{n+1}}(\kappa ,1)-{v_{n}}(\kappa ,1)& =\exp \big({(n+1)^{(1-\kappa )/({q_{\varrho }}+1)}}\big)-\exp \big({n^{(1-\kappa )/({q_{\varrho }}+1)}}\big)\\ {} & \sim \big((1-\kappa )/({q_{\varrho }}+1)\big){n^{((1-\kappa )/({q_{\varrho }}+1))-1}}\exp \big({n^{(1-\kappa )/({q_{\varrho }}+1)}}\big)\end{aligned}\]
and, for $\mu \gt 1$,
\[\begin{aligned}{}{v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu )& ={(n+1)^{{\mu _{\varrho }}(1-\kappa )/({\mu _{\varrho }}-1)}}-{n^{{\mu _{\varrho }}(1-\kappa )/({\mu _{\varrho }}-1)}}\\ {} & \sim \big({\mu _{\varrho }}(1-\kappa )/({\mu _{\varrho }}-1)\big){n^{(1-{\mu _{\varrho }}\kappa )/({\mu _{\varrho }}-1)}}.\end{aligned}\]
Write
\[\begin{aligned}{}\frac{1}{a({s_{n}}(\kappa ,1))}& =O\big({\big(\log a\big({s_{n}}(\kappa ,1)\big)\big)^{{q_{\varrho }}}}\exp \big(-{n^{(1-\kappa )/({q_{\varrho }}+1)}}\big)\big)\\ {} & =O\big({n^{{q_{\varrho }}(1-\kappa )/({q_{\varrho }}+1)}}\exp \big(-{n^{(1-\kappa )/({q_{\varrho }}+1)}}\big)\big),\hspace{1em}n\to \infty .\end{aligned}\]
Here, the first equality is implied by ${z_{q}}(t)=O({(\log t)^{{q_{\varrho }}}})$ as $t\to \infty $ and (32), and the second equality is a consequence of (34). In the case $\mu \gt 1$, invoking (33) we infer
\[ \frac{1}{a({s_{n}}(\kappa ,\mu ))}=O\big({n^{-(1-\kappa )/({\mu _{\varrho }}-1)}}\big),\hspace{1em}n\to \infty .\]
Thus, we have proved that, for $\mu \ge 1$,
\[ \frac{{v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu )}{a({s_{n}})}=O\big({n^{-\kappa }}\big),\hspace{1em}n\to \infty .\]
Choosing any integer $r\ge 2$ satisfying $r\kappa \gt 1$ completes the proof of part (b). □
For $k\in \mathbb{N}$ and $t\ge 0$, put ${X^{\ast }}(t):=X(t)-\mathbb{E}X(t)$ and ${\eta _{k}}(t):={1_{{A_{k}}(t)}}-\mathbb{P}({A_{k}}(t))$. Note that
and that ${\eta _{1}}(t)$, ${\eta _{2}}(t),\dots $ are independent centered random variables.
Lemma 3 provides a uniform bound for higher moments of the increments of ${X^{\ast }}$. The bound serves as the starting point of a chaining argument in the spirit of Lemma 4. The result of applying Lemma 4 in the present setting is given in Lemma 5.
Lemma 3.
Suppose (A2). Then, for each $r\in \mathbb{N}$, there exists a positive constant ${D_{r}}$ such that, for all $s,t\ge 0$,
(35)
\[ \mathbb{E}{\big({X^{\ast }}(t)-{X^{\ast }}(s)\big)^{2r}}\le {D_{r}}\max \big({\big|f(t)-f(s)\big|^{r}},\big|f(t)-f(s)\big|\big).\]
Proof.
In view of the representation
\[ {X^{\ast }}(t)-{X^{\ast }}(s)=\sum \limits_{k\ge 1}\big({1_{{A_{k}}(t)}}-\mathbb{P}\big({A_{k}}(t)\big)-{1_{{A_{k}}(s)}}+\mathbb{P}\big({A_{k}}(s)\big)\big)=:\sum \limits_{k\ge 1}{\eta _{k}}(s,t),\]
the variable ${X^{\ast }}(t)-{X^{\ast }}(s)$ is an infinite sum of independent centered random variables with finite moment of order $2r$.
Invoking Rosenthal’s inequality (Theorem 3 in [14]) in the case $r\ge 2$ we infer
\[ \mathbb{E}{\big({X^{\ast }}(t)-{X^{\ast }}(s)\big)^{2r}}\le {C_{r}}\max \bigg({\bigg(\sum \limits_{k\ge 1}\mathbb{E}{\big({\eta _{k}}(s,t)\big)^{2}}\bigg)^{r}},\sum \limits_{k\ge 1}\mathbb{E}{\big({\eta _{k}}(s,t)\big)^{2r}}\bigg).\]
In the case $r=1$, the inequality trivially holds with ${C_{1}}=1$ as is seen from
\[ \mathbb{E}{\big({X^{\ast }}(t)-{X^{\ast }}(s)\big)^{2}}=\sum \limits_{k\ge 1}\mathbb{E}{\big({\eta _{k}}(s,t)\big)^{2}}.\]
In view of (A2), for $r\in \mathbb{N}$ and $0\le s\lt t$,
\[\begin{array}{c}\displaystyle \sum \limits_{k\ge 1}\mathbb{E}{\big({\eta _{k}}(s,t)\big)^{2r}}\\ {} \displaystyle \le {2^{2r-1}}\sum \limits_{k\ge 1}\big(\mathbb{E}{({1_{{A_{k}}(t)}}-{1_{{A_{k}}(s)}})^{2r}}+{\big(\mathbb{P}\big({A_{k}}(t)\big)-\mathbb{P}\big({A_{k}}(s)\big)\big)^{2r}}\big)\\ {} \displaystyle \le {2^{2r-1}}\sum \limits_{k\in \mathbb{N}}\big(\mathbb{E}|{1_{{A_{k}}(t)}}-{1_{{A_{k}}(s)}}|+\big|\mathbb{P}\big({A_{k}}(t)\big)-\mathbb{P}\big({A_{k}}(s)\big)\big|\big)\\ {} \displaystyle \le {2^{2r}}\sum \limits_{k\ge 1}\mathbb{E}\big({\Phi _{k}}(t)-{\Phi _{k}}(s)\big)={2^{2r}}\big(f(t)-f(s)\big).\end{array}\]
Here, we have used ${(a+b)^{2r}}\le {2^{2r-1}}({a^{2r}}+{b^{2r}})$, $a,b\in \mathbb{R}$, for the first inequality, the fact that $|{1_{{A_{k}}(t)}}-{1_{{A_{k}}(s)}}|\in \{0,1\}$ a.s. and $|\mathbb{P}({A_{k}}(t))-\mathbb{P}({A_{k}}(s))|\in [0,1]$ for the second and (A2a) for the third. The argument for the case $0\le t\lt s$ is analogous.
Combining the fragments together we conclude that (35) holds with ${D_{r}}:={12^{2r}}{C_{r}}$. □
The next result is borrowed from Lemma 2 in [13].
Lemma 4.
Let ${\xi _{1}}$, ${\xi _{2}},\dots $ be random variables. Fix any $m\in \mathbb{N}$ and assume that
\[ \mathbb{E}|{\xi _{i+1}}+\cdots +{\xi _{k}}{|^{{\lambda _{1}}}}\le {({u_{i+1}}+\cdots +{u_{k}})^{{\lambda _{2}}}},\hspace{1em}0\le i\lt k\le m,\]
for some ${\lambda _{1}}\gt 0$, some ${\lambda _{2}}\gt 1$ and some nonnegative numbers ${u_{1}},\dots ,{u_{m}}$. Then
\[ \mathbb{E}{\Big(\underset{1\le k\le m}{\max }|{\xi _{1}}+\cdots +{\xi _{k}}|\Big)^{{\lambda _{1}}}}\le {A_{{\lambda _{1}},{\lambda _{2}}}}{({u_{1}}+\cdots +{u_{m}})^{{\lambda _{2}}}}\]
for a positive constant ${A_{{\lambda _{1}},{\lambda _{2}}}}$.
Lemma 5.
Suppose (A2) and (A5). Then, for any integer $r\ge 2$, there exists a positive constant ${A_{r}}$ such that
Here, j and ${({t_{k,\hspace{0.1667em}n}})_{0\le k\le j}}$ are as defined in (A5), and ${v_{n}}(\kappa ,\mu )$ is as defined in (26).
(36)
\[ \mathbb{E}{\Big(\underset{1\le k\le j}{\max }\hspace{0.1667em}\big|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})\big|\Big)^{2r}}\le {A_{r}}{\big({v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu )\big)^{r}}.\]Proof.
We first show that the assumption of Lemma 4 holds with ${\lambda _{1}}=2r$, ${\lambda _{2}}=r$, $m=j-1$, ${\xi _{k}}:={X^{\ast }}({t_{k,\hspace{0.1667em}n}})-{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})$ and ${u_{k}}:={D_{r}^{1/r}}(f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}}))$ for $k\in \mathbb{N}$, where ${D_{r}}$ is the constant defined in Lemma 3. Let $0\le i\lt k\le j-1$. By (A5), $f({t_{k,\hspace{0.1667em}n}})-f({t_{i,\hspace{0.1667em}n}})={\textstyle\sum _{l=i+1}^{k}}(f({t_{l,\hspace{0.1667em}n}})-f({t_{l-1,\hspace{0.1667em}n}}))\ge 1$. This in combination with Lemma 3 yields
\[\begin{array}{c}\displaystyle \mathbb{E}|{\xi _{i+1}}+\cdots +{\xi _{k}}{|^{2r}}=\mathbb{E}{\big({X^{\ast }}({t_{k,\hspace{0.1667em}n}})-{X^{\ast }}({t_{i,\hspace{0.1667em}n}})\big)^{2r}}\\ {} \displaystyle \le {D_{r}}\max \big({\big(f({t_{k,\hspace{0.1667em}n}})-f({t_{i,\hspace{0.1667em}n}})\big)^{r}},f({t_{k,\hspace{0.1667em}n}})-f({t_{i,\hspace{0.1667em}n}})\big)\\ {} \displaystyle ={D_{r}}{\big(f({t_{k,\hspace{0.1667em}n}})-f({t_{i,\hspace{0.1667em}n}})\big)^{r}}={\Bigg({\sum \limits_{l=i+1}^{k}}{u_{l}}\Bigg)^{r}},\end{array}\]
thereby proving that the assumption of Lemma 4 does indeed hold. Hence, inequality (36) follows from Lemma 4 and the definition of ${t_{n}}$:
\[\begin{aligned}{}& \mathbb{E}{\Big(\underset{1\le k\le j}{\max }\hspace{0.1667em}|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})|\Big)^{2r}}\\ {} & \hspace{1em}=\mathbb{E}{\Big(\underset{1\le k\le j-1}{\max }|{\xi _{1}}+\cdots +{\xi _{k}}|\Big)^{2r}}\\ {} & \hspace{1em}\le {A_{2r,\hspace{0.1667em}r}}{\Bigg({\sum \limits_{l=1}^{j-1}}{u_{l}}\Bigg)^{r}}={A_{2r,\hspace{0.1667em}r}}{D_{r}}{\big(f({t_{j-1,\hspace{0.1667em}n}})-f({t_{n}})\big)^{r}}\\ {} & \hspace{1em}\le {A_{2r,\hspace{0.1667em}r}}{D_{r}}{\big({v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu )\big)^{r}}.\end{aligned}\]
□
3.2 Proof of Theorem 5
We start with a lemma and a proposition which are in essence Lemma 4.13 and Proposition 4.7 in [3]. Although our present assumption (A3) is slightly different from the corresponding assumption in [3], we have checked that the proofs of the aforementioned results in [3] go through.
Lemma 6.
Suppose (A1), (A3), (B1) and either (B21) or (B22), and let $\mu \ge 1$ be as given in (23). For sufficiently small $\delta \gt 0$, pick $\gamma \in (0,(\sqrt{5}-1)/2)$ satisfying $(1+\gamma )(1-{\delta ^{2}}/8)\lt 1$. Then
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{1}{{(2a({\nu _{n}}){h_{0}}(a({\nu _{n}})))^{1/2}}}\sum \limits_{k\ge 1}{\eta _{k}}({\nu _{n}})\ge 1-\delta \hspace{2.5pt}\big(\le -(1-\delta )\big)\hspace{1em}\textit{a.s.},\]
where ${\nu _{n}}$ is either ${\tau _{n}}$ or $\lfloor {\tau _{n}}\rfloor $, and ${\tau _{n}}={\tau _{n}}(\gamma ,\mu )$, with γ chosen above, is as defined in (29).
Proposition 1.
Suppose (A1), (A3), (B1) and either (B21) or (B22). Then, with $\mu \ge 1$ and $q\ge 0$ as defined in (23) and (24), respectively,
\[ \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{X^{\ast }}(t)}{{(2(q+1)a(t)\log \log a(t))^{1/2}}}\ge 1\hspace{2.5pt}(\le -1)\hspace{1em}\textit{a.s.}\]
and
\[ \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{X^{\ast }}(t)}{{(2(\mu -1)a(t)\log a(t))^{1/2}}}\ge 1\hspace{2.5pt}(\le -1)\hspace{1em}\textit{a.s.}\]
in the cases $\mu =1$ and $\mu \gt 1$, respectively.
Proposition 2 is a counterpart of Proposition 4.6 in [3]. Although it is tempting to believe that the proof of Proposition 4.6 in [3] goes through as well, this is not the case. First, in the proof of (43), instead of dealing with the process X and its mean b (which are not monotone anymore), we work with their nondecreasing majorants Y and f. It is not obvious that Y and f are bounded from above similarly to X and b. Second, unlike in [3] we do not require that the variance a is asymptotically nondecreasing. Hence, putting $a({t_{n}})$ in the denominator of (37) is not allowed.
Proposition 2.
Suppose (A1)–(A5). Then, with $\mu \ge 1$ and $q\ge 0$ as defined in (23) and (24), respectively,
\[ \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{X^{\ast }}(t)}{{(2(q+1)a(t)\log \log a(t))^{1/2}}}\le 1\hspace{2.5pt}(\ge -1)\hspace{1em}\textit{a.s.}\]
and
\[ \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{X^{\ast }}(t)}{{(2(\mu -1)a(t)\log a(t))^{1/2}}}\le 1\hspace{2.5pt}(\ge -1)\hspace{1em}\textit{a.s.}\]
in the cases $\mu =1$ and $\mu \gt 1$, respectively.
Proof of Proposition 2.
In view of (A4), it is enough to show that, for each $\varrho \in (0,1)$ and each positive κ sufficiently close to 0,
(37)
\[ \underset{n\to \infty }{\limsup }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}{X^{\ast }}(u)}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\le 1+\kappa \hspace{1em}\text{a.s.},\]
where ${t_{n}}={t_{n}}(\kappa ,\mu )$ and ${s_{n}}={s_{n}}(\kappa ,\mu )$ are as defined in (26) and (A4), respectively, ${h_{\varrho }}={g_{1,\hspace{0.1667em}{q_{\varrho }}}}$ if μ in (23) is equal to 1 and ${h_{\varrho }}={g_{{\mu _{\varrho }}}}$ if $\mu \gt 1$ (see (31) for the definitions of ${g_{1,\hspace{0.1667em}{q_{\varrho }}}}$ and ${g_{{\mu _{\varrho }}}}$). Indeed, if (37) holds true, then, for large enough n, a.s.
\[\begin{array}{c}\displaystyle \underset{t\to \infty }{\limsup }\frac{{X^{\ast }}(t)}{{(2a(t){h_{\varrho }}(a(t)))^{1/2}}}=\underset{t\to \infty }{\limsup }\frac{{X^{\ast }}(t)}{{(2{a_{0}}(t){h_{\varrho }}({a_{0}}(t)))^{1/2}}}\\ {} \displaystyle \le \underset{n\to \infty }{\limsup }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}{X^{\ast }}(u)}{{(2{a_{0}}({s_{n}}){h_{\varrho }}({a_{0}}({s_{n}})))^{1/2}}}=\underset{n\to \infty }{\limsup }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}{X^{\ast }}(u)}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\le 1+\kappa .\end{array}\]
The relation ${\liminf _{t\to \infty }}\frac{{X^{\ast }}(t)}{{(2a(t){h_{\varrho }}(a(t)))^{1/2}}}\ge -1-\kappa $ a.s. does not require a separate proof. It follows from the argument for lim sup upon replacing ${\eta _{k}}(t)$ with $-{\eta _{k}}(t)$.
To obtain (37), we first prove in Lemma 7 that
(38)
\[ {\limsup _{n\to \infty }}\frac{{X^{\ast }}({s_{n}})}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\le 1+\kappa \hspace{1em}\text{a.s.}\]
and then show that
(39)
\[ \underset{n\to \infty }{\lim }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}|{X^{\ast }}(u)-{X^{\ast }}({s_{n}})|}{{(a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Lemma 7.
Suppose (A1), (A3) and (A4). Then relation (38) holds for any $\kappa \in (0,(\sqrt{5}-1)/2)$.
Proof.
Fix any $\kappa \in (0,(\sqrt{5}-1)/2)$. We first show that there exists $\rho =\rho (\kappa )\gt 0$ satisfying
(40)
\[ 2-\exp \big(2(1+\kappa )\rho \big)\gt {(1-\kappa )^{-1}}{(1+\kappa )^{-2}}.\]
To prove this, note that our choice of κ ensures $(1-\kappa ){(1+\kappa )^{2}}\gt 1$. Observe next that, as positive ρ approaches 0, $2-\exp (2(1+\kappa )\rho )$ becomes arbitrarily close to 1, thereby justifying (40) for all sufficiently small $\rho \gt 0$.
By Lemma 4.1 in [3], for $\vartheta \in \mathbb{R}$ and $t\ge 0$,
\[ \mathbb{E}\exp \big(\vartheta {X^{\ast }}(t)\big)\le \exp \big({2^{-1}}{\vartheta ^{2}}\exp (|\vartheta |)a(t)\big).\]
Fix any $\theta \in \mathbb{R}$ and put $\vartheta =\theta /{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}$. Observe that, for large enough n, ${h_{\varrho }}(a({s_{n}}))/a({s_{n}})\le 2{\rho ^{2}}$ for ρ satisfying (40). An application of Markov’s inequality then yields, for large n as above,
\[\begin{array}{c}\displaystyle \mathbb{P}\bigg\{\frac{{X^{\ast }}({s_{n}})}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\gt 1+\kappa \bigg\}\le {\mathrm{e}^{-(1+\kappa )\theta }}\mathbb{E}\exp \bigg(\theta \frac{{X^{\ast }}({s_{n}})}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\bigg)\\ {} \displaystyle \le \exp \bigg(-(1+\kappa )\theta +\frac{{\theta ^{2}}}{4{h_{\varrho }}(a({s_{n}}))}\exp \bigg(\frac{\rho |\theta |}{{h_{\varrho }}(a({s_{n}}))}\bigg)\bigg).\end{array}\]
Putting $\theta =2(1+\kappa ){h_{\varrho }}(a({s_{n}}))$ and then invoking Lemma 2(a), we obtain
\[\begin{array}{c}\displaystyle \mathbb{P}\bigg\{\frac{{X^{\ast }}({s_{n}})}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\gt 1+\kappa \bigg\}\\ {} \displaystyle \le \exp \big(-{(1+\kappa )^{2}}\big(2-\exp \big(2(1+\kappa )\rho \big)\big){h_{\varrho }}\big(a({s_{n}})\big)\big)\\ {} \displaystyle =O\bigg(\frac{1}{{n^{(1-\kappa ){(1+\kappa )^{2}}(2-\exp (2(1+\kappa )\rho ))}}}\bigg),\hspace{1em}n\to \infty .\end{array}\]
According to (40),
\[ \sum \limits_{n\ge {n_{0}}}\mathbb{P}\bigg\{\frac{{X^{\ast }}({s_{n}})}{{(2a({s_{n}}){h_{\varrho }}(a({s_{n}})))^{1/2}}}\gt 1+\kappa \bigg\}\lt \infty \]
for some ${n_{0}}\in \mathbb{N}$ large enough. An application of the Borel–Cantelli lemma completes the proof of Lemma 7. □
Next, in order to prove (39) it suffices to show that
(41)
\[ \underset{n\to \infty }{\lim }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}|{X^{\ast }}(u)-{X^{\ast }}({t_{n}})|}{{(a({s_{n}}))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
and
(42)
\[ \underset{n\to \infty }{\lim }\frac{|{X^{\ast }}({t_{n}})-{X^{\ast }}({s_{n}})|}{{(a({s_{n}}))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Since (42) is a consequence of (41), we are left with proving (41).
Proof of (41). Let ${t_{n}}={t_{0,\hspace{0.1667em}n}}\lt \cdots \lt {t_{j,\hspace{0.1667em}n}}={t_{n+1}}$ be the partition defined in (A5). With this at hand, write
\[\begin{aligned}{}& \underset{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}{\sup }\big|{X^{\ast }}(u)-{X^{\ast }}({t_{n}})\big|\\ {} & \hspace{1em}=\underset{1\le k\le j}{\max }\underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big|\big({X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})\big)\hspace{-0.1667em}+\hspace{-0.1667em}\big({X^{\ast }}({t_{k-1,\hspace{0.1667em}n}}+v)\hspace{-0.1667em}-\hspace{-0.1667em}{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})\big)\big|\\ {} & \hspace{1em}\le \underset{1\le k\le j}{\max }\big|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})\big|\\ {} & \hspace{2em}+\underset{1\le k\le j}{\max }\underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big|\big({X^{\ast }}({t_{k-1,\hspace{0.1667em}n}}+v)-{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})\big)\big|\hspace{1em}\text{a.s.}\end{aligned}\]
By Markov’s inequality and Lemma 5, for any integer $r\ge 2$ and all $\varepsilon \gt 0$,
\[\begin{array}{c}\displaystyle \mathbb{P}\Big\{\underset{1\le k\le j}{\max }|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})|\gt \varepsilon {\big(a({s_{n}})\big)^{1/2}}\Big\}\\ {} \displaystyle \le \frac{\mathbb{E}{({\max _{1\le k\le j}}\hspace{0.1667em}|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})|)^{2r}}}{{\varepsilon ^{2r}}{(a({s_{n}}))^{r}}}\le \frac{{A_{r}}{({v_{n+1}}(\kappa ,\mu )-{v_{n}}(\kappa ,\mu ))^{r}}}{{\varepsilon ^{2r}}{(a({s_{n}}))^{r}}}.\end{array}\]
By Lemma 2(b), there exists an integer $r\ge 2$ such that the right-hand side forms a sequence which is summable in n. Hence, an application of the Borel–Cantelli lemma yields
\[ \underset{n\to \infty }{\lim }\frac{{\max _{1\le k\le j}}\hspace{0.1667em}|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})-{X^{\ast }}({t_{n}})|}{{(a({s_{n}}))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Next, we work towards proving that
(43)
\[ \underset{n\to \infty }{\lim }\frac{{\max _{1\le k\le j}}{\sup _{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}}|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}}+v)-{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})|}{{(a({s_{n}}))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
According to (A2), for any $0\le s\lt t$, $|X(t)-X(s)|\le Y(t)-Y(s)$ a.s., where the process Y is a.s. nondecreasing. Taking into account Remark 4 we obtain
\[\begin{aligned}{}& \underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big|{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}}+v)-{X^{\ast }}({t_{k-1,\hspace{0.1667em}n}})\big|\\ {} & \hspace{1em}\le \underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big|X({t_{k-1,\hspace{0.1667em}n}}+v)-X({t_{k-1,\hspace{0.1667em}n}})\big|\\ {} & \hspace{2em}+\underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big|b({t_{k-1,\hspace{0.1667em}n}}+v)-b({t_{k-1,\hspace{0.1667em}n}})\big|\\ {} & \hspace{1em}\le \underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big(Y({t_{k-1,\hspace{0.1667em}n}}+v)-Y({t_{k-1,\hspace{0.1667em}n}})\big)\\ {} & \hspace{2em}+\underset{v\in [0,\hspace{0.1667em}{t_{k,\hspace{0.1667em}n}}-{t_{k-1,\hspace{0.1667em}n}}]}{\sup }\big(f({t_{k-1,\hspace{0.1667em}n}}+v)-f({t_{k-1,\hspace{0.1667em}n}})\big)\\ {} & \hspace{1em}=Y({t_{k,\hspace{0.1667em}n}})-Y({t_{k-1,\hspace{0.1667em}n}})+f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}})\hspace{1em}\text{a.s.}\end{aligned}\]
By (A1) and (A5),
(44)
\[ \frac{{\max _{1\le k\le j}}\hspace{0.1667em}(f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}}))}{{(a({s_{n}}))^{1/2}}}\le \frac{A}{{(a({s_{n}}))^{1/2}}}\hspace{2.5pt}\to 0,\hspace{1em}n\to \infty .\]
Finally, for all $\varepsilon \gt 0$,
(45)
\[\begin{aligned}{}& \mathbb{P}\Big\{\underset{1\le k\le j}{\max }\big(Y({t_{k,\hspace{0.1667em}n}})-Y({t_{k-1,\hspace{0.1667em}n}})\big)\gt \varepsilon {\big(a({s_{n}})\big)^{1/2}}\Big\}\\ {} & \hspace{1em}\le {\sum \limits_{k=1}^{j}}\mathbb{P}\big\{Y({t_{k,\hspace{0.1667em}n}})-Y({t_{k-1,\hspace{0.1667em}n}})\gt \varepsilon {\big(a({s_{n}})\big)^{1/2}}\big\}\\ {} & \hspace{1em}\le {\mathrm{e}^{-\varepsilon {(a({s_{n}}))^{1/2}}}}{\sum \limits_{k=1}^{j}}\mathbb{E}{\mathrm{e}^{Y({t_{k,\hspace{0.1667em}n}})-Y({t_{k-1,\hspace{0.1667em}n}})}}\\ {} & \hspace{1em}\le {\mathrm{e}^{-\varepsilon {(a({s_{n}}))^{1/2}}}}{\sum \limits_{k=1}^{j}}\exp \big(\big({\mathrm{e}^{M}}-1\big)\big(f({t_{k,\hspace{0.1667em}n}})-f({t_{k-1,\hspace{0.1667em}n}})\big)\big)\\ {} & \hspace{1em}\le \exp \big(A\big({\mathrm{e}^{M}}-1\big)\big)j{\mathrm{e}^{-\varepsilon {(a({s_{n}}))^{1/2}}}},\end{aligned}\]
having utilized Markov’s inequality for the second inequality, Lemma 1 for the third and (A5) for the fourth. Invoking (A5) once again we conclude that the right-hand side is summable in n. Hence, an application of the Borel–Cantelli lemma yields
\[ \underset{n\to \infty }{\lim }\frac{{\max _{1\le k\le j}}\hspace{0.1667em}(Y({t_{k,\hspace{0.1667em}n}})-Y({t_{k-1,\hspace{0.1667em}n}}))}{{(a({s_{n}}))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
In combination with (44) and the decomposition above, this proves (43).
The proofs of both (41) and Proposition 2 are complete. □
4 Proofs related to Karlin’s occupancy scheme
4.1 Auxiliary results
For ease of reference, we state two known results. The former is an obvious extension of Theorem 1.5.3 in [2]. The latter is Lemma 6.2 in [3].
Lemma 8.
Let f be a function which varies regularly at ∞ of positive index and g a positive nondecreasing function with ${\lim \nolimits_{t\to \infty }}g(t)=\infty $. Then there exists a nondecreasing function h satisfying $f(g(t))\sim h(t)$ as $t\to \infty $.
4.2 Asymptotic behavior of $\mathbb{E}{K_{j}}(t)$ and $\operatorname{Var}{K_{j}}(t)$
Given next is a collection of results on the asymptotics of $\mathbb{E}{K_{j}}(t)$ and $\operatorname{Var}{K_{j}}(t)$ taken from Lemma 6.5 of [3]. Recall that ${\Pi _{\ell ,\hspace{0.1667em}\infty }}$ denotes the subclass of the de Haan class Π with the auxiliary functions ℓ, see (2), satisfying ${\lim \nolimits_{t\to \infty }}\ell (t)=\infty $.
Lemma 10.
Assume that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$. Then, for each $j\in \mathbb{N}$,
and
Assume that $\rho (t)\sim {t^{\alpha }}L(t)$ as $t\to \infty $ for some $\alpha \in (0,1]$ and some L slowly varying at ∞. If $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$, then, as $t\to \infty $,
and
4.3 Asymptotic behavior of $\mathbb{E}{K_{j}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}^{\ast }}(t)$
For $j\in \mathbb{N}$, the asymptotics of $t\mapsto \mathbb{E}{K_{j}^{\ast }}(t)$ as stated in Theorems 1, 2 and 3 can be found in Lemma 6.5 of [3]. Next, we show that, for $j\in \mathbb{N}$, the functions $t\mapsto \operatorname{Var}{K_{j}^{\ast }}(t)$ exhibit the asymptotics given in the aforementioned theorems.
Lemma 11.
Assume that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$. Then, for each $j\in \mathbb{N}$,
Assume that $\rho (t)\sim {t^{\alpha }}L(t)$ as $t\to \infty $ for some $\alpha \in (0,1]$ and some L slowly varying at ∞. If $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$, then, as $t\to \infty $,
(53)
\[ \underset{t\to \infty }{\lim }\frac{\mathrm{Var}\hspace{0.1667em}{K_{j}^{\ast }}(t)}{{t^{\alpha }}L(t)}={c_{j,\hspace{0.1667em}\alpha }}\gt 0\]
with ${c_{j,\hspace{0.1667em}\alpha }}$ as defined in (15) and (17).
Proof.
Assume that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$. Putting $u=v=0$ in formula (11) of [11] we obtain (52).
According to formula (6) in [9],
(55)
\[ \operatorname{Var}{K_{j}^{\ast }}(t)=\mathbb{E}{K_{j}^{\ast }}(t)-{2^{-2j}}\left(\genfrac{}{}{0.0pt}{}{2j}{j}\right)\mathbb{E}{K_{2j}^{\ast }}(2t),\hspace{1em}t\ge 0,j\in \mathbb{N}.\]
Note that (55) does not require any regular variation assumption on ρ.
Assume that ρ is regularly varying at ∞ of index $\alpha =1$. We first discuss the properties of the function $\hat{L}$ stated in Theorem 3. By Lemma 3 in [12], ${\lim \nolimits_{t\to \infty }}{t^{-1}}\rho (t)=0$ and ${\textstyle\int _{1}^{\infty }}{y^{-2}}\rho (y)\mathrm{d}y\le 1$. This implies that the function $\hat{L}(t)={\textstyle\int _{t}^{\infty }}{y^{-1}}L(y)\mathrm{d}y$ is well-defined for large t and thereupon ${\lim \nolimits_{t\to \infty }}\hat{L}(t)=0$. According to Proposition 1.5.9b in [2], $\hat{L}$ is slowly varying at ∞ and satisfies (19). This in combination with (16), (22) and (55) entails (54).
Assume now $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$. Then invoking (55) and either (13) or (16) we obtain
\[ \underset{t\to \infty }{\lim }\frac{\operatorname{Var}{K_{j}^{\ast }}(t)}{{t^{\alpha }}L(t)}=\underset{t\to \infty }{\lim }\frac{\mathbb{E}{K_{j}^{\ast }}(t)}{{t^{\alpha }}L(t)}-{2^{-2j}}\left(\genfrac{}{}{0.0pt}{}{2j}{j}\right)\underset{t\to \infty }{\lim }\frac{\mathbb{E}{K_{2j}^{\ast }}(2t)}{{t^{\alpha }}L(t)}={c_{j,\hspace{0.1667em}\alpha }}.\]
We are left with showing that the constants ${c_{j,\hspace{0.1667em}\alpha }}$ are positive for $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$, or equivalently that
\[ \frac{{2^{\alpha }}\Gamma (2j-\alpha )}{{2^{2j}}j!\Gamma (j-\alpha )}\lt 1.\]
This is a consequence of
\[ \frac{{2^{\alpha }}\Gamma (2j-\alpha )}{{2^{2j}}j!\Gamma (j-\alpha )}\lt \frac{2\hspace{0.1667em}(2j-1)!}{{2^{2j}}j!(j-1)!}=\frac{(2j-1)!}{(2j)!!(2j-2)!!}\lt 1,\]
where $(2n)!!:=2\cdot 4\cdot \dots \cdot (2n)$ for $n\in \mathbb{N}$. Here, the last inequality follows by induction on j: for $j=1$ the middle expression equals $1/2$, and passing from j to $j+1$ multiplies it by $(2j+1)/(2j+2)\lt 1$. The proof of Lemma 11 is complete. □
4.4 Proof of Theorems 1, 2 and 3
For $k\in \mathbb{N}$ and $t\ge 0$, denote by ${\pi _{k}}(t)$ the number of balls in box k at time t in the Poissonized version. It has already been mentioned in Section 1.1 that the thinning property of Poisson processes implies that the processes ${({\pi _{1}}(t))_{t\ge 0}}$, ${({\pi _{2}}(t))_{t\ge 0}},\dots $ are independent. Moreover, for $k\in \mathbb{N}$, ${({\pi _{k}}(t))_{t\ge 0}}$ is a Poisson process with intensity ${p_{k}}$. As a consequence, both ${K_{j}^{\ast }}(t)$ and ${K_{j}}(t)$ can be represented as the sums of independent indicators
\[ {K_{j}^{\ast }}(t)=\sum \limits_{k\ge 1}{1_{\{{\pi _{k}}(t)=j\}}}\hspace{1em}\text{and}\hspace{1em}{K_{j}}(t)=\sum \limits_{k\ge 1}{1_{\{{\pi _{k}}(t)\ge j\}}},\hspace{1em}t\ge 0,j\in \mathbb{N}.\]
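Since the indicators above are independent, the moments of ${K_{j}^{\ast }}(t)$ are explicit, and identity (55) together with the variance formula (64) can be confirmed numerically. The following sketch is illustrative only: the truncated geometric distribution ${p_{k}}=(1-q){q^{k-1}}$ and all parameter values are hypothetical choices, not part of the proofs.

```python
import math

# Hypothetical frequencies for illustration: a truncated geometric
# distribution p_k = (1 - q) q^{k-1}; any summable (p_k) would do.
q = 0.5
K = 200  # truncation level; the omitted tail mass is negligible here
p = [(1 - q) * q ** (k - 1) for k in range(1, K + 1)]

def m(pk, t, j):
    """P{pi_k(t) = j} for a Poisson process of intensity pk."""
    return math.exp(-pk * t) * (pk * t) ** j / math.factorial(j)

def EK_star(t, j):
    """E K_j^*(t) = sum_k P{pi_k(t) = j}."""
    return sum(m(pk, t, j) for pk in p)

def VarK_star(t, j):
    """Var K_j^*(t) as a sum of Bernoulli variances, cf. (64)."""
    return sum(m(pk, t, j) * (1 - m(pk, t, j)) for pk in p)

t, j = 7.0, 2
lhs = VarK_star(t, j)
# Identity (55): Var K_j^*(t) = E K_j^*(t) - 2^{-2j} C(2j, j) E K_{2j}^*(2t).
rhs = EK_star(t, j) - 2 ** (-2 * j) * math.comb(2 * j, j) * EK_star(2 * t, 2 * j)
assert abs(lhs - rhs) < 1e-12
```

The check passes for any choice of ${({p_{k}})}$ and t because (55) holds term by term: the subtracted term equals ${\textstyle\sum _{k}}{(\mathbb{P}\{{\pi _{k}}(t)=j\})^{2}}$.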
Hence, it is reasonable to prove the desired LILs for the small counts by applying Theorem 5.
As a preparation, we start with a lemma which facilitates checking condition (B22) of Theorem 5.
We conclude that in all the settings ${\tau _{n+1}}/{\tau _{n}}$ diverges to ∞ superexponentially fast, whereas $\log {\tau _{n}}$ only grows polynomially fast. Hence, for large enough n, $c({\tau _{n+1}})\gt d({\tau _{n}})$, which justifies the first part of (B22).
Lemma 12.
Assume that either $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ or ρ is regularly varying at ∞ of index $\alpha \in (0,1]$. If $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ and $j\in \mathbb{N}$ or $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$, then for any positive functions c and d satisfying ${\lim \nolimits_{t\to \infty }}c(t)=\infty $, ${\lim \nolimits_{t\to \infty }}(c(t)/t)=0$ and ${\lim \nolimits_{t\to \infty }}(d(t)/t)=\infty $,
Proof.
We start by proving a simple but important inequality. Since
\[\begin{array}{c}\displaystyle \mathrm{Cov}\hspace{0.1667em}({1_{\{{\pi _{k}}(t)\ge j\}}},{1_{\{{\pi _{k}}(t)\ge j+1\}}})\\ {} \displaystyle =\mathbb{P}\big\{{\pi _{k}}(t)\ge j+1\big\}-\mathbb{P}\big\{{\pi _{k}}(t)\ge j\big\}\mathbb{P}\big\{{\pi _{k}}(t)\ge j+1\big\}\ge 0,\end{array}\]
we infer
\[\begin{aligned}{}& \operatorname{Var}({1_{\{{\pi _{k}}(t)=j\}}})\\ {} & \hspace{1em}=\operatorname{Var}({1_{\{{\pi _{k}}(t)\ge j\}}}-{1_{\{{\pi _{k}}(t)\ge j+1\}}})\\ {} & \hspace{1em}=\operatorname{Var}({1_{\{{\pi _{k}}(t)\ge j+1\}}})+\operatorname{Var}({1_{\{{\pi _{k}}(t)\ge j\}}})-2\mathrm{Cov}\hspace{0.1667em}({1_{\{{\pi _{k}}(t)\ge j\}}},{1_{\{{\pi _{k}}(t)\ge j+1\}}})\\ {} & \hspace{1em}\le \operatorname{Var}{1_{\{{\pi _{k}}(t)\ge j+1\}}}+\operatorname{Var}{1_{\{{\pi _{k}}(t)\ge j\}}}.\end{aligned}\]
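The elementary inequality just derived, $\operatorname{Var}({1_{\{{\pi _{k}}(t)=j\}}})\le \operatorname{Var}({1_{\{{\pi _{k}}(t)\ge j\}}})+\operatorname{Var}({1_{\{{\pi _{k}}(t)\ge j+1\}}})$, can be confirmed numerically; the grid of intensities $\lambda ={p_{k}}t$ below is an arbitrary illustrative choice.

```python
import math

def pois_sf(lam, j):
    """P{N >= j} for N Poisson(lam), via the complementary pmf sum."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(j))

# Check Var 1{N=j} <= Var 1{N>=j} + Var 1{N>=j+1} on a grid.
for lam in (0.1, 1.0, 5.0, 20.0):
    for j in (1, 2, 3, 7):
        pj, pj1 = pois_sf(lam, j), pois_sf(lam, j + 1)
        peq = pj - pj1                 # P{N = j}
        var_eq = peq * (1 - peq)       # Bernoulli variance of 1{N=j}
        assert var_eq <= pj * (1 - pj) + pj1 * (1 - pj1) + 1e-12
```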
Therefore, it is enough to show that, in the setting of the lemma, for all $j\ge 2$ in the case $\alpha =1$ and for all $j\in \mathbb{N}$ in the other cases,
(56)
\[ \mathrm{Var}\hspace{0.1667em}\bigg(\sum \limits_{k\ge 1}{1_{\{1/{p_{k}}\gt d(t)\}}}{1_{\{{\pi _{k}}(t)\ge j\}}}\bigg)=o\big(\operatorname{Var}{K_{j}^{\ast }}(t)\big),\hspace{1em}t\to \infty ,\]
and
(57)
\[ \mathrm{Var}\hspace{0.1667em}\bigg(\sum \limits_{k\ge 1}{1_{\{1/{p_{k}}\le c(t)\}}}{1_{\{{\pi _{k}}(t)\ge j\}}}\bigg)=o\big(\operatorname{Var}{K_{j}^{\ast }}(t)\big),\hspace{1em}t\to \infty .\]
According to formulae (86), (87), (79) and (80) in [3],
(58)
\[ \mathrm{Var}\hspace{0.1667em}\bigg(\sum \limits_{k\ge 1}{1_{\{1/{p_{k}}\gt d(t)\}}}{1_{\{{\pi _{k}}(t)\ge j\}}}\bigg)=o\big(\ell (t)\big),\hspace{1em}t\to \infty ,\]
and
(61)
\[ \mathrm{Var}\hspace{0.1667em}\bigg(\sum \limits_{k\ge 1}{1_{\{1/{p_{k}}\le c(t)\}}}{1_{\{{\pi _{k}}(t)\ge j\}}}\bigg)=o\big(\rho (t)\big),\hspace{1em}t\to \infty .\]
In view of (52) or (53), depending on the setting, relations (58) or (60), (59) or (61) are equivalent to (56) and (57). □
Proof of Theorems 1, 2 and 3.
We first prove Theorem 3 in the case $j=1$. This setting is much simpler than the others, as the LIL for
(62)
\[ {K_{1}^{\ast }}(t)={K_{1}}(t)-{K_{2}}(t),\hspace{1em}t\ge 0,\]
can be derived from the already available LILs for ${K_{1}}(t)$ and ${K_{2}}(t)$.
The statements of Theorem 3 concerning the function $\hat{L}$ have already been justified in the proof of Lemma 11. According to (51) and (54), $\operatorname{Var}{K_{1}^{\ast }}(t)\sim \operatorname{Var}{K_{1}}(t)\sim t\hat{L}(t)$ as $t\to \infty $. Invoking the latter relation, (50) and (19), we conclude that $\operatorname{Var}{K_{2}}(t)\sim {2^{-1}}tL(t)=o(\operatorname{Var}{K_{1}}(t))$ as $t\to \infty $. By Theorem 3.4 and Remark 1.7 in [3],
\[ \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{K_{2}}(t)-\mathbb{E}{K_{2}}(t)}{{(\operatorname{Var}{K_{2}}(t)\log \log \operatorname{Var}{K_{2}}(t))^{1/2}}}={2^{1/2}}\hspace{2.5pt}\big(-{2^{1/2}}\big)\hspace{1em}\text{a.s.}\]
As a consequence, ${K_{2}}(t)-\mathbb{E}{K_{2}}(t)=o({(\operatorname{Var}{K_{1}}(t)\log \log \operatorname{Var}{K_{1}}(t))^{1/2}})$ a.s. as $t\to \infty $. Now, in view of (62),
\[\begin{array}{c}\displaystyle \underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{K_{1}^{\ast }}(t)-\mathbb{E}{K_{1}^{\ast }}(t)}{{(\operatorname{Var}{K_{1}^{\ast }}(t)\log \log \operatorname{Var}{K_{1}^{\ast }}(t))^{1/2}}}\\ {} \displaystyle =\underset{t\to \infty }{\limsup }\Big(\underset{t\to \infty }{\liminf }\Big)\frac{{K_{1}}(t)-\mathbb{E}{K_{1}}(t)}{{(\operatorname{Var}{K_{1}}(t)\log \log \operatorname{Var}{K_{1}}(t))^{1/2}}}\hspace{1em}\text{a.s.}\end{array}\]
Armed with this, the claim of Theorem 3 in the case $j=1$ is secured by Theorem 3.4 in [3]. Indeed, the theorem states that, depending on whether relation (18) holds or not, the right-hand side is either equal to ${2^{1/2}}$ ($-{2^{1/2}}$) or is not larger than ${2^{1/2}}$ (not smaller than $-{2^{1/2}}$) a.s.
In the remaining part of the proof we treat simultaneously Theorems 1 and 2 and the case $j\ge 2$ of Theorem 3. It has already been announced that our plan is to derive the LILs from Theorem 5. Hence, we now work towards checking the conditions of the aforementioned theorem in the present setting.
Condition (A1) holds according to (52) in conjunction with ${\lim \nolimits_{t\to \infty }}\ell (t)=\infty $, (53) and (54).
Condition (A2) is justified by the representation ${1_{\{{\pi _{k}}(t)=j\}}}={1_{\{{\pi _{k}}(t)\ge j\}}}-{1_{\{{\pi _{k}}(t)\ge j+1\}}}$ a.s., for all $k,j\in \mathbb{N}$ and $t\ge 0$, see Remark 5. The corresponding function f is given by
(63)
\[\begin{array}{cc}& \displaystyle {f_{j,\hspace{0.1667em}\alpha }}(t):=\sum \limits_{k\ge 1}\big(\mathbb{P}\big\{{\pi _{k}}(t)\ge j\big\}+\mathbb{P}\big\{{\pi _{k}}(t)\ge j+1\big\}\big)=\mathbb{E}{K_{j}}(t)+\mathbb{E}{K_{j+1}}(t)\\ {} & \displaystyle =\sum \limits_{k\ge 1}\Bigg(1-{\sum \limits_{i=0}^{j-1}}{\mathrm{e}^{-{p_{k}}t}}\frac{{({p_{k}}t)^{i}}}{i!}\Bigg)+\sum \limits_{k\ge 1}\Bigg(1-{\sum \limits_{i=0}^{j}}{\mathrm{e}^{-{p_{k}}t}}\frac{{({p_{k}}t)^{i}}}{i!}\Bigg).\end{array}\]
We bring out the dependence on j and α to distinguish the functions so defined in the different settings. By the same reasoning, we write ${a_{j,\hspace{0.1667em}\alpha }}$ instead of a, where $a(t)=\operatorname{Var}{K_{j}^{\ast }}(t)$ for $t\ge 0$.
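A small numerical sketch of (63): it verifies the identity $\mathbb{E}{K_{j}}(t)-\mathbb{E}{K_{j+1}}(t)=\mathbb{E}{K_{j}^{\ast }}(t)$ underlying the representation and checks on a grid that ${f_{j,\hspace{0.1667em}\alpha }}$ is nondecreasing; the truncated geometric ${({p_{k}})}$ and all parameters are hypothetical choices for illustration.

```python
import math

# Illustrative frequencies: truncated geometric (hypothetical choice).
q = 0.5
p = [(1 - q) * q ** (k - 1) for k in range(1, 201)]

def EK(t, j):
    """E K_j(t) = sum_k P{pi_k(t) >= j}, via the Poisson tail as in (63)."""
    return sum(1.0 - sum(math.exp(-pk * t) * (pk * t) ** i / math.factorial(i)
                         for i in range(j)) for pk in p)

def f(t, j):
    """f_{j,alpha}(t) = E K_j(t) + E K_{j+1}(t), as in (63)."""
    return EK(t, j) + EK(t, j + 1)

t, j = 5.0, 2
# Consistency: E K_j(t) - E K_{j+1}(t) = sum_k P{pi_k(t) = j} = E K_j^*(t).
EK_star = sum(math.exp(-pk * t) * (pk * t) ** j / math.factorial(j) for pk in p)
assert abs(EK(t, j) - EK(t, j + 1) - EK_star) < 1e-12
# f_{j,alpha} is nondecreasing (checked on a grid).
grid = [0.5 * n for n in range(1, 40)]
assert all(f(s, j) <= f(s + 0.5, j) + 1e-12 for s in grid)
```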
Condition (A3). Assume first that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$. According to (48) and (52), for each $j\in \mathbb{N}$, ${f_{j,\hspace{0.1667em}0}}(t)\sim 2\rho (t)$ and ${a_{j,\hspace{0.1667em}0}}(t)\sim C\ell (t)$ as $t\to \infty $, respectively. Here and hereafter, C denotes a constant whose value is of no importance and may vary from formula to formula. Under (3), invoking (46) we conclude that (A3) holds with $\mu =1/\beta +1$. Under (6), using (47) we infer $\mu =1$. Thus, we have to check the additional conditions pertaining to the case $\mu =1$. First, the function ${f_{j,\hspace{0.1667em}0}}$ is continuous. Second, $q=1/\lambda -1$ and $\mathcal{L}(t)\equiv 1$ for all $t\ge 0$ by another appeal to (47).
Assume now that $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$. Then, according to (49) and (53), ${f_{j,\hspace{0.1667em}\alpha }}(t)\sim C{a_{j,\hspace{0.1667em}\alpha }}(t)$ as $t\to \infty $, which entails $\mu =1$. Further, ${f_{j,\hspace{0.1667em}\alpha }}$ is continuous, $q=0$ and $\mathcal{L}(t)\equiv 1$ for $t\ge 0$.
Condition (A4). Denote by ${a_{0;j,\hspace{0.1667em}\alpha }}$ a version of ${a_{0}}$ for the different settings. Assume first that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$. Then, according to (52), ${a_{j,\hspace{0.1667em}0}}(t)\sim C\ell (t)$ as $t\to \infty $. Therefore, under (3), ${a_{0;j,\hspace{0.1667em}0}}$ can be chosen as a monotone equivalent of $t\mapsto C{(\log t)^{\beta }}l(\log t)$ which exists by Lemma 8. Under (6), ${a_{0;j,0}}$ can be chosen as ${a_{0;j,\hspace{0.1667em}0}}(t):=C\exp (\sigma {(\log t)^{\lambda }})$ for all $t\ge 1$.
Assume now that $\alpha \in (0,1)$ and $j\in \mathbb{N}$ or $\alpha =1$ and $j\ge 2$. Then, according to (53), ${a_{0;j,\hspace{0.1667em}\alpha }}$ can be chosen as a monotone equivalent of $t\mapsto {c_{j,\hspace{0.1667em}\alpha }}{t^{\alpha }}L(t)$ which exists by Lemma 8.
Thus, in all settings (A4) holds according to Remark 7.
Condition (A5) holds according to Remark 8, for ${f_{j,\hspace{0.1667em}\alpha }}$ is continuous and strictly increasing.
Condition (B1) holds in view of
(64)
\[ {a_{j,\hspace{0.1667em}\alpha }}(t)=\operatorname{Var}{K_{j}^{\ast }}(t)=\sum \limits_{k\ge 1}{\mathrm{e}^{-{p_{k}}t}}\frac{{({p_{k}}t)^{j}}}{j!}\bigg(1-{\mathrm{e}^{-{p_{k}}t}}\frac{{({p_{k}}t)^{j}}}{j!}\bigg),\hspace{1em}t\ge 0,\]
which shows that ${a_{j,\hspace{0.1667em}\alpha }}$ is a continuous function.
Condition (B22). For $t\gt 1$, put $c(t):=t/\log t$ and $d(t):=t\log t$ and then
By Lemma 12, in all settings relation (30), which is the second part of (B22), holds.
Passing to the first part of (B22), we are going to refer to the table below which contains all the necessary information. In the first line, we list the values of μ which have already been found while checking (A3). Recall that the definitions of ${w_{n}}(\gamma ,\mu )$ and ${\tau _{n}}$ can be found right after formula (29) and in (29), respectively. We write ${w_{n}}$ instead of ${w_{n}}(\gamma ,\mu )$ and Set. is a shorthand for Setting. Note that in the case $\alpha =1$ we only consider $j\ge 2$.
Set. | $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$, (3) | $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$, (6) | $\alpha \in (0,1]$, $j\in \mathbb{N}$ |
μ | $1/\beta +1$ | 1 | 1 |
${w_{n}}$ | ${n^{\beta (1+\gamma )}}$ | $\exp ({n^{\lambda (1+\gamma )}})$ | $\exp ({n^{1+\gamma }})$ |
${\tau _{n}}\sim $ | ${\mathrm{e}^{{n^{(1+\gamma )}}(1+o(1))}}$ | ${\mathrm{e}^{{\sigma ^{-1/\lambda }}{n^{(1+\gamma )}}}}(1+o(1))$ | ${\mathrm{e}^{{\alpha ^{-1}}{n^{(1+\gamma )}}(1+o(1))}}$ |
4.5 Proof of Theorem 4
We start with some preparatory work. It is known, see, for instance, Lemma 1 in [9], that for any probability distribution ${({p_{k}})_{k\in \mathbb{N}}}$ and $j\in \mathbb{N}$,
(65)
\[ \underset{n\to \infty }{\lim }\big|\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)-\mathbb{E}{K_{j}^{\ast }}(n)\big|=0.\]
However, we are not aware of a counterpart of this relation for variances. Proposition 3 fills this gap. Recall that $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ means that $\rho \in \Pi $ and that its auxiliary function ℓ, see (2), satisfies ${\lim \nolimits_{t\to \infty }}\ell (t)=\infty $.
The proof of Proposition 3 is partly based on a lemma which is a slight extension of Lemma 6.9 in [3]. The new aspect of the lemma is that, unlike the cited result, it covers the case where $j=1$ and $l\ge 1$ simultaneously.
Lemma 13.
Assume that either $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ or ρ is regularly varying at ∞ of index $\alpha \in (0,1]$. Then for $l\ge j$, $l,j\in \mathbb{N}$,
Proof.
According to the last formula in the proof of Lemma 6.9 in [3],
\[ \sum \limits_{k\ge 1}\left(\genfrac{}{}{0.0pt}{}{n}{l}\right){p_{k}^{l}}{(1-{p_{k}})^{n}}\hspace{2.5pt}\sim \hspace{2.5pt}\mathbb{E}{K_{l}^{\ast }}(n+l),\hspace{1em}n\to \infty .\]
According to formulae (9), (13), (16) and (22), the function $t\mapsto \mathbb{E}{K_{j}^{\ast }}(t)$ is regularly varying at ∞ of index $\alpha \in [0,1]$. This entails $\mathbb{E}{K_{l}^{\ast }}(n+l)\sim \mathbb{E}{K_{l}^{\ast }}(n)$ as $n\to \infty $.
If $\rho \in {\Pi _{\ell ,\hspace{0.1667em}\infty }}$ or ρ is regularly varying at ∞ of index $\alpha \in (0,1)$, then (66) is a consequence of (9) and (10) or (13) and (14). If ρ is regularly varying at ∞ of index $\alpha =1$ and either $j,l\ge 2$ or $j=l=1$, then (66) follows from (16) and (17) or (22), respectively. Finally, under the latter regular variation assumption, if $j=1$ and $l\ge 2$, then (66), with o replacing O, holds true according to (16), (22) and (19). □
Proof of Proposition 3.
We start by noting that, in view of (52), (53) or (54), for $j\in \mathbb{N}$,
In the case $\alpha =1$ this is secured by ${\lim \nolimits_{t\to \infty }}\hat{L}(t)=0$, which follows from the definition of $\hat{L}$.
For $k,j,n\in \mathbb{N}$, the event $\{\text{the box}\hspace{2.5pt}k\hspace{2.5pt}\text{contains exactly}\hspace{2.5pt}j\hspace{2.5pt}\text{balls out of}\hspace{2.5pt}n\}$ will be denoted by ${A_{k}}(j,n)$. Then ${\mathcal{K}_{j}^{\ast }}(n)={\textstyle\sum _{k\ge 1}}{1_{{A_{k}}(j,n)}}$ and
\[\begin{array}{c}\displaystyle \operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)=\sum \limits_{k\ge 1}\mathbb{P}\big({A_{k}}(j,n)\big)\big(1-\mathbb{P}\big({A_{k}}(j,n)\big)\big)\\ {} \displaystyle +\sum \limits_{i\ne k}\big(\mathbb{P}\big({A_{i}}(j,n)\cap {A_{k}}(j,n)\big)-\mathbb{P}\big({A_{i}}(j,n)\big)\mathbb{P}\big({A_{k}}(j,n)\big)\big).\end{array}\]
It is enough to prove that
(68)
\[ \underset{n\to \infty }{\lim }\frac{{\textstyle\sum _{k\ge 1}}\mathbb{P}({A_{k}}(j,n))(1-\mathbb{P}({A_{k}}(j,n)))-\operatorname{Var}{K_{j}^{\ast }}(n)}{\operatorname{Var}{K_{j}^{\ast }}(n)}=0\]
and
(69)
\[ \underset{n\to \infty }{\lim }\frac{{\textstyle\sum _{i\ne k}}(\mathbb{P}({A_{i}}(j,n)\cap {A_{k}}(j,n))-\mathbb{P}({A_{i}}(j,n))\mathbb{P}({A_{k}}(j,n)))}{\operatorname{Var}{K_{j}^{\ast }}(n)}=0.\]
Proof of (68). For $k,j,n\in \mathbb{N}$,
\[ \mathbb{P}\big({A_{k}}(j,n)\big)=\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}.\]
In view of this and (64), the numerator in (68) is equal to
\[\begin{array}{c}\displaystyle \sum \limits_{k\ge 1}\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}-{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\\ {} \displaystyle -{\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}\bigg)^{2}}+{\bigg({\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\bigg)^{2}}\bigg).\end{array}\]
According to the penultimate inequality in the proof of Lemma 2.13 in [11], for large enough n and any $j\le n$,
\[ -{B_{j}}{p_{k}}\le \left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}-{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\le {A_{j}}{p_{k}}\]
for some positive constants ${A_{j}}$ and ${B_{j}}$. Therefore,
\[ \sum \limits_{k\ge 1}\Bigg|\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}-{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\Bigg|\le \max ({A_{j}},{B_{j}})\sum \limits_{k\ge 1}{p_{k}}=\max ({A_{j}},{B_{j}})=o\big(\operatorname{Var}{K_{j}^{\ast }}(n)\big),\]
as $n\to \infty $, since under our assumptions ${\lim \nolimits_{n\to \infty }}\operatorname{Var}{K_{j}^{\ast }}(n)=\infty $. Further, write
\[\begin{array}{c}\displaystyle \sum \limits_{k\ge 1}\Bigg|{\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}\bigg)^{2}}-{\bigg({\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\bigg)^{2}}\Bigg|\\ {} \displaystyle =\sum \limits_{k\ge 1}\Bigg|\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}-{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\bigg)\Bigg|\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}+{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\bigg)\\ {} \displaystyle \le 2\sum \limits_{k\ge 1}\Bigg|\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-j}}-{\mathrm{e}^{-{p_{k}}n}}\frac{{({p_{k}}n)^{j}}}{j!}\Bigg|=o\big(\operatorname{Var}{K_{j}^{\ast }}(n)\big),\hspace{1em}n\to \infty .\end{array}\]
The proof of (68) is complete.
Proof of (69). For $k,i,j,n\in \mathbb{N}$,
\[\begin{array}{c}\displaystyle \mathbb{P}\big({A_{i}}(j,n)\cap {A_{k}}(j,n)\big)-\mathbb{P}\big({A_{i}}(j,n)\big)\mathbb{P}\big({A_{k}}(j,n)\big)\\ {} \displaystyle =\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{i}^{j}}{p_{k}^{j}}{(1-{p_{i}}-{p_{k}})^{n-2j}}-\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{i}^{j}}{p_{k}^{j}}{(1-{p_{i}})^{n-j}}{(1-{p_{k}})^{n-j}}\\ {} \displaystyle =:{C_{j}}(i,k,n).\end{array}\]
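The first probability on the right-hand side is multinomial: among the n balls one selects the j that fall into box i and, out of the remaining $n-j$ balls, the j that fall into box k, while the other $n-2j$ balls avoid both boxes. In particular,
\[ \left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)=\frac{n!}{j!\hspace{0.1667em}(n-j)!}\cdot \frac{(n-j)!}{j!\hspace{0.1667em}(n-2j)!}=\frac{n!}{{(j!)^{2}}(n-2j)!}.\]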
We shall use an appropriate decomposition of ${C_{j}}(i,k,n)$:
\[\begin{array}{c}\displaystyle {C_{j}}(i,k,n)=\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{i}^{j}}{p_{k}^{j}}\big({(1-{p_{i}}-{p_{k}})^{n-2j}}-{(1-{p_{i}})^{n-j}}{(1-{p_{k}})^{n-j}}\big)\\ {} \displaystyle \hspace{2em}-\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\bigg(\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)-\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)\bigg){p_{i}^{j}}{p_{k}^{j}}{(1-{p_{i}})^{n-j}}{(1-{p_{k}})^{n-j}}\\ {} \displaystyle =:{C_{j}^{(1)}}(i,k,n)+{C_{j}^{(2)}}(i,k,n).\end{array}\]
To analyze ${C_{j}^{(1)}}$ we argue as in the proof of Lemma 1 on p. 152 in [9]. Invoking an expansion
\[ {(x-y)^{m}}={x^{m}}+O\big(my{x^{m-1}}\big),\hspace{1em}m\in \mathbb{N},\]
which holds for positive x and y, $x\gt y$, with $x=(1-{p_{i}})(1-{p_{k}})$, $y={p_{i}}{p_{k}}$ and $m=n-2j$, we infer
\[\begin{array}{c}\displaystyle {C_{j}^{(1)}}(i,k,n)\\ {} \displaystyle =\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{i}^{j}}{p_{k}^{j}}\big({(1-{p_{i}})^{n-2j}}{(1-{p_{k}})^{n-2j}}\big(1-{(1-{p_{i}})^{j}}{(1-{p_{k}})^{j}}\big)\\ {} \displaystyle +O\big((n-2j){p_{i}}{p_{k}}{(1-{p_{i}})^{n-2j-1}}{(1-{p_{k}})^{n-2j-1}}\big)\big)\\ {} \displaystyle =:{F_{j}}(i,k,n)+{G_{j}}(i,k,n).\end{array}\]
Next, we intend to show that the contributions of ${F_{j}}(i,k,n)$, ${G_{j}}(i,k,n)$ and ${C_{j}^{(2)}}(i,k,n)$ to the sum in (69) are negligible in comparison with $\operatorname{Var}{K_{j}^{\ast }}(n)$ as $n\to \infty $.
Analysis of ${G_{j}}$. With Lemma 13 at hand, we obtain
\[\begin{array}{c}\displaystyle \sum \limits_{i\ne k}\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)(n-2j){p_{i}^{j+1}}{p_{k}^{j+1}}{(1-{p_{i}})^{n-2j-1}}{(1-{p_{k}})^{n-2j-1}}\\ {} \displaystyle \le \sum \limits_{i\ge 1}n\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{i}^{j+1}}{(1-{p_{i}})^{n-2j-1}}\sum \limits_{k\ge 1}\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{k}^{j+1}}{(1-{p_{k}})^{n-2j-1}}\\ {} \displaystyle =O\bigg(\operatorname{Var}{K_{j}^{\ast }}(n)\frac{\operatorname{Var}{K_{j}^{\ast }}(n)}{n}\bigg)=o\big(\operatorname{Var}{K_{j}^{\ast }}(n)\big),\hspace{1em}n\to \infty ,\end{array}\]
having utilized (67) for the last limit relation.
Analysis of ${F_{j}}$. For $m\in \mathbb{N}$ and $x\in [0,1]$, $1-{x^{m}}\le m(1-x)$. Using this with $m=j$ and $x=(1-{p_{i}})(1-{p_{k}})$ we conclude that $1-{(1-{p_{i}})^{j}}{(1-{p_{k}})^{j}}\le j({p_{i}}+{p_{k}}-{p_{i}}{p_{k}})\le j({p_{i}}+{p_{k}})$ and thereupon
\[\begin{array}{c}\displaystyle {F_{j}}(i,k,n)\le j\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{i}^{j}}{p_{k}^{j}}{(1-{p_{i}})^{n-2j}}{(1-{p_{k}})^{n-2j}}({p_{i}}+{p_{k}})\\ {} \displaystyle =:{F_{j}^{(1)}}(i,k,n)+{F_{j}^{(2)}}(i,k,n).\end{array}\]
Here, ${F_{j}^{(1)}}(i,k,n)$ and ${F_{j}^{(2)}}(i,k,n)$ denote the summands containing the factors ${p_{i}}$ and ${p_{k}}$, respectively.
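For completeness, the elementary inequality $1-{x^{m}}\le m(1-x)$ invoked above is immediate from the geometric factorization
\[ 1-{x^{m}}=(1-x)\big(1+x+\cdots +{x^{m-1}}\big)\le m(1-x),\hspace{1em}x\in [0,1],\hspace{0.2778em}m\in \mathbb{N},\]
since each of the m summands in the second factor is at most 1.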
Further, invoking Lemma 13 yields
\[\begin{array}{c}\displaystyle 0\le \sum \limits_{i\ne k}{F_{j}^{(1)}}(i,k,n)\le j\sum \limits_{i\ge 1}\left(\genfrac{}{}{0.0pt}{}{n}{j}\right){p_{i}^{j+1}}{(1-{p_{i}})^{n-2j}}\sum \limits_{k\ge 1}\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right){p_{k}^{j}}{(1-{p_{k}})^{n-2j}}\\ {} \displaystyle =O\bigg(\frac{\operatorname{Var}{K_{j}^{\ast }}(n)}{n}\operatorname{Var}{K_{j}^{\ast }}(n)\bigg)=o\big(\operatorname{Var}{K_{j}^{\ast }}(n)\big),\hspace{1em}n\to \infty .\end{array}\]
Here, the latter asymptotic relation is a consequence of (67). The argument for ${F_{j}^{(2)}}$ is analogous, and we omit the details.
Analysis of ${C_{j}^{(2)}}$. Notice that $\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)-\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)=O({n^{j-1}})$ as $n\to \infty $. Hence, mimicking the argument used for the analysis of ${F_{j}^{(1)}}$ we conclude that
\[\begin{array}{c}\displaystyle \sum \limits_{i\ne k}\big|{C_{j}^{(2)}}(i,k,n)\big|=O\bigg(\sum \limits_{i\ne k}{n^{2j-1}}{p_{i}^{j}}{p_{k}^{j}}{(1-{p_{i}})^{n-j}}{(1-{p_{k}})^{n-j}}\bigg)\\ {} \displaystyle =O\bigg(\sum \limits_{i\ge 1}{n^{j}}{p_{i}^{j}}{(1-{p_{i}})^{n-j}}\sum \limits_{k\ge 1}{n^{j-1}}{p_{k}^{j}}{(1-{p_{k}})^{n-j}}\bigg)=O\bigg(\operatorname{Var}{K_{j}^{\ast }}(n)\frac{\operatorname{Var}{K_{j}^{\ast }}(n)}{n}\bigg)\\ {} \displaystyle =o\big(\operatorname{Var}{K_{j}^{\ast }}(n)\big),\hspace{1em}n\to \infty .\end{array}\]
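The estimate $\left(\genfrac{}{}{0.0pt}{}{n}{j}\right)-\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)=O({n^{j-1}})$ used in the analysis of ${C_{j}^{(2)}}$ holds because, for fixed j, both binomial coefficients are polynomials in n of degree j with the same leading coefficient $1/j!$:
\[ \left(\genfrac{}{}{0.0pt}{}{n}{j}\right)=\frac{{n^{j}}}{j!}+O\big({n^{j-1}}\big),\hspace{1em}\left(\genfrac{}{}{0.0pt}{}{n-j}{j}\right)=\frac{{(n-j)^{j}}}{j!}+O\big({n^{j-1}}\big)=\frac{{n^{j}}}{j!}+O\big({n^{j-1}}\big),\hspace{1em}n\to \infty .\]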
Combining all the fragments together we arrive at (69). □
With Proposition 3 at hand, we are ready to prove the LIL stated in Theorem 4. We argue along the lines of the proof of Theorem 3.7 in [3].
Proof of Theorem 4.
The deterministic and Poissonized schemes discussed in Section 1.1 are not necessarily defined on a common probability space. In other words, we have not assumed so far that the schemes were defined by throwing one and the same collection of balls. Our plan is to deduce LILs for ${\mathcal{K}_{j}^{\ast }}(n)$ from the corresponding LILs for ${K_{j}^{\ast }}(t)$. To this end, we need to couple the two schemes. Let ${X_{1}}$, ${X_{2}},\dots $ be independent random variables with distribution ${({p_{k}})_{k\in \mathbb{N}}}$, which are independent of the Poisson process π, in particular of its arrival sequence ${({S_{n}})_{n\in \mathbb{N}}}$. For all $j,n\in \mathbb{N}$ and $t\ge 0$, we define coupled versions of ${\mathcal{K}_{j}}(n)$, ${\mathcal{K}_{j}^{\ast }}(n)$, ${K_{j}}(t)$ and ${K_{j}^{\ast }}(t)$ as follows, keeping the notation for the variables unchanged:
\[\begin{aligned}{}{\mathcal{K}_{j}}(n)& =\mathrm{\# }\hspace{2.5pt}\text{of distinct values that the variables}\hspace{2.5pt}{X_{1}},{X_{2}},\dots ,{X_{n}}\\ {} & \hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\text{take at least}\hspace{2.5pt}j\hspace{2.5pt}\text{times},\\ {} {\mathcal{K}_{j}^{\ast }}(n)& =\mathrm{\# }\hspace{2.5pt}\text{of distinct values that the variables}\hspace{2.5pt}{X_{1}},{X_{2}},\dots ,{X_{n}}\\ {} & \hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\text{take exactly}\hspace{2.5pt}j\hspace{2.5pt}\text{times},\\ {} {K_{j}}(t)& =\mathrm{\# }\hspace{2.5pt}\text{of distinct values that the variables}\hspace{2.5pt}{X_{1}},{X_{2}},\dots ,{X_{\pi (t)}}\\ {} & \hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\text{take at least}\hspace{2.5pt}j\hspace{2.5pt}\text{times},\\ {} {K_{j}^{\ast }}(t)& =\mathrm{\# }\hspace{2.5pt}\text{of distinct values that the variables}\hspace{2.5pt}{X_{1}},{X_{2}},\dots ,{X_{\pi (t)}}\\ {} & \hspace{1em}\hspace{2.5pt}\hspace{2.5pt}\text{take exactly}\hspace{2.5pt}j\hspace{2.5pt}\text{times}.\end{aligned}\]
To justify the construction, observe that the variable ${X_{i}}$ can be thought of as the index of the box hit by the ith ball. The most important conclusion of the preceding discussion is that, for all $j,n\in \mathbb{N}$, ${\mathcal{K}_{j}^{\ast }}(n)={K_{j}^{\ast }}({S_{n}})$ a.s. (for the coupled variables). We prove the result in several steps.
Step 1. According to Step 2 of the proof of Theorem 3.7 in [3],
\[ \underset{n\to \infty }{\lim }\frac{{K_{j}}({S_{n}})-{K_{j}}(n)}{{(\operatorname{Var}{K_{j}}(n)m(\operatorname{Var}{K_{j}}(n)))^{1/2}}}=0\hspace{1em}\text{a.s.},\]
where $m(t)=\log t$ under (2) and (3) and $m(t)=\log \log t$ under the other assumptions of Theorem 4. By Lemmas 10 and 11, for $j\in \mathbb{N}$, $\operatorname{Var}{K_{j}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}}(t)$ are asymptotically equivalent up to a constant, whence
\[ \underset{n\to \infty }{\lim }\frac{{K_{j}}({S_{n}})-{K_{j}}(n)}{{(\operatorname{Var}{K_{j}^{\ast }}(n)m(\operatorname{Var}{K_{j}^{\ast }}(n)))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
By Lemma 11, for $j\in \mathbb{N}$, $\operatorname{Var}{K_{j+1}^{\ast }}(t)$ and $\operatorname{Var}{K_{j}^{\ast }}(t)$ are asymptotically equivalent up to a constant, unless $\alpha =j=1$. In the latter case, invoking in addition (19) we obtain $\operatorname{Var}{K_{j+1}^{\ast }}(t)=o(\operatorname{Var}{K_{j}^{\ast }}(t))$ as $t\to \infty $. This in combination with the last centered limit relation, in which we replace j with $j+1$, yields
\[ \underset{n\to \infty }{\lim }\frac{{K_{j+1}}({S_{n}})-{K_{j+1}}(n)}{{(\operatorname{Var}{K_{j}^{\ast }}(n)m(\operatorname{Var}{K_{j}^{\ast }}(n)))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Since, for $j\in \mathbb{N}$, ${K_{j}^{\ast }}(t)={K_{j}}(t)-{K_{j+1}}(t)$ a.s., subtracting the last two centered limit relations we arrive at
\[ \underset{n\to \infty }{\lim }\frac{{K_{j}^{\ast }}({S_{n}})-{K_{j}^{\ast }}(n)}{{(\operatorname{Var}{K_{j}^{\ast }}(n)m(\operatorname{Var}{K_{j}^{\ast }}(n)))^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Step 2. The upper (lower) halves of the LILs (4), (7), (11) and (20) ((5), (8), (12) and (21)) read
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{K_{j}^{\ast }}(n)-\mathbb{E}{K_{j}^{\ast }}(n)}{{(\operatorname{Var}{K_{j}^{\ast }}(n)m(\operatorname{Var}{K_{j}^{\ast }}(n)))^{1/2}}}\le C\hspace{2.5pt}(\ge -C)\hspace{1em}\text{a.s.},\]
where the case-dependent constant C is equal to the right-hand side of (4), (7), (11) or (20) ((5), (8), (12) or (21)), respectively. This taken together with the conclusion of Step 1, formula (65) and Proposition 3 enables us to obtain
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{\mathcal{K}_{j}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)}{{(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)m(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)))^{1/2}}}\le C\hspace{2.5pt}(\ge -C)\hspace{1em}\text{a.s.}\]
Here, we have used the decomposition
\[ {\mathcal{K}_{j}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)=\big({K_{j}^{\ast }}({S_{n}})-{K_{j}^{\ast }}(n)\big)+\big({K_{j}^{\ast }}(n)-\mathbb{E}{K_{j}^{\ast }}(n)\big)+\big(\mathbb{E}{K_{j}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)\big)\]
a.s. This finishes the proof of
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{\mathcal{K}_{1}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{1}^{\ast }}(n)}{{(\operatorname{Var}{\mathcal{K}_{1}^{\ast }}(n)\log \log \operatorname{Var}{\mathcal{K}_{1}^{\ast }}(n))^{1/2}}}\le {2^{1/2}}\hspace{2.5pt}\big(\ge -{2^{1/2}}\big)\hspace{1em}\text{a.s.}\]
in the situation that $\alpha =1$ and relation (18) fails to hold. According to Lemma 6, for any $\delta \gt 0$ and the deterministic sequence $({\tau _{n}})$ defined in (29),
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{K_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )-\mathbb{E}{K_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )}{C{(\operatorname{Var}{K_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )m(\operatorname{Var}{K_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )))^{1/2}}}\ge 1-\delta \hspace{2.5pt}\big(\le -(1-\delta )\big)\hspace{1em}\text{a.s.}\]
Combining these inequalities with the conclusion of Step 1, formula (65) and Proposition 3 we arrive at
\[\begin{array}{c}\displaystyle \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{\mathcal{K}_{j}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)}{C{(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)m(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)))^{1/2}}}\\ {} \displaystyle \ge (\le )\underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{\mathcal{K}_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )}{C{(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )m(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(\lfloor {\tau _{n}}\rfloor )))^{1/2}}}\\ {} \displaystyle \ge 1-\delta \hspace{2.5pt}\big(\le -(1-\delta )\big)\hspace{1em}\text{a.s.}\end{array}\]
Sending $\delta \to 0+$ yields
\[ \underset{n\to \infty }{\limsup }\Big(\underset{n\to \infty }{\liminf }\Big)\frac{{\mathcal{K}_{j}^{\ast }}(n)-\mathbb{E}{\mathcal{K}_{j}^{\ast }}(n)}{{(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)m(\operatorname{Var}{\mathcal{K}_{j}^{\ast }}(n)))^{1/2}}}\ge C\hspace{2.5pt}(\le -C)\hspace{1em}\text{a.s.},\]
which finishes the proof. □