1 Introduction
Recently there has been a surge of activity around limit theorems for random Dirichlet series and their zeros. Throughout the paper, by a random Dirichlet series we mean a random series parameterized by $s\gt 0$:
\[ {X_{\alpha }}(s):=\sum \limits_{k\ge 2}\frac{{(\log k)^{\alpha }}}{{k^{1/2+s}}}{\eta _{k}},\]
where $\alpha \in \mathbb{R}$ and ${\eta _{1}}$, ${\eta _{2}},\dots $ are independent copies of a random variable η with zero mean and finite variance, which live on a probability space $(\Omega ,\mathfrak{F},\mathbb{P})$. By Kolmogorov’s three series theorem, the series defining ${X_{\alpha }}(s)$ converges almost surely (a.s.) for each $s\gt 0$. Furthermore, if $\alpha \lt -1/2$, the series ${X_{\alpha }}(0+)$ converges a.s. by the same theorem. Thus, as far as limit theorems for ${X_{\alpha }}(s)$ as $s\to 0+$ are concerned, the case $\alpha \lt -1/2$ is not interesting, hence excluded in the sequel.
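For the reader's convenience, here is the second-moment computation behind both claims (a routine check of ours, not taken from the cited sources): the summands are independent and centered, so Kolmogorov's theorem reduces to
\[ \mathrm{Var}\bigg[\sum \limits_{k\ge 2}\frac{{(\log k)^{\alpha }}}{{k^{1/2+s}}}{\eta _{k}}\bigg]={\sigma ^{2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{2\alpha }}}{{k^{1+2s}}},\]
which is finite for every $s\gt 0$ and all $\alpha \in \mathbb{R}$, while at $s=0$ the series converges if and only if $2\alpha \lt -1$, that is, $\alpha \lt -1/2$.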
The following functional limit theorem (FLT) and law of the iterated logarithm (LIL) were known in the case $\alpha \gt -1/2$. We write ⟹ for weak convergence in a function space, and $C(0,\infty )$ and $C[0,\infty )$ for the spaces of real-valued continuous functions defined on $(0,\infty )$ and $[0,\infty )$, respectively. It is assumed that the spaces $C(0,\infty )$ and $C[0,\infty )$ are endowed with the topology of locally uniform convergence.
Proposition 1.
Assume that $\mathbb{E}[\eta ]=0$, ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$ and let $\alpha \gt -1/2$. Then
\[ {\bigg({s^{1/2+\alpha }}\sum \limits_{k\ge 2}\frac{{(\log k)^{\alpha }}}{{k^{1/2+st}}}{\eta _{k}}\bigg)_{t\gt 0}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\bigg(\sigma {\int _{[0,\infty )}}{y^{\alpha }}{\mathrm{e}^{-ty}}\mathrm{d}B(y)\bigg)_{t\gt 0}},\hspace{1em}s\to 0+,\]
on $C(0,\infty )$, where ${(B(y))_{y\ge 0}}$ is the standard Brownian motion.
Proposition 1 follows from Corollary 2.3 in [5]. In the cited article, the result was derived by a specialization of a FLT for ${X_{\alpha }}(s)$, with a complex-valued η, in the space of analytic functions.
For a family $({x_{t}})$ of real numbers denote by $C(({x_{t}}))$ the set of its limit points.
Proposition 2.
Assume that $\mathbb{E}[\eta ]=0$, ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$ and let $\alpha \gt -1/2$. Then, almost surely,
\[ C\bigg(\bigg({\bigg(\frac{{(2s)^{2\alpha +1}}}{2{\sigma ^{2}}\Gamma (2\alpha +1)\log \log 1/s}\bigg)^{1/2}}{X_{\alpha }}(s):s\in \big(0,{\mathrm{e}^{-\mathrm{e}}}\big)\bigg)\bigg)=[-1,1],\]
where Γ is the Euler gamma function.
Proposition 2 can be found in Theorem 3.1 of [5]. A classical form of the LIL in terms of lim sup and lim inf was earlier obtained in Theorem 1.1 of [2] in the rather particular case $\mathbb{P}\{\eta =\pm 1\}=1/2$ and $\alpha =0$. Nevertheless, we stress that the work [2] gave impetus to both [5] and the present paper.
Our purpose is to formulate and prove counterparts of Propositions 1 and 2 in the boundary case $\alpha =-1/2$.
In the case $\alpha =0$, real zeros of random Dirichlet series have been in the focus of the recent papers [1, 3, 9] (it is assumed in [9] that the distribution of η is symmetric γ-stable for $\gamma \in (0,2]$). In the case $\alpha \gt -1/2$, limit theorems for complex and real zeros of $s\mapsto {X_{\alpha }}(s)$ were proved in [5]. Although we do not directly investigate zeros of $s\mapsto {X_{-1/2}}(s)$ in the present paper, our LIL stated in Theorem 2 below entails that, a.s., $s\mapsto {X_{-1/2}}(s)$ has infinitely many real zeros in any right neighborhood of 0.
2 Main results
We start by stating a FLT for ${X_{-1/2}}(s)$, properly scaled, as $s\to 0+$. If $\alpha \gt -1/2$, the variance of ${X_{\alpha }}(s)$ exhibits polynomial growth, whereas the growth of $\mathrm{Var}\hspace{0.1667em}[{X_{-1/2}}(s)]$ is logarithmic. This partly explains why the scaling of time in Proposition 1 is $st$, that is, standard, whereas the scaling of time in Theorem 1 is ${s^{t}}$.
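Both growth rates come from one integral comparison; the following back-of-the-envelope computation is ours (error terms suppressed). For $\alpha \gt -1/2$, the substitution $y=2s\log x$ gives
\[ \mathrm{Var}\hspace{0.1667em}[{X_{\alpha }}(s)]={\sigma ^{2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{2\alpha }}}{{k^{1+2s}}}\sim {\sigma ^{2}}{\int _{1}^{\infty }}\frac{{(\log x)^{2\alpha }}}{{x^{1+2s}}}\mathrm{d}x=\frac{{\sigma ^{2}}\Gamma (2\alpha +1)}{{(2s)^{2\alpha +1}}},\hspace{1em}s\to 0+,\]
whereas for $\alpha =-1/2$ the same substitution produces ${\textstyle\int _{cs}^{\infty }}{y^{-1}}{\mathrm{e}^{-y}}\mathrm{d}y\sim \log 1/s$.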
Theorem 1.
Assume that $\mathbb{E}[\eta ]=0$ and ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$. Then
\[ {\bigg(\frac{1}{{(\log 1/s)^{1/2}}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{s^{t}}}}}{\eta _{k}}\bigg)_{t\ge 0}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big(\sigma B(t)\big)_{t\ge 0}},\hspace{1em}s\to 0+,\]
on $C[0,\infty )$, where ${(B(t))_{t\ge 0}}$ is the standard Brownian motion.
Observe that the limit process in Theorem 1 is nowhere differentiable a.s., whereas the limit process in Proposition 1 is infinitely differentiable a.s. This distinction is a manifestation of the intricacy of the boundary case $\alpha =-1/2$.
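The contrast can also be seen numerically. The following minimal Python sketch (ours; the truncation level K and the Rademacher choice of η are illustrative assumptions, and the truncation distorts the picture for very small ${s^{t}}$) evaluates the scaled series of Theorem 1 on a grid of t:

```python
import numpy as np

rng = np.random.default_rng(1)

s = 1e-3          # the theorem concerns s -> 0+
K = 10**6         # truncation level; illustration only, the series converges slowly
k = np.arange(2, K + 1, dtype=float)
eta = rng.choice([-1.0, 1.0], size=k.size)   # Rademacher eta_k, so sigma^2 = 1
w = eta / np.sqrt(np.log(k))                 # (log k)^{-1/2} * eta_k

# scaled process of Theorem 1: t -> (log 1/s)^{-1/2} sum_k w_k / k^{1/2 + s^t}
for t in np.linspace(0.0, 2.0, 9):
    val = (w / k ** (0.5 + s ** t)).sum() / np.sqrt(np.log(1.0 / s))
    print(f"t = {t:4.2f}   scaled sum = {val:+.3f}")
```

For small s the resulting path is visually rough, in line with the Brownian limit.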
We proceed with a LIL for ${X_{-1/2}}(s)$ as $s\to 0+$. The FLT given in Theorem 1 was used for guessing the LIL’s form, namely, the factor $\log \log \log 1/s$ in the normalization.
Theorem 2.
Assume that $\mathbb{E}[\eta ]=0$ and ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$. Then
(1)
\[\begin{aligned}{}& C\bigg(\bigg({\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}:s\in \big(0,{\mathrm{e}^{-\mathrm{e}}}\big)\bigg)\bigg)\\ {} & \hspace{1em}=[-1,1]\hspace{1em}\textit{a.s.}\end{aligned}\]
In particular,
(2)
\[ \underset{s\to 0+}{\limsup }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}=1\hspace{1em}\textit{a.s.}\]
and
(3)
\[ \underset{s\to 0+}{\liminf }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}=-1\hspace{1em}\textit{a.s.}\]
Since $\mathrm{Var}\hspace{0.1667em}[{X_{-1/2}}(s)]\sim {\sigma ^{2}}\log 1/s$ as $s\to 0+$ (see the beginning of Section 3), we infer
\[ \log \log \mathrm{Var}\hspace{0.1667em}\big[{X_{-1/2}}(s)\big]\sim \log \log \log 1/s,\hspace{1em}s\to 0+,\]
where, as usual, $f(s)\sim g(s)$ as $s\to 0+$ means that ${\lim \nolimits_{s\to 0+}}(f(s)/g(s))=1$. Thus, formulas (2) and (3) are indeed laws of the iterated logarithm.

One of the referees has kindly informed us that the results of Theorems 1 and 2 should hold with ${\textstyle\sum _{p}}\frac{{\eta _{p}}}{{p^{1/2+s}}}$ replacing ${\textstyle\sum _{k\ge 2}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}$, where ${\textstyle\sum _{p}}$ denotes summation over the prime numbers. Based on this comment we formulate the following conjectures.
Conjecture 1.
Assume that $\mathbb{E}[\eta ]=0$ and ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$. Then
\[ {\bigg(\frac{1}{{(\log 1/s)^{1/2}}}\sum \limits_{p}\frac{{\eta _{p}}}{{p^{1/2+{s^{t}}}}}\bigg)_{t\ge 0}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big(\sigma B(t)\big)_{t\ge 0}},\hspace{1em}s\to 0+,\]
on $C[0,\infty )$, where ${(B(t))_{t\ge 0}}$ is the standard Brownian motion.
Conjecture 2.
Assume that $\mathbb{E}[\eta ]=0$ and ${\sigma ^{2}}=\mathbb{E}[{\eta ^{2}}]\in (0,\infty )$. Then
\[\begin{aligned}{}& C\bigg(\bigg({\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{p}\frac{{\eta _{p}}}{{p^{1/2+s}}}:s\in \big(0,{\mathrm{e}^{-\mathrm{e}}}\big)\bigg)\bigg)\\ {} & \hspace{1em}=[-1,1]\hspace{1em}\textit{a.s.}\end{aligned}\]
In particular,
\[ \underset{s\to 0+}{\limsup }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{p}\frac{{\eta _{p}}}{{p^{1/2+s}}}=1\hspace{1em}\textit{a.s.}\]
and
\[ \underset{s\to 0+}{\liminf }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}\log \log \log 1/s}\bigg)^{1/2}}\sum \limits_{p}\frac{{\eta _{p}}}{{p^{1/2+s}}}=-1\hspace{1em}\textit{a.s.}\]
At the moment we do not see a way to effectively estimate the difference ${\textstyle\sum _{p}}\frac{{\eta _{p}}}{{p^{1/2+s}}}-{\textstyle\sum _{k\ge 2}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}$. Thus, Conjectures 1 and 2 do not seem to follow from Theorems 1 and 2. On the other hand, we think that both conjectures can be justified by a proper modification of the proofs of Theorems 1 and 2. Some of the required modifications are nonobvious and require substantial technical work. As a consequence, proofs of these conjectures will be given elsewhere. Once Conjecture 2 becomes a theorem, it will provide a significant improvement over Theorem 1.3 in [6].
The remainder of the paper is structured as follows. Theorems 2 and 1 are proved in Sections 3 and 4, respectively. The reversed order of the proofs is necessitated by the fact that our proof of Theorem 1 uses some arguments and calculations from the proof of Theorem 2. At first glance it looks plausible that an economical proof of the LIL may be based on a strong approximation by a Brownian motion of the standard random walk generated by η, that is, the random sequence ${({T_{n}})_{n\ge 0}}$ defined by ${T_{0}}:=0$, ${T_{n}}:={\eta _{1}}+\cdots +{\eta _{n}}$ for $n\in \mathbb{N}$. In Section 5 we explain why this naive idea fails.
3 Proof of Theorem 2
Put
\[ g(s):=\mathbb{E}\bigg[{\bigg(\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\bigg)^{2}}\bigg]={\sigma ^{2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1}}}{{k^{1+2s}}},\hspace{1em}s\gt 0.\]
We start by showing that $g(s)\sim {\sigma ^{2}}\log (1/s)$ as $s\to 0+$. By monotonicity,
\[ {\int _{2}^{\infty }}\frac{{(\log x)^{-1}}}{{x^{1+2s}}}\mathrm{d}x\le \sum \limits_{k\ge 2}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\le \frac{{(\log 2)^{-1}}}{{2^{1+2s}}}+{\int _{2}^{\infty }}\frac{{(\log x)^{-1}}}{{x^{1+2s}}}\mathrm{d}x.\]
Plainly, ${\lim \nolimits_{s\to 0+}}\frac{{(\log 2)^{-1}}}{{2^{1+2s}}\log 1/s}=0$. Changing the variable $y=2s\log x$ we infer
\[ {\int _{2}^{\infty }}\frac{{(\log x)^{-1}}}{{x^{1+2s}}}\mathrm{d}x={\int _{(2\log 2)s}^{\infty }}{y^{-1}}{\mathrm{e}^{-y}}\mathrm{d}y\sim \log 1/s,\hspace{1em}s\to 0+.\]
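This asymptotic is easy to sanity-check numerically through the exponential integral ${E_{1}}(x)={\textstyle\int _{x}^{\infty }}{y^{-1}}{\mathrm{e}^{-y}}\mathrm{d}y$; the snippet below is ours and is not part of the proof (scipy.special.exp1 computes ${E_{1}}$):

```python
import numpy as np
from scipy.special import exp1   # exp1(x) = int_x^infty y^{-1} e^{-y} dy

# after the change of variable, the integral above equals E_1((2 log 2) s),
# so g(s)/sigma^2 differs from it by O(1)
for s in [1e-2, 1e-4, 1e-8]:
    print(s, exp1(2 * np.log(2) * s) / np.log(1 / s))   # ratios approach 1
```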
It is convenient to split the presentation into two pieces. We proceed by proving one half of Theorem 2. In what follows we write ${\log ^{(3)}}$ for $\log \log \log $.
Proposition 3.
Under the assumptions of Theorem 2,
(4)
\[ \underset{s\to 0+}{\limsup }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\le 1\hspace{1em}\textit{a.s.}\]
and
(5)
\[ \underset{s\to 0+}{\liminf }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\ge -1\hspace{1em}\textit{a.s.}\]
Replacing ${\eta _{k}}$ with ${\eta _{k}}/\sigma $ we can work under the assumption that ${\sigma ^{2}}=1$. For $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$, put
\[ f(s):={\big(2\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s\big)^{-1/2}}.\]
Let $M:(0,\infty )\to {\mathbb{N}_{0}}$ denote a function satisfying ${\lim \nolimits_{s\to 0+}}M(s)=+\infty $ and
(6)
\[ M(s)=O\bigg(\frac{\log 1/s}{\log \log 1/s}\bigg),\hspace{1em}s\to 0+.\]
Here, as usual, ${\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$.
In Lemma 1, we remove from the series in focus an initial fragment with a vanishing contribution. In all the lemmas given below we assume without further notice that the assumptions of Theorem 2 are in force.
Lemma 1.
\[ \underset{s\to 0+}{\lim }f(s){\sum \limits_{k=2}^{M(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}=0\hspace{1em}\textit{a.s.}\]
Proof.
Recall that ${T_{0}}=0$ and ${T_{n}}={\eta _{1}}+\cdots +{\eta _{n}}$ for $n\in \mathbb{N}$. According to the LIL for standard random walks,
\[ \underset{n\to \infty }{\limsup }\frac{|{T_{n}}|}{{(2n\log \log n)^{1/2}}}=1\hspace{1em}\text{a.s.}\]
Observe that
(7)
\[ |{T_{n}}|\le \underset{k\le n}{\max }|{T_{k}}|=O\big({(n\log \log n)^{1/2}}\big),\hspace{1em}n\to \infty \hspace{1em}\text{a.s.}\]
Further, for $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$, write
\[ {\sum \limits_{k=2}^{M(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}={\int _{(3/2,\hspace{0.1667em}M(s)]}}\frac{\mathrm{d}{T_{\lfloor x\rfloor }}}{{(\log x)^{1/2}}{x^{1/2+s}}}.\]
Integrating by parts we obtain
\[\begin{aligned}{}& {\int _{(3/2,\hspace{0.1667em}M(s)]}}\frac{{(\log x)^{-1/2}}}{{x^{1/2+s}}}\mathrm{d}{T_{\lfloor x\rfloor }}\\ {} & \hspace{1em}=\frac{{(\log M(s))^{-1/2}}{T_{M(s)}}}{{(M(s))^{1/2+s}}}-\frac{{(\log 3/2)^{-1/2}}{\eta _{1}}}{{(3/2)^{1/2+s}}}\\ {} & \hspace{2em}+{\int _{3/2}^{M(s)}}\frac{({(\log x)^{-3/2}}/2+(1/2+s){(\log x)^{-1/2}}){T_{\lfloor x\rfloor }}}{{x^{3/2+s}}}\mathrm{d}x.\end{aligned}\]
Relation (6) entails ${\lim \nolimits_{s\to 0+}}{(M(s))^{s}}=1$. This in combination with (7) enables us to conclude that, as $s\to 0+$,
\[\begin{aligned}{}& \frac{{(\log M(s))^{-1/2}}|{T_{M(s)}}|}{{(M(s))^{1/2+s}}}\sim \frac{{(\log M(s))^{-1/2}}|{T_{M(s)}}|}{{(M(s))^{1/2}}}\\ {} & \hspace{1em}=O\big({\big(\log M(s)\big)^{-1/2}}{\big(\log \log M(s)\big)^{1/2}}\big)=o(1).\end{aligned}\]
Since ${\lim \nolimits_{s\to 0+}}f(s)=0$, the latter ensures that
\[ \underset{s\to 0+}{\lim }f(s)\frac{{(\log M(s))^{-1/2}}|{T_{M(s)}}|}{{(M(s))^{1/2+s}}}=0\hspace{1em}\text{a.s.}\]
To complete the proof, it is sufficient to show that
\[ \underset{s\to 0+}{\lim }f(s){\int _{3/2}^{M(s)}}\frac{{(\log x)^{-1/2}}|{T_{\lfloor x\rfloor }}|}{{x^{3/2+s}}}\mathrm{d}x=0\hspace{1em}\text{a.s.}\]
To this end, write
\[\begin{aligned}{}{\int _{3/2}^{M(s)}}\frac{{(\log x)^{-1/2}}|{T_{\lfloor x\rfloor }}|}{{x^{3/2+s}}}\mathrm{d}x& \le \Big(\underset{k\le M(s)}{\max }\hspace{0.1667em}|{T_{k}}|\Big){\int _{3/2}^{M(s)}}\frac{{(\log x)^{-1/2}}}{{x^{3/2+s}}}\mathrm{d}x\\ {} & =O\big({\big(M(s)\log \log M(s)\big)^{1/2}}\big)O(1)\\ {} & =O\big({\big(M(s)\log \log M(s)\big)^{1/2}}\big),\hspace{1em}s\to 0+\hspace{1em}\text{a.s.},\end{aligned}\]
having utilized (7) for the penultimate equality. Since (6) entails
\[ \underset{s\to 0+}{\lim }f(s){\big(M(s)\log \log M(s)\big)^{1/2}}=0,\]
the claim follows. □

For $k\in \mathbb{N}$, $\rho \gt 0$ and $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$, define the event
\[ {A_{k,\rho }}(s):=\bigg\{|{\eta _{k}}|\gt \frac{\rho {k^{1/2+s}}}{\log \log 1/s}{\bigg(\frac{\log k\hspace{0.1667em}g(s)}{{\log ^{(3)}}1/s}\bigg)^{1/2}}\bigg\}.\]
Next, we demonstrate that the second (remaining) part of the series also vanishes if the variables ${\eta _{k}}$ are properly truncated.
Lemma 2.
For each $\rho \gt 0$,
(8)
\[ \sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}|{\eta _{k}}|{1_{{A_{k,\rho }}(s)}}=0\hspace{1em}\textit{a.s. for all small enough }s\gt 0,\]
and
(9)
\[ \underset{s\to 0+}{\lim }\sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}\mathbb{E}\big[|{\eta _{k}}|{1_{{A_{k,\rho }}(s)}}\big]=0.\]
Proof.
Put $h(s):=(\log \log 1/s){({\log ^{(3)}}1/s)^{1/2}}$. Observe that, for $k\ge 3$ and $s\ge 0$, ${k^{2s}}\log k\ge 1$. Hence, for $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$,
\[\begin{aligned}{}& \sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}|{\eta _{k}}|{1_{{A_{k,\rho }}(s)}}\\ {} & \hspace{1em}\le \sum \limits_{k\ge M(s)+1}\frac{|{\eta _{k}}|}{{k^{1/2}}}{1_{\{{k^{-1/2}}|{\eta _{k}}|\gt \rho {(g(s))^{1/2}}{(h(s))^{-1}}\}}}\hspace{1em}\text{a.s.}\end{aligned}\]
The assumption $\mathbb{E}[{\eta ^{2}}]\lt \infty $ entails ${\lim \nolimits_{k\to \infty }}{k^{-1/2}}|{\eta _{k}}|=0$ a.s. and thereupon ${\sup _{k\ge 1}}({k^{-1/2}}|{\eta _{k}}|)\lt \infty $ a.s. Since ${\lim \nolimits_{s\to 0+}}({(g(s))^{1/2}}{(h(s))^{-1}})=+\infty $, we infer
\[ {1_{\{{k^{-1/2}}|{\eta _{k}}|\gt \rho {(g(s))^{1/2}}{(h(s))^{-1}}\}}}\le {1_{\{{\sup _{k\ge 1}}\hspace{0.1667em}({k^{-1/2}}|{\eta _{k}}|)\gt \rho {(g(s))^{1/2}}{(h(s))^{-1}}\}}}=0\]
a.s. for small s. We have proved that the sum in (8) is equal to 0 a.s. for small enough s.

Relation (9) is justified as follows:
\[\begin{aligned}{}& \sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}\mathbb{E}[|{\eta _{k}}|{1_{{A_{k,\rho }}(s)}}]\\ {} & \hspace{1em}\le \sum \limits_{k\ge M(s)+1}{k^{-1/2}}\mathbb{E}[|\eta |{1_{\{{\rho ^{-1}}{(g(s))^{-1/2}}h(s)|\eta |\gt {k^{1/2}}\}}}]\\ {} & \hspace{1em}\le \mathbb{E}\Bigg[|\eta |{\sum \limits_{k=1}^{\lfloor {\rho ^{-2}}{(g(s))^{-1}}{(h(s))^{2}}{\eta ^{2}}\rfloor }}{k^{-1/2}}\Bigg]\le 2{\rho ^{-1}}\mathbb{E}\big[{\eta ^{2}}\big]{\big(g(s)\big)^{-1/2}}h(s)\\ {} & \hspace{1em}=2{\rho ^{-1}}{\big(g(s)\big)^{-1/2}}h(s)\to 0,\hspace{1em}s\to 0+.\end{aligned}\]
The proof of Lemma 2 is complete. □

In what follows, ${({A_{k,\rho }}(s))^{c}}$ denotes the complement of ${A_{k,\rho }}(s)$, that is, for $k\in \mathbb{N}$, $\rho \gt 0$ and $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$,
\[ {({A_{k,\rho }}(s))^{c}}=\bigg\{|{\eta _{k}}|\le \frac{\rho {k^{1/2+s}}}{\log \log 1/s}{\bigg(\frac{\log k\hspace{0.1667em}g(s)}{{\log ^{(3)}}1/s}\bigg)^{1/2}}\bigg\}.\]
Our next result is related to the fragment of the series giving the principal contribution. However, this is a lighter version of what we really need, for the convergence here is only along a sequence.
Lemma 3.
Fix any $\gamma \in (0,(\sqrt{5}-1)/2)$, pick any $\rho =\rho (\gamma )$ satisfying
(10)
\[ (1-\gamma ){(1+\gamma )^{2}}\big(2-\exp \big(2\sqrt{2}(1+\gamma )\rho \big)\big)\gt 1\]
and put ${s_{n}}:=\exp (-\exp ({n^{1-\gamma }}))$ for $n\in \mathbb{N}$. Then
\[ \underset{n\to \infty }{\limsup }f({s_{n}})\sum \limits_{k\ge M({s_{n}})+1}\frac{{(\log k)^{-1/2}}{\widetilde{\eta }_{k,\rho }}({s_{n}})}{{k^{1/2+{s_{n}}}}}\le 1+\gamma \hspace{1em}\textit{a.s.},\]
where ${\widetilde{\eta }_{k,\rho }}(s):={\eta _{k}}{1_{{({A_{k,\rho }}(s))^{c}}}}-\mathbb{E}[{\eta _{k}}{1_{{({A_{k,\rho }}(s))^{c}}}}]$ for $k\in \mathbb{N}$ and $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$.
Proof.
Since $(1-\gamma ){(1+\gamma )^{2}}\gt 1$ whenever $\gamma \in (0,(\sqrt{5}-1)/2)$, ρ satisfying (10) does indeed exist.
Put ${f^{\ast }}(s):={(2g(s)\hspace{0.1667em}{\log ^{(3)}}1/s)^{-1/2}}$ for $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$. Since ${f^{\ast }}(s)\sim f(s)$ as $s\to 0+$, we can and do prove the result, with ${f^{\ast }}$ replacing f. Put
\[ X(s)={f^{\ast }}(s)\sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}{\widetilde{\eta }_{k,\rho }}(s)}{{k^{1/2+s}}},\hspace{1em}s\in \big(0,{\mathrm{e}^{-\mathrm{e}}}\big).\]
Using ${\mathrm{e}^{x}}\le 1+x+({x^{2}}/2){\mathrm{e}^{|x|}}$ for $x\in \mathbb{R}$ and $\mathbb{E}[{\widetilde{\eta }_{k,\rho }}(s)]=0$ we deduce, for $u\in \mathbb{R}$,
\[\begin{aligned}{}\mathbb{E}\big[{\mathrm{e}^{uX(s)}}\big]& =\prod \limits_{k\ge M(s)+1}\mathbb{E}\bigg[\exp \bigg(u{f^{\ast }}(s)\frac{{(\log k)^{-1/2}}{\widetilde{\eta }_{k,\rho }}(s)}{{k^{1/2+s}}}\bigg)\bigg]\\ {} & \le \prod \limits_{k\ge M(s)+1}\bigg(1+\frac{{u^{2}}{({f^{\ast }}(s))^{2}}}{2}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\\ {} & \hspace{1em}\times \mathbb{E}\bigg[{\big({\widetilde{\eta }_{k,\rho }}(s)\big)^{2}}\exp \bigg(|u|{f^{\ast }}(s)\frac{{(\log k)^{-1/2}}|{\widetilde{\eta }_{k,\rho }}(s)|}{{k^{1/2+s}}}\bigg)\bigg]\bigg).\end{aligned}\]
The inequality
(11)
\[\begin{aligned}{}|{\widetilde{\eta }_{k,\rho }}(s)|& \le |{\eta _{k}}|{1_{{({A_{k,\rho }}(s))^{c}}}}+\mathbb{E}[|{\eta _{k}}|{1_{{({A_{k,\rho }}(s))^{c}}}}]\\ {} & \le \frac{2\rho {k^{1/2+s}}}{\log \log 1/s}{\bigg(\frac{\log k\hspace{0.1667em}g(s)}{{\log ^{(3)}}1/s}\bigg)^{1/2}}\le 2\rho {k^{1/2+s}}{\bigg(\frac{\log k\hspace{0.1667em}g(s)}{{\log ^{(3)}}1/s}\bigg)^{1/2}}\hspace{1em}\text{a.s.,}\end{aligned}\]
which is valid for integer $k\ge 2$ and $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$, implies that
\[ \exp \bigg(|u|{f^{\ast }}(s)\frac{{(\log k)^{-1/2}}|{\widetilde{\eta }_{k,\rho }}(s)|}{{k^{1/2+s}}}\bigg)\le \exp \bigg(\frac{\sqrt{2}\rho |u|}{{\log ^{(3)}}1/s}\bigg)\hspace{1em}\text{a.s.}\]
Together with the inequalities $\mathbb{E}[{({\widetilde{\eta }_{k,\rho }}(s))^{2}}]\le 1$ and $1+x\le {\mathrm{e}^{x}}$ for $x\in \mathbb{R}$ this gives, for $u\in \mathbb{R}$,
(12)
\[\begin{aligned}{}\mathbb{E}\big[{\mathrm{e}^{uX(s)}}\big]& \le \prod \limits_{k\ge M(s)+1}\exp \bigg(\frac{{u^{2}}{({f^{\ast }}(s))^{2}}}{2}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\exp \bigg(\frac{\sqrt{2}\rho |u|}{{\log ^{(3)}}1/s}\bigg)\bigg)\\ {} & \le \exp \bigg(\frac{{u^{2}}}{4{\log ^{(3)}}1/s}\exp \bigg(\frac{\sqrt{2}\rho |u|}{{\log ^{(3)}}1/s}\bigg)\bigg).\end{aligned}\]
An application of Markov’s inequality yields, for $u\ge 0$,
\[\begin{aligned}{}\mathbb{P}\big\{X({s_{n}})\gt 1+\gamma \big\}& \le {\mathrm{e}^{-(1+\gamma )u}}\mathbb{E}\big[{\mathrm{e}^{uX({s_{n}})}}\big]\\ {} & \le \exp \bigg(-(1+\gamma )u+\frac{{u^{2}}}{4{\log ^{(3)}}1/{s_{n}}}\exp \bigg(\frac{\sqrt{2}\rho u}{{\log ^{(3)}}1/{s_{n}}}\bigg)\bigg).\end{aligned}\]
Putting $u=2(1+\gamma ){\log ^{(3)}}1/{s_{n}}$ we conclude that
\[\begin{aligned}{}\mathbb{P}\big\{X({s_{n}})\gt 1+\gamma \big\}& \le \exp \big(-{(1+\gamma )^{2}}\big(2-\exp \big(2\sqrt{2}(1+\gamma )\rho \big)\big){\log ^{(3)}}1/{s_{n}}\big)\\ {} & =\frac{1}{{n^{(1-\gamma ){(1+\gamma )^{2}}(2-\exp (2\sqrt{2}(1+\gamma )\rho ))}}}.\end{aligned}\]
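The exponent arithmetic behind the last two displays, spelled out (our computation): with $L:={\log ^{(3)}}1/{s_{n}}$ and $u=2(1+\gamma )L$,
\[ -(1+\gamma )u+\frac{{u^{2}}}{4L}\exp \bigg(\frac{\sqrt{2}\rho u}{L}\bigg)=-{(1+\gamma )^{2}}L\big(2-\exp \big(2\sqrt{2}(1+\gamma )\rho \big)\big),\]
and $L={\log ^{(3)}}1/{s_{n}}=(1-\gamma )\log n$, which turns the bound into the displayed power of n.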
Thus, in view of (10), ${\textstyle\sum _{n\ge 1}}\mathbb{P}\{X({s_{n}})\gt 1+\gamma \}\lt \infty $, and invoking the direct part of the Borel–Cantelli lemma completes the proof of Lemma 3. □

Now we present our final, and the most involved, preparatory result. It shows that the convergence along a sequence discussed in Lemma 3 can be lifted to the full convergence along the real numbers.
Lemma 4.
Let ${({s_{n}})_{n\in \mathbb{N}}}$ be as defined in Lemma 3, where $\gamma \in (0,1/2)$, and $M(s)=\lfloor \log 1/s/\log \log 1/s\rfloor $ for $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$. For $s\in [{s_{n+1}},{s_{n}}]$, the following limit relation holds:
\[ \underset{n\to \infty }{\lim }f({s_{n}})\bigg(\sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}{\eta _{k}}}{{k^{1/2+s}}}-\sum \limits_{k\ge M({s_{n+1}})+1}\frac{{(\log k)^{-1/2}}{\eta _{k}}}{{k^{1/2+{s_{n+1}}}}}\bigg)=0\hspace{1em}\textit{a.s.}\]
Proof.
Using the fact that M is a nonincreasing function for the arguments close to 0, write, for $s\in [{s_{n+1}},{s_{n}}]$,
\[\begin{aligned}{}& \sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}{\eta _{k}}}{{k^{1/2+s}}}-\sum \limits_{k\ge M({s_{n+1}})+1}\frac{{(\log k)^{-1/2}}{\eta _{k}}}{{k^{1/2+{s_{n+1}}}}}\\ {} & \hspace{1em}={\sum \limits_{k=M(s)+1}^{M({s_{n+1}})}}\frac{{(\log k)^{-1/2}}{\eta _{k}}}{{k^{1/2+s}}}+\sum \limits_{k\ge M({s_{n+1}})+1}{(\log k)^{-1/2}}\bigg(\frac{1}{{k^{1/2+s}}}-\frac{1}{{k^{1/2+{s_{n+1}}}}}\bigg){\eta _{k}}\\ {} & \hspace{1em}=:{I_{n,1}}(s)+{I_{n,2}}(s).\end{aligned}\]
Analysis of ${I_{n,1}}(s)$. Actually, we shall prove that
\[ \underset{n\to \infty }{\lim }\underset{s\in [{s_{n+1}},\hspace{0.1667em}{s_{n}}]}{\sup }|{I_{n,1}}(s)|=0\hspace{1em}\text{a.s.}\]
This relation is even more than we need because ${\lim \nolimits_{s\to 0+}}f(s)=0$. We obtain with the help of summation by parts
\[\begin{aligned}{}{I_{n,1}}(s)& =\frac{{(\log M({s_{n+1}}))^{-1/2}}{T_{M({s_{n+1}})}}}{{(M({s_{n+1}}))^{1/2+s}}}-\frac{{(\log (M(s)+1))^{-1/2}}{T_{M(s)}}}{{(M(s)+1)^{1/2+s}}}\\ {} & \hspace{1em}+{\sum \limits_{k=M(s)+1}^{M({s_{n+1}})-1}}\bigg(\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}-\frac{{(\log (k+1))^{-1/2}}}{{(k+1)^{1/2+s}}}\bigg){T_{k}},\end{aligned}\]
where, as before, ${T_{n}}={\eta _{1}}+\cdots +{\eta _{n}}$ for $n\in \mathbb{N}$. Invoking formula (7) and ${\lim \nolimits_{n\to \infty }}{(M({s_{n+1}}))^{{s_{n+1}}}}=1$ we obtain
\[\begin{aligned}{}& \frac{{(\log M({s_{n+1}}))^{-1/2}}|{T_{M({s_{n+1}})}}|}{{(M({s_{n+1}}))^{1/2+s}}}\\ {} & \hspace{1em}\le \frac{{(\log M({s_{n+1}}))^{-1/2}}|{T_{M({s_{n+1}})}}|}{{(M({s_{n+1}}))^{1/2+{s_{n+1}}}}}\sim \frac{{(\log M({s_{n+1}}))^{-1/2}}|{T_{M({s_{n+1}})}}|}{{(M({s_{n+1}}))^{1/2}}}\\ {} & \hspace{1em}=O\big({\big(\log M({s_{n+1}})\big)^{-1/2}}{\big(\log \log M({s_{n+1}})\big)^{1/2}}\big)\to 0,\hspace{1em}n\to \infty \hspace{1em}\text{a.s.}\end{aligned}\]
By a similar argument we conclude that
\[ \underset{n\to \infty }{\lim }\underset{s\in [{s_{n+1}},\hspace{0.1667em}{s_{n}}]}{\sup }\frac{{(\log (M(s)+1))^{-1/2}}|{T_{M(s)}}|}{{(M(s)+1)^{1/2+s}}}=0\hspace{1em}\text{a.s.}\]
Further, for $s\in [{s_{n+1}},{s_{n}}]$ and large n,
\[\begin{aligned}{}& \Bigg|{\sum \limits_{k=M(s)+1}^{M({s_{n+1}})-1}}\bigg(\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}-\frac{{(\log (k+1))^{-1/2}}}{{(k+1)^{1/2+s}}}\bigg){T_{k}}\Bigg|\\ {} & \hspace{1em}\le {\sum \limits_{k=M(s)+1}^{M({s_{n+1}})-1}}\bigg(\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}-\frac{{(\log (k+1))^{-1/2}}}{{(k+1)^{1/2+s}}}\bigg)|{T_{k}}|\\ {} & \hspace{1em}\le \Big(\underset{j\le M({s_{n+1}})}{\sup }|{T_{j}}|\Big){\sum \limits_{k=M(s)+1}^{M({s_{n+1}})-1}}\bigg(\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}-\frac{{(\log (k+1))^{-1/2}}}{{(k+1)^{1/2+s}}}\bigg)\\ {} & \hspace{1em}\le \Big(\underset{j\le M({s_{n+1}})}{\sup }|{T_{j}}|\Big)\frac{{(\log M(s))^{-1/2}}}{{(M(s))^{1/2+s}}}\le \Big(\underset{j\le M({s_{n+1}})}{\sup }|{T_{j}}|\Big)\frac{{(\log M({s_{n}}))^{-1/2}}}{{(M({s_{n}}))^{1/2+{s_{n+1}}}}}\\ {} & \hspace{1em}=O\big({\big(\log M({s_{n+1}})\big)^{-1/2}}{\big(\log \log M({s_{n+1}})\big)^{1/2}}\big)\to 0,\hspace{1em}n\to \infty \hspace{1em}\text{a.s.}\end{aligned}\]
We have used (7), ${\lim \nolimits_{n\to \infty }}(M({s_{n+1}})/M({s_{n}}))=1$ and ${\lim \nolimits_{n\to \infty }}{(M({s_{n}}))^{{s_{n+1}}}}=1$ for the equality.
Analysis of ${I_{n,2}}(s)$. One can show that
(13)
\[\begin{aligned}{}& \underset{n\to \infty }{\lim }\underset{s\ge {s_{n+1}}}{\sup }\sum \limits_{k\ge M({s_{n+1}})+1}{(\log k)^{-1/2}}\bigg(\frac{1}{{k^{1/2+{s_{n+1}}}}}-\frac{1}{{k^{1/2+s}}}\bigg)|{\eta _{k}}|{1_{\{|{\eta _{k}}|\gt {k^{1/2}}\log n\}}}\\ {} & \hspace{1em}=0\hspace{1em}\text{a.s.}\end{aligned}\]
and
(14)
\[\begin{aligned}{}& \underset{n\to \infty }{\lim }\underset{s\ge {s_{n+1}}}{\sup }\sum \limits_{k\ge M({s_{n+1}})+1}{(\log k)^{-1/2}}\bigg(\frac{1}{{k^{1/2+{s_{n+1}}}}}-\frac{1}{{k^{1/2+s}}}\bigg)\\ {} & \hspace{1em}\times \mathbb{E}\big[|{\eta _{k}}|{1_{\{|{\eta _{k}}|\gt {k^{1/2}}\log n\}}}\big]=0.\end{aligned}\]
For instance, relation (13) follows from ${\sup _{k\ge 1}}({k^{-1/2}}|{\eta _{k}}|)\lt \infty $ a.s. and
\[\begin{aligned}{}& \sum \limits_{k\ge M({s_{n+1}})+1}{(\log k)^{-1/2}}\bigg(\frac{1}{{k^{1/2+{s_{n+1}}}}}-\frac{1}{{k^{1/2+s}}}\bigg)|{\eta _{k}}|{1_{\{|{\eta _{k}}|\gt {k^{1/2}}\log n\}}}\\ {} & \hspace{1em}\le \sum \limits_{k\ge M({s_{n+1}})+1}\frac{|{\eta _{k}}|}{{k^{1/2}}}{1_{\{{k^{-1/2}}|{\eta _{k}}|\gt \log n\}}}\hspace{1em}\text{a.s.}\end{aligned}\]
The summands on the right-hand side are equal to 0 for large enough n. Here, we have used ${k^{2{s_{n+1}}}}\log k\ge 1$ for $k\ge 3$ and $n\in \mathbb{N}$. More details can be found in the proof of Lemma 2.

Put ${\widehat{\eta }_{k}}(n):={\eta _{k}}{1_{\{|{\eta _{k}}|\le {k^{1/2}}\log n\}}}-\mathbb{E}[{\eta _{k}}{1_{\{|{\eta _{k}}|\le {k^{1/2}}\log n\}}}]$ for $k,n\in \mathbb{N}$. For $n\in \mathbb{N}$ and small positive u, put
\[ {Y_{n}^{\ast }}(u):=\sum \limits_{k\ge M({s_{n+1}})+1}\frac{{(\log k)^{-1/2}}{\widehat{\eta }_{k}}(n)}{{k^{1/2+u}}}.\]
In view of (13) and (14), it suffices to demonstrate that, for each $s\in [{s_{n+1}},{s_{n}}]$,
\[ \underset{n\to \infty }{\lim }f({s_{n}})\big({Y_{n}^{\ast }}(s)-{Y_{n}^{\ast }}({s_{n+1}})\big)=0\hspace{1em}\text{a.s.}\]
For a technical reason to be explained below, we shall show that, for each $v\in [{v_{n+1}},{v_{n}}]$,
(15)
\[ \underset{n\to \infty }{\lim }f({s_{n}})\big({Y_{n}}(v)-{Y_{n}}({v_{n+1}})\big)=0\hspace{1em}\text{a.s.},\]
where ${Y_{n}}(v):={Y_{n}^{\ast }}(\exp (-1/v))$ for $v\gt 0$ and ${v_{n}}:=1/(\log 1/{s_{n}})=\exp (-{n^{1-\gamma }})$ for $n\in \mathbb{N}$.

For $j\in {\mathbb{N}_{0}}$ and $n\in \mathbb{N}$, put
\[ {F_{j}}(n):=\big\{{t_{j,\hspace{0.1667em}m}}(n):={v_{n+1}}+{2^{-j}}m({v_{n}}-{v_{n+1}}):0\le m\le {2^{j}}\big\}.\]
Note that ${F_{j}}(n)\subseteq {F_{j+1}}(n)$ and put $F(n):={\textstyle\bigcup _{j\ge 0}}{F_{j}}(n)$. The set $F(n)$ is dense in the interval $[{v_{n+1}},{v_{n}}]$. For any $u\in [{v_{n+1}},{v_{n}}]$, put
\[ {u_{j}}:=\max \big\{v\in {F_{j}}(n):v\le u\big\}={v_{n+1}}+{2^{-j}}({v_{n}}-{v_{n+1}})\bigg\lfloor \frac{{2^{j}}(u-{v_{n+1}})}{{v_{n}}-{v_{n+1}}}\bigg\rfloor .\]
Then ${\lim \nolimits_{j\to \infty }}{u_{j}}=u$ (we omit the dependence on n in the notation). An important observation is that either ${u_{j-1}}={u_{j}}$ or ${u_{j-1}}={u_{j}}-{2^{-j}}({v_{n}}-{v_{n+1}})$. Consequently, ${u_{j}}={t_{j,m}}$ for some $0\le m\le {2^{j}}$, which implies that either ${u_{j-1}}={t_{j,\hspace{0.1667em}m}}$ or ${u_{j-1}}={t_{j,\hspace{0.1667em}m-1}}$. Since ${Y_{n}}$ is a.s. continuous on $[{v_{n+1}},{v_{n}}]$, we obtain
\[\begin{aligned}{}|{Y_{n}}(u)-{Y_{n}}({v_{n+1}})|& =\underset{l\to \infty }{\lim }|{Y_{n}}({u_{l}})-{Y_{n}}({v_{n+1}})|\\ {} & =\underset{l\to \infty }{\lim }\Bigg|{\sum \limits_{j=1}^{l}}\big({Y_{n}}({u_{j}})-{Y_{n}}({u_{j-1}})\big)+{Y_{n}}({u_{0}})-{Y_{n}}({v_{n+1}})\Bigg|\\ {} & \le \underset{l\to \infty }{\lim }{\sum \limits_{j=0}^{l}}\underset{1\le m\le {2^{j}}}{\max }|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\\ {} & =\sum \limits_{j\ge 0}\underset{1\le m\le {2^{j}}}{\max }|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|.\end{aligned}\]
Thus, our purpose is to prove that, for all $\varepsilon \gt 0$ and sufficiently large ${n_{0}}\in \mathbb{N}$,
\[ \sum \limits_{n\ge {n_{0}}}\mathbb{P}\bigg\{\sum \limits_{j\ge 0}\underset{1\le m\le {2^{j}}}{\max }f({s_{n}})|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon \bigg\}\lt \infty .\]
Put ${a_{j}}:=(j+1){2^{-j/2}}$ for $j\in {\mathbb{N}_{0}}$. In view of ${\textstyle\sum _{j\ge 0}}{a_{j}}\lt \infty $, it suffices to show that, for all $\varepsilon \gt 0$,
(16)
\[ \sum \limits_{n\ge {n_{0}}}\sum \limits_{j\ge 0}\mathbb{P}\Big\{\underset{1\le m\le {2^{j}}}{\max }f({s_{n}})|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon {a_{j}}\Big\}\lt \infty .\]
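For the record, the series ${\textstyle\sum _{j\ge 0}}{a_{j}}$ is explicit (our computation): differentiating the geometric series at $x={2^{-1/2}}$ yields
\[ \sum \limits_{j\ge 0}{a_{j}}=\sum \limits_{j\ge 0}(j+1){2^{-j/2}}=\frac{1}{{(1-{2^{-1/2}})^{2}}}\lt \infty .\]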
Now we proceed similarly to the proof of Lemma 3 and refer to that proof regarding any missing details. Write, for $u\in \mathbb{R}$ and sufficiently large n,
\[\begin{aligned}{}& \mathbb{E}\big[\exp \big(\pm u\big({Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})\big)\big)\big]\\ {} & \hspace{1em}=\mathbb{E}\bigg[\exp \bigg(\pm u\sum \limits_{k\ge M({s_{n+1}})+1}\frac{1}{{(k\log k)^{1/2}}}\bigg(\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}\bigg){\widehat{\eta }_{k}}(n)\bigg)\bigg]\\ {} & \hspace{1em}\le \prod \limits_{k\ge M({s_{n+1}})+1}\bigg(1+\frac{{u^{2}}}{2}\frac{1}{k\log k}{\bigg(\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}\bigg)^{2}}\\ {} & \hspace{2em}\times \mathbb{E}\bigg[{\big({\widehat{\eta }_{k}}(n)\big)^{2}}\exp \bigg(\frac{|u|}{{(k\log k)^{1/2}}}\bigg(\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}\bigg)|{\widehat{\eta }_{k}}(n)|\bigg)\bigg]\bigg).\end{aligned}\]
Now we prove that, for large n, all $j\in {\mathbb{N}_{0}}$ and integer $m\in [0,{2^{j}}]$,
\[\begin{aligned}{}A(j,n)& :=\sum \limits_{k\ge M({s_{n+1}})+1}\frac{1}{k\log k}{\bigg(\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}\bigg)^{2}}\\ {} & \le \frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}}.\end{aligned}\]
For fixed $a,b\gt 0$, the function $x\mapsto {x^{-1}}{\mathrm{e}^{-x}}{({\mathrm{e}^{-ax}}-{\mathrm{e}^{-bx}})^{2}}$ is decreasing on $(1,\infty )$. Indeed, $x\mapsto x{\mathrm{e}^{-x}}$ is decreasing on $(1,\infty )$, and
\[ x\mapsto {\mathrm{e}^{-\min (a,b)x}}{x^{-2}}{\big(1-{\mathrm{e}^{-(\max (a,b)-\min (a,b))x}}\big)^{2}}\]
is decreasing on $(0,\infty )$ as the product of two positive decreasing functions. As far as monotonicity of the second function is concerned, observe that, up to a multiplicative constant, it is the Laplace–Stieltjes transform of the Lebesgue–Stieltjes convolution of the uniform distribution on $[0,1]$ with itself. Using this argument with $x=\log k$, $a=\exp (-1/{t_{j,\hspace{0.1667em}m}})$ and $b=\exp (-1/{t_{j,\hspace{0.1667em}m-1}})$ we conclude that the function of argument k under the sum defining $A(j,n)$ is decreasing, whence
\[\begin{aligned}{}A(j,n)& \le {\int _{\mathrm{e}}^{\infty }}\frac{1}{x\log x}\bigg(\frac{1}{{x^{2\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}+\frac{1}{{x^{2\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}\\ {} & \hspace{1em}-\frac{2}{{x^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})+\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}\bigg)\mathrm{d}x\\ {} & ={\int _{2\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}^{\infty }}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x+{\int _{2\exp (-1/{t_{j,\hspace{0.1667em}m}})}^{\infty }}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x\\ {} & \hspace{1em}-2{\int _{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})+\exp (-1/{t_{j,\hspace{0.1667em}m}})}^{\infty }}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x={\int _{2\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}^{1}}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x\\ {} & \hspace{1em}+{\int _{2\exp (-1/{t_{j,\hspace{0.1667em}m}})}^{1}}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x-2{\int _{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})+\exp (-1/{t_{j,\hspace{0.1667em}m}})}^{1}}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x.\end{aligned}\]
Put $I(y):={\textstyle\int _{0}^{y}}{x^{-1}}(1-{\mathrm{e}^{-x}})\mathrm{d}x$ for $y\in [0,1]$ and observe that
\[ {\int _{c}^{1}}\frac{{\mathrm{e}^{-x}}}{x}\mathrm{d}x=\log 1/c-I(1)+I(c),\hspace{1em}c\in (0,1].\]
The function $x\mapsto {x^{-1}}(1-{\mathrm{e}^{-x}})$ is decreasing on $(0,\infty )$ as the Laplace–Stieltjes transform of the uniform distribution on $[0,1]$. Hence, I is concave on $[0,1]$ and thereupon, for $a,b\in [0,1]$, $I(2a)+I(2b)-2I(a+b)\le 0$. As a consequence,
(17)
\[\begin{aligned}{}A(j,n)& \le -2\log 2+\frac{1}{{t_{j,\hspace{0.1667em}m-1}}}+\frac{1}{{t_{j,\hspace{0.1667em}m}}}+2\log \big({\mathrm{e}^{-1/{t_{j,\hspace{0.1667em}m-1}}}}+{\mathrm{e}^{-1/{t_{j,\hspace{0.1667em}m}}}}\big)\\ {} & \le \frac{{t_{j,\hspace{0.1667em}m}}-{t_{j,\hspace{0.1667em}m-1}}}{{t_{j,\hspace{0.1667em}m-1}}{t_{j,\hspace{0.1667em}m}}}\le \frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}},\end{aligned}\]
which proves the claim. Next we work towards estimating
\[ B(u,k,n):=\exp \bigg(\frac{|u|}{{(k\log k)^{1/2}}}\bigg(\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}\bigg)|{\widehat{\eta }_{k}}(n)|\bigg)\]
for $k\ge M({s_{n+1}})+1$. Assume first that ${2^{j}}\ge {v_{n+1}^{-2}}({v_{n}}-{v_{n+1}})$. Observe that, for $a\gt 0$ and $0\lt s\lt t$,
\[ 0\le \exp \big(-a{\mathrm{e}^{-1/s}}\big)-\exp \big(-a{\mathrm{e}^{-1/t}}\big)\le a\exp \big(-a{\mathrm{e}^{-1/s}}\big){\mathrm{e}^{-1/t}}(1/s-1/t).\]
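For completeness, a short verification of this inequality (ours): by the mean value theorem applied to $y\mapsto {\mathrm{e}^{-ay}}$ on $[{\mathrm{e}^{-1/s}},{\mathrm{e}^{-1/t}}]$ and then the bound $1-{\mathrm{e}^{-x}}\le x$ with $x=1/s-1/t$,
\[ \exp \big(-a{\mathrm{e}^{-1/s}}\big)-\exp \big(-a{\mathrm{e}^{-1/t}}\big)\le a\exp \big(-a{\mathrm{e}^{-1/s}}\big)\big({\mathrm{e}^{-1/t}}-{\mathrm{e}^{-1/s}}\big)\le a\exp \big(-a{\mathrm{e}^{-1/s}}\big){\mathrm{e}^{-1/t}}(1/s-1/t).\]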
Using this inequality with $a=\log k$, $t={t_{j,\hspace{0.1667em}m-1}}$ and $s={t_{j,\hspace{0.1667em}m}}$ we obtain
\[ 0\le \frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}\le \frac{\log k}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}{\mathrm{e}^{-1/{t_{j,\hspace{0.1667em}m}}}}\bigg(\frac{1}{{t_{j,\hspace{0.1667em}m-1}}}-\frac{1}{{t_{j,\hspace{0.1667em}m}}}\bigg).\]
In view of the inequality $x{\mathrm{e}^{-ax}}\le {(\mathrm{e}a)^{-1}}$ for $x\ge 0$, we conclude that the right-hand side does not exceed
\[\begin{aligned}{}& \frac{1}{\mathrm{e}}{\mathrm{e}^{1/{t_{j,\hspace{0.1667em}m-1}}-1/{t_{j,\hspace{0.1667em}m}}}}\bigg(\frac{1}{{t_{j,\hspace{0.1667em}m-1}}}-\frac{1}{{t_{j,\hspace{0.1667em}m}}}\bigg)\\ {} & \hspace{1em}\le \frac{1}{\mathrm{e}}\exp \bigg(\frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}}\bigg)\frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}}\le \frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}},\end{aligned}\]
where the last inequality is justified by our present choice of j. Thus,
\[ B(u,k,n)\le \exp \bigg(\frac{|u|{2^{-j}}\log n}{{(\log M({s_{n+1}}))^{1/2}}}\frac{{v_{n}}-{v_{n+1}}}{{v_{n+1}^{2}}}\bigg)=:{\mathrm{e}^{{C_{j}}(u,n)}}\]
whenever $k\ge M({s_{n+1}})+1$.

Assume now that ${2^{j}}\le {v_{n+1}^{-2}}({v_{n}}-{v_{n+1}})$ for nonnegative integer j. Using a trivial estimate
\[ \frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m-1}})}}}-\frac{1}{{k^{\exp (-1/{t_{j,\hspace{0.1667em}m}})}}}\le 1\]
we infer
\[ B(u,k,n)\le \exp \bigg(\frac{|u|\log n}{{(\log M({s_{n+1}}))^{1/2}}}\bigg)=:{\mathrm{e}^{{C_{j}}(u,n)}}\]
whenever $k\ge M({s_{n+1}})+1$. Using now $1+x\le {\mathrm{e}^{x}}$ for $x\in \mathbb{R}$ we arrive at
\[\begin{aligned}{}& \mathbb{E}\big[\exp \big(\pm u\big({Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})\big)\big)\big]\\ {} & \hspace{1em}\le \exp \bigg(\frac{{u^{2}}}{2}\frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}}{\mathrm{e}^{{C_{j}}(u,n)}}\bigg),\hspace{1em}u\in \mathbb{R}.\end{aligned}\]
By Markov’s inequality and the inequality ${\mathrm{e}^{u|x|}}\le {\mathrm{e}^{ux}}+{\mathrm{e}^{-ux}}$ for $x\in \mathbb{R}$,
\[\begin{aligned}{}& \mathbb{P}\big\{f({s_{n}})|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon {a_{j}}\big\}\\ {} & \hspace{1em}\le \exp \bigg(-u\frac{\varepsilon {a_{j}}}{f({s_{n}})}\bigg)\mathbb{E}\big[\exp \big(u|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\big)\big]\\ {} & \hspace{1em}\le 2\exp \bigg(-\frac{u\varepsilon {a_{j}}}{f({s_{n}})}+\frac{{u^{2}}}{2}\frac{{2^{-j}}({v_{n}}-{v_{n+1}})}{{v_{n+1}^{2}}}{\mathrm{e}^{{C_{j}}(u,n)}}\bigg).\end{aligned}\]
Put
\[ u=\frac{\varepsilon {2^{j/2}}}{f({s_{n}})}\frac{{v_{n+1}^{2}}}{{v_{n}}-{v_{n+1}}}\hspace{1em}\text{and}\hspace{1em}{k_{n}}:=\frac{1}{{(f({s_{n}}))^{2}}}\frac{{v_{n+1}^{2}}}{{v_{n}}-{v_{n+1}}}.\]
In view of
\[ {\big(\log M({s_{n+1}})\big)^{-1/2}}\sim {n^{\gamma /2-1/2}},\hspace{1em}{\bigg(\frac{{v_{n+1}}}{{v_{n}}-{v_{n+1}}}\bigg)^{1/2}}\sim {(1-\gamma )^{-1/2}}{n^{\gamma /2}},\hspace{1em}n\to \infty ,\]
and
\[ \underset{n\to \infty }{\lim }\big(\log 1/{s_{n}}\big){v_{n+1}}=\underset{n\to \infty }{\lim }\exp \big({n^{1-\gamma }}-{(n+1)^{1-\gamma }}\big)=1,\]
we conclude that, for j satisfying ${2^{j}}\ge {v_{n+1}^{-2}}({v_{n}}-{v_{n+1}})$,
\[\begin{aligned}{}{C_{j}}(u,n)& =\frac{|u|{2^{-j}}\log n}{{(\log M({s_{n+1}}))^{1/2}}}\frac{{v_{n}}-{v_{n+1}}}{{v_{n+1}^{2}}}=\frac{\varepsilon {2^{-j/2}}\log n}{f({s_{n}}){(\log M({s_{n+1}}))^{1/2}}}\\ {} & \le \frac{\varepsilon \log n}{f({s_{n}}){(\log M({s_{n+1}}))^{1/2}}}\frac{{v_{n+1}}}{{({v_{n}}-{v_{n+1}})^{1/2}}}\\ {} & =\frac{\varepsilon {2^{1/2}}{((\log 1/{s_{n}}){v_{n+1}})^{1/2}}{({\log ^{(3)}}1/{s_{n}})^{1/2}}\log n}{{(\log M({s_{n+1}}))^{1/2}}}{\bigg(\frac{{v_{n+1}}}{{v_{n}}-{v_{n+1}}}\bigg)^{1/2}}\\ {} & \sim \varepsilon {2^{1/2}}{n^{\gamma -1/2}}{(\log n)^{3/2}}\to 0,\hspace{1em}n\to \infty .\end{aligned}\]
For nonnegative integer j satisfying ${2^{j}}\le {v_{n+1}^{-2}}({v_{n}}-{v_{n+1}})$, ${C_{j}}(u,n)$ admits the same upper bound:
\[\begin{array}{r}\displaystyle {C_{j}}(u,n)=\frac{|u|\log n}{{(\log M({s_{n+1}}))^{1/2}}}=\frac{\varepsilon {2^{j/2}}\log n}{f({s_{n}}){(\log M({s_{n+1}}))^{1/2}}}\frac{{v_{n+1}^{2}}}{{v_{n}}-{v_{n+1}}}\\ {} \displaystyle \hspace{1em}\le \frac{\varepsilon \log n}{f({s_{n}}){(\log M({s_{n+1}}))^{1/2}}}\frac{{v_{n+1}}}{{({v_{n}}-{v_{n+1}})^{1/2}}}\to 0,\hspace{1em}n\to \infty .\end{array}\]
Therefore, for sufficiently large n such that ${k_{n}}\gt {\varepsilon ^{-2}}\log 2$ and ${\mathrm{e}^{{C_{j}}(u,n)}}\le 3/2$,
\[\begin{aligned}{}& \sum \limits_{j\ge 0}\mathbb{P}\Big\{\underset{1\le m\le {2^{j}}}{\max }f({s_{n}})|{Y_{n}}({t_{j,\hspace{0.1667em}m}})-{Y_{n}}({t_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon {a_{j}}\Big\}\\ {} & \hspace{1em}\le \sum \limits_{j\ge 0}{2^{j}}2\exp \big(-{\varepsilon ^{2}}(j+1){k_{n}}\big)\exp \big(3{\varepsilon ^{2}}{k_{n}}/4\big)=\frac{2\exp (-{\varepsilon ^{2}}{k_{n}}/4)}{1-2\exp (-{\varepsilon ^{2}}{k_{n}})}.\end{aligned}\]
Since ${k_{n}}\sim 2{n^{\gamma }}\log n$ as $n\to \infty $, (16) follows.

Now we comment on the reason behind the passage from ${Y_{n}^{\ast }}$ to ${Y_{n}}$ via a logarithmic transformation. In order to have in the present context a successful approximation with the help of a dyadic partition of an interval $[{\lambda _{n+1}},{\lambda _{n}}]$, say, the endpoints should satisfy ${\lim \nolimits_{n\to \infty }}({\lambda _{n}}/{\lambda _{n+1}})=1$. This is the case for ${\lambda _{n}}={v_{n}}$ and not the case for ${\lambda _{n}}={s_{n}}$. □
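To make the closing remark quantitative (our computation): with ${v_{n}}=\exp (-{n^{1-\gamma }})$ and ${s_{n}}=\exp (-\exp ({n^{1-\gamma }}))$,
\[ \frac{{v_{n}}}{{v_{n+1}}}=\exp \big({(n+1)^{1-\gamma }}-{n^{1-\gamma }}\big)\to 1,\hspace{1em}\text{whereas}\hspace{1em}\frac{{s_{n}}}{{s_{n+1}}}=\exp \big(\exp \big({(n+1)^{1-\gamma }}\big)-\exp \big({n^{1-\gamma }}\big)\big)\to \infty ,\]
since ${(n+1)^{1-\gamma }}-{n^{1-\gamma }}\sim (1-\gamma ){n^{-\gamma }}\to 0$ as $n\to \infty $.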
We are prepared to prove Proposition 3.
Proof of Proposition 3.
We only prove (4) because (5) follows from (4) by replacing ${\eta _{k}}$ with $-{\eta _{k}}$. Recall our convention that ${\sigma ^{2}}=1$.
Choose any $\gamma \gt 0$ sufficiently close to 0, put ${s_{n}}=\exp (-\exp ({n^{1-\gamma }}))$ for $n\in \mathbb{N}$ and select $\rho =\rho (\gamma )$ such that (10) holds true. Let $M(s)=\lfloor \log 1/s/\log \log 1/s\rfloor $ for $s\in (0,{\mathrm{e}^{-\mathrm{e}}})$. By Lemmas 2 and 3 and the fact that $\mathbb{E}[{\eta _{k}}{1_{{({A_{k,\rho }}(s))^{c}}}}]=-\mathbb{E}[{\eta _{k}}{1_{{A_{k,\rho }}(s)}}]$,
(18)
\[ \underset{n\to \infty }{\limsup }f({s_{n}})\sum \limits_{k\ge M({s_{n}})+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{s_{n}}}}}{\eta _{k}}\le 1+\gamma \hspace{1em}\text{a.s.}\]
Lemma 4 in combination with relation (18) secures
\[ \underset{s\to 0+}{\limsup }f(s)\sum \limits_{k\ge M(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\le 1+\gamma \hspace{1em}\text{a.s.}\]
Finally, by Lemma 1,
\[ \underset{s\to 0+}{\limsup }f(s)\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\le 1+\gamma \hspace{1em}\text{a.s.,}\]
which entails (4) because the left-hand side does not depend on γ. □

Proposition 4.
Under the assumptions of Theorem 2,
(19)
\[ \underset{s\to 0+}{\limsup }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\ge 1\hspace{1em}\textit{a.s.}\]
and
(20)
\[ \underset{s\to 0+}{\liminf }{\bigg(\frac{1}{2{\sigma ^{2}}\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s}\bigg)^{1/2}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\le -1\hspace{1em}\textit{a.s.}\]
Proposition 4 will be proved by using Lemmas 1 and 2, together with two additional lemmas. In the first of these, we show that initial and final fragments of the series in question do not contribute to the LIL.
As before, we assume without further notice that the assumptions of Theorem 2 hold true and that ${\sigma ^{2}}=1$. When proving Proposition 4 we use the sets ${A_{k,\rho }}(s)$ and the corresponding variables ${\widetilde{\eta }_{k,\rho }}(s)$ with $\rho =1$.
Lemma 5.
Fix any $\gamma \gt 0$ and put ${\mathfrak{s}_{n}}:=\exp (-\exp ({n^{1+\gamma }}))$ for $n\ge 1$. Let ${N_{1}}$ and ${N_{2}}$ be functions which take positive integer values, may depend on γ and satisfy ${N_{1}}(s)\ge 2$ for small positive s, ${\lim \nolimits_{s\to 0+}}(\log \log {N_{1}}(s)/\log 1/s)=0$, ${\lim \nolimits_{s\to 0+}}(\log \log {N_{2}}(s)/\log 1/s)=1$ and ${\lim \nolimits_{s\to 0+}}s\log {N_{2}}(s)=0$. Then
(21)
\[ \underset{n\to \infty }{\lim }f({\mathfrak{s}_{n}}){\sum \limits_{k=2}^{{N_{1}}({\mathfrak{s}_{n}})}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\widetilde{\eta }_{k,1}}({\mathfrak{s}_{n}})=0\hspace{1em}\textit{a.s.}\]
and
(22)
\[ \underset{n\to \infty }{\lim }f({\mathfrak{s}_{n}})\sum \limits_{k\ge {N_{2}}({\mathfrak{s}_{n}})+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\widetilde{\eta }_{k,1}}({\mathfrak{s}_{n}})=0\hspace{1em}\textit{a.s.}\]
Proof.
As in the proof of Lemma 3, we obtain the result, with ${f^{\ast }}$ replacing f. For $s\gt 0$ close to 0, put
\[ {Z_{1}}(s):={f^{\ast }}(s){\sum \limits_{k=2}^{{N_{1}}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\widetilde{\eta }_{k,1}}(s).\]
The reasoning used to derive both (21) and (22) is similar to the one applied in the proof of Lemma 3. Therefore, we give a proof of (21) and indicate the only minor change needed for a proof of (22).

Regarding (21), in view of the direct part of the Borel–Cantelli lemma, it is sufficient to demonstrate that, for all $\varepsilon \gt 0$,
(23)
\[ \sum \limits_{n\ge 1}\mathbb{P}\big\{{Z_{1}}({\mathfrak{s}_{n}})\gt \varepsilon \big\}\lt \infty .\]
To this end, we obtain a counterpart of (12), for $u\in \mathbb{R}$,
\[ \mathbb{E}\big[{\mathrm{e}^{u{Z_{1}}(s)}}\big]\le \exp \Bigg(\frac{{u^{2}}{({f^{\ast }}(s))^{2}}}{2}{\sum \limits_{k=2}^{{N_{1}}(s)}}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\exp \bigg(\frac{\sqrt{2}|u|}{{\log ^{(3)}}1/s}\bigg)\Bigg).\]
For each fixed $s\gt 0$, the function $x\to {(\log x)^{-1}}{x^{-1-2s}}$ is decreasing on $(1,\infty )$. The assumption ${\lim \nolimits_{s\to 0+}}(\log \log {N_{1}}(s)/\log 1/s)=0$ entails ${\lim \nolimits_{s\to 0+}}s\log {N_{1}}(s)=0$. Hence,
(24)
\[\begin{aligned}{}{\sum \limits_{k=2}^{{N_{1}}(s)}}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}& \le \frac{{(\log 2)^{-1}}}{{2^{1+2s}}}+{\int _{2}^{{N_{1}}(s)}}\frac{{(\log x)^{-1}}}{{x^{1+2s}}}\mathrm{d}x\\ {} & =O(1)+{\int _{(2\log 2)s}^{2s\log {N_{1}}(s)}}{y^{-1}}{\mathrm{e}^{-y}}\mathrm{d}y=O\big(\log \log {N_{1}}(s)\big),\hspace{1em}s\to 0+.\end{aligned}\]
Our choice of ${N_{1}}$ guarantees that there exists an $r\gt 0$ close to 0 and such that
\[ {\sum \limits_{k=2}^{{N_{1}}(s)}}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\le r\hspace{0.1667em}g(s)\]
for small positive s, and also $(1-\delta )(1+\gamma )\gt 1$, where $\delta :=r{(4{\varepsilon ^{2}})^{-1}}\exp (\sqrt{2}{\varepsilon ^{-1}})$ with ε appearing in (23). Then, for $u\in \mathbb{R}$ and small $s\gt 0$,
\[\begin{aligned}{}\mathbb{E}\big[{\mathrm{e}^{u{Z_{1}}(s)}}\big]& \le \exp \bigg(\frac{r{u^{2}}g(s){({f^{\ast }}(s))^{2}}}{2}\exp \bigg(\frac{\sqrt{2}|u|}{{\log ^{(3)}}1/s}\bigg)\bigg)\\ {} & =\exp \bigg(\frac{r{u^{2}}}{4{\log ^{(3)}}1/s}\exp \bigg(\frac{\sqrt{2}|u|}{{\log ^{(3)}}1/s}\bigg)\bigg).\end{aligned}\]
Applying Markov’s inequality with $u=(1/\varepsilon ){\log ^{(3)}}1/{\mathfrak{s}_{n}}$ yields, for large n,
\[\begin{aligned}{}\mathbb{P}\big\{{Z_{1}}({\mathfrak{s}_{n}})\gt \varepsilon \big\}& \le {\mathrm{e}^{-u\varepsilon }}\mathbb{E}\big[{\mathrm{e}^{u{Z_{1}}({\mathfrak{s}_{n}})}}\big]\le \exp \big(-\big(1-r{\big(4{\varepsilon ^{2}}\big)^{-1}}{\mathrm{e}^{\sqrt{2}{\varepsilon ^{-1}}}}\big){\log ^{(3)}}1/{\mathfrak{s}_{n}}\big)\\ {} & =\frac{1}{{n^{(1-\delta )(1+\gamma )}}},\end{aligned}\]
thereby proving (23) and hence (21).

The proof of (22) is analogous to that of (21). The corresponding version of (24) reads
(25)
\[ \sum \limits_{k\ge {N_{2}}(s)+1}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\le {\int _{{N_{2}}(s)}^{\infty }}\frac{{(\log x)^{-1}}}{{x^{1+2s}}}\mathrm{d}x={\int _{2s\log {N_{2}}(s)}^{\infty }}{y^{-1}}{\mathrm{e}^{-y}}\mathrm{d}y=o(1)\]
as $s\to 0+$. □

Lemma 6 treats the component of the series in question which gives a principal contribution to the LIL. Our proof is based on the converse part of the Borel–Cantelli lemma, which requires independence. The independence requirement complicates to some extent a selection of the essential component.
Lemma 6.
Fix sufficiently small $\delta \gt 0$, pick $\gamma \gt 0$ satisfying $(1+\gamma )(1-{\delta ^{2}}/8)\lt 1$ and let, as before, ${\mathfrak{s}_{n}}=\exp (-\exp ({n^{1+\gamma }}))$ for $n\in \mathbb{N}$. Then
\[ \underset{n\to \infty }{\limsup }f({\mathfrak{s}_{n}})\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\widetilde{\eta }_{k,1}}({\mathfrak{s}_{n}})\ge 1-\delta \hspace{1em}\textit{a.s.}\]
Proof.
The function $s\mapsto \log \log {N_{1}}(s)/\log 1/s$ in Lemma 5 can tend to 0 as slowly as we please. Observe that ${\lim \nolimits_{n\to \infty }}(\log 1/{\mathfrak{s}_{n+1}}/\log 1/{\mathfrak{s}_{n}})=+\infty $. Thus, we can choose ${N_{1}}$ satisfying
\[ \frac{\log \log {N_{1}}({\mathfrak{s}_{n+1}})}{\log 1/{\mathfrak{s}_{n}}}=\frac{\log \log {N_{1}}({\mathfrak{s}_{n+1}})}{\log 1/{\mathfrak{s}_{n+1}}}\frac{\log 1/{\mathfrak{s}_{n+1}}}{\log 1/{\mathfrak{s}_{n}}}\to +\infty ,\hspace{1em}n\to \infty .\]
Let ${N_{2}}$ be any function satisfying the assumptions of Lemma 5. Since
\[ \underset{n\to \infty }{\lim }\frac{\log \log {N_{2}}({\mathfrak{s}_{n}})}{\log 1/{\mathfrak{s}_{n}}}=1,\]
we conclude that there exists ${n_{0}}\in \mathbb{N}$ such that
\[ \log \log {N_{2}}({\mathfrak{s}_{n}})\lt \log \log {N_{1}}({\mathfrak{s}_{n+1}}),\hspace{1em}n\ge {n_{0}},\]
whence
(26)
\[ {N_{2}}({\mathfrak{s}_{n}})\lt {N_{1}}({\mathfrak{s}_{n+1}}),\hspace{1em}n\ge {n_{0}}.\]
In view of Lemma 5, it is sufficient to check that
(27)
\[ \underset{n\to \infty }{\limsup }{Z_{2}}({\mathfrak{s}_{n}})\ge 1-\delta \hspace{1em}\text{a.s.},\]
where, for small $s\gt 0$,
\[ {Z_{2}}(s):=f(s){\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\widetilde{\eta }_{k,1}}(s).\]
We shall prove that there exists $\overline{s}\gt 0$ such that for all $s\in (0,\overline{s})$,
(28)
\[ \mathbb{P}\big\{{Z_{2}}(s)\gt 1-\delta \big\}\ge {3^{-1}}{\mathrm{e}^{-(1-{\delta ^{2}}/8){\log ^{(3)}}1/s}}.\]
As a consequence,
\[ \sum \limits_{n\ge {n_{1}}}\mathbb{P}\big\{{Z_{2}}({\mathfrak{s}_{n}})\gt 1-\delta \big\}\ge {3^{-1}}\sum \limits_{n\ge {n_{1}}}\frac{1}{{n^{(1+\gamma )(1-{\delta ^{2}}/8)}}}=\infty ,\]
where ${n_{1}}\ge {n_{0}}$ is chosen to ensure that ${\mathfrak{s}_{n}}\lt \overline{s}$ for $n\ge {n_{1}}$. In view of (26) the random variables ${Z_{2}}({\mathfrak{s}_{{n_{1}}}}),{Z_{2}}({\mathfrak{s}_{{n_{1}}+1}}),\dots $ are independent. Hence, divergence of the series entails (27) by the converse part of the Borel–Cantelli lemma.

When proving (28) we use the event
\[ {U_{s}}:=\big\{1-\delta \lt {Z_{2}}(s)\le 1\big\}=\big\{(1-\delta )V(s)\lt W(s)/{\big(g(s)\big)^{1/2}}\le V(s)\big\},\]
where $V(s)={(2\hspace{0.1667em}{\log ^{(3)}}1/s)^{1/2}}$ and
\[ W(s):=\frac{{Z_{2}}(s)}{f(s)}={\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\widetilde{\eta }_{k,1}}(s).\]
For $u\in \mathbb{R}$ and small $s\gt 0$, let ${\mathbb{Q}_{s,u}}$ be a probability measure on $(\Omega ,\mathfrak{F})$ defined by
(29)
\[ {\mathbb{Q}_{s,u}}(A)=\frac{\mathbb{E}[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}{1_{A}}]}{\mathbb{E}[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}]},\hspace{1em}A\in \mathfrak{F}.\]
Then
(30)
\[\begin{aligned}{}\mathbb{E}\big[{\mathrm{e}^{u(W(s)/{(g(s))^{1/2}}-V(s))}}\big]{\mathbb{Q}_{s,u}}({U_{s}})& ={\mathrm{e}^{-uV(s)}}\mathbb{E}\big[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}{1_{{U_{s}}}}\big]\\ {} & \le \mathbb{P}({U_{s}})\le \mathbb{P}\big\{{Z_{2}}(s)\gt 1-\delta \big\}.\end{aligned}\]
Now we show that by selecting an appropriate $u=u(s)=O({({\log ^{(3)}}1/s)^{1/2}})$, the expectation on the left-hand side of (30) is bounded from below by ${\mathrm{e}^{-(1-{\delta ^{2}}/8){\log ^{(3)}}1/s}}$ for small positive s. Also, we shall prove that ${\mathbb{Q}_{s,u}}({U_{s}})\ge 1/3$, thereby deriving (28).

We start by demonstrating that, with $u=O({({\log ^{(3)}}1/s)^{1/2}})$,
(31)
\[ \mathbb{E}\big[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}\big]\sim {\mathrm{e}^{{u^{2}}/2+{u^{2}}h(s)}},\hspace{1em}s\to 0+,\]
for some function h satisfying ${\lim \nolimits_{s\to 0+}}h(s)=0$. Put
\[ {\xi _{k}}(s)=\frac{{(\log k)^{-1/2}}}{{(g(s))^{1/2}}{k^{1/2+s}}}{\widetilde{\eta }_{k,1}}(s),\hspace{1em}k\ge 2,\hspace{1em}s\in \big(0,{\mathrm{e}^{-\mathrm{e}}}\big).\]
As a consequence, for $u\in \mathbb{R}$,
(32)
\[ \mathbb{E}\big[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}\big]={\prod \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\mathbb{E}\big[\exp \big(u{\xi _{k}}(s)\big)\big].\]
According to the second inequality in (11),
\[ |{\xi _{k}}(s)|\le 2{\big({\log ^{(3)}}1/s\big)^{-1/2}}\hspace{1em}\text{a.s.}\]
for each $k\ge 2$. Using
\[ {\mathrm{e}^{x}}=1+x+{x^{2}}/2+o\big({x^{2}}\big)\hspace{1em}\text{and}\hspace{1em}\log (1+x)=x+O\big({x^{2}}\big),\hspace{1em}x\to 0,\]
we obtain
(33)
\[\begin{aligned}{}& \mathbb{E}\big[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}\big]\\ {} & \hspace{1em}={\prod \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\mathbb{E}\big[\exp \big(u{\xi _{k}}(s)\big)\big]\\ {} & \hspace{1em}={\prod \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\mathbb{E}\big[1+u{\xi _{k}}(s)+{u^{2}}{\xi _{k}^{2}}(s)\big(1/2+o(1)\big)\big]\\ {} & \hspace{1em}=\exp {\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\log \big(1+{u^{2}}\mathbb{E}\big[{\xi _{k}^{2}}(s)\big(1/2+o(1)\big)\big]\big)\\ {} & \hspace{1em}=\exp \Bigg({u^{2}}\big(1/2+o(1)\big){\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\mathbb{E}\big[{\xi _{k}^{2}}(s)\big]+{u^{4}}O\Bigg({\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}{\big(\mathbb{E}\big[{\xi _{k}^{2}}(s)\big]\big)^{2}}\Bigg)\Bigg).\end{aligned}\]
Here, we have used the fact that the $o(1)$ term can be chosen nonrandom. In view of (24) and (25),
\[ {\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\sim g(s),\hspace{1em}s\to 0+.\]
The latter, combined with uniformity in the integer $k\in [{N_{1}}(s)+1,{N_{2}}(s)]$ of the limit relation
\[ \mathbb{E}\big[{\widetilde{\eta }_{k,1}^{2}}(s)\big]=\mathbb{E}\big[{\eta _{k}^{2}}{1_{{({A_{k,1}}(s))^{c}}}}\big]-{\big(\mathbb{E}[{\eta _{k}}{1_{{({A_{k,1}}(s))^{c}}}}]\big)^{2}}\to 1,\hspace{1em}s\to 0+,\]
results in
(34)
\[ {\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\mathbb{E}\big[{\xi _{k}^{2}}(s)\big]=\frac{1}{g(s)}{\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\frac{{(\log k)^{-1}}}{{k^{1+2s}}}\mathbb{E}\big[{\widetilde{\eta }_{k,1}^{2}}(s)\big]\to 1,\hspace{1em}s\to 0+.\]
Finally,
(35)
\[\begin{aligned}{}{u^{2}}{\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}{\big(\mathbb{E}\big[{\xi _{k}^{2}}(s)\big]\big)^{2}}& =\frac{{u^{2}}}{{(g(s))^{2}}}{\sum \limits_{k={N_{1}}(s)+1}^{{N_{2}}(s)}}\frac{{(\log k)^{-2}}}{{k^{2+4s}}}{\big(\mathbb{E}\big[{\widetilde{\eta }_{k,1}^{2}}(s)\big]\big)^{2}}\\ {} & \le \frac{{u^{2}}}{{(g(s))^{2}}}\sum \limits_{k\ge {N_{1}}(s)+1}\frac{{(\log k)^{-2}}}{{k^{2}}}=o(1),\hspace{1em}s\to 0+,\end{aligned}\]
because both factors converge to 0 as $s\to 0+$. Now relations (33), (34) and (35) entail (31).

Formula (31), with $u\in \mathbb{R}$ fixed, reads ${\lim \nolimits_{s\to 0+}}\mathbb{E}[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}]={\mathrm{e}^{{u^{2}}/2}}$, which implies a central limit theorem $W(s)/{(g(s))^{1/2}}\stackrel{\mathrm{d}}{\to }\mathcal{N}(0,1)$ as $s\to 0+$. Here, $\stackrel{\mathrm{d}}{\to }$ denotes convergence in distribution and $\mathcal{N}(0,1)$ denotes a random variable with the normal distribution with mean 0 and variance 1.
Passing to the proof of (28), put
\[ u=u(s):=(1-\delta /2)V(s).\]
Formula (31) entails
(36)
\[\begin{aligned}{}\mathbb{E}\big[{\mathrm{e}^{u(W(s)/{(g(s))^{1/2}}-V(s))}}\big]& ={\mathrm{e}^{-(1-{\delta ^{2}}/4){\log ^{(3)}}1/s+o({\log ^{(3)}}1/s)}}\\ {} & \ge {\mathrm{e}^{-(1-{\delta ^{2}}/8){\log ^{(3)}}1/s}}\end{aligned}\]
for small $s\gt 0$. Now we show that the ${\mathbb{Q}_{s,u}}$-distribution of
\[ W(s)/{\big(g(s)\big)^{1/2}}-\big(u+2uh(s)\big)\]
converges weakly as $s\to 0+$ to the $\mathbb{P}$-distribution of $\mathcal{N}(0,1)$. To this end, we prove convergence of the moment generating functions. Let ${\mathbb{E}_{{\mathbb{Q}_{s,u}}}}$ denote the expectation with respect to the probability measure ${\mathbb{Q}_{s,u}}$. Invoking (31) we conclude that, for $t\in \mathbb{R}$,
\[\begin{aligned}{}& {\mathbb{E}_{{\mathbb{Q}_{s,u}}}}\big[{\mathrm{e}^{t(W(s)/{(g(s))^{1/2}}-(u+2uh(s)))}}\big]\sim \frac{\mathbb{E}[{\mathrm{e}^{(t+u)W(s)/{(g(s))^{1/2}}}}]}{\mathbb{E}[{\mathrm{e}^{uW(s)/{(g(s))^{1/2}}}}]}{\mathrm{e}^{-t(u+2uh(s))}}\\ {} & \hspace{1em}=\exp \big({(t+u)^{2}}/2+{(t+u)^{2}}h(s)-{u^{2}}/2-{u^{2}}h(s)-t\big(u+2uh(s)\big)\big)\\ {} & \hspace{1em}=\exp \big(\big(1/2+h(s)\big){t^{2}}\big)\to \exp \big({t^{2}}/2\big)=\mathbb{E}\big[\exp \big(t\hspace{0.1667em}\mathcal{N}(0,1)\big)\big],\hspace{1em}s\to 0+.\end{aligned}\]
As a consequence of the weak convergence, we infer
\[\begin{aligned}{}& \underset{s\to 0+}{\limsup }{\mathbb{Q}_{s,u}}\big\{W(s)/{\big(g(s)\big)^{1/2}}\le (1-\delta )V(s)\big\}\\ {} & \hspace{1em}\le \underset{s\to 0+}{\lim }{\mathbb{Q}_{s,u}}\big\{W(s)/{\big(g(s)\big)^{1/2}}\le u+2uh(s)\big\}=\mathbb{P}\big\{\mathcal{N}(0,1)\le 0\big\}=1/2.\end{aligned}\]
Further, the relation ${\lim \nolimits_{s\to 0+}}(V(s)-(u+2uh(s)))=+\infty $ entails
\[ \underset{s\to 0+}{\lim }{\mathbb{Q}_{s,u}}\big\{W(s)/{\big(g(s)\big)^{1/2}}\le V(s)\big\}=1\]
and thereupon
(37)
\[\begin{aligned}{}{\mathbb{Q}_{s,u}}({U_{s}})& ={\mathbb{Q}_{s,u}}\big\{W(s)/{\big(g(s)\big)^{1/2}}\le V(s)\big\}\\ {} & \hspace{1em}-{\mathbb{Q}_{s,u}}\big\{W(s)/{\big(g(s)\big)^{1/2}}\le (1-\delta )V(s)\big\}\ge 1/3\end{aligned}\]
for small $s\gt 0$. Now (28) follows from (30), (36) and (37). The proof of Lemma 6 is complete. □

Proof of Proposition 4.
It suffices to prove (19). To this end, pick sufficiently small $\delta \gt 0$ and $\gamma \gt 0$, and let ${\mathfrak{s}_{n}}=\exp (-\exp ({n^{1+\gamma }}))$ for $n\ge 1$. We shall invoke a representation
\[\begin{aligned}{}& f({\mathfrak{s}_{n}})\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\eta _{k}}\\ {} & \hspace{1em}=f({\mathfrak{s}_{n}}){\sum \limits_{k=2}^{\lfloor {(\log 1/{\mathfrak{s}_{n}})^{1/2}}\rfloor }}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\eta _{k}}\\ {} & \hspace{2em}-f({\mathfrak{s}_{n}}){\sum \limits_{k=2}^{\lfloor {(\log 1/{\mathfrak{s}_{n}})^{1/2}}\rfloor }}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}\big({\eta _{k}}{1_{{({A_{k,1}}({\mathfrak{s}_{n}}))^{c}}}}-\mathbb{E}[{\eta _{k}}{1_{{({A_{k,1}}({\mathfrak{s}_{n}}))^{c}}}}]\big)\\ {} & \hspace{2em}+f({\mathfrak{s}_{n}})\sum \limits_{k\ge \lfloor {(\log 1/{\mathfrak{s}_{n}})^{1/2}}\rfloor +1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}\big({\eta _{k}}{1_{{A_{k,1}}({\mathfrak{s}_{n}})}}-\mathbb{E}[{\eta _{k}}{1_{{A_{k,1}}({\mathfrak{s}_{n}})}}]\big)\\ {} & \hspace{2em}+f({\mathfrak{s}_{n}})\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}\big({\eta _{k}}{1_{{({A_{k,1}}({\mathfrak{s}_{n}}))^{c}}}}-\mathbb{E}[{\eta _{k}}{1_{{({A_{k,1}}({\mathfrak{s}_{n}}))^{c}}}}]\big).\end{aligned}\]
Here, we have used that $\mathbb{E}[\eta ]=0$. The first three terms on the right-hand side converge to 0 a.s. as $n\to \infty $ by Lemma 1, formula (21) of Lemma 5, with ${N_{1}}(s)=M(s)=\lfloor {(\log 1/s)^{1/2}}\rfloor $, and Lemma 2, respectively. Lemma 6 ensures that
\[ \underset{s\to 0+}{\limsup }f(s)\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\ge \underset{n\to \infty }{\limsup }f({\mathfrak{s}_{n}})\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+{\mathfrak{s}_{n}}}}}{\eta _{k}}\ge 1-\delta \hspace{1em}\text{a.s.}\]
Sending δ to $0+$ we arrive at (19). □

Proof of Theorem 2.
A combination of Propositions 3 and 4 yields (2) and (3). To prove (1) we put $H:=\{z\in \mathbb{C}:\mathrm{Re}\hspace{0.1667em}z\gt 0\}$ and
\[ X(z):=\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+z}}}{\eta _{k}},\hspace{1em}z\in H,\]
and note that the so defined X is a random analytic function, see p. 247 in [5]. Hence, its restriction to positive arguments
\[ s\mapsto X(s)={X_{-1/2}}(s),\hspace{1em}s\gt 0,\]
is a.s. continuous, and so is $s\mapsto {(2{\sigma ^{2}}\log 1/s\hspace{0.1667em}{\log ^{(3)}}1/s)^{-1/2}}X(s)$ on $(0,{\mathrm{e}^{-\mathrm{e}}})$. In view of (2) and (3), we obtain (1) with the help of the intermediate value theorem for continuous functions. □

4 Proof of Theorem 1
It is more convenient to prove the result in an equivalent form
\[ {\bigg(\frac{1}{{s^{1/2}}}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-ts)}}}{\eta _{k}}\bigg)_{t\ge 0}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big(\sigma B(t)\big)_{t\ge 0}},\hspace{1em}s\to +\infty ,\]
on $C[0,\infty )$. Without loss of generality we can and do assume that ${\sigma ^{2}}=1$. In what follows we write X for ${X_{-1/2}}$. We use a standard approach, which consists of two steps: (a) proving weak convergence of finite-dimensional distributions; (b) checking tightness.
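The equivalence with the original formulation of Theorem 1 is a plain change of variable, spelled out here for convenience (our computation): writing $r=\log 1/s$, so that $r\to +\infty $ as $s\to 0+$, we have
\[ {s^{t}}={\mathrm{e}^{-tr}}\hspace{1em}\text{and}\hspace{1em}{(\log 1/s)^{1/2}}={r^{1/2}},\]
so the statement of Theorem 1 in the variable $s\to 0+$ is exactly the display above in the variable $r\to +\infty $, with r renamed into s.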
(a) If $t=0$, then, for $s\gt 0$, $X({\mathrm{e}^{-ts}})=X(1)={\textstyle\sum _{k\ge 2}}{(\log k)^{-1/2}}{k^{-3/2}}{\eta _{k}}$, and ${s^{-1/2}}X(1)$ converges in probability to $B(0)=0$ as $s\to +\infty $.
Thus, it suffices to show that, for ${t_{1}},{t_{2}}\in (0,\infty )$ (we do not need to consider $t=0$),
(38)
\[ \mathbb{E}\big[X\big({\mathrm{e}^{-{t_{1}}s}}\big)X\big({\mathrm{e}^{-{t_{2}}s}}\big)\big]\sim \min ({t_{1}},{t_{2}})s,\hspace{1em}s\to +\infty ,\]
and check the Lindeberg–Feller condition: for all $\varepsilon \gt 0$ and each fixed $t\gt 0$,
(39)
\[ \underset{s\to +\infty }{\lim }\frac{1}{s}\sum \limits_{k\ge 2}\mathbb{E}\bigg[{\bigg(\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-ts)}}}{\eta _{k}}\bigg)^{2}}{1_{\{{(\log k)^{-1/2}}|{\eta _{k}}|\gt \varepsilon {k^{1/2+\exp (-ts)}}{s^{1/2}}\}}}\bigg]=0.\]
Proof of (38). Using monotonicity to pass from the series to an integral we obtain
\[\begin{aligned}{}& \mathbb{E}\bigg[\sum \limits_{k\ge 2}\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-{t_{1}}s)}}}{\eta _{k}}\sum \limits_{j\ge 2}\frac{{(\log j)^{-1/2}}}{{j^{1/2+\exp (-{t_{2}}s)}}}{\eta _{j}}\bigg]=\sum \limits_{k\ge 2}\frac{{(\log k)^{-1}}}{{k^{1+\exp (-{t_{1}}s)+\exp (-{t_{2}}s)}}}\\ {} & \hspace{1em}\sim {\int _{\mathrm{e}}^{\infty }}\frac{{(\log y)^{-1}}}{{y^{1+\exp (-{t_{1}}s)+\exp (-{t_{2}}s)}}}\mathrm{d}y={\int _{1}^{\infty }}\frac{{\mathrm{e}^{-(\exp (-{t_{1}}s)+\exp (-{t_{2}}s))y}}}{y}\mathrm{d}y\\ {} & \hspace{1em}\sim -\log \big({\mathrm{e}^{-{t_{1}}s}}+{\mathrm{e}^{-{t_{2}}s}}\big)\sim \min ({t_{1}},{t_{2}})s,\hspace{1em}s\to +\infty .\end{aligned}\]
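As a numerical sanity check of (38) (ours, not part of the proof): up to an $O(1)$ error the covariance equals the exponential integral ${E_{1}}({\mathrm{e}^{-{t_{1}}s}}+{\mathrm{e}^{-{t_{2}}s}})$, which SciPy exposes as exp1:

```python
import numpy as np
from scipy.special import exp1   # exp1(x) = int_x^infty y^{-1} e^{-y} dy

s = 40.0
for t1, t2 in [(0.5, 1.0), (1.0, 2.0), (0.25, 3.0)]:
    cov_approx = exp1(np.exp(-t1 * s) + np.exp(-t2 * s))
    print(t1, t2, cov_approx / s, "vs", min(t1, t2))   # ratios close to min(t1,t2)
```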
Proof of (39). For each $k\ge 2$ and each $s\gt 0$, ${(\log k)^{-1/2}}{k^{-1/2-\exp (-ts)}}\le {(\log 2)^{-1/2}}=:1/A$. Hence, the expression under the limit on the left-hand side of (39) does not exceed
\[ \frac{1}{s}\sum \limits_{k\ge 2}\frac{{(\log k)^{-1}}}{{k^{1+2\exp (-ts)}}}\mathbb{E}\big[{\eta ^{2}}{1_{\{|\eta |\gt A{s^{1/2}}\}}}\big].\]
As shown in the proof of (38), ${\textstyle\sum _{k\ge 2}}{(\log k)^{-1}}{k^{-1-2\exp (-ts)}}\sim ts$ as $s\to +\infty $. Further, $\mathbb{E}[{\eta ^{2}}]\lt \infty $ entails ${\lim \nolimits_{s\to +\infty }}\mathbb{E}[{\eta ^{2}}{1_{\{|\eta |\gt A{s^{1/2}}\}}}]=0$. With these at hand, (39) follows.

(b) We have to prove tightness on $C[0,T]$ (the set of continuous functions defined on $[0,T]$) for each $T\gt 0$. Since ${(B(Tt))_{t\in [0,1]}}$ has the same distribution as ${T^{1/2}}{(B(t))_{t\in [0,1]}}$, it is enough to investigate the case $T=1$ only.
Let ${M^{\ast }}:(0,\infty )\to {\mathbb{N}_{0}}$ denote a function satisfying ${\lim \nolimits_{s\to +\infty }}{M^{\ast }}(s)=+\infty $ and ${M^{\ast }}(s)=o(s)$ as $s\to +\infty $. We shall need a relation ${\sup _{k\le n}}\hspace{0.1667em}|{T_{k}}|=O({n^{1/2}})$ in probability as $n\to \infty $, which is a consequence of ${n^{-1/2}}{\max _{k\le n}}\hspace{0.1667em}|{T_{k}}|\stackrel{\mathrm{d}}{\to }{\sup _{u\in [0,1]}}|B(u)|$ as $n\to \infty $. Repeating the proof of Lemma 1, with the aforementioned limit relation replacing (7), we infer
(40)
\[ \frac{1}{{s^{1/2}}}\underset{t\in [0,1]}{\sup }\bigg|{\sum \limits_{k=2}^{{M^{\ast }}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-ts)}}}{\eta _{k}}\bigg|\to 0,\hspace{1em}s\to +\infty ,\hspace{1em}\text{in probability}.\]
From now on, we choose a particular ${M^{\ast }}$ defined as above, for instance, ${M^{\ast }}(s)=\lfloor {s^{1/2}}\rfloor $ and put $a(s):={(\log {M^{\ast }}(s))^{1/2}}$ for $s\ge 0$. Arguing as in the proof of Lemma 2 or in the analysis of ${I_{n,2}}(s)$ in the proof of Lemma 4, we obtain
(41)
\[ \underset{s\to +\infty }{\lim }\frac{1}{{s^{1/2}}}\underset{t\in [0,1]}{\sup }\sum \limits_{k\ge {M^{\ast }}(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-ts)}}}|{\eta _{k}}|{1_{\{|{\eta _{k}}|\gt {k^{1/2}}a(s)\}}}=0\hspace{1em}\text{a.s.}\]
and
(42)
\[ \underset{s\to +\infty }{\lim }\frac{1}{{s^{1/2}}}\underset{t\in [0,1]}{\sup }\sum \limits_{k\ge {M^{\ast }}(s)+1}\frac{{(\log k)^{-1/2}}}{{k^{1/2+\exp (-ts)}}}\mathbb{E}\big[|{\eta _{k}}|{1_{\{|{\eta _{k}}|\gt {k^{1/2}}a(s)\}}}\big]=0.\]
Put ${\eta _{k}^{\ast }}(s):={\eta _{k}}{1_{\{|{\eta _{k}}|\le {k^{1/2}}a(s)\}}}-\mathbb{E}[{\eta _{k}}{1_{\{|{\eta _{k}}|\le {k^{1/2}}a(s)\}}}]$ for $k\in \mathbb{N}$ and $s\gt 0$, and then put
\[ {X^{\ast }}(t,s):=\frac{1}{{s^{1/2}}}\sum \limits_{k\ge {M^{\ast }}(s)+1}\frac{{(\log k)^{-1/2}}{\eta _{k}^{\ast }}(s)}{{k^{1/2+\exp (-ts)}}},\hspace{1em}t\ge 0,\hspace{2.5pt}s\gt 0.\]
In view of (40), (41) and (42) it remains to prove tightness of the distributions of ${({X^{\ast }}(t,s))_{t\in [0,1]}}$ for large $s\gt 0$. According to formula (7.8) on p. 82 in [4], it is enough to show that, for all $\varepsilon \gt 0$,
(43)
\[ \underset{i\to \infty }{\lim }\underset{s\to \infty }{\limsup }\mathbb{P}\Big\{\underset{u,v\in [0,1],\hspace{0.1667em}|u-v|\le {2^{-i}}}{\sup }\hspace{0.1667em}|{X^{\ast }}(u,s)-{X^{\ast }}(v,s)|\gt \varepsilon \Big\}=0.\]
The proof of (43) follows closely the last part of the proof of Lemma 4. We use dyadic partitions of $[0,1]$ by points ${t_{j,\hspace{0.1667em}m}^{\ast }}:={2^{-j}}m$ for $j\in {\mathbb{N}_{0}}$ and $m=0,1,\dots ,{2^{j}}$. Similarly to the argument preceding formula (16) we infer
\[\begin{aligned}{}& \underset{u,v\in [0,1],\hspace{0.1667em}|u-v|\le {2^{-i}}}{\sup }\hspace{0.1667em}|{X^{\ast }}(u,s)-{X^{\ast }}(v,s)|\\ {} & \hspace{1em}\le \sum \limits_{j\ge i}\underset{1\le m\le {2^{j}}}{\max }\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|.\end{aligned}\]
Thus, it suffices to prove that, for all $\varepsilon \gt 0$,
\[ \underset{i\to \infty }{\lim }\underset{s\to +\infty }{\limsup }\mathbb{P}\bigg\{\sum \limits_{j\ge i}\underset{1\le m\le {2^{j}}}{\max }\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|\gt \varepsilon {s^{1/2}}\bigg\}=0.\]
Put ${a_{j}^{\ast }}:={2^{-j/2}}{j^{2}}$ for $j\in {\mathbb{N}_{0}}$. The last limit relation follows if we can show that, for all $\varepsilon \gt 0$,
\[ \underset{i\to \infty }{\lim }\underset{s\to +\infty }{\limsup }\sum \limits_{j\ge i}\mathbb{P}\Big\{\underset{1\le m\le {2^{j}}}{\max }\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|\gt \varepsilon {a_{j}^{\ast }}{s^{1/2}}\Big\}=0.\]
Denote by ${A^{\ast }}(j,s)$ and ${B^{\ast }}(u,k,s)$ the counterparts of $A(j,n)$ and $B(u,k,n)$ in the present situation. Then ${A^{\ast }}(j,s)\le {2^{-j}}s$ and ${B^{\ast }}(u,k,s)\le {\mathrm{e}^{{C_{j}^{\ast }}(u,s)}}$, where ${C_{j}^{\ast }}(u,s):=|u|{2^{-j}}s$ if ${2^{j}}\ge s$ and ${C_{j}^{\ast }}(u,s):=|u|$ if ${2^{j}}\le s$. With these at hand we obtain
\[ \mathbb{E}\big[\exp \big(\pm u\big({X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big)\big)\big]\le \exp \bigg(\frac{{2^{-j}}s{u^{2}}}{2}{\mathrm{e}^{{C_{j}^{\ast }}(u,s)}}\bigg),\hspace{1em}u\in \mathbb{R},\]
and thereupon
\[\begin{aligned}{}& \mathbb{P}\big\{\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|\gt \varepsilon {a_{j}^{\ast }}{s^{1/2}}\big\}\\ {} & \hspace{1em}\le \exp \big(-u\varepsilon {a_{j}^{\ast }}{s^{1/2}}\big)\mathbb{E}\big[\exp \big(u\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|\big)\big]\\ {} & \hspace{1em}\le 2\exp \bigg(-u\varepsilon {2^{-j/2}}{j^{2}}{s^{1/2}}+\frac{{2^{-j}}s{u^{2}}}{2}{\mathrm{e}^{{C_{j}^{\ast }}(u,s)}}\bigg).\end{aligned}\]
Put $u=\varepsilon {2^{j/2}}{s^{-1/2}}$. Then ${C_{j}^{\ast }}(u,s)\le \varepsilon $ and further
\[\begin{aligned}{}& \sum \limits_{j\ge i}\mathbb{P}\Big\{\underset{1\le m\le {2^{j}}}{\max }\big|{X^{\ast }}\big({t_{j,\hspace{0.1667em}m}^{\ast }},s\big)-{X^{\ast }}\big({t_{j,\hspace{0.1667em}m-1}^{\ast }},s\big)\big|\gt \varepsilon {a_{j}^{\ast }}{s^{1/2}}\Big\}\\ {} & \hspace{1em}\le 2\exp \big({\varepsilon ^{2}}{\mathrm{e}^{\varepsilon }}/2\big)\sum \limits_{j\ge i}{2^{j}}{\mathrm{e}^{-{\varepsilon ^{2}}{j^{2}}}}\to 0,\hspace{1em}i\to \infty .\end{aligned}\]
The proof of Theorem 1 is complete. □

5 A failure of an approach based on a strong approximation
Our proof of Theorem 2 is quite technical. Naturally we wanted to work out a less technical argument. One promising possibility has been to exploit a strong approximation result for centered standard random walks with finite variance, see, for instance, Theorem 12.6 in [7].
Lemma 7.
One can define, on an enlarged probability space, a standard Brownian motion ${(W(x))_{x\ge 0}}$ such that
\[ |{T_{n}}-W(n)|=o\big({(n\log \log n)^{1/2}}\big),\hspace{1em}n\to \infty \hspace{1em}\textit{a.s.}\]
Implicit in the preceding proofs is the fact that the principal contribution to the LIL is made by the fragment of the original series which has the variance comparable to the variance of the full series. More precisely, one may reduce attention to the sum ${\textstyle\sum _{k={N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}$, where ${N_{1}}$ and ${N_{2}}$ are positive integer-valued functions satisfying ${\lim \nolimits_{s\to 0+}}(\log \log {N_{1}}(s)/\log 1/s)=0$, ${\lim \nolimits_{s\to 0+}}s\log {N_{2}}(s)=0$ and
(44)
\[ \underset{s\to 0+}{\lim }\frac{\log \log {N_{2}}(s)}{\log 1/s}=1.\]
The reason is that
\[ \underset{s\to 0+}{\lim }\frac{\mathrm{Var}\big[{\textstyle\sum _{k={N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log k)^{-1/2}}}{{k^{1/2+s}}}{\eta _{k}}\big]}{\mathrm{Var}\hspace{0.1667em}\big[{X_{-1/2}}(s)\big]}=1.\]
We hoped it would be possible to work with Gaussian random variables given by ${\textstyle\int _{{N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log x)^{-1/2}}}{{x^{1/2+s}}}\mathrm{d}W(x)$ in place of ${\textstyle\int _{{N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log x)^{-1/2}}}{{x^{1/2+s}}}\mathrm{d}{T_{\lfloor x\rfloor }}$. Unfortunately, Lemma 7 does not seem to secure such a possibility. Indeed, after integrating by parts we intended to show that
\[ \underset{s\to 0+}{\lim }f(s){\int _{{N_{1}}(s)}^{{N_{2}}(s)}}\frac{|{T_{\lfloor x\rfloor }}-W(x)|}{{(\log x)^{1/2}}{x^{3/2+s}}}\mathrm{d}x=0\hspace{1em}\text{a.s.}\]
In view of Lemma 7 the latter would be a consequence of
(45)
\[ \underset{s\to 0+}{\lim }f(s){\int _{{N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log \log x)^{1/2}}}{{(\log x)^{1/2}}{x^{1+s}}}\mathrm{d}x=0.\]
Since
\[\begin{aligned}{}{\int _{{N_{1}}(s)}^{{N_{2}}(s)}}\frac{{(\log \log x)^{1/2}}}{{(\log x)^{1/2}}{x^{1+s}}}\mathrm{d}x& =\frac{1}{{s^{1/2}}}{\int _{s\log {N_{1}}(s)}^{s\log {N_{2}}(s)}}\frac{{(\log 1/s+\log z)^{1/2}}{\mathrm{e}^{-z}}}{{z^{1/2}}}\mathrm{d}z\\ {} & \le \frac{{(\log \log {N_{2}}(s))^{1/2}}}{{s^{1/2}}}{\int _{0}^{s\log {N_{2}}(s)}}\frac{{\mathrm{e}^{-z}}}{{z^{1/2}}}\mathrm{d}z\\ {} & \sim 2{\big(\log {N_{2}}(s)\log \log {N_{2}}(s)\big)^{1/2}},\hspace{1em}s\to 0+,\end{aligned}\]
the validity of (45) would require ${\lim \nolimits_{s\to 0+}}(\log {N_{2}}(s)/\log 1/s)=0$, which is incompatible with (44).

We note in passing that another strong approximation of ${({T_{n}})_{n\ge 0}}$, this time by sums of independent Gaussian random variables, has a better error, of the order of a big O of square root, see [8]. Revisiting the calculation above reveals that this approximation does not seem to help either.