Modern Stochastics: Theory and Applications



Strong laws of large numbers for lightly trimmed sums of generalized Oppenheim expansions
Rita Giuliano, Milto Hadjikyriakou 1

https://doi.org/10.15559/25-VMSTA272
Pub. online: 13 February 2025      Type: Research Article      Open Access

1 Part of this work was conducted while the author was a visiting scholar at the University of Cyprus.

Received
29 July 2024
Revised
29 January 2025
Accepted
29 January 2025
Published
13 February 2025

Abstract

In the framework of generalized Oppenheim expansions, almost sure convergence results for lightly trimmed sums are proven. First, a particular class of expansions is identified for which a convergence result is proven assuming that only the largest summand is deleted from the sum; this result generalizes a strong law recently proven for the Lüroth digits and also covers some new cases that have never been studied before. Next, any assumptions concerning the structure of the Oppenheim expansions are dropped and a result concerning trimmed sums is proven when at least two summands are trimmed; combining this latter theorem with the asymptotic behavior of the r-th maximum term of the expansion, a convergence result is obtained for the case in which only the largest summand is deleted from the sum.

1 Introduction

The framework of this work has been introduced and generalized in [7] and [8], respectively, and is described as follows: let ${({B_{n}})_{n\ge 1}}$ be a sequence of integer-valued random variables defined on $(\Omega ,\mathcal{A},P)$, where $\Omega =[0,1]$, $\mathcal{A}$ is the σ-algebra of the Borel subsets of $[0,1]$ and P is the Lebesgue measure on $[0,1]$. Let $\{{F_{n}},n\ge 1\}$ be a sequence of probability distribution functions with ${F_{n}}(0)=0$, ${F_{n}}(1)=1$ $\forall n$ and, moreover, let ${\varphi _{n}}:{\mathbb{N}^{\ast }}\to {\mathbb{R}^{+}}$ be a sequence of functions. Furthermore, let ${({q_{n}})_{n\ge 1}}$ with ${q_{n}}={q_{n}}({h_{1}},\dots ,{h_{n}})$ be a sequence of nonnegative numbers (i.e. possibly depending on the n integers ${h_{1}},\dots ,{h_{n}}$) such that, for ${h_{1}}\ge 1$ and ${h_{j}}\ge {\varphi _{j-1}}({h_{j-1}})$, $j=2,\dots ,n$, we have
\[ P({B_{n+1}}={h_{n+1}}|{B_{n}}={h_{n}},\dots ,{B_{1}}={h_{1}})={F_{n}}({\beta _{n}})-{F_{n}}({\alpha _{n}}),\]
where
\[\begin{aligned}{}& {\alpha _{n}}={\delta _{n}}({h_{n}},{h_{n+1}}+1,{q_{n}}),\hspace{1em}{\beta _{n}}={\delta _{n}}({h_{n}},{h_{n+1}},{q_{n}})\\ {} & \hspace{1em}\text{with}\hspace{2.5pt}{\delta _{j}}(h,k,q)=\frac{{\varphi _{j}}(h)(1+q)}{k+{\varphi _{j}}(h)q}.\end{aligned}\]
Let ${Q_{n}}={q_{n}}({B_{1}},\dots ,{B_{n}})$, and define
\[ {R_{n}}=\frac{{B_{n+1}}+{\varphi _{n}}({B_{n}}){Q_{n}}}{{\varphi _{n}}({B_{n}})(1+{Q_{n}})}=\frac{1}{{\delta _{n}}({B_{n}},{B_{n+1}},{Q_{n}})}.\]
In [8] (see Lemma 3 there) it has been proven that for any integer n and $x\ge 1$,
\[ P({R_{n}}\gt x)\le {F_{n}}\bigg(\frac{1}{x}\bigg),\]
which implies that if ${U_{n}}$ is a random variable with distribution ${F_{n}}$ and ${Y_{n}}=\frac{1}{{U_{n}}}$ for every integer n, then
\[ P({R_{n}}\gt x)\le P({Y_{n}}\gt x),\hspace{1em}\forall x\ge 1,\]
i.e. the sequence ${({R_{n}})_{n\ge 1}}$ is stochastically dominated by the sequence ${({Y_{n}})_{n\ge 1}}$.
Since the random variables ${({R_{n}})_{n\ge 1}}$, in general, are not independent and do not have finite expectations, a traditional strong law for the quantity $\frac{1}{{a_{n}}}{\textstyle\sum _{i=1}^{n}}{R_{i}}$ cannot be proven. However, in [7], under some conditions on the involved distributions, the convergence in probability of $\frac{1}{n\log n}{\textstyle\sum _{i=1}^{n}}{R_{i}}$ is established. This result raises the question of whether a strong law of large numbers can be proven after deleting finitely many of the largest summands from the partial sums. Specifically, let r be a fixed integer. We are interested in studying the almost sure convergence of
\[ \frac{{^{(r)}}{S_{n}}}{n\log n},\]
where ${^{(r)}}{S_{n}}={\textstyle\sum _{i=1}^{n}}{R_{i}}-{\textstyle\sum _{k=1}^{r}}{M_{n}^{(k)}}$ and ${M_{n}^{(r)}}$ denotes the r-th maximum of ${R_{1}},\dots ,{R_{n}}$ (in decreasing order, i.e. ${M_{n}^{(1)}}$ denotes the maximum). In the literature, the sequence ${({^{(r)}}{S_{n}})_{n\ge 1}}$ is known as the lightly trimmed sum process. Note that in the case where r is replaced by a sequence ${({r_{n}})_{n\ge 1}}$ such that ${r_{n}}\to \infty $ and ${r_{n}}/n\to 0$ as $n\to \infty $ we have the so-called moderate trimming, while in the case where ${r_{n}}/n\to c\in (0,1)$ the resulting sequence is said to be heavily trimmed. For more details we refer the interested reader to [3] (and references therein). Convergence results for moderately trimmed sums of Oppenheim expansions can be found in [9].
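In computational terms, light trimming simply removes the r largest of the first n summands before normalizing. A minimal numerical sketch (the helper name `trimmed_sum` is ours, not from the paper):

```python
def trimmed_sum(xs, r):
    """Lightly trimmed sum: the total of xs minus its r largest entries."""
    if r >= len(xs):
        return 0.0
    # sorted(xs)[-r:] are the r largest summands, i.e. M^(1), ..., M^(r).
    return sum(xs) - sum(sorted(xs)[-r:])

# Example with r = 1: only the single largest summand is deleted.
print(trimmed_sum([3.0, 100.0, 2.0, 5.0], 1))  # 10.0
```

For heavy-tailed summands such as the ${R_{i}}$ here, deleting even one maximum can change the normalization needed for an almost sure limit, which is the phenomenon the paper studies.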
The problem of trimming has been extensively studied in the literature: we cite here [12, 11, 10, 4, 5, 3], where i.i.d. sequences are considered, and [1], which studies stationary sequences. It is worth stressing that generic Oppenheim expansions, besides not being independent, are not stationary either, a fact that highlights the novelty of the current work.
The structure of the paper is as follows. In Section 2 we state the main results of this paper, i.e. Theorems 1, 2 and 3. The first result is a strong law for the lightly trimmed sum process in the case $r=1$ and concerns a special class of Oppenheim expansions. It is worth mentioning that Theorem 1 covers the Lüroth, Engel and Sylvester sequences of digits (already studied in the literature) but also concerns some new examples, leading to asymptotic results that have never appeared in the literature before; see Remark 1 for details. Theorems 2 and 3 address the general case, i.e. we impose no condition on the sequence of expansions under consideration. Theorem 2 is an asymptotic result for $r\ge 2$ which is instrumental for proving another asymptotic law for $r=1$, namely Theorem 3; the latter, though weaker than Theorem 1, is proven under more general conditions, since all assumptions on the structure of the Oppenheim expansion are dropped and more relaxed conditions are imposed on the involved distribution functions. The special class of expansions mentioned above is studied in greater detail in Section 3, where we provide some preliminary results that will be utilized in the proof of Theorem 1. A detailed proof of Theorem 1 is given in Section 4. Section 5 contains some preliminaries for the proofs of Theorems 2 and 3, which are contained in Section 6.

2 The main results

In this section we state the main results of this paper.

2.1 A strong law

The first result presented (Theorem 1) is a strong law for the trimmed sums of a special class of generalized Oppenheim expansions (discussed also in the subsequent Proposition 4) in the case where only the maximum term is excluded, i.e. we provide an a.s. convergence result for ${^{(1)}}{S_{n}}$.
We call a sequence $\Lambda ={({\lambda _{j}})_{j\in \mathbb{N}}}$ good if it is strictly increasing and tends to $+\infty $, with ${\lambda _{j}}\ge 1$ for every $j\ge 1$ and ${\lambda _{0}}=0$.
Theorem 1.
Consider the random variables ${({R_{n}})_{n\ge 1}}$ and assume that there exists a good sequence $\Lambda ={({\lambda _{j}})_{j\in \mathbb{N}}}$ such that, for every $x\in \Lambda $ and for every n, $x{\varphi _{n}}({B_{n}})+(x-1){Q_{n}}{\varphi _{n}}({B_{n}})$ is an integer. Moreover, assume the following:
  • (i)
    \[ \underset{n}{\sup }({\lambda _{n+1}}-{\lambda _{n}})=\ell \lt +\infty ;\]
  • (ii) ${F_{n}}\equiv F$ for all integers n and there exists a constant $\alpha \gt 0$ such that
    (1)
    \[ \underset{t\to 0}{\lim }\frac{F(t)}{t}=\alpha .\]
Then
\[ \frac{{S_{n}}-{M_{n}^{(1)}}}{n\log n}\to \alpha \hspace{1em}\textit{a.s.},\]
where ${S_{n}}={\textstyle\sum _{i=1}^{n}}{R_{i}}$ and ${M_{n}^{(1)}}=\max \{{R_{1}},\dots ,{R_{n}}\}$.
Remark 1.
It is important to identify functions ${\varphi _{n}}$ for which the conditions imposed in Theorem 1 are satisfied. First, recall that the notation ${q_{n}}$ stands for the sequence of nonnegative numbers such that ${q_{n}}({B_{1}},\dots ,{B_{n}})={Q_{n}}$. As a first example, consider positive integers ${a_{1}},\dots ,{a_{p}}$ and assume that
\[ {\varphi _{kp+j-1}}=1/{a_{j}},\hspace{1em}\text{for}\hspace{2.5pt}k\in \mathbb{N},\hspace{2.5pt}j=1,\dots ,p.\]
Define $\kappa =\mathrm{lcm}({a_{1}},\dots ,{a_{p}})$ and $\Lambda ={(\kappa n)_{n\ge 1}}$ and assume that ${q_{n}}\equiv {c_{n}}$, where ${({c_{n}})_{n\ge 1}}$ is a sequence of positive numbers chosen from the set Λ. Then, for any $x\in \Lambda $,
\[ x{\varphi _{n}}({B_{n}})+(x-1){Q_{n}}{\varphi _{n}}({B_{n}})\]
is an integer.
Moreover, the conditions of the theorem are satisfied if $\Lambda ={\mathbb{N}^{\ast }}$, ${q_{n}}\equiv 0$ and ${\varphi _{n}}(h)={\textstyle\sum _{k=1}^{m}}{h^{k}}$ for some integer m. Note that for $m=1$ we get the corresponding φ function for the Engel series, while for $m=2$ we have the Sylvester expansion. If ${\varphi _{n}}(h)={\textstyle\sum _{k=0}^{m}}{h^{k}}$, the case $m=0$ covers the Lüroth case (see [7] for details). Notice that Theorem 3 of [6] also covers these three cases (see Remark 6) and [2] studies the Lüroth case, but our theorem is more general than the ones provided in [6] and [2] because we make no assumptions on the involved distributions.
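As a concrete illustration of the Lüroth case (${\varphi _{n}}\equiv 1$, ${q_{n}}\equiv 0$), the digit sequence can be generated by the classical Lüroth algorithm. The sketch below is ours (the helper name `luroth_digits` is not from the paper); exact rational arithmetic is used to avoid floating-point digit errors, and finite expansions of rationals simply terminate:

```python
from fractions import Fraction
from math import ceil

def luroth_digits(x, n_max=20):
    """First Lüroth digits b_n >= 2 of a rational x in (0,1):
    x = 1/b1 + 1/(b1(b1-1) b2) + 1/(b1(b1-1) b2(b2-1) b3) + ...
    """
    x = Fraction(x)
    digits = []
    for _ in range(n_max):
        if x <= 0:
            break
        b = ceil(1 / x)                  # the unique b with 1/b <= x < 1/(b-1)
        digits.append(b)
        x = b * (b - 1) * x - (b - 1)    # Lüroth shift map applied to x
    return digits

print(luroth_digits(Fraction(3, 10)))    # [4, 2, 5]
```

Indeed $\frac{3}{10}=\frac{1}{4}+\frac{1}{4\cdot 3\cdot 2}+\frac{1}{4\cdot 3\cdot 2\cdot 1\cdot 5}$.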
The proof of Theorem 1 is given in Section 4.

2.2 A general upper bound

In Theorem 1 we consider the case in which only the largest summand is deleted from the sum of Oppenheim random variables that satisfy a specific condition. Although Theorem 1 covers a large subclass of Oppenheim random variables and well-known expansions, we are interested in studying whether convergence can be established in the general framework. To this end, we drop any assumption concerning the structure of the Oppenheim expansions and present an upper bound concerning trimmed sums when at least two summands are trimmed, i.e. in Theorem 2 we provide conditions under which convergence is established for the trimmed partial sums of any generalized Oppenheim expansion.
As a direct consequence of Theorem 2 we obtain another asymptotic result (Theorem 3) for the particular case $r=1$. The fact that Theorems 2 and 3 are proven without imposing any constraints on the structure of the Oppenheim random variables leads to convergence to zero rather than to a positive constant; as a result, Theorems 2 and 3 are most probably far from optimal. Our aim is merely to obtain some information and, if possible, to indicate a new line of research; see the discussion in Remark 5.
We start with some notation and state some assumptions that are crucial for obtaining the desired results. For every integer n we denote ${m_{n}}=\lfloor {\log _{2}}n\rfloor $ (where $\lfloor x\rfloor $ is the greatest integer less than or equal to x); for a given positive increasing function h we set ${t_{n}^{(h)}}=h({2^{{m_{n}}}})$, while ϕ will denote a fixed positive function such that:
  • (A1) for some $\beta \in [0,1)$ we have
    \[ \underset{x\to \infty }{\limsup }\frac{\phi (x)}{{\log ^{\beta }}x}\lt \infty ;\]
  • (A2) $x\mapsto \frac{\log x}{\phi (x)}$ is ultimately nondecreasing.
Let
\[ {\beta _{0}}=\inf \big\{\beta \ge 0:\hspace{2.5pt}\text{(A1) holds}\big\}\]
and define
(2)
\[ {r_{{\beta _{0}}}}=\min \bigg\{r\in \mathbb{N}:r\gt \frac{1}{1-{\beta _{0}}}\bigg\}=\bigg\lfloor \frac{1}{1-{\beta _{0}}}\bigg\rfloor +1.\]
Remark 2.
Obviously ${r_{{\beta _{0}}}}\ge 2$. If $\phi (x)=o({\log ^{\beta }}x)$ for every $\beta \in [0,1)$, then ${\beta _{0}}=0$ and ${r_{{\beta _{0}}}}=2$. Examples are $\phi (x)=\log \log \cdots \log x$ or any bounded ϕ.
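The closed form in (2), $\min \{r\in \mathbb{N}:r\gt \frac{1}{1-{\beta _{0}}}\}=\lfloor \frac{1}{1-{\beta _{0}}}\rfloor +1$, can be checked numerically; the throwaway sketch below (the function name `r_beta0` is ours) evaluates it for a few values of ${\beta _{0}}$:

```python
from math import floor

def r_beta0(beta0):
    """Smallest integer r with r > 1/(1 - beta0), via the closed form (2)."""
    return floor(1 / (1 - beta0)) + 1

# beta0 = 0 (e.g. bounded phi) gives the minimal trimming level r = 2;
# larger beta0 forces more terms to be trimmed.
for b in (0.0, 0.5, 0.6, 0.75):
    print(b, r_beta0(b))
```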
Remark 3.
Observe that assumption (A1) implies that
(3)
\[ {\sum \limits_{k=1}^{\infty }}{\bigg(\frac{\phi ({2^{k}})}{k}\bigg)^{r}}\lt \infty \hspace{1em}\forall r\ge {r_{{\beta _{0}}}}.\]
We shall denote ${h_{0}}(x)=\frac{x\log x}{\phi (x)}$ and
(4)
\[ {a_{n}}:={h_{0}}(n)=\frac{n\log n}{\phi (n)}.\]
Theorem 2.
Consider the random variables ${({R_{n}})_{n\ge 1}}$ and assume that for the involved distribution functions ${({F_{n}})_{n\ge 1}}$ the following condition is satisfied:
(5)
\[ \underset{n\ge 1}{\sup }\underset{x\to 0}{\limsup }\frac{{F_{n}}(x)}{x}\lt \infty .\]
Then, for every $p\gt 2$ and for every $r\ge {r_{{\beta _{0}}}}$,
\[ \underset{n\to \infty }{\lim }\frac{{^{(r)}}{S_{n}}}{{a_{n}^{p}}}=0,\hspace{1em}P\textit{-a.s.}\]
where ${a_{n}}$ is defined in (4) and ${r_{{\beta _{0}}}}$ in (2).
Theorem 3.
Let the assumptions of Theorem 2 hold. Then, for every $p\gt 2$,
\[ \frac{{S_{n}}-{M_{n}^{(1)}}}{{(n\log n)^{p}}}\phi {(n)^{p}}\to 0,\hspace{1em}P\textit{-a.s.}\]
Remark 4.
As mentioned at the beginning of this subsection, although the last result is weaker than the one presented in Subsection 2.1 (the convergence is to zero and not to a positive constant), it is obtained without imposing any conditions on the structure of the random variables ${R_{n}}$. Moreover, the involved variables are not assumed to follow the same law, and condition (1) is relaxed to condition (5).
Remark 5.
Notice that in Theorem 2 (and Theorem 3) we can take any $p\gt 2$ and $\phi (x)={\log ^{\gamma }}x$ with any $\gamma \lt 1$. If Theorem 2 were true also for $\gamma =1$ and $p=2$, we would get that $\frac{{^{(r)}}{S_{n}}}{{n^{2}}}\to 0$. This observation may suggest that the “correct” normalizing sequence for ${^{(r)}}{S_{n}}$, i.e. a sequence ${q_{n}^{(r)}}$ with $\frac{{^{(r)}}{S_{n}}}{{q_{n}^{(r)}}}\to 1$, might be some ${q_{n}^{(r)}}={o_{r}}({n^{2}})$ (at least in some instances), but at present this remains an open problem.

3 Preliminaries for Theorem 1

For the random variables ${({R_{n}})_{n\ge 1}}$ defined above, the following two relations were proven (see relation (5) of Lemma 2 and Lemma 3, respectively, in [8]): for $x,y\ge 1$ and $m\lt n$,
(6)
\[ \begin{aligned}{}& P({R_{m}}\gt x)=E\bigg[{F_{m}}\bigg(\frac{{\varphi _{m}}({B_{m}})(1+{Q_{m}})}{\lceil x{\varphi _{m}}({B_{m}})+(x-1){Q_{m}}{\varphi _{m}}({B_{m}})\rceil +{Q_{m}}{\varphi _{m}}({B_{m}})}\bigg)\bigg],\\ {} & P({R_{m}}\gt x,{R_{n}}\gt y)\\ {} & \hspace{1em}=E\bigg[{I_{({R_{m}}\gt x)}}{F_{n}}\bigg(\frac{{\varphi _{n}}({B_{n}})(1+{Q_{n}})}{\lceil y{\varphi _{n}}({B_{n}})+(y-1){Q_{n}}{\varphi _{n}}({B_{n}})\rceil +{Q_{n}}{\varphi _{n}}({B_{n}})}\bigg)\bigg],\end{aligned}\]
where $\lceil x\rceil $ denotes the least integer greater than or equal to x. The following proposition is then immediate.
Proposition 1.
For the random variables ${({R_{n}})_{n\ge 1}}$, the following results hold true:
  • (a) Assume that $x\ge 1$ and $m\in \mathbb{N}$ are such that $x{\varphi _{m}}({B_{m}})+(x-1){Q_{m}}{\varphi _{m}}({B_{m}})$ is an integer. Then
    \[ P({R_{m}}\gt x)={F_{m}}\bigg(\frac{1}{x}\bigg).\]
  • (b) Assume in addition that $y\ge 1$ and $n\in \mathbb{N}$ are such that $y{\varphi _{n}}({B_{n}})+(y-1){Q_{n}}{\varphi _{n}}({B_{n}})$ is an integer. Then
    \[ P({R_{m}}\gt x,{R_{n}}\gt y)={F_{m}}\bigg(\frac{1}{x}\bigg){F_{n}}\bigg(\frac{1}{y}\bigg).\]
The following result provides a generalization of relation (6) which will be used for obtaining Proposition 3.
Proposition 2.
Consider the random variables ${({R_{n}})_{n\ge 1}}$ and assume that ${x_{i}}\ge 1$, $\forall i=1,2,\dots ,n$. Then,
\[\begin{aligned}{}& P({R_{1}}\gt {x_{1}},\dots ,{R_{n}}\gt {x_{n}})\\ {} & \hspace{1em}=E\bigg[\hspace{-0.1667em}{F_{n}}\bigg(\hspace{-0.1667em}\frac{{\varphi _{n}}({B_{n}})(1+{Q_{n}})}{\lceil {x_{n}}{\varphi _{n}}({B_{n}})+({x_{n}}-1){Q_{n}}{\varphi _{n}}({B_{n}})\rceil +{\varphi _{n}}({B_{n}}){Q_{n}}}\hspace{-0.1667em}\bigg){I_{({R_{1}}\gt {x_{1}},\dots ,{R_{n-1}}\gt {x_{n-1}})}}\hspace{-0.1667em}\bigg].\end{aligned}\]
Proof.
The proof follows by applying steps similar to those in the case $n=2$ (Lemmas 2 and 3 in [8]).  □
Proposition 3.
Consider the random variables ${({R_{n}})_{n\ge 1}}$. Then, for every integer n and for every finite set of numbers ${x_{i}}\ge 1$, $i=1,2,\dots ,n$, such that ${x_{k}}{\varphi _{k}}({B_{k}})+({x_{k}}-1){Q_{k}}{\varphi _{k}}({B_{k}})$ is an integer for every $k=1,2,\dots ,n$, we have
\[ P({R_{1}}\gt {x_{1}},\dots ,{R_{n}}\gt {x_{n}})={F_{1}}\bigg(\frac{1}{{x_{1}}}\bigg)\dots {F_{n}}\bigg(\frac{1}{{x_{n}}}\bigg).\]
Proof.
The result follows by induction. The case $n=2$ is discussed in Proposition 1. Assume that the statement is true for $n-1$. Then by Proposition 2 we can write
\[\begin{aligned}{}& P({R_{1}}\gt {x_{1}},\dots ,{R_{n}}\gt {x_{n}})\\ {} & \hspace{1em}=E\bigg[\hspace{-0.1667em}{F_{n}}\bigg(\frac{{\varphi _{n}}({B_{n}})(1+{Q_{n}})}{\lceil {x_{n}}{\varphi _{n}}({B_{n}})+({x_{n}}-1){Q_{n}}{\varphi _{n}}({B_{n}})\rceil +{\varphi _{n}}({B_{n}}){Q_{n}}}\bigg){I_{({R_{1}}\gt {x_{1}},\dots ,{R_{n-1}}\gt {x_{n-1}})}}\bigg]\\ {} & \hspace{1em}={F_{n}}\bigg(\frac{1}{{x_{n}}}\bigg)P({R_{1}}\gt {x_{1}},\dots ,{R_{n-1}}\gt {x_{n-1}})\end{aligned}\]
which leads to the conclusion, by the induction hypothesis.  □
Let $\Lambda ={({\lambda _{j}})_{j\in \mathbb{N}}}$ be a good sequence (defined in Subsection 2.1) and, for $u\in [1,+\infty )$, let ${j_{u}}$ be the unique integer such that ${\lambda _{{j_{u}}-1}}\lt u\le {\lambda _{{j_{u}}}}$ (i.e. ${\lambda _{{j_{u}}}}$ is the minimum element of Λ larger than or equal to u).
The proposition that follows is a “key” result for obtaining the convergence theorem of this section: by employing a subclass of Oppenheim expansions that satisfies a particular condition, we define a sequence of discrete random variables that is proven to consist of independent random variables whose densities can be computed explicitly.
Proposition 4.
Consider the random variables ${({R_{n}})_{n\ge 1}}$ and assume that there exists a good sequence Λ such that, for every $x\in \Lambda $ and for every n, $x{\varphi _{n}}({B_{n}})+(x-1){Q_{n}}{\varphi _{n}}({B_{n}})$ is an integer. For every n, denote
(7)
\[ {T_{n}}={\lambda _{{j_{{R_{n}}}}}}.\]
Then ${T_{n}}$ takes values in Λ, and the sequence ${({T_{n}})_{n\ge 1}}$ consists of independent random variables. Moreover, the discrete density of ${T_{n}}$ is given by the formula
\[ P({T_{n}}={\lambda _{s}})={F_{n}}\bigg(\frac{1}{{\lambda _{s-1}}}\bigg)-{F_{n}}\bigg(\frac{1}{{\lambda _{s}}}\bigg),\hspace{1em}s\in {\mathbb{N}^{\ast }},\]
with the convention ${F_{n}}(1/{\lambda _{0}})=1$.
Proof.
Observe the relation ${\lambda _{{j_{r}}}}\gt {\lambda _{n}}\Leftrightarrow r\gt {\lambda _{n}}$, for any integer n. Thus, for every finite set of integers $\{{i_{1}},\dots ,{i_{k}}\}$ and for every finite set of integers $\{{n_{{i_{1}}}},\dots ,{n_{{i_{k}}}}\}$ we have
\[\begin{aligned}{}P({T_{{i_{1}}}}\gt {\lambda _{{n_{{i_{1}}}}}},\dots ,{T_{{i_{k}}}}\gt {\lambda _{{n_{{i_{k}}}}}})& =P({R_{{i_{1}}}}\gt {\lambda _{{n_{{i_{1}}}}}},\dots ,{R_{{i_{k}}}}\gt {\lambda _{{n_{{i_{k}}}}}})\\ {} & ={F_{{i_{1}}}}\bigg(\frac{1}{{\lambda _{{n_{{i_{1}}}}}}}\bigg)\cdot \cdots \cdot {F_{{i_{k}}}}\bigg(\frac{1}{{\lambda _{{n_{{i_{k}}}}}}}\bigg),\end{aligned}\]
which proves the independence of the random variables ${({T_{n}})_{n}}$. For the density, note that, for every integer $s\in {\mathbb{N}^{\ast }}$, we have
\[ P({T_{n}}={\lambda _{s}})=P({T_{n}}\gt {\lambda _{s-1}})-P({T_{n}}\gt {\lambda _{s}})={F_{n}}\bigg(\frac{1}{{\lambda _{s-1}}}\bigg)-{F_{n}}\bigg(\frac{1}{{\lambda _{s}}}\bigg).\]
 □
Remark 6.
The result above is a generalization of Theorem 3 in Galambos [6], in which ${q_{n}}=0$, $\Lambda =\mathbb{N}$ and ${F_{n}}(x)=F(x)=x{I_{[0,1]}}$.
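In this uniform setting the density of Proposition 4 reduces to $F(\frac{1}{s-1})-F(\frac{1}{s})=\frac{1}{s(s-1)}$ for $s\ge 2$ (with mass 0 at $s=1$), which telescopes to total mass 1; this is exactly the classical Lüroth digit law. A quick numerical sanity check of the telescoping (our own throwaway code, exact arithmetic):

```python
from fractions import Fraction

# Density of T_n in the uniform/Luroth setting: P(T = s) = 1/(s(s-1)), s >= 2.
# The partial mass up to s = N telescopes to 1 - 1/N.
mass = sum(Fraction(1, s * (s - 1)) for s in range(2, 10001))
print(mass)  # 9999/10000: the missing tail 1/10000 is exactly P(T > 10000)
```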
Lastly, we present without proof a result that is part of Theorem 1 in [12]; it is instrumental for the proof of the convergence result we are interested in.
Theorem 4.
Let ${({X_{n}})_{n\ge 1}}$ be a sequence of i.i.d. random variables and let ${^{(r)}}{S_{n}}$ denote the n-th sample sum with the r largest terms removed. Let A be an absolutely continuous increasing function defined on $[0,+\infty )$, with $A(0)=0$, satisfying
  • (i) $\frac{A(x)}{{x^{\frac{1}{\alpha }}}}$ is nondecreasing for some $\alpha \in (0,2)$,
  • (ii) ${\sup _{x\gt 0}}\frac{A(2x)}{A(x)}\lt \infty $,
and let B be its inverse function. For every $s\gt 0$, denote
\[ {J_{s}}={\int _{1}^{\infty }}{\big[P({X_{1}}\gt x)\big]^{s}}\mathrm{d}{B^{s}}(x)\]
and assume that ${J_{r+1}}\lt +\infty $; then there exists a sequence ${({c_{n}})_{n\in \mathbb{N}}}$ of numbers such that
\[ \underset{n\to \infty }{\lim }\frac{{^{(r)}}{S_{n}}}{A(n)}-{c_{n}}=0,\hspace{1em}P\textit{-a.s.}\]
Moreover, the constants ${c_{n}}$ can be chosen to be
\[ {c_{n}}=\frac{n}{A(n)}{\int _{|x|\le \tau A(n)}}x\mathrm{d}\big(P({X_{1}}\le x)\big),\]
where $\tau \gt 0$ is an arbitrary constant.

4 Proof of Theorem 1

By utilizing the results proved in Section 3, we are now ready to prove Theorem 1 presented in Section 2.
Proof.
Following the notation introduced in (7), let ${T_{n}}={\lambda _{{j_{{R_{n}}}}}}$ and define ${\tilde{M}_{n}^{(1)}}=\max \{{T_{1}},\dots ,{T_{n}}\}$. Then,
\[ {T_{n}}-\ell \le {R_{n}}\le {T_{n}}\hspace{1em}\text{and}\hspace{1em}{\tilde{M}_{n}^{(1)}}-\ell \le {M_{n}^{(1)}}\le {\tilde{M}_{n}^{(1)}}.\]
Thus,
\[ \frac{{\textstyle\textstyle\sum _{k=1}^{n}}{T_{k}}-{\tilde{M}_{n}^{(1)}}-\ell n}{n\log n}\le \frac{{S_{n}}-{M_{n}^{(1)}}}{n\log n}\le \frac{{\textstyle\textstyle\sum _{k=1}^{n}}{T_{k}}-({\tilde{M}_{n}^{(1)}}-\ell )}{n\log n},\]
so it is sufficient to study the convergence of
\[ \frac{{\textstyle\textstyle\sum _{k=1}^{n}}{T_{k}}-{\tilde{M}_{n}^{(1)}}}{n\log n}.\]
By Proposition 4, the sequence ${({T_{n}})_{n\ge 1}}$ defined in (7) consists of independent random variables, which, by assumption (ii) (${F_{n}}\equiv F$), are also identically distributed; therefore Theorem 4 can be employed. To this end, since we are interested in the case $r=1$ and $A(x)=x\log x$, first we have to check that
\[ {J_{2}}={\int _{1}^{\infty }}{\big[P({T_{1}}\gt x)\big]^{2}}\mathrm{d}{B^{2}}(x)\lt \infty ,\]
where $B(x)$ is the inverse of $A(x)=x\log x$.
Note that $\mathrm{d}{B^{2}}(x)=2B(x){B^{\prime }}(x)\mathrm{d}x$ while $P({T_{1}}\gt x)=F(\frac{1}{x})$ (by Proposition 4). Hence, due to (1), we have that
\[ {J_{2}}={\int _{1}^{\infty }}{F^{2}}\bigg(\frac{1}{x}\bigg)2B(x){B^{\prime }}(x)\mathrm{d}x\le {C_{1}}+{C_{2}}{\int _{1}^{\infty }}\bigg(\frac{1}{{x^{2}}}\bigg)B(x){B^{\prime }}(x)\mathrm{d}x.\]
Now use the change of variables $B(x)=y$; since $x=A(y)$ and $\mathrm{d}y={B^{\prime }}(x)\mathrm{d}x$, we have that
\[ {J_{2}}\le {C_{1}}+{C_{2}}{\int _{B(1)}^{\infty }}\frac{y}{{A^{2}}(y)}\mathrm{d}y={C_{1}}+\frac{{C_{2}}}{\log B(1)}\lt \infty .\]
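Here $\frac{y}{{A^{2}}(y)}=\frac{1}{y{\log ^{2}}y}$ has antiderivative $-\frac{1}{\log y}$, and $B(1)$ is simply the unique $y\gt 1$ with $y\log y=1$. A small bisection sketch (our own throwaway code) confirms that $B(1)\gt 1$, hence $\log B(1)\gt 0$ and the bound is a finite number:

```python
from math import log

def A(y):
    """The normalizing function A(y) = y log y used in the proof."""
    return y * log(y)

# Locate B(1), the inverse of A at 1, by bisection on [1, 2].
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if A(mid) < 1.0 else (lo, mid)
B1 = 0.5 * (lo + hi)

# B1 is approximately 1.763 and 1/log(B1) is finite, as the proof requires.
print(B1, 1 / log(B1))
```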
Hence, by Theorem 4, there is ${c_{n}}$ such that as $n\to \infty $
\[ \frac{{\textstyle\textstyle\sum _{k=1}^{n}}{T_{k}}-{\tilde{M}_{n}^{(1)}}}{n\log n}-{c_{n}}\to 0,\hspace{1em}P\text{-a.s.},\]
where
\[\begin{aligned}{}{c_{n}}& =\frac{n}{A(n)}{\int _{1}^{A(n)}}x\mathrm{d}\big(P({T_{1}}\le x)\big)=-\frac{1}{\log n}{\int _{1}^{n\log n}}x\mathrm{d}\big(P({T_{1}}\gt x)\big)\\ {} & =-\frac{1}{\log n}{\int _{1}^{n\log n}}x\mathrm{d}F\bigg(\frac{1}{x}\bigg).\end{aligned}\]
Using integration by parts we have that
\[ {c_{n}}=-\frac{1}{\log n}\bigg(n\log nF\bigg(\frac{1}{n\log n}\bigg)-1\bigg)+\frac{1}{\log n}{\int _{1}^{n\log n}}F\bigg(\frac{1}{x}\bigg)\mathrm{d}x\]
which can be equivalently written as
\[ {c_{n}}=-\frac{1}{\log n}\bigg(\frac{F(\frac{1}{n\log n})}{\frac{1}{n\log n}}\bigg)+\frac{1}{\log n}+\frac{1}{\log n}{\int _{1}^{n\log n}}F\bigg(\frac{1}{x}\bigg)\mathrm{d}x={I_{1}}+{I_{2}}+{I_{3}}.\]
Obviously, ${I_{2}}\to 0$ and, by employing (1), ${I_{1}}\to 0$ as $n\to \infty $. For ${I_{3}}$, we start by observing that
\[ {\int _{1}^{n\log n}}F\bigg(\frac{1}{x}\bigg)\mathrm{d}x={\int _{1}^{\frac{1}{n\log n}}}-\frac{F(y)}{{y^{2}}}\mathrm{d}y={\int _{\frac{1}{n\log n}}^{1}}\frac{F(y)}{{y^{2}}}\mathrm{d}y.\]
By (1), for fixed $\epsilon \gt 0$ let $\delta \in (0,1)$ be such that, for $y\in (0,\delta )$,
\[ \alpha -\epsilon \le \frac{F(y)}{y}\le \alpha +\epsilon \]
and let ${n_{0}}$ be sufficiently large so that $\frac{1}{n\log n}\lt \delta $ for all $n\ge {n_{0}}$. Then
\[ {\int _{\frac{1}{n\log n}}^{\delta }}\frac{(\alpha -\epsilon )}{y}\mathrm{d}y\lt {\int _{\frac{1}{n\log n}}^{\delta }}\frac{F(y)}{{y^{2}}}\mathrm{d}y\lt {\int _{\frac{1}{n\log n}}^{\delta }}\frac{(\alpha +\epsilon )}{y}\mathrm{d}y\]
which leads to
\[\begin{aligned}{}(\alpha -\epsilon )\log \delta -(\alpha -\epsilon )\log \bigg(\frac{1}{n\log n}\bigg)& \lt {\int _{\frac{1}{n\log n}}^{\delta }}\frac{F(y)}{{y^{2}}}\mathrm{d}y\\ {} & \lt (\alpha +\epsilon )\log \delta -(\alpha +\epsilon )\log \bigg(\frac{1}{n\log n}\bigg).\end{aligned}\]
Hence, due to the arbitrariness of ϵ,
(8)
\[ {\int _{\frac{1}{n\log n}}^{\delta }}\frac{F(y)}{{y^{2}}}\mathrm{d}y\sim \alpha \big(\log n+\log (\log n)\big).\]
Moreover, for $n\to \infty $,
(9)
\[ \frac{1}{\log n}{\int _{\delta }^{1}}\frac{F(y)}{{y^{2}}}\mathrm{d}y\to 0.\]
Relations (8) and (9) together give that ${I_{3}}\to \alpha $ as $n\to \infty $. This concludes the proof.  □

5 Preliminaries for Theorems 2 and 3

Before proving Theorems 2 and 3, we prove some preliminary results.
Lemma 1.
Consider the random variables ${({R_{n}})_{n\ge 1}}$, and let the related distributions ${({F_{n}})_{n\ge 1}}$ satisfy assumption (5). Moreover, assume that h is a positive increasing function with the following property: there exists an integer ρ (that depends on h) such that
(10)
\[ {\sum \limits_{k=1}^{\infty }}{\bigg(\frac{{2^{k}}}{h({2^{k}})}\bigg)^{r}}={\sum \limits_{k=1}^{\infty }}{\bigg(\frac{{2^{k}}}{{t_{{2^{k}}}^{(h)}}}\bigg)^{r}}\lt \infty ,\hspace{1em}\forall r\ge \rho .\]
Then, for $r\ge \rho $, we have
\[ P\big({M_{n}^{(r)}}\gt {t_{n}^{(h)}}\hspace{0.1667em}\hspace{0.1667em}i.o.\big)=0.\]
Proof.
In the proof we write simply ${t_{n}}$ in place of ${t_{n}^{(h)}}$. Let $r\ge \rho $. For any integer $j\ge 0$ we define the event
\[ {A_{j}}=\big\{{R_{i}}\gt {t_{{2^{j}}}}\hspace{2.5pt}\text{for at least}\hspace{2.5pt}r\hspace{2.5pt}\text{indices such that}\hspace{2.5pt}i\lt {2^{j+1}}\big\}\]
and, for any integer $n\ge 1$,
\[ {B_{n}}=\big\{{M_{n}^{(r)}}\gt {t_{n}}\big\}.\]
Let j be fixed and note that, for every n such that ${2^{j}}\le n\lt {2^{j+1}}$, we have ${B_{n}}\subseteq {A_{j}}$; thus
\[ \bigcup \limits_{\{n:{m_{n}}=j\}}{B_{n}}\subseteq {A_{j}},\]
which implies that
\[\begin{aligned}{}\big\{{M_{n}^{(r)}}\gt {t_{n}}\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.\big\}& =\bigcap \limits_{s}\bigcup \limits_{n\ge s}{B_{n}}\subseteq \bigcap \limits_{s}\bigcup \limits_{\{n:{m_{n}}\ge {m_{s}}\}}{B_{n}}=\bigcap \limits_{s}\bigcup \limits_{j\ge {m_{s}}}\bigg(\bigcup \limits_{\{n:{m_{n}}=j\}}{B_{n}}\bigg)\\ {} & \subseteq \bigcap \limits_{s}\bigcup \limits_{j\ge {m_{s}}}{A_{j}}=\bigcap \limits_{k}\bigcup \limits_{j\ge k}{A_{j}}=\{{A_{j}}\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.\}.\end{aligned}\]
The first “⊆” holds true since for $s\ge 1$ we have that $\{n:n\ge s\}\subset \{n:{m_{n}}\ge {m_{s}}\}$, while the third equality is valid based on the observation
\[ {\bigcap \limits_{s=1}^{\infty }}\bigg(\bigcup \limits_{j\ge {m_{s}}}{A_{j}}\bigg)\hspace{-0.1667em}=\hspace{-0.1667em}{\bigcap \limits_{k=0}^{\infty }}\Bigg\{{\bigcap \limits_{s={2^{k}}}^{{2^{k+1}}-1}}\bigg(\bigcup \limits_{j\ge {m_{s}}}{A_{j}}\bigg)\Bigg\}\hspace{-0.1667em}=\hspace{-0.1667em}{\bigcap \limits_{k=0}^{\infty }}\Bigg\{{\bigcap \limits_{s={2^{k}}}^{{2^{k+1}}-1}}\bigg(\bigcup \limits_{j\ge k}{A_{j}}\bigg)\Bigg\}\hspace{-0.1667em}=\hspace{-0.1667em}{\bigcap \limits_{k=0}^{\infty }}\bigg(\bigcup \limits_{j\ge k}{A_{j}}\bigg).\]
Now, for every integer k,
\[\begin{aligned}{}P({A_{k}})& \le \sum \limits_{1\le {i_{1}}\lt {i_{2}}\lt \cdots \lt {i_{r}}\lt {2^{k+1}}}P({R_{{i_{1}}}}\gt {t_{{2^{k}}}},\dots ,{R_{{i_{r}}}}\gt {t_{{2^{k}}}})\\ {} & \le \sum \limits_{1\le {i_{1}}\lt {i_{2}}\lt \cdots \lt {i_{r}}\lt {2^{k+1}}}{\prod \limits_{j=1}^{r}}{F_{{i_{j}}}}\bigg(\frac{1}{{t_{{2^{k}}}}}\bigg)\\ {} & \le C\sum \limits_{1\le {i_{1}}\lt {i_{2}}\lt \cdots \lt {i_{r}}\lt {2^{k+1}}}\frac{1}{{({t_{{2^{k}}}})^{r}}}\le C\left(\genfrac{}{}{0pt}{}{{2^{k+1}}}{r}\right)\frac{1}{{({t_{{2^{k}}}})^{r}}}\le C{\bigg(\frac{{2^{k}}}{{t_{{2^{k}}}}}\bigg)^{r}},\end{aligned}\]
where the second inequality is due to Proposition 3 and the third one to condition (5). Thus,
\[ {\sum \limits_{k=1}^{\infty }}P({A_{k}})\le C{\sum \limits_{k=1}^{\infty }}{\bigg(\frac{{2^{k}}}{{t_{{2^{k}}}}}\bigg)^{r}},\]
which is finite because of (10). The result follows by the Borel–Cantelli lemma.  □
Remark 7.
Lemma 1 is satisfied in particular by ${h_{0}}$, since for this function we can take $\rho ={r_{{\beta _{0}}}}$ and
\[ \sum \limits_{k}{\bigg(\frac{{2^{k}}}{{h_{0}}({2^{k}})}\bigg)^{r}}\sim \sum \limits_{k}{\bigg(\frac{\phi ({2^{k}})}{k}\bigg)^{r}}\lt \infty ,\hspace{1em}r\ge {r_{{\beta _{0}}}},\]
by (3).
The corollary that follows studies the asymptotic behavior of the r-th maximum term of the Oppenheim expansion.
Corollary 1.
For every $r\ge {r_{{\beta _{0}}}}$ we have
\[ \underset{n}{\lim }\frac{{M_{n}^{(r)}}}{{a_{n}}}=0,\hspace{1em}P\textit{-a.s.}\]
Proof.
We prove that, for every $\varepsilon \gt 0$,
\[ P\big({M_{n}^{(r)}}\gt \varepsilon {a_{n}}\hspace{0.1667em}\hspace{0.1667em}\text{i.o.}\big)=0,\hspace{1em}\forall r\ge {r_{{\beta _{0}}}}.\]
Assuming that $\varepsilon \lt 1$ (which is not restrictive), this can be derived from Lemma 1 by choosing $h(x)=\varepsilon {h_{0}}(x)$ for $x\ge 1$ (recall Remark 7), since for this function we easily obtain ${t_{n}^{(h)}}\le \varepsilon {a_{n}}$ due to (A2), and therefore
(11)
\[ P\big({M_{n}^{(r)}}\gt \varepsilon {a_{n}}\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.\big)\le P\big({M_{n}^{(r)}}\gt {t_{n}^{(h)}}\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.\big).\]
The conclusion follows by observing that
\[ {\sum \limits_{m=1}^{\infty }}{\bigg(\frac{{2^{m}}}{{t_{{2^{m}}}^{(h)}}}\bigg)^{r}}\le {\sum \limits_{m=1}^{\infty }}{\bigg(\frac{\phi ({2^{m}})}{m}\bigg)^{r}}\lt \infty ,\hspace{1em}r\ge {r_{{\beta _{0}}}}.\]
 □
Lemma 2.
Assume the conditions of Lemma 1 and, additionally, $h(x)\ge x$ ultimately. For every m, denote by ${N_{m}}$ the number of indices $j\lt {2^{m+1}}$ for which ${R_{j}}\gt {t_{{2^{m}}}^{(h)}}$. Then, for every integer s such that $s\ge r$,
\[ P({N_{{m_{n}}}}\ge s\hspace{0.1667em}\hspace{0.1667em}i.o.)=0.\]
Proof.
Observe that
\[ P({N_{{m_{n}}}}\ge s\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.)\le P\big({M_{n}^{(s)}}\ge {t_{n}^{(h)}}\hspace{0.1667em}\hspace{0.1667em}\mathrm{i}.\mathrm{o}.\big),\]
and we can apply the Borel–Cantelli lemma because of Lemma 1: in fact, ultimately,
\[ {\bigg(\frac{{2^{m}}}{h({2^{m}})}\bigg)^{s}}\le {\bigg(\frac{{2^{m}}}{h({2^{m}})}\bigg)^{r}},\]
whence
\[ {\sum \limits_{m=1}^{\infty }}{\bigg(\frac{{2^{m}}}{h({2^{m}})}\bigg)^{s}}\lt \infty ,\]
due to Lemma 1.  □
Lemma 3.
Let $h(x)=x{\log ^{\alpha }}x$, $\alpha \gt 0$, and ${t_{n}^{(h)}}=h({2^{{m_{n}}}})={2^{{m_{n}}}}{(\log {2^{{m_{n}}}})^{\alpha }}$. Let $p,q\gt 0$ be fixed with $p\gt 2+\frac{1}{2q}$. Then the series
\[ \sum \limits_{n}\frac{{(\phi (n))^{2pq}}}{{n^{2qp-2q+1}}{(\log n)^{2pq}}}{\sum \limits_{j=1}^{n}}{\big({t_{j}^{(h)}}\big)^{2q}}\]
converges.
Proof.
For the sake of simplicity, we shall drop the superscript and write ${t_{n}}$ in place of ${t_{n}^{(h)}}$. Observe that
\[ {\sum \limits_{j=1}^{n}}{t_{j}^{2q}}\hspace{-0.1667em}\le \hspace{-0.1667em}{\sum \limits_{k=0}^{{m_{n}}+1}}\Bigg({\sum \limits_{j={2^{k}}}^{{2^{k+1}}-1}}{h^{2q}}\big({2^{k}}\big)\Bigg)\hspace{-0.1667em}=\hspace{-0.1667em}{\sum \limits_{k=0}^{{m_{n}}+1}}{2^{k(2q+1)}}{\big(\log {2^{k}}\big)^{2\alpha q}}=C{\sum \limits_{k=0}^{{m_{n}}+1}}{2^{k(2q+1)}}{k^{2\alpha q}}.\]
By an application of the Cesàro theorem, it is not difficult to see that
\[ {\sum \limits_{k=0}^{N}}{2^{k(2q+1)}}{k^{2\alpha q}}\sim C\cdot {2^{N(2q+1)}}{N^{2\alpha q}},\hspace{1em}N\to \infty .\]
Hence, ultimately,
\[ {\sum \limits_{j=1}^{n}}{t_{j}^{2q}}\le C\cdot {2^{(2q+1){m_{n}}}}{m_{n}^{2\alpha q}}\le C\cdot {2^{(2q+1){\log _{2}}n}}{({\log _{2}}n)^{2\alpha q}}=C\cdot {n^{2q+1}}{(\log n)^{2q\alpha }},\]
and
\[\begin{aligned}{}\frac{{(\phi (n))^{2pq}}}{{n^{2qp-2q+1}}{(\log n)^{2qp}}}{\sum \limits_{j=1}^{n}}{t_{j}^{2q}}& \le \frac{C{(\phi (n))^{2pq}}}{{n^{2pq-4q}}{(\log n)^{2qp-2q\alpha }}}\\ {} & \le \frac{C}{{n^{2pq-4q}}{(\log n)^{2qp-2q\alpha -2\beta pq}}}\end{aligned}\]
due to (A1). The claim follows by known results on the Bertrand series.  □
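The Bertrand series criterion invoked here states that ${\textstyle\sum _{n}}{n^{-a}}{(\log n)^{-b}}$ converges if and only if $a\gt 1$, or $a=1$ and $b\gt 1$. In the present case the exponent of n is
\[ 2pq-4q=2q(p-2)\gt 1,\]
by the choice $p\gt 2+\frac{1}{2q}$, so the series converges irrespective of the logarithmic exponent.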
Corollary 2.
Let ${({t_{n}^{(h)}})_{n\ge 1}}$ be the sequence defined in Lemma 3 and consider the random variables ${({R_{n}})_{n\ge 1}}$. Then, for every $p\gt 2$,
\[ \frac{1}{{a_{n}^{p}}}{\sum \limits_{j=1}^{n}}{R_{j}}I\big({R_{j}}\le {t_{j}^{(h)}}\big)\to 0,\hspace{1em}P\textit{-a.s.}\]
Proof.
We write ${t_{n}}$ in place of ${t_{n}^{(h)}}$; we set ${R^{\prime }_{j}}={R_{j}}I({R_{j}}\le {t_{j}})$ and ${S^{\prime }_{n}}={\textstyle\sum _{j=1}^{n}}{R^{\prime }_{j}}$. Let $p\gt 2$ and q be large enough so that $2q(p-2)\gt 1$ (which means $p\gt 2+\frac{1}{2q}$). Then
\[\begin{aligned}{}P\big(|{S^{\prime }_{n}}-{a_{n}}|\ge \varepsilon {a_{n}^{p}}\big)& =P\big(|{S^{\prime }_{n}}-{a_{n}}{|^{2q}}\ge {\varepsilon ^{2q}}{a_{n}^{2pq}}\big)\\ {} & \le \frac{1}{{\varepsilon ^{2q}}{a_{n}^{2pq}}}E\big[{\big({S^{\prime }_{n}}-{a_{n}}\big)^{2q}}\big]\\ {} & \le \frac{{2^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2pq}}}E\big[{\big({S^{\prime }_{n}}\big)^{2q}}\big]+\frac{{2^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2q(p-1)}}}\\ {} & =\frac{{2^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2pq}}}E\Bigg[{\Bigg({\sum \limits_{j=1}^{n}}{R^{\prime }_{j}}\Bigg)^{2q}}\Bigg]+\frac{{2^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2q(p-1)}}}\\ {} & \le \frac{{2^{2q-1}}{n^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2pq}}}\Bigg({\sum \limits_{j=1}^{n}}E\big[{\big({R^{\prime }_{j}}\big)^{2q}}\big]\Bigg)+\frac{{2^{2q-1}}}{{\varepsilon ^{2q}}{a_{n}^{2q(p-1)}}}\\ {} & \le \frac{{2^{2q-1}}}{{\varepsilon ^{2q}}}\Bigg(\frac{{(\phi (n))^{2pq}}}{{n^{2qp-2q+1}}{(\log n)^{2pq}}}{\sum \limits_{j=1}^{n}}{t_{j}^{2q}}+\frac{{(\phi (n))^{2q(p-1)}}}{{(n\log n)^{2q(p-1)}}}\Bigg).\end{aligned}\]
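For completeness, we record the two elementary inequalities driving this chain of estimates: convexity of $t\mapsto {t^{2q}}$ yields, for nonnegative reals,
\[ {(x+y)^{2q}}\le {2^{2q-1}}\big({x^{2q}}+{y^{2q}}\big)\hspace{1em}\text{and}\hspace{1em}{\Bigg({\sum \limits_{j=1}^{n}}{x_{j}}\Bigg)^{2q}}\le {n^{2q-1}}{\sum \limits_{j=1}^{n}}{x_{j}^{2q}},\]
while the bound $E[{({R^{\prime }_{j}})^{2q}}]\le {t_{j}^{2q}}$ follows at once from ${R^{\prime }_{j}}\le {t_{j}}$.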
The result follows by applying the Borel–Cantelli lemma, since
\[ \sum \limits_{n}\frac{{(\phi (n))^{2pq}}}{{n^{2qp-2q+1}}{(\log n)^{2pq}}}{\sum \limits_{j=1}^{n}}{t_{j}^{2q}}\]
converges by Lemma 3, and
\[ \sum \limits_{n}\frac{{(\phi (n))^{2q(p-1)}}}{{(n\log n)^{2q(p-1)}}}\le \sum \limits_{n}\frac{1}{{n^{2q(p-1)}}{(\log n)^{2q(p-1)(1-\beta )}}}\]
converges, since $2q(p-1)\gt 2q(p-2)\gt 1$.  □

6 The proofs of Theorems 2 and 3

Using the results proven in the previous section, we are now ready to prove Theorems 2 and 3. We naturally begin with Theorem 2, which serves as the starting point for the proof of Theorem 3.
Proof.
The proof is motivated by the proof of Theorem 1 in [12]. In detail: since ${r_{{\beta _{0}}}}\gt \frac{1}{1-{\beta _{0}}}$, there exists β satisfying assumption (A2) such that ${r_{{\beta _{0}}}}\gt \frac{1}{1-\beta }$; take $\alpha \in (\frac{1}{{r_{{\beta _{0}}}}},1-\beta )$ and set $h(x)=x{\log ^{\alpha }}x$, for $x\ge 1$. Then $\alpha {r_{{\beta _{0}}}}\gt 1$ and Lemma 2 can be applied to h. Recall the notation used before, i.e.
\[ {t_{n}}=h\big({2^{{m_{n}}}}\big),\hspace{1em}{R^{\prime }_{n}}={R_{n}}I({R_{n}}\le {t_{n}}),\hspace{1em}{a_{n}}=\frac{n\log n}{\phi (n)}\hspace{1em}\text{and}\hspace{1em}{S^{\prime }_{n}}={\sum \limits_{j=1}^{n}}{R^{\prime }_{j}}.\]
Furthermore, for every $\varepsilon \gt 0$, put
\[ {S_{n}}(\varepsilon )={\sum \limits_{j=1}^{n}}{R_{j}}I({R_{j}}\le \varepsilon {a_{n}}).\]
Since
\[ {t_{n}}=h\big({2^{{m_{n}}}}\big)\le h\big({2^{{\log _{2}}n}}\big)=h(n)=n{\log ^{\alpha }}n,\]
and recalling that $\alpha \lt 1-\beta $, we have that
\[ \underset{n\to \infty }{\lim }\frac{{t_{n}}}{{a_{n}}}=0.\]
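Indeed, with ${a_{n}}=\frac{n\log n}{\phi (n)}$ and the bound $\phi (n)\le C{(\log n)^{\beta }}$ encoded in (A1) (already used in Lemma 3), we get
\[ \frac{{t_{n}}}{{a_{n}}}\le \frac{n{\log ^{\alpha }}n}{n\log n}\,\phi (n)\le C{(\log n)^{\alpha +\beta -1}}\to 0,\]
since $\alpha \lt 1-\beta $.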
Thus, for fixed $\varepsilon \gt 0$, we can take n sufficiently large such that ${t_{n}}\lt \varepsilon {a_{n}}$. Then
\[\begin{aligned}{}\big|{S_{n}}(\varepsilon )-{S^{\prime }_{n}}\big|& =\Bigg|{\sum \limits_{j=1}^{n}}{R_{j}}I({t_{j}}\lt {R_{j}}\le \varepsilon {a_{n}})\Bigg|\le \varepsilon {a_{n}}{N_{{m_{n}}}}+{\sum \limits_{k=1}^{{m_{n}}}}\varepsilon {a_{{2^{{m_{n}}-k+1}}}}{N_{{m_{n}}-k}}\\ {} & \le \varepsilon {a_{n}}\Bigg({N_{{m_{n}}}}\hspace{-0.1667em}+\hspace{-0.1667em}{\sum \limits_{k=1}^{{m_{n}}}}{\bigg(\frac{1}{2}\bigg)^{k-1}}{N_{{m_{n}}-k}}\Bigg)\le \varepsilon {a_{n}^{p}}\Bigg({N_{{m_{n}}}}\hspace{-0.1667em}+\hspace{-0.1667em}{\sum \limits_{k=1}^{{m_{n}}}}{\bigg(\frac{1}{2}\bigg)^{k-1}}\hspace{-0.1667em}{N_{{m_{n}}-k}}\Bigg)\\ {} & \le \varepsilon {a_{n}^{p}}{N_{{m_{n}}}}\Bigg(1+{\sum \limits_{k=1}^{{m_{n}}}}{\bigg(\frac{1}{2}\bigg)^{k-1}}\Bigg),\end{aligned}\]
where the third relation is due to the inequality $\frac{{a_{n}}}{{a_{2n}}}\le \frac{1}{2}$. Take any $s\ge r$; then, by Lemma 2, we obtain, ultimately,
\[ \big|{S_{n}}(\varepsilon )-{S^{\prime }_{n}}\big|\le \varepsilon {a_{n}^{p}}s\Bigg(1+{\sum \limits_{k=1}^{{m_{n}}}}{\bigg(\frac{1}{2}\bigg)^{k-1}}\Bigg)\le 3\varepsilon {a_{n}^{p}}s,\hspace{1em}P\text{-a.s.}\]
Moreover,
\[\begin{aligned}{}\big|{S_{n}}(\varepsilon )-{^{(r)}}{S_{n}}\big|& =\Bigg|{\sum \limits_{j=1}^{n}}{R_{j}}I({R_{j}}\gt \varepsilon {a_{n}})-{\sum \limits_{k=2}^{r}}{M_{n}^{(k)}}\Bigg|\\ {} & \le {\sum \limits_{j=1}^{n}}{R_{j}}I({R_{j}}\gt \varepsilon {a_{n}})+{\sum \limits_{k=2}^{r}}{M_{n}^{(k)}}.\end{aligned}\]
By (11), the first summand remains bounded for sufficiently large n, while Corollary 1 ensures that $\frac{1}{{a_{n}}}{\textstyle\sum _{k=2}^{r}}{M_{n}^{(k)}}\to 0$. Then, as $n\to \infty $,
\[ {a_{n}^{-1}}\big|{S_{n}}(\varepsilon )-{^{(r)}}{S_{n}}\big|\to 0\]
and so, ultimately, $|{S_{n}}(\varepsilon )-{^{(r)}}{S_{n}}|\le \varepsilon {a_{n}}$. Finally,
\[ \big|{^{(r)}}{S_{n}}-{S^{\prime }_{n}}\big|\le \big|{S_{n}}(\varepsilon )-{S^{\prime }_{n}}\big|+\big|{S_{n}}(\varepsilon )-{^{(r)}}{S_{n}}\big|\le \varepsilon {a_{n}^{p}}(3s+1)\]
and Corollary 2 gives the conclusion, by the arbitrariness of ε.  □
Now we give the proof of Theorem 3.
Proof.
First, observe that for any $r\ge {r_{{\beta _{0}}}}\ge 2$,
\[ \frac{{S_{n}}-{M_{n}^{(1)}}}{{(n\log n)^{p}}}\phi {(n)^{p}}=\frac{{^{(r)}}{S_{n}}}{{(n\log n)^{p}}}\phi {(n)^{p}}+{\sum \limits_{k=2}^{r}}\frac{{M_{n}^{(k)}}}{{(n\log n)^{p}}}\phi {(n)^{p}}.\]
The convergence of each term on the right-hand side is established by Theorem 2 and Corollary 1, respectively.  □

Acknowledgement

The authors wish to thank an anonymous referee for his/her suggestions and corrections, which have led to a better version of the paper.

References

[1] 
Aaronson, J., Nakada, H.: Trimmed sums for non-negative, mixing stationary processes. Stoch. Process. Appl. 104(2), 173–192 (2003). MR1961618. https://doi.org/10.1016/S0304-4149(02)00236-3
[2] 
Athreya, J.S., Athreya, K.B.: Extrema of Lüroth digits and a zeta function limit relation. Integers 21, A96 (2021)
[3] 
Berkes, I., Horváth, L., Schauer, J.: Asymptotic behavior of trimmed sums. Stoch. Dyn. 12(01), 1150002 (2012). MR2887914. https://doi.org/10.1142/S0219493712003602
[4] 
Csörgő, S., Simons, G.: A strong law of large numbers for trimmed sums, with applications to generalized St. Petersburg games. Stat. Probab. Lett. 26(1), 65–73 (1996). MR1385664. https://doi.org/10.1016/0167-7152(94)00253-3
[5] 
Einmahl, J., Haeusler, E., Mason, D.: On the relationship between the almost sure stability of weighted empirical distributions and sums of order statistics. Probab. Theory Relat. Fields 79(1), 59–74 (1988). MR0952994. https://doi.org/10.1007/BF00319104
[6] 
Galambos, J.: Further ergodic results on the Oppenheim series. Q. J. Math. 25(1), 135–141 (1974). MR0347759. https://doi.org/10.1093/qmath/25.1.135
[7] 
Giuliano, R.: Convergence results for Oppenheim expansions. Monatshefte Math. 187(3), 509–530 (2018). MR3858429. https://doi.org/10.1007/s00605-017-1126-y
[8] 
Giuliano, R., Hadjikyriakou, M.: On exact laws of large numbers for Oppenheim expansions with infinite mean. J. Theor. Probab. 34, 1579–1606 (2021). MR4289895. https://doi.org/10.1007/s10959-020-01010-3
[9] 
Giuliano, R., Hadjikyriakou, M.: Intermediately trimmed sums of Oppenheim expansions: a strong law. arXiv preprint arXiv:2310.00669 (2023)
[10] 
Maller, R.: Relative stability of trimmed sums. Z. Wahrscheinlichkeitstheor. Verw. Geb. 66, 61–80 (1984)
[11] 
Mori, T.: The strong law of large numbers when extreme terms are excluded from sums. Z. Wahrscheinlichkeitstheor. Verw. Geb. 36(3), 189–194 (1976). MR0423494. https://doi.org/10.1007/BF00532544
[12] 
Mori, T.: Stability for sums of iid random variables when extreme terms are excluded. Z. Wahrscheinlichkeitstheor. Verw. Geb. 40(2), 159–167 (1977). MR0458542. https://doi.org/10.1007/BF00532880