1 Introduction
Let $({\xi _{1}},{\eta _{1}})$, $({\xi _{2}},{\eta _{2}}),\dots $ be independent copies of an ${\mathbb{N}^{2}}$-valued random vector $(\xi ,\eta )$ with arbitrarily dependent components. Denote by ${({\Pi _{k}})_{k\in {\mathbb{N}_{0}}}}$ (as usual, ${\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$) the standard multiplicative random walk defined by
\[ {\Pi _{0}}:=1,\hspace{1em}{\Pi _{k}}={\xi _{1}}\cdot {\xi _{2}}\cdots {\xi _{k}},\hspace{1em}k\in \mathbb{N}.\]
A multiplicative perturbed random walk is the sequence ${({\Theta _{k}})_{k\in \mathbb{N}}}$ given by
\[ {\Theta _{k}}:={\Pi _{k-1}}\cdot {\eta _{k}}={\xi _{1}}\cdot {\xi _{2}}\cdots {\xi _{k-1}}\cdot {\eta _{k}},\hspace{1em}k\in \mathbb{N}.\]
Note that if $\mathbb{P}\{\eta =\xi \}=1$, then ${\Pi _{k}}={\Theta _{k}}$ for all $k\in \mathbb{N}$. If $\mathbb{P}\{\xi =1\}=1$, then ${({\Theta _{k}})_{k\in \mathbb{N}}}$ is just a sequence of independent copies of a random variable η. In this article we investigate some arithmetic properties of the random sets ${({\Pi _{k}})_{k\in \mathbb{N}}}$ and ${({\Theta _{k}})_{k\in \mathbb{N}}}$. To set the scene, we introduce first some necessary notation. Let $\mathcal{P}$ denote the set of prime numbers. For an integer $n\in \mathbb{N}$ and $p\in \mathcal{P}$, let ${\lambda _{p}}(n)$ denote the multiplicity of prime p in the prime decomposition of n, that is,
\[ n=\prod \limits_{p\in \mathcal{P}}{p^{{\lambda _{p}}(n)}},\hspace{1em}n\in \mathbb{N}.\]
For every $p\in \mathcal{P}$, the function ${\lambda _{p}}:\mathbb{N}\mapsto {\mathbb{N}_{0}}$ is totally additive in the sense that
\[ {\lambda _{p}}(mn)={\lambda _{p}}(m)+{\lambda _{p}}(n),\hspace{1em}p\in \mathcal{P},\hspace{1em}m,n\in \mathbb{N}.\]
The set of functions ${({\lambda _{p}})_{p\in \mathcal{P}}}$ is a basic building block from which many other arithmetic functions can be constructed. For example, with $\mathrm{GCD}\hspace{0.1667em}(A)$ and $\mathrm{LCM}\hspace{0.1667em}(A)$ denoting the greatest common divisor and the least common multiple of a set $A\subset \mathbb{N}$, respectively, we have
\[ \mathrm{GCD}\hspace{0.1667em}(A)=\prod \limits_{p\in \mathcal{P}}{p^{{\min _{a\in A}}{\lambda _{p}}(a)}}\hspace{1em}\text{and}\hspace{1em}\mathrm{LCM}\hspace{0.1667em}(A)=\prod \limits_{p\in \mathcal{P}}{p^{{\max _{a\in A}}{\lambda _{p}}(a)}}.\]
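The following short Python sketch is not part of the original exposition; it is a minimal illustration (with an arbitrary toy set A and a hypothetical helper lambda_p) of how the prime multiplicities determine the greatest common divisor and the least common multiple, as in the display above.
\begin{verbatim}
import math
from functools import reduce

def lambda_p(n: int, p: int) -> int:
    # multiplicity of the prime p in the factorization of n
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

A = [12, 90, 8]          # a toy finite set of positive integers
primes = [2, 3, 5, 7]    # contains every prime dividing an element of A

gcd_A = reduce(math.gcd, A)
lcm_A = reduce(math.lcm, A)   # math.lcm requires Python >= 3.9
for p in primes:
    assert lambda_p(gcd_A, p) == min(lambda_p(a, p) for a in A)
    assert lambda_p(lcm_A, p) == max(lambda_p(a, p) for a in A)
print(gcd_A, lcm_A)      # 2 360
\end{verbatim}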
The listed arithmetic functions applied either to $A=\{{\Pi _{1}},\dots ,{\Pi _{n}}\}$ or $A=\{{\Theta _{1}},\dots ,{\Theta _{n}}\}$ are the main objects of investigation in the present paper. From the additivity of ${\lambda _{p}}$ we infer
(1)
\[ {S_{k}}(p):={\lambda _{p}}({\Pi _{k}})={\sum \limits_{j=1}^{k}}{\lambda _{p}}({\xi _{j}}),\hspace{1em}p\in \mathcal{P},\hspace{1em}k\in {\mathbb{N}_{0}},\]
and
(2)
\[ {T_{k}}(p):={\lambda _{p}}({\Theta _{k}})={\sum \limits_{j=1}^{k-1}}{\lambda _{p}}({\xi _{j}})+{\lambda _{p}}({\eta _{k}}),\hspace{1em}p\in \mathcal{P},\hspace{1em}k\in \mathbb{N}.\]
Fix any $p\in \mathcal{P}$. Formulae (1) and (2) demonstrate that $S(p):={({S_{k}}(p))_{k\in {\mathbb{N}_{0}}}}$ is a standard additive random walk with the generic step ${\lambda _{p}}(\xi )$, whereas the sequence $T(p):={({T_{k}}(p))_{k\in \mathbb{N}}}$ is a particular instance of an additive perturbed random walk, see [6], generated by the pair $({\lambda _{p}}(\xi ),{\lambda _{p}}(\eta ))$.
2 Main results
2.1 Distributional properties of the prime counts $({\lambda _{p}}(\xi ),{\lambda _{p}}(\eta ))$
As is suggested by (1) and (2), the first step in the analysis of $S(p)$ and $T(p)$ should be the derivation of the joint distribution of ${({\lambda _{p}}(\xi ),{\lambda _{p}}(\eta ))_{p\in \mathcal{P}}}$. The next lemma confirms that the finite-dimensional distributions of the infinite vector ${({\lambda _{p}}(\xi ),{\lambda _{p}}(\eta ))_{p\in \mathcal{P}}}$ are expressible via the probability mass function of $(\xi ,\eta )$. However, the obtained formulae are not easy to handle except in some special cases. For $i,j\in \mathbb{N}$, put
\[ {w_{i,j}}:=\mathbb{P}\{\xi =i,\eta =j\}.\]
Lemma 1.
Fix $p\in \mathcal{P}$ and nonnegative integers ${({k_{q}})_{q\in \mathcal{P},q\le p}}$ and ${({\ell _{q}})_{q\in \mathcal{P},q\le p}}$. Then
\[ \mathbb{P}\big\{{\lambda _{q}}(\xi )\ge {k_{q}},{\lambda _{q}}(\eta )\ge {\ell _{q}},q\in \mathcal{P},q\le p\big\}={\sum \limits_{i,j=1}^{\infty }}{w_{Ki,Lj}},\]
where $K:={\textstyle\prod _{q\le p,q\in \mathcal{P}}}{q^{{k_{q}}}}$ and $L:={\textstyle\prod _{q\le p,q\in \mathcal{P}}}{q^{{\ell _{q}}}}$.
Proof.
This follows from
\[\begin{aligned}{}& \mathbb{P}\big\{{\lambda _{q}}(\xi )\ge {k_{q}},{\lambda _{q}}(\eta )\ge {\ell _{q}},q\in \mathcal{P},q\le p\big\}\\ {} & \hspace{1em}=\mathbb{P}\bigg\{\prod \limits_{q\le p,q\in \mathcal{P}}{q^{{k_{q}}}}\hspace{2.5pt}\text{divides}\hspace{2.5pt}\xi ,\prod \limits_{q\le p,q\in \mathcal{P}}{q^{{\ell _{q}}}}\hspace{2.5pt}\text{divides}\hspace{2.5pt}\eta \bigg\}={\sum \limits_{i,j=1}^{\infty }}{w_{Ki,Lj}}.\end{aligned}\]
Obviously, if ξ and η are independent, then
\[ \mathbb{P}\big\{{\lambda _{q}}(\xi )\ge {k_{q}},{\lambda _{q}}(\eta )\ge {\ell _{q}},q\in \mathcal{P},q\le p\big\}=\mathbb{P}\{K\hspace{2.5pt}\text{divides}\hspace{2.5pt}\xi \}\cdot \mathbb{P}\{L\hspace{2.5pt}\text{divides}\hspace{2.5pt}\eta \}.\]
 □
We proceed with a series of examples.
Example 1.
For $\alpha \gt 1$, let $\mathbb{P}\{\xi =k\}={(\zeta (\alpha ))^{-1}}{k^{-\alpha }}$, $k\in \mathbb{N}$, where ζ is the Riemann zeta-function. For $k\in \mathbb{N}$, ${p_{1}},\dots ,{p_{k}}\in \mathcal{P}$ and ${j_{1}},\dots ,{j_{k}}\in {\mathbb{N}_{0}}$ we have
\[\begin{aligned}{}& \mathbb{P}\big\{{\lambda _{{p_{1}}}}(\xi )\ge {j_{1}},\dots ,{\lambda _{{p_{k}}}}(\xi )\ge {j_{k}}\big\}=\mathbb{P}\big\{{p_{1}^{{j_{1}}}}\cdots {p_{k}^{{j_{k}}}}\hspace{2.5pt}\text{divides}\hspace{2.5pt}\xi \big\}\\ {} & \hspace{1em}={\sum \limits_{i=1}^{\infty }}\mathbb{P}\big\{\xi =\big({p_{1}^{{j_{1}}}}\cdots {p_{k}^{{j_{k}}}}\big)i\big\}={\big({p_{1}^{{j_{1}}}}\cdots {p_{k}^{{j_{k}}}}\big)^{-\alpha }}={p_{1}^{-\alpha {j_{1}}}}\cdots {p_{k}^{-\alpha {j_{k}}}}.\end{aligned}\]
Thus, ${({\lambda _{p}}(\xi ))_{p\in \mathcal{P}}}$ are mutually independent and ${\lambda _{p}}(\xi )$ has a geometric distribution on ${\mathbb{N}_{0}}$ with parameter ${p^{-\alpha }}$, for every fixed $p\in \mathcal{P}$.
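A minimal Monte Carlo sketch of Example 1 (illustrative only; it assumes SciPy's zipf sampler for the zeta distribution, and the sample size, seed and the choice $\alpha =2$, $p=2$ are arbitrary): it compares the empirical tail of ${\lambda _{2}}(\xi )$ with the geometric tail ${p^{-\alpha j}}$ computed above.
\begin{verbatim}
import numpy as np
from scipy.stats import zipf

alpha, p, size = 2.0, 2, 200_000
xi = zipf.rvs(alpha, size=size, random_state=0)  # P{xi = k} = k^{-alpha}/zeta(alpha)

def lambda_p(arr, p):
    # vectorized multiplicity of the prime p
    out, m = np.zeros_like(arr), arr.copy()
    while True:
        d = (m % p == 0)
        if not d.any():
            break
        out[d] += 1
        m[d] //= p
    return out

lp = lambda_p(xi, p)
for j in (1, 2, 3):
    print(j, (lp >= j).mean(), p ** (-alpha * j))  # empirical vs. p^{-alpha j}
\end{verbatim}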
Example 2.
For $\beta \in (0,1)$, let $\mathbb{P}\{\xi =k\}={\beta ^{k-1}}(1-\beta )$, $k\in \mathbb{N}$. Then, for every $p\in \mathcal{P}$ and $j\in \mathbb{N}$,
\[ \mathbb{P}\big\{{\lambda _{p}}(\xi )\ge j\big\}={\sum \limits_{i=1}^{\infty }}\mathbb{P}\big\{\xi ={p^{j}}i\big\}=\frac{(1-\beta ){\beta ^{{p^{j}}-1}}}{1-{\beta ^{{p^{j}}}}}.\]
Example 3.
Let $\mathrm{Poi}(\lambda )$ be a random variable with the Poisson distribution with parameter λ and put
\[ \mathbb{P}\{\xi =k\}=\mathbb{P}\big\{\mathrm{Poi}(\lambda )=k|\mathrm{Poi}(\lambda )\ge 1\big\}={\big({e^{\lambda }}-1\big)^{-1}}{\lambda ^{k}}/k!,\hspace{1em}k\in \mathbb{N}.\]
Then
(3)
\[\begin{aligned}{}& \mathbb{P}\big\{{\lambda _{p}}(\xi )\ge k\big\}={\big({e^{\lambda }}-1\big)^{-1}}{\sum \limits_{j=1}^{\infty }}{\lambda ^{{p^{k}}j}}/\big({p^{k}}j\big)!\\ {} & \hspace{1em}={\big({e^{\lambda }}-1\big)^{-1}}\bigg({_{0}}{F_{{p^{k}}-1}}\bigg(;\frac{1}{{p^{k}}},\frac{2}{{p^{k}}},\dots ,\frac{{p^{k}}-1}{{p^{k}}};{\bigg(\frac{\lambda }{{p^{k}}}\bigg)^{{p^{k}}}}\bigg)-1\bigg),\end{aligned}\]
where ${_{0}}{F_{{p^{k}}-1}}$ is the generalized hypergeometric function, see Chapter 16 in [10].
In all examples above, the distribution of ${\lambda _{p}}(\xi )$ for every fixed $p\in \mathcal{P}$ is extremely light-tailed. It is not that difficult to construct ‘weird’ distributions where all ${\lambda _{p}}(\xi )$ have infinite expectations.
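Returning to formula (3), the following sketch (illustrative values of λ, p, k; it assumes the mpmath library for the generalized hypergeometric function) compares the direct series on the first line of (3) with the hypergeometric expression on the second line.
\begin{verbatim}
import mpmath as mp

lam, p, k = 1.5, 2, 1
m = p ** k

# left-hand side of (3): direct summation of the series
direct = sum(lam ** (m * j) / mp.factorial(m * j)
             for j in range(1, 80)) / (mp.e ** lam - 1)

# right-hand side of (3): generalized hypergeometric function 0F_{m-1}
hyp = mp.hyper([], [mp.mpf(i) / m for i in range(1, m)], (mp.mpf(lam) / m) ** m)
closed = (hyp - 1) / (mp.e ** lam - 1)

print(direct, closed)    # the two values agree up to numerical precision
\end{verbatim}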
Example 4.
Let ${({g_{p}})_{p\in \mathcal{P}}}$ be any probability distribution supported by $\mathcal{P}$, ${g_{p}}\gt 0$, and ${({t_{k}})_{k\in \mathbb{N}}}$ any probability distribution on $\mathbb{N}$ such that ${\textstyle\sum _{k=1}^{\infty }}k{t_{k}}=\infty $ and ${t_{k}}\gt 0$. Define a probability distribution $\mathfrak{h}$ on $\mathcal{Q}:={\textstyle\bigcup _{p\in \mathcal{P}}}\{p,{p^{2}},\dots \}$ by
\[ \mathfrak{h}\big(\big\{{p^{k}}\big\}\big)={g_{p}}{t_{k}},\hspace{1em}p\in \mathcal{P},\hspace{1em}k\in \mathbb{N}.\]
If ξ is a random variable with distribution $\mathfrak{h}$, then
\[ \mathbb{P}\big\{{\lambda _{p}}(\xi )\ge k\big\}={g_{p}}{\sum \limits_{j=k}^{\infty }}{t_{j}},\hspace{1em}k\in \mathbb{N},\hspace{1em}p\in \mathcal{P},\]
which implies $\mathbb{E}[{\lambda _{p}}(\xi )]={g_{p}}{\textstyle\sum _{k=1}^{\infty }}k{t_{k}}=\infty $, $p\in \mathcal{P}$.
This example can be modified by taking $g:={\textstyle\sum _{p\in \mathcal{P}}}{g_{p}}\lt 1$ and charging all points of $\mathbb{N}\setminus \mathcal{Q}$ (this set contains 1 and all integers having at least two different prime factors) with arbitrary positive masses of the total weight $1-g$. The obtained probability distribution charges all points of $\mathbb{N}$ and still possesses the property that all ${\lambda _{p}}$’s have infinite expectations.
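A small simulation sketch of Example 4 (the particular choices ${g_{p}}\propto {p^{-2}}$ over the first few primes and ${t_{k}}=1/(k(k+1))$ are hypothetical illustrations): samples are kept in the factored form ${p^{T}}$, and the running empirical mean of ${\lambda _{2}}(\xi )$ keeps drifting upward, reflecting $\mathbb{E}[{\lambda _{2}}(\xi )]=\infty $.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
primes = np.array([2, 3, 5, 7, 11])
g = 1.0 / primes.astype(float) ** 2
g = g / g.sum()                       # an illustrative choice of (g_p)

def sample_factored(size):
    # t_k = 1/(k(k+1)) satisfies sum_k k*t_k = infinity; T = floor(1/(1-U)) has this law
    p = rng.choice(primes, size=size, p=g)
    t = np.floor(1.0 / (1.0 - rng.random(size))).astype(np.int64)
    return p, t                        # xi = p**t, kept in factored form

p_s, t_s = sample_factored(1_000_000)
lam2 = np.where(p_s == 2, t_s, 0)      # lambda_2(xi)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, lam2[:n].mean())          # the running mean grows without settling down
\end{verbatim}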
Let X be a random variable taking values in $\mathbb{N}$. Since
\[ {\lambda _{p}}(X)\log p\le \sum \limits_{q\in \mathcal{P}}{\lambda _{q}}(X)\log q=\log X,\hspace{1em}p\in \mathcal{P},\]
we conclude that $\mathbb{E}[{({\lambda _{p}}(X))^{k}}]\lt \infty $, for all $p\in \mathcal{P}$, whenever $\mathbb{E}[{\log ^{k}}X]\lt \infty $, $k\in \mathbb{N}$. It is also clear that the converse implication is false in general. However, when $k=1$ the inequality $\mathbb{E}[\log X]\lt \infty $ is in fact equivalent to ${\textstyle\sum _{p\in \mathcal{P}}}\mathbb{E}[{\lambda _{p}}(X)]\log p\lt \infty $. As we have seen in the above examples, checking that $\mathbb{E}[{({\lambda _{p}}(X))^{k}}]\lt \infty $ might be a much more difficult task than verifying the stronger assumption $\mathbb{E}[{\log ^{k}}X]\lt \infty $. Thus, we shall mostly work under moment conditions on $\log \xi $ and $\log \eta $, such as
(4)
\[ {\mu _{\xi }}:=\mathbb{E}[\log \xi ]\lt \infty .\]
2.2 Limit theorems for $S(p)$ and $T(p)$
From Donsker’s invariance principle we immediately obtain the following proposition. Let $D:=D([0,\infty ),\mathbb{R})$ be the Skorokhod space endowed with the standard ${J_{1}}$-topology.
Proposition 1.
Assume that $\mathbb{E}[{\log ^{2}}\xi ]\in (0,\infty )$. Then,
\[ {\bigg({\bigg(\frac{{S_{\lfloor ut\rfloor }}(p)-ut\mathbb{E}[{\lambda _{p}}(\xi )]}{\sqrt{t}}\bigg)_{u\ge 0}}\bigg)_{p\in \mathcal{P}}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big({\big({W_{p}}(u)\big)_{u\ge 0}}\big)_{p\in \mathcal{P}}},\hspace{1em}t\to \infty ,\]
on the product space ${D^{\mathbb{N}}}$, where, for all $n\in \mathbb{N}$ and all ${p_{1}}\lt {p_{2}}\lt \cdots \lt {p_{n}}$, ${p_{i}}\in \mathcal{P}$, $i\le n$, $({({W_{{p_{1}}}}(u))_{u\ge 0}},\dots ,{({W_{{p_{n}}}}(u))_{u\ge 0}})$ is an n-dimensional centered Wiener process with covariance matrix $C=\| {C_{i,\hspace{0.1667em}j}}{\| _{1\le i,j\le n}}$ given by ${C_{i,\hspace{0.1667em}j}}={C_{j,\hspace{0.1667em}i}}=\mathrm{Cov}\hspace{0.1667em}({\lambda _{{p_{i}}}}(\xi ),{\lambda _{{p_{j}}}}(\xi ))$.
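As an illustration of Proposition 1 (a sketch only; the zeta-distributed ξ of Example 1 and all numerical parameters are arbitrary choices), one can compare the empirical covariance of the normalized coordinates of $S(p)$ at a fixed time with $\mathrm{Cov}\hspace{0.1667em}({\lambda _{p}}(\xi ),{\lambda _{q}}(\xi ))$.
\begin{verbatim}
import numpy as np
from scipy.stats import zipf

alpha, n, reps = 3.0, 1000, 2000
xi = zipf.rvs(alpha, size=(reps, n), random_state=1)

def lam(m, p):
    out = np.zeros_like(m)
    while True:
        d = (m % p == 0)
        if not d.any():
            break
        out[d] += 1
        m = np.where(d, m // p, m)
    return out

l2, l3 = lam(xi, 2), lam(xi, 3)
# normalized coordinates (S_n(2), S_n(3)) as in Proposition 1,
# centered with the empirical stand-in for E[lambda_p(xi)]
z2 = (l2.sum(axis=1) - n * l2.mean()) / np.sqrt(n)
z3 = (l3.sum(axis=1) - n * l3.mean()) / np.sqrt(n)
print(np.cov(z2, z3))                    # approx. the covariance matrix C
print(np.cov(l2.ravel(), l3.ravel()))    # Cov(lambda_p(xi), lambda_q(xi))
\end{verbatim}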
According to the proof of Proposition 1.3.13 in [6], see pp. 28–29 therein, the following holds true for the perturbed random walks $T(p)$, $p\in \mathcal{P}$.
Proposition 2.
Assume that $\mathbb{E}[{\log ^{2}}\xi ]\in (0,\infty )$ and
(5)
\[ \underset{t\to \infty }{\lim }{t^{2}}\mathbb{P}\big\{{\lambda _{p}}(\eta )\ge t\big\}=0,\hspace{1em}p\in \mathcal{P}.\]
Then,
\[ {\bigg({\bigg(\frac{{T_{\lfloor ut\rfloor }}(p)-ut\mathbb{E}[{\lambda _{p}}(\xi )]}{\sqrt{t}}\bigg)_{u\ge 0}}\bigg)_{p\in \mathcal{P}}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big({\big({W_{p}}(u)\big)_{u\ge 0}}\big)_{p\in \mathcal{P}}},\hspace{1em}t\to \infty ,\]
on the product space ${D^{\mathbb{N}}}$.
Remark 1.
Since $\mathbb{P}\{{\lambda _{p}}(\eta )\log p\ge t\}\le \mathbb{P}\{\log \eta \ge t\}$, the condition
(6)
\[ \underset{t\to \infty }{\lim }{t^{2}}\mathbb{P}\{\log \eta \ge t\}=0\]
is clearly sufficient for (5).
From the continuous mapping theorem under the assumptions of Proposition 2 we infer
(7)
\[\begin{aligned}{}& {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor ut\rfloor }}({T_{k}}(p)-k\mathbb{E}[{\lambda _{p}}(\xi )])}{\sqrt{t}}\bigg)_{u\ge 0}}\bigg)_{p\in \mathcal{P}}}\\ {} & \hspace{1em}\Longrightarrow \hspace{2.5pt}{\Big({\Big(\underset{0\le v\le u}{\sup }{W_{p}}(v)\Big)_{u\ge 0}}\Big)_{p\in \mathcal{P}}},\hspace{1em}t\to \infty ,\end{aligned}\]
see Proposition 1.3.13 in [6].
Formula (7), for a fixed $p\in \mathcal{P}$, belongs to the realm of limit theorems for the maximum of a single additive perturbed random walk. This circle of problems is well-understood, see Section 1.3.3 in [6] and [7], in the situation when the underlying additive standard random walk is centered and attracted to a stable Lévy process. In our setting the perturbed random walks ${({T_{k}}(p))_{k\in \mathbb{N}}}$ and ${({T_{k}}(q))_{k\in \mathbb{N}}}$ are dependent whenever $p,q\in \mathcal{P}$, $p\ne q$, which makes the derivation of joint limit theorems harder and leads to various asymptotic regimes.
Note that (5) implies $\mathbb{E}[{\lambda _{p}}(\eta )]\lt \infty $ and (6) implies $\mathbb{E}[\log \eta ]\lt \infty $. Theorem 5 below tells us that, under such moment conditions and assuming also $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $, the maxima ${\max _{1\le k\le n}}\hspace{0.1667em}{T_{k}}(p)$, $p\in \mathcal{P}$, of the noncentered perturbed random walks $T(p)$ have the same behavior as ${S_{n}}(p)$, $p\in \mathcal{P}$, as $n\to \infty $.
Theorem 5.
Assume that $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $ and $\mathbb{E}[{\lambda _{p}}(\eta )]\lt \infty $, $p\in \mathcal{P}$. Suppose further that
(8)
\[ \mathbb{P}\{\xi \hspace{2.5pt}\textit{is divisible by}\hspace{2.5pt}p\}=\mathbb{P}\big\{{\lambda _{p}}(\xi )\gt 0\big\}\gt 0,\hspace{1em}p\in \mathcal{P}.\]
Then, as $t\to \infty $,
(9)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}}(p)-\mathbb{E}[{\lambda _{p}}(\xi )]tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\bigg)_{p\in \mathcal{P}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\big({W_{p}}(u)\big)_{u\ge 0}}\big)_{p\in \mathcal{P}}}.\]
Moreover, if also (5) holds for all $p\in \mathcal{P}$, then (9) holds on the product space ${D^{\mathbb{N}}}$.
In the next result we shall assume that η dominates ξ in the sense that the asymptotic behavior of ${\max _{1\le k\le n}}{T_{k}}(p)$ is regulated by the perturbations ${({\lambda _{p}}({\eta _{k}}))_{k\le n}}$ for all $p\in {\mathcal{P}_{0}}$, where ${\mathcal{P}_{0}}$ is a finite subset of prime numbers and these primes dominate all other primes.
Theorem 6.
Assume (4). Suppose further that there exists a finite set ${\mathcal{P}_{0}}\subseteq \mathcal{P}$, $d:=|{\mathcal{P}_{0}}|$, such that the distributional tail of ${({\lambda _{p}}(\eta ))_{p\in {\mathcal{P}_{0}}}}$ is regularly varying at infinity in the following sense. For some positive function ${(a(t))_{t\gt 0}}$ and a measure ν satisfying $\nu (\{x\in {\mathbb{R}^{d}}:\| x\| \ge r\})=c\cdot {r^{-\alpha }}$, $c\gt 0$, $\alpha \in (0,1)$, it holds
(10)
\[ t\mathbb{P}\big\{{\big(a(t)\big)^{-1}}{\big({\lambda _{p}}(\eta )\big)_{p\in {\mathcal{P}_{0}}}}\in \cdot \big\}\hspace{2.5pt}\stackrel{\mathrm{v}}{\longrightarrow }\hspace{2.5pt}\nu (\cdot ),\hspace{1em}t\to \infty ,\]
on the space of locally finite measures on ${(0,\infty ]^{d}}$ endowed with the vague topology. Then
(11)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}}(p)}{a(t)}\bigg)_{u\ge 0}}\bigg)_{p\in {\mathcal{P}_{0}}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\big({M_{p}}(u)\big)_{u\ge 0}}\big)_{p\in {\mathcal{P}_{0}}}},\hspace{1em}t\to \infty ,\]
where ${({({M_{p}}(u))_{u\ge 0}})_{p\in {\mathcal{P}_{0}}}}$ is a multivariate extreme process defined by
(12)
\[ {\big({M_{p}}(u)\big)_{p\in {\mathcal{P}_{0}}}}=\underset{k:\hspace{0.1667em}{t_{k}}\le u}{\sup }{y_{k}},\hspace{1em}u\ge 0.\]
Here the pairs $({t_{k}},{y_{k}})$ are the atoms of a Poisson point process on $[0,\infty )\times {(0,\infty ]^{d}}$ with the intensity measure $\mathbb{LEB}\otimes \nu $ and the supremum is taken coordinatewise. Moreover, suppose that $\mathbb{E}[{\lambda _{p}}(\eta )]\lt \infty $, for $p\in \mathcal{P}\setminus {\mathcal{P}_{0}}$. Then
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}}(p)}{a(t)}\bigg)_{u\ge 0}}\bigg)_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}0,\hspace{1em}t\to \infty .\]
2.3 Limit theorems for the $\mathrm{LCM}\hspace{0.1667em}$
The results from the previous section will be applied below to the analysis of
\[ {\mathfrak{P}_{n}}:=\mathrm{LCM}\hspace{0.1667em}\big(\{{\Pi _{1}},{\Pi _{2}},\dots ,{\Pi _{n}}\}\big)\hspace{1em}\text{and}\hspace{1em}{\mathfrak{T}_{n}}:=\mathrm{LCM}\hspace{0.1667em}\big(\{{\Theta _{1}},{\Theta _{2}},\dots ,{\Theta _{n}}\}\big).\]
A moment’s reflection shows that the analysis of ${\mathfrak{P}_{n}}$ is trivial. Indeed, by definition, ${\Pi _{n-1}}$ divides ${\Pi _{n}}$ and thereupon ${\mathfrak{P}_{n}}={\Pi _{n}}$ for $n\in \mathbb{N}$. Thus, assuming that ${\sigma _{\xi }^{2}}:=\mathrm{Var}\hspace{0.1667em}(\log \xi )\in (0,\infty )$, an application of the Donsker functional limit theorem yields
(14)
\[ {\bigg(\frac{\log {\mathfrak{P}_{\lfloor tu\rfloor }}-{\mu _{\xi }}tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big({\sigma _{\xi }}W(u)\big)_{u\ge 0}},\hspace{1em}t\to \infty ,\]
on the Skorokhod space D, where ${(W(u))_{u\ge 0}}$ is a standard Brownian motion and ${\mu _{\xi }}=\mathbb{E}[\log \xi ]$ was defined in (4).
The simple structure of the sequence ${({\mathfrak{P}_{n}})_{n\in \mathbb{N}}}$ breaks down completely upon introducing the perturbations $({\eta _{k}})$, which makes the analysis of ${({\mathfrak{T}_{n}})_{n\in \mathbb{N}}}$ a much harder problem. As an illustration, consider the case $\xi =1$ in which
\[ {\mathfrak{T}_{n}}=\mathrm{LCM}\hspace{0.1667em}({\eta _{1}},{\eta _{2}},\dots ,{\eta _{n}}),\hspace{1em}n\in \mathbb{N}.\]
Thus, the problem encompasses, as a particular case, the investigation of the $\mathrm{LCM}\hspace{0.1667em}$ of an independent sample. This itself constitutes a highly nontrivial challenge. Note that
\[ \log {\mathfrak{T}_{n}}=\log \prod \limits_{p\in \mathcal{P}}{p^{{\max _{1\le k\le n}}\hspace{0.1667em}({\lambda _{p}}({\xi _{1}})+\cdots +{\lambda _{p}}({\xi _{k-1}})+{\lambda _{p}}({\eta _{k}}))}}=\sum \limits_{p\in \mathcal{P}}\underset{1\le k\le n}{\max }{T_{k}}(p)\log p,\]
which shows that the asymptotics of ${\mathfrak{T}_{n}}$ is intimately connected with the behavior of ${\max _{1\le k\le n}}{T_{k}}(p)$, $p\in \mathcal{P}$.
As one can guess from Theorem 5, in a ‘typical’ situation relation (14) holds with $\log {\mathfrak{T}_{\lfloor tu\rfloor }}$ replacing $\log {\mathfrak{P}_{\lfloor tu\rfloor }}$. The following heuristics suggest the right form of assumptions ensuring that the perturbations ${({\eta _{k}})_{k\in \mathbb{N}}}$ have an asymptotically negligible impact on $\log {\mathfrak{T}_{n}}$. Take a prime $p\in \mathcal{P}$. Its contribution to $\log {\mathfrak{T}_{n}}$ (up to a factor $\log p$) is ${\max _{1\le k\le n}}{T_{k}}(p)$. According to Theorem 5, this maximum is asymptotically the same as ${S_{n}}(p)$. However, as p gets large, the mean $\mathbb{E}[{\lambda _{p}}(\xi )]$ of the random walk ${S_{n-1}}(p)$ becomes small because of the identity
\[ \sum \limits_{p\in \mathcal{P}}\mathbb{E}\big[{\lambda _{p}}(\xi )\big]\log p=\mathbb{E}[\log \xi ]\lt \infty .\]
Thus, for large $p\in \mathcal{P}$, the remainder ${\max _{1\le k\le n}}{T_{k}}(p)-{S_{n-1}}(p)$ can, in principle, become larger than ${S_{n-1}}(p)$ itself if the tail of ${\lambda _{p}}(\eta )$ is sufficiently heavy. In order to rule out such a possibility, we introduce the deterministic sets
(15)
\[ {\mathcal{P}_{1}}(n):=\big\{p\in \mathcal{P}:\mathbb{P}\big\{{\lambda _{p}}(\xi )\gt 0\big\}\ge {n^{-1/2}}\big\}\hspace{1em}\text{and}\hspace{1em}{\mathcal{P}_{2}}(n):=\mathcal{P}\setminus {\mathcal{P}_{1}}(n),\]
and bound the rate of growth of ${\max _{1\le k\le n}}{\lambda _{p}}({\eta _{k}})$ for all $p\in {\mathcal{P}_{2}}(n)$. It is important to note that under the assumption (8) it holds
(16)
\[\begin{aligned}{}& \min {\mathcal{P}_{2}}(n)=\min \big\{p\in \mathcal{P}:p\in {\mathcal{P}_{2}}(n)\big\}\\ {} & \hspace{1em}=\min \big\{p\in \mathcal{P}:\mathbb{P}\big\{{\lambda _{p}}(\xi )\gt 0\big\}\lt {n^{-1/2}}\big\}\to \infty ,\hspace{1em}n\to \infty .\end{aligned}\]
Therefore, if $\mathbb{E}[\log \xi ]\lt \infty $ and (8) holds, then
\[ \underset{n\to \infty }{\lim }\sum \limits_{p\in {\mathcal{P}_{2}}(n)}\mathbb{E}\big[{\lambda _{p}}(\xi )\big]\log p=0.\]
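Before stating the theorem, here is a small numerical sanity check of the identity $\log {\mathfrak{T}_{n}}={\textstyle\sum _{p\in \mathcal{P}}}{\max _{1\le k\le n}}{T_{k}}(p)\log p$ (a sketch with toy sample size and bounded ξ, η; all numerical choices are illustrative).
\begin{verbatim}
import math, random
from functools import reduce

random.seed(3)

def lambda_p(n, p):
    c = 0
    while n % p == 0:
        n //= p
        c += 1
    return c

n = 8
xi  = [random.randint(1, 30) for _ in range(n)]
eta = [random.randint(1, 30) for _ in range(n)]

Pi = [1]
for x in xi:
    Pi.append(Pi[-1] * x)
Theta = [Pi[k - 1] * eta[k - 1] for k in range(1, n + 1)]  # Theta_k = Pi_{k-1} * eta_k

T_n = reduce(math.lcm, Theta)                              # the LCM of Theta_1,...,Theta_n
primes = [p for p in range(2, 31) if all(p % d for d in range(2, p))]
rhs = sum(max(lambda_p(th, p) for th in Theta) * math.log(p) for p in primes)
print(math.log(T_n), rhs)                                  # equal up to rounding
\end{verbatim}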
Theorem 7.
Assume $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $, $\mathbb{E}[\log \eta ]\lt \infty $, (8) and the following two conditions:
(17)
\[ \sum \limits_{p\in \mathcal{P}}\mathbb{E}\big[{\big({\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\big)^{2}}\big]\log p\lt \infty \]
and
(18)
\[ \sum \limits_{p\in {\mathcal{P}_{2}}(n)}\mathbb{E}\big[{\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\big]\log p=o\big({n^{-1/2}}\big),\hspace{1em}n\to \infty .\]
Then
(19)
\[ {\bigg(\frac{\log {\mathfrak{T}_{\lfloor tu\rfloor }}-{\mu _{\xi }}tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\sigma _{\xi }}W(u)\big)_{u\ge 0}},\hspace{1em}t\to \infty ,\]
where ${\mu _{\xi }}=\mathbb{E}[\log \xi ]\lt \infty $, ${\sigma _{\xi }^{2}}=\mathrm{Var}\hspace{0.1667em}[\log \xi ]$ and ${(W(u))_{u\ge 0}}$ is a standard Brownian motion.
Remark 3.
If $\mathbb{E}[{\log ^{2}}\eta ]\lt \infty $, then (17) holds true. Indeed, since we assume $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $,
\[\begin{aligned}{}& \mathbb{E}\bigg[\sum \limits_{p\in \mathcal{P}}{\big({\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\big)^{2}}\log p\bigg]\le \mathbb{E}\bigg[\sum \limits_{p\in \mathcal{P}}\big({\lambda _{p}^{2}}(\eta )+{\lambda _{p}^{2}}(\xi )\big)\log p\bigg]\\ {} & \hspace{1em}\le \frac{1}{\log 2}\mathbb{E}\bigg[{\bigg(\sum \limits_{p\in \mathcal{P}}{\lambda _{p}}(\eta )\log p\bigg)^{2}}\bigg]+\frac{1}{\log 2}\mathbb{E}\bigg[{\bigg(\sum \limits_{p\in \mathcal{P}}{\lambda _{p}}(\xi )\log p\bigg)^{2}}\bigg]\\ {} & \hspace{1em}=\frac{1}{\log 2}\big(\mathbb{E}\big[{\log ^{2}}\eta \big]+\mathbb{E}\big[{\log ^{2}}\xi \big]\big)\lt \infty .\end{aligned}\]
The condition (18) can be replaced by a stronger one which only involves the distribution of η, namely
(20)
\[ \sum \limits_{p\in {\mathcal{P}_{2}}(n)}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p=o\big({n^{-1/2}}\big),\hspace{1em}n\to \infty .\]
Taking into account (16) and the fact that $\mathbb{E}[\log \eta ]\lt \infty $, the assumption (20) is nothing else but a condition on the speed of convergence of the series
\[ \sum \limits_{p\in \mathcal{P}}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p=\mathbb{E}[\log \eta ]\lt \infty .\]
Example 8.
In the settings of Example 1, let ξ and η be arbitrarily dependent with
\[ \mathbb{P}\{\xi =k\}=\frac{1}{\zeta (\alpha ){k^{\alpha }}},\hspace{1em}\mathbb{P}\{\eta =k\}=\frac{1}{\zeta (\beta ){k^{\beta }}},\hspace{1em}k\in \mathbb{N},\]
for some $\alpha ,\beta \gt 1$. Note that $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $ and $\mathbb{E}[{\log ^{2}}\eta ]\lt \infty $. Direct calculations show that
\[\begin{aligned}{}{\mathcal{P}_{1}}(n)& =\big\{p\in \mathcal{P}:{p^{-\alpha }}\ge {n^{-1/2}}\big\}=\big\{p\in \mathcal{P}:p\le {n^{1/(2\alpha )}}\big\},\\ {} {\mathcal{P}_{2}}(n)& =\big\{p\in \mathcal{P}:p\gt {n^{1/(2\alpha )}}\big\}.\end{aligned}\]
From the chain of relations
\[ \mathbb{E}\big[{\lambda _{p}}(\eta )\big]=\sum \limits_{j\ge 1}\mathbb{P}\big\{{\lambda _{p}}(\eta )\ge j\big\}=\sum \limits_{j\ge 1}{p^{-\beta j}}=\frac{{p^{-\beta }}}{1-{p^{-\beta }}}\le 2{p^{-\beta }},\]
and using the notation $\pi (x)$ for the number of primes smaller than x, we obtain
\[\begin{aligned}{}\sum \limits_{p\in {\mathcal{P}_{2}}(n)}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p& \le 2\sum \limits_{p\in \mathcal{P},p\gt {n^{1/(2\alpha )}}}\frac{\log p}{{p^{\beta }}}=2{\int _{({n^{1/(2\alpha )}},\hspace{0.1667em}\infty )}}\frac{\log x}{{x^{\beta }}}\mathrm{d}\pi (x)\\ {} & \sim 2{\int _{{n^{1/(2\alpha )}}}^{\infty }}\frac{\log x}{{x^{\beta }}}\frac{\mathrm{d}x}{\log x}=\frac{2{n^{(1-\beta )/(2\alpha )}}}{\beta -1},\hspace{1em}n\to \infty .\end{aligned}\]
Here the asymptotic equivalence follows from the prime number theorem and integration by parts, see, for example, Eq. (16) in [3]. Thus, (20) holds if $\beta \gt \alpha +1$.
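A quick numerical illustration of Example 8 (a sketch assuming the sympy.primerange prime generator; the values $\alpha =2$, $\beta =3.5$, $n={10^{6}}$ are arbitrary): the tail sum over ${\mathcal{P}_{2}}(n)$ is compared with the asymptotic bound $2{n^{(1-\beta )/(2\alpha )}}/(\beta -1)$ and with ${n^{-1/2}}$.
\begin{verbatim}
import math
from sympy import primerange

alpha, beta, n = 2.0, 3.5, 10**6
x = n ** (1.0 / (2 * alpha))               # P_2(n) = {p > n^{1/(2*alpha)}}

tail = sum(math.log(p) * p**(-beta) / (1 - p**(-beta))
           for p in primerange(int(x) + 1, 10**6))
bound = 2 * x ** (1 - beta) / (beta - 1)
print(tail, bound, n ** (-0.5))            # tail << n^{-1/2} since beta > alpha + 1
\end{verbatim}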
In the settings of Theorem 6 the situation is much simpler in the sense that almost no extra assumptions are needed to derive a limit theorem for ${\mathfrak{T}_{n}}$.
Theorem 9.
Under the same assumptions as in Theorem 6 and assuming additionally that
(21)
\[ \sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p\lt \infty ,\]
it holds
\[ {\bigg(\frac{\log {\mathfrak{T}_{\lfloor tu\rfloor }}}{a(t)}\bigg)_{u\ge 0}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\bigg(\sum \limits_{p\in {\mathcal{P}_{0}}}{M_{p}}(u)\log p\bigg)_{u\ge 0}},\hspace{1em}t\to \infty .\]
Note that in Theorem 9 it is allowed to take $\xi =1$, which yields the following limit theorem for the $\mathrm{LCM}\hspace{0.1667em}$ of independent integer-valued random variables.
Corollary 1.
Under the same assumptions on η as in Theorem 6, it holds
\[ {\bigg(\frac{\log \mathrm{LCM}\hspace{0.1667em}({\eta _{1}},{\eta _{2}},\dots ,{\eta _{\lfloor tu\rfloor }})}{a(t)}\bigg)_{u\ge 0}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\bigg(\sum \limits_{p\in {\mathcal{P}_{0}}}{M_{p}}(u)\log p\bigg)_{u\ge 0}},\hspace{1em}t\to \infty .\]
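A toy simulation consistent with Corollary 1 in the simplest case ${\mathcal{P}_{0}}=\{2\}$ (an illustrative construction that is not taken from the text): here ${\eta _{k}}={2^{{\lambda _{2}}({\eta _{k}})}}$ with ${\lambda _{2}}(\eta )$ having a regularly varying tail of index $\alpha =1/2$, so that $a(t)={t^{1/\alpha }}={t^{2}}$ and $\log \mathrm{LCM}\hspace{0.1667em}({\eta _{1}},\dots ,{\eta _{n}})/a(n)$ is of constant order.
\begin{verbatim}
import math, random

random.seed(4)
alpha = 0.5                                # tail index in (0, 1); P_0 = {2}

def sample_lambda2():
    # lambda_2(eta) = ceil((1-U)^{-1/alpha}) has tail P{. >= x} ~ x^{-alpha}
    return math.ceil((1.0 - random.random()) ** (-1.0 / alpha))

for n in (10**2, 10**3, 10**4):
    lam2 = [sample_lambda2() for _ in range(n)]
    # eta_k = 2**lam2[k] is never expanded; log LCM = max(lam2) * log 2
    log_lcm = max(lam2) * math.log(2)
    print(n, log_lcm / (n ** (1 / alpha) * math.log(2)))   # O(1) random values
\end{verbatim}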
Remark 4.
The results presented in Theorems 7 and 9 constitute a contribution to a popular topic in probabilistic number theory, namely, the asymptotic analysis of the $\mathrm{LCM}\hspace{0.1667em}$ of various random sets. For random sets comprised of independent random variables uniformly distributed on $\{1,2,\dots ,n\}$ this problem has been addressed in [2–5, 9]. Some models with a more sophisticated dependence structure have been studied in [1] and [8].
3 Limit theorems for coupled perturbed random walks
Theorems 5 and 6 will be derived from general limit theorems for the maxima of arbitrary additive perturbed random walks indexed by some parameters ranging in a countable set in the situation when the underlying additive standard random walks are positively divergent and attracted to a Brownian motion.
Let $\mathcal{A}$ be a countable or finite set of real numbers and
\[ {\big(\big({X_{1}}(r),{Y_{1}}(r)\big)\big)_{r\in \mathcal{A}}},\hspace{1em}{\big(\big({X_{2}}(r),{Y_{2}}(r)\big)\big)_{r\in \mathcal{A}}},\dots \]
be independent copies of an ${\mathbb{R}^{2\times |\mathcal{A}|}}$ random vector ${(X(r),Y(r))_{r\in \mathcal{A}}}$ with arbitrarily dependent components. For each $r\in \mathcal{A}$, the sequence ${({S_{k}^{\ast }}(r))_{k\in {\mathbb{N}_{0}}}}$ given by
\[ {S_{0}^{\ast }}(r):=0,\hspace{1em}{S_{k}^{\ast }}(r):={X_{1}}(r)+\cdots +{X_{k}}(r),\hspace{1em}k\in \mathbb{N},\]
is an additive standard random walk. For each $r\in \mathcal{A}$, the sequence ${({T_{k}^{\ast }}(r))_{k\in \mathbb{N}}}$ defined by
\[ {T_{k}^{\ast }}(r):={S_{k-1}^{\ast }}(r)+{Y_{k}}(r)={X_{1}}(r)+\cdots +{X_{k-1}}(r)+{Y_{k}}(r),\hspace{1em}k\in \mathbb{N},\]
is an additive perturbed random walk. The sequence ${({({T_{k}^{\ast }}(r))_{k\in \mathbb{N}}})_{r\in \mathcal{A}}}$ is a collection of (generally) dependent additive perturbed random walks.
Proposition 3.
Assume that, for each $r\in \mathcal{A}$, $\mu (r)\hspace{-0.1667em}:=\hspace{-0.1667em}\mathbb{E}[X(r)]\hspace{-0.1667em}\in \hspace{-0.1667em}(0,\infty )$, $\mathrm{Var}\hspace{0.1667em}[X(r)]\in [0,\infty )$ and $\mathbb{E}[Y(r)]\lt \infty $. Then
(23)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}^{\ast }}(r)-\mu (r)tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\bigg)_{r\in \mathcal{A}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\big({W_{r}}(u)\big)_{u\ge 0}}\big)_{r\in \mathcal{A}}},\hspace{1em}t\to \infty ,\]
where, for all $n\in \mathbb{N}$ and arbitrary ${r_{1}}\lt {r_{2}}\lt \cdots \lt {r_{n}}$ with ${r_{i}}\in \mathcal{A}$, $i\le n$, $({({W_{{r_{1}}}}(u))_{u\ge 0}},\dots ,{({W_{{r_{n}}}}(u))_{u\ge 0}})$ is an n-dimensional centered Wiener process with covariance matrix $C=\| {C_{i,\hspace{0.1667em}j}}{\| _{1\le i,j\le n}}$ with the entries ${C_{i,\hspace{0.1667em}j}}={C_{j,\hspace{0.1667em}i}}=\mathrm{Cov}(X({r_{i}}),X({r_{j}}))$.
Proof.
We shall prove an equivalent statement that, as $t\to \infty $,
\[ {\bigg({\bigg(\frac{{\max _{0\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k+1}^{\ast }}(r)-\mu (r)tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\bigg)_{r\in \mathcal{A}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\big({W_{r}}(u)\big)_{u\ge 0}}\big)_{r\in \mathcal{A}}},\]
which differs from (23) by a shift of the subscript k. By the multidimensional Donsker theorem,
(24)
\[ {\bigg({\bigg(\frac{{S_{\lfloor tu\rfloor }^{\ast }}(r)-\mu (r)tu}{{t^{1/2}}}\bigg)_{u\ge 0}}\bigg)_{r\in \mathcal{A}}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big({\big({W_{r}}(u)\big)_{u\ge 0}}\big)_{r\in \mathcal{A}}},\hspace{1em}t\to \infty ,\]
in the product topology of ${D^{\mathbb{N}}}$. Fix any $r\in \mathcal{A}$ and write
(25)
\[\begin{aligned}{}& \underset{0\le k\le \lfloor tu\rfloor }{\max }\hspace{0.1667em}{T_{k+1}^{\ast }}(r)-\mu (r)tu\\ {} & \hspace{1em}=\underset{0\le k\le \lfloor tu\rfloor }{\max }\hspace{0.1667em}\big({S_{k}^{\ast }}(r)-{S_{\lfloor tu\rfloor }^{\ast }}(r)+{Y_{k+1}}(r)\big)+{S_{\lfloor tu\rfloor }^{\ast }}(r)-\mu (r)tu.\end{aligned}\]
In view of (24) the proof is complete once we can show that
(26)
\[ {n^{-1/2}}\Big(\underset{0\le k\le n}{\max }\hspace{0.1667em}\big({S_{k}^{\ast }}(r)-{S_{n}^{\ast }}(r)+{Y_{k+1}}(r)\big)\Big)\hspace{2.5pt}\stackrel{\mathbb{P}}{\to }\hspace{2.5pt}0,\hspace{1em}n\to \infty .\]
Let $({X_{0}}(r),{Y_{0}}(r))$ be a copy of $(X(r),Y(r))$ which is independent of the vector ${({X_{k}}(r),{Y_{k}}(r))_{k\in \mathbb{N}}}$. Since the collection $\big(({X_{n}}(r),{Y_{n}}(r)),\dots ,({X_{1}}(r),{Y_{1}}(r)),({X_{0}}(r),{Y_{0}}(r))\big)$ has the same distribution as $\big(({X_{1}}(r),{Y_{1}}(r)),\dots ,({X_{n}}(r),{Y_{n}}(r)),({X_{n+1}}(r),{Y_{n+1}}(r))\big)$, the variable ${\max _{0\le k\le n}}\hspace{0.1667em}({S_{k}^{\ast }}(r)-{S_{n}^{\ast }}(r)+{Y_{k+1}}(r))$ has the same distribution as $\max \big({Y_{0}}(r),{\max _{0\le k\le n-1}}\hspace{0.1667em}(-{S_{k}^{\ast }}(r)+{Y_{k+1}}(r)-{X_{k+1}}(r))\big)$.
By assumption, $\mathbb{E}(-{S_{1}^{\ast }}(r))\in (-\infty ,0)$ and $\mathbb{E}{(Y(r)-X(r))^{+}}\lt \infty $. Hence, by Theorem 1.2.1 and Remark 1.2.3 in [6],
\[ \underset{k\to \infty }{\lim }\big(-{S_{k}^{\ast }}(r)+{Y_{k+1}}(r)-{X_{k+1}}(r)\big)=-\infty \hspace{1em}\text{a.s.}\]
As a consequence, the a.s. limit
\[\begin{aligned}{}& \underset{n\to \infty }{\lim }\max \Big({Y_{0}}(r),\underset{0\le k\le n-1}{\max }\hspace{0.1667em}\big(-{S_{k}^{\ast }}(r)+{Y_{k+1}}(r)-{X_{k+1}}(r)\big)\Big)\\ {} & \hspace{1em}=\max \Big({Y_{0}}(r),\underset{k\ge 0}{\max }\hspace{0.1667em}\big(-{S_{k}^{\ast }}(r)+{Y_{k+1}}(r)-{X_{k+1}}(r)\big)\Big)\end{aligned}\]
is a.s. finite. This completes the proof of (26).  □
Remark 5.
Proposition 3 tells us that fluctuations of ${\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}^{\ast }}(r)$ on the level of finite-dimensional distributions are driven by the Brownian fluctuations of ${S_{\lfloor tu\rfloor }^{\ast }}(r)$. According to formula (25), a functional version of this statement would be true if we could check that, for every fixed $T\gt 0$,
\[ {t^{-1/2}}\underset{u\in [0,\hspace{0.1667em}T]}{\sup }\underset{0\le k\le \lfloor tu\rfloor }{\max }\hspace{0.1667em}\big({S_{k}^{\ast }}(r)-{S_{\lfloor tu\rfloor }^{\ast }}(r)+{Y_{k+1}}(r)\big)\hspace{2.5pt}\stackrel{\mathbb{P}}{\to }\hspace{2.5pt}0,\hspace{1em}t\to \infty .\]
But the left-hand side is bounded from below by
\[ {t^{-1/2}}\underset{u\in [0,\hspace{0.1667em}T]}{\sup }{Y_{\lfloor tu\rfloor +1}}(r)={t^{-1/2}}\underset{0\le k\le \lfloor Tt\rfloor +1}{\max }{Y_{k}}(r).\]
Under the sole assumption $\mathbb{E}[Y(r)]\lt \infty $ this maximum does not converge to zero in probability, as $t\to \infty $. Thus, under the standing assumptions of Proposition 3 the functional convergence does not hold.
Proof of Theorem 5.
To deduce the finite-dimensional convergence (9) we apply Proposition 3 with $\mathcal{A}=\mathcal{P}$, $X(p)={\lambda _{p}}(\xi )$ and $Y(p)={\lambda _{p}}(\eta )$. The assumption (8) in conjunction with $\mathbb{E}[{\log ^{2}}\xi ]\lt \infty $ implies that $\mathbb{E}[{\lambda _{p}}(\xi )]\in (0,\infty )$ and $\mathrm{Var}\hspace{0.1667em}[{\lambda _{p}}(\xi )]\in [0,\infty )$, for all $p\in \mathcal{P}$.
Suppose that (5) holds true for all $p\in \mathcal{P}$. Fix $p\in \mathcal{P}$, $t\gt 0$, and note that by the subadditivity of the supremum and the fact that ${({S_{k}}(p))_{k\in {\mathbb{N}_{0}}}}$ is nondecreasing we have
(27)
\[ {S_{\lfloor tu\rfloor -1}}(p)\le \underset{1\le k\le \lfloor tu\rfloor }{\max }{T_{k}}(p)\le {S_{\lfloor tu\rfloor -1}}(p)+\underset{1\le k\le \lfloor tu\rfloor }{\max }{\lambda _{p}}({\eta _{k}}),\hspace{1em}u\ge 0.\]
Assumption (5) implies that, for every fixed $T\gt 0$,
\[ {t^{-1/2}}\underset{u\in [0,T]}{\sup }\underset{1\le k\le \lfloor tu\rfloor }{\max }{\lambda _{p}}({\eta _{k}})={t^{-1/2}}\underset{1\le k\le \lfloor tT\rfloor }{\max }{\lambda _{p}}({\eta _{k}})\hspace{2.5pt}\stackrel{\mathbb{P}}{\to }\hspace{2.5pt}0,\hspace{1em}t\to \infty .\]
By Proposition 1 and taking into account (27) this means that (9) holds true on the product space ${D^{\mathbb{N}}}$.  □
Proposition 4.
Assume $\mathbb{E}[X(r)]\lt \infty $, $r\in \mathcal{A}$. Assume further that there exists a finite set ${\mathcal{A}_{0}}\subseteq \mathcal{A}$, $d:=|{\mathcal{A}_{0}}|$, such that the distributional tail of ${(Y(r))_{r\in {\mathcal{A}_{0}}}}$ is regularly varying at infinity in the following sense. For some positive function ${(a(t))_{t\gt 0}}$ and a measure ν satisfying $\nu (\{x\in {\mathbb{R}^{d}}:\| x\| \ge r\})=c\cdot {r^{-\alpha }}$, $c\gt 0$, $\alpha \in (0,1)$, it holds
(28)
\[ t\mathbb{P}\big\{{\big(a(t)\big)^{-1}}{\big(Y(r)\big)_{r\in {\mathcal{A}_{0}}}}\in \cdot \big\}\hspace{2.5pt}\stackrel{\mathrm{v}}{\longrightarrow }\hspace{2.5pt}\nu (\cdot ),\hspace{1em}t\to \infty ,\]
on the space of locally finite measures on ${(0,\infty ]^{d}}$ endowed with the vague topology. Then
(29)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}^{\ast }}(r)}{a(t)}\bigg)_{u\ge 0}}\bigg)_{r\in {\mathcal{A}_{0}}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\big({\big({M_{r}}(u)\big)_{u\ge 0}}\big)_{r\in {\mathcal{A}_{0}}}},\hspace{1em}t\to \infty ,\]
where ${({({M_{r}}(u))_{u\ge 0}})_{r\in {\mathcal{A}_{0}}}}$ is defined as in (12). If $\mathbb{E}[|Y(r)|]\lt \infty $, for $r\in \mathcal{A}\setminus {\mathcal{A}_{0}}$, then also
(30)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}\hspace{0.1667em}{T_{k}^{\ast }}(r)}{a(t)}\bigg)_{u\ge 0}}\bigg)_{r\in \mathcal{A}\setminus {\mathcal{A}_{0}}}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}0,\hspace{1em}t\to \infty .\]
Proof.
According to Corollary 5.18 in [11],
(31)
\[ {\bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}{Y_{k}}(r)}{a(t)}\bigg)_{u\ge 0}}\bigg)_{r\in {\mathcal{A}_{0}}}}\hspace{2.5pt}\Longrightarrow \hspace{2.5pt}{\big({\big({M_{r}}(u)\big)_{u\ge 0}}\big)_{r\in {\mathcal{A}_{0}}}},\hspace{1em}t\to \infty ,\]
in the product topology of ${D^{\mathbb{N}}}$. The function ${(a(t))_{t\ge 0}}$ is regularly varying at infinity with index $1/\alpha \gt 1$. Thus, by the law of large numbers, for all $r\in \mathcal{A}$,
(32)
\[ \underset{n\to \infty }{\lim }\frac{{\max _{0\le k\le n}}|{S_{k}^{\ast }}(r)|}{a(n)}=0\hspace{1em}\text{a.s.},\]
and (29) follows from the inequalities
\[\begin{aligned}{}& \underset{1\le k\le \lfloor tu\rfloor }{\min }{S_{k-1}^{\ast }}(r)+\underset{1\le k\le \lfloor tu\rfloor }{\max }{Y_{k}}(r)\le \underset{1\le k\le \lfloor tu\rfloor }{\max }{T_{k}^{\ast }}(r)\\ {} & \hspace{1em}\le \underset{1\le k\le \lfloor tu\rfloor }{\max }{S_{k-1}^{\ast }}(r)+\underset{1\le k\le \lfloor tu\rfloor }{\max }{Y_{k}}(r).\end{aligned}\]
In view of (31) and (32), to prove (30) it suffices to check that
\[ \bigg({\bigg(\frac{{\max _{1\le k\le \lfloor tu\rfloor }}{Y_{k}}(r)}{a(t)}\bigg)_{u\ge 0}}\bigg)\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}0,\hspace{1em}t\to \infty ,\]
for every fixed $r\in \mathcal{A}\setminus {\mathcal{A}_{0}}$. This, in turn, follows from
\[ \frac{{Y_{n}}(r)}{n}\hspace{2.5pt}\stackrel{\mathrm{a}.\mathrm{s}.}{\longrightarrow }\hspace{2.5pt}0,\hspace{1em}n\to \infty ,\hspace{1em}r\in \mathcal{A}\setminus {\mathcal{A}_{0}},\]
which is a consequence of the assumption $\mathbb{E}[|Y(r)|]\lt \infty $, $r\in \mathcal{A}\setminus {\mathcal{A}_{0}}$, and the Borel–Cantelli lemma.  □
Proof of Theorem 6.
Follows immediately from Proposition 4 applied with $\mathcal{A}=\mathcal{P}$, $X(p)={\lambda _{p}}(\xi )$ and $Y(p)={\lambda _{p}}(\eta )$. □
4 Proof of Theorem 7
We aim at proving that
(33)
\[ \frac{{\textstyle\sum _{p\in \mathcal{P}}}({\max _{1\le k\le n}}{T_{k}}(p)-{S_{n-1}}(p))\log p}{\sqrt{n}}\hspace{2.5pt}\stackrel{\mathbb{P}}{\longrightarrow }0,\hspace{1em}n\to \infty ,\]
which together with the relation
\[ \sum \limits_{p\in \mathcal{P}}{S_{n}}(p)\log p=\log {\Pi _{n}}=\log {\mathfrak{P}_{n}},\hspace{1em}n\in \mathbb{N},\]
implies Theorem 7 by the Slutsky lemma and (14).
Let $({\xi _{0}},{\eta _{0}})$ be an independent copy of $(\xi ,\eta )$ which is also independent of ${({\xi _{n}},{\eta _{n}})_{n\in \mathbb{N}}}$. By the same reasoning as we have used in the proof of (26) we obtain
(34)
\[\begin{aligned}{}& {\Big(\underset{1\le k\le n}{\max }{T_{k}}(p)-{S_{n-1}}(p)\Big)_{p\in \mathcal{P}}}\\ {} & \hspace{1em}\stackrel{d}{=}{\Big(\max \Big({\lambda _{p}}({\eta _{0}}),\underset{1\le k\lt n}{\max }\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)\Big)\Big)_{p\in \mathcal{P}}}.\end{aligned}\]
Taking into account
\[ \sum \limits_{p\in \mathcal{P}}{\lambda _{p}}({\eta _{0}})\log p=\log {\eta _{0}}\lt \infty \hspace{1em}\text{a.s.},\]
we see that (33) is a consequence of
(35)
\[ \frac{{\textstyle\sum _{p\in \mathcal{P}}}{\max _{1\le k\lt n}}{({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p))^{+}}\log p}{\sqrt{n}}\hspace{2.5pt}\stackrel{\mathbb{P}}{\longrightarrow }0,\hspace{1em}n\to \infty .\]
Since, for every fixed $p\in \mathcal{P}$,
(36)
\[ \underset{k\ge 1}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\lt \infty \hspace{1em}\text{a.s.}\]
by assumption (8), it suffices to check that, for every fixed $\varepsilon \gt 0$,
(37)
\[ \underset{M\to \infty }{\lim }\underset{n\to \infty }{\limsup }\mathbb{P}\bigg\{\sum \limits_{p\in \mathcal{P},p\gt M}\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\log p\gt \varepsilon \sqrt{n}\bigg\}=0.\]
In order to check (37), we divide the sum into two disjoint parts with summations over ${\mathcal{P}_{1}}(n)$ and ${\mathcal{P}_{2}}(n)$. For the first sum, by Markov’s inequality, we obtain
\[\begin{aligned}{}& \hspace{-11.38092pt}\mathbb{P}\bigg\{\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\log p\gt \varepsilon \sqrt{n}/2\bigg\}\\ {} & \le \frac{2}{\varepsilon \sqrt{n}}\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\mathbb{E}\Big(\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\Big)\log p\\ {} & \le \frac{2}{\varepsilon \sqrt{n}}\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\log p\sum \limits_{k\ge 1}\mathbb{E}{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\\ {} & =\frac{2}{\varepsilon \sqrt{n}}\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\log p\sum \limits_{j\ge 1}\mathbb{P}\big\{{\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )=j\big\}\sum \limits_{k\ge 1}\mathbb{E}{\big(j-{S_{k-1}}(p)\big)^{+}}\\ {} & \le \frac{2}{\varepsilon \sqrt{n}}\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\log p\sum \limits_{j\ge 1}j\mathbb{P}\big\{{\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )=j\big\}\sum \limits_{k\ge 0}\mathbb{P}\big\{{S_{k}}(p)\le j\big\}\\ {} & \le \frac{2}{\varepsilon \sqrt{n}}\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\log p\sum \limits_{j\ge 1}j\mathbb{P}\big\{{\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )=j\big\}\frac{2j}{\mathbb{E}[({\lambda _{p}}(\xi )\wedge j)]},\end{aligned}\]
where the last estimate is a consequence of Erickson’s inequality for renewal functions, see Eq. (6.5) in [6]. Further, since for $p\in {\mathcal{P}_{1}}(n)$,
\[ \mathbb{E}\big[\big({\lambda _{p}}(\xi )\wedge j\big)\big]\ge \mathbb{P}\big\{{\lambda _{p}}(\xi )\ge 1\big\}=\mathbb{P}\big\{{\lambda _{p}}(\xi )\gt 0\big\}\ge {n^{-1/2}},\]
we obtain
\[\begin{aligned}{}& \mathbb{P}\bigg\{\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\log p\gt \varepsilon \sqrt{n}/2\bigg\}\\ {} & \hspace{1em}\le \frac{4}{\varepsilon }\sum \limits_{p\in {\mathcal{P}_{1}}(n),p\gt M}\log p\mathbb{E}\big[{\big({\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\big)^{2}}\big]\\ {} & \hspace{1em}\le \frac{4}{\varepsilon }\sum \limits_{p\in \mathcal{P},p\gt M}\log p\mathbb{E}\big[{\big({\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\big)^{2}}\big].\end{aligned}\]
The right-hand side converges to 0, as $M\to \infty $ by (17). For the sum over ${\mathcal{P}_{2}}(n)$ the derivation is simpler. By Markov’s inequality
\[\begin{aligned}{}& \mathbb{P}\bigg\{\sum \limits_{p\in {\mathcal{P}_{2}}(n),p\gt M}\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\log p\gt \varepsilon \sqrt{n}/2\bigg\}\\ {} & \hspace{1em}\le \frac{2}{\varepsilon \sqrt{n}}\mathbb{E}\bigg[\sum \limits_{p\in {\mathcal{P}_{2}}(n),p\gt M}\underset{1\le k\lt n}{\max }{\big({\lambda _{p}}({\eta _{k}})-{\lambda _{p}}({\xi _{k}})-{S_{k-1}}(p)\big)^{+}}\log p\bigg]\\ {} & \hspace{1em}\le \frac{2n}{\varepsilon \sqrt{n}}\mathbb{E}\bigg[\sum \limits_{p\in {\mathcal{P}_{2}}(n),p\gt M}{\big({\lambda _{p}}(\eta )-{\lambda _{p}}(\xi )\big)^{+}}\log p\bigg],\end{aligned}\]
and the right-hand side tends to zero as $n\to \infty $ in view of (18). The proof is complete.
5 Proof of Theorem 9
From Theorem 6 with the aid of the continuous mapping theorem we conclude that
\[ {\bigg(\frac{{\textstyle\sum _{p\in {\mathcal{P}_{0}}}}{\max _{1\le k\le \lfloor tu\rfloor }}{T_{k}}(p)\log p}{a(t)}\bigg)_{u\ge 0}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}{\bigg(\sum \limits_{p\in {\mathcal{P}_{0}}}{M_{p}}(u)\log p\bigg)_{u\ge 0}},\]
as $t\to \infty $. It suffices to check
(38)
\[ {\bigg(\frac{{\textstyle\sum _{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}}{\max _{1\le k\le \lfloor tu\rfloor }}{T_{k}}(p)\log p}{a(t)}\bigg)_{u\ge 0}}\hspace{2.5pt}\stackrel{\mathrm{f}.\mathrm{d}.\mathrm{d}.}{\longrightarrow }\hspace{2.5pt}0,\hspace{1em}t\to \infty .\]
Since $(a(t))$ is regularly varying at infinity, (38) follows from
(39)
\[ \frac{{\textstyle\sum _{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}}\mathbb{E}[{\max _{1\le k\le n}}{T_{k}}(p)]\log p}{a(n)}\hspace{2.5pt}\to \hspace{2.5pt}0,\hspace{1em}n\to \infty ,\]
by Markov’s inequality. To check the latter, note that
\[\begin{aligned}{}& \sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\Big[\underset{1\le k\le n}{\max }{T_{k}}(p)\Big]\log p\le \sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\Big[{S_{n-1}}(p)+\underset{1\le k\le n}{\max }{\lambda _{p}}({\eta _{k}})\Big]\log p\\ {} & \hspace{1em}\le (n-1)\sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\big[{\lambda _{p}}(\xi )\big]\log p+n\sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p\\ {} & \hspace{1em}\le (n-1)\mathbb{E}[\log \xi ]+n\sum \limits_{p\in \mathcal{P}\setminus {\mathcal{P}_{0}}}\mathbb{E}\big[{\lambda _{p}}(\eta )\big]\log p=O(n),\hspace{1em}n\to \infty ,\end{aligned}\]
where we have used the inequality $\mathbb{E}[\log \xi ]\lt \infty $ and the assumption (21). Using that $\alpha \in (0,1)$ and $(a(t))$ is regularly varying at infinity with index $1/\alpha $, we obtain (39).