Modern Stochastics: Theory and Applications


Asymptotic results for families of random variables having power series distributions
Volume 9, Issue 2 (2022), pp. 207–228
Claudio Macci, Barbara Pacchiarotti, Elena Villa

https://doi.org/10.15559/21-VMSTA198
Pub. online: 3 February 2022      Type: Research Article      Open Access

Received: 13 September 2021
Revised: 6 December 2021
Accepted: 25 December 2021
Published: 3 February 2022

Abstract

Suitable families of random variables having power series distributions are considered, and their asymptotic behavior in terms of large (and moderate) deviations is studied. Two examples of fractional counting processes are presented, where the normalizations of the involved power series distributions can be expressed in terms of the Prabhakar function. The first example allows us to consider the counting process in [Integral Transforms Spec. Funct. 27 (2016), 783–793]; the second one is inspired by a model studied in [J. Appl. Probab. 52 (2015), 18–36].

1 Introduction

Several discrete distributions in probability theory concern nonnegative integer-valued random variables. A random variable X has a power series distribution if
\[ P(X=k)=\frac{{d_{k}}{\delta ^{k}}}{D(\delta )}\hspace{2.5pt}\text{for each integer}\hspace{2.5pt}k\ge 0,\]
where $\delta >0$ is called the power parameter, $\{{d_{k}}:k\ge 0\}$ is a family of nonnegative numbers, and the normalization $D(\delta ):={\textstyle\sum _{k\ge 0}}{d_{k}}{\delta ^{k}}\in (0,\infty )$ is called the series function. Typically, analytical properties of the series function $D(\cdot )$ can be related to some statistical properties of the power series distribution. Moreover, the probability generating function of a power series distributed random variable X can be easily expressed in terms of the function D; in fact we have
\[ \mathbb{E}[{u^{X}}]=\frac{D(u\delta )}{D(\delta )}\hspace{2.5pt}\text{for all}\hspace{2.5pt}u>0.\]
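As an illustrative sanity check of this identity, one can take the geometric distribution (an assumption for illustration, not an example from this paper): it is the power series distribution with ${d_{k}}=1$ and series function $D(\delta )=1/(1-\delta )$ for $\delta \in (0,1)$. The following Python sketch compares the probability generating function computed from the probability mass function with the ratio $D(u\delta )/D(\delta )$, for u with $u\delta <1$.

```python
import math

# Geometric example: d_k = 1 gives the series function D(delta) = 1/(1 - delta)
# for delta in (0, 1), i.e. P(X = k) = (1 - delta) * delta**k.
def D(delta):
    return 1.0 / (1.0 - delta)

def pmf(k, delta):
    return delta ** k / D(delta)

delta, u = 0.4, 0.7
# Probability generating function from the pmf (truncated series) ...
pgf_series = sum(u ** k * pmf(k, delta) for k in range(200))
# ... and from the identity E[u^X] = D(u * delta) / D(delta)
pgf_identity = D(u * delta) / D(delta)
assert abs(pgf_series - pgf_identity) < 1e-12
```

The truncation at 200 terms is harmless here because the summands decay geometrically in $u\delta $.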
The reference [19] made great advances in the theory of power series distributions. Another important contribution was given by the modified power series distributions in [14], which include distributions derived from Lagrangian expansions (see, e.g., [4]). Other more recent references on these distributions concern some families which contain the geometric distribution as a particular case: the generalized hypergeometric family, the q-series family and the Lerch family. Among the references on the Lerch family we recall [13] and [15]; see also [17] as a reference on the related Hurwitz–Lerch zeta function.
In this paper we consider a family of random variables $\{N(t):t\ge 0\}$, whose univariate marginal distributions are expressed in terms of a family of power series distributions $\{{\mathcal{P}_{j}}:j\ge 0\}$ with power parameter δ; moreover, for all $j\ge 0$, we set $\delta :={\delta _{j}}(t)$ for some functions $\{{\delta _{j}}(\cdot ):j\ge 0\}$. A precise definition is given at the beginning of Section 3 (some assumptions are needed and they are collected in Condition 1) and it is a generalization of the basic model with a unique power series distribution, i.e. the case with
\[ P(N(t)=k)=\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}\hspace{2.5pt}\text{for each integer}\hspace{2.5pt}k\ge 0\]
for some coefficients $\{{d_{k}}:k\ge 0\}$, a series function $D(\cdot )$ and a function $\delta (\cdot )$.
We recall that, in the basic model, we deal with suitable weighted Poisson distributed random variables; in fact, for each integer $k\ge 0$, we have
\[ P(N(t)=k)=\frac{w(k)\frac{{(\lambda \delta (t))^{k}}}{k!}{e^{-\lambda \delta (t)}}}{{\textstyle\sum _{j\ge 0}}w(j)\frac{{(\lambda \delta (t))^{j}}}{j!}{e^{-\lambda \delta (t)}}},\hspace{2.5pt}\text{with}\hspace{2.5pt}w(k)=\frac{k!}{{\lambda ^{k}}}{d_{k}}.\]
This kind of structure was already highlighted in [1, Section 4] for the case $\delta (t)={t^{\nu }}$ and ${d_{k}}=\frac{{\lambda ^{k}}}{\Gamma (\nu k+1)}$ (for some $\nu \in (0,1]$), and therefore $w(k)=\frac{k!}{\Gamma (\nu k+1)}$. Weighted Poisson distributions are often related to the concepts of overdispersion and underdispersion; for some insights on this topic see, e.g., [5, 6], the recent paper [3] and references therein.
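For instance, with the choices of [1] recalled above ($\delta (t)={t^{\nu }}$ and ${d_{k}}=\frac{{\lambda ^{k}}}{\Gamma (\nu k+1)}$), the equality between the power series form and the weighted Poisson form of the probability mass function can be checked numerically; the parameter values below are arbitrary illustrative choices.

```python
import math

# Check that the power series form and the weighted Poisson form coincide for
# d_k = lam**k / Gamma(nu*k + 1), delta(t) = t**nu, hence w(k) = k!/Gamma(nu*k + 1).
lam, nu, t, K = 1.3, 0.5, 2.0, 120   # illustrative values
delta = t ** nu

# Power series form: P(N(t) = k) proportional to d_k * delta**k
ps = [lam ** k * delta ** k / math.gamma(nu * k + 1) for k in range(K)]
# Weighted Poisson form: proportional to w(k) * (lam*delta)**k / k! * exp(-lam*delta)
wp = [(math.factorial(k) / math.gamma(nu * k + 1))
      * (lam * delta) ** k / math.factorial(k) * math.exp(-lam * delta)
      for k in range(K)]

ps_pmf = [p / sum(ps) for p in ps]
wp_pmf = [p / sum(wp) for p in wp]
assert max(abs(a - b) for a, b in zip(ps_pmf, wp_pmf)) < 1e-12
```

The factor ${e^{-\lambda \delta (t)}}$ cancels in the normalization, which is why the two normalized vectors agree up to floating-point roundoff.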
Our main results in the present paper concern large (and moderate) deviations as $t\to \infty $ for the above mentioned general family $\{N(t):t\ge 0\}$. We recall that the theory of large deviations deals with the asymptotic computation of small probabilities on an exponential scale (see, e.g., [7] as a reference on this topic).
To the best of our knowledge, there are no similar results in the literature for power series distributions; therefore we think that our general results may find applications in several models and may lead to further investigations. In this paper we apply the results to different classes of fractional counting processes found in the literature, where the function $D(\cdot )$ can be expressed in terms of the Prabhakar function (the definition of this function will be recalled in Section 2.2). Namely, in Section 4 we shall present two particular examples. The first one is related to the fractional process in [20], and it allows us to generalize some large deviation results in the current literature, as discussed at the end of Section 4.1. The second one is related to a fractional process in [10], and we discuss a class of cases for which certain conditions on some involved parameters fail.
We also point out that the model in [20] is a particular case of the family in [3, Section 3.1, Eq. (48)], where the weights are expressed by a ratio of Gamma functions; thus, as a possible future work, one might try to investigate a wider class of models defined by suitable generalizations of the Prabhakar function.
We conclude the introduction with the outline of the paper. We start with some preliminaries in Section 2. In Section 3 we give a precise definition of the model, and we prove the results. Finally, in Section 4, we apply our results to some examples of fractional counting processes found in the literature.

2 Preliminaries

In this section we start with some preliminaries on large deviations. Moreover, in view of the examples presented in Section 4, we also collect some preliminaries on certain special functions.

2.1 On large deviations

We start with the definition of the large deviation principle (LDP from now on). In view of what follows, our presentation concerns the case $t\to \infty $; moreover, for simplicity, we refer to a family of real-valued random variables $\{{X_{t}}:t>0\}$ defined on the same probability space $(\Omega ,\mathcal{F},P)$.
A lower semicontinuous function $I:\mathbb{R}\to [0,\infty ]$ is called a rate function, and it is said to be good if all its level sets $\{\{x\in \mathbb{R}:I(x)\le \eta \}:\eta \ge 0\}$ are compact. Then $\{{X_{t}}:t>0\}$ satisfies the LDP with speed ${v_{t}}\to \infty $ and rate function I if
\[ \underset{t\to \infty }{\limsup }\frac{1}{{v_{t}}}\log P({X_{t}}\in C)\le -\underset{x\in C}{\inf }I(x)\hspace{2.5pt}\text{for all closed sets}\hspace{2.5pt}C\]
and
\[ \underset{t\to \infty }{\liminf }\frac{1}{{v_{t}}}\log P({X_{t}}\in O)\ge -\underset{x\in O}{\inf }I(x)\hspace{2.5pt}\text{for all open sets}\hspace{2.5pt}O.\]
The notion of moderate deviations is used when we have a class of LDPs for families of centered (or asymptotically centered) random variables which depend on some positive scaling factors $\{a(t):t>0\}$ such that
(1)
\[ a(t)\to 0\hspace{2.5pt}\text{and}\hspace{2.5pt}{v_{t}}a(t)\to \infty \hspace{2.5pt}\text{as}\hspace{2.5pt}t\to \infty \]
and, moreover, all these LDPs (whose speed functions depend on the scaling factors) are governed by the same quadratic rate function vanishing at zero. We can also say that, as typically happens, this class of LDPs fills the gap between convergence to zero and asymptotic normality (see Remark 4).
The main tool for large deviations used in this paper is the Gärtner–Ellis theorem (see, e.g., [7, Theorem 2.3.6]; actually we only need statement (c)), and here we recall its statement for real-valued random variables. In view of this, we also recall that a convex function $f:\mathbb{R}\to (-\infty ,\infty ]$ is essentially smooth (see, e.g., [7, Definition 2.3.5]) if the interior of ${\mathcal{D}_{f}}:=\{\theta \in \mathbb{R}:f(\theta )<\infty \}$ is nonempty, f is differentiable throughout the interior of ${\mathcal{D}_{f}}$, and f is steep (i.e. $|{f^{\prime }}(t)|$ diverges as t approaches any finite point of the boundary of ${\mathcal{D}_{f}}$). In our applications the function f is always finite everywhere and differentiable; therefore f is essentially smooth because the steepness condition holds vacuously.
Theorem 1.
Let $\{{X_{t}}:t>0\}$ be a family of real-valued random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$ and let ${v_{t}}$ be such that ${v_{t}}\to \infty $. Moreover assume that, for all $\theta \in \mathbb{R}$, there exists
\[ f(\theta ):=\underset{t\to \infty }{\lim }\frac{1}{{v_{t}}}\log \mathbb{E}\left[{e^{\theta {X_{t}}}}\right]\]
as an extended real number; we also assume that the origin $\theta =0$ belongs to the interior of the set $\mathcal{D}(f):=\{\theta \in \mathbb{R}:f(\theta )<\infty \}$. Then, if f is essentially smooth and lower semicontinuous, the family of random variables $\{{X_{t}}/{v_{t}}:t>0\}$ satisfies the LDP with speed ${v_{t}}$ and good rate function ${f^{\ast }}$ defined by ${f^{\ast }}(x):={\sup _{\theta \in \mathbb{R}}}\{\theta x-f(\theta )\}$.
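Theorem 1 can be made concrete with a toy example outside this paper: if ${X_{t}}$ is Poisson distributed with mean t and ${v_{t}}=t$, then $\frac{1}{{v_{t}}}\log \mathbb{E}[{e^{\theta {X_{t}}}}]={e^{\theta }}-1$ for every t, so $f(\theta )={e^{\theta }}-1$ and ${f^{\ast }}(x)=x\log x-x+1$ for $x>0$. The following sketch evaluates the normalized logarithmic moment generating function from a truncated series.

```python
import math

# Toy example: X_t ~ Poisson(t) with v_t = t, for which
# (1/v_t) log E[e^{theta X_t}] = e^theta - 1 exactly, for every t.
def scaled_cgf(theta, t, K=500):
    # (1/t) * log( sum_k e^{theta k} t^k e^{-t} / k! ), truncated at K terms,
    # summed stably in log space via the log-sum-exp trick
    log_terms = [theta * k + k * math.log(t) - t - math.lgamma(k + 1) for k in range(K)]
    m = max(log_terms)
    return (m + math.log(sum(math.exp(lt - m) for lt in log_terms))) / t

assert abs(scaled_cgf(0.5, 40.0) - (math.exp(0.5) - 1.0)) < 1e-9
```

Here the limit in the theorem is attained for every finite t, which makes the Poisson case a convenient benchmark.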

2.2 On special functions for some fractional counting processes

In this paper, for $\alpha \in (0,1]$ and $\beta ,\gamma >0$, we consider the Prabhakar function ${E_{\alpha ,\beta }^{\gamma }}(\cdot )$ defined by
\[ {E_{\alpha ,\beta }^{\gamma }}(u):=\sum \limits_{k\ge 0}\frac{{u^{k}}{(\gamma )_{k}}}{k!\Gamma (\alpha k+\beta )}\hspace{2.5pt}(\text{for}\hspace{2.5pt}u\in \mathbb{R}),\]
where
\[ {(\gamma )_{k}}:=\left\{\begin{array}{l@{\hskip10.0pt}l}1& \hspace{2.5pt}\text{if}\hspace{2.5pt}k=0,\\ {} \gamma (\gamma +1)\cdots (\gamma +k-1)& \hspace{2.5pt}\text{if}\hspace{2.5pt}k\ge 1\end{array}\right.\]
is the rising factorial (Pochhammer symbol). The Prabhakar function is also known as the Mittag-Leffler function with three parameters; the Mittag-Leffler function with two parameters concerns the case $\gamma =1$, and the classical Mittag-Leffler function concerns the case $\beta =\gamma =1$. Here we are interested in the case of a positive argument u and we refer to the asymptotic behavior of ${E_{\alpha ,\beta }^{\gamma }}(\cdot )$ as the argument tends to infinity (see, e.g., [11, page 23], which concerns a result in [18] where the argument z of ${E_{\alpha ,\beta }^{\gamma }}(\cdot )$ is complex; obviously we are interested in the case $|\arg (z)|<\frac{\alpha \pi }{2}$). In particular, for some $\omega (u)$ such that $\omega (u)\to 0$ as $u\to \infty $, we have
(2)
\[ {E_{\alpha ,\beta }^{\gamma }}(u)=\frac{1}{\Gamma (\gamma )}{e^{{u^{1/\alpha }}}}{u^{\frac{\gamma -\beta }{\alpha }}}\frac{1}{{\alpha ^{\gamma }}}\sum \limits_{k\ge 0}{c_{k}}{u^{-\frac{k}{\alpha }}}(1+\omega (u)),\]
where the coefficients $\{{c_{k}}:k\ge 0\}$ are obtained by a suitable inverse factorial expansion. Moreover, when we shall present the first application of our results to some fractional counting processes (see Section 4.1), we shall restrict our attention to the case of a positive integer γ; then we refer to [11, Eq. (4.4)], i.e.
(3)
\[ {E_{\alpha ,\beta }^{\gamma +1}}(u)=\frac{1}{{\alpha ^{\gamma }}\gamma !}{\sum \limits_{j=0}^{\gamma }}{d_{j,\alpha ,\beta }^{(\gamma )}}{E_{\alpha ,\beta -j}^{1}}(u)\]
for some coefficients $\{{d_{j,\alpha ,\beta }^{(\gamma )}}:j\in \{0,1,\dots ,\gamma \}\}$ defined by a recursive expression provided by [11, Eq. (4.6)], and in particular we have ${d_{\gamma ,\alpha ,\beta }^{(\gamma )}}=1$. We also recall the following asymptotic formula for the case $\gamma =1$ (see, e.g., [12, Eq. (4.4.16)]), i.e.
(4)
\[ {E_{\alpha ,\beta }^{1}}(u)=\frac{1}{\alpha }{u^{\frac{1-\beta }{\alpha }}}{e^{{u^{1/\alpha }}}}+O(1/u)\hspace{2.5pt}\text{as}\hspace{2.5pt}u\to \infty .\]
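A direct way to get acquainted with these formulas is to evaluate the defining series numerically (here via `math.lgamma`, to avoid intermediate overflow); the sketch below checks that ${E_{1,1}^{1}}(u)={e^{u}}$ and that the leading term in Eq. (4) is already accurate for a moderately large argument. The parameter values are arbitrary illustrative choices.

```python
import math

def prabhakar(alpha, beta, gamma, u, terms=400):
    # Truncated series for the Prabhakar (three-parameter Mittag-Leffler) function;
    # each term is computed through lgamma to avoid intermediate overflow.
    total = 0.0
    for k in range(terms):
        log_term = (k * math.log(u) + math.lgamma(gamma + k) - math.lgamma(gamma)
                    - math.lgamma(k + 1) - math.lgamma(alpha * k + beta))
        total += math.exp(log_term)
    return total

# Special case: for alpha = beta = gamma = 1 the series collapses to exp(u)
assert abs(prabhakar(1.0, 1.0, 1.0, 2.0) - math.exp(2.0)) < 1e-9
# Another classical identity: E_{1,2}^1(u) = (e^u - 1)/u
assert abs(prabhakar(1.0, 2.0, 1.0, 3.0) - (math.exp(3.0) - 1.0) / 3.0) < 1e-9
# Leading term of Eq. (4) for gamma = 1, at a moderately large argument
alpha, beta, u = 0.5, 1.5, 6.0
leading = (1.0 / alpha) * u ** ((1.0 - beta) / alpha) * math.exp(u ** (1.0 / alpha))
assert abs(prabhakar(alpha, beta, 1.0, u) / leading - 1.0) < 1e-3
```

The last check reflects the fact that the neglected terms in Eq. (4) are only algebraically small, while the leading term is exponentially large.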
As we shall see, the Prabhakar function plays an important role in the examples presented in Section 4 based on some fractional counting processes found in the literature. We also mention that a different use of the Prabhakar function can be found in [9] for the definition of a new class of Lévy processes (called Prabhakar Lévy processes).

3 Model and results

We consider a family of power series distributions $\{{\mathcal{P}_{j}}:j\ge 0\}$ such that, for each $j\ge 0$, ${\mathcal{P}_{j}}$ has the probability mass function
\[ {p_{j}}(k):=\frac{{d_{k,j}}{\delta ^{k}}}{{D_{j}}(\delta )}\hspace{2.5pt}\text{for each integer}\hspace{2.5pt}k\ge 0,\]
where $\delta >0$, and $\{{d_{k,j}}:k\ge 0\}$ is a sequence of nonnegative numbers such that
\[ {D_{j}}(\delta ):=\sum \limits_{k\ge 0}{d_{k,j}}{\delta ^{k}}\in (0,\infty ).\]
Then we consider a family of random variables $\{N(t):t\ge 0\}$ whose probability mass functions depend on $\{{d_{k,k}}:k\ge 0\}$ only; more precisely, assuming that
\[ \sum \limits_{j\ge 0}\frac{{d_{j,j}}{\delta ^{j}}}{{D_{j}}(\delta )}\in (0,\infty )\hspace{2.5pt}\text{for all}\hspace{2.5pt}\delta >0,\]
we have
\[ P(N(t)=k):=\frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{{\textstyle\sum _{j\ge 0}}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}}\hspace{2.5pt}\text{for each integer}\hspace{2.5pt}k\ge 0,\]
and ${\delta _{j}}(t)\to \infty $ as $t\to \infty $ (for all $j\ge 0$).
Remark 1.
In the next Section 4 we show how our results can be applied to some stochastic processes found in the literature. Actually the authors of these works use the term “stochastic process” even if they do not specify the joint distributions of the involved random variables. In our results we do not need to define the joint distributions of the random variables $\{N(t):t\ge 0\}$; indeed we only need to consider their univariate marginal distributions and, for this reason, we use the term “family of random variables”. In a possible future work one could investigate the possibility of defining all the finite-dimensional distributions in order to prove sample-path large deviation results.
In our results some hypotheses are needed, and they are collected in the next Condition 1.
Condition 1.
We consider the following hypotheses.
  • $(\mathbf{B}\mathbf{1})$: There exists $n\ge 0$ such that the elements of both sequences $\{{\mathcal{P}_{j}}:j\ge 0\}$ and $\{{\delta _{j}}(\cdot ):j\ge 0\}$ do not depend on $j\ge n$; in particular, for $j\ge n$, we simply write ${d_{k}}$ in place of ${d_{k,j}}$, $D(\cdot )$ in place of ${D_{j}}(\cdot )$, and $\delta (\cdot )$ in place of ${\delta _{j}}(\cdot )$. Moreover we assume that there exist two functions $v:(0,\infty )\to (0,\infty )$ and $\Delta :(0,\infty )\to \mathbb{R}$ such that $v(t)\to \infty $ as $t\to \infty $,
    (5)
    \[ \underset{t\to \infty }{\lim }\frac{1}{v(t)}\log D(ut)=\Delta (u)\hspace{2.5pt}\textit{for all}\hspace{2.5pt}u>0,\]
    and $\Delta (\cdot )$ is a differentiable function.
  • $(\mathbf{B}\mathbf{2})$: The set $\{k\ge 0:{d_{k,k}}>0\}$ is unbounded; thus, since ${d_{k,k}}={d_{k}}$ for all $k\ge n$, the set $\{k\ge 0:{d_{k}}>0\}$ is unbounded as well.
  • $(\mathbf{B}\mathbf{3})$: For all $k\in \{0,1,\dots ,n-1\}$ we have:
    (6)
    \[ \underset{t\to \infty }{\lim }\frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}}=0\hspace{2.5pt}\textit{if}\hspace{2.5pt}{d_{k}}>0,\]
    and ${d_{k,k}}=0$ if ${d_{k}}=0$.
Firstly we note that the function $\Delta (\cdot )$ is increasing. Moreover, if $n=0$, we have the basic model with a unique power series distribution and, in particular, $(\mathbf{B}\mathbf{3})$ holds vacuously (because the set $\{0,1,\dots ,n-1\}$ in $(\mathbf{B}\mathbf{3})$ is empty).
Remark 2.
Here we illustrate two consequences of $(\mathbf{B}\mathbf{2})$ and $(\mathbf{B}\mathbf{3})$ in Condition 1. Firstly, $(\mathbf{B}\mathbf{2})$ allows us to avoid the case $P(0\le N(t)\le M)=1$ (for all $t\ge 0$) for some $M\in (0,\infty )$; in fact, in such a case, it is easy to check that the results proved below hold with $\Lambda (\theta )=0$ for all $\theta \in \mathbb{R}$, where Λ is the function defined in Eq. (11). Moreover $(\mathbf{B}\mathbf{2})$ and $(\mathbf{B}\mathbf{3})$ yield
(7)
\[ \underset{t\to \infty }{\lim }\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=0\hspace{2.5pt}\text{and}\hspace{2.5pt}\underset{t\to \infty }{\lim }\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}=0\hspace{2.5pt}\text{for all}\hspace{2.5pt}k\ge 0.\]
In fact, for all $k\ge 0$, there exists $h>k$ such that ${d_{h}}>0$, and therefore
\[ 0\le \frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}\le \frac{{d_{k}}{(\delta (t))^{k}}}{{d_{h}}{(\delta (t))^{h}}}\to 0\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty );\]
moreover the first limit in Eq. (7) follows from the second one in Eq. (7) together with $(\mathbf{B}\mathbf{3})$.
We also briefly discuss some particular cases concerning the functions $v(\cdot )$ and $\Delta (\cdot )$ in Eq. (5).
Remark 3.
Assume that there exists
(8)
\[ \underset{t\to \infty }{\lim }\frac{v(ut)}{v(t)}=:\bar{v}(u)\hspace{2.5pt}\text{for all}\hspace{2.5pt}u>0\]
as a finite limit; then the limit in Eq. (5) needs to be verified only for $u=1$, and we have
\[ \Delta (u)=\Delta (1)\bar{v}(u)\hspace{2.5pt}\text{for all}\hspace{2.5pt}u>0.\]
In particular, if $v(\cdot )$ is a regularly varying function of index $\varrho >0$ (see, e.g., [8, Definition A3.1(b)]), the limit in Eq. (8) holds with $\bar{v}(u)={u^{\varrho }}$. On the other hand, if $v(\cdot )$ is a slowly varying function (see, e.g., [8, Definition A3.1(a)]), the limit in Eq. (8) holds with $\bar{v}(u)=1$ (and this case is not interesting).
In view of what follows, it is useful to introduce the following notation:
(9)
\[ {R_{n}}(u,t):=\left\{\begin{array}{l@{\hskip10.0pt}l}0& \hspace{2.5pt}\text{if}\hspace{2.5pt}n=0,\\ {} {\textstyle\textstyle\sum _{k=0}^{n-1}}\left(\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}-\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}\right)& \hspace{2.5pt}\text{if}\hspace{2.5pt}n\ge 1,\end{array}\right.\]
where n is the value in Condition 1. Then we have
(10)
\[ \underset{t\to \infty }{\lim }{R_{n}}(u,t)=0;\]
this is trivial if $n=0$ and, if $n\ge 1$, this is a consequence of Eq. (7) (for $k\in \{0,1,\dots ,n-1\}$).
We start with the first result.
Proposition 1.
Assume that Condition 1 holds. Then $\left\{\frac{N(t)}{v(\delta (t))}:t>0\right\}$ satisfies the LDP with speed $v(\delta (t))$ and good rate function ${\Lambda ^{\ast }}$ defined by
(11)
\[ {\Lambda ^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-\Lambda (\theta )\},\hspace{2.5pt}\textit{where}\hspace{2.5pt}\Lambda (\theta ):=\Delta ({e^{\theta }})-\Delta (1).\]
Proof.
We want to apply the Gärtner–Ellis theorem (Theorem 1). In order to do this we remark that, for all $\theta \in \mathbb{R}$, we have
\[\begin{aligned}{}\frac{1}{v(\delta (t))}& \log \mathbb{E}\left[{e^{\theta N(t)}}\right]\\ {} =& \frac{1}{v(\delta (t))}\log \frac{{\textstyle\sum _{k\ge 0}}\frac{{d_{k,k}}{({e^{\theta }}{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{{\textstyle\sum _{j\ge 0}}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}}\\ {} =& \frac{1}{v(\delta (t))}\log \sum \limits_{k\ge 0}\frac{{d_{k,k}}{({e^{\theta }}{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}-\frac{1}{v(\delta (t))}\log \sum \limits_{j\ge 0}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}\\ {} =& \frac{1}{v(\delta (t))}\log \left(D(\delta (t))\sum \limits_{k\ge 0}\frac{{d_{k,k}}{({e^{\theta }}{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right)\\ {} & \hspace{2.5pt}-\frac{1}{v(\delta (t))}\log \left(D(\delta (t))\sum \limits_{j\ge 0}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}\right).\end{aligned}\]
So, if we prove that
(12)
\[ \underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \left(D(\delta (t))\sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right)=\Delta (u)\hspace{2.5pt}\text{for all}\hspace{2.5pt}u>0,\]
where $\Delta (\cdot )$ is the function in Condition 1, then the limit in Eq. (12) with $u={e^{\theta }}$ and $u=1$ yields
(13)
\[ \underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \mathbb{E}\left[{e^{\theta N(t)}}\right]=\Delta ({e^{\theta }})-\Delta (1)=\Lambda (\theta )\hspace{2.5pt}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R},\]
where $\Lambda (\cdot )$ is the function in Eq. (11). Then the desired LDP holds as a straightforward application of Theorem 1.
So in the remaining part of the proof we show that the limit in Eq. (12) holds. This will be done by considering $n\ge 1$; actually, for $n=0$, the same computations work and some parts are even simpler. Firstly, if we consider the function ${R_{n}}(u,t)$ defined in Eq. (9), for all $u>0$ we have
(14)
\[ \sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=\frac{D(u\delta (t))}{D(\delta (t))}+{R_{n}}(u,t);\]
in fact we have
\[\begin{aligned}{}\sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=& {\sum \limits_{k=0}^{n-1}}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}+\sum \limits_{k\ge n}\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}\\ {} =& \sum \limits_{k\ge 0}\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}+{R_{n}}(u,t),\end{aligned}\]
and we get Eq. (14) by taking into account the definition of $D(\cdot )$. Then Eq. (14) yields
\[\begin{aligned}{}D(\delta (t))\sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=& D(u\delta (t))+{R_{n}}(u,t)D(\delta (t))\\ {} =& D(u\delta (t))\left(1+{R_{n}}(u,t)\frac{D(\delta (t))}{D(u\delta (t))}\right);\end{aligned}\]
thus (if we take the logarithms, divide by $v(\delta (t))$ and let t go to infinity) the limit in Eq. (12) holds if we show that
(15)
\[ \underset{t\to \infty }{\lim }{R_{n}}(u,t)\frac{D(\delta (t))}{D(u\delta (t))}=0\hspace{2.5pt}\text{for all}\hspace{2.5pt}u>0.\]
So we complete the proof by showing that the limit in Eq. (15) holds. In fact, we have
\[\begin{aligned}{}{R_{n}}(u,t)\frac{D(\delta (t))}{D(u\delta (t))}=& {\sum \limits_{k=0}^{n-1}}\left(\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}-\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}\right)\frac{D(\delta (t))}{D(u\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left(\frac{\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}}\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}-\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}\right)\frac{D(\delta (t))}{D(u\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left(\frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}}-1\right)\frac{{d_{k}}{(u\delta (t))^{k}}}{D(\delta (t))}\frac{D(\delta (t))}{D(u\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left(\frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}}-1\right)\frac{{d_{k}}{(u\delta (t))^{k}}}{D(u\delta (t))},\end{aligned}\]
and the desired limit in Eq. (15) holds by the limit in Eq. (6), and by the second limit in Eq. (7) (here we have $u\delta (t)$ instead of $\delta (t)$, and that limit still holds).  □
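To illustrate Proposition 1 concretely, take the hypothetical choice $\Delta (u)={u^{2}}$ (of the form $\Delta (u)=\Delta (1){u^{\varrho }}$ of Remark 3 with $\varrho =2$, not one of the examples of this paper), so that $\Lambda (\theta )={e^{2\theta }}-1$; the sketch below evaluates ${\Lambda ^{\ast }}$ by maximizing over a grid and checks that it vanishes at the law-of-large-numbers value ${\Lambda ^{\prime }}(0)=2$.

```python
import math

# Hypothetical choice (not taken from the paper): Delta(u) = u**2,
# so Lambda(theta) = Delta(e^theta) - Delta(1) = e^{2 theta} - 1.
Delta = lambda u: u ** 2
Lambda = lambda theta: Delta(math.exp(theta)) - Delta(1.0)

def rate(x, lo=-10.0, hi=10.0, n=200001):
    # Lambda*(x) = sup_theta { theta * x - Lambda(theta) }, maximized over a grid
    step = (hi - lo) / (n - 1)
    return max((lo + i * step) * x - Lambda(lo + i * step) for i in range(n))

# The good rate function vanishes at the limit value Lambda'(0) = 2 ...
assert abs(rate(2.0)) < 1e-6
# ... and here it has the closed form (x/2) log(x/2) - x/2 + 1 for x > 0
assert abs(rate(4.0) - (2.0 * math.log(2.0) - 1.0)) < 1e-3
```

The grid maximization is a rough substitute for the exact Legendre transform, but it is adequate because the maximizer here lies well inside the grid.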
Now we study moderate deviations. More precisely, we prove a class of LDPs which depends on any possible choice of positive numbers $\{a(t):t>0\}$ such that (1) holds with ${v_{t}}=v(\delta (t))$, which is the speed in Proposition 1. We remark that the quantity ${\Lambda ^{\prime\prime }}(0)$ appearing below (in Proposition 2 and Remark 4) cannot be negative; in fact, as we have seen in the proof of Proposition 1, the function Λ is the pointwise limit of logarithms of moment generating functions, which are convex functions (see, e.g., [7, Lemma 2.2.5(a)]).
Proposition 2.
Assume that Condition 1 holds and, if we refer to the function $\Delta (\cdot )$ in that condition, let $\Lambda (\cdot )$ be the function in Eq. (11). Assume that there exists ${\Delta ^{\prime\prime }}(1)$, and therefore there exists ${\Lambda ^{\prime\prime }}(0)$. Moreover assume that, for $D(\cdot )$, $\delta (\cdot )$ and $v(\cdot )$ in Condition 1 (and for $\Lambda (\cdot )$ in Eq. (11)), the following conditions hold:
(16)
\[ \left.\begin{array}{l}\textit{if}\hspace{2.5pt}u(t)\to 1\hspace{2.5pt}\textit{as}\hspace{2.5pt}t\to \infty ,\hspace{2.5pt}\textit{then}\\ {} {H_{1}}(t):=\log \frac{D\left(u(t)\delta (t)\right)}{D(\delta (t))}-v(\delta (t))(\Delta (u(t))-\Delta (1))\hspace{2.5pt}\textit{is bounded};\end{array}\right.\]
(17)
\[ {H_{2}}(t):=\sqrt{v(\delta (t))}\left({\Lambda ^{\prime }}(0)-\frac{\delta (t){D^{\prime }}(\delta (t))}{v(\delta (t))D(\delta (t))}\right)\hspace{2.5pt}\textit{is bounded};\]
(18)
\[ {H_{3}}(t):=\frac{1}{\sqrt{v(\delta (t))}}\left(\frac{\delta (t){D^{\prime }}(\delta (t))}{D(\delta (t))}-\mathbb{E}[N(t)]\right)\hspace{2.5pt}\textit{is bounded}.\]
Then, for every choice of $\{a(t):t>0\}$ such that Eq. (1) holds with ${v_{t}}=v(\delta (t))$, $\left\{\frac{N(t)-\mathbb{E}[N(t)]}{v(\delta (t))}\sqrt{v(\delta (t))a(t)}:t>0\right\}$ satisfies the LDP with speed $1/a(t)$ and good rate function ${\tilde{\Lambda }^{\ast }}$ defined by
(19)
\[ {\tilde{\Lambda }^{\ast }}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{x^{2}}}{2{\Lambda ^{\prime\prime }}(0)}& \hspace{2.5pt}\textit{for}\hspace{2.5pt}{\Lambda ^{\prime\prime }}(0)>0,\\ {} \left\{\begin{array}{l@{\hskip10.0pt}l}0& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x=0,\\ {} \infty & \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ne 0,\end{array}\right.& \hspace{2.5pt}\textit{for}\hspace{2.5pt}{\Lambda ^{\prime\prime }}(0)=0.\end{array}\right.\]
Proof.
We want to apply the Gärtner–Ellis theorem (Theorem 1). So, in what follows, we show that
(20)
\[ \underset{t\to \infty }{\lim }\frac{1}{1/a(t)}\log \mathbb{E}\left[{e^{\frac{\theta }{a(t)}\frac{N(t)-\mathbb{E}[N(t)]}{v(\delta (t))}\sqrt{v(\delta (t))a(t)}}}\right]=\frac{{\theta ^{2}}}{2}{\Lambda ^{\prime\prime }}(0)=:\tilde{\Lambda }(\theta )\hspace{2.5pt}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R};\]
in fact it is easy to verify that
\[ {\tilde{\Lambda }^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-\tilde{\Lambda }(\theta )\}\]
coincides with the rate function in the statement of the proposition.
Firstly we observe that
\[\begin{aligned}{}{\Lambda _{t}}(\theta ):=& \frac{1}{1/a(t)}\log \mathbb{E}\left[{e^{\frac{\theta }{a(t)}\frac{N(t)-\mathbb{E}[N(t)]}{v(\delta (t))}\sqrt{v(\delta (t))a(t)}}}\right]\\ {} =& a(t)\left(\log \frac{{\textstyle\sum _{k\ge 0}}\frac{{d_{k,k}}{\left({e^{\frac{\theta }{\sqrt{v(\delta (t))a(t)}}}}{\delta _{k}}(t)\right)^{k}}}{{D_{k}}({\delta _{k}}(t))}}{{\textstyle\sum _{j\ge 0}}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}}-\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\mathbb{E}[N(t)]\right).\end{aligned}\]
Moreover we set $u(t):={e^{\frac{\theta }{\sqrt{v(\delta (t))a(t)}}}}$; in fact, by Eq. (1) with ${v_{t}}=v(\delta (t))$, we have $u(t)\to 1$ because $\sqrt{v(\delta (t))a(t)}\to \infty $. Then we can check that
\[ {\Lambda _{t}}(\theta )={A_{1}}(t)+{A_{2}}(t)+{A_{3}}(t),\]
where
\[\begin{aligned}{}{A_{1}}(t):=& a(t)\\ {} & \times \left(\log \frac{{\textstyle\sum _{k\ge 0}}\frac{{d_{k,k}}{\left({e^{\frac{\theta }{\sqrt{v(\delta (t))a(t)}}}}{\delta _{k}}(t)\right)^{k}}}{{D_{k}}({\delta _{k}}(t))}}{{\textstyle\sum _{j\ge 0}}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}}-v(\delta (t))\Lambda \left(\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right)\right)\\ {} =& a(t)\left(\log \frac{\frac{D(u(t)\delta (t))}{D(\delta (t))}+{R_{n}}(u(t),t)}{1+{R_{n}}(1,t)}-v(\delta (t))\Lambda \left(\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right)\right)\end{aligned}\]
(in the last equality we take into account Eq. (14) with $u=u(t)$ and $u=1$),
\[ {A_{2}}(t):=v(\delta (t))a(t)\left(\Lambda \left(\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right)-\frac{\theta }{v(\delta (t))\sqrt{v(\delta (t))a(t)}}\frac{\delta (t){D^{\prime }}(\delta (t))}{D(\delta (t))}\right),\]
and
\[ {A_{3}}(t):=a(t)\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\left(\frac{\delta (t){D^{\prime }}(\delta (t))}{D(\delta (t))}-\mathbb{E}[N(t)]\right).\]
So, if we refer to the function $\tilde{\Lambda }(\cdot )$ in Eq. (20), we complete the proof if we show that (for all $\theta \in \mathbb{R}$)
(21)
\[ \underset{t\to \infty }{\lim }{A_{1}}(t)=0,\hspace{2.5pt}\underset{t\to \infty }{\lim }{A_{2}}(t)=\tilde{\Lambda }(\theta ),\hspace{2.5pt}\underset{t\to \infty }{\lim }{A_{3}}(t)=0.\]
We start by considering ${H_{1}}(t)$ in Eq. (16), and we have
\[ {H_{1}}(t)=\log \frac{D(u(t)\delta (t))}{D(\delta (t))}-v(\delta (t))\Lambda \left(\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right)\]
by the definition of the function $\Lambda (\cdot )$ in Eq. (11) and by $u(t)={e^{\frac{\theta }{\sqrt{v(\delta (t))a(t)}}}}$. Then we can easily verify that
\[\begin{aligned}{}{A_{1}}(t)=& a(t){H_{1}}(t)+a(t)\left(\log \frac{\frac{D(u(t)\delta (t))}{D(\delta (t))}+{R_{n}}(u(t),t)}{1+{R_{n}}(1,t)}-\log \frac{D(u(t)\delta (t))}{D(\delta (t))}\right)\\ {} =& a(t){H_{1}}(t)+a(t)\log \left(1+{R_{n}}(u(t),t)\frac{D(\delta (t))}{D(u(t)\delta (t))}\right)\\ {} & -a(t)\log (1+{R_{n}}(1,t)),\end{aligned}\]
where, since $a(t)\to 0$, $a(t){H_{1}}(t)\to 0$ by Eq. (16), and $a(t)\log (1+{R_{n}}(1,t))\to 0$ by Eq. (10) with $u=1$. Moreover, we have
\[ \underset{t\to \infty }{\lim }{R_{n}}(u(t),t)\frac{D(\delta (t))}{D(u(t)\delta (t))}=0;\]
in fact this is trivial if $n=0$ and, if $n\ge 1$, we have
\[\begin{aligned}{}0\le & |{R_{n}}(u(t),t)|\frac{D(\delta (t))}{D(u(t)\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left|\frac{{d_{k,k}}{(u(t){\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}-\frac{{d_{k}}{(u(t)\delta (t))^{k}}}{D(\delta (t))}\right|\frac{D(\delta (t))}{D(u(t)\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left|\frac{\frac{{d_{k,k}}{(u(t){\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(u(t)\delta (t))^{k}}}{D(\delta (t))}}-1\right|\frac{{d_{k}}{(u(t)\delta (t))^{k}}}{D(u(t)\delta (t))}\\ {} =& {\sum \limits_{k=0}^{n-1}}\left|\frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}}-1\right|\frac{{d_{k}}{(u(t)\delta (t))^{k}}}{D(u(t)\delta (t))}\end{aligned}\]
and, since $u(t)\to 1$, the last expression tends to zero by the limit in Eq. (6), and by the second limit in Eq. (7). Then the first limit in Eq. (21) is verified.
Now we consider the Taylor formula for $\Lambda (\cdot )$, and we have
\[ \Lambda (\eta )=\underset{=0}{\underbrace{\Lambda (0)}}+{\Lambda ^{\prime }}(0)\eta +\frac{{\Lambda ^{\prime\prime }}(0)}{2}{\eta ^{2}}+o({\eta ^{2}})\]
where $\frac{o({\eta ^{2}})}{{\eta ^{2}}}\to 0$ as $\eta \to 0$. Then
\[\begin{aligned}{}{A_{2}}(t)=& v(\delta (t))a(t)\left(\Lambda \left(\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right)-\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\frac{\delta (t){D^{\prime }}(\delta (t))}{v(\delta (t))D(\delta (t))}\right)\\ {} =& v(\delta (t))a(t)\left(\left({\Lambda ^{\prime }}(0)-\frac{\delta (t){D^{\prime }}(\delta (t))}{v(\delta (t))D(\delta (t))}\right)\frac{\theta }{\sqrt{v(\delta (t))a(t)}}\right.\\ {} & \hspace{2.5pt}\left.+\frac{{\Lambda ^{\prime\prime }}(0)}{2}\frac{{\theta ^{2}}}{v(\delta (t))a(t)}+o\left(\frac{1}{v(\delta (t))a(t)}\right)\right)\\ {} =& \sqrt{a(t)}\theta {H_{2}}(t)+\frac{{\Lambda ^{\prime\prime }}(0)}{2}{\theta ^{2}}+v(\delta (t))a(t)o\left(\frac{1}{v(\delta (t))a(t)}\right),\end{aligned}\]
and the second limit in Eq. (21) holds by Eq. (17) and $a(t)\to 0$, and also by $v(\delta (t))a(t)\to \infty $.
Finally, we have
\[ {A_{3}}(t)=\sqrt{a(t)}\theta {H_{3}}(t),\]
and the third limit in Eq. (21) holds by Eq. (18) and $a(t)\to 0$.  □
We conclude with some consequences of Proposition 2, which are typical features of moderate deviations.
Remark 4.
The class of LDPs in Proposition 2 fills the gap between the following two asymptotic behaviors.
  1. The weak convergence of $\left\{\frac{N(t)-\mathbb{E}[N(t)]}{\sqrt{v(\delta (t))}}:t>0\right\}$ to the centered Normal distribution with variance ${\Lambda ^{\prime\prime }}(0)$ (in fact the proof of Proposition 2 still works if $a(t)=1$ and, in such a case, the first condition in Eq. (1) fails).
  2. The convergence of $\left\{\frac{N(t)-\mathbb{E}[N(t)]}{v(\delta (t))}:t>0\right\}$ to zero (in probability) which corresponds to the case $a(t)=\frac{1}{v(\delta (t))}$ (in such a case the second condition in Eq. (1), with ${v_{t}}=v(\delta (t))$, fails).
Actually, in the second case we have in mind situations in which the limit
(22)
\[ \underset{t\to \infty }{\lim }\frac{\mathbb{E}[N(t)]}{v(\delta (t))}={\Lambda ^{\prime }}(0)\]
holds. To better explain this fact we remark that, if the limit in Eq. (22) holds, then we have
\[\begin{aligned}{}\underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}& \log \mathbb{E}\left[{e^{\theta (N(t)-\mathbb{E}[N(t)])}}\right]\\ {} =& \underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \mathbb{E}\left[{e^{\theta N(t)}}\right]-\theta \frac{\mathbb{E}[N(t)]}{v(\delta (t))}=\Lambda (\theta )-\theta {\Lambda ^{\prime }}(0)\end{aligned}\]
for all $\theta \in \mathbb{R}$ (here we take into account the limit in Eq. (13)); then, if we apply the Gärtner–Ellis theorem (Theorem 1), the family of random variables $\left\{\frac{N(t)-\mathbb{E}[N(t)]}{v(\delta (t))}:t>0\right\}$ satisfies the LDP with speed $v(\delta (t))$ and good rate function J defined by
\[ J(y):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta y-(\Lambda (\theta )-\theta {\Lambda ^{\prime }}(0))\}={\Lambda ^{\ast }}(y+{\Lambda ^{\prime }}(0)),\]
and the rate function J uniquely vanishes at $y=0$ (because ${\Lambda ^{\ast }}(x)$ uniquely vanishes at $x={\Lambda ^{\prime }}(0)$).
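For completeness (this step is implicit above), the equality for J is simply a change of variable in the supremum defining the Legendre transform:
\[ J(y)=\underset{\theta \in \mathbb{R}}{\sup }\{\theta y-\Lambda (\theta )+\theta {\Lambda ^{\prime }}(0)\}=\underset{\theta \in \mathbb{R}}{\sup }\{\theta (y+{\Lambda ^{\prime }}(0))-\Lambda (\theta )\}={\Lambda ^{\ast }}(y+{\Lambda ^{\prime }}(0)).\]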

4 Application of results to some fractional counting processes

In this section we present two examples of applications of our results to some fractional counting processes found in the literature; so we refer to the content of Section 2.2. The first example (in Section 4.1) concerns the basic model, i.e. the case $n=0$; the second example (in Section 4.2) depends on two sequences of parameters $\{{\alpha _{j}}:j\ge 0\}$ and $\{{\tilde{\alpha }_{j}}:j\ge 0\}$ satisfying suitable conditions. In Section 4.2.2 we therefore discuss a class of cases in which such conditions fail; there our results cannot be applied directly, because the hypotheses of the Gärtner–Ellis theorem (Theorem 1) are not satisfied.

4.1 An example related to the basic model

A reference for this example is [20]; some other connections with the literature are presented below in the last paragraph of this section. In this example we have $n=0$. For $\beta ,\gamma ,\lambda >0$ and $\alpha \in (0,1]$, we set
\[ {d_{k}}:=\frac{{\lambda ^{k}}{(\gamma )_{k}}}{k!\Gamma (\alpha k+\beta )},\]
where ${(\gamma )_{k}}$ is the rising factorial; therefore we get
\[ D(u)={E_{\alpha ,\beta }^{\gamma }}(\lambda u),\]
where ${E_{\alpha ,\beta }^{\gamma }}(\cdot )$ is the Prabhakar function.
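As a purely illustrative numerical sketch (not part of the paper), this power series distribution can be evaluated by truncating the series defining the Prabhakar function; the parameter values, the truncation level `kmax`, and the helper names below are our own arbitrary choices.

```python
import math

def log_dk(k, lam, alpha, beta, gamma):
    # log d_k = log( lambda^k (gamma)_k / (k! Gamma(alpha*k + beta)) ),
    # with the rising factorial (gamma)_k = Gamma(gamma + k) / Gamma(gamma)
    return (k * math.log(lam)
            + math.lgamma(gamma + k) - math.lgamma(gamma)
            - math.lgamma(k + 1)
            - math.lgamma(alpha * k + beta))

def prabhakar_pmf(delta, lam, alpha, beta, gamma, kmax=400):
    # P(X = k) = d_k delta^k / D(delta), with the series function
    # D(delta) = E^gamma_{alpha,beta}(lam * delta) approximated by the
    # partial sum up to kmax (log-sum-exp for numerical stability)
    logs = [log_dk(k, lam, alpha, beta, gamma) + k * math.log(delta)
            for k in range(kmax + 1)]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    z = sum(w)
    return [x / z for x in w]

pmf = prabhakar_pmf(delta=2.0, lam=1.0, alpha=0.8, beta=1.0, gamma=2.0)
```

For these (arbitrary) parameters the truncated probabilities are nonnegative, sum to one up to rounding, and the tail beyond `kmax` is negligible.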
We start with a discussion on Condition 1. We then discuss Eqs. (16), (17) and (18) in Proposition 2; for the latter discussion we assume that γ is a positive integer.
Discussion on Condition 1. We start by noting that $(\mathbf{B}\mathbf{2})$ and $(\mathbf{B}\mathbf{3})$ trivially hold. Moreover, as far as $(\mathbf{B}\mathbf{1})$ is concerned, we have
(23)
\[ \left.\begin{array}{l}v(\delta (t)):={(\delta (t))^{1/\alpha }}\hspace{2.5pt}\text{and}\hspace{2.5pt}\Delta (u):={(\lambda u)^{1/\alpha }},\\ {} \text{and therefore we have}\hspace{2.5pt}\Lambda (\theta )={\lambda ^{1/\alpha }}({e^{\theta /\alpha }}-1)\end{array}\right.\]
(we refer to Eq. (2) for the limit in Eq. (5)). Note that the function $v(\cdot )$ in Eq. (23) is regularly varying with index $\varrho =\frac{1}{\alpha }$; in fact (see Remark 3) we have $\Delta (u)=\Delta (1)\bar{v}(u)$ with $\bar{v}(u)={u^{1/\alpha }}$ and $\Delta (1)={\lambda ^{1/\alpha }}$.
Discussion on Eqs. (16), (17) and (18) in Proposition 2 (when γ is a positive integer). In view of what follows, we remark that, by Eqs. (3)–(4) and ${d_{\gamma ,\alpha ,\beta }^{(\gamma )}}=1$, we have
(24)
\[\begin{aligned}{}{E_{\alpha ,\beta }^{\gamma +1}}(u)=& \frac{1}{{\alpha ^{\gamma }}\gamma !}\left({E_{\alpha ,\beta -\gamma }^{1}}(u)+{\sum \limits_{j=0}^{\gamma -1}}{d_{j,\alpha ,\beta }^{(\gamma )}}{E_{\alpha ,\beta -j}^{1}}(u)\right)\\ {} =& \frac{{e^{{u^{1/\alpha }}}}}{{\alpha ^{\gamma +1}}\gamma !}\left({u^{\frac{\gamma +1-\beta }{\alpha }}}+{\sum \limits_{j=0}^{\gamma -1}}{d_{j,\alpha ,\beta }^{(\gamma )}}{u^{\frac{j+1-\beta }{\alpha }}}+O({e^{-{u^{1/\alpha }}}}/u)\right)\\ {} =& \frac{{e^{{u^{1/\alpha }}}}{u^{\frac{\gamma +1-\beta }{\alpha }}}}{{\alpha ^{\gamma +1}}\gamma !}\left(1+{\sum \limits_{j=0}^{\gamma -1}}{d_{j,\alpha ,\beta }^{(\gamma )}}{u^{\frac{j-\gamma }{\alpha }}}+o({u^{-1/\alpha }})\right)\\ {} =& \frac{{e^{{u^{1/\alpha }}}}{u^{\frac{\gamma +1-\beta }{\alpha }}}}{{\alpha ^{\gamma +1}}\gamma !}\left(1+O({u^{-1/\alpha }})\right).\end{aligned}\]
We start with Eq. (16). We take $u(t)\to 1$ as $t\to \infty $ and, by Eq. (24), we have
\[\begin{aligned}{}{H_{1}}(t)=& \log \frac{{E_{\alpha ,\beta }^{\gamma }}\left(\lambda u(t)\delta (t)\right)}{{E_{\alpha ,\beta }^{\gamma }}(\lambda \delta (t))}-{(\delta (t))^{1/\alpha }}({\lambda ^{1/\alpha }}{(u(t))^{1/\alpha }}-{\lambda ^{1/\alpha }})\\ {} =& \log \frac{{e^{{(\lambda u(t)\delta (t))^{1/\alpha }}}}{(\lambda u(t)\delta (t))^{\frac{\gamma -\beta }{\alpha }}}\left(1+O({(u(t)\delta (t))^{-1/\alpha }})\right)}{{e^{{(\lambda \delta (t))^{1/\alpha }}}}{(\lambda \delta (t))^{\frac{\gamma -\beta }{\alpha }}}\left(1+O({(\delta (t))^{-1/\alpha }})\right)}\\ {} & \hspace{2.5pt}-{(\lambda \delta (t))^{1/\alpha }}({(u(t))^{1/\alpha }}-1)\\ {} =& \frac{\gamma -\beta }{\alpha }\log u(t)+\log \left(1+O({(u(t)\delta (t))^{-1/\alpha }})\right)\\ {} & \hspace{2.5pt}-\log \left(1+O({(\delta (t))^{-1/\alpha }})\right)\to 0\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty ).\end{aligned}\]
Thus ${H_{1}}(t)$ is bounded and Eq. (16) holds.
Now we consider Eq. (17). We recall that
\[ {D^{\prime }}(u)=\lambda \frac{d}{du}{E_{\alpha ,\beta }^{\gamma }}(\lambda u)=\lambda \gamma {E_{\alpha ,\alpha +\beta }^{\gamma +1}}(\lambda u)\]
(see, e.g., [16, Eq. (1.9.5) with $n=1$]). Then, since ${\Lambda ^{\prime }}(0)=\frac{{\lambda ^{1/\alpha }}}{\alpha }$, again by Eq. (24) we get
\[\begin{aligned}{}{H_{2}}(t)=& \sqrt{{(\delta (t))^{1/\alpha }}}\left({\Lambda ^{\prime }}(0)-\frac{\delta (t)\lambda \gamma {E_{\alpha ,\alpha +\beta }^{\gamma +1}}(\lambda \delta (t))}{{(\delta (t))^{1/\alpha }}{E_{\alpha ,\beta }^{\gamma }}(\lambda \delta (t))}\right)\\ {} =& {(\delta (t))^{1/(2\alpha )}}\left(\frac{{\lambda ^{1/\alpha }}}{\alpha }\right.\\ {} & \hspace{2.5pt}\left.-\frac{{(\delta (t))^{1-1/\alpha }}\lambda \gamma \frac{{e^{{(\lambda \delta (t))^{1/\alpha }}}}{(\lambda \delta (t))^{\frac{\gamma +1-\alpha -\beta }{\alpha }}}}{{\alpha ^{\gamma +1}}\gamma !}\left(1+O({(\delta (t))^{-1/\alpha }})\right)}{\frac{{e^{{(\lambda \delta (t))^{1/\alpha }}}}{(\lambda \delta (t))^{\frac{\gamma -\beta }{\alpha }}}}{{\alpha ^{\gamma }}(\gamma -1)!}\left(1+O({(\delta (t))^{-1/\alpha }})\right)}\right)\\ {} =& {(\delta (t))^{1/(2\alpha )}}\left(\frac{{\lambda ^{1/\alpha }}}{\alpha }-\frac{{\lambda ^{1/\alpha }}}{\alpha }\left(\frac{1+O({(\delta (t))^{-1/\alpha }})}{1+O({(\delta (t))^{-1/\alpha }})}\right)\right)\\ {} =& \frac{{\lambda ^{1/\alpha }}}{\alpha }{(\delta (t))^{1/(2\alpha )}}\frac{O({(\delta (t))^{-1/\alpha }})}{1+O({(\delta (t))^{-1/\alpha }})}\\ {} =& \frac{{\lambda ^{1/\alpha }}}{\alpha }\frac{O({(\delta (t))^{-1/(2\alpha )}})}{1+O({(\delta (t))^{-1/\alpha }})}\to 0\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty ).\end{aligned}\]
Thus ${H_{2}}(t)$ is bounded and Eq. (17) holds.
Remark 5.
We have just shown that ${H_{2}}(t)\to 0$ as $t\to \infty $; then we can immediately verify the limit in Eq. (22) in Remark 4 noting that
\[ {H_{2}}(t)=\sqrt{v(\delta (t))}\left({\Lambda ^{\prime }}(0)-\frac{\mathbb{E}[N(t)]}{v(\delta (t))}\right)\]
and $v(\delta (t))\to \infty $.
We conclude with Eq. (18), which can be immediately verified: in fact we have ${H_{3}}(t)=0$ because $n=0$, and therefore ${H_{3}}(t)$ is bounded.
Connections with the literature. If we set $\beta =\gamma =1$ and $\delta (t)={t^{\alpha }}$, we recover the case in [1, Section 4], and therefore the case in [2] with $m=1$. Moreover the function Λ in Eq. (23) coincides with the function ${\Lambda _{\alpha ,\lambda }}(\theta )$ in the proof of Proposition 4.1 in [1] and with the function $\Lambda (\theta )$ in [2, Eq. (7)] specialized to the case $m=1$ (in both cases the parameter ν in [1] and [2] coincides with α here). In particular, we recover the case of Proposition 2 in [2] with $m=1$ by applying Proposition 2 to the example in this section with $\beta =\gamma =1$ and $\delta (t)={t^{\alpha }}$.
We also note that, by Eq. (23), we get
\[ {\Lambda ^{\prime }}(0)=\frac{{\lambda ^{1/\alpha }}}{\alpha }\hspace{2.5pt}\text{and}\hspace{2.5pt}{\Lambda ^{\prime\prime }}(0)=\frac{{\lambda ^{1/\alpha }}}{{\alpha ^{2}}}.\]
In particular, the equality ${\Lambda ^{\prime\prime }}(0)=\frac{{\lambda ^{1/\alpha }}}{{\alpha ^{2}}}$ coincides with the equality in [2, Eq. (9)] for the matrix C specialized to the case $m=1$ (and therefore the matrix reduces to a number); in fact, if we consider α in place of the parameter ν in [2], we get
\[ {c^{(\alpha )}}:=\frac{1}{\alpha }\left(\frac{1}{\alpha }-1\right){\lambda ^{1/\alpha }}+\frac{1}{\alpha }{\lambda ^{1/\alpha }}=\frac{{\lambda ^{1/\alpha }}}{{\alpha ^{2}}}.\]
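The two derivative values ${\Lambda ^{\prime }}(0)$ and ${\Lambda ^{\prime\prime }}(0)$ can be sanity-checked by finite differences on the closed form of Λ in Eq. (23); the sketch below (with arbitrary illustrative parameter values, not from the paper) does exactly that.

```python
import math

lam, alpha = 2.0, 0.6  # arbitrary illustrative parameters

def Lam(theta):
    # Lambda(theta) = lambda^(1/alpha) * (exp(theta/alpha) - 1), as in Eq. (23)
    return lam ** (1.0 / alpha) * (math.exp(theta / alpha) - 1.0)

h = 1e-4
d1 = (Lam(h) - Lam(-h)) / (2.0 * h)              # central difference ~ Lambda'(0)
d2 = (Lam(h) - 2.0 * Lam(0.0) + Lam(-h)) / h**2  # second difference ~ Lambda''(0)

expected_d1 = lam ** (1.0 / alpha) / alpha       # lambda^(1/alpha) / alpha
expected_d2 = lam ** (1.0 / alpha) / alpha ** 2  # lambda^(1/alpha) / alpha^2
```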

4.2 An example with eventually constant parameters

A reference for this example is [10]; more precisely we refer to the definition in (3.5) therein. Here we assume that $n\ge 1$; in fact, if $n=0$, we have a particular case of Example 1. For $\lambda >0$ and for some $\{{\alpha _{j}}:j\ge 0\}$ with ${\alpha _{j}}\in (0,1]$ for all $j\ge 0$, we set
\[ {d_{k,j}}:=\frac{{\lambda ^{k}}}{\Gamma ({\alpha _{j}}k+1)}\hspace{2.5pt}(\text{for all}\hspace{2.5pt}k,j\ge 0);\]
therefore we get
\[ {D_{j}}(u)={E_{{\alpha _{j}},1}^{1}}(\lambda u),\]
where ${E_{{\alpha _{j}},1}^{1}}(\cdot )$ is the Mittag-Leffler function (with $\alpha ={\alpha _{j}}$).
As far as the functions $\{{\delta _{j}}(\cdot ):j\ge 0\}$ are concerned, here we consider the case
\[ {\delta _{j}}(t):={t^{{\tilde{\alpha }_{j}}}}\]
for some ${\tilde{\alpha }_{j}}\in (0,1]$ (for all $j\ge 0$). Note that the parameters $\{{\tilde{\alpha }_{j}}:j\ge 0\}$ allow for a generalization of the case in [10, Eq. (3.5)], which can be recovered by setting ${\tilde{\alpha }_{j}}=1$ (for all $j\ge 0$).
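The exponential growth of the Mittag-Leffler function used throughout this section (the asymptotics ${E_{\alpha ,1}^{1}}(u)\sim {e^{{u^{1/\alpha }}}}/\alpha $ invoked via Eq. (4)) can be checked numerically. The following sketch (illustrative values of α and u, not from the paper) compares a truncated series with the leading asymptotic term on the log scale.

```python
import math

def log_mittag_leffler(u, alpha, kmax=800):
    # log of the partial sum  sum_{k=0..kmax} u^k / Gamma(alpha*k + 1),
    # evaluated with log-sum-exp to avoid overflow of the huge terms
    logs = [k * math.log(u) - math.lgamma(alpha * k + 1.0) for k in range(kmax + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs))

alpha, u = 0.7, 30.0  # arbitrary illustrative values
log_series = log_mittag_leffler(u, alpha)
log_asymp = u ** (1.0 / alpha) - math.log(alpha)  # log( e^{u^{1/alpha}} / alpha )
```

On the log scale the algebraically small correction terms are invisible, so the two quantities agree closely already at this moderate value of u.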

4.2.1 On the conditions in Section 3

We start with a discussion on Condition 1; moreover, we discuss Eqs. (16), (17) and (18) in Proposition 2. In particular, as far as Condition 1 is concerned, we present sufficient conditions on the parameters $\{{\alpha _{j}}:j\ge 0\}$ and $\{{\tilde{\alpha }_{j}}:j\ge 0\}$ in order to have $(\mathbf{B}\mathbf{3})$; moreover, in order to explain what can happen when these sufficient conditions fail, a class of cases is studied in detail in Section 4.2.2 below.
Discussion on Condition 1. We start with $(\mathbf{B}\mathbf{1})$. It is easy to check that we have to consider the following restrictions on the parameters, which do not appear in [10]: there exist $n\ge 1$ and $\tilde{\alpha },\alpha \in (0,1]$ such that
\[ {\tilde{\alpha }_{j}}=\tilde{\alpha }\hspace{2.5pt}\text{and}\hspace{2.5pt}{\alpha _{j}}=\alpha \hspace{2.5pt}\text{for all}\hspace{2.5pt}j\ge n.\]
Thus we have
\[ \delta (t)={t^{\tilde{\alpha }}}\]
and, for $j\ge n$, we can refer to the application to fractional counting processes in Section 4.1 with $\beta =\gamma =1$; thus we set
\[ {d_{k}}:=\frac{{\lambda ^{k}}}{\Gamma (\alpha k+1)}\hspace{2.5pt}(\text{for all}\hspace{2.5pt}k\ge 0),\]
and we have
\[ D(u):={E_{\alpha ,1}^{1}}(\lambda u).\]
Then, referring to the statement above with Eq. (23) (with $\beta =\gamma =1$), we can say that $(\mathbf{B}\mathbf{1})$ holds with
\[ v(t)={t^{1/\alpha }}\hspace{2.5pt}\text{and}\hspace{2.5pt}\Delta (u)={(\lambda u)^{1/\alpha }};\]
thus, in particular, we have
\[ v(\delta (t))={t^{\tilde{\alpha }/\alpha }}.\]
Condition $(\mathbf{B}\mathbf{2})$ trivially holds because all the coefficients $\{{d_{k,j}}:k,j\ge 0\}$ are positive. We also note that the limits in Eq. (7) hold; in fact (see Remark 2) we have
(25)
\[ \frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=\frac{{\lambda ^{k}}}{\Gamma ({\alpha _{k}}k+1)}\frac{{({t^{{\tilde{\alpha }_{k}}}})^{k}}}{{E_{{\alpha _{k}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{k}}}})}\to 0\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty )\]
and
(26)
\[ \frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}=\frac{{\lambda ^{k}}}{\Gamma (\alpha k+1)}\frac{{({t^{\tilde{\alpha }}})^{k}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}\to 0\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty ),\]
where the limits hold by Eq. (4) with $u=\lambda {t^{{\tilde{\alpha }_{k}}}}$ and $u=\lambda {t^{\tilde{\alpha }}}$.
Finally we discuss $(\mathbf{B}\mathbf{3})$. We trivially have ${d_{k}}>0$ and, moreover,
\[ \frac{\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}}=\frac{\frac{{\lambda ^{k}}}{\Gamma ({\alpha _{k}}k+1)}\frac{{({t^{{\tilde{\alpha }_{k}}}})^{k}}}{{E_{{\alpha _{k}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{k}}}})}}{\frac{{\lambda ^{k}}}{\Gamma (\alpha k+1)}\frac{{({t^{\tilde{\alpha }}})^{k}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}}=\frac{\Gamma (\alpha k+1)}{\Gamma ({\alpha _{k}}k+1)}{t^{({\tilde{\alpha }_{k}}-\tilde{\alpha })k}}\frac{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}{{E_{{\alpha _{k}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{k}}}})};\]
thus, by taking into account again Eq. (4) with $u=\lambda {t^{{\tilde{\alpha }_{k}}}}$ and $u=\lambda {t^{\tilde{\alpha }}}$, the limit in Eq. (6) holds if, for all $k\in \{0,1,\dots ,n-1\}$, we have
(27)
\[ \frac{\tilde{\alpha }}{\alpha }-\frac{{\tilde{\alpha }_{k}}}{{\alpha _{k}}}<0\]
or
\[ \frac{\tilde{\alpha }}{\alpha }-\frac{{\tilde{\alpha }_{k}}}{{\alpha _{k}}}=0\hspace{2.5pt}\text{and}\hspace{2.5pt}{\tilde{\alpha }_{k}}-\tilde{\alpha }<0.\]
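As a small illustrative helper (hypothetical, not from the paper), the sufficient condition above for the limit in Eq. (6) can be encoded directly; the function name and signature are our own choices, and the exact-equality branch should be treated with care in floating point.

```python
def eq6_sufficient(alpha, alpha_tilde, alphas, alphas_tilde):
    """Check, for each k < n, the sufficient condition for Eq. (6):
    either alpha_tilde/alpha - alphas_tilde[k]/alphas[k] < 0, or that
    difference is zero and alphas_tilde[k] - alpha_tilde < 0."""
    for a_k, at_k in zip(alphas, alphas_tilde):
        gap = alpha_tilde / alpha - at_k / a_k
        if not (gap < 0 or (gap == 0 and at_k < alpha_tilde)):
            return False
    return True
```

For instance, `eq6_sufficient(0.9, 0.5, [0.5], [0.5])` holds (the gap $0.5/0.9-1$ is negative), while `eq6_sufficient(0.5, 0.5, [0.9], [0.5])` does not.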
Discussion on Eqs. (16), (17) and (18) in Proposition 2. For Eqs. (16) and (17) we can refer to the discussion for Example 1, with $\beta =\gamma =1$. For Eq. (18) we start noting that
\[\begin{aligned}{}{H_{3}}(t)=& \frac{1}{\sqrt{v(\delta (t))}}\left(\frac{\delta (t){D^{\prime }}(\delta (t))}{D(\delta (t))}-\mathbb{E}[N(t)]\right)\\ {} =& \frac{1}{\sqrt{v(\delta (t))}}\left(\sum \limits_{k\ge 1}k\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}-\sum \limits_{k\ge 1}k\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}{\left(\sum \limits_{k\ge 0}\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right)^{-1}}\right).\end{aligned}\]
We remark that, by Eq. (14) with $u=1$,
\[ \sum \limits_{k\ge 0}\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=1+{R_{n}}(1,t);\]
thus we can easily check that
\[\begin{aligned}{}{H_{3}}(t)=& \frac{{(1+{R_{n}}(1,t))^{-1}}}{\sqrt{v(\delta (t))}}\\ {} & \hspace{2.5pt}\times \left((1+{R_{n}}(1,t))\sum \limits_{k\ge 1}k\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}-\sum \limits_{k\ge 1}k\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right)\\ {} =& \frac{{(1+{R_{n}}(1,t))^{-1}}}{\sqrt{v(\delta (t))}}\left({R_{n}}(1,t)\sum \limits_{k\ge 1}k\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}\right.\\ {} & \hspace{2.5pt}\left.+\left(\sum \limits_{k\ge 1}k\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}-\sum \limits_{k\ge 1}k\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right)\right)\\ {} =& {(1+{R_{n}}(1,t))^{-1}}{R_{n}}(1,t)\frac{\delta (t){D^{\prime }}(\delta (t))}{v(\delta (t))D(\delta (t))}\sqrt{v(\delta (t))}\\ {} & \hspace{2.5pt}+\frac{{(1+{R_{n}}(1,t))^{-1}}}{\sqrt{v(\delta (t))}}{\sum \limits_{k=1}^{n-1}}k\left(\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))}-\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\right).\end{aligned}\]
Then we have the following statements.
  • $\frac{{d_{k}}{(\delta (t))^{k}}}{D(\delta (t))},\frac{{d_{k,k}}{({\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\to 0$ by Eqs. (25) and (26).
  • By Eq. (14) with $u=1$ (and by Eqs. (25) and (26) again)
    \[ {R_{n}}(1,t)={\sum \limits_{k=0}^{n-1}}\left(\frac{{\lambda ^{k}}}{\Gamma ({\alpha _{k}}k+1)}\frac{{({t^{{\tilde{\alpha }_{k}}}})^{k}}}{{E_{{\alpha _{k}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{k}}}})}-\frac{{\lambda ^{k}}}{\Gamma (\alpha k+1)}\frac{{({t^{\tilde{\alpha }}})^{k}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}\right)\to 0;\]
    actually, as explained for Eqs. (25) and (26), we can say that ${R_{n}}(1,t)\to 0$ exponentially fast, and therefore
    \[ {R_{n}}(1,t)\sqrt{v(\delta (t))}\to 0,\]
    because $v(\delta (t))={t^{\tilde{\alpha }/\alpha }}$.
  • $\frac{\delta (t){D^{\prime }}(\delta (t))}{v(\delta (t))D(\delta (t))}\to {\Lambda ^{\prime }}(0)$, because we can refer to the limit in Eq. (22) stated in Remark 5 (for the previous example) with $\beta =\gamma =1$.
In conclusion, ${H_{3}}(t)$ tends to zero, and therefore it is bounded. Thus Eq. (18) is verified.

4.2.2 A choice of the parameters for which Eq. (27) fails

In this section we illustrate what can happen if Eq. (27) fails. For simplicity we consider the case $n=1$; however, we expect a similar situation for $n\ge 2$ (although the computations are more involved). Thus we consider the framework in Section 4.2 with $n=1$ and
\[ \frac{\tilde{\alpha }}{\alpha }-\frac{{\tilde{\alpha }_{0}}}{{\alpha _{0}}}>0.\]
We recall that ${d_{0}},{d_{0,0}}>0$. The aim is to show that, for all $\theta \in \mathbb{R}$, there exists the limit
(28)
\[ \Psi (\theta ):=\underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \mathbb{E}\left[{e^{\theta N(t)}}\right]=\underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \frac{{\textstyle\sum _{k\ge 0}}\frac{{d_{k,k}}{({e^{\theta }}{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}}{{\textstyle\sum _{j\ge 0}}\frac{{d_{j,j}}{({\delta _{j}}(t))^{j}}}{{D_{j}}({\delta _{j}}(t))}}\in \mathbb{R},\]
but the function $\Psi (\cdot )$ is not differentiable, and therefore we cannot apply the Gärtner–Ellis theorem (Theorem 1) directly, as we did in Proposition 1.
Firstly we analyze ${R_{n}}(u,t)$ in Eq. (9). Under our hypotheses it does not depend on u, and therefore we simply write ${R_{1}}(t)$; then we have
\[ {R_{1}}(t):=\frac{{d_{0,0}}}{{D_{0}}({\delta _{0}}(t))}-\frac{{d_{0}}}{D(\delta (t))}=\frac{{d_{0,0}}}{{E_{{\alpha _{0}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{0}}}})}-\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}.\]
So we can say that ${R_{1}}(t)>0$ eventually (i.e. for t large enough) and ${R_{1}}(t)\to 0$ as $t\to \infty $ by Eq. (4) with $u=\lambda {t^{{\tilde{\alpha }_{0}}}}$ and $u=\lambda {t^{\tilde{\alpha }}}$. Moreover,
\[\begin{aligned}{}\frac{1}{v(\delta (t))}\log {R_{1}}(t)=& \frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log \left(\frac{{d_{0,0}}}{{E_{{\alpha _{0}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{0}}}})}-\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}\right)\\ {} =& \frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log \left(\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}\left(\frac{\frac{{d_{0,0}}}{{E_{{\alpha _{0}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{0}}}})}}{\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}}-1\right)\right)\\ {} =& \frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log \left(\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}\right)+\frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log \left(\frac{\frac{{d_{0,0}}}{{E_{{\alpha _{0}},1}^{1}}(\lambda {t^{{\tilde{\alpha }_{0}}}})}}{\frac{{d_{0}}}{{E_{\alpha ,1}^{1}}(\lambda {t^{\tilde{\alpha }}})}}-1\right)\end{aligned}\]
and
\[\begin{aligned}{}\underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log {R_{1}}(t)& =\underset{t\to \infty }{\lim }\frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log ({d_{0}}{e^{-{(\lambda {t^{\tilde{\alpha }}})^{1/\alpha }}}})\\ {} & \hspace{2.5pt}+\underset{t\to \infty }{\lim }\frac{1}{{t^{\tilde{\alpha }/\alpha }}}\log \left(\frac{{d_{0,0}}}{{d_{0}}}{e^{-{(\lambda {t^{{\tilde{\alpha }_{0}}}})^{1/{\alpha _{0}}}}+{(\lambda {t^{\tilde{\alpha }}})^{1/\alpha }}}}-1\right)\\ {} & =-{\lambda ^{1/\alpha }}+\underset{t\to \infty }{\lim }\frac{-{(\lambda {t^{{\tilde{\alpha }_{0}}}})^{1/{\alpha _{0}}}}+{(\lambda {t^{\tilde{\alpha }}})^{1/\alpha }}}{{t^{\tilde{\alpha }/\alpha }}};\end{aligned}\]
thus, by taking into account that $\frac{\tilde{\alpha }}{\alpha }-\frac{{\tilde{\alpha }_{0}}}{{\alpha _{0}}}>0$, we get
(29)
\[ \underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log {R_{1}}(t)=0.\]
Now we take into account Eq. (14). Then, by Eq. (5) in Condition 1, for all $u>0$ we have
\[ \underset{t\to \infty }{\lim }\frac{1}{v(\delta (t))}\log \frac{D(u\delta (t))}{D(\delta (t))}=\Delta (u)-\Delta (1).\]
Then we can prove the following result.
Lemma 1.
For all $u>0$ we have ${\lim \nolimits_{t\to \infty }}\frac{1}{v(\delta (t))}\log {\textstyle\sum _{k\ge 0}}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}=\max \left\{\Delta (u)-\Delta (1),0\right\}$.
Proof.
Firstly, by Eq. (14) with $n=1$ and recalling that ${R_{1}}(t)>0$ eventually (i.e. for t large enough), we can apply Lemma 1.2.15 in [7] and, by Eq. (29), for all $u>0$, we have
\[\begin{aligned}{}\underset{t\to \infty }{\limsup }\frac{1}{v(\delta (t))}& \log \sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\\ {} =& \max \left\{\underset{t\to \infty }{\limsup }\frac{1}{v(\delta (t))}\log \frac{D(u\delta (t))}{D(\delta (t))},\underset{t\to \infty }{\limsup }\frac{1}{v(\delta (t))}\log {R_{1}}(t)\right\}\\ {} =& \max \left\{\Delta (u)-\Delta (1),0\right\}.\end{aligned}\]
Moreover, in a similar way (actually here the application of Lemma 1.2.15 in [7] is not needed), for all $u>0$, we have
\[\begin{aligned}{}\underset{t\to \infty }{\liminf }\frac{1}{v(\delta (t))}& \log \sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\\ {} \ge & \left\{\begin{array}{l@{\hskip10.0pt}l}{\liminf _{t\to \infty }}\frac{1}{v(\delta (t))}\log \frac{D(u\delta (t))}{D(\delta (t))}=\Delta (u)-\Delta (1)& \hspace{2.5pt}\text{if}\hspace{2.5pt}u>1,\\ {} {\liminf _{t\to \infty }}\frac{1}{v(\delta (t))}\log {R_{1}}(t)=0& \hspace{2.5pt}\text{if}\hspace{2.5pt}u\in (0,1],\end{array}\right.\end{aligned}\]
which yields
\[ \underset{t\to \infty }{\liminf }\frac{1}{v(\delta (t))}\log \sum \limits_{k\ge 0}\frac{{d_{k,k}}{(u{\delta _{k}}(t))^{k}}}{{D_{k}}({\delta _{k}}(t))}\ge \max \left\{\Delta (u)-\Delta (1),0\right\},\]
because $\Delta (\cdot )$ is an increasing function.  □
Finally, if we refer to the limit computed in Lemma 1 with $u={e^{\theta }}$, there exists the limit in Eq. (28) (for all $\theta \in \mathbb{R}$) and we have
\[\begin{aligned}{}\Psi (\theta )=& \max \left\{\Delta ({e^{\theta }})-\Delta (1),0\right\}-\max \left\{\Delta ({e^{0}})-\Delta (1),0\right\}\\ {} =& \max \left\{\Delta ({e^{\theta }})-\Delta (1),0\right\}.\end{aligned}\]
Moreover, by Eqs. (11) and (23), we get
\[ \Psi (\theta )=\max \left\{\Lambda (\theta ),0\right\}=\max \left\{{\lambda ^{1/\alpha }}({e^{\theta /\alpha }}-1),0\right\}.\]
In conclusion, the function $\Psi (\cdot )$ is not differentiable at the origin $\theta =0$: indeed, the left derivative equals zero and the right derivative equals $\frac{{\lambda ^{1/\alpha }}}{\alpha }$.
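The kink at the origin can also be seen numerically from one-sided difference quotients of Ψ; the sketch below uses arbitrary illustrative parameters and is not from the paper.

```python
import math

lam, alpha = 1.5, 0.5  # arbitrary illustrative parameters

def Lam(theta):
    # Lambda(theta) = lambda^(1/alpha) * (exp(theta/alpha) - 1)
    return lam ** (1.0 / alpha) * (math.exp(theta / alpha) - 1.0)

def Psi(theta):
    # Psi(theta) = max{Lambda(theta), 0}
    return max(Lam(theta), 0.0)

h = 1e-6
left = (Psi(0.0) - Psi(-h)) / h   # left difference quotient at 0: -> 0
right = (Psi(h) - Psi(0.0)) / h   # right difference quotient at 0: -> lam^(1/alpha)/alpha
```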

Acknowledgments

The authors wish to thank the anonymous referees for their careful reading and for suggestions that improved the presentation of the paper. The authors also thank Roberto Garra, Roberto Garrappa, and Francesco Mainardi for discussions on the Prabhakar function. We also thank Camilla Feroldi for her thesis work (which contains a preliminary version of some results in this paper) under the supervision of Elena Villa.

References

[1] 
Beghin, L., Macci, C.: Large deviations for fractional Poisson processes. Stat. Probab. Lett. 83, 1193–1202 (2013). MR3041393. https://doi.org/10.1016/j.spl.2013.01.017
[2] 
Beghin, L., Macci, C.: Asymptotic results for a multivariate version of the alternative fractional Poisson process. Stat. Probab. Lett. 129, 260–268 (2017). MR3688542. https://doi.org/10.1016/j.spl.2017.06.009
[3] 
Cahoy, D., Di Nardo, E., Polito, F.: Flexible models for overdispersed and underdispersed count data. Stat. Pap. 62, 2969–2990 (2021). MR4332214. https://doi.org/10.1007/s00362-021-01222-7
[4] 
Consul, P.C., Shenton, L.R.: Some interesting properties of Lagrangian distributions. Commun. Stat. 2, 263–272 (1973). MR0408069. https://doi.org/10.1080/03610917308548270
[5] 
del Castillo, J., Pérez-Casany, M.: Weighted Poisson distributions for overdispersion and underdispersion situations. Ann. Inst. Stat. Math. 50, 567–585 (1998). MR1664520. https://doi.org/10.1023/A:1003585714207
[6] 
del Castillo, J., Pérez-Casany, M.: Overdispersed and underdispersed Poisson generalizations. J. Stat. Plan. Inference 134, 486–500 (2005). MR2200069. https://doi.org/10.1016/j.jspi.2004.04.019
[7] 
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, 2nd edn. Springer, New York (1998). MR1619036. https://doi.org/10.1007/978-1-4612-5320-4
[8] 
Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events. Springer, Berlin (1997). MR1458613. https://doi.org/10.1007/978-3-642-33483-2
[9] 
Gajda, J., Beghin, L.: Prabhakar Lévy processes. Stat. Probab. Lett. 178, 109162 (2021). 9 pp. MR4279303. https://doi.org/10.1016/j.spl.2021.109162
[10] 
Garra, R., Orsingher, E., Polito, F.: State dependent fractional point processes. J. Appl. Probab. 52, 18–36 (2015). MR3336844. https://doi.org/10.1239/jap/1429282604
[11] 
Giusti, A., Colombaro, I., Garra, R., Garrappa, R., Polito, F., Popolizio, M., Mainardi, F.: A practical guide to Prabhakar fractional calculus. Fract. Calc. Appl. Anal. 23, 9–54 (2020). MR4069921. https://doi.org/10.1515/fca-2020-0002
[12] 
Gorenflo, R., Kilbas, A.A., Mainardi, F., Rogosin, S.V.: Mittag-Leffler Functions, Related Topics and Applications. Springer, Heidelberg (2014). MR3244285. https://doi.org/10.1007/978-3-662-43930-2
[13] 
Gupta, P.L., Gupta, R.C., Ong, S.H., Srivastava, H.M.: A class of Hurwitz-Lerch zeta distributions and their applications in reliability. Appl. Math. Comput. 196, 521–531 (2008). MR2388708. https://doi.org/10.1016/j.amc.2007.06.012
[14] 
Gupta, R.C.: Modified power series distribution and some of its applications. Sankhya, Ser. B 36, 288–298 (1974). MR0391334
[15] 
Kemp, A.W.: Families of power series distributions, with particular reference to the Lerch family. J. Stat. Plan. Inference 140, 2255–2259 (2010). MR2609484. https://doi.org/10.1016/j.jspi.2010.01.021
[16] 
Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Boston (2006). MR2218073
[17] 
Luo, M.J., Parmar, R.K., Raina, R.K.: On extended Hurwitz-Lerch zeta function. J. Math. Anal. Appl. 448, 1281–1304 (2017). MR3582282. https://doi.org/10.1016/j.jmaa.2016.11.046
[18] 
Paris, R.B.: Asymptotics of the special functions of fractional calculus. In: Handbook of Fractional Calculus with Applications, vol. 1, pp. 297–325. De Gruyter, Berlin (2019). MR3888406
[19] 
Patil, G.P.: Certain properties of the generalized power series distribution. Ann. Inst. Stat. Math. 14, 179–182 (1962). MR0156395. https://doi.org/10.1007/BF02868639
[20] 
Pogány, T.K., Tomovski, Ž.: Probability distribution built by Prabhakar function. Related Turán and Laguerre inequalities. Integral Transforms Spec. Funct. 27, 783–793 (2016). MR3544402. https://doi.org/10.1080/10652469.2016.1201817
Copyright
© 2022 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Fractional counting processes, large deviations, moderate deviations, Mittag-Leffler functions

MSC2010
60F10, 60E05, 60G22

Funding
All the authors acknowledge the support of Indam-GNAMPA (research project “Stime asintotiche: principi di invarianza e grandi deviazioni”). Claudio Macci and Barbara Pacchiarotti also acknowledge the support of the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006) and of University of Rome Tor Vergata (research program “Beyond Borders”, project “Asymptotic Methods in Probability” (CUP E89C20000680005)).

Theorem 1.
Let $\{{X_{t}}:t>0\}$ be a family of real-valued random variables defined on the same probability space $(\Omega ,\mathcal{F},P)$ and let ${v_{t}}$ be such that ${v_{t}}\to \infty $. Moreover assume that, for all $\theta \in \mathbb{R}$, there exists
\[ f(\theta ):=\underset{t\to \infty }{\lim }\frac{1}{{v_{t}}}\log \mathbb{E}\left[{e^{\theta {X_{t}}}}\right]\]
as an extended real number; we also assume that the origin $\theta =0$ belongs to the interior of the set $\mathcal{D}(f):=\{\theta \in \mathbb{R}:f(\theta )<\infty \}$. Then, if f is essentially smooth and lower semicontinuous, the family of random variables $\{{X_{t}}/{v_{t}}:t>0\}$ satisfies the LDP with speed ${v_{t}}$ and good rate function ${f^{\ast }}$ defined by ${f^{\ast }}(x):={\sup _{\theta \in \mathbb{R}}}\{\theta x-f(\theta )\}$.
