
Modern Stochastics: Theory and Applications


Noncentral moderate deviations for time-changed Lévy processes with inverse of stable subordinators
Volume 12, Issue 2 (2025), pp. 203–224
Antonella Iuliano, Claudio Macci, Alessandra Meoli

https://doi.org/10.15559/24-VMSTA269
Pub. online: 31 December 2024      Type: Research Article      Open Access

Received: 7 August 2024
Revised: 16 December 2024
Accepted: 17 December 2024
Published: 31 December 2024

Abstract

This paper presents some extensions of recent noncentral moderate deviation results. In the first part, the results in [Statist. Probab. Lett. 185, Paper No. 109424, 8 pp. (2022)] are generalized by considering a general Lévy process $\{S(t):t\ge 0\}$ instead of a compound Poisson process. In the second part, it is assumed that $\{S(t):t\ge 0\}$ has bounded variation and is not a subordinator; thus $\{S(t):t\ge 0\}$ can be seen as the difference of two independent nonnull subordinators. In this way, the results in [Mod. Stoch. Theory Appl. 11, 43–61] for Skellam processes are generalized.

1 Introduction

The theory of large deviations deals with the asymptotic computation of small probabilities on an exponential scale (see [3] as a reference on this topic). In particular, a large deviation principle (LDP from now on) provides asymptotic bounds for families of probability measures on the same topological space; these bounds are expressed in terms of a speed function (which tends to infinity) and a nonnegative lower semicontinuous rate function defined on the topological space.
The term moderate deviations refers to a class of LDPs which fills the gap between the following two asymptotic regimes: convergence to a constant ${x_{0}}$ (at least in probability), and weak convergence to a nonconstant centered Gaussian random variable. The convergence to ${x_{0}}$ is governed by a reference LDP with speed ${v_{t}}$ and a rate function, say I, that uniquely vanishes at ${x_{0}}$ (i.e. $I(x)=0$ if and only if $x={x_{0}}$). One then has a class of LDPs which depends on the choice of some positive scalings $\{{a_{t}}:t\gt 0\}$ such that ${a_{t}}\to 0$ and ${a_{t}}{v_{t}}\to \infty $ (as $t\to \infty $).
In this paper the topological space mentioned above is the real line $\mathbb{R}$ equipped with the Borel σ-algebra, ${x_{0}}=0\in \mathbb{R}$ and ${v_{t}}=t$; thus we shall have
(1)
\[ {a_{t}}\to 0\hspace{1em}\text{and}\hspace{1em}{a_{t}}t\to \infty \hspace{1em}(\text{as}\hspace{2.5pt}t\to \infty ).\]
In some recent papers (see, e.g., [4] and some references cited therein), the term noncentral moderate deviations has been introduced when one has the situation described above, but the weak convergence is towards a nonconstant and non-Gaussian distributed random variable. A multivariate example is given in [10].
A possible way to construct examples of moderate deviation results is the following: $\{{C_{t}}:t\gt 0\}$ is a family of random variables which converges to zero as $t\to \infty $ (and satisfies the reference LDP with a certain rate function and a certain speed ${v_{t}}$); moreover, for a certain function ϕ such that $\phi (x)\to \infty $ as $x\to \infty $, $\{\phi ({v_{t}}){C_{t}}:t\gt 0\}$ converges weakly to some nondegenerate random variable; then, for every family of positive scalings $\{{a_{t}}:t\gt 0\}$ such that ${a_{t}}\to 0$ and ${a_{t}}{v_{t}}\to \infty $ (as $t\to \infty $), one should be able to prove the LDP for $\{\phi ({a_{t}}{v_{t}}){C_{t}}:t\gt 0\}$ with a certain rate function and speed $1/{a_{t}}$. We remark that, according to this approach, one typically has $\phi (x)=\sqrt{x}$ for central moderate deviation results (i.e. for the cases in which the weak convergence is towards a normal distribution); moreover, to give an example of a different situation, we have $\phi (x)=x$ for the noncentral moderate deviation result in [7] (note that r and ${\gamma _{r}}$ in that reference play the roles of t and ${a_{t}}$ in this paper). The results in this paper follow this approach with $\phi (x)={x^{\beta }}$ for some $\beta \in (0,1)$; more precisely $\beta =\alpha (\nu )$ (see (7)) in Section 3 and $\beta ={\alpha _{1}}(\nu )$ (see (12)) in Section 4.
We aim to present some extensions of the recent results in [1] and [9]. In particular, we recall that a subordinator is a nondecreasing (real-valued) Lévy process. Throughout this paper we always deal with real-valued light-tailed Lévy processes $\{S(t):t\ge 0\}$ as described in Condition 1.1, with an independent random time-change in terms of inverses of stable subordinators.
Condition 1.1.
Let $\{S(t):t\ge 0\}$ be a real-valued Lévy process, and let ${\kappa _{S}}$ be the function defined by
\[ {\kappa _{S}}(\theta ):=\log \mathbb{E}[{e^{\theta S(1)}}].\]
We assume that the function ${\kappa _{S}}$ is finite in a neighborhood of the origin $\theta =0$. In particular the random variable $S(1)$ has finite mean $m:={\kappa ^{\prime }_{S}}(0)$ and finite variance $q:={\kappa ^{\prime\prime }_{S}}(0)$.
We recall that, if $\{S(t):t\ge 0\}$ in Condition 1.1 is a Poisson process and $\{{L_{\nu }}(t):t\ge 0\}$ is an independent inverse of a stable subordinator, then the process $\{S({L_{\nu }}(t)):t\ge 0\}$ is a (time) fractional Poisson process (see [11]; see also Section 2.4 in [12] for more general time fractional processes).
The aim of this paper is to provide some extensions of recent noncentral moderate deviation results published in the literature. More precisely we mean:
  • (1) the generalization of the results in [1] by considering a general Lévy process $\{S(t):t\ge 0\}$ instead of a compound Poisson process;
  • (2) the generalization of the results in [9] by considering the difference between two nonnull independent subordinators $\{S(t):t\ge 0\}$ instead of a Skellam process (which is the difference between two independent Poisson processes).
For the first item, we have only one (independent) random time-change for $\{S(t):t\ge 0\}$, and we can specialize the results to the fractional Skellam processes of type 2 in [8]. For the second item, we shall assume that $\{S(t):t\ge 0\}$ has bounded variation and is not a subordinator; thus $\{S(t):t\ge 0\}$ can be seen as the difference of two independent nonnull subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ (see Lemma 4.1 in this paper). Then, for the second item, we have two (independent) random time-changes for $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$, and we can specialize the results to the fractional Skellam processes of type 1 in [8].
The outline of the paper is as follows. In Section 2 we recall some preliminaries. The extensions described above in items 1 and 2 are studied in Sections 3 and 4, respectively. In Section 5 we discuss the possibility of obtaining some generalizations with more general random time-changes; in particular we present Propositions 5.1 and 5.2, which provide a generalization of the reference LDPs in Propositions 3.1 and 4.1, respectively. In Section 6 we present some comparisons between rate functions, along the same lines as the comparisons in Section 5 of [9]. Finally, motivated by potential applications to other fractional processes in the literature, in Section 7 we discuss the case of the difference of two (independent) tempered stable subordinators.

2 Preliminaries

In this section we recall some preliminaries on large deviations and on the inverse of the stable subordinator, together with the Mittag-Leffler function.

2.1 Preliminaries on large deviations

We start with some basic definitions (see, e.g., [3]). In view of what follows we present definitions and results for families of real random variables $\{Z(t):t\gt 0\}$ defined on the same probability space $(\Omega ,\mathcal{F},P)$, where t goes to infinity. A real-valued function $\{{v_{t}}:t\gt 0\}$ such that ${v_{t}}\to \infty $ (as $t\to \infty $) is called a speed function, and a lower semicontinuous function $I:\mathbb{R}\to [0,\infty ]$ is called a rate function. Then $\{Z(t):t\gt 0\}$ satisfies the LDP with speed ${v_{t}}$ and rate function I if
\[ \underset{t\to \infty }{\limsup }\frac{1}{{v_{t}}}\log P(Z(t)\in C)\le -\underset{x\in C}{\inf }I(x)\hspace{1em}\text{for all closed sets}\hspace{2.5pt}C,\]
and
\[ \underset{t\to \infty }{\liminf }\frac{1}{{v_{t}}}\log P(Z(t)\in O)\ge -\underset{x\in O}{\inf }I(x)\hspace{1em}\text{for all open sets}\hspace{2.5pt}O.\]
The rate function I is said to be good if, for every $\beta \ge 0$, the level set $\{x\in \mathbb{R}:I(x)\le \beta \}$ is compact. We also recall the following known result (see, e.g., Theorem 2.3.6(c) in [3]).
Theorem 2.1 (Gärtner–Ellis theorem).
Assume that, for all $\theta \in \mathbb{R}$, there exists
\[ \Lambda (\theta ):=\underset{t\to \infty }{\lim }\frac{1}{{v_{t}}}\log \mathbb{E}\left[{e^{{v_{t}}\theta Z(t)}}\right]\]
as an extended real number; moreover assume that the origin $\theta =0$ belongs to the interior of the set $\mathcal{D}(\Lambda ):=\{\theta \in \mathbb{R}:\Lambda (\theta )\lt \infty \}$. Furthermore, let ${\Lambda ^{\ast }}$ be the Legendre–Fenchel transform of Λ, i.e. the function defined by
\[ {\Lambda ^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-\Lambda (\theta )\}.\]
Then, if Λ is essentially smooth and lower semicontinuous, $\{Z(t):t\gt 0\}$ satisfies the LDP with good rate function ${\Lambda ^{\ast }}$.
We also recall (see, e.g., Definition 2.3.5 in [3]) that Λ is essentially smooth if the interior of $\mathcal{D}(\Lambda )$ is nonempty, the function Λ is differentiable throughout the interior of $\mathcal{D}(\Lambda )$, and Λ is steep, i.e. $|{\Lambda ^{\prime }}({\theta _{n}})|\to \infty $ whenever ${\theta _{n}}$ is a sequence of points in the interior of $\mathcal{D}(\Lambda )$ which converges to a boundary point of $\mathcal{D}(\Lambda )$.

2.2 Preliminaries on the inverse of a stable subordinator

We start with the definition of the Mittag-Leffler function (see, e.g., [6], eq. (3.1.1))
\[ {E_{\nu }}(x):={\sum \limits_{k=0}^{\infty }}\frac{{x^{k}}}{\Gamma (\nu k+1)}.\]
It is known (see Proposition 3.6 in [6] for the case $\alpha \in (0,2)$; indeed α in that reference coincides with ν in this paper) that we have
(2)
\[ \left\{\begin{array}{l}{E_{\nu }}(x)\sim \frac{{e^{{x^{1/\nu }}}}}{\nu }\hspace{1em}\text{as}\hspace{2.5pt}x\to \infty ;\\ {} \text{if}\hspace{2.5pt}y\lt 0\text{, then}\hspace{2.5pt}\frac{1}{x}\log {E_{\nu }}(yx)\to 0\hspace{1em}\text{as}\hspace{2.5pt}x\to \infty .\end{array}\right.\]
Then, if we consider the inverse of the stable subordinator $\{{L_{\nu }}(t):t\ge 0\}$ for $\nu \in (0,1)$, we have
(3)
\[ \mathbb{E}[{e^{\theta {L_{\nu }}(t)}}]={E_{\nu }}(\theta {t^{\nu }})\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}.\]
This formula appears in several references with $\theta \le 0$ only; however this restriction is not needed because we can refer to the analytic continuation of the Laplace transform with complex argument.
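The following short Python sketch (our own illustration, not part of the original analysis) evaluates ${E_{\nu }}$ by truncating the series above and checks that $\log {E_{\nu }}(x)$ grows like ${x^{1/\nu }}$, in line with the first part of (2); the truncation length and the test values of x are arbitrary choices.
```python
import math

def mittag_leffler(x, nu, n_terms=800):
    # Truncated series E_nu(x) = sum_{k>=0} x^k / Gamma(nu*k + 1), for x > 0.
    # Terms are computed in log scale so that x**k does not overflow.
    total = 0.0
    for k in range(n_terms):
        total += math.exp(k * math.log(x) - math.lgamma(nu * k + 1.0))
    return total

nu = 0.5
for x in (2.0, 5.0, 10.0):
    # log E_nu(x) should be close to x^(1/nu) for large x (first line of (2))
    print(x, math.log(mittag_leffler(x, nu)), x ** (1.0 / nu))
```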

3 Results with only one random time-change

Throughout this section we assume that the following condition holds.
Condition 3.1.
Let $\{S(t):t\ge 0\}$ be a real-valued Lévy process as in Condition 1.1, and let $\{{L_{\nu }}(t):t\ge 0\}$ be an inverse of a stable subordinator for $\nu \in (0,1)$. Moreover assume that $\{S(t):t\ge 0\}$ and $\{{L_{\nu }}(t):t\ge 0\}$ are independent.
The next Propositions 3.1, 3.2 and 3.3 provide a generalization of Propositions 3.1, 3.2 and 3.3 in [1], respectively, in which $\{S(t):t\ge 0\}$ is a compound Poisson process. We start with the reference LDP for the convergence in probability to zero of $\left\{\frac{S({L_{\nu }}(t))}{t}:t\gt 0\right\}$.
Proposition 3.1.
Assume that Condition 3.1 holds. Moreover, let ${\Lambda _{\nu ,S}}$ be the function defined by
(4)
\[ {\Lambda _{\nu ,S}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({\kappa _{S}}(\theta ))^{1/\nu }}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\kappa _{S}}(\theta )\ge 0,\\ {} 0& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\kappa _{S}}(\theta )\lt 0,\end{array}\right.\]
and assume that it is an essentially smooth function. Then $\left\{\frac{S({L_{\nu }}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${I_{\mathrm{LD}}}$ defined by
(5)
\[ {I_{\mathrm{LD}}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Lambda _{\nu ,S}}(\theta )\}.\]
Proof.
The desired LDP can be derived by applying the Gärtner–Ellis theorem (i.e. Theorem 2.1). In fact we have
(6)
\[ \mathbb{E}[{e^{\theta S({L_{\nu }}(t))}}]={E_{\nu }}({\kappa _{S}}(\theta ){t^{\nu }})\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}\hspace{2.5pt}\text{and for all}\hspace{2.5pt}t\ge 0,\]
whence we obtain
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log \mathbb{E}[{e^{\theta S({L_{\nu }}(t))}}]={\Lambda _{\nu ,S}}(\theta )\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}\]
by (2).  □
Remark 3.1.
The function ${\Lambda _{\nu ,S}}$ in Proposition 3.1, eq. (4), may fail to be essentially smooth. Here we present a counterexample. Let $\{S(t):t\ge 0\}$ be defined by $S(t):={S_{1}}(t)-{S_{2}}(t)$, where $\{{S_{1}}(t):t\ge 0\}$ is a tempered stable subordinator with parameters $\beta \in (0,1)$ and $r\gt 0$, and $\{{S_{2}}(t):t\ge 0\}$ is the deterministic subordinator defined by ${S_{2}}(t)=ht$ for some $h\gt 0$. Then
\[ {\kappa _{{S_{1}}}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{r^{\beta }}-{(r-\theta )^{\beta }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \le r,\\ {} \infty & \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \gt r\end{array}\right.\]
and ${\kappa _{{S_{2}}}}(\theta ):=h\theta $; thus
\[ {\kappa _{S}}(\theta ):={\kappa _{{S_{1}}}}(\theta )+{\kappa _{{S_{2}}}}(-\theta )=\left\{\begin{array}{l@{\hskip10.0pt}l}{r^{\beta }}-{(r-\theta )^{\beta }}-h\theta & \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \le r,\\ {} \infty & \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \gt r.\end{array}\right.\]
It is easy to check that, for this example, the function ${\Lambda _{\nu ,S}}$ is essentially smooth if and only if
\[ \underset{\theta \uparrow r}{\lim }{\Lambda ^{\prime }_{\nu ,S}}(\theta )=+\infty ;\]
moreover, this condition holds if and only if ${\kappa _{S}}(r)\gt 0$, i.e. if and only if $h\lt {r^{\beta -1}}$; see Figure 1 for an illustration.
Fig. 1. The function ${\Lambda _{\nu ,S}}$ in Remark 3.1 for $\theta \le r=1$. Numerical values: $\nu =0.5$, $\beta =0.25$; $h=0.5$ on the left, and $h=3$ on the right
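The dichotomy described in this remark can also be checked numerically. The following sketch (our own illustration, with the parameter values of Figure 1) evaluates ${\Lambda ^{\prime }_{\nu ,S}}(\theta )$ close to the boundary point $\theta =r$: the derivative blows up when $h\lt {r^{\beta -1}}$ (steepness) and stays at zero when $h\gt {r^{\beta -1}}$.
```python
import math

nu, beta, r = 0.5, 0.25, 1.0   # the values used in Figure 1

def kappa_S(theta, h):
    # kappa_S(theta) = r^beta - (r - theta)^beta - h*theta, finite for theta <= r
    return r ** beta - (r - theta) ** beta - h * theta

def Lambda_prime(theta, h):
    # derivative of Lambda_{nu,S}(theta) = (kappa_S(theta))^(1/nu) on {kappa_S >= 0}
    k = kappa_S(theta, h)
    if k <= 0.0:
        return 0.0      # Lambda_{nu,S} vanishes where kappa_S < 0
    k_prime = beta * (r - theta) ** (beta - 1.0) - h
    return (1.0 / nu) * k ** (1.0 / nu - 1.0) * k_prime

for h in (0.5, 3.0):           # h < r^(beta-1) = 1 versus h > 1
    print([Lambda_prime(r - eps, h) for eps in (1e-2, 1e-4, 1e-6)])
    # the first list blows up as theta -> r (steepness), the second one stays at 0
```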
Now we present weak convergence results. In view of these results it is useful to consider the following notation:
(7)
\[ \alpha (\nu ):=\left\{\begin{array}{l@{\hskip10.0pt}l}1-\nu /2& \hspace{2.5pt}\text{if}\hspace{2.5pt}m=0,\\ {} 1-\nu & \hspace{2.5pt}\text{if}\hspace{2.5pt}m\ne 0.\end{array}\right.\]
Proposition 3.2.
Assume that Condition 3.1 holds and let $\alpha (\nu )$ be defined in (7). We have the following statements.
  • If $m=0$, then $\{{t^{\alpha (\nu )}}\frac{S({L_{\nu }}(t))}{t}:t\gt 0\}$ converges weakly to $\sqrt{q{L_{\nu }}(1)}Z$, where Z is a standard normally distributed random variable independent of ${L_{\nu }}(1)$.
  • If $m\ne 0$, then $\{{t^{\alpha (\nu )}}\frac{S({L_{\nu }}(t))}{t}:t\gt 0\}$ converges weakly to $m{L_{\nu }}(1)$.
Proof.
In both cases $m=0$ and $m\ne 0$ we study suitable limits (as $t\to \infty $) of the moment generating function in (6); note that, when we take these limits, we do not use the asymptotic behavior in (2).
If $m=0$, then we have
\[\begin{aligned}{}& \mathbb{E}\left[{e^{\theta {t^{\alpha (\nu )}}\frac{S({L_{\nu }}(t))}{t}}}\right]=\mathbb{E}\left[{e^{\theta \frac{S({L_{\nu }}(t))}{{t^{\nu /2}}}}}\right]={E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{t^{\nu /2}}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}={E_{\nu }}\left(\left(\frac{q{\theta ^{2}}}{2{t^{\nu }}}+o\left(\frac{1}{{t^{\nu }}}\right)\right){t^{\nu }}\right)\to {E_{\nu }}\left(\frac{q{\theta ^{2}}}{2}\right)\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}.\end{aligned}\]
Thus the desired weak convergence is proved noting that (here we take into account (3))
\[ \mathbb{E}\left[{e^{\theta \sqrt{q{L_{\nu }}(1)}Z}}\right]=\mathbb{E}\left[{e^{\frac{{\theta ^{2}}q}{2}{L_{\nu }}(1)}}\right]={E_{\nu }}\left(\frac{q{\theta ^{2}}}{2}\right)\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}.\]
If $m\ne 0$, then we have
\[\begin{aligned}{}& \mathbb{E}\left[{e^{\theta {t^{\alpha (\nu )}}\frac{S({L_{\nu }}(t))}{t}}}\right]=\mathbb{E}\left[{e^{\theta \frac{S({L_{\nu }}(t))}{{t^{\nu }}}}}\right]={E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{t^{\nu }}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}={E_{\nu }}\left(\left(\frac{m\theta }{{t^{\nu }}}+o\left(\frac{1}{{t^{\nu }}}\right)\right){t^{\nu }}\right)\to {E_{\nu }}\left(m\theta \right)\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}.\end{aligned}\]
Thus the desired weak convergence is proved by (3).  □
Now we present the noncentral moderate deviation results.
Proposition 3.3.
Assume that Condition 3.1 holds and let $\alpha (\nu )$ be defined in (7). Moreover, assume that $q\gt 0$ if $m=0$. Then, for every family of positive numbers $\{{a_{t}}:t\gt 0\}$ such that (1) holds, the family of random variables $\left\{\frac{{({a_{t}}t)^{\alpha (\nu )}}S({L_{\nu }}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed $1/{a_{t}}$ and good rate function ${I_{\mathrm{MD}}}(\cdot ;m)$ defined by:
\[ \left.\begin{array}{c@{\hskip10.0pt}l}\textit{if}\hspace{2.5pt}m=0,& \hspace{2.5pt}{I_{\mathrm{MD}}}(x;0):=({(\nu /2)^{\nu /(2-\nu )}}-{(\nu /2)^{2/(2-\nu )}}){\left(\frac{2{x^{2}}}{q}\right)^{1/(2-\nu )}};\\ {} \textit{if}\hspace{2.5pt}m\ne 0,& \hspace{2.5pt}{I_{\mathrm{MD}}}(x;m):=\left\{\begin{array}{l@{\hskip10.0pt}l}({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){\left(\frac{x}{m}\right)^{1/(1-\nu )}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\frac{x}{m}\ge 0,\\ {} \infty & \hspace{2.5pt}\textit{if}\hspace{2.5pt}\frac{x}{m}\lt 0.\end{array}\right.\end{array}\right.\]
Proof.
For every $m\in \mathbb{R}$ we apply the Gärtner–Ellis theorem (Theorem 2.1). So we have to show that we can consider the function ${\Lambda _{\nu ,m}}$ defined by
\[ {\Lambda _{\nu ,m}}(\theta ):=\underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\left[{e^{\frac{\theta }{{a_{t}}}\frac{{({a_{t}}t)^{\alpha (\nu )}}S({L_{\nu }}(t))}{t}}}\right]\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R},\]
or equivalently
\[ {\Lambda _{\nu ,m}}(\theta ):=\underset{t\to \infty }{\lim }{a_{t}}\log {E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (\nu )}}}\right){t^{\nu }}\right)\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R};\]
in particular we refer to (2) when we take the limit. Moreover, again for every $m\in \mathbb{R}$, we shall see that the function ${\Lambda _{\nu ,m}}$ satisfies the hypotheses of the Gärtner–Ellis theorem (this can be checked by considering the expressions of the function ${\Lambda _{\nu ,m}}$ below), and therefore the LDP holds with good rate function ${I_{\mathrm{MD}}}(\cdot ;m)$ defined by
(8)
\[ {I_{\mathrm{MD}}}(x;m):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Lambda _{\nu ,m}}(\theta )\}.\]
Then, as we shall explain below, for every $m\in \mathbb{R}$ the rate function expression in (8) coincides with the rate function ${I_{\mathrm{MD}}}(\cdot ;m)$ in the statement.
If $m=0$, we have
\[\begin{aligned}{}& {a_{t}}\log {E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (\nu )}}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}={a_{t}}\log {E_{\nu }}\left(\left(\frac{q{\theta ^{2}}}{2{({a_{t}}t)^{\nu }}}+o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right){t^{\nu }}\right)\\ {} & \hspace{1em}={a_{t}}\log {E_{\nu }}\left(\frac{1}{{a_{t}^{\nu }}}\left(\frac{q{\theta ^{2}}}{2}+{({a_{t}}t)^{\nu }}o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right)\right),\end{aligned}\]
and therefore
\[ \underset{t\to \infty }{\lim }{a_{t}}\log {E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (\nu )}}}\right){t^{\nu }}\right)={\left(\frac{q{\theta ^{2}}}{2}\right)^{1/\nu }}=:{\Lambda _{\nu ,0}}(\theta )\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R};\]
thus the desired LDP holds with good rate function ${I_{\mathrm{MD}}}(\cdot ;0)$ defined by (8) which coincides with the rate function expression in the statement (indeed one can check that, for all $x\in \mathbb{R}$, the supremum in (8) is attained at $\theta ={\theta _{x}}:={\left(\frac{2}{q}\right)^{1/(2-\nu )}}{\left(\frac{\nu x}{2}\right)^{\nu /(2-\nu )}}$).
If $m\ne 0$, we have
\[\begin{aligned}{}& {a_{t}}\log {E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (\nu )}}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}={a_{t}}\log {E_{\nu }}\left(\left(\frac{\theta m}{{({a_{t}}t)^{\nu }}}+o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right){t^{\nu }}\right)\\ {} & \hspace{1em}={a_{t}}\log {E_{\nu }}\left(\frac{1}{{a_{t}^{\nu }}}\left(\theta m+{({a_{t}}t)^{\nu }}o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right)\right)\end{aligned}\]
and therefore
\[\begin{aligned}{}& \underset{t\to \infty }{\lim }{a_{t}}\log {E_{\nu }}\left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (\nu )}}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}=\left\{\begin{array}{l@{\hskip10.0pt}l}{(\theta m)^{1/\nu }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta m\ge 0\\ {} 0& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta m\lt 0\end{array}\right.=:{\Lambda _{\nu ,m}}(\theta )\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R};\end{aligned}\]
thus the desired LDP holds with good rate function ${I_{\mathrm{MD}}}(\cdot ;m)$ defined by (8) which coincides with the rate function expression in the statement (indeed one can check that the supremum in (8) is attained at $\theta ={\theta _{x}}:=\frac{1}{m}{\left(\frac{\nu x}{m}\right)^{\nu /(1-\nu )}}$ for $\frac{x}{m}\ge 0$, and it is equal to infinity for $\frac{x}{m}\lt 0$ by letting $\theta \to \infty $ if $m\lt 0$, and by letting $\theta \to -\infty $ if $m\gt 0$).  □
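The closed-form expression of ${I_{\mathrm{MD}}}(\cdot ;m)$ for $m\ne 0$ can be double-checked numerically by maximizing $\theta x-{\Lambda _{\nu ,m}}(\theta )$ over a fine grid of θ, as in the following Python sketch; the values of ν, m and x below are arbitrary and only serve as an illustration.
```python
import numpy as np

nu, m = 0.6, 1.5   # arbitrary illustrative values with m != 0

def Lambda(theta):
    # Lambda_{nu,m}(theta) = (theta*m)^(1/nu) if theta*m >= 0, and 0 otherwise
    return np.maximum(theta * m, 0.0) ** (1.0 / nu)

def I_MD_closed(x):
    c = nu ** (nu / (1 - nu)) - nu ** (1 / (1 - nu))
    return c * (x / m) ** (1 / (1 - nu)) if x / m >= 0 else np.inf

theta = np.linspace(-5.0, 5.0, 400_001)   # grid containing the maximizer theta_x
for x in (0.5, 1.0, 2.0):
    print(x, np.max(theta * x - Lambda(theta)), I_MD_closed(x))
    # the last two columns should agree up to the grid resolution
```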
Remark 3.2.
As we said above, the results in this section provide a generalization of the results in [1], in which $\{S(t):t\ge 0\}$ is a compound Poisson process. More precisely we mean that $S(t):={\textstyle\sum _{k=1}^{N(t)}}{X_{k}}$, where $\{{X_{n}}:n\ge 1\}$ are i.i.d. real-valued light-tailed random variables with finite mean μ and finite variance ${\sigma ^{2}}$ (in [1] it was required that ${\sigma ^{2}}\gt 0$ to avoid trivialities), independent of a Poisson process $\{N(t):t\ge 0\}$ with intensity $\lambda \gt 0$. Therefore ${\kappa _{S}}(\theta )=\lambda (\mathbb{E}[{e^{\theta {X_{1}}}}]-1)$ for all $\theta \in \mathbb{R}$; moreover (see m and q in Condition 1.1) $m=\lambda \mu $ and $q=\lambda ({\sigma ^{2}}+{\mu ^{2}})$.
Moreover we can adapt the content of Remark 3.4 in [1] and we can say that, for every $m\in \mathbb{R}$ (thus the case $m=0$ can be also considered), we have ${I_{\mathrm{MD}}}(x;m)={I_{\mathrm{MD}}}(-x;-m)$ for every $x\in \mathbb{R}$. Finally, if we refer to λ and μ at the beginning of this remark, we recover the rate functions in Proposition 3.3 in [1] as follows:
  • if $m=\lambda \mu =0$ (and therefore $\mu =0$ and $q=\lambda {\sigma ^{2}}$), then ${I_{\mathrm{MD}}}(\cdot ;0)$ in Proposition 3.3 in this paper coincides with ${I_{\mathrm{MD},0}}$ in Proposition 3.3 in [1];
  • if $m=\lambda \mu \ne 0$ (and therefore $\mu \ne 0$), then ${I_{\mathrm{MD}}}(\cdot ;m)$ in Proposition 3.3 in this paper coincides with ${I_{\mathrm{MD},\mu }}$ in Proposition 3.3 in [1].
Remark 3.3.
In Proposition 3.3 we have assumed that $q\gt 0$ when $m=0$. Indeed, if $q=0$ and $m=0$, the process $\{S({L_{\nu }}(t)):t\ge 0\}$ in Propositions 3.1, 3.2 and 3.3 is identically equal to zero (because $S(t)=0$ for all $t\ge 0$) and the weak convergence in Proposition 3.2 (for $m=0$) is towards a constant random variable (i.e. the constant random variable equal to zero). Moreover, again if $q=0$ and $m=0$, the rate function ${I_{\mathrm{MD}}}(x;0)$ in Proposition 3.3 is not well-defined (because a denominator equals zero).

4 Results with two independent random time-changes

Throughout this section we assume that the following condition holds.
Condition 4.1.
Let $\{S(t):t\ge 0\}$ be a real-valued Lévy process as in Condition 1.1, and let $\{{L_{{\nu _{1}}}^{(1)}}(t):t\ge 0\}$ and $\{{L_{{\nu _{2}}}^{(2)}}(t):t\ge 0\}$ be two independent inverses of stable subordinators for ${\nu _{1}},{\nu _{2}}\in (0,1)$, and independent of $\{S(t):t\ge 0\}$. We assume that $\{S(t):t\ge 0\}$ has bounded variation, and it is not a subordinator.
We have the following consequence of Condition 4.1.
Lemma 4.1.
Assume that Condition 4.1 holds. Then there exist two nonnull independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ such that $\{S(t):t\ge 0\}$ is distributed as $\{{S_{1}}(t)-{S_{2}}(t):t\ge 0\}$.
We can assume that the statement in Lemma 4.1 is known even if we do not have an exact reference for that result (however a statement of this kind appears in the Introduction of [2]). The idea of the proof is the following. If $\Pi (dx)$ is the Lévy measure of a Lévy process with bounded variation, then ${1_{(0,\infty )}}(x)\Pi (dx)$ and ${1_{(-\infty ,0)}}(x)\Pi (dx)$ are again Lévy measures of Lévy processes with bounded variation; thus ${1_{(0,\infty )}}(x)\Pi (dx)$ is the Lévy measure associated to the subordinator $\{{S_{1}}(t):t\ge 0\}$, ${1_{(-\infty ,0)}}(x)\Pi (dx)$ is the Lévy measure associated to the opposite of the subordinator $\{{S_{2}}(t):t\ge 0\}$, and $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ are independent.
Remark 4.1.
Let ${\kappa _{{S_{1}}}}$ and ${\kappa _{{S_{2}}}}$ be the analogues of the function ${\kappa _{S}}$ in Condition 1.1 for the processes $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$, i.e. the functions defined by
\[ {\kappa _{{S_{i}}}}(\theta ):=\log \mathbb{E}[{e^{\theta {S_{i}}(1)}}]\hspace{1em}(\text{for}\hspace{2.5pt}i=1,2),\]
where $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ are the subordinators in Lemma 4.1. In particular both functions are finite in a neighborhood of the origin. Then, if we set
\[ {m_{i}}={\kappa ^{\prime }_{{S_{i}}}}(0)\hspace{1em}\text{and}\hspace{1em}{q_{i}}={\kappa ^{\prime\prime }_{{S_{i}}}}(0)\hspace{1em}(\text{for}\hspace{2.5pt}i\in \{1,2\}),\]
we have (we recall that ${\kappa _{S}}(\theta )={\kappa _{{S_{1}}}}(\theta )+{\kappa _{{S_{2}}}}(-\theta )$ for all $\theta \in \mathbb{R}$)
\[ m={\kappa ^{\prime }_{S}}(0)={m_{1}}-{m_{2}}\hspace{1em}\text{and}\hspace{1em}q={\kappa ^{\prime\prime }_{S}}(0)={q_{1}}+{q_{2}}.\]
We recall that, since $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ are nontrivial subordinators, we have ${m_{1}},{m_{2}}\gt 0$.
The next Propositions 4.1, 4.2 and 4.3 provide a generalization of Propositions 3.1, 3.2 and 3.3 in [9], respectively, in which $\{S(t):t\ge 0\}$ is a Skellam process (and therefore $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ are two Poisson processes with intensities ${\lambda _{1}}$ and ${\lambda _{2}}$, respectively). We start with the reference LDP for the convergence of $\left\{\frac{{S_{1}}({L_{{\nu _{1}}}^{(1)}}(t))-{S_{2}}({L_{{\nu _{2}}}^{(2)}}(t))}{t}:t\gt 0\right\}$ to zero in probability. In this first result the case ${\nu _{1}}\ne {\nu _{2}}$ is allowed.
Proposition 4.1.
Assume that Condition 4.1 holds (therefore we can refer to the independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ in Lemma 4.1). Let ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ be the function defined by
(9)
\[ {\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({\kappa _{{S_{1}}}}(\theta ))^{1/{\nu _{1}}}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \ge 0,\\ {} {({\kappa _{{S_{2}}}}(-\theta ))^{1/{\nu _{2}}}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \lt 0.\end{array}\right.\]
Then $\left\{\frac{{S_{1}}({L_{{\nu _{1}}}^{(1)}}(t))-{S_{2}}({L_{{\nu _{2}}}^{(2)}}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${J_{\mathrm{LD}}}$ defined by
(10)
\[ {J_{\mathrm{LD}}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta )\}.\]
Proof.
We prove this proposition by applying the Gärtner–Ellis theorem. More precisely, we have to show that
(11)
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log \mathbb{E}\left[{e^{t\theta \frac{{S_{1}}({L_{{\nu _{1}}}^{(1)}}(t))-{S_{2}}({L_{{\nu _{2}}}^{(2)}}(t))}{t}}}\right]={\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta )\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}),\]
where ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ is the function in (9).
The case $\theta =0$ is immediate. For $\theta \ne 0$ we have
\[ \log \mathbb{E}\left[{e^{t\theta \frac{{S_{1}}({L_{{\nu _{1}}}^{(1)}}(t))-{S_{2}}({L_{{\nu _{2}}}^{(2)}}(t))}{t}}}\right]=\log {E_{{\nu _{1}}}}({\kappa _{{S_{1}}}}(\theta ){t^{{\nu _{1}}}})+\log {E_{{\nu _{2}}}}({\kappa _{{S_{2}}}}(-\theta ){t^{{\nu _{2}}}}).\]
Then, by taking into account the asymptotic behavior of the Mittag-Leffler function in (2), we have
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{1}}}}({\kappa _{{S_{1}}}}(\theta ){t^{{\nu _{1}}}})+\underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{2}}}}({\kappa _{{S_{2}}}}(-\theta ){t^{{\nu _{2}}}})={({\kappa _{{S_{1}}}}(\theta ))^{1/{\nu _{1}}}}\]
for $\theta \gt 0$, and
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{1}}}}({\kappa _{{S_{1}}}}(\theta ){t^{{\nu _{1}}}})+\underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{2}}}}({\kappa _{{S_{2}}}}(-\theta ){t^{{\nu _{2}}}})={({\kappa _{{S_{2}}}}(-\theta ))^{1/{\nu _{2}}}}\]
for $\theta \lt 0$; thus the limit in (11) is checked. Finally, the desired LDP holds because the function ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ is essentially smooth. The essential smoothness of ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ trivially holds if ${\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta )$ is finite everywhere (and differentiable). So now we assume that ${\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta )$ is not finite everywhere. For $i=1,2$ we have
\[ \frac{d}{d\theta }{({\kappa _{{S_{i}}}}(\theta ))^{1/{\nu _{i}}}}=\frac{1}{{\nu _{i}}}{({\kappa _{{S_{i}}}}(\theta ))^{1/{\nu _{i}}-1}}{\kappa ^{\prime }_{{S_{i}}}}(\theta ),\]
and therefore the range of values of each one of these derivatives (for $\theta \ge 0$ such that ${\kappa _{{S_{i}}}}(\theta )\lt \infty $) is $[0,\infty )$; therefore the range of values of ${\Psi ^{\prime }_{{\nu _{1}},{\nu _{2}}}}(\theta )$ (for $\theta \in \mathbb{R}$ such that ${\Psi _{{\nu _{1}},{\nu _{2}}}}(\theta )\lt \infty $) is $(-\infty ,\infty )$, and the essential smoothness of ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ is proved.  □
From now on we assume that ${\nu _{1}}$ and ${\nu _{2}}$ coincide, and therefore we simply consider the symbol ν, where $\nu ={\nu _{1}}={\nu _{2}}$. Moreover we set
(12)
\[ {\alpha _{1}}(\nu ):=1-\nu .\]
Proposition 4.2.
Assume that Condition 4.1 holds (therefore we can refer to the independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ in Lemma 4.1). Moreover, assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$ and let ${\alpha _{1}}(\nu )$ be defined in (12). Then $\{{t^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}:t\gt 0\}$ converges weakly to ${m_{1}}{L_{\nu }^{(1)}}(1)-{m_{2}}{L_{\nu }^{(2)}}(1)$.
Proof.
We have to check that
\[ \underset{t\to \infty }{\lim }\mathbb{E}\left[{e^{\theta {t^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]=\underset{={E_{\nu }}({m_{1}}\theta ){E_{\nu }}(-{m_{2}}\theta )}{\underbrace{\mathbb{E}\left[{e^{\theta ({m_{1}}{L_{\nu }^{(1)}}(1)-{m_{2}}{L_{\nu }^{(2)}}(1))}}\right]}}\hspace{1em}\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}\]
(here we take into account that ${L_{\nu }^{(1)}}(1)$ and ${L_{\nu }^{(2)}}(1)$ are i.i.d., and the expression of the moment generating function in (3)). This can be readily done by noting that
\[\begin{aligned}{}& \mathbb{E}\left[{e^{\theta {t^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]=\mathbb{E}\left[{e^{\theta \frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{{t^{\nu }}}}}\right]\\ {} & \hspace{1em}={E_{\nu }}\left({\kappa _{{S_{1}}}}\left(\frac{\theta }{{t^{\nu }}}\right){t^{\nu }}\right){E_{\nu }}\left({\kappa _{{S_{2}}}}\left(-\frac{\theta }{{t^{\nu }}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}={E_{\nu }}\left(\left({m_{1}}\frac{\theta }{{t^{\nu }}}+o\left(\frac{1}{{t^{\nu }}}\right)\right){t^{\nu }}\right){E_{\nu }}\left(\left(-{m_{2}}\frac{\theta }{{t^{\nu }}}+o\left(\frac{1}{{t^{\nu }}}\right)\right){t^{\nu }}\right),\end{aligned}\]
and we get the desired limit letting t go to infinity (for each fixed $\theta \in \mathbb{R}$).  □
Proposition 4.3.
Assume that Condition 4.1 holds (therefore we can refer to the independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ in Lemma 4.1). Moreover assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$ and let ${\alpha _{1}}(\nu )$ be defined in (12). Then, for every family of positive numbers $\{{a_{t}}:t\gt 0\}$ such that (1) holds, the family of random variables $\left\{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed $1/{a_{t}}$ and good rate function ${J_{\mathrm{MD}}}$ defined by
\[ {J_{\mathrm{MD}}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){\left(\frac{x}{{m_{1}}}\right)^{1/(1-\nu )}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} ({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){\left(-\frac{x}{{m_{2}}}\right)^{1/(1-\nu )}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\lt 0.\end{array}\right.\]
Proof.
We prove this proposition by applying the Gärtner–Ellis theorem. More precisely, we have to show that
(13)
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\left[{e^{\frac{\theta }{{a_{t}}}{({a_{t}}t)^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]={\widetilde{\Psi }_{\nu }}(\theta )\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}),\]
where ${\widetilde{\Psi }_{\nu }}$ is the function defined by
\[ {\widetilde{\Psi }_{\nu }}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({m_{1}}\theta )^{1/\nu }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \ge 0,\\ {} {(-{m_{2}}\theta )^{1/\nu }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \lt 0;\end{array}\right.\]
indeed, since the function ${\widetilde{\Psi }_{\nu }}$ is finite (for all $\theta \in \mathbb{R}$) and differentiable, the desired LDP holds noting that the Legendre–Fenchel transform ${\widetilde{\Psi }_{\nu }^{\ast }}$ of ${\widetilde{\Psi }_{\nu }}$, i.e. the function ${\widetilde{\Psi }_{\nu }^{\ast }}$ defined by
(14)
\[ {\widetilde{\Psi }_{\nu }^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\widetilde{\Psi }_{\nu }}(\theta )\}\hspace{1em}(\text{for all}\hspace{2.5pt}x\in \mathbb{R}),\]
coincides with the function ${J_{\mathrm{MD}}}$ in the statement of the proposition (for $x=0$ the supremum in (14) is attained at $\theta =0$, for $x\gt 0$ that supremum is attained at $\theta =\frac{1}{{m_{1}}}{(\frac{\nu x}{{m_{1}}})^{\nu /(1-\nu )}}$, for $x\lt 0$ that supremum is attained at $\theta =-\frac{1}{{m_{2}}}{(-\frac{\nu x}{{m_{2}}})^{\nu /(1-\nu )}}$).
So we conclude the proof by checking the limit in (13). The case $\theta =0$ is immediate. For $\theta \ne 0$ we have
\[\begin{aligned}{}& \log \mathbb{E}\left[{e^{\frac{\theta }{{a_{t}}}{({a_{t}}t)^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]=\log \mathbb{E}\left[{e^{\theta \frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{{({a_{t}}t)^{\nu }}}}}\right]\\ {} & \hspace{1em}=\log {E_{\nu }}\left({\kappa _{{S_{1}}}}\left(\frac{\theta }{{({a_{t}}t)^{\nu }}}\right){t^{\nu }}\right)+\log {E_{\nu }}\left({\kappa _{{S_{2}}}}\left(-\frac{\theta }{{({a_{t}}t)^{\nu }}}\right){t^{\nu }}\right)\\ {} & \hspace{1em}=\log {E_{\nu }}\left(\left({m_{1}}\frac{\theta }{{({a_{t}}t)^{\nu }}}+o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right){t^{\nu }}\right)\\ {} & \hspace{2em}+\log {E_{\nu }}\left(\left(-{m_{2}}\frac{\theta }{{({a_{t}}t)^{\nu }}}+o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right){t^{\nu }}\right)\\ {} & \hspace{1em}=\log {E_{\nu }}\left(\frac{{m_{1}}}{{a_{t}^{\nu }}}\left(\theta +{({a_{t}}t)^{\nu }}o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right)\right)\\ {} & \hspace{2em}+\log {E_{\nu }}\left(\frac{{m_{2}}}{{a_{t}^{\nu }}}\left(-\theta +{({a_{t}}t)^{\nu }}o\left(\frac{1}{{({a_{t}}t)^{\nu }}}\right)\right)\right).\end{aligned}\]
Then, by taking into account the asymptotic behavior of the Mittag-Leffler function in (2), we have
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\left[{e^{\frac{\theta }{{a_{t}}}{({a_{t}}t)^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]={({m_{1}}\theta )^{1/\nu }}\]
for $\theta \gt 0$, and
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\left[{e^{\frac{\theta }{{a_{t}}}{({a_{t}}t)^{{\alpha _{1}}(\nu )}}\frac{{S_{1}}({L_{\nu }^{(1)}}(t))-{S_{2}}({L_{\nu }^{(2)}}(t))}{t}}}\right]={(-{m_{2}}\theta )^{1/\nu }}\]
for $\theta \lt 0$. Thus the limit in (13) is checked.  □
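As in Section 3, the expression of ${J_{\mathrm{MD}}}$ can be compared with a numerical evaluation of the Legendre–Fenchel transform ${\widetilde{\Psi }_{\nu }^{\ast }}$ in (14). The following sketch (with arbitrary illustrative values of ν, ${m_{1}}$ and ${m_{2}}$) performs this check on a grid of θ.
```python
import numpy as np

nu, m1, m2 = 0.4, 1.0, 2.5   # arbitrary illustrative values

def Psi_tilde(theta):
    # Psi~_nu(theta) = (m1*theta)^(1/nu) for theta >= 0, (-m2*theta)^(1/nu) for theta < 0
    return np.where(theta >= 0.0,
                    np.maximum(m1 * theta, 0.0) ** (1.0 / nu),
                    np.maximum(-m2 * theta, 0.0) ** (1.0 / nu))

def J_MD_closed(x):
    c = nu ** (nu / (1 - nu)) - nu ** (1 / (1 - nu))
    return c * (x / m1) ** (1 / (1 - nu)) if x >= 0 else c * (-x / m2) ** (1 / (1 - nu))

theta = np.linspace(-5.0, 5.0, 400_001)
for x in (-1.0, -0.3, 0.3, 1.0):
    print(x, np.max(theta * x - Psi_tilde(theta)), J_MD_closed(x))
    # the numerical supremum in (14) matches the expression of J_MD in Proposition 4.3
```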

5 A discussion on possible generalizations

In Sections 3 and 4 we have proved two moderate deviation results, i.e. two collections of three results: a reference LDP (Propositions 3.1 and 4.1), a weak convergence result (Propositions 3.2 and 4.2), and a collection of LDPs depending on positive scalings $\{{a_{t}}:t\gt 0\}$ which satisfy condition (1) (Propositions 3.3 and 4.3).
One can then wonder whether it is possible to consider more general random time-changes and obtain a generalization of at least one of these three results in each collection. This question is quite natural because, if one looks at the results presented above, the asymptotic behavior of the Mittag-Leffler function (see (2)) seems to play a role in the applications of the Gärtner–Ellis theorem (thus in all the results except the weak convergence ones).
In what follows we consider random time-changes which satisfy the following condition.
Condition 5.1.
Let $\{{L_{f}}(t):t\ge 0\}$ be a nonnegative process such that there exists
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log \mathbb{E}[{e^{\rho {L_{f}}(t)}}]=\left\{\begin{array}{l@{\hskip10.0pt}l}f(\rho )& \hspace{2.5pt}\text{if}\hspace{2.5pt}\rho \ge 0\\ {} 0& \hspace{2.5pt}\text{if}\hspace{2.5pt}\rho \lt 0\end{array}\right.=:{\Upsilon _{f}}(\rho ),\]
where f is a regular, convex and nondecreasing (real-valued) function defined on $[0,\infty )$ such that $f(0)=0$ and ${f^{\prime }}(0)=0$ (here ${f^{\prime }}(0)$ is the right derivative of f at $\rho =0$).
This condition arises quite naturally when one deals with inverses of heavy-tailed subordinators; in particular, it is satisfied by inverses of stable subordinators, as explained in the following remark.
Remark 5.1.
Condition 5.1 holds if $\{{L_{f}}(t):t\ge 0\}$ is the inverse of a subordinator $\{V(t):t\ge 0\}$ such that $\mathbb{E}[V(1)]=\infty $. In such a case, if we consider the function
\[ {\kappa _{V}}(\xi ):=\log \mathbb{E}[{e^{\xi V(1)}}],\]
we have a regular and increasing function for $\xi \le 0$, with ${\kappa _{V}}(0)=0$ and ${\kappa ^{\prime }_{V}}(0-)=\infty $ (here ${\kappa ^{\prime }_{V}}(0-)$ is the left derivative of ${\kappa _{V}}$ at $\xi =0$), and ${\kappa _{V}}(\xi )=\infty $ for $\xi \gt 0$. Then the restriction of ${\kappa _{V}}$ to $(-\infty ,0]$ is invertible and takes values in $(-\infty ,0]$; we denote this inverse function by ${\kappa _{V}^{-1}}$. Then one can check that
\[ f(\rho )=-{\kappa _{V}^{-1}}(-\rho )\hspace{1em}(\text{for}\hspace{2.5pt}\rho \ge 0),\]
and this agrees with formulas (12)–(13) in [5]. In particular we recover the case of stable subordinators with ${\kappa _{V}}(\xi )=-{(-\xi )^{\nu }}$ for $\xi \le 0$ and $f(\rho )={\rho ^{1/\nu }}$ for $\rho \ge 0$.
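As a small numerical check (our own sketch, not taken from [5]), one can invert ${\kappa _{V}}$ on $(-\infty ,0]$ by bisection and verify that $f(\rho )=-{\kappa _{V}^{-1}}(-\rho )$ reduces to ${\rho ^{1/\nu }}$ in the stable case; the search interval and the number of bisection steps below are arbitrary choices.
```python
nu = 0.7   # stable case: kappa_V(xi) = -(-xi)^nu for xi <= 0

def kappa_V(xi):
    return -((-xi) ** nu)

def f_numeric(rho, lo=-1e8, hi=0.0, iters=200):
    # f(rho) = -kappa_V^{-1}(-rho); the inverse is found by bisection on [lo, 0],
    # using that kappa_V is increasing on (-inf, 0]
    target = -rho
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kappa_V(mid) < target:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)

for rho in (0.5, 1.0, 4.0):
    print(rho, f_numeric(rho), rho ** (1.0 / nu))   # the last two columns should match
```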
We can then present a generalization of the reference LDPs in Propositions 3.1 and 4.1. The proofs are very similar to the ones of those propositions, and we omit the details.
Proposition 5.1.
Let $\{S(t):t\ge 0\}$ be a real-valued Lévy process as in Condition 1.1; moreover, let $\{{L_{f}}(t):t\ge 0\}$ be a nonnegative process as in Condition 5.1, and assume that $\{S(t):t\ge 0\}$ and $\{{L_{f}}(t):t\ge 0\}$ are independent. Furthermore, we consider the function
\[ {\Lambda _{f,S}}(\theta ):={\Upsilon _{f}}({\kappa _{S}}(\theta ))=\left\{\begin{array}{l@{\hskip10.0pt}l}f({\kappa _{S}}(\theta ))& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\kappa _{S}}(\theta )\ge 0,\\ {} 0& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\kappa _{S}}(\theta )\lt 0,\end{array}\right.\]
and assume that it is an essentially smooth function. Then $\left\{\frac{S({L_{f}}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${I_{\mathrm{LD},f}}$ defined by
\[ {I_{\mathrm{LD},f}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Lambda _{f,S}}(\theta )\}.\]
Proposition 5.2.
Let $\{S(t):t\ge 0\}$ be a real-valued Lévy process as in Condition 1.1, and let $\{{L_{{f_{1}}}^{(1)}}(t):t\ge 0\}$ and $\{{L_{{f_{2}}}^{(2)}}(t):t\ge 0\}$ be two independent processes as in Condition 5.1 (for some functions ${f_{1}}$ and ${f_{2}}$, respectively), and independent of $\{S(t):t\ge 0\}$. Moreover, assume that $\{S(t):t\ge 0\}$ has bounded variation, and it is not a subordinator (therefore we can refer to the independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ in Lemma 4.1). Then let ${\Psi _{{f_{1}},{f_{2}}}}$ be the function defined by
\[ {\Psi _{{f_{1}},{f_{2}}}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{f_{1}}({\kappa _{{S_{1}}}}(\theta ))& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \ge 0,\\ {} {f_{2}}({\kappa _{{S_{2}}}}(-\theta ))& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \lt 0,\end{array}\right.\]
and assume that it is essentially smooth. Then $\left\{\frac{{S_{1}}({L_{{f_{1}}}^{(1)}}(t))-{S_{2}}({L_{{f_{2}}}^{(2)}}(t))}{t}:t\gt 0\right\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${J_{\mathrm{LD},{f_{1}},{f_{2}}}}$ defined by
\[ {J_{\mathrm{LD},{f_{1}},{f_{2}}}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Psi _{{f_{1}},{f_{2}}}}(\theta )\}.\]
Finally, we discuss a possible way to generalize Propositions 3.3 and 4.3 when Condition 5.1 holds, together with some possible further conditions. Firstly, one should have the analogues of Propositions 3.2 and 4.2; thus one should have the weak convergence of $\left\{\frac{S({L_{f}}(t))}{{t^{1-\alpha (f)}}}:t\gt 0\right\}$ (for some $\alpha (f)\in (0,1)$ which plays the role of $\alpha (\nu )$ in (7)) and $\left\{\frac{{S_{1}}({L_{f}^{(1)}}(t))-{S_{2}}({L_{f}^{(2)}}(t))}{{t^{1-{\alpha _{1}}(f)}}}:t\gt 0\right\}$ (for some ${\alpha _{1}}(f)\in (0,1)$ which plays the role of ${\alpha _{1}}(\nu )$ in (12)) to some nondegenerate random variables. In particular, we expect that, as happens in the previous sections (see eqs. (7) and (12)), $\alpha (f)={\alpha _{1}}(f)$ when the Lévy process $\{S(t):t\ge 0\}$ is not centered (i.e. $m\ne 0$ in Section 3, or ${m_{1}}\ne {m_{2}}$ in Section 4), and $1-\alpha (f)=\frac{1-{\alpha _{1}}(f)}{2}$ when the Lévy process $\{S(t):t\ge 0\}$ is centered.
Then one should be able to handle some suitable limits which follow from suitable applications of the Gärtner–Ellis theorem; more precisely, for every choice of the positive scalings $\{{a_{t}}:t\gt 0\}$ such that (1) holds, we mean
\[ {a_{t}}\log \mathbb{E}\left[\exp \left({\kappa _{S}}\left(\frac{\theta }{{({a_{t}}t)^{1-\alpha (f)}}}\right){L_{f}}(t)\right)\right]\]
for the possible generalization of Proposition 3.3, and
\[\begin{aligned}{}& {a_{t}}\left(\log \mathbb{E}\left[\exp \left({\kappa _{{S_{1}}}}\left(\frac{\theta }{{({a_{t}}t)^{1-{\alpha _{1}}(f)}}}\right){L_{f}}(t)\right)\right]\right.\\ {} & \hspace{1em}\left.+\log \mathbb{E}\left[\exp \left({\kappa _{{S_{2}}}}\left(-\frac{\theta }{{({a_{t}}t)^{1-{\alpha _{1}}(f)}}}\right){L_{f}}(t)\right)\right]\right)\end{aligned}\]
for the possible generalization of Proposition 4.3.
In our opinion, the generalizations of the other results can be proved by requiring some further conditions; Condition 5.1 alone is not enough. We also point out that, even if it were possible to prove these generalizations, the rate functions might not have an explicit expression, and only variational formulas would be available; see ${I_{\mathrm{MD}}}(\cdot ;m)$ in Proposition 3.3 (which can be derived from the variational formula (8)) and ${J_{\mathrm{MD}}}$ in Proposition 4.3 (which can be derived from the variational formula (14)).

6 Comparisons between rate functions

In this section $\{S(t):t\ge 0\}$ is a real-valued Lévy process as in Condition 1.1, with bounded variation, and it is not a subordinator; thus we can refer to both Conditions 3.1 and 4.1, and in particular (as stated in Lemma 4.1) we can refer to the nontrivial independent subordinators $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ such that $\{S(t):t\ge 0\}$ is distributed as $\{{S_{1}}(t)-{S_{2}}(t):t\ge 0\}$. In particular we can refer to the LDPs in Propositions 3.1 and 4.1, which are governed by the rate functions ${I_{\mathrm{LD}}}$ and ${J_{\mathrm{LD}}}$, and to the classes of LDPs in Propositions 3.3 and 4.3, which are governed by the rate functions ${I_{\mathrm{MD}}}(\cdot ;m)={I_{\mathrm{MD}}}(\cdot ;{m_{1}}-{m_{2}})$ and ${J_{\mathrm{MD}}}$. All these rate functions uniquely vanish at $x=0$. So, by arguing as in [9] (Section 5), we present some generalizations of the comparisons between rate functions (at least around $x=0$) presented in that reference. Those comparisons allow us to compare different convergences to zero; this could be explained by adapting the explanations in [9] (Section 5), and here we omit the details.
Remark 6.1.
Throughout this section we always assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$. So we simply write ${\Psi _{\nu }}$ in place of the function ${\Psi _{{\nu _{1}},{\nu _{2}}}}$ in Proposition 4.1 (see (9)).
We start by comparing ${J_{\mathrm{LD}}}$ in Proposition 4.1 and ${I_{\mathrm{LD}}}$ in Proposition 3.1.
Proposition 6.1.
Assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$. Then ${J_{\mathrm{LD}}}(0)={I_{\mathrm{LD}}}(0)=0$ and, for $x\ne 0$, we have ${I_{\mathrm{LD}}}(x)\gt {J_{\mathrm{LD}}}(x)\gt 0$.
Proof.
Firstly, since the range of values of ${\Psi ^{\prime }_{\nu }}(\theta )$ (for θ such that the derivative is well-defined) is $(-\infty ,\infty )$ (see the final part of the proof of Proposition 4.1, and the change of notation in Remark 6.1), for all $x\in \mathbb{R}$ there exists ${\theta _{x}^{(1)}}$ such that ${\Psi ^{\prime }_{\nu }}({\theta _{x}^{(1)}})=x$, and therefore
\[ {J_{\mathrm{LD}}}(x)={\theta _{x}^{(1)}}x-{\Psi _{\nu }}({\theta _{x}^{(1)}}).\]
We recall that ${\theta _{x}^{(1)}}=0$ (${\theta _{x}^{(1)}}\gt 0$ and ${\theta _{x}^{(1)}}\lt 0$, respectively) if and only if $x=0$ ($x\gt 0$ and $x\lt 0$, respectively). Then ${J_{\mathrm{LD}}}(0)=0$ and, moreover, we have ${I_{\mathrm{LD}}}(0)=0$; indeed the equation ${\Lambda ^{\prime }_{\nu ,S}}(\theta )=0$, i.e.
\[ \frac{1}{\nu }{({\kappa _{{S_{1}}}}(\theta )+{\kappa _{{S_{2}}}}(-\theta ))^{1/\nu -1}}({\kappa ^{\prime }_{{S_{1}}}}(\theta )+{\kappa ^{\prime }_{{S_{2}}}}(-\theta ))=0,\]
yields the solution $\theta =0$.
We conclude with the case $x\ne 0$. If $x\gt 0$, then we have
\[ {\kappa _{{S_{1}}}}({\theta _{x}^{(1)}})\gt \max \{{\kappa _{{S_{1}}}}({\theta _{x}^{(1)}})+{\kappa _{{S_{2}}}}(-{\theta _{x}^{(1)}}),0\};\]
this yields ${\Psi _{\nu }}({\theta _{x}^{(1)}})\gt {\Lambda _{\nu ,S}}({\theta _{x}^{(1)}})$, and therefore
\[ {J_{\mathrm{LD}}}(x)={\theta _{x}^{(1)}}x-{\Psi _{\nu }}({\theta _{x}^{(1)}})\lt {\theta _{x}^{(1)}}x-{\Lambda _{\nu ,S}}({\theta _{x}^{(1)}})\le \underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Lambda _{\nu ,S}}(\theta )\}={I_{\mathrm{LD}}}(x).\]
Similarly, if $x\lt 0$, then we have
\[ {\kappa _{{S_{2}}}}(-{\theta _{x}^{(1)}})\gt \max \{{\kappa _{{S_{1}}}}({\theta _{x}^{(1)}})+{\kappa _{{S_{2}}}}(-{\theta _{x}^{(1)}}),0\};\]
this yields ${\Psi _{\nu }}({\theta _{x}^{(1)}})\gt {\Lambda _{\nu ,S}}({\theta _{x}^{(1)}})$, and we can conclude following the lines of the case $x\gt 0$.  □
The next Proposition 6.2 provides a similar result which concerns the comparison of ${J_{\mathrm{MD}}}$ in Proposition 4.3 and ${I_{\mathrm{MD}}}(\cdot ;m)={I_{\mathrm{MD}}}(\cdot ;{m_{1}}-{m_{2}})$ in Proposition 3.3. In particular, we have ${I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})\gt {J_{\mathrm{MD}}}(x)\gt 0$ for all $x\ne 0$ if and only if ${m_{1}}\ne {m_{2}}$ (note that, if ${m_{1}}\ne {m_{2}}$, we have $m={m_{1}}-{m_{2}}\ne 0$; thus $\alpha (\nu )$ in (7) coincides with ${\alpha _{1}}(\nu )$ in (12)); on the contrary, if ${m_{1}}={m_{2}}$, we have ${I_{\mathrm{MD}}}(x;0)\gt {J_{\mathrm{MD}}}(x)\gt 0$ only if $|x|$ is small enough (and strictly positive).
Proposition 6.2.
We have ${J_{\mathrm{MD}}}(0)={I_{\mathrm{MD}}}(0;{m_{1}}-{m_{2}})=0$. Moreover, if $x\ne 0$, we have two cases.
  • 1. If ${m_{1}}\ne {m_{2}}$, then ${I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})\gt {J_{\mathrm{MD}}}(x)\gt 0$.
  • 2. If ${m_{1}}={m_{2}}={m_{\ast }}$ for some ${m_{\ast }}\gt 0$, there exists ${\delta _{\nu }}\gt 0$ such that: ${I_{\mathrm{MD}}}(x;0)\gt {J_{\mathrm{MD}}}(x)\gt 0$ if $0\lt |x|\lt {\delta _{\nu }}$, ${J_{\mathrm{MD}}}(x)\gt {I_{\mathrm{MD}}}(x;0)\gt 0$ if $|x|\gt {\delta _{\nu }}$, and ${I_{\mathrm{MD}}}(x;0)={J_{\mathrm{MD}}}(x)\gt 0$ if $|x|={\delta _{\nu }}$.
Proof.
The equality ${J_{\mathrm{MD}}}(0)={I_{\mathrm{MD}}}(0;{m_{1}}-{m_{2}})=0$ (case $x=0$) is immediate. So, in what follows, we take $x\ne 0$. We start with the case ${m_{1}}\ne {m_{2}}$, and we have two cases.
  • Assume that ${m_{1}}\gt {m_{2}}$. Then for $x\lt 0$ we have ${J_{\mathrm{MD}}}(x)\lt \infty ={I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})$. For $x\gt 0$ we have $\frac{x}{{m_{1}}}\lt \frac{x}{{m_{1}}-{m_{2}}}$, which is trivially equivalent to ${J_{\mathrm{MD}}}(x)\lt {I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})$.
  • Assume that ${m_{1}}\lt {m_{2}}$. Then for $x\gt 0$ we have ${J_{\mathrm{MD}}}(x)\lt \infty ={I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})$. For $x\lt 0$ we have $-\frac{x}{{m_{2}}}\lt -\frac{x}{{m_{2}}-{m_{1}}}$, which is trivially equivalent to ${J_{\mathrm{MD}}}(x)\lt {I_{\mathrm{MD}}}(x;{m_{1}}-{m_{2}})$.
Finally, if ${m_{1}}={m_{2}}={m_{\ast }}$ for some ${m_{\ast }}\gt 0$, the statement to prove trivially holds noting that, for two constants ${c_{\nu ,{m_{\ast }}}^{(1)}},{c_{\nu ,{m_{\ast }}}^{(2)}}\gt 0$, we have ${J_{\mathrm{MD}}}(x)={c_{\nu ,{m_{\ast }}}^{(1)}}|x{|^{1/(1-\nu )}}$ and ${I_{\mathrm{MD}}}(x;0)={c_{\nu ,{m_{\ast }}}^{(2)}}|x{|^{1/(1-\nu /2)}}$.  □
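For the second case of Proposition 6.2, the constants ${c_{\nu ,{m_{\ast }}}^{(1)}}$ and ${c_{\nu ,{m_{\ast }}}^{(2)}}$ can be written explicitly from Propositions 4.3 and 3.3, and the crossover point ${\delta _{\nu }}$ can be computed by equating the two expressions. The following sketch (with arbitrary illustrative values of ν, ${m_{\ast }}$ and $q={q_{1}}+{q_{2}}$) does this and checks the stated ordering.
```python
nu, m_star, q = 0.5, 1.0, 3.0   # arbitrary illustrative values (q = q1 + q2, see Remark 4.1)

# constants from Propositions 4.3 and 3.3 when m1 = m2 = m_star (so m = 0)
c1 = (nu ** (nu / (1 - nu)) - nu ** (1 / (1 - nu))) * m_star ** (-1 / (1 - nu))
c2 = ((nu / 2) ** (nu / (2 - nu)) - (nu / 2) ** (2 / (2 - nu))) * (2.0 / q) ** (1 / (2 - nu))

def J_MD(x):
    return c1 * abs(x) ** (1 / (1 - nu))

def I_MD0(x):
    return c2 * abs(x) ** (1 / (1 - nu / 2))   # note 1/(1 - nu/2) = 2/(2 - nu)

# crossover delta_nu: c1 * d^(1/(1-nu)) = c2 * d^(2/(2-nu))
delta = (c2 / c1) ** (1 / (1 / (1 - nu) - 2 / (2 - nu)))
print("delta_nu =", delta)
for x in (0.5 * delta, delta, 2.0 * delta):
    print(x, I_MD0(x), J_MD(x))
    # I_MD0 > J_MD below delta_nu, the two coincide at delta_nu, and J_MD > I_MD0 above
```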
We remark that the inequalities around $x=0$ in Propositions 6.1 and 6.2 are not surprising; indeed we expect to have a slower convergence to zero when we deal with two independent random time-changes (because in that case we have more randomness).
Now we consider comparisons between rate functions for different values of $\nu \in (0,1)$. We mean the rate functions in Propositions 3.1 and 4.1, and we restrict our attention to a comparison around $x=0$. In view of what follows, we consider some slightly different notation: ${I_{\mathrm{LD},\nu }}$ in place of ${I_{\mathrm{LD}}}$ in Proposition 3.1; ${J_{\mathrm{LD},\nu }}$ in place of ${J_{\mathrm{LD}}}$ in Proposition 4.1, with ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$.
Proposition 6.3.
Let $\nu ,\eta \in (0,1)$ be such that $\eta \lt \nu $. Then: ${I_{\mathrm{LD},\eta }}(0)={I_{\mathrm{LD},\nu }}(0)=0$, ${J_{\mathrm{LD},\eta }}(0)={J_{\mathrm{LD},\nu }}(0)=0$; for some $\delta \gt 0$, we have ${I_{\mathrm{LD},\eta }}(x)\gt {I_{\mathrm{LD},\nu }}(x)\gt 0$ and ${J_{\mathrm{LD},\eta }}(x)\gt {J_{\mathrm{LD},\nu }}(x)\gt 0$ for $0\lt |x|\lt \delta $.
Proof.
We can say that there exists $\delta \gt 0$ small enough such that, for $|x|\lt \delta $, there exist ${\theta _{x,\nu }^{(1)}},{\theta _{x,\nu }^{(2)}}\in \mathbb{R}$ such that
\[ {I_{\mathrm{LD},\nu }}(x)={\theta _{x,\nu }^{(2)}}x-{\Lambda _{\nu ,S}}({\theta _{x,\nu }^{(2)}})\hspace{1em}\text{and}\hspace{1em}{J_{\mathrm{LD},\nu }}(x)={\theta _{x,\nu }^{(1)}}x-{\Psi _{\nu }}({\theta _{x,\nu }^{(1)}});\]
moreover ${\theta _{x,\nu }^{(1)}}={\theta _{x,\nu }^{(2)}}=0$ if $x=0$, and ${\theta _{x,\nu }^{(1)}},{\theta _{x,\nu }^{(2)}}\ne 0$ if $x\ne 0$; finally we have
\[ 0\le {\Psi _{\nu }}({\theta _{x,\nu }^{(1)}}),{\Lambda _{\nu ,S}}({\theta _{x,\nu }^{(2)}})\lt 1.\]
Then, by taking into account the same formulas with η in place of ν (together with the inequality $\frac{1}{\eta }\gt \frac{1}{\nu }$), it is easy to check that
\[ 0\le {\Lambda _{\eta ,S}}({\theta _{x,\nu }^{(2)}})\lt {\Lambda _{\nu ,S}}({\theta _{x,\nu }^{(2)}})\lt 1\hspace{1em}\text{and}\hspace{1em}0\le {\Psi _{\eta }}({\theta _{x,\nu }^{(1)}})\lt {\Psi _{\nu }}({\theta _{x,\nu }^{(1)}})\lt 1\]
(see (4) for the first chain of inequalities, and (9) with ${\nu _{1}}={\nu _{2}}=\nu $ for the second chain of inequalities); thus
\[ {I_{\mathrm{LD},\nu }}(x)={\theta _{x,\nu }^{(2)}}x-{\Lambda _{\nu ,S}}({\theta _{x,\nu }^{(2)}})\lt {\theta _{x,\nu }^{(2)}}x-{\Lambda _{\eta ,S}}({\theta _{x,\nu }^{(2)}})\le \underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Lambda _{\eta ,S}}(\theta )\}={I_{\mathrm{LD},\eta }}(x)\]
and
\[ {J_{\mathrm{LD},\nu }}(x)={\theta _{x,\nu }^{(1)}}x-{\Psi _{\nu }}({\theta _{x,\nu }^{(1)}})\lt {\theta _{x,\nu }^{(1)}}x-{\Psi _{\eta }}({\theta _{x,\nu }^{(1)}})\le \underset{\theta \in \mathbb{R}}{\sup }\{\theta x-{\Psi _{\eta }}(\theta )\}={J_{\mathrm{LD},\eta }}(x).\]
This completes the proof.  □
As a consequence of Proposition 6.3 we can say that, in all the above convergences to zero governed by some rate function (that uniquely vanishes at zero), the smaller the ν, the faster the convergence of the random variables to zero.
We also remark that we can obtain a version of Proposition 6.3 in terms of the rate functions ${I_{\mathrm{MD}}}(\cdot ;{m_{1}}-{m_{2}})={I_{\mathrm{MD},\nu }}(\cdot ;{m_{1}}-{m_{2}})$ in Proposition 3.3 and ${J_{\mathrm{MD}}}={J_{\mathrm{MD},\nu }}$ in Proposition 4.3. Indeed we can obtain the same kind of inequalities, and this is easy to check because we have explicit expressions of the rate functions (here we omit the details).

7 An example with independent tempered stable subordinators

Throughout this section we consider the example presented below.
Example 7.1.
Let $\{{S_{1}}(t):t\ge 0\}$ and $\{{S_{2}}(t):t\ge 0\}$ be two independent tempered stable subordinators with parameters $\beta \in (0,1)$ and ${r_{1}}\gt 0$, and $\beta \in (0,1)$ and ${r_{2}}\gt 0$, respectively. Then
\[ {\kappa _{{S_{i}}}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{r_{i}^{\beta }}-{({r_{i}}-\theta )^{\beta }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \le {r_{i}},\\ {} \infty & \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \gt {r_{i}}\end{array}\right.\]
for $i=1,2$; thus
\[\begin{aligned}{}{\kappa _{S}}(\theta )& :={\kappa _{{S_{1}}}}(\theta )+{\kappa _{{S_{2}}}}(-\theta )\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}{r_{1}^{\beta }}-{({r_{1}}-\theta )^{\beta }}+{r_{2}^{\beta }}-{({r_{2}}+\theta )^{\beta }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}-{r_{2}}\le \theta \le {r_{1}},\\ {} \infty & \hspace{2.5pt}\text{otherwise}.\end{array}\right.\end{aligned}\]
Note that, to be consistent in the comparisons between ${I_{\mathrm{LD}}}$ and ${J_{\mathrm{LD}}}$, we always take the rate function ${J_{\mathrm{LD}}}$ in Proposition 4.1 with ${\nu _{1}}={\nu _{2}}=\nu $. Moreover, we remark that, for Example 7.1, the function ${\Lambda _{\nu ,S}}$ is essentially smooth because
Fig. 2. Rate functions ${I_{\mathrm{LD}}}={I_{\mathrm{LD},\nu }}$ (top) and ${J_{\mathrm{LD}}}={J_{\mathrm{LD},\nu }}$ (bottom) for the processes in Example 7.1 and different values of ν ($\nu =0.1,0.5,0.9$). Numerical values of the other parameters: ${r_{1}}=1$, ${r_{2}}=2$, $\beta =0.5$
Fig. 3. Rate functions ${I_{\mathrm{LD}}}$ and ${J_{\mathrm{LD}}}$ for the processes in Example 7.1 and different values of β ($\beta =0.3,0.5,0.7$ on the left top, on the right top and bottom). Numerical values of the other parameters: ${r_{1}}=1$, ${r_{2}}=2$, $\nu =0.5$
\[ \underset{\theta \downarrow -{r_{2}}}{\lim }{\Lambda ^{\prime }_{\nu ,S}}(\theta )=-\infty \hspace{1em}\text{and}\hspace{1em}\underset{\theta \uparrow {r_{1}}}{\lim }{\Lambda ^{\prime }_{\nu ,S}}(\theta )=+\infty ;\]
indeed these conditions hold if and only if ${\kappa _{S}}({r_{1}})$ and ${\kappa _{S}}(-{r_{2}})$ are positive, and in fact we have
\[\begin{aligned}{}{\kappa _{S}}({r_{1}})& ={\kappa _{S}}(-{r_{2}})={r_{1}^{\beta }}+{r_{2}^{\beta }}-{({r_{1}}+{r_{2}})^{\beta }}\\ {} & ={({r_{1}}+{r_{2}})^{\beta }}\left({\left(\frac{{r_{1}}}{{r_{1}}+{r_{2}}}\right)^{\beta }}+{\left(\frac{{r_{2}}}{{r_{1}}+{r_{2}}}\right)^{\beta }}-1\right)\gt 0.\end{aligned}\]
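The strict inequality in the last display can also be checked numerically; the following short snippet (an illustration only) verifies the underlying fact ${u^{\beta }}+{(1-u)^{\beta }}\gt 1$ for $u\in (0,1)$ and $\beta \in (0,1)$ on a grid of values.
```python
import numpy as np

# Illustration only: u**beta + (1-u)**beta > 1 for u in (0,1) and beta in (0,1),
# since u**beta > u and (1-u)**beta > 1-u term by term.
u = np.linspace(0.01, 0.99, 99)
for beta in (0.3, 0.5, 0.7):
    assert np.all(u**beta + (1.0 - u)**beta > 1.0)
print("u**beta + (1-u)**beta > 1 verified on the grid for beta = 0.3, 0.5, 0.7")
```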
Fig. 2. Rate functions ${I_{\mathrm{LD}}}={I_{\mathrm{LD},\nu }}$ (top) and ${J_{\mathrm{LD}}}={J_{\mathrm{LD},\nu }}$ (bottom) for the processes in Example 7.1 and different values of ν ($\nu =0.1,0.5,0.9$). Numerical values of the other parameters: ${r_{1}}=1$, ${r_{2}}=2$, $\beta =0.5$.
Fig. 3. Rate functions ${I_{\mathrm{LD}}}$ and ${J_{\mathrm{LD}}}$ for the processes in Example 7.1 and different values of β ($\beta =0.3,0.5,0.7$ on the left top, on the right top and bottom). Numerical values of the other parameters: ${r_{1}}=1$, ${r_{2}}=2$, $\nu =0.5$.
Fig. 4. Rate functions ${I_{\mathrm{LD}}}$ and ${J_{\mathrm{LD}}}$ for the processes in Example 7.1 and different values of ${r_{2}}$ (${r_{2}}=5,10,50$ on the left top, on the right top and bottom). Numerical values of the other parameters: ${r_{1}}=1$, $\nu =0.5$, $\beta =0.5$.
We start with Figure 2. The graphs agree with the inequalities in Proposition 6.3. Moreover, Figure 2 is more informative than Figure 3 in [9]; indeed the graphs of ${J_{\mathrm{LD}}}={J_{\mathrm{LD},\nu }}$ at the bottom of Figure 2 (for different values of ν) show that the inequalities in Proposition 6.3 hold only in a neighborhood of the origin $x=0$.
In Figures 3 and 4 we take different values of β and of ${r_{2}}$, respectively, while the other parameters are kept fixed. The graphs in these figures agree with Proposition 6.1.
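To connect the figures with the formulas, the following Python sketch computes a rate function as a numerical Legendre–Fenchel transform on a grid. The expression used for ${\Lambda _{\nu ,S}}$, namely ${(\max \{{\kappa _{S}}(\theta ),0\})^{1/\nu }}$ on $[-{r_{2}},{r_{1}}]$, is a working assumption adopted only for this illustration (it is compatible with the bounds used in Section 6) and is not a restatement of the definition of ${\Lambda _{\nu ,S}}$ given earlier in the paper; ${\kappa _{S}}$ is redefined inline so that the snippet is self-contained.
```python
import numpy as np

# Sketch only.  The choice Lambda_{nu,S}(theta) = max(kappa_S(theta), 0)**(1/nu) below is a
# working ASSUMPTION for illustration, not the paper's definition; the Legendre-Fenchel
# routine itself is generic.

def kappa_S(theta, r1, r2, beta):
    """kappa_S(theta) = r1^beta - (r1-theta)^beta + r2^beta - (r2+theta)^beta on [-r2, r1]."""
    if theta < -r2 or theta > r1:
        return np.inf
    return r1**beta - (r1 - theta)**beta + r2**beta - (r2 + theta)**beta

def rate_function(xs, r1=1.0, r2=2.0, beta=0.5, nu=0.5, n_grid=4001):
    """I(x) = sup_theta { theta*x - Lambda(theta) }, evaluated on a theta grid."""
    thetas = np.linspace(-r2, r1, n_grid)  # effective domain of kappa_S
    lam = np.array([max(kappa_S(t, r1, r2, beta), 0.0) ** (1.0 / nu) for t in thetas])
    return np.array([np.max(x * thetas - lam) for x in xs])

if __name__ == "__main__":
    xs = np.linspace(-1.0, 1.0, 5)
    for nu in (0.1, 0.5, 0.9):
        print(nu, np.round(rate_function(xs, nu=nu), 3))
    # Under the assumed Lambda, smaller nu gives larger values near x = 0,
    # matching the ordering of Proposition 6.3 and the top panel of Figure 2.
```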

Acknowledgement

We thank Prof. Zhiyi Chi for some comments on the proof of Lemma 4.1. We also thank a referee for some useful comments which led to the addition of Section 5 and an improvement in the presentation of the paper.

References

[1] 
Beghin, L., Macci, C.: Non-central moderate deviations for compound fractional Poisson processes. Statist. Probab. Lett. 185, Paper No. 109424, 8 pp. (2022). MR4383208. https://doi.org/10.1016/j.spl.2022.109424
[2] 
Chi, Z.: On exact sampling of the first passage event of a Lévy process with infinite Lévy measure and bounded variation. Stochastic Process. Appl. 126, 1124–1144 (2016). MR3461193. https://doi.org/10.1016/j.spa.2015.11.001
[3] 
Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, 2nd edn. Springer, New York (1998). MR1619036. https://doi.org/10.1007/978-1-4612-5320-4
[4] 
Giuliano, R., Macci, C.: Some examples of noncentral moderate deviations for sequences of real random variables. Mod. Stoch. Theory Appl. 10, 111–144 (2023). MR4573675. https://doi.org/10.15559/23-vmsta219
[5] 
Glynn, P.W., Whitt, W.: Large deviations behavior of counting processes and their inverses. Queueing Syst. Theory Appl. 17, 107–128 (1994). MR1295409. https://doi.org/10.1007/BF01158691
[6] 
Gorenflo, R., Kilbas, A.A., Mainardi, F., Rogosin, S.V.: Mittag-Leffler Functions, Related Topics and Applications. Springer, New York (2014). MR3244285. https://doi.org/10.1007/978-3-662-43930-2
[7] 
Iafrate, F., Macci, C.: Asymptotic results for the last zero crossing time of a Brownian motion with non-null drift. Statist. Probab. Lett. 166, Paper No. 108881, 9 pp. (2020). MR4129103. https://doi.org/10.1016/j.spl.2020.108881
[8] 
Kerss, A., Leonenko, N.N., Sikorskii, A.: Fractional Skellam processes with applications to finance. Fract. Calc. Appl. Anal. 17, 532–551 (2014). MR3181070. https://doi.org/10.2478/s13540-014-0184-2
[9] 
Lee, J., Macci, C.: Noncentral moderate deviations for fractional Skellam processes. Mod. Stoch. Theory Appl. 11, 43–61 (2024). MR4692017. https://doi.org/10.15559/23-VMSTA235
[10] 
Leonenko, N., Macci, C., Pacchiarotti, B.: Large deviations for a class of tempered subordinators and their inverse processes. Proc. R. Soc. Edinb., Sect. A 151, 2030–2050 (2021). MR4349433. https://doi.org/10.1017/prm.2020.95
[11] 
Meerschaert, M.M., Nane, E., Vellaisamy, P.: The fractional Poisson process and the inverse stable subordinator. Electron. J. Probab. 16, 1600–1620 (2011). MR2835248. https://doi.org/10.1214/EJP.v16-920
[12] 
Meerschaert, M.M., Sikorskii, A.: Stochastic Models for Fractional Calculus. Walter de Gruyter & Co., Berlin (2012). MR2884383. https://doi.org/10.1515/978-3-110-25816-5
Copyright
© 2025 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Large deviations, weak convergence, Mittag-Leffler function, tempered stable subordinators

MSC2020
60F10 60F05 60G22 33E12

Funding
A.I. acknowledges the support from MUR-PRIN 2022 PNRR (project P2022XSF5H “Stochastic Models in Biomathematics and Applications”) and INdAM-GNCS.
C.M. acknowledges the support from MUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C23000330006), from University of Rome Tor Vergata (project “Asymptotic Properties in Probability” (CUP E83C22001780005)) and INdAM-GNAMPA.
A.M. acknowledges the support from MUR-PRIN 2022 (project 2022XZSAFN “Anomalous Phenomena on Regular and Irregular Domains: Approximating Complexity for the Applied Sciences”), from MUR-PRIN 2022 PNRR (project P2022XSF5H “Stochastic Models in Biomathematics and Applications”) and INdAM-GNCS.
