1 Introduction
A large deviation principle provides asymptotic bounds for a family of probability measures on the same topological space $\mathcal{X}$; moreover, one often refers to a family of $\mathcal{X}$-valued random variables, $\{{C_{t}}\}$, whose laws are those probability measures. These asymptotic bounds are expressed in terms of a speed function ${v_{t}}$ (which tends to infinity) and a lower semicontinuous rate function $I:\mathcal{X}\to [0,\infty ]$. The concept of large deviation principle is a basic definition in the theory of large deviations; this theory allows us to compute the probabilities of rare events on an exponential scale (see [10] as a reference on this topic).
The term moderate deviations is often used in the literature to mean a class of large deviation principles that, in some sense, fills the gap between two asymptotic regimes: the convergence to a constant (typically zero), which is governed by a reference large deviation principle, and the weak convergence to a centered Normal distribution.
The speed functions and the random variables involved in these large deviation principles depend on some scalings in a suitable class; moreover, the large deviation principles in this class are governed by the same quadratic rate function, which vanishes only at zero. Typically the scalings consist of families of positive numbers $\{{a_{t}}:t\gt 0\}$ such that
\[ {a_{t}}\to 0\hspace{1em}\text{and}\hspace{1em}{v_{t}}{a_{t}}\to \infty \hspace{1em}(\text{as}\hspace{2.5pt}t\to \infty ),\]
and one can show that $\{\sqrt{{v_{t}}{a_{t}}}{C_{t}}\}$ satisfies the large deviation principle with speed $\frac{1}{{a_{t}}}$; note that the speed $\frac{1}{{a_{t}}}$ grows more slowly than the speed ${v_{t}}$ (since ${v_{t}}{a_{t}}\to \infty $), and this explains the use of the term moderate. We also recall that we recover the two asymptotic regimes stated above for ${a_{t}}=\frac{1}{{v_{t}}}$ (in this case ${v_{t}}{a_{t}}\to \infty $ fails) and for ${a_{t}}=1$ (in this case ${a_{t}}\to 0$ fails).
The term noncentral moderate deviations has recently been used in the literature for a class of large deviation principles that, in some sense, fills the gap between a convergence to a constant (typically zero) and the weak convergence towards a non-Gaussian distribution. Some examples of noncentral moderate deviations can be found in [12], where the weak convergences are towards Gumbel, exponential, and Laplace distributions; in that reference the interested reader can also find earlier references in the literature with further examples.
The aim of this paper is to present some examples of noncentral moderate deviations based on fractional Skellam processes. In these examples we always have ${v_{t}}=t$, and therefore the scalings are families of positive numbers $\{{a_{t}}:t\gt 0\}$ such that
(1)
\[ {a_{t}}\to 0\hspace{1em}\text{and}\hspace{1em}t{a_{t}}\to \infty \hspace{1em}(\text{as}\hspace{2.5pt}t\to \infty ).\]
Skellam processes are given by the difference of two independent Poisson processes. In this paper we consider two fractional Skellam processes studied in [20]; some more recent generalized versions of these processes can be found in [14] and [17]. The fractional Skellam processes in [20] are closely related to the definition of the fractional Poisson process in the literature. We recall that a fractional Poisson process is obtained as an independent random time-change of a Poisson process with an inverse of stable subordinator (see, e.g., [5], [6] and [22]); here we are referring to the time fractional Poisson process and, for the definitions of space and space-time fractional Poisson process, the interested reader can refer to [23] (see also [19] as a very recent paper on time-changed space-time fractional Poisson processes). Then the fractional Skellam processes studied in [20] are obtained in a quite natural way as follows.
-
• Fractional Skellam process of type 1. A difference between two independent fractional Poisson processes (so there are two independent random time-changes, one for each of the involved fractional Poisson processes);
-
• Fractional Skellam process of type 2. An independent random time-change of a Skellam process with an inverse of stable subordinator.
It is easy to check (see Remark 2.3) that the fractional Skellam process of type 2 is a particular compound fractional Poisson process (and this is not surprising because a Skellam process is a particular compound Poisson process; see Remark 2.1). Therefore, the moderate deviation results for the fractional Skellam process of type 2 can be obtained from the ones in [3]. Here, since we deal with random time-changes of Skellam processes, for completeness we recall the references [7], [8] and [18].
Here, for completeness, we present a brief review of the references with results on large/moderate deviations for fractional Poisson processes or similar models: [1] and [2] with results for the (possibly multivariate) alternative fractional Poisson process, [4] with results for random time-changed continuous-time Markov chains on integers with alternating rates, [21] with results for a nonstandard model based on the Prabhakar function in [24], and [11] with results for a state-dependent model.
We conclude with the outline of the paper. We start with some preliminaries in Section 2. The results for the fractional Skellam processes of type 1 and 2 are presented in Sections 3 and 4, respectively. In Section 5 we compare some rate functions and present some plots. Finally, in Section 6, we present some concluding remarks.
2 Preliminaries
In this section, we recall some preliminaries on large deviations and on fractional Skellam processes.
2.1 On large deviations
Here we present definitions and results for families of real random variables $\{{Z_{t}}:t\gt 0\}$ defined on the same probability space $(\Omega ,\mathcal{F},P)$; moreover, in view of what follows, we consider the case $t\to \infty $. We start with the definition of large deviation principle (see, e.g., [10, pages 4–5]). A family of numbers $\{{v_{t}}:t\gt 0\}$ such that ${v_{t}}\to \infty $ (as $t\to \infty $) is called a speed function, and a lower semicontinuous function $I:\mathbb{R}\to [0,\infty ]$ is called a rate function. Then $\{{Z_{t}}:t\gt 0\}$ satisfies the large deviation principle (LDP from now on) with speed ${v_{t}}$ and a rate function I if
\[ \underset{t\to \infty }{\limsup }\frac{1}{{v_{t}}}\log P({Z_{t}}\in C)\le -\underset{x\in C}{\inf }I(x)\hspace{1em}\text{for all closed sets}\hspace{2.5pt}C,\]
and
\[ \underset{t\to \infty }{\liminf }\frac{1}{{v_{t}}}\log P({Z_{t}}\in O)\ge -\underset{x\in O}{\inf }I(x)\hspace{1em}\text{for all open sets}\hspace{2.5pt}O.\]
Moreover the rate function I is said to be good if, for every $\beta \ge 0$, the level set $\{x\in \mathbb{R}:I(x)\le \beta \}$ is compact. The following well-known theorem provides a sufficient condition to have an LDP, and makes it easy to compute the corresponding speed and rate functions (see, e.g., Theorem 2.3.6(c) in [10]).
Theorem 1 (Gärtner–Ellis theorem).
Assume that, for all $\theta \in \mathbb{R}$, there exists
\[ \Lambda (\theta ):=\underset{t\to \infty }{\lim }\frac{1}{{v_{t}}}\log \mathbb{E}\big[{e^{{v_{t}}\theta {Z_{t}}}}\big]\]
as an extended real number; moreover assume that the origin $\theta =0$ belongs to the interior of the set
\[ \mathcal{D}(\Lambda ):=\big\{\theta \in \mathbb{R}:\Lambda (\theta )\lt \infty \big\}.\]
Furthermore let ${\Lambda ^{\ast }}$ be the Legendre–Fenchel transform of the function Λ, i.e. the function defined by
\[ {\Lambda ^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-\Lambda (\theta )\big\}.\]
Then, if Λ is essentially smooth and lower semicontinuous, $\{{Z_{t}}:t\gt 0\}$ satisfies the LDP with speed ${v_{t}}$ and good rate function ${\Lambda ^{\ast }}$.
We also recall (see, e.g., Definition 2.3.5 in [10]) that Λ is essentially smooth if the interior of $\mathcal{D}(\Lambda )$ is nonempty, the function Λ is differentiable throughout the interior of $\mathcal{D}(\Lambda )$, and Λ is steep, i.e. $|{\Lambda ^{\prime }}({\theta _{n}})|\to \infty $ whenever $\{{\theta _{n}}:n\ge 1\}$ is a sequence of points in the interior of $\mathcal{D}(\Lambda )$ which converges to a boundary point of $\mathcal{D}(\Lambda )$. A particularly simple case (which always occurs in the applications of the Gärtner–Ellis theorem in this paper) is when $\mathcal{D}(\Lambda )=\mathbb{R}$ and Λ is a differentiable function; indeed, in such a case, the function Λ is essentially smooth (the steepness condition holds vacuously) and lower semicontinuous.
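In practice, once the limit function Λ in Theorem 1 is available in closed form, the rate function ${\Lambda ^{\ast }}$ can also be approximated numerically. The following is a minimal sketch (assuming Python with NumPy; the function names are ours and are only illustrative) that evaluates the Legendre–Fenchel transform on a grid and checks it against the classical Poisson example $\Lambda (\theta )=\lambda ({e^{\theta }}-1)$, whose transform is $x\log (x/\lambda )-x+\lambda $ for $x\gt 0$.

```python
import numpy as np

def legendre_fenchel(Lambda, x, thetas):
    """Approximate Lambda*(x) = sup_theta {theta * x - Lambda(theta)} on a grid of thetas."""
    values = np.outer(x, thetas) - Lambda(thetas)[None, :]
    return values.max(axis=1)

lam = 2.0
Lambda_poisson = lambda th: lam * (np.exp(th) - 1.0)   # limit of (1/t) log E[exp(theta N(t))]

x = np.linspace(0.5, 6.0, 12)
thetas = np.linspace(-5.0, 5.0, 20001)
numeric = legendre_fenchel(Lambda_poisson, x, thetas)
closed_form = x * np.log(x / lam) - x + lam            # known rate function for N(t)/t

print(np.max(np.abs(numeric - closed_form)))           # small, up to grid discretization
```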
2.2 On fractional Skellam processes
We start with the definition of the (nonfractional) Skellam process. Let $\{{N_{{\lambda _{1}}}}(t):t\ge 0\}$ and $\{{N_{{\lambda _{2}}}}(t):t\ge 0\}$ be two independent Poisson processes with intensities ${\lambda _{1}}\gt 0$ and ${\lambda _{2}}\gt 0$, respectively. In particular we consider the notation $\underline{\lambda }=({\lambda _{1}},{\lambda _{2}})$. Then the process $\{{S_{\underline{\lambda }}}(t):t\ge 0\}$ defined by
\[ {S_{\underline{\lambda }}}(t):={N_{{\lambda _{1}}}}(t)-{N_{{\lambda _{2}}}}(t)\]
is called Skellam process. Moreover, for each fixed $t\ge 0$, we have
\[ P\big({S_{\underline{\lambda }}}(t)=k\big)={e^{-({\lambda _{1}}+{\lambda _{2}})t}}{\bigg(\frac{{\lambda _{1}}}{{\lambda _{2}}}\bigg)^{k/2}}{I_{|k|}}\big(2t\sqrt{{\lambda _{1}}{\lambda _{2}}}\big)\hspace{1em}(\text{for all}\hspace{2.5pt}k\in \mathbb{Z}),\]
where ${I_{|k|}}$ is the modified Bessel function of the first kind (this is the well-known Skellam distribution).
Remark 2.1.
It is easy to check that $\{{S_{\underline{\lambda }}}(t):t\ge 0\}$ can be seen as a compound Poisson process $\{{\textstyle\sum _{k=1}^{{N_{{\lambda _{1}}+{\lambda _{2}}}}(t)}}{X_{k}}:t\ge 0\}$, where $\{{X_{k}}:k\ge 1\}$ and $\{{N_{{\lambda _{1}}+{\lambda _{2}}}}(t):t\ge 0\}$ are independent, $\{{X_{k}}:k\ge 1\}$ are i.i.d. random variables such that
\[ P({X_{k}}=1)=\frac{{\lambda _{1}}}{{\lambda _{1}}+{\lambda _{2}}}\hspace{1em}\text{and}\hspace{1em}P({X_{k}}=-1)=\frac{{\lambda _{2}}}{{\lambda _{1}}+{\lambda _{2}}},\]
and $\{{N_{{\lambda _{1}}+{\lambda _{2}}}}(t):t\ge 0\}$ is a Poisson process with intensity ${\lambda _{1}}+{\lambda _{2}}$.
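A quick Monte Carlo experiment can illustrate this representation: sampling ${S_{\underline{\lambda }}}(t)$ either as a difference of two independent Poisson random variables, or as a compound Poisson sum with ±1 jumps, produces samples with matching moments. This is only a small sketch (assuming NumPy; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, t, n = 1.5, 0.7, 3.0, 200_000

# Representation 1: difference of two independent Poisson variables at time t.
s_diff = rng.poisson(lam1 * t, n) - rng.poisson(lam2 * t, n)

# Representation 2 (Remark 2.1): compound Poisson with intensity lam1 + lam2 and
# jumps equal to +1 with probability lam1/(lam1 + lam2) and to -1 otherwise.
counts = rng.poisson((lam1 + lam2) * t, n)
plus_jumps = rng.binomial(counts, lam1 / (lam1 + lam2))
s_comp = 2 * plus_jumps - counts

print(s_diff.mean(), s_comp.mean())   # both close to (lam1 - lam2) * t
print(s_diff.var(), s_comp.var())     # both close to (lam1 + lam2) * t
```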
In view of what follows we recall some other preliminaries. We start with the definition of the Mittag-Leffler function (see, e.g., [13], eq. (3.1.1))
\[ {E_{\nu }}(x):={\sum \limits_{k=0}^{\infty }}\frac{{x^{k}}}{\Gamma (\nu k+1)}\hspace{1em}\text{for}\hspace{2.5pt}\nu ,x\in \mathbb{C}.\]
Actually throughout this paper we have $\nu \in (0,1)$ and $x\in \mathbb{R}$; moreover, it is known (see Proposition 3.6 in [13] for the case $\alpha \in (0,2)$; indeed α in that reference coincides with ν in this paper) that we have
(2)
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}{E_{\nu }}(x)\sim \frac{{e^{{x^{1/\nu }}}}}{\nu },& \text{as}\hspace{2.5pt}x\to \infty ,\\ {} \frac{1}{x}\log {E_{\nu }}(x)\to 0,& \text{as}\hspace{2.5pt}x\to -\infty \end{array}\right.\]
(this is the correct version of eq. (3) in [3]; indeed we need the condition presented here for $x\to -\infty $, instead of ${E_{\nu }}(x)\to 0$).
Now we recall some moment generating functions which can be expressed in terms of the Mittag-Leffler function. If we consider the inverse of the stable subordinator $\{{L_{\nu }}(t):t\ge 0\}$, then we have
(3)
\[ \mathbb{E}\big[{e^{\theta {L_{\nu }}(t)}}\big]={E_{\nu }}\big(\theta {t^{\nu }}\big)\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}).\]
This formula appears in several references with $\theta \le 0$ only; however, this restriction is not needed because we can refer to the analytic continuation of the Laplace transform with complex argument.
The fractional Poisson process $\{{N_{\nu ,\lambda }}(t):t\ge 0\}$ is defined by
\[ {N_{\nu ,\lambda }}(t):={N_{\lambda }}\big({L_{\nu }}(t)\big),\]
where $\{{N_{\lambda }}(t):t\ge 0\}$ is a (nonfractional) Poisson process with intensity λ, independent of $\{{L_{\nu }}(t):t\ge 0\}$; it is known that
\[ \mathbb{E}\big[{e^{\theta {N_{\nu ,\lambda }}(t)}}\big]={E_{\nu }}\big(\lambda \big({e^{\theta }}-1\big){t^{\nu }}\big)\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}).\]
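These quantities are easy to explore numerically: a truncated version of the series defining ${E_{\nu }}$ is reliable for moderate arguments, and can be compared with the first asymptotic relation in (2). Below is a small sketch (assuming Python with the standard math module; it is only illustrative, since the plain series suffers from overflow and cancellation for large $|x|$):

```python
import math

def mittag_leffler(nu, x, terms=150):
    """Truncated series E_nu(x) = sum_k x**k / Gamma(nu*k + 1); adequate for moderate |x|."""
    return sum(x ** k / math.gamma(nu * k + 1.0) for k in range(terms))

nu, x = 0.5, 5.0
series_value = mittag_leffler(nu, x)
asymptotic = math.exp(x ** (1.0 / nu)) / nu   # leading term in (2) as x -> +infinity

print(series_value, asymptotic, series_value / asymptotic)   # the ratio is already close to 1
```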
Now we are ready to provide the definitions of two fractional Skellam processes and their moment generating functions (see [20], Definitions 3.1–3.2 and Theorems 3.1–3.2). In particular, we consider the notation $\underline{\nu }=({\nu _{1}},{\nu _{2}})$ for ${\nu _{1}},{\nu _{2}}\in (0,1)$.
Fractional Skellam process of type 1. It is the process $\{{Y_{\underline{\nu },\underline{\lambda }}}(t):t\ge 0\}$ defined by
\[ {Y_{\underline{\nu },\underline{\lambda }}}(t):={N_{{\nu _{1}},{\lambda _{1}}}}(t)-{N_{{\nu _{2}},{\lambda _{2}}}}(t),\]
where $\{{N_{{\nu _{1}},{\lambda _{1}}}}(t):t\ge 0\}$ and $\{{N_{{\nu _{2}},{\lambda _{2}}}}(t):t\ge 0\}$ are two independent fractional Poisson processes. Then we have
\[ \mathbb{E}\big[{e^{\theta {Y_{\underline{\nu },\underline{\lambda }}}(t)}}\big]={E_{{\nu _{1}}}}\big({\lambda _{1}}\big({e^{\theta }}-1\big){t^{{\nu _{1}}}}\big){E_{{\nu _{2}}}}\big({\lambda _{2}}\big({e^{-\theta }}-1\big){t^{{\nu _{2}}}}\big)\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}).\]
Remark 2.2.
Some results for $\{{Y_{\underline{\nu },\underline{\lambda }}}(t):t\ge 0\}$ presented below concern the case ${\nu _{1}}={\nu _{2}}$; in this case we set ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$, and use the notation ${Y_{\nu ,\underline{\lambda }}}(t)$ in place of ${Y_{\underline{\nu },\underline{\lambda }}}(t)$.
Fractional Skellam process of type 2. It is the process $\{{Z_{\nu ,\underline{\lambda }}}(t):t\ge 0\}$ defined by
\[ {Z_{\nu ,\underline{\lambda }}}(t):={S_{\underline{\lambda }}}\big({L_{\nu }}(t)\big),\]
where the Skellam process $\{{S_{\underline{\lambda }}}(t):t\ge 0\}$ and the inverse of the stable subordinator $\{{L_{\nu }}(t):t\ge 0\}$ are independent. Then we have
\[ \mathbb{E}\big[{e^{\theta {Z_{\nu ,\underline{\lambda }}}(t)}}\big]={E_{\nu }}\big(\big({\lambda _{1}}\big({e^{\theta }}-1\big)+{\lambda _{2}}\big({e^{-\theta }}-1\big)\big){t^{\nu }}\big)\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}).\]
Remark 2.3.
We have recalled that the fractional Poisson process can be seen as a time changed (nonfractional) Poisson process with an independent inverse of the stable subordinator. Then, by taking into account Remark 2.1, we can say that $\{{Z_{\nu ,\underline{\lambda }}}(t):t\ge 0\}$ is distributed as the compound fractional Poisson process $\{{\textstyle\sum _{k=1}^{{N_{\nu ,{\lambda _{1}}+{\lambda _{2}}}}(t)}}{X_{k}}:t\ge 0\}$, where $\{{X_{k}}:k\ge 1\}$ and $\{{N_{\nu ,{\lambda _{1}}+{\lambda _{2}}}}(t):t\ge 0\}$ are independent, $\{{X_{k}}:k\ge 1\}$ are i.i.d. random variables as in Remark 2.1, and $\{{N_{\nu ,{\lambda _{1}}+{\lambda _{2}}}}(t):t\ge 0\}$ is a fractional Poisson process.
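The time-change representations recalled above also suggest a direct way to simulate these processes at a fixed time t: one samples ${L_{\nu }}(t)$, which is distributed as ${(t/S)^{\nu }}$ for a standard positive ν-stable random variable S (with Laplace transform ${e^{-{u^{\nu }}}}$), and then evaluates the (nonfractional) Skellam process at that random time. A rough sketch follows (assuming NumPy; we use Kanter's classical representation of S, and all the names below are ours):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def stable_positive(nu, size):
    """Kanter's representation of S with Laplace transform exp(-u**nu), for 0 < nu < 1."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    return (np.sin(nu * u) / np.sin(u) ** (1.0 / nu)) * (np.sin((1.0 - nu) * u) / e) ** ((1.0 - nu) / nu)

def inverse_stable(nu, t, size):
    """Samples of L_nu(t), using the identity in distribution L_nu(t) = (t / S)**nu."""
    return (t / stable_positive(nu, size)) ** nu

nu, lam1, lam2, t, n = 0.6, 2.0, 1.0, 5.0, 100_000
L = inverse_stable(nu, t, n)
Z = rng.poisson(lam1 * L) - rng.poisson(lam2 * L)   # samples of Z_{nu,lambda}(t), by Remark 2.3

# Check against E[L_nu(t)] = t**nu / Gamma(1 + nu), so that
# E[Z_{nu,lambda}(t)] = (lam1 - lam2) * t**nu / Gamma(1 + nu).
print(Z.mean(), (lam1 - lam2) * t ** nu / gamma(1.0 + nu))
```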
Remark 2.4.
Assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$ (and recall the slight change of notation explained in Remark 2.2 for the fractional Skellam process of type 1). Then, if ${\lambda _{1}}={\lambda _{2}}$, the random variables ${Y_{\nu ,\underline{\lambda }}}(t)$ and ${Z_{\nu ,\underline{\lambda }}}(t)$ are symmetric (around zero); namely ${Y_{\nu ,\underline{\lambda }}}(t)$ and ${Z_{\nu ,\underline{\lambda }}}(t)$ are distributed as $-{Y_{\nu ,\underline{\lambda }}}(t)$ and $-{Z_{\nu ,\underline{\lambda }}}(t)$, respectively. Then we have some consequences highlighted in Remarks 3.3 and 4.3.
3 Noncentral moderate deviations for the type 1 process
We start with a first result, in which we can have ${\nu _{1}}\ne {\nu _{2}}$.
Proposition 3.1.
Let ${\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}$ be the function defined by
(4)
\[ {\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({\lambda _{1}}({e^{\theta }}-1))^{1/{\nu _{1}}}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \ge 0,\\ {} {({\lambda _{2}}({e^{-\theta }}-1))^{1/{\nu _{2}}}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\theta \lt 0.\end{array}\right.\]
Then $\{\frac{{Y_{\underline{\nu },\underline{\lambda }}}(t)}{t}:t\gt 0\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${I_{\mathrm{LD}}^{(1)}}$ defined by
(5)
\[ {I_{\mathrm{LD}}^{(1)}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-{\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}(\theta )\big\}.\]
Proof.
We prove this proposition by applying the Gärtner–Ellis theorem. More precisely, we have to show that
(6)
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log \mathbb{E}\big[{e^{t\theta \frac{{Y_{\underline{\nu },\underline{\lambda }}}(t)}{t}}}\big]={\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}(\theta )\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}),\]
where ${\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}$ is the function in (4).
The case $\theta =0$ is immediate. Moreover, we remark that
\[\begin{aligned}{}\log \mathbb{E}\big[{e^{t\theta \frac{{Y_{\underline{\nu },\underline{\lambda }}}(t)}{t}}}\big]& =\log \mathbb{E}\big[{e^{\theta {Y_{\underline{\nu },\underline{\lambda }}}(t)}}\big]\\ {} & =\log {E_{{\nu _{1}}}}\big({\lambda _{1}}\big({e^{\theta }}-1\big){t^{{\nu _{1}}}}\big)+\log {E_{{\nu _{2}}}}\big({\lambda _{2}}\big({e^{-\theta }}-1\big){t^{{\nu _{2}}}}\big).\end{aligned}\]
Then, by taking into account the asymptotic behaviour of the Mittag-Leffler function in (2), we have
\[\begin{array}{c}\displaystyle \underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{1}}}}\big({\lambda _{1}}\big({e^{\theta }}-1\big){t^{{\nu _{1}}}}\big)\\ {} \displaystyle +\underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{2}}}}\big({\lambda _{2}}\big({e^{-\theta }}-1\big){t^{{\nu _{2}}}}\big)={\big({\lambda _{1}}\big({e^{\theta }}-1\big)\big)^{1/{\nu _{1}}}}\hspace{1em}\text{for}\hspace{2.5pt}\theta \gt 0,\end{array}\]
and
\[\begin{array}{c}\displaystyle \underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{1}}}}\big({\lambda _{1}}\big({e^{\theta }}-1\big){t^{{\nu _{1}}}}\big)\\ {} \displaystyle +\underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{{\nu _{2}}}}\big({\lambda _{2}}\big({e^{-\theta }}-1\big){t^{{\nu _{2}}}}\big)={\big({\lambda _{2}}\big({e^{-\theta }}-1\big)\big)^{1/{\nu _{2}}}}\hspace{1em}\text{for}\hspace{2.5pt}\theta \lt 0.\end{array}\]
Thus the limit in (6) is checked.
In conclusion, the desired LDP holds noting that the function ${\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}$ in (4) is finite (for all $\theta \in \mathbb{R}$) and differentiable. □
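The limit (6) can also be observed numerically when ${\nu _{1}}={\nu _{2}}=\frac{1}{2}$, since in that case ${E_{1/2}}(z)={e^{{z^{2}}}}\mathrm{erfc}(-z)$ gives a closed form for the moment generating functions involved. A small check (assuming NumPy/SciPy; the scaled complementary error function erfcx is used to avoid overflow for negative arguments, and the helper name is ours):

```python
import numpy as np
from scipy.special import erfc, erfcx

def log_ml_half(z):
    """log E_{1/2}(z), via E_{1/2}(z) = exp(z**2) * erfc(-z), in an overflow-safe way."""
    if z >= 0.0:
        return z * z + np.log(erfc(-z))
    return np.log(erfcx(-z))            # erfcx(x) = exp(x**2) * erfc(x)

lam1, lam2, theta = 1.0, 3.0, 0.5       # theta > 0, so the limit in (6) is (lam1*(e**theta - 1))**2
target = (lam1 * (np.exp(theta) - 1.0)) ** 2

for t in [1e2, 1e4, 1e6]:
    val = (log_ml_half(lam1 * (np.exp(theta) - 1.0) * np.sqrt(t))
           + log_ml_half(lam2 * (np.exp(-theta) - 1.0) * np.sqrt(t))) / t
    print(t, val, target)               # val approaches target as t grows
```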
Remark 3.1.
In general, we do not have an explicit expression for the rate function ${I_{\mathrm{LD}}^{(1)}}$ in Proposition 3.1 (see (5)). However, if ${\nu _{1}}={\nu _{2}}=\frac{1}{2}$, then we have
\[ {I_{\mathrm{LD}}^{(1)}}(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}x\log \big(\frac{1}{2}+\frac{1}{2}\sqrt{1+\frac{2x}{{\lambda _{1}^{2}}}}\big)-{\big(\frac{1}{2}\sqrt{{\lambda _{1}^{2}}+2x}-\frac{{\lambda _{1}}}{2}\big)^{2}},& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\ge 0,\\ {} -x\log \big(\frac{1}{2}+\frac{1}{2}\sqrt{1-\frac{2x}{{\lambda _{2}^{2}}}}\big)-{\big(\frac{1}{2}\sqrt{{\lambda _{2}^{2}}-2x}-\frac{{\lambda _{2}}}{2}\big)^{2}},& \hspace{2.5pt}\text{if}\hspace{2.5pt}x\lt 0.\end{array}\right.\]
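For $x\gt 0$, the computation behind this expression can be sketched as follows (the case $x\lt 0$ is analogous, with ${\lambda _{2}}$ and $-\theta $ in place of ${\lambda _{1}}$ and θ): the stationarity condition for the supremum in (5), with ${\nu _{1}}={\nu _{2}}=\frac{1}{2}$, is
\[ \frac{d}{d\theta }\big\{\theta x-{\lambda _{1}^{2}}{\big({e^{\theta }}-1\big)^{2}}\big\}=x-2{\lambda _{1}^{2}}{e^{\theta }}\big({e^{\theta }}-1\big)=0;\]
setting $u={e^{\theta }}$, this becomes ${u^{2}}-u-\frac{x}{2{\lambda _{1}^{2}}}=0$, whose positive root is $u=\frac{1}{2}+\frac{1}{2}\sqrt{1+\frac{2x}{{\lambda _{1}^{2}}}}$.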
In particular, the supremum in (5) (with ${\nu _{1}}={\nu _{2}}=\frac{1}{2}$) is attained at $\theta =0$ for $x=0$, at $\theta =\log (\frac{1}{2}+\frac{1}{2}\sqrt{1+\frac{2x}{{\lambda _{1}^{2}}}})$ for $x\gt 0$, and at $\theta =-\log (\frac{1}{2}+\frac{1}{2}\sqrt{1-\frac{2x}{{\lambda _{2}^{2}}}})$ for $x\lt 0$.
From now on we assume that ${\nu _{1}}$ and ${\nu _{2}}$ coincide, and therefore we consider the change of notation in Remark 2.2 for $\nu ={\nu _{1}}={\nu _{2}}$. Moreover, we set
(7)
\[ {\alpha _{1}}(\nu ):=1-\nu .\]
Proposition 3.2.
Assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$ and let ${\alpha _{1}}(\nu )$ be defined in (7). Then $\{{t^{{\alpha _{1}}(\nu )}}\frac{{Y_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ converges weakly to ${\lambda _{1}}{L_{\nu }^{\circ }}(1)-{\lambda _{2}}{L_{\nu }^{\circ \circ }}(1)$, where ${L_{\nu }^{\circ }}(1)$ and ${L_{\nu }^{\circ \circ }}(1)$ are two i.i.d. random variables distributed as ${L_{\nu }}(1)$.
Proof.
We have to check that
\[ \underset{t\to \infty }{\lim }\mathbb{E}\Big[{e^{\theta {t^{{\alpha _{1}}(\nu )}}\frac{{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]=\underset{={E_{\nu }}({\lambda _{1}}\theta ){E_{\nu }}(-{\lambda _{2}}\theta )}{\underbrace{\mathbb{E}\big[{e^{\theta ({\lambda _{1}}{L_{\nu }^{\circ }}(1)-{\lambda _{2}}{L_{\nu }^{\circ \circ }}(1))}}\big]}}\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R})\]
(here we take into account that ${L_{\nu }^{\circ }}(1)$ and ${L_{\nu }^{\circ \circ }}(1)$ are i.i.d., and the expression of the moment generating function in eq. (3)). This can be readily done noting that, for two suitable remainders $o(\frac{1}{{t^{2\nu }}})$ such that ${t^{2\nu }}o(\frac{1}{{t^{2\nu }}})\to 0$, we have
\[\begin{aligned}{}& \mathbb{E}\Big[{e^{\theta {t^{{\alpha _{1}}(\nu )}}\frac{{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]=\mathbb{E}\Big[{e^{\theta \frac{{Y_{\nu ,\underline{\lambda }}}(t)}{{t^{\nu }}}}}\Big]={E_{\nu }}\big({\lambda _{1}}\big({e^{\theta /{t^{\nu }}}}-1\big){t^{\nu }}\big){E_{\nu }}\big({\lambda _{2}}\big({e^{-\theta /{t^{\nu }}}}-1\big){t^{\nu }}\big)\\ {} & \hspace{1em}={E_{\nu }}\bigg({\lambda _{1}}\bigg(\frac{\theta }{{t^{\nu }}}+\frac{{\theta ^{2}}}{2{t^{2\nu }}}+o\bigg(\frac{1}{{t^{2\nu }}}\bigg)\bigg){t^{\nu }}\bigg){E_{\nu }}\bigg({\lambda _{2}}\bigg(-\frac{\theta }{{t^{\nu }}}+\frac{{\theta ^{2}}}{2{t^{2\nu }}}+o\bigg(\frac{1}{{t^{2\nu }}}\bigg)\bigg){t^{\nu }}\bigg)\\ {} & \hspace{1em}={E_{\nu }}\bigg({\lambda _{1}}\bigg(\theta +\frac{{\theta ^{2}}}{2{t^{\nu }}}+{t^{\nu }}o\bigg(\frac{1}{{t^{2\nu }}}\bigg)\bigg)\bigg){E_{\nu }}\bigg({\lambda _{2}}\bigg(-\theta +\frac{{\theta ^{2}}}{2{t^{\nu }}}+{t^{\nu }}o\bigg(\frac{1}{{t^{2\nu }}}\bigg)\bigg)\bigg);\end{aligned}\]
then we get the desired limit letting t go to infinity (for each fixed $\theta \in \mathbb{R}$). □
Proposition 3.3.
Assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$ and let ${\alpha _{1}}(\nu )$ be defined in (7). Then, for every family of positive numbers $\{{a_{t}}:t\gt 0\}$ such that (1) holds, the family of random variables $\{\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}{Y_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ satisfies the LDP with speed $1/{a_{t}}$ and good rate function ${I_{\mathrm{MD}}^{(1)}}$ defined by
\[ {I_{\mathrm{MD}}^{(1)}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){(\frac{x}{{\lambda _{1}}})^{1/(1-\nu )}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} ({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){(\frac{x}{-{\lambda _{2}}})^{1/(1-\nu )}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\lt 0.\end{array}\right.\]
Proof.
We prove this proposition by applying the Gärtner–Ellis theorem. More precisely, we have to show that
(8)
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\Big[{e^{\frac{\theta }{{a_{t}}}\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]={\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}(\theta )\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}),\]
where ${\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}$ is the function defined by
\[ {\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({\lambda _{1}}\theta )^{1/\nu }},& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \ge 0,\\ {} {(-{\lambda _{2}}\theta )^{1/\nu }},& \hspace{2.5pt}\text{if}\hspace{2.5pt}\theta \lt 0.\end{array}\right.\]
Indeed, since the function ${\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}$ is finite (for all $\theta \in \mathbb{R}$) and differentiable, the desired LDP holds noting that the Legendre–Fenchel transform ${({\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}})^{\ast }}$ of ${\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}$, i.e. the function ${({\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}})^{\ast }}$ defined by
\[ {\big({\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}\big)^{\ast }}(x):=\underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-{\tilde{\Psi }_{\nu ,\underline{\lambda }}^{(1)}}(\theta )\big\}\hspace{1em}(\text{for all}\hspace{2.5pt}x\in \mathbb{R}),\]
coincides with the function ${I_{\mathrm{MD}}^{(1)}}$ in the statement of the proposition (for $x=0$ the supremum is attained at $\theta =0$, for $x\gt 0$ the supremum is attained at $\theta =\frac{1}{{\lambda _{1}}}{(\frac{\nu x}{{\lambda _{1}}})^{\nu /(1-\nu )}}$, for $x\lt 0$ the supremum is attained at $\theta =-\frac{1}{{\lambda _{2}}}{(\frac{\nu x}{-{\lambda _{2}}})^{\nu /(1-\nu )}}$).
So we conclude the proof by checking the limit in (8). The case $\theta =0$ is immediate. Moreover, we remark that, for two suitable remainders $o(\frac{1}{{({a_{t}}t)^{2\nu }}})$ such that ${({a_{t}}t)^{2\nu }}o(\frac{1}{{({a_{t}}t)^{2\nu }}})\to 0$, we have
\[\begin{aligned}{}\log \mathbb{E}\Big[{e^{\frac{\theta }{{a_{t}}}\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]& =\log \mathbb{E}\Big[{e^{\theta \frac{{Y_{\nu ,\underline{\lambda }}}(t)}{{({a_{t}}t)^{\nu }}}}}\Big]\\ {} & =\log {E_{\nu }}\big({\lambda _{1}}\big({e^{\theta /{({a_{t}}t)^{\nu }}}}-1\big){t^{\nu }}\big)+\log {E_{\nu }}\big({\lambda _{2}}\big({e^{-\theta /{({a_{t}}t)^{\nu }}}}\hspace{-0.1667em}-\hspace{-0.1667em}1\big){t^{\nu }}\big)\\ {} & =\log {E_{\nu }}\bigg({\lambda _{1}}\bigg(\frac{\theta }{{({a_{t}}t)^{\nu }}}+\frac{{\theta ^{2}}}{2{({a_{t}}t)^{2\nu }}}+o\bigg(\frac{1}{{({a_{t}}t)^{2\nu }}}\bigg)\bigg){t^{\nu }}\bigg)\\ {} & \hspace{1em}+\log {E_{\nu }}\bigg({\lambda _{2}}\bigg(-\frac{\theta }{{({a_{t}}t)^{\nu }}}+\frac{{\theta ^{2}}}{2{({a_{t}}t)^{2\nu }}}+o\bigg(\frac{1}{{({a_{t}}t)^{2\nu }}}\bigg)\bigg){t^{\nu }}\bigg)\\ {} & =\log {E_{\nu }}\bigg(\frac{{\lambda _{1}}}{{a_{t}^{\nu }}}\bigg(\theta +\frac{{\theta ^{2}}}{2{({a_{t}}t)^{\nu }}}+{({a_{t}}t)^{\nu }}o\bigg(\frac{1}{{({a_{t}}t)^{2\nu }}}\bigg)\bigg)\bigg)\\ {} & \hspace{1em}+\log {E_{\nu }}\bigg(\frac{{\lambda _{2}}}{{a_{t}^{\nu }}}\bigg(-\theta +\frac{{\theta ^{2}}}{2{({a_{t}}t)^{\nu }}}+{({a_{t}}t)^{\nu }}o\bigg(\frac{1}{{({a_{t}}t)^{2\nu }}}\bigg)\bigg)\bigg).\end{aligned}\]
Then, by taking into account the asymptotic behaviour of the Mittag-Leffler function in (2), we have
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\Big[{e^{\frac{\theta }{{a_{t}}}\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]={({\lambda _{1}}\theta )^{1/\nu }}\hspace{1em}\text{for}\hspace{2.5pt}\theta \gt 0,\]
and
\[ \underset{t\to \infty }{\lim }\frac{1}{1/{a_{t}}}\log \mathbb{E}\Big[{e^{\frac{\theta }{{a_{t}}}\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}{Y_{\nu ,\underline{\lambda }}}(t)}{t}}}\Big]={(-{\lambda _{2}}\theta )^{1/\nu }}\hspace{1em}\text{for}\hspace{2.5pt}\theta \lt 0.\]
Thus the limit in (8) is checked. □
Remark 3.3.
Assume that ${\lambda _{1}}={\lambda _{2}}$. Then: if ${\nu _{1}}={\nu _{2}}$, the rate function ${I_{\mathrm{LD}}^{(1)}}$ in Proposition 3.1 is a symmetric function (we can say this, even if an explicit expression of ${I_{\mathrm{LD}}^{(1)}}$ is not available, because ${\Psi _{\underline{\nu },\underline{\lambda }}^{(1)}}$ is a symmetric function); the weak limit in Proposition 3.2 is a symmetric random variable; the rate function ${I_{\mathrm{MD}}^{(1)}}$ in Proposition 3.3 is a symmetric function.
4 Noncentral moderate deviations for the type 2 process
The results in this section can be derived directly from the results in [3]; indeed, by taking into account Remark 2.3, the fractional Skellam process of type 2 is a particular compound fractional Poisson process. So we only give the statements of propositions without proofs.
Proposition 4.1.
Let ${\Psi _{\nu ,\underline{\lambda }}^{(2)}}$ be the function defined by
(9)
\[ {\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{({\lambda _{1}}({e^{\theta }}-1)+{\lambda _{2}}({e^{-\theta }}-1))^{1/\nu }},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\lambda _{1}}({e^{\theta }}-1)+{\lambda _{2}}({e^{-\theta }}-1)\ge 0,\\ {} 0,& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\lambda _{1}}({e^{\theta }}-1)+{\lambda _{2}}({e^{-\theta }}-1)\lt 0.\end{array}\right.\]
Then $\{\frac{{Z_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ satisfies the LDP with speed ${v_{t}}=t$ and good rate function ${I_{\mathrm{LD}}^{(2)}}$ defined by
\[ {I_{\mathrm{LD}}^{(2)}}(x):=\underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-{\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta )\big\}.\]
Remark 4.1.
The LDP in Proposition 4.1 with ${\lambda _{1}}={\lambda _{2}}=\lambda $ coincides with the LDP in Proposition 4.2 in [4] with ${\alpha _{1}}={\alpha _{2}}={\beta _{1}}={\beta _{2}}=\lambda $. Indeed the function ${\Lambda _{\nu }}$ in that proposition is defined by
where, after some computations, one can check that
The next two propositions can be derived from Propositions 3.2–3.3 in [3]. By Remark 2.3 we have ${Z_{\nu ,\underline{\lambda }}}(t)\stackrel{d}{=}{\textstyle\sum _{k=1}^{{N_{\nu ,{\lambda _{1}}+{\lambda _{2}}}}(t)}}{X_{k}}$, where
\[ \mu :=E({X_{k}})=\frac{{\lambda _{1}}-{\lambda _{2}}}{{\lambda _{1}}+{\lambda _{2}}}\hspace{2.5pt}\text{and}\hspace{2.5pt}{\sigma ^{2}}:=\mathrm{Var}({X_{k}})=\frac{4{\lambda _{1}}{\lambda _{2}}}{{({\lambda _{1}}+{\lambda _{2}})^{2}}}\]
coincide with μ and ${\sigma ^{2}}$ in [3]. Moreover, we set
(10)
\[ {\alpha _{2}}(\nu ):=\left\{\begin{array}{l@{\hskip10.0pt}l}1-\nu /2,& \hspace{2.5pt}\text{if}\hspace{2.5pt}\mu =0,\hspace{2.5pt}\text{i.e. if}\hspace{2.5pt}{\lambda _{1}}={\lambda _{2}},\\ {} 1-\nu ,& \hspace{2.5pt}\text{if}\hspace{2.5pt}\mu \ne 0,\hspace{2.5pt}\text{i.e. if}\hspace{2.5pt}{\lambda _{1}}\ne {\lambda _{2}},\end{array}\right.\]
which coincides with $\alpha (\nu )$ in [3]. Also note that, if ${\lambda _{1}}={\lambda _{2}}=\lambda $ for some $\lambda \gt 0$, we take into account that $\sqrt{\frac{4{\lambda _{1}}{\lambda _{2}}}{{\lambda _{1}}+{\lambda _{2}}}}=\sqrt{2\lambda }$ in Proposition 4.2, and $\frac{{\lambda _{1}}+{\lambda _{2}}}{2{\lambda _{1}}{\lambda _{2}}}=\frac{1}{\lambda }$ in Proposition 4.3.
Proposition 4.2.
Let ${\alpha _{2}}(\nu )$ be defined in (10). Then:
-
• if ${\lambda _{1}}={\lambda _{2}}=\lambda $ for some $\lambda \gt 0$, then $\{{t^{{\alpha _{2}}(\nu )}}\frac{{Z_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ converges weakly to $\sqrt{2\lambda {L_{\nu }}(1)}W$, where W is a standard Normal random variable, independent of ${L_{\nu }}(1)$;
-
• if ${\lambda _{1}}\ne {\lambda _{2}}$, then $\{{t^{{\alpha _{2}}(\nu )}}\frac{{Z_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ converges weakly to $({\lambda _{1}}-{\lambda _{2}}){L_{\nu }}(1)$.
Proposition 4.3.
Let ${\alpha _{2}}(\nu )$ be defined in (10). Then, for every family of positive numbers $\{{a_{t}}:t\gt 0\}$ such that (1) holds, the family of random variables $\{\frac{{({a_{t}}t)^{{\alpha _{2}}(\nu )}}{Z_{\nu ,\underline{\lambda }}}(t)}{t}:t\gt 0\}$ satisfies the LDP with speed $1/{a_{t}}$ and good rate function ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}$ defined by:
-
• if ${\lambda _{1}}={\lambda _{2}}$,\[ {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x):=\big({(\nu /2)^{(\nu /2)/(1-\nu /2)}}-{(\nu /2)^{1/(1-\nu /2)}}\big){\bigg(\frac{({\lambda _{1}}+{\lambda _{2}}){x^{2}}}{2{\lambda _{1}}{\lambda _{2}}}\bigg)^{1/(2-\nu )}}\hspace{1em}(\textit{for all}\hspace{2.5pt}x\in \mathbb{R});\]
-
• if ${\lambda _{1}}\gt {\lambda _{2}}$,\[ {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){(\frac{x}{{\lambda _{1}}-{\lambda _{2}}})^{1/(1-\nu )}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\ge 0,\\ {} \infty ,& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\lt 0;\end{array}\right.\]
-
• if ${\lambda _{1}}\lt {\lambda _{2}}$,\[ {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x):=\left\{\begin{array}{l@{\hskip10.0pt}l}({\nu ^{\nu /(1-\nu )}}-{\nu ^{1/(1-\nu )}}){(\frac{x}{-({\lambda _{2}}-{\lambda _{1}})})^{1/(1-\nu )}},& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\le 0,\\ {} \infty ,& \hspace{2.5pt}\textit{if}\hspace{2.5pt}x\gt 0.\end{array}\right.\]
Remark 4.2.
The sets $\{x\in \mathbb{R}:{I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\lt \infty \}$ (see Proposition 4.3) coincide with the supports of the weak limits in Proposition 4.2: we mean $\mathbb{R}$ if ${\lambda _{1}}={\lambda _{2}}$, $[0,\infty )$ if ${\lambda _{1}}\gt {\lambda _{2}}$, and $(-\infty ,0]$ if ${\lambda _{1}}\lt {\lambda _{2}}$.
Remark 4.3.
Assume that ${\lambda _{1}}={\lambda _{2}}$. Then: the rate function ${I_{\mathrm{LD}}^{(2)}}$ in Proposition 4.1 is a symmetric function (we can say this, even if an explicit expression of ${I_{\mathrm{LD}}^{(2)}}$ is not available, because ${\Psi _{\nu ,\underline{\lambda }}^{(2)}}$ is a symmetric function); the weak limit in Proposition 4.2 is a symmetric random variable; the rate function ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}$ in Proposition 4.3 is a symmetric function.
5 Comparisons between rate functions
In this section we compare the rate functions for the two types of fractional Skellam process, and for different values of ν. Moreover, we present some plots.
5.1 Results and remarks
All the LDPs presented in the previous sections are governed by rate functions that uniquely vanish at $x=0$. So, as we explain below, it is interesting to compare the rate functions, at least around $x=0$.
We start by comparing ${I_{\mathrm{LD}}^{(1)}}$ in Proposition 3.1 and ${I_{\mathrm{LD}}^{(2)}}$ in Proposition 4.1.
Proposition 5.1.
Assume that ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$. Then ${I_{\mathrm{LD}}^{(1)}}(0)={I_{\mathrm{LD}}^{(2)}}(0)=0$ and, for $x\ne 0$, we have ${I_{\mathrm{LD}}^{(2)}}(x)\gt {I_{\mathrm{LD}}^{(1)}}(x)\gt 0$.
Proof.
We start noting that, for the function ${\Psi _{\nu ,\underline{\lambda }}^{(1)}}$ in Proposition 3.1 (see Remark 5.1 and (4)) and the function ${\Psi _{\nu ,\underline{\lambda }}^{(2)}}$ in Proposition 4.1 (see (9)), we have
(11)
\[ {\Psi _{\nu ,\underline{\lambda }}^{(1)}}(0)={\Psi _{\nu ,\underline{\lambda }}^{(2)}}(0)=0;\hspace{1em}{\Psi _{\nu ,\underline{\lambda }}^{(1)}}(\theta )\gt {\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta )\hspace{1em}\text{for}\hspace{2.5pt}\theta \ne 0\text{.}\]
The first statement in (11) is immediate. For the second statement we have two cases (for completeness we remark that ${\lambda _{1}}({e^{\theta }}-1)+{\lambda _{2}}({e^{-\theta }}-1)\ge 0$ for all $\theta \in \mathbb{R}$ if ${\lambda _{1}}={\lambda _{2}}$).
-
• If $\theta \gt 0$, then\[ {\lambda _{1}}\big({e^{\theta }}-1\big)\gt \max \big\{{\lambda _{1}}\big({e^{\theta }}-1\big)+{\lambda _{2}}\big({e^{-\theta }}-1\big),0\big\}\]which yields ${\Psi _{\nu ,\underline{\lambda }}^{(1)}}(\theta )\gt {\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta )$ by taking the power with exponent $1/\nu $;
-
• If $\theta \lt 0$, then\[ {\lambda _{2}}\big({e^{-\theta }}-1\big)\gt \max \big\{{\lambda _{1}}\big({e^{\theta }}-1\big)+{\lambda _{2}}\big({e^{-\theta }}-1\big),0\big\}\]which yields ${\Psi _{\nu ,\underline{\lambda }}^{(1)}}(\theta )\gt {\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta )$ again, by taking the power with exponent $1/\nu $.
Thus (11) is checked.
We also remark that, for every $x\in \mathbb{R}$, there exist ${\theta _{x}^{(1)}},{\theta _{x}^{(2)}}\in \mathbb{R}$ such that
\[ {I_{\mathrm{LD}}^{(1)}}(x)={\theta _{x}^{(1)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(1)}}\big({\theta _{x}^{(1)}}\big)\hspace{2.5pt}\text{and}\hspace{2.5pt}{I_{\mathrm{LD}}^{(2)}}(x)={\theta _{x}^{(2)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(2)}}\big({\theta _{x}^{(2)}}\big);\]
moreover ${\theta _{x}^{(1)}}={\theta _{x}^{(2)}}=0$ if $x=0$, and ${\theta _{x}^{(1)}},{\theta _{x}^{(2)}}\ne 0$ if $x\ne 0$. Then, if $x\ne 0$, we get
\[ {I_{\mathrm{LD}}^{(1)}}(x)={\theta _{x}^{(1)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(1)}}\big({\theta _{x}^{(1)}}\big)\lt {\theta _{x}^{(1)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(2)}}\big({\theta _{x}^{(1)}}\big)\le \underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-{\Psi _{\nu ,\underline{\lambda }}^{(2)}}(\theta )\big\}={I_{\mathrm{LD}}^{(2)}}(x),\]
where the strict inequality holds by (11), and by taking into account that ${\theta _{x}^{(1)}}\ne 0$. □
The next Proposition 5.2 provides a similar result which concerns the comparison of ${I_{\mathrm{MD}}^{(1)}}$ in Proposition 3.3 and ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}$ in Proposition 4.3. In particular, it is possible to obtain the same strict inequality, for all $x\ne 0$, only if ${\lambda _{1}}\ne {\lambda _{2}}$ (note that in such a case ${\alpha _{1}}(\nu )={\alpha _{2}}(\nu )=1-\nu $ by (7) and (10)).
Proposition 5.2.
We have ${I_{\mathrm{MD}}^{(1)}}(0)={I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(0)=0$. Moreover, if $x\ne 0$, we have two cases.
-
1. If ${\lambda _{1}}\ne {\lambda _{2}}$, then ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\gt {I_{\mathrm{MD}}^{(1)}}(x)\gt 0$.
-
2. If ${\lambda _{1}}={\lambda _{2}}=\lambda $ for some $\lambda \gt 0$, there exists ${\delta _{\nu ,\lambda }}\gt 0$ such that: ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\gt {I_{\mathrm{MD}}^{(1)}}(x)\gt 0$ if $0\lt |x|\lt {\delta _{\nu ,\lambda }}$, ${I_{\mathrm{MD}}^{(1)}}(x)\gt {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\gt 0$ if $|x|\gt {\delta _{\nu ,\lambda }}$, and ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)={I_{\mathrm{MD}}^{(1)}}(x)\gt 0$ if $|x|={\delta _{\nu ,\lambda }}$.
Proof.
The equalities ${I_{\mathrm{MD}}^{(1)}}(0)={I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(0)=0$ (case $x=0$) are immediate. So, in what follows, we take $x\ne 0$. We start with the case ${\lambda _{1}}\ne {\lambda _{2}}$, and we have two cases.
-
• Assume that ${\lambda _{1}}\gt {\lambda _{2}}$. Then for $x\lt 0$ we have ${I_{\mathrm{MD}}^{(1)}}(x)\lt \infty ={I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)$. For $x\gt 0$ we have $\frac{x}{{\lambda _{1}}}\lt \frac{x}{{\lambda _{1}}-{\lambda _{2}}}$, which is trivially equivalent to
\[ {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\gt {I_{\mathrm{MD}}^{(1)}}(x)\gt 0.\]
-
• Assume that ${\lambda _{1}}\lt {\lambda _{2}}$. Then for $x\gt 0$ we have ${I_{\mathrm{MD}}^{(1)}}(x)\lt \infty ={I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)$. For $x\lt 0$ we have $\frac{x}{-{\lambda _{2}}}\lt \frac{x}{-({\lambda _{2}}-{\lambda _{1}})}$, which is trivially equivalent to
\[ {I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)\gt {I_{\mathrm{MD}}^{(1)}}(x)\gt 0.\]
Finally, if ${\lambda _{1}}={\lambda _{2}}=\lambda $ for some $\lambda \gt 0$, the statement to prove trivially holds noting that, for two constants ${c_{\nu ,\lambda }^{(1)}},{c_{\nu ,\lambda }^{(2)}}\gt 0$, we have ${I_{\mathrm{MD}}^{(1)}}(x)={c_{\nu ,\lambda }^{(1)}}|x{|^{1/(1-\nu )}}$ and ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)={c_{\nu ,\lambda }^{(2)}}|x{|^{1/(1-\nu /2)}}$. □
Proposition 5.1 tells us that if we compare the rate functions in Propositions 3.1 and 4.1, the rate function of the fractional Skellam process of type 2 is larger than that of the fractional Skellam process of type 1. Proposition 5.2 tells us the same for the rate functions in Propositions 3.3 and 4.3 but, when ${\lambda _{1}}={\lambda _{2}}$, the rate function of the fractional Skellam process of type 2 is larger only around $x=0$.
These inequalities between rate functions allow us to say that the convergence of random variables for the fractional Skellam process of type 2 is faster than the corresponding convergence for the fractional Skellam process of type 1. We explain this by considering the LDPs in Proposition 3.1, with ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$, and in Proposition 4.1. Indeed, for every $\delta \gt 0$ we have
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log P\bigg(\frac{|{Y_{\nu ,\underline{\lambda }}}(t)|}{t}\hspace{-0.1667em}\gt \hspace{-0.1667em}\delta \bigg)\hspace{-0.1667em}=\hspace{-0.1667em}-{J_{\mathrm{LD}}^{(1)}}(\delta ),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\text{where}\hspace{2.5pt}{J_{\mathrm{LD}}^{(1)}}(\delta )\hspace{-0.1667em}:=\hspace{-0.1667em}\min \big\{{I_{\mathrm{LD}}^{(1)}}(\delta ),{I_{\mathrm{LD}}^{(1)}}(-\delta )\big\}\]
and
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log P\bigg(\frac{|{Z_{\nu ,\underline{\lambda }}}(t)|}{t}\hspace{-0.1667em}\gt \hspace{-0.1667em}\delta \bigg)\hspace{-0.1667em}=\hspace{-0.1667em}-{J_{\mathrm{LD}}^{(2)}}(\delta ),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\text{where}\hspace{2.5pt}{J_{\mathrm{LD}}^{(2)}}(\delta )\hspace{-0.1667em}:=\hspace{-0.1667em}\min \big\{{I_{\mathrm{LD}}^{(2)}}(\delta ),{I_{\mathrm{LD}}^{(2)}}(-\delta )\big\};\]
therefore ${J_{\mathrm{LD}}^{(2)}}(\delta )\gt {J_{\mathrm{LD}}^{(1)}}(\delta )\gt 0$ and, for every $\varepsilon \in (0,{J_{\mathrm{LD}}^{(2)}}(\delta )-{J_{\mathrm{LD}}^{(1)}}(\delta ))$, there exists ${t_{\varepsilon }}$ such that
\[ \frac{P(\frac{|{Z_{\nu ,\underline{\lambda }}}(t)|}{t}\gt \delta )}{P(\frac{|{Y_{\nu ,\underline{\lambda }}}(t)|}{t}\gt \delta )}\lt {e^{-t({J_{\mathrm{LD}}^{(2)}}(\delta )-{J_{\mathrm{LD}}^{(1)}}(\delta )-\varepsilon )}}\hspace{1em}\text{for all}\hspace{2.5pt}t\gt {t_{\varepsilon }},\]
where ${e^{-t({J_{\mathrm{LD}}^{(2)}}(\delta )-{J_{\mathrm{LD}}^{(1)}}(\delta )-\varepsilon )}}\to 0$ as $t\to \infty $.
We can follow the same lines to obtain a similar estimate starting from LDPs in Propositions 3.3 and 4.3; in this case, when ${\lambda _{1}}={\lambda _{2}}$, δ has to be chosen small enough (see Proposition 5.2). Here we do not repeat all the computations; however, we can say that if we set
\[ {J_{\mathrm{MD}}^{(1)}}(\delta ):=\min \big\{{I_{\mathrm{MD}}^{(1)}}(\delta ),{I_{\mathrm{MD}}^{(1)}}(-\delta )\big\}\hspace{2.5pt}\text{and}\hspace{2.5pt}{J_{\mathrm{MD}}^{(2)}}(\delta ):=\min \big\{{I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(\delta ),{I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(-\delta )\big\},\]
we have ${J_{\mathrm{MD}}^{(2)}}(\delta )\gt {J_{\mathrm{MD}}^{(1)}}(\delta )\gt 0$ and, for every $\varepsilon \in (0,{J_{\mathrm{MD}}^{(2)}}(\delta )-{J_{\mathrm{MD}}^{(1)}}(\delta ))$, there exists ${t_{\varepsilon }}$ such that
\[ \frac{P(\frac{{({a_{t}}t)^{{\alpha _{2}}(\nu )}}|{Z_{\nu ,\underline{\lambda }}}(t)|}{t}\gt \delta )}{P(\frac{{({a_{t}}t)^{{\alpha _{1}}(\nu )}}|{Y_{\nu ,\underline{\lambda }}}(t)|}{t}\gt \delta )}\lt {e^{-({J_{\mathrm{MD}}^{(2)}}(\delta )-{J_{\mathrm{MD}}^{(1)}}(\delta )-\varepsilon )/{a_{t}}}}\hspace{1em}\text{for all}\hspace{2.5pt}t\gt {t_{\varepsilon }},\]
where ${e^{-({J_{\mathrm{MD}}^{(2)}}(\delta )-{J_{\mathrm{MD}}^{(1)}}(\delta )-\varepsilon )/{a_{t}}}}\to 0$ as $t\to \infty $.
Remark 5.2.
We can say that it is not surprising that the convergence of random variables for the fractional Skellam process of type 2 is faster than the corresponding convergence for the fractional Skellam process of type 1. Indeed the fractional Skellam process of type 1 is defined with two independent random time-changes for the involved fractional Poisson processes; on the contrary, for the fractional Skellam process of type 2, we have a unique independent random time-change. So, in some sense, in the second case we have less randomness. Moreover, if we compare the asymptotic behaviour (as $t\to \infty $) of the variances provided by Remarks 3.1 and 3.2 in [20], i.e.
\[ \mathrm{Var}\big[{Y_{\nu ,\underline{\lambda }}}(t)\big]=\frac{{t^{\nu }}({\lambda _{1}}+{\lambda _{2}})}{\Gamma (1+\nu )}+\frac{({\lambda _{1}^{2}}+{\lambda _{2}^{2}}){t^{2\nu }}}{\nu }\bigg(\frac{1}{\Gamma (2\nu )}-\frac{1}{\nu {\Gamma ^{2}}(\nu )}\bigg)\]
and
\[ \mathrm{Var}\big[{Z_{\nu ,\underline{\lambda }}}(t)\big]=\frac{{t^{\nu }}({\lambda _{1}}+{\lambda _{2}})}{\Gamma (1+\nu )}+{({\lambda _{1}}-{\lambda _{2}})^{2}}{t^{2\nu }}\bigg(\frac{2}{\Gamma (2\nu +1)}-\frac{1}{{\Gamma ^{2}}(1+\nu )}\bigg),\]
in the second case we have smaller asymptotic variances. Indeed, we have
\[ \underset{t\to \infty }{\lim }\frac{\mathrm{Var}[{Y_{\nu ,\underline{\lambda }}}(t)]}{{t^{2\nu }}}\gt \underset{t\to \infty }{\lim }\frac{\mathrm{Var}[{Z_{\nu ,\underline{\lambda }}}(t)]}{{t^{2\nu }}},\]
noting that
\[ \underset{t\to \infty }{\lim }\frac{\mathrm{Var}[{Y_{\nu ,\underline{\lambda }}}(t)]}{{t^{2\nu }}}=\frac{{\lambda _{1}^{2}}+{\lambda _{2}^{2}}}{\nu }\bigg(\frac{1}{\Gamma (2\nu )}-\frac{1}{\nu {\Gamma ^{2}}(\nu )}\bigg)\]
and
\[ \underset{t\to \infty }{\lim }\frac{\mathrm{Var}[{Z_{\nu ,\underline{\lambda }}}(t)]}{{t^{2\nu }}}={({\lambda _{1}}-{\lambda _{2}})^{2}}\bigg(\frac{2}{\Gamma (2\nu +1)}-\frac{1}{{\Gamma ^{2}}(1+\nu )}\bigg),\]
where
\[ \frac{2}{\Gamma (2\nu +1)}-\frac{1}{{\Gamma ^{2}}(1+\nu )}=\frac{2}{2\nu \Gamma (2\nu )}-\frac{1}{{\nu ^{2}}{\Gamma ^{2}}(\nu )}=\frac{1}{\nu }\bigg(\frac{1}{\Gamma (2\nu )}-\frac{1}{\nu {\Gamma ^{2}}(\nu )}\bigg)\]
and ${({\lambda _{1}}-{\lambda _{2}})^{2}}\lt {\lambda _{1}^{2}}+{\lambda _{2}^{2}}$.
Now we consider comparisons between rate functions for different values of $\nu \in (0,1)$. We mean the rate functions in Propositions 3.1 and 4.1, and we restrict our attention to a comparison around $x=0$. In view of what follows, we consider some slightly different notation: ${I_{\mathrm{LD},\nu }^{(1)}}$ in place of ${I_{\mathrm{LD}}^{(1)}}$ in Proposition 3.1, with ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$, and ${I_{\mathrm{LD},\nu }^{(2)}}$ in place of ${I_{\mathrm{LD}}^{(2)}}$ in Proposition 4.1.
Proposition 5.3.
Let $\nu ,\eta \in (0,1)$ be such that $\eta \lt \nu $. Then, for $k\in \{1,2\}$, we have ${I_{\mathrm{LD},\eta }^{(k)}}(0)={I_{\mathrm{LD},\nu }^{(k)}}(0)=0$ and, for some $\delta \gt 0$, ${I_{\mathrm{LD},\eta }^{(k)}}(x)\gt {I_{\mathrm{LD},\nu }^{(k)}}(x)\gt 0$ for $0\lt |x|\lt \delta $.
Proof.
We take an arbitrarily fixed $k\in \{1,2\}$. Then, for every $x\in \mathbb{R}$, there exists ${\theta _{x}^{(\nu ,k)}}\in \mathbb{R}$ such that
\[ {I_{\mathrm{LD},\nu }^{(k)}}(x)={\theta _{x}^{(\nu ,k)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(k)}}\big({\theta _{x}^{(\nu ,k)}}\big);\]
we recall that we are referring to the function ${\Psi _{\nu ,\underline{\lambda }}^{(1)}}$ in (4) (see also Remark 5.1) and to the function ${\Psi _{\nu ,\underline{\lambda }}^{(2)}}$ in (9). We can also say that ${\theta _{x}^{(\nu ,k)}}=0$ if and only if $x=0$ and, moreover, there exists $\delta \gt 0$ such that $0\le {\Psi _{\nu ,\underline{\lambda }}^{(k)}}({\theta _{x}^{(\nu ,k)}})\lt 1$ if $|x|\lt \delta $. Then, by taking into account the same formulas with η in place of ν, it is easy to check that
\[ 0\le {\Psi _{\eta ,\underline{\lambda }}^{(k)}}\big({\theta _{x}^{(\nu ,k)}}\big)\lt {\Psi _{\nu ,\underline{\lambda }}^{(k)}}\big({\theta _{x}^{(\nu ,k)}}\big)\lt 1\]
(see (4) and (9); moreover we take into account that $\frac{1}{\eta }\gt \frac{1}{\nu }$), whence we obtain
\[\begin{array}{c}\displaystyle {I_{\mathrm{LD},\nu }^{(k)}}(x)={\theta _{x}^{(\nu ,k)}}x-{\Psi _{\nu ,\underline{\lambda }}^{(k)}}\big({\theta _{x}^{(\nu ,k)}}\big)\\ {} \displaystyle \lt {\theta _{x}^{(\nu ,k)}}x-{\Psi _{\eta ,\underline{\lambda }}^{(k)}}\big({\theta _{x}^{(\nu ,k)}}\big)\le \underset{\theta \in \mathbb{R}}{\sup }\big\{\theta x-{\Psi _{\eta ,\underline{\lambda }}^{(k)}}(\theta )\big\}={I_{\mathrm{LD},\eta }^{(k)}}(x).\end{array}\]
This completes the proof. □
As a consequence of Proposition 5.3 we can present some estimates that allow us to compare the convergence to zero of different families of random variables for different values of $\nu \in (0,1)$. Roughly speaking, the smaller the ν, the faster the convergence of the random variables to zero. This could be explained by presenting a modified version of the computations presented just after the proof of Proposition 5.2; here we omit the details.
5.2 Some plots
We start with Figure 1 which shows the rate functions ${I_{\mathrm{LD}}^{(2)}}(x)$ and ${I_{\mathrm{LD}}^{(1)}}(x)$ when ${\nu _{1}}={\nu _{2}}=\nu $ for various sets of parameters $({\lambda _{1}},{\lambda _{2}},\nu )$. In each case the plots agree with the statements in Proposition 5.1, i.e. ${I_{\mathrm{LD}}^{(2)}}(x)\gt {I_{\mathrm{LD}}^{(1)}}(x)\gt 0$ for $x\ne 0$ and ${I_{\mathrm{LD}}^{(2)}}(0)={I_{\mathrm{LD}}^{(1)}}(0)=0$. In particular, if ${\lambda _{1}}={\lambda _{2}}$, the rate functions are symmetric (around zero) as announced in Remarks 3.3 and 4.3.
Fig. 1.
Top left: $({\lambda _{1}},{\lambda _{2}},\nu )=(1,3,.7)$. Top right: $({\lambda _{1}},{\lambda _{2}},\nu )=(5,1,.3)$. Bottom left: $({\lambda _{1}},{\lambda _{2}},\nu )=(2,2,.5)$. Bottom right: $({\lambda _{1}},{\lambda _{2}},\nu )=(.5,.5,.5)$
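Plots of this kind can be generated directly from (4) and (9), approximating the Legendre–Fenchel transforms on a grid of θ values; the following is a rough sketch (assuming NumPy and Matplotlib; the parameter values are those of the top left panel, and all names are ours):

```python
import numpy as np
import matplotlib.pyplot as plt

lam1, lam2, nu = 1.0, 3.0, 0.7   # parameters of the top left panel

def psi1(theta):
    # the function in (4) with nu1 = nu2 = nu: only one term is nonzero for theta != 0
    a = np.maximum(lam1 * (np.exp(theta) - 1.0), 0.0) ** (1.0 / nu)
    b = np.maximum(lam2 * (np.exp(-theta) - 1.0), 0.0) ** (1.0 / nu)
    return a + b

def psi2(theta):
    # the function in (9): positive part of lam1(e^theta - 1) + lam2(e^-theta - 1), power 1/nu
    arg = lam1 * (np.exp(theta) - 1.0) + lam2 * (np.exp(-theta) - 1.0)
    return np.maximum(arg, 0.0) ** (1.0 / nu)

thetas = np.linspace(-4.0, 4.0, 8001)
x = np.linspace(-3.0, 3.0, 601)

def rate(psi):
    # I(x) = sup_theta {theta * x - psi(theta)}, approximated on the theta grid
    return (np.outer(x, thetas) - psi(thetas)[None, :]).max(axis=1)

plt.plot(x, rate(psi1), label="I_LD^(1)")
plt.plot(x, rate(psi2), label="I_LD^(2)")
plt.legend()
plt.show()
```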
In Figure 2 we present the plots of ${I_{\mathrm{MD}}^{(1)}}(x)$ and ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)$ when ${\lambda _{1}}={\lambda _{2}}$. For each set of parameters, the plots agree with the statement of Proposition 5.2: ${I_{\mathrm{MD}}^{(1)}}(x)$ is smaller than ${I_{\mathrm{MD},\underline{\lambda }}^{(2)}}(x)$ when x is within a bounded interval that includes zero, and becomes larger outside of that interval.
Fig. 2.
Left: ${\lambda _{1}}={\lambda _{2}}=2$, $\nu =.5$. Right: ${\lambda _{1}}={\lambda _{2}}=.5$, $\nu =.3$
6 Concluding remarks
In this paper we prove noncentral moderate deviations for two fractional Skellam processes presented in the literature. The main tool used in the proofs is the Gärtner–Ellis theorem (see Theorem 1) which can be applied because the involved moment generating functions are available in a closed form given in terms of the Mittag-Leffler function.
For the classical (nonfractional) Skellam process we can obtain a classical moderate deviation result that fills the gap between the two following asymptotic regimes (a sketch is given after the list):
-
1. The convergence of $\frac{{S_{\underline{\lambda }}}(t)}{t}-({\lambda _{1}}-{\lambda _{2}})$ in probability to zero, which is governed by an LDP with speed t;
-
2. The weak convergence of $\sqrt{t}(\frac{{S_{\underline{\lambda }}}(t)}{t}-({\lambda _{1}}-{\lambda _{2}}))$ to a centered Normal distribution (with variance ${\lambda _{1}}+{\lambda _{2}}$).
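Here is a sketch of the statement that fills this gap (obtained by a direct application of Theorem 1; we omit the standard details): for every family of positive numbers $\{{a_{t}}:t\gt 0\}$ such that (1) holds, a second order expansion of the cumulant generating function of ${S_{\underline{\lambda }}}(t)$ gives
\[ \underset{t\to \infty }{\lim }{a_{t}}\log \mathbb{E}\Big[{e^{\frac{\theta }{{a_{t}}}\sqrt{{a_{t}}t}(\frac{{S_{\underline{\lambda }}}(t)}{t}-({\lambda _{1}}-{\lambda _{2}}))}}\Big]=\frac{({\lambda _{1}}+{\lambda _{2}}){\theta ^{2}}}{2}\hspace{1em}(\text{for all}\hspace{2.5pt}\theta \in \mathbb{R}),\]
and therefore $\{\sqrt{{a_{t}}t}(\frac{{S_{\underline{\lambda }}}(t)}{t}-({\lambda _{1}}-{\lambda _{2}})):t\gt 0\}$ satisfies the LDP with speed $1/{a_{t}}$ and good rate function $x\mapsto \frac{{x^{2}}}{2({\lambda _{1}}+{\lambda _{2}})}$.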
In general, the applications of the Gärtner–Ellis theorem for random time-changed processes are quite standard. However, in order to obtain noncentral moderate deviation results as the ones in this paper, it is important to have a random time-change with a slowing effect as happens here with the inverse of the stable subordinator; in this way we have normalized processes that tend to zero (as happens for $\frac{{N_{{\nu _{1}},{\lambda _{1}}}}(t)-{N_{{\nu _{2}},{\lambda _{2}}}}(t)}{t}$ and $\frac{{S_{\underline{\lambda }}}({L_{\nu }}(t))}{t}$ here). In future work one could consider more general models with (independent) random time-changes in terms of an inverse of a general subordinator (see, e.g., the recent reference [15]) with the same slowing effect.
The results in this work (and in [3]) concern one-dimensional cases. It could be nice to obtain sample-path versions of these results. One could try to combine the sample-path results for light-tailed Lévy processes in [9] (which can be applied to nonfractional Skellam processes) with possible sample-path results for the inverse of the stable subordinator; unfortunately the derivation of sample-path results for the inverse of the stable subordinator seems to be a difficult task.
We conclude with a discussion on the connection between the scaling exponents of the normalizing factors for the moderate deviations of the fractional Skellam processes, and the concept of long-range dependence (LRD). We recall (see, e.g., Definition 2.1 in [16]) that a nonstationary stochastic process $\{X(t):t\ge 0\}$ has the long-range dependence property if the correlation coefficient satisfies
\[ \frac{\mathrm{Cov}(X(t),X(s))}{\sqrt{\mathrm{Var}[X(t)]\mathrm{Var}[X(s)]}}\sim c(s){t^{-h}}\hspace{1em}(\text{as}\hspace{2.5pt}t\to \infty ),\]
for some $c(s)\gt 0$ and $h\in (0,1)$. Then we have the following statements on the scaling exponents ${\alpha _{1}}(\nu )$ in (7) and ${\alpha _{2}}(\nu )$ in (10).-
• For the fractional Skellam process of type 1 with ${\nu _{1}}={\nu _{2}}=\nu $ for some $\nu \in (0,1)$, we have the LRD for $\{{Y_{\nu ,\underline{\lambda }}}(t):t\ge 0\}$ with $h=1-{\alpha _{1}}(\nu )=\nu $ (this is a consequence of eqs. (19) and (20) and Remark 2 in [16], and some computations).
-
• For the fractional Skellam process of type 2 we have the LRD for $\{{Z_{\nu ,\underline{\lambda }}}(t):t\ge 0\}$ with\[ h=1-{\alpha _{2}}(\nu )=\left\{\begin{array}{l@{\hskip10.0pt}l}\nu /2,& \hspace{2.5pt}\text{if}\hspace{2.5pt}{\lambda _{1}}={\lambda _{2}},\\ {} \nu ,& \hspace{2.5pt}\text{if}\hspace{2.5pt}{\lambda _{1}}\ne {\lambda _{2}}\end{array}\right.\](this is a consequence of eqs. (2.11)–(2.16) in [17] with $k=1$, and some computations).