1 Introduction
Random walks in continuous time are widely employed in several fields of both theoretical and applied interest. In this paper we consider a class of continuous-time Markov chains on the integers, called the basic model, which can have transitions to adjacent states only, with alternating transition rates; namely, we assume that all odd states share the same pair of transition rates, and that all even states share another pair. We also consider some independent random time-changes of the basic model.
Markov chains with alternating rates are useful in the study of chain molecular diffusion. We recall the paper [31], where a molecule is modeled as a freely-jointed chain of two regularly alternating kinds of atoms with alternating jump rates. Another reference is [6], where a simple birth-death process with alternating rates was studied as a model for an infinitely long chain of atoms joined by links which are subject to random alternating shocks. Recent results on the transient probabilities of this model, also in the presence of suitable reflecting or absorbing states, are provided in [32, 33] and [34].
In this paper we also consider independent random time-changes of the basic model, which provide more flexible versions of the chemical models in the references cited above. More precisely, we consider the inverse of the stable subordinator or, alternatively, the (possibly tempered) stable subordinator. In the first case the particle is subject to a sort of trapping and delaying effect; in the second case, on the contrary, we allow positive jumps in the random time argument, which produces a possible rushing effect.
We start with a more rigorous presentation of the basic model in terms of its generator. In general we consider a continuous-time Markov chain $\{X(t):t\ge 0\}$ on $\mathbb{Z}$ (where $\mathbb{Z}$ is the set of integers), and we consider the state probabilities
(1)
\[ {p_{k,n}}(t):=P(X(t)=n|X(0)=k)\hspace{2.5pt}(\text{for}\hspace{2.5pt}k,n\in \mathbb{Z}\hspace{2.5pt}\text{and}\hspace{2.5pt}t\ge 0),\]
which satisfy the condition ${p_{k,n}}(0)={1_{\{k=n\}}}$; the generator $G={({g_{k,n}})_{k,n\in \mathbb{Z}}}$ of $\{X(t):t\ge 0\}$ is defined by
\[ {g_{k,n}}:={\dot{p}_{k,n}}(0)\hspace{2.5pt}(\text{for}\hspace{2.5pt}k,n\in \mathbb{Z}).\]
Then, for some ${\alpha _{1}},{\alpha _{2}},{\beta _{1}},{\beta _{2}}>0$, we assume that (see Figure 1)
\[ {g_{k,n}}:=\left\{\begin{array}{l@{\hskip10.0pt}l}{\alpha _{1}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}n=k+1\hspace{2.5pt}\text{and}\hspace{2.5pt}k\hspace{2.5pt}\text{is even}\\ {} {\beta _{1}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}n=k+1\hspace{2.5pt}\text{and}\hspace{2.5pt}k\hspace{2.5pt}\text{is odd}\\ {} {\alpha _{2}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}n=k-1\hspace{2.5pt}\text{and}\hspace{2.5pt}k\hspace{2.5pt}\text{is even}\\ {} {\beta _{2}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}n=k-1\hspace{2.5pt}\text{and}\hspace{2.5pt}k\hspace{2.5pt}\text{is odd}\\ {} 0& \hspace{2.5pt}\text{otherwise}\end{array}\right.\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\ne n);\]
therefore
\[ {g_{k,k}}=\left\{\begin{array}{l@{\hskip10.0pt}l}-({\alpha _{1}}+{\alpha _{2}})& \hspace{2.5pt}\text{if}\hspace{2.5pt}k\hspace{2.5pt}\text{is even}\\ {} -({\beta _{1}}+{\beta _{2}})& \hspace{2.5pt}\text{if}\hspace{2.5pt}k\hspace{2.5pt}\text{is odd.}\end{array}\right.\]
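Although the paper contains no code, the dynamics just defined are easy to simulate. The following Python sketch (ours, not from the paper; the function name simulate_path and the parameter values are illustrative) samples a trajectory by the standard Gillespie method, exploiting the fact that the transition rates only depend on the parity of the current state.

```python
import random

def simulate_path(k, t_max, a1, a2, b1, b2, rng=random.Random(0)):
    """Sample X on [0, t_max] with X(0) = k; return the terminal state X(t_max)."""
    state, t = k, 0.0
    while True:
        # transition rates depend only on the parity of the current state
        up, down = (a1, a2) if state % 2 == 0 else (b1, b2)
        t += rng.expovariate(up + down)          # exponential holding time
        if t > t_max:
            return state
        state += 1 if rng.random() < up / (up + down) else -1

# the empirical mean of X(t)/t for large t approximates Lambda'(0) (see Section 3)
samples = [simulate_path(0, 50.0, 1.0, 2.0, 3.0, 0.5) for _ in range(2000)]
print(sum(samples) / len(samples) / 50.0)   # about 2*(a1*b1 - a2*b2)/S = 0.615...
```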
We remark that this is a generalization of the model in [10]; in fact we recover that model by setting
\[ \left\{\begin{array}{l}{\alpha _{1}}=\lambda \eta +\mu (1-\eta )\\ {} {\beta _{1}}=\mu \eta +\lambda (1-\eta )\\ {} {\alpha _{2}}=\lambda \theta +\mu (1-\theta )\\ {} {\beta _{2}}=\mu \theta +\lambda (1-\theta )\end{array}\right.\]
for $\lambda ,\mu >0$ and $\eta ,\theta \in [0,1]$; moreover the case $(\theta ,\eta )=(1,1)$ was studied in [8], whereas the case $(\theta ,\eta )=(0,1)$ identifies the model investigated in [6] and [34]. In particular we extend the results in [10] by giving explicit expressions for the probability generating function, the mean and the variance of $X(t)$ (for each fixed $t>0$), and we study the asymptotic behavior (as $t\to \infty $) in the fashion of large deviations. Here we also give explicit expressions for the state probabilities.
Moreover we consider some random time-changes of the basic model $\{X(t):t\ge 0\}$, with independent processes. This is motivated by the great interest that the theory of random time-changes (and subordination) has been receiving since [5] (see also [30]). In particular this theory allows one to construct non-standard models which are useful for possible applications in different fields; indeed, in many circumstances, the process is more realistically assumed to evolve according to a random (so-called operational) time, instead of the usual deterministic one. For instance, in applications to finance, the particle jumps usually represent price changes separated by random waiting times between trades; then a time-changed version captures the role of information flow and activity time in modeling price changes (see e.g. [17]). Similarly, in applications to hydrology, the velocity irregularities caused by a heterogeneous porous medium can be described by heavy-tailed particle jumps, whereas suitable assumptions concerning the distribution of the waiting times allow one to model particle sticking or trapping (see e.g. [4]).
A wide class of random time-changes concerns subordinators, namely nondecreasing Lévy processes (see, for example, [29, 19, 22, 24] and [9]); recent works with different kinds of random time-changes are [11, 3] and [12]. The random time-changes of $\{X(t):t\ge 0\}$ studied in this paper are related to fractional differential equations and stable processes. More precisely we consider:
• the inverse of the stable subordinator $\{{T^{\nu }}(t):t\ge 0\}$, which yields the process $\{X({T^{\nu }}(t)):t\ge 0\}$;
• the (possibly tempered) stable subordinator $\{{\tilde{S}^{\nu ,\mu }}(t):t\ge 0\}$, which yields the process $\{X({\tilde{S}^{\nu ,\mu }}(t)):t\ge 0\}$.
In both cases, i.e. for both $\{X({T^{\nu }}(t)):t\ge 0\}$ and $\{X({\tilde{S}^{\nu ,\mu }}(t)):t\ge 0\}$, we provide expressions for the state probabilities in terms of the generalized Fox-Wright function. We recall [23] among the references with the inverse of the stable subordinator, and [15, 27] and [28] among the references with the tempered stable subordinator. Typically these two random time-changes are associated with some generalized derivative in the literature; namely the Caputo left fractional derivative (see, for example, (2.4.14) and (2.4.15) in [18]) in the first case, and the shifted fractional derivative (see (6) in [1]; see also (17) in [1] for the connections with the fractional Riemann-Liouville derivative) in the second case.
We also try to extend the large deviation results for $\{X(t):t\ge 0\}$ to the cases with a random time-change considered in this paper. It is useful to remark that all the large deviation principles in this paper are proved by applications of the Gärtner Ellis Theorem; moreover these large deviation principles yield the convergence (at least in probability) to the values at which the large deviation rate functions uniquely vanish. Thus, motivated by potential applications, when dealing with large deviation principles with the same speed function, we compare the rate functions to establish whether we have faster or slower convergence (when the rate functions are comparable). In conclusion, the evaluation of the rate function can be an important task, in particular when it is given in terms of a variational formula (as happens with the application of the Gärtner Ellis Theorem).
The applications of the Gärtner Ellis Theorem are based on suitable limits of moment generating functions. So, in view of the applications of this theorem, we study the probability generating functions of the random variables of the processes; in particular the formulas obtained for $\{X({T^{\nu }}(t)):t\ge 0\}$ have some analogies with many results in the literature for other time-fractional processes (for instance the probability generating functions are expressed in terms of the Mittag-Leffler function), with both continuous and discrete state space (see, for example, [22, 14, 2] and [16]). For $\{X({T^{\nu }}(t)):t\ge 0\}$ we can consider large deviations only (the difficulties in obtaining a moderate deviation result are briefly discussed); moreover we compute (and plot) different large deviation rate functions for various choices of $\nu \in (0,1)$ and we conclude that, the smaller ν is, the faster $\frac{{X^{\nu }}(t)}{t}$ converges to zero (as $t\to \infty $). For $\{X({\tilde{S}^{\nu ,\mu }}(t)):t\ge 0\}$ we can obtain large and moderate deviations for the tempered case $\mu >0$ only; in fact in this case we can apply the Gärtner Ellis Theorem because the involved random variables are light-tailed (namely their moment generating functions are finite in a neighborhood of the origin).
There are some references in the literature with applications of the Gärtner Ellis Theorem to time-changed processes. However there are very few cases where the random time-change is given by the inverse of the stable subordinator; see e.g. [13] and [35] where the time-changed processes are fractional Brownian motions (see also [20] and [25] for other asymptotic results for time-changed Gaussian processes with inverse stable subordinators). We are not aware of any other references where the time-changed process takes values on $\mathbb{Z}$.
We conclude with the outline of the paper. Section 2 is devoted to some preliminaries on large deviations. In Section 3 we present the results for the basic model, i.e. the (non-fractional) process $\{X(t):t\ge 0\}$. Then we present the results for the process $\{X(t):t\ge 0\}$ with random time-changes: the case with the inverse of the stable subordinator is studied in Section 4, and the case with the (possibly tempered) stable subordinator is studied in Section 5. Section 6 is devoted to some brief conclusions. We also present a final appendix (Section A) with the expressions of the state probabilities.
2 Preliminaries on large deviations
Some results in this paper concern the theory of large deviations; so, in this section, we recall some preliminaries (see e.g. [7], pages 4–5). A family of probability measures $\{{\pi _{t}}:t>0\}$ on a topological space $\mathcal{Y}$ satisfies the large deviation principle (LDP for short) with rate function I and speed function ${v_{t}}$ if: ${\lim \nolimits_{t\to +\infty }}{v_{t}}=+\infty $, $I:\mathcal{Y}\to [0,+\infty ]$ is lower semicontinuous,
\[ \underset{t\to +\infty }{\liminf }\frac{1}{{v_{t}}}\log {\pi _{t}}(O)\ge -\underset{y\in O}{\inf }I(y)\]
for all open sets O, and
\[ \underset{t\to +\infty }{\limsup }\frac{1}{{v_{t}}}\log {\pi _{t}}(C)\le -\underset{y\in C}{\inf }I(y)\]
for all closed sets C. A rate function is said to be good if all its level sets $\{\{y\in \mathcal{Y}:I(y)\le \eta \}:\eta \ge 0\}$ are compact.

We also present moderate deviation results. This terminology is used when, for each family of positive numbers $\{{a_{t}}:t>0\}$ such that ${a_{t}}\to 0$ and ${v_{t}}{a_{t}}\to \infty $, we have a family of laws of centered random variables (depending on ${a_{t}}$) which satisfies the LDP with speed function $1/{a_{t}}$, and these laws are governed by the same quadratic rate function, which uniquely vanishes at zero (for every choice of $\{{a_{t}}:t>0\}$). More precisely we have a rate function $J(y)=\frac{{y^{2}}}{2{\sigma ^{2}}}$, for some ${\sigma ^{2}}>0$. Typically moderate deviations fill the gap between a convergence to zero of centered random variables, and a convergence in distribution to a centered Normal distribution with variance ${\sigma ^{2}}$.
The main large deviation tool used in this paper is the Gärtner Ellis Theorem (see e.g. Theorem 2.3.6 in [7]).
3 Results for the basic model (non-fractional case)
In this section we present the results for the basic model. Some of them will be used for the models with random time-changes in the next sections. We start with some non-asymptotic results, where t is fixed, which concern probability generating functions, means and variances. In the second part we present the asymptotic results, namely large (and moderate) deviation results as $t\to \infty $.
In particular the probability generating functions $\{{F_{k}}(\cdot ,t):k\in \mathbb{Z},t\ge 0\}$ are important in both parts; they are defined by
\[ {F_{k}}(z,t):=\mathbb{E}\left[{z^{X(t)}}|X(0)=k\right]={\sum \limits_{n=-\infty }^{\infty }}{z^{n}}{p_{k,n}}(t)\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}),\]
where $\{{p_{k,n}}(t):k,n\in \mathbb{Z},t\ge 0\}$ are the state probabilities in (1). We also have to consider the function $\Lambda :\mathbb{R}\to \mathbb{R}$ defined by
(2)
\[ \Lambda (\gamma ):=\frac{h({e^{\gamma }})}{{e^{\gamma }}}-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2},\]
where
(3)
\[ \left.\begin{array}{l}h(z):=\frac{1}{2}\sqrt{\tilde{h}(z;\underline{\alpha },\underline{\beta })},\hspace{2.5pt}\text{where we mean}\hspace{2.5pt}\tilde{h}(z;\underline{\alpha },\underline{\beta })=\tilde{h}(z;{\alpha _{1}},{\alpha _{2}},{\beta _{1}},{\beta _{2}})\hspace{2.5pt}\text{and}\\ {} \tilde{h}(z;\underline{\alpha },\underline{\beta }):={({\alpha _{1}}+{\alpha _{2}}-({\beta _{1}}+{\beta _{2}}))^{2}}{z^{2}}+4({\beta _{1}}{z^{2}}+{\beta _{2}})({\alpha _{1}}{z^{2}}+{\alpha _{2}}).\end{array}\right.\]
Remark 3.1.
The non-asymptotic results presented below depend on $k=X(0)$, and we have different formulations when k is odd or even. In particular we can pass from one case to the other by exchanging $({\alpha _{1}},{\alpha _{2}})$ and $({\beta _{1}},{\beta _{2}})$. On the other hand, k plays no role in the asymptotic results; in fact $\tilde{h}(z;\underline{\alpha },\underline{\beta })=\tilde{h}(z;\underline{\beta },\underline{\alpha })$, and we have an analogous property for the function Λ, for its first derivative ${\Lambda ^{\prime }}$ and its second derivative ${\Lambda ^{\prime\prime }}$.
The function Λ is the analogue of the function Λ in equation (14) in [10], and it plays a crucial role in the proofs of the large (and moderate) deviation results. However, we also refer to this function for the non-asymptotic results in order to obtain simpler expressions; in particular we refer to the derivatives ${\Lambda ^{\prime }}(0)$ and ${\Lambda ^{\prime\prime }}(0)$, and therefore we present the following lemma.
Lemma 3.1.
Let Λ be the function in (2). Then we have
\[ {\Lambda ^{\prime }}(0)=\frac{2({\alpha _{1}}{\beta _{1}}-{\alpha _{2}}{\beta _{2}})}{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}\]
and
\[ {\Lambda ^{\prime\prime }}(0)=\frac{4({\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}})}{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}-\frac{8{({\alpha _{1}}{\beta _{1}}-{\alpha _{2}}{\beta _{2}})^{2}}}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{3}}}.\]
Moreover ${\Lambda ^{\prime\prime }}(0)>0$; in fact
\[\begin{aligned}{}{\Lambda ^{\prime\prime }}(0)& =\frac{4}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{3}}}\{({\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}})\\ {} & \times [{({\alpha _{1}}+{\alpha _{2}})^{2}}+{({\beta _{1}}+{\beta _{2}})^{2}}+2{\alpha _{1}}{\beta _{2}}+2{\alpha _{2}}{\beta _{1}}]+8{\alpha _{1}}{\alpha _{2}}{\beta _{1}}{\beta _{2}}\}.\end{aligned}\]
Proof.
The desired equalities can be checked with some cumbersome computations. Here we only say that it is useful to check the equalities in terms of the function h and its derivatives. In fact we have
\[ {\Lambda ^{\prime }}(\gamma )=\frac{{h^{\prime }}({e^{\gamma }}){e^{2\gamma }}-{e^{\gamma }}h({e^{\gamma }})}{{e^{2\gamma }}}={h^{\prime }}({e^{\gamma }})-{e^{-\gamma }}h({e^{\gamma }}),\]
which yields ${\Lambda ^{\prime }}(0)={h^{\prime }}(1)-h(1)$, and
\[ {\Lambda ^{\prime\prime }}(\gamma )={h^{\prime\prime }}({e^{\gamma }}){e^{\gamma }}-(-{e^{-\gamma }}h({e^{\gamma }})+{h^{\prime }}({e^{\gamma }}))={h^{\prime\prime }}({e^{\gamma }}){e^{\gamma }}+{e^{-\gamma }}h({e^{\gamma }})-{h^{\prime }}({e^{\gamma }}),\]
which yields ${\Lambda ^{\prime\prime }}(0)={h^{\prime\prime }}(1)+h(1)-{h^{\prime }}(1)={h^{\prime\prime }}(1)-{\Lambda ^{\prime }}(0)$. □
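As a quick sanity check of Lemma 3.1 (ours, not part of the paper), the two derivatives can be compared with the closed-form expressions for arbitrarily chosen positive rates, for instance with sympy:

```python
import sympy as sp

a1, a2, b1, b2 = 1.3, 0.7, 2.1, 0.4          # arbitrary positive rates
g = sp.symbols('gamma')
z = sp.exp(g)
# h and Lambda as in (3) and (2)
h = sp.sqrt((a1 + a2 - (b1 + b2))**2*z**2 + 4*(b1*z**2 + b2)*(a1*z**2 + a2))/2
Lam = h/z - (a1 + a2 + b1 + b2)/2
S = a1 + a2 + b1 + b2
print(float(sp.diff(Lam, g).subs(g, 0)),     # Lambda'(0), numerically
      2*(a1*b1 - a2*b2)/S)                   # closed form of Lemma 3.1
print(float(sp.diff(Lam, g, 2).subs(g, 0)),  # Lambda''(0), numerically
      4*(a1*b1 + a2*b2)/S - 8*(a1*b1 - a2*b2)**2/S**3)
```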
3.1 Non-asymptotic results
In this section we present explicit formulas for the probability generating functions (see Proposition 3.1), and for the means and variances (see Proposition 3.2). In both propositions we can check what we said in Remark 3.1 about the exchange of $({\alpha _{1}},{\alpha _{2}})$ and $({\beta _{1}},{\beta _{2}})$.
In view of this we present some preliminaries. It is known that the state probabilities solve the equations
\[ \left\{\begin{array}{l}{\dot{p}_{k,2n}}(t)={\beta _{1}}{p_{k,2n-1}}(t)-({\alpha _{1}}+{\alpha _{2}}){p_{k,2n}}(t)+{\beta _{2}}{p_{k,2n+1}}(t)\\ {} {\dot{p}_{k,2n+1}}(t)={\alpha _{1}}{p_{k,2n}}(t)-({\beta _{1}}+{\beta _{2}}){p_{k,2n+1}}(t)+{\alpha _{2}}{p_{k,2n+2}}(t)\\ {} {p_{k,n}}(0)={1_{\{k=n\}}}\end{array}\right.\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}).\]
So, if we consider the decomposition
(4)
\[ {F_{k}}(z,t)={G_{k}}(z,t)+{H_{k}}(z,t),\]
where ${G_{k}}$ and ${H_{k}}$ are the generating functions defined by
\[ {G_{k}}(z,t):={\sum \limits_{j=-\infty }^{\infty }}{z^{2j}}{p_{k,2j}}(t)\]
and
\[ {H_{k}}(z,t):={\sum \limits_{j=-\infty }^{\infty }}{z^{2j+1}}{p_{k,2j+1}}(t)={\sum \limits_{j=-\infty }^{\infty }}{z^{2j-1}}{p_{k,2j-1}}(t),\]
we have
(5)
\[ \left\{\begin{array}{l}\frac{\partial {G_{k}}(z,t)}{\partial t}=z{\beta _{1}}{H_{k}}(z,t)-({\alpha _{1}}+{\alpha _{2}}){G_{k}}(z,t)+\frac{{\beta _{2}}}{z}{H_{k}}(z,t)\\ {} \frac{\partial {H_{k}}(z,t)}{\partial t}=z{\alpha _{1}}{G_{k}}(z,t)-({\beta _{1}}+{\beta _{2}}){H_{k}}(z,t)+\frac{{\alpha _{2}}}{z}{G_{k}}(z,t)\\ {} {G_{k}}(z,0)={z^{k}}\cdot {1_{\{k\hspace{2.5pt}\mathrm{is}\hspace{2.5pt}\mathrm{even}\}}},\hspace{2.5pt}{H_{k}}(z,0)={z^{k}}\cdot {1_{\{k\hspace{2.5pt}\mathrm{is}\hspace{2.5pt}\mathrm{odd}\}}}\end{array}\right.\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}).\]
We remark that, if we consider the matrix
(6)
\[ A:=\left(\begin{array}{c@{\hskip10.0pt}c}-({\alpha _{1}}+{\alpha _{2}})& z{\beta _{1}}+\frac{{\beta _{2}}}{z}\\ {} z{\alpha _{1}}+\frac{{\alpha _{2}}}{z}& -({\beta _{1}}+{\beta _{2}})\end{array}\right),\]
the equations (5) can be rewritten as
\[ \left\{\begin{array}{l}\frac{\partial }{\partial t}\left(\begin{array}{c}{G_{k}}(z,t)\\ {} {H_{k}}(z,t)\end{array}\right)=A\left(\begin{array}{c}{G_{k}}(z,t)\\ {} {H_{k}}(z,t)\end{array}\right)\\ {} {G_{k}}(z,0)={z^{k}}\cdot {1_{\{k\hspace{2.5pt}\mathrm{is}\hspace{2.5pt}\mathrm{even}\}}},\hspace{2.5pt}{H_{k}}(z,0)={z^{k}}\cdot {1_{\{k\hspace{2.5pt}\mathrm{is}\hspace{2.5pt}\mathrm{odd}\}}}\end{array}\right.\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}).\]
Thus
(7)
\[ \left(\begin{array}{c}{G_{2k}}(z,t)\\ {} {H_{2k}}(z,t)\end{array}\right)={e^{At}}\left(\begin{array}{c}{z^{2k}}\\ {} 0\end{array}\right)\hspace{2.5pt}\text{and}\hspace{2.5pt}\left(\begin{array}{c}{G_{2k+1}}(z,t)\\ {} {H_{2k+1}}(z,t)\end{array}\right)={e^{At}}\left(\begin{array}{c}0\\ {} {z^{2k+1}}\end{array}\right).\]
We start with the probability generating functions.
Proposition 3.1.
For $z>0$ we have
\[ {F_{k}}(z,t)={z^{k}}{e^{-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}t}}\left(\cosh \left(\frac{th(z)}{z}\right)+\frac{{c_{k}}(z)}{h(z)}\sinh \left(\frac{th(z)}{z}\right)\right),\]
where
(8)
\[ {c_{k}}(z):=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))z}{2}+{\alpha _{1}}{z^{2}}+{\alpha _{2}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}k\hspace{2.5pt}\textit{is even}\\ {} \frac{({\alpha _{1}}+{\alpha _{2}}-({\beta _{1}}+{\beta _{2}}))z}{2}+{\beta _{1}}{z^{2}}+{\beta _{2}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}k\hspace{2.5pt}\textit{is odd}.\end{array}\right.\]
Proof.
The main part of the proof consists of the computation of the exponential matrix ${e^{At}}$, where A is the matrix in (6), and finally we easily conclude by taking into account (4) and (7).
The eigenvalues of A are
(9)
\[ {\hat{h}_{\pm }}(z):=-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}\pm \frac{h(z)}{z}\]
(where h is defined by (3)), and it is known that we can find a matrix S such that
\[ S\left(\begin{array}{c@{\hskip10.0pt}c}-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}-\frac{h(z)}{z}& 0\\ {} 0& -\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}+\frac{h(z)}{z}\end{array}\right){S^{-1}}=A;\]
in particular we can consider the matrix
\[ S:=\left(\begin{array}{c@{\hskip10.0pt}c}\frac{{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})}{2}-\frac{h(z)}{z}& \frac{{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})}{2}+\frac{h(z)}{z}\\ {} z{\alpha _{1}}+\frac{{\alpha _{2}}}{z}& z{\alpha _{1}}+\frac{{\alpha _{2}}}{z}\end{array}\right)\]
and its inverse is
\[ {S^{-1}}=-\frac{z}{2h(z)}\left(\begin{array}{c@{\hskip10.0pt}c}1& \frac{-z[{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})]-2h(z)}{2({\alpha _{1}}{z^{2}}+{\alpha _{2}})}\\ {} -1& \frac{z[{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})]-2h(z)}{2({\alpha _{1}}{z^{2}}+{\alpha _{2}})}\end{array}\right).\]
Then the desired exponential matrix is
\[\begin{aligned}{}{e^{At}}& =S\left(\begin{array}{c@{\hskip10.0pt}c}{e^{(-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}-\frac{h(z)}{z})t}}& 0\\ {} 0& {e^{(-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}+\frac{h(z)}{z})t}}\end{array}\right){S^{-1}}\\ {} & =-\frac{z}{2h(z)}S\left(\begin{array}{c@{\hskip10.0pt}c}{e^{{\hat{h}_{-}}(z)t}}& {e^{{\hat{h}_{-}}(z)t}}\cdot \frac{-z[{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})]-2h(z)}{2({\alpha _{1}}{z^{2}}+{\alpha _{2}})}\\ {} -{e^{{\hat{h}_{+}}(z)t}}& {e^{{\hat{h}_{+}}(z)t}}\cdot \frac{z[{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})]-2h(z)}{2({\alpha _{1}}{z^{2}}+{\alpha _{2}})}\end{array}\right);\end{aligned}\]
moreover, after some computations, we have
\[ {e^{At}}=\left(\begin{array}{c@{\hskip10.0pt}c}{u_{11}}(z,t)& {u_{12}}(z,t)\\ {} {u_{21}}(z,t)& {u_{22}}(z,t)\end{array}\right),\]
where
\[\begin{aligned}{}{u_{11}}(z,t)& =\frac{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))z}{2h(z)}\cdot \frac{{e^{{\hat{h}_{+}}(z)t}}-{e^{{\hat{h}_{-}}(z)t}}}{2}+\frac{{e^{{\hat{h}_{-}}(z)t}}+{e^{{\hat{h}_{+}}(z)t}}}{2},\\ {} {u_{21}}(z,t)& =\frac{{\alpha _{1}}{z^{2}}+{\alpha _{2}}}{h(z)}\cdot \frac{{e^{{\hat{h}_{+}}(z)t}}-{e^{{\hat{h}_{-}}(z)t}}}{2},\\ {} {u_{12}}(z,t)& =-\frac{z}{2h(z)}\cdot \frac{z}{{\alpha _{1}}{z^{2}}+{\alpha _{2}}}\left({e^{{\hat{h}_{-}}(z)t}}-{e^{{\hat{h}_{+}}(z)t}}\right)\\ {} & \times \left(\frac{{h^{2}}(z)}{{z^{2}}}-\frac{{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))^{2}}}{4}\right)\\ {} & =-\frac{1}{h(z)({\alpha _{1}}{z^{2}}+{\alpha _{2}})}\\ {} & \times \left({h^{2}}(z)-\frac{{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))^{2}}{z^{2}}}{4}\right)\frac{{e^{{\hat{h}_{-}}(z)t}}-{e^{{\hat{h}_{+}}(z)t}}}{2}\\ {} & =\frac{{\beta _{1}}{z^{2}}+{\beta _{2}}}{h(z)}\cdot \frac{{e^{{\hat{h}_{+}}(z)t}}-{e^{{\hat{h}_{-}}(z)t}}}{2}\end{aligned}\]
and
\[\begin{aligned}{}{u_{22}}(z,t)& =\frac{{e^{{\hat{h}_{-}}(z)t}}}{2}\left(\frac{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))z}{2h(z)}+1\right)\\ {} & +\frac{{e^{{\hat{h}_{+}}(z)t}}}{2}\left(-\frac{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))z}{2h(z)}+1\right)\\ {} & =\frac{({\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}}))z}{2h(z)}\cdot \frac{{e^{{\hat{h}_{-}}(z)t}}-{e^{{\hat{h}_{+}}(z)t}}}{2}+\frac{{e^{{\hat{h}_{-}}(z)t}}+{e^{{\hat{h}_{+}}(z)t}}}{2}.\end{aligned}\]
We complete the proof noting that, by (4) and (7), we have
\[ {F_{2k}}(z,t)=({u_{11}}(z,t)+{u_{21}}(z,t)){z^{2k}}\]
and
\[ {F_{2k+1}}(z,t)=({u_{12}}(z,t)+{u_{22}}(z,t)){z^{2k+1}};\]
in fact these equalities yield
\[\begin{aligned}{}{F_{2k}}(z,t)& ={z^{2k}}\left(\frac{{e^{{\hat{h}_{-}}(z)t}}+{e^{{\hat{h}_{+}}(z)t}}}{2}\right.\\ {} & \left.+\frac{1}{h(z)}\left(\frac{{\beta _{1}}+{\beta _{2}}-({\alpha _{1}}+{\alpha _{2}})}{2}z+{\alpha _{1}}{z^{2}}+{\alpha _{2}}\right)\frac{{e^{{\hat{h}_{+}}(z)t}}-{e^{{\hat{h}_{-}}(z)t}}}{2}\right)\\ {} & ={z^{2k}}{e^{-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}t}}\left(\cosh \left(\frac{th(z)}{z}\right)+\frac{{c_{2k}}(z)}{h(z)}\sinh \left(\frac{th(z)}{z}\right)\right)\end{aligned}\]
and
\[\begin{aligned}{}{F_{2k+1}}(z,t)& ={z^{2k+1}}\left(\frac{{e^{{\hat{h}_{-}}(z)t}}+{e^{{\hat{h}_{+}}(z)t}}}{2}\right.\\ {} & \left.+\frac{1}{h(z)}\left(\frac{{\alpha _{1}}+{\alpha _{2}}-({\beta _{1}}+{\beta _{2}})}{2}z+{\beta _{1}}{z^{2}}+{\beta _{2}}\right)\frac{{e^{{\hat{h}_{+}}(z)t}}-{e^{{\hat{h}_{-}}(z)t}}}{2}\right)\\ {} & ={z^{2k+1}}{e^{-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}t}}\left(\cosh \left(\frac{th(z)}{z}\right)+\frac{{c_{2k+1}}(z)}{h(z)}\sinh \left(\frac{th(z)}{z}\right)\right).\end{aligned}\]
□
In the next proposition we give mean and variance; in particular we refer to ${\Lambda ^{\prime }}(0)$ and ${\Lambda ^{\prime\prime }}(0)$ given in Lemma 3.1.
Proposition 3.2.
We have
\[ \mathbb{E}[X(t)|X(0)=k]=k+{\Lambda ^{\prime }}(0)t+\xi \frac{1-{e^{-({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})t}}}{2},\]
where
\[ \xi :={\left.{\left(\frac{{c_{k}}(z)}{h(z)}\right)^{\prime }}\right|_{z=1}}=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{2({\alpha _{1}}+{\alpha _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{2}}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}k\hspace{2.5pt}\textit{is even}\\ {} \frac{2({\beta _{1}}+{\beta _{2}})({\beta _{1}}-{\beta _{2}}-{\alpha _{1}}+{\alpha _{2}})}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{2}}}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}k\hspace{2.5pt}\textit{is odd}.\end{array}\right.\]
Moreover, if k is even, we have
\[\begin{aligned}{}\mathrm{Var}[X(t)|X(0)=k]& ={\Lambda ^{\prime\prime }}(0)t+({\rho _{11}}t+{\rho _{10}}){e^{-({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})t}}\\ {} & +{\rho _{2}}{e^{-2({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})t}}+{\rho _{0}},\end{aligned}\]
where:
\[ {\rho _{11}}:=\frac{8({\alpha _{1}}+{\alpha _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})({\alpha _{1}}{\beta _{1}}-{\alpha _{2}}{\beta _{2}})}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{3}}},\]
\[\begin{aligned}{}{\rho _{10}}& :=\frac{1}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{3}}}\\ {} & \times \left\{({\alpha _{1}}+{\alpha _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})({\alpha _{1}}+{\alpha _{2}}-{\beta _{1}}-{\beta _{2}})\right.\\ {} & -6({\alpha _{2}}-{\beta _{1}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})({\alpha _{1}}+{\alpha _{2}}-{\beta _{1}}-{\beta _{2}})\\ {} & -2(7{\alpha _{2}}+{\beta _{1}}-2{\beta _{2}})({\beta _{1}}+{\beta _{2}})({\alpha _{1}}+{\alpha _{2}}-{\beta _{1}}-{\beta _{2}})\\ {} & -4{({\alpha _{2}}-{\beta _{2}})^{2}}({\alpha _{1}}+{\alpha _{2}}-{\beta _{1}}-{\beta _{2}})\\ {} & +8{\alpha _{2}}({\beta _{1}}+{\beta _{2}})({\alpha _{1}}+{\alpha _{2}})-8{\alpha _{2}}({\beta _{1}}+{\beta _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})\\ {} & \left.+8{\beta _{1}}{({\beta _{1}}+{\beta _{2}})^{2}}-\frac{16{({\alpha _{2}}+{\beta _{1}})^{2}}{({\beta _{1}}+{\beta _{2}})^{2}}}{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}\right\},\end{aligned}\]
\[ {\rho _{2}}:=-\frac{{({\alpha _{1}}+{\alpha _{2}})^{2}}{({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})^{2}}}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{4}}}\]
and
\[\begin{aligned}{}{\rho _{0}}& :=\frac{1}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{3}}}\\ {} & \left\{(-7{\alpha _{1}}+3{\alpha _{2}}+10{\beta _{1}}-4{\beta _{2}})({\beta _{1}}+{\beta _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})\right.\\ {} & +4({\alpha _{2}}+{\alpha _{1}})({\alpha _{2}}+2{\beta _{2}})({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})\\ {} & +4{({\alpha _{2}}-{\beta _{2}})^{2}}({\alpha _{1}}-{\alpha _{2}}-{\beta _{1}}+{\beta _{2}})\\ {} & +4({\alpha _{2}}-{\beta _{2}})({\alpha _{2}}+{\beta _{1}})({\beta _{1}}+{\beta _{2}})-10({\alpha _{2}}+{\beta _{1}}){({\beta _{1}}+{\beta _{2}})^{2}}\\ {} & \left.+8({\alpha _{2}}+{\beta _{1}}){({\alpha _{2}}-{\beta _{2}})^{2}}\right\}+\frac{20{({\alpha _{2}}+{\beta _{1}})^{2}}{({\beta _{1}}+{\beta _{2}})^{2}}}{{({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})^{4}}}.\end{aligned}\]
Finally, if k is odd, $\mathrm{Var}[X(t)|X(0)=k]$ can be obtained by exchanging $({\alpha _{1}},{\alpha _{2}})$ and $({\beta _{1}},{\beta _{2}})$ in the above expression (we recall that, as pointed out in Remark 3.1, ${\Lambda ^{\prime\prime }}(0)$ does not change).
Proof.
The desired expressions of mean and variance can be obtained with suitable (well-known) formulas in terms of ${\left.\frac{d{F_{k}}(z,t)}{dz}\right|_{z=1}}$ and ${\left.\frac{{d^{2}}{F_{k}}(z,t)}{d{z^{2}}}\right|_{z=1}}$; these two values can be computed by considering the explicit formulas of ${F_{k}}(z,t)$ in Proposition 3.1. The computations are cumbersome and we omit the details. □
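The formulas of Propositions 3.1 and 3.2 can also be checked numerically (an illustrative sketch, ours, with arbitrary rates and truncation level N): one truncates the generator to the states $-N,\dots ,N$, computes the state probabilities via the matrix exponential, and compares with the closed forms.

```python
import numpy as np
from scipy.linalg import expm

a1, a2, b1, b2 = 1.3, 0.7, 2.1, 0.4           # arbitrary positive rates
t, z, k, N = 1.5, 0.9, 0, 60                  # N = truncation level, k even here
states = np.arange(-N, N + 1)
G = np.zeros((2*N + 1, 2*N + 1))
for i, s in enumerate(states):
    up, down = (a1, a2) if s % 2 == 0 else (b1, b2)
    if i + 1 <= 2*N: G[i, i + 1] = up
    if i - 1 >= 0:   G[i, i - 1] = down
    G[i, i] = -(up + down)
p = expm(G*t)[N + k]                           # approximate state probabilities p_{k,n}(t)

S = a1 + a2 + b1 + b2
h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*z**2 + 4*(b1*z**2 + b2)*(a1*z**2 + a2))
c = (b1 + b2 - (a1 + a2))*z/2 + a1*z**2 + a2   # c_k(z) in (8), k even
F = z**k*np.exp(-S*t/2)*(np.cosh(t*h/z) + c/h*np.sinh(t*h/z))
print(np.sum(z**states*p), F)                  # pgf: numerical vs Proposition 3.1

L1 = 2*(a1*b1 - a2*b2)/S                       # Lambda'(0), Lemma 3.1
xi = 2*(a1 + a2)*(a1 - a2 - b1 + b2)/S**2      # xi in Proposition 3.2, k even
print(np.sum(states*p), k + L1*t + xi*(1 - np.exp(-S*t))/2)   # mean check
```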
3.2 Asymptotic results
In this section we present Propositions 3.3 and 3.4, which are the generalization of Propositions 3.1 and 3.2 in [10]. In both cases we apply the Gärtner Ellis Theorem, and we use the probability generating function in Proposition 3.1. Actually the proof of Proposition 3.4 here is slightly different from the proof of Proposition 3.2 in [10].
We also give some brief comments on the interest of these results (whatever the choice of $k\in \mathbb{Z}$). Proposition 3.3 allows us to say that $\frac{X(t)}{t}$ converges in probability to ${\Lambda ^{\prime }}(0)$ (as $t\to \infty $); moreover, for every measurable set A such that ${\Lambda ^{\prime }}(0)\notin \bar{A}$, roughly speaking $P\Big(\frac{X(t)}{t}\in A\Big|X(0)=k\Big)$ decays exponentially fast with a rate given by ${\inf _{y\in A}}{\Lambda ^{\ast }}(y)$, where ${\Lambda ^{\ast }}$ is the large deviation rate function. On the other hand Proposition 3.4 provides a class of LDPs that fill the gap between the convergence of $\frac{X(t)}{t}$ to ${\Lambda ^{\prime }}(0)$ cited above, and the weak convergence of $\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{\sqrt{t}}$ to the centered Normal distribution with variance ${\Lambda ^{\prime\prime }}(0)$.
Proposition 3.3.
For all $k\in \mathbb{Z}$, $\Big\{P\Big(\frac{X(t)}{t}\in \cdot \Big|X(0)=k\Big):t>0\Big\}$ satisfies the LDP with speed function ${v_{t}}=t$ and good rate function ${\Lambda ^{\ast }}(y):={\sup _{\gamma \in \mathbb{R}}}\{\gamma y-\Lambda (\gamma )\}$.
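The rate function ${\Lambda ^{\ast }}$ is given by a variational formula, but it can be evaluated numerically as a one-dimensional Legendre transform; a minimal sketch (ours, with illustrative parameter values) follows. As expected, the printed rate vanishes (up to numerical tolerance) at $y={\Lambda ^{\prime }}(0)$, the point of the convergence in probability.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a1, a2, b1, b2 = 1.3, 0.7, 2.1, 0.4           # arbitrary positive rates
S = a1 + a2 + b1 + b2

def Lam(g):                                   # Lambda(gamma) as in (2)-(3)
    ez = np.exp(g)
    h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*ez**2
                    + 4*(b1*ez**2 + b2)*(a1*ez**2 + a2))
    return h/ez - S/2

def rate(y):                                  # Lambda*(y) = sup_g {g*y - Lambda(g)}
    res = minimize_scalar(lambda g: Lam(g) - g*y, bounds=(-20, 20), method='bounded')
    return -res.fun

for y in [0.0, 0.3, 2*(a1*b1 - a2*b2)/S]:     # last point is Lambda'(0)
    print(y, rate(y))
```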
Proposition 3.4.
Let $\{{a_{t}}:t>0\}$ be such that ${a_{t}}\to 0$ and $t{a_{t}}\to +\infty $ (as $t\to +\infty $). Then, for all $k\in \mathbb{Z}$, $\Big\{P\Big(\sqrt{t{a_{t}}}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\in \cdot \Big|X(0)=k\Big):t>0\Big\}$ satisfies the LDP with speed function ${v_{t}}=\frac{1}{{a_{t}}}$ and good rate function $J(y):=\frac{{y^{2}}}{2{\Lambda ^{\prime\prime }}(0)}$.
Proof.
We apply the Gärtner Ellis Theorem. More precisely we show that
(10)
\[ \underset{t\to \infty }{\lim }{a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}{X_{k}}(t;{a_{t}})\right)\Big|X(0)=k\right]=\frac{{\gamma ^{2}}}{2}{\Lambda ^{\prime\prime }}(0)\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R}),\]
where
\[ {X_{k}}(t;{a_{t}}):=\sqrt{t{a_{t}}}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t};\]
in fact we can easily check that $J(y)={\sup _{\gamma \in \mathbb{R}}}\Big\{\gamma y-\frac{{\gamma ^{2}}}{2}{\Lambda ^{\prime\prime }}(0)\Big\}$ (for all $y\in \mathbb{R}$).

We remark that
\[\begin{aligned}{}& {a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}\sqrt{t{a_{t}}}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]\\ {} & ={a_{t}}\left(\log \mathbb{E}\left[\exp \left(\frac{\gamma }{\sqrt{t{a_{t}}}}X(t)\right)\Big|X(0)=k\right]-\frac{\gamma }{\sqrt{t{a_{t}}}}\mathbb{E}[X(t)|X(0)=k]\right).\end{aligned}\]
As far as the right hand side is concerned, we take into account Proposition 3.1 for the moment generating function and Proposition 3.2 for the mean; then we get
\[\begin{aligned}{}& \underset{t\to \infty }{\lim }{a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}\sqrt{t{a_{t}}}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]\\ {} & =\underset{t\to \infty }{\lim }{a_{t}}\left(k\frac{\gamma }{\sqrt{t{a_{t}}}}-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}t+t\frac{h({e^{\gamma /\sqrt{t{a_{t}}}}})}{{e^{\gamma /\sqrt{t{a_{t}}}}}}-\frac{\gamma }{\sqrt{t{a_{t}}}}(k+{\Lambda ^{\prime }}(0)t)\right)\end{aligned}\]
and, by (2), we obtain
\[\begin{aligned}{}& \underset{t\to \infty }{\lim }{a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}\sqrt{t{a_{t}}}\frac{X(t)-\mathbb{E}[X(t)|X(0)=k]}{t}\right)\Big|X(0)=k\right]\\ {} & =\underset{t\to \infty }{\lim }t{a_{t}}\left(\Lambda \left(\frac{\gamma }{\sqrt{t{a_{t}}}}\right)-\frac{\gamma }{\sqrt{t{a_{t}}}}{\Lambda ^{\prime }}(0)\right).\end{aligned}\]
Finally, if we consider the second order Taylor formula for the function Λ, we have
\[ \underset{t\to \infty }{\lim }t{a_{t}}\left(\Lambda \left(\frac{\gamma }{\sqrt{t{a_{t}}}}\right)-\frac{\gamma }{\sqrt{t{a_{t}}}}{\Lambda ^{\prime }}(0)\right)=\underset{t\to \infty }{\lim }t{a_{t}}\left(\frac{{\gamma ^{2}}}{2t{a_{t}}}{\Lambda ^{\prime\prime }}(0)+o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)\right)\]
for a remainder $o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)$ such that $o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)/\frac{{\gamma ^{2}}}{t{a_{t}}}\to 0$, and (10) is checked. □
Remark 3.2.
The expressions of mean and variance in Proposition 3.2 yield the following limits:
\[ \underset{t\to \infty }{\lim }\frac{\mathbb{E}[X(t)|X(0)=k]}{t}={\Lambda ^{\prime }}(0);\hspace{2.5pt}\underset{t\to \infty }{\lim }\frac{\mathrm{Var}[X(t)|X(0)=k]}{t}={\Lambda ^{\prime\prime }}(0).\]
These limits give a generalization of the analogous limits in [10].
4 Results with the inverse of the stable subordinator
In this section we consider the process $\{{X^{\nu }}(t):t\ge 0\}$, for $\nu \in (0,1)$, i.e.
(11)
\[ {X^{\nu }}(t):={X^{1}}({T^{\nu }}(t))\hspace{2.5pt}(\text{for all}\hspace{2.5pt}t\ge 0),\]
where $\{{T^{\nu }}(t):t\ge 0\}$ is the inverse of the stable subordinator, independent of a version of the non-fractional process $\{{X^{1}}(t):t\ge 0\}$ studied above. This random time-change is of interest when we study chain molecular diffusion and, for some reason (for instance, some environmental conditions), we need a modification of the basic model with a sort of trapping and delaying effect.
So, in view of what follows, we recall some preliminaries. We start with the definition of the Mittag-Leffler function (see e.g. [26], page 17):
\[ {E_{\nu }}(x):=\sum \limits_{j\ge 0}\frac{{x^{j}}}{\Gamma (\nu j+1)}\hspace{2.5pt}(\text{for all}\hspace{2.5pt}x\in \mathbb{R}).\]
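For the numerical illustrations in this paper's setting, ${E_{\nu }}$ can be evaluated by simply truncating this series, which is adequate for moderate $|x|$ (dedicated algorithms exist for large arguments); a minimal sketch (ours):

```python
from math import gamma

def mittag_leffler(x, nu, terms=100):
    # truncated series; adequate for moderate |x|
    return sum(x**j/gamma(nu*j + 1) for j in range(terms))

print(mittag_leffler(1.0, 1.0))    # E_1(1) = e = 2.71828...
print(mittag_leffler(-2.0, 0.5))   # E_{1/2}(-2) = exp(4)*erfc(2) = 0.2554...
```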
Then we have
(12)
\[ \mathbb{E}[{e^{\gamma {T^{\nu }}(t)}}]={E_{\nu }}(\gamma {t^{\nu }})\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R}).\]
In some references this formula is stated assuming that $\gamma \le 0$, but this restriction is not needed because we can refer to the analytic continuation of the Laplace transform with complex argument. We also recall that formula (24) in [21] provides a version of (12) for $t=1$ (in that formula there is $-s$ in place of $\gamma $, and $s\in \mathbb{C}$).
4.1 Probability generating function
Now we prove Proposition 4.1, which provides an expression for the probability generating functions $\{{F_{k}^{\nu }}(\cdot ,t):k\in \mathbb{Z},t\ge 0\}$ defined by
\[ {F_{k}^{\nu }}(z,t):=\mathbb{E}\left[{z^{{X^{\nu }}(t)}}|{X^{\nu }}(0)=k\right]={\sum \limits_{n=-\infty }^{\infty }}{z^{n}}{p_{k,n}^{\nu }}(t)\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}),\]
where $\{{p_{k,n}^{\nu }}(t):k,n\in \mathbb{Z},t\ge 0\}$ are the state probabilities defined by
(13)
\[ {p_{k,n}^{\nu }}(t):=P({X^{\nu }}(t)=n|{X^{\nu }}(0)=k).\]
Obviously Proposition 4.1 is the analogue of Proposition 3.1 (and we can recover it by setting $\nu =1$).
Proposition 4.1.
For $z>0$ we have
\[\begin{aligned}{}{F_{k}^{\nu }}(z,t)& ={z^{k}}\left(\frac{{E_{\nu }}({\hat{h}_{-}}(z){t^{\nu }})+{E_{\nu }}({\hat{h}_{+}}(z){t^{\nu }})}{2}\right.\\ {} & \left.+\frac{{c_{k}}(z)}{h(z)}\cdot \frac{{E_{\nu }}({\hat{h}_{+}}(z){t^{\nu }})-{E_{\nu }}({\hat{h}_{-}}(z){t^{\nu }})}{2}\right),\end{aligned}\]
where ${c_{k}}(z)$ is as in (8) and ${\hat{h}_{\pm }}(z)$ are the eigenvalues in (9).
Proof.
We recall that ${T^{\nu }}(0)=0$. Then, if we refer to the expression of the probability generating functions $\{{F_{k}}(\cdot ,t):k\in \mathbb{Z},t\ge 0\}$ in Proposition 3.1, we have
\[\begin{aligned}{}{F_{k}^{\nu }}(z,t)& =\mathbb{E}\left[{z^{{X^{1}}({T^{\nu }}(t))}}|{X^{1}}(0)=k\right]\\ {} & =\mathbb{E}\left[{F_{k}}(z,{T^{\nu }}(t))|{X^{1}}(0)=k\right]\\ {} & =\mathbb{E}\left[{z^{k}}{e^{-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}{T^{\nu }}(t)}}\left(\cosh \left(\frac{{T^{\nu }}(t)h(z)}{z}\right)\right.\right.\\ {} & \left.\left.+\frac{{c_{k}}(z)}{h(z)}\sinh \left(\frac{{T^{\nu }}(t)h(z)}{z}\right)\right)\Big|{X^{1}}(0)=k\right].\end{aligned}\]
Then, by taking into account the moment generating function in (12), after some manipulations we get
\[\begin{aligned}{}{F_{k}^{\nu }}(z,t)& ={z^{k}}\left(\left(1+\frac{{c_{k}}(z)}{h(z)}\right)\frac{\mathbb{E}[{e^{{\hat{h}_{+}}(z){T^{\nu }}(t)}}]}{2}+\left(1-\frac{{c_{k}}(z)}{h(z)}\right)\frac{\mathbb{E}[{e^{{\hat{h}_{-}}(z){T^{\nu }}(t)}}]}{2}\right)\\ {} & ={z^{k}}\left(\left(1+\frac{{c_{k}}(z)}{h(z)}\right)\frac{{E_{\nu }}({\hat{h}_{+}}(z){t^{\nu }})}{2}+\left(1-\frac{{c_{k}}(z)}{h(z)}\right)\frac{{E_{\nu }}({\hat{h}_{-}}(z){t^{\nu }})}{2}\right).\end{aligned}\]
So we can immediately check that this coincides with the expression in the statement of the proposition. □
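Proposition 4.1 can be checked by Monte Carlo (an illustrative sketch, ours): since ${T^{\nu }}(t)$ is distributed as ${(t/S)^{\nu }}$ for a standard positive ν-stable random variable S, which can be sampled with the Chambers-Mallows-Stuck representation, we can average the conditional probability generating function ${F_{k}}(z,{T^{\nu }}(t))$ of Proposition 3.1 and compare with the Mittag-Leffler expression.

```python
import numpy as np
from math import gamma
rng = np.random.default_rng(1)

def stable_positive(nu, size):
    # Chambers-Mallows-Stuck: S with E[exp(-s*S)] = exp(-s**nu)
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    return (np.sin(nu*U)/np.sin(U)**(1/nu))*(np.sin((1 - nu)*U)/W)**((1 - nu)/nu)

a1, a2, b1, b2 = 1.3, 0.7, 2.1, 0.4            # arbitrary positive rates
nu, t, z, k = 0.6, 1.0, 0.9, 0
S = a1 + a2 + b1 + b2
h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*z**2 + 4*(b1*z**2 + b2)*(a1*z**2 + a2))
c = (b1 + b2 - (a1 + a2))*z/2 + a1*z**2 + a2   # c_k(z) in (8), k even
hp, hm = -S/2 + h/z, -S/2 - h/z                # eigenvalues in (9)

T = (t/stable_positive(nu, 200_000))**nu       # samples of T^nu(t)
F_mc = np.mean(z**k*((1 + c/h)*np.exp(hp*T) + (1 - c/h)*np.exp(hm*T))/2)

def ml(x, terms=150):                          # truncated Mittag-Leffler series
    return sum(x**j/gamma(nu*j + 1) for j in range(terms))

F_cf = z**k*((1 + c/h)*ml(hp*t**nu) + (1 - c/h)*ml(hm*t**nu))/2
print(F_mc, F_cf)                              # should agree up to Monte Carlo error
```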
4.2 Asymptotic results
In this section we present Proposition 4.2, which is the analogue of Proposition 3.3. Unfortunately we cannot present a moderate deviation result, namely the analogue of Proposition 3.4; see the discussion in Remark 4.1.
Finally, in Remark 4.2, we compare the convergence of the processes for different values of $\nu \in (0,1)$. In fact, if we consider the framework of Proposition 4.2 below, the rate function ${\Lambda _{\nu }^{\ast }}(y)$ uniquely vanishes at $y=0$, and therefore $\frac{{X^{\nu }}(t)}{t}$ converges to 0 as $t\to \infty $ (we recall that, for $\nu =1$, $\frac{{X^{\nu }}(t)}{t}$ converges to ${\Lambda ^{\prime }}(0)$ as $t\to \infty $); moreover, the larger ${\Lambda _{\nu }^{\ast }}(y)$ is around $y=0$, the faster the convergence of $\frac{{X^{\nu }}(t)}{t}$. In particular in Remark 4.2 we take $0<{\nu _{1}}<{\nu _{2}}<1$, and we get strict inequalities between ${\Lambda _{{\nu _{1}}}^{\ast }}(y)$ and ${\Lambda _{{\nu _{2}}}^{\ast }}(y)$ in a sufficiently small neighborhood of the origin $y=0$ (except at the origin itself, where ${\Lambda _{{\nu _{1}}}^{\ast }}(0)={\Lambda _{{\nu _{2}}}^{\ast }}(0)=0$).
Proposition 4.2.
We set
\[ {\Lambda _{\nu }}(\gamma ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{(\Lambda (\gamma ))^{1/\nu }}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\Lambda (\gamma )\ge 0\\ {} 0& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\Lambda (\gamma )<0,\end{array}\right.\]
where Λ is the function in (2). Then, for all $k\in \mathbb{Z}$, $\Big\{P\Big(\frac{{X^{\nu }}(t)}{t}\in \cdot \Big|{X^{\nu }}(0)=k\Big):t>0\Big\}$ satisfies the LDP with speed function ${v_{t}}=t$ and good rate function ${\Lambda _{\nu }^{\ast }}(y):={\sup _{\gamma \in \mathbb{R}}}\{\gamma y-{\Lambda _{\nu }}(\gamma )\}$.
Proof.
We want to apply the Gärtner Ellis Theorem and, for all $\gamma \in \mathbb{R}$, we have to take the limit of $\frac{1}{t}\log {F_{k}^{\nu }}({e^{\gamma }},t)$ (as $t\to \infty $). Obviously we consider the expression of the function ${F_{k}^{\nu }}(z,t)$ in Proposition 4.1.
Firstly, if $\nu \in (0,1)$, we have
(14)
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log {F_{k}^{\nu }}({e^{\gamma }},t)={\Lambda _{\nu }}(\gamma )\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R});\]
this can be checked noting that ${\hat{h}_{-}}(z)<0$ and ${\hat{h}_{+}}({e^{\gamma }})=\Lambda (\gamma )$ (for all $\gamma \in \mathbb{R}$), by taking into account the limit
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log {E_{\nu }}(c{t^{\nu }})=\left\{\begin{array}{l@{\hskip10.0pt}l}0& \hspace{2.5pt}\text{if}\hspace{2.5pt}c\le 0\\ {} {c^{1/\nu }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}c>0\end{array}\right.\]
(this limit can be seen as a consequence of an expansion of the Mittag-Leffler function; see (1.8.27) in [18] with $\alpha =\nu $ and $\beta =1$), and by considering a suitable application of Lemma 1.2.15 in [7].

Moreover the function ${\Lambda _{\nu }}$ in the limit (14) is nonnegative and attains its minimum, equal to zero, at the points of the set $\{\gamma \in \mathbb{R}:\Lambda (\gamma )\le 0\}$; we recall that this set reduces to the single point $\gamma =0$ if and only if ${\Lambda ^{\prime }}(0)=0$. Thus we can apply the Gärtner Ellis Theorem (because the function in the limit is finite everywhere and differentiable), and the desired LDP holds. □
Remark 4.1.
There are some difficulties in getting the extension of Proposition 3.4 to the time-fractional case. In fact, if a moderate deviation result holds, we expect that it is governed by the rate function ${J_{\nu }}(y):=\frac{{y^{2}}}{2{\Lambda ^{\prime\prime }_{\nu }}(0)}$, where ${\Lambda ^{\prime\prime }_{\nu }}(0)$ is the second derivative of ${\Lambda _{\nu }}$ at the origin $\gamma =0$, assuming that this value exists and is finite. However ${\Lambda ^{\prime\prime }_{\nu }}(0)$ exists only if $\nu \in (0,1/2]$, and in that case it is equal to zero. So, in such a case, we should have
\[ {J_{\nu }}(y)=\left\{\begin{array}{l@{\hskip10.0pt}l}0& \hspace{2.5pt}\text{if}\hspace{2.5pt}y=0\\ {} \infty & \hspace{2.5pt}\text{if}\hspace{2.5pt}y\ne 0,\end{array}\right.\]
and this rate function is not interesting; in fact it is the largest rate function that we can have for a sequence that converges to zero (for instance this rate function comes up when we have constant random variables converging to zero).
Remark 4.2.
We take $0<{\nu _{1}}<{\nu _{2}}<1$. We recall that:
• for $\nu \in (0,1)$ and $y\in \mathbb{R}$, the equation ${\Lambda ^{\prime }_{\nu }}(\gamma )=y$ admits a solution; for the case $y=0$ we have\[ \{\gamma \in \mathbb{R}:{\Lambda ^{\prime }_{\nu }}(\gamma )=0\}=\{\gamma \in \mathbb{R}:\Lambda (\gamma )\le 0\},\]and therefore we have a unique solution $\gamma =0$ if and only if ${\Lambda ^{\prime }}(0)=0$; on the contrary, if $y\ne 0$, we have a unique solution ${\gamma _{y,\nu }}\in \mathbb{R}$, say;
• there exists $\delta >0$ such that, if $\inf \{|\gamma -\tilde{\gamma }|:\Lambda (\tilde{\gamma })\le 0\}<\delta $, then $\Lambda (\gamma )\in (0,1)$, and therefore $0<{\Lambda _{{\nu _{1}}}}(\gamma )<{\Lambda _{{\nu _{2}}}}(\gamma )$.
Thus, by combining these two statements, there exists ${\delta ^{\prime }}>0$ such that, for $0<|y|<{\delta ^{\prime }}$, we have
\[\begin{aligned}{}0& <{\Lambda _{{\nu _{2}}}^{\ast }}(y)={\gamma _{y,{\nu _{2}}}}y-{\Lambda _{{\nu _{2}}}}({\gamma _{y,{\nu _{2}}}})\\ {} & <{\gamma _{y,{\nu _{2}}}}y-{\Lambda _{{\nu _{1}}}}({\gamma _{y,{\nu _{2}}}})\le \underset{\gamma \in \mathbb{R}}{\sup }\{\gamma y-{\Lambda _{{\nu _{1}}}}(\gamma )\}={\Lambda _{{\nu _{1}}}^{\ast }}(y)\end{aligned}\]
(see Figure 2 where ${\Lambda ^{\prime }}(0)=0$ and we consider some specific values of ν). In conclusion we can say that
(15)
\[ \frac{{X^{{\nu _{1}}}}(t)}{t}\hspace{2.5pt}\text{converges to zero faster than}\hspace{2.5pt}\frac{{X^{{\nu _{2}}}}(t)}{t}\hspace{2.5pt}(\text{as}\hspace{2.5pt}t\to \infty ).\]
We also remark that the statement (15) is not surprising if we take into account the time-change representation (11). In fact, if we denote the stable subordinator by $\{{S^{\nu }}(t):t\ge 0\}$, we have that
(16)
\[ \mathbb{E}[{e^{\gamma {S^{\nu }}(t)}}]=\left\{\begin{array}{l@{\hskip10.0pt}l}{e^{-|\gamma {|^{\nu }}t}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\gamma \le 0\\ {} \infty & \hspace{2.5pt}\text{otherwise};\end{array}\right.\]
thus, as $\nu \in (0,1)$ decreases, the increasing trend of $\{{S^{\nu }}(t):t\ge 0\}$ increases, and therefore the increasing trend of the inverse of the stable subordinator $\{{T^{\nu }}(t):t\ge 0\}$ decreases. Then, for $0<{\nu _{1}}<{\nu _{2}}<1$, the increasing trend of the random time-change $\{{T^{{\nu _{1}}}}(t):t\ge 0\}$ for $X(\cdot )$ is slower than the increasing trend of $\{{T^{{\nu _{2}}}}(t):t\ge 0\}$; so $\frac{{X^{1}}({T^{{\nu _{1}}}}(t))}{t}$ converges to zero faster than $\frac{{X^{1}}({T^{{\nu _{2}}}}(t))}{t}$ (as $t\to \infty $), and this statement meets (15).
Fig. 2.
The rate function ${\Lambda _{\nu }^{\ast }}$ around $y=0$ for ${\Lambda ^{\prime }}(0)=0$ (only in this case is ${\Lambda _{\nu }^{\ast }}$ differentiable everywhere; otherwise the left and right hand derivatives of ${\Lambda _{\nu }^{\ast }}(y)$ at $y=0$ do not coincide) and some values of ν: $\nu =1/4$ (dashed line), $\nu =1/2$ (continuous line) and $\nu =1$ (dotted line)
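A computation in the spirit of Figure 2 can be reproduced as follows (an illustrative sketch, ours; the rates are chosen so that ${\alpha _{1}}{\beta _{1}}={\alpha _{2}}{\beta _{2}}$, i.e. ${\Lambda ^{\prime }}(0)=0$, in which case $\Lambda (\gamma )\ge 0$ for all γ and ${\Lambda _{\nu }}={\Lambda ^{1/\nu }}$).

```python
import numpy as np
from scipy.optimize import minimize_scalar

a1, a2, b1, b2 = 1.0, 2.0, 2.0, 1.0            # chosen so that a1*b1 = a2*b2
S = a1 + a2 + b1 + b2

def Lam(g):
    ez = np.exp(g)
    h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*ez**2
                    + 4*(b1*ez**2 + b2)*(a1*ez**2 + a2))
    return h/ez - S/2

def rate(y, nu):
    Lnu = lambda g: max(Lam(g), 0.0)**(1/nu)   # Lambda_nu of Proposition 4.2
    res = minimize_scalar(lambda g: Lnu(g) - g*y, bounds=(-15, 15), method='bounded')
    return -res.fun

for y in (-0.2, -0.05, 0.05, 0.2):
    print(y, [round(rate(y, nu), 4) for nu in (0.25, 0.5, 1.0)])
# near y = 0 the printed rate increases as nu decreases: faster convergence to zero
```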
5 Results with the (possibly tempered) stable subordinator
In this section we consider the process $\{{\tilde{X}^{\nu ,\mu }}(t):t\ge 0\}$, for $\nu \in (0,1)$ and $\mu \ge 0$, i.e.
\[ {\tilde{X}^{\nu ,\mu }}(t):={X^{1}}({\tilde{S}^{\nu ,\mu }}(t))\hspace{2.5pt}(\text{for all}\hspace{2.5pt}t\ge 0),\]
where $\{{\tilde{S}^{\nu ,\mu }}(t):t\ge 0\}$ is a (possibly tempered) stable subordinator, independent of a version of the non-fractional process $\{{X^{1}}(t):t\ge 0\}$ studied above.

So we recall some preliminaries on $\{{\tilde{S}^{\nu ,\mu }}(t):t\ge 0\}$. Firstly, for $t>0$, we have
\[ P({\tilde{S}^{\nu ,\mu }}(t)\in dx)=\underset{=:{f_{{\tilde{S}^{\nu ,\mu }}(t)}}(x)}{\underbrace{{e^{-\mu x+{\mu ^{\nu }}t}}{f_{{S^{\nu }}(t)}}(x)}}dx,\]
where ${f_{{S^{\nu }}(t)}}$ denotes the density of ${S^{\nu }}(t)$ and $\{{S^{\nu }}(t):t\ge 0\}$ is the stable subordinator; note that $\{{\tilde{S}^{\nu ,\mu }}(t):t\ge 0\}$ with $\mu =0$ coincides with $\{{S^{\nu }}(t):t\ge 0\}$. Moreover we have
(17)
\[ \mathbb{E}[{e^{\gamma {\tilde{S}^{\nu ,\mu }}(t)}}]={e^{{\mu ^{\nu }}t}}\mathbb{E}[{e^{(\gamma -\mu ){S^{\nu }}(t)}}]=\left\{\begin{array}{l@{\hskip10.0pt}l}{e^{-t({(\mu -\gamma )^{\nu }}-{\mu ^{\nu }})}}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\gamma \le \mu \\ {} \infty & \hspace{2.5pt}\text{otherwise},\end{array}\right.\]
where we take into account (16). Moreover, for $\mu >0$, if we consider the function ${\Psi _{\nu ,\mu }}$ defined by
(18)
\[ {\Psi _{\nu ,\mu }}(\gamma ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{\mu ^{\nu }}-{(\mu -\gamma )^{\nu }}& \hspace{2.5pt}\text{if}\hspace{2.5pt}\gamma \le \mu \\ {} \infty & \hspace{2.5pt}\text{otherwise},\end{array}\right.\]
for all $t>0$ we have
(19)
\[ \frac{\mathbb{E}[{\tilde{S}^{\nu ,\mu }}(t)]}{t}=\frac{\nu {\mu ^{\nu -1}}t}{t}=\nu {\mu ^{\nu -1}}={\Psi ^{\prime }_{\nu ,\mu }}(0)\]
and
(20)
\[ \frac{\mathrm{Var}[{\tilde{S}^{\nu ,\mu }}(t)]}{t}=\frac{-\nu (\nu -1){\mu ^{\nu -2}}t}{t}=-\nu (\nu -1){\mu ^{\nu -2}}={\Psi ^{\prime\prime }_{\nu ,\mu }}(0)\]
(actually, if $\mu =0$, the above formulas (19) and (20) hold as left derivatives equal to infinity).
5.1 Probability generating function
Now we prove Proposition 5.1, which provides an expression for the probability generating functions $\{{\tilde{F}_{k}^{\nu ,\mu }}(\cdot ,t):k\in \mathbb{Z},t\ge 0\}$ defined by
\[ {\tilde{F}_{k}^{\nu ,\mu }}(z,t):=\mathbb{E}\left[{z^{{\tilde{X}^{\nu ,\mu }}(t)}}|{\tilde{X}^{\nu ,\mu }}(0)=k\right]={\sum \limits_{n=-\infty }^{\infty }}{z^{n}}{\tilde{p}_{k,n}^{\nu ,\mu }}(t)\hspace{2.5pt}(\text{for}\hspace{2.5pt}k\in \mathbb{Z}),\]
where $\{{\tilde{p}_{k,n}^{\nu ,\mu }}(t):k,n\in \mathbb{Z},t\ge 0\}$ are the state probabilities defined by
(21)
\[ {\tilde{p}_{k,n}^{\nu ,\mu }}(t):=P({\tilde{X}^{\nu ,\mu }}(t)=n|{\tilde{X}^{\nu ,\mu }}(0)=k).\]
Obviously Proposition 5.1 is the analogue of Propositions 3.1 and 4.1. The condition ${\hat{h}_{+}}(z)\le \mu $ will be discussed after the proof.
Proposition 5.1.
For $z>0$ we have
\[ {\tilde{F}_{k}^{\nu ,\mu }}(z,t)=\left\{\begin{array}{l@{\hskip10.0pt}l}{z^{k}}\left(\frac{{e^{-t({(\mu -{\hat{h}_{+}}(z))^{\nu }}-{\mu ^{\nu }})}}+{e^{-t({(\mu -{\hat{h}_{-}}(z))^{\nu }}-{\mu ^{\nu }})}}}{2}\right.\\ {} \hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}+\left.\frac{{c_{k}}(z)}{h(z)}\cdot \frac{{e^{-t({(\mu -{\hat{h}_{+}}(z))^{\nu }}-{\mu ^{\nu }})}}-{e^{-t({(\mu -{\hat{h}_{-}}(z))^{\nu }}-{\mu ^{\nu }})}}}{2}\right)& \hspace{2.5pt}\textit{if}\hspace{2.5pt}{\hat{h}_{+}}(z)\le \mu \\ {} \infty & \hspace{2.5pt}\textit{otherwise},\end{array}\right.\]
where ${c_{k}}(z)$ is as in (8) and ${\hat{h}_{\pm }}(z)$ are the eigenvalues in (9).
Proof.
We recall that ${\tilde{S}^{\nu ,\mu }}(0)=0$. Then, if we refer to the expression of the probability generating functions $\{{F_{k}}(\cdot ,t):k\in \mathbb{Z},t\ge 0\}$ in Proposition 3.1, we have
\[\begin{aligned}{}{\tilde{F}_{k}^{\nu ,\mu }}(z,t)& =\mathbb{E}\left[{z^{{X^{1}}({\tilde{S}^{\nu ,\mu }}(t))}}|{X^{1}}(0)=k\right]\\ {} & =\mathbb{E}\left[{F_{k}}(z,{\tilde{S}^{\nu ,\mu }}(t))|{X^{1}}(0)=k\right]\\ {} & =\mathbb{E}\left[{z^{k}}{e^{-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}{\tilde{S}^{\nu ,\mu }}(t)}}\left(\cosh \left(\frac{{\tilde{S}^{\nu ,\mu }}(t)h(z)}{z}\right)\right.\right.\\ {} & \left.\left.+\frac{{c_{k}}(z)}{h(z)}\sinh \left(\frac{{\tilde{S}^{\nu ,\mu }}(t)h(z)}{z}\right)\right)\Big|{X^{1}}(0)=k\right].\end{aligned}\]
Then, by taking into account the moment generating function in (17), after some manipulations we get (we recall that ${\hat{h}_{-}}(z)<0$)
\[\begin{aligned}{}{\tilde{F}_{k}^{\nu ,\mu }}(z,t)& ={z^{k}}\left(\left(1+\frac{{c_{k}}(z)}{h(z)}\right)\frac{\mathbb{E}[{e^{{\hat{h}_{+}}(z){\tilde{S}^{\nu ,\mu }}(t)}}]}{2}\right.\\ {} & \left.+\left(1-\frac{{c_{k}}(z)}{h(z)}\right)\frac{\mathbb{E}[{e^{{\hat{h}_{-}}(z){\tilde{S}^{\nu ,\mu }}(t)}}]}{2}\right)\\ {} & ={z^{k}}\left(\left(1+\frac{{c_{k}}(z)}{h(z)}\right)\frac{{e^{-t({(\mu -{\hat{h}_{+}}(z))^{\nu }}-{\mu ^{\nu }})}}}{2}\right.\\ {} & \left.+\left(1-\frac{{c_{k}}(z)}{h(z)}\right)\frac{{e^{-t({(\mu -{\hat{h}_{-}}(z))^{\nu }}-{\mu ^{\nu }})}}}{2}\right)\end{aligned}\]
if ${\hat{h}_{+}}(z)\le \mu $ (and infinity otherwise). So we can easily check that this coincides with the expression in the statement of the proposition. □

We conclude this section with a brief discussion on the condition ${\hat{h}_{+}}(z)\le \mu $ for $\mu \ge 0$. For $z>0$, by (9) and (3), this condition reads
\[ \frac{\sqrt{{({\alpha _{1}}+{\alpha _{2}}-({\beta _{1}}+{\beta _{2}}))^{2}}{z^{2}}+4({\beta _{1}}{z^{2}}+{\beta _{2}})({\alpha _{1}}{z^{2}}+{\alpha _{2}})}}{2z}-\frac{{\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}}}{2}\le \mu .\]
Then, after some easy computations, one can check that this is equivalent to
\[ {\alpha _{1}}{\beta _{1}}{z^{4}}-({\mu ^{2}}+\mu ({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})+{\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}}){z^{2}}+{\alpha _{2}}{\beta _{2}}\le 0;\]
in conclusion we have ${\hat{h}_{+}}(z)\le \mu $ if and only if $\sqrt{{m_{-}}(\mu )}\le z\le \sqrt{{m_{+}}(\mu )}$, where
\[\begin{aligned}{}{m_{\pm }}(\mu )& :=\frac{1}{2{\alpha _{1}}{\beta _{1}}}\left\{{\mu ^{2}}+\mu ({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})+{\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}}\right.\\ {} & \left.\pm \sqrt{{({\mu ^{2}}+\mu ({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})+{\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}})^{2}}-4{\alpha _{1}}{\alpha _{2}}{\beta _{1}}{\beta _{2}}}\right\}.\end{aligned}\]
In particular, in the case $\mu =0$, we have ${\hat{h}_{+}}(z)\le 0$ if and only if
\[ \sqrt{\min \left\{1,\frac{{\alpha _{2}}{\beta _{2}}}{{\alpha _{1}}{\beta _{1}}}\right\}}\le z\le \sqrt{\max \left\{1,\frac{{\alpha _{2}}{\beta _{2}}}{{\alpha _{1}}{\beta _{1}}}\right\}}\]
because
\[ {m_{\pm }}(0)=\frac{{\alpha _{1}}{\beta _{1}}+{\alpha _{2}}{\beta _{2}}\pm |{\alpha _{1}}{\beta _{1}}-{\alpha _{2}}{\beta _{2}}|}{2{\alpha _{1}}{\beta _{1}}};\]
so we have ${m_{-}}(0)=1$ and/or ${m_{+}}(0)=1$, and they are both equal to 1 if and only if ${\alpha _{1}}{\beta _{1}}={\alpha _{2}}{\beta _{2}}$ or, equivalently, ${\Lambda ^{\prime }}(0)=0$ by Lemma 3.1.
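The interval $[\sqrt{{m_{-}}(\mu )},\sqrt{{m_{+}}(\mu )}]$ can be checked numerically (an illustrative sketch, ours, with arbitrary parameter values):

```python
import numpy as np

a1, a2, b1, b2, mu = 1.3, 0.7, 2.1, 0.4, 0.5   # arbitrary values

def h_plus(z):                                  # the eigenvalue in (9)
    h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*z**2 + 4*(b1*z**2 + b2)*(a1*z**2 + a2))
    return -(a1 + a2 + b1 + b2)/2 + h/z

q = mu**2 + mu*(a1 + a2 + b1 + b2) + a1*b1 + a2*b2
d = np.sqrt(q**2 - 4*a1*a2*b1*b2)
m_lo, m_hi = (q - d)/(2*a1*b1), (q + d)/(2*a1*b1)

for z in (0.9*np.sqrt(m_lo), np.sqrt(m_lo), 1.0, np.sqrt(m_hi), 1.1*np.sqrt(m_hi)):
    print(round(z, 4), h_plus(z) <= mu + 1e-12)  # True exactly on [sqrt(m_-), sqrt(m_+)]
```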
5.2 Asymptotic results
In this section we present Propositions 5.2 and 5.3. The first one is the analogue of Propositions 3.3 and 4.2; the second one concerns moderate deviations and is the analogue of Proposition 3.4 for the basic model studied in Section 3. Thus the model $\{{\tilde{X}^{\nu ,\mu }}(t):t\ge 0\}$ in this section has more analogies with the basic model $\{X(t):t\ge 0\}$ (studied in Section 3) than with the process $\{{X^{\nu }}(t):t\ge 0\}$ studied in Section 4. In the proofs of Propositions 5.2 and 5.3 we apply the Gärtner Ellis Theorem, and we use the probability generating function in Proposition 5.1; the condition $\mu >0$ is required.
Obviously we can repeat the brief comments on the interest of the results presented just before Propositions 3.3 and 3.4 with some modifications; for instance, for a suitable function ${\tilde{\Lambda }_{\nu ,\mu }}$ presented below (see Proposition 5.2), we have to consider ${\tilde{\Lambda }^{\prime }_{\nu ,\mu }}(0)$ and ${\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)$ instead of ${\Lambda ^{\prime }}(0)$ and ${\Lambda ^{\prime\prime }}(0)$.
Proposition 5.2.
Assume that $\mu >0$, and set
\[ {\tilde{\Lambda }_{\nu ,\mu }}(\gamma ):=\left\{\begin{array}{l@{\hskip10.0pt}l}{\mu ^{\nu }}-{(\mu -\Lambda (\gamma ))^{\nu }}& \hspace{2.5pt}\textit{if}\hspace{2.5pt}\Lambda (\gamma )\le \mu \\ {} \infty & \hspace{2.5pt}\textit{otherwise},\end{array}\right.\]
where Λ is the function in (2). Then, for all $k\in \mathbb{Z}$, $\Big\{P\Big(\frac{{\tilde{X}^{\nu ,\mu }}(t)}{t}\in \cdot \Big|{\tilde{X}^{\nu ,\mu }}(0)=k\Big):t\hspace{0.1667em}>\hspace{0.1667em}0\Big\}$ satisfies the LDP with speed function ${v_{t}}\hspace{0.1667em}=\hspace{0.1667em}t$ and good rate function ${\tilde{\Lambda }_{\nu ,\mu }^{\ast }}(y):={\sup _{\gamma \in \mathbb{R}}}\{\gamma y-{\tilde{\Lambda }_{\nu ,\mu }}(\gamma )\}$.
Proof.
We want to apply the Gärtner Ellis Theorem and, for all $\gamma \in \mathbb{R}$, we have to take the limit of $\frac{1}{t}\log {\tilde{F}_{k}^{\nu ,\mu }}({e^{\gamma }},t)$ (as $t\to \infty $). Obviously we consider the expression of the function ${\tilde{F}_{k}^{\nu ,\mu }}(z,t)$ in Proposition 5.1.
Firstly we have
(22)
\[ \underset{t\to \infty }{\lim }\frac{1}{t}\log {\tilde{F}_{k}^{\nu ,\mu }}({e^{\gamma }},t)={\tilde{\Lambda }_{\nu ,\mu }}(\gamma )\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R});\]
this can be checked noting that ${\hat{h}_{-}}(z)<0$ and ${\hat{h}_{+}}({e^{\gamma }})=\Lambda (\gamma )$ (for all $\gamma \in \mathbb{R}$), and by considering a suitable application of Lemma 1.2.15 in [7].

The function ${\tilde{\Lambda }_{\nu ,\mu }}$ in the limit (22) is essentially smooth (see e.g. Definition 2.3.5 in [7]); in fact it is finite in a neighborhood of the origin, differentiable in the interior of the set $\mathcal{D}:=\{\gamma \in \mathbb{R}:{\tilde{\Lambda }_{\nu ,\mu }}(\gamma )<\infty \}$, and steep (namely ${\tilde{\Lambda }^{\prime }_{\nu ,\mu }}({\gamma _{n}})\to \infty $ for every sequence $\{{\gamma _{n}}:n\ge 1\}$ in the interior of $\mathcal{D}$ which converges to a boundary point of the interior of $\mathcal{D}$) because, if ${\gamma _{0}}$ is such that $\Lambda ({\gamma _{0}})=\mu $, we have
\[ {\tilde{\Lambda }^{\prime }_{\nu ,\mu }}(\gamma )=\nu {(\mu -\Lambda (\gamma ))^{\nu -1}}{\Lambda ^{\prime }}(\gamma )\to \infty \hspace{2.5pt}(\text{as}\hspace{2.5pt}\gamma \to {\gamma _{0}}).\]
Then we can apply the Gärtner Ellis Theorem (in fact the function ${\tilde{\Lambda }_{\nu ,\mu }}$ is also lower semicontinuous), and the desired LDP holds. □
In view of the next result on moderate deviations we compute ${\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)$. We remark that, if we consider the function ${\Psi _{\nu ,\mu }}$ in (18), we have
\[ {\tilde{\Lambda }_{\nu ,\mu }}(\gamma )={\Psi _{\nu ,\mu }}(\Lambda (\gamma ))\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R}).\]
Thus we have
\[ {\tilde{\Lambda }^{\prime }_{\nu ,\mu }}(\gamma )\hspace{0.1667em}=\hspace{0.1667em}{\Psi ^{\prime }_{\nu ,\mu }}(\Lambda (\gamma )){\Lambda ^{\prime }}(\gamma ),\hspace{2.5pt}{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(\gamma )\hspace{0.1667em}=\hspace{0.1667em}{\Psi ^{\prime }_{\nu ,\mu }}(\Lambda (\gamma )){\Lambda ^{\prime\prime }}(\gamma )+{\Psi ^{\prime\prime }_{\nu ,\mu }}(\Lambda (\gamma )){({\Lambda ^{\prime }}(\gamma ))^{2}}\]
and therefore (for the second equality see (19) and (20))
\[ {\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)={\Psi ^{\prime }_{\nu ,\mu }}(0){\Lambda ^{\prime\prime }}(0)+{\Psi ^{\prime\prime }_{\nu ,\mu }}(0){({\Lambda ^{\prime }}(0))^{2}}=\nu {\mu ^{\nu -1}}{\Lambda ^{\prime\prime }}(0)-\nu (\nu -1){\mu ^{\nu -2}}{({\Lambda ^{\prime }}(0))^{2}}.\]
We remark that ${\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)>0$ because ${\Lambda ^{\prime\prime }}(0)>0$ (see Lemma 3.1) and $\mu >0$.
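The expression for ${\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)$ can be checked against a finite-difference approximation (an illustrative sketch, ours, with arbitrary parameter values):

```python
import numpy as np

a1, a2, b1, b2, nu, mu = 1.3, 0.7, 2.1, 0.4, 0.6, 0.8   # arbitrary values
S = a1 + a2 + b1 + b2

def Lam(g):
    ez = np.exp(g)
    h = 0.5*np.sqrt((a1 + a2 - (b1 + b2))**2*ez**2
                    + 4*(b1*ez**2 + b2)*(a1*ez**2 + a2))
    return h/ez - S/2

def LamT(g):                     # tilde-Lambda_{nu,mu}(gamma) = Psi_{nu,mu}(Lambda(gamma))
    return mu**nu - (mu - Lam(g))**nu

eps = 1e-4                       # central second difference at gamma = 0
fd = (LamT(eps) - 2*LamT(0.0) + LamT(-eps))/eps**2
L1 = 2*(a1*b1 - a2*b2)/S         # Lambda'(0) and Lambda''(0) from Lemma 3.1
L2 = 4*(a1*b1 + a2*b2)/S - 8*(a1*b1 - a2*b2)**2/S**3
print(fd, nu*mu**(nu - 1)*L2 - nu*(nu - 1)*mu**(nu - 2)*L1**2)
```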
Proposition 5.3.
Assume that $\mu >0$. Let $\{{a_{t}}:t>0\}$ be such that ${a_{t}}\to 0$ and $t{a_{t}}\to +\infty $ (as $t\to +\infty $). Then, for all $k\in \mathbb{Z}$, $\Big\{P\Big(\sqrt{t{a_{t}}}\frac{{\tilde{X}^{\nu ,\mu }}(t)-\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]}{t}\in \cdot \Big|{\tilde{X}^{\nu ,\mu }}(0)=k\Big):t>0\Big\}$ satisfies the LDP with speed function ${v_{t}}=\frac{1}{{a_{t}}}$ and good rate function ${J_{\nu ,\mu }}(y):=\frac{{y^{2}}}{2{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)}$.
Proof.
We apply the Gärtner Ellis Theorem. More precisely we show that
(23)
\[ \underset{t\to \infty }{\lim }{a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}{\tilde{X}_{k}}(t;{a_{t}})\right)\Big|{\tilde{X}^{\nu ,\mu }}(0)=k\right]=\frac{{\gamma ^{2}}}{2}{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)\hspace{2.5pt}(\text{for all}\hspace{2.5pt}\gamma \in \mathbb{R}),\]
where
\[ {\tilde{X}_{k}}(t;{a_{t}}):=\sqrt{t{a_{t}}}\frac{{\tilde{X}^{\nu ,\mu }}(t)-\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]}{t};\]
in fact we can easily check that ${J_{\nu ,\mu }}(y)={\sup _{\gamma \in \mathbb{R}}}\left\{\gamma y-\frac{{\gamma ^{2}}}{2}{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)\right\}$ (for all $y\in \mathbb{R}$).

We remark that
\[\begin{aligned}{}& {a_{t}}\log \mathbb{E}\left[\exp \left(\frac{\gamma }{{a_{t}}}\sqrt{t{a_{t}}}\frac{{\tilde{X}^{\nu ,\mu }}(t)-\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]}{t}\right)\Big|{\tilde{X}^{\nu ,\mu }}(0)=k\right]\\ {} & ={a_{t}}\left(\log \mathbb{E}\left[\exp \left(\frac{\gamma }{\sqrt{t{a_{t}}}}{\tilde{X}^{\nu ,\mu }}(t)\right)\Big|{\tilde{X}^{\nu ,\mu }}(0)=k\right]\right.\\ {} & \left.-\frac{\gamma }{\sqrt{t{a_{t}}}}\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]\right)\\ {} & ={a_{t}}\left(\log {\tilde{F}_{k}^{\nu ,\mu }}({e^{\gamma /\sqrt{t{a_{t}}}}},t)-\frac{\gamma }{\sqrt{t{a_{t}}}}\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]\right),\end{aligned}\]
where ${\tilde{F}_{k}^{\nu ,\mu }}(z,t)$ is the probability generating function in Proposition 5.1. Moreover, by Proposition 3.2 (together with a conditioning with respect to $\{{\tilde{S}^{\nu ,\mu }}(t):t\ge 0\}$ and some properties of this process) we have
\[ \mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]=k+{\Lambda ^{\prime }}(0)\mathbb{E}[{\tilde{S}^{\nu ,\mu }}(t)]+\mathbb{E}[b({\tilde{S}^{\nu ,\mu }}(t))],\]
where $b(r)={\left.{\left(\frac{{c_{k}}(z)}{h(z)}\right)^{\prime }}\right|_{z=1}}\frac{1-{e^{-({\alpha _{1}}+{\alpha _{2}}+{\beta _{1}}+{\beta _{2}})r}}}{2}$ is a bounded function of $r\ge 0$; thus, by (19), we have
\[ \mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]=k+{\Lambda ^{\prime }}(0){\Psi ^{\prime }_{\nu ,\mu }}(0)t+\mathbb{E}[b({\tilde{S}^{\nu ,\mu }}(t))].\]
Then, since ${\hat{h}_{-}}({e^{\gamma }})<0$ and ${\hat{h}_{+}}({e^{\gamma }})=\Lambda (\gamma )$ for all $\gamma \in \mathbb{R}$, we get
\[\begin{aligned}{}& \underset{t\to \infty }{\lim }{a_{t}}\left(\log {\tilde{F}_{k}^{\nu ,\mu }}({e^{\gamma /\sqrt{t{a_{t}}}}},t)-\frac{\gamma }{\sqrt{t{a_{t}}}}\mathbb{E}[{\tilde{X}^{\nu ,\mu }}(t)|{\tilde{X}^{\nu ,\mu }}(0)=k]\right)\\ {} & =\underset{t\to \infty }{\lim }{a_{t}}\left(k\frac{\gamma }{\sqrt{t{a_{t}}}}+t{\tilde{\Lambda }_{\nu ,\mu }}\left(\frac{\gamma }{\sqrt{t{a_{t}}}}\right)\right.\\ {} & \left.-\frac{\gamma }{\sqrt{t{a_{t}}}}\left(k+{\Lambda ^{\prime }}(0){\Psi ^{\prime }_{\nu ,\mu }}(0)t+\mathbb{E}[b({\tilde{S}^{\nu ,\mu }}(t))]\right)\right)\\ {} & =\underset{t\to \infty }{\lim }t{a_{t}}\left({\tilde{\Lambda }_{\nu ,\mu }}\left(\frac{\gamma }{\sqrt{t{a_{t}}}}\right)-\frac{\gamma }{\sqrt{t{a_{t}}}}{\Lambda ^{\prime }}(0){\Psi ^{\prime }_{\nu ,\mu }}(0)\right);\end{aligned}\]
in fact the term with $\mathbb{E}[b({\tilde{S}^{\nu ,\mu }}(t))]$ is negligible because the function $b(\cdot )$ is bounded. Finally, if we consider the second order Taylor formula for the function ${\tilde{\Lambda }_{\nu ,\mu }}$, we have
\[\begin{aligned}{}& {\tilde{\Lambda }_{\nu ,\mu }}\left(\frac{\gamma }{\sqrt{t{a_{t}}}}\right)-\frac{\gamma }{\sqrt{t{a_{t}}}}{\Lambda ^{\prime }}(0){\Psi ^{\prime }_{\nu ,\mu }}(0)\\ {} & =\frac{\gamma }{\sqrt{t{a_{t}}}}{\tilde{\Lambda }^{\prime }_{\nu ,\mu }}(0)+\frac{{\gamma ^{2}}}{2t{a_{t}}}{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)+o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)-\frac{\gamma }{\sqrt{t{a_{t}}}}{\Lambda ^{\prime }}(0){\Psi ^{\prime }_{\nu ,\mu }}(0)\\ {} & =\frac{{\gamma ^{2}}}{2t{a_{t}}}{\tilde{\Lambda }^{\prime\prime }_{\nu ,\mu }}(0)+o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)\end{aligned}\]
for a remainder $o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)$ such that $o\left(\frac{{\gamma ^{2}}}{t{a_{t}}}\right)/\frac{{\gamma ^{2}}}{t{a_{t}}}\to 0$, and (23) can be easily checked. □
6 Conclusions
In this paper we study continuous-time Markov chains on the integers which allow transitions to adjacent states only, with alternating rates. We present some explicit formulas (means, variances, state probabilities), and we also study the asymptotic behavior in the fashion of large deviations by applying the Gärtner Ellis Theorem. Moreover we study independent random time-changes of these Markov chains with the inverse of the stable subordinator, the stable subordinator and the tempered stable subordinator. We do not have any large deviation results with the stable subordinator (because we cannot apply the Gärtner Ellis Theorem); on the contrary, when we deal with the tempered stable subordinator, we can provide a complete analysis as we did for the basic model. We also give some large deviation results with the inverse of the stable subordinator but, in this case, we cannot obtain a result on moderate deviations. Some other (possibly dependent) more general random time-changes could be investigated in the future.