1 Introduction
1.1 Overview
Simultaneous renewal is an important topic in practical applications of Markov chains. Although it is of independent interest, for example, in queueing theory, we are interested in it because it plays an essential role in the coupling construction, which can be used to derive stability estimates for the n-step transition probabilities and other results such as the law of large numbers and limit theorems.
For example, in [5, 6] one can see how a stability estimate can be obtained using the coupling method for two time-inhomogeneous Markov chains with discrete time on a general state space. Good examples of applications of the coupling method (for both homogeneous and inhomogeneous Markov chains) are given in [2, 3].
It is worth mentioning that the coupling construction for time-inhomogeneous chains differs slightly from its classical setup (see, e.g., [16, 17]). Such a time-inhomogeneous coupling for a general state space can be found in [5]. Its modification, called the maximal coupling, can be used for a discrete space. More information about maximal coupling and its application to stability in the time-homogeneous case can be found in [14, 15].
The crucial problem in applying the results of the papers listed above was the calculation of the expectation of the coupling moment, which derives from the simultaneous renewal. However, there were no good estimates for the expectation of the simultaneous renewal time in the time-inhomogeneous case.
For the time-homogeneous case, the paper [13] proposes such an estimate based on the Daley inequality (see [1]).
In [9], we derived conditions (see Thm. 3.1) that guarantee that the expectation of the simultaneous renewal time is finite, but no practical estimates for this expectation were given there.
In [8], we derived an analogue of the Daley inequality, which is used in this paper. The key condition for this inequality is the finiteness of the second moment of the stochastic dominant of the original renewal process. That is why it is a crucial condition for the construction of the estimate.
1.2 Definitions and notation
We consider two independent time-inhomogeneous Markov chains with discrete time and general state space $(E,\mathfrak{E})$. We assume that both chains are defined on the same probability space $(\varOmega ,\mathfrak{F},\mathbb{P})$. Denote these chains by $({X_{n}^{(1)}})$, $({X_{n}^{(2)}})$, $n\ge 0$. We use the following notation for the one-step transition probabilities:
\[{P_{t}^{(l)}}(x,A)=\mathbb{P}\big\{{X_{t+1}^{(l)}}\in A\hspace{2.5pt}\big|\hspace{2.5pt}{X_{t}^{(l)}}=x\big\},\]
where $x\in E$ is an arbitrary element, $l\in \{1,2\}$, and $A\in \mathfrak{E}$ is an arbitrary set.
We continue to use the definitions and notation from [9]. We consider some set $C\in \mathfrak{E}$, and our goal is to find an upper bound for the expectation of the first moment at which both chains visit the set C simultaneously.
Define the renewal intervals and renewal times, where $l\in \{1,2\}$ and $n\ge 1$:
(2)
\[\begin{array}{c}{\theta _{0}^{(l)}}=\inf \big\{t\ge 0,\hspace{2.5pt}{X_{t}^{(l)}}\in C\big\},\\{} {\theta _{n}^{(l)}}=\inf \big\{t\ge {\theta _{n-1}^{(l)}},\hspace{2.5pt}{X_{t}^{(l)}}\in C\big\}.\end{array}\]
Then we can define the renewal probabilities
(4)
\[{g_{n}^{(t,l)}}=\mathbb{P}\big\{{\theta _{k}^{(l)}}=n\hspace{2.5pt}\big|\hspace{2.5pt}{\tau _{k-1}^{(l)}}=t\big\},\hspace{1em}l\in \{1,2\},\hspace{2.5pt}n\ge 1.\]
It is worth mentioning that, in general, ${g_{n}^{(t,l)}}$ also depends on the value x of ${X_{t}^{(l)}}$, since the chain can hit different states inside C. However, we omit x for simplicity. We refer the reader to [9] for more details about definition (4).
Let us define the renewal sequence ${u_{n}^{(t,l)}}$ recursively:
The time T of simultaneous hitting of the set C is defined as
\[T=\inf \big\{t\ge 0:\hspace{2.5pt}{X_{t}^{(1)}}\in C,\hspace{2.5pt}{X_{t}^{(2)}}\in C\big\}.\]
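To make the definitions above concrete, the following minimal Python sketch (not part of the original argument) simulates two independent chains of birth–death type on $\{0,1,2,\dots \}$ with $C=\{0\}$ and estimates $\mathbb{E}[{\theta _{0}^{(l)}}]$ and the expectation of the simultaneous hitting time by Monte Carlo. The kernels down1, down2 and the starting states are illustrative assumptions.

```python
import random

def step(x, t, down_prob):
    """One step of a birth-death chain on {0,1,2,...}: from 0 stay with
    probability down_prob(t, 0) or move to 1; from x >= 1 move to x-1 with
    probability down_prob(t, x), otherwise to x+1."""
    if random.random() < down_prob(t, x):
        return max(x - 1, 0)
    return x + 1

# Illustrative time-inhomogeneous "down" probabilities (assumptions).
down1 = lambda t, x: 0.60 + 0.05 * (t % 2)
down2 = lambda t, x: 0.65

def hitting_times(x1=3, x2=5, horizon=100_000):
    """Return (theta0_1, theta0_2, T): the first visits of each chain to
    C = {0} and the first time both chains are in C simultaneously."""
    t, th1, th2, T = 0, None, None, None
    while T is None and t < horizon:
        if x1 == 0 and th1 is None:
            th1 = t
        if x2 == 0 and th2 is None:
            th2 = t
        if x1 == 0 and x2 == 0:
            T = t
        x1, x2 = step(x1, t, down1), step(x2, t, down2)
        t += 1
    return th1, th2, T

samples = [hitting_times() for _ in range(2000)]
ok = [s for s in samples if s[2] is not None]
print("E[theta_0^(1)] ~", sum(s[0] for s in ok) / len(ok))
print("E[T]           ~", sum(s[2] for s in ok) / len(ok))
```

For parameters that push both chains toward 0, the estimated expectation of T is finite; this is exactly the quantity bounded in Theorem 1 below.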
2 Estimate for the expectation of the simultaneous hitting time
First, we need to impose a condition on ${u_{n}^{(t,l)}}$ that guarantees its separation from 0. In the time-homogeneous case this follows from the renewal theorem, but for the time-inhomogeneous case there is no such theorem. Therefore, we need the following condition.
Condition A. There are a constant $\gamma >0$ and a number $n_{0}\ge 0$ such that, for all t, l and $n\ge n_{0}$,
\[{u_{n}^{(t,l)}}\ge \gamma .\]
It is important that this condition also guarantees a certain “regularity” of the chain in terms of periodicity; periodic chains obviously do not satisfy it.
There are various theorems that allow us to check Condition A in practice; see, for example, [7], Theorems 4.1, 4.2, and 4.3. We will use some of them later.
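Condition A can also be examined numerically for a concrete chain. The sketch below is our own illustration (it does not reproduce the theorems of [7]): it estimates ${u_{n}^{(t,l)}}$, understood here as the probability that a chain which is in $C=\{0\}$ at time t is again in C at time $t+n$, and reports the smallest estimated value over a range of t and $n\ge n_{0}$ as an empirical candidate for γ. The kernel down_prob is an assumption.

```python
import random

def down_prob(t, x):
    # Illustrative time-dependent "down" probability (assumption).
    return 0.6 + 0.05 * (t % 3 == 0)

def step(x, t):
    if random.random() < down_prob(t, x):
        return max(x - 1, 0)
    return x + 1

def estimate_u(t, n, trials=5_000):
    """Monte Carlo estimate of u_n^(t): the probability that the chain,
    being at 0 (a renewal) at time t, is again at 0 at time t + n."""
    hits = 0
    for _ in range(trials):
        x = 0
        for s in range(t, t + n):
            x = step(x, s)
        hits += (x == 0)
    return hits / trials

n0 = 2
gamma_hat = min(estimate_u(t, n) for t in range(0, 6) for n in range(n0, 12))
print("empirical lower bound for u_n^(t):", gamma_hat)
```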
We need a stochastic domination condition in order to apply Theorem 3.1 from [8].
Condition B. The distributions $({g_{n}^{(t,l)}})$ are stochastically dominated by some sequence $(\hat{g}_{n})$, $\hat{g}_{n}\ge 0$, which means that
(9)
\[{G_{n}^{(t,l)}}=\sum \limits_{k>n}{g_{k}^{(t,l)}}\le \hat{G}_{n}=\sum \limits_{k>n}\hat{g}_{k},\]
and that the stochastic dominant $(\hat{g}_{n})$ has finite first and second moments
\[\hat{\mu }_{1}=\sum \limits_{k\ge 1}k\hat{g}_{k}<\infty ,\hspace{1em}\hat{\mu }_{2}=\sum \limits_{k\ge 1}{k}^{2}\hat{g}_{k}<\infty .\]
The sequence $(\hat{G}_{n})$ is nonincreasing because $\hat{g}_{n}\ge 0$.
It is worth mentioning that we do not require $(\hat{g}_{n})$ to be a probability distribution, that is, the total mass $\sum _{k\ge 1}\hat{g}_{k}$ does not necessarily equal 1.
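The content of Condition B can be illustrated with a small numerical check. In the sketch below, the renewal laws ${g_{n}^{(t)}}$ and the candidate dominating sequence $(\hat{g}_{n})$ are illustrative assumptions (geometric laws), not objects from the paper; the code verifies the tail inequality (9) on a truncated range and computes the truncated moments $\hat{\mu }_{1}$, $\hat{\mu }_{2}$.

```python
# Illustrative renewal distributions g_n^(t) (geometric with a t-dependent
# parameter) and a candidate dominating sequence g_hat (assumptions).
N = 200  # truncation level for tails and moments

def g(t, n):
    q = 0.5 + 0.1 * (t % 2)           # "success" probability at time t
    return q * (1 - q) ** (n - 1)     # geometric renewal law on {1, 2, ...}

g_hat = [0.0] + [0.5 * 0.5 ** (n - 1) for n in range(1, N + 1)]

def tail(seq_at, n):
    """G_n = sum over k > n of the sequence (truncated at N)."""
    return sum(seq_at(k) for k in range(n + 1, N + 1))

dominated = all(
    tail(lambda k, t=t: g(t, k), n) <= tail(lambda k: g_hat[k], n) + 1e-12
    for t in range(10) for n in range(N)
)
mu1_hat = sum(k * g_hat[k] for k in range(1, N + 1))
mu2_hat = sum(k * k * g_hat[k] for k in range(1, N + 1))
print("tails dominated:", dominated)
print("mu1_hat ~", round(mu1_hat, 4), " mu2_hat ~", round(mu2_hat, 4))
```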
Theorem 1.
Assume that Conditions A and B hold for the chains $({X_{n}^{(l)}})$, $l\in \{1,2\}$, defined above and for the renewal sequences generated by them. Then the expectation of the simultaneous hitting time of the set C satisfies the inequality
where
Proof.
Let us recall the notation from [9]:
\[\nu _{1}:=\min \big\{j\ge \nu _{0}:{\tau _{j}^{2}}-{\tau _{\nu _{0}}^{1}}>n_{0},\hspace{2.5pt}\text{or}\hspace{2.5pt}{\tau _{j}^{2}}-{\tau _{\nu _{0}}^{1}}=0\big\},\]
and further on
\[\nu _{2m}:=\min \big\{j\ge \nu _{2m-1}:\hspace{2.5pt}{\tau _{j}^{1}}-{\tau _{\nu _{2m-1}}^{2}}>n_{0}\hspace{0.1667em},\hspace{2.5pt}\text{or}\hspace{2.5pt}{\tau _{j}^{1}}-{\tau _{\nu _{2m-1}}^{2}}=0\big\},\]
The moments $\nu _{k}$ are called coupling trials. Let us define $\tau =\min \{n\ge 1:B_{n}=0\}$ and the sequence of sigma-fields $\mathfrak{B}_{n}$, $n\ge 0$, by
We will use the same idea as in Theorem 5.1 from [9].
First, we assume that ${\theta _{0}^{(2)}}=0$, which means that the second chain starts from the set C.
The following representation of the time T follows directly from the definitions:
Using Lemma 1 and the fact that $\{\tau >n-1\}\in \mathfrak{B}_{n-1}$, we can derive the following inequality:
(14)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle E[B_{n}\mathbb{1}_{\tau >n}|\mathfrak{B}_{n-1}]& \displaystyle =E[B_{n}\mathbb{1}_{\tau \ge n}|\mathfrak{B}_{n-1}]+E[0\cdot \mathbb{1}_{\tau =n}|\mathfrak{B}_{n-1}]\\{} & \displaystyle =\mathbb{1}_{\tau \ge n}E[B_{n}|\mathfrak{B}_{n-1}]\le \mathbb{1}_{\tau \ge n}M.\end{array}\]
Lemma 8.5 from [9] implies
Taking the unconditional expectation of both sides of (14), we obtain
Applying this inequality to (13), we have
(17)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}[T]& \displaystyle \le \mathbb{E}[\theta _{0}]+\mathbb{E}\Bigg[\sum \limits_{n=0}^{\tau }B_{n}\Bigg]=\mathbb{E}[\theta _{0}]+\sum \limits_{n\ge 0}\mathbb{E}[B_{n}\mathbb{1}_{\tau >n}]\\{} & \displaystyle \le \mathbb{E}[\theta _{0}]+\sum \limits_{n\ge 0}M{(1-\gamma )}^{n}=\mathbb{E}[\theta _{0}]+\frac{M}{\gamma }.\end{array}\]
Now we have to get rid of the assumption ${\theta _{0}^{(2)}}=0$. The same calculations as in [9] after formula (20) give us
□
3 Application to the birth–death processes
Consider two time-inhomogeneous processes ${X}^{(1)}$ and ${X}^{(2)}$ with the following transition probabilities on the tth step:
and
We would like to estimate the expectation of the simultaneous renewal time by applying Theorem 1, so we have to check the regularity Condition A and the domination Condition B.
We will need the second moment of the dominating distribution, which is difficult to derive for the chains ${X}^{(1)}$ and ${X}^{(2)}$. The idea, therefore, is to construct a dominating sequence based on some simple homogeneous Markov chain whose renewal sequence is well studied and whose second moment can be calculated easily. The chain closest to the birth–death chains considered here is a random walk on the half-line.
The dominating sequence based on such a random walk is constructed in Lemma 2, and Lemma 3 gives its first and second moments, which we need for Theorem 1.
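Before turning to the formal statements, here is a small computational sketch (our own illustration) of the dominating sequence that Lemma 2 constructs: it computes the first-return probabilities $f_{n}$ of the half-line random walk $\hat{Z}$ by a truncated first-passage recursion, forms $\hat{g}_{n}=f_{n}/p$ for $n>1$ with $\hat{g}_{1}=1$, and evaluates the truncated moments $\hat{\mu }_{1}$, $\hat{\mu }_{2}$. The value of p and the truncation levels are assumptions.

```python
import numpy as np

def first_return_probs(p, n_max=400, k_max=500):
    """First-return-to-0 probabilities f_n for the half-line random walk
    with P(0,1)=1, P(i,i-1)=p, P(i,i+1)=1-p for i>=1 (truncated at k_max)."""
    f = np.zeros(n_max + 1)
    dist = np.zeros(k_max + 2)
    dist[1] = 1.0                        # after the first step the walk is at 1
    for n in range(2, n_max + 1):
        new = np.zeros_like(dist)
        new[:-2] += p * dist[1:-1]       # down-moves from states >= 1
        new[2:] += (1 - p) * dist[1:-1]  # up-moves from states >= 1
        f[n] = new[0]                    # mass absorbed at 0 at time n
        new[0] = 0.0                     # remove absorbed mass before next step
        dist = new
    return f

p = 0.7                                  # assumption: p > 1/2
f = first_return_probs(p)
g_hat = np.zeros_like(f)
g_hat[1] = 1.0
g_hat[2:] = f[2:] / p                    # \hat g_n = f_n / p for n > 1
n = np.arange(len(g_hat))
print("mu1_hat ~", (n * g_hat).sum())        # compare with Lemma 3
print("mu2_hat ~", (n ** 2 * g_hat).sum())
```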
Next, we check the regularity Condition A. First, we assume that, for every $t>0$, ${g_{1}^{(t,l)}}=\alpha _{t0}>0$, and
We will use the Corollary to Theorem 4.2 from [7] in order to check Condition A. It says that if ${g_{1}^{(t)}}>0$ and a dominating sequence exists, then Condition A holds. Moreover, its proof (see [7, p. 12], the inequality for $F(G)$) contains an estimate for γ:
Finally, we can state the following result.
Theorem 2.
Assume that, for the chains with transition probabilities $P_{t}$ and $Q_{t}$ defined above, condition (20) holds and that there exists a p satisfying condition (28) for both chains ${X}^{(1)}$ and ${X}^{(2)}$. If both chains start from the zero state, then the expectation of their simultaneous renewal time satisfies the inequality
where $\hat{\mu }_{1},\hat{\mu }_{2}$ are defined in Lemma 3, and γ is defined in (21).
4 Auxiliary results
Proof.
From Lemma 8.3 of [9] we can derive:
(24)
\[\begin{array}{c}\mathbb{E}[B_{2n+1}|\mathfrak{B}_{2n}]=\displaystyle \sum \limits_{t}\mathbb{E}\big[{R_{t+n_{0}}^{(2)}}\big]\mathbb{1}_{{\tau _{\nu _{2n}}^{(1)}}=t},\\{} \mathbb{E}[B_{2n}|\mathfrak{B}_{2n-1}]=\displaystyle \sum \limits_{t}\mathbb{E}\big[{R_{t+n_{0}}^{(1)}}\big]\mathbb{1}_{{\tau _{\nu _{2n-1}}^{(2)}}=t}.\end{array}\]
At the same time, Theorem 3.1 from [8] gives us the inequality
taking into account the domination condition B.
Lemma 2.
Consider the following time-inhomogeneous birth–death Markov chain $Z_{t}$ with the transition probabilities on the tth step
(26)
\[P_{t}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\alpha _{t0}& 1-\alpha _{t0}& 0& 0& 0& \dots \\{} \alpha _{t1}& 0& 1-\alpha _{t1}& 0& 0& \dots \\{} 0& \alpha _{t2}& 0& 1-\alpha _{t2}& 0& \dots \\{} \dots \end{array}\right)\]
and the time-homogeneous random walk $\hat{Z}_{t}$ with the transition probability matrix
(27)
\[\hat{P}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}0& 1& 0& 0& 0& \dots \\{} p& 0& 1-p& 0& 0& \dots \\{} 0& p& 0& 1-p& 0& \dots \\{} \dots \end{array}\right).\]
Let ${g_{n}^{(t)}}$ be the distribution of the first return to 0 after the moment t for the chain Z, given that it is in the zero state at the moment t.
Assume that there exists some p such that, for all t, i, s, j, the following inequalities hold:
(28)
\[\begin{array}{c}p>1/2,\\{} p(1-p)\ge (1-\alpha _{ti})\alpha _{sj},\hspace{1em}\forall t,s,i,j.\end{array}\]
Denote by $f_{n}$ the renewal probability for the chain $\hat{Z}_{t}$ ($f_{n}$ is the probability of the first return to 0 for the chain $\hat{Z}$ started at 0):
(29)
\[f_{n}=\hat{\mathbb{P}}\{\hat{Z}_{0}=0,\hat{Z}_{1}\ne 0,\dots ,\hat{Z}_{n-1}\ne 0,\hat{Z}_{n}=0\},\]
and let $\hat{g}_{n}=f_{n}/p$, $n>1$, and $\hat{g}_{1}=1$.
Then the sequence $(\hat{g}_{n})_{n\ge 1}$ stochastically dominates ${g_{n}^{(t)}}$, or, in other words,
\[{G_{n}^{(t)}}\le \hat{G}_{n},\hspace{1em}n\ge 1,\]
where ${G_{n}^{(t)}}=\sum _{k>n}{g_{k}^{(t)}}$, $\hat{G}_{n}=\sum _{k>n}\hat{g}_{k}$, and $\mathbb{P}$, $\mathbb{E}$ and $\hat{\mathbb{P}}$, $\hat{\mathbb{E}}$ are the probabilities and expectations on the canonical probability spaces of the chains Z and $\hat{Z}$, respectively.
Proof.
First of all, notice that $(\hat{g}_{n})$ is not a probability distribution. However, this is not a problem since the dominating sequence in our construction does not have to be a distribution.
For $n>1$, consider the event
\[A_{(2n)t}=\big\{Z_{t}=0,\hspace{2.5pt}Z_{t+1}\ne 0,\dots ,Z_{t+2n-1}\ne 0,\hspace{2.5pt}Z_{t+2n}=0\big\}.\]
It can be interpreted as a set of trajectories $\omega =(\omega _{1},\dots ,\omega _{2n})$, $\omega _{i}\in \{+1,-1\}$, where $\omega _{i}=+1$ if $Z_{t+i}$ goes up and $\omega _{i}=-1$ otherwise. It is clear that, in order to return to 0 at time $2n$, there should be exactly n steps up (the first one must be a step up) and n steps down. It is worth mentioning that not every trajectory of length $2n$ that has n steps up and n steps down belongs to $A_{(2n)t}$ because some of them might visit 0 before $2n+t$, which is not acceptable for $A_{(2n)t}$. The exact number of such trajectories in $A_{(2n)t}$ is unknown and not important for this proof. What is important is that each $\omega \in A_{(2n)t}$ corresponds to the same trajectory for the chain $\hat{Z}$. This means that summing $\hat{\mathbb{P}}\{\omega \}$ over all $\omega \in A_{(2n)t}$ gives the probability $f_{2n}$. Strictly speaking, the chains Z and $\hat{Z}$ are defined on different probability spaces, but there is an obvious correspondence between the trajectories, and the difference is only in the probabilities. So we can use the same symbol ω for both.
Since ω has exactly n steps up and n steps down, its probability is a product of n factors of the form $(1-\alpha _{t_{i}n_{i}})$ and n factors of the form $\alpha _{t_{j},n_{i}+1}$, where $i,j=\overline{1,n}$. Notice that some of the $n_{i}$ can coincide.
This means that, after reordering, the probability of such an ω can be represented as
for some $t_{i},n_{i},t_{j}$. We emphasize again that the terms in that product may repeat, but this is not important for this proof.
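The combinatorial step above can be verified mechanically for small n. The following sketch is our own illustration, with values of $\alpha _{ti}$ and p assumed so that condition (28) holds: it enumerates all admissible trajectories of length $2n$, computes their probabilities under Z and under $\hat{Z}$, and checks a convenient per-trajectory inequality $\mathbb{P}\{\omega \}\le \hat{\mathbb{P}}\{\omega \}/p$, which after summation over $A_{(2n)t}$ gives ${g_{2n}^{(t)}}\le f_{2n}/p=\hat{g}_{2n}$.

```python
from itertools import product

# Illustrative parameters satisfying condition (28): p > 1/2 and
# p*(1-p) >= (1 - alpha_{t,i}) * alpha_{s,j} for all t, s, i, j (assumptions).
p = 0.7
def alpha(t, i):
    return 0.8 + 0.05 * ((t + i) % 3)     # all values lie in [0.8, 0.9]

def admissible(steps):
    """Trajectory of +-1 steps from 0 that stays positive and ends at 0."""
    h = 0
    for k, s in enumerate(steps):
        h += s
        if h < 0 or (h == 0 and k < len(steps) - 1):
            return False
    return h == 0

def prob_Z(steps, t):
    """Probability of the trajectory for the chain Z started at 0 at time t:
    an up-step from state h at time s costs 1 - alpha(s, h), a down-step costs alpha(s, h)."""
    h, pr = 0, 1.0
    for k, s in enumerate(steps):
        time = t + k
        pr *= (1 - alpha(time, h)) if s == 1 else alpha(time, h)
        h += s
    return pr

def prob_Zhat(steps):
    """Probability of the same trajectory for the random walk Z_hat."""
    h, pr = 0, 1.0
    for s in steps:
        pr *= (1.0 if s == 1 else 0.0) if h == 0 else (p if s == -1 else 1 - p)
        h += s
    return pr

t, n = 5, 3   # check returns after 2n steps following an arbitrary renewal time t
for steps in product((1, -1), repeat=2 * n):
    if admissible(steps):
        assert prob_Z(steps, t) <= prob_Zhat(steps) / p + 1e-15
print("per-trajectory domination verified for 2n =", 2 * n)
```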
Proof.
First, we note that since $\hat{g}_{n}=f_{n}/p$, $n>1$, and $\hat{g}_{1}=1=1+f_{1}$, we have $\hat{\mu }_{1}=\mu _{1}/p+1$ and $\hat{\mu }_{2}=\mu _{2}/p+1$, where $\mu _{1}$ and $\mu _{2}$ are the first and second moments of the probability distribution $(f_{n})$, $n\ge 1$.
Here $F(s)={\sum _{n\ge 1}}f_{n}{s}^{n}$ is the generating function of $(f_{n})$, so that ${F^{\prime }}(1)={\sum _{n\ge 1}}nf_{n}$ and ${F^{\prime\prime }}(1)={\sum _{n\ge 1}}n(n-1)f_{n}=\mu _{2}-\mu _{1}$. So, $\mu _{1}={F^{\prime }}(1)$ and $\mu _{2}={F^{\prime\prime }}(1)+\mu _{1}$. □
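As a final numerical sanity check (our own illustration, not part of the paper), one can compute the $f_{n}$ for a concrete p, assemble the polynomial $F(s)={\sum _{n}}f_{n}{s}^{n}$, and confirm that ${F^{\prime }}(1)$ and ${F^{\prime\prime }}(1)+{F^{\prime }}(1)$ agree with the moments obtained by direct summation. The value of p and the truncation levels are assumptions.

```python
import numpy as np

def first_return_probs(p, n_max=600, k_max=700):
    """First-return-to-0 probabilities f_n for the random walk Z_hat
    (same first-passage recursion as in the earlier sketch)."""
    f = np.zeros(n_max + 1)
    dist = np.zeros(k_max + 2)
    dist[1] = 1.0
    for n in range(2, n_max + 1):
        new = np.zeros_like(dist)
        new[:-2] += p * dist[1:-1]
        new[2:] += (1 - p) * dist[1:-1]
        f[n] = new[0]
        new[0] = 0.0
        dist = new
    return f

p = 0.7                       # assumption, p > 1/2
f = first_return_probs(p)
n = np.arange(len(f))
mu1 = (n * f).sum()           # first moment by direct summation
mu2 = (n ** 2 * f).sum()      # second moment by direct summation

F = np.polynomial.Polynomial(f)        # F(s) = sum_n f_n s^n
print("mu1 vs F'(1):        ", mu1, F.deriv(1)(1.0))
print("mu2 vs F''(1)+F'(1): ", mu2, F.deriv(2)(1.0) + F.deriv(1)(1.0))
```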