Modern Stochastics: Theory and Applications

On geometric recurrence for time-inhomogeneous autoregression
Volume 10, Issue 3 (2023), pp. 313–341
Vitaliy Golomoziy

https://doi.org/10.15559/23-VMSTA228
Pub. online: 3 May 2023    Type: Research Article    Open Access

Received: 8 October 2022
Revised: 18 March 2023
Accepted: 18 March 2023
Published: 3 May 2023

Abstract

The time-inhomogeneous autoregressive model AR(1) is studied, that is, the process of the form ${X_{n+1}}={\alpha _{n}}{X_{n}}+{\varepsilon _{n}}$, where the ${\alpha _{n}}$ are constants and the ${\varepsilon _{n}}$ are independent random variables. Conditions on ${\alpha _{n}}$ and on the distributions of ${\varepsilon _{n}}$ are established that guarantee the geometric recurrence of the process. This result is applied to estimating the stability of the n-step transition probabilities of two autoregressive processes ${X^{(1)}}$ and ${X^{(2)}}$, assuming that both ${\alpha _{n}^{(i)}}$, $i\in \{1,2\}$, and the distributions of ${\varepsilon _{n}^{(i)}}$, $i\in \{1,2\}$, are close enough.

1 Introduction

The classical autoregressive model ${X_{n+1}}=\alpha {X_{n}}+{\varepsilon _{n}}$, where α is a constant and ${\varepsilon _{n}}$ are i.i.d. standard normal random variables, is well studied, and it is known, in particular, that for $\alpha \in (0,1)$ the corresponding Markov chain is geometrically recurrent, positive and ergodic.
However, in real applications we cannot always guarantee that all ${\varepsilon _{n}}$ are standard normal i.i.d. random variables. In addition, the parameter α may not be the same for all ${X_{n}}$. From the theoretical standpoint, it is interesting to study the behavior of such a model when α is “oscillating” around unity and the ${\varepsilon _{n}}$ are independent but not necessarily normal or identically distributed.
So we come to the model ${X_{n+1}}={\alpha _{n}}{X_{n}}+{\varepsilon _{n}}$, where the ${\alpha _{n}}$ are numbers “around” unity and the ${\varepsilon _{n}}$ are independent random variables. Our goal is to find conditions which guarantee the geometric recurrence of such a chain and to study its stability in terms of the stability of n-step transition probability functions.
Usually, when studying the recurrence of Markov chains, we use techniques developed in the classical theory based on a drift condition (see, for example, [5, 24]). However, the process of interest is time-inhomogeneous, so the aforementioned techniques are not applicable. For this reason we use a particular time-inhomogeneous drift condition developed in the paper [13]. In order to establish geometric recurrence, we use results from [12].
The general theory for inhomogeneous Markov chains is much more involved than its homogeneous counterpart. One of the most popular instruments in research is the coupling method (see classical books by T. Lindvall [22] and H. Thorisson [25]). An interested reader may find an example of applying the coupling method to the stability of Markov chains on a countable phase space in both homogeneous and inhomogeneous cases in the papers [15, 16, 18, 20]. The papers [1, 3, 7] are devoted to the calculation of convergence rates for subgeometrically ergodic, homogeneous Markov chains, and [4] addresses the inhomogeneous case. All these works use the coupling method. We heavily rely on the coupling method in the present paper. We present a modified coupling construction that enables us to couple two different inhomogeneous chains. Such modified coupling generates an inhomogeneous renewal process which we use in our analysis. We can refer to the papers [14, 17] for more details about inhomogeneous renewal processes. Generally speaking, renewal processes have been used for a long time to study Markov chains. See, for example, [2]. The papers [6, 8, 9, 19, 21] are also related to the inhomogeneous case. Another interesting example of using the renewal theory for the analysis of Markov chains with applications in statistics is [23]. Since general inhomogeneous renewal theory is not well-developed, we use some techniques that are specially adapted to the problem under consideration. One of the key techniques that helps to handle inhomogeneity is stochastic domination. Building a dominating sequence is a key aspect of our development. See more details about this subject in [11] and [10].
Summarizing the methods described above, we can outline the plan of this paper. First, we use the modified drift condition from [13] to establish the geometric recurrence of the autoregressive model. Then we obtain an estimate for ${\mathbb{E}_{x}}[{\beta ^{{\sigma _{C}}}}]$ for some $\beta >1$, where ${\sigma _{C}}$ is the first return time to some “recurrence” set C. Second, we use the result from [12] to establish a similar estimate for a pair of autoregressive processes, i.e. for ${\mathbb{E}_{x,y}}[{\beta ^{{\sigma _{C\times C}}}}]$. Third, we introduce the coupling scheme for two inhomogeneous processes and employ the renewal theory for the stability analysis. Finally, we construct a dominating sequence (see Lemmas 1–4) and use it to obtain the stability estimate. We will show that the n-step transition probabilities of the two inhomogeneous processes are close, assuming that their one-step transition probabilities are close.
This paper is organized as follows. Section 2 contains the main definitions and notation. In Section 3, we present the geometric recurrence result. Section 4 contains the main result, the stability estimate. Section 5 includes technical auxiliary lemmas. Finally, the appendix contains an example of calculating all the necessary constants in a practical application.

2 Definitions and notation

Throughout this paper, we assume that all random variables are defined on a common probability space $(\Omega ,ℱ,\mathbb{P})$. In Section 3 we deal with an inhomogeneous Markov chain ${({X_{n}})_{n\ge 0}}$ with values in a general phase space $(E,ℰ)$. We denote the chain’s transition probability by
\[ {P_{n}}(x,A)=\mathbb{P}\left\{{X_{n+1}}\in A|{X_{n}}=x\right\},\hspace{1em}x\in E,A\in ℰ.\]
When it is convenient, we will understand ${P_{n}}$ as an operator acting on an appropriate functional space as
\[ {P_{n}}f(x)={\int _{E}}{P_{n}}(x,dy)f(y).\]
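For instance, for an autoregressive chain of the form ${X_{n+1}}={\alpha _{n+1}}{X_{n}}+{W_{n+1}}$ with ${W_{n+1}}\sim {\Gamma _{n+1}}$ (the model studied in Section 3), the transition probability takes the explicit form
\[ {P_{n}}(x,A)=\mathbb{P}\left\{{\alpha _{n+1}}x+{W_{n+1}}\in A\right\}={\Gamma _{n+1}}(A-{\alpha _{n+1}}x),\]
where $A-{\alpha _{n+1}}x$ denotes the shifted set $\{y-{\alpha _{n+1}}x:y\in A\}$.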
By ${\mathbb{P}_{x}}\{X\in \cdot \}$ we denote the probability measure generated by the process ${({X_{n}})_{n\ge 0}}$ which starts at $x\in E$. Thus, for example
\[ {\mathbb{P}_{x}}\{{X_{n}}\in A,{X_{m}}\in B\}=\mathbb{P}\{{X_{n}}\in A,{X_{m}}\in B|{X_{0}}=x\}.\]
We denote by ${\mathbb{E}_{x}}$ the corresponding expectation. Strictly speaking, ${\mathbb{P}_{x}}$ is defined on a measurable space $({E^{\infty }},{ℰ^{\infty }})$, and for every $w=({w_{0}},{w_{1}},\dots )\in {E^{\infty }}$ we have a canonical variable ${X_{n}}(w)={w_{n}}$. Thus, rigorously speaking ${X_{n}}$, in the context of $\mathbb{P}$ and ${\mathbb{P}_{x}}$, represents two different random variables defined on different probability spaces. However, such notation is very convenient and helps to develop the proper intuition.
Next, we need a notation for a process conditioned on ${X_{n}}=x$, i.e.
\[ \mathbb{P}\{{({X_{n+m}})_{m\ge 0}}\in \cdot |{X_{n}}=x\}.\]
Note that in the homogeneous case, this would be the same ${\mathbb{P}_{x}}$ introduced above. But in our inhomogeneous case, we must maintain the index n in the notation since, generally speaking, probabilities
\[\mathbb{P}\{{({X_{n+m}})_{m\ge 0}}\in \cdot |{X_{n}}=x\},\hspace{2.5pt}\text{and}\hspace{2.5pt}\mathbb{P}\{{({X_{k+m}})_{m\ge 0}}\in \cdot |{X_{k}}=x\}\]
are not equal if $k\ne n$. At the same time, we would like to stick to the homogeneous notation as closely as possible. Thus, with some abuse of notation, we introduce probabilities ${\mathbb{P}_{x}^{n}}\{{({X_{k}})_{k\ge 0}}\in \cdot \}$, which mean
\[ {\mathbb{P}_{x}^{n}}\left\{{({X_{k}})_{k\ge 0}}\in \cdot \right\}=\mathbb{P}\left\{{({X_{n+k}})_{k\ge 0}}\in \cdot |{X_{n}}=x\right\}.\]
We will denote the corresponding expectation by ${\mathbb{E}_{x}^{n}}$. It is important to note that, unlike the homogeneous case, ${\mathbb{P}_{x}^{n}}$ and ${\mathbb{P}_{x}^{m}}$ are probabilities defined on different probability spaces. Thus, we can have a “homogeneous-style” derivation like
\[ {\mathbb{E}_{x}}\left[f({X_{n+k}})\right]={\mathbb{E}_{x}}\left[{\mathbb{E}_{{X_{n}}}^{n}}\left[f({X_{k}})\right]\right].\]
In Section 4 and later we deal with a pair of inhomogeneous Markov chains ${\left({X_{n}^{(1)}},{X_{n}^{(2)}}\right)_{n\ge 0}}$. In this case, we will add a subscript i to probabilities and expectations related to the chain ${X^{(i)}}$, $i\in \{1,2\}$, thus having ${P_{i,n}}$, the transition probability, ${\mathbb{P}_{i,x}}$ and ${\mathbb{P}_{i,x}^{n}}$, the canonical probabilities generated by the chain ${X_{n}^{(i)}}$. In case we need to operate with both chains simultaneously, we will use the notation
\[\begin{array}{l}\displaystyle {\mathbb{P}_{x,y}^{n}}\left\{{\left({X_{m}^{(1)}},{X_{m}^{(2)}}\right)_{m\ge 0}}\in \cdot \right\}\\ {} \displaystyle ={\mathbb{P}_{x,y}}\left\{{\left({X_{n+m}^{(1)}},{X_{n+m}^{(2)}}\right)_{m\ge 0}}\in \cdot \Bigg|\left({X_{n}^{(1)}},{X_{n}^{(2)}}\right)=(x,y)\right\}.\end{array}\]
As usual, we denote by ${\mathbb{E}_{x,y}^{n}}$ the corresponding expectation.

3 Geometric recurrence for inhomogeneous autoregression chains

In this section we deal with a single time-inhomogeneous Markov chain ${({X_{n}})_{n\ge 0}}$ with values in $(\mathbb{R},ℬ)$, where ℬ is the Borel σ-field.
The main result of this section is presented in the following theorem.
Theorem 1.
Consider a time-inhomogeneous Markov chain ${({X_{n}})_{n\ge 0}}$ with values in $(\mathbb{R},ℬ)$ starting from a fixed ${x_{0}}\in \mathbb{R}$ and having the form
\[ {X_{n+1}}={\alpha _{n+1}}{X_{n}}+{W_{n+1}},\hspace{1em}n\ge 0,\]
where ${\alpha _{n}}\ge 0$ and ${W_{n}}$, $n\ge 1$, are independent and centered random variables defined on the same probability space $(\Omega ,ℱ,\mathbb{P})$. Denote their distribution functions and tails by ${\Gamma _{n}}(x)$ and ${\bar{\Gamma }_{n}}(x)=1-{\Gamma _{n}}(x)={\textstyle\int _{x}^{\infty }}{\Gamma _{n}}(dy)$, respectively. Assume the following conditions hold.
  • 1. There exists $\delta >0$ such that
    \[ {\sum \limits_{k=1}^{\infty }}{\left({\prod \limits_{j=1}^{k}}\max \{{\alpha _{j}}+\delta ,1\}\right)^{-1}}{(1-{\alpha _{k}}-\delta )^{+}}=\infty .\]
  • 2. $G:={\sup _{n\ge 1}}\mathbb{E}{W_{n}^{+}}<\infty $.
Then for all $x\in \mathbb{R}$ and for all $n\in \mathbb{N}$:
(1)
\[ {\mathbb{E}_{x}^{n}}\left[{\prod \limits_{j=1}^{{\sigma _{C}}}}\frac{1}{{\alpha _{j}}+\delta }\right]\le 1+|x|+\left(2G+1-\delta \right){𝟙_{[-c,c]}}(x).\]
Here
(2)
\[ c:=\max \left\{\frac{2G+1}{\delta }-1,1\right\},\]
${\sigma _{C}}=\inf \left\{n\ge 1|{X_{n}}\in C\right\}$ (stipulating that $\inf \varnothing =\infty $) is the first return time to the set $C=[-c,c]$.
Proof.
The main tool in the proof is the drift condition from the paper [13]. As a test function we will consider
\[ V(x)=1+|x|.\]
We will use this function for all $n\ge 1$. We have then
\[\begin{aligned}{}{P_{n}}V(x)& ={\int _{\mathbb{R}}}(1+|{\alpha _{n}}x+y|){\Gamma _{n}}(dy)\\ {} & =1-{\int _{(-\infty ,-{\alpha _{n}}x]}}({\alpha _{n}}x+y){\Gamma _{n}}(dy)+{\int _{(-{\alpha _{n}}x,\infty )}}({\alpha _{n}}x+y){\Gamma _{n}}(dy).\end{aligned}\]
For $x\ge 0$ we can derive
\[\begin{aligned}{}{P_{n}}V(x)& =1-2{\int _{(-\infty ,-{\alpha _{n}}x]}}({\alpha _{n}}x+y){\Gamma _{n}}(dy)+{\int _{\mathbb{R}}}({\alpha _{n}}x+y){\Gamma _{n}}(dy)\\ {} & =1+{\alpha _{n}}x-2{\alpha _{n}}x{\Gamma _{n}}(-{\alpha _{n}}x)+2{\int _{(-\infty ,-{\alpha _{n}}x]}}(-y){\Gamma _{n}}(dy)\\ {} & \le {\alpha _{n}}(1+|x|)+(1-{\alpha _{n}})+2\mathbb{E}{W_{n}^{-}}\le {\alpha _{n}}V(x)+(1-{\alpha _{n}})+2G,\end{aligned}\]
where we used the fact that $\mathbb{E}{W_{n}^{+}}-\mathbb{E}{W_{n}^{-}}=\mathbb{E}{W_{n}}=0$ together with Condition 2.
Put
\[ {\lambda _{n}}={\alpha _{n}}+\delta ,\]
and write
(3)
\[\begin{aligned}{}{P_{n}}V(x)& \le {\lambda _{n}}V(x)+(1-{\alpha _{n}})+2G-({\lambda _{n}}-{\alpha _{n}})V(x)\\ {} & ={\lambda _{n}}V(x)+(1-{\alpha _{n}})+2G-\delta \left(1+|x|\right).\end{aligned}\]
Now we select a constant c such that
\[ (1-{\alpha _{n}})+2G-\delta (1+c)\le 0\hspace{1em}\text{for all}\hspace{2.5pt}n;\]
since ${\alpha _{n}}\ge 0$, it suffices that $1+2G\le \delta (1+c)$, which holds, for example, for
\[ c=\max \left\{\frac{2G+1}{\delta }-1,1\right\}.\]
Thus, we arrive at the following inequality for all $x\ge 0$:
(4)
\[ {P_{n}}V(x)\le {\lambda _{n}}V(x)+(2G+1-\delta ){𝟙_{[-c,c]}}(x).\]
By exactly the same reasoning applied to $x\le 0$ we see that inequality (4) is valid for all $x\in \mathbb{R}$.
Condition 1, along with inequality (4), ensures the conditions of Theorem 4 with the set $C=[-c,c]$. The desired result now follows from Theorem 4.  □
Remark 1.
Condition 1 in Theorem 1 is a relaxed form of the separation-from-unity condition:
\[ \underset{n}{\sup }{\alpha _{n}}<1.\]
Clearly, in this case, we can select $\delta =(1-{\sup _{n}}{\alpha _{n}})/2$, so that ${\alpha _{n}}+\delta <1$ for all n, and Condition 1 is equivalent to
\[ \sum \limits_{n\ge 1}(1-{\alpha _{n}}-\delta )=\infty \]
which is obviously true since $1-{\alpha _{n}}-\delta \ge (1-{\sup _{n}}{\alpha _{n}})/2>0$. The novelty of this result is that we allow ${\alpha _{n}}$ to be greater than 1 from time to time.
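For a concrete illustration (the constants here are chosen only for the sake of the example), take $\delta =0.1$ and let ${\alpha _{n}}=1.5$ when n is a power of 2 and ${\alpha _{n}}=0.5$ otherwise. Then $\max \{{\alpha _{j}}+\delta ,1\}$ equals 1.6 at the powers of 2 and 1 elsewhere, so
\[ {\prod \limits_{j=1}^{k}}\max \{{\alpha _{j}}+\delta ,1\}={1.6^{\lfloor {\log _{2}}k\rfloor }}\le {k^{{\log _{2}}1.6}},\hspace{1em}{\log _{2}}1.6\approx 0.68,\]
while ${(1-{\alpha _{k}}-\delta )^{+}}=0.4$ off the powers of 2. The series in Condition 1 therefore dominates a multiple of $\textstyle\sum {k^{-{\log _{2}}1.6}}=\infty $, so Condition 1 holds although ${\alpha _{n}}>1$ infinitely often.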
Remark 2.
Since the random variables ${W_{n}}$ are centered, Condition 2 is equivalent to
\[ \underset{n}{\sup }\mathbb{E}|{W_{n}}|<\infty .\]
The following immediate corollary could be useful in practical applications.
Corollary 1.
Assume the conditions of Theorem 1 hold and there exist two constants $\psi >1$ and ${C_{0}}>0$ such that
\[ {\prod \limits_{k=1}^{n}}({\alpha _{k}}+\delta )\le {C_{0}}{\psi ^{-n}}.\]
Then, for all $x\in \mathbb{R}$,
\[ {\mathbb{E}_{x}^{n}}\left[{\psi ^{{\sigma _{C}}}}\right]\le {C_{0}}\left(1+|x|+\left(2G+1-\delta \right){𝟙_{[-c,c]}}(x)\right).\]
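Indeed, the assumed product bound applied with $n={\sigma _{C}}$ gives ${\psi ^{{\sigma _{C}}}}\le {C_{0}}{\textstyle\prod _{j=1}^{{\sigma _{C}}}}{({\alpha _{j}}+\delta )^{-1}}$ pointwise, so it remains to take the expectation ${\mathbb{E}_{x}^{n}}$ and apply inequality (1).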
Remark 3.
In the homogeneous theory, geometric recurrence implies a corresponding chain’s positivity and geometric ergodicity. However, in the time-inhomogeneous case, such a conclusion is wrong in general since essentially inhomogeneous chains (which are not asymptotically homogeneous) usually do not have a stationary distribution.

4 Stability of general autoregressive models

In this section we consider a set of independent random variables ${W_{n}^{(i)}}:\Omega \to \mathbb{R}$ with distributions ${\Gamma _{n}^{(i)}}(A)=\mathbb{P}\{{W_{n}^{(i)}}\in A\}$ and a pair of Markov chains
(5)
\[ {X_{n}^{(i)}}={\alpha _{n}^{(i)}}{X_{n-1}^{(i)}}+{W_{n}^{(i)}},\]
$n\ge 1$, $i\in \{1,2\}$, ${x_{0}^{(i)}}\in \mathbb{R}$.
Our goal is to demonstrate that
\[ \underset{n}{\sup }{\left\| \mathbb{P}\left\{{X_{n}^{(1)}}\in \cdot \right\}-\mathbb{P}\left\{{X_{n}^{(2)}}\in \cdot \right\}\right\| _{TV}}\to 0,\hspace{1em}\varepsilon \to 0,\]
where ε quantifies how “close” the families $\left\{{\Gamma _{n}^{(1)}},n\ge 1\right\}$ and $\left\{{\Gamma _{n}^{(2)}},n\ge 1\right\}$ are.
We define ε by
(6)
\[ \varepsilon :=\frac{1}{2}\underset{x\in \mathbb{R},n\ge 1,A\in ℬ}{\sup }\left|{\Gamma _{n}^{(1)}}\left(A-{\alpha _{n}^{(1)}}x\right)-{\Gamma _{n}^{(2)}}\left(A-{\alpha _{n}^{(2)}}x\right)\right|.\]
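Note that, by (5), $\mathbb{P}\{{X_{n}^{(i)}}\in A|{X_{n-1}^{(i)}}=x\}={\Gamma _{n}^{(i)}}(A-{\alpha _{n}^{(i)}}x)$, so ε is exactly half of the largest possible discrepancy between the one-step transition probabilities of the two chains, taken over all times, starting points and Borel sets.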
It is important to emphasize that the one-step transition probabilities of the two chains do not converge to each other as $n\to \infty $: they remain different. Strictly speaking, ε is an internal characteristic of a pair of processes, and “changing” ε means shifting to another pair of processes. This can be illustrated by the example that we will use throughout this paper, which makes the dependence on ε explicit and assumes the following representation:
(7)
\[ \begin{array}{l}{\Gamma _{n}^{(1)}}=(1-\tilde{\varepsilon }){\Gamma _{n}}+\tilde{\varepsilon }{R_{n}^{(1)}},\\ {} {\Gamma _{n}^{(2)}}=(1-\tilde{\varepsilon }){\Gamma _{n}}+\tilde{\varepsilon }{R_{n}^{(2)}},\end{array}\]
where $\tilde{\varepsilon }$ is a small constant. Here we can think of ${\Gamma _{n}}$ as a “common part” of ${\Gamma _{n}^{(1)}}$ and ${\Gamma _{n}^{(2)}}$, while ${R_{n}^{(i)}}$ stands for the residual part. This particular example can also be considered as a two-step model, in which we flip a coin with probability of “success” $1-\tilde{\varepsilon }$, and if the flip was “successful” we draw the next ${W_{n}^{(i)}}$ from the common distribution ${\Gamma _{n}}$, otherwise from the residual distribution ${R_{n}^{(i)}}$. The connection between $\tilde{\varepsilon }$ and ε in (6) is immediate:
\[ \varepsilon =\frac{1}{2}\underset{x\in \mathbb{R},n\ge 1,A\in ℬ}{\sup }\left|\tilde{\varepsilon }{R_{n}^{(1)}}\left(A-{\alpha _{n}^{(1)}}x\right)-\tilde{\varepsilon }{R_{n}^{(2)}}\left(A-{\alpha _{n}^{(2)}}x\right)\right|\le \tilde{\varepsilon }.\]
Our goal is to find conditions which ensure the proximity of n-step transition probabilities for all n, given only one-step proximity. Additionally, we want to demonstrate that such proximity is approximately of order ε, and that all involved constants can be calculated in practical applications.
For this purpose we construct a coupling for the chains ${X^{(1)}}$ and ${X^{(2)}}$. First, we assume that the conditions of Theorem 3 hold, let $C=[-c,c]$ be the corresponding set, and introduce the following condition.
Condition M (Minorization condition).
Assume that there exists a sequence of real numbers $\{{a_{n}},n\ge 1\}$, ${a_{n}}\in (0,1)$, and a sequence of probability measures ${\nu _{n}}$ on $(\mathbb{R},ℬ)$ such that:
\[\begin{array}{l}\displaystyle \underset{x\in C}{\inf }{\Gamma _{n}^{(i)}}\left(A-{\alpha _{n}^{(i)}}x\right)\ge {a_{n}}{\nu _{n}}(A),\hspace{1em}i\in \{1,2\},\\ {} \displaystyle \underset{n}{\inf }{\nu _{n}}(C)>0,\\ {} \displaystyle 0<{a_{\ast }}:=\underset{n}{\inf }{a_{n}}\le {a_{n}}\le {a^{\ast }}:=\underset{n}{\sup }{a_{n}}<1,\end{array}\]
for all $A\in ℬ$ and $n\ge 1$.
Remark 4.
Going forward we will require Condition M for both $\left\{{\Gamma _{n}^{(1)}},n\ge 1\right\}$ and $\left\{{\Gamma _{n}^{(2)}},n\ge 1\right\}$ with the same $\{{a_{n}},n\ge 1\}$ and ${\nu _{n}}$. At the same time, in the scheme (7) it is sufficient to require Condition M only for the “common part” ${\Gamma _{n}}$. This illustrates that the parameters $\{{a_{n}},n\ge 1\}$ and the measures ${\nu _{n}}$ do not depend on ε.
Second, we introduce substochastic kernels
(8)
\[ {Q_{n}}(x,\cdot )={\min _{i}}\left\{{\Gamma _{n}^{(i)}}\left(\cdot -{\alpha _{n}^{(i)}}x\right)\right\},\]
where min should be understood as a minimum of two measures (see [5], Proposition D.2.8 as an example of a formal definition).
Note that by the definition of ε and the elementary equality $a\wedge b=\frac{a+b-|a-b|}{2}$,
\[ {Q_{n}}(x,\mathbb{R})\ge 1-\varepsilon .\]
We denote the residual substochastic kernels by
\[ {R_{n}^{(i)}}(x,A)={\Gamma _{n}^{(i)}}\left(A-{\alpha _{n}^{(i)}}x\right)-{Q_{n}}(x,A),\]
so that
\[ {R_{n}^{(i)}}(x,\mathbb{R})\le \varepsilon .\]
Here, ${Q_{n}}$ is a general analogue of the “common part” in (7).
To prove the main result, we will need some regularity conditions on ${Q_{n}}$.
Condition T (Tails condition).
Denote ${A_{m}}=\{y\in \mathbb{R}:|y|\in [m,m+1)\}$. Assume that there exist sequences $\{{\hat{S}_{n}},n\ge 1\}$, $\{{\hat{r}_{n}},n\ge 1\}$, such that
\[\begin{array}{l}\displaystyle {Q^{t,k}}(x,{A_{m}})\le {Q^{t,k}}(x,\mathbb{R}){\hat{S}_{m}},\hspace{1em}x\in C,\\ {} \displaystyle {\nu _{t}}{Q^{t,k}}({A_{m}})\le {\nu _{t}}{Q^{t,k}}(\mathbb{R}){\hat{S}_{m}},\\ {} \displaystyle \hat{m}=\sum \limits_{m\ge 1}{\hat{S}_{m}}<\infty ,\\ {} \displaystyle \underset{i,k,x\in {A_{m}}}{\sup }{\int _{\mathbb{R}}}{R_{k}^{(i)}}(x,dy)|y|\le {\hat{r}_{m}},\\ {} \displaystyle \Delta =\sum \limits_{m\ge 1}{\hat{r}_{m}}{\hat{S}_{m}}<\infty .\end{array}\]
The motivation for this condition is as follows. We may think of ${\tilde{Q}^{t,k}}(x,\cdot )=\frac{{Q^{t,k}}(x,\cdot )}{{Q^{t,k}}(x,\mathbb{R})}$ as a proper k-step transition probability, which is true in the case of the scheme (7). Generally speaking, this is not true, and we need additional regularity conditions to ensure that the function $x\mapsto {Q_{t}}(x,\mathbb{R})$ does not vary too much (in the scheme (7), ${Q_{t}}(x,\mathbb{R})=1-\tilde{\varepsilon }$ and does not depend on x at all). We do not need ${\tilde{Q}_{t}}={\tilde{Q}^{t,1}}$ to be a proper Markov kernel in our derivations, so we do not impose any additional conditions. But we can think of ${\tilde{Q}_{t}}$ as a Markov kernel and of ${\tilde{Q}^{t,k}}$ as a k-step transition probability function to develop the intuition behind Condition T. First, imagine that ${\tilde{Q}_{t}}(x,\cdot )$ is a homogeneous Markov kernel driving the “coupling chain”, so that ${\tilde{Q}_{t}}(x,\cdot )=\tilde{Q}(x,\cdot )$. Assume the kernel is irreducible and geometrically recurrent (and thus positive), so there exists an invariant probability measure, say π. Denote ${\bar{\pi }_{m}}=\pi ({A_{m}})$. We can write
\[\begin{aligned}{}\sum \limits_{m\ge 0}{\tilde{Q}^{k}}(x,{A_{m}})& =\sum \limits_{m\ge 0}\left({\tilde{Q}^{k}}(x,{A_{m}})-{\bar{\pi }_{m}}+{\bar{\pi }_{m}}\right)\le \sum \limits_{m\ge 0}\left(|{\tilde{Q}^{k}}(x,\cdot )-\pi |({A_{m}})+{\bar{\pi }_{m}}\right)\\ {} & \le 1+|{\tilde{Q}^{k}}(x,\cdot )-\pi |(\mathbb{R})=1+\left\| {\tilde{Q}^{k}}(x,\cdot )-\pi \right\| _{TV}\\ {} & \le 1+\sum \limits_{k\ge 1}\left\| {\tilde{Q}^{k}}(x,\cdot )-\pi \right\| _{TV}<\infty .\end{aligned}\]
Assuming that C is a geometrically recurrent set, we may conclude
\[ \underset{x\in C,k\ge 1}{\sup }\sum \limits_{m\ge 0}{\tilde{Q}^{k}}(x,{A_{m}})\le 1+\underset{x\in C}{\sup }\sum \limits_{k\ge 1}\left\| {\tilde{Q}^{k}}(x,\cdot )-\pi \right\| _{TV}<\infty .\]
This demonstrates that condition $\frac{{Q^{t,k}}(x,{A_{m}})}{{Q^{t,k}}(x,\mathbb{R})}\le {\hat{S}_{m}}$, $x\in C$, is reasonable.
Let us now discuss the condition on ${R_{k}^{(i)}}(x,dy)$. This condition requires that the “shifted” residual process does not go far away from its starting position. For example, if ${R_{k}^{(i)}}$ are also centred and have the “autoregressive” form, ${R_{k}^{(i)}}(x,dy)={F_{k}^{(i)}}(dy-x)$, the derivation of Section 3 demonstrates that it is reasonable to expect ${\hat{r}_{n}}=O(n)$. In this case, condition $\Delta <\infty $ is reduced to
\[ \underset{x\in C}{\sup }\sum \limits_{n\ge 1}n\left\| {Q^{n}}(x,\cdot )-\pi (\cdot )\right\| _{TV}<\infty ,\]
and
\[ {\int _{\mathbb{R}}}|y|\pi (dy)<\infty ,\]
which are also reasonable conditions satisfied in the case of a geometrically ergodic autoregressive model. Moreover, the homogeneous theory tells us that the existence of a small set C (in the sense of Condition M) with a finite second moment of the return time is sufficient for the above conditions to hold.
It is easy to show that a similar motivation remains valid for inhomogeneous autoregressive models. Assume ${\tilde{Q}_{n}}(x,\cdot )={\tilde{\Gamma }_{n}}(\cdot -{\alpha _{n}}x)$, so that we can construct a chain ${({Y_{n}})_{n\ge 0}}$ of the form ${Y_{n+1}}={\alpha _{n+1}}{Y_{n}}+{U_{n+1}}$, $n\ge 0$, where ${U_{n}}\sim {\tilde{\Gamma }_{n}}$ are independent and centred random variables. The transition kernels of the chain $({Y_{n}})$ are ${\tilde{Q}_{n}}$. Then Chebyshev’s inequality gives
\[ {\tilde{Q}^{t,k}}(x,{A_{m}})\le {\mathbb{P}_{x}^{t}}\left\{|{Y_{k}}|\ge m\right\}\le \frac{{\mathbb{E}_{x}^{t}}[|{Y_{k}}{|^{2}}]}{{m^{2}}}.\]
Next, we write
\[ {Y_{n}}={\sum \limits_{k=0}^{n}}\left({\prod \limits_{j=k+1}^{n}}{\alpha _{j}}\right){U_{k}},\]
here ${U_{0}}\in [-c,c]$ is a fixed (nonrandom) initial state and ${\textstyle\prod _{n+1}^{n}}=1$. Assume that ${U_{n}},n\ge 1$, have uniformly bounded second moments and denote a common bound by ${\mu _{2}}$. We can then set
\[ {\hat{S}_{m}}=\frac{\max \{{\mu _{2}},{c^{2}}\}}{{m^{2}}}\underset{n}{\sup }{\sum \limits_{k=0}^{n}}{\prod \limits_{j=k+1}^{n}}{\alpha _{j}^{2}}.\]
So, we can conclude that the existence of the uniformly bounded second moment (or any higher moments) for distributions ${\tilde{\Gamma }_{n}}$ along with the condition
(9)
\[ \underset{n}{\sup }{\sum \limits_{k=0}^{n}}{\prod \limits_{j=k+1}^{n}}{\alpha _{j}^{2}}<\infty \]
are sufficient conditions for building $\{{\hat{S}_{n}},n\ge 1\}$. Note that Condition 1 of Theorem 1 requires the set $\{n\ge 1:{\alpha _{n}}>1-\delta \}$ (for some $\delta <1$) to be very sparse, so that it is reasonable to expect
\[ \underset{n}{\sup }{\sum \limits_{k=0}^{n}}{\prod \limits_{j=k+1}^{n}}{\alpha _{j}^{2}}\le \frac{M}{1-{\delta ^{\gamma }}},\]
for some constants M and $\gamma >0$. Of course, in order to ensure $\textstyle\sum \limits_{m}{\hat{r}_{m}}{\hat{S}_{m}}<\infty $ we may need to impose a stronger condition on ${U_{n}}$, for example, a uniformly bounded fourth, or even exponential, moment.
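For illustration, in the homogeneous case ${\alpha _{j}}\equiv \alpha \in [0,1)$ we have ${\textstyle\prod _{j=k+1}^{n}}{\alpha _{j}^{2}}={\alpha ^{2(n-k)}}$, so that
\[ \underset{n}{\sup }{\sum \limits_{k=0}^{n}}{\prod \limits_{j=k+1}^{n}}{\alpha _{j}^{2}}=\underset{n}{\sup }{\sum \limits_{i=0}^{n}}{\alpha ^{2i}}\le \frac{1}{1-{\alpha ^{2}}},\]
and the recipe above produces ${\hat{S}_{m}}=\frac{\max \{{\mu _{2}},{c^{2}}\}}{{m^{2}}(1-{\alpha ^{2}})}$, which is summable in m.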
Generally speaking, Condition T requires that the “residual mean” grows slower than the ${\tilde{Q}_{n}}$-chain “advances in space”, in other words, the probability of hitting distant regions of the phase space is small enough to compensate for the “residual mean’s” growth as the starting point x is moving away from the origin. It should be clear now that the most typical autoregression, the Gaussian one, satisfies this condition as long as all variances are uniformly bounded.
Remark 5.
In the discussion above, we have shown that we may expect ${\hat{r}_{n}}=O(n)$, which in turn means that $\hat{m} < A\Delta $, where A is some constant. However, we have never formally proved that ${\hat{r}_{n}}\ge 1$, so we keep both conditions $\hat{m}<\infty $ and $\Delta <\infty $ even if, in usual circumstances, the latter implies the former.
Finally, we would like to mention that Δ is in fact $\Delta (\varepsilon )$ (we have shown that ${R_{k}^{(i)}}(x,\mathbb{R})\le \varepsilon $), which means it should be small as $\varepsilon \to 0$. In the scheme (7) we clearly have $\Delta =O(\varepsilon )$.
Assuming Condition M holds true, we define “noncoupling” operators
\[\begin{array}{l}\displaystyle {T_{t}^{(i)}}(x,A)=\frac{{\Gamma _{t}^{(i)}}\left(A-{\alpha _{t}^{(i)}}x\right)-{a_{t}}{\nu _{t}}(A)}{1-{a_{t}}},\\ {} \displaystyle {T_{xy}^{(t)}}(A,B)={T_{t}^{(1)}}(x,A){T_{t}^{(2)}}(y,B).\end{array}\]
We define the Markov chain ${\bar{Z}_{n}}=\left({Z_{n}^{(1)}},{Z_{n}^{(2)}},{d_{n}}\right)$ with values in $\mathbb{R}\times \mathbb{R}\times \{0,1,2\}$ by setting its transition probabilities
\[\begin{array}{l}\displaystyle {\bar{P}_{n}}(x,y,1;A\times B\times \{2\})={𝟙_{x=y}}{Q_{n}}(x,A\cap B),\\ {} \displaystyle {\bar{P}_{n}}(x,y,1;A\times B\times \{0\})={𝟙_{x=y}}\frac{{R_{n}^{(1)}}(x,A){R_{n}^{(2)}}(y,B)}{1-{Q_{n}}(x,\mathbb{R})},\end{array}\]
we assume the latter probability is equal to zero if ${Q_{n}}(x,\mathbb{R})=1$,
\[\begin{aligned}{}{\bar{P}_{n}}(x,y,0;A\times B\times \{0\})& =(1-{a_{n}}){𝟙_{C\times C}}(x,y){T_{n}^{(1)}}(x,A){T_{n}^{(2)}}(y,B)\\ {} & +(1-{𝟙_{C\times C}}(x,y)){\Gamma _{n}^{(1)}}(A-{\alpha _{n}^{(1)}}x){\Gamma _{n}^{(2)}}(B-{\alpha _{n}^{(2)}}y),\end{aligned}\]
\[\begin{array}{l}\displaystyle {\bar{P}_{n}}(x,y,0;A\times B\times \{1\})={𝟙_{C\times C}}(x,y){a_{n}}{\nu _{n}}(A\cap B).\\ {} \displaystyle {\bar{P}_{n}}(x,y,2;\cdot )={\bar{P}_{n}}(x,y,1;\cdot ).\end{array}\]
All other probabilities are equal to zero.
It is straightforward to check that the marginal distributions of the process ${\bar{Z}_{n}}$ are equal to those of ${X_{n}^{(i)}}$, $i\in \{1,2\}$.
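For instance, for the first coordinate and ${d_{n}}\in \{1,2\}$: since ${R_{n}^{(2)}}(x,\mathbb{R})=1-{Q_{n}}(x,\mathbb{R})$, summing the two admissible transitions from the state $(x,x,1)$ over $B=\mathbb{R}$ gives
\[ {Q_{n}}(x,A)+\frac{{R_{n}^{(1)}}(x,A){R_{n}^{(2)}}(x,\mathbb{R})}{1-{Q_{n}}(x,\mathbb{R})}={Q_{n}}(x,A)+{R_{n}^{(1)}}(x,A)={\Gamma _{n}^{(1)}}(A-{\alpha _{n}^{(1)}}x);\]
the remaining cases are verified in the same way, using Condition M and the definition of ${T_{n}^{(i)}}$.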
We will use the canonical probability ${\bar{\mathbb{P}}_{x,y,d}^{t}}$ and the expectation ${\bar{\mathbb{E}}_{x,y,d}^{t}}$, $x,y\in \mathbb{R}$, $d\in \{0,1,2\}$ in the same sense as before.
Let us denote by
\[\begin{array}{l}\displaystyle {\bar{\sigma }_{C\times C}}={\bar{\sigma }_{C\times C}}(1)=\inf \left\{n\ge 1:\left({Z_{n}^{(1)}},{Z_{n}^{(2)}}\right)\in C\times C\right\},\\ {} \displaystyle {\bar{\sigma }_{C\times C}}(m)=\inf \left\{n\ge {\bar{\sigma }_{C\times C}}(m-1):\left({Z_{n}^{(1)}},{Z_{n}^{(2)}}\right)\in C\times C\right\},\hspace{1em}m\ge 2,\end{array}\]
the first and m-th return times to $C\times C$ by the pair $\left({Z_{n}^{(1)}},{Z_{n}^{(2)}}\right)$.
We will also need a special notation for the sets
(10)
\[ \begin{array}{l}{D_{n}}=\{{d_{1}}={d_{2}}=\cdots ={d_{n}}=0\},\\ {} {D_{nk}}=\{{d_{1}}={d_{2}}=\cdots ={d_{n}}=0,{\bar{\sigma }_{C\times C}}(k)=n\},\\ {} {B_{nk}}=\left\{{d_{k}}\in \{1,2\},{d_{k+1}}=0,\dots ,{d_{n}}=0\right\},\end{array}\]
and for the values
(11)
\[ {\rho _{nk}}=\underset{x,y\in C,t}{\sup }{\bar{\mathbb{P}}_{x,y,0}^{t}}\left({D_{nk}}\right).\]
Theorem 2.
Let ${X^{(i)}}$ be two Markov chains defined in (5) that simultaneously satisfy Condition M, Condition T and the conditions of Corollary 1 with $\psi >1$, and let $C=[-c,c]$ be the corresponding set.
Then there exist constants ${M_{1}},{M_{2}}\in \mathbb{R}$, such that for every $x\in C$
(12)
\[\begin{aligned}{}\left|\left|{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(1)}}\in \cdot \right\}-{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(2)}}\in \cdot \right\}\right|\right|& \le \varepsilon \hat{m}{M_{1}}+\Delta {M_{2}},\end{aligned}\]
where $\hat{m}$ and Δ are defined in Condition T.
For every $x\notin C$ the following inequality holds true:
(13)
\[\begin{aligned}{}\left|\left|{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(1)}}\in \cdot \right\}-{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(2)}}\in \cdot \right\}\right|\right|& \le \varepsilon (2\hat{m}{M_{1}}+\hat{\mu }(x))+2\Delta {M_{2}},\end{aligned}\]
where
\[ \hat{\mu }(x)=\underset{t}{\sup }\sum \limits_{k\ge 1}\left({\prod \limits_{j=0}^{k-1}}{Q_{t+j}}{𝟙_{\mathbb{R}\setminus C}}\right)(x,\mathbb{R}\setminus C).\]
Remark 6.
By ${Q_{t}}{𝟙_{A}}$ we understand a kernel that is equal to ${Q_{t}}(x,\cdot )$ if $x\in A$ and zero otherwise. So we have
\[\begin{array}{l}\displaystyle \left({\prod \limits_{j=0}^{n-1}}{Q_{t+j}}{𝟙_{A}}\right)(x,B)\\ {} \displaystyle ={𝟙_{A}}(x){\int _{A}}\dots {\int _{A}}{Q_{t}}(x,d{y_{1}}){Q_{t+1}}({y_{1}},d{y_{2}})\dots {Q_{t+n-1}}({y_{n-1}},B).\end{array}\]
Remark 7.
Recall the process $\bar{Z}={\left({Z_{n}^{(1)}},{Z_{n}^{(2)}},{d_{n}}\right)_{n\ge 0}}$ that is defined above. We can interpret $\hat{\mu }(x)$ as
\[ \hat{\mu }(x)=\underset{t}{\sup }\sum \limits_{n\ge 1}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{1}}=\cdots ={d_{n}}=2,{Z_{j}^{(1)}}={Z_{j}^{(2)}}\notin C,j\le n\}.\]
We can think of $\hat{\mu }(x)$ as an “expectation” for the first time the “coupled chain” visits C. The difficulty of calculating $\hat{\mu }$ in a practical application depends on the exact model. However, it is typical that the “common part process” enjoys the same properties as each of the original ones. In the context of this paper, this means that the “common part process” will be geometrically recurrent.
We can demonstrate how to calculate $\hat{\mu }(x)$ in the scheme (7). In this case, we can factor out the decoupling trials and the process’s advancement in space. Let ${({Y_{n}})_{n\ge 0}}$ be the canonical autoregressive process generated by the family $\{{\Gamma _{n}},n\ge 1\}$ and ${\tilde{\mathbb{P}}_{x}^{t}}$ be the corresponding probability. Without loss of generality we assume $\tilde{\varepsilon }=\varepsilon $. Let
\[ \tilde{\sigma }=\inf \{t\ge 1:{Y_{t}}\in C\}\]
and assume there exists $\psi >1$ such that
\[ g(x)=\underset{t}{\sup }{\tilde{\mathbb{E}}_{x}^{t}}\left[{\psi ^{\tilde{\sigma }}}\right]<\infty .\]
Then, we can write
\[\begin{array}{l}\displaystyle \hat{\mu }(x)=\underset{t}{\sup }\sum \limits_{n\ge 1}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{1}}=\cdots ={d_{n}}=2,{Z_{j}^{(1)}}={Z_{j}^{(2)}}\notin C,j\le n\}\\ {} \displaystyle =\underset{t}{\sup }\sum \limits_{n\ge 1}{(1-\varepsilon )^{n}}{\tilde{\mathbb{P}}_{x}^{t}}\left(\tilde{\sigma }\ge n\right)\le \underset{t}{\sup }\sum \limits_{n\ge 1}{(1-\varepsilon )^{n}}{\psi ^{-n}}g(x)\\ {} \displaystyle =g(x)\frac{(1-\varepsilon ){\psi ^{-1}}}{1-(1-\varepsilon ){\psi ^{-1}}}\to \frac{g(x)}{\psi -1},\hspace{1em}\varepsilon \to 0.\end{array}\]
Finally, $g(x)$ can usually be estimated by constructing a Foster–Lyapunov function satisfying a drift condition for the process ${({Y_{n}})_{n\ge 0}}$.
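For instance, if the common-part family $\{{\Gamma _{n}},n\ge 1\}$ itself satisfies the conditions of Theorem 1 and Corollary 1 (with the constants ${C_{0}}$, ψ, G, δ of Section 3), then Corollary 1 applied to the chain ${({Y_{n}})_{n\ge 0}}$ yields the explicit bound
\[ g(x)\le {C_{0}}\left(1+|x|+(2G+1-\delta ){𝟙_{[-c,c]}}(x)\right).\]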
Remark 8.
It is possible to calculate all constants involved in (12) in terms of G, δ, ${a_{\ast }}$, ${a^{\ast }}$, ${\inf _{n}}{\nu _{n}}(C)$, ψ that are defined in Theorem 1, Corollary 1 and Condition M. See Appendix A for an example of such calculation.
Proof.
Using the standard coupling approach we first obtain
\[\begin{aligned}{}& \left|{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(1)}}\in A\right\}-{\mathbb{P}_{x}^{t}}\left\{{X_{n}^{(2)}}\in A\right\}\right|\\ {} & =\left|{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (A,\mathbb{R},\{0,1,2\})\right\}-{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (\mathbb{R},A,\{0,1,2\})\right\}\right|\\ {} & =\left|{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (A,\mathbb{R},\{0\})\right\}-{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (\mathbb{R},A,\{0\})\right\}\right|\\ {} & \le \max \left\{{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (A,\mathbb{R},\{0\})\right\},{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{\bar{Z}_{n}}\in (\mathbb{R},A,\{0\})\right\}\right\}\\ {} & \le {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{n}}=0\right\}.\end{aligned}\]
Let us denote a kernel
\[ {\Lambda _{k}^{t}}(x,A)={\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{k}}\in \{1,2\},{Z_{k}^{(1)}}={Z_{k}^{(2)}}\in A\right\},\]
and a set
\[ {A_{m}}=\{y\in \mathbb{R}:|y|\in [m,m+1)\}.\]
We note that since ${\bar{P}_{n}}(x,y,1;\cdot )={\bar{P}_{n}}(x,y,2;\cdot )$ we have
\[\begin{array}{l}\displaystyle {\bar{\mathbb{P}}_{x,x,1}^{t}}({B_{nk}})={\int _{\mathbb{R}}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{k}}=1,{Z_{k}^{(1)}}={Z_{k}^{(2)}}=dy\right\}{\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}})\\ {} \displaystyle +{\int _{\mathbb{R}}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{k}}=2,{Z_{k}^{(1)}}={Z_{k}^{(2)}}=dy\right\}{\bar{\mathbb{P}}_{y,y,2}^{t+k}}({D_{n-k}})\\ {} \displaystyle ={\int _{\mathbb{R}}}{\Lambda _{k}^{t}}(x,dy){\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}}).\end{array}\]
We apply this equality to the last decoupling time and derive:
(14)
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \{{d_{n}}=0\}={\sum \limits_{k=1}^{n-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}({B_{nk}})={\sum \limits_{k=1}^{n-1}}{\int _{\mathbb{R}}}{\Lambda _{k}^{t}}(x,dy){\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}})\\ {} & ={\sum \limits_{k=1}^{n-1}}\sum \limits_{m\ge 0}{\int _{{A_{m}}}}{\Lambda _{k}^{t}}(x,dy){\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}})\\ {} & \le {\sum \limits_{k=1}^{n-1}}\sum \limits_{m\ge 0}{\Lambda _{k}^{t}}(x,{A_{m}})\underset{y\in {A_{m}}}{\sup }{\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}}).\end{aligned}\]
Next, we want to replace ${\Lambda _{k}^{t}}(x,{A_{m}})$ with ${\hat{S}_{m}}$ and apply Lemma 8. However, this can only be done for $x\in C$. Indeed:
\[\begin{array}{l}\displaystyle {\Lambda _{k}^{t}}(x,{A_{m}})={\sum \limits_{j=0}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{j}}=1\}{\bar{\mathbb{P}}_{{\nu _{j}},1}^{t+j}}\{{d_{l}}=2,l\in \overline{1,k-j},{Z_{k-j}^{(1)}}={Z_{k-j}^{(2)}}\in {A_{m}}\}\\ {} \displaystyle ={\sum \limits_{j=0}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{j}}=1\}{\nu _{j}}{Q^{t+j,k-j}}({A_{m}})\le {\sum \limits_{j=0}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{j}}=1\}{\nu _{j}}{Q^{t+j,k-j}}(\mathbb{R}){\hat{S}_{m}}\\ {} \displaystyle ={\hat{S}_{m}}{\sum \limits_{j=0}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{j}}=1,{d_{j+1}}=\cdots ={d_{k}}=2\}={\hat{S}_{m}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{k}}=2\}\le {\hat{S}_{m}}.\end{array}\]
Here ${\nu _{j}}(du,dv)={𝟙_{du=dv}}({𝟙_{\{j=0\}}}{\delta _{x}}+{𝟙_{\{j>0\}}}\nu (du))$ helps us to include the case of $j=0$ (i.e. no decouplings from the beginning) into the previous derivations. Applying this bound and Lemma 8 to (14) we obtain for $x\in C$:
(15)
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{n}}=0\}\le \sum \limits_{m\ge 0}{\hat{S}_{m}}(\varepsilon {M_{1}}+{\hat{r}_{m}}{M_{2}})=\varepsilon \hat{m}{M_{1}}+\Delta {M_{2}}.\]
Now we consider the case $x\notin C$. Let us denote
\[ \bar{\sigma }=\inf \{t>0:{Z_{t}^{(1)}}={Z_{t}^{(2)}}\in C,{d_{1}}=\cdots ={d_{t}}=2\}\]
(with $\bar{\sigma }=\infty $ if the set above is empty).
For $x\notin C$ we have
\[\begin{array}{l}\displaystyle {\Lambda _{k}^{t}}(x,{A_{m}})={\sum \limits_{j=1}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{j}}=1\}{\bar{\mathbb{P}}_{\nu ,1}^{t+j}}\{{d_{l}}=2,l\in \overline{1,k-j},{Z_{k-j}^{(1)}}={Z_{k-j}^{(2)}}\in {A_{m}}\}\\ {} \displaystyle +{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{2}}=\cdots ={d_{k-1}}=2,{Z_{k-1}^{(1)}}={Z_{k-1}^{(2)}}\in {A_{m}}\}\le {\hat{S}_{m}}\\ {} \displaystyle +{\sum \limits_{j=1}^{k-1}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{2}}=\cdots ={d_{k-1}}=2,\bar{\sigma }=j,{Z_{k-1}^{(1)}}={Z_{k-1}^{(2)}}\in {A_{m}}\}\\ {} \displaystyle +{\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{2}}=\cdots ={d_{k-1}}=2,\bar{\sigma }\ge k,{Z_{k-1}^{(1)}}={Z_{k-1}^{(2)}}\in {A_{m}}\}\\ {} \displaystyle \le {\hat{S}_{m}}+{\hat{S}_{m}}{\bar{\mathbb{P}}_{x,x,1}^{t}}({d_{1}}=\cdots ={d_{k-1}}=2,\bar{\sigma } < k)+{q_{k,m}}(x)\le 2{\hat{S}_{m}}+{q_{k,m}}(x),\end{array}\]
where
\[ {q_{k,m}}(x)={\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{2}}=\cdots ={d_{k-1}}=2,\bar{\sigma }\ge k,{Z_{k-1}^{(1)}}={Z_{k-1}^{(2)}}\in {A_{m}}\}.\]
Denote
\[ {q_{k}}(x)=\sum \limits_{m\ge 0}{q_{k,m}}(x).\]
Applying this result and the obvious inequality ${\bar{\mathbb{P}}_{y,y,1}^{t+k}}({D_{n-k}})\le \varepsilon $ to (14), we get for $x\notin C$
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\{{d_{n}}=0\}\le 2\varepsilon \hat{m}{M_{1}}+2\Delta {M_{2}}+\varepsilon \sum \limits_{k\ge 0}{q_{k}}(x)\le \varepsilon (2\hat{m}{M_{1}}+\hat{\mu }(x))+2\Delta {M_{2}}.\]
Here we used the fact
\[ \sum \limits_{k\ge 0}{q_{k}}(x)=\hat{\mu }(x).\]
 □

5 Auxiliary results

Everywhere in this section we assume that the conditions of Theorem 1 are valid, as well as Condition M, and we will use the corresponding notation.
We start by introducing an important result that connects the exponential moment of ${\sigma _{C\times C}}$ with the individual exponential moments of ${\sigma _{C}^{(i)}}$.
Theorem 3.
Let
\[ {X_{n+1}^{(1)}}={\alpha _{n+1}^{(1)}}{X_{n}^{(1)}}+{W_{n+1}^{(1)}}\]
and
\[ {X_{n+1}^{(2)}}={\alpha _{n+1}^{(2)}}{X_{n}^{(2)}}+{W_{n+1}^{(2)}}\]
be two Markov chains as in (5). Let ${\Gamma _{n}^{(i)}}$ be the distribution of ${W_{n}^{(i)}}$ and let all ${W_{n}^{(i)}}$ be independent. Assume that both chains satisfy the conditions of Corollary 1 and Condition M for the set $C=[-c,c]$, where c is defined in (2).
Then there exist ${\psi _{1}}>1$ and ${\Xi _{0}},{\Xi _{1}}\in \mathbb{R}$, such that for every $x,y\in \mathbb{R}$ and $n\ge 0$
\[ {\mathbb{E}_{x,y}^{n}}\left[{\psi _{1}^{{\sigma _{C\times C}}}}\right]\le {\Xi _{0}}(|x|+|y|)+{\Xi _{1}}.\]
Remark 9.
Constants ${\Xi _{0}}$, ${\Xi _{1}}$ can be calculated using results from [12]. We will show how this can be done in Appendix A.
Proof.
From [12], Theorem 4.2, we get
\[ {\mathbb{E}_{x,y}^{n}}\left[{\psi _{1}^{{\sigma _{C\times C}}}}\right]\le M\left({\mathbb{E}_{x}^{n}}\left[{\psi _{0}^{{\sigma _{C}^{(1)}}}}\right]{S_{1}}({\psi _{0}})+{\mathbb{E}_{y}^{n}}\left[{\psi _{0}^{{\sigma _{C}^{(2)}}}}\right]{S_{2}}({\psi _{0}})\right),\]
where ${\psi _{0}},{\psi _{1}}\in (1,\psi )$ and $M\in \mathbb{R}$ are some constants and
\[ {S_{i}}(u)=\underset{n,x\in C}{\sup }\left\{\frac{1}{1-{a_{n}}}\left({\mathbb{E}_{x}^{n}}\left[{u^{{\sigma _{C}^{(i)}}}}\right]-{a_{n}}{\mathbb{E}_{{\nu _{n}}}^{n}}\left[{u^{{\sigma _{C}^{(i)}}}}\right]\right)\right\}.\]
Since ${\psi _{0}}<\psi $, Corollary 1 gives
\[ {S_{i}}({\psi _{0}})\le \frac{1}{1-{a^{\ast }}}\underset{n,x\in C}{\sup }{\mathbb{E}_{x}^{n}}\left[{\psi ^{{\sigma _{C}^{(i)}}}}\right]\le \frac{{C_{0}}(2+2G+c-\delta )}{1-{a^{\ast }}}.\]
Next we apply Corollary 1 to ${\mathbb{E}_{x}^{n}}\left[{\psi _{0}^{{\sigma _{C}^{(i)}}}}\right]$ and get
(16)
\[ {\mathbb{E}_{x,y}^{n}}\left[{\psi _{1}^{{\sigma _{C\times C}}}}\right]\le {\Xi _{0}}(|x|+|y|)+{\Xi _{1}},\]
(17)
\[ {\Xi _{0}}=\frac{M{C_{0}^{2}}(2+2G+c-\delta )}{1-{a^{\ast }}},\]
(18)
\[ {\Xi _{1}}={\Xi _{0}}(2+(1+2G-\delta )({𝟙_{C}}(x)+{𝟙_{C}}(y))).\]
 □
Next we introduce
(19)
\[ h(x,y)=\underset{n}{\sup }{\mathbb{E}_{x,y}^{n}}\left[{\psi _{1}^{{\sigma _{C\times C}}}}\right]\]
and
(20)
\[ {H_{t}}(x)={\int _{{\mathbb{R}^{2}}\setminus C\times C}}\frac{{R_{t}^{(1)}}(x,dy){R_{t}^{(2)}}(x,dz)}{1-{Q_{t}}(x,\mathbb{R})}h(y,z).\]
Going forward we will use the notation from Theorem 1, Corollary 1, Theorem 3 and Condition M. We now present a series of lemmas that are necessary to prove the main result.
Lemma 1.
For all $(x,y)\notin C\times C$
\[ {\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{\sigma _{C\times C}}\ge n\right\}\le {\psi _{1}^{-n}}h(x,y).\]
Proof.
For all $(x,y)\notin C\times C$
(21)
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{\sigma _{C\times C}}\ge n\right\}& =\mathbb{P}\left\{{\sigma _{C\times C}^{(t)}}\ge n\Big|{X_{t}^{(1)}}=x,{X_{t}^{(2)}}=y\right\}\le {\psi _{1}^{-n}}{\mathbb{E}_{x,y}^{t}}\left[{\psi _{1}^{{\sigma _{C\times C}}}}\right]\\ {} & \le {\psi _{1}^{-n}}h(x,y).\end{aligned}\]
The first equality above is due to the definition of ${\bar{\mathbb{P}}_{x,y,0}^{t}}$; then we used the Chernoff bound and definition (19).  □
Remark.
Note that it is important in the preceding derivations that $(x,y)\notin C\times C$. In this case the distribution of the pair $\left({Z^{(1)}},{Z^{(2)}}\right)$ coincides with that of $\left({X^{(1)}},{X^{(2)}}\right)$ on the time interval $[0,{\bar{\sigma }_{C\times C}}-1]$, which makes the first equality valid, allows us to move from the probability measure $\bar{\mathbb{P}}$ to $\mathbb{P}$, and lets us use the geometric recurrence result for the paired chain.
Lemma 2.
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}\ge n\right\}\le {\psi _{1}^{-(n-1)}}{H_{t}}(x),\]
where ${H_{t}}(x)$ is defined in (20) and $x\in \mathbb{R}$.
Proof.
We use Lemma 1 and the definition of the probability $\bar{\mathbb{P}}$ to derive
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}\ge n\right\}\\ {} & ={\int _{{\mathbb{R}^{2}}\setminus C\times C}}\frac{{R_{t}^{(1)}}(x,dy){R_{t}^{(2)}}(x,dz)}{1-{Q_{t}}(x,\mathbb{R})}{\bar{\mathbb{P}}_{y,z,0}^{t}}\left\{{\bar{\sigma }_{C\times C}}\ge n-1\right\}\\ {} & \le {\psi _{1}^{-(n-1)}}{\int _{{\mathbb{R}^{2}}\setminus C\times C}}\frac{{R_{t}^{(1)}}(x,dy){R_{t}^{(2)}}(x,dz)}{1-{Q_{t}}(x,\mathbb{R})}h(y,z)\\ {} & ={\psi _{1}^{-(n-1)}}{H_{t}}(x).\end{aligned}\]
 □
Now, we consider a situation when the pair $\left({Z_{k}^{(1)}},{Z_{k}^{(2)}}\right)$ hits $C\times C$ exactly once.
Lemma 3.
(22)
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{j}}=0,j=\overline{1,n},{\bar{\sigma }_{C\times C}}\le n<{\bar{\sigma }_{C\times C}}(2)\right\}\le \frac{S\left((n-1){H_{t}}(x)+\varepsilon \right)}{{\psi _{1}^{n-1}}},\]
where
\[ S=(1-{a_{\ast }})\underset{x,y\in C,t\ge 0}{\sup }{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{x,y}^{(t)}}(du,dv)h(u,v).\]
Proof.
For any $x\in \mathbb{R}$
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{d_{j}}=0,j=\overline{1,n},{\bar{\sigma }_{C\times C}}\le n<{\bar{\sigma }_{C\times C}}(2)\right\}\\ {} & ={\sum \limits_{k=1}^{n}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{j}}=0,j=\overline{1,n},\hspace{2.5pt}{\bar{\sigma }_{C\times C}}=k,{\bar{\sigma }_{C\times C}}(2)>n\right\}\\ {} & ={\sum \limits_{k=1}^{n}}{\int _{C\times C}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}=k,\left({Z_{k}^{(1)}},{Z_{k}^{(2)}},{d_{k}}\right)=(du,dv,0)\right\}\times \\ {} & \times {\bar{\mathbb{P}}_{u,v,0}^{t+k}}\left\{{\bar{\sigma }_{C\times C}}>n-k\right\}\le {\sum \limits_{k=1}^{n}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}=k,{d_{k}}=0\right\}\times \\ {} & \times \underset{x,y\in C}{\sup }{\bar{\mathbb{P}}_{x,y,0}^{t+k}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}>n-k\right\}.\end{aligned}\]
Note that we cannot apply Lemma 1 to the last term, because the initial values $(x,y)\in C\times C$, and so the distribution of the first step of the $\bar{Z}$-chain under the measure $\bar{\mathbb{P}}$ is different from that of the $({X^{(1)}},{X^{(2)}})$-chain under $\mathbb{P}$. So we have for all $({x_{0}},{y_{0}})\in C\times C$
\[\begin{array}{l}\displaystyle {\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t+k}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}>n-k\right\}\\ {} \displaystyle =(1-{a_{t+k}}){\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{t+k}^{(1)}}({x_{0}},dx){T_{t+k}^{(2)}}({y_{0}},dy){\bar{\mathbb{P}}_{x,y,0}^{t+k+1}}\left\{{\bar{\sigma }_{C\times C}}\ge n-k\right\}.\end{array}\]
Now we can apply Lemma 1 to ${\bar{\mathbb{P}}_{x,y,0}^{t+k+1}}\left\{{\bar{\sigma }_{C\times C}}\ge n-k\right\}$ and obtain the inequality
(23)
\[\begin{aligned}{}\underset{{x_{0}},{y_{0}}\in C}{\sup }{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t+k}}& \left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}\ge n-k\right\}\\ {} & \le \frac{1-{a_{\ast }}}{{\psi _{1}^{(n-k)}}}\underset{{x_{0}},{y_{0}}\in C}{\sup }{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{{x_{0}},{y_{0}}}^{(t+k)}}(dx,dy)h(x,y)\\ {} & =S{\psi _{1}^{-(n-k)}}.\end{aligned}\]
Using Lemma 2 for $k\ge 2$ we get
(24)
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{d_{k}}=0,{\bar{\sigma }_{C\times C}}=k\right\}& \le {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}\ge k\right\}\\ {} & \le {\psi _{1}^{-(k-1)}}{H_{t}}(x).\end{aligned}\]
It is clear that for $k=1$
(25)
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}=1\right\}=\frac{{R_{t}^{(1)}}(x,C){R_{t}^{(2)}}(x,C)}{1-{Q_{t}}(x,\mathbb{R})}\le \varepsilon .\]
Now combining (23)–(25) we get that
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{d_{j}}=0,j=\overline{1,n},{\bar{\sigma }_{C\times C}}\le n<{\bar{\sigma }_{C\times C}}(2)\right\}\\ {} & \le S{\sum \limits_{k=1}^{n}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}=k,{d_{k}}=0\right\}{\psi _{1}^{-(n-k)}}\\ {} & \le S\left({\sum \limits_{k=2}^{n}}{H_{t}}(x){\psi _{1}^{-(k-1)}}{\psi _{1}^{-(n-k)}}+\varepsilon {\psi _{1}^{-(n-1)}}\right)\\ {} & =S\left({\psi _{1}^{-(n-1)}}(n-1){H_{t}}(x)+\varepsilon {\psi _{1}^{-(n-1)}}\right)\\ {} & =S{\psi _{1}^{-(n-1)}}\left((n-1){H_{t}}(x)+\varepsilon \right).\end{aligned}\]
 □
Finally, we look at the situation when the pair $\left({Z_{k}^{(1)}},{Z_{k}^{(2)}}\right)$ hits $C\times C$ exactly $k\ge 2$ times.
Lemma 4.
For all $x\in \mathbb{R}$
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{D_{n}},{\bar{\sigma }_{C\times C}}(k)\le n<{\bar{\sigma }_{C\times C}}(k+1)\right\}\\ {} & \le S{H_{t}}(x){\sum \limits_{i=2}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k-1}^{n-i}}{\rho _{j,k-1}}{\psi _{1}^{-(n-i-j)}}\\ {} & +\varepsilon S{\sum \limits_{j=k-1}^{n-1}}{\rho _{j,k-1}}{\psi _{1}^{-(n-j-1)}},\end{aligned}\]
where ${\rho _{nk}}$ is defined in (11), and S is from Lemma 3.
Proof.
Using the first entrance – last exit decomposition and the Markov property, we get the following representation of the probability of interest
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{D_{n}},{\bar{\sigma }_{C\times C}}(k)\le n<{\bar{\sigma }_{C\times C}}(k+1)\right\}\\ {} & ={\sum \limits_{i=1}^{n-k+1}}{\sum \limits_{j=i+k-1}^{n}}{\int _{C\times C}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{D_{i1}},\left({Z_{i}^{(1)}},{Z_{i}^{(2)}}\right)=(d{y_{1}},d{y_{2}})\right\}\times \\ {} & \times {\int _{C\times C}}{\bar{\mathbb{P}}_{{y_{1}},{y_{2}},0}^{t+i}}\left\{{D_{j-i,k-1}},\left({Z_{j-i}^{(1)}},{Z_{j-i}^{(2)}}\right)=(d{z_{1}},d{z_{2}})\right\}\\ {} & \times {\bar{\mathbb{P}}_{{z_{1}},{z_{2}},0}^{t+j}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}>n-j\right\}.\end{aligned}\]
Here the index i stands for the first hitting. Since there are k hittings in total during the n time steps, the first hitting may not occur later than $n-k+1$, otherwise there would be no “space” for the remaining $k-1$ hittings. Similar arguments apply to j, which is the time of the last hitting. Since there are exactly $k-1$ hittings on the time interval $(i,j]$, j may not be closer than $k-1$ to i, otherwise $k-1$ hittings would not fit into the interval. Now we take the supremum over $x,y\in C$ and $t\ge 0$ in the second and third factors and arrive at the inequality
(26)
\[ \begin{array}{l}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{D_{n}},{\bar{\sigma }_{C\times C}}(k)\le n<{\bar{\sigma }_{C\times C}}(k+1)\right\}\le {\textstyle\sum \limits_{i=1}^{n-k+1}}{\textstyle\sum \limits_{j=i+k-1}^{n}}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{D_{i1}}\right\}\times \\ {} \times {\sup _{x,y\in C,t}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{j-i,k-1}}\right\}{\sup _{x,y\in C,t}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}>n-j\right\}\end{array}\]
We immediately recognize the second factor as ${\rho _{j-i,k-1}}$.
Using the same arguments as in Lemma 3 (see inequality (23)), we get an estimate for the third factor
\[ \underset{x,y\in C,t}{\sup }{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}>n-j\right\}\le S{\psi _{1}^{-(n-j)}}.\]
Using Lemma 2 we obtain a bound for the first factor for $i\ge 2$:
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}={d_{i}}=0,{\bar{\sigma }_{C\times C}}=i\right\}& \le {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}\ge i\right\}\\ {} & \le {\psi _{1}^{-(i-1)}}{H_{t}}(x).\end{aligned}\]
For $i=1$ we have
(27)
\[ {\bar{\mathbb{P}}_{x,x,1}^{t}}\left\{{d_{1}}=0,{\bar{\sigma }_{C\times C}}=1\right\}=\frac{{R_{t}^{(1)}}(x,C){R_{t}^{(2)}}(x,C)}{1-{Q_{t}}(x,\mathbb{R})}\le \varepsilon .\]
Substituting the bounds for the first and third factors into (26), we get
(28)
\[\begin{aligned}{}{\bar{\mathbb{P}}_{x,x,1}^{t}}& \left\{{D_{n}},{\bar{\sigma }_{C\times C}}(k)\le n<{\bar{\sigma }_{C\times C}}(k+1)\right\}\\ {} & \le {\sum \limits_{i=2}^{n-k+1}}{\sum \limits_{j=i+k-1}^{n}}{H_{t}}(x){\psi _{1}^{-(i-1)}}{\rho _{j-i,k-1}}S{\psi _{1}^{-(n-j)}}\\ {} & +{\sum \limits_{j=k}^{n}}\varepsilon {\rho _{j-1,k-1}}S{\psi _{1}^{-(n-j)}}\\ {} & =S{H_{t}}(x){\sum \limits_{i=2}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=i+k-1}^{n}}{\rho _{j-i,k-1}}{\psi _{1}^{-(n-j)}}\\ {} & +\varepsilon S{\sum \limits_{j=k}^{n}}{\rho _{j-1,k-1}}{\psi _{1}^{-(n-j)}}\\ {} & =S{H_{t}}(x){\sum \limits_{i=2}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k-1}^{n-i}}{\rho _{j,k-1}}{\psi _{1}^{-(n-i-j)}}\\ {} & +\varepsilon S{\sum \limits_{j=k-1}^{n-1}}{\rho _{j,k-1}}{\psi _{1}^{-(n-j-1)}}.\end{aligned}\]
 □
It is critical that the probability from Lemma 4 is doubly summable over n, k. One may notice that since ${\rho _{jk}}=0$ if $j<k$ (because it is impossible that ${\bar{\sigma }_{C\times C}}(k)=j<k$), the double sum on the right-hand side of the inequality from Lemma 4 is a convolution of two geometric sequences (${\psi _{1}^{-n}}$) with ${\rho _{jk}}$ (as a function of j with fixed k) evaluated at $n-k+1$. Thus, in order to establish the finiteness of the desired sum, it is required to demonstrate that the double sequence $\{{\rho _{nk}},n\ge 1,k=\overline{1,n}\}$ is summable. To do so we will need the next three lemmas.
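To see the mechanism at work on the last term in Lemma 4, note that, by Tonelli’s theorem,
\[ \sum \limits_{n\ge k}{\sum \limits_{j=k-1}^{n-1}}{\rho _{j,k-1}}{\psi _{1}^{-(n-j-1)}}=\left(\sum \limits_{j\ge k-1}{\rho _{j,k-1}}\right)\sum \limits_{m\ge 0}{\psi _{1}^{-m}}=\frac{{\psi _{1}}}{{\psi _{1}}-1}\sum \limits_{j\ge k-1}{\rho _{j,k-1}},\]
so, after summation over k, this contribution is controlled by the quantity ρ from Lemma 7 below.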
Lemma 5.
(29)
\[ {\rho _{nk}}=\underset{x,y\in C,t}{\sup }{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{nk}}\right\}\le {(1-{a_{\ast }})^{k}}.\]
Proof.
We proceed by induction. Let $n=1$ and $k=1$. Then
\[\begin{array}{l}\displaystyle {\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{11}}\right\}={\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{\bar{Z}_{1}}\in (C\times C\times \{0\})\right\}\\ {} \displaystyle =(1-{a_{t}}){T_{t}^{(1)}}(x,C){T_{t}^{(2)}}(y,C)\le (1-{a_{\ast }}),\end{array}\]
since ${T_{t}^{(i)}}(x,C)\le {T_{t}^{(i)}}(x,\mathbb{R})=1$.
In what follows we will use the simplified notation
\[ {\bar{\sigma }_{i}}={\bar{\sigma }_{C\times C}}(i).\]
Consider now $n\ge 2$, $k=1$:
\[\begin{array}{l}\displaystyle {\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{n1}}\right\}=(1-{a_{t}}){\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{t}^{(1)}}({x_{0}},dx){T_{t}^{(2)}}({y_{0}},dy){\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}=n\right\}\\ {} \displaystyle \le (1-{a_{t}}){\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{t}^{(1)}}({x_{0}},dx){T_{t}^{(2)}}({y_{0}},dy)\le (1-{a_{\ast }}).\end{array}\]
So, we have shown that inequality (29) is valid for all positive integers n and $k=1$.
Assume now that it is valid for all positive n and a given k, and let us check it for $k+1$. Clearly, there is nothing to prove if $n<k+1$, since the underlying probability is zero. Consider $n\ge k+1$ and write
\[ {\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{n,k+1}}\right\}={\sum \limits_{s=1}^{n-k}}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{\bar{\sigma }_{1}}=s,{D_{n,k+1}}\right\}.\]
Now we use conditioning on ${ℱ_{s}}$, the Markov property and the fact that
\[\begin{array}{l}\displaystyle \left\{{\bar{\sigma }_{1}}=s,{D_{n,k+1}}\right\}=\left\{{D_{s1}},{d_{s+1}}=\cdots ={d_{n}}=0,{\bar{\sigma }_{k+1}}=n\right\}\\ {} \displaystyle ={D_{s1}}\cap {\theta _{s}}\left(\left\{{d_{1}}=\cdots ={d_{n-s}}=0,{\bar{\sigma }_{k}}=n-s\right\}\right)={D_{s1}}\cap {\theta _{s}}{D_{n-s,k}},\end{array}\]
where ${\theta _{s}}$ is the shift operator. The obvious inclusion ${D_{s1}}\in {ℱ_{s}}$ and the induction assumption allow us to obtain
(30)
\[\begin{aligned}{}{\sum \limits_{s=1}^{n-k}}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}& \left\{{\bar{\sigma }_{1}}=s,{D_{n,k+1}}\right\}={\sum \limits_{s=1}^{n-k}}{\bar{\mathbb{E}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{s1}},{\bar{\mathbb{P}}_{{\bar{Z}_{s}}}^{t+s}}\left\{{D_{n-s,k}}\right\}\right\}\\ {} & \le {\sum \limits_{s=1}^{n-k}}\underset{x,y\in C,t}{\sup }{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{n-s,k}}\right\}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}({D_{s1}})\\ {} & \le {(1-{a_{\ast }})^{k}}{\sum \limits_{s=1}^{n-k}}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}({D_{s1}})={(1-{a_{\ast }})^{k}}{\sum \limits_{s=1}^{n-k}}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}({d_{1}}=0,{\bar{\sigma }_{1}}=s)\\ {} & ={(1-{a_{\ast }})^{k}}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}({d_{1}}=0,{\bar{\sigma }_{1}}\le n-k)\\ {} & ={(1-{a_{\ast }})^{k}}(1-{a_{t}}){\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{t}^{(1)}}({x_{0}},dx){T_{t}^{(2)}}({y_{0}},dy)\times \\ {} & \times {\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}\le n-k-1\right\}+{(1-{a_{\ast }})^{k}}(1-{a_{t}}){T_{t}^{(1)}}({x_{0}},C){T_{t}^{(2)}}({y_{0}},C)\\ {} & \le {(1-{a_{\ast }})^{k+1}}{\int _{{\mathbb{R}^{2}}}}{T_{t}^{(1)}}({x_{0}},dx){T_{t}^{(2)}}({y_{0}},dy)={(1-{a_{\ast }})^{k+1}}.\end{aligned}\]
Thus the inequality (29) is proved for all n, k.  □
Lemma 6.
Let $m\ge 2$ be an integer, k an arbitrary positive integer and $j\in \{0,\dots ,k-1\}$. Then
\[ {\rho _{mk+j,k}}\le {(1-{a_{\ast }})^{k-1}}S{\psi _{1}^{-(m-1)}},\]
where S is defined in Lemma 3.
Proof.
In this lemma we use the following notation, aimed at simplifying the formulas below:
\[\begin{array}{l}\displaystyle n=mk+j,\\ {} \displaystyle {\bar{\sigma }_{0}}=0,\\ {} \displaystyle {\bar{\sigma }_{i}}={\bar{\sigma }_{C\times C}}(i),\hspace{1em}i\ge 1,\\ {} \displaystyle {\bar{\Delta }_{i}}={\bar{\sigma }_{i}}-{\bar{\sigma }_{i-1}},\hspace{1em}i\ge 1,\\ {} \displaystyle {\bar{\zeta }_{k}}=\underset{1\le i\le k}{\max }\{{\bar{\Delta }_{i}}\},\\ {} \displaystyle {\bar{\tau }_{n}}=\min \{l\ge 1:{\bar{\Delta }_{l}}={\bar{\zeta }_{k}}\}\le k,\\ {} \displaystyle D={D_{mk+j,k}}.\end{array}\]
It is easy to see that
(31)
\[ D\subset \{{\bar{\zeta }_{k}}\ge m\}.\]
Indeed, assume $D\cap \{{\bar{\zeta }_{k}} < m\}\ne \varnothing $ and let $\omega \in D\cap \{{\bar{\zeta }_{k}} < m\}$. Then
\[ {\bar{\sigma }_{k}}(\omega )={\sum \limits_{j=1}^{k}}{\bar{\Delta }_{j}}(\omega )\le k{\bar{\zeta }_{k}}(\omega ) < mk.\]
This contradicts the definition of $D={D_{mk+j,k}}$, so (31) is proved.
Assume first that ${\bar{\tau }_{n}}=1$. This means that the period before the first hitting is the longest or, in other words, that all subsequent intervals between hittings of $C\times C$ are smaller than or equal to the first one. Note that since $m\ge 2$ and ${\bar{\zeta }_{k}}(\omega )\ge m$, $\omega \in D$, we conclude that the hitting cannot occur at the first time step. This observation allows us to write the following decomposition for ${x_{0}},{y_{0}}\in C$:
(32)
\[\begin{array}{cc}& \displaystyle {\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{nk}},{\bar{\tau }_{n}}=1\right\}\\ {} & \displaystyle =(1-{a_{t}}){\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{{x_{0}}{y_{0}}}^{(t)}}(dx,dy){\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}\ge m-1,{D_{n-1,k}}\right\},\end{array}\]
where we used the notation ${T_{xy}^{(t)}}(A,B)={T_{t}^{(1)}}(x,A){T_{t}^{(2)}}(y,B)$ introduced in the previous section.
Now, using the same arguments as in the proof of Lemma 5 we notice that for $s\ge m$
\[\begin{array}{l}\displaystyle {\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}=s,{D_{n-1,k}}\right\}={\bar{\mathbb{E}}_{x,y,0}^{t+1}}\left[{\bar{\sigma }_{1}}=s,{D_{s}},{\bar{\mathbb{P}}_{{\bar{Z}_{s}}}^{t+s}}\left\{{D_{n-s,k-1}}\right\}\right]\\ {} \displaystyle \le {\rho _{n-s,k-1}}{\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}=s,{D_{s}}\right\}\le {(1-{a_{\ast }})^{k-1}}{\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}=s,{D_{s}}\right\},\end{array}\]
where we used Lemma 5.
Substituting this upper bound into (32), we get
\[\begin{aligned}{}{\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{nk}},{\bar{\tau }_{n}}=1\right\}& \le {(1-{a_{\ast }})^{k}}\sum \limits_{s\ge m-1}{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{{x_{0}}{y_{0}}}^{(t)}}(dx,dy){\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left[{\bar{\sigma }_{1}}=s,{D_{s}}\right]\\ {} & ={(1-{a_{\ast }})^{k}}{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{{x_{0}}{y_{0}}}^{(t)}}(dx,dy){\bar{\mathbb{P}}_{x,y,0}^{t+1}}\left\{{\bar{\sigma }_{1}}\ge m-1\right\}\\ {} & \le \frac{{(1-{a_{\ast }})^{k}}}{{\psi _{1}^{m-1}}}{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{{x_{0}}{y_{0}}}^{(t)}}(dx,dy)h(x,y)\\ {} & \le \frac{{(1-{a_{\ast }})^{k-1}}S}{{\psi _{1}^{m-1}}}.\end{aligned}\]
So, finally we get the estimate
(33)
\[ {\bar{\mathbb{P}}_{{x_{0}},{y_{0}},0}^{t}}\left\{{D_{nk}},{\bar{\tau }_{n}}=1\right\}\le \frac{{S_{1}}{(1-{a_{\ast }})^{k}}}{{\psi _{1}^{m-1}}}.\]
Here we introduced ${S_{1}}=S/(1-{a_{\ast }})$ in order to keep the factor ${(1-{a_{\ast }})^{k}}$ explicit.
Now we use (31) and (33) to write a decomposition with respect to ${\bar{\tau }_{n}}$:
\[\begin{aligned}{}& {\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{nk}}\right\}={\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{nk}},{\bar{\zeta }_{k}}\ge m\right\}={\sum \limits_{j=1}^{k}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left\{{D_{nk}},{\bar{\zeta }_{k}}\ge m,{\bar{\tau }_{n}}=j\right\}\\ {} & ={\sum \limits_{j=1}^{k}}\sum \limits_{s}{\bar{\mathbb{E}}_{x,y,0}^{t}}\left[{\bar{\mathbb{P}}_{{Z_{s}^{(1)}},{Z_{s}^{(2)}},0}^{t+s}}\left\{{\bar{\tau }_{n-s}}=1,{D_{n-s,k-j+1}}\right\};{D_{s,j-1}},{\bar{\tau }_{n}}=j,{\bar{\sigma }_{j-1}}=s\right]\\ {} & \le \frac{{S_{1}}}{{\psi _{1}^{m-1}}}{\sum \limits_{j=1}^{k}}\sum \limits_{s}{(1-{a_{\ast }})^{k-j+1}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left[{D_{s,j-1}},{\bar{\tau }_{n}}=j,{\bar{\sigma }_{j-1}}=s\right]\\ {} & \le \frac{{S_{1}}}{{\psi _{1}^{m-1}}}{\sum \limits_{j=1}^{k}}\sum \limits_{s}{(1-{a_{\ast }})^{k-j+1}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left[{\bar{\tau }_{n}}=j,{\bar{\sigma }_{j-1}}=s\right]\underset{u,v\in C,t}{\sup }{\bar{\mathbb{P}}_{u,v,0}^{t}}\{{D_{s,j-1}}\}\\ {} & =\frac{{S_{1}}}{{\psi _{1}^{m-1}}}{\sum \limits_{j=1}^{k}}\sum \limits_{s}{(1-{a_{\ast }})^{k-j+1}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left[{\bar{\tau }_{n}}=j,{\bar{\sigma }_{j-1}}=s\right]{\rho _{s,j-1}}\\ {} & \le \frac{{S_{1}}}{{\psi _{1}^{m-1}}}{\sum \limits_{j=1}^{k}}{(1-{a_{\ast }})^{k-j+1}}{(1-{a_{\ast }})^{j-1}}{\bar{\mathbb{P}}_{x,y,0}^{t}}\left[{\bar{\tau }_{n}}=j\right]=\frac{{S_{1}}{(1-{a_{\ast }})^{k}}}{{\psi _{1}^{m-1}}}.\end{aligned}\]
Note that we used Lemma 5 in the last inequality.  □
Finally, we can show that $\{{\rho _{nk}}\}$ is summable.
Lemma 7.
\[ \rho :={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}{\rho _{nk}}\le \frac{1-{a_{\ast }}}{{({a_{\ast }})^{2}}}\left(1+\frac{S}{(1-{a_{\ast }})({\psi _{1}}-1)}\right)<\infty .\]
Proof.
As in the previous lemma, we write ${S_{1}}=S/(1-{a_{\ast }})$ so that the factor ${(1-{a_{\ast }})^{k}}$ remains explicit. Then we obtain the following upper bound:
\[\begin{array}{l}\displaystyle {\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}{\rho _{nk}}={\sum \limits_{k=1}^{\infty }}{\sum \limits_{n=k}^{\infty }}{\rho _{nk}}={\sum \limits_{k=1}^{\infty }}{\sum \limits_{m=1}^{\infty }}\left({\sum \limits_{j=0}^{k-1}}{\rho _{mk+j,k}}\right)\\ {} \displaystyle ={\sum \limits_{k=1}^{\infty }}\left({\sum \limits_{j=0}^{k-1}}{\rho _{k+j,k}}\right)+{\sum \limits_{k=1}^{\infty }}{\sum \limits_{m=2}^{\infty }}\left({\sum \limits_{j=0}^{k-1}}{\rho _{mk+j,k}}\right)\\ {} \displaystyle \le {\sum \limits_{k=1}^{\infty }}\left({\sum \limits_{j=0}^{k-1}}{\rho _{k+j,k}}\right)+{\sum \limits_{k=1}^{\infty }}{\sum \limits_{m=2}^{\infty }}k{(1-{a_{\ast }})^{k}}{\psi _{1}^{-(m-1)}}{S_{1}}\\ {} \displaystyle \le {\sum \limits_{k=1}^{\infty }}{(1-{a_{\ast }})^{k}}k+{\sum \limits_{k=1}^{\infty }}{\sum \limits_{m=1}^{\infty }}k{(1-{a_{\ast }})^{k}}{\psi _{1}^{-m}}{S_{1}}\\ {} \displaystyle =\frac{1-{a_{\ast }}}{{({a_{\ast }})^{2}}}+{S_{1}}{\sum \limits_{k=1}^{\infty }}k{(1-{a_{\ast }})^{k}}{\sum \limits_{m=1}^{\infty }}{\psi _{1}^{-m}}\\ {} \displaystyle =\frac{1-{a_{\ast }}}{{({a_{\ast }})^{2}}}+{S_{1}}\frac{1-{a_{\ast }}}{({\psi _{1}}-1){({a_{\ast }})^{2}}}=\frac{1-{a_{\ast }}}{{({a_{\ast }})^{2}}}\left(1+{S_{1}}\frac{1}{{\psi _{1}}-1}\right).\end{array}\]
Note that we used Lemma 6 in the first inequality and Lemma 5 in the second.  □
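For the reader's convenience, the two elementary series evaluations used above are, for $x\in (0,1)$ and ${\psi _{1}}>1$,
\[ {\sum \limits_{k=1}^{\infty }}k{x^{k}}=\frac{x}{{(1-x)^{2}}},\hspace{2em}{\sum \limits_{m=1}^{\infty }}{\psi _{1}^{-m}}=\frac{1}{{\psi _{1}}-1};\]
with $x=1-{a_{\ast }}$ the first sum equals $(1-{a_{\ast }})/{({a_{\ast }})^{2}}$.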
Lemma 8.
\[\begin{aligned}{}\sum \limits_{n\ge 1}\underset{|x|\in [m,m+1),t}{\sup }{\mathbb{P}_{x,x,1}^{t}}\left\{{D_{n}}\right\}& \le \varepsilon {M_{1}}+{\hat{r}_{m}}{M_{2}},\end{aligned}\]
where ρ is defined in Lemma 7 and
\[\begin{array}{l}\displaystyle {M_{1}}=S\frac{{\psi _{1}}(1+\rho +{\Xi _{1}}(1+S))}{{\psi _{1}}-1}+{\Xi _{1}}\frac{{\psi _{1}}S(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}},\\ {} \displaystyle {M_{2}}=2{\Xi _{0}}\left(\frac{{\psi _{1}}S(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}}+\frac{{\psi _{1}}(1+S)}{{\psi _{1}}-1}\right),\end{array}\]
and ${\Xi _{0}}$, ${\Xi _{1}}$ are the constants from Theorem 3.
Proof.
Let us introduce
\[ {H_{t}^{(m)}}(x)=\underset{|x|\in [m,m+1)}{\sup }{H_{t}}(x).\]
Using Lemmas 2, 3, 4 and 6, we get
\[\begin{aligned}{}\sum \limits_{n\ge 1}\underset{|x|\in [m,m+1)}{\sup }& {\mathbb{P}_{x,x,1}^{t}}\left\{{D_{n}}\right\}\le \sum \limits_{n\ge 1}\left(\frac{{H_{t}^{(m)}}(x)}{{\psi _{1}^{n-1}}}+\frac{S((n-1){H_{t}^{(m)}}(x)+\varepsilon )}{{\psi _{1}^{n-1}}}\right)\\ {} & +\sum \limits_{n\ge 1}\sum \limits_{k\ge 2}S{H_{t}^{(m)}}(x){\sum \limits_{i=2}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k-1}^{n-i}}{\rho _{j,k-1}}{\psi _{1}^{-(n-i-j)}}\\ {} & +\sum \limits_{n\ge 1}\sum \limits_{k\ge 2}\varepsilon S{\sum \limits_{j=k-1}^{n-1}}{\rho _{j,k-1}}{\psi _{1}^{-(n-j-1)}}={A_{1}}+{A_{2}}+\varepsilon S{A_{3}}.\end{aligned}\]
We start with ${A_{1}}$.
\[\begin{aligned}{}{A_{1}}=\sum \limits_{n\ge 1}\left(\frac{{H_{t}^{(m)}}(x)}{{\psi _{1}^{n-1}}}+\frac{S((n-1){H_{t}^{(m)}}(x)+\varepsilon )}{{\psi _{1}^{n-1}}}\right)& =\frac{{\psi _{1}}({H_{t}^{(m)}}(x)+\varepsilon S)}{{\psi _{1}}-1}\\ {} & +\frac{{\psi _{1}}S{H_{t}^{(m)}}(x)}{{({\psi _{1}}-1)^{2}}}.\end{aligned}\]
Before moving to ${A_{2}}$, let us first look at the sum
\[\begin{aligned}{}& {\sum \limits_{n=1}^{\infty }}\sum \limits_{k\ge 2}{\sum \limits_{i=1}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k-1}^{n-i}}{\rho _{j,k-1}}{\psi _{1}^{-(n-i-j)}}\\ {} & \hspace{1em}={\sum \limits_{n=1}^{\infty }}\sum \limits_{k\ge 1}{\sum \limits_{i=1}^{n-k}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k}^{n-i}}{\rho _{jk}}{\psi _{1}^{-(n-i-j)}}.\end{aligned}\]
It is clear that k cannot exceed n, and that ${\rho _{jk}}=0$ if $j < k$. Put ${p_{n}}={\psi _{1}^{-(n-1)}}$, $n\ge 1$, ${p_{0}}=0$ and ${q_{j}}(k)={\rho _{j+k,k}}$ for fixed k, $j\ge 0$. Swapping the order of summation over n and k, we can rewrite the previous sums as
\[\begin{aligned}{}{\sum \limits_{k=1}^{\infty }}& \sum \limits_{n\ge k}{\sum \limits_{i=1}^{n-k}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k}^{n-i}}{\rho _{jk}}{\psi _{1}^{-(n-i-j)}}\\ {} & ={\sum \limits_{k=1}^{\infty }}\sum \limits_{n\ge k}{\sum \limits_{i=1}^{n-k}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=0}^{n-k-i}}{q_{j}}(k){\psi _{1}^{-(n-k-i-j)}}\\ {} & ={\sum \limits_{k=1}^{\infty }}\sum \limits_{n\ge k}{(p\mathrm{\star }q(k)\mathrm{\star }p)_{n-k}}={\sum \limits_{k=1}^{\infty }}{\left({\sum \limits_{n=1}^{\infty }}{p_{n}}\right)^{2}}\left(\sum \limits_{j\ge 0}{q_{j}}(k)\right)\\ {} & ={\left({\sum \limits_{n=1}^{\infty }}{\psi _{1}^{-(n-1)}}\right)^{2}}{\sum \limits_{k=1}^{\infty }}{\sum \limits_{j=1}^{\infty }}{\rho _{jk}}={\left(\frac{{\psi _{1}}}{{\psi _{1}}-1}\right)^{2}}\rho .\end{aligned}\]
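As an optional aside (not part of the proof), the identity $\textstyle\sum _{n}{(p\star q(k)\star p)_{n}}={({\textstyle\sum _{n}}{p_{n}})^{2}}{\textstyle\sum _{j}}{q_{j}}(k)$ behind the last two equalities can be sanity-checked numerically. The short Python snippet below is ours and purely illustrative: the value of ${\psi _{1}}$ and the finitely supported sequence q are arbitrary choices, and the geometric series is truncated at a large N.

import numpy as np

# Truncated numerical check of  sum_n (p * q * p)_n = (sum_n p_n)^2 * sum_j q_j.
psi1 = 1.5                             # illustrative value, psi1 > 1
N = 2000                               # truncation level of the geometric series
p = psi1 ** (-np.arange(N))            # p_n = psi1^{-(n-1)}, n = 1, ..., N
q = np.array([0.3, 0.1, 0.05, 0.02])   # arbitrary nonnegative summable sequence

lhs = np.convolve(np.convolve(p, q), p).sum()  # sum of the triple convolution
rhs = p.sum() ** 2 * q.sum()
print(lhs, rhs)                        # agree up to geometric truncation error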
Now we can calculate ${A_{2}}$.
\[\begin{aligned}{}{A_{2}}& =\sum \limits_{n\ge 1}\sum \limits_{k\ge 2}S{H_{t}^{(m)}}(x){\sum \limits_{i=2}^{n-k+1}}{\psi _{1}^{-(i-1)}}{\sum \limits_{j=k-1}^{n-i}}{\rho _{j,k-1}}{\psi _{1}^{-(n-i-j)}}\\ {} & =S{H_{t}^{(m)}}(x)\rho {\left(\frac{{\psi _{1}}}{{\psi _{1}}-1}\right)^{2}}-S{H_{t}^{(m)}}(x){A_{3}}.\end{aligned}\]
For ${A_{3}}$ we have
\[\begin{aligned}{}{A_{3}}& =\sum \limits_{n\ge 1}\sum \limits_{k\ge 2}{\sum \limits_{j=k-1}^{n-1}}{\rho _{j,k-1}}{\psi _{1}^{-(n-j-1)}}=\sum \limits_{n\ge 1}\sum \limits_{k\ge 1}{\sum \limits_{j=k}^{n-1}}{\rho _{j,k}}{\psi _{1}^{-(n-j-1)}}\\ {} & =\left(\sum \limits_{n\ge 0}{\psi _{1}^{-n}}\right)\left(\sum \limits_{j,k}{\rho _{j,k}}\right)=\frac{\rho {\psi _{1}}}{{\psi _{1}}-1}.\end{aligned}\]
Now we have
\[\begin{aligned}{}\sum \limits_{n\ge 1}\underset{|x|\in [m,m+1)}{\sup }& {\mathbb{P}_{x,x,1}^{t}}\left\{{D_{n}}\right\}\le \frac{{\psi _{1}}({H_{t}^{(m)}}(x)+\varepsilon S)}{{\psi _{1}}-1}\\ {} & +\frac{{\psi _{1}}S{H_{t}^{(m)}}(x)}{{({\psi _{1}}-1)^{2}}}+S{H_{t}^{(m)}}(x)\rho {\left(\frac{{\psi _{1}}}{{\psi _{1}}-1}\right)^{2}}\\ {} & +(\varepsilon S-S{H_{t}^{(m)}}(x))\frac{\rho {\psi _{1}}}{{\psi _{1}}-1}\le \frac{{\psi _{1}}S{H_{t}^{(m)}}(x)(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}}\\ {} & +\frac{{\psi _{1}}({H_{t}^{(m)}}(x)+\varepsilon S+S{H_{t}^{(m)}}(x)+\rho \varepsilon S-\rho S{H_{t}^{(m)}}(x))}{{\psi _{1}}-1}\\ {} & \le \varepsilon S\frac{{\psi _{1}}(1+\rho )}{{\psi _{1}}-1}+{H_{t}^{(m)}}(x)\left(\frac{{\psi _{1}}S(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}}+\frac{{\psi _{1}}(1+S)}{{\psi _{1}}-1}\right).\end{aligned}\]
We can use the result of Theorem 3 and Condition T to obtain
\[\begin{array}{l}\displaystyle {H_{t}^{(m)}}(x)\le \underset{|x|\in [m,m+1)}{\sup }{\int _{{\mathbb{R}^{2}}}}\frac{{R_{t}^{(1)}}(x,dy){R_{t}^{(2)}}(x,dz)}{1-{Q_{t}}(x,\mathbb{R})}\left({\Xi _{0}}(|y|+|z|)+{\Xi _{1}}\right)\\ {} \displaystyle \le {\Xi _{0}}\left(\underset{|x|\in [m,m+1)}{\sup }{\int _{\mathbb{R}}}{R_{t}^{(1)}}(x,dy)|y|+\underset{|x|\in [m,m+1)}{\sup }{\int _{\mathbb{R}}}{R_{t}^{(2)}}(x,dy)|y|\right)+\varepsilon {\Xi _{1}}\\ {} \displaystyle \le 2{\Xi _{0}}{\hat{r}_{m}}+\varepsilon {\Xi _{1}}.\end{array}\]
Finally we can write
\[\begin{aligned}{}\sum \limits_{n\ge 1}\underset{|x|\in [m,m+1)}{\sup }& {\mathbb{P}_{x,x,1}^{t}}\left\{{D_{n}}\right\}\le 2{\hat{r}_{m}}{\Xi _{0}}\left(\frac{{\psi _{1}}S(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}}+\frac{{\psi _{1}}(1+S)}{{\psi _{1}}-1}\right)\\ {} & +\varepsilon \left(S\frac{{\psi _{1}}(1+\rho +{\Xi _{1}}(1+S))}{{\psi _{1}}-1}+{\Xi _{1}}\frac{{\psi _{1}}S(1+\rho {\psi _{1}})}{{({\psi _{1}}-1)^{2}}}\right).\end{aligned}\]
 □

Appendix A

In this appendix we demonstrate how to calculate the constants involved in the bound of Theorem 2.
First, we note that the bound in Theorem 2 depends on the constants ${M_{1}}$ and ${M_{2}}$ defined in Lemma 8. These constants, in turn, depend on ${\Xi _{0}}$, ${\Xi _{1}}$ defined in Theorem 3 and on S defined in Lemma 3. The constant ${\Xi _{1}}$ is known once ${\Xi _{0}}$ is known (see (18)). Thus, we have to calculate ${\Xi _{0}}$ and S.
We start with ${\Xi _{0}}$. From (17)
\[ {\Xi _{0}}=\frac{M{C_{0}^{2}}(2+2G+c-\delta )}{1-{a^{\ast }}},\]
and we see that the only unknown constant is M. It is defined in [12], Theorem 4.2, as
\[ M=\left(1+\frac{1}{1-\sqrt{(1+\varepsilon )(1-{\gamma _{1}})}}\right)\left(\frac{1+{\gamma _{0}}}{1-{\gamma _{0}}}\right),\]
where ε is an arbitrary constant such that $(1+\varepsilon )(1-{\gamma _{1}})<1$. Thus, we only need to calculate ${\gamma _{0}}$ and ${\gamma _{1}}$. This can be done using Theorem 4.3 from [12]. The essential condition in Theorem 4.3 reduces to an exponentially small bound on the tails of ${\sigma _{C}^{(i)}}$:
\[ {\mathbb{P}_{x}^{t}}\left\{{\sigma _{C}^{(i)}}>n\right\}\le (V(x){𝟙_{\mathbb{R}\setminus C}}(x)+D{𝟙_{C}}(x)){\psi ^{-n}},\]
for some $\psi >1$ and $V(x)$. Under the conditions of Theorem 2 (and thus Corollary 1) this is true, with $V(x)={C_{0}}(1+|x|)$ and $D={C_{0}}(c+2+2G-\delta )$.
We start by calculating
\[ {C_{0}}\underset{t,x\in C,i}{\sup }{\int _{\mathbb{R}}}\frac{{P_{t,i}}(x,dy)-{a_{t}}{\nu _{t}}(dy)}{1-{a_{t}}}(1+|y|)\le {C_{0}}\left(1+\frac{1+2G+c-\delta }{1-{a^{\ast }}}\right).\]
We denote the last expression by $\hat{Q}$, so that
\[ \hat{Q}={C_{0}}\left(1+\frac{1+2G+c-\delta }{1-{a^{\ast }}}\right).\]
Theorem 4.3 from [12] now gives us
\[ {\delta _{0}}=\sqrt{1+\theta \frac{{a_{\ast }}(\psi -1)}{(1-{a_{\ast }})\psi \hat{Q}+{a_{\ast }}}},\]
where θ is an arbitrary number in $(0,1)$.
\[\begin{array}{l}\displaystyle {\gamma _{0}}={\left(\frac{\theta {a_{\ast }}}{\psi +{a_{\ast }}(1-\theta ){(\psi \hat{Q}(1-{a_{\ast }}))^{-1}}}\right)^{\frac{1}{2}}},\\ {} \displaystyle \hat{D}=D\frac{1+{\gamma _{0}}}{1-{\gamma _{0}}}\left(1+\frac{{\delta _{0}}-1}{\psi -{\delta _{0}}}\psi \hat{Q}\right)\left(1+\frac{{\delta _{0}}(\psi -1)}{\psi -{\delta _{0}}}\right),\\ {} \displaystyle m=\min \{n\ge 1|\hspace{2.5pt}\hat{D}{\delta _{0}^{-n}}<1\},\\ {} \displaystyle {\gamma _{1}}={({a_{\ast }}\underset{t}{\inf }{\nu _{t}}(C))^{m}}\exp \left(\ln \left(1-\hat{D}{\delta _{0}^{-m}}\right)\left(\frac{{\delta _{0}^{m+1}}}{{\delta _{0}}-1}-1\right)\right).\end{array}\]
So we have calculated ${\gamma _{0}}$ and ${\gamma _{1}}$, which allows us to express M, and thus ${\Xi _{0}}$ and ${\Xi _{1}}$, in terms of known quantities.
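To make this recipe concrete, the following Python sketch (ours, not part of the paper) chains the formulas above. All numerical inputs are hypothetical placeholders: a_low and a_up stand for ${a_{\ast }}$ and ${a^{\ast }}$, nu_inf for ${\inf _{t}}{\nu _{t}}(C)$, and psi, theta for ψ and θ. Since ${\gamma _{1}}$ is typically extremely small, the sketch tracks $\ln {\gamma _{1}}$ and, taking $\varepsilon ={\gamma _{1}}/2$ (so that $(1+\varepsilon )(1-{\gamma _{1}})<1$), uses the first-order approximation $1-\sqrt{(1+\varepsilon )(1-{\gamma _{1}})}\approx {\gamma _{1}}/4$, which gives $M\approx 4{\gamma _{1}^{-1}}(1+{\gamma _{0}})/(1-{\gamma _{0}})$.

import math

# Hypothetical inputs, for illustration only.
a_low, a_up = 0.25, 0.25              # a_* = inf_t a_t and a^* = sup_t a_t
C0, G, delta = 1.0, 0.05, 0.6         # constants from Corollary 1
c = max((2 * G + 1) / delta - 1, 1)   # as in (2)
psi, theta, nu_inf = 3.0, 0.9, 0.5    # psi > 1, theta in (0, 1), inf_t nu_t(C)

D = C0 * (c + 2 + 2 * G - delta)
Q_hat = C0 * (1 + (1 + 2 * G + c - delta) / (1 - a_up))
delta0 = math.sqrt(1 + theta * a_low * (psi - 1)
                   / ((1 - a_low) * psi * Q_hat + a_low))
gamma0 = math.sqrt(theta * a_low
                   / (psi + a_low * (1 - theta) / (psi * Q_hat * (1 - a_low))))
D_hat = (D * (1 + gamma0) / (1 - gamma0)
         * (1 + (delta0 - 1) / (psi - delta0) * psi * Q_hat)
         * (1 + delta0 * (psi - 1) / (psi - delta0)))

m = 1                                  # smallest n with D_hat * delta0^{-n} < 1
while D_hat * delta0 ** (-m) >= 1:
    m += 1

# gamma1 is astronomically small, so we work with its natural logarithm.
log_gamma1 = (m * math.log(a_low * nu_inf)
              + math.log1p(-D_hat * delta0 ** (-m))
              * (delta0 ** (m + 1) / (delta0 - 1) - 1))

# First-order approximation for tiny gamma1 with eps = gamma1 / 2.
log10_M = (math.log10(4 * (1 + gamma0) / (1 - gamma0))
           - log_gamma1 / math.log(10))
log10_Xi0 = log10_M + math.log10(C0 ** 2 * (2 + 2 * G + c - delta) / (1 - a_up))
print(m, log_gamma1, log10_M, log10_Xi0)

The very large value of ${\log _{10}}{\Xi _{0}}$ produced by such inputs illustrates how conservative computable bounds of this kind tend to be in practice.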
Finally, we should calculate S from Lemma 3.
\[\begin{array}{l}\displaystyle S=(1-{a_{\ast }})\underset{x,y\in C,t}{\sup }{\int _{{\mathbb{R}^{2}}\setminus C\times C}}{T_{xy}^{(t)}}(du,dv)h(u,v)\\ {} \displaystyle \le (1-{a_{\ast }})\underset{t}{\sup }{\int _{\mathbb{R}}}\left({T_{t}^{(1)}}(x,du){\Xi _{0}}|u|+{T_{t}^{(2)}}(y,dv){\Xi _{0}}|v|+{\Xi _{1}}\right)\\ {} \displaystyle \le \frac{1-{a_{\ast }}}{1-{a^{\ast }}}\left(2{\Xi _{0}}(1+2G+c-\delta )+{\Xi _{1}}(1-{a^{\ast }})\right).\end{array}\]

Appendix B

Condition D.
We say that a sequence of Markov kernels $({P_{t}},\hspace{2.5pt}t\in {\mathbb{N}_{0}})$ satisfies Condition D with the set $C\in ℰ$ if:
  • (1) There exists a sequence of measurable functions ${V_{k}}:E\to [1,\infty ]$ and two sequences of positive constants $\{{\lambda _{k}},\hspace{2.5pt}k\ge 0\}$, and $\{{b_{k}},\hspace{2.5pt}k\ge 0\}$ such that for all $x\in E$
    (34)
    \[ {P_{k}}{V_{k+1}}(x)\le {\lambda _{k+1}}{V_{k}}(x)+{b_{k}}{𝟙_{C}}(x);\]
  • (2) The sequence $\{{\lambda _{k}},\hspace{2.5pt}k\ge 0\}$ defined in item (1), satisfies
    \[ {\sum \limits_{k=0}^{\infty }}{\left({\prod \limits_{j=0}^{k}}\left({\lambda _{j}}\vee 1\right)\right)^{-1}}{(1-{\lambda _{k}})^{+}}=\infty .\]
    Here $a\vee b=\max \{a,b\}$, and ${a^{+}}=\max \{a,0\}$.
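For example (a trivial homogeneous instance): if ${\lambda _{k}}\equiv \lambda \in (0,1)$, then every factor ${\lambda _{j}}\vee 1$ equals 1 and the series in item (2) becomes ${\textstyle\sum _{k\ge 0}}(1-\lambda )=\infty $, so Condition D reduces to the classical geometric drift condition. In general, item (2) quantifies how often, and how strongly, the kernels must contract.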
Theorem 4.
Let $({P_{t}})$ be a sequence of Markov transition kernels, $C\in ℰ$ be some set and Condition D hold true.
Then the following two statements hold true.
  • 1. For any $t\in {\mathbb{N}_{0}}$ and $x\in E$ such that ${P_{t}}{V_{t+1}}(x)<\infty $:
    \[ {\mathbb{P}_{x}^{t}}\left\{{\sigma _{C}}<\infty \right\}=1.\]
  • 2. For any $x\in E$, $t\in {\mathbb{N}_{0}}$:
    \[ {\mathbb{E}_{x}^{t}}\left[{\prod \limits_{j=1}^{{\sigma _{C}}}}{\lambda _{j}^{-1}}\right]\le {V_{t}}(x)+{\lambda _{t+1}^{-1}}{b_{t}}{𝟙_{C}}(x).\]
The theorem is proven in [13], Theorem 1.
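To illustrate how Theorem 4 is used in this paper, here is a sketch (ours) of the argument behind Theorem 1. For the AR(1) chain one takes ${V_{k}}(x)=1+|x|$ and ${\lambda _{k}}={\alpha _{k}}+\delta $. Since the ${W_{k}}$ are centered, $\mathbb{E}|{W_{k}}|=2\mathbb{E}{W_{k}^{+}}\le 2G$, and for $|x|>c$ with c from (2),
\[ {P_{k}}{V_{k+1}}(x)=\mathbb{E}\left[1+|{\alpha _{k+1}}x+{W_{k+1}}|\right]\le 1+{\alpha _{k+1}}|x|+2G\le ({\alpha _{k+1}}+\delta )(1+|x|),\]
so (34) holds with the constant ${b_{k}}$ absorbing the remainder on $C=[-c,c]$. Item (2) of Condition D is then exactly condition 1 of Theorem 1, and item 2 of Theorem 4 yields the bound (1).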

References

[1] 
Andrieu, C., Fort, G., Vihola, M.: Quantitative convergence rates for subgeometric Markov chains. J. Appl. Probab. 52, 391–404 (2015). MR3372082. https://doi.org/10.1239/jap/1437658605
[2] 
Baxendale, P.: Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Probab. 15, 700–738 (2005). MR2114987. https://doi.org/10.1214/105051604000000710
[3] 
Douc, R., Moulines, E., Soulier, P.: Practical drift conditions for subgeometric rates of convergence. Ann. Appl. Probab. 14, 1353–1377 (2004). MR2071426. https://doi.org/10.1214/105051604000000323
[4] 
Douc, R., Moulines, E., Soulier, P.: Quantitative bounds on convergence of time-inhomogeneous Markov chains. Ann. Appl. Probab. 14, 1643–1665 (2004). MR2099647. https://doi.org/10.1214/105051604000000620
[5] 
Douc, R., Moulines, E., Priouret, P., Soulier, P.: Markov Chains. Springer, Cham (2018). MR3889011. https://doi.org/10.1007/978-3-319-97704-1
[6] 
D’Amico, G., Gismondi, F., Janssen, J., Manca, R., Petroni, F., Volpe di Prignano, E.: The study of basic risk processes by discrete-time non-homogeneous Markov processes. Theory Probab. Math. Stat. 96, 27–43 (2018). MR3666870. https://doi.org/10.1090/tpms/1032
[7] 
Fort, G., Roberts, G.O.: Subgeometric ergodicity of strong Markov processes. Ann. Appl. Probab. 15, 1565–1589 (2005). MR2134115. https://doi.org/10.1214/105051605000000115
[8] 
Gismondi, F., Janssen, J., Manca, R.: Non-homogeneous time convolutions, renewal processes and age-dependent mean number of motorcar accidents. Ann. Actuar. Sci. 9, 36–57 (2015)
[9] 
Golomoziy, V.: An inequality for the coupling moment in the case of two inhomogeneous Markov chains. Theory Probab. Math. Stat. 90, 43–56 (2015). MR3241859. https://doi.org/10.1090/tpms/948
[10] 
Golomoziy, V.: An estimate of the expectation of the excess of a renewal sequence generated by a time-inhomogeneous Markov chain if a square-integrable majorizing sequence exists. Theory Probab. Math. Stat. 94, 53–62 (2017). MR3553453. https://doi.org/10.1090/tpms/1008
[11] 
Golomoziy, V.: On estimation of expectation of simultaneous renewal time of time-inhomogeneous Markov chains using dominating sequence. Mod. Stoch. Theory Appl. 6(3), 333–343 (2019). MR4028080. https://doi.org/10.15559/19-vmsta138
[12] 
Golomoziy, V.: Exponential moments of simultaneous hitting time for non-atomic Markov chains. Glas. Mat. 57(1), 129–147 (2022). MR4450636. https://doi.org/10.3336/gm.57.1.09
[13] 
Golomoziy, V.: Computable bounds of exponential moments of simultaneous hitting time for two time-inhomogeneous atomic Markov chains. In: Stochastic Processes, Statistical Methods, and Engineering Mathematics. SPAS 2019. Springer Proceedings in Mathematics and Statistics 408, pp. 97–119. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-17820-7_5
[14] 
Golomoziy, V., Kartashov, M.: The mean coupling time for independent discrete renewal processes. Theory Probab. Math. Stat. 84, 79–86 (2012). MR2857418. https://doi.org/10.1090/S0094-9000-2012-00855-7
[15] 
Golomoziy, V., Kartashov, M.: Maximal coupling procedure and stability of discrete Markov chains. I. Theory Probab. Math. Stat. 86, 93–104 (2013). MR3241447. https://doi.org/10.1090/S0094-9000-2014-00905-9
[16] 
Golomoziy, V., Kartashov, M.: Maximal coupling procedure and stability of discrete Markov chains. II. Theory Probab. Math. Stat. 87, 65–78 (2013). MR3241447. https://doi.org/10.1090/S0094-9000-2014-00905-9
[17] 
Golomoziy, V., Kartashov, M.: On the integrability of the coupling moment for time-inhomogeneous Markov chains. Theory Probab. Math. Stat. 89, 1–12 (2014). MR3235170. https://doi.org/10.1090/S0094-9000-2015-00930-3
[18] 
Golomoziy, V., Kartashov, M.: Maximal coupling and stability of discrete non-homogeneous Markov chains. Theory Probab. Math. Stat. 91, 17–27 (2015). MR2986452. https://doi.org/10.1090/S0094-9000-2013-00891-6
[19] 
Golomoziy, V., Kartashov, M.: Maximal coupling and V-stability of discrete nonhomogeneous Markov chains. Theory Probab. Math. Stat. 93, 19–31 (2016). MR3553437. https://doi.org/10.1090/tpms/992
[20] 
Golomoziy, V., Mishura, Y.: Stability estimates for finite-dimensional distributions of time-inhomogeneous Markov chains. Mathematics 8(2), 1–13 (2020). https://doi.org/10.3390/math8020174
[21] 
Golomoziy, V., Kartashov, M., Kartashov, Y.: Impact of the stress factor on the price of widow’s pensions. Proofs. Theory Probab. Math. Stat. 92, 17–22 (2016). MR3330687. https://doi.org/10.1090/tpms/979
[22] 
Lindvall, T.: Lectures on the Coupling Method. John Wiley and Sons, New York (1992). MR1180522
[23] 
Melfi, V.: Nonlinear Markov renewal theory with statistical applications. Ann. Probab. 20, 753–771 (1992). MR1159572
[24] 
Meyn, S., Tweedie, R.: Markov Chains and Stochastic Stability. Springer, London (1993). MR1287609. https://doi.org/10.1007/978-1-4471-3267-7
[25] 
Thorisson, H.: Coupling, Stationarity, and Regeneration. Springer, New York (2000). MR1741181. https://doi.org/10.1007/978-1-4612-1236-2
Copyright
© 2023 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Coupling, renewal theory, inhomogeneous Markov chain, autoregressive model

MSC2010
60J10 60K05

