1 Introduction
Starting with Dellacherie [4], the following simple model has been studied and intensively used in applications. Given a random variable γ with positive values on a probability space $(\Omega ,\mathcal{F},\mathsf{P})$, one considers the smallest filtration with respect to which γ is a stopping time (or, equivalently, the process ${\mathbb{1}_{\{t\geqslant \gamma \}}}$ is adapted). In particular, Dellacherie gives a formula for the compensator of this single jump process ${\mathbb{1}_{\{t\geqslant \gamma \}}}$. Chou and Meyer [2] describe all local martingales with respect to this filtration and prove a martingale representation theorem. A significant contribution was made in a recent paper by Herdegen and Herrmann [13], which classifies local martingales in this model according to whether they are strict local martingales, uniformly integrable martingales, etc. Let us also mention some related papers [1, 15, 16, 3, 8, 21, 12], where, in particular, local martingales with respect to filtrations generated by jump processes or measures of a certain kind are studied.
Let us clarify that in the above model every local martingale has the form
or
where γ is a random variable with values in, say, $(0,+\infty )$, and F, H, and $K=F-H$ are deterministic functions. Denote by G the distribution function of γ, put $\overline{G}(t)=1-G(t)$, and let ${t_{G}}=\sup \{t:G(t)<1\}$ be the right endpoint of the distribution of γ. Assume that $\mathsf{E}|{M_{t}}|<\infty $; then
where the corresponding Lebesgue–Stieltjes integral is finite. If $({M_{t}})$ is a martingale, then $\mathsf{E}({M_{t}})=\mathsf{E}({M_{0}})$, and this equality can be written as
and can be viewed as a functional equation concerning one of the functions in $(F,G,H)$ or $(F,G,K)$, where the other two functions are assumed to be given. In fact, this equation holds for $t<{t_{G}}$ or $t\leqslant {t_{G}}$, the latter in the case where ${t_{G}}<\infty $ and $\mathsf{P}(\gamma ={t_{G}})>0$. Moreover, it turns out that this is not only a necessary condition but also a sufficient one for ${({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ given by (1) to be a local martingale. This observation allows us to reduce problems to solving this functional equation. For example, to find the compensator $F(t\wedge \gamma )$ of ${\mathbb{1}_{\{t\geqslant \gamma \}}}$ as in [4] one needs to find a solution F given G and $K\equiv 1$. A possible way to explain the idea in [2] is the following: the terminal value ${M_{\infty }}$ of any local martingale M in this model is represented as $H(\gamma )$, and to find a representation (1) for M it is enough to solve the equation for F given G and H; the linear dependence between H and F results in a representation theorem. Contrariwise, in [13] the authors suggest finding H from the equation for given F and G. This allows them to study global properties of M.
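As a quick illustration of this reduction (a worked special case added here for the reader's convenience; it is not taken from [4]), assume that G is continuous with density g, take $K\equiv 1$ and look for a solution with $F(0)=0$. Since $H=F-K=F-1$, the equation $\mathsf{E}({M_{t}})=\mathsf{E}({M_{0}})$ becomes
\[ F(t)\overline{G}(t)+{\int _{0}^{t}}\big(F(s)-1\big)g(s)\hspace{0.1667em}ds=0,\hspace{1em}t<{t_{G}},\]
and differentiating (assuming F is differentiable) yields $F^{\prime }(t)=g(t)/\overline{G}(t)$, i.e. $F(t)=-\ln \overline{G}(t)$, the cumulative hazard function. In particular, for $\overline{G}(t)={e^{-\lambda t}}$ one gets $F(t)=\lambda t$, so the compensator of ${\mathbb{1}_{\{t\geqslant \gamma \}}}$ is $\lambda (t\wedge \gamma )$.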
In this paper we consider a more general model, where all randomness appears “at time γ” but it may contain much more information than γ does. We start with a random variable γ on a probability space $(\Omega ,\mathcal{F},\mathsf{P})$, and define a single jump filtration $({\mathcal{F}_{t}})$ in such a way that nothing happens strictly before γ, γ is a stopping time with respect to it, and the σ-field ${\mathcal{F}_{\gamma }}$ of events that occur before or at time γ coincides with $\mathcal{F}$ (in fact, on the set $\{\gamma <\infty \}$). We prove that every local martingale with respect to this filtration has the representation
where now L is a random variable which is not necessarily a function of γ. However, denoting $H(t)=\mathsf{E}[L|\gamma =t]$, we come to the same functional equation of type (2).
Some results of the paper can be deduced from known results for marked point processes, at least if $\mathcal{F}$ is countably generated; this applies, for example, to Theorem 5 about the compensator of a single jump process. Another example is Corollary 1 which says that every local martingale is the sum of a local martingale of form (1) and an “orthogonal” local martingale, the latter being characterised, essentially, by the property $F(t)\equiv 0$. The reader can recognize in this decomposition the representation of a local martingale as the sum of two stochastic integrals with respect to random measures, see [16] and [17]. However, our direct proofs are simpler due to the key feature of our paper. Namely, we obtain a simple necessary and sufficient condition for a process to be a local martingale and later exploit it. A description of all local martingales via a full description of all possible solutions to a functional equation of type (2) is a simple consequence of this necessary and sufficient condition. In particular, an absolute continuity type property of F with respect to G, considered as an assumption in [13], is proved to be a necessary condition. An elementary analysis of a functional equation of type (2) shows that, if γ has no atom at its right endpoint, there are different F satisfying the equation for given H and G. In particular, there is a local martingale M such that ${M_{0}}=1$ and ${M_{\infty }}=0$; M is necessarily a closed supermartingale.
Another important feature of our model, in contrast to Dellacherie’s model, is that it admits σ-martingales which are not local martingales.
Let us also mention some other papers where processes of form (1) or (3) are considered. Processes of form (1) with ${t_{G}}=\infty $ are typical in the modelling of credit risk, see, e.g., [18] and [19, Chapter 7], where usually F is expressed via G and one needs to find H. Since ${t_{G}}=\infty $, such a process is a martingale. For example, in the simplest case $F=1/\overline{G}$ and hence $H=0$. This is the same process as the one mentioned two paragraphs above. Single jump filtrations and processes of form (3) appear in [10] and [11]. It is interesting to note that, in [11], the random “time” γ is, in fact, the global maximum of a random process, say, a convergent continuous local martingale.
Section 2 contains our main results. In Theorem 1 we establish a necessary and sufficient condition for a process of type (3) to be a local martingale. This allows us to obtain a full description of all local martingales through a functional equation of type (2) in Theorem 2. A similar description is available for σ-martingales, see Theorem 3. Finally, Theorem 4 classifies local martingales in accordance with their global behaviour up to ∞. Section 3 contains the proofs of these results. In Section 4 we consider complementary questions. Namely, we find the compensator of a single jump process. We also consider submartingales of class $(\Sigma )$, see [22], and show that their transformation via a change of time leads to processes of type (3). As a consequence, we reprove Theorem 4.1 of [22].
We use the following notation: ${\mathbb{R}_{+}}=[0,+\infty )$, ${\overline{\mathbb{R}}_{+}}=[0,+\infty ]$, $a\wedge b=\min \hspace{0.1667em}\{a,b\}$. The arrows ↑ and ↓ indicate monotone convergence, while ${\lim \nolimits_{s\upuparrows t}}$ stands for ${\lim \nolimits_{s\to t,s<t}}$.
A real-valued function $Z(t)$ defined at least for $t\in [0,s)$ is called càdlàg on $[0,s)$ if it is right-continuous at every $t\in [0,s)$ and has a finite left-hand limit at every $t\in (0,s)$; it is not assumed that it has a limit as $t\upuparrows s$. If, additionally, a finite limit ${\lim \nolimits_{t\upuparrows s}}Z(t)$ exists, then $Z(t)$ is called càdlàg on $[0,s]$. Functions Z of finite variation on compact intervals are understood as usual and are assumed to be càdlàg. The variation at 0 includes $|Z(0)|$, as if Z were extended by 0 on the negative half-axis. The total variation of Z over $[0,t]$ is denoted by $\operatorname{Var}{(Z)_{t}}$. We say that Z has a finite variation over $[0,s)$, $s\leqslant \infty $, if ${\lim \nolimits_{t\upuparrows s}}\operatorname{Var}{(Z)_{t}}<\infty $. We denote $\operatorname{Var}{(Z)_{\infty }}:={\lim \nolimits_{t\to \infty }}\operatorname{Var}{(Z)_{t}}$.
A filtration on a probability space $(\Omega ,\mathcal{F},\mathsf{P})$ is an increasing right-continuous family $\mathbb{F}={({\mathcal{F}_{t}})_{t\in {\mathbb{R}_{+}}}}$ of sub-σ-fields of $\mathcal{F}$. No completeness assumption is made. As usual, we define ${\mathcal{F}_{\infty }}=\sigma \big({\cup _{t\in {\mathbb{R}_{+}}}}{\mathcal{F}_{t}}\big)$ and, for a stopping time τ the σ-field ${\mathcal{F}_{\tau }}$ is defined by
\[ {\mathcal{F}_{\tau }}=\big\{A\in {\mathcal{F}_{\infty }}:A\cap \{\tau \leqslant t\}\in {\mathcal{F}_{t}}\hspace{2.5pt}\text{for every}\hspace{2.5pt}t\geqslant 0\big\}.\]
A set $B\subset \Omega \times {\mathbb{R}_{+}}$ is evanescent if $B\subseteq A\times {\mathbb{R}_{+}}$, where $A\in \mathcal{F}$ and $\mathsf{P}(A)=0$. We say that two stochastic processes X and Y are indistinguishable if $\{X\ne Y\}$ is an evanescent set.
Since we do not suppose completeness of the filtration $\mathbb{F}$, we cannot expect that the processes we consider have all paths càdlàg. Instead we consider processes almost all of whose paths are càdlàg. Obviously, for any càdlàg process X adapted with respect to the completed filtration, there is an a.s. càdlàg $\mathbb{F}$-adapted process indistinguishable from X. Furthermore, any $\mathbb{F}$-adapted process X with a.s. càdlàg paths is indistinguishable from an $\mathbb{F}$-optional process Y whose paths are right-continuous everywhere and have finite left-hand limits for $t<\rho (\omega )$ and $t>\rho (\omega )$, where ρ is an $\mathbb{F}$-stopping time with $\mathsf{P}(\rho <\infty )=0$; let us call such Y regular and ρ a moment of irregularity for Y. Dellacherie and Meyer [6, VI.5 (a), p. 70] prove that, if the filtration is not complete, every supermartingale X (with right-continuous expectation) has a modification Y with the above regularity property. If we are given just an adapted process X with almost all paths càdlàg, we define ρ and Y from the values of X on a countable set exactly as is done in [6] in the case where X is a supermartingale. Using [5, Theorem IV.22, p. 94], we obtain that $\rho (\omega )=\infty $ and the paths ${X_{\cdot }}(\omega )$ and ${Y_{\cdot }}(\omega )$ coincide for those ω for which ${X_{\cdot }}(\omega )$ is càdlàg everywhere. Moreover, if $\rho (\omega )<\infty $, then ${Y_{t}}(\omega )$ is càdlàg for $t<\rho (\omega )$ and one may put ${Y_{t}}(\omega )=0$ for $t\geqslant \rho (\omega )$.
Processes with finite variation are adapted and not assumed to start from 0. A moment of irregularity for them has additionally the property that their paths have finite variation over $[0,t]$ for all $t<\rho (\omega )$.
It is instructive to mention that, in our model, there is no need to use general results on the existence of (a.s.) càdlàg modifications for martingales since they can be proved directly. For example, if L is an integrable random variable with $\mathsf{E}L=0$, then the process M given by (3) with $F(t)=\mathsf{E}[L|\gamma >t]{\mathbb{1}_{\{t<{t_{G}}\}}}$ satisfies ${M_{t}}=\mathsf{E}[L|{\mathcal{F}_{t}}]$ a.s. for an arbitrary t. It is trivial to check that this function F has finite variation over any $[0,t]$ with $\mathsf{P}(\gamma >t)>0$ (and over $[0,{t_{G}})$ if $\mathsf{P}(\gamma ={t_{G}}<\infty )>0$). Thus M is regular. It may happen that, if ${t_{G}}<\infty $ and $\mathsf{P}(\gamma ={t_{G}})=0$, the function F does not have a finite limit as $t\upuparrows {t_{G}}$, or, more generally, has unbounded variation over $[0,{t_{G}})$. Then a moment of irregularity is given by
\[ \rho (\omega )=\left\{\begin{array}{l@{\hskip10.0pt}l}{t_{G}},& \text{if}\hspace{2.5pt}\gamma \geqslant {t_{G}}\text{;}\\ {} +\infty ,& \text{otherwise.}\end{array}\right.\]
It takes a finite value only on the set $\{\gamma \geqslant {t_{G}}\}$ of zero measure. In all other cases we may put $\rho \equiv +\infty $. See Remark 2 in Section 2 for more details.
2 Main results
Let γ be a random variable with values in ${\overline{\mathbb{R}}_{+}}$ on a probability space $(\Omega ,\mathcal{F},\mathsf{P})$. We tacitly assume that $\mathsf{P}(\gamma >0)>0$. $G(t)=\mathsf{P}(\gamma \leqslant t)$, $t\in {\mathbb{R}_{+}}$, stands for the distribution function of γ and $\overline{G}(t)=1-G(t)$. Put also ${t_{G}}=\sup \hspace{0.1667em}\{t\in {\mathbb{R}_{+}}:G(t)<1\}$ and $\mathcal{T}=\{t\in {\mathbb{R}_{+}}:\mathsf{P}(\gamma \geqslant t)>0\}$. Note that $\mathsf{P}(\gamma \notin \mathcal{T})=0$. We will often distinguish between the following two cases:
It is clear that $\mathcal{T}=[0,{t_{G}})$ in Case A and $\mathcal{T}=[0,{t_{G}}]$ in Case B.
We define ${\mathcal{F}_{t}}$, $t\in {\mathbb{R}_{+}}$, as the collection of subsets A of Ω such that $A\in \mathcal{F}$ and $A\cap \{t<\gamma \}$ is either ∅ or coincides with $\{t<\gamma \}$.
It is shown in Proposition 1 that ${\mathcal{F}_{t}}$ is a σ-field for every $t\in {\mathbb{R}_{+}}$ and the family $\mathbb{F}={({\mathcal{F}_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a filtration. We call this filtration a single jump filtration. It is determined by generating elements γ and $\mathcal{F}$. In this paper we consider only single jump filtrations and, if necessary to indicate generating elements, we use the notation $\mathbb{F}(\gamma ,\mathcal{F})$ for the single jump filtration generated by γ and $\mathcal{F}$.
In this section a single jump filtration $\mathbb{F}=\mathbb{F}(\gamma ,\mathcal{F})$ is fixed. All notions depending on filtration (stopping times, martingales, local martingales, etc.) refer to this filtration $\mathbb{F}$, unless otherwise specified.
Proposition 1.
(i) ${\mathcal{F}_{t}}$ is a σ-field and a random variable ξ is ${\mathcal{F}_{t}}$-measurable, $t\in {\mathbb{R}_{+}}$, if and only if ξ is constant on $\{t<\gamma \}$. ξ is ${\mathcal{F}_{\infty }}$-measurable if and only if ξ is constant on $\{\gamma =\infty \}$.
(ii) The family ${({\mathcal{F}_{t}})_{t\in {\mathbb{R}_{+}}}}$ is increasing and right-continuous, i.e. $\mathbb{F}={({\mathcal{F}_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a filtration.
(iii) γ is a stopping time and ${\mathcal{F}_{\gamma }}={\mathcal{F}_{\infty }}$.
Proposition 2.
(i) If $X={({X_{t}})_{t\in {\mathbb{R}_{+}}}}$ is an adapted process, then there is a deterministic function $F(t)$, $0\leqslant t<{t_{G}}$, such that ${X_{t}}=F(t)$ on $\{t<\gamma \wedge {t_{G}}\}$. If $Y={({Y_{t}})_{t\in {\mathbb{R}_{+}}}}$ is an adapted process and $\mathsf{P}({X_{t}}={Y_{t}})=1$ for every $t\in {\mathbb{R}_{+}}$, then ${X_{t}}={Y_{t}}$ identically on $\{t<\gamma \wedge {t_{G}}\}$.
(ii) If $Y={({Y_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a predictable process, then there is a measurable deterministic function $C(t)$, $t\in \mathcal{T}$, such that ${Y_{t}}=C(t)$ on $\{t\leqslant \gamma \}$, $t\in \mathcal{T}$.
(iii) If $X={({X_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a process with finite variation, then $F(t)$ in (i) has a finite variation over $[0,t]$ for every $t<{t_{G}}$ in Case A and over $[0,{t_{G}})$ in Case B.
(iv) Every semimartingale is a process with finite variation.
(v) If $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a σ-martingale then there are a deterministic function $F(t)$, $t\in {\mathbb{R}_{+}}$, and a finite random variable L such that, up to $\mathsf{P}$-indistinguishability,
Statement (iv) is not surprising. If the σ-field $\mathcal{F}$ is countably generated, then our filtration is a special case of a filtration generated by a marked point process, and it is known, see [17], that then all martingales are of finite variation. In general, a single jump filtration is a special case of a jumping filtration, see [14], where again all martingales are of finite variation.
Remark 1.
If M is a σ-martingale, then it is a process with finite variation due to (iv) and, hence, the function $F(t)$ in (5) has a finite variation over $[0,t]$ for every $t<{t_{G}}$ in Case A and over $[0,{t_{G}})$ in Case B according to (iii).
Remark 2.
According to (i), the function $F(t)$ in (5) is uniquely determined for $t<{t_{G}}$. Since $\mathsf{P}(\gamma >{t_{G}})=0$, the stochastic interval $[\![ {t_{G}},\gamma [\![ $ is an evanescent set. Hence, $F(t)$ can be defined arbitrarily for $t\geqslant {t_{G}}$. For example, we can put it equal to 0 for $t\geqslant {t_{G}}$. Then $F(t)$ has a finite variation on compact intervals if ${t_{G}}=+\infty $ or in Case B. In Case A, if ${t_{G}}$ is finite, $F(t)$ may have infinite variation over $[0,{t_{G}})$ (and even not have a finite limit as $t\upuparrows {t_{G}}$), see Theorem 2 and Example 3 below. All other points are regular for $F(t)$. Now put $\rho (\omega )={t_{G}}<+\infty $ if we are in Case A, ${t_{G}}<+\infty $, ${\lim \nolimits_{t\upuparrows {t_{G}}}}\operatorname{Var}{(F)_{t}}=\infty $, and $\gamma (\omega )\geqslant {t_{G}}$, and let $\rho (\omega )=+\infty $ in all other cases. It follows that ρ is a moment of irregularity for the process in the right-hand side of (5).
In what follows, when we write that the process M has the representation (5), this means that M and the right-hand side of (5) are indistinguishable. Moreover, we tacitly assume that $F(t)$ is right-continuous for $t\geqslant {t_{G}}$ to ensure that the right-hand side of (5) is right-continuous.
Propositions 1 and 2 explain why we call $\mathbb{F}$ a single jump filtration: all randomness appears at time γ. It is not so natural to describe local martingales with respect to $\mathbb{F}$ as single jump processes. As we will see, the function F in (5) need not be continuous, so local martingales may have several jumps.
Our main goal is to provide a complete description of all local martingales. According to Proposition 2 (v), a necessary condition for a process to be a local martingale is that it be represented in form (5). Thus, it is enough to study only processes of this form.
Theorem 1.
Let $F(t)$, $0\leqslant t<{t_{G}}$, be a deterministic càdlàg function, L be a random variable, and a process $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be given by
The following statements are equivalent:
In the case where $\mathcal{F}=\sigma \{\gamma \}$, the equivalence of (i) and (ii) is proved in [2].
Concerning the last statement of the theorem, let us emphasize that if ${t_{G}}<\infty $ and $\mathsf{P}(\gamma ={t_{G}})=0$, a local martingale $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ may fail to be a martingale on $[0,{t_{G}}]$; obviously, if it is a martingale, then it is uniformly integrable, and necessary and sufficient conditions for this are given in Theorem 4.
If (6) and (7) hold, then
and one can define the conditional expectation $H(t)$ of L given that $\gamma =t$ for $t\in \mathcal{T}$:
More precisely, $H(t)$ is a Borel function on $\mathcal{T}$ with finite values such that for any $t\in \mathcal{T}$
(9)
\[ \mathsf{E}\big(|L|{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)<\infty ,\hspace{1em}t\in \mathcal{T},\]
\[ \mathsf{E}\big(L{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)={\int _{[0,t]}}H(s)\hspace{0.1667em}dG(s).\]
Note that the function H is $dG$-a.s. unique and is $dG$-integrable over any closed interval in $\mathcal{T}$. It is convenient to introduce a notation for such functions.
Let ${L_{\mathrm{loc}}^{1}}(dG)$ be the set of all Borel functions z on $\mathcal{T}$ such that
\[ {\int _{[0,t]}}|z(s)|\hspace{0.1667em}dG(s)<\infty \hspace{1em}\text{for all}\hspace{2.5pt}t\in \mathcal{T}.\]
Given a function $Z:[0,{t_{G}})\to \mathbb{R}$, let us write $Z\stackrel{\mathrm{loc}}{\ll }G$ if there is $z\in {L_{\mathrm{loc}}^{1}}(dG)$ such that $Z(t)=Z(0)+{\textstyle\int _{(0,t]}}z(s)\hspace{0.1667em}dG(s)$ for all $t<{t_{G}}$; in this case we put $\frac{dZ}{dG}(t):=z(t)$ for $0<t<{t_{G}}$. Let us emphasize that in Case B this definition implies that z is $dG$-integrable over $[0,{t_{G}}]$ and, hence, the function Z has a finite variation over $[0,{t_{G}})$ and there is a finite limit ${\lim \nolimits_{t\upuparrows {t_{G}}}}Z(t)=Z(0)+{\textstyle\int _{(0,{t_{G}})}}z(s)\hspace{0.1667em}dG(s)$. Note also that in this definition the value $z(0)$ can be chosen arbitrarily even if $G(0)>0$; the same refers to the value $z({t_{G}})$ in Case B. Correspondingly, $dZ/dG$ is defined only for $0<t<{t_{G}}$.
Let G be a distribution function of a law on $[0,+\infty ]$. We will say that a pair $(F,H)$ satisfies Condition M if
and, additionally in Case B,
Proposition 3.
(a) Let H be any function satisfying (12). Define
where $F(0)$ is an arbitrary real number in Case A and
in Case B. Then the pair $(F,H)$ satisfies Condition M. Conversely, if F is such that the pair $(F,H)$ satisfies Condition M, then F satisfies (15) and, in Case B, (16) holds.
(15)
\[ F(t)=\overline{G}{(t)^{-1}}\Big[F(0)\overline{G}(0)-\underset{(0,t]}{\int }H(s)\hspace{0.1667em}dG(s)\Big],\hspace{1em}0<t<{t_{G}},\]
Theorem 2.
In order that a right-continuous process $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a local martingale it is necessary and sufficient that there be a pair $(F,H)$ satisfying Condition M and a random variable ${L^{\prime }}$ satisfying
such that, up to $\mathsf{P}$-indistinguishability,
(19)
\[ \mathsf{E}\big(|{L^{\prime }}|{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)<\infty ,\hspace{1em}t\in \mathcal{T},\hspace{2em}\textit{and}\hspace{2em}\mathsf{E}[{L^{\prime }}|\gamma ]=0,\]
The statement that the process M given by (20) with ${L^{\prime }}=0$ is a local martingale if $F\stackrel{\mathrm{loc}}{\ll }G$ and H is constructed as in part (b) of Proposition 3 is essentially due to Herdegen and Herrmann [13], though they formulate (17) in an equivalent form:
They also prove that, in Case B, if F has infinite variation on $[0,{t_{G}})$ (and hence does not satisfy $F\stackrel{\mathrm{loc}}{\ll }G$), then M given by (6) is not a semimartingale, see [13, Lemma B.6]. (Note that this follows also from our Proposition 2 (iv).) We add that, also in Case B, if H is $dG$-integrable over $(0,{t_{G}})$ and F satisfies (15), but $F(0)$ is greater than or less than the right-hand side of (16), then M given by (20) with ${L^{\prime }}$ satisfying (19) is a supermartingale or a submartingale, respectively, cf. Theorem 4.
The fact that $H(0)$ can be chosen arbitrarily in Proposition 3 (b) says only that L can be an arbitrary integrable random variable on the set $\{\gamma =0\}$, which is evident ab initio. On the contrary, the fact that $F(0)$ can be chosen arbitrarily in (a) in Case A is an interesting feature of this model. It says that, given the terminal value ${M_{\infty }}$ of M (on $\{\gamma <\infty \}$), one can freely choose the initial value ${M_{0}}$ of M (on $\{\gamma >0\}$) while keeping M a local martingale.
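For instance (an illustration added here; it is not one of the paper's numbered examples), let γ be exponentially distributed, $\overline{G}(t)={e^{-t}}$ (so ${t_{G}}=\infty $), and take $H\equiv 0$. Then (15) gives $F(t)=F(0){e^{t}}$, and for every choice of the constant $c=F(0)$ the process
\[ {M_{t}}=c\hspace{0.1667em}{e^{t}}{\mathbb{1}_{\{t<\gamma \}}},\hspace{1em}t\in {\mathbb{R}_{+}},\]
is a martingale with $\mathsf{E}({M_{t}})=c\hspace{0.1667em}{e^{t}}\overline{G}(t)=c$ for all t and with the same terminal value ${M_{\infty }}=0$; for $c\ne 0$ it is not uniformly integrable. Thus the terminal value does not determine the initial value.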
Corollary 1.
Every local martingale $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ has a unique decomposition into the sum $M={M^{\prime }}+{M^{\prime\prime }}$ of two local martingales ${M^{\prime }}$ and ${M^{\prime\prime }}$, where ${M^{\prime }}$ is adapted with respect to the smallest filtration making γ a stopping time, while ${M^{\prime\prime }}$ vanishes on $\{t<\gamma \}$ and satisfies $\mathsf{E}{M^{\prime\prime }_{0}}=0$.
Remark 3.
If $\mathsf{P}(\gamma =0)=0$, then it follows from the first property for ${M^{\prime\prime }}$ that ${M^{\prime\prime }_{0}}=0$ a.s. and thus the second property holds automatically.
Remark 4.
The smallest filtration making γ a stopping time is the single jump filtration $\mathbb{F}(\gamma ,\sigma \{\gamma \})$ generated by γ and the smallest σ-field $\sigma \{\gamma \}$ with respect to which γ is measurable. Let M be an $\mathbb{F}$-local martingale adapted to $\mathbb{F}(\gamma ,\sigma \{\gamma \})$. It follows from Theorem 1 that M is an $\mathbb{F}(\gamma ,\sigma \{\gamma \})$-local martingale.
As the next example shows, the product ${M^{\prime }}{M^{\prime\prime }}$ of local martingales from the above decomposition may not be a local martingale because the first condition in (19) may fail. It will follow from Theorem 3 below that this product is always a σ-martingale.
Example 1.
Let γ have an exponential distribution, say $\overline{G}(t)={e^{-t}}$, let F be given by (15) with $H(t)={t^{-1/2}}$ and an arbitrary $F(0)$, and put ${M^{\prime }_{t}}=F(t){\mathbb{1}_{\{t<\gamma \}}}+H(\gamma ){\mathbb{1}_{\{t\geqslant \gamma \}}}$ and ${M^{\prime\prime }_{t}}=Y{\gamma ^{-1/2}}{\mathbb{1}_{\{t\geqslant \gamma \}}}$, where Y takes values $\pm 1$ with probabilities $1/2$ and is independent of γ. It follows that ${M^{\prime }}$ and ${M^{\prime\prime }}$ are local martingales but their product ${M^{\prime }_{t}}{M^{\prime\prime }_{t}}=Y{\gamma ^{-1}}{\mathbb{1}_{\{t\geqslant \gamma \}}}$ does not satisfy the integrability condition (7) and, hence, is not a local martingale. This process is a classical example (due to Émery) of a σ-martingale which is not a local martingale, see, e.g., [9, Example 2.3, p. 86].
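To make the failure of (7) explicit (a short computation added here for the reader's convenience), note that for every $t>0$
\[ \mathsf{E}\big(|{M^{\prime }_{t}}{M^{\prime\prime }_{t}}|\big)=\mathsf{E}\big({\gamma ^{-1}}{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)={\int _{0}^{t}}{s^{-1}}{e^{-s}}\hspace{0.1667em}ds=\infty ,\]
whereas $\mathsf{E}\big({\gamma ^{-1/2}}{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)={\int _{0}^{t}}{s^{-1/2}}{e^{-s}}\hspace{0.1667em}ds<\infty $, so the integrability condition holds for ${M^{\prime }}$ and ${M^{\prime\prime }}$ separately but not for their product.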
The previous example shows that our model admits σ-martingales which are not local martingales. In the next theorem we describe all σ-martingales in our model. In particular, it implies that if $\mathcal{F}=\sigma \{\gamma \}$, then all σ-martingales that are integrable at 0 are local martingales.
Theorem 3.
In order that a right-continuous process $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a σ-martingale it is necessary and sufficient that it have a representation (20), where a pair $(F,H)$ satisfies Condition M and a random variable ${L^{\prime }}$ satisfies
The next theorem complements the classification of the limit behaviour of local martingales that was considered in Herdegen and Herrmann [13] in the case where $\mathcal{F}=\sigma \{\gamma \}$. Let us say that a local martingale $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ has
-
type 1 if the limit ${M_{\infty }}={\lim \nolimits_{t\to \infty }}{M_{t}}$ does not exist with positive probability or exists with probability one but is not integrable: $\mathsf{E}|{M_{\infty }}|=\infty $;
-
type 2a if M is a closed supermartingale (in particular, $\mathsf{E}|{M_{\infty }}|<\infty $) and $\mathsf{E}({M_{\infty }})<\mathsf{E}({M_{0}})$;
-
type 2b if M is a closed submartingale (in particular, $\mathsf{E}|{M_{\infty }}|<\infty $) and $\mathsf{E}({M_{\infty }})>\mathsf{E}({M_{0}})$;
-
type 3 if M is a uniformly integrable martingale (in particular, $\mathsf{E}|{M_{\infty }}|<\infty $ and $\mathsf{E}({M_{\infty }})=\mathsf{E}({M_{0}})$) and $\mathsf{E}({\sup _{t}}|{M_{t}}|)=\infty $;
-
type 4 if M has an integrable variation: $\mathsf{E}\big(\operatorname{Var}{(M)_{\infty }}\big)<\infty $.
Theorem 4.
Let $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a local martingale with the representation
where $L=H(\gamma )+{L^{\prime }}$, a pair $(F,H)$ satisfies Condition M and a random variable ${L^{\prime }}$ satisfies (19). Then in Case B the local martingale M has type 4. In Case A all types are possible. Namely,
(23)
\[ {M_{t}}=F(t){\mathbb{1}_{\{t<\gamma \}}}+L{\mathbb{1}_{\{t\geqslant \gamma \}}},\hspace{1em}t\in {\mathbb{R}_{+}},\]
-
(i) M has type 1 if and only if $\mathsf{E}\big(|{L^{\prime }}|{\mathbb{1}_{\{\gamma <\infty \}}}\big)=\infty $ or ${\textstyle\int _{[0,{t_{G}})}}|H(s)|\hspace{0.1667em}dG(s)=\infty $.
-
(ii) If $\mathsf{P}(\gamma =\infty )>0$, $\mathsf{E}\big(|{L^{\prime }}|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty $, and ${\textstyle\int _{{\mathbb{R}_{+}}}}|H(s)|\hspace{0.1667em}dG(s)<\infty $ then M has type 4.
-
(iii) If $\mathsf{P}(\gamma =\infty )=0$, $\mathsf{E}|{L^{\prime }}|<\infty $, and ${\textstyle\int _{[0,{t_{G}})}}|H(s)|\hspace{0.1667em}dG(s)<\infty $ then
Remark 5.
It follows from (13) that the limit ${\lim \nolimits_{t\to {t_{G}}}}F(t)\overline{G}(t)$ in (iii.i) and (iii.ii) exists. Also, ${\textstyle\int _{[0,{t_{G}})}}|H(s)|\hspace{0.1667em}dG(s)$ in (i)–(iii) is finite if and only if $F(t)\overline{G}(t)$ has a finite variation over $[0,{t_{G}})$.
Remark 6.
It follows from Theorem 4 that, in our model, every martingale M with $\mathsf{E}({\sup _{t}}|{M_{t}}|)<\infty $ has an integrable total variation. Of course, on general spaces, there exist martingales M having finite variation on compacts such that $\mathsf{E}({\sup _{t}}|{M_{t}}|)<\infty $ but whose total variation is not integrable, see, e.g., [9, Example 2.7, p. 103].
Example 2.
Assume that $H:(0,1)\to \mathbb{R}$ is a monotone nondecreasing function and, for definiteness, that it is right-continuous. Then it is the upper quantile function of $H(\gamma )$, where γ is uniformly distributed on $(0,1)$. Assume also that H is integrable on $(0,1)$ and ${\textstyle\int _{0}^{1}}H(s)\hspace{0.1667em}ds=0$, that is to say, that $H(\gamma )$ has zero mean. Put
\[ F(t)=-{(1-t)^{-1}}{\underset{0}{\overset{t}{\int }}}H(s)\hspace{0.1667em}ds={(1-t)^{-1}}{\underset{t}{\overset{1}{\int }}}H(s)\hspace{0.1667em}ds.\]
We see that F satisfying (13) with $F(0)=0$ is the Hardy–Littlewood maximal function corresponding to H. If we define M by (23) with $L=H(\gamma )$, then, by Theorem 4, M is a uniformly integrable martingale with ${M_{\infty }}=H(\gamma )$ and ${\sup _{t}}{M_{t}}=F(\gamma )$. This example is essentially the example of Dubins and Gilat [7] of a uniformly integrable martingale with a given distribution of its terminal value, having maximal (with respect to the stochastic partial order) maximum (in time).
Example 3 ([13, Example 3.14]).
Let $\Omega =(0,1]$ be equipped with the Borel σ-field $\mathcal{F}$, let $\mathsf{P}$ be the Lebesgue measure, and let $\gamma (\omega )=\omega $. Put $H(t)\equiv 0$. Then $F(t)={(1-t)^{-1}}$ satisfies (13) with $F(0)=1$. By Theorem 4, M defined by (23) is a supermartingale and a local martingale but not a martingale. This seems to be the simplest example of a continuous-time local martingale which is not a martingale. Note that, for $\omega =1$, the trajectory ${M_{t}}(\omega )={(1-t)^{-1}}{\mathbb{1}_{\{t<1\}}}$ does not have a finite left-hand limit at 1. Moreover, if N is a modification of M, then, for $t<1$, the values of ${M_{t}}(\omega )$ and ${N_{t}}(\omega )$ must coincide on the atom $\{t<\gamma \}=(t,1]$ of ${\mathcal{F}_{t}}$, which has positive measure. Hence, ${N_{t}}(\omega )={M_{t}}(\omega )$ for $\omega =1$ for all $t<1$. This is an example of a right-continuous supermartingale which has no modification with all paths càdlàg. Of course, the usual assumptions are not satisfied in this example.
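The behaviour described in Example 3 is easy to check numerically. The following Monte Carlo sketch (an added illustration; the sample size and the seed are arbitrary choices) estimates $\mathsf{E}({M_{t}})$ for M from Example 3: the estimate stays at 1 for $t<1$, while ${M_{1}}=0$ a.s.

```python
import numpy as np

# Example 3: gamma is uniform on (0, 1], H == 0, F(t) = 1/(1 - t) for t < 1,
# and M_t = F(t) on {t < gamma}, M_t = 0 on {t >= gamma}.
# M is a martingale on [0, 1): E[M_t] = 1 for every t < 1, but M_1 = 0 a.s.,
# so M is a supermartingale and a local martingale but not a martingale.

rng = np.random.default_rng(seed=42)            # arbitrary seed
gamma = rng.uniform(0.0, 1.0, size=1_000_000)   # P(gamma = 1) = 0, so sampling [0, 1) is harmless

for t in (0.0, 0.5, 0.9, 0.99, 1.0):
    if t < 1.0:
        M_t = np.where(gamma > t, 1.0 / (1.0 - t), 0.0)
    else:
        M_t = np.zeros_like(gamma)              # M_t = L = 0 for t >= t_G = 1
    print(f"t = {t:4.2f}   estimated E[M_t] = {M_t.mean():.4f}")

# Typical output: approximately 1.0000 for each t < 1 and exactly 0.0000 at t = 1,
# i.e. E[M_1] < E[M_0], so M cannot be a martingale on the whole half-line.
```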
3 Proofs
Proof of Proposition 1.
(i) and (iii) are evident from the definition of ${\mathcal{F}_{t}}$, and (ii) follows easily from (i).
Let us prove (iv). To prove that T is a stopping time, we must check that $\{T\leqslant t<\gamma \}$ is either ∅ or $\{t<\gamma \}$ for all $t\in {\mathbb{R}_{+}}$. This is trivial if $\{T<\gamma \}=\varnothing $. If there is a number r such that (4) holds, then $\{T\leqslant t<\gamma \}$ is either ∅ if $r>t$ or $\{t<\gamma \}$ if $r\leqslant t$.
Conversely, let T be a stopping time. If $T\geqslant \gamma $ for all ω, then there is nothing to prove. Assume that the set $\{T<\gamma \}\ne \varnothing $. Then there are real numbers q such that $\{T\leqslant q<\gamma \}\ne \varnothing $. For such q, by the definition of ${\mathcal{F}_{q}}$, $\{T\leqslant q<\gamma \}=\{q<\gamma \}$, or, equivalently, $\{T\leqslant q\}\supseteq \{q<\gamma \}$. Let r be the greatest lower bound of such q. The sets $\{q<\gamma \}\uparrow \{r<\gamma \}$ and $\{T\leqslant q\}\downarrow \{T\leqslant r\}$ as $q\downarrow r$. Thus,
\[ \{T<\gamma \}=\bigcup \limits_{q:\{T\leqslant q<\gamma \}\ne \varnothing }\{q<\gamma \}=\{r<\gamma \}\subseteq \{T\leqslant r\}.\]
Since $\{T\leqslant t<\gamma \}=\varnothing $ for any $t<r$, we have (4). □Proof of Proposition 2.
The first statement in (i) follows from Proposition 1 (i). Since $\mathsf{P}(t<\gamma \wedge {t_{G}})>0$ for every $t<{t_{G}}$, we obtain that ${X_{t}}$ and ${Y_{t}}$ take the same constant value on $\{t<\gamma \wedge {t_{G}}\}$.
Since a random variable ${Y_{t}}$ is ${\mathcal{F}_{t-}}$-measurable for a predictable process Y, ${Y_{t}}$ is constant on $\{t\leqslant \gamma \}$. Denote by $C(t)$, $t\in \mathcal{T}$, the value of ${Y_{t}}$ on $\{t\leqslant \gamma \}$. Since $\mathsf{P}(\gamma \geqslant t)>0$ for $t\in \mathcal{T}$, there is an ω such that $C(s)\equiv {Y_{s}}(\omega )$, $s\leqslant t$, and the measurability of C follows.
Let us prove (iii) in Case B. Then we obtain that ${X_{t}}=F(t)$ for all $t<{t_{G}}$ on the set $\{\gamma ={t_{G}}\}$, which has a positive probability. However, almost all paths of ${X_{t}}$ have a finite variation over $[0,{t_{G}})$, and the claim follows. The proof in Case A is similar.
Now let us prove (5) in the case where $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ is a uniformly integrable (a.s. càdlàg) martingale. We can find a random variable ${M_{\infty }}$ that is ${\mathcal{F}_{\infty }}$-measurable and such that ${\lim \nolimits_{n\to \infty }}{M_{n}}={M_{\infty }}$ $\mathsf{P}$-a.s. Since $\{t<\gamma \}$ is an atom of ${\mathcal{F}_{t}}$ and has a positive probability for $t<{t_{G}}$, we obtain from the martingale property that ${M_{t}}(\omega )=F(t)$ for all $\omega \in \{t<\gamma \}$, where
\[ F(t)=\frac{\mathsf{E}\big({M_{\infty }}{\mathbb{1}_{\{t<\gamma \}}}\big)}{\overline{G}(t)},\hspace{1em}t<{t_{G}}.\]
It is clear that the numerator and the denominator are right-continuous functions of bounded variation on $[0,{t_{G}}]$, hence $F(t)$, $0\leqslant t<{t_{G}}$, is a càdlàg function on $\mathcal{T}$ and has a finite variation on $[0,{t_{G}})$ in Case B and on every $[0,t]$, $0\leqslant t<{t_{G}}$, in Case A.
Now set $L={M_{\infty }}{\mathbb{1}_{\{\gamma <\infty \}}}$. Then $L{\mathbb{1}_{\{\gamma \leqslant t\}}}={M_{\infty }}{\mathbb{1}_{\{\gamma \leqslant t\}}}$ is ${\mathcal{F}_{t}}$-measurable, and hence $\mathsf{P}$-a.s.
\[ {M_{t}}{\mathbb{1}_{\{\gamma \leqslant t\}}}=\mathsf{E}({M_{\infty }}{\mathbb{1}_{\{\gamma \leqslant t\}}}|{\mathcal{F}_{t}})=L{\mathbb{1}_{\{\gamma \leqslant t\}}}.\]
Thus we have obtained that, for a given $t\in {\mathbb{R}_{+}}$, ${M_{t}}$ is equal $\mathsf{P}$-a.s. to the right-hand side of (5) with L and $F(t)$ as above. Since both the left-hand side and the right-hand side of (5) are almost surely right-continuous, they are indistinguishable. Moreover, if we change $F(t)$ for $t\geqslant {t_{G}}$, the right-hand side of (5) will change only on an evanescent set. Thus we can put, say, $F(t)=0$ for $t\geqslant {t_{G}}$, and then the right-hand side of (5) is a regular right-continuous process with finite variation, and indistinguishable from M.
Now let M be a local martingale and $\{{T_{n}}\}$ be a localizing sequence of stopping times, i.e. ${T_{n}}\uparrow \infty $ a.s. and ${M^{{T_{n}}}}$ is a uniformly integrable martingale for each n. We have proved that almost all paths of ${M^{{T_{n}}}}$ have finite variation. It follows that almost all paths of M have finite variation. This proves (iv).
Next, let M be a σ-martingale, i.e. M is a semimartingale and there is an increasing sequence of predictable sets ${\Sigma _{n}}$ such that ${\cup _{n}}{\Sigma _{n}}=\Omega \times {\mathbb{R}_{+}}$ and the integral process ${\mathbb{1}_{{\Sigma _{n}}}}\cdot M$ is a uniformly integrable martingale for every n. It does not matter if we integrate over $[0,t]$ or $(0,t]$, so let us agree that the domain of integration does not include 0. Since the integrand is bounded and every semimartingale is a process with finite variation in our model, the integral can be considered in the Lebesgue–Stieltjes sense, as well as other integrals appearing in the proof. Since ${\mathbb{1}_{{\Sigma _{n}}}}\cdot M$ is stopped at γ for every n with probability one, we have
\[ \int {\mathbb{1}_{]\!] \gamma ,\infty [\![ \cap {\Sigma _{n}}}}(t)\hspace{0.1667em}d\operatorname{Var}{(M)_{t}}=\int {\mathbb{1}_{]\!] \gamma ,\infty [\![ }}(t)\hspace{0.1667em}d\operatorname{Var}{({\mathbb{1}_{{\Sigma _{n}}}}\cdot M)_{t}}=0\hspace{1em}\mathsf{P}\text{-a.s.}\]
for every n, therefore,
\[ \int {\mathbb{1}_{]\!] \gamma ,\infty [\![ }}(t)\hspace{0.1667em}d\operatorname{Var}{(M)_{t}}=0\hspace{1em}\mathsf{P}\text{-a.s.}\]
Combining with (i), we prove representation (5). □
Remark 7.
As was already explained in the introduction, we can prove directly, without assuming that paths are a.s. càdlàg, that any uniformly integrable martingale has a regular modification. The proof is essentially the same as above, where we proved that an a.s. càdlàg uniformly integrable martingale has representation (5).
Proof of Theorem 1.
First, we prove that statements (ii) and (iii) are equivalent. The implication (ii)⇒(iii) follows trivially from the definition of a martingale. Conversely, let (iii) hold. The process ${({M_{t}})_{t\in \mathcal{T}}}$ is right-continuous, adapted by Proposition 1 (i), and integrable, see (7). Moreover, due to (6),
where $0\leqslant s<t\in \mathcal{T}$. Hence,
\[ \mathsf{E}[{M_{t}}-{M_{s}}|{\mathcal{F}_{s}}]=0\hspace{1em}\text{on}\hspace{2.5pt}\{s\geqslant \gamma \}.\]
But $\mathsf{E}[{M_{t}}-{M_{s}}|{\mathcal{F}_{s}}]$ is ${\mathcal{F}_{s}}$-measurable and, thus, equals a constant on $\{s<\gamma \}$. And this constant must be zero since $\mathsf{E}({M_{t}}-{M_{s}})=0$ by (8).
The implication (ii)⇒(i) is trivial if ${t_{G}}=\infty $ or ${t_{G}}\in \mathcal{T}$. So we assume that ${t_{G}}<\infty $ and $\overline{G}(t)\downarrow 0$ as $t\upuparrows {t_{G}}$. Let ${t_{1}}<\cdots <{t_{n}}<\cdots <{t_{G}}$ be an increasing sequence with ${t_{n}}\to {t_{G}}$; then $\overline{G}({t_{n}})\to 0$. Put
\[ {T_{n}}=\left\{\begin{array}{l@{\hskip10.0pt}l}{t_{n}},& \text{if}\hspace{2.5pt}\gamma >{t_{n}}\text{;}\\ {} +\infty ,& \text{otherwise.}\end{array}\right.\]
Then ${T_{n}}$ is a stopping time by Proposition 1 (iv), ${T_{n}}\uparrow \infty $ a.s., and ${M_{t\wedge {T_{n}}}}={M_{t\wedge {t_{n}}}}$. Hence, ${M^{{T_{n}}}}$ is a martingale and M is a local martingale.
It remains to prove the implication (i)⇒(ii). Let $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a local martingale with a localizing sequence $\{{T_{n}}\}$, i.e. ${T_{n}}\uparrow \infty $ a.s. and ${M^{{T_{n}}}}$ is a uniformly integrable martingale for each n. If $\mathsf{P}({T_{n}}\geqslant \gamma )=1$ for some n, then $M={M^{{T_{n}}}}$ is a uniformly integrable martingale, and there is nothing to prove. So assume that $\mathsf{P}({T_{n}}<\gamma )>0$ for all n. By Proposition 1 (iv), there is a number ${r_{n}}$ such that $\{{T_{n}}<\gamma \}=\{{T_{n}}={r_{n}}<\gamma \}=\{{r_{n}}<\gamma \}$. It follows from $\mathsf{P}({r_{n}}<\gamma )>0$ that ${r_{n}}<{t_{G}}$. In Case B we get $\mathsf{P}({T_{n}}<\gamma )=\mathsf{P}({r_{n}}<\gamma )\geqslant \mathsf{P}(\gamma ={t_{G}})>0$ for every n, a contradiction with ${T_{n}}\to \infty $ a.s. In Case A, if $\mathsf{P}(\gamma =\infty )>0$, then it follows from ${T_{n}}\to \infty $ a.s. that ${r_{n}}\to \infty $. In the remaining cases, where $\mathsf{P}(\gamma ={t_{G}})=0$, we obtain from $\mathsf{P}({r_{n}}<\gamma )\to 0$ that ${r_{n}}\to {t_{G}}$, $n\to \infty $. The claim follows since ${M_{t\wedge {T_{n}}}}={M_{t\wedge {r_{n}}}}$, and hence ${({M_{t}})_{t\leqslant {r_{n}}}}$ is a martingale. □
Proof of Proposition 3.
(a) It is obvious that (13) is equivalent to (15). It also follows from (13) that in Case B (14) is equivalent to (16). Thus it remains to prove that F defined in (a) satisfies $F\stackrel{\mathrm{loc}}{\ll }G$. Since $\overline{G}(s)\geqslant \overline{G}(t)>0$ for any $s<t<{t_{G}}$, we have
\[ \frac{1}{\overline{G}(t)}=\frac{1}{\overline{G}(0)}+\underset{(0,t]}{\int }\frac{1}{\overline{G}(s)\overline{G}(s-)}\hspace{0.1667em}dG(s),\hspace{1em}t<{t_{G}}.\]
On the other hand, from (15)
\[ F(t)\overline{G}(t)=F(0)\overline{G}(0)-\underset{(0,t]}{\int }H(s)\hspace{0.1667em}dG(s),\hspace{1em}t<{t_{G}}.\]
Combining, we obtain from integration by parts that
\[ F(t)=F(t)\overline{G}(t)\frac{1}{\overline{G}(t)}=F(0)-\underset{(0,t]}{\int }\frac{H(s)}{\overline{G}(s-)}\hspace{0.1667em}dG(s)+\underset{(0,t]}{\int }\frac{F(s)}{\overline{G}(s-)}\hspace{0.1667em}dG(s),\hspace{1em}t<{t_{G}}.\]
This shows that $F\stackrel{\mathrm{loc}}{\ll }G$ in Case A. In Case B we must show additionally that the function $\frac{|F(s)|+|H(s)|}{\overline{G}(s-)}$ is $dG$-integrable over $(0,{t_{G}})$. But $1/\overline{G}(s-)\leqslant 1/\mathsf{P}(\gamma ={t_{G}})$, $s\leqslant {t_{G}}$, and $F(s)$ is bounded on $[0,{t_{G}})$ in view of (15). The claim follows.
(b) It is clear that the function $H(t)$, $t\in \mathcal{T}$, defined as in the statement, belongs to ${L_{\mathrm{loc}}^{1}}(dG)$. Integrating by parts, we get, for $t\in [0,{t_{G}})$,
\[\begin{aligned}{}F(t)\overline{G}(t)& =F(0)\overline{G}(0)-\underset{(0,t]}{\int }F(s)\hspace{0.1667em}dG(s)+\underset{(0,t]}{\int }\overline{G}(s-)\hspace{0.1667em}dF(s)\\ {} & =F(0)\overline{G}(0)-\underset{(0,t]}{\int }F(s)\hspace{0.1667em}dG(s)+\underset{(0,t]}{\int }\overline{G}(s-)\frac{dF}{dG}(s)\hspace{0.1667em}dG(s)\\ {} & =F(0)\overline{G}(0)-\underset{(0,t]}{\int }H(s)\hspace{0.1667em}dG(s),\end{aligned}\]
i.e. (13) holds. Therefore, Condition M is satisfied. Conversely, let (13) hold. In the proof of part (a) we deduced from (15) (and, hence, from (13)) that
\[ \frac{dF}{dG}(t)=-\frac{H(t)}{\overline{G}(t-)}+\frac{F(t)}{\overline{G}(t-)},\hspace{1em}dG\text{-a.s.},\]
and (17) follows. □
Proof of Theorem 2.
Let $M={({M_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a local martingale. By Proposition 2 (v) and Theorem 1, M has representation (6) and, moreover, (7) and (8) hold. Define the function $H(t)$, $t\in \mathcal{T}$, by (10). Then, see (9),
\[ \mathsf{E}\big(|H(\gamma )|{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)\leqslant \mathsf{E}\big(|L|{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)<\infty ,\]
which implies $H\in {L_{\mathrm{loc}}^{1}}(dG)$. Putting ${L^{\prime }}=\big(L-H(\gamma )\big){\mathbb{1}_{\{\gamma <\infty \}}}$, we obtain (19) as well. Now it follows from (20) and the second relation in (19) that
\[ \mathsf{E}({M_{t}})=F(t)\overline{G}(t)+\underset{[0,t]}{\int }H(s)\hspace{0.1667em}dG(s),\hspace{1em}t\in \mathcal{T},\]
so (13) and (14) follow from (8). Finally, (11) follows from Proposition 3 (a).
Proof of Corollary 1.
The required decomposition follows from (20) if we put
\[ {M^{\prime }_{t}}=F(t){\mathbb{1}_{\{t<\gamma \}}}+H(\gamma ){\mathbb{1}_{\{t\geqslant \gamma \}}},\hspace{2em}{M^{\prime\prime }_{t}}={L^{\prime }}{\mathbb{1}_{\{t\geqslant \gamma \}}}.\]
Let a local martingale M with representation (20) vanish on $\{t<\gamma \}$ and satisfy $\mathsf{E}{M_{0}}=0$. Then $F(t)\equiv 0$ for $t<{t_{G}}$ and $0=\mathsf{E}{M_{0}}=H(0)G(0)+\mathsf{E}\big({L^{\prime }}{\mathbb{1}_{\{\gamma =0\}}}\big)=H(0)G(0)$ in view of the second relation in (19). By Theorem 2, it follows from (13) and (14) that $H(t)=0$ $dG$-a.s. Now, if M is also adapted with respect to the smallest filtration making γ a stopping time, then ${M_{\gamma }}=\big(H(\gamma )+{L^{\prime }}\big){\mathbb{1}_{\{\gamma <\infty \}}}={L^{\prime }}{\mathbb{1}_{\{\gamma <\infty \}}}$ is $\sigma \{\gamma \}$-measurable. Using again the second relation in (19), we conclude that ${L^{\prime }}{\mathbb{1}_{\{\gamma <\infty \}}}=0$ a.s. This proves the uniqueness. □
Proof of Theorem 3.
To prove sufficiency it is enough to consider the case where $H\equiv 0$ and $F\equiv 0$. In view of the first condition in (22), there exists a Borel function $J:(0,\infty ]\to {\mathbb{R}_{+}}$ such that $\mathsf{E}\big[|{L^{\prime }}|\big|\gamma =t\big]=J(t)$. Put ${\Sigma _{n}}=\Omega \times \{t\in (0,+\infty ):J(t)\leqslant n\}$ and consider the Lebesgue–Stieltjes integral process ${\mathbb{1}_{{\Sigma _{n}}}}\cdot {M_{t}}={L^{\prime }}{\mathbb{1}_{\big\{\mathsf{E}\big[|{L^{\prime }}|\big|\gamma \big]\leqslant n\big\}}}{\mathbb{1}_{\{t\geqslant \gamma >0\}}}$. By Theorem 2, cf. condition (19), it is a local martingale. Since ${\Sigma _{n}}$ are predictable and ${\cup _{n}}{\Sigma _{n}}=\Omega \times {\mathbb{R}_{+}}$, M is a σ-martingale.
Conversely, let M be a σ-martingale. It is easy to check that to prove necessity it is enough to consider the case ${M_{0}}=0$. According to Proposition 2 (v) and Remark 1
where L is a random variable, $F(t)$, $0\leqslant t<{t_{G}}$, is a deterministic function with finite variation over $[0,t]$ for every $t<{t_{G}}$ in Case A and over $[0,{t_{G}})$ in Case B. By the definition of σ-martingales, there is an increasing sequence of predictable sets ${\Sigma _{n}}$ such that ${\cup _{n}}{\Sigma _{n}}=\Omega \times {\mathbb{R}_{+}}$ and the integral process ${\mathbb{1}_{{\Sigma _{n}}}}\cdot M$ is a local martingale for every n. It was mentioned in the proof of Proposition 2 that the integral is understood as the Lebesgue–Stieltjes integral. By Proposition 2 (ii), there are Borel subsets ${D_{n}}$ of ${\mathbb{R}_{+}}$ such that ${\mathbb{1}_{{\Sigma _{n}}}}(\omega ,t){\mathbb{1}_{\{\gamma (\omega )\geqslant t\}}}={\mathbb{1}_{{D_{n}}}}(t){\mathbb{1}_{\{\gamma (\omega )\geqslant t\}}}$, in particular,
According to Theorem 2 and Proposition 3, ${\mathbb{1}_{{\Sigma _{n}}}}\cdot M$ has a representation
\[ {\mathbb{1}_{{\Sigma _{n}}}}\cdot {M_{t}}={F^{n}}(t){\mathbb{1}_{\{t<\gamma \}}}+\big({H^{n}}(\gamma )+{L^{n}}\big){\mathbb{1}_{\{t\geqslant \gamma \}}}\]
where $\mathsf{E}\big(|{L^{n}}|{\mathbb{1}_{\{\gamma \leqslant t\}}}\big)<\infty $, $t\in \mathcal{T}$, $\mathsf{E}[{L^{n}}|\gamma ]=0$, ${H^{n}}\in {L_{\mathrm{loc}}^{1}}(dG)$, ${F^{n}}\stackrel{\mathrm{loc}}{\ll }G$,
(27)
\[ {H^{n}}(t)={F^{n}}(t)-\overline{G}(t-)\frac{d{F^{n}}}{dG}(t)={F^{n}}(t-)-\overline{G}(t)\frac{d{F^{n}}}{dG}(t),\]
$0<t<{t_{G}}$, and in Case B ${H^{n}}({t_{G}}):={\lim \nolimits_{t\upuparrows {t_{G}}}}{F^{n}}(t)$.
Combining with (25), we get
(28)
\[ {F^{n}}(t)={\int _{(0,t]}}{\mathbb{1}_{{D_{n}}}}(s)\hspace{0.1667em}dF(s),\hspace{1em}0<t<{t_{G}},\]
and
(29)
\[ {H^{n}}(\gamma )+{L^{n}}={\int _{(0,\gamma )}}{\mathbb{1}_{{D_{n}}}}(s)\hspace{0.1667em}dF(s)+{\mathbb{1}_{{D_{n}}}}(\gamma )\big(L-F(\gamma -)\big)\hspace{1em}\text{a.s.}\]
Since ${F^{n}}\stackrel{\mathrm{loc}}{\ll }G$, it follows from (28) and (26) that $F\stackrel{\mathrm{loc}}{\ll }G$. Substituting (28) in (29) and taking conditional expectation given γ, we get
\[ {H^{n}}(\gamma )-{F^{n}}(\gamma -)={\mathbb{1}_{{D_{n}}}}(\gamma )\big(H(\gamma )-F(\gamma -)\big)\hspace{1em}\text{a.s.,}\]
i.e.
(30)
\[ {H^{n}}(t)-{F^{n}}(t-)={\mathbb{1}_{{D_{n}}}}(t)\big(H(t)-F(t-)\big)\hspace{1em}dG(t)\text{-a.s.}\]
It follows from (28) and (27) that
\[ -\overline{G}(t){\mathbb{1}_{{D_{n}}}}(t)\frac{dF}{dG}(t)=-\overline{G}(t)\frac{d{F^{n}}}{dG}(t)={\mathbb{1}_{{D_{n}}}}(t)\big(H(t)-F(t-)\big)\hspace{1em}dG(t)\text{-a.s.},\]
so, taking (26) into account, we obtain
Additionally, in Case B, the left-hand side of (30) at $t={t_{G}}$ vanishes, hence, $H({t_{G}}):={\lim \nolimits_{t\upuparrows {t_{G}}}}F(t)$. It remains to put ${L^{\prime }}=L-H(\gamma )$. □
Proof of Theorem 4.
In Case B
and the first term is finite by Remark 1, while $\mathsf{E}|L|<\infty $ due to (7). Thus, we proceed to Case A.
(i) Note that
\[ \underset{[0,{t_{G}})}{\int }|H(s)|\hspace{0.1667em}dG(s)=\mathsf{E}\big(|H(\gamma )|{\mathbb{1}_{\{\gamma <{t_{G}}\}}}\big)=\mathsf{E}\big(|H(\gamma )|{\mathbb{1}_{\{\gamma <\infty \}}}\big)\]
and $\mathsf{E}\big(|L|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty $ if and only if
\[ \mathsf{E}\big(|H(\gamma )|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty \hspace{2em}\text{and}\hspace{2em}\mathsf{E}\big(|{L^{\prime }}|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty .\]
Next, if ${M_{\infty }}$ is well defined, then
\[ {M_{\infty }}=L{\mathbb{1}_{\{\gamma <\infty \}}}+\underset{t\to \infty }{\lim }F(t){\mathbb{1}_{\{\gamma =\infty \}}}.\]
Finally, if $\mathsf{P}(\gamma =\infty )>0$, then it follows from (13) that ${\lim \nolimits_{t\to \infty }}F(t)$ exists and is finite if $\underset{[0,{t_{G}})}{\textstyle\int }|H(s)|\hspace{0.1667em}dG(s)<\infty $. Now, combining all of the above, we arrive at (i).
(ii) If $\mathsf{P}(\gamma =\infty )>0$, then
\[ \operatorname{Var}{(M)_{\infty }}\leqslant 2\operatorname{Var}{(F)_{\infty }}+|L|{\mathbb{1}_{\{\gamma <\infty \}}},\]
and the last term on the right has finite expectation by assumption. Since $\overline{G}(t)\geqslant \mathsf{P}(\gamma =\infty )>0$ in the case under consideration, it follows from the assumptions and (13) that F has a finite variation over ${\mathbb{R}_{+}}$.
From now on we assume that $\mathsf{E}\big(|{L^{\prime }}|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty $, ${\textstyle\int _{[0,{t_{G}})}}|H(s)|\hspace{0.1667em}dG(s)<\infty $, and $\mathsf{P}(\gamma ={t_{G}})=0$. Then M is a martingale on $[0,{t_{G}})$ by Theorem 1 and it coincides with ${M_{\infty }}=L$ for $t\geqslant {t_{G}}$. Hence, it is a (necessarily closed) submartingale (resp. supermartingale) if and only if $\mathsf{E}[L-{M_{t}}|{\mathcal{F}_{t}}]\geqslant 0$ (resp. $\leqslant 0$) for $t<{t_{G}}$. As in the proof of Theorem 1,
hence,
Taking expectations, we see that this constant has the same sign as $\mathsf{E}(L-{M_{t}})=\mathsf{E}(L-{M_{0}})$. However,
and (iii.i) follows.
The same proof shows that if ${\textstyle\int _{(0,{t_{G}})}}H(s)\hspace{0.1667em}dG(s)=F(0)\overline{G}(0)$, then M is a uniformly integrable martingale. Therefore, to prove (iii.ii) and (iii.iii) it is enough to show that $\mathsf{E}({\sup _{t}}|{M_{t}}|)<\infty $ implies (24), and that (24) implies $\mathsf{E}\big(\operatorname{Var}{(M)_{\infty }}\big)<\infty $.
If M is a local martingale with $\mathsf{E}({\sup _{t}}|{M_{t}}|)<\infty $, then $\mathsf{E}\big(|\Delta {M_{\gamma }}|{\mathbb{1}_{\{\gamma <\infty \}}}\big)<\infty $. But $|\Delta {M_{\gamma }}|{\mathbb{1}_{\{\gamma <\infty \}}}=|L-F(\gamma -)|{\mathbb{1}_{\{\gamma <\infty \}}}$, hence, taking conditional expectation given γ, we get
In view of (21) which is equivalent to (17), we obtain (24).
Conversely, let (24) hold. Then
\[\begin{aligned}{}& \operatorname{Var}{(M)_{\infty }}\\ {} & \hspace{1em}=|L|{\mathbb{1}_{\{\gamma =0\}}}+|F(0)|{\mathbb{1}_{\{\gamma >0\}}}+\underset{(0,\gamma )}{\int }\Big|\frac{dF}{dG}(s)\Big|\hspace{0.1667em}dG(s)+|L-F(\gamma -)|{\mathbb{1}_{\{0<\gamma <\infty \}}}\\ {} & \hspace{1em}\leqslant 2|L|{\mathbb{1}_{\{\gamma <\infty \}}}+2|F(0)|{\mathbb{1}_{\{\gamma >0\}}}+2\underset{(0,\gamma )}{\int }\Big|\frac{dF}{dG}(s)\Big|\hspace{0.1667em}dG(s)\end{aligned}\]
and
\[\begin{aligned}{}\mathsf{E}\Big({\int _{(0,\gamma )}}\Big|\frac{dF}{dG}(s)\Big|\hspace{0.1667em}dG(s)\Big)& =\underset{[0,{t_{G}})}{\int }\underset{(0,u)}{\int }\Big|\frac{dF}{dG}(s)\Big|\hspace{0.1667em}dG(s)\hspace{0.1667em}dG(u)\\ {} & =\underset{[0,{t_{G}})}{\int }\Big|\frac{dF}{dG}(s)\Big|\underset{(s,{t_{G}})}{\int }\hspace{0.1667em}dG(u)\hspace{0.1667em}dG(s)\\ {} & =\underset{[0,{t_{G}})}{\int }\overline{G}(s)\Big|\frac{dF}{dG}(s)\Big|\hspace{0.1667em}dG(s)<\infty .\end{aligned}\]
□
Remark 8.
It follows from the last equalities in the proof that, due to (11) and (21) respectively, (24) implies that the following integrals are also finite:
\[ \underset{[0,{t_{G}})}{\int }|F(s-)|\hspace{0.1667em}dG(s)<\infty \hspace{2em}\text{and}\hspace{2em}\underset{[0,{t_{G}})}{\int }|H(s)|\hspace{0.1667em}dG(s)<\infty .\]
However, it may happen that
see an example in [13, Remark 3.11].
4 Complements
4.1 Single jump processes and their compensators
Let us consider the same setting as in Section 2 and let V be a finite random variable. For simplicity, we assume that $\{\gamma =0\}\subseteq \{V=0\}$. Then
is an adapted process of finite variation on compact intervals.
Proof.
Let (31) hold. If ${t_{G}}\in \mathcal{T}$, then $\mathsf{E}\big(|V|{\mathbb{1}_{\{\gamma \leqslant {t_{G}}\}}}\big)<\infty $ means that the process X itself has integrable variation. In Case A, put
\[ {T_{n}}=\left\{\begin{array}{l@{\hskip10.0pt}l}{t_{n}},& \text{if}\hspace{2.5pt}\gamma >{t_{n}}\text{;}\\ {} +\infty ,& \text{otherwise.}\end{array}\right.\]
where ${t_{n}}\upuparrows {t_{G}}$. Then ${T_{n}}\uparrow \infty $ a.s. and $\operatorname{Var}{({X^{{T_{n}}}})_{\infty }}=|V|{\mathbb{1}_{\{\gamma \leqslant {T_{n}}\}}}=|V|{\mathbb{1}_{\{\gamma \leqslant {t_{n}}\}}}$.
Conversely, let $\{{T_{n}}\}$ be a localizing sequence of stopping times such that $\mathsf{E}\big(|V|{\mathbb{1}_{\{\gamma \leqslant {T_{n}}\}}}\big)<\infty $. If $\mathsf{P}(\gamma \leqslant {T_{n}})=1$ for n large enough, then V is integrable. So assume that $\mathsf{P}(\gamma >{T_{n}})>0$ for every n. By Proposition 1 (iv), there are numbers ${r_{n}}$ such that $\{{T_{n}}<\gamma \}=\{{T_{n}}={r_{n}}<\gamma \}=\{{r_{n}}<\gamma \}$. Thus, we have a sequence ${r_{n}}$ such that $\mathsf{E}\big(|V|{\mathbb{1}_{\{\gamma \leqslant {r_{n}}\}}}\big)<\infty $. Since ${T_{n}}\to \infty $ a.s. and the sequence $\{{T_{n}}\}$ is increasing, in Case A it follows that ${r_{n}}\uparrow {t_{G}}$, and in Case B we come to a contradiction by repeating the arguments in the concluding part of the proof of Theorem 1. □
From now on we will assume that X is a process of locally integrable variation, i.e. (31) holds. Our aim is to find its compensator. We can introduce a function K in the same way as the function H was introduced in (10):
It is clear that $K\in {L_{\mathrm{loc}}^{1}}(dG)$ and $K(0)=0$ if $\mathsf{P}(\gamma =0)>0$. Now define
in particular, $F(0)=0$. It follows that, in Case B, the function F has a bounded variation on $[0,{t_{G}})$ and has a finite limit as $t\upuparrows {t_{G}}$, so we put
(33)
\[ F(t)=\underset{(0,t]}{\int }\overline{G}{(s-)^{-1}}K(s)\hspace{0.1667em}dG(s),\hspace{1em}0\leqslant t<{t_{G}},\]
The next theorem takes its origin in [4], where the case when $V=1$, γ is finite and ${t_{G}}=+\infty $ is considered.
Theorem 5.
Proof.
The process $t\wedge \gamma $ is adapted and continuous, hence, it is predictable. It follows that $F(t\wedge \gamma )$ is predictable. Next, in Case B, the set $\{(\omega ,t):\gamma (\omega )\geqslant {t_{G}},\hspace{0.2222em}t\geqslant {t_{G}}\}$ coincides with the intersection of predictable sets
\[ \{\gamma \geqslant {t_{G}}\}\times [{t_{G}},\infty )=\bigcap \limits_{n}\Big[\{\gamma >{t_{G}}-{n^{-1}}\}\times ({t_{G}}-{n^{-1}},\infty )\Big],\]
therefore, A is predictable. Hence it is enough to show that $M=A-X$ is a local martingale.
We use Theorem 2 and Proposition 3 (b). M has the representation (6) with the same function F and $L=F(\gamma ){\mathbb{1}_{\{\gamma <\infty \}}}-V{\mathbb{1}_{\{\gamma <\infty \}}}+K({t_{G}}){\mathbb{1}_{\{\gamma ={t_{G}}<\infty \}}}$. Define the function H as in Proposition 3 (b). Then it follows from (33) that $H(t)=F(t)-K(t)$, $0<t<{t_{G}}$, and, in Case B, $H({t_{G}})=F({t_{G}})$. On the other hand, we have $\mathsf{E}[L|\gamma =t]=F(t)-K(t)=H(t)$, $0<t<{t_{G}}$, and, in Case B, $\mathsf{E}[L|\gamma ={t_{G}}]=F({t_{G}})-K({t_{G}})+K({t_{G}})=F({t_{G}})=H({t_{G}})$. The claim follows. □
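As a numerical sanity check of the compensator found above (an added sketch; the choices $V=1$, the exponential law and the parameter λ are assumptions made only for this illustration), take $V=1$, so $K\equiv 1$, and $\overline{G}(t)={e^{-\lambda t}}$. Then (33) gives $F(t)=\lambda t$, and $A-X$ should be a martingale starting from 0, i.e. $\mathsf{E}\big(\lambda (t\wedge \gamma )-{\mathbb{1}_{\{t\geqslant \gamma \}}}\big)=0$ for every t.

```python
import numpy as np

# Monte Carlo check that A_t = F(min(t, gamma)) = lam * min(t, gamma) compensates
# the single jump process X_t = 1{t >= gamma} when gamma ~ Exp(lam), V = 1, K == 1.
lam = 2.0
rng = np.random.default_rng(seed=1)                       # arbitrary seed
gamma = rng.exponential(scale=1.0 / lam, size=1_000_000)  # t_G = +infinity here (Case A)

for t in (0.25, 0.5, 1.0, 2.0, 5.0):
    X_t = (gamma <= t).astype(float)          # single jump process
    A_t = lam * np.minimum(t, gamma)          # candidate compensator from (33)
    print(f"t = {t:4.2f}   estimated E[A_t - X_t] = {(A_t - X_t).mean():+.5f}")

# Each printed value should be close to 0, in agreement with Theorem 5:
# A - X is a local martingale (here even a martingale) starting from 0.
```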
4.2 Example: submartingales of class $(\Sigma )$
Recall, see [22], that a nonnegative submartingale $X={({X_{t}})_{t\in {\mathbb{R}_{+}}}}$ is called a submartingale of class $(\Sigma )$ if ${X_{0}}=0$ and it can be decomposed as ${X_{t}}={N_{t}}+{A_{t}}$, where $N={({N_{t}})_{t\in {\mathbb{R}_{+}}}}$, ${N_{0}}=0$, is a local martingale, $A={({A_{t}})_{t\in {\mathbb{R}_{+}}}}$, ${A_{0}}=0$, is a continuous increasing process, and the measure $(d{A_{t}})$ is carried by the set $\{t:{X_{t}}=0\}$. A typical example is a process ${X_{t}}={\overline{L}_{t}}-{L_{t}}$ which is the difference between the running maximum ${\overline{L}_{t}}$ of a continuous local martingale $({L_{t}})$ and ${L_{t}}$ itself.
Let $X={({X_{t}})_{t\in {\mathbb{R}_{+}}}}$ be a nonnegative submartingale with the Doob–Meyer decomposition ${X_{t}}={N_{t}}+{A_{t}}$, where ${N_{0}}={A_{0}}=0$, N is a local martingale, A is a predictable increasing process. Assume that ${A_{\infty }}<\infty $ a.s. and put ${C_{t}}=\inf \{s\geqslant 0:{A_{s}}>t\}$. Then, see [10, Lemma 3.1], X is of class $(\Sigma )$ if and only if a.s.
\[ {A_{{C_{t}}}}={A_{\infty }}\wedge t\hspace{2em}\text{and}\hspace{2em}{X_{{C_{t}}}}={X_{\infty }}{\mathbb{1}_{\{t\geqslant {A_{\infty }}\}}},\]
where a finite limit ${X_{\infty }}:={\lim \nolimits_{t\to \infty }}{X_{t}}$ exists a.s. by [10, Proposition 3.1]. Therefore, the process ${M_{t}}=-{N_{{C_{t}}}}$ has the representation
\[ {M_{t}}=t\wedge \gamma -V{\mathbb{1}_{\{t\geqslant \gamma \}}},\hspace{1em}\text{where}\hspace{2.5pt}\gamma ={A_{\infty }}\hspace{2.5pt}\text{and}\hspace{2.5pt}V={X_{\infty }}.\]
M may not be a local martingale. For example, take as L a Brownian motion stopped when it hits 1 and define $X=\overline{L}-L$; then ${M_{t}}=t\wedge 1$. However, if X is a submartingale of class $(D)$, then N is a uniformly integrable martingale and M is also a uniformly integrable martingale (with respect to its own filtration and, by Theorem 1, with respect to the single jump filtration generated by γ on the original space). Now we can define a function K according to (32) and conclude that (33) is valid with $F(t)=t$. We may interpret (33) as the equation with known K and unknown G. This identity says that the Lebesgue measure on $[0,{t_{G}})$ is absolutely continuous with respect to $dG$ but not vice versa. However, if the function $K(t)$ does not vanish ($dG$-a.s.), then we obtain from (33) that $dG$ is equivalent to the Lebesgue measure on $\mathcal{T}$, in particular, G is continuous, and
\[ \mathsf{P}(\gamma >t)=\exp \Big(-{\underset{0}{\overset{t}{\int }}}\frac{ds}{K(s)}\Big),\hspace{1em}t<{t_{G}}.\]
This statement coincides with Theorem 4.1 in [22]. If the function $K(t)$ may vanish, the analysis of equation (33) with $F(t)=t$, known $K(t)$ and unknown $G(t)$ is done in [23].
A kind of converse statement is proved in [11]. If, say, a martingale M satisfies
where $\gamma <\infty $ and $V\geqslant 0$, then, using Monroe's theorem [20], we prove that there is a Brownian motion B and a finite stopping time T such that, for the stopped process $L={B^{T}}$, the joint law of its terminal value ${L_{\infty }}$ and its maximum ${\overline{L}_{\infty }}$ coincides with that of M, that is, with the law of $(\gamma -V,\gamma )$. In particular, this shows that a distribution function G is the law of the maximum of a uniformly integrable continuous martingale L with ${L_{0}}=0$ if and only if, with $F(t)=t$, $0\leqslant t<{t_{G}}$, we have $F\stackrel{\mathrm{loc}}{\ll }G$, ${\textstyle\int _{[0,{t_{G}})}}|H(s)|\hspace{0.1667em}dG(s)<\infty $, where H is defined by (17), and $\overline{G}(t)=o({t^{-1}})$, see conditions for M to have type 3 or 4 in Theorem 4. This gives an alternative proof of the main result in [23].