1 Introduction
This paper develops in detail a framework of one-dimensional stochastic differential equations (henceforth abbreviated as SDEs) with distributional drift and possible path-dependency. To the best of our knowledge, this is the first paper to approach a class of non-Markovian SDEs with distributional drift.
The main objective of this paper is to analyze the solution (existence and uniqueness) of the martingale problem associated with SDEs of the type
(1.1)
\[ d{X_{t}}=\sigma ({X_{t}})d{W_{t}}+{b^{\prime }}({X_{t}})dt+\Gamma \big(t,{X^{t}}\big)dt,\hspace{2.5pt}\hspace{2.5pt}{X_{0}}\stackrel{d}{=}{\delta _{{x_{0}}}},\]
where $b,\sigma :\mathbb{R}\to \mathbb{R}$ are continuous functions, $\sigma >0$, ${x_{0}}\in \mathbb{R}$ and W is a standard Brownian motion. The assumptions on b, which will be formulated later, imply that ${b^{\prime }}$ is a Schwartz distribution. Concerning the path-dependent component of the drift, we consider a locally bounded functional
(1.2)
\[ \Gamma :\Lambda \to \mathbb{R},\hspace{1em}\Lambda :=\big\{\big(s,{\eta ^{s}}\big):s\in [0,T],\hspace{2.5pt}\eta \in C([0,T])\big\},\]
where
\[ {\eta ^{s}}(t):=\left\{\begin{array}{l@{\hskip10.0pt}l}\eta (t),& \text{if}\hspace{2.5pt}t\le s,\\ {} \eta (s),& \text{if}\hspace{2.5pt}t>s,\end{array}\right.\]
and $C([0,T])$ is the space of real-valued continuous functions on $[0,T]$, for a given terminal time $0<T<\infty $. By convention, we extend Γ from Λ to $[0,T]\times C([0,T])$ by setting (in a nonanticipating way)
(1.3)
\[ \Gamma (s,\eta ):=\Gamma \big(s,{\eta ^{s}}\big),\hspace{1em}(s,\eta )\in [0,T]\times C([0,T]).\]
Setting $\sigma =1$, $b=B$, where B is a two-sided real-valued Brownian motion independent of W, equation (1.1) reads
(1.4)
\[ d{X_{t}}=d{W_{t}}+{B^{\prime }}({X_{t}})dt+\Gamma \big(t,{X^{t}}\big)dt.\]
When $\Gamma =0$, equation (1.4) constitutes the so-called Brox diffusion, a celebrated random environment model; see, e.g., [4, 15, 25] and the references therein. This paper includes the study of (1.4) when Γ is a bounded path-dependent functional, which yields a non-Markovian variant of the Brox diffusion.
Path-dependent SDEs have been investigated from several angles. Under standard Lipschitz regularity conditions on the coefficients, it is known (see, e.g., [20, chapter V, Theorem 11.2]) that strong existence and uniqueness hold. When the path-dependence takes the form of delayed stochastic equations, a one-sided Lipschitz condition ensures strong existence and uniqueness; see, e.g., [24, 17]. Beyond Lipschitz regularity of the coefficients, [14] shows uniqueness in law under structural conditions on an underlying approximating Markov process, covering dependence on the local time and the running maximum. Existence in law for infinite-dimensional SDEs with additive noise on the configuration space, with path-dependent drift functionals of sublinear growth, is studied in [8]. In all those contributions the drift is a nonanticipative functional. Beyond Brownian driving noises, [6] establishes existence of solutions for some path-dependent differential equations driven by a Hölder process.
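As a purely illustrative aside (not taken from the paper), the following sketch simulates a mollified version of such a Brox-type equation by Euler–Maruyama: the distributional drift $B'$ is replaced by a finite-difference gradient of a Brownian environment sampled on a grid, and Γ is a bounded nonanticipative functional of the running average. All names and numerical choices here are our own.

```python
import numpy as np

# Illustrative sketch only: Euler-Maruyama for a mollified Brox-type
# equation (1.4).  The distributional drift B' is replaced by a
# finite-difference gradient of a sampled Brownian environment B, and
# gamma is a bounded nonanticipative path-dependent functional.
rng = np.random.default_rng(0)

T, n = 1.0, 2000
dt = T / n
grid = np.linspace(-10.0, 10.0, 4001)   # spatial grid for the environment
dx = grid[1] - grid[0]

# two-sided Brownian environment on the grid, recentred so that B(0) = 0
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dx), grid.size - 1))])
B -= np.interp(0.0, grid, B)
dB = np.gradient(B, dx)                 # mollified stand-in for B'

def gamma(t, path):
    """Bounded path-dependent drift: a function of the running average."""
    return np.tanh(path.mean())

x = np.zeros(n + 1)                     # X_0 = x_0 = 0
for k in range(n):
    drift = np.interp(x[k], grid, dB) + gamma(k * dt, x[: k + 1])
    x[k + 1] = x[k] + drift * dt + np.sqrt(dt) * rng.normal()
```

Nothing here is claimed to converge to the Brox diffusion; it only illustrates the two ingredients (random environment drift plus nonanticipative path-dependent drift) side by side.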
The Markovian case ($\Gamma =0$) with distributional drift has been intensively studied over the years. Diffusions in the generalized sense were first considered in the case when the solution is still a semimartingale, beginning with [19]. Later on, many authors considered special cases of SDEs with generalized coefficients; it is difficult to quote them all. In the case when the drift ${b^{\prime }}$ is a measure and the solutions are semimartingales, we refer the reader to [3, 9, 21]. We also recall that [10] considered special cases of nonsemimartingales solving stochastic differential equations with generalized drift.
In [12] and [13], the authors studied time-independent one-dimensional SDEs of the form
(1.5)
\[ \mathrm{d}{X_{t}}=\sigma ({X_{t}})\mathrm{d}{W_{t}}+{b^{\prime }}({X_{t}})\mathrm{d}t,\hspace{1em}t\in [0,T],\]
whose solutions are possibly nonsemimartingale processes, where σ is a strictly positive continuous function and ${b^{\prime }}$ is the derivative of a real-valued continuous function. They established well-posedness of the martingale problem, Itô’s formula under weak conditions, a semimartingale characterization and the Lyons–Zheng decomposition. The only supplementary assumption was the existence of the function
(1.6)
\[ \Sigma (x):=2{\int _{0}^{x}}\frac{{b^{\prime }}}{{\sigma ^{2}}}(y)dy,\hspace{1em}x\in \mathbb{R},\]
considered as a suitable limit via regularizations. Those authors considered solutions in law. The SDE (1.5) was also investigated in [1], where the authors provided a well-stated framework when σ and b are γ-Hölder continuous, $\gamma >\frac{1}{2}$. In [22], the authors have also shown that in some cases strong solutions exist and pathwise uniqueness holds. More recently, in the time-dependent (but still one-dimensional) framework, a significant contribution was made in [7]. As far as the multidimensional case is concerned, important steps were taken in [11] and more recently in [5], when the diffusion matrix is the identity and b is a time-dependent drift in a suitable negative Sobolev space. We also refer to [2], where the authors focused on (1.1) in the case of a time-independent drift b which is a measure of Kato class.
Let us come back to the objective of the present paper, in which the path-dependent drift contains the derivative in the sense of distributions of a continuous function b. We remark that when ${b^{\prime }}$ is a bounded measurable function and Γ is a bounded path-dependent functional, the problem can be easily treated by applying Girsanov’s theorem, see, e.g., [16, Propositions 3.6 and 3.10, chapter 5]. Here, the combination of a Schwartz distribution ${b^{\prime }}$ with a path-dependent functional Γ requires a new set of ideas. Equation (1.1) will be interpreted as a martingale problem with respect to an operator
$\mathcal{L}$, see (3.3), where L is the Markovian generator
(1.7)
\[ Lf=\frac{{\sigma ^{2}}}{2}{f^{\prime\prime }}+{b^{\prime }}{f^{\prime }},\]
where we stress that ${b^{\prime }}$ is the derivative of some continuous function b. If we define Σ as in (1.6), then the operator L can be written as
(1.8)
\[ Lf={\big({e^{\Sigma }}{f^{\prime }}\big)^{\prime }}\frac{{e^{-\Sigma }}{\sigma ^{2}}}{2},\]
see [12]. We define a notion of martingale problem related to $\mathcal{L}$ (see Definition 3.3) and a notion of strong martingale problem related to ${\mathcal{D}_{\mathcal{L}}}$ and a given Brownian motion W (see Definition 3.4). The latter has to be compared with the notion of strong existence and pathwise uniqueness of an SDE. In this article, the notion of martingale problem extends the usual one by replacing the space ${C^{2}}$ of twice continuously differentiable real-valued functions with a more suitable set ${\mathcal{D}_{\mathcal{L}}}$. In the Markovian case, the notion of strong martingale problem was introduced in [22]. As mentioned earlier, we concentrate on the case when b is continuous; the case of special discontinuous functions is investigated in [18].
The strategy of this paper consists in eliminating the distributional drift by means of the so-called Zvonkin transform, see [27]. In this direction, we transform the equation via an L-harmonic function h, which exists under the assumption that the function (1.6) is well defined. The case $\Gamma =0$ was already implemented in [12] and [13], where the drift in the transformed SDE was null. In our non-Markovian context, the transformed equation is essentially a path-dependent SDE with measurable coefficients. Under some linear growth conditions (see Assumption 4.16), Theorem 4.23 establishes existence for the martingale problem related to (1.1). Proposition 4.24 states uniqueness under more restrictive conditions. Indeed, consider the example when $\sigma =1$, ${b^{\prime }}$ is the derivative (in the sense of distributions) of a bounded continuous function b and Γ is a bounded measurable functional. In this case, $h={\textstyle\int _{0}^{\cdot }}{e^{-2b(y)}}dy$. Then the study of the well-posedness of the martingale problem is equivalent to the well-posedness of the path-dependent SDE
(1.9)
\[ Y={Y_{0}}+{\int _{0}^{\cdot }}\exp \big(-2b\big({h^{-1}}({Y_{s}})\big)\big)d{W_{s}}+{\int _{0}^{\cdot }}\Gamma \big(s,{h^{-1}}({Y_{s}})\big)\exp \big(-2b\big({h^{-1}}({Y_{s}})\big)\big)ds.\]
Existence and uniqueness for (1.9) can be established via Girsanov’s theorem.
Moreover, Corollary 4.31 establishes well-posedness for the strong martingale problem associated with (1.1). This holds under suitable Lipschitz regularity conditions on the functional $\tilde{\Gamma }$, which is related to Γ via (4.14), and a specific assumption on the function ${\sigma _{0}}$ defined in Corollary 4.9. We suppose that ${\sigma _{0}}$ is bounded, uniformly elliptic and fulfills a Yamada–Watanabe type condition. A typical example is when ${\sigma _{0}}$ is γ-Hölder continuous for $\frac{1}{2}\le \gamma \le 1$.
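To make the role of the transform concrete in the example above (with $\sigma =1$ and b bounded continuous, so that $\Sigma =2b$ by (1.6)), one can check directly from (1.8) that $h={\textstyle\int _{0}^{\cdot }}{e^{-2b(y)}}dy$ is L-harmonic:
\[ Lh={\big({e^{\Sigma }}{h^{\prime }}\big)^{\prime }}\frac{{e^{-\Sigma }}{\sigma ^{2}}}{2}={\big({e^{2b}}{e^{-2b}}\big)^{\prime }}\frac{{e^{-2b}}}{2}=0.\]
This is precisely why the distributional drift ${b^{\prime }}$ disappears from the equation satisfied by $Y=h(X)$.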
Several results of the present paper can be partially extended to the multidimensional case, by using the techniques developed in the Markovian case in [11]. In this direction, the harmonic function h should be replaced by the “mild” solution ϕ of the parabolic Kolmogorov equation
(1.10)
\[ \left\{\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\partial _{t}}\phi +\frac{1}{2}\Delta \phi +{b^{\prime }}\nabla \phi & =& \lambda (\phi -\mathrm{id}),\\ {} \phi (T,\cdot )& =& \mathrm{id},\end{array}\right.\]
see Section 3.2 of [11].
2 Notations and preliminaries
2.1 General notations
Let I be an interval of $\mathbb{R}$. ${C^{k}}(I)$ is the space of real functions defined on I with continuous derivatives up to order k. This space is endowed with the topology of uniform convergence on compact sets for the functions and all their derivatives. Generally, $I=\mathbb{R}$ or $I=[0,T]$ for some fixed positive real T. The space of continuous functions on I will be denoted by $C(I)$. Often, if there is no ambiguity, ${C^{k}}(\mathbb{R})$ will simply be denoted by ${C^{k}}$. Given an a.e. bounded real function f, $|f{|_{\infty }}$ will denote its essential supremum.
We recall some notions from [12]. All filtrations $\mathfrak{F}$ are assumed to fulfill the usual conditions. When no filtration is specified, we mean the canonical filtration of the underlying process; the canonical filtration associated with a process X is denoted by ${\mathfrak{F}^{X}}$. An $\mathfrak{F}$-Dirichlet process X is the sum of an $\mathfrak{F}$-local martingale ${M^{X}}$ and an $\mathfrak{F}$-adapted zero quadratic variation process ${A^{X}}$. We fix by convention ${A_{0}^{X}}=0$, so that the decomposition is unique. A sequence $({X^{n}})$ of continuous processes indexed by $[0,T]$ is said to converge u.c.p. to a process X whenever $\underset{t\in [0,T]}{\sup }|{X_{t}^{n}}-{X_{t}}|$ converges to zero in probability. Finally, the covariation of two general càdlàg processes X and Y (whenever it exists) is denoted by $[X,Y]$ and we set $[X]=[X,X]$, see, e.g., [23]. If $[X]$ exists, X is called a finite quadratic variation process.
Remark 2.1.
- (1) An $\mathfrak{F}$-continuous semimartingale Y is always an $\mathfrak{F}$-Dirichlet process. The ${A^{Y}}$ process coincides with the continuous bounded variation component. Moreover, the quadratic variation $[Y]$ is the usual quadratic variation for semimartingales.
- (2) Any $\mathfrak{F}$-Dirichlet process is a finite quadratic variation process and its quadratic variation is given by $[X]=[{M^{X}}]$.
- (3) If $f\in {C^{1}}(\mathbb{R})$ and $X={M^{X}}+{A^{X}}$ is an $\mathfrak{F}$-Dirichlet process, then $Y=f(X)$ is again an $\mathfrak{F}$-Dirichlet process and $[Y]={\textstyle\int _{0}^{\cdot }}{f^{\prime }}{(X)^{2}}d[{M^{X}}]$.
3 Non-Markovian SDE: the function case
3.1 General considerations
As in the case of Markovian SDEs, it is possible to formulate the notions of strong existence, pathwise uniqueness, existence and uniqueness in law for path-dependent SDEs of the type (1.1), see, e.g., Section A. Let us suppose for the moment that $\sigma ,{b^{\prime }}:\mathbb{R}\to \mathbb{R}$ are Borel functions. We will consider solutions X of
(3.1)
\[ {X_{t}}=\xi +{\int _{0}^{t}}\sigma ({X_{s}})d{W_{s}}+{\int _{0}^{t}}{b^{\prime }}({X_{s}})ds+{\int _{0}^{t}}\Gamma \big(s,{X^{s}}\big)ds,\hspace{1em}t\in [0,T],\]
for some initial condition ξ.
The previous equation will be denoted by $E(\sigma ,{b^{\prime }},\Gamma ;\nu )$ (where ν is the law of ξ), or simply by $E(\sigma ,{b^{\prime }},\Gamma )$ if we omit the initial condition. For simplicity, the initial conditions will always be considered as deterministic. The functional Γ is nonanticipative in the sense of (1.3).
Definition 3.1.
Let ν be the Dirac probability measure on $\mathbb{R}$ such that $\nu ={\delta _{{x_{0}}}}$, ${x_{0}}\in \mathbb{R}$. A stochastic process X is called a solution of $E(\sigma ,{b^{\prime }},\Gamma ;\nu )$ with respect to a probability $\mathbb{P}$ if there is a Brownian motion W on some filtered probability space, such that X solves (3.1) and ${X_{0}}={x_{0}}$. We also say that the couple $(X,\mathbb{P})$ solves $E(\sigma ,{b^{\prime }},\Gamma )$ with initial condition distributed according to ν.
Suppose $\Gamma \equiv 0$. A very well-known result, [26, Corollary 8.1.7], concerns the equivalence between martingale problems and solutions in law of SDEs. Suppose for a moment that ${b^{\prime }}$ is a continuous function. According to [16, chapter 5], a process X and a probability $\mathbb{P}$ solve the classical martingale problem if and only if X is a solution of (1.1). The proof of that result can be easily adapted to the path-dependent case, i.e. when $\Gamma \ne 0$. This provides the statement below.
Proposition 3.2.
A couple $(X,\mathbb{P})$ is a solution of $E(\sigma ,{b^{\prime }},\Gamma )$ if and only if, under $\mathbb{P}$,
(3.2)
\[ f({X_{t}})-f({X_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds-{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds\]
is a local martingale for every $f\in {C^{2}}$, where $Lf=\frac{1}{2}{\sigma ^{2}}{f^{\prime\prime }}+{b^{\prime }}{f^{\prime }}$.
3.2 Comments about the distributional case
When ${b^{\prime }}$ is a distribution, it is not obvious how to formulate the notion of an SDE, except in the case when L is close to the divergence form, i.e. when $Lf={({\sigma ^{2}}{f^{\prime }})^{\prime }}+\beta {f^{\prime }}$ and β is a Radon measure, see, e.g., Proposition 3.1 of [12]. For this reason, we replace the notion of solution in law with the notion of martingale problem. Suppose for a moment that L is a second order PDE operator with possibly generalized coefficients. In general, as shown in [12], ${C^{2}}$ is not included in the natural domain of the operator L and, similarly to [12], we will replace ${C^{2}}$ with some set ${\mathcal{D}_{L}}$. Suppose that $L:{\mathcal{D}_{L}}\subset {C^{1}}(\mathbb{R})\to C(\mathbb{R})$. Nevertheless, ${\mathcal{D}_{L}}$ is not the domain of L in the sense of the generator of a semigroup.
Definition 3.3.
- (1) We say that a continuous stochastic process X solves (with respect to a probability $\mathbb{P}$ on some measurable space $(\Omega ,\mathcal{F})$) the martingale problem related to (3.3) with initial condition $\nu ={\delta _{{x_{0}}}}$, ${x_{0}}\in \mathbb{R}$, with respect to a domain ${\mathcal{D}_{L}}$ if
(3.4)
\[ {M_{t}^{f}}:=f({X_{t}})-f({x_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds-{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds\]
is a $\mathbb{P}$-local martingale for every $f\in {\mathcal{D}_{L}}$. We will also say that the couple $(X,\mathbb{P})$ is a solution of (or $(X,\mathbb{P})$ solves) the martingale problem with respect to ${\mathcal{D}_{L}}$.
- (2) If a solution exists, we say that existence holds for the martingale problem above.
- (3) We say that uniqueness holds for the martingale problem above, if any two solutions (on some measurable space $(\Omega ,\mathcal{F})$) $({X^{i}},{\mathbb{P}^{i}}),i=1,2$, have the same law.
We remark that in the classical literature of martingale problems, see [26], a solution is a probability on the path space $C([0,T])$ and X is the canonical process. If $(X,\mathbb{P})$ is a solution according to our notations, a solution in the classical framework would be the probability law of X with respect to $\mathbb{P}$. We preserve our notations in conformity with [12, 13].
In the sequel, when the measurable space $(\Omega ,\mathcal{F})$ is self-explanatory, it will often be omitted. As already observed in Proposition 3.2, the notion of martingale problem has been (since the works of Stroock and Varadhan [26]) a concept related to solutions of SDEs in law. In the case when ${b^{\prime }}$ and σ are continuous functions (see [26]), ${\mathcal{D}_{L}}$ corresponds to the space ${C^{2}}(\mathbb{R})$, in agreement with Remark 4.6 below.
Below we introduce the analogous notion of strong existence and pathwise uniqueness for our martingale problem.
Definition 3.4.
- (1) Let $(\Omega ,\mathcal{F},\mathbb{P})$ be a probability space and let $\mathfrak{F}=({\mathcal{F}_{t}})$ be the canonical filtration associated with a fixed Brownian motion W. Let ${x_{0}}\in \mathbb{R}$ be a constant. We say that a continuous $\mathfrak{F}$-adapted real-valued process X such that ${X_{0}}={x_{0}}$ is a solution to the strong martingale problem (related to (3.3)), with respect to ${\mathcal{D}_{L}}$ and W (with related filtered probability space), if
(3.5)
\[\begin{aligned}{}& f({X_{t}})-f({x_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds-{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds\\ {} & \hspace{1em}={\int _{0}^{t}}{f^{\prime }}({X_{s}})\sigma ({X_{s}})d{W_{s}}\end{aligned}\]
holds for every $f\in {\mathcal{D}_{L}}$.
- (2) We say that strong existence holds for the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$ if, for every ${x_{0}}\in \mathbb{R}$ and every filtered probability space $(\Omega ,\mathcal{F},\mathbb{P},\mathfrak{F})$, where $\mathfrak{F}=({\mathcal{F}_{t}})$ is the canonical filtration associated with a Brownian motion W, there is a process X solving the strong martingale problem (related to (3.3)) with respect to ${\mathcal{D}_{L}}$ and W with ${X_{0}}={x_{0}}$.
- (3) We say that pathwise uniqueness holds for the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$ if, given $(\Omega ,\mathcal{F},\mathbb{P})$, a Brownian motion W on it and two solutions ${X^{i}}$, $i=1,2$, to the strong martingale problem with respect to ${\mathcal{D}_{L}}$ and W such that $\mathbb{P}[{X_{0}^{1}}={X_{0}^{2}}]=1$, the processes ${X^{1}}$ and ${X^{2}}$ are indistinguishable.
4 The case when the drift is the derivative of a continuous function
4.1 The Markovian case
In this section we recall some basic notations and results from [12], but in a way that simplifies the presentation of our framework. We will also add some useful new elements. Let σ and b be functions in $C(\mathbb{R})$ with $\sigma >0$. In [12], in view of defining ${\mathcal{D}_{L}}$ and L in the spirit of (1.7), the authors define the function
(4.1)
\[ \Sigma (x)=2\underset{n\to \infty }{\lim }{\int _{0}^{x}}\frac{{b^{\prime }_{n}}}{{\sigma _{n}^{2}}}(y)dy,\hspace{1em}\forall x\in \mathbb{R},\]
where the limit is intended to be in $C(\mathbb{R})$, i.e. uniformly on each compact. Here
(4.2)
\[ {b_{n}}:=b\ast {\Phi _{n}},\hspace{1em}{\sigma _{n}}:=\sigma \ast {\Phi _{n}},\]
where ${\Phi _{n}}(x):=n\Phi (nx)$ and $\Phi \in \mathcal{S}(\mathbb{R})$ (the Schwartz space), with $\textstyle\int \Phi (x)\hspace{0.1667em}dx=1$. For concrete examples, one can take either ${\sigma ^{2}}$ or b to be functions of locally bounded variation; see [12] for other examples and details. Proposition 2.3, Lemmas 2.6 and 2.9 of [12] allow us (equivalently) to define a subspace ${\mathcal{D}_{L}}$ of ${C^{1}}(\mathbb{R})$ on which the definition of $Lf$ by (1.8) makes sense.
Notation 4.1.
- (1) ${\mathcal{D}_{L}}$ is the subset of all $f\in {C^{1}}(\mathbb{R})$ for which there exists $\phi \in {C^{1}}$ such that ${f^{\prime }}=\exp (-\Sigma )\phi $.
- (2) For $f\in {\mathcal{D}_{L}}$, with ${f^{\prime }}=\exp (-\Sigma )\phi $ as in item (1), we set
(4.3)
\[ Lf:={\phi ^{\prime }}\exp (-\Sigma )\frac{{\sigma ^{2}}}{2}.\]
- (3) We denote by $h:\mathbb{R}\to \mathbb{R}$ the function characterized by
(4.4)
\[ h(0)=0,\hspace{1em}{h^{\prime }}=\exp (-\Sigma ).\]
In particular h is an L-harmonic function in the sense that $Lh=0$, as we will see in Proposition 4.3.
Proposition 4.3.
Let $f\in {\mathcal{D}_{L}}$ with ${f^{\prime }}=\exp (-\Sigma )\phi $, $\phi \in {C^{1}}$. Then the following properties hold.
- (1) ${f^{2}}\in {\mathcal{D}_{L}}$ and $L{f^{2}}={f^{\prime \hspace{0.1667em}2}}{\sigma ^{2}}+2fLf$.
- (2) $Lh=0$.
Proof.
(1) We observe ${f^{2}}\in {\mathcal{D}_{L}}$ because ${({f^{2}})^{\prime }}=2f{f^{\prime }}=(2f\phi )\exp (-\Sigma )$. From Notation 4.1 (1) and the fact that ${\phi _{2}}:=2f\phi \in {C^{1}}$, we conclude ${f^{2}}\in {\mathcal{D}_{L}}$. By (4.3),
\[\begin{aligned}{}L{f^{2}}& ={\phi ^{\prime }_{2}}\exp (-\Sigma )\frac{{\sigma ^{2}}}{2}={(f\phi )^{\prime }}\exp (-\Sigma ){\sigma ^{2}}\\ {} & ={f^{\prime }}{\sigma ^{2}}\exp (-\Sigma )\phi +f{\phi ^{\prime }}\exp (-\Sigma ){\sigma ^{2}}={f^{\prime \hspace{0.1667em}2}}{\sigma ^{2}}+2fLf.\end{aligned}\]
(2) The proof follows by setting $\phi =1$, using item (1) above and Notation 4.1 (1). □
We now formulate a standing assumption.
Assumption 4.4.
The function Σ defined in (4.1) exists in $C(\mathbb{R})$.
Remark 4.6.
When σ and ${b^{\prime }}$ are continuous functions, then ${\mathcal{D}_{L}}={C^{2}}$. Indeed, in this case, $\Sigma \in {C^{1}}$ and then ${f^{\prime }}=\exp (-\Sigma )\phi \in {C^{1}}$. In particular, $Lf$ corresponds to the classical definition.
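For instance, if ${b^{\prime }}$ is continuous then $\Sigma \in {C^{1}}$ with ${\Sigma ^{\prime }}=2{b^{\prime }}/{\sigma ^{2}}$, and expanding (1.8) for $f\in {C^{2}}$ recovers the classical operator:
\[ Lf={\big({e^{\Sigma }}{f^{\prime }}\big)^{\prime }}\frac{{e^{-\Sigma }}{\sigma ^{2}}}{2}=\big({\Sigma ^{\prime }}{f^{\prime }}+{f^{\prime\prime }}\big)\frac{{\sigma ^{2}}}{2}=\frac{{\sigma ^{2}}}{2}{f^{\prime\prime }}+{b^{\prime }}{f^{\prime }}.\]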
Remark 4.7.
We assume that Assumption 4.4 is satisfied. Let $g\in {\mathcal{D}_{L}}$ be a fixed diffeomorphism of class ${C^{1}}$ such that ${g^{\prime }}>0$. We set
(4.6)
\[ {\sigma _{0}^{g}}:=\big(\sigma {g^{\prime }}\big)\circ {g^{-1}},\hspace{1em}{b^{g}}=\big((Lg)\circ {g^{-1}}\big)\]
and consider
\[ {L^{g}}v:=\frac{1}{2}{\big({\sigma _{0}^{g}}\big)^{2}}{v^{\prime\prime }}+{b^{g}}{v^{\prime }},\hspace{1em}v\in {\mathcal{D}_{{L^{g}}}},\]
where we define ${\mathcal{D}_{{L^{g}}}}$ according to Notation 4.1, replacing L with ${L^{g}}$.
Proposition 4.8.
We assume that Assumption 4.4 is satisfied. Let $g\in {\mathcal{D}_{L}}$ be a fixed diffeomorphism of class ${C^{1}}$ such that ${g^{\prime }}>0$. Then, $f\in {\mathcal{D}_{L}}$ if and only if $f\circ {g^{-1}}$ belongs to ${\mathcal{D}_{{L^{g}}}}$, and in this case
\[ {L^{g}}\big(f\circ {g^{-1}}\big)=(Lf)\circ {g^{-1}}.\]
Proof of Proposition 4.8.
By Notation 4.1, there exists ${\phi ^{g}}\in {C^{1}}$, ${\phi ^{g}}>0$, such that
(4.7)
\[ {g^{\prime }}=\exp (-\Sigma ){\phi ^{g}}.\]
Concerning the direct implication, if $f\in {\mathcal{D}_{L}}$, first we prove that $f\circ {g^{-1}}\in {\mathcal{D}_{{L^{g}}}}$. Again by Notation 4.1 there exists ${\phi ^{f}}\in {C^{1}}$ such that ${f^{\prime }}=\exp (-\Sigma ){\phi ^{f}}$. So, ${(f\circ {g^{-1}})^{\prime }}=\frac{{f^{\prime }}}{{g^{\prime }}}\circ {g^{-1}}=\frac{{\phi ^{f}}}{{\phi ^{g}}}\circ {g^{-1}}\in {C^{1}}$ because ${g^{-1}}\in {C^{1}}$ and ${\phi ^{g}}>0$; then, $f\circ {g^{-1}}\in {C^{2}}$. Note that, by Remark 4.7, ${\mathcal{D}_{{L^{g}}}}={C^{2}}$ and hence $f\circ {g^{-1}}\in {\mathcal{D}_{{L^{g}}}}$. Moreover, according to Notation 4.1 (2),
\[ {\big({\phi ^{f}}\big)^{\prime }}=\frac{2Lf}{{\sigma ^{2}}}\exp (\Sigma ),\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{\big({\phi ^{g}}\big)^{\prime }}=\frac{2Lg}{{\sigma ^{2}}}\exp (\Sigma ).\]
A direct calculation gives
\[ {\big(f\circ {g^{-1}}\big)^{\prime\prime }}={\bigg(\frac{{\phi ^{f}}}{{\phi ^{g}}}\circ {g^{-1}}\bigg)^{\prime }}=\bigg[\frac{2Lf}{{g^{\prime \hspace{0.1667em}2}}{\sigma ^{2}}}-\frac{2Lg{f^{\prime }}}{{\sigma ^{2}}{g^{\prime \hspace{0.1667em}3}}}\bigg]\circ {g^{-1}}.\]
Consequently,
(4.8)
\[ \frac{{({\sigma _{0}^{g}})^{2}}}{2}{\big(f\circ {g^{-1}}\big)^{\prime\prime }}=\bigg[Lf-\frac{Lg{f^{\prime }}}{{g^{\prime }}}\bigg]\circ {g^{-1}}.\]
By (4.8), $Lf\circ {g^{-1}}=\frac{{({\sigma _{0}^{g}})^{2}}}{2}{(f\circ {g^{-1}})^{\prime\prime }}+(Lg)\circ {g^{-1}}{(f\circ {g^{-1}})^{\prime }}={L^{g}}(f\circ {g^{-1}})$.
Let us discuss the converse implication. Suppose that $f\circ {g^{-1}}$ belongs to ${\mathcal{D}_{{L^{g}}}}={C^{2}}$. Again, according to Notation 4.1 (1), we need to show that ${f^{\prime }}\exp (\Sigma )\in {C^{1}}$, which is equivalent to showing that $({f^{\prime }}\exp (\Sigma ))\circ {g^{-1}}$ belongs to ${C^{1}}$. If ${\phi ^{g}}\in {C^{1}}$ is such that ${g^{\prime }}=\exp (-\Sigma ){\phi ^{g}}$ (see (4.7)), we have
\[ \big({f^{\prime }}\exp (\Sigma )\big)\circ {g^{-1}}=\bigg({f^{\prime }}\frac{{\phi ^{g}}}{{g^{\prime }}}\bigg)\circ {g^{-1}}={\big(f\circ {g^{-1}}\big)^{\prime }}\big({\phi ^{g}}\circ {g^{-1}}\big),\]
which obviously belongs to ${C^{1}}$. Therefore $f\in {\mathcal{D}_{L}}$. □
By setting $h=g$ in Proposition 4.8 we obtain the following corollary.
Corollary 4.9.
Let h be the function defined by (4.4). Then, $f\in {\mathcal{D}_{L}}$ if and only if $f\circ {h^{-1}}\in {C^{2}}$. Moreover, by setting $\varphi =f\circ {h^{-1}}$ for $f\in {\mathcal{D}_{L}}$, we have
\[ Lf\circ {h^{-1}}={L^{h}}\varphi =\frac{1}{2}{\sigma _{0}^{2}}{\varphi ^{\prime\prime }},\]
where
(4.9)
\[ {\sigma _{0}}:=\big(\sigma {h^{\prime }}\big)\circ {h^{-1}}.\]
In [12], the authors also show that the existence and uniqueness of the solution to the martingale problem are subject to a nonexplosion condition. The proposition below is an easy consequence of Proposition 3.13 in [12], which concerns the well-posedness of the martingale problem with respect to L in the case $\Gamma =0$.
Proposition 4.10.
Let $\nu ={\delta _{{x_{0}}}}$, ${x_{0}}\in \mathbb{R}$, and suppose that Assumption 4.4 holds true. Then existence and uniqueness hold for the martingale problem related to L (i.e. with $\Gamma =0$) with respect to ${\mathcal{D}_{L}}$ with initial condition ν.
4.2 The path-dependent framework
Let σ and b be functions in $C(\mathbb{R})$ with $\sigma >0$ and Γ as defined in (1.2). Let us assume again Assumption 4.4 and let h be the function defined in (4.4). We recall that ${\sigma _{0}}$ was defined in (4.9).
The first result explains how to reduce our path-dependent martingale problem to a path-dependent SDE.
Proposition 4.12.
Let X be a stochastic process on a probability space $(\Omega ,\mathcal{F},\mathbb{P})$.
- (1) $(X,\mathbb{P})$ solves the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$ if and only if the process $Y=h(X)$ is a solution (with respect to $\mathbb{P}$) of
(4.10)
\[ {Y_{t}}={Y_{0}}+{\int _{0}^{t}}{\sigma _{0}}({Y_{s}})d{W_{s}}+{\int _{0}^{t}}{h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds\]
for some Brownian motion W.
- (2) Let W be a Brownian motion on $(\Omega ,\mathcal{F},\mathbb{P})$. Then, X is a solution to the strong martingale problem with respect to ${\mathcal{D}_{L}}$ and W if and only if (4.10) holds.
Proof.
(1) We start proving the direct implication. According to (3.4) and the notations introduced therein,
(4.11)
\[ {M_{t}^{h}}=h({X_{t}})-h({X_{0}})-{\int _{0}^{t}}Lh({X_{s}})ds-{\int _{0}^{t}}{h^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds\]
is a $\mathbb{P}$-local martingale. In particular, since $Lh=0$ by Proposition 4.3, Y satisfies
\[ {Y_{t}}={Y_{0}}+{\int _{0}^{t}}{h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds+{M_{t}^{h}},\]
where ${M^{h}}$ is a local martingale and hence Y is a semimartingale. We need now to evaluate ${[{M^{h}}]_{t}}={[Y]_{t}}$. We apply (3.4) to $f={h^{2}}$ and, again by Proposition 4.3, we get
(4.12)
\[ {Y_{t}^{2}}={Y_{0}^{2}}+{\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds+2{\int _{0}^{t}}{Y_{s}}{h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds+{M_{t}^{{h^{2}}}},\]
where ${M^{{h^{2}}}}$ is a local martingale, and we recall that ${\sigma _{0}}$ was defined in (4.6). By integration by parts,
\[\begin{aligned}{}{[Y]_{t}}& ={Y_{t}^{2}}-{Y_{0}^{2}}-2{\int _{0}^{t}}{Y_{s}}d{Y_{s}}\\ {} & ={Y_{t}^{2}}-{Y_{0}^{2}}+{M_{t}}-2{\int _{0}^{t}}{Y_{s}}{h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds,\end{aligned}\]
where ${M_{t}}=-2{\textstyle\int _{0}^{t}}{Y_{s}}d{M_{s}^{h}}$. Therefore
(4.13)
\[ {Y_{t}^{2}}={Y_{0}^{2}}-{M_{t}}+2{\int _{0}^{t}}{Y_{s}}{h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds+{[Y]_{t}}.\]
The semimartingale ${Y^{2}}$ admits the two decompositions (4.12) and (4.13). By uniqueness, $-M={M^{{h^{2}}}}$ and ${\textstyle\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds={[Y]_{t}}$. By (4.11), ${[{M^{h}}]_{t}}={[Y]_{t}}={\textstyle\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds$. Setting
\[ {W_{t}}:={\int _{0}^{t}}\frac{1}{{\sigma _{0}}({Y_{s}})}d{M_{s}^{h}},\]
which makes sense since ${\sigma _{0}}>0$, we have
\[ {[W]_{t}}={\int _{0}^{t}}\frac{1}{{\sigma _{0}^{2}}({Y_{s}})}d{[{M^{h}}]_{s}}=t.\]
Therefore, by Lévy’s characterization of Brownian motion, W is a standard Brownian motion. Since
\[ {M_{t}^{h}}={\int _{0}^{t}}{\sigma _{0}}({Y_{s}})d{W_{s}},\]
(4.11) shows that Y solves (4.10).
Next, we prove the converse implication. Suppose that $Y=h(X)$ satisfies (4.10) for some $\mathbb{P}$-Brownian motion W. We take $f\in {\mathcal{D}_{L}}$. By Corollary 4.9 we have $\varphi \equiv f\circ {h^{-1}}\in {C^{2}}$. Using Itô’s formula and again Corollary 4.9, we get
\[\begin{aligned}{}\varphi ({Y_{t}})& =\varphi ({Y_{0}})+{\int _{0}^{t}}{\varphi ^{\prime }}({Y_{s}})d{Y_{s}}+\frac{1}{2}{\int _{0}^{t}}{\varphi ^{\prime\prime }}({Y_{s}})d{[Y]_{s}}\\ {} & =\varphi ({Y_{0}})+{\int _{0}^{t}}{\varphi ^{\prime }}({Y_{s}}){\sigma _{0}}({Y_{s}})d{W_{s}}+{\int _{0}^{t}}{\varphi ^{\prime }}({Y_{s}}){h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds+\frac{1}{2}{\int _{0}^{t}}{\varphi ^{\prime\prime }}({Y_{s}}){\sigma _{0}^{2}}({Y_{s}})ds\\ {} & =\varphi ({Y_{0}})+{\int _{0}^{t}}{\varphi ^{\prime }}({Y_{s}}){\sigma _{0}}({Y_{s}})d{W_{s}}+{\int _{0}^{t}}{\varphi ^{\prime }}({Y_{s}}){h^{\prime }}\big({h^{-1}}({Y_{s}})\big)\Gamma \big(s,{h^{-1}}\big({Y^{s}}\big)\big)ds+{\int _{0}^{t}}{L^{h}}\varphi ({Y_{s}})ds\\ {} & =f({X_{0}})+{\int _{0}^{t}}Lf({X_{s}})ds+{\int _{0}^{t}}{f^{\prime }}({X_{s}})\sigma ({X_{s}})d{W_{s}}+{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds.\end{aligned}\]
Therefore
\[ f({X_{t}})-f({X_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds-{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds={\int _{0}^{t}}{f^{\prime }}({X_{s}})\sigma ({X_{s}})d{W_{s}}\]
is a local martingale, which concludes the proof of item (1).
(2) The converse implication follows in the same way as for item (1). The proof of the direct implication follows directly by Itô’s formula. □
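To illustrate the reduction numerically (our own sketch, not the paper's construction): take $\sigma =1$ and a smooth bounded b, so that the drift is a true function and $h={\textstyle\int _{0}^{\cdot }}{e^{-2b(y)}}dy$ as in the introduction; one can then simulate the transformed equation (4.10) for $Y=h(X)$ by Euler–Maruyama and map back through ${h^{-1}}$. The functional `gamma` below is a bounded path-dependent drift of our own choosing.

```python
import numpy as np

# Numerical sketch of the reduction in Proposition 4.12 (illustration
# only): sigma = 1, b smooth and bounded, h(x) = int_0^x exp(-2 b(y)) dy.
# We simulate the transformed SDE (4.10) for Y = h(X) and recover X by
# inverting h on a grid.
rng = np.random.default_rng(1)

b = np.tanh                                      # smooth bounded b
grid = np.linspace(-15.0, 15.0, 6001)
dx = grid[1] - grid[0]
# left Riemann sum for h on the grid, normalised so that h(0) = 0
h_vals = np.concatenate([[0.0], np.cumsum(np.exp(-2.0 * b(grid[:-1])) * dx)])
h_vals -= np.interp(0.0, grid, h_vals)

def h_inv(y):
    # h is strictly increasing (h' = exp(-2b) > 0), invert by interpolation
    return np.interp(y, h_vals, grid)

def gamma(t, path):
    return np.sin(path[-1]) * np.cos(t)          # bounded, nonanticipative

T, n = 1.0, 2000
dt = T / n
y = np.zeros(n + 1)                              # Y_0 = h(x_0) = h(0) = 0
for k in range(n):
    xk = h_inv(y[k])
    sigma0 = np.exp(-2.0 * b(xk))                # sigma_0 = h' o h^{-1}
    drift = sigma0 * gamma(k * dt, h_inv(y[: k + 1]))
    y[k + 1] = y[k] + drift * dt + sigma0 * np.sqrt(dt) * rng.normal()

x_path = h_inv(y)                                # candidate solution X = h^{-1}(Y)
```

Note that the diffusion coefficient of Y is ${\sigma _{0}}={h^{\prime }}\circ {h^{-1}}$ here because $\sigma =1$, matching the drift factor in (4.10).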
Corollary 4.13.
Let $(X,\mathbb{P})$ be a solution of the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$. Then X is a Dirichlet process (with respect to its canonical filtration) and ${[X]_{t}}={\textstyle\int _{0}^{t}}{\sigma ^{2}}({X_{s}})ds$, $t\in [0,T]$.
Proof.
By Proposition 4.12, $Y=h(X)$ is a continuous semimartingale with ${[Y]_{t}}={\textstyle\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds$; in particular Y is a Dirichlet process. Since ${h^{-1}}\in {C^{1}}$, Remark 2.1 (3) implies that $X={h^{-1}}(Y)$ is a Dirichlet process (with respect to its canonical filtration) and ${[X]_{t}}={\textstyle\int _{0}^{t}}{\big({({h^{-1}})^{\prime }}({Y_{s}})\big)^{2}}d{[Y]_{s}}={\textstyle\int _{0}^{t}}{\sigma ^{2}}({X_{s}})ds$. □
Remark 4.14.
If X is a solution of the strong martingale problem with respect to ${\mathcal{D}_{L}}$ and some Brownian motion W, then X is a Dirichlet process with respect to the canonical filtration of the Brownian motion.
An immediate consequence of Proposition 4.12 is the following corollary.
Corollary 4.15.
Suppose that $\Gamma =0$ and let $(X,\mathbb{P})$ be a solution to the martingale problem related to L with respect to ${\mathcal{D}_{L}}$. Then, $Y=h(X)$ is an $\mathfrak{F}$-local martingale, where $\mathfrak{F}$ is the canonical filtration of X, with quadratic variation $[Y]={\textstyle\int _{0}^{\cdot }}{\sigma _{0}^{2}}({Y_{s}})ds$.
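Combining Corollary 4.15 with the time-change result of Proposition 4.19 below, the local martingale $Y-{Y_{0}}$ admits, on a possibly enlarged probability space, the representation
\[ {Y_{t}}={Y_{0}}+{\beta _{{\textstyle\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds}},\hspace{1em}t\in [0,T],\]
for some standard Brownian motion β.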
4.3 Existence
We fix here the same conventions as in Section 4.2. In the sequel, we introduce the map $\tilde{\Gamma }:\Lambda \to \mathbb{R}$ defined by
(4.14)
\[ \tilde{\Gamma }(s,\eta ):=\frac{\Gamma (s,\eta )}{\sigma (\eta (s))},\hspace{1em}(s,\eta )\in \Lambda .\]
At this point, we introduce the following technical assumption, which is in particular verified if Γ is bounded and $\sigma =1$.
Assumption 4.16.
There exists a constant $K>0$ such that
\[ \big|\tilde{\Gamma }\big(s,{\eta ^{s}}\big)\big|\le K\Big(1+\underset{r\in [0,s]}{\sup }\big|h\big(\eta (r)\big)\big|\Big),\hspace{1em}(s,\eta )\in \Lambda .\]
Remark 4.17.
Let X be a stochastic process and set $Y=h(X)$. If Assumption 4.16 is fulfilled, then
(4.15)
\[ \underset{s\in [0,T]}{\sup }\big|\tilde{\Gamma }\big(s,{X^{s}}\big)\big|\le K\Big(1+\underset{s\in [0,T]}{\sup }|{Y_{s}}|\Big).\]
In particular ${\textstyle\int _{0}^{T}}{\tilde{\Gamma }^{2}}(s,{X^{s}})ds<\infty $ a.s.
Proposition 4.18 below is a well-known extension of Novikov’s criterion. It is an easy consequence of Corollary 5.14 in [16, Chapter 3].
Proposition 4.18.
Suppose Assumption 4.16 holds true. Let W be a Brownian motion and let X be a continuous and adapted process for which there exists a subdivision $0={t_{0}}<{t_{1}}<\cdots <{t_{n}}=T$ such that
\[ \mathbb{E}\Bigg[\exp \Bigg(\frac{1}{2}{\int _{{t_{i-1}}}^{{t_{i}}}}{\big|\tilde{\Gamma }\big(s,{X^{s}}\big)\big|^{2}}ds\Bigg)\Bigg]<\infty \]
for every $i\in \{1,\dots ,n\}$. Then, the process
\[ \exp \Bigg({\int _{0}^{t}}\tilde{\Gamma }\big(s,{X^{s}}\big)d{W_{s}}-\frac{1}{2}{\int _{0}^{t}}{\tilde{\Gamma }^{2}}\big(s,{X^{s}}\big)ds\Bigg),\hspace{1em}t\in [0,T],\]
is a martingale.
Next, we need a slight adaptation of the Dambis–Dubins–Schwarz theorem to the case of a finite interval. For the sake of completeness, we give the details here.
Proposition 4.19.
Let M be a local martingale vanishing at zero such that ${[M]_{t}}={\textstyle\int _{0}^{t}}{A_{s}}ds$, $t\in [0,T]$. Then, on a possibly enlarged probability space, there exists a copy of M (still denoted by the same letter M) with the same law and a Brownian motion β such that
\[ {M_{t}}={\beta _{{\textstyle\int _{0}^{t}}{A_{s}}ds}},\hspace{1em}t\in [0,T].\]
Proof.
Let us define
\[ {\tilde{M}_{t}}=\left\{\begin{array}{l@{\hskip10.0pt}r}{M_{t}},& \hspace{2.5pt}t\in [0,T],\\ {} {M_{T}}+{B_{t}}-{B_{T}},& \hspace{2.5pt}t>T,\end{array}\right.\]
where B is a Brownian motion independent of M. If the initial probability space is not rich enough, one considers an enlarged probability space containing a copy of M (still denoted by the same letter) with the same law and the independent Brownian motion B. Note that $\tilde{M}$ is a local martingale with quadratic variation given by
\[ {[\tilde{M}]_{t}}=\left\{\begin{array}{l@{\hskip10.0pt}r}{[M]_{t}},& \hspace{2.5pt}t\in [0,T],\\ {} t-T+{[M]_{T}},& \hspace{2.5pt}t>T.\end{array}\right.\]
Observe that $\underset{t\to \infty }{\lim }{[\tilde{M}]_{t}}=\infty $. By the classical Dambis–Dubins–Schwarz theorem there exists a standard Brownian motion β such that a.s. ${\tilde{M}_{t}}={\beta _{{[\tilde{M}]_{t}}}}$, $t\ge 0$. In particular,
\[ {M_{t}}={\tilde{M}_{t}}={\beta _{{\textstyle\int _{0}^{t}}{A_{s}}ds}},\hspace{1em}t\in [0,T].\]
□
The proposition below is an adaptation of a well-known argument for Markov diffusions.
Proposition 4.20.
Suppose Assumption 4.16 holds true and let $(X,\mathbb{P})$ be a solution to the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$ with $\Gamma =0$. Then the process (4.16) is a martingale.
Remark 4.21.
Let $(X,\mathbb{P})$ be a solution to the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$ with $\Gamma =0$. We recall that, by Corollary 4.13, X is an $\mathfrak{F}$-Dirichlet process ($\mathfrak{F}$ being the canonical filtration) and $[X]=[{M^{X}}]={\textstyle\int _{0}^{\cdot }}{\sigma ^{2}}({X_{s}})ds$, so that, by Lévy’s characterization theorem, the process ${W_{t}}:={\textstyle\int _{0}^{t}}\frac{d{M_{s}^{X}}}{\sigma ({X_{s}})}$ is an $\mathfrak{F}$-Brownian motion.
Proof of Proposition 4.20.
Let $Y=h(X)$. By Proposition 4.12, we know that $[Y]={\textstyle\int _{0}^{\cdot }}{\sigma _{0}^{2}}({Y_{s}})ds$. Let us choose $k\ge |{\sigma _{0}}{|_{\infty }^{2}}T$ and a subdivision $\{{t_{0}}=0,\dots ,{t_{n}}=T\}$ of $[0,T]$ in such a way that
(4.17)
\[ {c_{i}}:=\frac{3}{2}k({t_{i}}-{t_{i-1}}){K^{2}}<\frac{1}{2}\]
for $i\in \{1,\dots ,n\}$. Here, K comes from Assumption 4.16. By (4.15), we know that
(4.18)
\[ {\int _{{t_{i-1}}}^{{t_{i}}}}{\big|\tilde{\Gamma }\big(s,{X^{s}}\big)\big|^{2}}ds\le ({t_{i}}-{t_{i-1}}){K^{2}}{\Big(1+\underset{s\in [0,T]}{\sup }|{Y_{s}}|\Big)^{2}}\]
for every $i\in \{1,\dots ,n\}$. We set ${M_{t}}={Y_{t}}-{Y_{0}}$, $t\in [0,T]$, and we note that
(4.19)
\[ {\Big(1+\underset{s\in [0,T]}{\sup }|{Y_{s}}|\Big)^{2}}\le 3\underset{s\in [0,T]}{\sup }|{M_{s}}{|^{2}}+3\big(1+{Y_{0}^{2}}\big).\]
We recall that ${Y_{0}}$ is deterministic. With a view to applying Proposition 4.18, and taking into account (4.18) and (4.19), we get
(4.20)
\[\begin{aligned}{}& \mathbb{E}\Bigg(\exp \Bigg(\frac{1}{2}{\int _{{t_{i-1}}}^{{t_{i}}}}{\tilde{\Gamma }^{2}}\big(s,{X^{s}}\big)ds\Bigg)\Bigg)\\ {} & \hspace{1em}\le \mathbb{E}\bigg(\exp \bigg(\frac{3({t_{i}}-{t_{i-1}}){K^{2}}}{2}\underset{s\in [0,T]}{\sup }|{M_{s}}{|^{2}}\bigg)\bigg)\exp \bigg(\frac{3}{2}({t_{i}}-{t_{i-1}}){K^{2}}\big(1+{Y_{0}^{2}}\big)\bigg).\end{aligned}\]
Since M is a local martingale vanishing at zero, Proposition 4.19 states that there is a copy (with the same distribution) of M (still denoted by the same letter) on another probability space, and a Brownian motion β, such that the previous expression gives
(4.21)
\[\begin{aligned}{}& \mathbb{E}\bigg(\exp \bigg(\frac{3({t_{i}}-{t_{i-1}}){K^{2}}}{2}\underset{s\in [0,T]}{\sup }|{\beta _{{[M]_{s}}}}{|^{2}}\bigg)\bigg)\exp \bigg(\frac{3}{2}({t_{i}}-{t_{i-1}}){K^{2}}\big(1+{Y_{0}^{2}}\big)\bigg)\\ {} & \hspace{1em}\le \mathbb{E}\bigg(\exp \bigg(\frac{3({t_{i}}-{t_{i-1}}){K^{2}}}{2}\underset{\tau \in [0,k]}{\sup }|{\beta _{\tau }}{|^{2}}\bigg)\bigg)\exp \bigg(\frac{3}{2}({t_{i}}-{t_{i-1}}){K^{2}}\big(1+{Y_{0}^{2}}\big)\bigg),\end{aligned}\]
the latter inequality being valid because ${[M]_{t}}={\textstyle\int _{0}^{t}}{\sigma _{0}^{2}}({Y_{s}})ds\le |{\sigma _{0}}{|_{\infty }^{2}}T\le k$. By (4.17) we get
(4.22)
\[\begin{aligned}{}\mathbb{E}\Bigg[\exp \Bigg(\frac{1}{2}{\int _{{t_{i-1}}}^{{t_{i}}}}{\tilde{\Gamma }^{2}}\big(s,{X^{s}}\big)ds\Bigg)\Bigg]& \le \mathbb{E}\bigg[\exp \bigg(\frac{{c_{i}}}{k}\underset{\tau \in [0,k]}{\sup }|{B_{\tau }}{|^{2}}\bigg)\bigg]\exp \bigg(\frac{{c_{i}}}{k}\big(1+{Y_{0}^{2}}\big)\bigg)\\ {} & \le \mathbb{E}\bigg[\underset{\tau \in [0,k]}{\sup }\exp \bigg(\frac{{c_{i}}}{k}|{B_{\tau }}{|^{2}}\bigg)\bigg]\exp \bigg(\frac{{c_{i}}}{k}\big(1+{Y_{0}^{2}}\big)\bigg)\end{aligned}\]
for every $i\in \{1,\dots ,n\}$. By Remark 4.22 below, for each $i\in \{1,\dots ,n\}$ and $\tau \in [0,k]$, we have
\[ \mathbb{E}\bigg[\exp \bigg(\frac{{c_{i}}}{k}|{B_{\tau }}{|^{2}}\bigg)\bigg]\le \mathbb{E}\big[\exp \big({c_{i}}{G^{2}}\big)\big]<\infty ,\]
where G is a standard Gaussian random variable. Since $x\mapsto \exp (\frac{{c_{i}}}{2k}x)$ is increasing and convex, and ${(|{B_{\tau }}{|^{2}})_{\tau \ge 0}}$ is a nonnegative square integrable submartingale, ${(\exp (\frac{{c_{i}}}{2k}|{B_{\tau }}{|^{2}}))_{\tau \ge 0}}$ is also a nonnegative submartingale. Consequently, by Doob’s inequality (with $p=2$), the expectation on the right-hand side of (4.22) is finite. Finally, by Proposition 4.18, (4.16) is a martingale. □

Proposition 4.20 opens the way to the following existence result for the path-dependent martingale problem.
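Before stating it, we record that the finiteness of $\mathbb{E}[\exp ({c_{i}}{G^{2}})]$ used above is the elementary Gaussian fact $\mathbb{E}[\exp (c{G^{2}})]={(1-2c)^{-1/2}}$, finite precisely when $c<\frac{1}{2}$. A quick Monte Carlo sanity check of this identity (an illustration on our part, not from the paper):

```python
import math
import random

def exp_moment_exact(c):
    """E[exp(c*G^2)] = (1 - 2c)^(-1/2) for a standard Gaussian G; needs c < 1/2."""
    assert c < 0.5, "the exponential moment is infinite for c >= 1/2"
    return (1.0 - 2.0 * c) ** -0.5

def exp_moment_mc(c, n=200_000, seed=7):
    """Monte Carlo estimate of E[exp(c*G^2)]."""
    rng = random.Random(seed)
    return sum(math.exp(c * rng.gauss(0.0, 1.0) ** 2) for _ in range(n)) / n

c = 0.2
print(exp_moment_exact(c))  # ~1.2910
print(exp_moment_mc(c))     # close to the exact value for c well below 1/2
```

As $c\uparrow \frac{1}{2}$ the closed form blows up, mirroring why the subdivision in the proof must be chosen fine enough.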
Theorem 4.23.
Proof.
By Proposition 4.10, there is a solution $(X,\mathbb{P})$ to the above-mentioned martingale problem with $\Gamma =0$. By Remark 4.11, there is a Brownian motion W such that
(4.23)
\[ f({X_{t}})-f({X_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds={\int _{0}^{t}}\big({f^{\prime }}\sigma \big)({X_{s}})d{W_{s}}\]
for every $f\in {\mathcal{D}_{L}}$. We define the process
\[ {V_{t}}:=\exp \Bigg({\int _{0}^{t}}\tilde{\Gamma }\big(s,{X^{s}}\big)d{W_{s}}-\frac{1}{2}{\int _{0}^{t}}{\tilde{\Gamma }^{2}}\big(s,{X^{s}}\big)ds\Bigg).\]
Under item (1), V is a martingale by Novikov’s condition. Under item (2), Proposition 4.20 says that V is a martingale. We define
(4.24)
\[ {\tilde{W}_{t}}:={W_{t}}-{\int _{0}^{t}}\tilde{\Gamma }\big(s,{X^{s}}\big)ds,\hspace{1em}t\in [0,T].\]
By Girsanov’s theorem, (4.24) is a Brownian motion under the probability $\mathbb{Q}$ such that
\[ d\mathbb{Q}:=\exp \Bigg({\int _{0}^{T}}\tilde{\Gamma }\big(s,{X^{s}}\big)d{W_{s}}-\frac{1}{2}{\int _{0}^{T}}{\tilde{\Gamma }^{2}}\big(s,{X^{s}}\big)ds\Bigg)d\mathbb{P}.\]
Applying (4.24) to (4.23), we obtain
\[ f({X_{t}})-f({X_{0}})-{\int _{0}^{t}}Lf({X_{s}})ds-{\int _{0}^{t}}{f^{\prime }}({X_{s}})\Gamma \big(s,{X^{s}}\big)ds={\int _{0}^{t}}\big({f^{\prime }}\sigma \big)({X_{s}})d{\tilde{W}_{s}}\]
for every $f\in {\mathcal{D}_{L}}$. Since ${\textstyle\int _{0}^{t}}({f^{\prime }}\sigma )({X_{s}})d{\tilde{W}_{s}}$ is a local martingale under $\mathbb{Q}$, $(X,\mathbb{Q})$ is a solution to the martingale problem in the statement. □

4.4 Uniqueness
We use here again the notation $\tilde{\Gamma }$ introduced in (4.14).
Proof.
Let $({X^{i}},{\mathbb{P}^{i}})$, $i=1,2$, be two solutions of the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$, and let us fix $i\in \{1,2\}$. By Corollary 4.13, ${X^{i}}$ is an ${\mathfrak{F}^{{X^{i}}}}$-Dirichlet process with respect to ${\mathbb{P}^{i}}$, such that $[{X^{i}}]\equiv {\textstyle\int _{0}^{\cdot }}\sigma {({X_{s}^{i}})^{2}}ds$. Let ${M^{i}}$ be the martingale component of ${X^{i}}$. Since $[{M^{i}}]\equiv {\textstyle\int _{0}^{\cdot }}\sigma {({X_{s}^{i}})^{2}}ds$, by Lévy’s characterization theorem, the process
(4.25)
\[ {W_{t}^{i}}={\int _{0}^{t}}\frac{d{M_{s}^{i}}}{\sigma ({X_{s}^{i}})},\hspace{1em}t\in [0,T],\]
is an ${\mathfrak{F}^{{X^{i}}}}$-Brownian motion. In particular, ${W^{i}}$ is a Borel functional of ${X^{i}}$.

By means of localization (similarly to Proposition 5.3.10 in [16]), without loss of generality we can suppose $\tilde{\Gamma }$ to be bounded. We define the process (whose random variables are also Borel functionals of ${X^{i}}$)
\[ {V_{t}^{i}}=\exp \Bigg(-{\int _{0}^{t}}\tilde{\Gamma }\big(s,{X^{i,s}}\big)d{W_{s}^{i}}-\frac{1}{2}{\int _{0}^{t}}{\big(\tilde{\Gamma }\big(s,{X^{i,s}}\big)\big)^{2}}ds\Bigg),\]
which, by Novikov’s condition, is a ${\mathbb{P}^{i}}$-martingale. This allows us to define the probability ${\mathbb{Q}^{i}}$ such that $d{\mathbb{Q}^{i}}={V_{T}^{i}}d{\mathbb{P}^{i}}$.

By Girsanov’s theorem, under ${\mathbb{Q}^{i}}$, ${B_{t}^{i}}:={W_{t}^{i}}+{\textstyle\int _{0}^{t}}\tilde{\Gamma }(s,{X^{i,s}})ds$ is a Brownian motion. Therefore, $({X^{i}},{\mathbb{Q}^{i}})$ solves the martingale problem related to L ($\Gamma =0$) with respect to ${\mathcal{D}_{L}}$. By uniqueness of the martingale problem with respect to ${\mathcal{D}_{L}}$ and $\Gamma =0$ (see Proposition 4.10), the processes ${X^{i}}$ (under ${\mathbb{Q}^{i}}$), $i=1,2$, have the same law. Hence, for every Borel set $B\in \mathfrak{B}(C([0,T]))$, we have
\[ {\mathbb{P}^{1}}\big\{{X^{1}}\in B\big\}={\int _{\Omega }}\frac{1}{{V_{T}^{1}}({X^{1}})}{1_{\{{X^{1}}\in B\}}}d{\mathbb{Q}^{1}}={\int _{\Omega }}\frac{1}{{V_{T}^{2}}({X^{2}})}{1_{\{{X^{2}}\in B\}}}d{\mathbb{Q}^{2}}={\mathbb{P}^{2}}\big\{{X^{2}}\in B\big\}.\]
Therefore, ${X^{1}}$ under ${\mathbb{P}^{1}}$ has the same law as ${X^{2}}$ under ${\mathbb{P}^{2}}$. Finally, uniqueness holds for the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$. □

4.5 Results on pathwise uniqueness
Before exploring conditions for strong existence and uniqueness for the martingale problem, we state and prove Proposition 4.27, which constitutes a crucial preliminary step.
Let $\bar{\Gamma }:\Lambda \to \mathbb{R}$ be a generic Borel functional. Related to it, we formulate the following technical assumption.
Proposition 4.27.
Suppose that Assumption 4.25 is satisfied and fix ${y_{0}}\in \mathbb{R}$. Then, pathwise uniqueness holds for the SDE with dynamics
(4.26)
\[ {Y_{t}}={y_{0}}+{\int _{0}^{t}}{\sigma _{0}}({Y_{s}})d{W_{s}}+{\int _{0}^{t}}\bar{\Gamma }\big(s,{Y^{s}}\big)ds,\]
i.e. $E({\sigma _{0}},0,\bar{\Gamma })$.

The proof of Proposition 4.27 generalizes the techniques of the Yamada–Watanabe pathwise uniqueness theorem for Markovian SDEs (see, e.g., Theorem 5.2.19 in [16]). Before proceeding with that proof, we state a lemma which is an easy consequence of Problem 5.3.15 in [16].
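Purely to illustrate what an equation of type (4.26) looks like computationally, the following Euler–Maruyama sketch discretizes a path-dependent SDE; the coefficients `sigma0` and `gamma_bar` below are hypothetical placeholders (bounded and regular), not the coefficients of the paper, and the scheme is an illustration rather than an ingredient of any proof.

```python
import math
import random

def euler_path_dependent(y0, sigma0, gamma_bar, T=1.0, n=200, seed=1):
    """Euler-Maruyama sketch for
        Y_t = y0 + int_0^t sigma0(Y_s) dW_s + int_0^t gamma_bar(s, Y^s) ds,
    where gamma_bar may look at the whole past of the path: here it receives
    the discretized stopped path (Y_{t_0}, ..., Y_{t_i})."""
    rng = random.Random(seed)
    dt = T / n
    path = [y0]
    for i in range(n):
        y = path[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))
        drift = gamma_bar(i * dt, path)  # functional of the path up to t_i
        path.append(y + sigma0(y) * dw + drift * dt)
    return path

# hypothetical coefficients, chosen bounded and regular purely for illustration
sigma0 = lambda y: 1.0 + 0.1 * math.sin(y)           # nondegenerate diffusion
gamma_bar = lambda t, p: math.tanh(sum(p) / len(p))  # bounded path functional

path = euler_path_dependent(0.0, sigma0, gamma_bar)
print(len(path))  # 201
```

Note that the drift at step i reads the entire simulated history, which is exactly the nonanticipative dependence on the stopped path ${Y^{t}}$ in (4.26).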
Lemma 4.28.
Suppose the assumptions in Proposition 4.27 are in force. Let Y be a solution of (4.26) and let $m\ge 2$ be an integer. Then, there exists a constant $C>0$, depending on the linear growth constant of ${\sigma _{0}}$, ${Y_{0}}$, T, m, and the quantities $(K,{\bar{\Gamma }_{\infty }})$ given in Assumptions 4.25 (3)–(4), such that
\[ \mathbb{E}\Big[\underset{t\in [0,T]}{\sup }|{Y_{t}}{|^{m}}\Big]\le C.\]
Proof of Proposition 4.27.
Let ${Y^{1}}$, ${Y^{2}}$ be two solutions of (4.26), on the same probability space and with respect to the same Brownian motion W, such that ${Y_{0}^{1}}={Y_{0}^{2}}={y_{0}}$. In the sequel, we set ${\Delta _{t}}={Y_{t}^{1}}-{Y_{t}^{2}}$, $t\in [0,T]$. By Lemma 4.28 (with $m=2$), we have
\[ \mathbb{E}\Big[\underset{t\in [0,T]}{\sup }|{Y_{t}^{i}}{|^{2}}\Big]<\infty \]
for $i=1,2$. By the linear growth assumption on ${\sigma _{0}}$, this obviously gives
(4.28)
\[ \mathbb{E}\Bigg({\int _{0}^{T}}{\big|{\sigma _{0}}\big({Y_{t}^{i}}\big)\big|^{2}}dt\Bigg)<\infty \]
for $i=1,2$. We observe
(4.29)
\[ {\Delta _{t}}={\int _{0}^{t}}\big(\bar{\Gamma }\big(s,{Y^{1,s}}\big)-\bar{\Gamma }\big(s,{Y^{2,s}}\big)\big)ds+{\int _{0}^{t}}\big({\sigma _{0}}\big({Y_{s}^{1}}\big)-{\sigma _{0}}\big({Y_{s}^{2}}\big)\big)d{W_{s}},\hspace{1em}t\in [0,T].\]
We recall from the proof of Proposition 2.13 in [16, Chapter 5] the existence of functions ${\rho _{n}}:\mathbb{R}\to [0,\infty )$ and ${\Psi _{n}}\in {C^{2}}(\mathbb{R})$ with ${\Psi ^{\prime\prime }_{n}}={\rho _{n}}$, such that for every $x\in \mathbb{R}$
(4.30)
\[ 0\le {\rho _{n}}(x)\le \frac{2}{n{l^{2}}(x)},\hspace{1em}|{\Psi ^{\prime }_{n}}(x)|\le 1,\hspace{1em}|{\Psi _{n}}(x)|\le |x|,\hspace{1em}\underset{n\to \infty }{\lim }{\Psi _{n}}(x)=|x|.\]
By applying Itô’s formula and using (4.29), we get
\[\begin{aligned}{}{\Psi _{n}}({\Delta _{t}})& ={\int _{0}^{t}}{\Psi ^{\prime }_{n}}({\Delta _{s}})\big[\bar{\Gamma }\big(s,{Y^{1,s}}\big)-\bar{\Gamma }\big(s,{Y^{2,s}}\big)\big]ds\\ {} & \hspace{1em}+\frac{1}{2}{\int _{0}^{t}}{\Psi ^{\prime\prime }_{n}}({\Delta _{s}}){\big[{\sigma _{0}}\big({Y_{s}^{1}}\big)-{\sigma _{0}}\big({Y_{s}^{2}}\big)\big]^{2}}ds\\ {} & \hspace{1em}+{\int _{0}^{t}}{\Psi ^{\prime }_{n}}({\Delta _{s}})\big[{\sigma _{0}}\big({Y_{s}^{1}}\big)-{\sigma _{0}}\big({Y_{s}^{2}}\big)\big]d{W_{s}}.\end{aligned}\]
By using Assumption 4.25 and (4.30), we get
(4.31)
\[ {\Psi _{n}}({\Delta _{t}})\le {\int _{0}^{t}}K\Bigg(|{Y_{s}^{1}}-{Y_{s}^{2}}|+{\int _{0}^{s}}|{Y_{r}^{1}}-{Y_{r}^{2}}|dr\Bigg)ds+\frac{t}{n}+{M_{t}},\]
where ${M_{t}}={\textstyle\int _{0}^{t}}{\Psi ^{\prime }_{n}}({\Delta _{s}})[{\sigma _{0}}({Y_{s}^{1}})-{\sigma _{0}}({Y_{s}^{2}})]d{W_{s}}$ is a local martingale. Since ${\Psi ^{\prime }_{n}}$ is bounded, the estimate (4.28) ensures that M is a (even square integrable) martingale. We take the expectation and, applying Fubini’s theorem in (4.31), obtain
(4.32)
\[ \mathbb{E}{\Psi _{n}}({\Delta _{t}})\le K{\int _{0}^{t}}\mathbb{E}|{Y_{s}^{1}}-{Y_{s}^{2}}|ds+KT{\int _{0}^{t}}\mathbb{E}|{Y_{s}^{1}}-{Y_{s}^{2}}|ds+\frac{t}{n},\]
since $\mathbb{E}{M_{t}}=0$. Passing to the limit as $n\to \infty $, by Lebesgue’s dominated convergence theorem, we get
\[ \mathbb{E}|{\Delta _{t}}|\le K(1+T){\int _{0}^{t}}\mathbb{E}|{\Delta _{s}}|ds,\hspace{1em}t\in [0,T].\]
By Gronwall’s inequality, we obtain $\mathbb{E}|{\Delta _{t}}|=0$, $t\in [0,T]$. By the continuity of the sample paths of ${Y^{1}}$, ${Y^{2}}$ we conclude that ${Y^{1}}$ and ${Y^{2}}$ are indistinguishable. □

We come back to the framework of the beginning of Section 4.1. We assume again the validity of Assumption 4.4. We recall the definition of the harmonic function h defined by $h(0)=0$, ${h^{\prime }}(x)={e^{-\Sigma }}$, see (4.4). We recall the notation ${\sigma _{0}}=(\sigma {h^{\prime }})\circ {h^{-1}}$. We define
Theorem 4.30.
Proof.
By Proposition 4.27, pathwise uniqueness holds. Indeed, taking into account the expression of (4.34), equation (4.10) is a particular case of (4.26).
In order to prove existence, we will apply Theorem 4.23. For this purpose, we need to verify that either Hypothesis (1) or Hypothesis (2) of that theorem holds. Hypothesis (1) in the above statement coincides with Hypothesis (1) in Theorem 4.23. Assume now the validity of (2) in the above statement; we check that Assumption 4.16 holds true. By (4.14) and the definition of $\bar{\Gamma }$ in (4.34), we obtain
(4.35)
\[ {\sigma _{0}}\big(\eta (s)\big)\tilde{\Gamma }\big(s,\big({h^{-1}}\circ \eta \big)\big)=\bar{\Gamma }(s,\eta ).\]
The fact that $\frac{1}{{\sigma _{0}}}$ is bounded, jointly with (3) in Assumption 4.25, yields the existence of a constant ${K_{1}}>0$ such that
\[ \big|\tilde{\Gamma }\big(s,\big({h^{-1}}\circ \eta \big)\big)\big|\le {K_{1}}\Bigg(|\eta (s)|+{\int _{0}^{s}}|\eta (r)|dr\Bigg)+|\bar{\Gamma }(s,0)|.\]
By (4) in Assumption 4.25 and the simple estimate
\[ |\eta (s)|+{\int _{0}^{s}}|\eta (r)|dr\le (1+T)\underset{r\in [0,T]}{\sup }|\eta (r)|,\]
we can conclude that Assumption 4.16 holds.

By Theorem 4.23, existence holds for the martingale problem related to (3.3) with respect to ${\mathcal{D}_{L}}$, and by Proposition 4.12 (1), we have that (4.10) has a solution. At this point, we can apply the Yamada–Watanabe theorem to guarantee that the solution is actually strong. We remark that the proof of the Yamada–Watanabe theorem in the path-dependent case is the same as in the Markovian case, stated for instance in Proposition 3.20 [16, Chapter 5]. □
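As a closing numerical aside (ours, not from the paper), the martingale property of stochastic exponentials that underlies the Girsanov arguments of this section can be sanity-checked by Monte Carlo: for a bounded integrand θ, Novikov's condition holds and $\mathbb{E}[{V_{T}}]=1$. A minimal sketch:

```python
import math
import random

def stoch_exp_mean(theta, T=1.0, steps=100, paths=10_000, seed=3):
    """Monte Carlo estimate of
        E[exp(int_0^T theta(W_s) dW_s - (1/2) int_0^T theta(W_s)^2 ds)]
    via an Ito (left-point) discretization. For bounded theta, Novikov's
    condition holds and the true value is 1."""
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(paths):
        w, log_v = 0.0, 0.0
        for _ in range(steps):
            th = theta(w)                      # evaluated before the increment
            dw = rng.gauss(0.0, math.sqrt(dt))
            log_v += th * dw - 0.5 * th * th * dt
            w += dw
        total += math.exp(log_v)
    return total / paths

mean = stoch_exp_mean(math.cos)  # bounded integrand, |cos| <= 1
print(mean)  # close to 1
```

Because θ is evaluated before each Gaussian increment, every discrete factor has conditional mean one, so the estimator is unbiased and only the Monte Carlo error remains.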