1 Introduction
This paper consists of two independent but related parts. In the first part (Section 3), we consider a class of backward stochastic differential equations (from now on BSDEs) (see (3.1) below) in the filtration generated by a Lévy process and, for the case of a Lipschitz generator of sublinear growth, we establish the existence and uniqueness of the solution in Theorem 3.3 below. We stress that the proof of Theorem 3.3 relies on the predictable representation property (from now on PRP) obtained by Di Tella and Engelbert in [6] (see also [5]) and recalled in Theorem 2.2 below.
A similar class of BSDEs has been considered by Nualart and Schoutens in [18]. Their approach is based on the PRP of the orthogonalized Teugels martingales, which was obtained in Nualart and Schoutens [17]. However, the PRP of the family of Teugels martingales requires the Lévy measure to possess an exponential moment outside the origin. On the other hand, the PRP in [6] needs no additional assumptions on the Lévy process. Hence, we are able to consider this class of BSDEs for general Lévy processes. Therefore, Theorem 3.3 below generalizes [18, Theorem 1] to arbitrary Lévy processes, allowing moreover a great variety of systems of martingales in place of the Teugels martingales.
The second part of the present paper (Section 4) is devoted to the study of a problem of logarithmic utility maximization of terminal wealth. We solve the problem having in mind the dynamical approach first introduced by Rouge and El Karoui in [19] and further developed by Hu, Imkeller and Müller in [10] in a Brownian setting, which is based on a martingale optimality principle constructed via BSDEs. In the case of a more general filtration supporting martingales with jumps, the dynamical approach based on BSDEs has been followed by Morlais in [15, 16] and by Becherer in [1] to study the problem of exponential utility maximization of terminal wealth.
The logarithmic utility maximization problem considered in this paper is analogous to the one studied in [10, Section 4] for a Brownian filtration: we extend the results of [10, Section 4] to Lévy processes with both a Gaussian part and jumps.
Because of the special structure of the logarithmic utility, it turns out that, in the present paper, the martingale optimality principle can be constructed in a direct and independent way, without using BSDEs. However, as we shall explain (see Remark 4.7 below), this can alternatively be done using a BSDE of the form (3.1) below.
In [8] (see also references therein) Goll and Kallsen have studied the problem of logarithmic utility maximization in a very general context (which, in particular, covers Lévy processes), combining duality methods and semimartingale characteristics calculus. In [8], the authors assume the convexity of the set in which the admissible strategies for the optimization problem take values. However, since for the dynamical approach this further assumption is not needed, the set of constraints considered in the present paper will be closed and not necessarily convex. Other references for logarithmic utility maximization by the duality approach are Goll and Kallsen [7] and Kallsen [14]. We also recall the work by Cvitanić and Karatzas [2] about utility optimization by the duality approach.
General setting of the paper. Let $(\Omega ,\mathcal{F},\mathbb{P})$ be a complete probability space and $\mathbb{F}$ be a filtration satisfying the usual conditions. We fix a finite time horizon $T>0$. We only consider $\mathbb{R}$-valued stochastic processes on the time interval $[0,T]$. When we say that a process X is a martingale we implicitly assume that X is càdlàg. We denote by $\mathcal{P}$ the σ-algebra of predictable subsets of $[0,T]\times \Omega $.
By ${\mathcal{H}^{2}}$ we denote the space of square integrable martingales X on $[0,T]$ such that ${X_{0}}=0$. The space $({\mathcal{H}^{2}},\| \cdot {\| _{2}})$ endowed with the norm specified by $\| X{\| _{2}^{2}}:=\mathbb{E}[{X_{T}^{2}}]$, for $X\in {\mathcal{H}^{2}}$, is a Hilbert space. We observe that, by Doob’s inequality, the $\| \cdot {\| _{2}}$-norm is equivalent to the norm $\| X{\| _{{\mathcal{H}^{2}}}^{2}}:=\mathbb{E}[{\sup _{t\in [0,T]}}{X_{t}^{2}}]$, $X\in {\mathcal{H}^{2}}$. However, $({\mathcal{H}^{2}},\| \cdot {\| _{{\mathcal{H}^{2}}}})$ is not a Hilbert space.
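For completeness, the equivalence of the two norms is quantitative: Doob's ${L^{2}}$-maximal inequality yields
\[ \mathbb{E}[{X_{T}^{2}}]\le \mathbb{E}\big[{\sup _{t\in [0,T]}}{X_{t}^{2}}\big]\le 4\hspace{0.1667em}\mathbb{E}[{X_{T}^{2}}],\hspace{1em}X\in {\mathcal{H}^{2}}\hspace{0.1667em},\]
that is, $\| X{\| _{2}}\le \| X{\| _{{\mathcal{H}^{2}}}}\le 2\| X{\| _{2}}$.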
For $X\in {\mathcal{H}^{2}}$, we denote by $\langle X,X\rangle $ the predictable square variation process of X. We recall that $\langle X,X\rangle $ is the unique predictable increasing process starting at zero such that the process ${X^{2}}-\langle X,X\rangle $ is an $\mathbb{F}$-martingale. If $X,Y\in {\mathcal{H}^{2}}$, we define $\langle X,Y\rangle $ by polarization and say that X and Y are orthogonal if $\langle X,Y\rangle =0$.
For $X\in {\mathcal{H}^{2}}$, we introduce ${L^{2}}(X)$ as the space of $\mathbb{F}$-predictable processes H such that ${\textstyle\int _{0}^{T}}{H_{s}^{2}}\mathrm{d}{\langle X,X\rangle _{s}}$ is integrable. For $H\in {L^{2}}(X)$, we denote by ${\textstyle\int _{0}^{\cdot }}{H_{s}}\mathrm{d}{X_{s}}$ the stochastic integral of H with respect to X. For every $H\in {L^{2}}(X)$, we have ${\textstyle\int _{0}^{\cdot }}{H_{s}}\mathrm{d}{X_{s}}\in {\mathcal{H}^{2}}$. Furthermore, ${\textstyle\int _{0}^{\cdot }}{H_{s}}\mathrm{d}{X_{s}}$ is characterized in the following way: a martingale $Z\in {\mathcal{H}^{2}}$ is indistinguishable from ${\textstyle\int _{0}^{\cdot }}{H_{s}}\mathrm{d}{X_{s}}$ if and only if $\langle Z,Y\rangle ={\textstyle\int _{0}^{\cdot }}{H_{s}}\mathrm{d}{\langle X,Y\rangle _{s}}$, for every $Y\in {\mathcal{H}^{2}}$.
For two semimartingales X and Y, we denote by $[X,Y]$ the covariation of X and Y, that is, we define ${[X,Y]_{t}}:={\langle {X^{c}},{Y^{c}}\rangle _{t}}+{\textstyle\sum _{0\le s\le t}}\Delta {X_{s}}\Delta {Y_{s}}$, $t\in [0,T]$, where ${X^{c}}$ and ${Y^{c}}$ denote the continuous local martingale parts of X and Y, respectively.
2 Martingale representation for Lévy processes
A Lévy process with respect to $\mathbb{F}$ is an $\mathbb{R}$-valued and $\mathbb{F}$-adapted stochastically continuous càdlàg process L such that ${L_{0}}=0$ and, for all $s,t\ge 0$, the increment $({L_{t+s}}-{L_{t}})$ is distributed as ${L_{s}}$ and is independent of ${\mathcal{F}_{t}}$. In this case, we say that $(L,\mathbb{F})$ is a Lévy process. By ${\mathbb{F}^{L}}$ we denote the completion in $\mathcal{F}$ of the filtration generated by L. It is well known that ${\mathbb{F}^{L}}$ satisfies the usual conditions (see [20]). Clearly, $(L,{\mathbb{F}^{L}})$ is again a Lévy process.
Let $(L,\mathbb{F})$ be a Lévy process. We denote by μ the jump measure of L. Recall that μ is a homogeneous Poisson random measure on $[0,T]\times \mathbb{R}$ with respect to $\mathbb{F}$ (see [13], Definition II.1.20). The $\mathbb{F}$-predictable compensator of μ is deterministic and it is given by $\lambda \otimes \nu $, where λ is the Lebesgue measure on $[0,T]$ and ν is the Lévy measure of L, that is, ν is, in particular, a σ-finite measure such that $\nu (\{0\})=0$ and $x\mapsto {x^{2}}\wedge 1$ is ν-integrable. We call $\overline{\mu }:=\mu -\lambda \otimes \nu $ the compensated Poisson random measure associated with μ. By ${B^{\sigma }}$ we denote the Gaussian part of L, which is an $\mathbb{F}$-Brownian motion such that $\mathbb{E}[{({B_{t}^{\sigma }})^{2}}]={\sigma ^{2}}t$, where ${\sigma ^{2}}\ge 0$. By $\eta \in \mathbb{R}$ we denote the drift parameter of L and we call $(\eta ,{\sigma ^{2}},\nu )$ the characteristic triplet of L.
A function G on $\widetilde{\Omega }:=[0,T]\times \Omega \times \mathbb{R}$ is said to be predictable if G is $\mathcal{P}\otimes \mathcal{B}(\mathbb{R})$-measurable. Let ${\mathcal{G}^{2}}(\mu )$ denote the linear space of the predictable functions G on $\widetilde{\Omega }$ such that $\mathbb{E}[{\textstyle\sum _{s\le T}}{G^{2}}(s,\omega ,\Delta {L_{s}}(\omega )){1_{\{\Delta {L_{s}}(\omega )\ne 0\}}}]<+\infty $.
For $G\in {\mathcal{G}^{2}}(\mu )$, we denote by ${\textstyle\int _{[0,\cdot ]\times \mathbb{R}}}G(s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)$ the stochastic integral of G with respect to $\overline{\mu }$. Recall that ${\textstyle\int _{[0,\cdot ]\times \mathbb{R}}}G(s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)$ is defined as the unique purely discontinuous $Z\in {\mathcal{H}^{2}}$ with $\Delta {Z_{t}}(\omega )=G(t,\omega ,\Delta {L_{t}}(\omega )){1_{\{\Delta {L_{t}}(\omega )\ne 0\}}}$ (see [13], II.1.27).
Next, for any $f\in {L^{2}}(\nu )$, we introduce the martingale ${X^{f}}\in {\mathcal{H}^{2}}$ by
\[ {X_{t}^{f}}:={\int _{[0,t]\times \mathbb{R}}}f(y)\hspace{0.1667em}\overline{\mu }(\mathrm{d}s,\mathrm{d}y),\hspace{1em}t\in [0,T]\hspace{0.1667em}.\]
Let $(L,\mathbb{F})$ be a Lévy process with characteristic triplet $(\eta ,{\sigma ^{2}},\nu )$ on $[0,T]$. Then, for any $f\in {L^{2}}(\nu )$, the martingale ${X^{f}}$ has the following properties:
(i) For every $f\in {L^{2}}(\nu )$, $({X^{f}},\mathbb{F})$ is a Lévy process and
(ii) For every $f,g\in {L^{2}}(\nu )$, ${\langle {X^{f}},{X^{g}}\rangle _{\hspace{0.1667em}t}}=t{\textstyle\int _{\mathbb{R}}}f(y)g(y)\hspace{0.1667em}\nu (\mathrm{d}y)$.
(iii) For every $f,g\in {L^{2}}(\nu )$, ${X^{f}}$ and ${X^{g}}$ are orthogonal martingales if and only if f and g are orthogonal functions in ${L^{2}}(\nu )$.
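Property (iii) makes it easy to produce concrete families of pairwise orthogonal martingales without any moment assumptions on ν. As a purely numerical illustration (a sketch: the density of ν, its truncation to $(0,10]$ and the chosen intervals are hypothetical and not from the paper), normalized indicators of disjoint sets form an orthonormal system in ${L^{2}}(\nu )$:

```python
import numpy as np

# Hypothetical Levy density nu(dy) = e^{-y} dy, truncated to (0, 10] and discretized.
y = np.linspace(0.01, 10.0, 20001)
nu = np.exp(-y)

def integrate(h):
    # trapezoidal rule on the fixed grid y
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(y)))

# Normalized indicators of disjoint intervals: an orthonormal system in L^2(nu).
intervals = [(0.01, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 10.0)]
fs = []
for a, b in intervals:
    ind = ((y >= a) & (y < b)).astype(float)
    fs.append(ind / np.sqrt(integrate(ind * nu)))

# Gram matrix <f_n, f_m>_{L^2(nu)}: numerically the identity matrix
G = np.array([[integrate(f * g * nu) for g in fs] for f in fs])
assert np.allclose(G, np.eye(len(fs)), atol=1e-2)
```

By property (iii), the martingales ${X^{{f_{n}}}}$ associated with such a system are pairwise orthogonal; only the defining integrability of the Lévy measure is used.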
Let $\mathcal{T}$ be an arbitrary subset of ${L^{2}}(\nu )$. Then we set
(2.1)
\[ {\mathcal{X}_{\mathcal{T}}}:=\{{B^{\sigma }}\}\cup \big\{{X^{f}},\hspace{2.5pt}f\in \mathcal{T}\big\}\hspace{0.1667em}.\]
As usual, ${\ell ^{2}}$ denotes the Hilbert space of sequences $a={({a^{n}})_{n\ge 1}}$ of real numbers for which the norm $\| a{\| _{{\ell ^{2}}}^{2}}:={\textstyle\sum _{n=1}^{\infty }}{({a^{n}})^{2}}$ is finite.
Definition 2.1.
We denote by $({M^{2}}({\ell ^{2}}),\| \cdot {\| _{{M^{2}}({\ell ^{2}})}})$ the Hilbert space of ${\ell ^{2}}$-valued $\mathbb{F}$-predictable processes V such that $\| V{\| _{{M^{2}}({\ell ^{2}})}^{2}}:=\mathbb{E}[{\textstyle\int _{0}^{T}}\| {V_{s}}{\| _{{\ell ^{2}}}^{2}}\mathrm{d}s]<+\infty $.
Let $\mathcal{T}=\{{f_{n}},\hspace{2.5pt}n\ge 1\}\subseteq {L^{2}}(\nu )$ be an orthonormal basis. We remark that the orthogonal sum ${\textstyle\sum _{n=1}^{\infty }}{\textstyle\int _{0}^{\cdot }}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}$ converges in (and, therefore, belongs to) the Hilbert space $({\mathcal{H}^{2}},\| \cdot {\| _{2}})$ if and only if $V={({V^{n}})_{n\ge 1}}$ belongs to ${M^{2}}({\ell ^{2}})$. In this case we have the isometry $\| {\textstyle\sum _{n=1}^{\infty }}{\textstyle\int _{0}^{\cdot }}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}{\| _{2}}=\| V{\| _{{M^{2}}({\ell ^{2}})}}$.
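For an orthonormal $\mathcal{T}$, this isometry is a direct consequence of properties (ii) and (iii) above: the summands are pairwise orthogonal and ${\langle {X^{{f_{n}}}},{X^{{f_{n}}}}\rangle _{t}}=t$, so that
\[ {\Big\| {\sum \limits_{n=1}^{\infty }}{\int _{0}^{\cdot }}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\Big\| _{2}^{2}}={\sum \limits_{n=1}^{\infty }}\mathbb{E}\bigg[{\Big({\int _{0}^{T}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\Big)^{2}}\bigg]={\sum \limits_{n=1}^{\infty }}\mathbb{E}\bigg[{\int _{0}^{T}}{({V_{s}^{n}})^{2}}\mathrm{d}s\bigg]=\| V{\| _{{M^{2}}({\ell ^{2}})}^{2}}\hspace{0.1667em}.\]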
For the next theorem, we refer to [6, Section 4.2]. It states that the family ${\mathcal{X}_{\mathcal{T}}}$, where $\mathcal{T}\subseteq {L^{2}}(\nu )$ is an orthonormal basis, possesses the PRP.
Theorem 2.2.
Let $(L,{\mathbb{F}^{L}})$ be a Lévy process with characteristic triplet $(\eta ,{\sigma ^{2}},\nu )$ on the probability space $(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$. Let $\mathcal{T}:=\{{f_{n}},\hspace{2.5pt}n\ge 1\}$ be an orthonormal basis of ${L^{2}}(\nu )$ and let ${\mathcal{X}_{\mathcal{T}}}\subseteq {\mathcal{H}^{2}}$ be the associated family of ${\mathbb{F}^{L}}$-martingales defined in (2.1). Then ${\mathcal{X}_{\mathcal{T}}}$ consists of pairwise orthogonal martingales and every ${\mathbb{F}^{L}}$-square integrable martingale N has the representation
\[ N={N_{0}}+{\int _{0}^{\cdot }}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}+{\sum \limits_{n=1}^{\infty }}{\int _{0}^{\cdot }}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}},\hspace{1em}Z\in {L^{2}}({B^{\sigma }}),\hspace{2.5pt}{V^{n}}\in {L^{2}}({X^{{f_{n}}}}),\hspace{2.5pt}n\ge 1,\]
where the spaces ${L^{2}}({B^{\sigma }})$ and ${L^{2}}({X^{{f_{n}}}})$ are considered with respect to ${\mathbb{F}^{L}}$.
3 BSDEs for Lévy processes
Let $(L,{\mathbb{F}^{L}})$ be a Lévy process with characteristic triplet $(\eta ,{\sigma ^{2}},\nu )$. We consider the probability space $(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$. Because of Theorem 2.2, for each orthonormal basis $\mathcal{T}=\{{f_{n}},\hspace{2.5pt}n\ge 1\}$ of ${L^{2}}(\nu )$, it is natural to consider the following BSDE:
(3.1)
\[ {Y_{t}}=\xi +{\int _{t}^{T}}f(s,{Y_{s}},{Z_{s}},{V_{s}})\mathrm{d}s-{\int _{t}^{T}}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}-{\sum \limits_{n=1}^{\infty }}{\int _{t}^{T}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}},\]
where $f:[0,T]\times \Omega \times \mathbb{R}\times \mathbb{R}\times {\ell ^{2}}\longrightarrow \mathbb{R}$ is a given random function, called the generator of BSDE (3.1), and ξ is an ${\mathcal{F}_{T}^{L}}$-measurable random variable. We call the pair $(f,\xi )$ the data of BSDE (3.1).

In [18] BSDE (3.1) is studied for Teugels martingales under the further assumption of the existence of an exponential moment for L, that is, $\mathbb{E}[{\mathrm{e}^{\varepsilon |{L_{1}}|}}]<+\infty $, for some $\varepsilon >0$. We are going to study BSDE (3.1) for arbitrary Lévy processes L and for a great variety of families ${\mathcal{X}_{\mathcal{T}}}=\{{B^{\sigma }}\}\cup \{{X^{{f_{n}}}},n\ge 1\}$, not restricting the analysis to Teugels martingales.
We introduce the space ${\mathcal{S}^{2}}$ by
\[ {\mathcal{S}^{2}}:=\Big\{Y\hspace{2.5pt}\text{càdlàg and}\hspace{2.5pt}{\mathbb{F}^{L}}\text{-adapted}:\hspace{2.5pt}\mathbb{E}\big[{\sup _{t\in [0,T]}}{Y_{t}^{2}}\big]<+\infty \Big\}\hspace{0.1667em}.\]
We now state the assumptions on the generator f of (3.1) which we shall need in the proof of Theorem 3.3 below.
Assumption 3.2.
Let the generator f of BSDE (3.1) fulfil the following properties:
(i) For $(y,z,a)\in \mathbb{R}\times \mathbb{R}\times {\ell ^{2}}$, $f(\cdot ,\cdot ,y,z,a)$ is an ${\mathbb{F}^{L}}$-progressively measurable process.
(ii) There exist a constant $K>0$ and a nonnegative ${\mathbb{F}^{L}}$-progressively measurable process γ such that $\mathbb{E}[{\textstyle\int _{0}^{T}}{\gamma _{s}^{2}}\mathrm{d}s]<+\infty $ and
\[ |f(s,y,z,a)|\le {\gamma _{s}}+K\big(|y|+|z|+\| a{\| _{{\ell ^{2}}}}\big),\hspace{1em}(s,y,z,a)\in [0,T]\times \mathbb{R}\times \mathbb{R}\times {\ell ^{2}}\hspace{0.1667em}.\]
(iii) There exists a constant $C>0$ such that, for all $(y,z,a),({y^{\prime }},{z^{\prime }},{a^{\prime }})\in \mathbb{R}\times \mathbb{R}\times {\ell ^{2}}$,
\[ |f(s,y,z,a)-f(s,{y^{\prime }},{z^{\prime }},{a^{\prime }})|\le C\big(|y-{y^{\prime }}|+|\sigma |\hspace{0.1667em}|z-{z^{\prime }}|+\| a-{a^{\prime }}{\| _{{\ell ^{2}}}}\big)\hspace{0.1667em}.\]
A generator f fulfilling properties (i), (ii) and (iii) in Assumption 3.2 will be called admissible.
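The existence proof of Theorem 3.3 below is a Banach fixed-point argument in an exponentially weighted norm. The mechanism can be illustrated on a deterministic toy analogue (a sketch with hypothetical choices of f, ξ and T, and no martingale part): the map $Y\mapsto \xi +{\textstyle\int _{t}^{T}}f(s,{Y_{s}})\mathrm{d}s$ is iterated and the weighted distance of successive iterates contracts.

```python
import numpy as np

# Deterministic toy analogue of a fixed-point map (Phi Y)(t) = xi + int_t^T f(s, Y(s)) ds
# with a generator f that is Lipschitz in y (constant C = 1).
# All concrete choices (f, xi, T) are hypothetical illustration values.
T, n = 1.0, 2001
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
xi = 1.0

def f(s, v):
    return np.cos(v) + s          # |f(s,y) - f(s,y')| <= |y - y'|, so C = 1

def phi(Y):
    g = f(t, Y)
    tail = np.cumsum(g[::-1])[::-1] * dt   # crude Riemann sum for int_t^T
    return xi + tail

beta = 8 * 1.0**2 + 1                      # beta = 8 C^2 + 1, as in the proof below
w = np.exp(beta * t)                       # weight of the equivalent beta-norm

Y = phi(np.zeros(n))
dists = []
for _ in range(6):
    Y_next = phi(Y)
    dists.append(np.max(w * np.abs(Y_next - Y)))
    Y = Y_next

# successive iterates contract geometrically in the weighted norm
assert all(d2 < d1 for d1, d2 in zip(dists, dists[1:]))
```

With Lipschitz constant $C=1$ and weight ${\mathrm{e}^{\beta t}}$, $\beta =8{C^{2}}+1=9$, each iteration shrinks the weighted sup-distance by a factor of at most $C/\beta =1/9$, which is the deterministic shadow of the contraction estimate in the proof.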
Theorem 3.3.
Let $\xi \in {L^{2}}(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$ and let f be an admissible generator. Then BSDE (3.1) with data $(f,\xi )$ admits a unique solution $(Y,Z,V)$.
Proof.
We denote by ${L^{2}}(\lambda \otimes \mathbb{P})$ the space of $\lambda \otimes \mathbb{P}$-square integrable adapted processes on $[0,T]$. We introduce the space ${\mathcal{K}^{2}}:={L^{2}}(\lambda \otimes \mathbb{P})\times {L^{2}}({B^{\sigma }})\times {M^{2}}({\ell ^{2}})$ endowed with the norm $\| \cdot {\| _{{\mathcal{K}^{2}}}}$ defined by $\| \cdot {\| _{{\mathcal{K}^{2}}}^{2}}=\| \cdot {\| _{{L^{2}}(\lambda \otimes \mathbb{P})}^{2}}+\| \cdot {\| _{{L^{2}}({B^{\sigma }})}^{2}}+\| \cdot {\| _{{M^{2}}({\ell ^{2}})}^{2}}$. It is clear that $({\mathcal{K}^{2}},\| \cdot {\| _{{\mathcal{K}^{2}}}})$ is a Banach space. We now define the mapping
\[ \Phi :{\mathcal{K}^{2}}\longrightarrow {\mathcal{K}^{2}}\]
by setting $(Y,Z,V)=\Phi (R,S,P)$, for $(R,S,P)\in {\mathcal{K}^{2}}$, where
(3.2)
\[\begin{aligned}{}{Y_{t}}& :=\mathbb{E}\left[\xi +{\int _{t}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s\Big|{\mathcal{F}_{t}^{L}}\right]\\ {} & =\mathbb{E}\left[\xi +{\int _{0}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s\Big|{\mathcal{F}_{t}^{L}}\right]-{\int _{0}^{t}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s,\end{aligned}\]
and $(Z,V)$ is the unique pair in ${L^{2}}({B^{\sigma }})\times {M^{2}}({\ell ^{2}})$ such that
(3.3)
\[\begin{aligned}{}\xi +{\int _{0}^{T}}& f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s\\ {} =& \mathbb{E}\left[\xi +{\int _{0}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s\right]+{\int _{0}^{T}}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}+{\sum \limits_{n=1}^{\infty }}{\int _{0}^{T}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\hspace{0.1667em}.\end{aligned}\]
We observe that, by Assumption 3.2 (ii) and $\xi \in {L^{2}}(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$, the random variable $\xi +{\textstyle\int _{0}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s$ is square integrable, since $(R,S,P)\in {\mathcal{K}^{2}}$. Hence Y is well defined by (3.2). Moreover, since $\xi +{\textstyle\int _{0}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s\in {L^{2}}(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$, the existence of the unique pair $(Z,V)\in {L^{2}}({B^{\sigma }})\times {M^{2}}({\ell ^{2}})$ follows by Theorem 2.2. We also observe that $Y\in {\mathcal{S}^{2}}$. Indeed, the ${\mathbb{F}^{L}}$-martingale N satisfying the identity ${N_{t}}=\mathbb{E}\big[\xi +{\textstyle\int _{0}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s|{\mathcal{F}_{t}^{L}}\big]$ a.s., $t\in [0,T]$, is square integrable. So, using Doob’s inequality and Assumption 3.2 (ii), from (3.2) we also get the estimate
\[\begin{aligned}{}\mathbb{E}\big[{\sup _{t\in [0,T]}}{Y_{t}^{2}}\big]& \le 2\mathbb{E}\big[{\sup _{t\in [0,T]}}{N_{t}^{2}}\big]+8T\| \gamma {\| _{{L^{2}}(\lambda \otimes \mathbb{P})}^{2}}\\ {} & +16{K^{2}}T\Big(\| R{\| _{{L^{2}}(\lambda \otimes \mathbb{P})}^{2}}+\| S{\| _{{L^{2}}({B^{\sigma }})}^{2}}+\| P{\| _{{M^{2}}({\ell ^{2}})}^{2}}\Big)<+\infty \end{aligned}\]
meaning that $Y\in {\mathcal{S}^{2}}$. In particular, we have $Y\in {L^{2}}(\lambda \otimes \mathbb{P})$. Finally, we observe that $(Y,Z,V)$ satisfies the relation
\[ {Y_{t}}=\xi +{\int _{t}^{T}}f(s,{R_{s}},{S_{s}},{P_{s}})\mathrm{d}s-{\int _{t}^{T}}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}-{\sum \limits_{n=1}^{\infty }}{\int _{t}^{T}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\]
and hence it satisfies (3.1) if and only if it is a fixed point of Φ. We now define a β-norm $\| \cdot {\| _{\beta }}$ on ${\mathcal{K}^{2}}$ equivalent to $\| \cdot {\| _{{\mathcal{K}^{2}}}}$ with respect to which Φ is a strong contraction: For any $(R,S,P)\in {\mathcal{K}^{2}}$ we define
\[ \| (R,S,P){\| _{\beta }}:={\left(\mathbb{E}\left[{\int _{0}^{T}}{\mathrm{e}^{\beta s}}\left({R_{s}^{2}}+{\sigma ^{2}}{S_{s}^{2}}+\| {P_{s}}{\| _{{\ell ^{2}}}^{2}}\right)\mathrm{d}s\right]\right)^{1/2}},\hspace{1em}\beta >0.\]
We introduce the notation $({Y^{\prime }},{Z^{\prime }},{V^{\prime }}):=\Phi ({R^{\prime }},{S^{\prime }},{P^{\prime }})$ and
\[ (\overline{Y},\overline{Z},\overline{V}):=(Y-{Y^{\prime }},Z-{Z^{\prime }},V-{V^{\prime }}),\hspace{2em}(\overline{R},\overline{S},\overline{P}):=(R-{R^{\prime }},S-{S^{\prime }},P-{P^{\prime }}).\]
Applying integration by parts twice to ${\mathrm{e}^{\beta t}}{\overline{Y}_{t}^{2}}$, because ${\overline{Y}_{T}}=0$, yields
(3.4)
\[ {\mathrm{e}^{\beta t}}{\overline{Y}_{t}^{2}}=-\beta {\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}^{2}}\mathrm{d}s-2{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}\mathrm{d}{\overline{Y}_{s}}-{\int _{t}^{T}}{\mathrm{e}^{\beta s}}\mathrm{d}{[\overline{Y},\overline{Y}]_{s}}.\]
We now compute $\mathrm{d}{\overline{Y}_{s}}$ and $\mathrm{d}{[\overline{Y},\overline{Y}]_{s}}$. From (3.2) and (3.3), for $s\in (t,T]$, we deduce
(3.5)
\[ \begin{aligned}{}-\mathrm{d}{\overline{Y}_{s}}& =\big(f(s,{R_{s}},{S_{s}},{P_{s}})-f(s,{R^{\prime }_{s}},{S^{\prime }_{s}},{P^{\prime }_{s}})\big)\mathrm{d}s-{\overline{Z}_{s}}\mathrm{d}{B_{s}^{\sigma }}-{\sum \limits_{n=1}^{\infty }}{\overline{V}_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}.\end{aligned}\]
Hence,
(3.6)
\[ \mathrm{d}{[\overline{Y},\overline{Y}]_{s}}={\overline{Z}_{s}^{2}}\mathrm{d}{[{B^{\sigma }},{B^{\sigma }}]_{s}}+{\sum \limits_{n,m=1}^{\infty }}{\overline{V}_{s}^{n}}{\overline{V}_{s}^{m}}\mathrm{d}{[{X^{{f_{n}}}},{X^{{f_{m}}}}]_{s}}.\]
Inserting (3.5) and (3.6) in (3.4) gives
(3.7)
\[\begin{aligned}{}{\mathrm{e}^{\beta t}}{\overline{Y}_{t}^{2}}=& -\beta {\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}^{2}}\mathrm{d}s+2{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}\big(f(s,{R_{s}},{S_{s}},{P_{s}})-f(s,{R^{\prime }_{s}},{S^{\prime }_{s}},{P^{\prime }_{s}})\big)\mathrm{d}s\\ {} & -2{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}{\overline{Z}_{s}}\mathrm{d}{B_{s}^{\sigma }}-2{\sum \limits_{n=1}^{\infty }}{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}{\overline{V}_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\\ {} & -{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Z}_{s}^{2}}\mathrm{d}{[{B^{\sigma }},{B^{\sigma }}]_{s}}-{\sum \limits_{n,m=1}^{\infty }}{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{V}_{s}^{n}}{\overline{V}_{s}^{m}}\mathrm{d}{[{X^{{f_{n}}}},{X^{{f_{m}}}}]_{s}}.\end{aligned}\]
Because of $Y,{Y^{\prime }}\in {\mathcal{S}^{2}}$ and $(R,S,P),({R^{\prime }},{S^{\prime }},{P^{\prime }})\in {\mathcal{K}^{2}}$, from Assumption 3.2 (ii) and the Cauchy–Schwarz inequality, the drift part in (3.7) is integrable. Furthermore, by the Kunita–Watanabe inequality, observing that ${\langle {X^{{f_{n}}}},{X^{{f_{m}}}}\rangle _{t}}=t{\delta _{nm}}$, $t\in [0,T]$, ${\delta _{nm}}$ denoting the Kronecker symbol, since V and ${V^{\prime }}$ belong to ${M^{2}}({\ell ^{2}})$, we get that the process ${\textstyle\sum _{n,m=1}^{\infty }}{\textstyle\int _{0}^{\cdot }}{\mathrm{e}^{\beta s}}{\overline{V}_{s}^{n}}{\overline{V}_{s}^{m}}\mathrm{d}{[{X^{{f_{n}}}},{X^{{f_{m}}}}]_{s}}$ is of integrable variation. Let ${\mathcal{H}^{1}}$ denote the space of local martingales X such that $\| X{\| _{{\mathcal{H}^{1}}}}:=\mathbb{E}[{\sup _{t\in [0,T]}}|{X_{t}}|]<+\infty $ and ${X_{0}}=0$. We notice that $({\mathcal{H}^{1}},\| \cdot {\| _{{\mathcal{H}^{1}}}})$ is a Banach space of uniformly integrable martingales. The local martingales ${\textstyle\int _{0}^{\cdot }}{\mathrm{e}^{\beta s}}\hspace{0.1667em}{\overline{Y}_{s-}}\hspace{0.1667em}{\overline{Z}_{s}}\hspace{0.1667em}\mathrm{d}{B_{s}^{\sigma }}$ and ${\textstyle\int _{0}^{\cdot }}{\mathrm{e}^{\beta s}}\hspace{0.1667em}{\overline{Y}_{s-}}\hspace{0.1667em}{\overline{V}_{s}^{n}}\hspace{0.1667em}\mathrm{d}{X_{s}^{{f_{n}}}}$ belong to $({\mathcal{H}^{1}},\| \cdot {\| _{{\mathcal{H}^{1}}}})$. Indeed, since $\overline{Z}\in {L^{2}}({B^{\sigma }})$, $\overline{V}\in {M^{2}}({\ell ^{2}})$ and $\overline{Y}\in {\mathcal{S}^{2}}$, by the Burkholder–Davis–Gundy inequality (see, e.g., [12, Theorem 2.34]) and the Cauchy–Schwarz inequality, we see that the estimates
\[\begin{aligned}{}\mathbb{E}\left[\underset{t\in [0,T]}{\sup }\bigg|{\int _{0}^{t}}{\mathrm{e}^{\beta s}}\hspace{0.1667em}{\overline{Y}_{s-}}\hspace{0.1667em}{\overline{Z}_{s}}\hspace{0.1667em}\mathrm{d}{B_{s}^{\sigma }}\bigg|\right]& <+\infty \hspace{1em}\text{and}\\ {} \mathbb{E}\left[\underset{t\in [0,T]}{\sup }\bigg|{\int _{0}^{t}}{\mathrm{e}^{\beta s}}\hspace{0.1667em}{\overline{Y}_{s-}}\hspace{0.1667em}{\overline{V}_{s}^{n}}\hspace{0.1667em}\mathrm{d}{X_{s}^{{f_{n}}}}\bigg|\right]& <+\infty \end{aligned}\]
hold. Analogously, since $\overline{V}\in {M^{2}}({\ell ^{2}})$, it follows that
\[\begin{aligned}{}\mathbb{E}\bigg[\underset{t\in [0,T]}{\sup }\bigg|& {\sum \limits_{n=k+1}^{k+m}}{\int _{0}^{t}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}{\overline{V}_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\bigg|\bigg]\\ {} & \le c{\mathrm{e}^{\beta T}}{\left(\mathbb{E}\bigg[\underset{t\in [0,T]}{\sup }{\overline{Y}_{t}^{2}}\bigg]\right)^{1/2}}{\left(\mathbb{E}\bigg[{\sum \limits_{n=k+1}^{k+m}}{\int _{0}^{T}}{({\overline{V}_{s}^{n}})^{2}}\mathrm{d}s\bigg]\right)^{1/2}}\longrightarrow 0\end{aligned}\]
as $k,m\to +\infty $, where $c>0$ is the constant coming from the Burkholder–Davis–Gundy inequality. Therefore, we obtain that the sum ${\textstyle\sum _{n=1}^{\infty }}{\textstyle\int _{0}^{\cdot }}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}{\overline{V}_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}$ converges in $({\mathcal{H}^{1}},\| \cdot {\| _{{\mathcal{H}^{1}}}})$ and hence it is, in particular, a centered uniformly integrable martingale. Taking the expectation in (3.7) now yields
(3.8)
\[\begin{aligned}{}\mathbb{E}\bigg[{\mathrm{e}^{\beta t}}{\overline{Y}_{t}^{2}}\hspace{0.1667em}+\hspace{0.1667em}{\sigma ^{2}}\hspace{-0.1667em}& {\int _{t}^{T}}\hspace{-0.1667em}{\mathrm{e}^{\beta s}}{\overline{Z}_{s}^{2}}\mathrm{d}s\hspace{0.1667em}+\hspace{0.1667em}{\int _{t}^{T}}\hspace{-0.1667em}{\mathrm{e}^{\beta s}}\| {\overline{V}_{s}}{\| _{{\ell ^{2}}}^{2}}\mathrm{d}s\bigg]=\\ {} & -\beta \mathbb{E}\bigg[\hspace{-0.1667em}{\int _{t}^{T}}\hspace{-0.1667em}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}^{2}}\mathrm{d}s\bigg]\\ {} & \hspace{0.1667em}+\hspace{0.1667em}2\mathbb{E}\bigg[\hspace{-0.1667em}{\int _{t}^{T}}\hspace{-0.1667em}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}\big(f(s,{R_{s}},{S_{s}},{P_{s}})\hspace{0.1667em}-\hspace{0.1667em}f(s,{R^{\prime }_{s}},{S^{\prime }_{s}},{P^{\prime }_{s}})\big)\mathrm{d}s\bigg].\end{aligned}\]
We now use the abbreviation
\[ I:=2\mathbb{E}\bigg[{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}}\big(f(s,{R_{s}},{S_{s}},{P_{s}})-f(s,{R^{\prime }_{s}},{S^{\prime }_{s}},{P^{\prime }_{s}})\big)\mathrm{d}s\bigg].\]
By Assumption 3.2 (iii), we get
(3.9)
\[ |I|\le 2C\mathbb{E}\bigg[{\int _{t}^{T}}{\mathrm{e}^{\beta s}}|{\overline{Y}_{s-}}|\big(|{\overline{R}_{s}}|+|\sigma |\hspace{0.1667em}|{\overline{S}_{s}}|+\| {\overline{P}_{s}}{\| _{{\ell ^{2}}}}\big)\mathrm{d}s\bigg].\]
Since the filtration ${\mathbb{F}^{L}}$ is quasi-left continuous, the identity $\mathbb{E}[{\overline{Y}_{s}^{2}}]=\mathbb{E}[{\overline{Y}_{s-}^{2}}]$ holds. So, $\mathbb{E}[{\textstyle\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s-}^{2}}\mathrm{d}s]=\mathbb{E}[{\textstyle\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s}^{2}}\mathrm{d}s]$ holds by Fubini’s theorem. For $h>0$ and $a,b,c\ge 0$, we have the estimates $ab\le {a^{2}}\frac{h}{2}+\frac{{b^{2}}}{2h}$ and ${(a+b+c)^{2}}\le 4({a^{2}}+{b^{2}}+{c^{2}})$. Applying these inequalities to (3.9), and then choosing $h=8C$, we get
\[ |I|\le 8{C^{2}}\hspace{0.1667em}\mathbb{E}\bigg[{\int _{t}^{T}}{\mathrm{e}^{\beta s}}{\overline{Y}_{s}^{2}}\mathrm{d}s\bigg]+\frac{1}{2}\hspace{0.1667em}\mathbb{E}\bigg[{\int _{t}^{T}}{\mathrm{e}^{\beta s}}\big(|{\overline{R}_{s}}{|^{2}}+{\sigma ^{2}}\hspace{0.1667em}|{\overline{S}_{s}}{|^{2}}+\| {\overline{P}_{s}}{\| _{{\ell ^{2}}}^{2}}\big)\mathrm{d}s\bigg].\]
Using the latter estimate in (3.8) and then taking $\beta =8{C^{2}}+1$, we obtain
\[ \| (Y,Z,V)-({Y^{\prime }},{Z^{\prime }},{V^{\prime }}){\| _{8{C^{2}}+1}^{2}}\le \frac{1}{2}\| (R,S,P)-({R^{\prime }},{S^{\prime }},{P^{\prime }}){\| _{8{C^{2}}+1}^{2}}\]
which means that Φ is a strong contraction on $({\mathcal{K}^{2}},\| \cdot {\| _{\beta }})$ if $\beta =8{C^{2}}+1$. Hence, Φ has a unique fixed point in $({\mathcal{K}^{2}},\| \cdot {\| _{8{C^{2}}+1}})$. The proof of the theorem is complete. □

4 Logarithmic utility maximization
4.1 The market model
Let $(L,{\mathbb{F}^{L}})$ be a Lévy process with characteristics $(\eta ,{\sigma ^{2}},\nu )$. We assume ${\sigma ^{2}}>0$ and consider the probability space $(\Omega ,{\mathcal{F}_{T}^{L}},\mathbb{P})$. We denote by μ the jump measure of L and set $\overline{\mu }:=\mu -\lambda \otimes \nu $.
Let b and ζ be bounded predictable processes. We assume furthermore that there exist ${\varepsilon _{1}}>{\varepsilon _{2}}>0$ such that ${\varepsilon _{2}}\le {\zeta _{t}^{2}}(\omega )\le {\varepsilon _{1}}$, for every $(t,\omega )\in [0,T]\times \Omega $. Additionally, let β be a bounded predictable function on $\widetilde{\Omega }$ such that $\beta (t,\omega ,y)\ge 0$ and $\beta (t,\omega ,y)\le {\alpha _{t}}(\omega )(|y|\wedge 1)$, for all $(t,\omega ,y)$ in $\widetilde{\Omega }$, where α is a bounded nonnegative ${\mathbb{F}^{L}}$-predictable process. By the assumptions on β, we have
\[ \mathbb{E}\bigg[\sum \limits_{0\le s\le T}{\beta ^{2}}(s,\Delta {L_{s}}){1_{\{\Delta {L_{s}}\ne 0\}}}\bigg]\le \mathbb{E}\bigg[{\int _{0}^{T}}{\int _{\mathbb{R}}}{\alpha _{s}^{2}}({y^{2}}\wedge 1)\nu (\mathrm{d}y)\mathrm{d}s\bigg]<+\infty \hspace{0.1667em}.\]
Therefore, the process $U={({U_{t}})_{t\in [0,T]}}$ defined by
(4.1)
\[ {U_{t}}:={\int _{0}^{t}}{b_{s}}\mathrm{d}s+{\int _{0}^{t}}{\zeta _{s}}\mathrm{d}{B_{s}^{\sigma }}+{\int _{[0,t]\times \mathbb{R}}}\beta (s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y),\hspace{1em}t\in [0,T]\hspace{0.1667em},\]
is a well-defined semimartingale and we can consider the price process $S={S_{0}}\hspace{0.1667em}\mathcal{E}(U)$. Because of the assumptions on β and from the explicit expression of the stochastic exponential (see [13, Eq. I.4.64]), it follows that $S>0$ and ${S_{-}}>0$. Furthermore, by the Doléans-Dade equation, for every $t\in [0,T]$, the price process S satisfies
(4.2)
\[ {S_{t}}={S_{0}}+{\int _{0}^{t}}{S_{s-}}({b_{s}}\mathrm{d}s+{\zeta _{s}}\mathrm{d}{B_{s}^{\sigma }})+{\int _{[0,t]\times \mathbb{R}}}{S_{s-}}\beta (s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y).\]
Clearly, since $\beta (t,\omega ,y)\ge 0$ by assumption, the price process S can only have positive jumps.

As in [4], the admissible strategies for the market model, described by the locally bounded semimartingale S, are the predictable processes π such that the stochastic integral ${\textstyle\int _{0}^{\cdot }}{\pi _{s}}\mathrm{d}{S_{s}}$ is a well-defined semimartingale and ${\textstyle\int _{0}^{T}}{\pi _{s}}\mathrm{d}{S_{s}}$ is bounded from below. An admissible strategy π is an arbitrage opportunity if it holds ${\textstyle\int _{0}^{T}}{\pi _{s}}\mathrm{d}{S_{s}}\ge 0$ a.s. and $\mathbb{P}[{\textstyle\int _{0}^{T}}{\pi _{s}}\mathrm{d}{S_{s}}>0]>0$. It is well known that, in this context, the existence of an equivalent local martingale measure for S implies the absence of arbitrage opportunities (see [4, Corollary 1.2]).
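For finite-activity jumps, the positivity of $S={S_{0}}\hspace{0.1667em}\mathcal{E}(U)$ can be checked by direct simulation (a sketch: the coefficients, the jump law and the finite-activity Lévy measure below are hypothetical illustration choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical market coefficients (illustration only)
b, zeta, sigma, T, S0 = 0.05, 0.8, 1.0, 1.0, 100.0
lam = 2.0                                            # nu = lam * N(0,1): finite-activity Levy measure
beta = lambda y: 0.5 * np.minimum(np.abs(y), 1.0)    # 0 <= beta <= alpha * (|y| ^ 1), alpha = 0.5

n_paths = 10_000
W_T = rng.standard_normal(n_paths) * np.sqrt(T)
N = rng.poisson(lam * T, n_paths)                    # number of jumps per path
comp = lam * T * np.mean(beta(rng.standard_normal(200_000)))   # MC estimate of T * int beta dnu
jump_prod = np.array([np.prod(1.0 + beta(rng.standard_normal(k))) for k in N])

# stochastic exponential S_T = S0 * E(U)_T for finite-activity jumps
S_T = S0 * np.exp((b * T - comp) + zeta * sigma * W_T
                  - 0.5 * zeta**2 * sigma**2 * T) * jump_prod

assert (S_T > 0).all()    # beta >= 0 forces jump factors 1 + beta(y) >= 1, so S stays positive
```

Since $\beta \ge 0$, every jump factor $1+\beta ({Y_{i}})$ is at least 1, which reflects the remark above that S can only have positive jumps.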
We introduce the bounded predictable process $\theta :={\zeta ^{-1}}b$. Let now $\mathbb{Q}$ be the measure defined on ${\mathcal{F}_{T}^{L}}$ by $\mathrm{d}\mathbb{Q}:=\mathcal{E}(-{\textstyle\int _{0}^{\cdot }}{\theta _{s}}/{\sigma ^{2}}\mathrm{d}{B^{\sigma }}{\big)_{T}}\mathrm{d}\mathbb{P}$. By Novikov’s condition, $\mathbb{Q}$ is a probability measure on ${\mathcal{F}_{T}^{L}}$ equivalent to $\mathbb{P}$. According to Girsanov’s theorem, ${\widehat{B}_{t}^{\sigma }}:={B_{t}^{\sigma }}+{\textstyle\int _{0}^{t}}{\theta _{s}}\mathrm{d}s$, $t\in [0,T]$, defines a $\mathbb{Q}$-Brownian motion with respect to ${\mathbb{F}^{L}}$. Under $\mathbb{Q}$, we consider the process $\widehat{U}$ defined by
(4.3)
\[ {\widehat{U}_{t}}:={\int _{0}^{t}}{\zeta _{s}}\mathrm{d}{\widehat{B}_{s}^{\sigma }}+{\int _{[0,t]\times \mathbb{R}}}\beta (s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)\hspace{0.1667em}.\]
Therefore, under $\mathbb{Q}$, we get $S={S_{0}}\hspace{0.1667em}\mathcal{E}(\widehat{U})$. We are now going to show that S is a $\mathbb{Q}$-martingale and, hence, that the market model is free of arbitrage opportunities. We denote by BMO$(\mathbb{Q})$ the space of adapted BMO martingales with respect to $\mathbb{Q}$ on $[0,T]$.
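The change of measure can be sanity-checked by Monte Carlo simulation (a sketch with hypothetical constant coefficients; in the model above θ is a bounded process):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, T, n = 0.5, 1.3, 1.0, 400_000    # hypothetical constants, not from the paper
B_T = sigma * np.sqrt(T) * rng.standard_normal(n)   # B^sigma_T, with variance sigma^2 T
# density dQ/dP = E(-int theta/sigma^2 dB^sigma)_T for constant theta:
Z = np.exp(-(theta / sigma**2) * B_T - theta**2 * T / (2 * sigma**2))
B_hat = B_T + theta * T                             # Girsanov-shifted terminal value

assert abs(Z.mean() - 1.0) < 0.02                   # Q is a probability measure
assert abs((Z * B_hat).mean()) < 0.02               # hat B^sigma is centered under Q
assert abs((Z * B_hat**2).mean() - sigma**2 * T) < 0.1   # Q-variance is sigma^2 T
```

The three assertions reflect that $\mathbb{Q}$ has total mass one and that ${\widehat{B}^{\sigma }}$ is a centered Gaussian with variance ${\sigma ^{2}}T$ under $\mathbb{Q}$, as Girsanov's theorem asserts.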
Proposition 4.1.
Let $\mathbb{Q}$ be the equivalent probability measure defined above.
(i) The $\mathbb{Q}$-compensator of the $\mathbb{P}$-Poisson random measure μ coincides with $\lambda \otimes \nu $. Hence, μ is also a $\mathbb{Q}$-Poisson random measure relative to ${\mathbb{F}^{L}}$.
(ii) The process $\widehat{U}$ belongs to BMO$(\mathbb{Q})$.
(iii) The price process S is a martingale with respect to $\mathbb{Q}$. In particular, the market model is free of arbitrage opportunities.
Proof.
To verify (i), we observe that the density process of $\mathbb{Q}$ with respect to $\mathbb{P}$ is a continuous ${\mathbb{F}^{L}}$-martingale. Hence, from [9, Theorem 12.31], we conclude that the $\mathbb{Q}$-compensator of μ coincides with $\lambda \otimes \nu $. Therefore, μ is a Poisson random measure relative to ${\mathbb{F}^{L}}$ with respect to $\mathbb{Q}$ (see [13, Theorem II.4.8]) and this proves (i). We verify (ii). From (i) and the assumptions on β, $\widehat{U}\in {\mathcal{H}^{2}}(\mathbb{Q})$ holds. Furthermore, since $\Delta {\widehat{U}_{t}}(\omega )=\beta (t,\omega ,\Delta {L_{t}}(\omega )){1_{\{\Delta {L_{t}}(\omega )\ne 0\}}}$, β being bounded, $\widehat{U}$ has bounded jumps. Setting ${c_{1}}:={\sup _{t\in [0,T]}}|{\zeta _{t}}|$ and ${c_{2}}:={\sup _{t\in [0,T]}}|{\alpha _{t}}|$, we have
\[ 0<{\langle \widehat{U},\widehat{U}\rangle _{t}}\le \left({\sigma ^{2}}{c_{1}^{2}}+{c_{2}^{2}}{\int _{\mathbb{R}}}({y^{2}}\wedge 1)\nu (\mathrm{d}y)\right)T=:C(T),\hspace{1em}t\in [0,T]\hspace{0.1667em}.\]
Hence, $\langle \widehat{U},\widehat{U}\rangle $ is bounded on $[0,T]$. So, [9, Theorem 10.9 (2)] yields $\widehat{U}\in \text{BMO}(\mathbb{Q})$. We now come to (iii). Under $\mathbb{Q}$, we have $S={S_{0}}\hspace{0.1667em}\mathcal{E}(\widehat{U})$. Since $\Delta \widehat{U}\ge 0$, by (ii) we can apply [11, Theorem 2], which yields that S is a uniformly integrable ${\mathbb{F}^{L}}$-martingale under $\mathbb{Q}$. The proof of the proposition is complete. □

4.2 The optimization problem
We now study the following optimization problem:
(4.4)
\[ V(x)=\underset{\rho \in \mathcal{A}}{\sup }\mathbb{E}\Big[\log \big({W_{T}^{\rho ,x}}\big)\Big],\hspace{1em}x>0\hspace{0.1667em},\]
where $\mathcal{A}$ and ${W^{\rho ,x}}>0$ (both to be defined below) denote the set of admissible strategies and the wealth process with initial capital $x>0$, respectively.
We are going to solve (4.4) by constructing a family of processes $\{{R^{\rho ,x}},\hspace{2.5pt}\rho \in \mathcal{A}\}$ fulfilling the martingale optimality principle on $[0,T]$ (see [10, p. 1697]).
Assumption 4.2 (Martingale Optimality Principle).
Suppose that $x>0$. The family $\{{R^{\rho ,x}},\hspace{2.5pt}\rho \in \mathcal{A}\}$ is ${\mathbb{F}^{L}}$-adapted and has the following properties:
(1) ${R_{T}^{\rho ,x}}=\log ({W_{T}^{\rho ,x}})$ for every $\rho \in \mathcal{A}$.
(2) ${R_{0}^{\rho ,x}}\equiv {r^{x}}$ is a constant not depending on ρ for every $\rho \in \mathcal{A}$.
(3) ${R^{\rho ,x}}$ is a supermartingale for every $\rho \in \mathcal{A}$.
(4) There exists ${\rho ^{\ast }}\in \mathcal{A}$ such that ${R^{{\rho ^{\ast }},x}}$ is a martingale.
Notice that if the family $\{{R^{\rho ,x}},\hspace{2.5pt}\rho \in \mathcal{A}\}$ satisfies Assumption 4.2, then the strategy ${\rho ^{\ast }}$ in Assumption 4.2 (4) is a solution of (4.4). Indeed, for any $\rho \in \mathcal{A}$, we get
\[ \mathbb{E}\big[\log \big({W_{T}^{\rho ,x}}\big)\big]=\mathbb{E}\big[{R_{T}^{\rho ,x}}\big]\le {R_{0}^{\rho ,x}}={r^{x}}={R_{0}^{{\rho ^{\ast }},x}}=\mathbb{E}\big[{R_{T}^{{\rho ^{\ast }},x}}\big]=\mathbb{E}\big[\log \big({W_{T}^{{\rho ^{\ast }},x}}\big)\big]\hspace{0.1667em}.\]
We now define the set $\mathcal{A}$ of admissible strategies and the wealth process ${W^{\rho ,x}}$. For $\rho \in \mathcal{A}$, we want to consider the wealth process ${W^{\rho ,x}}$ given by
(4.5)
\[ {W^{\rho ,x}}=x\hspace{0.1667em}\mathcal{E}\left({\int _{0}^{\cdot }}{\rho _{s}}\mathrm{d}{U_{s}}\right)=x+{\int _{0}^{\cdot }}{W_{s-}^{\rho ,x}}{\rho _{s}}\mathrm{d}{U_{s}},\hspace{1em}x>0\hspace{0.1667em},\]
where the process U is defined by (4.1). Therefore, we assume that ρ is a predictable process such that ${\textstyle\int _{0}^{T}}{\rho _{s}^{2}}\mathrm{d}s<+\infty $ a.s. To ensure ${W^{\rho ,x}}>0$, we assume ${\rho _{t}}(\omega )\ge 0$, which is, in particular, a no-short-selling constraint on the admissible strategies.
Definition 4.3 (Admissible strategies).
Let $C\ne \varnothing $ be a closed subset of $[0,+\infty )$. The set $\mathcal{A}$ of admissible strategies consists of all predictable and C-valued processes ρ satisfying the integrability condition $\mathbb{E}[{\textstyle\int _{0}^{T}}{\rho _{s}^{2}}\mathrm{d}s]<+\infty $.
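As a rough illustration of the wealth dynamics (4.5) and of Definition 4.3, the following Python sketch simulates one path of ${W^{\rho ,x}}$ on a time grid for a toy Lévy model with a Gaussian part and compound-Poisson jumps. All numerical values (coefficients, jump intensity, the single jump size) are illustrative assumptions, not taken from the paper, and the multiplicative Euler scheme only approximates the stochastic exponential.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy market data (illustrative assumptions, not from the paper):
# dU_t = zeta dB^sigma_t + beta d(mu - compensator), compound-Poisson jump part.
sigma, zeta = 1.0, 0.5            # Gaussian coefficient and (constant) zeta
lam, y0, alpha = 2.0, 0.8, 0.3    # jump intensity, single jump size, bound alpha
beta = alpha * min(abs(y0), 1.0)  # beta(t, y0) <= alpha * (|y0| ^ 1), constant here
T, n, x0 = 1.0, 10_000, 1.0
dt = T / n

def simulate_wealth(rho):
    """One path of W^{rho,x} = x * E(int rho dU) via a multiplicative Euler scheme."""
    W = x0
    for _ in range(n):
        dB = sigma * rng.normal(0.0, np.sqrt(dt))   # increment of B^sigma
        k = rng.poisson(lam * dt)                   # number of jumps in (t, t + dt]
        dU = zeta * dB + beta * (k - lam * dt)      # compensated jump increment
        # Discretization of the stochastic exponential; the exact E(.) is
        # positive here because rho >= 0 and beta >= 0 (Proposition 4.4).
        W *= 1.0 + rho * dU
    return W

W_T = simulate_wealth(rho=0.5)    # terminal wealth for a constant strategy in C
```

With $\rho \equiv 0$ the scheme returns the initial capital exactly, and a nonnegative constant strategy keeps the simulated wealth positive, mirroring the positivity statement of Proposition 4.4 below.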
In the following proposition, we summarize some properties of the process ${W^{\rho ,x}}$ for $\rho \in \mathcal{A}$.
Proposition 4.4.
Let $\rho \in \mathcal{A}$ (see Definition 4.3), $x>0$, and $\beta (\omega ,t,y)\ge 0$, for every $(\omega ,t,y)\in \widetilde{\Omega }$. The wealth process ${W^{\rho ,x}}$ is the ${\mathbb{F}^{L}}$-semimartingale given by the identity (4.5). Furthermore, for every $t\in [0,T]$, we have ${W_{t}^{\rho ,x}}>0$ and
(4.6)
\[\begin{aligned}{}\log \big({W_{t}^{\rho ,x}}\big)-\log x& ={\int _{0}^{t}}{\rho _{s}}{\zeta _{s}}\mathrm{d}{B_{s}^{\sigma }}+{\int _{[0,t]\times \mathbb{R}}}\log \big(1+{\rho _{s}}\beta (s,y)\big)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)\\ {} & +{\int _{[0,t]\times \mathbb{R}}}\big(\log (1+{\rho _{s}}\beta (s,y))-{\rho _{s}}\beta (s,y)\big)\nu (\mathrm{d}y)\mathrm{d}s\\ {} & -{\int _{0}^{t}}\Big[\frac{{\sigma ^{2}}}{2}{\Big({\rho _{s}}{\zeta _{s}}-\frac{{\theta _{s}}}{{\sigma ^{2}}}\Big)^{2}}+\frac{{\theta _{s}^{2}}}{2{\sigma ^{2}}}\Big]\mathrm{d}s\hspace{0.1667em}.\end{aligned}\]
The local martingale part of $\log ({W^{\rho ,x}})$ is a true martingale and $\log ({W_{t}^{\rho ,x}})$ is integrable, $t\in [0,T]$.
Proof.
It is clear that for every $\rho \in \mathcal{A}$, the wealth process ${W^{\rho ,x}}$ is a semimartingale and it satisfies (4.5). Hence, since $\beta (t,\omega ,y)\ge 0$, we obtain ${W_{t}^{\rho ,x}}(\omega )>0$, for every $\rho \in \mathcal{A}$. Thus, we can consider the process $\log ({W^{\rho ,x}})$. From (4.5), (4.1) and the explicit expression of the stochastic exponential, it follows
(4.7)
\[\begin{aligned}{}\log \big({W_{t}^{\rho ,x}}& \big)-\log x={\int _{0}^{t}}{\rho _{s}}{\zeta _{s}}\mathrm{d}{B_{s}^{\sigma }}+{\int _{[0,t]\times \mathbb{R}}}{\rho _{s}}\beta (s,y)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)\\ {} & +{\int _{0}^{t}}\Big[-\frac{{\sigma ^{2}}}{2}{\Big({\rho _{s}}{\zeta _{s}}-\frac{{\theta _{s}}}{{\sigma ^{2}}}\Big)^{2}}+\frac{{\theta _{s}^{2}}}{2{\sigma ^{2}}}\Big]\mathrm{d}s\\ {} & +\sum \limits_{0\le s\le t}\big\{\log (1+{\rho _{s}}\beta (s,\Delta {L_{s}}))-{\rho _{s}}\beta (s,\Delta {L_{s}})\big\}{1_{\{\Delta {L_{s}}\ne 0\}}}.\end{aligned}\]
We define the processes
\[\begin{aligned}{}& A:=\sum \limits_{0\le s\le \cdot }\big\{\log (1+{\rho _{s}}\beta (s,\Delta {L_{s}}))-{\rho _{s}}\beta (s,\Delta {L_{s}})\big\}{1_{\{\Delta {L_{s}}\ne 0\}}},\\ {} & B:=\sum \limits_{0\le s\le \cdot }\big|\log (1+{\rho _{s}}\beta (s,\Delta {L_{s}}))-{\rho _{s}}\beta (s,\Delta {L_{s}})\big|{1_{\{\Delta {L_{s}}\ne 0\}}}.\end{aligned}\]
The increasing process B is integrable. To see this, we first recall the following estimates:
(4.8)
\[ |\log (1+y)|\le |y|,\hspace{2em}|\log (1+y)-y|\le {y^{2}},\hspace{1em}\text{for}\hspace{2.5pt}\hspace{2.5pt}y\ge 0\hspace{0.1667em}.\]
From (4.8), since $\rho \in \mathcal{A}$ and ${\rho _{t}}(\omega )\beta (\omega ,t,y)\ge 0$, by the boundedness of α and the assumptions on β, we get the estimate
(4.9)
\[ \mathbb{E}[{B_{T}}]\le \mathbb{E}\bigg[{\int _{0}^{T}}{\rho _{s}^{2}}{\alpha _{s}^{2}}\mathrm{d}s\bigg]{\int _{\mathbb{R}}}({y^{2}}\wedge 1)\nu (\mathrm{d}y)<+\infty \hspace{0.1667em}.\]
So, we can introduce ${\textstyle\int _{[0,\cdot ]\times \mathbb{R}}}\big(\log (1+{\rho _{s}}\beta (s,y))-{\rho _{s}}\beta (s,y)\big)\mu (\mathrm{d}s,\mathrm{d}y)$, which is a process of integrable variation and indistinguishable from A. Hence, the predictable compensator of A is ${A^{p}}:={\textstyle\int _{[0,\cdot ]\times \mathbb{R}}}\big(\log (1+{\rho _{s}}\beta (s,y))-{\rho _{s}}\beta (s,y)\big)\nu (\mathrm{d}y)\mathrm{d}s$ and, according to [13, Proposition II.1.28], the identity
\[ A-{A^{p}}={\int _{[0,\cdot ]\times \mathbb{R}}}\big(\log (1+{\rho _{s}}\beta (s,y))-{\rho _{s}}\beta (s,y)\big)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)\]
holds. In conclusion, by the linearity of the stochastic integral with respect to $\overline{\mu }$, we can rewrite (4.7) as in (4.6). It remains to show that $\log ({W_{t}^{\rho ,x}})$ is integrable for every $t\in [0,T]$. We observe that, because of (4.9) and the boundedness of θ and ζ, and since $\rho \in \mathcal{A}$, the drift part in (4.6) is integrable. Furthermore, the local martingale part of $\log ({W^{\rho ,x}})$ belongs to ${\mathcal{H}^{2}}$. Indeed, by the boundedness of ζ, we get that ${\textstyle\int _{0}^{\cdot }}{\rho _{s}}{\zeta _{s}}\hspace{0.1667em}\mathrm{d}{B_{s}^{\sigma }}$ belongs to ${\mathcal{H}^{2}}$. From (4.8) we have
\[ \log (1+{\rho _{t}}\beta (t,y))\le {\rho _{t}}\beta (t,y)\le {\rho _{t}}{\alpha _{t}}(|y|\wedge 1)\in {L^{2}}(\lambda \otimes \mathbb{P}\otimes \nu )\hspace{0.1667em}.\]
Thus, the stochastic integral with respect to $\overline{\mu }$ in (4.6) also belongs to ${\mathcal{H}^{2}}$. The proof is complete. □
We notice that, for every $\rho \in \mathcal{A}$, we have the identity
(4.10)
\[ {W_{t}^{\rho ,x}}=x+{\int _{0}^{t}}\frac{{W_{s-}^{\rho ,x}}{\rho _{s}}}{{S_{s-}}}\hspace{0.1667em}\mathrm{d}{S_{s}},\hspace{1em}x>0\hspace{0.1667em},\]
where S is the price process of the stock. So, we can interpret an admissible strategy $\rho \in \mathcal{A}$ as the part of the wealth invested in the stock, and $\pi :={W_{-}^{\rho ,x}}\rho /{S_{-}}$ is the number of shares of the stock. Since from Proposition 4.4 the wealth process ${W^{\rho ,x}}$ is a positive semimartingale, for every $\rho \in \mathcal{A}$, the predictable process π is an admissible strategy for the market model described by the price process S (see (4.2)).
We now state a measurable selection result, which will be useful in the proof of Theorem 4.6 below.
Lemma 4.5.
Let $C\subseteq \mathbb{R}$ be a closed subset and let $G:[0,T]\times \Omega \times C\longrightarrow \mathbb{R}$ be a mapping such that:
(i) $c\mapsto G(t,\omega ,c)$ is continuous over C for every $(t,\omega )\in [0,T]\times \Omega $.
(ii) $(t,\omega )\mapsto G(t,\omega ,c)$ is an ${\mathbb{F}^{L}}$-predictable process for every $c\in C$.
(iii) For all $(t,\omega )\in [0,T]\times \Omega $, there exists ${c^{\ast }}\in C$ such that
\[ G(t,\omega ,{c^{\ast }})=\underset{c\in C}{\inf }G(t,\omega ,c)\hspace{0.1667em}.\]
Then, the infimum is, in fact, a minimum, $(t,\omega )\mapsto {\min _{c\in C}}G(t,\omega ,c)$ is a predictable process and there exists a predictable process ${\rho ^{\ast }}$ such that
\[ G(t,\omega ,{\rho _{t}^{\ast }}(\omega ))=\underset{c\in C}{\min }G(t,\omega ,c),\hspace{1em}(t,\omega )\in [0,T]\times \Omega \hspace{0.1667em}.\]
Proof.
By (iii), the infimum is in fact a minimum. Moreover, $(t,\omega )\mapsto {\min _{c\in C}}G(t,\omega ,c)$ is a predictable process: indeed, denoting by $\mathbf{Q}$ the set of rational numbers, from the continuity of G we get ${\inf _{c\in C}}G(t,\omega ,c)={\inf _{c\in C\cap \mathbf{Q}}}G(t,\omega ,c)$, for $(t,\omega )\in [0,T]\times \Omega $, and the second claim is proven. The third claim follows from assumption (iii) and Filippov’s implicit function theorem as formulated in [3, Theorem 21.3.4] (with $U=C$ and without the space X). The proof of the lemma is complete. □
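The pointwise minimization underlying Lemma 4.5 can be mimicked numerically: for each time point one minimizes $c\mapsto G(t,\omega ,c)$ separately, and the selector is obtained gridpoint by gridpoint. The sketch below does this for a toy specification in which the Lévy measure is finite with a single atom, so the ν-integral reduces to one term; the coefficient paths and all constants are assumptions made for illustration only.

```python
import numpy as np

# Discretized illustration of Lemma 4.5 (all parameters assumed):
# for each t, minimize c -> G(t, c) over a grid of the closed set C = [0, 2].
sigma, lam_nu, beta = 1.0, 2.0, 0.24       # nu = lam_nu * delta_{y0}, beta = beta(y0)
t_grid = np.linspace(0.0, 1.0, 5)
zeta = 0.5 + 0.1 * np.sin(t_grid)          # bounded "predictable" coefficient paths (toy)
theta = 0.2 + 0.05 * t_grid
c_grid = np.linspace(0.0, 2.0, 2001)

def G(t_idx, c):
    """G(t, c): quadratic Gaussian term plus the (single-atom) jump term."""
    quad = 0.5 * sigma**2 * (c * zeta[t_idx] - theta[t_idx] / sigma**2) ** 2
    jump = lam_nu * (c * beta - np.log1p(c * beta))
    return quad + jump

# Pointwise minimization: one argmin per time point gives a selector on the grid.
rho_star = np.array([c_grid[np.argmin(G(i, c_grid))] for i in range(len(t_grid))])
min_G = np.array([G(i, c_grid).min() for i in range(len(t_grid))])
```

Both terms of G are nonnegative for $c\ge 0$ (the jump integrand $c\beta -\log (1+c\beta )\ge 0$ by (4.8)), so the computed minima are nonnegative, and the selector stays in the chosen set C.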
We are now ready to solve (4.4).
Theorem 4.6.
Let S be the price process given in (4.2) with the additional assumption $\beta (t,\omega ,y)\ge 0$, $(t,\omega ,y)\in \widetilde{\Omega }$. Let $\mathcal{A}$ be as in Definition 4.3. Then, for every $\rho \in \mathcal{A}$, the wealth process ${W^{\rho ,x}}$ satisfies (4.5) and $\log \big({W_{T}^{\rho ,x}}\big)\in {L^{1}}({\mathcal{F}_{T}^{L}},\mathbb{P})$. Furthermore, for $x>0$, the explicit expression of the value function V of the optimization problem (4.4) is $V(x)=\log (x)+\mathbb{E}[{\textstyle\int _{0}^{T}}f(s)\mathrm{d}s]$, where f is given by
(4.11)
\[\begin{aligned}{}f(t,\omega )& :=-\underset{c\in C}{\min }\bigg(\frac{{\sigma ^{2}}}{2}{\Big(c{\zeta _{t}}(\omega )-\frac{{\theta _{t}}(\omega )}{{\sigma ^{2}}}\Big)^{2}}\\ {} & \hspace{1em}\hspace{1em}+{\int _{\mathbb{R}}}\big\{c\beta (t,\omega ,y)-\log (1+c\beta (t,\omega ,y))\big\}\nu (\mathrm{d}y)\bigg)+\frac{{\theta _{t}^{2}}(\omega )}{2{\sigma ^{2}}}\hspace{0.1667em}.\end{aligned}\]
Moreover, there exists an admissible strategy ${\rho ^{\ast }}\in \mathcal{A}$ such that, for every $(t,\omega )$ in $[0,T]\times \Omega $,
(4.12)
\[\begin{aligned}{}{\rho _{t}^{\ast }}(\omega )\in \underset{c\in C}{\operatorname{arg\,min}}\bigg(& \frac{{\sigma ^{2}}}{2}{\Big(c\hspace{0.1667em}{\zeta _{t}}(\omega )-\frac{{\theta _{t}}(\omega )}{{\sigma ^{2}}}\Big)^{2}}\\ {} & +{\int _{\mathbb{R}}}\big\{c\beta (t,\omega ,y)-\log (1+c\beta (t,\omega ,y))\big\}\nu (\mathrm{d}y)\bigg)\end{aligned}\]
holds and ${\rho ^{\ast }}$ is optimal, that is, $V(x)=\mathbb{E}[\log ({W_{T}^{{\rho ^{\ast }},x}})]$, $x>0$.
Proof.
We only need to verify the statements about the optimization problem (4.4), since the properties of the wealth process ${W^{\rho ,x}}$ come from Proposition 4.4. We prove the result in two steps. First, using Lemma 4.5, we show that $(t,\omega )\mapsto f(t,\omega )$ in (4.11) is an admissible generator and that there exists a ${\rho ^{\ast }}\in \mathcal{A}$ satisfying (4.12). We then show the optimality of the strategy ${\rho ^{\ast }}\in \mathcal{A}$ as an application of the martingale optimality principle.
For each c in the closed subset $C\subseteq [0,+\infty )$, we define
\[\begin{aligned}{}G(t,\omega ,c)& :=\frac{{\sigma ^{2}}}{2}{\big(c{\zeta _{t}}(\omega )-{\sigma ^{-2}}{\theta _{t}}(\omega )\big)^{2}}\\ {} & \hspace{1em}+{\int _{\mathbb{R}}}\big\{c\beta (t,\omega ,y)-\log (1+c\beta (t,\omega ,y))\big\}\nu (\mathrm{d}y)\hspace{0.1667em}.\end{aligned}\]
Since β is a predictable function, the process $(t,\omega )\mapsto G(t,\omega ,c)$ is predictable, for every $c\in C$. We now show the continuity of $c\mapsto G(t,\omega ,c)$ on C for $(t,\omega )$ in $[0,T]\times \Omega $. Let ${({c^{n}})_{n\in \mathbb{N}}}\subseteq C$ be a convergent sequence and let $c\in C$ be its limit. We have
\[\begin{aligned}{}& {c^{n}}\beta (t,\omega ,y)-\log (1+{c^{n}}\beta (t,\omega ,y))\\ {} & \hspace{1em}\longrightarrow c\beta (t,\omega ,y)-\log (1+c\beta (t,\omega ,y)),\hspace{1em}n\to +\infty \hspace{0.1667em},\end{aligned}\]
pointwise in t, ω and y. From (4.8), we get, as $n\to +\infty $,
\[ 0\le {c^{n}}\beta (t,\omega ,y)-\log (1+{c^{n}}\beta (t,\omega ,y))\le {({c^{n}}{\alpha _{t}}(\omega ))^{2}}({y^{2}}\wedge 1)\longrightarrow {c^{2}}{\alpha _{t}^{2}}(\omega )({y^{2}}\wedge 1)\hspace{0.1667em}.\]
By dominated convergence, we get $G(t,\omega ,{c^{n}})\longrightarrow G(t,\omega ,c)$, $n\to +\infty $, which is the statement about the continuity. We now show that there exists a ${c^{\ast }}$ in C such that $G(t,\omega ,{c^{\ast }})={\min _{c\in C}}G(t,\omega ,c)$, for every $(t,\omega )$ in $[0,T]\times \Omega $. By the estimate $G(t,\omega ,c)\ge \frac{{\sigma ^{2}}}{2}{\big(c{\zeta _{t}}(\omega )-{\sigma ^{-2}}{\theta _{t}}(\omega )\big)^{2}}$, we get $G(t,\omega ,c)\longrightarrow +\infty $ as $c\to \infty $, for every $(t,\omega )$ in $[0,T]\times \Omega $. Hence, ${C_{0}}:=\{c\in C:G(t,\omega ,c)\le G(t,\omega ,{c_{0}})\}$ is a closed bounded set and, consequently, compact, where ${c_{0}}\in C$ is chosen arbitrarily but fixed. By the continuity of $G(t,\omega ,\cdot )$, there exists ${c^{\ast }}={c^{\ast }}(t,\omega )$ in ${C_{0}}$ such that $G(t,\omega ,{c^{\ast }})={\min _{c\in {C_{0}}}}G(t,\omega ,c)$ holds for every $(t,\omega )$ in $[0,T]\times \Omega $. We also have $G(t,\omega ,{c^{\ast }})={\inf _{c\in C}}G(t,\omega ,c)$ for every $(t,\omega )\in [0,T]\times \Omega $. So, by Lemma 4.5, we get that $(t,\omega )\mapsto {\min _{c\in C}}G(t,\omega ,c)$ is a predictable process and that there exists a C-valued predictable ${\rho ^{\ast }}$ such that, for every $(t,\omega )$ in $[0,T]\times \Omega $, the identity $G(t,\omega ,{\rho _{t}^{\ast }}(\omega ))={\min _{c\in C}}G(t,\omega ,c)$ holds.
We now show that $\mathbb{E}[{\textstyle\int _{0}^{T}}{({\rho _{s}^{\ast }})^{2}}\mathrm{d}s]<+\infty $. By $0<\beta (t,\omega ,y)\le {\alpha _{t}}(\omega )(|y|\wedge 1)$ and (4.8), for $c\in C$, we can estimate
\[ {\int _{\mathbb{R}}}\big(c\beta (t,y)-\log \big(1+c\beta (t,y)\big)\big)\nu (\mathrm{d}y)\le \left({\int _{\mathbb{R}}}({y^{2}}\wedge 1)\nu (\mathrm{d}y)\right){\alpha _{t}^{2}}{c^{2}}<+\infty \hspace{0.1667em}.\]
So, α being bounded, we get $0\le G(t,\omega ,c)\le {k_{1}}{c^{2}}+{k_{2}}$, where ${k_{1}},{k_{2}}>0$ denote two suitable constants. Using the boundedness of ζ and θ, the minimality property of ${\rho ^{\ast }}$ and the estimate (4.2), it is therefore straightforward to see that, for two suitable constants ${\overline{k}_{1}},{\overline{k}_{2}}>0$, we have ${({\rho _{t}^{\ast }}(\omega ))^{2}}\le {\overline{k}_{1}}G(t,\omega ,c)+{\overline{k}_{2}}$ for every $c\in C$. Hence, we get ${({\rho _{t}^{\ast }}(\omega ))^{2}}\le {\tilde{k}_{1}}{c^{2}}+{\tilde{k}_{2}}$, $c\in C$, where ${\tilde{k}_{1}},{\tilde{k}_{2}}>0$ are suitable constants. This implies ${\rho ^{\ast }}\in \mathcal{A}$. We now verify that f in (4.11) satisfies Assumption 3.2 (i) and (ii). Because of the previous step and the predictability of θ, from the identity
\[ f(t,\omega )=-\underset{c\in C}{\min }G(t,\omega ,c)+\frac{{\theta _{t}^{2}}(\omega )}{2{\sigma ^{2}}},\hspace{1em}t\in [0,T],\]
we deduce that $(t,\omega )\mapsto f(t,\omega )$ is predictable. This shows that f fulfils Assumption 3.2 (i). The estimate
\[ |f(t,\omega )|\le G(t,\omega ,c)+\frac{{\theta _{t}^{2}}(\omega )}{2{\sigma ^{2}}}\le {k_{1}}{c^{2}}+k,\hspace{1em}c\in C,\]
where $k>0$ is a suitable constant, implies that f satisfies Assumption 3.2 (ii).
We now construct a family of processes $\{{R^{\rho ,x}},\hspace{2.5pt}\rho \in \mathcal{A}\}$ which satisfies Assumption 4.2.
Notice that, because f satisfies Assumption 3.2 (i) and (ii), the process ${\textstyle\int _{0}^{\cdot }}f(s)\mathrm{d}s$ is ${\mathbb{F}^{L}}$-adapted, f being predictable and Lebesgue integrable. Hence, we can consider the square integrable martingale N satisfying ${N_{t}}=\mathbb{E}[{\textstyle\int _{0}^{T}}f(s)\mathrm{d}s|{\mathcal{F}_{t}^{L}}]$ a.s., $t\in [0,T]$. We define the càdlàg semimartingale $Y={({Y_{t}})_{t\in [0,T]}}$ by setting
(4.13)
\[ {Y_{t}}:={N_{t}}-{\int _{0}^{t}}f(s)\mathrm{d}s,\hspace{1em}t\in [0,T]\hspace{0.1667em}.\]
We observe that, ${\mathcal{F}_{0}^{L}}$ being trivial, we have ${Y_{0}}={N_{0}}=\mathbb{E}[{\textstyle\int _{0}^{T}}f(s)\mathrm{d}s]$. Furthermore, Y satisfies ${Y_{t}}=\mathbb{E}[{\textstyle\int _{t}^{T}}f(s)\mathrm{d}s|{\mathcal{F}_{t}^{L}}]$ a.s., $t\in [0,T]$ and ${Y_{T}}=0$.
We now set ${R_{t}^{\rho ,x}}:=\log ({W_{t}^{\rho ,x}})+{Y_{t}}$ for $t\in [0,T]$. Notice that ${R^{\rho ,x}}$ fulfils Assumption 4.2 (1), since ${R_{T}^{\rho ,x}}=\log ({W_{T}^{\rho ,x}})$ holds. From Proposition 4.4, for every t in $[0,T]$, we get
(4.14)
\[\begin{aligned}{}{R_{t}^{\rho ,x}}& =\log (x)+{N_{t}}+{\int _{0}^{t}}{\rho _{s}}{\zeta _{s}}\mathrm{d}{B_{s}^{\sigma }}+{\int _{[0,t]\times \mathbb{R}}}\log \big(1+{\rho _{s}}\beta (s,y)\big)\overline{\mu }(\mathrm{d}s,\mathrm{d}y)\\ {} & -{\int _{0}^{t}}\Big\{f(s)+\frac{{\sigma ^{2}}}{2}{\Big({\rho _{s}}{\zeta _{s}}-\frac{{\theta _{s}}}{{\sigma ^{2}}}\Big)^{2}}\\ {} & \hspace{28.45274pt}+{\int _{\mathbb{R}}}\big({\rho _{s}}\beta (s,y)-\log (1+{\rho _{s}}\beta (s,y))\big)\nu (\mathrm{d}y)-\frac{{\theta _{s}^{2}}}{2{\sigma ^{2}}}\Big\}\mathrm{d}s\hspace{0.1667em}.\end{aligned}\]
The first line on the right-hand side of (4.14) consists of true martingales. Because the drift part on the right-hand side of (4.14) is non-positive and integrable, ${R^{\rho ,x}}$ is a supermartingale for every $\rho \in \mathcal{A}$. Additionally, ${R_{0}^{\rho ,x}}$ does not depend on ρ. Furthermore, if ${\rho ^{\ast }}$ is the admissible strategy introduced above, then ${\rho ^{\ast }}$ satisfies (4.12) and ${R^{{\rho ^{\ast }},x}}$ is a true martingale. The martingale optimality principle implies the optimality of ${\rho ^{\ast }}$. Hence, $V(x)=\mathbb{E}[\log ({W_{T}^{{\rho ^{\ast }},x}})]=\log (x)+{Y_{0}}$ and the proof of the theorem is complete. □
Remark 4.7.
It is evident from the first part of the proof of Theorem 4.6 that the predictable function f defined in (4.11) is an admissible generator. So, because of Theorem 3.3, the BSDE
(4.15)
\[ {\widetilde{Y}_{t}}=0+{\int _{t}^{T}}f(s)\mathrm{d}s-{\int _{t}^{T}}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}-{\sum \limits_{n=1}^{\infty }}{\int _{t}^{T}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}}\]
has a unique solution $(\widetilde{Y},Z,V)\in {\mathcal{S}^{2}}\times {\mathrm{L}^{2}}({B^{\sigma }})\times {M^{2}}({\ell ^{2}})$, where $(Z,V)$ is the unique pair such that for every $t\in [0,T]$
\[\begin{aligned}{}{N_{t}}& =\mathbb{E}\left[{\int _{0}^{T}}f(s)\mathrm{d}s\Big|{\mathcal{F}_{t}^{L}}\right]\\ {} & =\mathbb{E}\left[{\int _{0}^{T}}f(s)\mathrm{d}s\right]+{\int _{0}^{t}}{Z_{s}}\mathrm{d}{B_{s}^{\sigma }}+{\sum \limits_{n=1}^{\infty }}{\int _{0}^{t}}{V_{s}^{n}}\mathrm{d}{X_{s}^{{f_{n}}}},\end{aligned}\]
holds and ${\widetilde{Y}_{t}}=\mathbb{E}[{\textstyle\int _{t}^{T}}f(s)\mathrm{d}s|{\mathcal{F}_{t}^{L}}]$. Clearly, $\widetilde{Y}$ satisfies ${\widetilde{Y}_{t}}={N_{t}}-{\textstyle\int _{0}^{t}}f(s)\mathrm{d}s$, for every t in $[0,T]$, and ${\widetilde{Y}_{0}}=\mathbb{E}[{\textstyle\int _{0}^{T}}f(s)\mathrm{d}s]$. Hence, $\widetilde{Y}=Y$, where Y has been defined in (4.13). This shows that the martingale optimality principle in Theorem 4.6 can be also constructed as an application of Theorem 3.3.
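To make (4.11) and (4.12) concrete, the following sketch evaluates f and an optimal strategy for constant coefficients, a single-atom Lévy measure and $C=[0,10]$; every numerical value is an assumption chosen for illustration. For an interior minimizer the first-order condition reads ${\sigma ^{2}}\zeta (c\zeta -\theta /{\sigma ^{2}})+{\textstyle\int _{\mathbb{R}}}{\beta ^{2}}c/(1+c\beta )\hspace{0.1667em}\nu (\mathrm{d}y)=0$, which gives a simple consistency check on the numerical minimum.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy evaluation of (4.11)-(4.12) with constant coefficients (assumed values):
sigma, zeta, theta = 1.0, 0.5, 0.2
lam_nu, beta = 2.0, 0.24          # nu = lam_nu * delta_{y0}, beta = beta(y0)
T, x = 1.0, 1.0

def G(c):
    """The function minimized in (4.11)-(4.12), with a single-atom nu."""
    return (0.5 * sigma**2 * (c * zeta - theta / sigma**2) ** 2
            + lam_nu * (c * beta - np.log1p(c * beta)))

res = minimize_scalar(G, bounds=(0.0, 10.0), method="bounded")
rho_star = res.x                                  # constant optimal strategy (4.12)
f = theta**2 / (2 * sigma**2) - res.fun           # generator f from (4.11)
V = np.log(x) + f * T                             # coefficients constant: E[int f ds] = f*T
```

Since G is strictly convex on $[0,+\infty )$ in this specification, the bounded minimizer is unique, and the value $V(x)=\log (x)+f\hspace{0.1667em}T$ is strictly smaller than the pure-Brownian bound $\log (x)+{\theta ^{2}}T/(2{\sigma ^{2}})$ because the jump penalty in G is positive at ${\rho ^{\ast }}>0$.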