1 Introduction
A basic method in mathematical finance is to replace the original probability measure with an equivalent martingale measure, sometimes called a risk-neutral measure, which is used for pricing and hedging contingent claims (e.g. options or futures). In contrast to the situation of the classical Black–Scholes option pricing formula, where the equivalent martingale measure is unique, in actuarial mathematics this is certainly not the case.
The above fact was pointed out by Delbaen and Haezendonck in their pioneering paper [5], as the authors “tried to create a mathematical framework to deal with finance related to risk processes” within the framework of classical Risk Theory. Thus, they were confronted with the problem of characterizing all equivalent martingale measures Q such that a compound Poisson process under an original measure P remains a compound Poisson one under Q. They solved this problem in the affirmative in [5], and applied their results to the theory of premium calculation principles (see also Embrechts [7] for an overview). The method provided by [5] has been successfully applied to many areas of insurance mathematics, such as pricing (re-)insurance contracts (Holtan [11], Haslip and Kaishev [10]), simulation of ruin probabilities (Boogaert and De Waegenaere [2]), risk capital allocation (Yu et al. [21]) and pricing CAT derivatives (Geman and Yor [9], Embrechts and Meister [8]), and it has been generalized to the case of mixed Poisson processes (see Meister [15]).
However, the (compound) Poisson process has one vital weakness as far as practical applications are concerned: its variance is a linear function of time t. Moreover, in some interesting real-life cases the interarrival times associated with a counting process remain independent while the exponential interarrival time distribution does not fit the observed data well (cf. e.g. Chen et al. [3] and Wang et al. [20]); the induced counting process is then a renewal process but not a Poisson one. This raises the question whether the characterization of Delbaen and Haezendonck can be extended to the more general compound renewal risk model (also known as the Sparre–Andersen model), and it is precisely this problem that the present paper deals with. In particular, if the process S is a compound renewal one under the probability measure P, it would be interesting to characterize all probability measures Q that are equivalent to P and convert S into a compound Poisson process under Q.
In Section 2, we prove one direction of the desired characterization, see Proposition 2.1, which characterizes and explicitly computes the Radon–Nikodým derivatives $dQ/dP$ for well-known cases in insurance mathematics, see Examples 2.1 and 2.2. Since the increments of a renewal process are not, in general, independent and stationary, we cannot use arguments similar to those used in the main proof of [5, Proposition 2.2]. To overcome this obstacle we insert Lemma 2.1, which holds true for any (compound) counting process, and on which the proof of Proposition 2.1 relies heavily.
In Section 3, the converse direction is proven in Proposition 3.1, where a canonical change-of-measures technique is provided which seems to simplify the well-known one involving the markovization of a (compound) renewal process, see Remark 3.2. The desired characterization is given in Theorem 3.1, which completes and simplifies the proof of the main result of [5]. As a consequence of Theorem 3.1, it is proven in Corollary 3.1 that any compound renewal process can be converted into a compound Poisson one through a change of measures, by choosing the “correct” Radon–Nikodým derivative. The main result of [5, Proposition 2.2] follows as a special instance of Theorem 3.1, see Remark 3.3 (a).
In Section 4, we apply our results to the financial pricing of insurance in a compound renewal risk model. We first prove that, given a compound renewal process S under P, the process $Z(P):={\{{Z_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ with ${Z_{t}}:={S_{t}}-t\cdot p(P)$ for any $t\ge 0$, where $p(P)$ is the premium density, is a martingale under P if and only if S is a compound Poisson process under P, see Proposition 4.1, showing in this way that a martingale approach to premium calculation principles leads, in the case of compound renewal processes, immediately to compound Poisson ones. A consequence of Theorem 3.1 and Proposition 4.1 is a characterization of all progressively equivalent martingale measures Q converting a compound renewal process S into a compound Poisson one, see Proposition 4.2. Using the latter result, we identify canonical price processes satisfying the condition of no free lunch with vanishing risk, see Theorem 4.1, connecting in this way our results with this basic notion of mathematical finance. Finally, we present some applications of Corollary 3.1 and Theorem 4.1 to the computation of some premium calculation principles, see Examples 4.1 to 4.3.
2 Compound renewal processes and progressively equivalent measures
Throughout this paper, unless stated otherwise, $(\varOmega ,\varSigma ,P)$ is a fixed but arbitrary probability space and $\varUpsilon :=(0,\infty )$. The symbols ${\mathcal{L}^{1}}(P)$ and ${\mathcal{L}^{2}}(P)$ stand for the families of all real-valued P-integrable and P-square integrable functions on Ω, respectively. Functions that are P-a.s. equal are not identified. We denote by $\sigma (\mathcal{G})$ the σ-algebra generated by a family $\mathcal{G}$ of subsets of Ω. Given a topology $\mathfrak{T}$ on Ω we write $\mathfrak{B}(\varOmega )$ for its Borel σ-algebra on Ω, i.e. the σ-algebra generated by $\mathfrak{T}$. Our measure theoretic terminology is standard and generally follows [4]. For the definitions of real-valued random variables and random variables we refer to [4, p. 308]. We apply the notation ${P_{X}}:={P_{X}}(\theta ):=\mathbf{K}(\theta )$ to mean that X is distributed according to the law $\mathbf{K}(\theta )$, where $\theta \in D\subseteq {\mathbb{R}^{d}}$ ($d\in \mathbb{N}$) is the parameter of the distribution. We denote again by $\mathbf{K}(\theta )$ the distribution function induced by the probability distribution $\mathbf{K}(\theta )$. Notation $\mathbf{Ga}(a,b)$, where $a,b\in (0,\infty )$, stands for the law of gamma distribution (cf. e.g. [17, p. 180]). In particular, $\mathbf{Ga}(a,1)=\mathbf{Exp}(a)$ stands for the law of exponential distribution. For two real-valued random variables X and Y we write $X=Y$ P-a.s. if $\{X\ne Y\}$ is a P-null set. If $A\subseteq \varOmega $, then ${A^{c}}:=\varOmega \setminus A$, while ${\chi _{A}}$ denotes the indicator (or characteristic) function of the set A. For a map $f:D\to E$ and for a nonempty set $A\subseteq D$ we denote by $f\upharpoonright A$ the restriction of f to A. For the unexplained terminology of Probability and Risk Theory we refer to [17].
A sequence $W:={\{{W_{n}}\}_{n\in \mathbb{N}}}$ of positive real-valued random variables on Ω is called a (claim) interarrival process (cf. e.g. [17, p. 7]). The (claim) arrival process $T:={\{{T_{n}}\}_{n\in {\mathbb{N}_{0}}}}$ induced by W is defined by means of ${T_{0}}:=0$ and ${T_{n}}:={\textstyle\sum _{k=1}^{n}}{W_{k}}$ for any $n\in \mathbb{N}$ (cf. e.g. [17, p. 7]). A counting (or claim number) process $N:={\{{N_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ is defined by means of ${N_{t}}:={\textstyle\sum _{n=1}^{\infty }}{\chi _{\{{T_{n}}\le t\}}}$ for any $t\ge 0$ (cf. e.g. [17, Theorem 2.1.1]). In particular, if W is P-i.i.d. with common distribution $\mathbf{K}(\theta ):\mathfrak{B}(\varUpsilon )\to [0,1]$ ($\theta \in D\subseteq {\mathbb{R}^{d}}$), the counting process N is a P-renewal process with parameter $\theta \in D\subseteq {\mathbb{R}^{d}}$ and interarrival time distribution $\mathbf{K}(\theta )$ (written P-RP$(\mathbf{K}(\theta ))$ for short). If $\theta >0$ and $\mathbf{K}(\theta )=\mathbf{Exp}(\theta )$ then a P-RP$(\mathbf{K}(\theta ))$ becomes a P-Poisson process with parameter θ (cf. e.g. [17, p. 23 for the definition]). Note that if N is a P-RP$(\mathbf{K}(\theta ))$ then ${\mathbb{E}_{P}}[{N_{t}^{m}}]<\infty $ for any $t\ge 0$ and $m\in \mathbb{N}$ (cf. e.g. [18, Proposition 4, p. 101]); hence according to [17, Corollary 2.1.5], it has zero probability of explosion, i.e. $P(\{{\sup _{n\in \mathbb{N}}}{T_{n}}<\infty \})=0$. Furthermore, if $X:={\{{X_{n}}\}_{n\in \mathbb{N}}}$ is another sequence of P-i.i.d. positive real-valued random variables on Ω, called claim size process (cf. e.g. [17, p. 103]), which is independent of N, define the aggregate claims process $S:={\{{S_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ by means of ${S_{t}}:={\textstyle\sum _{n=1}^{{N_{t}}}}{X_{n}}$ for any $t\ge 0$ (cf. e.g. [17, p. 103]). 
In particular, if N is a P-RP$(\mathbf{K}(\theta ))$, the aggregate claims process is a P-compound renewal process (P-CRP for short) with parameters $\mathbf{K}(\theta )$ and ${P_{{X_{1}}}}$. In the special case where N is a P-Poisson process with parameter θ, the aggregate claims process S is called a P-compound Poisson process (P-CPP for short) with parameters θ and ${P_{{X_{1}}}}$.
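The dynamics just described are easy to simulate. The following minimal Python sketch (all function names are ours, purely for illustration) generates one value $({N_{t}},{S_{t}})$ of an aggregate claims process; with exponential interarrival times it produces a compound Poisson path, while any other positive interarrival law gives a general compound renewal path.

```python
import random

def compound_renewal_path(t, draw_interarrival, draw_claim, rng):
    """Simulate one value (N_t, S_t) of the aggregate claims process
    S_t = X_1 + ... + X_{N_t} induced by i.i.d. interarrival times W_n."""
    arrival, n, s = 0.0, 0, 0.0
    while True:
        arrival += draw_interarrival(rng)  # T_n = W_1 + ... + W_n
        if arrival > t:                    # first arrival beyond t: N_t = n
            break
        n += 1
        s += draw_claim(rng)
    return n, s

rng = random.Random(1)
# Exponential interarrivals (rate 2) make N a Poisson process, so S is a CPP;
# replacing expovariate by e.g. gammavariate yields a general CRP.
n_t, s_t = compound_renewal_path(
    10.0,
    lambda r: r.expovariate(2.0),
    lambda r: r.gammavariate(2.0, 1.0),
    rng,
)
print(n_t, s_t)
```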
Henceforth, unless stated otherwise, $S:={\{{S_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ is a P-CRP with parameters $\mathbf{K}(\theta )$ and ${P_{{X_{1}}}}$, ${\mathcal{F}^{W}}:={\{{\mathcal{F}_{n}^{W}}\}_{n\in \mathbb{N}}}$, ${\mathcal{F}^{X}}:={\{{\mathcal{F}_{n}^{X}}\}_{n\in \mathbb{N}}}$ and $\mathcal{F}:={\{{\mathcal{F}_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ are the natural filtrations of W, X and S, respectively.
The next lemma is a general and helpful result, as it provides a clear understanding of the structure of $\mathcal{F}$, and it is essential for the proofs of our main results. Lemma 2.1 is a part of [13, Lemma III.1.29], but we write it with its proof in a form suitable for our results.
Lemma 2.1.
For any fixed $t\ge 0$ and for each $n\in {\mathbb{N}_{0}}$ the equality ${\mathcal{F}_{t}}\cap \{{N_{t}}=n\}=\sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})\cap \{{N_{t}}=n\}$ holds true.
Proof.
Fix an arbitrary $t\ge 0$ and $n\in {\mathbb{N}_{0}}$.
Clearly, for $n=0$ we get ${\mathcal{F}_{0}^{X}}={\mathcal{F}_{0}^{W}}=\{\varnothing ,\varOmega \}$ and ${\mathcal{F}_{t}}\cap \{{N_{t}}=0\}=\{\varnothing ,\varOmega \}\cap \{{N_{t}}=0\}$; hence ${\mathcal{F}_{t}}\cap \{{N_{t}}=0\}=\sigma ({\mathcal{F}_{0}^{W}}\cup {\mathcal{F}_{0}^{X}})\cap \{{N_{t}}=0\}$.
(a) Inclusion $\sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})\cap \{{N_{t}}=n\}\subseteq {\mathcal{F}_{t}}\cap \{{N_{t}}=n\}$ holds true.
To show (a), fix an arbitrary $k\in \{1,\dots ,n\}$. Note that S is progressively measurable with respect to $\mathcal{F}$ (cf. e.g. [14, p. 4 for the definition]), since ${S_{t}}$ is ${\mathcal{F}_{t}}$-measurable and has right continuous paths (cf. e.g. [14, Proposition 1.13]). The latter, together with the fact that ${T_{k}}$ is a stopping time of $\mathcal{F}$, implies that ${S_{{T_{k}}}}$ is ${\mathcal{F}_{{T_{k}}}}$-measurable, where ${\mathcal{F}_{{T_{k}}}}:=\{A\in \varSigma :A\cap \{{T_{k}}\le v\}\in {\mathcal{F}_{v}}\hspace{0.1667em}\hspace{0.1667em}\text{for any}\hspace{0.1667em}\hspace{0.1667em}v\ge 0\}$ (cf. e.g. [14, Proposition 2.18]). But ${T_{k-1}}<{T_{k}}$ yields ${\mathcal{F}_{{T_{k-1}}}}\subseteq {\mathcal{F}_{{T_{k}}}}$ (cf. e.g. [14, Lemma 2.15]), implying that ${S_{{T_{k-1}}}}$ is ${\mathcal{F}_{{T_{k}}}}$-measurable. Consequently, the random variable ${X_{k}}={S_{{T_{k}}}}-{S_{{T_{k-1}}}}$ is ${\mathcal{F}_{{T_{k}}}}$-measurable; hence ${\mathcal{F}_{n}^{X}}\cap \{{N_{t}}=n\}\subseteq {\mathcal{F}_{t}}\cap \{{N_{t}}=n\}$.
Since ${W_{k}}$ is ${\mathcal{F}_{{T_{k}}}}$-measurable, standard computations yield ${\mathcal{F}_{n}^{W}}\cap \{{N_{t}}=n\}\subseteq {\mathcal{F}_{t}}\cap \{{N_{t}}=n\}$, completing in this way the proof of (a).
(b) Inclusion ${\mathcal{F}_{t}}\cap \{{N_{t}}=n\}\subseteq \sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})\cap \{{N_{t}}=n\}$ holds true.
To show (b), let $A\in {\textstyle\bigcup _{u\le t}}\sigma ({S_{u}})$. There exist an index $u\in [0,t]$ and a set $B\in \mathfrak{B}(\varUpsilon )$ such that $A={S_{u}^{-1}}(B)={\textstyle\bigcup _{m\in {\mathbb{N}_{0}}}}(\{{N_{u}}=m\}\cap {B_{m}})$, where ${B_{m}}:={({\textstyle\sum _{j=1}^{m}}{X_{j}})^{-1}}(B)\in {\mathcal{F}_{m}^{X}}$ for any $m\in {\mathbb{N}_{0}}$, implying
\[ A\cap \{{N_{t}}=n\}={D_{n}}\cap \{{N_{t}}=n\},\]
where ${D_{n}}:=({\textstyle\bigcup _{m=0}^{n-1}}(\{{N_{u}}=m\}\cap {B_{m}}))\cup (\{{T_{n}}\le u\}\cap {B_{n}})\in \sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})$; hence ${\textstyle\bigcup _{u\le t}}\sigma ({S_{u}})\cap \{{N_{t}}=n\}\subseteq \sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})\cap \{{N_{t}}=n\}$, implying (b). This completes the proof of the lemma. □
Lemma 2.2.
Let Q be a probability measure on Σ.
(a) If X is Q-i.i.d., ${Q_{{X_{1}}}}\sim {P_{{X_{1}}}}$ and h is a real-valued, one-to-one, $\mathfrak{B}(\varUpsilon )$-measurable function, then there exists a ${P_{{X_{1}}}}$-a.s. unique real-valued $\mathfrak{B}(\varUpsilon )$-measurable function γ such that
$(i)$ ${\mathbb{E}_{P}}[{h^{-1}}\circ \gamma \circ {X_{1}}]=1$;
$(ii)$ for every $n\in {\mathbb{N}_{0}}$ and for all $A\in {\mathcal{F}_{n}^{X}}$ the condition
(1)
\[ Q(A)={\int _{A}}{\prod \limits_{j=1}^{n}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\hspace{0.1667em}dP\]
holds true.
(b) If W is Q-i.i.d. and ${Q_{{W_{1}}}}\sim {P_{{W_{1}}}}$, then there exists a ${P_{{W_{1}}}}$-a.s. unique positive function $r\in {\mathcal{L}^{1}}({P_{{W_{1}}}})$ such that for every $n\in {\mathbb{N}_{0}}$ and for all $D\in {\mathcal{F}_{n}^{W}}$ the condition
\[ Q(D)={\int _{D}}{\prod \limits_{j=1}^{n}}r({W_{j}})\hspace{0.1667em}dP\]
holds true.
Proof.
For (a): First note that $h(\varUpsilon ):=\{h(y):y\in \varUpsilon \}\in \mathfrak{B}(\mathbb{R})$ (cf. e.g. [4, Theorem 8.3.7]) and that the function ${h^{-1}}$ is $\mathfrak{B}(h(\varUpsilon ))$-$\mathfrak{B}(\varUpsilon )$-measurable (cf. e.g. [4, Proposition 8.3.5]). Since ${P_{{X_{1}}}}\sim {Q_{{X_{1}}}}$, by the Radon–Nikodým Theorem there exists a positive Radon–Nikodým derivative $f\in {\mathcal{L}^{1}}({P_{{X_{1}}}})$ of ${Q_{{X_{1}}}}$ with respect to ${P_{{X_{1}}}}$. Put $\gamma :=h\circ f$. An easy computation justifies the validity of $(i)$.
To check the validity of $(ii)$, fix an arbitrary $n\in {\mathbb{N}_{0}}$ and consider the family ${\mathcal{C}_{n}}:=\{{\textstyle\bigcap _{j=1}^{n}}{A_{j}}:{A_{j}}\in \sigma ({X_{j}})\}$. Standard computations show that any $A\in {\mathcal{C}_{n}}$ satisfies condition (1). By a monotone class argument it can be shown that (1) remains valid for any $A\in {\mathcal{F}_{n}^{X}}$.
Applying similar arguments as above we obtain (b). □
Notations 2.1.
(a) Let h be a function as in Lemma 2.2. The class of all real-valued $\mathfrak{B}(\varUpsilon )$-measurable functions γ such that ${\mathbb{E}_{P}}[{h^{-1}}\circ \gamma \circ {X_{1}}]=1$ will be denoted by ${\mathcal{F}_{P,h}}:={\mathcal{F}_{P,{X_{1}},h}}$.
(b) Let us fix an arbitrary $\theta \in D\subseteq {\mathbb{R}^{d}}$ and let $\boldsymbol{\Lambda }(\widetilde{\theta })$ be a probability distribution on $\mathfrak{B}(\varUpsilon )$, where $\widetilde{\theta }:=\rho (\theta )$ is a parameter depending on θ and ρ is a function from D into ${\mathbb{R}^{k}}$ ($d,k\in \mathbb{N}$). The class of all probability measures Q on Σ that are progressively equivalent to P, i.e. $Q\upharpoonright {\mathcal{F}_{t}}\sim P\upharpoonright {\mathcal{F}_{t}}$ for any $t\ge 0$, and such that S is a Q-CRP with parameters $\boldsymbol{\Lambda }(\widetilde{\theta })$ and ${Q_{{X_{1}}}}$ will be denoted by ${\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$. In the special case $d=k$ and $\rho :=\mathrm{id}_{D}$ we write ${\mathcal{M}_{S,\boldsymbol{\Lambda }(\theta )}}:={\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$, for simplicity.
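To illustrate membership in ${\mathcal{F}_{P,\ln }}$, the sketch below is an assumption-laden numerical example of our own (it takes ${P_{{X_{1}}}}=\mathbf{Exp}(1)$, ${Q_{{X_{1}}}}=\mathbf{Exp}(c)$ and $h:=\ln$, none of which is prescribed by the text): it checks by Monte Carlo that the exponential tilt $f:=d{Q_{{X_{1}}}}/d{P_{{X_{1}}}}$ satisfies ${\mathbb{E}_{P}}[f({X_{1}})]=1$, so that $\gamma :=\ln f$ lies in ${\mathcal{F}_{P,\ln }}$, and that reweighting by f moves the claim-size mean to $1/c$.

```python
import math
import random

def tilt(x, c):
    """Hypothetical Radon-Nikodym density f = dQ_{X_1}/dP_{X_1} for the
    assumed laws P_{X_1} = Exp(1) and Q_{X_1} = Exp(c)."""
    return c * math.exp(-(c - 1.0) * x)

c = 2.0
rng = random.Random(7)
xs = [rng.expovariate(1.0) for _ in range(50000)]    # claim sizes under P
print(sum(tilt(x, c) for x in xs) / len(xs))         # ≈ 1: gamma = ln f is in F_{P,ln}
print(sum(tilt(x, c) * x for x in xs) / len(xs))     # ≈ 1/c: claim-size mean under Q
```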
From now on, unless stated otherwise, h is a function as in Lemma 2.2, and D, θ and $\widetilde{\theta }$ are as in Notations 2.1 (b).
For the definition of a $(P,\mathcal{Z})$-martingale, where $\mathcal{Z}:={\{{\mathcal{Z}_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ is a filtration on $(\varOmega ,\varSigma )$ we refer to [17, p. 25]. A $(P,\mathcal{Z})$-martingale ${\{{Z_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ is P-a.s. positive if ${Z_{t}}$ is P-a.s. positive for each $t\ge 0$. For $\mathcal{Z}=\mathcal{F}$ we write P-martingale instead of $(P,\mathcal{F})$-martingale, for simplicity.
For a given aggregate claims process S on $(\varOmega ,\varSigma )$, in order to investigate the existence of progressively equivalent martingale measures (see Section 4), one has to be able to characterize the Radon–Nikodým derivatives $dQ/dP$. Proposition 2.1 also follows as a special case of [12, Proposition 4.3 and Theorem 5.1], but we state it in a form suitable for our purposes, and we present a rather elementary proof.
Proposition 2.1.
Let Q be a probability measure on Σ such that S is a Q-CRP with parameters $\boldsymbol{\Lambda }(\widetilde{\theta })$ and ${Q_{{X_{1}}}}$. Then the following are equivalent:
-
$(i)$ $Q\upharpoonright {\mathcal{F}_{t}}\sim P\upharpoonright {\mathcal{F}_{t}}$ for any $t\ge 0$;
-
$(ii)$ ${Q_{{X_{1}}}}\sim {P_{{X_{1}}}}$ and ${Q_{{W_{1}}}}\sim {P_{{W_{1}}}}$;
-
$(iii)$ there exists a ${P_{{X_{1}}}}$-a.s. unique function $\gamma \in {\mathcal{F}_{P,h}}$ such that
(RRM)
\[ Q(A)={\int _{A}}{M_{t}^{(\gamma )}}(\theta )\hspace{0.1667em}dP\hspace{1em}\textit{for all}\hspace{2.5pt}A\in {\mathcal{F}_{t}},\]
where
\[ {M_{t}^{(\gamma )}}(\theta ):=\Bigg[{\prod \limits_{j=1}^{{N_{t}}}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\cdot \frac{d{Q_{{W_{1}}}}}{d{P_{{W_{1}}}}}({W_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{{N_{t}}}})}{1-\mathbf{K}(\theta )(t-{T_{{N_{t}}}})},\]
and the family ${M^{(\gamma )}}(\theta ):={\{{M_{t}^{(\gamma )}}(\theta )\}_{t\in {\mathbb{R}_{+}}}}$ is a P-a.s. positive P-martingale satisfying the condition ${\mathbb{E}_{P}}[{M_{t}^{(\gamma )}}(\theta )]=1$.
Proof.
Fix an arbitrary $t\ge 0$.
For $(i)\Longrightarrow (ii)$: Statement ${Q_{{X_{1}}}}\sim {P_{{X_{1}}}}$ follows by [5, Lemma 2.1]. To show that ${Q_{{W_{1}}}}\sim {P_{{W_{1}}}}$, let $B\in \mathfrak{B}(\varUpsilon )$ be such that ${Q_{{W_{1}}}}(B)=0$. Since ${P_{{W_{1}}}}(B)={\lim \nolimits_{m\to \infty }}P({W_{1}^{-1}}(B)\cap \{{T_{1}}\le m\})$, ${W_{1}^{-1}}(B)\cap \{{T_{1}}\le m\}\in {\mathcal{F}_{m}}$ and $Q\upharpoonright {\mathcal{F}_{m}}\sim P\upharpoonright {\mathcal{F}_{m}}$, we get $P({W_{1}^{-1}}(B)\cap \{{T_{1}}\le m\})=0$ for any $m\in \mathbb{N}$, implying that ${P_{{W_{1}}}}(B)=0$. Interchanging the roles of P and Q yields ${Q_{{W_{1}}}}\sim {P_{{W_{1}}}}$.
For $(ii)\Longrightarrow (iii)$: Let $A\in {\mathcal{F}_{t}}$ be given. By Lemma 2.1, for every $k\in {\mathbb{N}_{0}}$ there exists a set ${B_{k}}\in \sigma ({\mathcal{F}_{k}^{W}}\cup {\mathcal{F}_{k}^{X}})$ such that $A\cap \{{N_{t}}=k\}={B_{k}}\cap \{{N_{t}}=k\}$. Thus, due to the fact that N has zero probability of explosion, we get
(2)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle Q(A)& \displaystyle =& \displaystyle {\sum \limits_{k=0}^{\infty }}Q\big({B_{k}}\cap \{{N_{t}}=k\}\big)\\ {} & \displaystyle =& \displaystyle {\sum \limits_{k=0}^{\infty }}Q\big({B_{k}}\cap \{{T_{k}}\le t\}\cap \{{W_{k+1}}>t-{T_{k}}\}\big).\end{array}\]
Fix an arbitrary $n\in {\mathbb{N}_{0}}$ and put $G:={\textstyle\bigcap _{j=1}^{n}}({W_{j}^{-1}}({E_{j}})\cap {X_{j}^{-1}}({F_{j}}))\cap \{{W_{n+1}}>t-{T_{n}}\}$ where ${E_{j}},{F_{j}}\in \mathfrak{B}(\varUpsilon )$ for any $j\in \{1,\dots ,n\}$. Then the set G satisfies the condition
(3)
\[ Q(G)={\int _{G}}\Bigg[{\prod \limits_{j=1}^{n}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\cdot \frac{d{Q_{{W_{1}}}}}{d{P_{{W_{1}}}}}({W_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{n}})}{1-\mathbf{K}(\theta )(t-{T_{n}})}\hspace{0.1667em}dP.\]
In fact, by Lemma 2.2 and Fubini’s Theorem we get
\[\begin{aligned}{}Q(G)& =\int \Bigg[{\prod \limits_{j=1}^{n}}\hspace{0.1667em}{\chi _{{F_{j}}}}({x_{j}})\cdot {\chi _{{E_{j}}}}({w_{j}})\cdot \big({h^{-1}}\circ \gamma \big)({x_{j}})\cdot r({w_{j}})\Bigg]\cdot \frac{Q(\{{W_{n+1}}>t-w\})}{P(\{{W_{n+1}}>t-w\})}\cdot \\ {} & \hspace{2em}P\big(\{{W_{n+1}}>t-w\}\big)\hspace{0.1667em}{P_{{X_{1}},\dots ,{X_{n}};{W_{1}},\dots ,{W_{n}}}}\big(d({x_{1}},\dots ,{x_{n}};{w_{1}},\dots ,{w_{n}})\big)\\ {} & =\int {\chi _{G}}\cdot \Bigg[{\prod \limits_{j=1}^{n}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\cdot r({W_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{n}})}{1-\mathbf{K}(\theta )(t-{T_{n}})}\hspace{0.1667em}dP,\end{aligned}\]
where $w:={\textstyle\sum _{j=1}^{n}}{w_{j}}$ and $r({w_{j}}):=\frac{d{Q_{{W_{1}}}}}{d{P_{{W_{1}}}}}({w_{j}})$ for any $j\in \{1,\dots ,n\}$; hence condition (3) follows. By a monotone class argument it can be shown that (3) remains valid for any $C\in \sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}})\cap \{{W_{n+1}}>t-{T_{n}}\}$. But since ${B_{k}}\cap \{{T_{k}}\le t\}\in \sigma ({\mathcal{F}_{k}^{W}}\cup {\mathcal{F}_{k}^{X}})$ for any $k\in {\mathbb{N}_{0}}$, conditions (2) and (3) imply
\[ Q(A)={\sum \limits_{k=0}^{\infty }}{\int _{{B_{k}}\cap \{{T_{k}}\le t\}\cap \{{W_{k+1}}>t-{T_{k}}\}}}\Bigg[{\prod \limits_{j=1}^{k}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\cdot r({W_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{k}})}{1-\mathbf{K}(\theta )(t-{T_{k}})}\hspace{0.1667em}dP,\]
implying
\[ Q(A)={\sum \limits_{k=0}^{\infty }}{\mathbb{E}_{P}}\Bigg[{\chi _{A\cap \{{N_{t}}=k\}}}\cdot \Bigg[{\prod \limits_{j=1}^{{N_{t}}}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \big)({X_{j}})\cdot r({W_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{{N_{t}}}})}{1-\mathbf{K}(\theta )(t-{T_{{N_{t}}}})}\Bigg].\]
Thus,
(4)
\[ Q(A)={\mathbb{E}_{P}}\big[{\chi _{A}}\cdot {M_{t}^{(\gamma )}}(\theta )\big]\hspace{1em}\text{for all}\hspace{2.5pt}A\in {\mathcal{F}_{t}},\]
Since condition (4) holds true for any $t\ge 0$, applying it for both u and t we get
\[ {\int _{A}}{M_{u}^{(\gamma )}}(\theta )\hspace{0.1667em}dP={\int _{A}}{M_{t}^{(\gamma )}}(\theta )\hspace{0.1667em}dP\hspace{1em}\text{for all}\hspace{0.1667em}\hspace{0.1667em}u\in [0,t]\hspace{0.1667em}\hspace{0.1667em}\text{and}\hspace{0.1667em}\hspace{0.1667em}A\in {\mathcal{F}_{u}};\]
hence ${M^{(\gamma )}}(\theta )$ is a P-martingale. The latter, together with condition (4), proves condition (RRM). By (RRM) for $A=\varOmega $ we obtain
\[ {\mathbb{E}_{P}}\big[{M_{t}^{(\gamma )}}(\theta )\big]={\int _{\varOmega }}{M_{t}^{(\gamma )}}(\theta )\hspace{0.1667em}dP=Q(\varOmega )=1.\]
Note that $\frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-{T_{{N_{t}}}})}{1-\mathbf{K}(\theta )(t-{T_{{N_{t}}}})}$ is P-a.s. positive. The latter, together with the fact that ${h^{-1}}\circ \gamma $ and r are ${P_{{X_{1}}}}$- and ${P_{{W_{1}}}}$-a.s. positive functions, respectively, implies $P(\{{M_{t}^{(\gamma )}}(\theta )>0\})=1$.
The implication $(iii)\Longrightarrow (i)$ is immediate. □
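The normalization ${\mathbb{E}_{P}}[{M_{t}^{(\gamma )}}(\theta )]=1$ can also be checked numerically. The sketch below is a Monte Carlo illustration under simplifying assumptions of our own choosing ($\mathbf{K}(\theta )=\mathbf{Exp}(\theta )$, $\boldsymbol{\Lambda }(\widetilde{\theta })=\mathbf{Exp}(\widetilde{\theta })$, $h:=\ln$ and $\gamma \equiv 0$, so the claim-size factor drops out); it evaluates, along each simulated path, the product of interarrival density ratios times the survival-function ratio.

```python
import math
import random

def martingale_density(t, theta, theta_tilde, rng):
    """One draw of M_t^(gamma)(theta) under P, assuming Exp(theta) interarrival
    times, Lambda = Exp(theta_tilde) and gamma == 0 (claim law unchanged)."""
    arrival, m = 0.0, 1.0
    while True:
        w = rng.expovariate(theta)
        if arrival + w > t:          # W_{N_t + 1} exceeds t - T_{N_t}
            break
        arrival += w
        # density ratio dLambda/dK evaluated at the interarrival time w
        m *= (theta_tilde / theta) * math.exp(-(theta_tilde - theta) * w)
    # survival-function ratio (1 - Lambda(t - T_{N_t})) / (1 - K(t - T_{N_t}))
    m *= math.exp(-(theta_tilde - theta) * (t - arrival))
    return m

rng = random.Random(0)
sample = [martingale_density(5.0, 1.0, 1.5, rng) for _ in range(20000)]
print(sum(sample) / len(sample))     # ≈ 1, illustrating E_P[M_t] = 1
```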
Proposition 2.1 allows us to explicitly calculate Radon–Nikodým derivatives for the most important insurance risk processes, as the following two examples illustrate. In the first example we consider the case of the Poisson process with parameter θ.
Example 2.1.
Take $h:=\ln $, $\theta ,\widetilde{\theta }\in D:=\varUpsilon $, and let $P\in {\mathcal{M}_{S,\mathbf{Exp}(\theta )}}$ and $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$. By Proposition 2.1 there exists a ${P_{{X_{1}}}}$-a.s. unique function $\gamma \in {\mathcal{F}_{P,\ln }}$ defined by means of $\gamma :=\ln f$, where f is a Radon–Nikodým derivative of ${Q_{{X_{1}}}}$ with respect to ${P_{{X_{1}}}}$, such that for all $A\in {\mathcal{F}_{t}}$
\[ Q(A)={\int _{A}}{e^{{\textstyle\textstyle\sum _{j=1}^{{N_{t}}}}\gamma ({X_{j}})}}\cdot {\bigg(\frac{\widetilde{\theta }}{\theta }\bigg)^{{N_{t}}}}\cdot {e^{-t\cdot (\widetilde{\theta }-\theta )}}\hspace{0.1667em}dP.\]
In our next example we consider a renewal process with gamma distributed interarrival times.
Example 2.2.
Assume that $h:=\ln $, $\theta =({\xi _{1}},{\kappa _{1}})\in D:=\varUpsilon \times \mathbb{N}$, $\widetilde{\theta }=({\xi _{2}},{\kappa _{2}})\in D$, and let $P\in {\mathcal{M}_{S,\mathbf{Ga}(\theta )}}$ and $Q\in {\mathcal{M}_{S,\mathbf{Ga}(\widetilde{\theta })}}$. By Proposition 2.1 there exists a ${P_{{X_{1}}}}$-a.s. unique function $\gamma \in {\mathcal{F}_{P,\ln }}$ such that for all $A\in {\mathcal{F}_{t}}$
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle Q(A)& \displaystyle =& \displaystyle {\int _{A}}{e^{{\textstyle\textstyle\sum _{j=1}^{{N_{t}}}}\gamma ({X_{j}})}}\cdot {\bigg(\frac{{\xi _{2}^{{\kappa _{2}}}}\cdot \Gamma ({\kappa _{1}})}{{\xi _{1}^{{\kappa _{1}}}}\cdot \Gamma ({\kappa _{2}})}\bigg)^{{N_{t}}}}\cdot {e^{-t\cdot ({\xi _{2}}-{\xi _{1}})}}\cdot \frac{{\textstyle\textstyle\sum _{i=0}^{{\kappa _{2}}-1}}\frac{{({\xi _{2}}\cdot (t-{T_{{N_{t}}}}))^{i}}}{i!}}{{\textstyle\textstyle\sum _{i=0}^{{\kappa _{1}}-1}}\frac{{({\xi _{1}}\cdot (t-{T_{{N_{t}}}}))^{i}}}{i!}}\\ {} & & \displaystyle \hspace{2em}\hspace{2em}\cdot {\prod \limits_{j=1}^{{N_{t}}}}{W_{j}^{{\kappa _{2}}-{\kappa _{1}}}}\hspace{0.1667em}dP.\end{array}\]
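For integer shape parameters the survival functions entering the last factor above are Erlang tails, $1-\mathbf{Ga}(\xi ,\kappa )(s)={e^{-\xi s}}{\textstyle\sum _{i=0}^{\kappa -1}}{(\xi s)^{i}}/i!$, so that factor can be evaluated directly. A small Python sketch (function names are ours, for illustration only):

```python
import math

def erlang_survival(s, xi, kappa):
    """P(W > s) for W ~ Ga(xi, kappa) with integer shape kappa (Erlang tail)."""
    return math.exp(-xi * s) * sum((xi * s) ** i / math.factorial(i) for i in range(kappa))

def survival_ratio(s, xi1, kappa1, xi2, kappa2):
    """(1 - Ga(xi2, kappa2)(s)) / (1 - Ga(xi1, kappa1)(s)),
    evaluated at s = t - T_{N_t}."""
    return erlang_survival(s, xi2, kappa2) / erlang_survival(s, xi1, kappa1)

print(survival_ratio(0.0, 1.0, 2, 1.5, 3))   # = 1 at s = 0
print(survival_ratio(2.0, 1.0, 2, 1.5, 3))
```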
3 The characterization
We know from Proposition 2.1 that under the weak conditions ${Q_{{X_{1}}}}\sim {P_{{X_{1}}}}$ and ${Q_{{W_{1}}}}\sim {P_{{W_{1}}}}$, the measures P and Q are equivalent on each σ-algebra ${\mathcal{F}_{t}}$, a result that does not, in general, hold true for ${\mathcal{F}_{\infty }}:=\sigma ({\textstyle\bigcup _{t\in {\mathbb{R}_{+}}}}{\mathcal{F}_{t}})$. Let us start with the following helpful lemma.
Lemma 3.1.
Let $P\in {\mathcal{M}_{S,\mathbf{K}(\theta )}}$. Then ${\mathcal{F}_{\infty }}={\mathcal{F}_{\infty }^{(W,X)}}:=\sigma ({\textstyle\bigcup _{n\in \mathbb{N}}}\sigma ({\mathcal{F}_{n}^{W}}\cup {\mathcal{F}_{n}^{X}}))$.
Proof.
Inclusion ${\mathcal{F}_{\infty }}\subseteq {\mathcal{F}_{\infty }^{(W,X)}}$ follows immediately by Lemma 2.1 and the fact that N has zero probability of explosion.
To check the validity of the inverse inclusion, fix an arbitrary $n\in {\mathbb{N}_{0}}$. Since ${X_{n}}$ is ${\mathcal{F}_{{T_{n}}}}$-measurable, we get ${X_{n}^{-1}}(B)\cap \{{T_{n}}\le \ell \}\in {\mathcal{F}_{\infty }}$ for all $B\in \mathfrak{B}(\varUpsilon )$ and $\ell \in {\mathbb{N}_{0}}$; hence ${X_{n}^{-1}}(B)\in {\mathcal{F}_{\infty }}$, implying together with the ${\mathcal{F}_{\infty }}$-measurability of ${T_{n}}$ that ${\mathcal{F}_{\infty }^{(W,X)}}\subseteq {\mathcal{F}_{\infty }}$. □
Note that the above lemma remains true, without the assumption $P\in {\mathcal{M}_{S,\mathbf{K}(\theta )}}$, under the weaker assumption that N has zero probability of explosion.
Remark 3.1.
Let $Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$. If ${P_{{X_{1}}}}\ne {Q_{{X_{1}}}}$ or ${P_{{W_{1}}}}\ne {Q_{{W_{1}}}}$, then applying Lemma 3.1 together with the strong law of large numbers, it can easily be seen that the probability measures P and Q are singular on ${\mathcal{F}_{\infty }}$; consequently, P and Q are equivalent on ${\mathcal{F}_{\infty }}$ if and only if $P\upharpoonright {\mathcal{F}_{\infty }}=Q\upharpoonright {\mathcal{F}_{\infty }}$, which holds if and only if ${P_{{X_{1}}}}={Q_{{X_{1}}}}$ and ${P_{{W_{1}}}}={Q_{{W_{1}}}}$.
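The role of the strong law of large numbers here can be made concrete: the empirical mean of the interarrival times converges P-a.s. to ${\mathbb{E}_{P}}[{W_{1}}]$ and Q-a.s. to ${\mathbb{E}_{Q}}[{W_{1}}]$, so when these limits differ the corresponding tail event separates P from Q on ${\mathcal{F}_{\infty }}$. A simulation sketch with illustrative exponential laws of our own choosing:

```python
import random

def mean_interarrival(n, rate, rng):
    """Empirical mean of the first n interarrival times under an Exp(rate) law."""
    return sum(rng.expovariate(rate) for _ in range(n)) / n

rng = random.Random(42)
m_p = mean_interarrival(100000, 1.0, rng)   # under P: converges to E_P[W_1] = 1
m_q = mean_interarrival(100000, 2.0, rng)   # under Q: converges to E_Q[W_1] = 1/2
print(m_p, m_q)
```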
Before we formulate the converse of Proposition 2.1 (i.e. that for a given function $\gamma \in {\mathcal{F}_{P,h}}$ there exists a unique probability measure $Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$ satisfying (RRM)), let us recall a simple construction of canonical probability spaces admitting compound renewal processes.
By $(\varOmega \times \varXi ,\varSigma \otimes H,P\otimes R)$ we denote the product probability space of the probability spaces $(\varOmega ,\varSigma ,P)$ and $(\varXi ,H,R)$. If I is an arbitrary nonempty index set, we write ${P_{I}}$ for the product measure on ${\varOmega ^{I}}$ and ${\varSigma _{I}}$ for its domain.
Throughout what follows, we put $\widetilde{\varOmega }:={\varUpsilon ^{\mathbb{N}}}$, $\widetilde{\varSigma }:=\mathfrak{B}(\widetilde{\varOmega })=\mathfrak{B}{(\varUpsilon )_{\mathbb{N}}}$, $\varOmega :=\widetilde{\varOmega }\times \widetilde{\varOmega }$ and $\varSigma :=\widetilde{\varSigma }\otimes \widetilde{\varSigma }$ for simplicity.
For all $n\in \mathbb{N}$ and for any fixed $\theta \in D\subseteq {\mathbb{R}^{d}}$, let ${Q_{n}}(\theta ):=\mathbf{K}(\theta )$ and ${R_{n}}:=R$ be probability measures on $\mathfrak{B}(\varUpsilon )$. Define the probability measure P on Σ by means of $P:=\mathbf{K}{(\theta )_{\mathbb{N}}}\otimes {R_{\mathbb{N}}}$, and for any $\omega =({w_{1}},\dots ,{w_{n}},\dots ;{x_{1}},\dots ,{x_{n}},\dots )\in \varOmega $ put ${W_{n}}(\omega ):={w_{n}}$ and ${X_{n}}(\omega ):={x_{n}}$. It then follows that $X:={\{{X_{n}}\}_{n\in \mathbb{N}}}$ is a claim size process satisfying the condition ${P_{{X_{n}}}}=R$ for any $n\in \mathbb{N}$, and that $W:={\{{W_{n}}\}_{n\in \mathbb{N}}}$ is a P-independent claim interarrival process with ${P_{{W_{n}}}}=\mathbf{K}(\theta )$ for any $n\in \mathbb{N}$. Putting ${T_{n}}:={\textstyle\sum _{k=1}^{n}}{W_{k}}$ for any $n\in {\mathbb{N}_{0}}$ and $T:={\{{T_{n}}\}_{n\in {\mathbb{N}_{0}}}}$, we define by means of ${N_{t}}:={\textstyle\sum _{n=1}^{\infty }}{\chi _{\{{T_{n}}\le t\}}}$ for any $t\ge 0$ the counting process $N:={\{{N_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ induced by T (cf. e.g. [17, Theorem 2.1.1]). Setting ${S_{t}}:={\textstyle\sum _{n=1}^{{N_{t}}}}{X_{n}}$ for any $t\ge 0$ and $S:={\{{S_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ we get that S is a P-CRP with parameters $\mathbf{K}(\theta )$ and ${P_{{X_{1}}}}$. Moreover, according to Lemma 3.1 we have that $\varSigma ={\mathcal{F}_{\infty }^{(W,X)}}={\mathcal{F}_{\infty }}$.
The following proposition shows that after changing the measure the process S remains a compound renewal one if the Radon–Nikodým derivative has the “right” structure on each σ-algebra ${\mathcal{F}_{t}}$. To formulate it, we use the following notation and assumption.
Notation 3.1.
Let $\mathbf{K}(\theta )$ and $\boldsymbol{\Lambda }(\widetilde{\theta })$ be probability distributions on $\mathfrak{B}(\varUpsilon )$ such that $\mathbf{K}(\theta )\sim \boldsymbol{\Lambda }(\widetilde{\theta })$. For any $n\in {\mathbb{N}_{0}}$ the class of all likelihood ratios ${g_{n}}:={g_{\theta ,\widetilde{\theta },n}}:{\varUpsilon ^{n+1}}\to \varUpsilon $ defined by means of
\[ {g_{n}}({w_{1}},\dots ,{w_{n}},t):=\Bigg[{\prod \limits_{j=1}^{n}}\frac{d\boldsymbol{\Lambda }(\widetilde{\theta })}{d\mathbf{K}(\theta )}({w_{j}})\Bigg]\cdot \frac{1-\boldsymbol{\Lambda }(\widetilde{\theta })(t-w)}{1-\mathbf{K}(\theta )(t-w)}\]
for any $({w_{1}},\dots ,{w_{n}},t)\in {\varUpsilon ^{n+1}}$, where $w:={\textstyle\sum _{j=1}^{n}}{w_{j}}$, will be denoted by ${\mathcal{G}_{n,\theta ,\widetilde{\theta }}}$. Notation ${\mathcal{G}_{\theta ,\widetilde{\theta }}}$ stands for the set $\{g={\{{g_{n}}\}_{n\in {\mathbb{N}_{0}}}}:{g_{n}}\in {\mathcal{G}_{n,\theta ,\widetilde{\theta }}}\hspace{2.5pt}\text{for any}\hspace{2.5pt}n\in {\mathbb{N}_{0}}\}$ of all sequences of elements of ${\mathcal{G}_{n,\theta ,\widetilde{\theta }}}$.
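The likelihood ratios ${g_{n}}$ are straightforward to evaluate numerically. The sketch below implements the generic formula and then specializes it to $\mathbf{K}(\theta )=\mathbf{Exp}(\theta )$ and $\boldsymbol{\Lambda }(\widetilde{\theta })=\mathbf{Exp}(\widetilde{\theta })$ (an illustrative choice of ours; the function names are not from the text):

```python
import math

def g_n(ws, t, dlam_dk, lam_sf, k_sf):
    """Likelihood ratio g_n(w_1, ..., w_n, t): product of interarrival density
    ratios times the survival-function ratio at t - (w_1 + ... + w_n)."""
    w = sum(ws)
    prod = 1.0
    for wj in ws:
        prod *= dlam_dk(wj)
    return prod * lam_sf(t - w) / k_sf(t - w)

# Exponential specialisation K = Exp(theta), Lambda = Exp(theta_tilde):
theta, theta_tilde = 1.0, 2.0
ratio = g_n(
    [0.3, 0.5, 0.4], 2.0,
    dlam_dk=lambda w: (theta_tilde / theta) * math.exp(-(theta_tilde - theta) * w),
    lam_sf=lambda s: math.exp(-theta_tilde * s),
    k_sf=lambda s: math.exp(-theta * s),
)
print(ratio)   # equals (theta_tilde/theta)**3 * exp(-t*(theta_tilde-theta))
```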
Throughout what follows $\mathbf{K}(\theta )$, $\boldsymbol{\Lambda }(\widetilde{\theta })$ and $g\in {\mathcal{G}_{\theta ,\widetilde{\theta }}}$ are as in Notation 3.1, and P, S are those constructed before Notation 3.1.
Proposition 3.1.
Let $\gamma \in {\mathcal{F}_{P,h}}$. Then for all $A\in {\mathcal{F}_{t}}$ the condition
\[ Q(A)={\int _{A}}\Bigg[{\prod \limits_{j=1}^{{N_{t}}}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \circ {X_{j}}\big)\Bigg]\cdot {g_{{N_{t}}}}({W_{1}},\dots ,{W_{{N_{t}}}},t)\hspace{0.1667em}dP\]
determines a unique probability measure $Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$.
Proof.
Fix an arbitrary $t\ge 0$, and define the set-functions ${\widehat{Q}_{n}}(\theta ),\widehat{R}:\mathfrak{B}(\varUpsilon )\to \mathbb{R}$ by means of ${\widehat{Q}_{n}}(\theta )({B_{1}}):={\mathbb{E}_{P}}[{\chi _{{W_{1}^{-1}}({B_{1}})}}\cdot (\frac{d\boldsymbol{\Lambda }(\widetilde{\theta })}{d\mathbf{K}(\theta )}\circ {W_{1}})]$ and $\widehat{R}({B_{2}}):={\mathbb{E}_{P}}[{\chi _{{X_{1}^{-1}}({B_{2}})}}\cdot ({h^{-1}}\circ \gamma \circ {X_{1}})]$ for any ${B_{1}},{B_{2}}\in \mathfrak{B}(\varUpsilon )$, respectively. Applying a monotone class argument it can be seen that ${\widehat{Q}_{n}}(\theta )=\boldsymbol{\Lambda }(\widetilde{\theta })$, while Lemma 2.2 (a) $(i)$ implies that $\widehat{R}$ is a probability measure. Therefore, we may construct a probability measure $\widehat{Q}:=\boldsymbol{\Lambda }{(\widetilde{\theta })_{\mathbb{N}}}\otimes {\widehat{R}_{\mathbb{N}}}$ on Σ such that S is a $\widehat{Q}$-CRP with parameters $\boldsymbol{\Lambda }(\widetilde{\theta })$ and ${\widehat{Q}_{{X_{1}}}}=\widehat{R}$, implying that ${\widehat{Q}_{{X_{1}}}}\sim {P_{{X_{1}}}}$ and ${\widehat{Q}_{{W_{1}}}}\sim {P_{{W_{1}}}}$. Applying now Proposition 2.1 we obtain $\widehat{Q}\upharpoonright {\mathcal{F}_{t}}\sim P\upharpoonright {\mathcal{F}_{t}}$, or equivalently
\[ \widehat{Q}(A)={\int _{A}}\Bigg[{\prod \limits_{j=1}^{{N_{t}}}}\hspace{0.1667em}\big({h^{-1}}\circ \gamma \circ {X_{j}}\big)\Bigg]\cdot {g_{{N_{t}}}}({W_{1}},\dots ,{W_{{N_{t}}}},t)\hspace{0.1667em}dP\]
for all $A\in {\mathcal{F}_{t}}$. Thus $Q\upharpoonright {\mathcal{F}_{t}}=\widehat{Q}\upharpoonright {\mathcal{F}_{t}}$; hence $Q\upharpoonright \widehat{\varSigma }=\widehat{Q}\upharpoonright \widehat{\varSigma }$, where $\widehat{\varSigma }:={\textstyle\bigcup _{t\in {\mathbb{R}_{+}}}}{\mathcal{F}_{t}}$, implying that Q is σ-additive on $\widehat{\varSigma }$ and that $\widehat{Q}$ is the unique extension of Q to $\varSigma =\sigma (\widehat{\varSigma })$. □

The next result is the desired characterization. Its proof is an immediate consequence of Propositions 2.1 and 3.1.
Theorem 3.1.
The following hold true:
- $(i)$ for any $Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$ there exists a ${P_{{X_{1}}}}$-a.s. unique function $\gamma \in {\mathcal{F}_{P,h}}$ satisfying condition (RRM);
- $(ii)$ conversely, for any function $\gamma \in {\mathcal{F}_{P,h}}$ there exists a unique probability measure $Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$ satisfying condition (RRM).
In order to formulate the next results of this section, let us denote by ${\widetilde{\mathcal{F}}_{P,\theta }}$ the class of all real-valued $\mathfrak{B}(\varUpsilon )$-measurable functions ${\beta _{\theta }}$, such that ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$, where $\gamma \in {\mathcal{F}_{P,\ln }}$ and ${\alpha _{\theta }}$ is a real number depending on θ.
The following result allows us to convert any compound renewal process into a compound Poisson one through a change of measure.
Corollary 3.1.
If ${W_{1}}\in {\mathcal{L}^{1}}(P)$ then the following hold true:
- $(i)$ for any $\widetilde{\theta }\in \varUpsilon $ and any probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$ there exists a ${P_{{X_{1}}}}$-a.s. unique function ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$ satisfying together with Q condition (*) and the condition
(RPM)
\[ Q(A)={\int _{A}}{M_{t}^{(\beta )}}(\theta )\hspace{0.1667em}dP\hspace{1em}\textit{for all}\hspace{2.5pt}A\in {\mathcal{F}_{t}};\]
- $(ii)$ conversely, for any function ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$ there exist a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (RPM).
Proof.
Fix an arbitrary $t\ge 0$.
For $(i)$: Under the assumptions of statement $(i)$, according to Theorem 3.1 $(i)$ there exists a ${P_{{X_{1}}}}$-a.s. unique function $\gamma \in {\mathcal{F}_{P,\ln }}$ defined by means of $\gamma :=\ln f$, where f is a Radon–Nikodým derivative of ${Q_{{X_{1}}}}$ with respect to ${P_{{X_{1}}}}$, such that
(5)
\[ Q(A)={\int _{A}}\frac{{e^{{\textstyle\textstyle\sum _{j=1}^{{N_{t}}}}\gamma ({X_{j}})-\widetilde{\theta }\cdot (t-{T_{{N_{t}}}})}}}{1-\mathbf{K}(\theta )(t-{T_{{N_{t}}}})}\cdot \Bigg[{\prod \limits_{j=1}^{{N_{t}}}}\hspace{0.1667em}\frac{d{Q_{{W_{1}}}}}{d{P_{{W_{1}}}}}({W_{j}})\Bigg]\hspace{0.1667em}dP\]
for all $A\in {\mathcal{F}_{t}}$. Define ${\alpha _{\theta }}:=\ln \widetilde{\theta }+\ln {\mathbb{E}_{P}}[{W_{1}}]$, and put ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$. It then follows that ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$ and that condition (*) is valid. The latter together with condition (5) implies condition (RPM).

For $(ii)$: Let ${\beta _{\theta }}=\gamma +{\alpha _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$ and define $\widetilde{\theta }:=\frac{{e^{{\alpha _{\theta }}}}}{{\mathbb{E}_{P}}[{W_{1}}]}$. By Theorem 3.1 $(ii)$, for the function $\gamma ={\beta _{\theta }}-{\alpha _{\theta }}$ there exists a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$ satisfying condition (RRM), or equivalently condition (RPM). □
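The parameter bookkeeping in this proof is easy to sanity-check numerically: part $(i)$ forms ${\alpha _{\theta }}=\ln \widetilde{\theta }+\ln {\mathbb{E}_{P}}[{W_{1}}]$, and part $(ii)$ recovers $\widetilde{\theta }={e^{{\alpha _{\theta }}}}/{\mathbb{E}_{P}}[{W_{1}}]$. A minimal Python sketch, with all parameter values hypothetical:

```python
import math

# Hypothetical numbers: interarrival times Gamma(rate xi, shape k) under P.
xi, k = 3.0, 2.0
mean_W1 = k / xi                    # E_P[W_1] = 2/3

theta_tilde = 1.5                   # target Poisson rate under Q
# part (i): alpha_theta := ln(theta_tilde) + ln(E_P[W_1])
alpha_theta = math.log(theta_tilde) + math.log(mean_W1)
# part (ii): the Poisson rate recovered from alpha_theta
recovered = math.exp(alpha_theta) / mean_W1
assert abs(recovered - theta_tilde) < 1e-12
```

The round trip shows that the constant ${\alpha _{\theta }}$ alone carries the change of the interarrival distribution, while γ carries the change of the claim size distribution.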
Remark 3.3.
(a) In the special case $P\in {\mathcal{M}_{S,\mathbf{Exp}(\theta )}}$, Corollary 3.1 yields the main result of Delbaen and Haezendonck [5, Proposition 2.2].
(b) Theorem 3.1 remains true if we replace the classes ${\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}$ and ${\mathcal{F}_{P,h}}$ by their subclasses ${\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}^{\ell }}:=\{Q\in {\mathcal{M}_{S,\boldsymbol{\Lambda }(\widetilde{\theta })}}:{\mathbb{E}_{Q}}[{X_{1}^{\ell }}]<\infty \}$ and ${\mathcal{F}_{P,h}^{\ell }}:=\{\gamma \in {\mathcal{F}_{P,h}}:{\mathbb{E}_{P}}[{X_{1}^{\ell }}\cdot ({h^{-1}}\circ \gamma \circ {X_{1}})]<\infty \}$ for $\ell =1,2$, respectively. As a consequence, Corollary 3.1 remains true if we replace the class ${\widetilde{\mathcal{F}}_{P,\theta }}$ by its subclass ${\widetilde{\mathcal{F}}_{P,\theta }^{\ell }}:=\{{\beta _{\theta }}=\gamma +{\alpha _{\theta }}:\gamma \in {\mathcal{F}_{P,\ln }^{\ell }}\hspace{2.5pt}\text{and}\hspace{2.5pt}{\alpha _{\theta }}\in \mathbb{R}\}$ for $\ell =1,2$.
The following example translates the results of Corollary 3.1 to a well-known compound renewal process appearing in applications.
Example 3.1.
Fix an arbitrary $t\ge 0$, let $\theta :=(\xi ,2)\in D:={\varUpsilon ^{2}}$, and let $P\in {\mathcal{M}_{S,\mathbf{Ga}(\theta )}}$ such that ${P_{{X_{1}}}}=\mathbf{Ga}(\eta )$, where $\eta :=(b,2)\in D$. Let $\widetilde{\theta }\in \varUpsilon $ and $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$ such that ${Q_{{X_{1}}}}=\mathbf{Exp}(\zeta )$, where ζ is a positive real constant. By Corollary 3.1 $(i)$, there exists a ${P_{{X_{1}}}}$-a.s. unique function ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$, where $\gamma (x):=\ln \frac{\zeta \cdot {e^{-\zeta \cdot x}}}{{b^{2}}\cdot x\cdot {e^{-b\cdot x}}}$ for any $x\in \varUpsilon $ and ${\alpha _{\theta }}:=\ln \widetilde{\theta }+\ln {\mathbb{E}_{P}}[{W_{1}}]=\ln \frac{2\cdot \widetilde{\theta }}{\xi }$, satisfying together with Q the condition
(6)
\[ Q(A)={\int _{A}}\hspace{0.1667em}{\bigg(\frac{1}{2\xi }\bigg)^{{N_{t}}}}\cdot \frac{{e^{{\textstyle\textstyle\sum _{j=1}^{{N_{t}}}}{\beta _{\theta }}({X_{j}})-t\cdot \widetilde{\theta }+t\xi }}}{[{\textstyle\textstyle\prod _{j=1}^{{N_{t}}}}{W_{j}}]\cdot (1+\xi \cdot (t-{T_{{N_{t}}}}))}\hspace{0.1667em}dP\]
for all $A\in {\mathcal{F}_{t}}$.
Conversely, let ζ be as above and consider the function ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$, where $\gamma (x):=\ln \frac{\zeta \cdot {e^{-\zeta \cdot x}}}{{b^{2}}\cdot x\cdot {e^{-b\cdot x}}}$ for any $x\in \varUpsilon $ and ${\alpha _{\theta }}\in \mathbb{R}$. It then follows easily that ${\mathbb{E}_{P}}[{e^{\gamma ({X_{1}})}}]=1$, implying that $\gamma \in {\mathcal{F}_{P,\ln }}$; hence ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }}$. Thus, we may apply Corollary 3.1 $(ii)$ to get a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (6). But then applying Lemma 2.2 (a), we get
\[ {Q_{{X_{1}}}}(B)={\mathbb{E}_{P}}\big[{\chi _{{X_{1}^{-1}}(B)}}\cdot {e^{\gamma ({X_{1}})}}\big]={\int _{B}}\zeta \cdot {e^{-\zeta \cdot x}}\hspace{0.1667em}\lambda (dx)\hspace{1em}\text{for any}\hspace{2.5pt}B\in \mathfrak{B}(\varUpsilon ),\]
implying that ${Q_{{X_{1}}}}=\mathbf{Exp}(\zeta )$.
4 Applications
In this section we first show that a martingale approach to premium calculation principles leads in the case of CRPs to CPPs, providing in this way a method to find progressively equivalent martingale measures. Next, using our results we show that if ${\widetilde{\mathcal{F}}_{P,\theta }^{2}}\ne \varnothing $ then there exist canonical price processes (called claim surplus processes in Risk Theory) satisfying the condition of no free lunch with vanishing risk.
In order to present the results of this section we recall the following notions. For a given real-valued process $Y:={\{{Y_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ on $(\varOmega ,\varSigma )$ a probability measure Q on Σ is called a martingale measure for Y, if Y is a Q-martingale. We will say that Y satisfies condition (PEMM) if there exists a progressively equivalent martingale measure (PEMM for short) for Y, i.e. a probability measure Q on Σ such that $Q\upharpoonright {\mathcal{F}_{t}}\sim P\upharpoonright {\mathcal{F}_{t}}$ for any $t\ge 0$ and Y is a Q-martingale. Moreover, let $T>0$, $\mathbb{T}:=[0,T]$, ${Q_{T}}:=Q\upharpoonright {\mathcal{F}_{T}}$, ${Y_{\mathbb{T}}}:={\{{Y_{t}}\}_{t\in \mathbb{T}}}$ and ${\mathcal{F}_{\mathbb{T}}}:={\{{\mathcal{F}_{t}}\}_{t\in \mathbb{T}}}$. We will say that the process ${Y_{\mathbb{T}}}$ satisfies condition (EMM) if there exists an equivalent martingale measure for ${Y_{\mathbb{T}}}$, i.e. a probability measure ${Q_{T}}$ on ${\mathcal{F}_{T}}$ such that ${Q_{T}}\sim {P_{T}}$ and ${Y_{\mathbb{T}}}$ is a $({Q_{T}},{\mathcal{F}_{\mathbb{T}}})$-martingale.
Suppose that ${X_{1}},{W_{1}}\in {\mathcal{L}^{1}}(P)$ and define the premium density as
\[ p(P):=\frac{{\mathbb{E}_{P}}[{X_{1}}]}{{\mathbb{E}_{P}}[{W_{1}}]}.\]
Consider the process $Z(P):={\{{Z_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ with ${Z_{t}}:={S_{t}}-t\cdot p(P)$ for any $t\ge 0$. The following auxiliary result could be of independent interest, since it says that if S is under P a CRP and the process $Z(P)$ is a P-martingale, then ${N_{t}}$ must have a Poisson distribution so that S is actually a CPP.
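The dichotomy behind this auxiliary result can be illustrated by simulation: for exponential interarrival times ${\mathbb{E}_{P}}[{N_{t}}]$ is exactly linear in t, whereas for a general renewal process it is not, so the centering $t\cdot p(P)$ in $Z(P)$ can only produce a martingale in the Poisson case. A Monte Carlo sketch in Python (all parameter values hypothetical):

```python
import random

random.seed(0)

def mean_Nt(sample_W, t, n_paths=20000):
    """Monte Carlo estimate of E[N_t] for a renewal process whose
    interarrival times are drawn by sample_W."""
    total = 0
    for _ in range(n_paths):
        s, n = 0.0, 0
        while True:
            s += sample_W()
            if s > t:
                break
            n += 1
        total += n
    return total / n_paths

t = 1.0
# Exp(2) interarrivals: E[W_1] = 1/2 and E[N_t] = t / E[W_1] = 2t exactly.
est_exp = mean_Nt(lambda: random.expovariate(2.0), t)
# Gamma(shape 2, scale 1/4) interarrivals: the same mean 1/2, but E[N_1] < 2.
est_gamma = mean_Nt(lambda: random.gammavariate(2.0, 0.25), t)
assert abs(est_exp - 2.0) < 0.1
assert est_gamma < 1.9
```

The gamma renewal process undershoots $t/{\mathbb{E}_{P}}[{W_{1}}]$ at $t=1$, so ${\mathbb{E}_{P}}[{Z_{t}}]\ne 0$ there, in line with the proposition below.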
Proposition 4.1.
Let ${X_{1}},{W_{1}}\in {\mathcal{L}^{1}}(P)$ and put $\widetilde{\theta }:=\frac{1}{{\mathbb{E}_{P}}[{W_{1}}]}$. Then statements $(i)$ and $(ii)$ are equivalent, as are statements $(iii)$ and $(iv)$:
- $(i)$ P is a martingale measure for $Z(P)$;
- $(ii)$ $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{1}}$;
- $(iii)$ P is a martingale measure for $Z(P)$ and ${Z_{t}}\in {\mathcal{L}^{2}}(P)$ for any $t\ge 0$;
- $(iv)$ $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$.
Proof.
Fix an arbitrary $t\ge 0$.
For $(i)\Longrightarrow (ii)$: Since P is martingale measure for $Z(P)$, we have ${\mathbb{E}_{P}}[{Z_{t}}]=0$ implying ${\mathbb{E}_{P}}[{S_{t}}]=t\cdot \frac{{\mathbb{E}_{P}}[{X_{1}}]}{{\mathbb{E}_{P}}[{W_{1}}]}$, or equivalently ${\mathbb{E}_{P}}[{N_{t}}]=t\cdot \widetilde{\theta }$.
Claim. Let N be a P-renewal process with interarrival time distribution $\mathbf{K}(\theta )$. Then the following statements are equivalent: $(a)$ N is a P-Poisson process with parameter θ; $(b)$ ${\mathbb{E}_{P}}[{N_{t}}]=t\theta $ for any $t\ge 0$.
Proof.
The above claim is well known (cf. e.g. [18, Remark 21, p. 110]), but since we have not seen its proof anywhere, we include it for completeness. The implication $(a)\Longrightarrow (b)$ is immediate.
For $(b)\Longrightarrow (a)$: To prove this implication, let us recall that the renewal function associated with the distribution $\mathbf{K}(\theta )$ is defined by
\[ U(u):={\sum \limits_{n=0}^{\infty }}{\mathbf{K}^{\ast n}}(\theta )(u)\hspace{1em}\text{for any}\hspace{2.5pt}u\in \mathbb{R}\]
where ${\mathbf{K}^{\ast n}}(\theta )$ is the n-fold convolution of $\mathbf{K}(\theta )$ (cf. e.g. [18, Definition 17, p. 108]). Clearly $U(u)=1+{\mathbb{E}_{P}}[{N_{u}}]$ for any $u\ge 0$. Assuming that ${\mathbb{E}_{P}}[{N_{t}}]=t\theta $, we get $U(t)=1+t\theta $, implying that the Laplace–Stieltjes transform $\widehat{U}(s)$ of $U(t)$ is given by
\[ \widehat{U}(s)={\int _{{\mathbb{R}_{+}}}}{e^{-s\cdot u}}\hspace{0.1667em}dU(u)={e^{-s\cdot 0}}\cdot U(0)+{\int _{0}^{\infty }}\theta {e^{-s\cdot u}}\hspace{0.1667em}du=\frac{s+\theta }{s}\hspace{1em}\text{for every}\hspace{2.5pt}s>0,\]
where the second equality follows from the fact that ${\textstyle\int _{{\mathbb{R}_{+}}}}{e^{-s\cdot u}}\hspace{0.1667em}dU(u)$ is a Riemann–Stieltjes integral and U has a density for $u>0$, $U(u)=0$ for $u<0$ and it has a unit jump at $u=0$ (cf. e.g. [18, pp. 108–109]). It then follows that
\[ \widehat{\mathbf{K}}(\theta )(s)=\frac{\widehat{U}(s)-1}{\widehat{U}(s)}=\frac{\theta }{\theta +s}\hspace{1em}\text{for any}\hspace{2.5pt}s>0,\]
where $\widehat{\mathbf{K}}(\theta )$ denotes the Laplace–Stieltjes transform of the distribution of ${W_{n}}$ for any $n\in \mathbb{N}$ (cf. e.g. [18, Proposition 20, p. 109]); hence ${P_{{W_{n}}}}=\mathbf{Exp}(\theta )$ for any $n\in \mathbb{N}$. But since W is also P-independent, it follows that N is a P-Poisson process with parameter θ (cf. e.g. [17, Theorem 2.3.4]). □

Thus, according to the above claim, statement $(ii)$ follows.
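The transform identities used in the claim can be checked numerically in the exponential case, where $\widehat{U}(s)=(s+\theta )/s$ and $\widehat{\mathbf{K}}(\theta )(s)=\theta /(\theta +s)$; a short Python check with a hypothetical rate θ:

```python
theta = 2.0
checks = []
for s in (0.5, 1.0, 3.0, 10.0):
    K_hat = theta / (theta + s)      # Laplace-Stieltjes transform of Exp(theta)
    U_hat = (s + theta) / s          # transform of the renewal function U(u) = 1 + theta*u
    # geometric series U_hat = sum_{n>=0} K_hat^n = 1/(1 - K_hat),
    # and the inversion K_hat = (U_hat - 1)/U_hat used in the proof
    checks.append(abs(U_hat - 1.0 / (1.0 - K_hat)) < 1e-12
                  and abs(K_hat - (U_hat - 1.0) / U_hat) < 1e-12)
assert all(checks)
```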
For $(ii)\Longrightarrow (i)$: Since $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{1}}$, it follows that S has independent increments (cf. e.g. [17, Theorem 5.1.3]). Thus, for all $u\in [0,t]$ and $A\in {\mathcal{F}_{u}}$ we get
\[\begin{aligned}{}& {\int _{A}}\big({S_{t}}-{\mathbb{E}_{P}}[{S_{t}}]\big)-\big({S_{u}}-{\mathbb{E}_{P}}[{S_{u}}]\big)\hspace{0.1667em}dP\\ {} & \hspace{1em}={\int _{\varOmega }}{\chi _{A}}\hspace{0.1667em}dP\cdot {\int _{\varOmega }}\big(({S_{t}}-{S_{u}})-{\mathbb{E}_{P}}[{S_{t}}-{S_{u}}]\big)\hspace{0.1667em}dP=0,\end{aligned}\]
implying that the process ${\{{S_{t}}-{\mathbb{E}_{P}}[{S_{t}}]\}_{t\in {\mathbb{R}_{+}}}}$ is a P-martingale. But since ${\mathbb{E}_{P}}[{S_{t}}]=t\cdot {\mathbb{E}_{P}}[{S_{1}}]$, statement $(i)$ follows.

For $(iii)\Longrightarrow (iv)$: Since P is a martingale measure for $Z(P)$, it follows by the equivalence of statements $(i)$ and $(ii)$ that $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{1}}$. But since ${Z_{t}}\in {\mathcal{L}^{2}}(P)$, we have ${\operatorname{Var}_{P}}[{Z_{t}}]={\mathbb{E}_{P}}[{N_{t}}]\cdot {\operatorname{Var}_{P}}[{X_{1}}]+{\operatorname{Var}_{P}}[{N_{t}}]\cdot {\mathbb{E}_{P}^{2}}[{X_{1}}]<\infty $, where ${\operatorname{Var}_{P}}$ denotes the variance under the measure P; hence ${\operatorname{Var}_{P}}[{X_{1}}]<\infty $, implying statement $(iv)$.
For $(iv)\Longrightarrow (iii)$: Since $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ and ${\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}\subseteq {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{1}}$, it follows again by the equivalence of statements $(i)$ and $(ii)$ that P is a martingale measure for $Z(P)$. But ${\operatorname{Var}_{P}}[{Z_{t}}]={\operatorname{Var}_{P}}[{S_{t}}]={\mathbb{E}_{P}}[{N_{t}}]\cdot {\operatorname{Var}_{P}}[{X_{1}}]+{\operatorname{Var}_{P}}[{N_{t}}]\cdot {\mathbb{E}_{P}^{2}}[{X_{1}}]<\infty $, where the inequality follows by the fact that $P\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$; hence statement $(iii)$ follows.
Moreover, assuming statement $(ii)$ and ${X_{1}}\in {\mathcal{L}^{2}}(P)$, we get immediately statement $(iv)$, implying that all statements $(i)$–$(iv)$ are equivalent. □
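The variance decomposition ${\operatorname{Var}_{P}}[{Z_{t}}]={\mathbb{E}_{P}}[{N_{t}}]\cdot {\operatorname{Var}_{P}}[{X_{1}}]+{\operatorname{Var}_{P}}[{N_{t}}]\cdot {\mathbb{E}_{P}^{2}}[{X_{1}}]$ used in the proof can be verified by simulation in the compound Poisson case, where both ${\mathbb{E}_{P}}[{N_{t}}]$ and ${\operatorname{Var}_{P}}[{N_{t}}]$ equal $\lambda t$. A Python sketch with hypothetical parameters (Poisson arrivals of rate λ, exponential claim sizes of rate μ):

```python
import random

random.seed(1)
lam, t, mu = 2.0, 1.0, 1.0   # hypothetical: Poisson(lam) arrivals, Exp(mu) claims

def sample_St():
    """One path of the aggregate claims S_t of a compound Poisson process."""
    s, total = 0.0, 0.0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return total
        total += random.expovariate(mu)

n = 50000
xs = [sample_St() for _ in range(n)]
m = sum(xs) / n
var_mc = sum((x - m) ** 2 for x in xs) / n

EN = VarN = lam * t                       # Poisson counting process
EX, VarX = 1 / mu, 1 / mu ** 2            # Exp(mu) claim sizes
var_formula = EN * VarX + VarN * EX ** 2  # = lam * t * E[X^2] = 4.0 here
assert abs(var_mc - var_formula) < 0.3
```

Since $Z_t$ differs from $S_t$ by a deterministic drift, the same number is ${\operatorname{Var}_{P}}[{Z_{t}}]$.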
In the next proposition we identify a wide class of canonical processes that convert the progressively equivalent measures Q of Theorem 3.1 into martingale measures. In this way, a characterization of all progressively equivalent martingale measures, similar to that of Theorem 3.1, is provided.
Proposition 4.2.
If $\ell =1,2$ and $P\in {\mathcal{M}_{S,\mathbf{K}(\theta )}^{\ell }}$ then the following hold true:
- $(i)$ for every $\widetilde{\theta }\in \varUpsilon $ and $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{\ell }}$ there exists a ${P_{{X_{1}}}}$-a.s. unique function ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{\ell }}$ satisfying together with Q conditions (*) and (RPM), and such that Q is a PEMM for the process $V:={\{{V_{t}}\}_{t\in {\mathbb{R}_{+}}}}$ defined by means of ${V_{t}}:={S_{t}}-t\cdot \frac{{\mathbb{E}_{P}}[{X_{1}}\cdot {e^{{\beta _{\theta }}({X_{1}})}}]}{{\mathbb{E}_{P}}[{W_{1}}]}$ for any $t\ge 0$;
- $(ii)$ conversely, for every function ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{\ell }}$ and for the process V defined in $(i)$, there exist a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{\ell }}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (RPM), and such that Q is a PEMM for V.
In both cases $V=Z(Q)$.
Proof.
Fix $\ell =1$ or $\ell =2$.
For $(i)$: Under the assumptions of $(i)$, by Corollary 3.1 $(i)$ and Remark 3.3 (b) there exists a ${P_{{X_{1}}}}$-a.s. unique function ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{\ell }}$ satisfying together with Q conditions (*) and (RPM). It then follows by Lemma 2.2 (a) and condition (*) that $V=Z(Q)$; hence by Proposition 4.1 we get that Q is a PEMM for V.
For $(ii)$: Under the assumptions of $(ii)$, by Corollary 3.1 $(ii)$ and Remark 3.3 (b) there exist a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{\ell }}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (RPM); hence according to Proposition 4.1 the process $Z(Q)$ is a Q-martingale. Again by Lemma 2.2 (a) and condition (*) we obtain that $V=Z(Q)$. □
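The drift ${\mathbb{E}_{P}}[{X_{1}}\cdot {e^{{\beta _{\theta }}({X_{1}})}}]/{\mathbb{E}_{P}}[{W_{1}}]$ appearing in V equals $\widetilde{\theta }\cdot {\mathbb{E}_{Q}}[{X_{1}}]$, i.e. the premium density $p(Q)$, which is what makes $V=Z(Q)$. This can be checked numerically for Example 3.1-type data, where ${e^{\gamma (x)}}$ tilts the $\mathbf{Ga}(b,2)$ claim size density to an $\mathbf{Exp}(\zeta )$ one; a Python sketch (all numbers hypothetical):

```python
import math

b, zeta = 2.0, 1.5          # hypothetical: Ga(b,2) claims under P tilted to Exp(zeta) under Q
mean_W1 = 2.0 / 3.0         # hypothetical E_P[W_1]
theta_tilde = 1.5           # Poisson rate under Q
alpha = math.log(theta_tilde) + math.log(mean_W1)   # alpha_theta of Corollary 3.1

# E_P[X1 * e^{gamma(X1)}] by midpoint quadrature; here
# e^{gamma(x)} * (Ga(b,2) density) = zeta * e^{-zeta*x} exactly.
n, hi = 100000, 60.0
h = hi / n
acc = 0.0
for i in range(n):
    x = (i + 0.5) * h
    acc += x * zeta * math.exp(-zeta * x)
acc *= h                                   # ~ E_P[X1 e^{gamma(X1)}] = E_Q[X1] = 1/zeta
drift = math.exp(alpha) * acc / mean_W1    # E_P[X1 e^{beta_theta(X1)}] / E_P[W_1]
assert abs(drift - theta_tilde / zeta) < 1e-4
```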
The next theorem connects our results with the basic notion of no free lunch with vanishing risk ((NFLVR) for short) (see [6, Definition 8.1.2]) of Mathematical Finance.
Theorem 4.1.
Let $P\in {\mathcal{M}_{S,\mathbf{K}(\theta )}^{2}}$, ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{2}}$ and V be as above. There exist a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (RPM), and such that for every $T>0$ the process ${V_{\mathbb{T}}}:={\{{V_{t}}\}_{t\in \mathbb{T}}}$ satisfies condition (NFLVR).
Proof.
Fix an arbitrary $T>0$ and let ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{2}}$. By Proposition 4.2 $(ii)$ there exist a $\widetilde{\theta }\in \varUpsilon $ and a unique probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ satisfying together with ${\beta _{\theta }}$ conditions (*) and (RPM), and such that V is a Q-martingale with ${V_{t}}\in {\mathcal{L}^{2}}(Q)$ for any $t\ge 0$; hence ${V_{\mathbb{T}}}$ is a $({Q_{T}},{\mathcal{F}_{\mathbb{T}}})$-martingale, implying that it is a $({Q_{T}},{\mathcal{F}_{\mathbb{T}}})$-semi-martingale (cf. e.g. [19, Definition 7.1.1]). The latter implies that ${V_{\mathbb{T}}}$ is also a $({P_{T}},{\mathcal{F}_{\mathbb{T}}})$-semi-martingale since ${Q_{T}}\sim {P_{T}}$ (cf. e.g. [19, Theorem 10.1.8]). But since the process V satisfies condition (PEMM) we have that ${V_{\mathbb{T}}}$ satisfies condition (EMM). Thus, applying the Fundamental Theorem of Asset Pricing (FTAP for short) for unbounded stochastic processes, see [6, Theorem 14.1.1], we obtain that the process ${V_{\mathbb{T}}}$ satisfies condition (NFLVR). □
Remark 4.1.
It is well known that the FTAP of Delbaen and Schachermayer uses P.A. Meyer’s usual conditions (cf. e.g. [19, Definition 2.1.5]). These conditions play a fundamental role in the definition of the stochastic integral with respect to a (semi-)martingale. Nevertheless, the stochastic integral can be defined for any semi-martingale without the usual conditions (see [19, pp. 22–23 and p. 150]). As a consequence, the easy implication of the FTAP of Delbaen and Schachermayer (i.e. (EMM) ⟹ (NFLVR)) holds true without the usual conditions.
We have seen that the initial probability measure P can be replaced by another progressively equivalent probability measure Q such that S is converted into a Q-CPP. The idea is to define a probability measure Q in order to give more weight to less favourable events. More precisely, Q must be defined in such a way that the corresponding premium density $p(Q)$ includes the safety loading, i.e. $p(P)<p(Q)$. This led Delbaen and Haezendonck to define a premium calculation principle as a probability measure $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\lambda )}^{1}}$, for some $\lambda \in \varUpsilon $ (compare [5, Definition 3.1]).
In the next Examples 4.1 to 4.3, applying Proposition 4.2 and Theorem 4.1, we show how to construct premium calculation principles Q satisfying the desired property $p(P)<p(Q)<\infty $, and such that for any $T>0$ the process ${V_{\mathbb{T}}}$ has the property of (NFLVR). For a discussion on how to rediscover some well-known premium calculation principles in the frame of classical Risk Theory using change of measures techniques we refer to [5, Examples 3.1 to 3.3].
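As a simple numerical preview, consider an exponential tilt of $\mathbf{Exp}(\eta )$ claims with ${\alpha _{\theta }}=0$, the setting of Example 4.2 below. A Python sketch with hypothetical parameter values, verifying the normalization ${\mathbb{E}_{P}}[{e^{\gamma ({X_{1}})}}]=1$ and the safety loading $p(P)<p(Q)$:

```python
# Hypothetical exponential-tilt premium principle: claims Exp(eta) under P,
# gamma(x) = ln(1 - c*E_P[X1]) + c*x with 0 < c < eta, and alpha_theta = 0.
eta, c = 2.0, 0.5
EX_P = 1.0 / eta
# normalization: E_P[e^{gamma(X1)}] = (1 - c*E_P[X1]) * eta/(eta - c) = 1
norm = (1.0 - c * EX_P) * eta / (eta - c)
assert abs(norm - 1.0) < 1e-12

mean_W1 = 0.4                       # hypothetical E_P[W_1]; alpha_theta = 0
p_P = EX_P / mean_W1                # premium density under P
p_Q = (1.0 / (eta - c)) / mean_W1   # Q_{X1} = Exp(eta - c), so E_Q[X1] = 1/(eta - c)
assert p_P < p_Q                    # positive safety loading, p(P) < p(Q) < infinity
```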
Example 4.1.
Let $\theta :=(\xi ,k)\in D:={\varUpsilon ^{2}}$, and let $P\in {\mathcal{M}_{S,\mathbf{Ga}(\theta )}^{2}}$ be such that ${P_{{X_{1}}}}=\mathbf{Ga}(\eta )$, where $\eta :=(\zeta ,2)\in D$. Consider the real-valued function ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$ with $\gamma (x):=\ln \frac{{\mathbb{E}_{P}}[{X_{1}}]}{2c}-\ln x+\frac{2(c-1)}{c{\mathbb{E}_{P}}[{X_{1}}]}\cdot x$ for any $x\in \varUpsilon $, where $c>2$ is a real constant, and ${\alpha _{\theta }}:=\ln (\frac{\xi }{d}\cdot {\mathbb{E}_{P}}[{W_{1}}])$, where $d<k$ is a positive constant. It can be easily seen that ${\mathbb{E}_{P}}[{e^{\gamma ({X_{1}})}}]=1$ and ${\mathbb{E}_{P}}[{X_{1}^{2}}\cdot {e^{\gamma ({X_{1}})}}]=\frac{2{c^{2}}}{{\zeta ^{2}}}<\infty $, implying $\gamma \in {\mathcal{F}_{P,\ln }^{2}}$; hence ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{2}}$. Define $\widetilde{\theta }$ by means of $\widetilde{\theta }:={e^{{\alpha _{\theta }}}}/{\mathbb{E}_{P}}[{W_{1}}]$. Thus, due to Proposition 4.2 $(ii)$, there exists a unique premium calculation principle $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ satisfying conditions (*) and (RPM), and such that Q is a PEMM for the process V with ${V_{t}}:={S_{t}}-t\cdot \frac{\xi }{d}\cdot \frac{{\mathbb{E}_{P}}[{X_{1}}]}{2c}\cdot {\mathbb{E}_{P}}[{e^{\frac{2\cdot (c-1)}{c\cdot {\mathbb{E}_{P}}[{X_{1}}]}\cdot {X_{1}}}}]\in {\mathcal{L}^{2}}(Q)$ for any $t\ge 0$. Therefore, applying Lemma 2.2 (a) we get
\[ {Q_{{X_{1}}}}(B)={\int _{B}}\frac{\zeta }{c}\cdot {e^{-\frac{\zeta }{c}\cdot x}}\hspace{0.1667em}\lambda (dx)\hspace{1em}\text{for any}\hspace{2.5pt}B\in \mathfrak{B}(\varUpsilon ),\]
implying that ${Q_{{X_{1}}}}=\mathbf{Exp}(\frac{\zeta }{c})$; hence $p(P)<p(Q)<\infty $. In particular, according to Theorem 4.1 for any $T>0$ the process ${V_{\mathbb{T}}}$ satisfies the (NFLVR) condition.
Example 4.2.
Let $\theta :=(k,b)\in D:={\varUpsilon ^{2}}$, let $\mathbf{W}(\theta )$ be the Weibull distribution over $\mathfrak{B}(\varUpsilon )$ defined by
\[ \mathbf{W}(\theta )(B):={\int _{B}}\frac{k}{{b^{k}}}\cdot {x^{k-1}}\cdot {e^{-{(x/b)^{k}}}}\hspace{0.1667em}\lambda (dx)\hspace{1em}\text{for any}\hspace{2.5pt}B\in \mathfrak{B}(\varUpsilon ),\]
and let $P\in {\mathcal{M}_{S,\mathbf{W}(\theta )}^{2}}$ such that ${P_{{X_{1}}}}=\mathbf{Exp}(\eta )$, where $\eta \in \varUpsilon $. Consider the real-valued function ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$ with $\gamma (x):=\ln (1-c\cdot {\mathbb{E}_{P}}[{X_{1}}])+c\cdot x$ for any $x\in \varUpsilon $, where $c<\eta $ is a positive constant, and ${\alpha _{\theta }}:=0$. It can be easily seen that ${\mathbb{E}_{P}}[{e^{\gamma ({X_{1}})}}]=1$ and ${\mathbb{E}_{P}}[{X_{1}^{2}}\cdot {e^{\gamma ({X_{1}})}}]=\frac{2}{{(\eta -c)^{2}}}<\infty $, implying $\gamma \in {\mathcal{F}_{P,\ln }^{2}}$; hence ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{2}}$. Define $\widetilde{\theta }$ by $\widetilde{\theta }:={e^{{\alpha _{\theta }}}}/{\mathbb{E}_{P}}[{W_{1}}]$. Applying now Proposition 4.2 $(ii)$ we get that there exists a unique premium calculation principle $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ satisfying conditions (*) and (RPM), and such that Q is a PEMM for the process V with ${V_{t}}:={S_{t}}-t\cdot \frac{(1-c\cdot {\mathbb{E}_{P}}[{X_{1}}])\cdot {\mathbb{E}_{P}}[{X_{1}}\cdot {e^{c\cdot {X_{1}}}}]}{b\cdot \Gamma (1+1/k)}\in {\mathcal{L}^{2}}(Q)$ for any $t\ge 0$. The latter together with Lemma 2.2 (a) yields
\[ {Q_{{X_{1}}}}(B)={\int _{B}}(\eta -c)\cdot {e^{-(\eta -c)\cdot x}}\hspace{0.1667em}\lambda (dx)\hspace{1em}\text{for any}\hspace{2.5pt}B\in \mathfrak{B}(\varUpsilon ),\]
implying that ${Q_{{X_{1}}}}=\mathbf{Exp}(\eta -c)$. Thus, $p(P)<p(Q)<\infty $. In particular, according to Theorem 4.1 for any $T>0$ the process ${V_{\mathbb{T}}}$ satisfies the (NFLVR) condition.
In our next example we show how one can obtain the Esscher principle by applying Proposition 4.2 $(ii)$.
Example 4.3.
Take $\theta :=(\xi ,2)\in D:={\varUpsilon ^{2}}$, and let $P\in {\mathcal{M}_{S,\mathbf{Ga}(\theta )}^{2}}$ such that ${P_{{X_{1}}}}=\mathbf{Ga}(\eta )$, where $\eta :=(b,a)\in D$. Consider the real-valued function ${\beta _{\theta }}:=\gamma +{\alpha _{\theta }}$ with $\gamma (x):=c\cdot x-\ln {\mathbb{E}_{P}}[{e^{c\cdot {X_{1}}}}]$ for any $x\in \varUpsilon $, where $c<b$ is a positive constant, and ${\alpha _{\theta }}:=0$. It can be easily seen that ${\mathbb{E}_{P}}[{e^{\gamma ({X_{1}})}}]=1$ and ${\mathbb{E}_{P}}[{X_{1}^{2}}\cdot {e^{\gamma ({X_{1}})}}]=\frac{a\cdot (a+1)}{{(b-c)^{2}}}<\infty $, implying $\gamma \in {\mathcal{F}_{P,\ln }^{2}}$; hence ${\beta _{\theta }}\in {\widetilde{\mathcal{F}}_{P,\theta }^{2}}$. Define $\widetilde{\theta }$ by $\widetilde{\theta }:={e^{{\alpha _{\theta }}}}/{\mathbb{E}_{P}}[{W_{1}}]$. Thus, due to Proposition 4.2 $(ii)$ there exists a unique premium calculation principle $Q\in {\mathcal{M}_{S,\mathbf{Exp}(\widetilde{\theta })}^{2}}$ satisfying conditions (*) and (RPM), and such that Q is a PEMM for the process V with ${V_{t}}:={S_{t}}-t\cdot \frac{\xi }{2}\cdot \frac{{\mathbb{E}_{P}}[{X_{1}}\cdot {e^{c\cdot {X_{1}}}}]}{{\mathbb{E}_{P}}[{e^{c\cdot {X_{1}}}}]}\in {\mathcal{L}^{2}}(Q)$ for any $t\ge 0$. But then, according to Lemma 2.2 (a), we have
\[ {Q_{{X_{1}}}}(B)={\int _{B}}\frac{{(b-c)^{a}}}{\Gamma (a)}\cdot {x^{a-1}}\cdot {e^{-(b-c)\cdot x}}\hspace{0.1667em}\lambda (dx)\hspace{1em}\text{for any}\hspace{2.5pt}B\in \mathfrak{B}(\varUpsilon ).\]
The latter yields ${Q_{{X_{1}}}}=\mathbf{Ga}(\widetilde{\eta })$, where $\widetilde{\eta }:=(b-c,a)\in {\varUpsilon ^{2}}$, and
\[ {\mathbb{E}_{Q}}[{X_{1}}]=\frac{{\mathbb{E}_{P}}[{X_{1}}\cdot {e^{c\cdot {X_{1}}}}]}{{\mathbb{E}_{P}}[{e^{c\cdot {X_{1}}}}]}=\frac{a}{b-c}>\frac{a}{b}={\mathbb{E}_{P}}[{X_{1}}];\]
hence $p(P)<p(Q)<\infty $. In particular, according to Theorem 4.1 for any $T>0$ the process ${V_{\mathbb{T}}}$ satisfies the (NFLVR) condition.
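The Esscher premium ${\mathbb{E}_{Q}}[{X_{1}}]={\mathbb{E}_{P}}[{X_{1}}\cdot {e^{c\cdot {X_{1}}}}]/{\mathbb{E}_{P}}[{e^{c\cdot {X_{1}}}}]$ of Example 4.3 can be reproduced by direct quadrature against the $\mathbf{Ga}(b,a)$ density. A Python sketch with hypothetical parameter values $c<b$:

```python
import math

b, a, c = 3.0, 2.0, 1.0   # hypothetical Ga(rate b, shape a) claims, Esscher parameter c < b

def ga_density(x):
    """Ga(b, a) density: b^a / Gamma(a) * x^(a-1) * e^(-b*x)."""
    return b ** a / math.gamma(a) * x ** (a - 1) * math.exp(-b * x)

# Esscher premium E_P[X1 e^{c X1}] / E_P[e^{c X1}] via midpoint quadrature
n, hi = 100000, 60.0
h = hi / n
num = den = 0.0
for i in range(n):
    x = (i + 0.5) * h
    w = math.exp(c * x) * ga_density(x)
    num += x * w
    den += w
esscher = num / den
assert abs(esscher - a / (b - c)) < 1e-4   # mean of the tilted law Ga(b - c, a)
assert esscher > a / b                     # safety loading: E_Q[X1] > E_P[X1]
```

The tilted claim size law is $\mathbf{Ga}(b-c,a)$, so the quadrature recovers $a/(b-c)$, in agreement with the closed-form computation above.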