1 Introduction
In various research areas such as Economics, Finance, Insurance, and Statistics, problems and their solutions frequently rely on certain functions being monotonic, as well as on determining the degree of their monotonicity, or lack of it. For example, the notion of profit seekers in Behavioural Economics and Finance is based on increasing utility functions, which can have varying shapes and thus characterize subclasses of profit seekers (e.g., [9]). In Reliability Engineering and Risk Assessment (e.g., [13]), a number of notions such as hazard-rate and likelihood-ratio orderings rely on monotonicity of the ratios of certain functions. The presence of insurance deductibles and policy limits often changes the pattern of monotonicity of insurance losses (e.g., [3]). In the literature on Statistical Inference, the so-called monotone-likelihood-ratio family plays an important role.
Due to these and a myriad of other reasons, researchers quite often restrict themselves to function classes with pre-specified monotonicity properties. But one may not be comfortable with this element of subjectivity and would therefore prefer to rely on data-implied shapes when making decisions. To illustrate the point, we recall, for example, the work of Bebbington et al. [1], who specifically set out to determine whether mortality continues to increase, or starts to decelerate, after a certain species-related late-life age. This is known in the gerontological literature as the late-life mortality deceleration phenomenon. Naturally, we refrain from elaborating on this complex topic and refer for details and further references to the aforementioned paper.
Monotonicity may indeed be necessary for certain results or properties to hold, but there are also many instances when monotonicity is just a sufficient condition. In such cases, a natural question arises: can we depart from monotonicity and still have valid results? Furthermore, in some cases, monotonicity may not even be expected to hold, though it may perhaps be desirable, and so developing techniques for quantifying the lack of monotonicity becomes of interest. Several results in this direction have recently been proposed in the literature (e.g., [5, 21, 15, 16]; and references therein). All in all, these are some of the problems whose solutions call for indices that could be used to assess monotonicity, or the lack of it. In the following sections we introduce and discuss several such indices, each designed to reveal different monotonicity aspects.
We have organized the rest of the paper as follows. In Section 2 we present a specific example, driven by insurance and econometric considerations, which not only motivates the present research but also highlights the main idea adopted in this paper. In Section 3 we introduce and discuss indices of lack of increase, decrease, and monotonicity. We also suggest a convenient numerical procedure for a quick calculation of the indices at any desirable precision. In Section 4 we introduce orderings of functions according to the values of their indices and argue in favour of using normalized versions of the indices. In Section 5 we introduce a stricter notion of ordering, and in Section 6 we develop a theory for quantifying the lack of positivity (or negativity) in signed measures. Section 7 concludes the paper.
2 Motivating example
Insurance losses are nonnegative random variables $X\ge 0$, and the expected value $\mathbf{E}[X]$ is called the net premium. All practically sound premium calculation principles (pcp’s), which are functionals π assigning nonnegative finite or infinite values to X’s, are such that $\pi [X]\ge \mathbf{E}[X]$. The latter property is called nonnegative loading of π.
As an example, consider the following ‘dual’ version of the weighted pcp ([7, 8]; and references therein):
(1)
\[\pi _{w}[X]=\frac{\mathbf{E}[{F}^{-1}(U)\hspace{0.1667em}w(U)]}{\mathbf{E}[w(U)]},\]
where U is the random variable uniform on $[0,1]$, ${F}^{-1}$ is the quantile function of X defined by the equation ${F}^{-1}(p)=\inf \{x\hspace{2.5pt}:\hspace{2.5pt}F(x)\ge p\}$, and $w:[0,1]\to [0,\infty ]$ is an appropriately chosen weight function, whose illustrative examples will be provided in a moment. We of course assume that the two expectations in the definition of $\pi _{w}$ are well-defined and finite, and the denominator is not zero.
When dealing with insurance losses, researchers usually choose nondecreasing weight functions in order to have nonnegative loading of $\pi _{w}$. For example, $w(t)=\mathbf{1}\{t>p\}$ for any parameter $p\in (0,1)$ is nondecreasing, and it turns $\pi _{w}$ into the average value at risk, also known as the tail conditional expectation. Another example is $w(t)=\nu {(1-t)}^{\nu -1}$ with parameter $\nu >0$. If $\nu \in (0,1]$, then w is nondecreasing, and $\pi _{w}$ becomes the proportional hazards pcp [18, 19]. If $\nu \ge 1$, then w is nonincreasing, and $\pi _{w}$ becomes the (absolute) S-Gini index of economic equality (e.g., [23]; and references therein).
Note that the pcp $\pi _{w}$ can be rewritten as the weighted integral
(2)
\[\pi _{w}[X]={\int _{0}^{1}}{F}^{-1}(t)\hspace{0.1667em}{w}^{\ast }(t)\hspace{0.1667em}\mathrm{d}t,\]
with the normalized weight function ${w}^{\ast }(t)=w(t)/{\int _{0}^{1}}w(u)\hspace{0.1667em}\mathrm{d}u$, which is a probability density function, because $w(t)\ge 0$ for all $t\in [0,1]$ and ${\int _{0}^{1}}w(u)\hspace{0.1667em}\mathrm{d}u\in (0,\infty )$. This representation of $\pi _{w}$ connects our present research with the dual utility theory ([20, 17]; and references therein) that has arisen as a prominent counterpart to the classical utility theory of von Neumann and Morgenstern [14]. It is also important to mention that in Insurance, integral (2) plays a very prominent role and is known as the distortion risk measure (e.g., [6]; and references therein).
In general, the function w and its normalized version ${w}^{\ast }$ may not be monotonic, as this depends on the shape of the probability density function of the random variable $W\in [0,1]$ in the following reformulation of $\pi _{w}$:
(3)
\[\pi _{w}[X]=\mathbf{E}\big[{F}^{-1}(W)\big],\]

where the random variable W has the probability density function ${w}^{\ast }$.
In the econometric language, $\mathbf{E}[{F}^{-1}(W)]$ means the average income (assuming that X stands for ‘income’) possessed by individuals whose positions on the society’s income-percentile scale are modelled by W, which is, naturally, a random variable.
Hence, we are interested in when $\mathbf{E}[{F}^{-1}(W)]$ is at least $\mathbf{E}[X]$ (insurance perspective) or at most $\mathbf{E}[X]$ (income inequality perspective). In view of equations (1) and (2), our task reduces to verifying whether or not
(4)
\[\mathbf{cov}\big[{F}^{-1}(U),w(U)\big]\ge 0.\]
If the function w happens to be nondecreasing (as in insurance), then bound (4) holds (e.g., [11]). For example, with parameter $\lambda >0$, the weight function $w(s)={s}^{\lambda }$ leads to the size-biased pcp, $w(s)={e}^{\lambda s}$ to the Esscher pcp, $w(s)=1-{e}^{-\lambda s}$ to the Kamps pcp, and the already noted function $w(s)=\mathbf{1}\{s>\lambda \}$ leads to the average value at risk.
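As a quick numerical check, the weighted-ratio form of $\pi _{w}$ described above can be approximated on a grid. The following sketch uses Exp(1) losses and illustrative parameter values (the grid size, λ, and the loss distribution are all assumptions made for demonstration only), and confirms the nonnegative loading produced by nondecreasing weights:

```python
import numpy as np

def weighted_pcp(quantile, w, n=200_000):
    """Approximate pi_w[X] = E[F^{-1}(U) w(U)] / E[w(U)] by a midpoint sum on (0, 1)."""
    u = (np.arange(n) + 0.5) / n       # midpoints of a uniform grid
    q = quantile(u)
    return np.sum(q * w(u)) / np.sum(w(u))

# Exponential losses with mean 1: F^{-1}(p) = -log(1 - p), so E[X] = 1.
quantile = lambda p: -np.log1p(-p)

esscher = weighted_pcp(quantile, lambda s: np.exp(2.0 * s))       # w(s) = e^{lambda s}
avar = weighted_pcp(quantile, lambda s: (s > 0.9).astype(float))  # w(s) = 1{s > lambda}
net = weighted_pcp(quantile, lambda s: np.ones_like(s))           # w = 1 recovers E[X]

# Nondecreasing weights produce nonnegative loading: pi_w[X] >= E[X].
assert esscher >= net and avar >= net
```

The midpoint grid avoids the quantile function’s singularity at 1; any quadrature rule on (0, 1) would serve equally well.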
As already noted, broader contexts than that of classical insurance suggest various shapes of probability distortions and thus lead to functions w that are not necessarily monotonic. Indeed, in view of representation (3), the average income of individuals depends on the distribution of W, whose shape is governed by the societal opportunities that the individuals are exposed to. A natural question arises: what can be said about bound (4) when the weight function w is not monotonic?
To answer this question in an illuminating way, let w be absolutely continuous and such that $w(0)=0$, which are sound assumptions from the practical point of view. Denote the density of w by ${w^{\prime }}$, and let $\mathbf{1}_{\{u>t\}}$ be the indicator. We have
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{cov}\big[{F}^{-1}(U),w(U)\big]& \displaystyle =\mathbf{cov}\Bigg[{F}^{-1}(U),{\int _{0}^{U}}{w^{\prime }}(t)\hspace{0.1667em}\mathrm{d}t\Bigg]\\{} & \displaystyle ={\int _{0}^{1}}\mathbf{cov}\big[{F}^{-1}(U),\mathbf{1}_{\{U>t\}}\big]{w^{\prime }}(t)\hspace{0.1667em}\mathrm{d}t.\end{array}\]

The function $v(t)=\mathbf{cov}[{F}^{-1}(U),\mathbf{1}_{\{U>t\}}]$ is nonnegative (e.g., [11]), and so is also the integral $\theta ={\int _{0}^{1}}v(t)\hspace{0.1667em}\mathrm{d}t$, which makes $v(t)/\theta $ a probability density function. Denote the corresponding distribution function by V. We have the equations

\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbf{cov}\big[{F}^{-1}(U),w(U)\big]& \displaystyle =\theta {\int _{0}^{1}}{w^{\prime }}\hspace{0.1667em}\mathrm{d}V\\{} & \displaystyle =\theta {\int _{0}^{1}}{w^{\prime }}\circ {V}^{-1}\hspace{0.1667em}\mathrm{d}\lambda ,\end{array}\]

where λ denotes the Lebesgue measure. Hence, property (4) is equivalent to the bound ${\int _{0}^{1}}g\hspace{0.1667em}\mathrm{d}\lambda \ge 0$ with $g={w^{\prime }}\circ {V}^{-1}$, and it can of course be rewritten as the inequality

(5)
\[{\int _{0}^{1}}{g}^{+}\hspace{0.1667em}\mathrm{d}\lambda \ge {\int _{0}^{1}}{g}^{-}\hspace{0.1667em}\mathrm{d}\lambda ,\]

where ${g}^{+}=\max \{g,0\}$ and ${g}^{-}=\max \{-g,0\}$. Bound (5) says that, on average with respect to the Lebesgue measure, the positive part ${g}^{+}$ must be larger than the negative part ${g}^{-}$.

Those familiar with asset pricing will immediately see how inequality (5), especially when reformulated as

(6)
\[\frac{{\textstyle\int _{0}^{1}}{g}^{+}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{0}^{1}}{g}^{-}\hspace{0.1667em}\mathrm{d}\lambda }\ge 1,\]

is connected to the gain–loss ratio [2], as well as to the Omega ratio [10]. Furthermore, we can reformulate bound (6) as

(7)
\[\frac{{\textstyle\int _{0}^{1}}{g}^{+}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{0}^{1}}|g|\hspace{0.1667em}\mathrm{d}\lambda }\ge \frac{1}{2},\]

with the ratio on the left-hand side being equal to 0 when the function g is nonpositive, and equal to 1 when g is nonnegative. Hence, bound (7) says that the ratio must be at least $1/2$, which means that, on average, the function g must be more positive than negative.

The above interpretations have shaped our considerations in the present paper, and have led toward the construction of monotonicity indices that we introduce and discuss next.
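For a concrete function g, bounds (5)–(7) are straightforward to check numerically. The following sketch uses a hypothetical g that takes both signs, chosen purely for illustration (it is not derived from any particular weight function w), and verifies that the two reformulations agree:

```python
import numpy as np

# A hypothetical function g on [0, 1] taking both signs (an illustrative
# stand-in for w' composed with V^{-1}).
t = np.linspace(0.0, 1.0, 100_001)
g = np.sin(3.0 * np.pi * t) + 0.4

# int g^+ d(lambda) and int g^- d(lambda), via averages over the uniform grid.
g_plus = np.maximum(g, 0.0).mean()
g_minus = np.maximum(-g, 0.0).mean()

gain_loss_ratio = g_plus / g_minus               # left-hand side of bound (6)
positivity_ratio = g_plus / (g_plus + g_minus)   # left-hand side of bound (7)

# Reformulations (6) and (7) are equivalent ways of stating bound (5).
assert (gain_loss_ratio >= 1.0) == (positivity_ratio >= 0.5)
```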
3 Assessing lack of monotonicity in functions
We are interested in assessing monotonicity of a function $g_{0}$ on an interval $[a,b]$. Since shifting the function up or down, left or right, does not distort its monotonicity, we therefore ‘standardize’ $g_{0}$ into
(8)
\[g(x)=g_{0}(x+a)-g_{0}(a),\]
defined on the interval $[0,y]$ with $y=b-a$. Note that g satisfies the boundary condition $g(0)=0$. We assume that $g_{0}$ and thus g are absolutely continuous.
Let $\mathcal{F}_{y}$ denote the set of all absolutely continuous functions f on the interval $[0,y]$ such that $f(0)=0$. Denote the total variation of $f\in \mathcal{F}_{y}$ on the interval $[0,y]$ by $\| f\| _{y}$, that is, $\| f\| _{y}={\int _{0}^{y}}|{f^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda $. Furthermore, let ${\mathcal{F}_{y}^{+}}$ denote the set of all $f\in \mathcal{F}_{y}$ that are nondecreasing. For any $g\in \mathcal{F}_{y}$, we define its index of lack of increase (LOI) as the distance between g and the set ${\mathcal{F}_{y}^{+}}$, that is,

(9)
\[\mathrm{LOI}_{y}(g)=\inf \Bigg\{{\int _{0}^{y}}\big|{g^{\prime }}-{f^{\prime }}\big|\hspace{0.1667em}\mathrm{d}\lambda \hspace{2.5pt}:\hspace{2.5pt}f\in {\mathcal{F}_{y}^{+}}\Bigg\}.\]

Obviously, if g is nondecreasing, then $\mathrm{LOI}_{y}(g)=0$, and the larger the value of $\mathrm{LOI}_{y}(g)$, the farther the function g is from being nondecreasing on the interval $[0,y]$. For the function $g_{0}$, which is where g originates from, the LOI of $g_{0}$ on the interval $[a,b]$ is

\[\mathrm{LOI}_{[a,b]}(g_{0}):=\mathrm{LOI}_{y}(g).\]

(Throughout the paper we occasionally use ‘$:=$’ when a need arises to emphasize that certain equations hold by definition.) Determining $\mathrm{LOI}_{y}(g)$ using definition (9) is not straightforward. To facilitate the task, we next give an integral representation of the index.

Theorem 1.
The infimum in definition (9) is attained at a function $f_{1}\in {\mathcal{F}_{y}^{+}}$ such that ${f^{\prime }_{1}}={({g^{\prime }})}^{+}$, and thus

(10)
\[\mathrm{LOI}_{y}(g)={\int _{0}^{y}}{\big({g^{\prime }}\big)}^{-}\hspace{0.1667em}\mathrm{d}\lambda .\]

When g originates from $g_{0}$ via equation (8), we have

\[\mathrm{LOI}_{[a,b]}(g_{0})={\int _{a}^{b}}{\big({g^{\prime }_{0}}\big)}^{-}\hspace{0.1667em}\mathrm{d}\lambda .\]

Theorem 1, though easy to prove directly, follows immediately from a more general result, Theorem 2 of Section 6, and we thus do not provide any more details. The index of lack of decrease (LOD) is defined analogously. Namely, in the computationally convenient form, it is given by the equation
(11)
\[\mathrm{LOD}_{y}(g)={\int _{0}^{y}}{\big({g^{\prime }}\big)}^{+}\hspace{0.1667em}\mathrm{d}\lambda .\]

When g originates from $g_{0}$ via equation (8), we have

\[\mathrm{LOD}_{[a,b]}(g_{0})={\int _{a}^{b}}{\big({g^{\prime }_{0}}\big)}^{+}\hspace{0.1667em}\mathrm{d}\lambda .\]

In turn, the index of lack of monotonicity (LOM) of g is given by the equation

(12)
\[\mathrm{LOM}_{y}(g)=2\min \big\{\mathrm{LOI}_{y}(g),\mathrm{LOD}_{y}(g)\big\},\]

and when g originates from $g_{0}$ via equation (8), we have

(13)
\[\mathrm{LOM}_{[a,b]}(g_{0})=2\min \big\{\mathrm{LOI}_{[a,b]}(g_{0}),\mathrm{LOD}_{[a,b]}(g_{0})\big\}.\]

The reason for doubling the minimum on the right-hand sides of definitions (12) and (13) will become clear from the properties below.
A1) The index $\mathrm{LOI}_{y}$ is translation invariant, that is, $\mathrm{LOI}_{y}(g+\alpha )=\mathrm{LOI}_{y}(g)$ for every constant $\alpha \in \mathbf{R}$. The index $\mathrm{LOD}_{y}$ is also translation invariant.

A2) The index $\mathrm{LOI}_{y}$ is positively homogeneous, that is, $\mathrm{LOI}_{y}(\beta g)=\beta \mathrm{LOI}_{y}(g)$ for every constant $\beta \ge 0$. The index $\mathrm{LOD}_{y}$ is also positively homogeneous.

A3) $\mathrm{LOI}_{y}(\beta g)=(-\beta )\mathrm{LOD}_{y}(g)$ for every negative constant $\beta <0$, and thus in particular, $\mathrm{LOI}_{y}(-g)=\mathrm{LOD}_{y}(g)$.

A4) $\mathrm{LOI}_{y}(g)+\mathrm{LOD}_{y}(g)=\| g\| _{y}$. Consequently, $\min \{\mathrm{LOI}_{y}(g),\mathrm{LOD}_{y}(g)\}$ does not exceed $\| g\| _{y}/2$, and thus the index $\mathrm{LOM}_{y}(g)$ always takes values between 0 and $\| g\| _{y}$, just like the indices $\mathrm{LOI}_{y}(g)$ and $\mathrm{LOD}_{y}(g)$ do.
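Properties A1)–A4) are easy to verify numerically by replacing ${g^{\prime }}$ with finite differences of sampled function values. The sketch below does so for the illustrative test function $g_{0}(x)={x}^{2}-x$ on $[0,1]$ (both the function and the grid size are choices made here for demonstration only):

```python
import numpy as np

def loi_lod(values):
    """LOI_y and LOD_y from function values sampled on a uniform grid over [0, y]:
    sums of negative and positive increments approximate int (g')^- and int (g')^+."""
    d = np.diff(values)
    return np.maximum(-d, 0.0).sum(), np.maximum(d, 0.0).sum()

x = np.linspace(0.0, 1.0, 1_000_001)
g = x**2 - x                                  # illustrative test function on [0, 1]

loi, lod = loi_lod(g)
loi_a1, lod_a1 = loi_lod(g + 5.0)             # A1: translation invariance
loi_a2, lod_a2 = loi_lod(2.0 * g)             # A2: positive homogeneity
loi_a3, lod_a3 = loi_lod(-g)                  # A3: LOI(-g) = LOD(g)

assert abs(loi_a1 - loi) < 1e-6 and abs(lod_a1 - lod) < 1e-6
assert abs(loi_a2 - 2.0 * loi) < 1e-6 and abs(lod_a2 - 2.0 * lod) < 1e-6
assert abs(loi_a3 - lod) < 1e-9 and abs(lod_a3 - loi) < 1e-9
assert abs((loi + lod) - 0.5) < 1e-3          # A4: total variation of x^2 - x is 1/2
```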
An illustrative example follows.
Example 1.
Consider the functions $g_{0}(z)=\sin (z)$ and $h_{0}(z)=\cos (z)$ on $[-\pi /2,\pi ]$. Neither of them is monotonic on the interval, but a visual inspection of their graphs suggests that sine is closer to being increasing than cosine, which can of course be viewed as a subjective statement. To substantiate it, we employ the above introduced indices. First, by lifting and shifting, we turn sine into $g(x)=1-\cos (x)$ and cosine into $h(x)=\sin (x)$. Since ${g^{\prime }}(x)=\sin (x)$ and ${h^{\prime }}(x)=\cos (x)$, we have

(14)
\[{\int _{0}^{3\pi /2}}{\big({g^{\prime }}\big)}^{-}\hspace{0.1667em}\mathrm{d}\lambda =1,\hspace{2em}{\int _{0}^{3\pi /2}}{\big({g^{\prime }}\big)}^{+}\hspace{0.1667em}\mathrm{d}\lambda =2,\]

(15)
\[{\int _{0}^{3\pi /2}}{\big({h^{\prime }}\big)}^{-}\hspace{0.1667em}\mathrm{d}\lambda =2,\hspace{2em}{\int _{0}^{3\pi /2}}{\big({h^{\prime }}\big)}^{+}\hspace{0.1667em}\mathrm{d}\lambda =1.\]

Consequently,

\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathrm{LOI}_{[-\pi /2,\pi ]}(g_{0})=\mathrm{LOI}_{3\pi /2}(g)=1,& \displaystyle \hspace{1em}\mathrm{LOI}_{[-\pi /2,\pi ]}(h_{0})=\mathrm{LOI}_{3\pi /2}(h)=2,\\{} \displaystyle \mathrm{LOD}_{[-\pi /2,\pi ]}(g_{0})=\mathrm{LOD}_{3\pi /2}(g)=2,& \displaystyle \hspace{1em}\mathrm{LOD}_{[-\pi /2,\pi ]}(h_{0})=\mathrm{LOD}_{3\pi /2}(h)=1,\\{} \displaystyle \mathrm{LOM}_{[-\pi /2,\pi ]}(g_{0})=\mathrm{LOM}_{3\pi /2}(g)=2,& \displaystyle \hspace{1em}\mathrm{LOM}_{[-\pi /2,\pi ]}(h_{0})=\mathrm{LOM}_{3\pi /2}(h)=2.\end{array}\]

Hence, for example, sine is at the distance 1 from the set of all nondecreasing functions on the noted interval, whereas cosine is at the distance 2 from the same set. Note also that the total variations ${\int _{0}^{3\pi /2}}|{g^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda $ and ${\int _{0}^{3\pi /2}}|{h^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda $ of the two functions are the same, equal to 3. This concludes Example 1.

Note 1.
Example 1 is based on functions for which the four integrals in equations (14) and (15) are easy to calculate, but functions arising in applications are frequently quite unwieldy. For such cases, we need a numerical procedure for calculating the integral ${\int _{0}^{y}}H({f^{\prime }})\hspace{0.1667em}\mathrm{d}\lambda $ for various transformations H, such as $H(x)={x}^{-}$ and $H(x)={x}^{+}$. A convenient way is as follows:
\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\int _{0}^{y}}H\big({f^{\prime }}\big)\hspace{0.1667em}\mathrm{d}\lambda & \displaystyle \approx \sum \limits_{n=1}^{N}\frac{y}{N}H\bigg(\frac{N}{y}\bigg\{f\bigg(\frac{n}{N}y\bigg)-f\bigg(\frac{n-1}{N}y\bigg)\bigg\}\bigg)\\{} & \displaystyle =\sum \limits_{n=1}^{N}\frac{b-a}{N}H\bigg(\frac{N}{b-a}\bigg\{f_{0}\bigg(a+\frac{n}{N}(b-a)\bigg)\\{} & \displaystyle \hspace{1em}-f_{0}\bigg(a+\frac{n-1}{N}(b-a)\bigg)\bigg\}\bigg),\end{array}\]
where $f_{0}$ is the underlying function (on the interval $[a,b]$) and f is the shifted-and-lifted function (on the interval $[0,y]$) defined by the equation $f(x)=f_{0}(x+a)-f_{0}(a)$ for all $x\in [0,y]$ with $y=b-a$.

The use of the integral ${\int _{0}^{y}}H({f^{\prime }})\hspace{0.1667em}\mathrm{d}\lambda $ in Note 1 hints at the possibility of distances other than $L_{1}$-norms when defining indices. One may indeed wish to deemphasize small values of ${f^{\prime }}$ and to emphasize its large values when defining indices. In a simple way, this can be achieved by taking the pth power of ${({f^{\prime }})}^{-}$, ${({f^{\prime }})}^{+}$ and $|{f^{\prime }}|$ for any $p\ge 1$. This argument leads us to the $L_{p}$-type counterpart of minimization problem (9), which can be solved along the same path as that used in the proof of Theorem 1. It is remarkable that the derivative of the minimizing function does not depend on the choice of the metric and remains equal to ${({f^{\prime }})}^{+}$. Consequently, for example, the following $L_{p}$-index of lack of increase arises:
\[\mathrm{LOI}_{p,y}(f)={\Bigg({\int _{0}^{y}}{\big({\big({f^{\prime }}\big)}^{-}\big)}^{p}\hspace{0.1667em}\mathrm{d}\lambda \Bigg)}^{1/p}.\]
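The procedure of Note 1 is short to implement. The following sketch (with an illustrative grid size N) reproduces the values of Example 1 and also evaluates the $L_{p}$-index for $p=2$:

```python
import numpy as np

def riemann_H(f0, a, b, H, N=100_000):
    """Note 1's approximation of int_0^y H(f') d(lambda), where
    f(x) = f0(x + a) - f0(a) and y = b - a; slopes are taken from f0 directly,
    since shifting and lifting do not change the increments."""
    grid = a + (b - a) * np.arange(N + 1) / N
    slopes = np.diff(f0(grid)) * N / (b - a)
    return ((b - a) / N) * np.sum(H(slopes))

a, b = -np.pi / 2, np.pi                      # the interval of Example 1
neg = lambda x: np.maximum(-x, 0.0)           # H(x) = x^-
pos = lambda x: np.maximum(x, 0.0)            # H(x) = x^+

loi_sin = riemann_H(np.sin, a, b, neg)        # approximately 1, as in Example 1
lod_sin = riemann_H(np.sin, a, b, pos)        # approximately 2, as in Example 1
assert abs(loi_sin - 1.0) < 1e-3 and abs(lod_sin - 2.0) < 1e-3

# The L_p-index of lack of increase for p = 2; the exact value is (pi/4)^{1/2}.
loi_2 = riemann_H(np.sin, a, b, lambda x: neg(x) ** 2) ** 0.5
```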
Norms other than $L_{p}$-norms can also be successfully explored, and this is important in applications, where no pth power may adequately (de)emphasize parts of ${f^{\prime }}$. The phenomenon has prominently manifested itself, for example, in Econometrics (e.g., [4, 22]; and references therein). In such cases, more complexly shaped functions are typically used, including those $H:[0,\infty )\to [0,\infty )$ with $H(0)=0$ that give rise to the class of Birnbaum–Orlicz (BO) spaces.

4 Monotonicity comparisons and normalized indices
The index values in Example 1 suggest that on the noted interval, sine is more increasing than cosine, because sine is closer to the set ${\mathcal{F}_{y}^{+}}$ than cosine is. In general, given two functions $g_{1}$ and $h_{1}$ on the interval $[0,y]$, we can say that the function $g_{1}$ is more nondecreasing than $h_{1}$ on the interval whenever $\mathrm{LOI}_{y}(g_{1})\le \mathrm{LOI}_{y}(h_{1})$. Likewise, we can say that the function $g_{2}$ is more nonincreasing than $h_{2}$ on the interval $[0,y]$ whenever $\mathrm{LOD}_{y}(g_{2})\le \mathrm{LOD}_{y}(h_{2})$. But an issue arises with these definitions because for a given pair of functions g and h, the property $\mathrm{LOI}_{y}(g)\le \mathrm{LOI}_{y}(h)$ may not be equivalent to $\mathrm{LOD}_{y}(g)\ge \mathrm{LOD}_{y}(h)$. Though it may look strange at first sight, this lack of equivalence is natural because the total variations of the functions g and h on the interval $[0,y]$ may not be equal, and in such cases, comparing non-monotonicities of g and h is not meaningful.
If, however, the total variations of g and h are equal on the interval $[0,y]$, then $\mathrm{LOI}_{y}(g)\le \mathrm{LOI}_{y}(h)$ if and only if $\mathrm{LOD}_{y}(g)\ge \mathrm{LOD}_{y}(h)$. This suggests that in order to achieve this ‘if and only if’ property in general, we need to normalize the indices, which gives rise to the following definitions

\[{\mathrm{LOI}_{y}^{\ast }}(g)=\frac{{\textstyle\int _{0}^{y}}{({g^{\prime }})}^{-}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{0}^{y}}|{g^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda }\hspace{1em}\text{and}\hspace{1em}{\mathrm{LOD}_{y}^{\ast }}(g)=\frac{{\textstyle\int _{0}^{y}}{({g^{\prime }})}^{+}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{0}^{y}}|{g^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda },\]

of the normalized indices of lack of increase and decrease, respectively. Obviously,

(16)
\[{\mathrm{LOI}_{y}^{\ast }}(g)+{\mathrm{LOD}_{y}^{\ast }}(g)=1,\]

and thus ${\mathrm{LOI}_{y}^{\ast }}(g)\le {\mathrm{LOI}_{y}^{\ast }}(h)$ if and only if ${\mathrm{LOD}_{y}^{\ast }}(g)\ge {\mathrm{LOD}_{y}^{\ast }}(h)$. Furthermore, the normalized index of lack of monotonicity is ${\mathrm{LOM}_{y}^{\ast }}(g):=2\min \{{\mathrm{LOI}_{y}^{\ast }}(g),{\mathrm{LOD}_{y}^{\ast }}(g)\}$, which we rewrite as

(17)
\[{\mathrm{LOM}_{y}^{\ast }}(g)=1-\frac{|g(y)|}{{\textstyle\int _{0}^{y}}|{g^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda }\]

using the identity $\min \{u,v\}=(u+v-|u-v|)/2$ that holds for all real numbers u and v. In terms of the original function $g_{0}$ on the interval $[a,b]$, we have

\[{\mathrm{LOI}_{[a,b]}^{\ast }}(g_{0})=\frac{{\textstyle\int _{a}^{b}}{({g^{\prime }_{0}})}^{-}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{a}^{b}}|{g^{\prime }_{0}}|\hspace{0.1667em}\mathrm{d}\lambda },\hspace{2em}{\mathrm{LOD}_{[a,b]}^{\ast }}(g_{0})=\frac{{\textstyle\int _{a}^{b}}{({g^{\prime }_{0}})}^{+}\hspace{0.1667em}\mathrm{d}\lambda }{{\textstyle\int _{a}^{b}}|{g^{\prime }_{0}}|\hspace{0.1667em}\mathrm{d}\lambda },\]

and

\[{\mathrm{LOM}_{[a,b]}^{\ast }}(g_{0})=1-\frac{|g_{0}(b)-g_{0}(a)|}{{\textstyle\int _{a}^{b}}|{g^{\prime }_{0}}|\hspace{0.1667em}\mathrm{d}\lambda }.\]

Obviously, ${\mathrm{LOI}_{[a,b]}^{\ast }}(g_{0})={\mathrm{LOI}_{y}^{\ast }}(g)$, ${\mathrm{LOD}_{[a,b]}^{\ast }}(g_{0})={\mathrm{LOD}_{y}^{\ast }}(g)$, and ${\mathrm{LOM}_{[a,b]}^{\ast }}(g_{0})={\mathrm{LOM}_{y}^{\ast }}(g)$. To illustrate the normalized indices, we continue Example 1.

Example 2.
Recall that we are dealing with the functions $g_{0}(z)=\sin (z)$ and $h_{0}(z)=\cos (z)$ on the interval $[-\pi /2,\pi ]$. We transform them into the functions $g(x)=1-\cos (x)$ and $h(x)=\sin (x)$ on the interval $[0,3\pi /2]$. From equations (14) and (15), we see that the total variations ${\int _{0}^{3\pi /2}}|{g^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda $ and ${\int _{0}^{3\pi /2}}|{h^{\prime }}|\hspace{0.1667em}\mathrm{d}\lambda $ are equal to 3, and so we have the equations:

\[\begin{array}{r@{\hskip0pt}l}\displaystyle {\mathrm{LOI}_{[-\pi /2,\pi ]}^{\ast }}(g_{0})={\mathrm{LOI}_{3\pi /2}^{\ast }}(g)=\frac{1}{3},& \displaystyle \hspace{1em}{\mathrm{LOI}_{[-\pi /2,\pi ]}^{\ast }}(h_{0})={\mathrm{LOI}_{3\pi /2}^{\ast }}(h)=\frac{2}{3},\\{} \displaystyle {\mathrm{LOD}_{[-\pi /2,\pi ]}^{\ast }}(g_{0})={\mathrm{LOD}_{3\pi /2}^{\ast }}(g)=\frac{2}{3},& \displaystyle \hspace{1em}{\mathrm{LOD}_{[-\pi /2,\pi ]}^{\ast }}(h_{0})={\mathrm{LOD}_{3\pi /2}^{\ast }}(h)=\frac{1}{3},\\{} \displaystyle {\mathrm{LOM}_{[-\pi /2,\pi ]}^{\ast }}(g_{0})={\mathrm{LOM}_{3\pi /2}^{\ast }}(g)=\frac{2}{3},& \displaystyle \hspace{1em}{\mathrm{LOM}_{[-\pi /2,\pi ]}^{\ast }}(h_{0})={\mathrm{LOM}_{3\pi /2}^{\ast }}(h)=\frac{2}{3}.\end{array}\]
The numerical procedure of Note 1 can easily be employed to calculate these normalized indices. This concludes Example 2.

Next are the properties of the normalized indices.

B1) The three indices ${\mathrm{LOI}_{y}^{\ast }}(g)$, ${\mathrm{LOD}_{y}^{\ast }}(g)$, and ${\mathrm{LOM}_{y}^{\ast }}(g)$ are normalized, that is, take values in the unit interval $[0,1]$, with the following special cases:

(a) ${\mathrm{LOI}_{y}^{\ast }}(g)=0$ if and only if ${({g^{\prime }})}^{}\equiv 0$, that is, when g is nondecreasing everywhere on $[0,y]$, and ${\mathrm{LOI}_{y}^{\ast }}(g)=1$ if and only if ${({g^{\prime }})}^{+}\equiv 0$, that is, when g is nonincreasing everywhere on $[0,y]$.

(b) ${\mathrm{LOD}_{y}^{\ast }}(g)=0$ if and only if ${({g^{\prime }})}^{+}\equiv 0$, that is, when g is nonincreasing everywhere on $[0,y]$, and ${\mathrm{LOD}_{y}^{\ast }}(g)=1$ if and only if ${({g^{\prime }})}^{}\equiv 0$, that is, when g is nondecreasing everywhere on $[0,y]$.

(c) ${\mathrm{LOM}_{y}^{\ast }}(g)=0$ if and only if g is either nondecreasing everywhere on $[0,y]$ or nonincreasing everywhere on $[0,y]$, and ${\mathrm{LOM}_{y}^{\ast }}(g)=1$ if and only if ${\mathrm{LOI}_{y}^{\ast }}(g)={\mathrm{LOD}_{y}^{\ast }}(g)$ (recall equation (16)).


B2) The three indices ${\mathrm{LOI}_{y}^{\ast }}(g)$, ${\mathrm{LOD}_{y}^{\ast }}(g)$, and ${\mathrm{LOM}_{y}^{\ast }}(g)$ are translation invariant, that is, ${\mathrm{LOI}_{y}^{\ast }}(g+\alpha )={\mathrm{LOI}_{y}^{\ast }}(g)$ for every constant $\alpha \in \mathbf{R}$, and analogously for ${\mathrm{LOD}_{y}^{\ast }}(g)$ and ${\mathrm{LOM}_{y}^{\ast }}(g)$.

B3)

(a) The three indices ${\mathrm{LOI}_{y}^{\ast }}(g)$, ${\mathrm{LOD}_{y}^{\ast }}(g)$, and ${\mathrm{LOM}_{y}^{\ast }}(g)$ are positivescale invariant, that is, ${\mathrm{LOI}_{y}^{\ast }}(\beta g)={\mathrm{LOI}_{y}^{\ast }}(g)$ for every positive constant $\beta >0$, and analogously for ${\mathrm{LOD}_{y}^{\ast }}(g)$ and ${\mathrm{LOM}_{y}^{\ast }}(g)$.

(b) Moreover, ${\mathrm{LOM}_{y}^{\ast }}(g)$ is negativescale invariant, that is, ${\mathrm{LOM}_{y}^{\ast }}(\beta g)={\mathrm{LOM}_{y}^{\ast }}(g)$ for every negative constant $\beta <0$, and thus, in general, ${\mathrm{LOM}_{y}^{\ast }}(\beta g)={\mathrm{LOM}_{y}^{\ast }}(g)$ for every real constant $\beta \ne 0$ (recall equation (17)).


B4) ${\mathrm{LOD}_{y}^{\ast }}(g)={\mathrm{LOI}_{y}^{\ast }}(-g)$, and thus ${\mathrm{LOD}_{y}^{\ast }}(\beta g)={\mathrm{LOI}_{y}^{\ast }}(g)$ for every negative constant $\beta <0$.
We next use the above indices to introduce three new orderings:

C1) The function g is more nondecreasing than h on the interval $[0,y]$, denoted by $g\ge _{I,y}h$, if and only if ${\mathrm{LOI}_{y}^{\ast }}(g)\le {\mathrm{LOI}_{y}^{\ast }}(h)$.

C2) The function g is more nonincreasing than h on the interval $[0,y]$, denoted by $g\ge _{D,y}h$, if and only if ${\mathrm{LOD}_{y}^{\ast }}(g)\le {\mathrm{LOD}_{y}^{\ast }}(h)$.

C3) The function g is more monotonic than h on the interval $[0,y]$, denoted by $g\ge _{M,y}h$, if and only if ${\mathrm{LOM}_{y}^{\ast }}(g)\le {\mathrm{LOM}_{y}^{\ast }}(h)$.
We see that on the interval $[0,y]$, the function g is more nondecreasing than h if and only if the function g is less nonincreasing than h. In other words, $g\ge _{I,y}h$ is equivalent to $g\le _{D,y}h$, which we have achieved by introducing the normalized indices.
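The orderings C1)–C3) can be checked numerically for the functions of Examples 1 and 2. A minimal sketch with finite-difference increments (the grid size is an illustrative choice):

```python
import numpy as np

def normalized_indices(f0, a, b, N=100_000):
    """LOI*, LOD*, LOM* of f0 on [a, b], with f0' replaced by finite differences."""
    d = np.diff(f0(a + (b - a) * np.arange(N + 1) / N))
    tv = np.abs(d).sum()                      # approximates int_a^b |f0'| d(lambda)
    loi = np.maximum(-d, 0.0).sum() / tv
    lod = np.maximum(d, 0.0).sum() / tv
    return loi, lod, 2.0 * min(loi, lod)

a, b = -np.pi / 2, np.pi
loi_g, lod_g, lom_g = normalized_indices(np.sin, a, b)   # about (1/3, 2/3, 2/3)
loi_h, lod_h, lom_h = normalized_indices(np.cos, a, b)   # about (2/3, 1/3, 2/3)

# C1: sine is more nondecreasing than cosine on the interval ...
assert loi_g <= loi_h
# ... equivalently (C2), cosine is more nonincreasing than sine:
assert lod_h <= lod_g
# C3: neither is more monotonic than the other, LOM* being 2/3 for both.
assert abs(lom_g - lom_h) < 1e-6
```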
5 Stricter notion of comparison: a note
One of the fundamental notions of ordering random variables is that of first-order stochastic dominance (e.g., [6, 12, 13]). Similarly to this notion, our earlier introduced orderings can be strengthened by first noting that the integral ${\int _{0}^{y}}{({g^{\prime }})}^{-}\hspace{0.1667em}\mathrm{d}\lambda $ is equal to ${\int _{0}^{\infty }}{S_{y}^{-}}(z\mid g)\hspace{0.1667em}\mathrm{d}z$, where the function
\[{S_{y}^{-}}(z\mid g)=\lambda \big\{x\in [0,y]\hspace{2.5pt}:\hspace{2.5pt}{\big({g^{\prime }}\big)}^{-}(x)>z\big\}\]
counts the ‘time’ that the function ${({g^{\prime }})}^{-}$ spends above the threshold z during the ‘time’ period $[0,y]$. Likewise, we define the ‘plus’ version ${S_{y}^{+}}(z\mid g)$. This leads us to the following definitions of ordering functions according to their monotonicity.
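The identity ${\int _{0}^{y}}{({g^{\prime }})}^{-}\hspace{0.1667em}\mathrm{d}\lambda ={\int _{0}^{\infty }}{S_{y}^{-}}(z\mid g)\hspace{0.1667em}\mathrm{d}z$ is easy to verify numerically. A minimal sketch for the function $g_{0}=\sin $ of Example 1 (grid sizes are illustrative):

```python
import numpy as np

a, b = -np.pi / 2, np.pi
N = 50_000
grid = a + (b - a) * np.arange(N + 1) / N
# (g0')^- approximated by the negative parts of finite-difference slopes of g0 = sin.
neg_slopes = np.maximum(-np.diff(np.sin(grid)) * N / (b - a), 0.0)

def S_minus(z):
    """S_y^-(z | g): the 'time' that (g')^- spends above level z during [0, y]."""
    return ((b - a) / N) * np.count_nonzero(neg_slopes > z)

# Layer-cake check: int_0^infty S^-(z) dz should equal int (g0')^- d(lambda) = 1.
zs = np.linspace(0.0, 1.0, 2_001)        # here (g0')^- = (cos)^- never exceeds 1
layer_cake = np.mean([S_minus(z) for z in zs])
assert abs(layer_cake - 1.0) < 1e-2
```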
Obviously, if $g\ge _{\mathit{SI},y}h$, then $g\ge _{I,y}h$, and if $g\ge _{\mathit{SD},y}h$, then $g\ge _{D,y}h$.

6 Assessing lack of positivity in signed measures
We now take a path in the direction of general Measure Theory. Namely, let Ω be a set equipped with a sigma-algebra, and let $\mathcal{M}$ denote the set of all (signed) measures ν defined on the sigma-algebra. Furthermore, let ${\mathcal{M}}^{+}\subset \mathcal{M}$ be the subset of all positive measures. Given a signed measure $\nu \in \mathcal{M}$, we define its index of lack of positivity (LOP) by the equation
\[\mathrm{LOP}(\nu )=\inf \big\{\ \nu \mu \ \hspace{2.5pt}:\hspace{2.5pt}\mu \in {\mathcal{M}}^{+}\big\},\]
where $\| \cdot \| $ denotes the total variation norm. Specifically, with $({\varOmega }^{-},{\varOmega }^{+})$ denoting a Hahn decomposition of Ω, let $({\nu }^{-},{\nu }^{+})$ be the Jordan decomposition of ν. Note that ${\nu }^{-}$ and ${\nu }^{+}$ are elements of ${\mathcal{M}}^{+}$. The variation of ν is $|\nu |={\nu }^{-}+{\nu }^{+}$, and its total variation is $\| \nu \| =|\nu |(\varOmega )$. The following theorem provides an actionable, and crucial for our considerations, reformulation of the index $\mathrm{LOP}(\nu )$.

Theorem 2.

The infimum in the definition of $\mathrm{LOP}(\nu )$ is attained at the measure $\mu ={\nu }^{+}$, and only at this measure, and thus

(18)
\[\mathrm{LOP}(\nu )=\| {\nu }^{-}\| .\]

Proof.
Since ${\nu }^{+}\in {\mathcal{M}}^{+}$, we have the bound $\mathrm{LOP}(\nu )\le \| \nu -{\nu }^{+}\| $, which can be rewritten as $\mathrm{LOP}(\nu )\le \| {\nu }^{-}\| $. To prove the opposite bound $\mathrm{LOP}(\nu )\ge \| {\nu }^{-}\| $, we proceed as follows. For any $\mu \in {\mathcal{M}}^{+}$, we have the bound

(19)
\[\| \nu -\mu \| \ge |\nu -\mu |\big({\varOmega }^{-}\big).\]

Since $\nu ={\nu }^{+}-{\nu }^{-}$ and ${\nu }^{+}(A)=0$ for every $A\subset {\varOmega }^{-}$, the right-hand side of bound (19) is equal to $({\nu }^{-}+\mu )({\varOmega }^{-})$, which is not smaller than ${\nu }^{-}({\varOmega }^{-})$. The latter is, by definition, equal to $\| {\nu }^{-}\| $. This establishes the bound $\mathrm{LOP}(\nu )\ge \| {\nu }^{-}\| $ and completes the proof of equation (18). We still need to show that $\mu ={\nu }^{+}$ is the only measure $\mu \in {\mathcal{M}}^{+}$ such that the equation

(20)
\[\| \nu -\mu \| =\| {\nu }^{-}\| \]

holds. Note that $\| {\nu }^{-}\| ={\nu }^{-}({\varOmega }^{-})$. Since

(21)
\[\| \nu -\mu \| \ge \big({\nu }^{-}+\mu \big)\big({\varOmega }^{-}\big)+\big|{\nu }^{+}-\mu \big|\big({\varOmega }^{+}\big),\]

in order to have equation (20), the right-hand side of inequality (21) must be equal to ${\nu }^{-}({\varOmega }^{-})$. This can happen only when $\mu ({\varOmega }^{-})=0$ and $|{\nu }^{+}-\mu |({\varOmega }^{+})=0$, with the former equation implying that the support of μ must be in ${\varOmega }^{+}$, and the latter equation implying that μ must be equal to ${\nu }^{+}$ on ${\varOmega }^{+}$. Hence, $\mu ={\nu }^{+}$. This finishes the proof of Theorem 2. □

Similarly to the LOP index, the index of lack of negativity (LON) of $\nu \in \mathcal{M}$ is given by the equation

(22)
\[\mathrm{LON}(\nu )=\| {\nu }^{+}\| ,\]

and the corresponding index of lack of sign (LOS) is

(23)
\[\mathrm{LOS}(\nu )=2\min \big\{\mathrm{LOP}(\nu ),\mathrm{LON}(\nu )\big\}.\]
The reason for doubling the minimum on the right-hand side of equation (23) is the same as in the more specialized cases discussed earlier. Namely, due to the equation $\| {\nu }^{-}\| +\| {\nu }^{+}\| =\| \nu \| $, we have that $\min \{\| {\nu }^{-}\| ,\| {\nu }^{+}\| \}$ does not exceed $\| \nu \| /2$. Hence, for LOS to always be between 0 and $\| \nu \| $, just like LOP and LON are, we need to double the minimum. Finally, we introduce the normalized indices
\[{\mathrm{LOP}}^{\ast }(\nu )=\frac{\| {\nu }^{-}\| }{\| \nu \| },\hspace{2em}{\mathrm{LON}}^{\ast }(\nu )=\frac{\| {\nu }^{+}\| }{\| \nu \| },\]
and
\[{\mathrm{LOS}}^{\ast }(\nu )=2\min \big\{{\mathrm{LOP}}^{\ast }(\nu ),{\mathrm{LON}}^{\ast }(\nu )\big\},\]
whose values are always in the unit interval $[0,1]$.

7 Conclusion
In this paper, we have introduced indices that, in a natural way, quantify the lack of increase, decrease, and monotonicity of functions, as well as the lack of positivity, negativity, and sign-constancy in signed measures. In addition to being of theoretical interest, this research topic also has practical implications, and for the latter reason, we have also introduced a simple and convenient numerical procedure for calculating the indices without resorting to frequently unwieldy closed-form expressions. The indices satisfy a number of natural properties, and they also facilitate the ranking of functions according to their lack of monotonicity. Relevant applications in Insurance, Finance, and Economics have been pointed out, and some of them discussed in greater detail.