1 Basic notions and a consistent criterion of hypotheses for a countable family of probability measures
Let $(E,S)$ be a measurable space with a given family of probability measures: $\{\mu _{i},i\in I\}$.
Definition 1.
The family $\{\mu _{i},\hspace{2.5pt}i\in I\}$ of probability measures is called orthogonal (singular) if $\mu _{i}$ and $\mu _{j}$ are orthogonal for each $i\ne j$.
Definition 2.
The family $\{\mu _{i},\hspace{2.5pt}i\in I\}$ of probability measures is called separable if there exists a family of S-measurable sets $\{X_{i},\hspace{2.5pt}i\in I\}$ such that the following relations are fulfilled:
-
(1) $\forall i\in I$ $\forall j\in I$ $\mu _{i}(X_{j})=\left\{\begin{array}{l@{\hskip10.0pt}l}1\hspace{1em}& \text{if}\hspace{2.5pt}i=j,\\{} 0\hspace{1em}& \text{if}\hspace{2.5pt}i\ne j.\end{array}\right.$
-
(2) $\forall i\in I$ $\forall j\in I$ $\operatorname{card}(X_{i}\cap X_{j})<c$ if $i\ne j$, where c denotes the power of the continuum.
Definition 3.
The family $\{\mu _{i},\hspace{2.5pt}i\in I\}$ of probability measures is called weakly separable if there exists a family of S-measurable sets $\{X_{i},\hspace{2.5pt}i\in I\}$ such that relation (1) of Definition 2 is fulfilled.
Definition 4.
The family $\{\mu _{i},\hspace{2.5pt}i\in I\}$ of probability measures is called strongly separable if there exists a disjoint family of S-measurable sets $\{X_{i},\hspace{2.5pt}i\in I\}$ such that the following relation is fulfilled: $\forall i\in I$ $\mu _{i}(X_{i})=1$.
Remark 1.
Strong separability implies separability, separability implies weak separability, and weak separability implies orthogonality, but not vice versa.
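For instance, the last implication can be checked directly: if $\{X_{i},\hspace{2.5pt}i\in I\}$ witnesses weak separability, then for any $i\ne j$
\[\mu _{i}(X_{i})=1,\hspace{2em}\mu _{j}(X_{i})=0,\]
so the set $X_{i}$ separates $\mu _{i}$ from $\mu _{j}$, that is, $\mu _{i}$ and $\mu _{j}$ are orthogonal. The failure of the converse implications is illustrated by Examples 2–4 below.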
Example 1.
Let $E=[0,1]\times [0,1]$, S be a Borel σ-algebra of subsets of E. Take the S-measurable sets $X_{i}=\{(x,y)\mid 0\le x\le 1,\hspace{0.2778em}y=i\}$, $i\in [0,1]$, and assume that $L_{i}$ are the linear Lebesgue probability measures on $X_{i}$. Then the family $\{L_{i},\hspace{2.5pt}i\in [0,1]\}$ is strongly separable.
Example 2.
Let $E=[0,1]\times [0,1]$, S be a Borel σ-algebra of subsets of E. Take the S-measurable sets
\[X_{i}=\left\{\begin{array}{l@{\hskip10.0pt}l}\{(x,y)\mid 0\le x\le 1,\hspace{0.2778em}y=i\}\hspace{1em}& \text{if}\hspace{2.5pt}i\in [0,1],\\{} \{(x,y)\mid x=i-2,\hspace{0.2778em}0\le y\le 1\}\hspace{1em}& \text{if}\hspace{2.5pt}i\in [2,3].\end{array}\right.\]
Let $L_{i}$ be the linear Lebesgue probability measures on $X_{i}$. Then the family $\{L_{i},\hspace{2.5pt}i\in [0,1]\cup [2,3]\}$ is separable (any two distinct sets $X_{i}$ and $X_{j}$ have at most one common point, so condition (2) of Definition 2 holds), but it is not strongly separable.
Example 3.
Let $E=[0,1]\times [0,1]\times [0,1]$, S be a Borel σ-algebra of subsets of E. Take the S-measurable sets:
\[X_{i}=\left\{\begin{array}{l@{\hskip10.0pt}l}\{(x,y,z)\mid 0\le x\le 1,\hspace{0.2778em}0\le y\le 1,\hspace{0.2778em}z=i\}\hspace{1em}& \text{if}\hspace{2.5pt}i\in [0,1],\\{} \{(x,y,z)\mid x=i-2,\hspace{0.2778em}0\le y\le 1,\hspace{0.2778em}0\le z\le 1\}\hspace{1em}& \text{if}\hspace{2.5pt}i\in [2,3],\\{} \{(x,y,z)\mid 0\le x\le 1,\hspace{0.2778em}y=i-4,\hspace{0.2778em}0\le z\le 1\}\hspace{1em}& \text{if}\hspace{2.5pt}i\in [4,5].\end{array}\right.\]
Let $L_{i}$ be the planar Lebesgue probability measures on $X_{i}$. Then the family $\{L_{i},\hspace{2.5pt}i\in [0,1]\cup [2,3]\cup [4,5]\}$ is weakly separable but not separable (for instance, for $i\in [0,1]$ and $j\in [2,3]$ the planes $X_{i}$ and $X_{j}$ intersect in a segment, which has cardinality c).
Example 4.
Let $E=[0,1]\times [0,1]$, S be a Borel σ-algebra of subsets of E. Take the S-measurable sets $X_{i}=\{(x,y)\mid 0\le x\le 1,\hspace{0.2778em}y=i\}$, $i\in [0,1]$.
Let $L_{i}$ be the linear Lebesgue probability measures on $X_{i}$ and $L_{0}$ be the planar Lebesgue probability measure on $E=[0,1]\times [0,1]$. Then the family $\{L_{0}\}\cup \{L_{i},\hspace{2.5pt}i\in [0,1]\}$ is orthogonal, but not weakly separable.
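A sketch of why weak separability fails here: if a Borel set $X_{0}$ satisfied $L_{i}(X_{0})=0$ for every $i\in [0,1]$, then by Fubini's theorem
\[L_{0}(X_{0})=\int _{0}^{1}L_{i}\big(X_{0}\cap ([0,1]\times \{i\})\big)\hspace{0.1667em}di=0,\]
so no S-measurable set can have $L_{0}$-measure one and $L_{i}$-measure zero for all $i\in [0,1]$, and the relation required in Definition 3 cannot hold for the measure $L_{0}$.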
Definition 5.
By a hypothesis we mean any assumption that determines the form of the distribution of the sample.
Let H be a set of hypotheses and $\mathrm{B}(\mathrm{H})$ be a σ-algebra of subsets of H which contains all finite subsets of H.
Definition 6.
The family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ is said to admit a consistent criterion of a hypothesis if there exists at least one measurable map $\delta :(E,S)\to (\mathrm{H},\mathrm{B}(\mathrm{H}))$ such that $\mu _{H}(\{x\mid \delta (x)=H\})=1$ for all $H\in \mathrm{H}$.
Definition 8.
The family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ is said to admit a consistent criterion of any parametric function if for any real bounded measurable function $g:(\mathrm{H},\mathrm{B}(\mathrm{H}))\to \mathbb{R}$ there exists at least one measurable function $f:(E,S)\to \mathbb{R}$ such that $\mu _{H}(\{x\mid f(x)=g(H)\})=1$, for all $H\in \mathrm{H}$.
Definition 9.
The family of probability measures $\{\mu _{H},\hspace{2.5pt}H\in \mathrm{H}\}$ is said to admit an unbiased criterion of any parametric function if for any real bounded measurable function $g:(\mathrm{H},\mathrm{B}(\mathrm{H}))\to \mathbb{R}$ there exists at least one measurable function $\beta :(E,S)\to \mathbb{R}$, such that $\int _{E}\beta (x)\mu _{H}(dx)=g(H)$ for all $H\in \mathrm{H}$.
Remark 2.
If M is a family of probability measures admitting a consistent criterion of hypotheses, then it is clear that M also admits a consistent criterion of any parametric function and an unbiased criterion of any parametric function.
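Indeed, if δ is a consistent criterion of hypotheses and $g:(\mathrm{H},\mathrm{B}(\mathrm{H}))\to \mathbb{R}$ is a bounded measurable function, then the function $f=g\circ \delta $ satisfies
\[\mu _{H}\big(\{x\mid f(x)=g(H)\}\big)\ge \mu _{H}\big(\{x\mid \delta (x)=H\}\big)=1\hspace{1em}\text{and}\hspace{1em}\int _{E}f(x)\hspace{0.1667em}\mu _{H}(dx)=g(H)\]
for all $H\in \mathrm{H}$, so f is simultaneously a consistent and an unbiased criterion of the parametric function g.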
Theorem 1.
The family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits a consistent criterion δ of a hypothesis if and only if the probabilities of errors of all kinds are equal to zero for the criterion δ.
Proof.
Necessity. Since the family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits a consistent criterion of a hypothesis, there exists a measurable map $\delta :(E,S)\to (\mathrm{H},\mathrm{B}(\mathrm{H}))$ such that $\mu _{H}(\{x\mid \delta (x)=H\})=1$ for all $H\in \mathrm{H}$. It follows that $\alpha _{H}(\delta )=\mu _{H}(\{x\mid \delta (x)\ne H\})=0$ for all $H\in \mathrm{H}$.
Sufficiency. Since the probabilities of errors of all kinds are equal to zero, that is, $\alpha _{H}(\delta )=\mu _{H}(\{x\mid \delta (x)\ne H\})=0$ for all $H\in \mathrm{H}$, we have
\[\mu _{H}\big(\{x\mid \delta (x)={H^{\prime }}\}\big)=0\]
for any $H\ne {H^{\prime }}$.
On the other hand, $\{x\mid \delta (x)=H\}\cup \{x\mid \delta (x)\ne H\}=E$, and therefore $\mu _{H}(\{x\mid \delta (x)=H\})=1$ for all $H\in \mathrm{H}$.
Therefore δ is a consistent criterion of a hypothesis. Theorem 1 is proved. □
Theorem 2.
Let $\mathrm{H}=\{H_{1},H_{2},\dots ,H_{n},\dots \}$ be the set of hypotheses. The family of probability measures $\{\mu _{H_{i}},\hspace{2.5pt}i\in \mathbb{N}\}$, $\mathbb{N}=\{1,2,\dots ,n,\dots \}$, admits a consistent criterion of hypotheses if and only if the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is strongly separable.
Proof.
Necessity. Since the family $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses, there exists a measurable map δ of the space $(E,S)$ to $(\mathrm{H},\mathrm{B}(\mathrm{H}))$ such that $\mu _{H_{i}}(\{x\mid \delta (x)=H_{i}\})=1$, $i\in \mathbb{N}$. Let $X_{i}=\{x\mid \delta (x)=H_{i}\}$; then it is obvious that $X_{i}\cap X_{j}=\varnothing $ for all $i\ne j$ and $\mu _{H_{i}}(X_{i})=1$, $\forall i\in \mathbb{N}$. Therefore, the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is strongly separable.
Sufficiency. Since the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is strongly separable, there exist pairwise disjoint S-measurable sets $X_{i}$, $i\in \mathbb{N}$, such that $\mu _{H_{i}}(X_{i})=1$, $\forall i\in \mathbb{N}$.
Define δ as the mapping $(E,S)\to (\mathrm{H},\mathrm{B}(\mathrm{H}))$ for which $\delta (x)=H_{i}$ if $x\in X_{i}$, $i\in \mathbb{N}$ (and, say, $\delta (x)=H_{1}$ for $x\notin \bigcup _{i\in \mathbb{N}}X_{i}$). We have $\{x\mid \delta (x)=H_{i}\}\supseteq X_{i}$ and hence $\mu _{H_{i}}(\{x\mid \delta (x)=H_{i}\})=1$, $\forall i\in \mathbb{N}$. Therefore δ is a consistent criterion of hypotheses. Theorem 2 is proved. □
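As a consistency check against Theorem 1, note that for the criterion δ constructed above the probabilities of errors of all kinds vanish:
\[\alpha _{H_{i}}(\delta )=\mu _{H_{i}}\big(\{x\mid \delta (x)\ne H_{i}\}\big)\le \mu _{H_{i}}(E-X_{i})=0,\hspace{1em}i\in \mathbb{N}.\]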
Theorem 3.
Let $\mathrm{H}=\{H_{1},H_{2},\dots ,H_{n},\dots \}$ and the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ be separable or weakly separable. Then the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses.
Proof.
Since the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is separable or weakly separable, there exists a family $X_{1},X_{2},\dots ,X_{n},\dots $ of S-measurable sets such that
\[\mu _{H_{i}}(X_{j})=\left\{\begin{array}{l@{\hskip10.0pt}l}1,\hspace{1em}& \text{if}\hspace{2.5pt}i=j,\\{} 0,\hspace{1em}& \text{if}\hspace{2.5pt}i\ne j.\end{array}\right.\]
Let us consider the sets:
\[\begin{array}{r@{\hskip0pt}l}& \displaystyle \overline{X}_{1}=X_{1}-X_{1}\cap \bigg(\bigcup \limits_{k\ne 1}X_{k}\bigg)\\{} & \displaystyle \overline{X}_{2}=X_{2}-X_{2}\cap \bigg(\bigcup \limits_{k\ne 2}X_{k}\bigg)\\{} & \displaystyle \hspace{1em}\dots \\{} & \displaystyle \overline{X}_{n}=X_{n}-X_{n}\cap \bigg(\bigcup \limits_{k\ne n}X_{k}\bigg)\\{} & \displaystyle \hspace{1em}\dots \end{array}\]
It is obvious that $\{\overline{X}_{1},\overline{X}_{2},\dots ,\overline{X}_{n},\dots \}$ is a disjoint family of S-measurable sets and, since $\mu _{H_{i}}(X_{i}\cap \bigcup _{k\ne i}X_{k})\le \sum _{k\ne i}\mu _{H_{i}}(X_{k})=0$, we have $\mu _{H_{i}}(\overline{X}_{i})=1$, $\forall i\in \mathbb{N}$. Therefore, the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is strongly separable and $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses by Theorem 2. Theorem 3 is proved. □
Theorem 4.
Let $\mathrm{H}=\{H_{1},H_{2},\dots ,H_{n},\dots \}$ and let the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$, $\mathbb{N}=\{1,2,\dots ,n,\dots \}$, be orthogonal (singular). Then the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses.
Proof.
The singularity of the probability measures implies the existence of a family $\{X_{ik}\}$ of S-measurable sets such that for any $i\ne k$ we have $\mu _{H_{k}}(X_{ik})=0$ and $\mu _{H_{i}}(E-X_{ik})=0$.
Let us consider the sets $X_{i}=\bigcup _{k\ne i}(E-X_{ik})$, then
\[\mu _{H_{i}}(X_{i})=\mu _{H_{i}}\bigg(\bigcup \limits_{k\ne i}(E-X_{ik})\bigg)\le \sum \limits_{k\ne i}\mu _{H_{i}}(E-X_{ik})=0.\]
Therefore, $\mu _{H_{i}}(X_{i})=0$ and $\mu _{H_{i}}(E-X_{i})=1$. On the other hand, for $k\ne i$ we have $\mu _{H_{k}}(E-X_{i})=0$. This means that the family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is weakly separable. By Theorem 3 this family of probability measures admits a consistent criterion of hypotheses. Theorem 4 is proved. □
2 Consistent criteria in a Banach space
Let ${M}^{\sigma }$ be the real linear space of all finite signed (alternating) measures on S.
Definition 10.
A linear subset $M_{B}\subset {M}^{\sigma }$ is called a Banach space of measures if:
-
(1) a norm can be defined on $M_{B}$ so that $M_{B}$ will be a Banach space with respect to this norm, and for any orthogonal measures $\mu ,\nu \in M_{B}$ and any real number $\lambda \ne 0$ the inequality $\| \mu +\lambda \nu \| \ge \| \mu \| $ is fulfilled;
-
(2) if $\mu \in M_{B}$ and $f(x)$ is a real measurable function with $|\hspace{0.1667em}f(x)|\le 1$, then $\nu _{f}(A)=\int _{A}f(x)\hspace{0.1667em}\mu (dx)$, $A\in S$, defines a measure $\nu _{f}\in M_{B}$ and $\| \nu _{f}\| \le \| \mu \| $.
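For instance, taking for f in condition (2) the indicator function $I_{B}$ of a set $B\in S$ shows that the restriction $\mu (\cdot \cap B)$ of any measure $\mu \in M_{B}$ again belongs to $M_{B}$ and
\[\big\| \mu (\cdot \cap B)\big\| \le \| \mu \| .\]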
The construction of a Banach space of measures is studied in [8]. The following theorem has also been proved in that paper:
Theorem 5.
Let $M_{B}$ be a Banach space of measures. Then in $M_{B}$ there exists a family of pairwise orthogonal probability measures $\{\mu _{i},\hspace{0.2778em}i\in I\}$ such that
\[M_{B}=\bigoplus _{i\in I}M_{B}(\mu _{i}),\]
where $M_{B}(\mu _{i})$ is the Banach space of elements ν of the form
\[\nu (A)=\int _{A}f(x)\hspace{0.1667em}\mu _{i}(dx),\hspace{1em}A\in S,\]
with the corresponding norm.
Let $\{H_{i}\}$ be a countable family of hypotheses. Denote by $F=F(M_{B})$ the set of real functions f for which $\int _{E}f(x)\hspace{0.1667em}\mu _{H_{i}}(dx)$ is defined for all $\mu _{H_{i}}\in M_{B}$, where $M_{B}=\bigoplus _{i\in \mathbb{N}}M_{B}(\mu _{H_{i}})$.
Theorem 6.
Let $M_{B}=\bigoplus _{i\in \mathbb{N}}M_{B}(\mu _{H_{i}})$ be a Banach space of measures. The family of probability measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses if and only if the correspondence $f\to l_{f}$ defined by the equality
\[\int _{E}f(x)\hspace{0.1667em}\mu _{H_{i}}(dx)=l_{f}(\mu _{H_{i}}),\hspace{1em}i\in \mathbb{N},\]
is one-to-one. Here $l_{f}$ is a linear continuous functional on $M_{B}$, $f\in F(M_{B})$.
Proof.
Sufficiency. For $f\in F(M_{B})$ we define the linear continuous functional $l_{f}$ by the equality $\int _{E}f(x)\hspace{0.1667em}\mu _{H}(dx)=l_{f}(\mu _{H})$. Denote by $I_{f}$ a countable subset of $\mathbb{N}$ such that $\int _{E}f(x)\hspace{0.1667em}\mu _{H_{i}}(dx)=0$ for $i\notin I_{f}$. Let us consider the functional $l_{f_{H_{i}}}$ on $M_{B}(\mu _{H_{i}})$ which corresponds to it. Then for $\mu _{H_{1}},\mu _{H_{2}}\in M_{B}(\mu _{H_{i}})$ we have:
\[\int _{E}f_{H_{1}}(x)\hspace{0.1667em}\mu _{H}(dx)=l_{f_{H_{1}}}(\mu _{H_{2}})=\int _{E}f_{1}(x)f_{2}(x)\hspace{0.1667em}\mu _{H_{i}}(dx)=\int f_{H_{1}}(x)\hspace{0.1667em}\mu _{H_{i}}(dx).\]
Therefore $f_{H_{1}}=f_{1}$ a.e. with respect to the measure $\mu _{H_{i}}$. Let $f_{H_{i}}>0$ a.e. with respect to the measure $\mu _{H_{i}}$ and
\[\int _{E}f_{H_{i}}(x)\hspace{0.1667em}\mu _{H_{i}}(dx)<\infty ,\hspace{2em}\mu _{H_{i}}(C)=\int _{C}f_{H_{i}}(x)\hspace{0.1667em}\mu _{H_{i}}(dx),\]
then
\[\int _{E}f_{H_{i}}(x)\hspace{0.1667em}\mu _{H_{j}}(dx)=l_{f_{H_{i}}}(\mu _{H_{j}})=0,\hspace{1em}\forall j\ne i.\]
Denote $C_{H_{i}}=\{x\mid f_{H_{i}}(x)>0\}$; then $\int _{E}f_{H_{i}}(x)\hspace{0.1667em}\mu _{H_{j}}(dx)=l_{f_{H_{i}}}(\mu _{H_{j}})=0$ for all $j\ne i$. Hence it follows that $\mu _{H_{j}}(C_{H_{i}})=0$ for all $j\ne i$. On the other hand, $\mu _{H_{i}}(E-C_{H_{i}})=0$. Therefore the family $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is weakly separable:
\[\mu _{H_{i}}(C_{H_{j}})=\left\{\begin{array}{l@{\hskip10.0pt}l}1,\hspace{1em}& \text{if}\hspace{2.5pt}i=j,\\{} 0,\hspace{1em}& \text{if}\hspace{2.5pt}i\ne j.\end{array}\right.\]
Let us consider the sets $\overline{C}_{H_{i}}=C_{H_{i}}\setminus \big(C_{H_{i}}\cap \bigcup _{k\ne i}C_{H_{k}}\big)$. It is obvious that $\{\overline{C}_{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$ is a disjoint family of S-measurable sets and, since $\mu _{H_{i}}(\bigcup _{k\ne i}C_{H_{k}})\le \sum _{k\ne i}\mu _{H_{i}}(C_{H_{k}})=0$, we have $\mu _{H_{i}}(\overline{C}_{H_{i}})=1$, $\forall i\in \mathbb{N}$. Let us define a mapping $\delta :(E,S)\to (\mathrm{H},\mathrm{B}(\mathrm{H}))$ such that $\delta (x)=H_{i}$ for $x\in \overline{C}_{H_{i}}$, $\forall i\in \mathbb{N}$. We have $\mu _{H_{i}}(\{x\mid \delta (x)=H_{i}\})=1$, $\forall i\in \mathbb{N}$. Therefore δ is a consistent criterion of hypotheses.
Necessity. Since the family of probability measures $\{\mu _{H_{i}},\hspace{2.5pt}i\in \mathbb{N}\}$ admits a consistent criterion of hypotheses, by Theorem 2 this family is strongly separable; hence there exist S-measurable sets $X_{i}$, $i\in \mathbb{N}$, such that
\[\mu _{H_{i}}(X_{j})=\left\{\begin{array}{l@{\hskip10.0pt}l}1,\hspace{1em}& \text{if}\hspace{2.5pt}i=j,\\{} 0,\hspace{1em}& \text{if}\hspace{2.5pt}i\ne j.\end{array}\right.\]
To the function $I_{X_{i}}\in F(M_{B})$ we put into correspondence the linear continuous functional $l_{I_{X_{i}}}$ by the formula:
\[\int _{E}I_{X_{i}}(x)\hspace{0.1667em}\mu _{H_{i}}(dx)=l_{I_{X_{i}}}(\mu _{H_{i}})=\| \mu _{H_{i}}\| _{M_{B}(\mu _{H_{i}})}.\]
To the function
\[f_{H_{1}}(x)=f_{1}(x)I_{X_{i}}(x)\]
we put into correspondence the linear continuous functional $l_{f_{H_{1}}}$. Then for any $\mu _{H_{2}}\in M_{B}(\mu _{H_{i}})$
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \int _{E}f_{H_{1}}(x)\hspace{0.1667em}\mu _{H_{2}}(dx)& \displaystyle =\int _{E}f_{1}(x)I_{X_{i}}(x)\hspace{0.1667em}\mu _{H_{2}}(dx)=\int _{E}f(x)f_{1}(x)I_{X_{i}}(x)\hspace{0.1667em}\mu _{H_{i}}(dx)\\{} & \displaystyle =l_{f_{H_{1}}}(\mu _{H_{2}})=\| \mu _{H_{2}}\| _{M_{B}(\mu _{H_{i}})}.\end{array}\]
Let $\varSigma =\{l_{f}\}$ be the collection of extensions of the functional $l_{f_{H_{1}}}:M_{B}(\mu _{H_{1}})\to \mathbb{R}$ satisfying the condition $l_{f}\le p$ on those subspaces where they are defined.
Let us introduce a partial ordering on Σ by setting $l_{f_{1}}<l_{f_{2}}$ if $l_{f_{2}}$ is defined on a larger set than $l_{f_{1}}$ and $l_{f_{1}}(\mu )=l_{f_{2}}(\mu )$ wherever both of them are defined.
Let $\{l_{f_{i}}\}_{i\in I}$ be a linearly ordered subset of Σ and let $M_{B}(\mu _{H_{i}})$ be the subspace on which $l_{f_{H_{i}}}$ is defined. Define $l_{f}$ on $\bigoplus _{i\in I}M_{B}(\mu _{H_{i}})$ by setting $l_{f}(\mu )=l_{f_{H_{i}}}(\mu )$ if $\mu \in M_{B}(\mu _{H_{i}})$.
It is obvious that $l_{f_{H_{i}}}<l_{f}$. Since any linearly ordered subset of Σ has an upper bound, by Zorn's lemma Σ contains a maximal element λ defined on some subspace ${X^{\prime }}$ and satisfying the condition $\lambda (x)\le p(x)$ for $x\in {X^{\prime }}$. But ${X^{\prime }}$ must coincide with the entire space $M_{B}$, because otherwise we could extend λ to a wider subspace by adding, as above, one more dimension.
This would contradict the maximality of λ, and hence ${X^{\prime }}=M_{B}$. Therefore the extension of the functional is defined everywhere.
If we put the linear continuous functional $l_{f}$ into correspondence with the corresponding function $f\in F(M_{B})$, then we obtain
\[\int _{E}f(x)\mu _{H}(dx)=\| \mu _{H}\| =\sum \limits_{i\in \mathbb{N}}\| \mu _{H_{i}}\| _{M_{B}(\mu _{H_{i}})},\]
where $\mu _{H}=\sum _{i\in \mathbb{N}}\int _{E}g_{H_{i}}\hspace{0.1667em}\mu _{H_{i}}(dx)$. Theorem 6 is proved. □
Remark 3.
It follows from the theorem just proved that the correspondence indicated above assigns to each linear continuous functional $l_{f}$ some function $f\in F(M_{B})$. If in $F(M_{B})$ we identify functions coinciding almost everywhere with respect to the measures $\{\mu _{H_{i}},\hspace{0.2778em}i\in \mathbb{N}\}$, then the correspondence becomes bijective.
In what follows $\mathrm{B}(E,S)$ will always denote the vector space formed by all real bounded measurable functions on $(E,S)$ with the natural order. It is an $(AN)$-space with unit, the unit being the function identically equal to one on E (see [5]). Let ${\mathrm{B}^{\prime }}(E,S)$ denote the topological conjugate space of $\mathrm{B}(E,S)$, which is an order-complete Banach lattice. The elements of ${\mathrm{B}^{\prime }}(E,S)$ are called finitely additive measures on $(E,S)$, and the canonical bilinear form which puts $\mathrm{B}(E,S)$ and ${\mathrm{B}^{\prime }}(E,S)$ in duality is denoted by
\[\langle f,\mu \rangle =\mu (f)=\int _{E}f(x)\hspace{0.1667em}\mu (dx),\hspace{1em}f\in \mathrm{B}(E,S),\hspace{2.5pt}\mu \in {\mathrm{B}^{\prime }}(E,S)\]
and called the integral of f with respect to μ. In what follows $\mathrm{B}(\mathrm{H},\mathrm{B}(\mathrm{H}))$ is the space of bounded measurable functions and ${\mathrm{B}^{\prime }}(\mathrm{H},\mathrm{B}(\mathrm{H}))$ is the conjugate space of all finitely additive measures on $(\mathrm{H},\mathrm{B}(\mathrm{H}))$. The unit of the space $\mathrm{B}(\mathrm{H},\mathrm{B}(\mathrm{H}))$ (the function identically equal to one on H) is denoted by $e_{\mathrm{H}}$, and the elements of $\mathrm{B}(\mathrm{H},\mathrm{B}(\mathrm{H}))$ and ${\mathrm{B}^{\prime }}(\mathrm{H},\mathrm{B}(\mathrm{H}))$ are denoted by g and ν respectively. The intersection of the set $\{\nu \in {\mathrm{B}^{\prime }}(\mathrm{H},\mathrm{B}(\mathrm{H}))\mid \langle e_{\mathrm{H}},\nu \rangle =1\}$ with the positive cone is denoted by $S_{\mathrm{H}}$. It is clear that $S_{\mathrm{H}}$ is a compact (in the weak* topology) convex set, so the set of extreme points of $S_{\mathrm{H}}$ is not empty.
It is also well known that in the theory (ZFC), (CH), (MA) there exists a weakly separable family of probability measures of cardinality continuum which is not strongly separable. Here and in the sequel we denote by (MA) Martin's axiom (see [3]).
Theorem 7.
Let $M_{B}=\bigoplus _{H\in \mathrm{H}}M_{B}(\mu _{H})$ be the Banach space of measures, E be a complete separable metric space, S be the Borel σ-algebra in E and $\operatorname{card}\mathrm{H}\le c$. Then in the theory (ZFC) and (MA) the family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits a consistent criterion of hypotheses if and only if the family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits an unbiased criterion of any parametric function and the correspondence $f\to l_{f}$ defined by the equality
\[\int _{E}f(x)\hspace{0.1667em}\mu _{H}(dx)=l_{f}(\mu _{H}),\hspace{1em}H\in \mathrm{H},\]
is one-to-one. Here $l_{f}$ is a linear continuous functional on $M_{B}$, $f\in F(M_{B})$.
Proof.
Necessity. Since the family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits a consistent criterion of hypotheses, the family $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits an unbiased criterion of any parametric function and it is strongly separable; in particular, the family $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ is weakly separable. The necessity is proved in the same manner as the necessity of Theorem 6.
Sufficiency. According to Theorem 6, a Borel orthogonal family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$, $\operatorname{card}\mathrm{H}\le c$, is weakly separable. We represent $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ as a transfinite sequence $\{\mu _{H}\}_{H<\omega _{\alpha }}$, where $\omega _{\alpha }$ denotes the first ordinal number of the power of the set H. Since the family $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ is weakly separable, there exists a family of measurable subsets $\{X_{H}\}_{H<\omega _{\alpha }}$ of the space E such that the following relations are fulfilled:
\[\mu _{H}(X_{{H^{\prime }}})=\left\{\begin{array}{l@{\hskip10.0pt}l}1,\hspace{1em}& \text{if}\hspace{2.5pt}H={H^{\prime }},\\{} 0,\hspace{1em}& \text{if}\hspace{2.5pt}H\ne {H^{\prime }}\end{array}\right.\]
for all $H\in [0,\omega _{\alpha })$ and ${H^{\prime }}\in [0,\omega _{\alpha })$. We define an $\omega _{\alpha }$-sequence of subsets $B_{H}$ of the space E so that the following relations are fulfilled:
-
(1) $B_{H}$ is a Borel subset in E for all $H<\omega _{\alpha }$.
-
(2) $B_{H}\subset X_{H}$ for all $H<\omega _{\alpha }$.
-
(3) $B_{H}\cap B_{{H^{\prime }}}=\varnothing $ for all $H<\omega _{\alpha }$, ${H^{\prime }}<\omega _{\alpha }$, $H\ne {H^{\prime }}$.
-
(4) $\mu _{H}(B_{H})=1$ for all $H<\omega _{\alpha }$.
Assume that $B_{0}=X_{0}$. Suppose further that the partial sequence $\{B_{{H^{\prime }}}\}_{{H^{\prime }}<H}$ has already been defined for some $H<\omega _{\alpha }$.
It is clear that ${\mu _{H}^{\ast }}(\bigcup _{{H^{\prime }}<H}B_{{H^{\prime }}})=0$ (under (MA), the union of fewer than continuum many $\mu _{H}$-null sets is $\mu _{H}$-null). Thus there exists a Borel subset $Y_{H}$ of the space E such that the following relations are valid: $\bigcup _{{H^{\prime }}<H}B_{{H^{\prime }}}\subset Y_{H}$ and $\mu _{H}(Y_{H})=0$. Setting $B_{H}=X_{H}\setminus Y_{H}$, we construct the $\omega _{\alpha }$-sequence $\{B_{H}\}_{H<\omega _{\alpha }}$ of disjoint measurable subsets of the space E. Therefore $\mu _{H}(B_{H})=1$ for all $H<\omega _{\alpha }$. As the family of probability measures $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ admits an unbiased criterion of any parametric function, there exists a subspace $G\subset \mathrm{B}(E,S)$ containing the unit $e_{E}$ such that $\mathrm{B}(E,S)$ can be represented as the topological sum of G and $H_{0}={\mu }^{-1}(0)$, where the functional
\[\mu (f)=\int _{E}f(x)\hspace{0.1667em}\mu (dx),\hspace{1em}f\in \mathrm{B}(E,S),\hspace{2.5pt}\mu \in {\mathrm{B}^{\prime }}(E,S)\]
and, since the family $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ is strongly separable, the subspace G is a lattice with respect to the canonical order on G (see [5]). Let $S_{0}$ be the minimal σ-subalgebra of S with respect to which all functions in G are measurable. Then $G\subset \mathrm{B}(E,S_{0})\subset \mathrm{B}(E,S)$. Since the subspace G contains $e_{E}$ and is a lattice, $G\supset \mathrm{B}(E,S)$ and therefore $G=\mathrm{B}(E,S)$.
As the family $\{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$ represents a dense subset of $\operatorname{ex}S_{\mathrm{H}}$ (where $\operatorname{ex}S_{\mathrm{H}}$ denotes the set of extreme points of $S_{\mathrm{H}}$), the ideal $I_{\mu }$ in $S_{0}$, which consists of the sets of measure zero for all $\mu \in \{\mu _{H},\hspace{0.2778em}H\in \mathrm{H}\}$, contains only the empty set.
Hence there exist sets $\{A_{H},\hspace{0.2778em}H\in \mathrm{H}\}$ in $S_{0}$ such that $\mu _{H}(A_{H})=1$, $A_{H}\cap A_{{H^{\prime }}}=\varnothing $ for $H\ne {H^{\prime }}$, and $E=\bigcup _{H\in \mathrm{H}}A_{H}$. It follows from the condition of this theorem that for every $T\in \mathrm{B}(\mathrm{H})$ there exists in G a function $f_{T}$ which is a consistent criterion of the parametric function $g_{T}$. If $A=\{x\mid f_{T}(x)\ne 0\}$, then $\bigcup _{H\in T}A_{H}\subset A$, $A\cap A_{H}=\varnothing $ for all $H\notin T$, and hence $\bigcup _{H\in T}A_{H}=A$, implying that $\bigcup _{H\in T}A_{H}\in S_{0}$.
Then the mapping δ defined by $\delta (x)=H$ if $x\in A_{H}$, $H\in \mathrm{H}$, is a consistent criterion of hypotheses. Theorem 7 is proved. □