
Modern Stochastics: Theory and Applications


On the mean and variance of the estimated tangency portfolio weights for small samples
Volume 9, Issue 4 (2022), pp. 453–482
Gustav Alfelt, Stepan Mazur

https://doi.org/10.15559/22-VMSTA212
Pub. online: 2 September 2022      Type: Research Article      Open Access

Received
13 December 2021
Revised
24 May 2022
Accepted
29 July 2022
Published
2 September 2022

Abstract

In this paper, a sample estimator of the tangency portfolio (TP) weights is considered. The focus is on the situation where the number of observations is smaller than the number of assets in the portfolio and the returns are i.i.d. normally distributed. Under these assumptions, the sample covariance matrix follows a singular Wishart distribution and, therefore, the regular inverse cannot be taken. In the paper, bounds and approximations for the first two moments of the estimated TP weights, based on the Moore–Penrose inverse, are derived, and exact results are obtained when the population covariance matrix is equal to the identity matrix. Moreover, exact moments based on the reflexive generalized inverse are provided. The properties of the bounds are investigated in a simulation study, where they are compared to the sample moments. The difference between the moments based on the reflexive generalized inverse and the sample moments based on the Moore–Penrose inverse is also studied.

1 Introduction

How to efficiently allocate capital lies at the heart of financial decision making. Portfolio theory, as developed by [35], provides a framework for this problem, based on the means, variances and covariances of the assets in the considered portfolio. The theory revolves around the trade-off between expected return and variance (risk), known as mean-variance optimization. In this setting, investors allocate wealth to maximize expected return given a certain level of risk or, conversely, to minimize risk given a certain level of expected return. Although it has received a lot of criticism (see, e.g., [42] and [27]), the framework remains one of the most crucial components in portfolio management.
In this paper, we consider the tangency portfolio (TP) which is one of the most important portfolios in the financial literature. The TP weights determine what proportions of the capital to invest in each asset and are obtained by maximizing the expected quadratic utility function. For a portfolio of p risky assets, the TP weights are given by
(1)
\[ {\mathbf{w}_{TP}}={\alpha ^{-1}}{\boldsymbol{\Sigma }^{-1}}(\boldsymbol{\mu }-{r_{f}}{\mathbf{1}_{p}}),\]
where $\boldsymbol{\mu }$ is a p-dimensional mean vector of the asset returns, Σ is a $p\times p$ symmetric positive definite covariance matrix of the asset returns, the coefficient $\alpha >0$ describes the investors’ risk aversion,1 ${r_{f}}$ denotes the rate of a risk-free asset and ${\mathbf{1}_{p}}$ is a p-dimensional vector of ones. We allow for short sales and, therefore, some weights can be negative. Let us also note that ${\mathbf{w}_{TP}}$ determines the structure of the part of the portfolio invested in risky assets and does not, in general, sum to 1. Consequently, the rest of the wealth $1-{\mathbf{w}^{\prime }_{TP}}{\mathbf{1}_{p}}$ needs to be invested in the risk-free asset.
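To make the definition in (1) concrete, the following minimal NumPy sketch computes the population TP weights and the residual risk-free allocation for a small example; the numerical values of $\boldsymbol{\mu }$, Σ, α and ${r_{f}}$ are hypothetical and chosen only for illustration.

```python
import numpy as np

def tangency_weights(mu, Sigma, alpha=1.0, r_f=0.0):
    """Population TP weights of (1): w_TP = alpha^{-1} Sigma^{-1} (mu - r_f 1_p)."""
    p = mu.shape[0]
    return np.linalg.solve(Sigma, mu - r_f * np.ones(p)) / alpha

# hypothetical parameters for three risky assets
mu = np.array([0.05, 0.03, 0.07])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w = tangency_weights(mu, Sigma, alpha=2.0, r_f=0.01)
print("risky-asset weights:", w)
print("risk-free weight:", 1.0 - w.sum())  # remaining wealth 1 - w'1_p
```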
Naturally, the TP weights ${\mathbf{w}_{TP}}$ depend on knowledge of the mean vector $\boldsymbol{\mu }$ and the covariance matrix Σ. In general, these quantities are not known and need to be estimated from data on N historical return vectors ${\mathbf{x}_{1}},\dots ,{\mathbf{x}_{N}}$. Plugging sample estimates of the mean vector and covariance matrix into (1) leads us to the sample estimate of the TP weights expressed as
(2)
\[ {\hat{\mathbf{w}}_{TP}}={\alpha ^{-1}}{\mathbf{S}^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}}),\]
where S is the sample covariance matrix and $\bar{\mathbf{x}}$ is the sample mean vector, respectively, of ${\mathbf{x}_{1}},\dots ,{\mathbf{x}_{N}}$.2 The statistical properties of ${\hat{\mathbf{w}}_{TP}}$ have been extensively studied throughout the literature. [18] derived an exact test of the weights in the multivariate normal case. [39] obtained the univariate density for the TP weights as well as its asymptotic distribution, under the assumption that returns are independent and identically multivariate normally distributed. Further, [4] provided a procedure of monitoring the TP weights with a sequential approach. [6] obtained the density for, and several exact tests on, linear transformations of estimated TP weights, while [32] provided approximate and asymptotic distributions for the weights. [3] studied the distribution of ${\hat{\mathbf{w}}_{TP}}$ from a Bayesian perspective.3 [15] studied the TP weights in small and large dimensions when both the population and sample covariance matrix are singular. Analytical expressions of higher order moments of the estimated TP weights are derived in [29], while the article [31] presented the asymptotic distribution of the estimated TP weights as well as the asymptotic distribution of the statistical test on the elements of the TP under a high-dimensional asymptotic regime. [38] derived a test for the location of the TP, and [37] extended this result to the high-dimensional setting. Furthermore, [9] derived central limit theorems for the TP weights estimator under the assumption that the matrix of observations has a matrix-variate location mixture of normal distributions. More recently, [30] investigated the distributional properties of the TP weights under a skew-normal model in small and large dimensions.
The common scenario considered is that the number of observations available for the estimation, denoted by N, is greater than the portfolio size, denoted by p. In this case the sample covariance matrix S is positive definite, and ${\hat{\mathbf{w}}_{TP}}$ can be obtained as presented in (2). However, when the considered portfolio is large, the number of available observations may be smaller than the portfolio dimension. This can be due to a lack of data for all the assets in the portfolio, but it may also occur because the covariance of asset returns tends to change over time. As such, the assumption of a constant covariance might only hold for limited periods of time, hence limiting the amount of data available for estimation. Many applications consider portfolios of large dimensions, containing up to 50, 100 or even 1000 assets (see, e.g., [41, 26, 34, 2, 20, 16, 22, 5, 12, 1]). If returns are measured on weekly or monthly intervals, data reaching back several decades might be required to ensure $p\le N$. Unless the considered assets can be assumed to have a constant covariance matrix over very long time periods, data spanning such long time intervals is not suitable for estimation, or might simply not be available. Any such situation, where $p>N$, results in a singular sample covariance matrix S, which is not invertible in the standard sense.
This issue can be remedied by estimating ${\boldsymbol{\Sigma }^{-1}}$ in (1) with the Moore–Penrose inverse of S, which we will denote by ${\mathbf{S}^{+}}$. This generalized inverse has previously been successfully employed in portfolio theory for the $p>N$ case by [10, 11, 44, 15].4 Applying the Moore–Penrose inverse, the TP weights are estimated as
(3)
\[ {\tilde{\mathbf{w}}_{TP}}={\alpha ^{-1}}{\mathbf{S}^{+}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}}).\]
An attractive feature of applying the Moore–Penrose inverse ${\mathbf{S}^{+}}$ in (1) is that the resulting estimator (3) is the least-squares solution to the system of equations described by
(4)
\[ \mathbf{S}\mathbf{v}={\alpha ^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}})\]
which in the singular case generally lacks an exact solution. That is, as shown in [40], for any vector $\mathbf{v}\in {\mathbb{R}^{p}}$, we have that $\| \mathbf{S}\mathbf{v}-{\alpha ^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}}){\| _{2}}$ ≥ $\| \mathbf{S}{\tilde{\mathbf{w}}_{TP}}-{\alpha ^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}}){\| _{2}}$, where $\| \cdot {\| _{2}}$ denotes the Euclidean norm of a vector. Phrased differently, (3) provides the best solution to equation (4), in the least-squares sense. In addition, when $p\le N$, we have that ${\mathbf{S}^{+}}={\mathbf{S}^{-1}}$ and ${\tilde{\mathbf{w}}_{TP}}={\hat{\mathbf{w}}_{TP}}$, such that ${\tilde{\mathbf{w}}_{TP}}$ can be viewed as a general estimator for the TP weights, covering both the singular and nonsingular case. For further properties of the Moore–Penrose inverse, see, e.g., [17].
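As a rough numerical illustration of the estimator (3) and the least-squares property above, the sketch below (with hypothetical dimensions and standard normal data) computes ${\tilde{\mathbf{w}}_{TP}}$ via NumPy's pseudo-inverse and checks that another vector does not give a smaller residual in (4).

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, alpha, r_f = 10, 6, 1.0, 0.0          # p > N, singular case

X = rng.normal(size=(p, N))                 # N return vectors as columns (illustrative data)
x_bar = X.mean(axis=1)
S = np.cov(X)                               # sample covariance, rank N - 1 < p

eta = (x_bar - r_f * np.ones(p)) / alpha
w_tilde = np.linalg.pinv(S) @ eta           # estimator (3)

# least-squares property: S w_tilde is at least as close to eta as S v for any v
resid_mp = np.linalg.norm(S @ w_tilde - eta)
resid_other = np.linalg.norm(S @ rng.normal(size=p) - eta)
print(resid_mp <= resid_other)              # True
```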
The expectation and variance of an estimator are key quantities describing its statistical properties. Under the standard assumption of normally distributed asset returns, the stochastic components of ${\tilde{\mathbf{w}}_{TP}}$ are ${\mathbf{S}^{+}}$ and $\bar{\mathbf{x}}$, which are independent (see, e.g., [10]). Unfortunately, there exists no derivation of the expected value or variance of ${\mathbf{S}^{+}}$ when $p>N$. In [21], however, these quantities are presented in the special case of $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$. The authors also provided approximate results, using moments of standard normal random variables, and exact results for moments of the reflexive generalized inverse, another quantity that can be applied as an inverse of S. Further, in a recent paper [28], several bounds on the mean and variance of ${\mathbf{S}^{+}}$ are provided, based on the Poincaré separation theorem. Our paper builds on the results presented in [21] and [28] to provide bounds and approximations for the moments of the TP weights, $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]$ and $\mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]$, where $\mathbb{E}[\cdot ]$ and $\mathbb{V}[\cdot ]$ denote the expected value and variance, respectively. We also present a simulation study, where various measures compare the derived bounds with the equivalent sample quantities obtained from simulated data. Finally, we compare the moments obtained applying the reflexive generalized inverse and the sample moments based on the Moore–Penrose inverse.
The rest of this paper is organized as follows. Section 2.1 provides exact moment results for the case $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$. Section 2.2 presents bounds for the moments of ${\tilde{\mathbf{w}}_{TP}}$ in the general case, while approximate moments are derived in Section 2.3. Exact moments applying the reflexive generalized inverse are derived in Section 3. The simulation study is presented in Section 4 while Section 5 summarizes.

2 Moments with the Moore–Penrose inverse

Let X be a $p\times N$ matrix with N asset return vectors of dimension $p\times 1$ stacked as columns, where $p>N$. Further, we assume that these return vectors are independent and normally distributed with mean vector $\boldsymbol{\mu }$ and positive definite covariance matrix Σ. Thus $\mathbf{X}\sim {\mathcal{MN}_{p,N}}(\boldsymbol{\mu }{\mathbf{1}^{\prime }_{N}},\boldsymbol{\Sigma },{\mathbf{I}_{N}})$, where ${\mathcal{MN}_{p,N}}(\mathbf{M},\boldsymbol{\Sigma },\mathbf{U})$ denotes the matrix-variate normal distribution with $p\times N$ mean matrix M, $p\times p$ row-wise covariance matrix Σ and $N\times N$ column-wise covariance matrix U. Further, let the $p\times 1$ vector $\bar{\mathbf{x}}$ be the row mean of X. Now, define $\mathbf{Y}=\mathbf{X}-\bar{\mathbf{x}}{\mathbf{1}^{\prime }_{N}}$, such that $\mathbf{Y}\sim {\mathcal{MN}_{p,N}}(\mathbf{0},\boldsymbol{\Sigma },{\mathbf{I}_{N}})$. Further, let $\mathbf{S}=\mathbf{Y}{\mathbf{Y}^{\prime }}/n$, such that $\text{rank}(\mathbf{S})=n<p$ with $n=N-1$, and $n\mathbf{S}\sim {\mathcal{W}_{p}}(n,\boldsymbol{\Sigma })$, i.e. $n\mathbf{S}$ follows a p-dimensional singular Wishart distribution with n degrees of freedom and the parameter matrix Σ. Let $\mathbf{S}=\mathbf{Q}\mathbf{R}{\mathbf{Q}^{\prime }}$ denote the eigenvalue decomposition of S, where R is the $n\times n$ diagonal matrix of positive eigenvalues and Q is the $p\times n$ matrix with corresponding eigenvectors as columns. Further, define
\[ {\mathbf{S}^{+}}=\mathbf{Q}{\mathbf{R}^{-1}}{\mathbf{Q}^{\prime }}.\]
Then, ${\mathbf{S}^{+}}$ constitutes the Moore–Penrose inverse of $\mathbf{Y}{\mathbf{Y}^{\prime }}/n$, and ${\mathbf{S}^{+}}$ is independent of $\bar{\mathbf{x}}$ (see [10]).
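The construction above can be mirrored numerically as follows; a minimal sketch, assuming a hypothetical diagonal Σ, that forms Y, the singular S and its Moore–Penrose inverse from the n positive eigenvalues, and checks the result against NumPy's pinv.

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 8, 5
n = N - 1

Sigma = np.diag(np.linspace(2.0, 1.0, p))                   # hypothetical positive definite covariance
X = rng.multivariate_normal(np.zeros(p), Sigma, size=N).T   # p x N return matrix
x_bar = X.mean(axis=1)
Y = X - x_bar[:, None]                                      # Y = X - x_bar 1_N'
S = Y @ Y.T / n                                             # n S ~ W_p(n, Sigma), rank(S) = n < p

vals, vecs = np.linalg.eigh(S)                              # eigendecomposition S = Q R Q'
keep = vals > 1e-12
Q, R = vecs[:, keep], vals[keep]
S_plus = Q @ np.diag(1.0 / R) @ Q.T                         # Moore-Penrose inverse of S

print(np.linalg.matrix_rank(S) == n)                        # True
print(np.allclose(S_plus, np.linalg.pinv(S)))               # True
```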
In the following, let $\boldsymbol{\eta }={\alpha ^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}})$ and $\boldsymbol{\theta }=\mathbb{E}[\boldsymbol{\eta }]={\alpha ^{-1}}(\boldsymbol{\mu }-{r_{f}}{\mathbf{1}_{p}})$. Consequently, from Corollary 3.2b.1 in [36], together with the fact that $\mathbb{E}[\bar{\mathbf{x}}]=\boldsymbol{\mu }$ and $\mathbb{V}[\bar{\mathbf{x}}]=\boldsymbol{\Sigma }/(n+1)$, we obtain that
(5)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}]& \displaystyle =& \displaystyle \boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}+\frac{\boldsymbol{\Sigma }}{{\alpha ^{2}}(n+1)},\end{array}\]
(6)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\boldsymbol{\eta }^{\prime }}\boldsymbol{\eta }]& \displaystyle =& \displaystyle {\boldsymbol{\theta }^{\prime }}\boldsymbol{\theta }+\frac{\text{tr}(\boldsymbol{\Sigma })}{{\alpha ^{2}}(n+1)},\end{array}\]
(7)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\boldsymbol{\eta }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\eta }]& \displaystyle =& \displaystyle {\boldsymbol{\theta }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\theta }+\frac{\text{tr}(\boldsymbol{\Sigma }\boldsymbol{\Sigma })}{{\alpha ^{2}}(n+1)}.\end{array}\]
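The moments (5)–(7) follow from standard multivariate normal theory; as a quick sanity check, the sketch below verifies (5) by Monte Carlo for hypothetical values of $\boldsymbol{\mu }$, Σ, α and ${r_{f}}$.

```python
import numpy as np

rng = np.random.default_rng(4)
p, N, alpha, r_f = 4, 3, 2.0, 0.01
n = N - 1
mu = np.array([0.02, -0.03, 0.05, 0.00])
Sigma = np.diag([0.5, 1.0, 1.5, 2.0])
theta = (mu - r_f) / alpha

L = np.linalg.cholesky(Sigma)
reps = 200_000
Z = rng.normal(size=(reps, p))
etas = (mu + Z @ L.T / np.sqrt(N) - r_f) / alpha            # draws of eta = alpha^{-1}(x_bar - r_f 1_p)

lhs = etas.T @ etas / reps                                  # Monte Carlo estimate of E[eta eta']
rhs = np.outer(theta, theta) + Sigma / (alpha**2 * (n + 1)) # right-hand side of (5)
print(np.max(np.abs(lhs - rhs)))                            # close to zero
```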
Further, let ${s^{ij}}$ denote the element on row i and column j of ${\mathbf{S}^{+}}$, and let ${\sigma ^{ij}}$ denote the element on row i and column j of ${\boldsymbol{\Sigma }^{-1}}$. Also, let ${\mathbf{e}_{i}}$ denote a $p\times 1$ vector where all elements are equal to zero, except the i-th element, which is equal to one. Moreover, we let ${\lambda _{1}}(\mathbf{M})\ge {\lambda _{2}}(\mathbf{M})\ge \cdots \ge {\lambda _{p}}(\mathbf{M})$ denote the ordered eigenvalues of a symmetric $p\times p$ matrix M, and let $\mathbf{A}{\le _{L}}\mathbf{B}$ denote the Löwner ordering of two positive semi-definite matrices A and B.

2.1 Exact moments when $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$

When Σ is the identity matrix, it is possible to derive exact moments of the TP weights obtained from the Moore–Penrose inverse in the singular case. First, note the following results presented in Theorem 2.1 of [21], which state that in the case $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$ and $p>n+3$, we have that
(8)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\mathbf{S}^{+}}]& \displaystyle =& \displaystyle {a_{1}}{\mathbf{I}_{p}},\end{array}\]
(9)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[\text{vec}({\mathbf{S}^{+}})]& \displaystyle =& \displaystyle {a_{2}}({\mathbf{I}_{{p^{2}}}}+{\mathbf{C}_{{p^{2}}}})+2{a_{3}}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}}),\end{array}\]
where ${\mathbf{C}_{{p^{2}}}}$ is the commutation matrix, $\text{vec}(\cdot )$ is the vectorization operator and
(10)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {a_{1}}& \displaystyle =& \displaystyle \frac{{n^{2}}}{p(p-n-1)},\end{array}\]
(11)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {a_{2}}& \displaystyle =& \displaystyle \frac{{n^{3}}[p(p-1)-n(p-n-2)-2]}{p(p-1)(p+2)(p-n)(p-n-1)(p-n-3)},\end{array}\]
(12)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {a_{3}}& \displaystyle =& \displaystyle \frac{{n^{3}}[{n^{2}}(n-1)+2n(p-2)(p-n)+2p(p-1)]}{{p^{2}}(p-1)(p+2)(p-n){(p-n-1)^{2}}(p-n-3)}.\end{array}\]
Note that the constants in (10)–(12) differ slightly from the constants presented in [21], since our paper considers results for $n\mathbf{S}\sim {\mathcal{W}_{p}}(n,\boldsymbol{\Sigma })$, while [21] derived the results for $\mathbf{W}\sim {\mathcal{W}_{p}}(n,\boldsymbol{\Sigma })$. The moments in (8) and (9) allow us to derive the following results.
Theorem 1.
If $p>n+3$ and $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle =& \displaystyle {a_{1}}{\mathbf{w}_{TP}},\\ {} \displaystyle \mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle =& \displaystyle ({a_{2}}+2{a_{3}}){\mathbf{w}_{TP}}{\mathbf{w}^{\prime }_{TP}}+\left[{a_{2}}{\mathbf{w}^{\prime }_{TP}}{\mathbf{w}_{TP}}+\frac{{a_{1}^{2}}+(p+1){a_{2}}+2{a_{3}}}{{\alpha ^{2}}(n+1)}\right]{\mathbf{I}_{p}}\end{array}\]
with constants ${a_{1}}$, ${a_{2}}$ and ${a_{3}}$ that are defined in (10)–(12).
Proof.
Since ${\tilde{\mathbf{w}}_{TP}}={\alpha ^{-1}}{\mathbf{S}^{+}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}})$, the first result follows directly from (8) and the independence of ${\mathbf{S}^{+}}$ and $\bar{\mathbf{x}}$. For the second result, first note that as discussed in [21], equation (9) can be written as
\[ \operatorname{Cov}({s^{ij}},{s^{kl}})={a_{2}}({\delta _{ik}}{\delta _{jl}}+{\delta _{il}}{\delta _{jk}})+2{a_{3}}{\delta _{ij}}{\delta _{kl}},\]
where ${\delta _{ij}}=1$ if $i=j$ and 0 otherwise, so that ${\delta _{ij}}$, $i,j=1,\dots ,p$, denote the elements of ${\mathbf{I}_{p}}$. Hence, we have that
(13)
\[ \mathbb{E}[{s^{ij}}{s^{kl}}]={a_{2}}({\delta _{ik}}{\delta _{jl}}+{\delta _{il}}{\delta _{jk}})+({a_{1}^{2}}+2{a_{3}}){\delta _{ij}}{\delta _{kl}}.\]
Also note the following element representations of matrix operations, where A and B are symmetric $p\times p$ matrices and $\text{tr}(\cdot )$ denotes the trace operator of a matrix:
(14)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\left[\mathbf{A}\text{tr}(\mathbf{B}\mathbf{A})\right]_{ij}}& \displaystyle =& \displaystyle {a_{ij}}{\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{a_{kl}}{b_{kl}},\end{array}\]
(15)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\left[\mathbf{A}\mathbf{B}\mathbf{A}\right]_{ij}}& \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{b_{kl}}{a_{ik}}{a_{jl}}\\ {} & \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{b_{kl}}{a_{il}}{a_{jk}}.\end{array}\]
Moreover, with $\boldsymbol{\eta }={\alpha ^{-1}}(\bar{\mathbf{x}}-{r_{f}}{\mathbf{1}_{p}})$ and $\mathbb{E}[\boldsymbol{\eta }]=\boldsymbol{\theta }$,
(16)
\[ \mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]=\mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]=\mathbb{E}\left[\mathbb{E}[{\mathbf{S}^{+}}\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}{\mathbf{S}^{+}}\mid \boldsymbol{\eta }]\right]-\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}].\]
By letting $\mathbf{H}=\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}$ and applying equations (13)–(15) we obtain
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}{[{\mathbf{S}^{+}}\mathbf{H}{\mathbf{S}^{+}}\mid \boldsymbol{\eta }]_{ij}}& \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{h_{kl}}\mathbb{E}[{s^{ik}}{s^{jl}}]\\ {} & \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{h_{kl}}[{a_{2}}({\delta _{ij}}{\delta _{kl}}+{\delta _{il}}{\delta _{kj}})+({a_{1}^{2}}+2{a_{3}}){\delta _{ik}}{\delta _{jl}}]\\ {} & \displaystyle =& \displaystyle {a_{2}}{\left[{\mathbf{I}_{p}}\text{tr}(\mathbf{H}{\mathbf{I}_{p}})\right]_{ij}}+{a_{2}}{[{\mathbf{I}_{p}}\mathbf{H}{\mathbf{I}_{p}}]_{ij}}+({a_{1}^{2}}+2{a_{3}}){[{\mathbf{I}_{p}}\mathbf{H}{\mathbf{I}_{p}}]_{ij}}.\end{array}\]
Consequently,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\mathbf{S}^{+}}\mathbf{H}{\mathbf{S}^{+}}\mid \boldsymbol{\eta }]& \displaystyle =& \displaystyle ({a_{1}^{2}}+{a_{2}}+2{a_{3}})\mathbf{H}+{a_{2}}\text{tr}(\mathbf{H}){\mathbf{I}_{p}},\end{array}\]
and inserting the above result into (16) together with (5) and (8) gives
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]& \displaystyle =& \displaystyle ({a_{1}^{2}}+{a_{2}}+2{a_{3}})\left(\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}+{\alpha ^{-2}}{N^{-1}}{\mathbf{I}_{p}}\right)+\\ {} & & \displaystyle +{a_{2}}\left(\text{tr}(\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }})+{\alpha ^{-2}}{N^{-1}}p\right){\mathbf{I}_{p}}-{a_{1}^{2}}\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\end{array}\]
and the theorem follows noting that $\boldsymbol{\theta }={\mathbf{w}_{TP}}$ when $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$.  □
A direct consequence of Theorem 1 is that the estimator ${\tilde{\mathbf{w}}_{TP}}$ is biased, with bias factor ${a_{1}}$. Hence, in the case of $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$, ${a_{1}^{-1}}{\tilde{\mathbf{w}}_{TP}}$ constitutes an unbiased estimator. Further, in accordance with Corollary 2.1 in [21], as $n,p\to \infty $ with $n/p\to r$, $0<r<1$, the constants in $\mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]$ have the following asymptotic orders of magnitude: ${a_{1}}=\mathcal{O}(1)$, ${a_{2}}=\mathcal{O}({n^{-1}})=\mathcal{O}({p^{-1}})$ and ${a_{3}}=\mathcal{O}({n^{-2}})=\mathcal{O}({p^{-2}})$. Consequently, since $\text{tr}({\mathbf{w}_{TP}}{\mathbf{w}^{\prime }_{TP}})=\mathcal{O}(p)$ in the general case, we have that ${a_{2}}\text{tr}({\mathbf{w}_{TP}}{\mathbf{w}^{\prime }_{TP}})=\mathcal{O}(1)$. Hence, unless ${\mathbf{w}_{TP}}$ has some specific structure, $\mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]$ does not vanish under this asymptotic regime. This is not unique to the singular case, since the same is true for ${\hat{\mathbf{w}}_{TP}}$ in the nonsingular case when $n,p\to \infty $. Finally, we note that in practice the population covariance matrix of a portfolio of assets will likely never be equal to ${\mathbf{I}_{p}}$, and hence the results in this section are mainly of a theoretical nature.
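A small Monte Carlo experiment can illustrate the bias factor ${a_{1}}$ of Theorem 1; the sketch below, with hypothetical p, N and a uniformly drawn mean vector, estimates the factor by projecting the simulated mean of ${\tilde{\mathbf{w}}_{TP}}$ onto ${\mathbf{w}_{TP}}$ and compares it with ${a_{1}}$ from (10).

```python
import numpy as np

rng = np.random.default_rng(2)
p, N, reps = 15, 8, 50_000
n = N - 1                                         # p > n + 3 holds here
alpha, r_f = 1.0, 0.0

mu = rng.uniform(-0.1, 0.1, size=p)               # hypothetical mean vector
w_tp = (mu - r_f) / alpha                         # Sigma = I_p, so w_TP = theta
a1 = n**2 / (p * (p - n - 1))                     # bias factor from (10)

w_bar = np.zeros(p)
for _ in range(reps):
    X = rng.normal(loc=mu[:, None], size=(p, N))  # returns with Sigma = I_p
    S = np.cov(X)
    w_bar += np.linalg.pinv(S) @ (X.mean(axis=1) - r_f) / alpha
w_bar /= reps

print(round((w_bar @ w_tp) / (w_tp @ w_tp), 2))   # Monte Carlo estimate of the bias factor
print(round(a1, 2))                               # should be close to the line above
```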

2.2 Bounds on the moments

This section aims to provide upper and lower bounds for the expected value of ${\tilde{\mathbf{w}}_{TP}}$ and upper bounds for the variance of ${\tilde{\mathbf{w}}_{TP}}$. First, define the following $p\times p$ matrices,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbf{D}& \displaystyle =& \displaystyle {a_{1}}{({\lambda _{p}}({\boldsymbol{\Sigma }^{-1}}))^{2}}\boldsymbol{\Sigma },\\ {} \displaystyle {\mathbf{U}_{a}}& \displaystyle =& \displaystyle {a_{1}}{({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{2}}\boldsymbol{\Sigma },\\ {} \displaystyle {\mathbf{U}_{b}}& \displaystyle =& \displaystyle \frac{n}{p-n-1}{\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}){\mathbf{I}_{p}},\end{array}\]
with elements ${d_{ij}}$, ${u_{ij}^{(a)}}$ and ${u_{ij}^{(b)}}$, respectively. Further denote by ${e_{ij}}$ the elements of $\mathbb{E}[{\mathbf{S}^{+}}]$ and let ${u_{ii}^{(\ast )}}=\min \{{u_{ii}^{(a)}},{u_{ii}^{(b)}}\}$, $i=1,\dots ,p$. Then we can derive the following result.
Theorem 2.
Suppose $p>n+3$ and $\boldsymbol{\Sigma }>0$. Let ${w_{i}}$ and ${\theta _{i}}$ be the i-th elements of the $p\times 1$ vectors $\mathbf{w}=\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]$ and $\boldsymbol{\theta }={\alpha ^{-1}}(\boldsymbol{\mu }-{r_{f}}{\mathbf{1}_{p}})$, respectively. Then for $i=1,\dots ,p$, it holds that
\[ {v_{ii}}{\theta _{i}}+{\sum \limits_{j\ne i}^{p}}{v_{ij}}{\theta _{j}}\le {w_{i}}\le {z_{ii}}{\theta _{i}}+{\sum \limits_{j\ne i}^{p}}{z_{ij}}{\theta _{j}}\]
where, for $i,j=1,\dots ,p$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {v_{ij}}& \displaystyle =& \displaystyle \left\{\begin{array}{l@{\hskip10.0pt}l}{g_{ij}}& \text{if}\hspace{2.5pt}{\theta _{j}}\ge 0,\\ {} {h_{ij}}& \text{if}\hspace{2.5pt}{\theta _{j}}<0,\end{array}\right.\\ {} \displaystyle {z_{ij}}& \displaystyle =& \displaystyle \left\{\begin{array}{l@{\hskip10.0pt}l}{g_{ij}}& \text{if}\hspace{2.5pt}{\theta _{j}}<0,\\ {} {h_{ij}}& \text{if}\hspace{2.5pt}{\theta _{j}}\ge 0,\end{array}\right.\end{array}\]
with ${g_{ii}}={d_{ii}}$, ${h_{ii}}={u_{ii}^{(\ast )}}$, while for $i\ne j$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {g_{ij}}& \displaystyle =& \displaystyle \max \left\{\begin{array}{l}{d_{ij}}-\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})},\\ {} {u_{ij}^{(a)}}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} -\sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})},\\ {} -\sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\end{array}\right\},\\ {} \displaystyle {h_{ij}}& \displaystyle =& \displaystyle \min \left\{\begin{array}{l}{d_{ij}}+\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})},\\ {} {u_{ij}^{(a)}}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} \sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})},\\ {} \sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\end{array}\right\}.\end{array}\]
Proof.
The result follows directly from the element-wise bounds in Lemma A2 and the fact that, due to the independence of ${\mathbf{S}^{+}}$ and $\bar{\mathbf{x}}$, we have $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]=\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }$.  □
Note that when $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$, we have that ${({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{2}}={({\lambda _{p}}({\boldsymbol{\Sigma }^{-1}}))^{2}}=1$, and hence $\mathbb{E}[{\mathbf{S}^{+}}]=\mathbf{D}={\mathbf{U}_{a}}={a_{1}}{\mathbf{I}_{p}}$. Consequently ${g_{ij}}={h_{ij}}=0$, $i\ne j$, and ${g_{ii}}={h_{ii}}={a_{1}}$, $i=1,\dots ,p$, since ${u_{ii}^{(a)}}={a_{1}}<{a_{1}}\frac{p}{n}={u_{ii}^{(b)}}$, and $p>n$. Hence, Theorem 2 yields that $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]={a_{1}}\boldsymbol{\theta }$, consistent with the result of Theorem 1.
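A sketch of how the element-wise bounds of Theorem 2 can be evaluated numerically is given below; it assembles D, ${\mathbf{U}_{a}}$, ${\mathbf{U}_{b}}$ and the matrices of ${g_{ij}}$, ${h_{ij}}$, ${v_{ij}}$ and ${z_{ij}}$ for hypothetical inputs. The function name and the illustrative parameters are assumptions, not taken from the paper.

```python
import numpy as np

def mean_weight_bounds(mu, Sigma, n, alpha=1.0, r_f=0.0):
    """Element-wise lower/upper bounds of Theorem 2 for E[w_tilde]; requires p > n + 3."""
    p = Sigma.shape[0]
    a1 = n**2 / (p * (p - n - 1))
    lam = np.linalg.eigvalsh(np.linalg.inv(Sigma))           # eigenvalues of Sigma^{-1}
    lam1, lamp = lam.max(), lam.min()
    D = a1 * lamp**2 * Sigma
    Ua = a1 * lam1**2 * Sigma
    Ub = n / (p - n - 1) * lam1 * np.eye(p)
    u_star = np.minimum(np.diag(Ua), np.diag(Ub))

    theta = (mu - r_f) / alpha
    g = np.zeros((p, p))
    h = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            if i == j:
                g[i, i], h[i, i] = D[i, i], u_star[i]
                continue
            s_star = np.sqrt((u_star[i] - D[i, i]) * (u_star[j] - D[j, j]))
            s_a = np.sqrt((Ua[i, i] - D[i, i]) * (Ua[j, j] - D[j, j]))
            s_b = np.sqrt((Ub[i, i] - D[i, i]) * (Ub[j, j] - D[j, j]))
            s_u = np.sqrt(u_star[i] * u_star[j])
            g[i, j] = max(D[i, j] - s_star, Ua[i, j] - s_a, -s_b, -s_u)
            h[i, j] = min(D[i, j] + s_star, Ua[i, j] + s_a, s_b, s_u)
    v = np.where(theta >= 0, g, h)       # column j of v uses g if theta_j >= 0, else h
    z = np.where(theta >= 0, h, g)
    return v @ theta, z @ theta          # element-wise lower and upper bounds for E[w_tilde]

# illustrative use with hypothetical parameters
rng = np.random.default_rng(5)
p, N = 10, 6
mu = rng.uniform(-0.1, 0.1, size=p)
G, _ = np.linalg.qr(rng.normal(size=(p, p)))
Sigma = G @ np.diag(np.linspace(3.0, 1.0, p)) @ G.T
b_lower, b_upper = mean_weight_bounds(mu, Sigma, n=N - 1)
```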
The following result provides two upper bounds for the variance of the TP weights estimate ${\tilde{\mathbf{w}}_{TP}}$.
Theorem 3.
Suppose $p>n+3$ and $\boldsymbol{\Sigma }>0$. Then
(17)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle {\le _{L}}& \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left({k_{1}}\boldsymbol{\Sigma }\mathbb{E}[\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}]\boldsymbol{\Sigma }+{k_{2}}\boldsymbol{\Sigma }\mathbb{E}[{\boldsymbol{\eta }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\eta }]\right),\end{array}\]
(18)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle {\le _{L}}& \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\mathbb{E}[({\boldsymbol{\eta }^{\prime }}\boldsymbol{\eta })]{\mathbf{I}_{p}},\end{array}\]
with the expected values given in (5)–(7) and
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {c_{1}}& \displaystyle =& \displaystyle {n^{2}}{[(p-n)(p-n-1)(p-n-3)]^{-1}},\\ {} \displaystyle {c_{2}}& \displaystyle =& \displaystyle (p-n-2){c_{1}},\\ {} \displaystyle {k_{1}}& \displaystyle =& \displaystyle \left[1+n-\frac{(p+1)(p(n+1)-2)}{p(p+1)-2}\right]\frac{n}{p},\\ {} \displaystyle {k_{2}}& \displaystyle =& \displaystyle \left[1-\frac{(p+1)(p-n)}{p(p+1)-2}\right]\frac{n}{p}.\end{array}\]
Proof.
We are interested in bounds for the quantity ${\boldsymbol{\alpha }^{\prime }}\mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]\boldsymbol{\alpha }={\boldsymbol{\alpha }^{\prime }}\mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]\boldsymbol{\alpha }$, for all $\boldsymbol{\alpha }\in {\mathbb{R}^{p}}$. First, by the tower property we have
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]& \displaystyle =& \displaystyle \mathbb{E}\left[\mathbb{E}[{\mathbf{S}^{+}}\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}{\mathbf{S}^{+}}\mid \boldsymbol{\eta }]\right]-\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}].\end{array}\]
Hence, we can obtain
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\boldsymbol{\alpha }^{\prime }}\mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]\boldsymbol{\alpha }& \displaystyle =& \displaystyle \mathbb{E}\left[\mathbb{E}[{\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}{\mathbf{S}^{+}}\boldsymbol{\alpha }\mid \boldsymbol{\eta }]\right]-{\boldsymbol{\alpha }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\alpha }\\ {} & \displaystyle =& \displaystyle \mathbb{E}\left[\mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\boldsymbol{\eta })^{2}}\mid \boldsymbol{\eta }]\right]-{({\boldsymbol{\alpha }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta })^{2}}.\end{array}\]
Then, by noting that ${({\boldsymbol{\alpha }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta })^{2}}\ge 0$ and applying the bounds from Lemma A4 on $\mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\boldsymbol{\eta })^{2}}]$ we can derive
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\boldsymbol{\alpha }^{\prime }}\mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]\boldsymbol{\alpha }& \displaystyle \le & \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\\ {} & & \displaystyle \times \mathbb{E}\left[{k_{1}}{({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\eta })^{2}}+{k_{2}}({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\alpha })({\boldsymbol{\eta }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\eta })\right],\\ {} \displaystyle {\boldsymbol{\alpha }^{\prime }}\mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]\boldsymbol{\alpha }& \displaystyle \le & \displaystyle {({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}(2{c_{1}}+{c_{2}})\mathbb{E}[({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\boldsymbol{\eta }^{\prime }}\boldsymbol{\eta })],\end{array}\]
and with the aid of (5)–(7) the result follows.  □
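For completeness, a minimal sketch of how the upper bounds (17) and (18) can be assembled from the constants of Theorem 3 and the moments (5)–(7) is given below; the function name is hypothetical and the inputs are assumed to satisfy $p>n+3$.

```python
import numpy as np

def variance_upper_bounds(mu, Sigma, n, alpha=1.0, r_f=0.0):
    """Loewner upper bounds (17) and (18) of Theorem 3; requires p > n + 3."""
    p = Sigma.shape[0]
    theta = (mu - r_f) / alpha
    lam1 = np.linalg.eigvalsh(np.linalg.inv(Sigma)).max()    # lambda_1(Sigma^{-1})

    c1 = n**2 / ((p - n) * (p - n - 1) * (p - n - 3))
    c2 = (p - n - 2) * c1
    k1 = (1 + n - (p + 1) * (p * (n + 1) - 2) / (p * (p + 1) - 2)) * n / p
    k2 = (1 - (p + 1) * (p - n) / (p * (p + 1) - 2)) * n / p

    # moments (5)-(7) of eta
    E_outer = np.outer(theta, theta) + Sigma / (alpha**2 * (n + 1))
    E_quad = theta @ Sigma @ theta + np.trace(Sigma @ Sigma) / (alpha**2 * (n + 1))
    E_inner = theta @ theta + np.trace(Sigma) / (alpha**2 * (n + 1))

    B1 = (2 * c1 + c2) * lam1**4 * (k1 * Sigma @ E_outer @ Sigma + k2 * E_quad * Sigma)
    B2 = (2 * c1 + c2) * lam1**4 * E_inner * np.eye(p)
    return B1, B2
```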

2.3 Approximate moments

Regarding general Σ, it is possible to provide approximate moments for ${\tilde{\mathbf{w}}_{TP}}$ using simulations of standard normal matrices. Following Section 3.1 in [21], we denote the eigendecomposition of Σ as $\boldsymbol{\Sigma }={\boldsymbol{\Gamma }\boldsymbol{\Lambda }\boldsymbol{\Gamma }^{\prime }}$, with ${\lambda _{i}}$ denoting the i-th diagonal element of Λ, and let $\mathbf{Z}\sim {\mathcal{MN}_{p,n}}(\mathbf{0},{\mathbf{I}_{p}},{\mathbf{I}_{n}})$, with ${\mathbf{z}^{\prime }_{i}}$ denoting row i of Z. Further, denote ${m_{ij}}(\boldsymbol{\Lambda })=\mathbb{E}[{\mathbf{z}^{\prime }_{i}}{({\mathbf{Z}^{\prime }}\boldsymbol{\Lambda }\mathbf{Z})^{-2}}{\mathbf{z}_{j}}]$ and ${v_{ij,kl}}(\boldsymbol{\Lambda })=\operatorname{Cov}[{\mathbf{z}^{\prime }_{i}}{({\mathbf{Z}^{\prime }}\boldsymbol{\Lambda }\mathbf{Z})^{-2}}{\mathbf{z}_{j}},{\mathbf{z}^{\prime }_{k}}{({\mathbf{Z}^{\prime }}\boldsymbol{\Lambda }\mathbf{Z})^{-2}}{\mathbf{z}_{l}}]$, where $\operatorname{Cov}[X,Y]$ denotes the covariance between X and Y.
Also define
(19)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbf{M}(\boldsymbol{\Lambda })& \displaystyle =& \displaystyle n{\sum \limits_{i=1}^{p}}{\lambda _{i}}{m_{ii}}(\boldsymbol{\Lambda }){\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{i}},\\ {} \displaystyle \mathbf{V}(\boldsymbol{\Lambda })& \displaystyle =& \displaystyle {n^{2}}\left[{\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}{\lambda _{i}}{\lambda _{j}}{v_{ii,jj}}(\boldsymbol{\Lambda })({\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{j}}\otimes {\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{j}})\right.\\ {} & & \displaystyle +{\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}{\lambda _{i}}{\lambda _{j}}{v_{ij,ij}}(\boldsymbol{\Lambda })({\mathbf{e}_{j}}{\mathbf{e}^{\prime }_{j}}\otimes {\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{i}})({\mathbf{I}_{{p^{2}}}}+{\mathbf{C}_{{p^{2}}}})\\ {} & & \displaystyle \left.-2{\sum \limits_{i}^{p}}{\lambda _{i}^{2}}{v_{ii,ii}}(\boldsymbol{\Lambda })({\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{i}}\otimes {\mathbf{e}_{i}}{\mathbf{e}^{\prime }_{i}})\right]\end{array}\]
and make the decomposition
(20)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle (\boldsymbol{\Gamma }\otimes \boldsymbol{\Gamma })\mathbf{V}(\boldsymbol{\Lambda })({\boldsymbol{\Gamma }^{\prime }}\otimes {\boldsymbol{\Gamma }^{\prime }})& \displaystyle =& \displaystyle \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\boldsymbol{\Psi }_{11}}& \cdots & {\boldsymbol{\Psi }_{1p}}\\ {} \vdots & \ddots & \vdots \\ {} {\boldsymbol{\Psi }_{p1}}& \cdots & {\boldsymbol{\Psi }_{pp}}\end{array}\right),\end{array}\]
where ${\boldsymbol{\Psi }_{ij}}$ are $p\times p$ matrices, $i,j=1,\dots ,p$. The following result can then be derived.
Theorem 4.
If $p>n+3$ and $\boldsymbol{\Sigma }>0$, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle =& \displaystyle \boldsymbol{\Gamma }\mathbf{M}(\boldsymbol{\Lambda }){\boldsymbol{\Gamma }^{\prime }}\boldsymbol{\theta },\\ {} \displaystyle \mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]& \displaystyle =& \displaystyle {\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}\left({\theta _{i}}{\theta _{j}}+\frac{{\sigma _{ij}}}{{\alpha ^{2}}(n+1)}\right){\boldsymbol{\Psi }_{ij}}+\frac{1}{{\alpha ^{2}}(n+1)}\boldsymbol{\Gamma }\mathbf{M}(\boldsymbol{\Lambda })\boldsymbol{\Lambda }\mathbf{M}(\boldsymbol{\Lambda }){\boldsymbol{\Gamma }^{\prime }}\end{array}\]
with ${\theta _{i}}={\alpha ^{-1}}({\mu _{i}}-{r_{f}})$.
Proof.
From Theorem 3.1 in [21], we have that $\mathbb{E}[{\mathbf{S}^{+}}]=\boldsymbol{\Gamma }\mathbf{M}(\boldsymbol{\Lambda }){\boldsymbol{\Gamma }^{\prime }}$. Then the first result follows due to the independence of ${\mathbf{S}^{+}}$ and $\bar{\mathbf{x}}$. For the second result, we have that
(21)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]& \displaystyle =& \displaystyle \mathbb{E}\left[\mathbb{E}[{\mathbf{S}^{+}}\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}{\mathbf{S}^{+}}\mid \boldsymbol{\eta }]\right]-\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}].\end{array}\]
Again we let $\mathbf{H}=\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}$. Applying Theorem 3.1 in [21] we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[\text{vec}({\mathbf{S}^{+}})]& \displaystyle =& \displaystyle (\boldsymbol{\Gamma }\otimes \boldsymbol{\Gamma })\mathbf{V}(\boldsymbol{\Lambda })({\boldsymbol{\Gamma }^{\prime }}\otimes {\boldsymbol{\Gamma }^{\prime }}),\end{array}\]
and in accordance with equation (6.8) in [23], we get
\[ \mathbb{E}[{\mathbf{S}^{+}}\mathbf{H}{\mathbf{S}^{+}}]={\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}{h_{ij}}{\boldsymbol{\Psi }_{ij}}+\mathbb{E}[{\mathbf{S}^{+}}]\mathbf{H}\mathbb{E}[{\mathbf{S}^{+}}],\]
where ${\boldsymbol{\Psi }_{ij}}$ is obtained from the decomposition (20). Inserting the above into (21) gives
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{+}}\boldsymbol{\eta }]& \displaystyle =& \displaystyle {\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}\mathbb{E}[{h_{ij}}]{\boldsymbol{\Psi }_{ij}}+\mathbb{E}[{\mathbf{S}^{+}}]\mathbb{E}[\mathbf{H}]\mathbb{E}[{\mathbf{S}^{+}}]-\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\\ {} & \displaystyle =& \displaystyle {\sum \limits_{i=1}^{p}}{\sum \limits_{j=1}^{p}}\left({\theta _{i}}{\theta _{j}}+\frac{{\sigma _{ij}}}{{\alpha ^{2}}N}\right){\boldsymbol{\Psi }_{ij}}+\frac{1}{{\alpha ^{2}}N}\boldsymbol{\Gamma }\mathbf{M}(\boldsymbol{\Lambda })\boldsymbol{\Lambda }\mathbf{M}(\boldsymbol{\Lambda }){\boldsymbol{\Gamma }^{\prime }}\end{array}\]
due to (5) and since ${\boldsymbol{\Gamma }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\Gamma }=\boldsymbol{\Lambda }$. The theorem is proved.  □
In [21] the authors note that the moments ${m_{ij}}(\boldsymbol{\Lambda })$ and ${v_{ij,kl}}(\boldsymbol{\Lambda })$ do not seem to have tractable closed-form representations. However, these quantities can be approximated by simulation of Z, given the eigenvalues of Σ.
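As a sketch of this simulation-based approximation, the code below estimates the diagonal moments ${m_{ii}}(\boldsymbol{\Lambda })$ by Monte Carlo and assembles $\mathbf{M}(\boldsymbol{\Lambda })$ from (19); together with $\boldsymbol{\Gamma }$ this gives an approximation of $\mathbb{E}[{\mathbf{S}^{+}}]=\boldsymbol{\Gamma }\mathbf{M}(\boldsymbol{\Lambda }){\boldsymbol{\Gamma }^{\prime }}$. Approximating the covariance terms ${v_{ij,kl}}(\boldsymbol{\Lambda })$ would follow the same pattern. The function name and the illustrative eigenvalues are assumptions.

```python
import numpy as np

def approx_M(lam, n, reps=5000, seed=0):
    """Monte Carlo approximation of M(Lambda) in (19), with
    m_ii(Lambda) = E[z_i' (Z' Lambda Z)^{-2} z_i] estimated by simulating Z."""
    rng = np.random.default_rng(seed)
    p = lam.shape[0]
    m_ii = np.zeros(p)
    for _ in range(reps):
        Z = rng.normal(size=(p, n))                      # Z ~ MN_{p,n}(0, I_p, I_n)
        A = np.linalg.inv(Z.T @ (lam[:, None] * Z))      # (Z' Lambda Z)^{-1}
        B = A @ A                                        # (Z' Lambda Z)^{-2}
        m_ii += np.einsum('ij,jk,ik->i', Z, B, Z)        # z_i' B z_i for each row i of Z
    m_ii /= reps
    return n * np.diag(lam * m_ii)                       # M(Lambda)

# illustrative use: hypothetical eigenvalues of Sigma, p = 8, n = 4
lam = np.linspace(3.0, 1.0, 8)
M_hat = approx_M(lam, n=4, reps=2000)                    # E[S^+] is then approx. Gamma M_hat Gamma'
```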

3 Exact moments with reflexive generalized inverse

An alternative to using the Moore–Penrose inverse ${\mathbf{S}^{+}}$ to estimate ${\boldsymbol{\Sigma }^{-1}}$ is an application of the reflexive generalized inverse, defined as
\[ {\mathbf{S}^{\dagger }}={\boldsymbol{\Sigma }^{-1/2}}{\left({\boldsymbol{\Sigma }^{-1/2}}\mathbf{S}{\boldsymbol{\Sigma }^{-1/2}}\right)^{+}}{\boldsymbol{\Sigma }^{-1/2}},\]
where the elements of ${\mathbf{S}^{\dagger }}$ are denoted ${s_{ij}^{\dagger }}$. Then, the TP weights vector can be estimated by
\[ {\mathbf{w}_{TP}^{\dagger }}={\mathbf{S}^{\dagger }}\boldsymbol{\eta },\]
and we derive the following result.
Theorem 5.
If $p>n+3$ and $\boldsymbol{\Sigma }>0$, then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]& \displaystyle =& \displaystyle {a_{1}}{\mathbf{w}_{TP}},\\ {} \displaystyle \mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]& \displaystyle =& \displaystyle ({a_{2}}+2{a_{3}}){\mathbf{w}_{TP}}{\mathbf{w}^{\prime }_{TP}}\\ {} & & \displaystyle +\left[{a_{2}}{\mathbf{w}^{\prime }_{TP}}\boldsymbol{\Sigma }{\mathbf{w}_{TP}}+\frac{{a_{1}^{2}}+(p+1){a_{2}}+2{a_{3}}}{{\alpha ^{2}}(n+1)}\right]{\boldsymbol{\Sigma }^{-1}}.\end{array}\]
Proof.
The first result follows directly from Corollary 2.3 in [21], and the independence of S and $\bar{\mathbf{x}}$. For the second result, we have that
(22)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{\dagger }}\boldsymbol{\eta }]& \displaystyle =& \displaystyle \mathbb{E}\left[\mathbb{E}[{\mathbf{S}^{\dagger }}\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}{\mathbf{S}^{\dagger }}\mid \boldsymbol{\eta }]\right]-\mathbb{E}[{\mathbf{S}^{\dagger }}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{\dagger }}].\end{array}\]
Again we let $\mathbf{H}=\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}$, and note that by Corollary 2.3 in [21] we have
\[ \mathbb{E}[{s_{ik}^{\dagger }}{s_{lj}^{\dagger }}]={a_{2}}({\sigma ^{ij}}{\sigma ^{kl}}+{\sigma ^{il}}{\sigma ^{kj}})+({a_{1}^{2}}+2{a_{3}}){\sigma ^{ik}}{\sigma ^{jl}}\]
which combined with (14)–(15) allows us to obtain
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}{\left[{\mathbf{S}^{\dagger }}\mathbf{H}{\mathbf{S}^{\dagger }}\mid \boldsymbol{\eta }\right]_{ij}}& \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{h_{kl}}\mathbb{E}\left[{s_{ik}^{\dagger }}{s_{lj}^{\dagger }}\right]\\ {} & \displaystyle =& \displaystyle {\sum \limits_{k=1}^{p}}{\sum \limits_{l=1}^{p}}{h_{kl}}\left({a_{2}}({\sigma ^{ij}}{\sigma ^{kl}}+{\sigma ^{il}}{\sigma ^{kj}})\right.\\ {} & & \displaystyle \left.+({a_{1}^{2}}+2{a_{3}}){\sigma ^{ik}}{\sigma ^{jl}}\right)\\ {} & \displaystyle =& \displaystyle ({a_{1}^{2}}+{a_{2}}+2{a_{3}}){\left[{\boldsymbol{\Sigma }^{-1}}\mathbf{H}{\boldsymbol{\Sigma }^{-1}}\right]_{ij}}+{a_{2}}\text{tr}(\mathbf{H}{\boldsymbol{\Sigma }^{-1}}){\left[{\boldsymbol{\Sigma }^{-1}}\right]_{ij}}\end{array}\]
so that
\[ \mathbb{E}[{\mathbf{S}^{\dagger }}\mathbf{H}{\mathbf{S}^{\dagger }}\mid \boldsymbol{\eta }]=({a_{1}^{2}}+{a_{2}}+2{a_{3}}){\boldsymbol{\Sigma }^{-1}}\mathbf{H}{\boldsymbol{\Sigma }^{-1}}+{a_{2}}\text{tr}(\mathbf{H}{\boldsymbol{\Sigma }^{-1}}){\boldsymbol{\Sigma }^{-1}}.\]
Inserting this into equation (22) gives
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{V}[{\mathbf{S}^{\dagger }}\boldsymbol{\eta }]& \displaystyle =& \displaystyle ({a_{1}^{2}}+{a_{2}}+2{a_{3}}){\boldsymbol{\Sigma }^{-1}}\mathbb{E}[\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}]{\boldsymbol{\Sigma }^{-1}}+{a_{2}}\text{tr}(\mathbb{E}[\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}]{\boldsymbol{\Sigma }^{-1}}){\boldsymbol{\Sigma }^{-1}}\\ {} & & \displaystyle -\mathbb{E}[{\mathbf{S}^{\dagger }}]\boldsymbol{\theta }{\boldsymbol{\theta }^{\prime }}\mathbb{E}[{\mathbf{S}^{\dagger }}],\end{array}\]
and applying the first result on $\mathbb{E}[{\mathbf{S}^{\dagger }}]$ together with (5) concludes the proof.  □
An obvious drawback of ${\mathbf{w}_{TP}^{\dagger }}$ is that Σ must be known in order to construct ${\mathbf{S}^{\dagger }}$. Moreover, in the case of $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$ the results in Theorem 5 coincide with the results in Theorem 1, since in this case ${\mathbf{S}^{\dagger }}={\mathbf{S}^{+}}$.
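The reflexive generalized inverse can be formed directly from its definition, as in the minimal sketch below; when $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$ it reduces to the Moore–Penrose inverse, as noted above. The helper name and the test data are hypothetical.

```python
import numpy as np

def reflexive_inverse(S, Sigma):
    """S^dagger = Sigma^{-1/2} (Sigma^{-1/2} S Sigma^{-1/2})^+ Sigma^{-1/2};
    note that the population covariance Sigma must be known."""
    vals, vecs = np.linalg.eigh(Sigma)
    Sigma_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
    inner = np.linalg.pinv(Sigma_inv_half @ S @ Sigma_inv_half)
    return Sigma_inv_half @ inner @ Sigma_inv_half

# check that S^dagger = S^+ when Sigma = I_p
rng = np.random.default_rng(3)
p, n = 6, 3
Y = rng.normal(size=(p, n))
S = Y @ Y.T / n                                          # singular sample covariance
print(np.allclose(reflexive_inverse(S, np.eye(p)), np.linalg.pinv(S)))   # True
```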

4 Simulation study

The aim of this section is to compare the bounds on the moments of ${\tilde{\mathbf{w}}_{TP}}$ derived in Section 2.2 with the sample mean and sample variance of this estimator. We will also investigate the difference between the moments of ${\mathbf{w}_{TP}^{\dagger }}$ derived in Theorem 5 and the sample moments of ${\tilde{\mathbf{w}}_{TP}}$. Ideally, the bounds should not deviate from the obtained sample moments very much. To this end, define ${\mathbf{b}^{l}}$ and ${\mathbf{b}^{u}}$ as the $p\times 1$ vectors with elements
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {b_{i}^{l}}& \displaystyle =& \displaystyle {v_{ii}}{\mu _{i}}+{\sum \limits_{j\ne i}^{p}}{v_{ij}}{\mu _{j}},\\ {} \displaystyle {b_{i}^{u}}& \displaystyle =& \displaystyle {z_{ii}}{\mu _{i}}+{\sum \limits_{j\ne i}^{p}}{z_{ij}}{\mu _{j}},\end{array}\]
such that ${\mathbf{b}^{l}}$ and ${\mathbf{b}^{u}}$ represent the element-wise lower and upper bounds for the expected TP weights vector presented in Theorem 2, where we set $\alpha =1$ and ${r_{f}}=0$. Let
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\mathbf{B}_{1}}& \displaystyle =& \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left({k_{1}}\boldsymbol{\Sigma }\mathbb{E}[\boldsymbol{\eta }{\boldsymbol{\eta }^{\prime }}]\boldsymbol{\Sigma }+{k_{2}}\boldsymbol{\Sigma }\mathbb{E}[{\boldsymbol{\eta }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\eta }]\right),\\ {} \displaystyle {\mathbf{B}_{2}}& \displaystyle =& \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\mathbb{E}[({\boldsymbol{\eta }^{\prime }}\boldsymbol{\eta })]{\mathbf{I}_{p}}\end{array}\]
represent the bounds in equations (17) and (18) in Theorem 3, respectively. Further, let m and V respectively denote the sample mean vector and sample covariance matrix of ${\tilde{\mathbf{w}}_{TP}}$ based on an observed matrix X, as described in Section 2. Moreover, we define
(23)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {t_{l}}& \displaystyle =& \displaystyle \frac{{\mathbf{1}^{\prime }_{p}}|{\mathbf{b}^{l}}-\mathbf{m}|}{p},\end{array}\]
(24)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {t_{u}}& \displaystyle =& \displaystyle \frac{{\mathbf{1}^{\prime }_{p}}|{\mathbf{b}^{u}}-\mathbf{m}|}{p},\end{array}\]
(25)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {t^{\dagger }}& \displaystyle =& \displaystyle \frac{{\mathbf{1}^{\prime }_{p}}|\mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]-\mathbf{m}|}{p},\end{array}\]
so that ${t_{l}}$, ${t_{u}}$ and ${t^{\dagger }}$ measure the average element-wise difference between the sample mean vector and the lower bound, the upper bound and the mean of ${\mathbf{w}_{TP}^{\dagger }}$, respectively. Dividing by p allows comparing the measures between various portfolio sizes. Further, let
(26)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {T_{1}}& \displaystyle =& \displaystyle \frac{\left|{\mathbf{1}^{\prime }_{p}}\left({\mathbf{B}_{1}}-\mathbf{V}\right){\mathbf{1}_{p}}\right|}{{p^{2}}},\end{array}\]
(27)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {T_{2}}& \displaystyle =& \displaystyle \frac{\left|{\mathbf{1}^{\prime }_{p}}\left({\mathbf{B}_{2}}-\mathbf{V}\right){\mathbf{1}_{p}}\right|}{{p^{2}}},\end{array}\]
(28)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {T^{\dagger }}& \displaystyle =& \displaystyle \frac{\left|{\mathbf{1}^{\prime }_{p}}\left(\mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]-\mathbf{V}\right){\mathbf{1}_{p}}\right|}{{p^{2}}},\end{array}\]
where ${\mathbf{1}_{p}}$ is a $p\times 1$ vector of ones. Then, ${T_{1}}$ and ${T_{2}}$ provide a measure of discrepancy between the sample covariance matrix and the bounds presented in Theorem 3, while ${T^{\dagger }}$ measures the discrepancy between the variance of ${\mathbf{w}_{TP}^{\dagger }}$ presented in Theorem 5 and the sample covariance matrix of ${\tilde{\mathbf{w}}_{TP}}$. Since they are divided by ${p^{2}}$, the number of elements in ${\mathbf{B}_{1}}$, ${\mathbf{B}_{2}}$, $\mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]$ and V, the measures ${T_{1}}$, ${T_{2}}$ and ${T^{\dagger }}$ again allow for comparison between different portfolio sizes. Moreover, define
(29)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f_{l}}& \displaystyle =& \displaystyle \| {\mathbf{b}^{l}}-\mathbf{m}{\| _{F}^{2}}/\| \mathbf{m}{\| _{F}^{2}},\end{array}\]
(30)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f_{u}}& \displaystyle =& \displaystyle \| {\mathbf{b}^{u}}-\mathbf{m}{\| _{F}^{2}}/\| \mathbf{m}{\| _{F}^{2}},\end{array}\]
(31)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {f^{\dagger }}& \displaystyle =& \displaystyle \| \mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]-\mathbf{m}{\| _{F}^{2}}/\| \mathbf{m}{\| _{F}^{2}},\end{array}\]
(32)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{1}}& \displaystyle =& \displaystyle \| {\mathbf{B}_{1}}-\mathbf{V}{\| _{F}^{2}}/\| \mathbf{V}{\| _{F}^{2}},\end{array}\]
(33)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F_{2}}& \displaystyle =& \displaystyle \| {\mathbf{B}_{2}}-\mathbf{V}{\| _{F}^{2}}/\| \mathbf{V}{\| _{F}^{2}},\end{array}\]
(34)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {F^{\dagger }}& \displaystyle =& \displaystyle \| \mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]-\mathbf{V}{\| _{F}^{2}}/\| \mathbf{V}{\| _{F}^{2}},\end{array}\]
where $\| \mathbf{M}{\| _{F}}$ denotes the Frobenius norm of the matrix M. Hence ${f_{l}}$, ${f_{u}}$, ${F_{1}}$ and ${F_{2}}$ represent normalized squared Frobenius norms of the differences between the bounds and the corresponding sample moments, while ${f^{\dagger }}$ and ${F^{\dagger }}$ measure the differences between the moments of ${\mathbf{w}_{TP}^{\dagger }}$ and the sample mean and sample variance of ${\tilde{\mathbf{w}}_{TP}}$.
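Given the sample moments m and V, the bounds ${\mathbf{b}^{l}}$, ${\mathbf{b}^{u}}$, ${\mathbf{B}_{1}}$, ${\mathbf{B}_{2}}$ and the moments of ${\mathbf{w}_{TP}^{\dagger }}$, the measures (23)–(34) are straightforward averages and normalized norms; a sketch for the non-dagger measures is given below (the dagger versions follow the same pattern with $\mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]$ and $\mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]$ in place of the bounds).

```python
import numpy as np

def discrepancy_measures(b_l, b_u, m, B1, B2, V):
    """Measures (23), (24), (26), (27) and (29), (30), (32), (33)."""
    p = m.shape[0]
    ones = np.ones(p)
    t_l = np.abs(b_l - m).sum() / p
    t_u = np.abs(b_u - m).sum() / p
    T1 = np.abs(ones @ (B1 - V) @ ones) / p**2
    T2 = np.abs(ones @ (B2 - V) @ ones) / p**2
    f_l = np.sum((b_l - m)**2) / np.sum(m**2)     # squared Frobenius norms
    f_u = np.sum((b_u - m)**2) / np.sum(m**2)
    F1 = np.sum((B1 - V)**2) / np.sum(V**2)
    F2 = np.sum((B2 - V)**2) / np.sum(V**2)
    return t_l, t_u, T1, T2, f_l, f_u, F1, F2
```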
In the following, we will study simulations of (23)–(34) for various parameter values. In order to account for a wide range of values of $\boldsymbol{\mu }$ and Σ, these values will be randomly generated in the simulation study. Each of the p elements in the mean vector $\boldsymbol{\mu }$ will be independently generated as $\mathcal{U}(-0.1,0.1)$, where $\mathcal{U}(l,u)$ denotes the uniform distribution between l and u. The positive definite covariance matrix Σ will be determined as $\boldsymbol{\Sigma }=\boldsymbol{\Gamma }\boldsymbol{\Lambda }{\boldsymbol{\Gamma }^{\prime }}$, where the $p\times p$ matrix Γ represents the eigenvectors of Σ and is generated according to the Haar distribution. The $p\times p$ matrix Λ is diagonal, and its elements represent the ordered eigenvalues of Σ. Here we let the p eigenvalues be equally spaced from d to 1, for various values of d. The parameter d thus represents a measure of dependency between the p assets in the portfolio, where $d=1$ represents no dependency and larger d represents a stronger dependency structure. Consequently, the simulation procedure can be described as follows (a code sketch of these steps is given after the list):
  • 1) Generate $\boldsymbol{\mu }$, with ${\mu _{i}}\sim \mathcal{U}(-0.1,0.1)$, $i=1,\dots ,p$.
  • 2) Generate Γ according to the Haar distribution, and compute $\boldsymbol{\Sigma }=\boldsymbol{\Gamma }\boldsymbol{\Lambda }{\boldsymbol{\Gamma }^{\prime }}$, where $\text{diag}(\boldsymbol{\Lambda })$ consists of p equally spaced values from d to 1.
  • 3) Independently generate $\bar{\mathbf{x}}\sim {\mathcal{N}_{p,1}}(\boldsymbol{\mu },\boldsymbol{\Sigma }/N)$ and $n\mathbf{S}\sim {\mathcal{W}_{p}}(n,\boldsymbol{\Sigma })$.
  • 4) Compute ${\tilde{\mathbf{w}}_{TP}}$.
  • 5) Repeat steps 3) and 4) above $s=10000$ times.
  • 6) Based on the s samples of ${\tilde{\mathbf{w}}_{TP}}$, compute m and V.
  • 7) Given m and V, compute (23)–(34).
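The sketch below implements steps 1)–6) for one combination of p, N and d; the Haar-distributed Γ is generated via a QR decomposition with sign correction, and α = 1, ${r_{f}}=0$ as in the study. The helper names and the reduced number of repetitions in the example call are illustrative.

```python
import numpy as np

def haar_orthogonal(p, rng):
    """Haar-distributed orthogonal p x p matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(p, p)))
    return Q * np.sign(np.diag(R))

def simulate_sample_moments(p, N, d, s=10_000, seed=0):
    """Steps 1)-6): sample mean m and sample covariance V of s draws of w_tilde."""
    rng = np.random.default_rng(seed)
    n = N - 1
    mu = rng.uniform(-0.1, 0.1, size=p)                  # step 1
    Gamma = haar_orthogonal(p, rng)                      # step 2
    Sigma = Gamma @ np.diag(np.linspace(d, 1.0, p)) @ Gamma.T
    L = np.linalg.cholesky(Sigma)

    W = np.zeros((s, p))
    for r in range(s):                                   # steps 3)-5)
        x_bar = mu + L @ rng.normal(size=p) / np.sqrt(N)
        Y = L @ rng.normal(size=(p, n))                  # n S ~ W_p(n, Sigma)
        S = Y @ Y.T / n
        W[r] = np.linalg.pinv(S) @ x_bar                 # step 4 (alpha = 1, r_f = 0)
    return W.mean(axis=0), np.cov(W, rowvar=False)       # step 6: m and V

m, V = simulate_sample_moments(p=25, N=10, d=5, s=2_000)
```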
The above procedure is repeated $r=10$ times to get r values of (23)–(34) for a given combination of p, N and d. Figures 1–12 display the mean value, over the r simulations, of each respective measure, for $p=\{25,50,75,100\}$, $d=\{1,\dots ,10\}$ and $N=\{2,0.4p,0.7p,p-3\}$.5 For easier reading, the values are displayed on a logarithmic scale and are connected with a solid line. First, we notice that most measures seem to increase with increasing dependency measure d. Further, ${t_{l}}$, ${t_{u}}$, ${t^{\dagger }}$, ${T_{1}}$, ${T_{2}}$ and ${T^{\dagger }}$ increase with increasing sample size N. In contrast, ${F_{2}}$, the measure of the discrepancy between the sample variance of ${\tilde{\mathbf{w}}_{TP}}$ and the variance bound ${\mathbf{B}_{2}}$, decreases with increasing N. Regarding the bounds on the expected value of ${\tilde{\mathbf{w}}_{TP}}$, ${t_{l}}$ and ${t_{u}}$ become very similar, and so do ${f_{l}}$ and ${f_{u}}$. The measures of the difference between $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]$ and $\mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]$, ${t^{\dagger }}$ and ${f^{\dagger }}$, are fairly small for most of the considered simulation parameters. This suggests that $\mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]$ can serve as a rough approximation of $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]$, especially for $N\in (0.4p,0.7p)$. Furthermore, when $d=1$ we have $\boldsymbol{\Sigma }={\mathbf{I}_{p}}$, and hence both bounds ${\mathbf{b}^{l}}$ and ${\mathbf{b}^{u}}$, as well as $\mathbb{E}[{\mathbf{w}_{TP}^{\dagger }}]$, coincide with $\mathbb{E}[{\tilde{\mathbf{w}}_{TP}}]$. In particular, for $d=1$, these measures simply capture the sampling variation in the sample mean m. Similarly, when $d=1$, ${T^{\dagger }}$ and ${F^{\dagger }}$ capture the sampling variation in V. Further, for $N<p-3$ and low values of d, ${T^{\dagger }}$ and ${F^{\dagger }}$ are fairly small, suggesting that $\mathbb{V}[{\mathbf{w}_{TP}^{\dagger }}]$ could be applied as a rough approximation of $\mathbb{V}[{\tilde{\mathbf{w}}_{TP}}]$ in these cases. Finally, we notice that the measures ${F_{1}}$ and ${F_{2}}$ become very large for most of the combinations of p, N and d. It is, however, important to note that the squared Frobenius norms of differences that these measures are based on capture element-wise squared discrepancies, while ${\mathbf{B}_{1}}$ and ${\mathbf{B}_{2}}$ are not element-wise bounds, but rather bounds in the Löwner order sense.
vmsta212_g001.jpg
Fig. 1.
The logarithm of ${t_{l}}$ plotted for various values of p, N and d
vmsta212_g002.jpg
Fig. 2.
The logarithm of ${t_{u}}$ plotted for various values of p, N and d
vmsta212_g003.jpg
Fig. 3.
The logarithm of ${t^{\dagger }}$ plotted for various values of p, N and d
vmsta212_g004.jpg
Fig. 4.
The logarithm of ${T_{1}}$ plotted for various values of p, N and d
vmsta212_g005.jpg
Fig. 5.
The logarithm of ${T_{2}}$ plotted for various values of p, N and d
vmsta212_g006.jpg
Fig. 6.
The logarithm of ${T^{\dagger }}$ plotted for various values of p, N and d
vmsta212_g007.jpg
Fig. 7.
The logarithm of ${f_{l}}$ plotted for various values of p, N and d
vmsta212_g008.jpg
Fig. 8.
The logarithm of ${f_{u}}$ plotted for various values of p, N and d
vmsta212_g009.jpg
Fig. 9.
The logarithm of ${f^{\dagger }}$ plotted for various values of p, N and d
vmsta212_g010.jpg
Fig. 10.
The logarithm of ${F_{1}}$ plotted for various values of p, N and d
vmsta212_g011.jpg
Fig. 11.
The logarithm of ${F_{2}}$ plotted for various values of p, N and d
vmsta212_g012.jpg
Fig. 12.
The logarithm of ${F^{\dagger }}$ plotted for various values of p, N and d

5 Summary

The TP is an important portfolio in the mean-variance asset optimization framework of [35], and the statistical properties of the typical TP weights estimator have been thoroughly studied. However, when the portfolio dimension is greater than the sample size, this estimator is not applicable, since standard inversion of the now singular sample covariance matrix is not possible. This issue can be solved by applying the Moore–Penrose inverse, with which a general TP weights estimator can be constructed, covering both the singular and nonsingular case. Unfortunately, there exists no derivation of the moments of the Moore–Penrose inverse of a singular Wishart matrix, and consequently the moments of the general TP estimator cannot be obtained.
In this paper, we provide bounds on the mean and variance of the TP weights estimator in the singular case. Further, we present approximate results, as well as exact moment results in the case when the population covariance is equal to the identity matrix. We also provide exact moment results when the reflexive generalized inverse is applied in the TP weights equation.
Moreover, we investigate the properties of the derived bounds, and the estimator based on the reflexive generalized inverse, in a simulation study. The difference between the various bounds and the sample counterparts is measured by several quantities, and studied for numerous dimensions, sample sizes and levels of dependency of the population covariance matrix. The results suggest that many of the derived bounds are closest to the sample moments when the population covariance matrix implies low dependency between the considered assets. Finally, the study implies that in some cases the moments of the TP weights based on the reflexive generalized inverse can be used as a rough approximation for the moments of the TP weights based on the Moore–Penrose inverse. For future studies, it would be relevant, for example, to perform a sensitivity analysis on how fluctuations in the population covariance matrix affect the estimated TP weights.

Appendix

Lemma A1.
The elements of $\mathbb{E}[{\mathbf{S}^{+}}]$ have the bounds, for $i=1,\dots ,p$,
\[ 0<{d_{ii}}\le {e_{ii}}\le {u_{ii}^{(a)}},\]
and, for $i,j=1,\dots ,p$, $i\ne j$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {e_{ij}}& \displaystyle \le & \displaystyle \min \{{d_{ij}},{u_{ij}^{(a)}}\}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} \displaystyle {e_{ij}}& \displaystyle \ge & \displaystyle \max \{{d_{ij}},{u_{ij}^{(a)}}\}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}.\end{array}\]
Proof.
First note that in accordance with Theorem 3.2 and Theorem 3.3 of [28], we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbf{D}{\le _{L}}\mathbb{E}[{\mathbf{S}^{+}}]& \displaystyle {\le _{L}}& \displaystyle {\mathbf{U}_{a}},\\ {} \displaystyle \mathbb{E}[{\mathbf{S}^{+}}]& \displaystyle {\le _{L}}& \displaystyle {\mathbf{U}_{b}}.\end{array}\]
Further, by definition of the Löwner order we have, with $\boldsymbol{\alpha }\in {\mathbb{R}^{p}}$, that
(35)
\[ {\boldsymbol{\alpha }^{\prime }}\mathbf{D}\boldsymbol{\alpha }\le {\boldsymbol{\alpha }^{\prime }}\mathbb{E}[{\mathbf{S}^{+}}]\boldsymbol{\alpha }\le {\boldsymbol{\alpha }^{\prime }}{\mathbf{U}_{a}}\boldsymbol{\alpha }.\]
Thus, since ${\boldsymbol{\alpha }^{\prime }}(\mathbb{E}[{\mathbf{S}^{+}}]-\mathbf{D})\boldsymbol{\alpha }\ge 0$, we have that $\mathbb{E}[{\mathbf{S}^{+}}]-\mathbf{D}$ is a positive semi-definite matrix, and the same holds for ${\mathbf{U}_{a}}-\mathbb{E}[{\mathbf{S}^{+}}]$. This gives that $0<{d_{ii}}\le {e_{ii}}\le {u_{ii}^{(a)}}$, $i=1,\dots ,p$.
Moreover, note that every principal submatrix of a positive definite matrix is also positive definite. Combined with (35) it provides the following inequalities, for any $i,j=1,\dots ,p$, and with arbitrary nonzero scalars ${x_{1}}$ and ${x_{2}}$,
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {x_{1}^{2}}{u_{ii}^{(a)}}+2{x_{1}}{x_{2}}{u_{ij}^{(a)}}+{x_{2}^{2}}{u_{jj}^{(a)}}& \displaystyle \ge \\ {} \displaystyle {x_{1}^{2}}{e_{ii}}+2{x_{1}}{x_{2}}{e_{ij}}+{x_{2}^{2}}{e_{jj}}& \displaystyle \ge \\ {} \displaystyle {x_{1}^{2}}{d_{ii}}+2{x_{1}}{x_{2}}{d_{ij}}+{x_{2}^{2}}{d_{jj}}& \displaystyle >& \displaystyle 0.\end{array}\]
Now, first assume ${x_{1}}>0$, ${x_{2}}>0$. Then, the above expressions can be applied to obtain
(36)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {x_{1}^{2}}{e_{ii}}+2{x_{1}}{x_{2}}{e_{ij}}+{x_{2}^{2}}{e_{jj}}& \displaystyle \ge & \displaystyle {x_{1}^{2}}{d_{ii}}+2{x_{1}}{x_{2}}{d_{ij}}+{x_{2}^{2}}{d_{jj}},\end{array}\]
(37)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {x_{1}^{2}}{u_{ii}^{(a)}}+2{x_{1}}{x_{2}}{e_{ij}}+{x_{2}^{2}}{u_{jj}^{(a)}}& \displaystyle \ge & \displaystyle {x_{1}^{2}}{d_{ii}}+2{x_{1}}{x_{2}}{d_{ij}}+{x_{2}^{2}}{d_{jj}},\\ {} \displaystyle {e_{ij}}& \displaystyle \ge & \displaystyle -\frac{{x_{1}^{2}}({u_{ii}^{(a)}}-{d_{ii}})+{x_{2}^{2}}({u_{jj}^{(a)}}-{d_{jj}})-2{x_{1}}{x_{2}}{d_{ij}}}{2{x_{1}}{x_{2}}}\\ {} & \displaystyle =& \displaystyle -\frac{{x_{1}}({u_{ii}^{(a)}}-{d_{ii}})}{2{x_{2}}}-\frac{{x_{2}}({u_{jj}^{(a)}}-{d_{jj}})}{2{x_{1}}}+{d_{ij}}\end{array}\]
for any $i,j=1,\dots ,p$, $i\ne j$. As the right-hand side is a lower bound, we would like to find values ${x_{1}}$ and ${x_{2}}$ that maximize this concave function. Differentiating and setting the derivative equal to zero, we obtain that the maximum is attained at
\[ {x_{1}^{2}}({u_{ii}^{(a)}}-{d_{ii}})={x_{2}^{2}}({u_{jj}^{(a)}}-{d_{jj}}).\]
Without loss of generality we can set ${x_{1}}=1$ and thus obtain the maximum at
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {x_{1}}& \displaystyle =& \displaystyle 1,\\ {} \displaystyle {x_{2}}& \displaystyle =& \displaystyle \sqrt{\frac{({u_{ii}^{(a)}}-{d_{ii}})}{({u_{jj}^{(a)}}-{d_{jj}})}}.\end{array}\]
Applying this result to equation (37) yields
(38)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {e_{ij}}& \displaystyle \ge & \displaystyle {d_{ij}}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}.\end{array}\]
With an analogous approach, again with ${x_{1}}>0$, ${x_{2}}>0$, we can use the inequalities
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {x_{1}^{2}}{d_{ii}}+2{x_{1}}{x_{2}}{e_{ij}}+{x_{2}^{2}}{d_{jj}}& \displaystyle \le & \displaystyle {x_{1}^{2}}{u_{ii}^{(a)}}+2{x_{1}}{x_{2}}{u_{ij}^{(a)}}+{x_{2}^{2}}{u_{jj}^{(a)}},\\ {} \displaystyle {e_{ij}}& \displaystyle \le & \displaystyle \frac{{x_{1}^{2}}({u_{ii}^{(a)}}-{d_{ii}})+{x_{2}^{2}}({u_{jj}^{(a)}}-{d_{jj}})+2{x_{1}}{x_{2}}{u_{ij}^{(a)}}}{2{x_{1}}{x_{2}}}\end{array}\]
in order to obtain the upper bound
(39)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {e_{ij}}& \displaystyle \le & \displaystyle {u_{ij}^{(a)}}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}.\end{array}\]
Instead considering ${x_{1}}<0$ and ${x_{2}}>0$ (or ${x_{1}}>0$ and ${x_{2}}<0$), with a similar approach, we can obtain the bounds
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {e_{ij}}& \displaystyle \le & \displaystyle {d_{ij}}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} \displaystyle {e_{ij}}& \displaystyle \ge & \displaystyle {u_{ij}^{(a)}}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}.\end{array}\]
Letting ${x_{1}}<0$ and ${x_{2}}<0$ again yields the bounds (38) and (39). Expressed differently, the above bounds can be written as
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {e_{ij}}& \displaystyle \le & \displaystyle \min \{{d_{ij}},{u_{ij}^{(a)}}\}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} \displaystyle {e_{ij}}& \displaystyle \ge & \displaystyle \max \{{d_{ij}},{u_{ij}^{(a)}}\}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\end{array}\]
concluding the proof.  □
The results in Lemma A1 can be extended further by also considering the bounding matrix ${\mathbf{U}_{b}}$. The following lemma summarizes this result.
Lemma A2.
The elements of $\mathbb{E}[{\mathbf{S}^{+}}]$ have the bounds, for $i=1,\dots ,p$,
\[ 0<{g_{ii}}:={d_{ii}}\le {e_{ii}}\le {h_{ii}}:={u_{ii}^{(\ast )}},\]
where ${u_{ii}^{(\ast )}}=\min \{{u_{ii}^{(a)}},{u_{ii}^{(b)}}\}$. Further, for $i,j=1,\dots ,p$, $i\ne j$,
\[ {g_{ij}}\le {e_{ij}}\le {h_{ij}}\]
with
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {g_{ij}}& \displaystyle =& \displaystyle \max \left\{\begin{array}{l}{d_{ij}}-\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})},\\ {} {u_{ij}^{(a)}}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} -\sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})},\\ {} -\sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\end{array}\right\},\\ {} \displaystyle {h_{ij}}& \displaystyle =& \displaystyle \min \left\{\begin{array}{l}{d_{ij}}+\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})},\\ {} {u_{ij}^{(a)}}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})},\\ {} \sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})},\\ {} \sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\end{array}\right\}.\end{array}\]
Proof.
First, we have that
\[ {d_{ij}}-\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})}\le {e_{ij}}\le {d_{ij}}+\sqrt{({u_{ii}^{(\ast )}}-{d_{ii}})({u_{jj}^{(\ast )}}-{d_{jj}})},\]
since ${e_{ii}}$ (and ${e_{jj}}$) in (36) can be replaced by either ${u_{ii}^{(a)}}$ or ${u_{ii}^{(b)}}$ (respectively ${u_{jj}^{(a)}}$ or ${u_{jj}^{(b)}}$), whichever is smaller. Then
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {u_{ij}^{(a)}}-\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}& \displaystyle \le & \displaystyle {e_{ij}}\le {u_{ij}^{(a)}}+\sqrt{({u_{ii}^{(a)}}-{d_{ii}})({u_{jj}^{(a)}}-{d_{jj}})}\\ {} \displaystyle -\sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})}& \displaystyle \le & \displaystyle {e_{ij}}\le \sqrt{({u_{ii}^{(b)}}-{d_{ii}})({u_{jj}^{(b)}}-{d_{jj}})}\end{array}\]
follows directly from Lemma A1 and the fact that ${\mathbf{U}_{b}}$ is diagonal and thus ${u_{ij}^{(b)}}=0$. Finally,
\[ -\sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\le {e_{ij}}\le \sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\]
follows from
\[ -\sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}\le -\sqrt{{e_{ii}}{e_{jj}}}\le {e_{ij}}\le \sqrt{{e_{ii}}{e_{jj}}}\le \sqrt{{u_{ii}^{(\ast )}}{u_{jj}^{(\ast )}}}.\]
The lemma is proved.  □
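The element-wise bounds of Lemma A2 are straightforward to evaluate numerically once the bounding matrices are available. The following Python sketch is an illustration only: it assumes that the matrices D, U_a and U_b (from Theorems 3.2 and 3.3 of [28]) have already been computed and are supplied as numpy arrays, assembles the matrices G and H of Lemma A2, and provides a Monte Carlo estimate of $\mathbb{E}[{\mathbf{S}^{+}}]$ for comparison.

import numpy as np

def lemma_a2_bounds(D, Ua, Ub):
    # Element-wise bounds (g_ij, h_ij) on E[S^+] from Lemma A2.
    # D, Ua, Ub are assumed to be the bounding matrices computed elsewhere.
    p = D.shape[0]
    u_star = np.minimum(np.diag(Ua), np.diag(Ub))   # u_ii^(*)
    G, H = np.empty((p, p)), np.empty((p, p))
    for i in range(p):
        for j in range(p):
            if i == j:
                G[i, i], H[i, i] = D[i, i], u_star[i]
                continue
            # The quantities under the square roots are nonnegative by the Loewner bounds.
            s_star = np.sqrt((u_star[i] - D[i, i]) * (u_star[j] - D[j, j]))
            s_a = np.sqrt((Ua[i, i] - D[i, i]) * (Ua[j, j] - D[j, j]))
            s_b = np.sqrt((Ub[i, i] - D[i, i]) * (Ub[j, j] - D[j, j]))
            s_u = np.sqrt(u_star[i] * u_star[j])
            G[i, j] = max(D[i, j] - s_star, Ua[i, j] - s_a, -s_b, -s_u)
            H[i, j] = min(D[i, j] + s_star, Ua[i, j] + s_a, s_b, s_u)
    return G, H

def mc_mean_pinv(Sigma, n, reps=5000, seed=1):
    # Monte Carlo estimate of E[S^+] for nS ~ W_p(n, Sigma) with n < p.
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    acc = np.zeros((p, p))
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        acc += np.linalg.pinv(X.T @ X / n)
    return acc / reps

Given D, Ua and Ub, one would then check element-wise that G <= mc_mean_pinv(Sigma, n) <= H, up to Monte Carlo error.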
In the following, let
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {k_{3}}& \displaystyle =& \displaystyle \frac{n[p(n+1)-2]}{p[p(p+1)-2]},\\ {} \displaystyle {k_{4}}& \displaystyle =& \displaystyle \frac{n(p-n)}{p[p(p+1)-2]}.\end{array}\]
Further, define $g(\mathbf{L})={\textstyle\prod _{i=1}^{n}}|{\mathbf{L}_{i}}{|_{+}}$ and $c(n,p)={(2\pi )^{np/2}}{2^{n}}s(n,p)$, where $|{\mathbf{L}_{i}}{|_{+}}$ and $s(n,p)$ are defined as on pages 128 and 129 in [28].
Lemma A3.
Let an $n\times p$ matrix L satisfy ${\mathbf{LL}^{\prime }}={\mathbf{I}_{n}}$. Then, for all $\boldsymbol{\alpha },\mathbf{x}\in {\mathbb{R}^{p}}$,
  • (i) $\textstyle\int ({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})g(\mathbf{L})d\mathbf{L}={k_{1}}c(n,p){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}+{k_{2}}c(n,p)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})$,
  • (ii) $\textstyle\int {({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}g(\mathbf{L})d\mathbf{L}={k_{3}}c(n,p){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}+{k_{4}}c(n,p)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})$.
Proof.
In accordance with page 130 in [28], we have
(40)
\[\begin{aligned}{}& n({\mathbf{I}_{{p^{2}}}}+{\mathbf{K}_{p,p}})+{n^{2}}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}})\\ {} & \hspace{1em}=c{(n,p)^{-1}}\int {(\mathbf{L}\otimes \mathbf{L})^{\prime }}\left\{p({\mathbf{I}_{{n^{2}}}}+{\mathbf{K}_{n,n}})+{p^{2}}\text{vec}({\mathbf{I}_{n}}){\text{vec}^{\prime }}({\mathbf{I}_{n}})\right\}\times \\ {} & \hspace{2em}\times (\mathbf{L}\otimes \mathbf{L})g(\mathbf{L})d\mathbf{L},\end{aligned}\]
where ${\mathbf{K}_{\cdot ,\cdot }}$ is the commutation matrix. Now note that
(41)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}{\mathbf{I}_{{p^{2}}}}(\boldsymbol{\alpha }\otimes \mathbf{x})=({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x}),\end{array}\]
(42)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}{\mathbf{K}_{p,p}}(\boldsymbol{\alpha }\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\end{array}\]
(43)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}})(\boldsymbol{\alpha }\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\end{array}\]
(44)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}{\mathbf{I}_{{n^{2}}}}(\mathbf{L}\otimes \mathbf{L})(\boldsymbol{\alpha }\otimes \mathbf{x})=({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x}),\end{array}\]
(45)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}{\mathbf{K}_{n,n}}(\mathbf{L}\otimes \mathbf{L})(\boldsymbol{\alpha }\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}},\end{array}\]
(46)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}\text{vec}({\mathbf{I}_{n}}){\text{vec}^{\prime }}({\mathbf{I}_{n}})(\mathbf{L}\otimes \mathbf{L})(\boldsymbol{\alpha }\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}\end{array}\]
and
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}{\mathbf{I}_{{p^{2}}}}(\mathbf{x}\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\\ {} & & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}{\mathbf{K}_{p,p}}(\mathbf{x}\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\\ {} & & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}})(\mathbf{x}\otimes \mathbf{x})=({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x}),\\ {} & & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}{\mathbf{I}_{{n^{2}}}}(\mathbf{L}\otimes \mathbf{L})(\mathbf{x}\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}},\\ {} & & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}{\mathbf{K}_{n,n}}(\mathbf{L}\otimes \mathbf{L})(\mathbf{x}\otimes \mathbf{x})={({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}},\\ {} & & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}{(\mathbf{L}\otimes \mathbf{L})^{\prime }}\text{vec}({\mathbf{I}_{n}}){\text{vec}^{\prime }}({\mathbf{I}_{n}})(\mathbf{L}\otimes \mathbf{L})(\mathbf{x}\otimes \mathbf{x})=({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x}).\end{array}\]
Then, from equation (40) we can obtain the following two expressions:
(47)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}\left\{n({\mathbf{I}_{{p^{2}}}}+{\mathbf{K}_{p,p}})+{n^{2}}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}})\right\}(\boldsymbol{\alpha }\otimes \mathbf{x})\\ {} & & \displaystyle =c{(n,p)^{-1}}{(\boldsymbol{\alpha }\otimes \mathbf{x})^{\prime }}\left[\int {(\mathbf{L}\otimes \mathbf{L})^{\prime }}\left\{p({\mathbf{I}_{{n^{2}}}}+{\mathbf{K}_{n,n}})+{p^{2}}\text{vec}({\mathbf{I}_{n}}){\text{vec}^{\prime }}({\mathbf{I}_{n}})\right\}\right.\phantom{\int }\\ {} & & \displaystyle \left.\phantom{\int }\times (\mathbf{L}\otimes \mathbf{L})g(\mathbf{L})d\mathbf{L}\right]\phantom{\int }(\boldsymbol{\alpha }\otimes \mathbf{x}),\\ {} & & \displaystyle n({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})+(n+{n^{2}}){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}\\ {} & & \displaystyle =c{(n,p)^{-1}}\int \left\{p({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})+(p+{p^{2}}){({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}\right\}g(\mathbf{L})d\mathbf{L},\end{array}\]
and
(48)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}\left\{n({\mathbf{I}_{{p^{2}}}}+{\mathbf{K}_{p,p}})+{n^{2}}\text{vec}({\mathbf{I}_{p}}){\text{vec}^{\prime }}({\mathbf{I}_{p}})\right\}(\mathbf{x}\otimes \mathbf{x})\\ {} & & \displaystyle =c{(n,p)^{-1}}{(\boldsymbol{\alpha }\otimes \boldsymbol{\alpha })^{\prime }}\left[\int {(\mathbf{L}\otimes \mathbf{L})^{\prime }}\left\{p({\mathbf{I}_{{n^{2}}}}+{\mathbf{K}_{n,n}})+{p^{2}}\text{vec}({\mathbf{I}_{n}}){\text{vec}^{\prime }}({\mathbf{I}_{n}})\right\}\right.\phantom{\int }\\ {} & & \displaystyle \left.\phantom{\int }\times (\mathbf{L}\otimes \mathbf{L})g(\mathbf{L})d\mathbf{L}\right]\phantom{\int }(\mathbf{x}\otimes \mathbf{x}),\\ {} & & \displaystyle 2n{({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}+{n^{2}}({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})\\ {} & & \displaystyle =c{(n,p)^{-1}}\int \left\{{p^{2}}({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})+2p{({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}\right\}g(\mathbf{L})d\mathbf{L}.\end{array}\]
From equation (47) we can then derive
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \int ({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})g(\mathbf{L})d\mathbf{L}& \displaystyle =& \displaystyle \frac{c(n,p)n}{p}\left[({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})+(n+1){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}\right]\\ {} & & \displaystyle -(1+p)\int {({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}g(\mathbf{L})d\mathbf{L}.\end{array}\]
Inserting this expression into equation (48) yields
\[ \int {({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})^{2}}g(\mathbf{L})d\mathbf{L}=\frac{n}{p}\frac{(p(n+1)-2){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}+(p-n)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})}{c{(n,p)^{-1}}(p(p+1)-2)},\]
and then we finally obtain
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \int ({\boldsymbol{\alpha }^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\mathbf{L}^{\prime }}\mathbf{L}\mathbf{x})g(\mathbf{L})d\mathbf{L}& \displaystyle =& \displaystyle \frac{n}{p}\frac{({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})+(n+1){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}}{c{(n,p)^{-1}}}\\ {} & & \displaystyle -(p+1)\frac{n}{p}\frac{(p(n+1)-2){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}+(p-n)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})}{c{(n,p)^{-1}}(p(p+1)-2)}\\ {} & \displaystyle =& \displaystyle \frac{c(n,p)n}{p}\left(1-\frac{(p+1)(p-n)}{p(p+1)-2}\right)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})\\ {} & & \displaystyle +\frac{c(n,p)n}{p}\left(1+n-\frac{(p+1)(p(n+1)-2)}{p(p+1)-2}\right){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\end{array}\]
completing the proof.  □
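As an added consistency check (not part of the original argument), consider the formal boundary case $p=n$. Then L is a square orthogonal matrix, so ${\mathbf{L}^{\prime }}\mathbf{L}={\mathbf{I}_{p}}$, and direct substitution into the definitions of ${k_{3}}$ and ${k_{4}}$ above, and into the coefficients appearing in the last display of the proof, gives ${k_{1}}=0$, ${k_{2}}=1$, ${k_{3}}=1$ and ${k_{4}}=0$. Using $\textstyle\int g(\mathbf{L})d\mathbf{L}=c(n,p)$ (Lemma 3.1 (i) in [28], as also used in the proof of Lemma A4 below), parts (i) and (ii) of Lemma A3 then reduce to
\[ \int ({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})g(\mathbf{L})d\mathbf{L}=c(n,p)({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})\hspace{1em}\text{and}\hspace{1em}\int {({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}}g(\mathbf{L})d\mathbf{L}=c(n,p){({\boldsymbol{\alpha }^{\prime }}\mathbf{x})^{2}},\]
which is consistent with the integrands being constant when ${\mathbf{L}^{\prime }}\mathbf{L}={\mathbf{I}_{p}}$.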
Lemma A4.
Let $n\mathbf{S}\sim {\mathcal{W}_{p}}(n,\boldsymbol{\Sigma })$, $p>n+3$ and $\boldsymbol{\Sigma }>0$. Then, for all $\boldsymbol{\alpha },\mathbf{x}\in {\mathbb{R}^{p}}$,
  • (i) $\mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]\le (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left[{k_{1}}{({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\mathbf{x})^{2}}+{k_{2}}({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\boldsymbol{\Sigma }\mathbf{x})\right]$,
  • (ii) $\mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]\le {({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}(2{c_{1}}+{c_{2}})({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})$.
Proof.
First, let ${\mathbf{Y}^{\prime }}{\boldsymbol{\Sigma }^{-1/2}}=\mathbf{T}\mathbf{L}$, where $\mathbf{L}{\mathbf{L}^{\prime }}={\mathbf{I}_{n}}$, L is an $n\times p$ matrix and T is a lower triangular $n\times n$ matrix with positive elements. Further, note that in accordance with page 131 in [28], for $p>n+3$, we have that
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[\text{vec}({\mathbf{S}^{+}}){\text{vec}^{\prime }}({\mathbf{S}^{+}})]& \displaystyle =& \displaystyle c{(n,p)^{-1}}\int ({c_{1}}({\mathbf{I}_{{p^{2}}}}+{\mathbf{K}_{p,p}})(\mathbf{P}\otimes \mathbf{P})\\ {} & & \displaystyle +{c_{2}}\text{vec}(\mathbf{P}){\text{vec}^{\prime }}(\mathbf{P}))g(\mathbf{L})d\mathbf{L},\end{array}\]
where
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbf{P}& \displaystyle =& \displaystyle {\boldsymbol{\Sigma }^{1/2}}{\mathbf{L}^{\prime }}{(\mathbf{L}\boldsymbol{\Sigma }{\mathbf{L}^{\prime }})^{-1}}{(\mathbf{L}\boldsymbol{\Sigma }{\mathbf{L}^{\prime }})^{-1}}\mathbf{L}{\boldsymbol{\Sigma }^{1/2}}.\end{array}\]
Then, with equalities similar to (41)–(46),
(49)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {(\boldsymbol{\alpha }\hspace{-0.1667em}\otimes \hspace{-0.1667em}\mathbf{x})^{\prime }}\mathbb{E}[\text{vec}({\mathbf{S}^{+}}){\text{vec}^{\prime }}({\mathbf{S}^{+}})](\boldsymbol{\alpha }\hspace{-0.1667em}\otimes \hspace{-0.1667em}\mathbf{x})& \displaystyle \hspace{-0.1667em}\hspace{-0.1667em}=\hspace{-0.1667em}\hspace{-0.1667em}& \displaystyle c{(n,p)^{-1}}{(\boldsymbol{\alpha }\hspace{-0.1667em}\otimes \hspace{-0.1667em}\mathbf{x})^{\prime }}\int ({c_{1}}({\mathbf{I}_{{p^{2}}}}+{\mathbf{K}_{p,p}})(\mathbf{P}\hspace{-0.1667em}\otimes \hspace{-0.1667em}\mathbf{P}),\\ {} & & \displaystyle +{c_{2}}\text{vec}(\mathbf{P}){\text{vec}^{\prime }}(\mathbf{P}))g(\mathbf{L})d\mathbf{L}(\boldsymbol{\alpha }\otimes \mathbf{x}),\\ {} \displaystyle \mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]& \displaystyle \hspace{-0.1667em}\hspace{-0.1667em}=\hspace{-0.1667em}\hspace{-0.1667em}& \displaystyle c{(n,p)^{-1}}\left[({c_{1}}+{c_{2}})\int {({\mathbf{x}^{\prime }}\mathbf{P}\boldsymbol{\alpha })^{2}}g(\mathbf{L})d\mathbf{L}\right.\\ {} & & \displaystyle +\left.\phantom{\int }{c_{1}}\int ({\mathbf{x}^{\prime }}\mathbf{P}\mathbf{x})({\boldsymbol{\alpha }^{\prime }}\mathbf{P}\boldsymbol{\alpha })g(\mathbf{L})d\mathbf{L}\right].\end{array}\]
Now, by Lemma A5, applied to the positive semi-definite matrix P (for instance by applying the lemma to $\mathbf{P}+\varepsilon {\mathbf{I}_{p}}$ and letting $\varepsilon \downarrow 0$), we have that $({\mathbf{x}^{\prime }}\mathbf{P}\mathbf{x})({\boldsymbol{\alpha }^{\prime }}\mathbf{P}\boldsymbol{\alpha })\ge {({\mathbf{x}^{\prime }}\mathbf{P}\boldsymbol{\alpha })^{2}}$. Further, combining this inequality with (49) and Lemma 2.4 (i) in [28], we have
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]& \displaystyle =& \displaystyle c{(n,p)^{-1}}\left[({c_{1}}+{c_{2}})\int {({\mathbf{x}^{\prime }}\mathbf{P}\boldsymbol{\alpha })^{2}}g(\mathbf{L})d\mathbf{L}\right.\\ {} & & \displaystyle +\left.\phantom{\int }{c_{1}}\int ({\mathbf{x}^{\prime }}\mathbf{P}\mathbf{x})({\boldsymbol{\alpha }^{\prime }}\mathbf{P}\boldsymbol{\alpha })g(\mathbf{L})d\mathbf{L}\right]\\ {} & \displaystyle \le & \displaystyle c{(n,p)^{-1}}(2{c_{1}}+{c_{2}})\int ({\mathbf{x}^{\prime }}\mathbf{P}\mathbf{x})({\boldsymbol{\alpha }^{\prime }}\mathbf{P}\boldsymbol{\alpha })g(\mathbf{L})d\mathbf{L}\\ {} & \displaystyle \le & \displaystyle c{(n,p)^{-1}}(2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\\ {} & & \displaystyle \times \int ({\boldsymbol{\alpha }^{\prime }}{\boldsymbol{\Sigma }^{1/2}}{\mathbf{L}^{\prime }}\mathbf{L}{\boldsymbol{\Sigma }^{1/2}}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}{\boldsymbol{\Sigma }^{1/2}}{\mathbf{L}^{\prime }}\mathbf{L}{\boldsymbol{\Sigma }^{1/2}}\mathbf{x})g(\mathbf{L})d\mathbf{L}\\ {} & \displaystyle =& \displaystyle (2{c_{1}}+{c_{2}}){({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left[{k_{1}}{({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\mathbf{x})^{2}}+{k_{2}}({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\Sigma }\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\boldsymbol{\Sigma }\mathbf{x})\right],\end{array}\]
where Lemma A3 (i) has been applied in the last equality. On the other hand, if we instead apply the inequality in Lemma 2.4 (ii) of [28], we obtain
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]& \displaystyle \le & \displaystyle c{(n,p)^{-1}}{({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left[(2{c_{1}}+{c_{2}})({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})\right]\int g(\mathbf{L})d\mathbf{L}\\ {} & \displaystyle =& \displaystyle {({\lambda _{1}}({\boldsymbol{\Sigma }^{-1}}))^{4}}\left[(2{c_{1}}+{c_{2}})({\boldsymbol{\alpha }^{\prime }}\boldsymbol{\alpha })({\mathbf{x}^{\prime }}\mathbf{x})\right]\end{array}\]
where Lemma 3.1 (i) in [28] gives the equality and concludes the proof.  □
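The bounds of Lemma A4 can also be inspected numerically. The following Python sketch is an illustration under stated assumptions only: it uses ${c_{1}}={n^{2}}{[(p-n)(p-n-1)(p-n-3)]^{-1}}$ and ${c_{2}}=(p-n-2){c_{1}}$ as given in the paper, ${k_{1}}$ and ${k_{2}}$ as in Lemma A3, interprets ${\lambda _{1}}({\boldsymbol{\Sigma }^{-1}})$ as the largest eigenvalue of ${\boldsymbol{\Sigma }^{-1}}$, and compares a Monte Carlo estimate of $\mathbb{E}[{({\boldsymbol{\alpha }^{\prime }}{\mathbf{S}^{+}}\mathbf{x})^{2}}]$ with the two upper bounds.

import numpy as np

def lemma_a4_check(Sigma, n, a, x, reps=3000, seed=3):
    # Monte Carlo estimate of E[(a' S^+ x)^2] for nS ~ W_p(n, Sigma), p > n + 3,
    # together with the upper bounds (i) and (ii) of Lemma A4 (illustration only).
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    c1 = n**2 / ((p - n) * (p - n - 1) * (p - n - 3))
    c2 = (p - n - 2) * c1
    k1 = (1 + n - (p + 1) * (p * (n + 1) - 2) / (p * (p + 1) - 2)) * n / p
    k2 = (1 - (p + 1) * (p - n) / (p * (p + 1) - 2)) * n / p
    lam1 = np.linalg.eigvalsh(np.linalg.inv(Sigma)).max()   # assumed: largest eigenvalue of Sigma^{-1}
    mc = 0.0
    for _ in range(reps):
        X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
        Splus = np.linalg.pinv(X.T @ X / n)
        mc += float(a @ Splus @ x) ** 2
    mc /= reps
    bound_i = (2 * c1 + c2) * lam1**4 * (k1 * float(a @ Sigma @ x) ** 2
                                         + k2 * float(a @ Sigma @ a) * float(x @ Sigma @ x))
    bound_ii = (2 * c1 + c2) * lam1**4 * float(a @ a) * float(x @ x)
    return mc, bound_i, bound_ii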
Lemma A5.
Let A be a $p\times p$ symmetric positive definite matrix. Then for any $\mathbf{c},\mathbf{d}\in {\mathbb{R}^{p}}$,
\[ ({\mathbf{c}^{\prime }}\mathbf{A}\mathbf{c})({\mathbf{d}^{\prime }}\mathbf{A}\mathbf{d})\ge {({\mathbf{c}^{\prime }}\mathbf{A}\mathbf{d})^{2}}.\]
Proof.
Let $\mathbf{A}=\mathbf{Q}\mathbf{R}{\mathbf{Q}^{\prime }}$ denote the eigenvalue decomposition of A, such that Q is orthogonal and R is a diagonal matrix with positive elements. Make the substitutions
\[\begin{array}{r}\displaystyle \mathbf{f}={\mathbf{R}^{1/2}}{\mathbf{Q}^{\prime }}\mathbf{c},\\ {} \displaystyle \mathbf{g}={\mathbf{R}^{1/2}}{\mathbf{Q}^{\prime }}\mathbf{d},\end{array}\]
so that the inequality $({\mathbf{c}^{\prime }}\mathbf{A}\mathbf{c})({\mathbf{d}^{\prime }}\mathbf{A}\mathbf{d})\ge {({\mathbf{c}^{\prime }}\mathbf{A}\mathbf{d})^{2}}$ can be written as
(50)
\[ ({\mathbf{f}^{\prime }}\mathbf{f})({\mathbf{g}^{\prime }}\mathbf{g})\ge {({\mathbf{f}^{\prime }}\mathbf{g})^{2}}.\]
Further, since $({\mathbf{f}^{\prime }}\mathbf{g})=\| \mathbf{f}\| \| \mathbf{g}\| \cos (\theta )$, where $\| \cdot \| $ denotes the Euclidean norm and θ is the angle between the vectors f and g, the inequality (50) becomes
\[ \| \mathbf{f}{\| ^{2}}\| \mathbf{g}{\| ^{2}}\ge \| \mathbf{f}{\| ^{2}}\| \mathbf{g}{\| ^{2}}\cos {(\theta )^{2}},\]
which holds since $\cos {(\theta )^{2}}\le 1$. The lemma is proved.  □
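Lemma A5 is simply the Cauchy–Schwarz inequality in the inner product induced by A, and it can be spot-checked numerically; the short Python snippet below (an illustration only, with arbitrary random inputs) does so.

import numpy as np

rng = np.random.default_rng(2)
p = 8
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)          # random symmetric positive definite matrix
c, d = rng.standard_normal(p), rng.standard_normal(p)

lhs = (c @ A @ c) * (d @ A @ d)
rhs = (c @ A @ d) ** 2
assert lhs >= rhs                    # Lemma A5: (c'Ac)(d'Ad) >= (c'Ad)^2
print(lhs, ">=", rhs)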

Acknowledgement

We would like to thank Prof. Yuliya Mishura, the Associate Editor and the two anonymous referees for helping to improve the paper. We are also grateful to Andrii Dmytryshyn and Mårten Gulliksson for helpful remarks on matrix inequalities.

Footnotes

1 This value represents how willing an investor is to accept upside and downside risk on their investment. It can be determined through, e.g., a qualitative assessment, such as interview questions posed to the investor.
2 It is worth mentioning that a similar structure appears in discriminant analysis. Namely, the coefficients of a discriminant function that maximizes the discrepancy between two datasets are expressed as a product of the inverse sample covariance matrix and the sample mean vector (see, for example, [6, 14]).
3 In the Bayesian setting, the posterior distribution of TP weights is expressed as a product of the (singular) Wishart matrix and Gaussian vector. Statistical properties of those products are studied by [7, 8, 13, 9].
4 Instead of using the Moore–Penrose inverse, one can consider regularization techniques such as the ridge-type method [43], the Landweber–Fridman iteration approach [33], a form of Lasso [19], or an iterative algorithm based on second-order damped dynamical systems [24, 25]; a minimal sketch of the ridge-type alternative is given after these footnotes.
5 The computation time for each set of simulations for $p=\{25,50,75,100\}$ is $\{12,26,55,107\}$ minutes, respectively, when the calculations are run on 15 threads of an AMD Ryzen 7 5800H CPU. Hence, for future research, it is feasible to explore even larger dimensions and sample sizes on a standard PC.
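As referenced in footnote 4, a common alternative to the Moore–Penrose inverse in the singular case is a ridge-type regularization of the sample covariance matrix. The following minimal Python sketch illustrates the idea; the regularization parameter lam and the divisor convention are illustrative assumptions, and the paper does not prescribe this estimator.

import numpy as np

def tp_weights_ridge(X, alpha, r_f, lam=0.1):
    # Ridge-type TP weights: replace S^+ by (S + lam*I)^{-1} (illustration only).
    p = X.shape[1]
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)
    return np.linalg.solve(S + lam * np.eye(p), xbar - r_f) / alpha

The choice of lam trades off additional bias against the instability caused by the rank deficiency of the sample covariance matrix.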

References

[1] 
Alfelt, G., Bodnar, T., Javed, F., Tyrcha, J.: Singular conditional autoregressive Wishart model for realized covariance matrices. Accepted for publication in Journal of Business and Economic Statistics (2022).
[2] 
Ao, M., Yingying, L., Zheng, X.: Approaching mean-variance efficiency for large portfolios. Rev. Financ. Stud. 32(7), 2890–2919 (2019). https://doi.org/10.1093/rfs/hhy105
[3] 
Bauder, D., Bodnar, T., Mazur, S., Okhrin, Y.: Bayesian Inference for the Tangent Portfolio. Int. J. Theor. Appl. Finance 21(08), 1850054 (2018). MR3897158. https://doi.org/10.1142/S0219024918500541
[4] 
Bodnar, O.: Sequential surveillance of the tangency portfolio weights. Int. J. Theor. Appl. Finance 12(06), 797–810 (2009). https://doi.org/10.1142/S0219024909005464
[5] 
Bodnar, O., Bodnar, T., Parolya, N.: Recent advances in shrinkage-based high-dimensional inference. J. Multivar. Anal., 104826 (2022). MR4353848. https://doi.org/10.1016/j.jmva.2021.104826
[6] 
Bodnar, T., Okhrin, Y.: On the product of inverse Wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scand. J. Stat. 38(2), 311–331 (2011). MR2829602. https://doi.org/10.1111/j.1467-9469.2011.00729.x
[7] 
Bodnar, T., Mazur, S., Okhrin, Y.: On the exact and approximate distributions of the product of a Wishart matrix with a normal vector. J. Multivar. Anal. 122, 70–81 (2013). MR3189308. https://doi.org/10.1016/j.jmva.2013.07.007
[8] 
Bodnar, T., Mazur, S., Okhrin, Y.: Distribution of the product of singular Wishart Matrix and normal vector. Theory Probab. Math. Stat. 91, 1–15 (2014). MR3364119. https://doi.org/10.1090/tpms/962
[9] 
Bodnar, T., Mazur, S., Parolya, N.: Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions. Scand. J. Stat. 46(2), 636–660 (2019). MR3948571. https://doi.org/10.1111/sjos.12383
[10] 
Bodnar, T., Mazur, S., Podgorski, K.: Singular inverse Wishart distribution and its application to portfolio theory. J. Multivar. Anal., 314–326 (2016). MR3431434. https://doi.org/10.1016/j.jmva.2015.09.021
[11] 
Bodnar, T., Mazur, S., Podgorski, K.: A test for the global minimum variance portfolio for small sample and singular covariance. AStA Adv. Stat. Anal. 101(3), 253–265 (2017). MR3679345. https://doi.org/10.1007/s10182-016-0282-z
[12] 
Bodnar, T., Okhrin, Y., Parolya, N.: Optimal shrinkage-based portfolio selection in high dimensions. J. Bus. Econ. Stat., to appear (2022)
[13] 
Bodnar, T., Mazur, S., Muhinyuza, S., Parolya, N.: On the product of a singular Wishart matrix and a singular Gaussian vector in high dimension. Theory Probab. Math. Stat. 99(2), 37–50 (2018). MR3908654. https://doi.org/10.1090/tpms/1078
[14] 
Bodnar, T., Mazur, S., Ngailo, E., Parolya, N.: Discriminant analysis in small and large dimensions. Theory Probab. Math. Stat. 100, 24–42 (2019). MR3992991. https://doi.org/10.1090/tpms/1096
[15] 
Bodnar, T., Mazur, S., Podgorski, K., Tyrcha, J.: Tangency portfolio weights for singular covariance matrix in small and large dimensions: Estimation and test theory. J. Stat. Plan. Inference 201, 40–57 (2019). MR3913439. https://doi.org/10.1016/j.jspi.2018.11.003
[16] 
Bodnar, T., Dmytriv, S., Okhrin, Y., Parolya, N., Schmid, W.: Statistical inference for the expected utility portfolio in high dimensions. IEEE Trans. Signal Process. 69, 1–14 (2021). MR4213326. https://doi.org/10.1109/TSP.2020.3037369
[17] 
Boullion, T.L., Odell, P.L.: Generalized Inverse Matrices. Wiley, New York, NY (1971). https://cds.cern.ch/record/213449. MR0338012
[18] 
Britten-Jones, M.: The sampling error in estimates of mean-variance efficient portfolio weights. J. Finance 54(2), 655–671 (1999). https://doi.org/10.1111/0022-1082.00120
[19] 
Brodie, J., Daubechies, I., De Mol, C., Giannone, D., Loris, I.: Sparse and stable Markowitz portfolios. Proc. Natl. Acad. Sci. USA 106, 12267–12272 (2009). https://doi.org/10.1073/pnas.0904287106
[20] 
Cai, T.T., Hu, J., Li, Y., Zheng, X.: High-dimensional minimum variance portfolio estimation based on high-frequency data. J. Econom. 214(2), 482–494 (2020). MR4057056. https://doi.org/10.1016/j.jeconom.2019.04.039
[21] 
Cook, R.D., Forzani, L.: On the mean and variance of the generalized inverse of a singular Wishart matrix. Electron. J. Stat. 5, 146–158 (2011). MR2786485. https://doi.org/10.1214/11-EJS602
[22] 
Ding, Y., Li, Y., Zheng, X.: High dimensional minimum variance portfolio estimation under statistical factor models. J. Econom. 222(1), 502–515 (2021). MR4234830. https://doi.org/10.1016/j.jeconom.2020.07.013
[23] 
Ghazal, G.A., Neudecker, H.: On second-order and fourth-order moments of jointly distributed random matrices: a survey. Linear Algebra Appl. 321(1), 61–93 (2000). MR1799985. https://doi.org/10.1016/S0024-3795(00)00181-6
[24] 
Gulliksson, M., Mazur, S.: An iterative approach to ill-conditioned optimal portfolio selection. Comput. Econ. 56, 773–794 (2020). https://doi.org/10.1007/s10614-019-09943-6
[25] 
Gulliksson, M., Oleynik, A., Mazur, S.: Portfolio selection with a rank-deficient covariance matrix. Working paper (2021)
[26] 
Hautsch, N., Kyj, L.M., Malec, P.: Do high-frequency data improve high-dimensional portfolio allocations? J. Appl. Econom. 30(2), 263–290 (2015). MR3322719. https://doi.org/10.1002/jae.2361
[27] 
Hubbard, D.W.: The Failure of Risk Management: Why It’s Broken and How to Fix It. John Wiley & Sons, (2020)
[28] 
Imori, S., Rosen, D.: On the mean and dispersion of the Moore-Penrose generalized inverse of a Wishart matrix. Electron. J. Linear Algebra 36, 124–133 (2020). MR4089045. https://doi.org/10.13001/ela.2020.5091
[29] 
Javed, F., Mazur, S., Ngailo, E.: Higher order moments of the estimated tangency portfolio weights. J. Appl. Stat. 48(3), 517–535 (2021). MR4205986. https://doi.org/10.1080/02664763.2020.1736523
[30] 
Javed, F., Mazur, S., Thorsén, E.: Tangency portfolio weights under a skew-normal model in small and large dimensions. Working paper 13 (2021)
[31] 
Karlsson, S., Mazur, S., Muhinyuza, S.: Statistical inference for the tangency portfolio in high dimension. Statistics 55(3), 532–560 (2021). MR4313438. https://doi.org/10.1080/02331888.2021.1951730
[32] 
Kotsiuba, I., Mazur, S.: On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian vector. Theory Probab. Math. Stat. 93, 95–104 (2015). MR3553443. https://doi.org/10.1090/tpms/1004
[33] 
Kress, R.: Linear Integral Equations. Springer, (1999). MR1723850. https://doi.org/10.1007/978-1-4612-0559-3
[34] 
Ledoit, O., Wolf, M.: Nonlinear shrinkage of the covariance matrix for portfolio selection: Markowitz meets Goldilocks. Rev. Financ. Stud. 30(12), 4349–4388 (2017). https://doi.org/10.1093/rfs/hhx052
[35] 
Markowitz, H.: Portfolio selection. J. Finance 7(1), 77–91 (1952). https://doi.org/10.1111/j.1540-6261.1952.tb01525.x
[36] 
Mathai, A.M., Provost, S.B.: Quadratic Forms in Random Variables. CRC Press, (1992). MR1192786
[37] 
Muhinyuza, S.: A test on mean-variance efficiency of the tangency portfolio in high-dimensional setting. Theory Probab. Math. Stat., in press (2020). MR4421345. https://doi.org/10.1090/tpms
[38] 
Muhinyuza, S., Bodnar, T., Lindholm, M.: A test on the location of the tangency portfolio on the set of feasible portfolios. Appl. Math. Comput. 386, 125519 (2020). MR4126729. https://doi.org/10.1016/j.amc.2020.125519
[39] 
Okhrin, Y., Schmid, W.: Distributional properties of portfolio weights. J. Econom. 134(1), 235–256 (2006). MR2328322. https://doi.org/10.1016/j.jeconom.2005.06.022
[40] 
Planitz, M.: Inconsistent systems of linear equations. Math. Gaz. 63(425), 181–185 (1979). https://doi.org/10.2307/3617890
[41] 
Rubio, F., Mestre, X., Palomar, D.P.: Performance analysis and optimal selection of large minimum variance portfolios under estimation risk. IEEE J. Sel. Top. Signal Process. 6(4), 337–350 (2012). https://doi.org/10.1109/JSTSP.2012.2202634
[42] 
Taleb, N.N.: The Black Swan: The Impact of the Highly Improbable, vol. 2. Random House (2007)
[43] 
Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Winston, New York (1977). MR0455365
[44] 
Tsukuma, H.: Estimation of a high-dimensional covariance matrix with the Stein loss. J. Multivar. Anal. 148, 1–17 (2016). MR3493016. https://doi.org/10.1016/j.jmva.2016.02.012

Copyright
© 2022 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Tangency portfolio, singular inverse Wishart, Moore–Penrose inverse, reflexive generalized inverse, estimator moments

MSC2010
62H12, 91G10

Funding
Stepan Mazur acknowledges financial support from the internal research grants at Örebro University and from the project “Models for macro and financial economics after the financial crisis” (Dnr: P18-0201) funded by Jan Wallander and Tom Hedelius Foundation.
