Modern Stochastics: Theory and Applications (VMSTA) — Research Article VMSTA212, doi: 10.15559/22-VMSTA212
On the mean and variance of the estimated tangency portfolio weights for small samples
Gustav Alfelt (gustava@math.su.se; Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden) and Stepan Mazur (corresponding author, Stepan.Mazur@oru.se; Örebro University School of Business, SE-70182 Örebro, Sweden, and Department of Economics and Statistics, School of Business and Economics, Linnaeus University, SE-35195 Växjö, Sweden)
© 2022 The Author(s). Published by VTeX. Open access article under the CC BY license.

In this paper, a sample estimator of the tangency portfolio (TP) weights is considered. The focus is on the situation where the number of observations is smaller than the number of assets in the portfolio and the returns are i.i.d. normally distributed. Under these assumptions, the sample covariance matrix follows a singular Wishart distribution, so its regular inverse does not exist. Employing the Moore–Penrose inverse, bounds and approximations for the first two moments of the estimated TP weights are derived, and exact results are obtained when the population covariance matrix is equal to the identity matrix. Moreover, exact moments based on the reflexive generalized inverse are provided. The properties of the bounds are investigated in a simulation study, where they are compared to the sample moments. The difference between the moments based on the reflexive generalized inverse and the sample moments based on the Moore–Penrose inverse is also studied.

Keywords: tangency portfolio, singular inverse Wishart, Moore–Penrose inverse, reflexive generalized inverse, estimator, moments. MSC: 62H12, 91G10.
Stepan Mazur acknowledges financial support from the internal research grants at Örebro University and from the project “Models for macro and financial economics after the financial crisis” (Dnr: P18-0201) funded by the Jan Wallander and Tom Hedelius Foundation.
Introduction

How to efficiently allocate capital lies at the heart of financial decision making. Portfolio theory, as developed by Markowitz, provides a framework for this problem, based on the means, variances and covariances of the assets in the considered portfolio. The theory revolves around the trade-off between expected return and variance (risk), known as mean-variance optimization. In this setting, investors allocate wealth in order to maximize the expected return given a certain level of risk, or conversely to minimize the risk given a certain level of expected return. Although it has received a lot of criticism, the framework remains one of the most crucial components in portfolio management.

In this paper, we consider the tangency portfolio (TP), which is one of the most important portfolios in the financial literature. The TP weights determine what proportions of the capital to invest in each asset and are obtained by maximizing the expected quadratic utility function. For a portfolio of p risky assets, the TP weights are given by

w_TP = α⁻¹ Σ⁻¹ (μ − r_f 1_p),  (1)

where μ is the p-dimensional mean vector of the asset returns, Σ is the p × p symmetric positive definite covariance matrix of the asset returns, the coefficient α > 0 describes the investor's risk aversion (this value represents how willing an investor is to accept upward and downward risks on the investment; it can be determined through, e.g., qualitative assessment, such as interview questions posed to the investor), r_f denotes the rate of a risk-free asset and 1_p is a p-dimensional vector of ones. We allow for short sales and, therefore, some weights can be negative. Let us also note that w_TP determines the structure of the part of the portfolio that corresponds to risky assets and does not, in general, sum to 1. Consequently, the rest of the wealth, 1 − w_TP′1_p, needs to be invested in the risk-free asset. Naturally, the TP weights w_TP depend on the mean vector μ and the covariance matrix Σ. In general, these quantities are not known and need to be estimated from data on N historical return vectors x_1, …, x_N. Plugging sample estimates of the mean vector and the covariance matrix into (1) leads to the sample estimate of the TP weights,

ŵ_TP = α⁻¹ S⁻¹ (x̄ − r_f 1_p),  (2)

where S and x̄ are the sample covariance matrix and the sample mean vector, respectively, of x_1, …, x_N. It is worth mentioning that a similar structure appears in discriminant analysis.
Namely, the coefficients of a discriminant function that maximizes the discrepancy between two datasets are expressed as a product of the inverse sample covariance matrix and the sample mean vector. The statistical properties of ŵ_TP have been extensively studied throughout the literature. An exact test of the weights in the multivariate normal case has been derived, as has the univariate density of the TP weights and its asymptotic distribution, under the assumption that returns are independent and identically multivariate normally distributed. Further contributions include a procedure for monitoring the TP weights with a sequential approach; the density of, and several exact tests on, linear transformations of estimated TP weights; and approximate and asymptotic distributions for the weights. The distribution of ŵ_TP has also been studied from a Bayesian perspective (in the Bayesian setting, the posterior distribution of the TP weights is expressed as a product of a (singular) Wishart matrix and a Gaussian vector, and the statistical properties of such products have been studied in the literature), as well as in small and large dimensions when both the population and sample covariance matrices are singular. Analytical expressions for higher order moments of the estimated TP weights have been derived, along with the asymptotic distribution of the estimated TP weights and of the statistical test on the elements of the TP under a high-dimensional asymptotic regime. Other works have derived a test for the location of the TP, extended this result to the high-dimensional setting, and established central limit theorems for the TP weights estimator under the assumption that the matrix of observations has a matrix-variate location mixture of normal distributions. More recently, the distributional properties of the TP weights have been investigated under a skew-normal model in small and large dimensions.
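To make (1) and (2) concrete, the following minimal numpy sketch computes the population TP weights and their sample plug-in estimate in the nonsingular case p < N. All parameter values (risk aversion, risk-free rate, dimensions) are hypothetical illustrations, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

p, N = 5, 60            # portfolio size and sample size (nonsingular case: p < N)
alpha, r_f = 3.0, 0.01  # hypothetical risk aversion and risk-free rate

# Hypothetical population parameters
mu = rng.uniform(-0.1, 0.1, size=p)
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)          # positive definite covariance matrix

# Population TP weights, equation (1): w_TP = alpha^{-1} Sigma^{-1} (mu - r_f 1_p)
w_tp = np.linalg.solve(Sigma, mu - r_f) / alpha

# Sample counterpart, equation (2), from N simulated normal return vectors
X = rng.multivariate_normal(mu, Sigma, size=N)   # N x p matrix of returns
x_bar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                      # p x p sample covariance matrix
w_hat = np.linalg.solve(S, x_bar - r_f) / alpha

# The remaining wealth 1 - w' 1_p is invested in the risk-free asset
riskfree_frac = 1.0 - w_hat.sum()
```

Since N > p, the sample covariance matrix S here is positive definite and can be inverted directly; the singular case discussed next is exactly the situation where this step fails.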
The common scenario considered is that the number of observations available for the estimation, denoted by N, is greater than the portfolio size, denoted by p. In this case the sample covariance matrix S is positive definite, and ŵ_TP can be obtained as presented in (2). However, when the considered portfolio is large, it is possible that the number of available observations is less than the portfolio dimension. This can be due to a lack of data for all the assets in the portfolio, but it may also occur because the covariance of asset returns tends to change over time. As such, the assumption of a constant covariance matrix might only hold for limited periods of time, hence limiting the amount of data available for estimation. Many applications consider portfolios of large dimensions, containing up to 50, 100 or even 1000 assets. If returns are measured at weekly or monthly intervals, data reaching back several decades might be required to ensure p ≤ N. Unless the considered assets can be assumed to have a constant covariance matrix over very long time periods, data spanning such long time intervals is not suitable to use in the estimation, or might simply not be available. Any such situation, where p > N, results in a singular sample covariance matrix S, which in turn is noninvertible in the standard sense.
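The singularity described above is easy to reproduce numerically: with p assets and N < p observations, the centered data matrix has rank N − 1, and so does the sample covariance matrix. A small numpy illustration (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

p, N = 100, 40                       # more assets than observations
X = rng.standard_normal((p, N))      # p x N matrix of (toy) returns
x_bar = X.mean(axis=1, keepdims=True)
Y = X - x_bar                        # centered returns
S = Y @ Y.T / (N - 1)                # p x p sample covariance matrix

# rank(S) = N - 1 < p, so S is singular and np.linalg.inv(S) would fail
print(np.linalg.matrix_rank(S))      # 39
```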

This issue can be remedied by estimating Σ⁻¹ in (1) with the Moore–Penrose inverse of S, which we will denote by S⁺. This generalized inverse has previously been successfully employed in portfolio theory for the p > N case. (Instead of using the Moore–Penrose inverse, one can also consider regularization techniques, such as ridge-type methods, the Landweber–Fridman iteration approach, a form of Lasso, or an iterative algorithm based on second order damped dynamical systems.) Applying the Moore–Penrose inverse, the TP weights are estimated as

w̃_TP = α⁻¹ S⁺ (x̄ − r_f 1_p).  (3)

An attractive feature of applying the Moore–Penrose inverse S⁺ in (1) is that it yields the least squares solution to the system of equations

S v = α⁻¹ (x̄ − r_f 1_p),  (4)

which in the singular case generally lacks an exact solution. That is, for any vector v ∈ R^p, we have that

‖S v − α⁻¹(x̄ − r_f 1_p)‖₂ ≥ ‖S S⁺ α⁻¹(x̄ − r_f 1_p) − α⁻¹(x̄ − r_f 1_p)‖₂,

where ‖·‖₂ denotes the Euclidean norm of a vector. Phrased differently, (3) provides the best solution to equation (4) in the least squares sense. In addition, when p < N, we have that S⁺ = S⁻¹ and w̃_TP = ŵ_TP, so that w̃_TP can be viewed as a general estimator of the TP weights, covering both the singular and the nonsingular case. The expectation and variance of an estimator are key quantities describing its statistical properties. Under the standard assumption of normally distributed asset returns, the stochastic components of w̃_TP consist of S⁺ and x̄, which are independent. Unfortunately, there exists no derivation of the expected value or variance of S⁺ when p > N. However, these quantities have been presented in the special case Σ = I_p.
The authors also provided approximate results, using moments of standard normal random variables, and exact results for the moments of the reflexive generalized inverse, another quantity that can be applied as an inverse of S. Further, in a recent paper, several bounds on the mean and variance of S⁺ were provided, based on the Poincaré separation theorem. Our paper builds on these results to provide bounds and approximations for the moments of the TP weights, E[w̃_TP] and V[w̃_TP], where E[·] and V[·] denote the expected value and variance, respectively. We also present a simulation study, where various measures compare the derived bounds with the equivalent sample quantities obtained from simulated data. Finally, we compare the moments obtained by applying the reflexive generalized inverse with the sample moments based on the Moore–Penrose inverse.
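The least squares property (4) of the Moore–Penrose estimator can be checked numerically. In the sketch below (toy data, hypothetical parameter values), no randomly drawn vector v achieves a smaller residual for the singular system than v = S⁺η:

```python
import numpy as np

rng = np.random.default_rng(2)

p, N = 50, 20
alpha, r_f = 3.0, 0.0                # hypothetical parameter choices

X = rng.standard_normal((p, N))      # toy return matrix with p > N
x_bar = X.mean(axis=1)
Y = X - x_bar[:, None]
S = Y @ Y.T / (N - 1)                # singular sample covariance matrix

eta = (x_bar - r_f) / alpha
S_plus = np.linalg.pinv(S)           # Moore-Penrose inverse
w_tilde = S_plus @ eta               # estimator (3)

# Least squares property (4): S v = eta has no exact solution here, but
# v = S^+ eta minimizes ||S v - eta||_2 over all v in R^p.
res_pinv = np.linalg.norm(S @ w_tilde - eta)
for _ in range(100):
    v = rng.standard_normal(p)
    assert np.linalg.norm(S @ v - eta) >= res_pinv - 1e-8
```

Because rank(S) = N − 1 < p, the vector η generally lies outside the column space of S, so the minimal residual res_pinv is strictly positive.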

The rest of this paper is organized as follows. Section 2.1 provides exact moment results for the case Σ = I_p. Section 2.2 presents bounds for the moments of w̃_TP in the general case, while approximate moments are derived in Section 2.3. Exact moments based on the reflexive generalized inverse are derived in Section 3. The simulation study is presented in Section 4, while Section 5 summarizes.

Moments with the Moore–Penrose inverse

Let X be a p × N matrix with N asset return vectors of dimension p × 1 stacked as columns, where p > N. Further, we assume that these return vectors are independent and normally distributed with mean vector μ and positive definite covariance matrix Σ. Thus X ∼ MN_{p,N}(μ1_N′, Σ, I_N), where MN_{p,N}(M, Σ, U) denotes the matrix-variate normal distribution with p × N mean matrix M, p × p row-wise covariance matrix Σ and N × N column-wise covariance matrix U. Further, let the p × 1 vector x̄ be the row mean of X. Now, define Y = X − x̄1_N′, such that Y ∼ MN_{p,N}(0, Σ, I_N). Further, let S = YY′/n, such that rank(S) = n < p with n = N − 1, and nS ∼ W_p(n, Σ), i.e. nS follows a p-dimensional singular Wishart distribution with n degrees of freedom and parameter matrix Σ. Let S = QRQ′ denote the eigenvalue decomposition of S, where R is the n × n diagonal matrix of positive eigenvalues and Q is the p × n matrix with the corresponding eigenvectors as columns. Further, define S⁺ = QR⁻¹Q′. Then S⁺ constitutes the Moore–Penrose inverse of S = YY′/n, and S⁺ is independent of x̄. In the following, let η = α⁻¹(x̄ − r_f 1_p) and θ = E[η] = α⁻¹(μ − r_f 1_p). Consequently, from the moments of the multivariate normal distribution, together with the fact that E[x̄] = μ and V[x̄] = Σ/(n + 1), we obtain that

E[ηη′] = θθ′ + Σ / (α²(n + 1)),  (5)
E[η′η] = θ′θ + tr(Σ) / (α²(n + 1)),  (6)
E[η′Ση] = θ′Σθ + tr(ΣΣ) / (α²(n + 1)).  (7)

Further, let s_ij denote the element in row i and column j of S⁺, let σ_ij denote the element in row i and column j of Σ, and let σ^{ij} denote the element in row i and column j of Σ⁻¹. Also let e_i denote the p × 1 vector whose elements all equal zero, except the i-th element, which equals one. Moreover, λ₁(M) ≥ λ₂(M) ≥ ⋯ ≥ λ_p(M) denote the ordered eigenvalues of a symmetric p × p matrix M, and A ≤_L B denotes the Löwner ordering of two positive semi-definite matrices A and B.
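The construction of S⁺ from the eigenvalue decomposition S = QRQ′ can be sketched as follows; the code also checks that the result satisfies the defining Moore–Penrose conditions and agrees with numpy's built-in pseudoinverse (a toy illustration with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)

p, N = 30, 12
n = N - 1
X = rng.standard_normal((p, N))
Y = X - X.mean(axis=1, keepdims=True)
S = Y @ Y.T / n                       # rank(S) = n < p

# Eigenvalue decomposition S = Q R Q', restricted to the positive eigenvalues
vals, vecs = np.linalg.eigh(S)
keep = vals > 1e-10                   # numerical threshold for "positive"
R = vals[keep]                        # the n positive eigenvalues
Q = vecs[:, keep]                     # p x n matrix of corresponding eigenvectors

S_plus = Q @ np.diag(1.0 / R) @ Q.T   # S^+ = Q R^{-1} Q'

# Agrees with numpy's pinv and satisfies the Penrose conditions
assert np.allclose(S_plus, np.linalg.pinv(S))
assert np.allclose(S @ S_plus @ S, S)
assert np.allclose(S_plus @ S @ S_plus, S_plus)
```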
Exact moments when Σ = I_p

When Σ is the identity matrix, it is possible to derive exact moments of the TP weights obtained from the Moore–Penrose inverse in the singular case. First, note the following previously established results, which state that in the case Σ = I_p and p > n + 3, we have

E[S⁺] = a₁ I_p,  (8)
V[vec(S⁺)] = a₂ (I_{p²} + C_{p²}) + 2a₃ vec(I_p) vec(I_p)′,  (9)

where C_{p²} is the commutation matrix, vec(·) is the vectorization operator and

a₁ = n² / (p(p − n − 1)),  (10)
a₂ = n³ [p(p − 1) − n(p − n − 2) − 2] / [p(p − 1)(p + 2)(p − n)(p − n − 1)(p − n − 3)],  (11)
a₃ = n³ [n²(n − 1) + 2n(p − 2)(p − n) + 2p(p − 1)] / [p²(p − 1)(p + 2)(p − n)(p − n − 1)²(p − n − 3)].  (12)

Note that the constants in (10)–(12) differ slightly from those originally presented in the literature, since our paper considers results for nS ∼ W_p(n, Σ), while the original results were derived for W ∼ W_p(n, Σ). The moments in (8) and (9) allow us to derive the following results.

Theorem 1. If p > n + 3 and Σ = I_p, then

E[w̃_TP] = a₁ w_TP,
V[w̃_TP] = (a₂ + 2a₃) w_TP w_TP′ + a₂ (w_TP′ w_TP) I_p + [a₁² + (p + 1)a₂ + 2a₃] / (α²(n + 1)) I_p,

with the constants a₁, a₂ and a₃ defined in (10)–(12).

Proof. Since w̃_TP = α⁻¹S⁺(x̄ − r_f 1_p), the first result follows directly from (8) and the independence of S⁺ and x̄. For the second result, first note that equation (9) can be written as

Cov(s_ij, s_kl) = a₂ (δ_ik δ_jl + δ_il δ_jk) + 2a₃ δ_ij δ_kl,

where δ_ij = 1 if i = j and 0 otherwise, so that δ_ij, i, j = 1, …, p, denote the elements of I_p. Hence, we have that

E[s_ij s_kl] = a₂ (δ_ik δ_jl + δ_il δ_jk) + (a₁² + 2a₃) δ_ij δ_kl.  (13)

Also note the following element representations of matrix operations, where A and B are symmetric p × p matrices and tr(·) denotes the trace operator:

[A tr(BA)]_ij = a_ij Σ_{k=1}^p Σ_{l=1}^p a_kl b_kl,  (14)
[ABA]_ij = Σ_{k=1}^p Σ_{l=1}^p b_kl a_ik a_jl = Σ_{k=1}^p Σ_{l=1}^p b_kl a_il a_jk.  (15)

Moreover, with η = α⁻¹(x̄ − r_f 1_p) and E[η] = θ,

V[w̃_TP] = V[S⁺η] = E[E[S⁺ηη′S⁺ | η]] − E[S⁺] θθ′ E[S⁺].  (16)

By letting H = ηη′ and applying equations (13)–(15), we obtain

[E[S⁺HS⁺ | η]]_ij = Σ_{k=1}^p Σ_{l=1}^p h_kl E[s_ik s_jl]
= Σ_{k=1}^p Σ_{l=1}^p h_kl [a₂ (δ_ij δ_kl + δ_il δ_kj) + (a₁² + 2a₃) δ_ik δ_jl]
= a₂ [I_p tr(H I_p)]_ij + a₂ [I_p H I_p]_ij + (a₁² + 2a₃) [I_p H I_p]_ij.

Consequently, E[S⁺HS⁺ | η] = (a₁² + a₂ + 2a₃) H + a₂ tr(H) I_p, and inserting this result into (16), together with (5) and (8), gives

V[S⁺η] = (a₁² + a₂ + 2a₃) [θθ′ + (α²N)⁻¹ I_p] + a₂ [tr(θθ′) + (α²N)⁻¹ p] I_p − a₁² θθ′,

and the theorem follows by noting that N = n + 1 and that θ = w_TP when Σ = I_p. □

A direct consequence of Theorem 1 is that the estimator w̃_TP is biased, with bias factor a₁.
Hence, in the case of Σ = I_p, a₁⁻¹ w̃_TP constitutes an unbiased estimator. Further, as n, p → ∞ with n/p → r, 0 < r < 1, the constants of V[w̃_TP] admit the following asymptotic magnitudes: a₁ = O(1), a₂ = O(n⁻¹) = O(p⁻¹) and a₃ = O(n⁻²) = O(p⁻²). Consequently, since tr(w_TP w_TP′) = O(p) in the general case, we have that a₂ tr(w_TP w_TP′) = O(1). Hence, unless w_TP has some specific structure, V[w̃_TP] does not vanish under this asymptotic regime. This is not unique to the singular case, since the corresponding statement is also true for ŵ_TP in the nonsingular case when n, p → ∞. Finally, we note that in practice the population covariance matrix of a portfolio of assets will likely never equal I_p, and hence the results in this section are mainly of theoretical nature.

Bounds on the moments

This section aims to provide upper and lower bounds for the expected value of w̃_TP and upper bounds for the variance of w̃_TP. First, define the following p × p matrices,

D = a₁ (λ_p(Σ⁻¹))² Σ,
U_a = a₁ (λ₁(Σ⁻¹))² Σ,
U_b = [n / (p − n − 1)] λ₁(Σ⁻¹) I_p,

with elements d_ij, u_ij^(a) and u_ij^(b), respectively. Further, denote by e_ij the elements of E[S⁺] and let u_ii^(∗) = min{u_ii^(a), u_ii^(b)}, i = 1, …, p. Then we can derive the following result.

Theorem 2. Suppose p > n + 3 and Σ > 0. Let w_i and θ_i be the i-th elements of the p × 1 vectors w = E[w̃_TP] and θ = α⁻¹(μ − r_f 1_p), respectively.
Then, for i = 1, …, p, it holds that

v_ii θ_i + Σ_{j≠i} v_ij θ_j ≤ w_i ≤ z_ii θ_i + Σ_{j≠i} z_ij θ_j,

where, for i, j = 1, …, p,

v_ij = g_ij if θ_j ≥ 0, and v_ij = h_ij if θ_j < 0,
z_ij = g_ij if θ_j < 0, and z_ij = h_ij if θ_j ≥ 0,

with g_ii = d_ii, h_ii = u_ii^(∗), while for i ≠ j,

g_ij = max{ d_ij − √((u_ii^(∗) − d_ii)(u_jj^(∗) − d_jj)), u_ij^(a) − √((u_ii^(a) − d_ii)(u_jj^(a) − d_jj)), −√((u_ii^(b) − d_ii)(u_jj^(b) − d_jj)), −√(u_ii^(∗) u_jj^(∗)) },
h_ij = min{ d_ij + √((u_ii^(∗) − d_ii)(u_jj^(∗) − d_jj)), u_ij^(a) + √((u_ii^(a) − d_ii)(u_jj^(a) − d_jj)), √((u_ii^(b) − d_ii)(u_jj^(b) − d_jj)), √(u_ii^(∗) u_jj^(∗)) }.

Proof. The result follows directly from the element-wise bounds in Lemma A2 and from the fact that, due to the independence of S⁺ and x̄, we have E[w̃_TP] = E[S⁺]θ. □

Note that when Σ = I_p, we have λ₁(Σ⁻¹)² = λ_p(Σ⁻¹)² = 1, and hence E[S⁺] = D = U_a = a₁ I_p. Consequently g_ij = h_ij = 0 for i ≠ j, and g_ii = h_ii = a₁ for i = 1, …, p, since u_ii^(a) = a₁ < a₁ p/n = u_ii^(b), as p > n. Hence, Theorem 2 yields that E[w̃_TP] = a₁θ, consistent with the result of Theorem 1.
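The exact mean in Theorem 1 (equivalently, equation (8)) can be checked by Monte Carlo simulation. The sketch below draws nS ∼ W_p(n, I_p) repeatedly via its Gaussian factorization W = ZZ′ and compares the average of tr(S⁺)/p with the bias factor a₁ from (10); the replication count and tolerance are arbitrary choices on our part:

```python
import numpy as np

rng = np.random.default_rng(4)

p, n = 20, 10                         # requires p > n + 3
a1 = n**2 / (p * (p - n - 1))         # bias factor from equation (10)

reps = 3000
acc = 0.0
for _ in range(reps):
    Z = rng.standard_normal((p, n))
    # W = Z Z' ~ W_p(n, I_p) plays the role of n S, so S^+ = n W^+.
    # Since Z has full column rank, W^+ = Z (Z'Z)^{-2} Z' exactly.
    G = Z.T @ Z
    S_plus = n * (Z @ np.linalg.inv(G @ G) @ Z.T)
    acc += np.trace(S_plus) / p
est = acc / reps                      # Monte Carlo estimate of tr(E[S^+]) / p

# Theorem 1 / eq. (8): E[S^+] = a1 I_p, so tr(E[S^+]) / p should be close to a1
print(round(a1, 4), round(est, 4))
```

The closed-form pseudoinverse W⁺ = Z(Z′Z)⁻²Z′ avoids any numerical rank-threshold issues that a generic pinv call could introduce.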

The following result provides two upper bounds for the variance of the TP weights estimator w̃_TP.

Theorem 3. Suppose p > n + 3 and Σ > 0. Then

V[w̃_TP] ≤_L (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ [k₁ Σ E[ηη′] Σ + k₂ Σ E[η′Ση]],  (17)
V[w̃_TP] ≤_L (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ E[η′η] I_p,  (18)

with the expected values given in (5)–(7) and

c₁ = n² [(p − n)(p − n − 1)(p − n − 3)]⁻¹,
c₂ = (p − n − 2) c₁,
k₁ = 1 + n(p + 1)(p(n + 1) − 2) / (p(p + 1)² − np),
k₂ = 1 − (p + 1)(p − n) / (p(p + 1)² − np).

Proof. We are interested in bounds for the quantity a′V[w̃_TP]a = a′V[S⁺η]a, for all a ∈ R^p. First, by the tower property we have

V[S⁺η] = E[E[S⁺ηη′S⁺ | η]] − E[S⁺] θθ′ E[S⁺].

Hence, we can obtain

a′V[S⁺η]a = E[E[a′S⁺ηη′S⁺a | η]] − a′E[S⁺] θθ′ E[S⁺]a = E[E[(a′S⁺η)² | η]] − (a′E[S⁺]θ)².

Then, by noting that (a′E[S⁺]θ)² ≥ 0 and applying the bounds from Lemma A4 on E[(a′S⁺η)² | η], we can derive

a′V[S⁺η]a ≤ (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ E[k₁(a′Ση)² + k₂(a′Σa)(η′Ση)],
a′V[S⁺η]a ≤ (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ E[(a′a)(η′η)],

and with the aid of (5)–(7) the result follows. □

Approximate moments

For general Σ, it is possible to provide approximate moments for w̃_TP using simulations of standard normal matrices. Denote the eigendecomposition of Σ as Σ = ΓΛΓ′, with λ_i denoting the i-th diagonal element of Λ, and let Z ∼ MN_{p,n}(0, I_p, I_n), with z_i′ denoting row i of Z. Further, denote m_ij(Λ) = E[z_i′(Z′ΛZ)⁻²z_j] and v_ij,kl(Λ) = Cov[z_i′(Z′ΛZ)⁻²z_j, z_k′(Z′ΛZ)⁻²z_l], where Cov[X, Y] denotes the covariance between X and Y. Also define

M(Λ) = n Σ_{i=1}^p λ_i m_ii(Λ) e_i e_i′,
V(Λ) = n² [ Σ_{i=1}^p Σ_{j=1}^p λ_i λ_j v_ii,jj(Λ) (e_i e_j′ ⊗ e_i e_j′) + Σ_{i=1}^p Σ_{j=1}^p λ_i λ_j v_ij,ij(Λ) (e_j e_j′ ⊗ e_i e_i′)(I_{p²} + C_{p²}) − 2 Σ_{i=1}^p λ_i² v_ii,ii(Λ) (e_i e_i′ ⊗ e_i e_i′) ],

and make the decomposition

(Γ ⊗ Γ) V(Λ) (Γ ⊗ Γ)′ = [Ψ_ij]_{i,j=1}^p,  (20)

where the Ψ_ij are p × p matrices. The following result can then be derived.

Theorem 4. If p > n + 3 and Σ > 0, then

E[w̃_TP] = Γ M(Λ) Γ′ θ,
V[w̃_TP] = Σ_{i=1}^p Σ_{j=1}^p [θ_i θ_j + σ_ij / (α²(n + 1))] Ψ_ij + (α²(n + 1))⁻¹ Γ M(Λ) Λ M(Λ) Γ′,

with θ_i = α⁻¹(μ_i − r_f) and σ_ij the elements of Σ.

Proof. We have that E[S⁺] = Γ M(Λ) Γ′. The first result then follows from the independence of S⁺ and x̄.
For the second result, we have that

V[S⁺η] = E[E[S⁺ηη′S⁺ | η]] − E[S⁺] θθ′ E[S⁺].  (21)

Again we let H = ηη′. We further have that

V[vec(S⁺)] = (Γ ⊗ Γ) V(Λ) (Γ ⊗ Γ)′,

and consequently

E[S⁺HS⁺ | η] = Σ_{i=1}^p Σ_{j=1}^p h_ij Ψ_ij + E[S⁺] H E[S⁺],

where Ψ_ij is obtained from the decomposition (20). Inserting the above into (21) gives

V[S⁺η] = Σ_{i=1}^p Σ_{j=1}^p E[h_ij] Ψ_ij + E[S⁺] E[H] E[S⁺] − E[S⁺] θθ′ E[S⁺]
= Σ_{i=1}^p Σ_{j=1}^p [θ_i θ_j + σ_ij / (α²N)] Ψ_ij + (α²N)⁻¹ Γ M(Λ) Λ M(Λ) Γ′,

due to (5), N = n + 1 and the fact that Γ′ΣΓ = Λ. The theorem is proved. □

The moments m_ij(Λ) and v_ij,kl(Λ) do not seem to have tractable closed-form representations. However, these quantities can be approximated by simulation of Z, given the eigenvalues of Σ.

Exact moments with reflexive generalized inverse

An alternative to using the Moore–Penrose inverse S⁺ to estimate Σ⁻¹ is an application of the reflexive generalized inverse, defined as

S⁻ = Σ^{−1/2} (Σ^{−1/2} S Σ^{−1/2})⁺ Σ^{−1/2},

where the elements of S⁻ are denoted s⁻_ij. The TP weights vector can then be estimated by w⁻_TP = S⁻η, and we derive the following result.

Theorem 5. If p > n + 3 and Σ > 0, then

E[w⁻_TP] = a₁ w_TP,
V[w⁻_TP] = (a₂ + 2a₃) w_TP w_TP′ + a₂ (w_TP′ Σ w_TP) Σ⁻¹ + [a₁² + (p + 1)a₂ + 2a₃] / (α²(n + 1)) Σ⁻¹.

Proof. The first result follows directly from the moments of S⁻ and the independence of S⁻ and x̄. For the second result, we have that

V[S⁻η] = E[E[S⁻ηη′S⁻ | η]] − E[S⁻] θθ′ E[S⁻].  (22)
Again we let H = ηη′, and note that

E[s⁻_ik s⁻_lj] = a₂ (σ^{ij} σ^{kl} + σ^{il} σ^{kj}) + (a₁² + 2a₃) σ^{ik} σ^{jl},

where σ^{ij} denote the elements of Σ⁻¹. Combined with (14)–(15), this allows us to obtain

[E[S⁻HS⁻ | η]]_ij = Σ_{k=1}^p Σ_{l=1}^p h_kl E[s⁻_ik s⁻_lj]
= Σ_{k=1}^p Σ_{l=1}^p h_kl [a₂ (σ^{ij} σ^{kl} + σ^{il} σ^{kj}) + (a₁² + 2a₃) σ^{ik} σ^{jl}]
= (a₁² + a₂ + 2a₃) [Σ⁻¹ H Σ⁻¹]_ij + a₂ tr(H Σ⁻¹) [Σ⁻¹]_ij,

so that

E[S⁻HS⁻ | η] = (a₁² + a₂ + 2a₃) Σ⁻¹ H Σ⁻¹ + a₂ tr(H Σ⁻¹) Σ⁻¹.

Inserting this into equation (22) gives

V[S⁻η] = (a₁² + a₂ + 2a₃) Σ⁻¹ E[ηη′] Σ⁻¹ + a₂ tr(E[ηη′] Σ⁻¹) Σ⁻¹ − E[S⁻] θθ′ E[S⁻],

and applying the first result on E[S⁻], together with (5), concludes the proof. □

An obvious drawback of w⁻_TP is that Σ must be known in order to construct S⁻. Moreover, in the case of Σ = I_p, the results in Theorem 5 coincide with the results in Theorem 1, since in this case S⁻ = S⁺.

Simulation study

The aim of this section is to compare the bounds on the moments of w̃_TP derived in Section 2.2 with the sample mean and sample variance of this estimator. We will also investigate the difference between the moments of w⁻_TP derived in Theorem 5 and the sample moments of w̃_TP. Ideally, the bounds should not deviate much from the obtained sample moments. To this end, define b^l and b^u as the p × 1 vectors with elements

b_i^l = v_ii μ_i + Σ_{j≠i} v_ij μ_j,
b_i^u = z_ii μ_i + Σ_{j≠i} z_ij μ_j,

such that b^l and b^u represent the element-wise lower and upper bounds for the expected TP weights vector presented in Theorem 2, where we set α = 1 and r_f = 0. Let

B₁ = (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ [k₁ Σ E[ηη′] Σ + k₂ Σ E[η′Ση]],
B₂ = (2c₁ + c₂)(λ₁(Σ⁻¹))⁴ E[η′η] I_p

represent the bounds in equations (17) and (18) of Theorem 3, respectively. Further, let m and V respectively denote the sample mean vector and sample covariance matrix of w̃_TP based on an observed matrix X, as described in Section 2.
Moreover, we define

t_l = 1_p′ |b^l − m| / p,  (23)
t_u = 1_p′ |b^u − m| / p,  (24)
t = 1_p′ |E[w⁻_TP] − m| / p,  (25)

so that t_l, t_u and t measure the element-wise difference between the sample mean vector and the lower bound, the upper bound, and the mean of w⁻_TP, respectively, where |·| is taken element-wise. Dividing by p allows comparing the measures between various portfolio sizes. Further, let

T₁ = 1_p′ |B₁ − V| 1_p / p²,  (26)
T₂ = 1_p′ |B₂ − V| 1_p / p²,  (27)
T = 1_p′ |V[w⁻_TP] − V| 1_p / p²,  (28)

where 1_p is a p × 1 vector of ones. Then T₁ and T₂ provide a measure of the discrepancy between the sample covariance matrix and the bounds presented in Theorem 3, while T measures the discrepancy between the variance of w⁻_TP presented in Theorem 5 and the sample covariance matrix of w̃_TP. Since they divide by p², the number of elements in B₁, B₂, V[w⁻_TP] and V, the measures T₁, T₂ and T again allow for comparison between different portfolio sizes. Moreover, define

f_l = ‖b^l − m‖²_F / ‖m‖²_F,  (29)
f_u = ‖b^u − m‖²_F / ‖m‖²_F,  (30)
f = ‖E[w⁻_TP] − m‖²_F / ‖m‖²_F,  (31)
F₁ = ‖B₁ − V‖²_F / ‖V‖²_F,  (32)
F₂ = ‖B₂ − V‖²_F / ‖V‖²_F,  (33)
F = ‖V[w⁻_TP] − V‖²_F / ‖V‖²_F,  (34)

where ‖M‖_F denotes the Frobenius norm of the matrix M. Hence f_l, f_u, F₁ and F₂ represent the normalized Frobenius norms of the differences between the bounds and the sample moments, while f and F capture the differences between the moments of w⁻_TP and the sample moments of w̃_TP. In the following, we study simulations of (23)–(34) for various parameter values. In order to account for a wide range of values of μ and Σ, these are randomly generated in the simulation study. Each of the p elements of the mean vector μ is independently generated as U(−0.1, 0.1), where U(l, u) denotes the uniform distribution between l and u. The positive definite covariance matrix Σ is determined as Σ = ΓΛΓ′, where the p × p matrix Γ represents the eigenvectors of Σ and is generated according to the Haar distribution.
The p × p matrix Λ is diagonal, and its elements represent the ordered eigenvalues of Σ. Here we let the p eigenvalues be equally spaced from d to 1, for various values of d ≥ 1. The parameter d thus represents a measure of dependency between the p assets in the portfolio, where d = 1 represents no dependency and larger d represents a stronger dependency structure. Consequently, the simulation procedure can be described as follows:

1) Generate μ, with μ_i ∼ U(−0.1, 0.1), i = 1, …, p.
2) Generate Γ according to the Haar distribution, and compute Σ = ΓΛΓ′, where diag(Λ) is equally spaced from d to 1.
3) Independently generate x̄ ∼ N_p(μ, Σ/N) and nS ∼ W_p(n, Σ).
4) Compute w̃_TP.
5) Repeat steps 3) and 4) s = 10000 times.
6) Based on the s samples of w̃_TP, compute m and V.
7) Given m and V, compute (23)–(34).

The above procedure is repeated r = 10 times to obtain r values of (23)–(34) for each combination of p, N and d. Figures 1–12 display the mean value, over the r simulations, of each respective measure, for p = {25, 50, 75, 100}, d = {1, …, 10} and N = {2, 0.4p, 0.7p, p − 3}. (The computation time for each set of simulations for p = {25, 50, 75, 100} is {12, 26, 55, 107} minutes, respectively, when the calculation is run on 15 threads of an AMD Ryzen 7 5800H CPU. Hence, for future research, it is possible to explore even larger sample sizes on a standard PC.) For easier reading, the values are displayed on a logarithmic scale and are connected with a solid line. First, we notice that most measures seem to increase with increasing dependency measure d. Further, t_l, t_u, t, T₁, T₂ and T increase with increasing sample size N. However, F₂, the measure of the discrepancy between the sample variance of w̃_TP and the variance bound B₂, on the contrary decreases with increasing N. Regarding the bounds on the expected value of w̃_TP, t_l and t_u become very similar, and so do f_l and f_u.
The measures of the difference between E[w̃_TP] and E[w⁻_TP], t and f, are fairly small for most of the considered simulation parameters. This suggests that E[w⁻_TP] can serve as a rough approximation of E[w̃_TP], especially for N ∈ {0.4p, 0.7p}. Furthermore, when d = 1 we have Σ = I_p, and hence both bounds b^l and b^u, as well as E[w⁻_TP], coincide with E[w̃_TP]. In particular, for d = 1, these measures simply capture the sampling variation of m. Similarly, when d = 1, T and F capture the sampling variation of V. Further, for N < p − 3 and low values of d, T and F are fairly small, suggesting that V[w⁻_TP] could be applied as a rough approximation of V[w̃_TP] in these cases. Finally, we notice that the measures F₁ and F₂ become very large for most combinations of p, N and d. It is, however, important to note that the Frobenius norm of differences, on which these measures are based, captures element-wise squared discrepancies, while B₁ and B₂ are not element-wise bounds, but rather bounds in the Löwner order sense.
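The random covariance matrices used in steps 1)–2) of the simulation procedure can be generated as follows. The QR-based construction of a Haar-distributed Γ (with the usual sign correction on the columns) is a standard approach and an implementation assumption on our part, not a detail taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

p, d = 25, 5.0                       # portfolio size and dependency parameter

# Haar-distributed eigenvector matrix via QR of a Gaussian matrix; multiplying
# each column by the sign of the matching diagonal entry of R makes the
# distribution uniform over the orthogonal group
G = rng.standard_normal((p, p))
Q, R = np.linalg.qr(G)
Gamma = Q * np.sign(np.diag(R))      # column-wise sign fix

lam = np.linspace(d, 1.0, p)         # eigenvalues equally spaced from d to 1
Sigma = Gamma @ np.diag(lam) @ Gamma.T

# Sigma is symmetric positive definite with exactly the prescribed spectrum
assert np.allclose(Sigma, Sigma.T)
assert np.allclose(np.sort(np.linalg.eigvalsh(Sigma)), np.sort(lam))
```

For d = 1 this construction reduces to Σ = I_p, the no-dependency case discussed above.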
Figures 1–12 show the logarithms of t_l, t_u, t, T₁, T₂, T, f_l, f_u, f, F₁, F₂ and F, respectively, plotted for various values of p, N and d.

Summary

The TP is an important portfolio in the mean-variance optimization framework of Markowitz, and the statistical properties of the standard TP weights estimator have been thoroughly studied. However, when the portfolio dimension is greater than the sample size, this estimator is not applicable, since standard inversion of the now singular sample covariance matrix is not possible. This issue can be solved by applying the Moore–Penrose inverse, with which a general TP weights estimator can be defined, covering both the singular and the nonsingular case. Unfortunately, there exists no derivation of the moments of the Moore–Penrose inverse of a singular Wishart matrix, and consequently the moments of this general TP estimator cannot be obtained exactly. In this paper, we provide bounds on the mean and variance of the TP weights estimator in the singular case. Further, we present approximate results, as well as exact moment results in the case when the population covariance matrix is equal to the identity matrix. We also provide exact moment results when the reflexive generalized inverse is applied in the TP weights equation. Moreover, we investigate the properties of the derived bounds, and of the estimator based on the reflexive generalized inverse, in a simulation study.
The difference between the various bounds and their sample counterparts is measured by several quantities, and studied for numerous dimensions, sample sizes and levels of dependence in the population covariance matrix. The results suggest that many of the derived bounds are closest to the sample moments when the population covariance matrix implies low dependence between the considered assets. Finally, the study implies that in some cases the moments of the TP weights based on the reflexive generalized inverse can be used as a rough approximation to the moments of the TP weights based on the Moore–Penrose inverse. For future studies, it would be relevant, for example, to perform a sensitivity analysis on how fluctuations in the population covariance matrix affect the estimated TP weights.

Appendix

Lemma A1. The elements of $\mathrm{E}[S^+]$ satisfy, for $i = 1, \dots, p$,
$$0 < d_{ii} \le e_{ii} \le u_{ii}^{(a)},$$
and, for $i, j = 1, \dots, p$, $i \ne j$,
$$e_{ij} \le \min\{d_{ij}, u_{ij}^{(a)}\} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},$$
$$e_{ij} \ge \max\{d_{ij}, u_{ij}^{(a)}\} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})}.$$

Proof. First note that, in accordance with Theorem 3.2 and Theorem 3.3 of , we have
$$D \preceq_L \mathrm{E}[S^+] \preceq_L U_a, \qquad \mathrm{E}[S^+] \preceq_L U_b. \qquad (35)$$
Further, by the definition of the Löwner order we have, for any $\alpha \in \mathbb{R}^p$, that
$$\alpha' D \alpha \le \alpha' \mathrm{E}[S^+] \alpha \le \alpha' U_a \alpha.$$
Thus, since $\alpha'(\mathrm{E}[S^+] - D)\alpha \ge 0$, the matrix $\mathrm{E}[S^+] - D$ is positive semi-definite, and the same holds for $U_a - \mathrm{E}[S^+]$. This gives $0 < d_{ii} \le e_{ii} \le u_{ii}^{(a)}$, $i = 1, \dots, p$. Moreover, note that every principal submatrix of a positive definite matrix is also positive definite. Combined with (35), this provides the following inequalities, for any $i, j = 1, \dots, p$ and arbitrary nonzero scalars $x_1$ and $x_2$:
$$x_1^2 u_{ii}^{(a)} + 2x_1x_2 u_{ij}^{(a)} + x_2^2 u_{jj}^{(a)} \ge x_1^2 e_{ii} + 2x_1x_2 e_{ij} + x_2^2 e_{jj} \ge x_1^2 d_{ii} + 2x_1x_2 d_{ij} + x_2^2 d_{jj} > 0. \qquad (36)$$
Now, first assume $x_1 > 0$, $x_2 > 0$.
Then, the above inequalities can be applied to obtain
$$x_1^2 e_{ii} + 2x_1x_2 e_{ij} + x_2^2 e_{jj} \ge x_1^2 d_{ii} + 2x_1x_2 d_{ij} + x_2^2 d_{jj},$$
$$x_1^2 u_{ii}^{(a)} + 2x_1x_2 e_{ij} + x_2^2 u_{jj}^{(a)} \ge x_1^2 d_{ii} + 2x_1x_2 d_{ij} + x_2^2 d_{jj},$$
$$e_{ij} \ge -\frac{x_1^2(u_{ii}^{(a)} - d_{ii}) + x_2^2(u_{jj}^{(a)} - d_{jj}) - 2x_1x_2 d_{ij}}{2x_1x_2} = -\frac{x_1(u_{ii}^{(a)} - d_{ii})}{2x_2} - \frac{x_2(u_{jj}^{(a)} - d_{jj})}{2x_1} + d_{ij} \qquad (37)$$
for any $i, j = 1, \dots, p$, $i \ne j$. As the right-hand side is a lower bound, we would like to find the values of $x_1$ and $x_2$ that maximize this concave function. Differentiating and setting the derivative equal to zero, we obtain that the maximum is attained where
$$x_1^2(u_{ii}^{(a)} - d_{ii}) = x_2^2(u_{jj}^{(a)} - d_{jj}).$$
Without loss of generality we can set $x_1 = 1$ and thus obtain the maximum at
$$x_1 = 1, \qquad x_2 = \sqrt{\frac{u_{ii}^{(a)} - d_{ii}}{u_{jj}^{(a)} - d_{jj}}}.$$
Applying this result to equation (37) yields
$$e_{ij} \ge d_{ij} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})}. \qquad (38)$$
With an equivalent approach, again with $x_1 > 0$, $x_2 > 0$, we can use the inequalities
$$x_1^2 d_{ii} + 2x_1x_2 e_{ij} + x_2^2 d_{jj} \le x_1^2 u_{ii}^{(a)} + 2x_1x_2 u_{ij}^{(a)} + x_2^2 u_{jj}^{(a)},$$
$$e_{ij} \le \frac{x_1^2(u_{ii}^{(a)} - d_{ii}) + x_2^2(u_{jj}^{(a)} - d_{jj}) + 2x_1x_2 u_{ij}^{(a)}}{2x_1x_2}$$
in order to obtain the upper bound
$$e_{ij} \le u_{ij}^{(a)} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})}. \qquad (39)$$
Instead considering $x_1 < 0$ and $x_2 > 0$ (or $x_1 > 0$ and $x_2 < 0$), with a similar approach we can obtain the bounds
$$e_{ij} \le d_{ij} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})}, \qquad e_{ij} \ge u_{ij}^{(a)} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})}.$$
Letting $x_1 < 0$ and $x_2 < 0$ again yields bounds (38) and (39). Expressed differently, the above bounds can be written as
$$e_{ij} \le \min\{d_{ij}, u_{ij}^{(a)}\} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},$$
$$e_{ij} \ge \max\{d_{ij}, u_{ij}^{(a)}\} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},$$
concluding the proof.
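As an aside, the maximization step above can be checked numerically. In this minimal sketch, $a = u_{ii}^{(a)} - d_{ii}$, $b = u_{jj}^{(a)} - d_{jj}$ and $d_{ij}$ are set to arbitrary placeholder values; it confirms that the lower bound in (37) with $x_1 = 1$ is maximized at $x_2 = \sqrt{a/b}$, where it attains the value $d_{ij} - \sqrt{ab}$.

```python
import numpy as np

a, b, d_ij = 0.7, 0.3, 0.1   # placeholders for u_ii - d_ii, u_jj - d_jj, d_ij

def lower_bound(x2, x1=1.0):
    # right-hand side of the lower bound with x1 fixed to 1
    return -(x1 * a) / (2 * x2) - (x2 * b) / (2 * x1) + d_ij

grid = np.linspace(0.01, 10.0, 200001)
x2_star = grid[np.argmax(lower_bound(grid))]

print(x2_star, np.sqrt(a / b))                             # numeric vs closed-form maximizer
print(lower_bound(np.sqrt(a / b)), d_ij - np.sqrt(a * b))  # maximum equals d_ij - sqrt(ab)
```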
□

The results in Lemma A1 can be further extended by also considering the bounding matrix $U_b$. The following lemma summarizes this result.

Lemma A2. The elements of $\mathrm{E}[S^+]$ satisfy, for $i = 1, \dots, p$,
$$0 < g_{ii} := d_{ii} \le e_{ii} \le h_{ii} := u_{ii}^{(\ast)},$$
where $u_{ii}^{(\ast)} = \min\{u_{ii}^{(a)}, u_{ii}^{(b)}\}$. Further, for $i, j = 1, \dots, p$, $i \ne j$, $g_{ij} \le e_{ij} \le h_{ij}$ with
$$g_{ij} = \max\Big\{ d_{ij} - \sqrt{(u_{ii}^{(\ast)} - d_{ii})(u_{jj}^{(\ast)} - d_{jj})},\; u_{ij}^{(a)} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},\; -\sqrt{(u_{ii}^{(b)} - d_{ii})(u_{jj}^{(b)} - d_{jj})},\; -\sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}} \Big\},$$
$$h_{ij} = \min\Big\{ d_{ij} + \sqrt{(u_{ii}^{(\ast)} - d_{ii})(u_{jj}^{(\ast)} - d_{jj})},\; u_{ij}^{(a)} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},\; \sqrt{(u_{ii}^{(b)} - d_{ii})(u_{jj}^{(b)} - d_{jj})},\; \sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}} \Big\}.$$

Proof. First, we have that
$$d_{ij} - \sqrt{(u_{ii}^{(\ast)} - d_{ii})(u_{jj}^{(\ast)} - d_{jj})} \le e_{ij} \le d_{ij} + \sqrt{(u_{ii}^{(\ast)} - d_{ii})(u_{jj}^{(\ast)} - d_{jj})},$$
since $e_{ii}$ (and $e_{jj}$) in (36) can be replaced by either $u_{ii}^{(a)}$ or $u_{ii}^{(b)}$, whichever is smaller. Then
$$u_{ij}^{(a)} - \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})} \le e_{ij} \le u_{ij}^{(a)} + \sqrt{(u_{ii}^{(a)} - d_{ii})(u_{jj}^{(a)} - d_{jj})},$$
$$-\sqrt{(u_{ii}^{(b)} - d_{ii})(u_{jj}^{(b)} - d_{jj})} \le e_{ij} \le \sqrt{(u_{ii}^{(b)} - d_{ii})(u_{jj}^{(b)} - d_{jj})}$$
follow directly from Lemma A1 and the fact that $U_b$ is diagonal, so that $u_{ij}^{(b)} = 0$. Finally,
$$-\sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}} \le e_{ij} \le \sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}}$$
follows from
$$-\sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}} \le -\sqrt{e_{ii} e_{jj}} \le e_{ij} \le \sqrt{e_{ii} e_{jj}} \le \sqrt{u_{ii}^{(\ast)} u_{jj}^{(\ast)}}.$$
The lemma is proved. □

In the following, let
$$k_3 = \frac{n(p(n+1) - 2)}{p(p(p+1) - 2)}, \qquad k_4 = \frac{n(p - n)}{p(p(p+1) - 2)}.$$
Further, define $g(L) = \prod_{i=1}^{n} |L_i|_+$ and $c(n, p) = (2\pi)^{np/2}\, 2^{n}\, s(n, p)$, where $|L_i|_+$ and $s(n, p)$ are defined as on pages 128 and 129 in .

Lemma A3. Let an $n \times p$ matrix $L$ satisfy $LL' = I_n$. Then, for all $\alpha, x \in \mathbb{R}^p$,
(i) $\int (\alpha' L'L \alpha)(x' L'L x)\, g(L)\, dL = k_1 c(n, p)(\alpha' x)^2 + k_2 c(n, p)(\alpha'\alpha)(x'x)$,
(ii) $\int (\alpha' L'L x)^2\, g(L)\, dL = k_3 c(n, p)(\alpha' x)^2 + k_4 c(n, p)(\alpha'\alpha)(x'x)$.
Proof. In accordance with page 130 in , we have
$$n(I_{p^2} + K_{p,p}) + n^2\, \mathrm{vec}(I_p)\mathrm{vec}(I_p)' = c(n, p)^{-1} \int (L \otimes L)' \big[ p(I_{n^2} + K_{n,n}) + p^2\, \mathrm{vec}(I_n)\mathrm{vec}(I_n)' \big] (L \otimes L)\, g(L)\, dL, \qquad (40)$$
where $K_{\cdot,\cdot}$ is the commutation matrix. Now note that
$$(\alpha \otimes x)' I_{p^2} (\alpha \otimes x) = (\alpha'\alpha)(x'x),$$
$$(\alpha \otimes x)' K_{p,p} (\alpha \otimes x) = (\alpha' x)^2,$$
$$(\alpha \otimes x)'\, \mathrm{vec}(I_p)\mathrm{vec}(I_p)'\, (\alpha \otimes x) = (\alpha' x)^2,$$
$$(\alpha \otimes x)' (L \otimes L)' I_{n^2} (L \otimes L) (\alpha \otimes x) = (\alpha' L'L \alpha)(x' L'L x),$$
$$(\alpha \otimes x)' (L \otimes L)' K_{n,n} (L \otimes L) (\alpha \otimes x) = (\alpha' L'L x)^2,$$
$$(\alpha \otimes x)' (L \otimes L)'\, \mathrm{vec}(I_n)\mathrm{vec}(I_n)'\, (L \otimes L) (\alpha \otimes x) = (\alpha' L'L x)^2, \qquad (41)\text{–}(46)$$
and
$$(\alpha \otimes \alpha)' I_{p^2} (x \otimes x) = (\alpha' x)^2, \qquad (\alpha \otimes \alpha)' K_{p,p} (x \otimes x) = (\alpha' x)^2,$$
$$(\alpha \otimes \alpha)'\, \mathrm{vec}(I_p)\mathrm{vec}(I_p)'\, (x \otimes x) = (\alpha'\alpha)(x'x),$$
$$(\alpha \otimes \alpha)' (L \otimes L)' I_{n^2} (L \otimes L) (x \otimes x) = (\alpha' L'L x)^2,$$
$$(\alpha \otimes \alpha)' (L \otimes L)' K_{n,n} (L \otimes L) (x \otimes x) = (\alpha' L'L x)^2,$$
$$(\alpha \otimes \alpha)' (L \otimes L)'\, \mathrm{vec}(I_n)\mathrm{vec}(I_n)'\, (L \otimes L) (x \otimes x) = (\alpha' L'L \alpha)(x' L'L x).$$
Multiplying equation (40) from the left by $(\alpha \otimes x)'$ and from the right by $(\alpha \otimes x)$, and applying the identities above, we obtain
$$n(\alpha'\alpha)(x'x) + (n + n^2)(\alpha' x)^2 = c(n, p)^{-1} \int \big[ p(\alpha' L'L \alpha)(x' L'L x) + (p + p^2)(\alpha' L'L x)^2 \big]\, g(L)\, dL; \qquad (47)$$
multiplying instead by $(\alpha \otimes \alpha)'$ and $(x \otimes x)$ gives
$$2n(\alpha' x)^2 + n^2(\alpha'\alpha)(x'x) = c(n, p)^{-1} \int \big[ p^2(\alpha' L'L \alpha)(x' L'L x) + 2p(\alpha' L'L x)^2 \big]\, g(L)\, dL. \qquad (48)$$
From equation (47) we can then derive
$$\int (\alpha' L'L \alpha)(x' L'L x)\, g(L)\, dL = c(n, p)\, \frac{n}{p} \big[ (\alpha'\alpha)(x'x) + (n+1)(\alpha' x)^2 \big] - (1 + p) \int (\alpha' L'L x)^2\, g(L)\, dL.$$
Inserting this expression into equation (48) yields
$$\int (\alpha' L'L x)^2\, g(L)\, dL = \frac{n \big[ (p(n+1) - 2)(\alpha' x)^2 + (p - n)(\alpha'\alpha)(x'x) \big]}{c(n, p)^{-1}\, p\, (p(p+1) - 2)} = k_3 c(n, p)(\alpha' x)^2 + k_4 c(n, p)(\alpha'\alpha)(x'x),$$
and then we finally obtain
$$\int (\alpha' L'L \alpha)(x' L'L x)\, g(L)\, dL = c(n, p)\, \frac{n}{p} \big[ (\alpha'\alpha)(x'x) + (n+1)(\alpha' x)^2 \big] - (p + 1) \big[ k_3 c(n, p)(\alpha' x)^2 + k_4 c(n, p)(\alpha'\alpha)(x'x) \big]$$
$$= c(n, p)\, \frac{n}{p} \left[ 1 - \frac{(p+1)(p-n)}{p(p+1) - 2} \right] (\alpha'\alpha)(x'x) + c(n, p)\, \frac{n}{p} \left[ (n+1) - \frac{(p+1)(p(n+1) - 2)}{p(p+1) - 2} \right] (\alpha' x)^2,$$
completing the proof. □

Lemma A4. Let $nS \sim W_p(n, \Sigma)$, $p > n + 3$ and $\Sigma > 0$. Then, for all $\alpha, x \in \mathbb{R}^p$,
$$\mathrm{E}[(\alpha' S^+ x)^2] \le (2c_1 + c_2)(\lambda_1(\Sigma^{-1}))^4 \big[ k_1 (\alpha'\Sigma x)^2 + k_2 (\alpha'\Sigma\alpha)(x'\Sigma x) \big],$$
$$\mathrm{E}[(\alpha' S^+ x)^2] \le (\lambda_1(\Sigma^{-1}))^4 (2c_1 + c_2)(\alpha'\alpha)(x'x).$$

Proof. First, let $Y\Sigma^{-1/2} = TL$, where $LL' = I_n$, $L$ is an $n \times p$ matrix and $T$ is a lower triangular $n \times n$ matrix with positive diagonal elements. Further, note that, in accordance with page 131 in , for $p > n + 3$ we have
$$\mathrm{E}[\mathrm{vec}(S^+)\mathrm{vec}(S^+)'] = c(n, p)^{-1} \int \big( c_1 (I_{p^2} + K_{p,p})(P \otimes P) + c_2\, \mathrm{vec}(P)\mathrm{vec}(P)' \big)\, g(L)\, dL,$$
where
$$P = \Sigma^{1/2} L' (L\Sigma L')^{-1} (L\Sigma L')^{-1} L \Sigma^{1/2}.$$
Then, with equalities similar to (41)–(46),
$$\mathrm{E}[(\alpha' S^+ x)^2] = c(n, p)^{-1} \left[ (c_1 + c_2) \int (x' P \alpha)^2\, g(L)\, dL + c_1 \int (x' P x)(\alpha' P \alpha)\, g(L)\, dL \right].$$
Now, by Lemma A5, we have that
$$(x' P x)(\alpha' P \alpha) \ge (x' P \alpha)^2. \qquad (49)$$
Further, combining this inequality with (49) and Lemma 2.4 (i) in , we have
$$\mathrm{E}[(\alpha' S^+ x)^2] = c(n, p)^{-1} \left[ (c_1 + c_2) \int (x' P \alpha)^2\, g(L)\, dL + c_1 \int (x' P x)(\alpha' P \alpha)\, g(L)\, dL \right]$$
$$\le c(n, p)^{-1} (2c_1 + c_2) \int (x' P x)(\alpha' P \alpha)\, g(L)\, dL$$
$$\le c(n, p)^{-1} (2c_1 + c_2)(\lambda_1(\Sigma^{-1}))^4 \int (\alpha' \Sigma^{1/2} L'L \Sigma^{1/2} \alpha)(x' \Sigma^{1/2} L'L \Sigma^{1/2} x)\, g(L)\, dL$$
$$= (2c_1 + c_2)(\lambda_1(\Sigma^{-1}))^4 \big[ k_1 (\alpha'\Sigma x)^2 + k_2 (\alpha'\Sigma\alpha)(x'\Sigma x) \big],$$
where Lemma A3 (i) has been applied in the last equality. On the other hand, if we instead apply the inequality in Lemma 2.4 (ii) of , we obtain
$$\mathrm{E}[(\alpha' S^+ x)^2] \le c(n, p)^{-1} (\lambda_1(\Sigma^{-1}))^4 (2c_1 + c_2)(\alpha'\alpha)(x'x) \int g(L)\, dL = (\lambda_1(\Sigma^{-1}))^4 (2c_1 + c_2)(\alpha'\alpha)(x'x),$$
where Lemma 3.1 (i) in  gives the equality and concludes the proof. □
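The constants $k_3$ and $k_4$ in Lemma A3 (ii) can be sanity-checked by Monte Carlo, under the assumption that $g(L)\,dL / c(n, p)$ corresponds to the uniform (Haar) distribution of $L$ over $\{L : LL' = I_n\}$; the dimensions and vectors below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n, reps = 4, 2, 20000

alpha = rng.normal(size=p)
x = rng.normal(size=p)

# Closed-form constants from the lemma
k3 = n * (p * (n + 1) - 2) / (p * (p * (p + 1) - 2))
k4 = n * (p - n) / (p * (p * (p + 1) - 2))
theory = k3 * (alpha @ x) ** 2 + k4 * (alpha @ alpha) * (x @ x)

# Monte Carlo over random L with L L' = I_n (QR of a Gaussian matrix gives a
# uniformly distributed n-dimensional projection L'L)
acc = 0.0
for _ in range(reps):
    Q, _ = np.linalg.qr(rng.normal(size=(p, n)))  # p x n, orthonormal columns
    L = Q.T                                       # n x p with L L' = I_n
    acc += (alpha @ L.T @ L @ x) ** 2
print(acc / reps, theory)   # should agree to Monte Carlo accuracy
```

A useful cross-check is that for $\alpha = x$ with unit norm the sum $k_3 + k_4$ reduces to $n(n+2)/(p(p+2))$, the second moment of a Beta$(n/2, (p-n)/2)$ variable, as expected for uniformly random projections.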

Lemma A5. Let $A$ be a $p \times p$ symmetric positive definite matrix. Then, for any $c, d \in \mathbb{R}^p$,
$$(c'Ac)(d'Ad) \ge (c'Ad)^2.$$

Proof. Let $A = QRQ'$ denote the eigenvalue decomposition of $A$, where $Q$ is orthogonal and $R$ is a diagonal matrix with positive elements. With the substitutions $f = R^{1/2}Q'c$ and $g = R^{1/2}Q'd$, the inequality $(c'Ac)(d'Ad) \ge (c'Ad)^2$ can be written as
$$(f'f)(g'g) \ge (f'g)^2. \qquad (50)$$
Further, since $f'g = \|f\|\,\|g\|\cos\theta$, where $\|\cdot\|$ denotes the Euclidean norm and $\theta$ is the angle between the vectors $f$ and $g$, inequality (50) becomes $\|f\|^2\|g\|^2 \ge \|f\|^2\|g\|^2\cos^2\theta$, which holds since $\cos^2\theta \le 1$. The lemma is proved. □
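Lemma A5 is the Cauchy–Schwarz inequality in the inner product induced by $A$; a quick numerical check with an arbitrary positive definite matrix and random vectors illustrates it:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 6
B = rng.normal(size=(p, p))
A = B @ B.T + np.eye(p)        # symmetric positive definite by construction
c, d = rng.normal(size=p), rng.normal(size=p)

lhs = (c @ A @ c) * (d @ A @ d)
rhs = (c @ A @ d) ** 2
print(lhs >= rhs)              # True for every choice of c and d
```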

Acknowledgement

We would like to thank Prof. Yuliya Mishura, the Associate Editor and the two anonymous referees for helping to improve the paper. We are also grateful to Andrii Dmytryshyn and Mårten Gulliksson for helpful remarks on matrix inequalities.

References

Alfelt, G., Bodnar, T., Javed, F., Tyrcha, J.: Singular conditional autoregressive Wishart model for realized covariance matrices. Accepted for publication in J. Bus. Econ. Stat. (2022)
Ao, M., Yingying, L., Zheng, X.: Approaching mean-variance efficiency for large portfolios. Rev. Financ. Stud. 32(7), 2890–2919 (2019). https://doi.org/10.1093/rfs/hhy105
Bauder, D., Bodnar, T., Mazur, S., Okhrin, Y.: Bayesian inference for the tangent portfolio. Int. J. Theor. Appl. Finance 21(08), 1850054 (2018). MR3897158. https://doi.org/10.1142/S0219024918500541
Bodnar, O.: Sequential surveillance of the tangency portfolio weights. Int. J. Theor. Appl. Finance 12(06), 797–810 (2009). https://doi.org/10.1142/S0219024909005464
Bodnar, O., Bodnar, T., Parolya, N.: Recent advances in shrinkage-based high-dimensional inference. J. Multivar. Anal., 104826 (2022). MR4353848. https://doi.org/10.1016/j.jmva.2021.104826
Bodnar, T., Okhrin, Y.: On the product of inverse Wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scand. J. Stat. 38(2), 311–331 (2011). MR2829602. https://doi.org/10.1111/j.1467-9469.2011.00729.x
Bodnar, T., Mazur, S., Okhrin, Y.: On the exact and approximate distributions of the product of a Wishart matrix with a normal vector. J. Multivar. Anal. 122, 70–81 (2013). MR3189308. https://doi.org/10.1016/j.jmva.2013.07.007
Bodnar, T., Mazur, S., Okhrin, Y.: Distribution of the product of singular Wishart matrix and normal vector. Theory Probab. Math. Stat. 91, 1–15 (2014). MR3364119. https://doi.org/10.1090/tpms/962
Bodnar, T., Mazur, S., Parolya, N.: Central limit theorems for functionals of large sample covariance matrix and mean vector in matrix-variate location mixture of normal distributions. Scand. J. Stat. 46(2), 636–660 (2019). MR3948571. https://doi.org/10.1111/sjos.12383
Bodnar, T., Mazur, S., Podgorski, K.: Singular inverse Wishart distribution and its application to portfolio theory. J. Multivar. Anal., 314–326 (2016). MR3431434. https://doi.org/10.1016/j.jmva.2015.09.021
Bodnar, T., Mazur, S., Podgorski, K.: A test for the global minimum variance portfolio for small sample and singular covariance. AStA Adv. Stat. Anal. 101(3), 253–265 (2017). MR3679345. https://doi.org/10.1007/s10182-016-0282-z
Bodnar, T., Okhrin, Y., Parolya, N.: Optimal shrinkage-based portfolio selection in high dimensions. J. Bus. Econ. Stat., to appear (2022)
Bodnar, T., Mazur, S., Muhinyuza, S., Parolya, N.: On the product of a singular Wishart matrix and a singular Gaussian vector in high dimension. Theory Probab. Math. Stat. 99(2), 37–50 (2018). MR3908654. https://doi.org/10.1090/tpms/1078
Bodnar, T., Mazur, S., Ngailo, E., Parolya, N.: Discriminant analysis in small and large dimensions. Theory Probab. Math. Stat. 100, 24–42 (2019). MR3992991. https://doi.org/10.1090/tpms/1096
Bodnar, T., Mazur, S., Podgorski, K., Tyrcha, J.: Tangency portfolio weights for singular covariance matrix in small and large dimensions: estimation and test theory. J. Stat. Plan. Inference 201, 40–57 (2019). MR3913439. https://doi.org/10.1016/j.jspi.2018.11.003
Bodnar, T., Dmytriv, S., Okhrin, Y., Parolya, N., Schmid, W.: Statistical inference for the expected utility portfolio in high dimensions. IEEE Trans. Signal Process. 69, 1–14 (2021). MR4213326. https://doi.org/10.1109/TSP.2020.3037369
Boullion, T.L., Odell, P.L.: Generalized Inverse Matrices. Wiley, New York, NY (1971). https://cds.cern.ch/record/213449. MR0338012
Britten-Jones, M.: The sampling error in estimates of mean-variance efficient portfolio weights. J. Finance 54(2), 655–671 (1999). https://doi.org/10.1111/0022-1082.00120
Brodie, J., Daubechies, I., De Mol, C., Giannone, D., Loris, I.: Sparse and stable Markowitz portfolios. Proc. Natl. Acad. Sci. USA 106, 12267–12272 (2009). https://doi.org/10.1073/pnas.0904287106
Cai, T.T., Hu, J., Li, Y., Zheng, X.: High-dimensional minimum variance portfolio estimation based on high-frequency data. J. Econom. 214(2), 482–494 (2020). MR4057056. https://doi.org/10.1016/j.jeconom.2019.04.039
Cook, R.D., Forzani, L.: On the mean and variance of the generalized inverse of a singular Wishart matrix. Electron. J. Stat. 5, 146–158 (2011). MR2786485. https://doi.org/10.1214/11-EJS602
Ding, Y., Li, Y., Zheng, X.: High dimensional minimum variance portfolio estimation under statistical factor models. J. Econom. 222(1), 502–515 (2021). MR4234830. https://doi.org/10.1016/j.jeconom.2020.07.013
Ghazal, G.A., Neudecker, H.: On second-order and fourth-order moments of jointly distributed random matrices: a survey. Linear Algebra Appl. 321(1), 61–93 (2000). MR1799985. https://doi.org/10.1016/S0024-3795(00)00181-6
Gulliksson, M., Mazur, S.: An iterative approach to ill-conditioned optimal portfolio selection. Comput. Econ. 56, 773–794 (2020). https://doi.org/10.1007/s10614-019-09943-6
Gulliksson, M., Oleynik, A., Mazur, S.: Portfolio selection with a rank-deficient covariance matrix. Working paper (2021)
Hautsch, N., Kyj, L.M., Malec, P.: Do high-frequency data improve high-dimensional portfolio allocations? J. Appl. Econom. 30(2), 263–290 (2015). MR3322719. https://doi.org/10.1002/jae.2361
Hubbard, D.W.: The Failure of Risk Management: Why It's Broken and How to Fix It. John Wiley & Sons (2020)
Imori, S., von Rosen, D.: On the mean and dispersion of the Moore–Penrose generalized inverse of a Wishart matrix. Electron. J. Linear Algebra 36, 124–133 (2020). MR4089045. https://doi.org/10.13001/ela.2020.5091
Javed, F., Mazur, S., Ngailo, E.: Higher order moments of the estimated tangency portfolio weights. J. Appl. Stat. 48(3), 517–535 (2021). MR4205986. https://doi.org/10.1080/02664763.2020.1736523
Javed, F., Mazur, S., Thorsén, E.: Tangency portfolio weights under a skew-normal model in small and large dimensions. Working paper 13 (2021)
Karlsson, S., Mazur, S., Muhinyuza, S.: Statistical inference for the tangency portfolio in high dimension. Statistics 55(3), 532–560 (2021). MR4313438. https://doi.org/10.1080/02331888.2021.1951730
Kotsiuba, I., Mazur, S.: On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian vector. Theory Probab. Math. Stat. 93, 95–104 (2015). MR3553443. https://doi.org/10.1090/tpms/1004
Kress, R.: Linear Integral Equations. Springer (1999). MR1723850. https://doi.org/10.1007/978-1-4612-0559-3
Ledoit, O., Wolf, M.: Nonlinear shrinkage of the covariance matrix for portfolio selection: Markowitz meets Goldilocks. Rev. Financ. Stud. 30(12), 4349–4388 (2017). https://doi.org/10.1093/rfs/hhx052
Markowitz, H.: Portfolio selection. J. Finance 7(1), 77–91 (1952). https://doi.org/10.1111/j.1540-6261.1952.tb01525.x
Mathai, A.M., Provost, S.B.: Quadratic Forms in Random Variables. CRC Press (1992). MR1192786
Muhinyuza, S.: A test on mean-variance efficiency of the tangency portfolio in high-dimensional setting. Theory Probab. Math. Stat., in press (2020). MR4421345. https://doi.org/10.1090/tpms
Muhinyuza, S., Bodnar, T., Lindholm, M.: A test on the location of the tangency portfolio on the set of feasible portfolios. Appl. Math. Comput. 386, 125519 (2020). MR4126729. https://doi.org/10.1016/j.amc.2020.125519
Okhrin, Y., Schmid, W.: Distributional properties of portfolio weights. J. Econom. 134(1), 235–256 (2006). MR2328322. https://doi.org/10.1016/j.jeconom.2005.06.022
Planitz, M.: Inconsistent systems of linear equations. Math. Gaz. 63(425), 181–185 (1979). https://doi.org/10.2307/3617890
Rubio, F., Mestre, X., Palomar, D.P.: Performance analysis and optimal selection of large minimum variance portfolios under estimation risk. IEEE J. Sel. Top. Signal Process. 6(4), 337–350 (2012). https://doi.org/10.1109/JSTSP.2012.2202634
Taleb, N.N.: The Black Swan: The Impact of the Highly Improbable, vol. 2. Random House (2007)
Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Winston, New York (1977). MR0455365
Tsukuma, H.: Estimation of a high-dimensional covariance matrix with the Stein loss. J. Multivar. Anal. 148, 1–17 (2016). MR3493016. https://doi.org/10.1016/j.jmva.2016.02.012