Shrinking the Sample Covariance Matrix using Convex Penalties on the Matrix-Log Transformation

03/19/2019
by David E. Tyler et al.

For q-dimensional data, penalized versions of the sample covariance matrix are important when the sample size is small or modest relative to q. Since the negative log-likelihood under multivariate normal sampling is convex in Σ^-1, the inverse of the covariance matrix Σ, it is common to add a penalty which is also convex in Σ^-1. More recently, Deng and Tsui (2013) and Yu et al. (2017) have proposed penalties which are functions of the eigenvalues of Σ and are convex in Σ, but not in Σ^-1. The resulting penalized optimization problem is then not convex in either Σ or Σ^-1. In this paper, we note that this optimization problem is geodesically convex in Σ, that is, convex along the geodesics of the Riemannian manifold of positive definite matrices, which allows us to establish the existence and uniqueness of the corresponding penalized covariance matrices. More generally, we show the equivalence of convexity in Σ and geodesic convexity for penalties on Σ which are strictly functions of its eigenvalues. In addition, when using such penalties, we show that the resulting optimization problem reduces to a q-dimensional convex optimization problem on the eigenvalues of Σ, which can then be readily solved via Newton-Raphson. Finally, we argue that it is better to apply these penalties to the shape matrix Σ/|Σ|^(1/q), where |Σ| denotes the determinant of Σ, rather than to Σ itself. A simulation study and an example illustrate the advantages of applying the penalty to the shape matrix.
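To make the eigenvalue reduction concrete, below is a minimal Python sketch, not the paper's algorithm. It assumes a hypothetical penalty λ Σ_i (log d_i)^2 on the eigenvalues d_i of Σ, which is convex in the matrix log of Σ and shrinks Σ toward the identity; the function name penalized_cov and the tuning value λ = 0.5 are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def penalized_cov(S, lam, tol=1e-10, max_iter=100):
    """Penalized covariance estimate sharing the eigenvectors of S.

    Assumes S is positive definite and the (hypothetical) penalty
    lam * sum_i (log d_i)^2. Writing theta_i = log d_i, each eigenvalue
    solves the strictly convex one-dimensional problem
        f(theta_i) = theta_i + l_i * exp(-theta_i) + lam * theta_i**2,
    where l_i are the eigenvalues of S. Since f''(theta_i) =
    l_i * exp(-theta_i) + 2*lam > 0, Newton-Raphson is well defined.
    """
    l, V = np.linalg.eigh(S)            # S = V diag(l) V^T
    theta = np.log(l)                   # start at the unpenalized solution
    for _ in range(max_iter):
        grad = 1.0 - l * np.exp(-theta) + 2.0 * lam * theta
        hess = l * np.exp(-theta) + 2.0 * lam
        step = grad / hess
        theta -= step
        if np.max(np.abs(step)) < tol:
            break
    return V @ np.diag(np.exp(theta)) @ V.T

# Usage: shrink a small-sample covariance (n = 12, q = 8), then normalize
# to the shape matrix Sigma / |Sigma|^(1/q) if only the relative
# eigenvalue structure is of interest.
rng = np.random.default_rng(0)
X = rng.standard_normal((12, 8))
S = np.cov(X, rowvar=False)
Sigma_hat = penalized_cov(S, lam=0.5)
q = Sigma_hat.shape[0]
shape_hat = Sigma_hat / np.linalg.det(Sigma_hat) ** (1.0 / q)
```

Because the penalty acts only on the eigenvalues, the estimate keeps the eigenvectors of S; the q-dimensional problem separates into q one-dimensional strictly convex problems, which is what makes the Newton-Raphson step above so cheap.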
