2010
DOI: 10.1109/tsp.2010.2053029
Shrinkage Algorithms for MMSE Covariance Estimation

Abstract: We address covariance estimation in the sense of minimum mean-squared error (MMSE) for Gaussian samples. Specifically, we consider shrinkage methods which are suitable for high-dimensional problems with a small number of samples (large p, small n). First, we improve on the Ledoit-Wolf (LW) method by conditioning on a sufficient statistic. By the Rao-Blackwell theorem, this yields a new estimator called RBLW, whose mean-squared error dominates that of LW for Gaussian variables. Second, to further reduce the esti…
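Both the LW baseline and the OAS estimator proposed in this paper have reference implementations in scikit-learn's `sklearn.covariance` module. A minimal sketch in the "large p, small n" regime (the dimensions and data below are illustrative, not from the paper):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, OAS

rng = np.random.default_rng(0)
p, n = 30, 10                         # "large p, small n" regime
X = rng.standard_normal((n, p))       # samples drawn from N(0, I_p)

lw = LedoitWolf().fit(X)
oas = OAS().fit(X)

# Frobenius-norm error against the true covariance (identity here)
err_lw = np.linalg.norm(lw.covariance_ - np.eye(p))
err_oas = np.linalg.norm(oas.covariance_ - np.eye(p))
print(f"LW error {err_lw:.3f}, OAS error {err_oas:.3f}")
```

Both estimators return a convex combination of the sample covariance and a scaled identity; the fitted mixing weight is exposed as the `shrinkage_` attribute.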

Cited by 437 publications (390 citation statements)
References 33 publications (54 reference statements)
“…The most well-conditioned covariance estimate of Σ is F = tr(S)/d · I_{d×d} [12]. The idea of OAS is to shrink the ill-conditioned sample covariance S toward F so that a well-conditioned covariance estimate Σ̂ can be obtained.…”
Section: OAS
confidence: 99%
“…The idea of OAS is to shrink the ill-conditioned sample covariance S toward F so that a well-conditioned covariance estimate Σ̂ can be obtained. Specifically, OAS optimizes the following cost function [12]:…”
Section: OAS
confidence: 99%
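The shrinkage itself is the convex combination Σ̂ = (1 − ρ)S + ρF, with ρ given in closed form by the OAS coefficient of [12]. A self-contained numpy sketch, assuming zero-mean samples (the function name and test data are ours, not from the cited work):

```python
import numpy as np

def oas_covariance(X):
    """OAS estimate: shrink the sample covariance S toward F = tr(S)/p * I.

    Assumes the rows of X are i.i.d. zero-mean Gaussian samples.
    """
    n, p = X.shape
    S = X.T @ X / n                                # sample covariance (zero mean)
    tr_S, tr_S2 = np.trace(S), np.trace(S @ S)
    # Closed-form OAS shrinkage coefficient, clipped to [0, 1]
    num = (1.0 - 2.0 / p) * tr_S2 + tr_S ** 2
    den = (n + 1.0 - 2.0 / p) * (tr_S2 - tr_S ** 2 / p)
    rho = 1.0 if den <= 0 else min(1.0, num / den)
    F = tr_S / p * np.eye(p)                       # well-conditioned target
    return (1.0 - rho) * S + rho * F, rho

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 30))                  # n = 10 samples, p = 30
Sigma, rho = oas_covariance(X)
# S is rank-deficient here, but the shrunk estimate is full rank
print(rho, np.linalg.matrix_rank(Sigma))
```

Because ρ > 0 whenever tr(S) > 0, the shrunk estimate is positive definite even when S itself is singular, which is the point of shrinking toward the identity target.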
“…Consider the following basic image model, in which the pixels are concatenated along a fixed lexicographic ordering. As with all patch-based techniques, the size of the image patches must be specified in advance [31], [32], [35]. Traditionally, the patch size is a free parameter that specifies how stochastic the user believes the image to be.…”
Section: Problem Statement
confidence: 99%
“…, r_D are drawn. Chen et al. [5] give the following matrix as a "naive but most well-conditioned estimate" for T̂: F̂ = (Tr(T̂)/K) I, where Tr(·) is the trace of a matrix and I is the identity matrix. F̂ is a diagonal matrix whose diagonal entries are all equal to the average of the diagonal elements of T̂. Our approach uses F̂ in place of the first term of the right-hand side of Eq.…”
Section: Covariance Matrix Update
confidence: 99%
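This diagonal target is one line of numpy. A quick sketch (the matrix T̂ below is synthetic, chosen only to exercise the formula):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
T_hat = A.T @ A / 8                  # some sample covariance estimate, K = 5
K = T_hat.shape[0]

# F = Tr(T_hat)/K * I: every diagonal entry equals the average
# of the diagonal elements of T_hat
F = np.trace(T_hat) / K * np.eye(K)
print(np.diag(F))
```

By construction F preserves the total variance of T̂ (the trace) while being perfectly conditioned, which is why it serves as the shrinkage target.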