2006 Fortieth Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2006.354997
Maximum Likelihood Covariance Estimation with a Condition Number Constraint

Cited by 28 publications (31 citation statements) | References 5 publications
“…4 reveals that FNE and SNE are basically equivalent and outperform all the counterparts in terms of average SINR, including those specific to compound-Gaussian disturbance. Precisely, in the matched condition, as already observed in Fig.…”
Section: A. Spatial Processing
Citation type: mentioning
confidence: 99%
“…Without surprise, LRE-6 outperforms LRE-7, reflecting the presence of a mismatch loss. Finally, in the mismatched scenario, FNE and SNE also grant better performance than FML and CML… As to the LRE, the clutter covariance matrix rank is evaluated as the number of eigenvalues greater than tr(M_s)/10^5 ≥ 10^-4. Hence, the estimators exploiting the true rank 6 (LRE-6) and rank 7 (LRE-7) are displayed.…”
Section: A. Spatial Processing
Citation type: mentioning
confidence: 99%
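The rank rule quoted above reduces to a simple eigenvalue count against a trace-based threshold. Below is a minimal Python sketch of that rule, assuming a symmetric covariance estimate `M` standing in for the M_s of the quote; the function name, the `denom=1e5` parameter, and the toy matrix are illustrative choices, not code from the cited paper.

```python
import numpy as np

def effective_rank(M, denom=1e5):
    """Count the eigenvalues of a covariance estimate M that exceed
    the trace-based threshold tr(M)/denom (the LRE rule quoted above)."""
    eigvals = np.linalg.eigvalsh(M)       # eigenvalues of a symmetric matrix
    threshold = np.trace(M) / denom       # tr(M)/10^5 for the default denom
    return int(np.sum(eigvals > threshold))

# Toy check: a rank-2 structure plus a tiny noise floor on the diagonal.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
M = A @ A.T + 1e-8 * np.eye(8)
print(effective_rank(M))  # prints 2 for this example
```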
“…Methods which are more convenient for these applications are presented in Ledoit and Wolf (2004a,b); Huang et al. (2006); Warton (2008); Won et al. (2009) and Deng and Tsui (2013), along with our approach presented in this article. Huang et al. (2006) were among the first to parallel penalized log-likelihood estimation with l2 (and l1) regularization with the common ridge (Lasso) regression.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
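As a concrete illustration of the ridge parallel drawn in this citation, the sketch below uses scikit-learn's LedoitWolf estimator, one standard implementation of the Ledoit and Wolf (2004) shrinkage, and writes out the underlying l2-style blend toward a scaled identity; the data dimensions and seed are arbitrary assumptions.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 20))  # n = 50 samples, p = 20 variables

# Ledoit-Wolf shrinkage: analytically chosen blend toward a scaled identity.
lw = LedoitWolf().fit(X)
print("shrinkage intensity:", lw.shrinkage_)

# The same ridge-style blend written out: (1 - a) * S + a * mu * I,
# with S the (biased, MLE-style) sample covariance and mu its mean eigenvalue.
S = np.cov(X, rowvar=False, bias=True)
a, mu = lw.shrinkage_, np.trace(S) / S.shape[0]
assert np.allclose(lw.covariance_, (1 - a) * S + a * mu * np.eye(S.shape[0]))
```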
“…The penalized log-likelihood functions and other constrained optimization techniques are used to obtain better estimates of the matrices. Examples of these include the graphical Lasso algorithm and its extensions (Friedman et al. (2008); Fan et al. (2009); Witten et al. (2011); Bien and Tibshirani (2011)) and other regularization-driven approaches for the log-likelihood function (see, e.g., Won et al. (2009); Yuan and Wang (2013); Deng and Tsui (2013)). Examples of optimization-based approaches which do not deal with likelihood-based inference are presented in Ledoit and Wolf (2004a,b) and Cai et al. (2011).…”
Section: Introduction
Citation type: mentioning
confidence: 99%
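To make the graphical Lasso reference concrete, here is a minimal sketch using scikit-learn's GraphicalLasso, one standard implementation of the Friedman et al. (2008) algorithm; the chain-graph example and the alpha value are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sparse ground-truth precision matrix: a chain graph on 5 variables.
p = 5
Theta = np.eye(p)
for i in range(p - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4
Sigma = np.linalg.inv(Theta)

rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=500)

# Graphical Lasso: penalized log-likelihood with an l1 penalty on the
# off-diagonal entries of the precision matrix (alpha tunes sparsity).
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))  # entries off the chain shrink toward 0
```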