2012 IEEE 7th Sensor Array and Multichannel Signal Processing Workshop (SAM)
DOI: 10.1109/sam.2012.6250512
Natural order recovery for banded covariance models

Cited by 5 publications (8 citation statements). References 9 publications.
“…where λ is a regularization parameter, and ‖C‖₁ is a matrix version of the L1 norm, namely the sum of the absolute values of the elements of C. It is important to emphasize that this approach is only applicable to chordal graphical models and assumes that the perfect order of the variables is known a priori. Recent developments in high-dimensional covariance estimation provide data-driven methods for identifying this order and structure [30], [28]. We leave this topic as a possible direction for future research.…”

Section: Convexity in Chordal MGGD
confidence: 99%
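The penalized objective this excerpt alludes to can be sketched numerically. The sketch below, a minimal assumption of the standard L1-penalized negative log-likelihood for an inverse covariance matrix C given a sample covariance S, is illustrative only; the function name and NumPy implementation are not part of the cited work.

```python
import numpy as np

def graphical_l1_objective(C, S, lam):
    """Penalized negative log-likelihood for a candidate inverse covariance C.

    The penalty term lam * ||C||_1 uses the elementwise matrix L1 norm the
    excerpt describes: the sum of absolute values of all entries of C.
    """
    sign, logdet = np.linalg.slogdet(C)
    if sign <= 0:
        return np.inf  # C must be positive definite for a valid likelihood
    return -logdet + np.trace(S @ C) + lam * np.abs(C).sum()

# For S = C = I (3x3) and lam = 0.1: -log det(I) = 0, trace(I) = 3,
# and the penalty contributes 0.1 * 3 = 0.3, giving 3.3.
val = graphical_l1_objective(np.eye(3), np.eye(3), 0.1)
```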
“…Our results are also applicable to the case of an unknown sparsity pattern, i.e., structure learning via sparsity-inducing penalties, but require prior knowledge of the perfect order. Recent works on structure learning in directed acyclic graphs provide data-driven techniques for learning this order [30], [28].…”

Section: Introduction
confidence: 99%
“…Using the definition of ϵ in (7) with Corollary 1, we obtain the result below on the convergence of estimates.…”

Section: A Robust Sample Covariance
confidence: 92%
“…Assuming that the underlying relationships follow Gaussian distributions, techniques such as the graphical lasso [4]–[6] have been developed to tackle the problem by incorporating sparsity-promoting penalties. Different optimization methods have been proposed to solve the graphical lasso problem, including coordinate descent [5], proximal methods [7], [8], alternating minimization methods [9], [10], and Newton-conjugate gradient methods [11].…”

Section: Introduction
confidence: 99%
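As a concrete illustration of the graphical lasso this excerpt discusses, the sketch below fits scikit-learn's `GraphicalLasso` estimator to synthetic data; the choice of `alpha`, the sample sizes, and the random data are assumptions for demonstration, not settings from the cited works.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))  # 200 samples of 5 independent variables

# alpha is the L1 regularization strength; larger values yield a sparser
# estimated precision (inverse covariance) matrix.
model = GraphicalLasso(alpha=0.2)
model.fit(X)

precision = model.precision_    # estimated sparse inverse covariance
covariance = model.covariance_  # corresponding covariance estimate
```

Internally, scikit-learn solves this problem with the coordinate-descent approach cited above as [5].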
“…Sparse graph recovery based on the graphical lasso formulation and its variants has been extensively studied, and a recent survey [31] provides a good primer on these approaches. Traditional optimization-based algorithms like BCD [2], G-ISTA [28], BigQUIC [18], GGMncv [48], MissGlasso [43], and many others have been designed for a range of requirements, such as scaling to large numbers of features, handling missing values, and including non-convex penalties. TeraLasso [14] and the Sylvester Graphical Lasso (SyGlasso) model [47] are tensor-based approaches that have recently garnered more interest.…”

Section: Related Methods
confidence: 99%