2016
DOI: 10.1186/s12859-016-1309-x
On the inconsistency of ℓ1-penalised sparse precision matrix estimation

Abstract: Background: Various ℓ1-penalised estimation methods such as graphical lasso and CLIME are widely used for sparse precision matrix estimation and learning of undirected network structure from data. Many of these methods have been shown to be consistent under various quantitative assumptions about the underlying true covariance matrix. Intuitively, these conditions are related to situations where the penalty term will dominate the optimisation. Results: We explore the consistency of ℓ1-based methods for a class of …
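The trade-off the abstract alludes to, where the penalty term can come to dominate the optimisation, can be sketched by evaluating the standard graphical-lasso objective at two candidate precision matrices. This is a minimal NumPy illustration with a made-up 2×2 covariance; the function name and constants are ours, not from the paper:

```python
import numpy as np

def glasso_objective(theta, S, lam):
    """Penalised negative log-likelihood minimised by graphical lasso:
    tr(S @ Theta) - log det(Theta) + lam * ||Theta||_1 (off-diagonal only)."""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "Theta must be positive definite"
    off_l1 = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return np.trace(S @ theta) - logdet + lam * off_l1

# Toy covariance with a strong off-diagonal dependency.
S = np.array([[1.0, 0.8],
              [0.8, 1.0]])
theta_dense = np.linalg.inv(S)  # unpenalised maximum-likelihood precision
theta_diag = np.eye(2)          # fully sparse (diagonal) candidate

for lam in (0.01, 1.0):
    f_dense = glasso_objective(theta_dense, S, lam)
    f_diag = glasso_objective(theta_diag, S, lam)
    print(f"lam={lam}: dense={f_dense:.3f}, diagonal={f_diag:.3f}")
```

For a small penalty the dense (true) precision matrix attains the lower objective, while for a large penalty the ℓ1 term dominates and the fully sparse candidate wins even though the variables are strongly dependent — the intuition behind the consistency conditions discussed above.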

Cited by 9 publications (24 citation statements). References 26 publications.
“…In these situations, the covariance matrix cannot be inverted due to singularity (Hartlap, Simon, & Schneider, 2007), which is overcome by the glasso method. Accordingly, most of the simulation work has focused on high-dimensional settings (n < p), where model selection consistency is not typically evaluated in more common asymptotic settings (n → ∞) (Ha & Sun, 2014; Heinävaara, Leppä-aho, Corander, & Honkela, 2016; Peng, Wang, Zhou, & Zhu, 2009). Further, in behavioral science applications, the majority of network models are fit in low-dimensional settings (p < n) (Costantini et al., 2015; Rhemtulla et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%
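The singularity point in the citation above can be illustrated with a short NumPy sketch. The dimensions and the shrinkage constant are illustrative; glasso itself uses an ℓ1 penalty rather than the diagonal shrinkage shown here, but any such regularisation restores invertibility:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 20                      # fewer samples than variables (n < p)
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)        # p x p sample covariance

# Centring costs one degree of freedom, so rank(S) <= n - 1 < p:
# the sample covariance is singular and cannot be inverted directly.
print(np.linalg.matrix_rank(S))

# Diagonal shrinkage as a stand-in for regularisation: the penalised
# estimate is positive definite, hence invertible.
S_reg = S + 0.1 * np.eye(p)
print(np.linalg.eigvalsh(S_reg).min() > 0)
```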
“…Two additional papers describe aspects of (unsupervised) network reconstruction [3,4]. Affeldt et al…”
Section: Introduction (mentioning)
confidence: 99%
“…The basic idea here is to first identify related variables, and then in a second step perform multiple parallel local network reconstructions from which a global network is inferred. The second contribution related to network construction, Heinävaara et al [4], describes aspects of L1-penalised sparse precision matrix estimation. L1-regularisation is often applied in network reconstruction, and this manuscript demonstrates that it is important to check whether the conditions of consistency are likely to be met by the dataset and the problem at hand.…”
Section: Introduction (mentioning)
confidence: 99%