2017
DOI: 10.1093/biomet/asx070
Robust and consistent variable selection in high-dimensional generalized linear models

Cited by 29 publications (17 citation statements). References 51 publications.
“…However, one could expect a robust penalized estimator to behave as well as a robust oracle estimator at the model, while keeping its good robustness properties. We illustrate this point with a simulation study taken from Ref , where the theoretical properties are also discussed.…”
Section: High‐dimensional Statistics
confidence: 89%
“…The adaptive lasso procedures used the weights $w_j = 1/(|\hat{\beta}_j^0| + 1/n)$, where $\hat{\beta}^0$ denotes the tuned lasso estimator. We used the coordinate descent algorithm described in Avella‐Medina and Ronchetti for the different versions of the lasso, and we stop the algorithm when the tuning parameter gives models of size greater than or equal to 20. The tuning parameters are selected by BIC.…”
Section: High‐dimensional Statistics
confidence: 99%
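The adaptive-weight recipe quoted above can be sketched as follows. This is an illustrative, numpy-only stand-in using a plain linear-model lasso solved by coordinate descent, not the robust quasi-likelihood implementation of Avella‐Medina and Ronchetti; all function names here are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, weights=None, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    w = np.ones(p) if weights is None else weights
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then one soft-threshold step.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam * w[j]) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta0 = lasso_cd(X, y, lam=0.1)          # step 1: initial (tuned) lasso fit
w = 1.0 / (np.abs(beta0) + 1.0 / n)      # step 2: adaptive weights; the 1/n
                                         # term keeps w_j finite when beta0_j = 0
beta_ada = lasso_cd(X, y, lam=0.1, weights=w)  # step 3: weighted (adaptive) lasso
```

The $1/n$ offset in the weights is what makes the formula well defined even for coefficients the initial lasso sets exactly to zero: those coordinates receive a very large weight and are heavily penalized in the second fit.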
“…Unlike Guo et al (2017), and in keeping with a natural point of view in robustness, we do not assume that the parameter space is a compact subset of R^p, where p is the covariate dimension, and we require weaker assumptions on the penalty. Moreover, our results are not restricted to the LASSO or ADALASSO penalties, as in Avella-Medina and Ronchetti (2018) or Guo et al (2017): they are stated in a general penalty framework that includes not only these two penalties but also the SCAD and MCP penalties.…”
Section: Introduction
confidence: 99%
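For reference, the two folded-concave penalties named above have the following standard textbook forms (Fan and Li's SCAD with $a > 2$, Zhang's MCP with $\gamma > 1$); this is the usual parametrization, not necessarily the cited paper's exact notation:

```latex
% SCAD penalty (Fan and Li, 2001), for \lambda > 0 and a > 2
p_\lambda(t) =
\begin{cases}
\lambda |t|, & |t| \le \lambda,\\[4pt]
\dfrac{2a\lambda|t| - t^2 - \lambda^2}{2(a-1)}, & \lambda < |t| \le a\lambda,\\[4pt]
\dfrac{\lambda^2 (a+1)}{2}, & |t| > a\lambda.
\end{cases}

% MCP penalty (Zhang, 2010), for \gamma > 1
p_\lambda(t) =
\begin{cases}
\lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
\dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda.
\end{cases}
```

Both penalties behave like the lasso near the origin but flatten out for large $|t|$, which is what removes the lasso's bias on large coefficients.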
“…Some robust procedures are already available in the literature. Among others, Avella-Medina and Ronchetti (2017) proposed a robust penalized quasi-likelihood estimator for generalized linear models, Park and Konishi (2016) suggested a robust penalized logistic regression based on a weighted likelihood methodology, and Kurnaz et al (2018) adopted a trimmed elastic-net estimator for linear and logistic regression. However, none of these options satisfies the zero-sum constraint.…”
Section: Introduction
confidence: 99%