Proceedings of the 45th IEEE Conference on Decision and Control 2006
DOI: 10.1109/cdc.2006.376828
A new algorithm for variable selection

Abstract: A new method for variable selection and estimation called Iteratively Scaled Ridge Regression (ISRR) is proposed. The method is an iterative algorithm based on ridge regression. Simulation studies show that ISRR shares the properties of both subset selection and ridge regression. It selects an optimal subset of the regressor variables and is robust to small changes in the data set. The ISRR algorithm was primarily developed for linear models, but is quite simple and general and can easily be extended to more g…
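The truncated abstract describes ISRR only as an iterative algorithm built on ridge regression that drives out unimportant regressors. A minimal sketch of one plausible reading — each iteration rescales the regressor columns by the magnitude of the current coefficient estimates, so that weak regressors shrink toward zero — might look like this. The rescaling rule, the penalty, and the final threshold are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def isrr(X, y, lam=1.0, n_iter=50, tol=1e-8):
    """Sketch of an iteratively scaled ridge regression (assumed form).

    Each pass solves a ridge problem on columns rescaled by the current
    coefficient magnitudes; regressors whose coefficients collapse stay
    at zero, which mimics subset selection while retaining ridge's
    stability under small data perturbations.
    """
    n, p = X.shape
    scale = np.ones(p)       # initial scaling: plain ridge
    beta = np.zeros(p)
    for _ in range(n_iter):
        Xs = X * scale                              # rescaled design
        G = Xs.T @ Xs + lam * np.eye(p)
        b = scale * np.linalg.solve(G, Xs.T @ y)    # back to original scale
        if np.max(np.abs(b - beta)) < tol:
            beta = b
            break
        beta = b
        scale = np.abs(beta)    # next scaling from current estimates
    beta[np.abs(beta) < 1e-6] = 0.0   # tiny coefficients treated as excluded
    return beta
```

Under this reading, a regressor whose coefficient hits zero has its column scaled to zero and is effectively removed on all later iterations, which is how the procedure combines selection with ridge-style shrinkage.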

Cited by 2 publications (2 citation statements)
References 28 publications
“…On the one hand, among L1 methods, the LASSO approach introduced in [17] and [18] gives very good results in terms of model error. On the other hand, L2 methods (and especially the iterative scaled ridge regression (ISRR) approach, proposed in [19]) are less accurate but computationally faster than the LASSO approach. Therefore, L2 solutions are preferred in this case, as simple and fast-to-identify models are the object of this brief (see again the discussion in Section I) and acceptable performance can still be guaranteed.…”
Section: B. Subset Selection and Regularization
Confidence: 99%
“…In [19], it has been shown that the optimal k and the subsequent value of λ can be selected by minimizing a suitable evaluation criterion. Here, the generalized cross validation (GCV) criterion is minimized with respect to λ to find the optimal model parameters and disregard the unimportant regressors.…”
Section: B. Subset Selection and Regularization
Confidence: 99%
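The second citation statement mentions choosing λ by minimizing the generalized cross validation criterion. For plain ridge regression, GCV takes the standard form GCV(λ) = n · RSS(λ) / (n − df(λ))², with df(λ) the trace of the ridge hat matrix; a sketch of that grid search follows. This is the textbook GCV criterion, not necessarily the exact variant used in the citing brief:

```python
import numpy as np

def ridge_gcv(X, y, lambdas):
    """Select the ridge penalty by minimizing generalized cross validation.

    GCV(lam) = n * RSS(lam) / (n - df(lam))**2, where df(lam) is the
    trace of the hat matrix X (X'X + lam I)^{-1} X' (effective degrees
    of freedom). Returns the winning lambda and its coefficients.
    """
    n, p = X.shape
    best_gcv, best_lam, best_beta = np.inf, None, None
    for lam in lambdas:
        G = X.T @ X + lam * np.eye(p)
        beta = np.linalg.solve(G, X.T @ y)
        resid = y - X @ beta
        df = np.trace(X @ np.linalg.solve(G, X.T))   # effective dof
        gcv = n * (resid @ resid) / (n - df) ** 2
        if gcv < best_gcv:
            best_gcv, best_lam, best_beta = gcv, lam, beta
    return best_lam, best_beta
```

Because the hat-matrix trace penalizes flexible fits, minimizing GCV over a λ grid approximates leave-one-out cross validation at a fraction of its cost, which matches the "fast-to-identify models" motivation quoted above.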