2022
DOI: 10.3390/math10162985
A Robust Variable Selection Method for Sparse Online Regression via the Elastic Net Penalty

Abstract: Variable selection has been a hot topic, with various popular methods including lasso, SCAD, and elastic net. These penalized regression algorithms remain sensitive to noisy data. Furthermore, "concept drift" fundamentally distinguishes streaming-data learning from batch learning. This article presents a method for noise-resistant regularization and variable selection in noisy data streams with multicollinearity, dubbed the canal-adaptive elastic net, which is similar to the elastic net and encourages grouping effects…

Cited by 10 publications (6 citation statements)
References 31 publications
“…It combines the strengths of two other regularization methods, L1: least absolute shrinkage and selection operator (LASSO) and L2: Ridge regression regularization. EN can handle multicollinearity, which occurs when some predictors correlate strongly (Wang, Liang, Liu, Song, & Zhang, 2022). The LASSO regularization can encounter problems of inconsistency and instability in the presence of multicollinearity since it may arbitrarily select one predictor over another.…”
Section: Elastic Net (EN)
confidence: 99%
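The grouping effect described in the quoted passage — LASSO arbitrarily picking one of two correlated predictors while the elastic net spreads weight across both — can be illustrated with a small sketch. This uses scikit-learn's generic `Lasso` and `ElasticNet` estimators (an assumption for illustration; it is not the cited paper's canal-adaptive elastic net), and the data, seed, and penalty strengths are arbitrary choices:

```python
# Sketch: grouping effect of elastic net vs. LASSO on correlated predictors.
# Assumes scikit-learn; hyperparameters (alpha, l1_ratio) are illustrative only.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # near-duplicate of x1 (strong correlation)
x3 = rng.normal(size=n)                # unrelated predictor
X = np.column_stack([x1, x2, x3])
y = x1 + x2 + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# LASSO tends to concentrate weight on one of the correlated pair;
# the elastic net's L2 component spreads it across both (grouping effect).
print("LASSO coefficients:      ", lasso.coef_)
print("elastic net coefficients:", enet.coef_)
```

The key observation is that the elastic net assigns both correlated predictors similar nonzero coefficients, which is the stability property the citing authors invoke.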
“…Although it amplifies the regression performance (quality of fit), this method might underestimate the penalty, ultimately reducing selection efficiency. With small values of λ, the majority of the coefficients do not achieve a 0 value, leading to a larger dataset [21]. This limitation leads to the underperformance of ridge regression in analyzing numerous features.…”
Section: Ridge
confidence: 99%
“…When high multicollinearity is present, ridge may outperform LASSO and EN. However, LASSO shows a superior capability to select features, while EN improves estimations over datasets with unknown variance [21].…”
Section: Elastic Net
confidence: 99%
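The contrast drawn in the two passages above — ridge shrinking coefficients without zeroing them, versus LASSO performing actual selection — can be shown with a minimal sketch, again assuming scikit-learn's generic estimators and illustrative data and penalty values:

```python
# Sketch: ridge shrinks but does not zero coefficients; LASSO sets some
# exactly to zero. Assumes scikit-learn; alpha values are illustrative.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 10))
beta = np.array([2.0, -1.5, 1.0] + [0.0] * 7)  # only first 3 features matter
y = X @ beta + 0.1 * rng.normal(size=150)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# Ridge: all coefficients shrunken but nonzero (no selection).
# LASSO: noise features driven exactly to zero (selection).
print("ridge nonzero count:", np.sum(ridge.coef_ != 0))
print("LASSO nonzero count:", np.sum(np.abs(lasso.coef_) > 1e-8))
```

This is why, as the citing authors note, ridge underperforms for selection among numerous features even when it fits well under multicollinearity.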
“…Detailed mathematical formulations of entropy, computation of maximum values and mean values reported in [21][22][23] have been used in this paper to compute CFI1 to CFI7.…”
Section: Classification of PQE
confidence: 99%