2015
DOI: 10.22237/jmasm/1430453640
Applying Penalized Binary Logistic Regression with Correlation Based Elastic Net for Variables Selection

Abstract: Reducing high-dimensional classification problems using penalized logistic regression is one of the challenges in applying binary logistic regression. The applied penalized method, the correlation-based elastic penalty (CBEP), was used to overcome the limitation of the LASSO and the elastic net in variable selection when there is perfect correlation among explanatory variables. The performance of the CBEP was demonstrated through its application in analyzing two well-known high-dimensional binary classification data sets.…
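For orientation, the sketch below fits a standard elastic-net penalized logistic regression with scikit-learn on synthetic high-dimensional data. It illustrates the kind of sparse variable selection the paper evaluates, but it uses the ordinary elastic net penalty, not the correlation-based elastic penalty (CBEP) itself; the data set, parameter values, and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: elastic-net penalized logistic regression for variable
# selection on a high-dimensional binary classification problem.
# NOTE: this is the standard elastic net, not the paper's CBEP; the data
# are synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data: far more features than informative ones.
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           n_redundant=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# l1_ratio blends the lasso (L1) and ridge (L2) penalties, as in elastic net.
model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=0.1, max_iter=5000)
model.fit(X_train, y_train)

selected = np.flatnonzero(model.coef_)   # indices of retained variables
print(f"variables kept: {selected.size} / {X.shape[1]}")
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

As described in the abstract, CBEP modifies the quadratic part of the penalty to account for correlation among explanatory variables; the sketch above does not attempt to reproduce that modification.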

Cited by 28 publications (19 citation statements)
References 15 publications
“…The lasso penalized function has attracted a lot of attention because of its capability to yield sparse solutions and its great performance in classification [34-38]. However, it was proved in Zou [30] that the lasso is not a consistent variable selection method. In other words, the lasso lacks the oracle properties, which require that a method select the correct subset of descriptors with probability approaching 1 and estimate the nonzero descriptors as efficiently as would be possible if the irrelevant descriptors were known in advance.…”
Section: Penalized Logistic Regression (mentioning)
confidence: 99%
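For reference, the lasso-penalized logistic regression discussed in this excerpt minimizes the negative binomial log-likelihood plus an L1 penalty on the coefficients. A standard formulation, in notation assumed here rather than taken from the cited papers, is:

```latex
\hat{\beta}_{\text{lasso}}
  = \arg\min_{\beta_0,\,\beta}\;
    -\sum_{i=1}^{n}\Bigl[\, y_i\bigl(\beta_0 + x_i^{\top}\beta\bigr)
        - \log\bigl(1 + e^{\beta_0 + x_i^{\top}\beta}\bigr) \Bigr]
    + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert
```

The L1 term shrinks some coefficients exactly to zero, which is what produces the sparse solutions mentioned in the excerpt; the elastic net and CBEP add a quadratic term on top of this L1 penalty.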
“…4 must be modified. The conditional probability p of the system state is defined as follows [31]: Y is replaced by the “logit” transformation of the conditional probability p: …”
Section: Process Pattern and Elastic Net-Principal Component Analysis (mentioning)
confidence: 99%
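The “logit” transformation referred to in this excerpt is the standard log-odds link of binary logistic regression. In generic notation (assumed here, not copied from the cited work):

```latex
\operatorname{logit}(p) \;=\; \ln\frac{p}{1-p}
  \;=\; \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k,
\qquad
p \;=\; \frac{e^{\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k}}
             {1 + e^{\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k}}
```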
“…The classical linear model assumes that the mean of the response variable y is a linear function of a set of predictor variables [1-7], and that the response variable is continuous and normally distributed with constant variance. In fact, in many applications the response variable is categorical, consists of counts, or is continuous but non-normal, so the ordinary least squares method cannot be applied to find the regression model [8-15]. Generalized linear models were introduced by Nelder and Wedderburn in 1972 [16] to address those limitations.…”
Section: Introduction (mentioning)
confidence: 99%
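A one-line restatement of the contrast drawn in this excerpt, in generic notation not taken from the cited paper: the classical linear model ties the mean directly and linearly to the predictors under normal errors, while a GLM only requires that a transformation of the mean be linear in the predictors.

```latex
\text{linear model:}\quad y_i = x_i^{\top}\beta + \varepsilon_i,\;
  \varepsilon_i \sim N(0,\sigma^2)
\qquad\text{vs.}\qquad
\text{GLM:}\quad g\bigl(\mathbb{E}[y_i]\bigr) = x_i^{\top}\beta
```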
“…In GLMs the mean of the response variable is modeled as a monotonic nonlinear transformation of a linear function of the predictor variables. The inverse of the transformation g is known as the link function. Many applications have used GLMs [15, 17-23]. An example of a non-normal continuous distribution that has many applications is the inverse Gaussian distribution.…”
Section: Introduction (mentioning)
confidence: 99%
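As a concrete illustration of a GLM with a non-normal continuous response, the sketch below fits an inverse Gaussian GLM with statsmodels on simulated data. The simulation setup, coefficient values, and the choice of a log link are assumptions made for the example only.

```python
# Sketch: fitting a GLM with an inverse Gaussian response in statsmodels.
# The simulated data and the log link are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
X = sm.add_constant(rng.normal(size=(n, 2)))      # intercept + 2 predictors
mu = np.exp(X @ np.array([0.5, 0.3, -0.2]))       # positive mean via log link
y = rng.wald(mean=mu, scale=2.0)                  # inverse Gaussian responses

# g is the link function: here g(mu) = log(mu), so mu = exp(X beta).
model = sm.GLM(y, X,
               family=sm.families.InverseGaussian(sm.families.links.Log()))
result = model.fit()
print(result.summary())
```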