2015
DOI: 10.1080/00949655.2015.1073290
Robust logistic regression modelling via the elastic net-type regularization and tuning parameter selection

Cited by 19 publications (10 citation statements)
References 21 publications
“…The groundwater well data used in this study were collected from field observations and measurements by Busan Metropolitan City, the groundwater basic survey report, the National Groundwater Information Center (gims.go.kr, accessed on 28 January 2020), and the Korea Rural Community Corporation. The 153 collected well records were randomly split, with 70% (107 records) used as the model training dataset and the remaining 30% (46 records) used as the model validation dataset. Figure 1 shows the locations of the groundwater well data used in this study.…”
Section: Well Data
confidence: 99%
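The 70/30 random split described in this excerpt can be reproduced in a few lines; the sketch below is a generic illustration (array names and the seed are hypothetical, not taken from the cited study).

```python
# Minimal sketch of the 70/30 random train/validation split described in the
# excerpt above; variable names and the seed are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 153                                   # total number of groundwater well records
indices = rng.permutation(n)              # shuffle record indices at random
n_train = int(0.7 * n)                    # 70% for training -> 107 records
train_idx = indices[:n_train]             # 107 training records
valid_idx = indices[n_train:]             # 46 validation records
print(len(train_idx), len(valid_idx))     # prints: 107 46
```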
“…We can find from Equation (2.2) that ℓ_p(β) is based on the maximum likelihood method, which is very sensitive to outliers. To obtain robust variable selection, Park and Konishi [24] proposed a weighted penalized log-likelihood function…”
Section: Review Some Classical Methods
confidence: 99%
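As a rough illustration of the weighted penalized log-likelihood idea referenced here, the sketch below evaluates a weighted logistic log-likelihood with an elastic-net penalty. The weight vector, penalty form, and function name are illustrative assumptions; this is not Park and Konishi's exact criterion.

```python
# Hedged sketch: a weighted logistic negative log-likelihood plus an elastic-net
# penalty, in the spirit of the weighted penalized criterion mentioned above.
# The weighting rule and penalty form are illustrative assumptions only.
import numpy as np

def weighted_penalized_nll(beta, X, y, w, lam, alpha):
    """Weighted negative log-likelihood with an elastic-net penalty.

    beta  : (p,) coefficient vector (all coefficients penalized, for simplicity)
    X, y  : (n, p) design matrix and 0/1 responses of length n
    w     : (n,) robustness weights in [0, 1]; small values downweight suspected outliers
    lam   : overall regularization strength
    alpha : elastic-net mixing parameter (1 = pure L1, 0 = pure L2)
    """
    eta = X @ beta
    loglik = w * (y * eta - np.log1p(np.exp(eta)))        # weighted Bernoulli log-likelihood
    penalty = lam * (alpha * np.abs(beta).sum()
                     + 0.5 * (1 - alpha) * (beta ** 2).sum())
    return -loglik.sum() + penalty
```

Minimizing this objective over β, for instance with a general-purpose optimizer, would yield a sparse fit in which low-weight observations have little influence.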
“…where m = ⌊αn⌋, P(S = 1) = P(S = −1) = 0.5. We set α = 0.1 and α = 0.2 to compare our proposed method (ARVSP) with the method (WRVSP) proposed by [24] and the penalized maximum likelihood (PML) method.…”
Section: D2
confidence: 99%
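The excerpt gives only the contamination size m = ⌊αn⌋ and the random sign S, so the following sketch is purely illustrative: it assumes, for the sake of example, that one covariate of m randomly chosen observations is shifted by a fixed constant times S. The shift size and the quantity being perturbed are assumptions, not details from the cited paper.

```python
# Illustrative only: contaminate m = floor(alpha * n) observations using a random
# sign S with P(S = 1) = P(S = -1) = 0.5, as in the excerpt. Which quantity is
# perturbed, and by how much, is assumed here rather than taken from the paper.
import numpy as np

rng = np.random.default_rng(seed=1)
n, p, alpha, shift = 200, 5, 0.1, 5.0
X = rng.standard_normal((n, p))

m = int(np.floor(alpha * n))                  # number of contaminated observations
idx = rng.choice(n, size=m, replace=False)    # randomly chosen observations
S = rng.choice([-1.0, 1.0], size=m)           # random sign with equal probability
X[idx, 0] += S * shift                        # shift the first covariate by +/- shift
```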
“…The L1-norm part of the penalty generates a sparse model by shrinking some regression coefficients exactly to zero. The L2-norm part of the penalty removes the limitation on the number of selected variables, encourages the grouping effect, and stabilizes the L1 regularization path [19]. An efficient algorithm, LARS-EN [15], was proposed to compute the entire Enet regularization path with the computational effort of a single OLS fit.…”
Section: Proposed Variable Selection Methods
confidence: 99%
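The elastic-net behaviour described here (exact zeros from the L1 part, grouping and stabilization from the L2 part) can be seen with any elastic-net logistic solver. The sketch below uses scikit-learn's saga solver rather than the LARS-EN algorithm cited in the excerpt; the simulated data and the C and l1_ratio values are illustrative assumptions.

```python
# Minimal sketch of elastic-net-penalized logistic regression, using scikit-learn's
# saga solver instead of the LARS-EN algorithm cited above; the data and the
# hyperparameter values (C, l1_ratio) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)
X = rng.standard_normal((200, 10))
beta_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])   # sparse ground truth
p_true = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
y = (rng.random(200) < p_true).astype(int)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
model.fit(X, y)
print(np.round(model.coef_, 2))   # the L1 part drives some coefficients exactly to zero
```

With l1_ratio strictly between 0 and 1 the fit combines both penalties, so correlated predictors tend to enter or leave the model together while irrelevant ones are shrunk exactly to zero.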