2020
DOI: 10.1177/0962280220921415

Regression shrinkage methods for clinical prediction models do not guarantee improved performance: Simulation study

Abstract: When developing risk prediction models on datasets with limited sample size, shrinkage methods are recommended. Earlier studies showed that shrinkage results in better predictive performance on average. This simulation study aimed to investigate the variability of regression shrinkage on predictive performance for a binary outcome. We compared standard maximum likelihood with the following shrinkage methods: uniform shrinkage (likelihood-based and bootstrap-based), penalized maximum likelihood (ridge) methods,…
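As a rough illustration of the comparison the abstract describes, the sketch below fits a logistic model to a small simulated dataset with and without an L2 (ridge) penalty. The setup (sample size, coefficients, penalty weight, gradient-descent fitter) is an assumption for illustration, not the paper's actual simulation design; it only shows the basic mechanism that penalization shrinks coefficients toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5                      # small sample, the setting where shrinkage is recommended
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.5, 0.0, 0.0])
prob = 1 / (1 + np.exp(-(X @ beta_true)))
y = rng.binomial(1, prob)

def fit_logistic(X, y, lam=0.0, iters=500, lr=0.1):
    """Gradient-descent fit of logistic regression with optional L2 penalty lam."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1 / (1 + np.exp(-(X @ beta)))
        grad = X.T @ (p_hat - y) / len(y) + lam * beta  # penalty pulls beta toward 0
        beta -= lr * grad
    return beta

beta_ml = fit_logistic(X, y, lam=0.0)     # standard maximum likelihood
beta_ridge = fit_logistic(X, y, lam=0.5)  # ridge-penalized (shrunk) fit

print(np.linalg.norm(beta_ml), np.linalg.norm(beta_ridge))
```

The penalized coefficient vector has a smaller norm than the maximum-likelihood one; the paper's point is that whether this translates into better predictive performance varies considerably from dataset to dataset.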

Cited by 77 publications (151 citation statements)
References 46 publications
“… 76 However, there is an ongoing debate about whether it solves such problems, and a recent study has suggested that although shrinkage can result in improved calibration, it may not be superior in terms of reducing overfitting. 77 Furthermore, shrinkage models lead to biased estimates of the regression coefficients, thus making results more difficult to interpret. Third, in our modelling approach we tested disease-based indicators such as the Charlson Comorbidity Index and CIRS that were developed and validated for other purposes.…”
Section: Discussion
confidence: 99%
“…It is preferable to include shrinkage in the model estimation process using penalized regression (Harrell Jr, 2015). Models based on penalized regression include L1 penalized estimation (least absolute shrinkage and selection operator, lasso), L2 penalized estimation (ridge), and the Firth penalization (Firth, 1993), which is often preferred as the amount of penalization does not have to be estimated from sparse data (Van Calster, van Smeden, De Cock, & Steyerberg, 2020). The mathematics behind these methods are similar.…”
Section: Introduction
confidence: 99%
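The statement above distinguishes L1 (lasso) and L2 (ridge) penalization. A minimal sketch of how the two penalties act on a single least-squares coefficient, assuming an orthonormal design for simplicity (the paper's models are logistic, where the same qualitative behaviour holds):

```python
import numpy as np

def ridge_shrink(b, lam):
    # Ridge rescales the estimate toward zero but never reaches it exactly.
    return b / (1 + lam)

def lasso_shrink(b, lam):
    # Lasso soft-thresholds: estimates smaller than lam in magnitude are set
    # exactly to zero, which is why lasso also performs variable selection.
    return np.sign(b) * max(abs(b) - lam, 0.0)

for b in (2.0, 0.3):
    print(b, ridge_shrink(b, 0.5), lasso_shrink(b, 0.5))
```

For the small coefficient (0.3) the lasso estimate is exactly zero while the ridge estimate is merely reduced; this is the selection-versus-shrinkage distinction behind the choice among lasso, ridge, and Firth penalization.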
“…The least absolute shrinkage and selection operator (lasso) is a popular ML algorithm with outstanding feature selection capability. The lasso preferentially shrinks some predictor coefficients to zero by penalizing the absolute values of the regression coefficients [15,16]. In this study, the optimized logistic regression coefficients were estimated given a boundary ("L1 Norm") on the sum of absolute standardized regression coefficients [15,16].…”
Section: Development, Validation and Performance of ML-based Models
confidence: 99%
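The zeroing behaviour described in this statement can be seen in a minimal coordinate-descent lasso for linear regression (an illustrative stand-in for the penalized logistic fit described above; the data and penalty value are assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 6
X = rng.normal(size=(n, p))
X /= np.linalg.norm(X, axis=0)          # normalize columns so updates are closed-form
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for the lasso with unit-norm columns."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]       # partial residual without feature j
            z = X[:, j] @ r                            # least-squares update (||X_j|| = 1)
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0)  # soft-threshold
    return beta

beta_hat = lasso_cd(X, y, lam=0.2)
print(beta_hat)   # coefficients of weak predictors are driven to exactly zero
```

Predictors whose true effect is zero typically end with coefficients of exactly 0.0, while the strong predictors survive with shrunken values; this is the feature-selection capability the quoted study relies on.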