2019
DOI: 10.1177/0962280219854487

Evaluation of multiple prediction models: A novel view on model selection and performance assessment

Abstract: Model selection and performance assessment for prediction models are important tasks in machine learning, e.g. for the development of medical diagnosis or prognosis rules based on complex data. A common approach is to select the best model via cross-validation and to evaluate this final model on an independent dataset. In this work, we propose to instead evaluate several models simultaneously. These may result from varied hyperparameters or completely different learning algorithms. Our main goal is to increase…
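To make the proposed workflow concrete, the following minimal sketch contrasts the usual select-then-evaluate pipeline with a simultaneous evaluation of several candidate models on the held-out test set. All names (`models`, `evaluate_simultaneously`) are hypothetical, and the simple Bonferroni adjustment stands in for the paper's actual multiplicity correction, which the truncated abstract does not specify.

```python
import numpy as np
from scipy import stats

def evaluate_simultaneously(models, X_test, y_test, alpha=0.05):
    """Evaluate several candidate models at once on the test set.

    Returns a multiplicity-adjusted lower confidence bound on each
    model's accuracy. A Bonferroni correction is used here purely for
    illustration; it is not the correction proposed in the paper.
    """
    n = len(y_test)
    z = stats.norm.ppf(1 - alpha / len(models))  # adjusted quantile
    bounds = {}
    for name, model in models.items():
        acc = np.mean(model.predict(X_test) == y_test)
        se = np.sqrt(acc * (1 - acc) / n)        # Wald standard error
        bounds[name] = acc - z * se              # adjusted lower bound
    return bounds
```

Reporting adjusted bounds for every candidate, rather than for a single winner picked beforehand, is the core idea the abstract describes.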

Cited by 11 publications (39 citation statements)
References 36 publications
“…To arrive at a suitable selection of prediction models for the evaluation study, more work is necessary when sensitivity and specificity are assessed simultaneously. The within 1 SE rule 37 can be adapted to the co-primary endpoint scenario by basing it on the weighted accuracy $\mathrm{wAcc} = w \, Se + (1-w) \, Sp$, $w \in (0,1)$, instead of the overall classification accuracy. We will investigate the performance of this approach with the weight $w = \left( \frac{1-Se_0}{1-Sp_0} + 1 \right)^{-1}$ in Section 5.…”
Section: Model Selection Prior to the Evaluation Study
confidence: 99%
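The quoted statement defines both the weighted accuracy and the weight used to adapt the within 1 SE rule to co-primary endpoints. A minimal sketch of that selection rule, assuming per-model cross-validated estimates of sensitivity and specificity and a crude binomial standard error (a simplification; the original rule uses the standard error of the cross-validation estimate), might look as follows; all function and variable names are hypothetical.

```python
import numpy as np

def within_one_se_selection(se_hat, sp_hat, se0, sp0, n_val):
    """Select models via the within 1 SE rule based on weighted accuracy.

    se_hat, sp_hat : arrays of cross-validated sensitivity/specificity
    se0, sp0       : benchmark values defining the weight from the quote
    n_val          : validation sample size (for a rough standard error)
    """
    w = 1.0 / ((1.0 - se0) / (1.0 - sp0) + 1.0)     # w = ((1-Se0)/(1-Sp0)+1)^(-1)
    wacc = w * se_hat + (1.0 - w) * sp_hat          # wAcc = w*Se + (1-w)*Sp
    se_wacc = np.sqrt(wacc * (1.0 - wacc) / n_val)  # crude binomial SE
    best = np.argmax(wacc)
    # keep every model whose weighted accuracy lies within one standard
    # error of the best model's weighted accuracy
    return np.where(wacc >= wacc[best] - se_wacc[best])[0]
```

Note that with $Se_0 = Sp_0$ the weight reduces to $w = 1/2$, i.e. the unweighted mean of sensitivity and specificity.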
“…We will investigate the finite sample FWER in realistic and least-favorable settings in Section 4. As described by Westphal and Brannath (2019a), we extend ϕ to a multiple test for the actually relevant hypothesis system…”
Section: Study Goal
confidence: 99%
“…Firstly, the extended test retains (asymptotic) FWER control as only non-rejections are added. Secondly, we can compare different model selection strategies because the extended multiple test always operates on $H = H_M$ and not only on $H_S$ (Westphal & Brannath, 2019a).…”
Section: Study Goal
confidence: 99%
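Both statements refer to the same extension step: a multiple test defined on the selected hypotheses $H_S$ is extended to the full system $H_M$ by declaring every unselected hypothesis "not rejected". A minimal sketch under that reading (names are hypothetical):

```python
def extend_test(rejections_S, all_models):
    """Extend test decisions from the selected hypotheses H_S to the
    full system H_M. Unselected hypotheses are simply not rejected,
    so no new false rejections can occur and (asymptotic) FWER
    control carries over from the test on H_S.

    rejections_S : dict mapping selected model names to rejection decisions
    all_models   : iterable of all model names indexing H_M
    """
    return {m: rejections_S.get(m, False) for m in all_models}
```

Because every strategy's decisions then live on the same hypothesis system $H_M$, different model selection strategies become directly comparable, as the second statement notes.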