2014
DOI: 10.1007/s00134-014-3227-6

How to derive and validate clinical prediction models for use in intensive care medicine

Abstract: This narrative review introduces a checklist of 19 items designed to help intensivists develop and transparently report valid clinical prediction models.

Cited by 120 publications (129 citation statements: 0 supporting, 129 mentioning, 0 contrasting)
References 60 publications
Citation statements span publication years 2015–2022
“…The predictive performance of the BAS was evaluated in terms of both discrimination and calibration [21]. Discrimination referred to the ability of the BAS to distinguish mothers with and without early breastfeeding cessation. It was quantified by the area under the receiver operating characteristic (ROC) curve.…”
Section: Discussion (mentioning, confidence: 99%)
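The AUC quoted here as the measure of discrimination is straightforward to compute; below is a minimal sketch using scikit-learn's roc_auc_score, with simulated binary outcomes and predicted risks (all variable names and data are hypothetical, not from the cited study).

```python
# Minimal sketch: quantifying discrimination via the area under the
# ROC curve, assuming binary outcomes and model-predicted risks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: 1 = early breastfeeding cessation, 0 = continued.
y_true = rng.integers(0, 2, size=200)
# Hypothetical predicted risks from a score such as the BAS.
y_prob = np.clip(y_true * 0.3 + rng.uniform(0.0, 0.7, size=200), 0.0, 1.0)

# AUC of 0.5 = no discrimination; 1.0 = perfect discrimination.
auc = roc_auc_score(y_true, y_prob)
print(f"AUC: {auc:.2f}")
```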
“…The number of rib fractures was recorded from radiologists' reports, and the fractures themselves could have been misclassified, given that radiographic diagnosis of rib fracture may be imperfect [43]. Lastly, because the random assignment of participants to the derivation or validation set may have improved the performance of our model [29,30], future external validation could show different classification performance.…”
Section: Limitations (mentioning, confidence: 99%)
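The random split-sample validation this statement describes can be set up in a few lines; here is a minimal sketch, assuming a hypothetical predictor matrix X and binary outcome y and using scikit-learn's train_test_split. As the quote notes, a random split draws both sets from one cohort, so the resulting estimate tends to be optimistic relative to external validation.

```python
# Minimal sketch of a random derivation/validation split,
# assuming hypothetical predictors X and a binary outcome y.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                          # hypothetical predictors
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # hypothetical outcome

# Random assignment to derivation (70%) and validation (30%) sets.
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = LogisticRegression().fit(X_dev, y_dev)

# Both sets come from the same cohort, so this estimate is usually
# optimistic relative to truly external validation.
print("Validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```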
“…Using 11 variables and 41 non-events (3.7 per predictor) results in overfitting of the development sample and inflated performance estimates [2]. This will be evident upon use of the score in other populations. Second, comparing the performance of the new score with four existing scores using the development dataset is against recommendations [2], as this is biased to favor the new score due to overfitting. For comparison with other scores, an independent cohort not used to develop any of the scores must be used [2].…”
(mentioning, confidence: 99%)
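The events-per-variable figure quoted above is simple arithmetic; a short worked check, using only the numbers given in the quote:

```python
# Events-per-variable (EPV) check, using the figures quoted above.
n_nonevents = 41     # the limiting (less frequent) outcome class
n_predictors = 11    # candidate predictor variables

epv = n_nonevents / n_predictors
print(f"EPV = {epv:.1f}")  # 3.7, well below the traditional 10+ rule of thumb

# A low EPV signals a high risk of overfitting and optimistic
# apparent performance in the development sample.
```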
“…The authors state that they used logistic regression analysis to "reassess" the score, which essentially is a recalibration resulting in a new model generating new predictions. This is neither internal nor external validation, which requires assessment of predictions made by the score without modifications in a new sample [2]. Fourth, it is recommended to assess calibration by graphical methods or regressions of the predicted versus observed outcomes [2,4], not by the Hosmer-Lemeshow Ĉ-test, as P > 0.05 is more likely to indicate lack of power than proper model fit when used on small samples. While we agree that clinical prediction rules may be valuable for clinicians considering ECMO, it is a prerequisite that such scores are developed and validated using appropriate methodology [2] and sufficient sample sizes, and that all relevant features are transparently reported with adequate discussion of the limitations [5]. Developing and sufficiently validating a clinical prediction rule for this highly selected patient group likely requires a large, multicentre collaboration to ensure trustworthy predictions that will benefit patients and relatives, the healthcare system, researchers, and society.…”
(mentioning, confidence: 99%)
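The graphical calibration assessment recommended in this statement (predicted versus observed outcomes) can be sketched as below, using scikit-learn's calibration_curve on hypothetical outcomes and predicted risks; this is an illustrative plot, not the cited authors' procedure.

```python
# Minimal sketch of graphical calibration assessment: observed event
# fractions plotted against binned predicted risks (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(2)
y_prob = rng.uniform(0.0, 1.0, size=1000)  # hypothetical predicted risks
y_true = rng.binomial(1, y_prob)           # outcomes drawn at those risks

# Observed event fraction within bins of predicted risk.
frac_observed, mean_predicted = calibration_curve(y_true, y_prob, n_bins=10)

plt.plot(mean_predicted, frac_observed, "o-", label="model")
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.xlabel("Mean predicted risk")
plt.ylabel("Observed event fraction")
plt.legend()
plt.show()
```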