2021
DOI: 10.1161/circoutcomes.121.007858

External Validations of Cardiovascular Clinical Prediction Models: A Large-Scale Review of the Literature

Abstract: Background: There are many clinical prediction models (CPMs) available to inform treatment decisions for patients with cardiovascular disease. However, the extent to which they have been externally tested, and how well they generally perform has not been broadly evaluated. Methods: A SCOPUS citation search was run on March 22, 2017 to identify external validations of cardiovascular CPMs in the Tufts Predictive Analytics and Comparative Effectiveness CPM…

Cited by 43 publications (30 citation statements) | References 47 publications
“…Our prior literature review 6 was unable to examine calibration because it is frequently unreported and, when reported, the metrics used vary from study to study and are largely uninformative with regard to the magnitude of miscalibration (eg, Hosmer-Lemeshow, which yields only a P value that tends to be large in small samples and small in large samples). The validations we performed ourselves revealed that CPM-predicted outcome rates frequently deviate from observed outcome rates even when discrimination was good.…”
Section: Discussion (mentioning)
confidence: 99%
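As a concrete illustration of why a Hosmer-Lemeshow P value alone says little about the magnitude of miscalibration, here is a minimal sketch on simulated data (NumPy and SciPy assumed; none of this is from the review itself): the same systematic ~30% relative overprediction typically produces a nonsignificant P value in a small validation sample and a very small one in a large sample, even though the gap between predicted and observed outcome rates is comparable.

```python
# Minimal sketch (simulated data, not from the review): observed vs predicted
# outcome rates, plus the Hosmer-Lemeshow P value at two sample sizes for the
# same degree of miscalibration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hosmer_lemeshow(y_true, y_pred, groups=10):
    """Hosmer-Lemeshow chi-square statistic over groups of predicted risk."""
    order = np.argsort(y_pred)
    y_true, y_pred = y_true[order], y_pred[order]
    chi2 = 0.0
    for bin_idx in np.array_split(np.arange(len(y_pred)), groups):
        obs = y_true[bin_idx].sum()          # observed events in the group
        exp = y_pred[bin_idx].sum()          # expected events in the group
        n_g = len(bin_idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n_g) + 1e-12)
    p_value = 1 - stats.chi2.cdf(chi2, df=groups - 2)
    return chi2, p_value

for n in (200, 20_000):  # small vs large validation sample
    predicted = rng.uniform(0.05, 0.6, size=n)
    observed = rng.binomial(1, 0.7 * predicted)  # model overpredicts risk by ~30%
    _, p = hosmer_lemeshow(observed.astype(float), predicted)
    print(f"n={n}: predicted rate={predicted.mean():.3f}, "
          f"observed rate={observed.mean():.3f}, HL p={p:.3f}")
```

Reporting predicted and observed rates (or a calibration plot) conveys the magnitude of miscalibration directly, which the P value by itself does not.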
“…Most of those that have been externally validated have been evaluated only once. 6,7 Yet, our prior analysis also called into question the value of these single validations, since discriminatory performance typically varies tremendously when a single model is evaluated on multiple databases. 6 However, there are inherent limitations in literature reviews in understanding how well models perform when evaluated on external data.…”
(mentioning)
confidence: 99%
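To make the discrimination point above concrete, here is a minimal sketch (simulated data; scikit-learn assumed, not the review's actual methodology) in which one fixed prediction model is scored on several hypothetical external cohorts and the c-statistic (ROC AUC) is computed for each, showing how discrimination can shift with case mix and predictor effects.

```python
# Sketch: evaluating one fixed prediction model on several simulated external
# cohorts and comparing the c-statistic (ROC AUC) across them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# "Development" cohort: fit the model once and freeze it.
beta_dev = np.array([1.0, 0.8, 0.5, 0.0, 0.0])
X_dev = rng.normal(size=(2000, 5))
y_dev = rng.binomial(1, 1 / (1 + np.exp(-(X_dev @ beta_dev))))
cpm = LogisticRegression().fit(X_dev, y_dev)

# Hypothetical external cohorts with different predictor effects and baseline risk.
external_cohorts = {
    "cohort_A": (np.array([1.0, 0.8, 0.5, 0.0, 0.0]), 0.0),   # similar population
    "cohort_B": (np.array([0.5, 0.4, 0.2, 0.0, 0.0]), -1.0),  # weaker effects, lower risk
    "cohort_C": (np.array([1.2, 0.1, 0.9, 0.3, 0.0]), 0.5),   # different effect pattern
}

for name, (beta, intercept) in external_cohorts.items():
    X_ext = rng.normal(size=(1500, 5))
    y_ext = rng.binomial(1, 1 / (1 + np.exp(-(intercept + X_ext @ beta))))
    auc = roc_auc_score(y_ext, cpm.predict_proba(X_ext)[:, 1])
    print(f"{name}: c-statistic = {auc:.2f}")
```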
“…The train, validation, and test sets came from the MIMIC-III dataset. However, using an independent dataset from a different system would be beneficial for testing the performance of the model [ 46 ], which leaves room for future work.…”
Section: Discussion (mentioning)
confidence: 99%
“…Different combinations of activation functions were tested with the architecture (12, 156, 32, 1) and a learning rate of 5e-4. The activation functions used were ReLU, Sigmoid, and Tanh.…”
Section: Activation Function (mentioning)
confidence: 99%
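The architecture notation (12, 156, 32, 1) presumably denotes layer widths: 12 input features, two hidden layers of 156 and 32 units, and a single output. Under that assumption, a minimal Keras sketch of the comparison described above might look as follows (the Adam optimizer and binary cross-entropy loss are assumptions, not stated in the excerpt):

```python
# Sketch (assumed reading of "(12, 156, 32, 1)"): 12 input features, hidden layers
# of 156 and 32 units, a single sigmoid output, learning rate 5e-4.
# Optimizer and loss are assumptions for illustration only.
import tensorflow as tf

def build_model(hidden_activation: str) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(12,)),
        tf.keras.layers.Dense(156, activation=hidden_activation),
        tf.keras.layers.Dense(32, activation=hidden_activation),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model

# Compare the activation functions mentioned in the excerpt.
models = {act: build_model(act) for act in ("relu", "sigmoid", "tanh")}
```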
“…Finally, different epoch and batch size combinations were tested, using the architecture (12, 156, 32, 1) with a learning rate of 5e-4 and ReLU as the activation function for the two hidden layers. The best performance was achieved with 150 epochs and a batch size of 30.…”
Section: Epochs and Batch Size (mentioning)
confidence: 99%
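Continuing the same hypothetical sketch (X_train and y_train are placeholder arrays, and build_model is the helper defined above, not code from the cited paper), the reported epoch and batch-size settings would be applied at fit time:

```python
# Sketch: fitting the ReLU variant of the model above with the reported
# settings (150 epochs, batch size 30). X_train / y_train are placeholders.
import numpy as np

X_train = np.random.rand(1000, 12).astype("float32")            # hypothetical features
y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")  # hypothetical labels

model = build_model("relu")  # from the previous sketch
history = model.fit(
    X_train, y_train,
    epochs=150,
    batch_size=30,
    validation_split=0.2,
    verbose=0,
)
print(f"final validation AUC: {history.history['val_auc'][-1]:.3f}")
```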