2013
DOI: 10.4338/aci-2013-04-ra-0029

Comparing predictions made by a prediction model, clinical score, and physicians

Abstract: Background: Asthma exacerbations are one of the most common medical reasons for children to be brought to the hospital emergency department (ED). Various prediction models have been proposed to support diagnosis of exacerbations and evaluation of their severity. Objectives: First, to evaluate prediction models constructed from data using machine learning techniques and to select the best-performing model. Second, to compare predictions from the selected model with predictions from the Pediatric Respirat…

Cited by 41 publications (33 citation statements)
References 26 publications

“…Moreover, the best performance on both original and preprocessed data sets was observed for the naive Bayes classifier. These findings are consistent with our experience with using naive Bayes alone [7], combining random undersampling with naive Bayes [11,12], as well as with other experimental and theoretical studies on random undersampling [5,6,23] and naive Bayes [18].…”
Section: Discussion (supporting)
confidence: 81%
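
The combination named in that excerpt, random undersampling of the majority class followed by a naive Bayes classifier, can be sketched as follows. This is a minimal illustration on synthetic data; the feature set, class balance, and use of scikit-learn's GaussianNB are assumptions for illustration, not the cited authors' actual pipeline.

```python
# Minimal sketch: random undersampling of the majority class, then naive Bayes.
# Synthetic data and GaussianNB are illustrative assumptions, not the cited setup.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Imbalanced synthetic data set: the positive class (e.g. "admit") is rare.
n = 2000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Random undersampling: keep every minority-class case and draw an equal
# number of majority-class cases without replacement.
pos = np.flatnonzero(y_train == 1)
neg = np.flatnonzero(y_train == 0)
keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])

model = GaussianNB().fit(X_train[keep], y_train[keep])
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```
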
“…Systems can be passive (users explicitly make a request for support), semi-active (watchdog systems are invoked automatically and present information when users request it) or active (triggered automatically, present information without it being requested and, in some cases, make decisions without the intervention of clinicians). They have been implemented to support clinicians across the spectrum of medical specialties and have been customised for different levels of clinical expertise from novice (eg student nurses)10 through nonspecialist (eg in community hospital settings)11 to highly specialist healthcare professionals (eg digital pathology).12 Simple CDSSs usually check the input provided by a clinician and verify whether the value is allowable or within a specified range and whether there are any predefined contraindications.…”
Section: Brief Taxonomy Of Clinical Decision Support Systems (mentioning)
confidence: 99%
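
A minimal sketch of the "simple CDSS" behaviour described in that excerpt, checking that an entered value falls in an allowable range and flagging predefined contraindications, is given below. Every range, drug pair, and name in it is an invented placeholder, not clinical guidance.

```python
# Minimal sketch of a "simple CDSS" check: validate that an entered value is
# within an allowable range and flag predefined contraindicated drug pairs.
# All ranges, drugs, and names are illustrative assumptions only.
ALLOWED_RANGES = {
    "heart_rate_bpm": (30, 220),
    "temperature_c": (30.0, 43.0),
}
CONTRAINDICATED_PAIRS = {frozenset({"warfarin", "aspirin"})}

def check_entry(field, value, current_drugs, new_drug):
    """Return a list of warnings for a single clinician input."""
    warnings = []
    lo, hi = ALLOWED_RANGES.get(field, (float("-inf"), float("inf")))
    if not lo <= value <= hi:
        warnings.append(f"{field}={value} is outside the allowable range [{lo}, {hi}]")
    for drug in current_drugs:
        if frozenset({drug, new_drug}) in CONTRAINDICATED_PAIRS:
            warnings.append(f"{new_drug} is contraindicated with {drug}")
    return warnings

# Example: out-of-range heart rate plus a flagged drug interaction.
print(check_entry("heart_rate_bpm", 260, {"warfarin"}, "aspirin"))
```
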
“…[22] Further validation of the PRAM in 782 children ages 2–17 was published as the Pediatric Respiratory Assessment Measure[23] with measurement of inter-rater reliability, predictive ability for hospitalization before initial treatment (c statistic 0.78), after initial bronchodilation (c statistic 0.84), and using a combination of the initial PRAM and change in PRAM after initial treatment (c statistic 0.86). The PRAM has been shown in other populations to be predictive of hospitalization[24] and has been compared directly to PASS,[16] a Bayesian predictive model, clinician prediction,[25] and spirometry. [26]…”
Section: Methods (mentioning)
confidence: 99%
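
The c statistics quoted in that excerpt are concordance probabilities: the chance that a randomly chosen hospitalized child has a higher score than a randomly chosen discharged child, with ties counted as one half. A small worked sketch with invented scores illustrates the calculation.

```python
# Worked sketch of the c statistic (concordance): the probability that a
# randomly chosen admitted child scores higher than a randomly chosen
# discharged child, counting ties as 0.5. The scores below are made up.
from itertools import product

def c_statistic(scores_admitted, scores_discharged):
    pairs = list(product(scores_admitted, scores_discharged))
    concordance = sum(
        1.0 if a > d else 0.5 if a == d else 0.0 for a, d in pairs
    )
    return concordance / len(pairs)

admitted = [9, 7, 10, 6]       # illustrative PRAM scores, hospitalized children
discharged = [3, 5, 2, 6, 4]   # illustrative PRAM scores, discharged children
print(c_statistic(admitted, discharged))  # 0.975 for these made-up scores
```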