2003
DOI: 10.1197/jamia.m1338
Measuring the Impact of Diagnostic Decision Support on the Quality of Clinical Decision Making: Development of a Reliable and Valid Composite Score

Abstract: The scores described can be used as key outcome measures in a larger study to fully assess the value of diagnostic decision aids, such as the ISABEL system.

Cited by 57 publications (40 citation statements)
References 35 publications
“…Computerized decision support tools, such as Isabel and DXplain, offer potential advantages over the simple checklists used in this study because they can narrow the differential diagnosis to fit the patient's clinical data. They have performed well in simulated [10, 28-32] and real settings [28, 29] but have not been tested against usual care in randomized controlled trials.…”
Section: Comparison With Other Studies
confidence: 99%
“…However, a significant issue that arises from expert-based evaluation is inconsistency due to variations in practice, differences in judgment, and differences in degrees of harshness or lenience (Hills 1976; Hripcsak and Wilcox 2002; Takayama et al. 2006; Iramaneerat and Yudkowsky 2007; Norcini and McKinley 2007). Additionally, measures of diagnostic accuracy have been developed by investigators who are interested in assessing the performance of diagnostic decision support systems (Friedman et al. 1998; Berner 2003; Ramnarayan et al. 2003). These efforts have identified important considerations in assessing the quality of a diagnostic hypothesis set, including the presence of a correct diagnosis, the position of the correct diagnosis in the hypothesis list, the plausibility of any incorrect diagnoses, the relevance of diagnoses, the comprehensiveness of the diagnostic set, and the quality of the management plan implied by the diagnostic set (Friedman et al. 1998; Ramnarayan et al. 2003).…”
Section: Introduction
confidence: 97%
“…In fact, definitions of diagnostic quality and accuracy are typically based on the knowledge and judgment of an individual expert or of a small panel of experts (Hripcsak and Wilcox 2002). Alternatively, experts may be asked to perform a diagnostic task in order to set a gold standard relative to which others will be measured, or asked to judge the performance of others (McGaghie et al. 1994; Dillon et al. 2002, 2004; Berner 2003; Boulet et al. 2003; Littlefield et al. 2003; Ramnarayan et al. 2003). However, a significant issue that arises from expert-based evaluation is inconsistency due to variations in practice, differences in judgment, and differences in degrees of harshness or lenience (Hills 1976; Hripcsak and Wilcox 2002; Takayama et al. 2006; Iramaneerat and Yudkowsky 2007; Norcini and McKinley 2007).…”
Section: Introduction
confidence: 99%
“…1 Since 1996, several studies have examined that issue, and yet there is still disagreement on the way to operationalize Miller's proposition. In this issue of the Journal, Ramnarayan et al. 2 describe a variety of metrics to evaluate the performance of a new pediatric diagnostic program, ISABEL. In a previous issue, Fraser et al. 3 also described metrics to evaluate a heart disease program, the HDP.…”
confidence: 99%