2019
DOI: 10.1093/jamia/ocz035
A review of measurement practice in studies of clinical decision support systems 1998–2017

Abstract: Objective To assess measurement practice in clinical decision support evaluation studies. Materials and Methods We identified empirical studies evaluating clinical decision support systems published from 1998 to 2017. We reviewed titles, abstracts, and full paper contents for evidence of attention to measurement validity, reliability, or reuse. We used Friedman and Wyatt's typology to categorize the studies.

Cited by 16 publications (10 citation statements)
References 27 publications
“…Earlier studies have found that use of register data may play a vital role in patient care [12,15,36]. Furthermore, an excess of research has been devoted to evaluating electronic knowledge sources by employing self-reported use, which is prone to biases [37][38][39]. We are not aware of any prior studies examining outcomes of the use of an online knowledge base by relating frequency of use to objective data from quality registries.…”
Section: Discussion
confidence: 99%
“…11 Health informatics evaluations are seldom replicated 12 and often do not follow good practice in reliable measurement of outcomes. 13 Formative evaluations that can help to shape developments and mitigate risks associated with new applications are also done far too infrequently, and where they exist, they are often misaligned with commercial and political timescales. As a result, formative insights may fail to inform concurrent decision-making and thereby fail to mitigate risks.…”
Section: Evaluation of Technology and Evaluation of Services
confidence: 99%