2022
DOI: 10.1287/orsc.2021.1549
To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis

Abstract: Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet, gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major U.S. hospital where AI tools were used in three departments by diagnostic radiolog…

Cited by 185 publications (95 citation statements)
References 91 publications
“…Second, algorithms differ in their interpretability. Compared to rule-based algorithms, which are built on explicit hierarchies of rules and control flows (Lebovitz et al, 2022), ML algorithms are less intelligible to users, who have difficulty assessing the quality of the information they receive. This reduction in algorithmic interpretability correlates with the mathematical knowledge required to understand the model and the information processing required for humans to replicate algorithmic paths and rules (Burrell, 2016).…”
Section: Differences In Algorithmic Support: Accuracy and Interpretab...
confidence: 99%
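The contrast drawn in the citation statement above, between rule-based support that a user can trace step by step and ML-style support whose numeric parameters carry no rule-like meaning, can be illustrated with a minimal sketch. The function names, thresholds, and weights below are illustrative assumptions for exposition only; they do not come from the cited work:

```python
# Minimal sketch: transparent rule-based support vs. opaque ML-style support.
# All thresholds and weights are hypothetical, chosen only for illustration.

def rule_based_triage(temp_c: float, heart_rate: float) -> str:
    """Rule-based support: an explicit hierarchy of rules and control flows.
    A user can trace exactly which rule produced the output."""
    if temp_c >= 38.0:
        return "flag: fever"
    if heart_rate > 100:
        return "flag: tachycardia"
    return "no flag"

def ml_style_score(features: list[float], weights: list[float], bias: float) -> float:
    """ML-style support: the output is a weighted combination of inputs.
    The learned weights carry no rule-like meaning, so replicating the
    path from input to score requires mathematical analysis, not reading."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# The rule-based result is self-explaining; the score is just a number.
print(rule_based_triage(38.5, 80))                      # the fever rule fired
print(ml_style_score([38.5, 80.0], [0.31, -0.07], 1.2)) # an opaque score
```

The point of the sketch is that both functions map the same inputs to a decision-relevant output, but only the first exposes *why*: its control flow is the explanation, whereas the second requires the "mathematical knowledge" Burrell (2016) describes to interpret.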
“…Clinical leaders could highlight the essential role played by humans at each phase of the development and implementation of an AI technology (30). For example, leaders could note that radiologists may relinquish the preliminary scanning of medical images to the AI technology in order to focus on the more complicated cases, or on those that the algorithm finds ambiguous (31); technology developers can include feedback during screening that recommends reaching out for in-person evaluation to verify the clinical recommendation (32).…”
Section: AI-based Automation Technologies for Mental Healthcare
confidence: 99%
“…First, regarding clinicians' daily work practices, ML model outputs are often complex and uninterpretable to humans: the so-called 'black box' problem. The precise reason for an ML model's recommendation often cannot be easily pinpointed, even by data scientists (31). In addition, technology-based approaches are pattern-based, while traditional research methods are hypothesis-driven, and most clinicians lack the training required to engage in the computational thinking these interpretations demand (17, 37).…”
Section: AI-enabled Decision Support Technologies for Mental Healthcare
confidence: 99%
“…The output it generates, however, is often opaque and difficult to interpret. Machine learning as a black box creates "epistemic uncertainty and opacity" [Lebovitz et al 2022]. With the proliferation of machine learning models in all spheres of life, there is a perceived need for mechanisms that help users and other stakeholders better understand algorithmic predictions.…”
Section: Literature Review
confidence: 99%