2022
DOI: 10.2139/ssrn.4246077
Why Providing Humans with Interpretable Algorithms May, Counterintuitively, Lead to Lower Decision-making Performance

Cited by 5 publications (5 citation statements)
References 24 publications
“…However, such computational overconfidence might lead to insufficient utilization of better-performing decision-support models. Similar resistance to an interpretable algorithm in the context of repetitive decision-making under uncertainty has recently been reported by DeStefano et al. (2022). Their novel field experiment was conducted at a retail company while it was switching its stock-reordering decision-support system from an interpretable (weighted moving average) model to an uninterpretable (recurrent neural network) model.…”
Section: Discussion (supporting)
confidence: 55%
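For context, the “interpretable (weighted moving average)” model this excerpt contrasts with a recurrent neural network forecasts the next value as a fixed, inspectable weighting of recent observations. The sketch below is a generic Python illustration; the weights and demand figures are invented for the example and are not taken from DeStefano et al. (2022).

```python
import numpy as np

def weighted_moving_average(demand, weights):
    """Forecast the next value as a weighted average of the most recent
    observations; the explicit, fixed weights are what make the model
    easy to inspect and explain."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    recent = np.asarray(demand[-len(weights):], dtype=float)
    return float(np.dot(weights, recent))

# Illustrative weekly demand for one SKU, weighting recent weeks more
# heavily; 0.2/0.3/0.5 are assumed values, not weights from the study.
weekly_demand = [120, 135, 128, 140, 150]
forecast = weighted_moving_average(weekly_demand, weights=[0.2, 0.3, 0.5])
print(f"next-week demand forecast: {forecast:.1f}")  # -> 142.6
```

Because the forecast is a plain linear combination, a planner can read off exactly how much each past week contributed, which is the kind of transparency the cited experiment traded away when moving to a recurrent neural network.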
“…In fact, empirical studies have found XAI detrimental in uncertain environments, as humans are more likely to reject helpful recommendations because of overconfidence in their troubleshooting abilities [202]. In many cases, automatic identification of anomalies (vide supra) for review by a human operator suffices as long as the anomalies are rare. The human scientist can then invoke their own reasoning, statistical evidence, or other forms of investigation to study the problem.…”
Section: Vc Sample What Can Be Made and How To Make It · Defer Optimi... (mentioning)
confidence: 99%
“…Moreover, the most appropriate models for initial discovery (for both interpretability and extrapolation) may be the types of feature-selected linear models discussed above, obviating the need for more sophisticated black-box model interpretability methods. In fact, empirical studies have found XAI detrimental in uncertain environments, as humans are more likely to reject helpful recommendations because of overconfidence in their troubleshooting abilities. In many cases, automatic identification of anomalies (vide supra) for review by a human operator suffices as long as the anomalies are rare.…”
Section: Recommendations Toward ML For Exceptional Materials (mentioning)
confidence: 99%
“…In fact, empirical studies have found XAI detrimental in uncertain environments, as humans are more likely to reject helpful recommendations because of overconfidence in their troubleshooting abilities [202]. In many cases, automatic identification of anomalies (vide supra) for review by a human operator suffices, so long as the anomalies are rare. The human scientist can then invoke their own reasoning, statistical evidence, or other forms of investigation to study the problem.…”
Section: Try To Fill-in-the-Blanks Of Input and Output Space (mentioning)
confidence: 99%
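As a rough illustration of the “automatic identification of anomalies for review by a human operator” pattern these excerpts describe, a simple statistical filter can queue rare outliers for a scientist's attention instead of explaining every model decision. The z-score rule, threshold, and measurements below are assumptions made for this sketch, not a method from the cited works.

```python
import numpy as np

def flag_anomalies(values, z_threshold=3.0):
    """Flag points whose z-score exceeds a threshold, queueing only the
    rare outliers for human review."""
    values = np.asarray(values, dtype=float)
    z = np.abs((values - values.mean()) / values.std())
    return np.flatnonzero(z > z_threshold)

# Illustrative measurements with one obvious outlier; the threshold of
# 2.0 is an assumed value chosen for this small sample.
measurements = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0, 10.0, 9.7]
for idx in flag_anomalies(measurements, z_threshold=2.0):
    print(f"sample {idx} flagged for review: {measurements[idx]}")
```

Only the rare flagged points reach the human operator, who can then apply their own reasoning or statistical follow-up, matching the division of labor the excerpts recommend over full model explanation.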