2018
DOI: 10.1287/mnsc.2016.2643

Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

Abstract: Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts.

Cited by 726 publications (508 citation statements)
References 26 publications
“…Regarding algorithmic decision aids, studies such as these highlight the need to afford real or perceived decision control to the human user in order to satisfy his or her psychological needs and self‐interest (Colarelli & Thompson, ). In fact, this conclusion corroborates Dietvorst et al.'s (, ) finding that trust in an algorithm degrades quickly upon seeing it err, but that it can be restored just as quickly by allowing the human decision maker to modify the algorithm's judgment, even under constraints. Here, algorithm aversion appears to manifest itself in augmented decision‐making systems that fail to address human users' psychological need for agency, autonomy, and control.…”
Section: Results (mentioning, confidence: 67%)
“…In large part, the recent findings of Dietvorst et al. () are a rework of an old concept: human‐in‐the‐loop decision making. Essentially, this entails an augmented decision‐making system in which the human user semi‐supervises the algorithm by having opportunities to intervene, provide input, and have the final say.…”
Section: Results (mentioning, confidence: 99%)
“…Recently, this effect has been noted in forecasting research (Önkal et al., ) and has been called algorithm aversion (Dietvorst, Simmons, & Massey, ). A developing area of research is trying to identify interventions that increase trust in automation advice, such as providing confidence intervals or allowing human judges to slightly modify automation forecasts (Dietvorst, Simmons, & Massey, ; Goodwin, Gönül, & Önkal, ). This research is important, but more research is needed on the underlying psychological processes that affect the discounting of automation advice, especially in comparison to human advice.…”
Section: Introduction (mentioning, confidence: 99%)
“…This may not be acceptable to society. A large body of research suggests that people's willingness to accept technological risk is governed by factors related not only to the actual risk but also to other characteristics (Sjöberg, 2000; Slovic and Peters, 2006; Dietvorst, Simmons, and Massey, 2014). For example, risks are more acceptable when they are voluntary (which they may not be for the many road users who will have to share the road with HAVs) and if a person can exert control over the outcomes (which is, by definition, not the case for higher levels of vehicle automation) (Starr, 1969; Fischhoff et al., 1978; Otway and von Winterfeldt, 1982; Slovic, 1987, 2000; Dietvorst, Simmons, and Massey, 2016).…”
Section: Introduction (mentioning, confidence: 99%)