2019
DOI: 10.1177/0022243719851788

Task-Dependent Algorithm Aversion

Abstract: Research suggests that consumers are averse to relying on algorithms to perform tasks that are typically done by humans, despite the fact that algorithms often perform better. The authors explore when and why this is true in a wide variety of domains. They find that algorithms are trusted and relied on less for tasks that seem subjective (vs. objective) in nature. However, they show that perceived task objectivity is malleable and that increasing a task's perceived objectivity increases trust in and use of algorithms […]

Cited by 682 publications (512 citation statements); references 53 publications.
“…Much evidence suggests that individuals should show an aversion towards computer algorithms, especially if it concerns decisions from the moral domain. When given the possibility to choose between advice provided by a human or an algorithm, people show a preference for the former and thus exhibit algorithm aversion (Castelo et al., 2019; Dietvorst et al., 2015, 2016; Longoni et al., 2019). Bigman and Gray (2018) have demonstrated via a series of experiments that decisions with a moral component appear to be the domain of humans and not "machines", showing machine-aversion in the moral domain, a tendency that was also documented by Gogoll and Uhl (2018).…”
Section: Introduction
confidence: 83%
“…However, we believe this is most likely due to the belief of some participants that algorithms produce superior outcomes, in accordance with reality [17]. Bias against algorithms re-emerges once participants are informed that humans and algorithms (in our case: an artificial neural network) produce similar outcomes, (presumably) taking away the algorithms’ perceived edge [16].…”
Section: Discussion
confidence: 99%
“…Recent evidence suggests that individuals show an aversion towards computer algorithms under certain conditions. When given the possibility to choose between advice provided by a human or an algorithm, people tend to display a preference for the former and thus exhibit algorithm aversion [16–19]. This algorithm aversion in the original sense can emerge in various contexts, but a common component is that people have to witness the algorithm making a mistake.…”
Section: Hypothesis Development
confidence: 99%
“…Some tasks, like image recognition for instance, are extremely easy for humans, but (currently) difficult for algorithms (Krizhevsky, Sutskever, & Hinton, 2012). Conversely, people may think that making a prediction is a relatively easy task for an algorithm, as it is an objective task involving the integration of multiple pieces of information or complex calculations (Castelo, Bos, & Lehmann, 2019). This view leads us to predict that slower response times will lead to lower quality evaluations of algorithm-generated predictions, as they will signal more effort being exerted for an ostensibly easy task.…”
Section: Prediction Accuracy and Response Time as Information
confidence: 99%