2019
DOI: 10.1002/bdm.2155

A systematic review of algorithm aversion in augmented decision making

Abstract: Despite abundant literature theorizing societal implications of algorithmic decision making, relatively little is known about the conditions that lead to the acceptance or rejection of algorithmically generated insights by individual users of decision aids. More specifically, recent findings of algorithm aversion—the reluctance of human forecasters to use superior but imperfect algorithms—raise questions about whether joint human‐algorithm decision making is feasible in practice. In this paper, we systematical…

Cited by 421 publications (314 citation statements)
References 90 publications
“…Organizations might consider embedding a certain amount of human interaction within their AI‐based staffing systems even at early stages, in order to help ensure that applicants feel they are valued as individuals rather than as data points feeding AI, thereby reducing applicant reliance on negative fairness heuristics that may impact outcomes further into the selection process. This suggestion is supported by Burton, Stein, and Jensen’s (2019) suppositions that human‐algorithm decision systems outperform single‐system decision makers, largely by mitigating aversion and mistrust of AI. One way to achieve this human‐AI blend can be combining the characteristics of AI interviews with that of video‐conference interviews, such that the interview is performed by HR personnel through video chat software, while the responses are still recorded and processed by the AI.…”
Section: Limitations and Implications (mentioning)
Confidence: 89%
“…On the one hand, it has shown that mechanical gathering and combining of information (e.g., combination using ordinary least squares regression) can improve decision quality when compared to clinical gathering and combining of information (e.g., intuition-based combination of information) (Kuncel et al, 2013). On the other hand, this literature found that people are skeptical of the use of mechanical gathering and combining of information (Burton et al, 2019). Some researchers even refer to the reluctance to use mechanically combined information as algorithm aversion (Dietvorst et al, 2015;but cf.…”
Section: Background and Development of Hypotheses (mentioning)
Confidence: 99%
“…Some researchers even refer to the reluctance to use mechanically combined information as algorithm aversion (Dietvorst et al, 2015;but cf. Logg et al, 2019) and indicate that using automated systems within jobs might affect people's behavior and reactions to the job (Burton et al, 2019).…”
Section: Background and Development of Hypotheses (mentioning)
Confidence: 99%