2022
DOI: 10.1093/jopart/muac007

Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice

Abstract: Artificial intelligence algorithms are increasingly adopted as decisional aides by public bodies, with the promise of overcoming the biases of human decision-makers. At the same time, they may introduce new biases into the human–algorithm interaction. Drawing on the psychology and public administration literatures, we investigate two key biases: overreliance on algorithmic advice even in the face of 'warning signals' from other sources (automation bias), and selective adoption of algorithmic advice when this corresponds…

Cited by 84 publications (24 citation statements)
References 50 publications
“…Namely, decision-makers adhere to the algorithmic advice (rather than resist it) precisely when predictions are aligned with prevalent societal stereotypes. These experimental findings are consistent with patterns observed in real-life settings: for example, in the Dutch childcare benefits scandal, mentioned above, ethnic minority citizens (of Moroccan, Turkish and Dutch Antilles origin) were disproportionately impacted (Financial Times 2021). Erroneous algorithmic predictions aligned with prevalent stereotypes, and bureaucratic decision-makers were unlikely to override such predictions (see further Alon-Barkat & Busuioc 2022).…”
Section: Transparency and Human Oversight Requirements - Some Critical... (supporting)
confidence: 86%
“…The influence that AI algorithms have on human decision-makers is poorly understood, and various studies have raised the prospect that human overseers (decisional mediators) could be prone to important cognitive biases in this respect, such as "automation bias", as discussed above. Recent research raises the prospect of additional cognitive biases that can arise in human processing of AI algorithmic outputs: for instance, that human decision-makers are inclined to give more weight and defer to algorithmic recommendations that align with their worldviews, with what they already believe to be true; that is, when predictions conform with pre-existing beliefs and stereotypes (Alon-Barkat & Busuioc 2022). This could inadvertently lead them to make decisions that are harmful and can compound (rather than mitigate) bias.…”
Section: Transparency and Human Oversight Requirements - Some Critical... (mentioning)
confidence: 99%
“…In other words, the studies do not illuminate whether the effects of congruency are unique to AI or whether we can expect a preference for any (AI or human) actor who shows congruency. Overcoming this limitation, [28] found that congruency increased adherence to AI-assisted recommendations, but to a similar degree as recommendations by a co-worker. Hence, [28] showed that individuals did not differentiate between AI and a co-worker in terms of motivated reasoning. Receiving information from both the AI and the co-worker resulted in the same pattern of motivated reasoning.…”
Section: Motivated Reasoning: Effects of Opinion Congruency (mentioning)
confidence: 95%