2023
DOI: 10.1111/puar.13602
‘Just like I thought’: Street‐level bureaucrats trust AI recommendations if they confirm their professional judgment

Abstract: Artificial Intelligence is increasingly used to support and improve street‐level decision‐making, but empirical evidence on how street‐level bureaucrats' work is affected by AI technologies is scarce. We investigate how AI recommendations affect street‐level bureaucrats' decision‐making and if explainable AI increases trust in such recommendations. We experimentally tested a realistic mock predictive policing system in a sample of Dutch police officers using a 2 × 2 factorial design. We found that police offic…

Cited by 26 publications (13 citation statements)
References 58 publications
“…Smart City or Sustainable Development strategies, most mention them rather than design the agenda and next edition based on them. Moreover, in many cases, it is checked whether these results are compliant with the overall strategy and vision of the OGD initiative holder, similar to what Selten et al (2023) found for the trust in AI recommendations by "street-level bureaucrats", which occurs if these recommendations confirm their judgment, what they call "Just like I thought." In other words, these findings can support and perhaps steer further actions, even if they are not consistent with other findings, e.g., public perception or the results of other benchmarks or indices.…”
mentioning
confidence: 65%
“…Interviewee BF20 also referred to this earlier as helping to inform the auditor's understanding of the client's process and controls. Here, machine output is retained but only insofar as it builds on existing knowledge of the client (for a comparison, see Selten et al, 2023). The following interviewee reinforces this point by explaining situations when output is used in a purely ancillary sense, to bolster documentation:…”
Section: Data Analytics, ML/AI, and the "Augmentation" of Judgment
mentioning
confidence: 92%
“…Interviewee BF20 also referred to this earlier as helping to inform the auditor's understanding of the client's process and controls. Here, machine output is retained but only insofar as it builds on existing knowledge of the client (for a comparison, see Selten et al, 2023). The following interviewee reinforces this point by explaining situations when output is used in a purely ancillary sense, to bolster documentation:
But just having that data available, it makes it a lot easier because it's just something that we can easily then include in our documentation and say, “This is the trend,” and [that it] makes sense with our understanding of what's going on in the client's environment, the environment they operate in.
…”
Section: Empirical Analysis and Findings
mentioning
confidence: 99%
“…As the title implies, Selten, Robeer, and Grimmelikhuijsen (2023) explore the effect of artificial intelligence on the bureaucrats charged with carrying out its recommendations. Specifically, they utilized a realistic mock predictive policing system in a sample of Dutch police officers to experimentally investigate whether explainable AI increases trust in such recommendations.…”
Section: In This Issue
mentioning
confidence: 99%