2020
DOI: 10.1525/gp.2020.12908
Algorithmic Long-Term Unemployment Risk Assessment in Use: Counselors’ Perceptions and Use Practices

Abstract: The recent surge of interest in algorithmic decision-making among scholars across disciplines is associated with its potential to resolve the challenges common to administrative decision-making in the public sector, such as greater fairness and equal treatment of each individual, among others. However, algorithmic decision-making combined with human judgment may introduce new complexities with unclear consequences. This article offers evidence that contributes to the ongoing discussion about algorithmic decisi…


Cited by 12 publications (14 citation statements)
References 41 publications
“…Overall, researchers found that the explanations improved the confidence of the decisions, but counter-intuitively, had a somewhat negative effect on the quality of those decisions [80].…”
Section: Methods (mentioning)
confidence: 99%
“…Running example: in the IEFP use case, SHAP factors were given to job counselors to show the top factors influencing a candidate’s score, both positively and negatively [80]. SHAP thus provided a local explanation of the model’s outcome.…”
Section: Methods (mentioning)
confidence: 99%
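The quoted passage describes surfacing, for each candidate, the factors pushing a risk score up or down. For a linear scoring model this has a closed form: the exact SHAP value of feature i is w[i] * (x[i] - E[x[i]]). A minimal sketch of that idea follows; the feature names, weights, and values are invented for illustration and are not taken from the IEFP system.

```python
import numpy as np

# Hypothetical features and weights for a linear long-term-unemployment
# risk score; none of these values come from the IEFP system.
features = ["age", "months_unemployed", "education_years", "prior_jobs"]
weights = np.array([0.02, 0.10, -0.08, -0.03])
baseline = np.array([40.0, 6.0, 12.0, 3.0])  # population feature means

def shap_linear(x):
    """For a linear model score = w @ x + b, the exact SHAP value of
    feature i is w[i] * (x[i] - E[x[i]])."""
    return weights * (x - baseline)

candidate = np.array([52.0, 18.0, 9.0, 1.0])
contribs = shap_linear(candidate)

# Rank factors from most risk-increasing to most risk-decreasing,
# mirroring the top-factors display shown to counselors.
order = np.argsort(contribs)[::-1]
for i in order:
    print(f"{features[i]:>18}: {contribs[i]:+.2f}")
```

By construction the contributions sum to the difference between the candidate's score and the baseline score, which is the local-additivity property that makes such per-candidate explanations interpretable.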
“…However, there is a small body of research documenting service users' experiences of specific AI applications in the social services, particularly users' negative experiences of exclusion and discrimination [21,33], providing context-specific insights into system users' experiences of AI and illustrating the high-stakes nature of implementing AI in this domain. This work, together with some small-scale, mostly qualitative studies involving frontline social service staff [34][35][36][37][38], illustrates the complex and dynamic relationship between AI and the routines of social welfare professionals and indicates mixed reactions to these systems among staff. For example, the study by Zejnilović et al [36] of counselors in a Portuguese employment service in 2020 found high levels of distrust and generally negative perceptions of an AI system used to score clients' risk of long-term unemployment.…”
Section: Existing Research on Perceptions of the ELSI of Using AI in C… (mentioning)
confidence: 99%
“…The system essentially categorised the unemployed as good or bad investments, leading the Human Rights Commissioner to establish the algorithm's decisions as unjust, and in the end the system was banned (Kuziemski and Misuraca, 2020). A central issue here was that there was no clear plan for how human judgement and algorithmic decision-making could be joined and enacted in a public service context (Zejnilovic et al, 2020).…”
(mentioning)
confidence: 99%