Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3173574.3174014
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Abstract: Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions, such as taxation, justice, and child protection, are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addresse…

Cited by 310 publications (247 citation statements). References 66 publications.
“…housing). In contexts in which public administration and private decision-making are being automated or augmented with algorithms, the need for occasional individualised consideration is one of the reasons for keeping a human-in-the-loop, or 'screen-level bureaucrat' on hand [1,8,12,48]. By scrutinising additional information about the individual that the algorithm does not consider, and considering alternative forms of reasoning regarding mitigating circumstances that an algorithm could not, the 'human-in-the-loop' may be able to serve the aim of individual justice.…”
Section: Aristotle's Other Maxim (mentioning, confidence: 99%)
“…Should models be used to increase preventative measures, such as community policing, or to heighten response capacity after crimes have been reported?" (Veale et al., 2018).…”
Section: Discussion and Way Forward (mentioning, confidence: 99%)
“…Since algorithms are being trained on big data, the issues that they identify have a bearing on the regulation of algorithms too. Second, as Veale et al. (2018) point out, practitioners are already deploying these systems in the public sector and 'are facing immediate, value laden challenges'. They suggest that researchers should not assume that public sector practitioners are naive about 'challenges such as fairness and accountability', and urge greater engagement, based on trust, between public bodies and researchers.…”
Section: Algorithms, Big Data and The Search For Public Value (mentioning, confidence: 99%)