2022
DOI: 10.48550/arxiv.2202.05338
Preprint

Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning

A. Feder Cooper,
Emanuel Moss,
Benjamin Laufer
et al.

Abstract: In 1996, philosopher Helen Nissenbaum issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Using the conceptual framing of moral blame, Nissenbaum described four types of barriers to accountability that computerization presented: 1) "many hands," the problem of attributing moral responsibility for outcomes caused by many moral actors; 2) "bugs," a way software developers might shrug off responsibility by s…

Cited by 2 publications (2 citation statements)
References 54 publications
“…If the defendant in question belongs to this 1% of instances, the tool will be guaranteed to fail in this case despite its seemingly impressive accuracy. 5 To formalize this intuition and ground the legal concept of adversarial scrutiny in the technical reality of statistical software, we build on conceptual advances from the literature on distributional robustness in machine learning and algorithmic fairness [15,17,31,40,56,98,121]. At its core, robust adversarial testing requires a tool to perform well on input most relevant to the defendant's case.…”
Section: Robust Adversarial Testing
Confidence: 99%
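The subgroup-failure intuition in the quoted passage can be illustrated with a minimal sketch: a classifier can report near-perfect overall accuracy while failing on the 1% of instances most relevant to a particular defendant. All names and numbers below are invented for illustration and are not from the cited paper.

```python
# Sketch, assuming a synthetic dataset: high overall accuracy can mask
# guaranteed failure on a rare subgroup (e.g., the defendant's profile).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Roughly 1% of instances belong to the rare subgroup.
subgroup = rng.random(n) < 0.01
labels = rng.integers(0, 2, n)

# A hypothetical model that is correct everywhere except on the subgroup.
preds = labels.copy()
preds[subgroup] = 1 - labels[subgroup]

overall_acc = (preds == labels).mean()
subgroup_acc = (preds[subgroup] == labels[subgroup]).mean()
print(f"overall accuracy:  {overall_acc:.3f}")   # ~0.99
print(f"subgroup accuracy: {subgroup_acc:.3f}")  # 0.000
```

This is why robust adversarial testing evaluates performance on the input distribution most relevant to the defendant's case, rather than on aggregate accuracy alone.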
“…The upstream developer and downstream user are often different parties; the steps detailed above might be done only by the upstream model developer, while the downstream end-user must rely solely on the model provided by the developer. Furthermore, either party may be constrained by resources, logistical barriers, or context-specific barriers, leading the downstream model user to employ a different threshold for making final classification decisions than the upstream model developer had anticipated and optimized for [19,26].…”
Section: Introduction
Confidence: 99%
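The threshold mismatch described in this citation statement can be sketched concretely: the same model scores yield different decisions when the downstream user applies a threshold other than the one the upstream developer optimized for. The scores and thresholds below are hypothetical.

```python
# Sketch of upstream/downstream threshold mismatch: identical model scores,
# different final classification decisions. Values are illustrative only.
scores = [0.10, 0.35, 0.55, 0.62, 0.80, 0.91]

dev_threshold = 0.5   # threshold the upstream developer optimized for
user_threshold = 0.7  # threshold the downstream user actually applies

dev_decisions = [s >= dev_threshold for s in scores]
user_decisions = [s >= user_threshold for s in scores]

# Instances classified positive under the developer's threshold but
# negative under the user's: same model, different outcomes.
flipped = sum(d and not u for d, u in zip(dev_decisions, user_decisions))
print(flipped)  # 2  (the scores 0.55 and 0.62)
```

The gap matters for accountability: neither party alone controls the decision boundary actually applied in deployment.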