Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3306618.3314244
Actionable Auditing

Cited by 337 publications (69 citation statements)
References 15 publications
“…744 Algorithmic audits in 2019 uncovered disproportionate performance or biases within AI systems ranging from self-driving-car software that performed differently for darker- and lighter-skinned pedestrians, 745 gender bias in online biographies, 746 skewed representations in object recognition from lower-income environments, 747 racial differences in algorithmic pricing, 748 and differential prioritization in healthcare, 749 as well as performance disparities in facial recognition. 750 In several cases, these audits had a tangible impact on improving the lives of people unfairly affected. 751 They also had a substantial impact on policy discussions.…”
Section: AI and Interdisciplinary Analysis for Problem Solving (mentioning)
confidence: 99%
“…For algorithms that track online behavior in order to personalize (limit) ads and products, audits have focused primarily on issues of transparency or autonomy. Audits of some algorithms, such as facial recognition or affect recognition (Buolamwini and Gebru, 2018; Raji and Buolamwini, 2019), have focused not only on bias, but also on the potential for abuse. The audit instrument we sketch here is meant to be more comprehensive and broadly applicable.…”
Section: What Is an Algorithm Audit? (mentioning)
confidence: 99%
“…For instance, discarding data due to a subpopulation having missing values more commonly (useless answers or not provided) may result in underrepresentation. Decision makers may also favour a privileged group or reinforce stereotypes influenced by cultural norms or one's own beliefs by, for instance, using underrepresentative data to train machine‐learning models (e.g., gender and racial bias found in AI recognition technology 70,71 ). They can also provide different subsets of questions to different groups, thus hindering objectivity and leading to a selective and possibly misleading use of data to support decisions that have already been made.…”
Section: Mapping Missingness and Unfairness (mentioning)
confidence: 99%
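
The excerpt above describes how discarding rows with missing values can underrepresent a subpopulation that is missing data more often. A minimal, hypothetical sketch of that effect, using synthetic data and pandas; the group names, missingness rates, and column names are illustrative assumptions, not taken from the cited work:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical survey: group B fails to report income far more often than group A.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "income": rng.normal(50_000, 10_000, size=n),
})
drop_rate = np.where(df["group"] == "B", 0.40, 0.05)   # assumed missingness rates
df.loc[rng.random(n) < drop_rate, "income"] = np.nan

print(df["group"].value_counts(normalize=True))           # roughly 50/50 before cleaning
print(df.dropna()["group"].value_counts(normalize=True))  # group B now underrepresented

Dropping incomplete rows leaves group B with a noticeably smaller share of the training data than it had in the population, which is the underrepresentation mechanism the quoted passage points to.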