2021
DOI: 10.1145/3479577

Everyday Algorithm Auditing: Understanding the Power of Everyday Users in Surfacing Harmful Algorithmic Behaviors

Abstract: A growing body of literature has proposed formal approaches to audit algorithmic systems for biased and harmful behaviors. While formal auditing approaches have been greatly impactful, they often suffer major blind spots, with critical issues surfacing only in the context of everyday use once systems are deployed. Recent years have seen many cases in which everyday users of algorithmic systems detect and raise awareness about harmful behaviors that they encounter in the course of their everyday interactions with…

Cited by 76 publications (61 citation statements); references 60 publications.
Citation types: 3 supporting, 58 mentioning, 0 contrasting.

Citation statements, ordered by relevance:
“…In contrast to post-hoc audits and evaluations, certificates of robustness provide algorithmic guarantees on the performance of models under certain conditions [76]. Human-in-the-loop auditing processes leverage everyday users to provide further protection once a system is deployed [63], and we heard from participants in both of our studies that they believed IMCs would be valuable for record-keeping and organizational alignment throughout deployment.…”
Section: Background and Related Work
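
The human-in-the-loop auditing idea in the statement above is the subject of the cited paper: once a system is deployed, everyday users become a detection channel for harms that formal pre-deployment audits miss. As a minimal sketch of how such user reports might be collected and triaged, consider the Python below; the class, field names, and threshold are hypothetical illustrations, not drawn from the paper or from any real reporting system.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UserReport:
    """A single end-user report of suspected harmful algorithmic behavior."""
    system: str           # which deployed system the report concerns
    behavior: str         # short label for the suspected harm
    example_input: str    # input that triggered the behavior
    observed_output: str  # what the system actually returned
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def surface_recurring_behaviors(reports, min_reports=3):
    """Group reports by (system, behavior) and return clusters that enough
    independent users have raised to merit human review."""
    counts = Counter((r.system, r.behavior) for r in reports)
    return [key for key, n in counts.items() if n >= min_reports]

if __name__ == "__main__":
    # Four hypothetical users report the same cropping behavior, echoing the
    # kind of collective discovery the audited paper describes.
    reports = [
        UserReport("image-cropper", "crops out darker-skinned faces",
                   "group photo", "crop centered on lighter-skinned face")
        for _ in range(4)
    ]
    print(surface_recurring_behaviors(reports))  # one flagged cluster
```

The threshold rule (flag a behavior once several independent users report it) is only one possible triage design; a real pipeline would also need deduplication, reporter privacy protections, and a route from flagged clusters to human reviewers.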
“…Future work is needed to explore ways to build sustainable, diverse, interdisciplinary communities of practice. Toolkit developers interested in pursuing this vision may be able to draw lessons from other open-source community-building efforts [21,47], or from recent work exploring ways to support collective algorithm auditing [6,22,24,97]. In short, we advocate for future fairness toolkits to position themselves as socio-technical systems that enable more collaborative approaches to ML fairness practice.…”
Section: Fostering Interdisciplinary Communication and Collaboration
“…First, it is widely accepted that algorithms' opacity (Diakopoulos & Koliska, 2017), or what Pasquale (2015) calls the "black box" of algorithmic decision making, makes it difficult to curtail platform power, which has motivated a growing body of empirical research interested in studying algorithms from the outside. This includes methods such as "reverse engineering" (Diakopoulos, 2015), "scraping audits" (Sandvig et al., 2014), "everyday algorithm auditing" (Shen et al., 2021), small-scale observation (Bucher, 2012), and systematic large-scale observation (Rieder et al., 2018). Second, debates around how to hold the media accountable in general, and social media in particular, tend to focus on calls for greater transparency for regulatory inspection (Diakopoulos, 2016; Pasquale, 2015).…”
Section: Social Media Recommender Systems, Exposure Diversity, and Platform Observability
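
Of the outside-in methods listed in the statement above, a scraping audit is the most mechanical to illustrate: the auditor issues identical queries under systematically varied user profiles and records the outputs for later comparison. The Python sketch below shows that skeleton only; the fetch function, profile names, and queries are placeholders, since a real audit depends entirely on what access the platform permits and on its terms of service.

```python
import itertools
import json

def run_scraping_audit(fetch_results, queries, profiles):
    """Cross every query with every profile and collect the raw outputs.

    fetch_results(query, profile) -> list of result identifiers; it stands in
    for whatever access method the audited platform allows (official API,
    instrumented browser, etc.). Returns flat records for later comparison.
    """
    records = []
    for query, profile in itertools.product(queries, profiles):
        records.append({
            "query": query,
            "profile": profile,
            "results": fetch_results(query, profile),
        })
    return records

if __name__ == "__main__":
    # Stub platform that orders results differently per profile, standing in
    # for the live system an auditor would actually scrape.
    def fake_fetch(query, profile):
        base = [f"{query}-item-{i}" for i in range(3)]
        return base if profile == "profile_a" else list(reversed(base))

    audit_log = run_scraping_audit(
        fake_fetch,
        queries=["housing ads", "job listings"],
        profiles=["profile_a", "profile_b"],
    )
    print(json.dumps(audit_log, indent=2))
```

Comparing the `results` lists across profiles for the same query is where the audit proper begins; the statistical machinery for that comparison is outside this sketch.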