2021
DOI: 10.1561/1100000083
Auditing Algorithms: Understanding Algorithmic Systems from the Outside In

Cited by 64 publications (47 citation statements)
References: 0 publications
“…Other work has focused on organisational challenges and barriers that practitioners face when attempting to build more responsible AI products and services [93,94,126], as well as considerations regarding fairness perceptions across cultures [128]. Research has also been directed specifically towards better understanding AI practitioners' needs and the development of frameworks, processes and tools to help assess and audit algorithmic systems for unfair, biased, or otherwise harmful behaviour (e.g., [14,18,94,99,125]), both internally (within the organisations responsible for developing and maintaining these systems) and externally (by independent auditors, users and/or regulators). Particularly relevant here is the work exploring the issues and efficacy of fairness-specific tooling for supporting practitioners, for example, that of Holstein et al. [65], Lee et al. [88] and Deng et al. [39], which considered the perceptions and use of so-called "fairness toolkits" that aim to support ML practitioners with fairness concerns, finding significant disconnects between the tooling and the expectations, needs and practices of practitioners.…”
Section: Algorithmic Fairness (mentioning)
confidence: 99%
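As a concrete illustration of the kind of check such fairness toolkits automate, here is a minimal sketch of one standard metric, the demographic parity difference, for a binary classifier and a single sensitive attribute. The metric itself is standard; the function name and toy data are illustrative assumptions, not drawn from any toolkit discussed in the cited work.

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups (0.0 means parity on this metric)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(y_pred, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy usage: predictions for individuals from two groups (made-up data).
gap, rates = demographic_parity_difference(
    y_pred=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5
```

Real toolkits bundle many such disaggregated metrics; the disconnects reported in the cited studies concern how such numbers fit into practitioners' actual workflows, not the arithmetic itself.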
“…It is now the norm for most platforms to use personalized algorithms to recommend content to their users (Bhandari and Bimo, 2022). Algorithm audits (Metaxa et al., 2021) and internal documents (Wells et al., 2021) indicate that feeds like the TikTok For You page and the Instagram Explore page curate content by identifying users' interests through an analysis of digital trace data, like the posts they look at, like, share, or skip. While the specifics of these processes are rarely made known to the general public, much less everyday users, knowing how these algorithm-based systems work can help people have more agency over what they see and share.…”
Section: Social Media Literacy (mentioning)
confidence: 99%
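To make that mechanism concrete, below is a heavily simplified sketch of engagement-weighted interest inference of the general kind these audits describe. The signal names, weights, and topic structure are illustrative assumptions, not the actual For You or Explore ranking logic, which is not public.

```python
# Toy interest model: each engagement signal nudges the user's score
# for the topics attached to the item they interacted with.
SIGNAL_WEIGHTS = {"watched": 1.0, "liked": 2.0, "shared": 3.0, "skipped": -1.0}

def update_interests(interests, item_topics, signal):
    """Update a user's per-topic interest scores from one trace event."""
    weight = SIGNAL_WEIGHTS[signal]
    for topic in item_topics:
        interests[topic] = interests.get(topic, 0.0) + weight
    return interests

def rank_feed(interests, candidates):
    """Order candidate items by the sum of the user's topic scores."""
    def score(item):
        return sum(interests.get(t, 0.0) for t in item["topics"])
    return sorted(candidates, key=score, reverse=True)

# Toy usage: two trace events, then rank three candidate posts.
user = {}
update_interests(user, ["cooking"], "liked")
update_interests(user, ["news"], "skipped")
feed = rank_feed(user, [
    {"id": 1, "topics": ["news"]},
    {"id": 2, "topics": ["cooking"]},
    {"id": 3, "topics": ["travel"]},
])
print([item["id"] for item in feed])  # [2, 3, 1]
```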
“…Past research has successfully identified harmful and discriminatory behaviors in a variety of algorithmic domains, including search engines [52,54,60], online advertising [45,67], facial recognition [10], word embedding [7], and e-commerce [40]. For example, Buolamwini and Gebru audited three commercial gender classification systems and found that the commercial systems misclassified darker-skinned women more often than white people [10].…”
Section: Algorithm Audit (mentioning)
confidence: 99%
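The core measurement in such an audit can be stated compactly: probe the system from the outside with labeled inputs and compare error rates across demographic subgroups. A minimal sketch follows; the probe data and subgroup labels are made up for illustration, and only the idea of disaggregating errors by group comes from the cited audits.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: (subgroup, true_label, predicted_label) triples
    collected by probing the system as an outside auditor."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(pred != truth)
    return {g: errors[g] / totals[g] for g in totals}

# Toy probe results for a gender classifier, disaggregated in the
# spirit of Buolamwini and Gebru's audit design (fabricated numbers).
probes = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(error_rates_by_group(probes))
# {'darker_female': 0.5, 'lighter_male': 0.0}
```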
“…The presence of biases in algorithmic systems has given rise to auditing approaches, usually led by AI/ML experts, to investigate these systems for harmful behaviors [17,52]. These expert-driven auditing techniques have been successful in finding and mitigating many cases of harmful algorithmic behavior; yet, they suffer from a number of limitations.…”
Section: Introduction (mentioning)
confidence: 99%