2022 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3531146.3533213
Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem

Abstract: Algorithmic audits (or 'AI audits') are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may potentially exacerbate, rather than mitigate, bias and harm. To address this knowledge gap, we provide the first comprehensive …


Cited by 77 publications (35 citation statements)
References 21 publications
“…It was noted in the vein of predictive medicine through Hermansson and Kahan (2018) that overtraining an AI healthcare algorithm on Caucasian data produced poorer predictive care for non-Caucasian groups. This issue falls in line with the poor application of AI in other areas and is an example of how far their use has to go in suitability for routine, un-aided use without perpetuating negative or unhelpful feedback loops of care (Costanza-Chock et al., 2022; Hoffman, 2021; Gebru, 2020; Kong, 2022). Many electronic medical records systems suffer from incomplete connectedness, communication, and completeness, making it difficult for them to be used optimally (Patch et al., 2019).…”
Section: Discussion of Results and Future of DI in Healthcare
Confidence: 93%
“…AI auditing can be done internally (ibid.) or by hiring an impartial third-party auditor (Costanza-Chock et al., 2022). However, as Costanza-Chock et al. (ibid.)…”
Section: On Investigating Algorithms
Confidence: 99%
“…More generally, these harms may erode democracy [87], through election interference or censorship [182]. Moreover, algorithmic systems may exacerbate social inequalities and the reduction of civil liberties within legal systems [120, 160], such as unreasonable searches [132], wrongful arrests [52, 107], or court transcription errors [107]. These harms adversely impact how a nation's institutions or services function [3] and increase societal polarization [182].…”
Section: Political and Civic Harms
Confidence: 99%