2023
DOI: 10.1007/s44206-023-00074-y
Auditing of AI: Legal, Ethical and Technical Approaches

Jakob Mökander

Abstract: AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society’s topical collection on ‘Auditing of AI’, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and tec…

Cited by 18 publications (2 citation statements)
References 151 publications
“…However, the motivation for such audits and the institutional features required to support them are quite different. Audits for algorithmic externalities have been motivated by activists concerned with social injustice and companies complying with current or prospective regulation (Krafft et al., 2021; Mökander, 2023). On the other hand, the pressure to address algorithmic internalities must come primarily from the human principals themselves.…”
Section: Institutions Facilitating Choice Among Algorithmic Agents
confidence: 99%
“…It is also true that many AI systems fail to live up to the claims made by the organizations that deploy them. In these cases, more robust software development practices (Kearns & Roth, 2020), broader impact requirements (Prunkl et al., 2021), guardrails for the use of AI systems (Gasser & Mayer-Schoenberger, 2024), and independent AI audits (Mökander, 2023) can help mitigate risks and prevent harms. However, with respect to the cold, impersonal treatment of decision subjects (O'Neil, 2017), the quantification of social relationships (Mau, 2019), the centralization of decision power and the standardization of decision criteria (Kleinberg & Raghavan, 2021), the automation of cognitive tasks (Brynjolfsson & McAfee, 2014), or the shaping of social preferences through nudges (Thaler & Sunstein, 2008), the problem is not that AI systems ‘don't work’ but that they work ‘too well’ as tools for instrumental rationalization.…”
Section: Introduction
confidence: 99%