2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22)
DOI: 10.1145/3531146.3533158

The Fallacy of AI Functionality

Abstract: Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on "ethical" or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all. To describe the harms of various types of functionality failures, we …

Cited by 95 publications (42 citation statements)
References 91 publications
Citation types: 0 supporting, 33 mentioning, 0 contrasting

Citation statements (ordered by relevance):
“…Other possible reasons include a strong AI hype discourse, hubris, and simmering grudges within the ML community (Narayanan and Kapoor 2022) used by researchers and developers to rationalize downplaying the role of humans. The invisibilization of judgment is a problem for science, as it makes it more difficult to assess algorithms, and an even bigger problem for outsiders such as journalists, policymakers, or lay people, who often take claims and numbers about algorithms that are presented to them for granted (Raji et al. 2022). This culture results in claims like "Art is dead […] A.I.…”
Section: Discussion (mentioning)
confidence: 99%
“…Thus, there is a need for structural reform in academia and industry that takes current configurations of power and incentive structures into consideration. For the tech industry, a variety of anti-trust measures have been proposed that could aid in dealing with opaque, unaccountable algorithms, such as increasing external oversight (Tutt 2016), implementing higher transparency and testing standards (Pasquale 2015; Raji et al. 2022), outlawing harmful and/or pseudo-scientific applications (Stark and Hutson 2021), and breaking up tech companies to undermine monopolization (Falcon 2021). Similarly, measures to protect workers, activists, and experts are also important, such as whistle-blower and unionization protections (Whittaker 2021) and support against company lawsuits targeting individual critics (Corbyn 2022).…”
Section: Discussion (mentioning)
confidence: 99%
“…Ethical issues are, thus, an inherent concern for any company developing and operating AI. With the rapid development and ubiquitous use of AI systems, the focus has extended from AI functionality toward AI ethics (Raji et al. 2022), such as privacy violations (Mazurek and Małagocka 2019), biased predictions (Gebru 2020), and lack of explainability (Doran et al. 2017). As a result, organizations are exposed to the ethical failure of AI, which we define as a situation where AI violates social norms and triggers public criticism (Holweg et al. 2022).…”
Section: The Ethics of AI (mentioning)
confidence: 99%
“…(See Van Miltenburg et al. 2021a for further discussion.) Moreover, error analyses provide a healthy dose of skepticism with regard to system performance, and as such help avoid the fallacy of AI functionality (Raji et al., 2022). Finally, it is simply not possible to automatically evaluate all aspects of NLG output (Raji et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
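The last excerpt argues that manual error analysis complements automatic metrics when judging whether a system actually functions. Purely as an illustrative sketch (none of this comes from the cited papers; the data, the error categories, and the annotate() placeholder are all hypothetical), such an analysis might sample a few NLG outputs and tally human-assigned error categories:

```python
# Minimal sketch of a manual error analysis for NLG output: sample a random
# subset of system outputs and tally the error categories a human annotator
# assigns. All data and category names below are hypothetical stand-ins.
import random
from collections import Counter

# Stand-in NLG outputs paired with their references.
outputs = [
    {"id": 1, "candidate": "the cat sat on the mat", "reference": "the cat sat on the mat"},
    {"id": 2, "candidate": "the cat sat on the the mat", "reference": "the cat sat on the mat"},
    {"id": 3, "candidate": "a dog ran through paris", "reference": "the cat sat on the mat"},
    {"id": 4, "candidate": "the mat sat", "reference": "the cat sat on the mat"},
]

def annotate(example: dict) -> str:
    """Placeholder for the human step: in practice an annotator reads the
    candidate against the reference and picks an error category."""
    labels = {1: "correct", 2: "repetition", 3: "hallucination", 4: "omission"}
    return labels[example["id"]]

random.seed(0)
sample = random.sample(outputs, k=3)  # inspect a manageable random subset

tally = Counter(annotate(ex) for ex in sample)
for category, count in tally.most_common():
    print(f"{category}: {count}/{len(sample)}")
```

The point of the sketch is methodological rather than algorithmic: the frequency table surfaces failure modes (repetition, hallucination, omission) that a single aggregate metric would average away.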