2021
DOI: 10.1007/s43681-021-00052-5
Towards intellectual freedom in an AI Ethics Global Community

Abstract: The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate, as well as academic research settings, involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide…

Cited by 26 publications (8 citation statements)
References 29 publications
“…Teachers should also recognize that the use of new technologies, such as AI, can greatly help to enable teaching and collaboration with all teachers, but there is an equal need to be concerned about the divide that AI can create that will widen the gap between peers in a class. In order to improve teachers' AI literacy, the choice of curriculum, contents, methods, and practice resources for special training should be diverse rather than conformist, as this may result in teachers' agency not being valued [70,71].…”
Section: Discussion
confidence: 99%
“…Thirdly, there is acute potential for conflicts of interest with first or second party audits. For instance, Google researchers on the internal AI team were dismissed and blocked from publishing critiques on the large-scale language models [36,56], which would later be revealed to be critical to the company's future product roadmap [44,160,162]. Similarly, the warnings from Facebook researchers on addressing mounting bias issues and misinformation campaigns was reported to have been internally suppressed [116].…”
Section: The Outsized Focus on Internal Audits
confidence: 99%
“…Whether it's for avoiding more robust regulation and attracting better employees, like it would be in the case of a company like Google (Voinea and Uszkai 2020), or for politicians to signal to the electorate that they care about Responsible AI (post-industrial democracies) or to make their opposition to Western democracies and their WEIRD morality (Haidt 2012) internationally known, it has become clear that we cannot solve a political problem with ethical ramifications (the regulation of AI) just by simply drafting codes of ethics and establishing moral bureaucracies. Even if we were to leave aside the classical criticism of bureaucracies and bureaucrats as being simply budget maximizers (Niskanen 1971;1994), an opaque ethical infrastructure that does not contribute to the development of moral and intellectual virtues for the individuals who actually work with AI (Constantinescu et al 2021) would be nothing more than a waste of both public and private resources, and with potentially deleterious consequences.…”
Section: Concluding Remarks: Regulating AI, a Catch-22 Situation?
confidence: 99%