Since 2016, more than 80 AI ethics documents – including codes, principles, frameworks, and policy strategies – have been produced by corporations, governments, and NGOs. In this paper, we examine three topics of importance related to our ongoing empirical study of ethics and policy issues in these emerging documents. First, we review possible challenges associated with the relative homogeneity of the documents' creators. Second, we provide a novel typology of motivations to characterize both obvious and less obvious goals of the documents. Third, we discuss the varied impacts these documents may have on the AI governance landscape, including what factors are relevant to assessing whether a given document is likely to be successful in achieving its goals.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence; • Social and professional topics → Codes of ethics; Government technology policy.
Like previous educational technologies, artificial intelligence in education (AIEd) threatens to disrupt the status quo, with proponents highlighting the potential for efficiency and democratization, and skeptics warning of industrialization and alienation. However, unlike frequently discussed applications of AI in autonomous vehicles, military and cybersecurity operations, and healthcare, AI's impacts on education policy and practice have not yet captured the public's attention. This paper, therefore, evaluates the status of AIEd, with special attention to intelligent tutoring systems and anthropomorphized artificial educational agents. I discuss AIEd's purported capacities, including the abilities to simulate teachers, provide robust student differentiation, and even foster socio-emotional engagement. Next, to situate developmental pathways for AIEd going forward, I contrast sociotechnical possibilities and risks through two idealized futures. Finally, I consider a recent proposal to use peer review as a gatekeeping strategy to prevent harmful research. This proposal serves as a jumping-off point for recommendations to AIEd stakeholders toward improving their engagement with socially responsible research and implementation of AI in educational systems.
In recent years, numerous public, private, and non-governmental organizations (NGOs) have produced documents addressing the ethical implications of artificial intelligence (AI). These normative documents include principles, frameworks, and policy strategies that articulate the ethical concerns, priorities, and associated strategies of leading organizations and governments around the world. We examined 112 such documents from 25 countries that were produced between 2016 and the middle of 2019. While other studies identified some degree of consensus in such documents, our work highlights meaningful differences across public, private, and non-governmental organizations. We analyzed each document in terms of how many of 25 ethical topics were covered and the depth of discussion for those topics. As compared to documents from private entities, NGO and public sector documents reflect more ethical breadth in the number of topics covered, are more engaged with law and regulation, and are generated through processes that are more participatory. These findings may reveal differences in underlying beliefs about an organization's responsibilities, the relative importance of relying on experts versus including representatives from the public, and the tension between prosocial and economic goals.
The policy agenda is currently being established for artificial intelligence (AI), a domain marked by complex and sweeping implications for economic transformation tempered by concerns about social and ethical risks. This article reviews the United States national AI policy strategy through extensive qualitative and quantitative content analysis of 63 strategic AI policy documents curated by the federal government between 2016 and 2020. Drawing on a prominent theory of agenda setting, the Multiple Streams Framework, and in light of competing paradigms of technology policy, this article reviews how the U.S. government understands the key policy problems, solutions, and issue frames associated with AI. Findings indicate minimal attention to focusing events or problem indicators emphasizing social and ethical concerns, as opposed to economic and geopolitical ones. Further, broad statements noting ethical dimensions of AI often fail to translate into specific policy solutions, which may be explained by a lack of technical feasibility or value acceptability of ethics‐related policy solutions, along with institutional constraints for agencies in specific policy sectors. Finally, despite widespread calls for increased public participation, proposed solutions remain expert dominated. Overall, while the emerging U.S. AI policy agenda reflects a striking level of attention to ethics—a promising development for policy stakeholders invested in AI ethics and more socially oriented approaches to technology governance—this success is only partial and is ultimately layered into a traditional strategic approach to innovation policy.