In this column, we introduce our Model AI Assignment, A Module on Ethical Thinking about Autonomous Vehicles in an AI Course, and, more broadly, open a conversation on the place of ethics education within AI education.
In the moral machine project, participants are asked to form judgments about the well-known trolley example. The project is intended to serve as a starting point for public discussion that would eventually lead to a solution to the social dilemma of autonomous vehicles. The dilemma is that autonomous vehicles should be programmed to maximize the number of lives saved in trolley-style dilemmas, but consumers will only purchase autonomous vehicles that are programmed to favor passenger safety in such dilemmas. We argue that the project is seriously misguided. There are relevant variants of trolley to which the project's participants are not exposed. These variants make clear that the morally correct way to program autonomous vehicles is not at odds with what consumers will purchase. The project is hugely popular and dominates public discussion of this issue. We show that, ironically, the project itself is largely responsible for the dilemma.

Keywords: Moral machine project • Trolley problem • Autonomous vehicles

In the moral machine project, participants are asked to form judgments about variations of this well-known example: Trolley: There is a runaway trolley. If you do nothing, the trolley will hit and kill five people. If you pull a lever, the trolley will be diverted onto another track and kill one person.
Virtue-based approaches to engineering ethics have recently received considerable attention within the field of engineering education. Proponents of virtue ethics in engineering argue that the approach is practically and pedagogically superior to traditional approaches to engineering ethics, including the study of professional codes of ethics and normative theories of behavior. This paper argues that a virtue-based approach, as interpreted in the current literature, is neither practically nor pedagogically effective for a significant subpopulation within engineering: engineers with high-functioning autism spectrum disorder (ASD). Because the main argument for adopting a character-based approach is that it could be more successfully applied to engineering than traditional rule-based or algorithmic ethical approaches, this oversight is problematic for proponents of the virtue-based view. Furthermore, without addressing these concerns, the wide adoption of a virtue-based approach to engineering ethics has the potential to isolate individuals with ASD and to devalue their contributions to moral practice. In the end, this paper gestures towards a way of incorporating important insights from virtue ethics in engineering that would be more inclusive of those with ASD.
A computer science faculty member and a philosophy faculty member collaborated in the development of a one-week introduction to ethics, which was integrated into a traditional AI course. The goals were to: (1) encourage students to think about the moral complexities involved in developing accident algorithms for autonomous vehicles, (2) identify what issues need to be addressed in order to develop a satisfactory solution to the moral issues surrounding these algorithms, and (3) offer students an example of how computer scientists and ethicists must work together to solve complex technical and moral problems. The course module introduced Utilitarianism and engaged students in considering the classic "Trolley Problem," which has gained contemporary relevance with the emergence of autonomous vehicles. Students used this introduction to ethics in thinking through the implications of their final projects. Results from the module indicate that students gained some fluency with Utilitarianism, including a strong understanding of the Trolley Problem. This short paper argues for the need to provide students with instruction in ethics in AI courses. Given the strong alignment between AI's decision-theoretic approaches and Utilitarianism, we highlight the difficulty of encouraging AI students to challenge these assumptions.
In this paper I introduce and solve a puzzle that arises at the intersection of aesthetics and linguistics: the "Paradox of Gustatory Taste." The puzzle essentially involves explaining how taste disagreements can be both subjective and objective, a seemingly paradoxical task. The puzzle has its roots in linguistically based observations about taste disagreements, and has important implications not only for aesthetics but for philosophy of language as well. I claim the paradox can be resolved, without reference to a semantic theory, by appealing to a theory of gustatory value. To demonstrate this point, I develop such a theory, which I model on a Humean approach to aesthetic value presented by Peter Railton. I argue this theory can fully resolve the paradox of taste and has the benefit of remaining neutral with regard to the semantics of predicates of personal taste. If I am correct, this discovery represents a substantial contribution to the dialectic because it offers philosophers and linguists substantial motivation to diminish their reliance on disagreement data in the debate about the semantics of taste.