Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted "electronic personalities." Scholars on both sides of the debate have weighed in on the proposal's feasibility. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal modifies one's stance toward the issue. The results indicate that although online users mainly disfavor AI and robot rights, they support protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment). Furthermore, participants' perceptions became more positive when they were given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how participants perceived the proposal, much as metaphors shape the creation of laws. For robustness, we repeated the experiment with a more representative sample of U.S. residents (N=164) and found that the perceptions of online users and those of the general population are similar.
Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which has not yet been examined through the lens of the general public. We present two studies (N = 3,559) that probe people's views of electronic legal personhood vis-à-vis existing liability models. Our study reveals people's desire to punish automated agents even though these entities are not attributed any mental state. Furthermore, people did not believe that punishing automated agents would achieve deterrence or retribution, and they were unwilling to grant such agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may influence how the public reacts to automated agents' wrongdoings.
The Link Trainer is often described as the first successful attempt at what we now recognize as flight simulation and even virtual reality. Instead of asking how well the device simulated flight conditions, this article shows that what the Link Trainer simulated was not the conditions of the air but rather the conditions of the cockpit, which was gradually being filled with flight instruments. The article also considers the Link Trainer as a cultural space in which shifting ideas about what it meant to be a pilot were manifested. A pilot in the Link Trainer was trained into a new category of flier, the virtual flier, who was an avid reader of instruments and an attentive listener to signals. This article suggests that, by situating the pilot within new spaces, protocols, and relationships, technologies of simulation have constituted the identity of the modern pilot and other operators of machines.