This article aims to introduce a degree of technological and ethical realism to the framing of autonomous vehicle perception and decisionality. The objective is to move the socioethical dialogue surrounding autonomous vehicle decisionality away from the dominance of "trolley framings" and towards more pressing ethical issues. The article argues that more realistic ethical framings of autonomous vehicle technologies should focus on matters of HMI, machine perception, classification, and data privacy, all of which sit some distance from the decisionality premise of the MIT Moral Machine experiment. To support this claim, the article appeals to state-of-the-art and emerging technologies for autonomous vehicle perception and decisionality as a means to inform and frame ethical contexts. This is further supported by a context-specific ethical framing for each anticipated phase of emerging autonomous vehicle technology.
Our social relations are changing: we are no longer talking only to each other, but also to artificial intelligence (AI) assistants. We claim that AI assistants present a new form of digital connectivity risk, and that a key aspect of this risk phenomenon is users' awareness (or lack thereof) of AI assistant functionality. AI assistants present a significant societal risk phenomenon, amplified by the global scale of these products and their increasing use in healthcare, education, business, and the service industry. However, there appears to be little research not only on understanding the risks of AI assistant technologies but also on how to frame and communicate those risks to users. How can users assess the risks without fully understanding the complexity of the technology? This is a challenging and unwelcome scenario. AI assistant technologies form a complex ecosystem that demands explicit and precise communication when contextualising this new digital risk phenomenon. The paper then argues for the need to examine how best to explain and support risk awareness for both domestic and commercial users of AI assistants. To this end, we propose creating a risk narrative focused on temporal points of changing societal connectivity and contextualised in terms of risk. We claim the connectivity risk narrative provides an effective medium for capturing, communicating, and contextualising the risks of AI assistants, one that can support explainability as a risk mitigation mechanism.