Technocrats in many developed countries, especially Japan and South Korea, are preparing for the human-robot co-existence society that they believe will emerge by 2030. Regulators assume that within the next two decades, robots will be capable of adapting to complex, unstructured environments and of interacting with humans to assist in the performance of daily-life tasks. Unlike heavily regulated industrial robots that toil in isolated settings, Next Generation Robots will have relative autonomy, which raises a number of safety issues that are the focus of this article. Our purpose is to describe a framework for a legal system that addresses the safety issues of Next Generation Robots, including a Safety Intelligence concept that addresses robot Open-Texture Risk. We express doubt that a model based on Isaac Asimov's Three Laws of Robotics can ever be a suitable foundation for creating an artificial moral agency that ensures robot safety. Finally, we make predictions about the most significant safety issues for Next Generation Robots that will arise as the human-robot co-existence society emerges.