With the recent emergence of AI technologies, our societies face regulatory challenges concerning their design, manufacture, sale and use. In addition to existing norms, many new 'AI laws' will be needed for early-stage AI governance. However, when it comes to AI, there is a significant gap between hard law and soft law. Although recent years have seen the development of soft law from public institutions and organisations such as the EU and the IEEE, hard law has been less forthcoming. The chief purpose of this paper is to answer the question of why this gap exists and whether 'natural law' can narrow it. To do so, we draw on two supplemental principles from the natural law tradition.
The integration of artificial intelligence (AI) into human society mandates that its decision-making processes be explicable to users, as exemplified in Asimov's Three Laws of Robotics. Such human interpretability calls for explainable AI (XAI), of which this paper cites various models. However, the exchange between computable accuracy and human interpretability can be a trade-off, requiring answers to questions about the conditions under which, and the degree to which, AI prediction accuracy may be sacrificed to enable user interpretability. The extant research has focussed on technical issues, but it is also desirable to apply a branch of ethics to the trade-off problem. This scholarly domain is labelled coarse ethics in this study, which discusses two issues vis-à-vis AI prediction as a type of evaluation. First, which formal conditions would allow trade-offs? The study posits two minimal requisites: adequately high coverage and order-preservation. The second issue concerns the conditions that could justify the trade-off between computable accuracy and human interpretability, for which the study suggests two justification methods: impracticability and adjustment of perspective from the machine-computable to the human-interpretable. This study contributes to the future regulation of autonomous systems by connecting ethics to AI and formally assessing the adequacy of AI rationales.
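To make the two requisites concrete, the sketch below is one possible illustration, not the paper's own formalism: it maps hypothetical fine-grained prediction scores onto a coarse, human-readable grade scale and checks both properties, coverage as the fraction of cases the coarse scale can express at all, and order-preservation as the requirement that a strictly better fine score is never mapped to a strictly worse grade. All function names, grade bands and thresholds here are invented for illustration.

```python
# Illustrative check of the two formal requisites named in the abstract:
# "adequately high coverage" and "order-preservation".
# Names, bands and thresholds are hypothetical, not taken from the paper.
from typing import Optional

def coarsen(score: float) -> Optional[str]:
    """Map a fine-grained model score in [0, 1] to a coarse grade.
    Returns None when the score falls outside the bands (uncovered)."""
    bands = [(0.9, "A"), (0.7, "B"), (0.5, "C"), (0.0, "D")]
    for threshold, grade in bands:
        if score >= threshold:
            return grade
    return None  # e.g. a negative or malformed score is not covered

GRADE_ORDER = {"D": 0, "C": 1, "B": 2, "A": 3}

def coverage(scores: list[float]) -> float:
    """Fraction of cases the coarse scale can express."""
    graded = [s for s in scores if coarsen(s) is not None]
    return len(graded) / len(scores)

def order_preserving(scores: list[float]) -> bool:
    """True if, sorted by fine score, coarse grades never decrease:
    a better fine score is never assigned a worse grade."""
    pairs = sorted((s, GRADE_ORDER[coarsen(s)])
                   for s in scores if coarsen(s) is not None)
    ranks = [rank for _, rank in pairs]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

if __name__ == "__main__":
    scores = [0.95, 0.82, 0.74, 0.51, 0.12]
    print(f"coverage = {coverage(scores):.2f}")              # 1.00 here
    print(f"order-preserving = {order_preserving(scores)}")  # True
```

Note that a monotone banding like this one satisfies order-preservation by construction; the check becomes informative when the coarse evaluation is produced independently of the fine scores, for instance by a human rater.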
Modern technology calls for the legal integration of robots into our society as well as their functional integration. Some scholars and industrialists argue that robots might possess their own property and should pay tax; however, it seems premature to grant electronic personhood to robots at their current technological level. Therefore, another legal institution is needed. With this in mind, Pagallo suggests that the concept of 'specific property' (peculium), which was granted to Roman slaves, could be applied to highly developed robots. He calls it digital peculium (DP). In this paper, I explain what the peculium was in Roman law and compare it with possible future regulations for an autonomous taxicab, in order to clarify the similarities and differences between the Roman peculium and DP. My study identifies two merits of introducing DP. First, a robot may hold its own DP even though it has no personhood. Second, the substantive regulations that were applied to Roman slaves to protect their masters and creditors may be reused without destroying the current legal system. In conclusion, it becomes clear that DP is useful as a chrysalis legal institution for supervising robots before they become autonomous in the truest sense of the word.