Purpose
There is a significant amount of research into the ethical consequences of artificial intelligence (AI), reflected in numerous outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups, and it has recently been shown that there is a large degree of convergence in the principles on which these guidance documents are based. Despite this convergence, it is not always clear how the principles are to be translated into practice. The purpose of this paper is to move beyond this convergence and clearly set out the prescriptive recommendations that such guidance documents entail.
Design/methodology/approach
In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.
Findings
In this paper, the authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper offers the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.
Originality/value
The authors believe that they have compiled the most comprehensive collection of existing guidance, one that can inform practical action and will hopefully also support the consolidation of the guidelines landscape. The authors' findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.