The ethics of artificial intelligence (AI) is a widely discussed topic. Numerous initiatives aim to develop principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What remains generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings from a set of ten case studies and providing an account of the cross-case analysis. The paper reviews the discussion of ethical issues of AI as well as mitigation strategies that have been proposed in the literature. Against this background, the cross-case analysis categorises the organisational responses observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subset of the mitigation strategies proposed in the literature. These insights are important to organisations deploying or using AI and to the academic AI ethics debate, but perhaps most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI.
This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, applying qualitative tools to analyse findings from ten targeted case studies drawn from a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters to offer a comparison with the classification proposed in the literature. The results show that, despite the variety of social domains, fields, and applications of AI, there is overlap and correlation between the organisations' ethical concerns. This more detailed understanding of the ethics of BD + AI is required to ensure that the multitude of suggested mitigation approaches can be targeted effectively and succeed in addressing the pertinent ethical issues that are often discussed in the literature.
The Sustainable Development Goals (SDGs) are internationally agreed goals that allow us to determine what humanity, as represented by 193 member states, finds acceptable and desirable. The paper explores how technology, in particular Smart Information Systems (SIS), can be used to address the SDGs. SIS, the technologies that build on big data analytics and are typically facilitated by AI techniques such as machine learning, are expected to grow in importance and impact. Some of these impacts are likely to be beneficial, notably the growth in efficiency and profits, which will contribute to societal wellbeing. At the same time, there are significant ethical concerns about the consequences of SIS use, including algorithmic biases, job loss, power asymmetries and surveillance. If left uncontrolled, SIS have the potential to exacerbate inequality and further entrench the market dominance of big tech companies. Measuring the impact of SIS on the SDGs thus provides a way of assessing whether an SIS, or an application of such a technology, is acceptable in terms of balancing foreseeable benefits and harms. One possible approach is to use the SDGs as guidelines to determine the ethical nature of SIS implementation. While the idea of using the SDGs as a yardstick to measure the acceptability of emerging technologies is conceptually strong, empirical evidence is needed to support such approaches. The paper describes the findings of a set of six case studies of SIS across a broad range of application areas, such as smart cities, agriculture, finance, insurance and logistics, focusing explicitly on the ethical issues that SIS commonly raise and on empirical insights from organisations using these technologies.