Artificial intelligence (AI) is associated with both positive and negative impacts on people and the planet, and much attention is currently devoted to analyzing and evaluating these impacts. In 2015, the UN adopted the 17 Sustainable Development Goals (SDGs), which encompass environmental, social, and economic goals. This article shows how the SDGs provide a novel and useful framework for analyzing and categorizing the benefits and harms of AI. AI is here considered in context, as part of a sociotechnical system consisting of larger structures and economic and political systems, rather than as a simple tool that can be analyzed in isolation. The article distinguishes between direct and indirect effects of AI and divides the SDGs into five groups based on the kinds of impact AI has on them. While AI has great positive potential, it is also intimately linked to non-universal access to increasingly large data sets and the computing infrastructure required to make use of them. Because a handful of nations and companies control the development and application of AI, important questions arise regarding the potential negative implications of AI for the SDGs. The conceptual framework presented here helps structure the analysis of which SDGs AI might be useful in attaining and which goals are threatened by the increased use of AI.
Artificial intelligence (AI) now permeates all aspects of modern society, and we are simultaneously seeing an increased focus on sustainability in all human activities. All major corporations are now expected to account for their environmental and social footprint and to disclose and report on their activities. This is carried out through a diverse set of standards, frameworks, and metrics related to what is referred to as ESG (environmental, social, and governance), which is increasingly replacing the older term CSR (corporate social responsibility). The challenge addressed in this article is that none of these frameworks sufficiently captures the sustainability-related impacts of AI. This creates a situation in which companies are not incentivised to properly analyse such impacts, while companies that are aware of negative impacts are able to leave them undisclosed. This article proposes a framework for evaluating and disclosing ESG-related AI impacts based on the United Nations' Sustainable Development Goals (SDGs). The core of the framework is presented here, with examples of how it forces an examination of micro-, meso-, and macro-level impacts, a consideration of both negative and positive impacts, and an accounting of ripple effects and interlinkages between the different impacts. Such a framework makes analyses of AI-related ESG impacts more structured, systematic, and transparent, and it allows companies to draw on research in AI ethics in such evaluations. In the closing section, Microsoft's sustainability reporting from 2018 and 2019 is used as an example of how sustainability reporting is currently carried out and how it might be improved by the approach advocated here.
Digital technologies have profound effects on all areas of modern life, including the workplace. Some forms of digitalisation simply exchange paper for digital files, while more complex instances involve machines performing a wide variety of tasks on behalf of humans. While some are wary of the displacement of humans that occurs when, for example, robots perform tasks previously performed by humans, others argue that robots only take over tasks that should have been carried out by robots in the first place, and never by humans. Understanding the impacts of digitalisation in the workplace requires an understanding of the effects of digital technology on the tasks we perform, and these effects are often not foreseeable. In this article, the changing nature of work in the health care sector is used as a case to analyse such change and its implications at three levels: the societal (macro), organisational (meso), and individual (micro) levels. Analysing these transformations with a layered approach helps clarify the actual magnitude of the changes that are occurring and creates the foundation for an informed regulatory and societal response. We argue that, while artificial intelligence, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution rather than infrastructural change. Even though this undermines the assumption that these new technologies constitute a fourth industrial revolution, their effects at the micro and meso levels still require both political awareness and proportional regulatory responses.
Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots insert themselves into human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as they have the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion, regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.