Climate change is a global priority. In 2015, the United Nations (UN) outlined its Sustainable Development Goals (SDGs), which identified taking urgent action to tackle climate change and its impacts as a key priority. The 2021 World Climate Summit closed with calls for governments to take tougher measures to reduce their carbon footprints. However, it is not obvious how governments can practically work towards this goal. One challenge in reducing a carbon footprint is gaining awareness of how energy-intensive a system or mechanism is. Artificial Intelligence (AI) is increasingly being used to solve global problems, and its use could potentially address challenges relating to climate change, but building AI systems often requires vast amounts of up-front computing power and can therefore be a significant contributor to greenhouse gas emissions. If governments are to take the SDGs and the calls to reduce carbon footprints seriously, they need a management and governance mechanism to (i) audit how much their AI systems ‘cost’ in terms of energy consumption and (ii) incentivise individuals to act upon the auditing outcomes, in order to avoid or justify politically controversial restrictions that may be seen as bypassing the creativity of developers. The aim is thus a practical solution, implementable in software design, that incentivises and rewards while respecting the autonomy of developers and designers to come up with smart solutions. This paper proposes such a sustainability management mechanism by introducing the notion of ‘Sustainability Budgets’ (akin to the Privacy Budgets used in Differential Privacy) and by using these to define a ‘Game’ in which participants are rewarded for designing systems that are ‘energy efficient’. Participants in this game include, among others, the Machine Learning developers themselves, a new focus for this problem that this paper introduces. The paper later expands this notion to sustainability management in general and outlines how it might fit into a wider governance framework.
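To make the analogy with Privacy Budgets concrete, the following is a minimal sketch of how a sustainability budget might be tracked during model development. It is not taken from the paper; the class name, the kWh figures, and the charging policy are all illustrative assumptions. Each training run debits a shared energy account, and a run is refused once the allowance is exhausted, mirroring how a privacy accountant refuses further queries once the privacy budget is spent.

```python
# Hypothetical 'Sustainability Budget' tracker, analogous to a privacy-budget
# accountant in Differential Privacy. All names and numbers are illustrative
# assumptions, not the paper's actual mechanism.

class SustainabilityBudget:
    """Tracks energy spent (kWh) against a fixed allowance for a project."""

    def __init__(self, budget_kwh: float):
        self.budget_kwh = budget_kwh
        self.spent_kwh = 0.0

    def can_afford(self, estimated_kwh: float) -> bool:
        """Check a planned run's estimated energy cost before starting it."""
        return self.spent_kwh + estimated_kwh <= self.budget_kwh

    def charge(self, actual_kwh: float) -> None:
        """Debit a completed training run; refuse runs that overspend."""
        if self.spent_kwh + actual_kwh > self.budget_kwh:
            raise RuntimeError("Sustainability budget exhausted")
        self.spent_kwh += actual_kwh

    @property
    def remaining_kwh(self) -> float:
        return self.budget_kwh - self.spent_kwh


# Example: a team with a 500 kWh allowance audits two training runs.
budget = SustainabilityBudget(budget_kwh=500.0)
for run_kwh in (120.0, 90.0):
    if budget.can_afford(run_kwh):
        budget.charge(run_kwh)
print(f"Remaining: {budget.remaining_kwh:.1f} kWh")  # Remaining: 290.0 kWh
```

In a ‘Game’ of the kind the paper proposes, the unspent remainder of such a budget is one plausible quantity that developers could be rewarded for maximising.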
Artificial Intelligence (AI) technologies have the potential to dramatically impact the lives and life chances of people with disabilities seeking employment and throughout their career progression. While these systems are marketed as highly capable and objective tools for decision making, a growing body of research demonstrates a record of inaccurate results as well as inherent disadvantages for women and people of colour (Broussard, 2018; Noble, 2018; O’Neil, 2017). Assessments of fairness in Recruitment AI for people with disabilities have thus far received little attention or have been overlooked entirely (Guo et al., 2019; Petrick, 2015; Trewin, 2018; Trewin et al., 2019; Whittaker et al., 2019). This white paper details the impacts on, and concerns of, disabled employment seekers using AI systems for recruitment, and provides recommendations on the steps employers can take to ensure that innovation in recruitment is also fair to all users. In doing so, we further the point that making systems fairer for disabled employment seekers makes them fairer for all.
In 2016 Microsoft released Tay.ai, a conversational chatbot intended to act like a millennial girl, to the Twittersphere. However, Microsoft took Tay's account down in less than 24 hours because Tay had learnt to tweet racist and sexist statements from its online interactions. Taking inspiration from the theory of morality as cooperation, and from the place of trust in the developmental psychology of socialisation, we offer a multidisciplinary and pragmatic approach that builds on the lessons learnt from Tay's experience to create a chatbot that is more selective in its learning, and thus resistant to becoming immoral the way Tay did.
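As a rough illustration of what ‘selective learning’ could look like in code, consider the sketch below. It is written under our own assumptions, not the authors' implementation: the trust scores, the threshold, and the crude blocklist check are all hypothetical stand-ins for the trust-based socialisation mechanism the paper describes. The idea is simply that a candidate utterance is only admitted as training data if its source is sufficiently trusted and its content passes a check.

```python
# Illustrative sketch of trust-based selective learning for a chatbot.
# The trust scores, threshold, and blocklist are hypothetical assumptions;
# the paper's actual mechanism may differ substantially.

BLOCKED_TERMS = {"slur1", "slur2"}  # stand-in for a real toxicity classifier
TRUST_THRESHOLD = 0.7               # assumed minimum trust to learn from

def is_acceptable(utterance: str) -> bool:
    """Crude content check; a real system would use a toxicity model."""
    return not any(term in utterance.lower() for term in BLOCKED_TERMS)

def select_for_learning(utterance: str, source_trust: float) -> bool:
    """Admit an utterance as training data only if the source is trusted
    and the content passes the check."""
    return source_trust >= TRUST_THRESHOLD and is_acceptable(utterance)

# Example: the same message is learnt from a trusted account but ignored
# when it comes from an untrusted one.
print(select_for_learning("hello there!", source_trust=0.9))  # True
print(select_for_learning("hello there!", source_trust=0.2))  # False
```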
We find ourselves at a unique point in history. After more than two millennia of debate amongst some of the greatest minds that ever existed about the nature of morality, the philosophy of ethics and the attributes of moral agency, and after all that time still without consensus, we are coming to a point where artificial intelligence (AI) technology is enabling the creation of machines that will possess a convincing degree of moral competence. The existence of these machines will undoubtedly have an impact on this age-old debate, but we believe they will have an even greater impact on society at large, as AI technology deepens its integration into the social fabric of our world. The purpose of this special issue on Computing Morality is to bring together different perspectives on this technology and its impact on society. The special issue contains four very different and inspiring contributions.