The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the "black box" of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public's perception of the legitimacy of decisions and decision-makers, and proposes a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy without producing the harms that full transparency would bring.
Trust is often perceived as having great value. For example, there is a strong belief that trust will bring about various public goods and help us preserve common resources. A related concept, which is just as important but perhaps not explicitly discussed to the same extent as "trust", is "reliance" or "confidence". Being able to rely on some agent is often seen as a prerequisite for being able to trust that agent. Up to now, the conceptual discussion about the definitions of trust and reliance has been rational in the sense that most people involved have offered arguments for their respective views, or against competing views. While these arguments rely on some criterion or other, the criteria are rarely explicitly stated, and to our knowledge, no systematic account of such criteria has been offered. In this paper we give an account of the criteria we should use to assess tentative definitions of "trust" and "reliance". We also offer our own well-founded definitions of "trust" and "reliance". We argue that trust should be regarded as a kind of reliance, and we defend what we call "the accountability view" of trust by appealing to the desiderata we identify in the first parts of the paper.
This chapter charts and critically analyses the ethical challenge of assessing how much (and what kind of) evidence is required to justify interventions in response to antibiotic resistance (ABR), as well as other major public health threats. Our ambition here is to identify and briefly discuss the main issues, and to point to ways in which these need to be further advanced in future research. This results in a tentative map of complications, underlying problems, and possible challenges. The map illustrates that the ethical challenges in this area are much more complex and profound than is usually acknowledged, leaving no tentatively plausible intervention package free of downsides. This creates potentially overwhelming theoretical conundrums when trying to justify what to do. We therefore end by pointing out two general features of this complexity that we find to be of particular importance, as well as a tentative suggestion for how to create a theoretical basis for further analysis.
For many years, some urban architecture has aimed to exclude unwanted groups of people from certain locations. This type of architecture is called "defensive" or "hostile" architecture and includes benches that cannot be slept on, spikes in the ground that cannot be stood on, and pieces of metal that hinder one's ability to skateboard. These defensive measures have sparked public outrage, with many thinking that such measures cause suffering, are disrespectful, and violate people's rights. In this paper, it is argued that these views are difficult to defend and that much more empirical research on the topic is needed.