This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, on philosophical literature on the dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder it. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, many aspects of sociotechnical systems must be accounted for to get the full picture of how AI systems may affect human autonomy. It is clear how human agents can hinder each other’s autonomy, for example via coercion or manipulation, or how they can respect it. AI systems can promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems—while not moral agents or bearers of duties, and unable to literally respect or disrespect—are governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems. The responsible people (designers, users, etc.) have duties and ought-to-do norms that correspond to these ought-to-be norms.
As awareness of bias in educational machine learning applications increases, accountability for technologies and their impact on educational equality is becoming an increasingly important constituent of ethical conduct and accountability in education. This article critically examines the relationship between so-called algorithmic fairness and algorithmic accountability in education. I argue that operative political meanings of accountability and fairness are constructed, operationalized, and reciprocally configured in the performance of algorithmic accountability in education. Tools for measuring forms of unwanted bias in machine learning systems, and technical fixes for mitigating them, are value-laden and may conceal the politics behind quantifying educational inequality. Crucially, some approaches may also disincentivize systemic reforms for substantive equality in education in the name of accountability.
Some comparisons yield puzzling results. In the puzzling cases, neither item is determinately better than the other, yet they are not exactly equal either, since improving one of them slightly still does not make it determinately better than the other. What does this kind of incommensurability or incomparability mean for robots? We discuss Ruth Chang’s views in particular, arguing for four claims. First, we defend her view that, despite appearances to the contrary, formal incomparability does not follow: comparing “apples” and “oranges” is not impossible. Second, rational value-assessment turns out to be very complicated because of the non-linear relations between descriptive and evaluative features; these complications pose considerable challenges to robots, whatever views about incommensurability are adopted. Third, Chang’s theory introduces a fourth value relation, “being on a par,” and we argue that (unlike its rivals) it poses a considerable extra challenge for robots, as it is a non-transitive relation (unlike equality or betterness). Fourth, we argue that the exercise of normative powers—Chang’s suggestion for hard choices in contexts of parity—is not available to (fully autonomous) robots.
The practices of organizational talent acquisition are rapidly transforming as a result of the proliferation of information systems that support decision-making, ranging from applicant tracking systems to recruitment chatbots. As part of human resource management (HRM), talent acquisition covers recruitment and team-assembly activities and is allegedly in dire need of digital aid. We analyze the pitfalls and tensions of digitalization in this area through a lens that builds on the interdisciplinary literature on digital ethics. Using three relevant landmark papers, we analyzed qualitative data from 47 interviews with HRM professionals in Finland, including team-assembly facilitators and recruitment experts. The analysis highlights 14 potential tensions and pitfalls, such as the tension between requesting detailed data and respecting privacy, and the pitfall of unequal treatment across application channels. We find that the values of autonomy, fairness, and utility are especially at risk of being compromised. We discuss the tendency toward binary framings of human versus automated decision-making, and the reasons current digital systems are incompatible with organizations’ talent-acquisition needs.