Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
A concept of diversity is an understanding of what makes a group diverse that may be applicable in a variety of contexts. We distinguish three diversity concepts, show that each can be found in discussions of diversity in science, and explain how they tend to be associated with distinct epistemic and ethical rationales. Yet philosophical literature on diversity among scientists has given little attention to distinct concepts of diversity. This is significant because the unappreciated existence of multiple diversity concepts can generate unclarity about the meaning of "diversity," lead to problematic inferences from empirical research, and obscure complex ethical-epistemic questions about how to define diversity in specific cases. We illustrate some ethical-epistemic implications of our proposal by reference to an example of deliberative mini-publics on human tissue biobanking.
Previous simulation models have found positive effects of cognitive diversity on group performance, but have not explored effects of diversity in demographics (e.g., gender, ethnicity). In this paper, we present an agent-based model that captures two empirically supported hypotheses about how demographic diversity can improve group performance. The results of our simulations suggest that, even when social identities are not associated with distinctive task-related cognitive resources, demographic diversity can, in certain circumstances, benefit collective performance by counteracting two types of conformity that can arise in homogeneous groups: those relating to group-based trust and those connected to normative expectations towards in-groups.
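The conformity mechanism sketched in this abstract can be made concrete with a toy agent-based simulation. This is a minimal sketch of our own, not the authors' actual model: the group size, the conformity weights, and the rule that agents anchor on the first speaker's opinion are all illustrative assumptions standing in for the in-group trust and normative-expectation effects the abstract describes.

```python
import random
import statistics

def group_estimate(n_agents, conformity, true_value=0.0, noise=1.0, rng=None):
    """Collective estimate from one simulated group discussion.

    Each agent holds a noisy private signal about `true_value`.
    `conformity` in [0, 1] is how strongly agents shift their stated
    opinion toward the first speaker's signal -- a crude stand-in for
    the group-based trust and normative expectations that can arise
    in demographically homogeneous groups.
    """
    rng = rng or random.Random()
    signals = [true_value + rng.gauss(0, noise) for _ in range(n_agents)]
    anchor = signals[0]  # the first speaker sets the tone
    stated = [conformity * anchor + (1 - conformity) * s for s in signals]
    return statistics.mean(stated)

def mean_abs_error(conformity, trials=2000, seed=0):
    """Average error of the group estimate over many simulated groups."""
    rng = random.Random(seed)
    errors = [abs(group_estimate(8, conformity, rng=rng)) for _ in range(trials)]
    return statistics.mean(errors)

# High conformity (a proxy for homogeneous groups) ties the estimate to a
# single noisy signal; low conformity (a proxy for diverse groups) pools
# independent signals, so its average error should be smaller.
homogeneous = mean_abs_error(conformity=0.9)
diverse = mean_abs_error(conformity=0.1)
```

Even this stripped-down setup reproduces the qualitative pattern the abstract reports: diversity helps not because agents carry different task-related knowledge (all signals here are drawn identically), but because it weakens conformity and lets independent information reach the collective estimate.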
Data-driven algorithms are widely used to make or assist decisions in sensitive domains, including healthcare, social services, education, hiring, and criminal justice. In various cases, such algorithms have preserved or even exacerbated biases against vulnerable communities, sparking a vibrant field of research focused on so-called algorithmic biases. This research includes work on identification, diagnosis, and response to biases in algorithm-based decision-making. This paper aims to facilitate the application of philosophical analysis to these contested issues by providing an overview of three key topics: What is algorithmic bias? Why and how can it occur? What can and should be done about it? Throughout, we highlight connections—both actual and potential—with philosophical ideas and concerns.