The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. And yet, these disciplines often conceptualize the operative values very differently while referring to them using the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. In the service of improving this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations; discusses the role such tools play in furthering research and practice; outlines different conceptions of "fairness" deployed in discussions about computer systems; and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications both to motivate our effort (describing how conflation of different concepts under the banner of "fairness" led to unproductive confusion) and to illustrate the value of the fairness analytic, demonstrating how the rigorous analysis it enables can assist in identifying key areas of theoretical, political, and practical misunderstanding or disagreement, and, where desired, support alignment or collaboration in the absence of consensus. Because use of the terms we consider here is at an early stage of formation, now is the time to attend to the infrastructure necessary to support its development.
BACKGROUND

Particularly with the rise of machine learning as a core technical tool for building computer systems, and the concomitant sense that these systems were not adequately reflecting the values and goals of their designers and creators, the question of how to build values into software systems has gained significant traction in recent years. One focal point for the community has been the rise of the FAT scholarly meetings: FAT/ML, the Workshop on Fairness, Accountability, and Transparency in Machine Learning, held annually since 2014 at a major machine learning conference, and FAT*, the (now ACM) Conference on Fairness, Accountability, and Transparency, which aims to build community beyond research with a machine learning focus. In a similar vein, prior research within Computer Supported Cooperative Work (CSCW) and related fields, such as Human-Computer Interaction (HCI) and Science & Technology Studies, has paid attention to the ways in which technical practices and computational artifacts of all kinds embed or promote a range of social values (e.g., [78, 110, 137, 138, 151]). Research programs developed to focus on values and technology, such as Value Sensitive Design and "values in design," include forms of both analysis to identify and critique values associated with systems, and methods for incorporating values into the processes of engineering and design [48, 4...