Predictive policing systems are increasingly used by law enforcement in an effort to prevent crime before it occurs. But what happens when these systems are trained on biased data? Kristian Lum and William Isaac consider the evidence – and the social consequences.
A recent wave of research has attempted to define fairness quantitatively. In particular, this work has explored what fairness might mean in the context of decisions based on the predictions of statistical and machine learning models. The rapid growth of this new field has led to wildly inconsistent motivations, terminology, and notation, presenting a serious challenge for cataloging and comparing definitions. This article attempts to bring much-needed order. First, we explicate the various choices and assumptions made—often implicitly—to justify the use of prediction-based decision-making. Next, we show how such choices and assumptions can raise fairness concerns, and we present a notationally consistent catalog of fairness definitions from the literature. In doing so, we offer a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision-making.
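To make the flavor of such quantitative definitions concrete, here is a minimal sketch of two entries commonly found in these catalogs: demographic parity and equalized odds. The variable names (D for decisions, Y for outcomes, A for a protected attribute) and all numbers are hypothetical illustrations, not drawn from the article itself.

```python
import numpy as np

# Synthetic decisions D, true outcomes Y, and group labels A.
# All values are hypothetical; real audits would use observed data.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=1000)      # protected attribute (0/1)
Y = rng.binomial(1, 0.3 + 0.1 * A)     # outcomes with differing base rates
D = rng.binomial(1, 0.4 + 0.2 * Y)     # decisions correlated with outcomes

def demographic_parity_gap(D, A):
    """|P(D=1 | A=0) - P(D=1 | A=1)|: equal selection rates across groups."""
    return abs(D[A == 0].mean() - D[A == 1].mean())

def equalized_odds_gaps(D, Y, A):
    """Gaps in true- and false-positive rates across groups."""
    tpr_gap = abs(D[(A == 0) & (Y == 1)].mean() - D[(A == 1) & (Y == 1)].mean())
    fpr_gap = abs(D[(A == 0) & (Y == 0)].mean() - D[(A == 1) & (Y == 0)].mean())
    return tpr_gap, fpr_gap

print(demographic_parity_gap(D, A))
print(equalized_odds_gaps(D, Y, A))
```

A gap of zero would satisfy the corresponding definition exactly; in practice, the literature catalogued here debates which gaps matter and how small they must be.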
Scholars in several fields, including quantitative methodologists, legal scholars, and theoretically oriented criminologists, have launched robust debates about the fairness of quantitative risk assessment. As the Supreme Court considers addressing constitutional questions on the issue, we propose a framework for understanding the relationships among these debates: layers of bias. In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them. The second layer covers biases embedded in data. Using data from a racially biased criminal justice system can lead to unmeasurable biases in both risk scores and outcome measures. The final layer engages conceptual problems with risk models: Is it fair to make criminal justice decisions about individuals based on groups? We show that each layer depends on the layers below it: Without assurances about the foundational layers, the fairness of the top layers is irrelevant.
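As a rough illustration of the tradeoffs mentioned above, the sketch below simulates a risk score that is calibrated by construction within each group, yet still produces different false-positive rates when the groups' base rates differ. Every quantity here is synthetic and assumed purely for illustration, not taken from any real risk-assessment tool.

```python
import numpy as np

# A score equal to each person's true outcome probability is calibrated
# by construction, yet thresholding it yields unequal false-positive
# rates across groups with different base rates.
rng = np.random.default_rng(1)

def group_fpr(base_rate, n=100_000):
    # Hypothetical "risk scores" centered on the group's base rate.
    score = np.clip(rng.normal(base_rate, 0.15, size=n), 0.01, 0.99)
    y = rng.binomial(1, score)        # outcomes drawn at the stated risk
    flagged = score > 0.5             # a common decision threshold
    return flagged[y == 0].mean()     # P(flagged | Y = 0)

print("FPR, base rate 0.3:", group_fpr(0.3))
print("FPR, base rate 0.5:", group_fpr(0.5))
```

The higher-base-rate group ends up with a markedly higher false-positive rate, which is the kind of within-model tension the top layer of the framework addresses.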
Field and laboratory experiments indicate that a number of factors associated with filtration other than pore size alone (e.g., filter diameter, manufacturer, volume of sample processed, and amount of suspended sediment in the sample) can produce significant variations in the “dissolved” concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations results from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters may also be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-μm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.