Organizations often employ data-driven models to inform decisions that can have a significant impact on people's lives (e.g., university admissions, hiring). To protect people's privacy and prevent discrimination, these decision-makers may choose to delete or avoid collecting social category data, such as sex and race. In this article, we argue that such censoring can exacerbate discrimination by making biases more difficult to detect. We begin by detailing how computerized decisions can lead to biases in the absence of social category data and, in some contexts, may even sustain biases that arise by random chance. We then show how proactively using social category data can help illuminate and combat discriminatory practices, drawing on cases from education and employment that suggest strategies for detecting and preventing discrimination. We conclude that discrimination can occur in any sociotechnical system in which someone decides to use an algorithmic process to inform decision-making, and we offer a set of broader implications for researchers and policymakers.
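The feedback mechanism this abstract points to, a system sustaining a disparity that arose purely by chance, can be made concrete with a toy simulation. The sketch below is hypothetical and is not the authors' model: two groups have identical qualification distributions, but a model whose scores are nudged by the composition of its own past selections keeps amplifying whichever small random offset it started with.

```python
# Toy simulation: a random initial scoring disparity between two otherwise
# identical groups persists and grows when the model is updated using only
# the candidates it previously selected. Hypothetical sketch, not the
# system described in the article.
import random

rng = random.Random(42)

def draw_applicants(group, n=200):
    # Both groups draw true qualifications from the same distribution.
    return [(group, rng.gauss(0.0, 1.0)) for _ in range(n)]

# A small score offset per group, arising by chance (e.g., noise in the
# first training run). The model never sees group labels directly; the
# offset stands in for correlated proxy features.
offset = {"X": rng.gauss(0, 0.3), "Y": rng.gauss(0, 0.3)}

for round_ in range(5):
    pool = draw_applicants("X") + draw_applicants("Y")
    # Score = true qualification + learned group offset; admit the top half.
    scored = sorted(pool, key=lambda a: a[1] + offset[a[0]], reverse=True)
    admitted = scored[:100]
    share_x = sum(1 for g, _ in admitted if g == "X") / len(admitted)
    # Updating on admitted candidates only reinforces the initial skew:
    # the over-selected group dominates the training data, so its proxy
    # features keep receiving higher scores next round.
    for g in offset:
        group_share = sum(1 for gg, _ in admitted if gg == g) / len(admitted)
        offset[g] += 0.2 * (group_share - 0.5)
    print(f"round {round_}: share of group X admitted = {share_x:.2f}")
```

Because neither group is actually more qualified, any drift away from a 0.50 admission share is pure feedback on initial noise, which is exactly why such a bias is invisible without the social category data needed to audit it.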
Economic theory about peers can help learning scientists and designers scale their work from small classrooms to limitless learning experiences. I propose two ideas: (1) we may increase productivity in online learning by changing the technologies around peers, since many peer structures can scale with class size; and (2) it is not always in students' best interests to be good peers, and collective action failures may worsen as class size grows. Motivated by these propositions, I conducted an experiment in a NovoEd MOOC for teachers; the results raise future questions about unintended and emergent effects.
This course helps attendees design effective online randomized experiments and A/B tests, in settings ranging from Amazon Mechanical Turk (AMT) to online course platforms. We discuss how to decide whether to run an experiment and what the appropriate comparison is, how to choose or construct outcome measures, and how to run an experiment on AMT. We briefly review statistical significance and provide examples of bootstrapping. For each topic, we present relevant background concepts, discuss how the topic applies to a concrete A/B test (in which treatment A is compared to treatment B), discuss experimental design guidelines and heuristics, and provide web resources for future reference or further learning.
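A bootstrap comparison of two treatments, one of the techniques the course reviews, can be sketched in a few lines. The example below is illustrative only; the data, the function name `bootstrap_diff_ci`, and the parameter choices are assumptions, not course materials.

```python
# Minimal sketch of a bootstrap confidence interval for the difference in
# mean outcomes between treatment A and treatment B. Data and names here
# are hypothetical.
import random

def bootstrap_diff_ci(a, b, n_resamples=10_000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval for mean(B) - mean(A)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_resamples):
        # Resample each group with replacement, preserving group sizes.
        resample_a = [rng.choice(a) for _ in a]
        resample_b = [rng.choice(b) for _ in b]
        diffs.append(sum(resample_b) / len(resample_b)
                     - sum(resample_a) / len(resample_a))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_resamples)]
    hi = diffs[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-participant outcome scores for the two conditions.
treatment_a = [0.62, 0.71, 0.58, 0.66, 0.74, 0.61, 0.69, 0.64]
treatment_b = [0.70, 0.76, 0.68, 0.81, 0.73, 0.79, 0.72, 0.77]

low, high = bootstrap_diff_ci(treatment_a, treatment_b)
print(f"95% bootstrap CI for mean(B) - mean(A): [{low:.3f}, {high:.3f}]")
# If the interval excludes 0, the observed difference is unlikely to be
# explained by sampling variation alone.
```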
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.