The use of crowd workers as research participants is fast becoming commonplace in social, behavioral, and educational research, and institutional review boards are encountering more and more research protocols involving these workers. In what sense are crowd workers vulnerable as research participants, and what should ethics reviewers look out for when evaluating a crowdsourced research protocol? Using the popular crowd-working platform Amazon Mechanical Turk as the key example, this article aims to provide a starting point for a heuristic for ethical evaluation. The first part considers two reputed threats to crowd workers' autonomy (undue inducements and dependent relationships) and finds that autonomy-focused arguments about these factors are inconclusive or inapplicable. The second part proposes instead applying Alan Wertheimer's analysis of exploitation to frame the ethics of crowdsourced research. The article then offers concrete suggestions for ethics reviewers based on the exploitation framework.
Regarding the determination of vulnerability, the bioethics community has unequivocally jettisoned "labelled groups": groups whose membership confers a context-invariant "vulnerable" status on their members. While the usual reasons against relying solely on labelled groups to determine the vulnerability of individuals are sound, labelled groups can nevertheless play indispensable roles in bioethical reasoning as exemplars of vulnerability. In this article, I argue against the wholesale jettisoning of labelled groups by showing how they can remain useful.