The use of crowd workers as research participants is fast becoming commonplace in social, behavioral, and educational research, and institutional review boards are encountering more and more research protocols involving these workers. In what sense are crowd workers vulnerable as research participants, and what should ethics reviewers look for when evaluating a crowdsourced research protocol? Using the popular crowd-working platform Amazon Mechanical Turk as its key example, this article aims to provide a starting point for a heuristic for ethical evaluation. The first part considers two reputed threats to crowd workers' autonomy (undue inducements and dependent relationships) and finds that autonomy-focused arguments about these factors are inconclusive or inapplicable. The second part instead proposes applying Alan Wertheimer's analysis of exploitation to frame the ethics of crowdsourced research. The article then offers some concrete suggestions for ethics reviewers based on the exploitation framework.