Participants with psychiatric symptoms, specific risk factors, or rare demographic characteristics can be difficult to identify and recruit, yet they are crucial for research in the social, behavioral, and clinical sciences. Online research in general, and crowdsourcing software in particular, may offer a solution. However, no research to date has examined the utility of crowdsourcing software for research on psychopathology. In the current study, we examined the prevalence of several psychiatric disorders and related problems, as well as the reliability and validity of participant reports in these domains, among users of Amazon's Mechanical Turk. The findings suggest that crowdsourcing software offers several advantages for clinical research while also highlighting potential problems, such as misrepresentation, that researchers should address when collecting data online.
Crowdsourcing services, particularly Amazon Mechanical Turk, have made it easy for behavioral scientists to recruit research participants. However, researchers have overlooked crucial differences between crowdsourcing and traditional recruitment methods, differences that present unique opportunities and challenges. We show that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in excluding research participants. We describe how both of these problems can be avoided using advanced interface features that also allow prescreening and longitudinal data collection. Using these techniques can minimize the effects of previously ignored drawbacks and expand the scope of crowdsourcing as a tool for psychological research.
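The "advanced interface features" referred to in this abstract correspond, in practice, to Mechanical Turk worker qualifications, which can be managed programmatically. The following Python sketch is not taken from the paper; it assumes the standard boto3 MTurk client, and the qualification name and worker IDs are placeholders. It illustrates one common pattern: tagging workers who completed a first study so they can either be excluded from related experiments or re-recruited for a longitudinal follow-up.

import boto3

# Sandbox endpoint shown; omit endpoint_url to run against the live marketplace.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# 1. Create a qualification marking workers who completed the first study.
qual = mturk.create_qualification_type(
    Name="Completed Study 1",  # placeholder name
    Description="Assigned to workers who participated in our first study",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2. Assign it to every worker who already participated (IDs from Study 1 data).
for worker_id in ["A1EXAMPLEWORKER", "A2EXAMPLEWORKER"]:  # placeholder IDs
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# 3a. Requirement that excludes prior participants from a new, related HIT...
exclude_prior = {
    "QualificationTypeId": qual_id,
    "Comparator": "DoesNotExist",
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}
# 3b. ...or a requirement that recruits only them for a longitudinal wave.
require_prior = {
    "QualificationTypeId": qual_id,
    "Comparator": "Exists",
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}

Either requirement dictionary would then be passed in the QualificationRequirements list when creating the new HIT, so exclusion or re-recruitment happens at the platform level rather than by discarding data after the fact.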
Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students' capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still impair learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers' tendency to transcribe lectures verbatim, rather than processing information and reframing it in their own words, is detrimental to learning.
Crowdsourcing has become an increasingly popular means of flexibly deploying large amounts of human computational power. The present chapter investigates the role of microtask labor marketplaces in managing human and hybrid human-machine computing. Labor marketplaces offer many advantages that, in combination, allow human intelligence to be allocated across projects rapidly and efficiently and information to be transmitted effectively between market participants. Human computation comes with a set of challenges that are distinct from machine computation, including increased unsystematic error (e.g., mistakes) and systematic error (e.g., cognitive biases), both of which can be exacerbated when motivation is low, incentives are misaligned, and task requirements are poorly communicated. We provide specific guidance on how to ameliorate these issues through task design, workforce selection, and data cleaning and aggregation.

Risks and Rewards of Crowdsourcing Marketplaces

The present chapter focuses on the risks and rewards of using online marketplaces to enable crowdsourced human computation. We discuss the strengths and limitations of these marketplaces, with a particular emphasis on the quality of crowdsourced data collected from Amazon Mechanical Turk. Data quality is by far the most important consideration when designing computational tasks, and it can be influenced by many factors. We emphasize Mechanical Turk because it is currently one of the most popular and accessible crowdsourcing platforms and offers low barriers to entry for researchers interested in exploring the uses of crowdsourcing. In addition to describing the strengths and limitations of this platform, we provide general considerations and specific recommendations for measuring and improving data quality that are applicable across crowdsourcing markets.

Crowdsourcing is the distribution of tasks to a large group of individuals via a flexible open call, in which individuals work at their own pace until the task is completed (for a more detailed definition, see Estelles-Arolas & Gonzalez-Ladron-de-Guevara, 2012). Crowd membership is fluid, with low barriers to entry and no minimum commitment. Individuals with heterogeneous skills, motivation, and other resources contribute to tasks in parallel. Crowdsourcing leverages the unique knowledge of individual crowd members, the sheer volume of their collective time and abilities, or both to solve problems that are difficult to solve using computers or smaller, more structured groups. The unique strengths of groups are generally used to solve one of two basic kinds of problems. Some problems have no obvious a priori solution, but correct answers seem obvious once known (e.g., insight problems; Dominowski & Dallob, 1995) or can be verified. In these cases, crowds can generate responses from which the "best" response can be selected according to some criteria. The volume and diversity of workers with different perspectives, strategies, and knowledge can lead to quick, unorthodox, and successful solutions. The Interne...
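As a concrete illustration of the "data cleaning and aggregation" step mentioned above, the following Python sketch shows one simple way to combine redundant worker judgments: majority vote with an agreement threshold for flagging unreliable items. It is not drawn from the chapter; the function name, data layout, and threshold are assumptions chosen for illustration.

from collections import Counter

def aggregate_labels(labels_by_item, min_agreement=0.7):
    """Majority-vote aggregation of redundant worker labels.

    labels_by_item: dict mapping an item id to the list of labels that
    different workers assigned to that item.
    Returns (consensus, flagged): the consensus label per item, plus the
    items whose agreement fell below min_agreement and may need extra
    judgments or manual review.
    """
    consensus, flagged = {}, []
    for item, labels in labels_by_item.items():
        top_label, top_count = Counter(labels).most_common(1)[0]
        consensus[item] = top_label
        if top_count / len(labels) < min_agreement:
            flagged.append(item)
    return consensus, flagged

# Example: three workers labeled each image; "img2" shows low agreement.
judgments = {
    "img1": ["cat", "cat", "cat"],
    "img2": ["cat", "dog", "dog"],
}
print(aggregate_labels(judgments))

Collecting several judgments per item and checking agreement in this way directly targets the unsystematic error (individual mistakes) discussed above; systematic error such as shared cognitive biases is not removed by redundancy alone and usually requires changes to task design or worker selection.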