Citizen science games (CSGs) are a valuable means of motivating citizen science participation. However, many CSGs still suffer from the recruitment and retention issues of traditional citizen science projects, despite a substantial prior literature on what motivates CSG players. In this study, we take a Human-Computer Interaction (HCI) perspective to explore the ways in which CSGs still fail to provide motivating play experiences. We conducted an online survey of 185 players from 9 citizen science games and analyzed the responses using Qualitative Content Analysis. The survey contributes insights into the current state of CSG player experiences and identifies next steps for developers to address these issues. We found that major concerns included scientific communication, instructional design, user interface and controls, task quality, and software issues.
Wetland loss is increasing rapidly, and there are gaps in public awareness of the problem. By crowdsourcing image analysis of wetland morphology, academic and government studies could be supplemented and accelerated while engaging and educating the public. The Land Loss Lookout (LLL) project crowdsourced mapping of wetland morphologies associated with wetland loss and restoration. We demonstrate that volunteers can be trained relatively easily online to identify characteristic wetland morphologies, that is, patterns present on the landscape that suggest a specific geomorphological process. Results from a case study in coastal Louisiana revealed strong agreement between nonexpert and expert assessments, which matched on classifications between 83% and 94% of the time. Participants self-reported increased knowledge of wetland loss after participating in the project. Crowd-identified morphologies are consistent with expectations, although more work is needed to directly compare LLL results with previous studies. This work provides a foundation for using crowd-based wetland loss analysis to increase public awareness of the issue, contribute to land surveys, and train machine learning algorithms.
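The agreement figures reported above refer to simple percent agreement between crowd-derived and expert classifications. The sketch below is a minimal illustration of that calculation, assuming per-image vote lists reduced to a majority label; the label names and data are hypothetical, not taken from the LLL dataset.

```python
from collections import Counter

def majority_label(votes):
    """Return the most common label among a list of crowd votes."""
    return Counter(votes).most_common(1)[0][0]

def percent_agreement(crowd_votes, expert_labels):
    """Fraction of images whose crowd-majority label matches the expert label."""
    matches = sum(
        majority_label(votes) == expert_labels[image_id]
        for image_id, votes in crowd_votes.items()
    )
    return matches / len(crowd_votes)

# Hypothetical example: three images, each with several volunteer votes.
crowd = {
    "img_01": ["fragmentation", "fragmentation", "open_water"],
    "img_02": ["channelization", "channelization", "channelization"],
    "img_03": ["open_water", "fragmentation", "open_water"],
}
expert = {"img_01": "fragmentation", "img_02": "channelization", "img_03": "open_water"}

print(f"Percent agreement: {percent_agreement(crowd, expert):.0%}")
```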
Citizen science projects that rely on human computation can attempt to solicit volunteers or use paid microwork platforms such as Amazon Mechanical Turk. To better understand these approaches, this paper analyzes crowdsourced image labels from an environmental justice project examining wetland loss off the coast of Louisiana. This retrospective analysis identifies key differences between the two populations: while Mechanical Turk workers are accessible, cost-efficient, and rate more images than volunteers on average, their labels are of lower quality, whereas volunteers can achieve high accuracy with comparatively few votes. Volunteer communities can also interface with the educational or outreach goals of an organization in ways that the limited context of microwork prevents.
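One way to see the trade-off described here is to compare majority-vote accuracy against a gold standard as the number of votes per image grows, separately for each worker pool. The sketch below is a hypothetical illustration of that kind of retrospective comparison, not the paper's actual analysis; all identifiers and data are invented.

```python
from collections import Counter

def accuracy_at_k(votes_by_image, gold_labels, k):
    """Accuracy of the majority label when only the first k votes per image are counted."""
    correct = 0
    for image_id, votes in votes_by_image.items():
        majority = Counter(votes[:k]).most_common(1)[0][0]
        correct += majority == gold_labels[image_id]
    return correct / len(votes_by_image)

# Hypothetical data: expert gold labels plus per-image votes from two worker pools.
gold = {"img_01": "loss", "img_02": "no_loss", "img_03": "loss"}
volunteer_votes = {
    "img_01": ["loss", "loss", "loss"],
    "img_02": ["no_loss", "loss", "no_loss"],
    "img_03": ["loss", "no_loss", "loss"],
}
mturk_votes = {
    "img_01": ["loss", "no_loss", "no_loss"],
    "img_02": ["no_loss", "no_loss", "loss"],
    "img_03": ["no_loss", "loss", "loss"],
}

for k in (1, 3):
    print(f"k={k}: volunteers={accuracy_at_k(volunteer_votes, gold, k):.2f}, "
          f"mturk={accuracy_at_k(mturk_votes, gold, k):.2f}")
```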
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.