Evaluation offers many benefits for citizen science, including the ability to inform design and improve project programming; to aid in understanding impacts on volunteer outcomes; to validate project successes; and to advance best practices in the field. However, evaluation and the subsequent use of its findings remain limited in citizen science. Here, we applied an existing typology to document evaluation use among 15 citizen science project leaders who were deeply involved in a collaborative evaluation process. From their evaluation efforts, these leaders gained new and deeper understanding of their volunteers and programming (conceptual use); made critical changes to their projects (programmatic use); shared their evaluation findings with others (dissemination use); and expanded their attitudes and actions with regard to evaluation (process use). Knowledge gains from evaluation prompted the project leaders in our study to change their training, revise their protocols, add resources, and even terminate an unproductive project. Through reports, presentations, and publications, the project leaders shared findings related to skill proficiency with their volunteers, other staff members, practitioners in other citizen science projects, funders, researchers, and evaluators. Our study connects the evaluation-use literature with citizen science practice and offers recommendations to address the challenge of limited application of evaluation within citizen science. As such, this paper can help project leaders understand the important and diverse ways evaluation can support individual projects and the larger field. It also raises questions about the role of collaboration in citizen science evaluation.
This paper is the culmination of several facilitated exercises and meetings between external researchers and five citizen science (CS) project teams who analyzed existing data records to understand CS volunteers' accuracy and skills. The CS teams identified a wide range of skill variables that were "hiding in plain sight" in their data records and that could be explored as part of a secondary analysis, which we define here as an analysis based on data already possessed by the project. Each team identified a small number of evaluation questions to explore with their existing data. Analyses focused on accurate data collection, and all teams chose to add to their analysis complementary records documenting volunteers' project engagement or the data-collection context. Most analyses were conducted as planned and included a range of approaches, from correlation analyses to generalized additive models. Importantly, the results from these analyses were then used to inform the design of both existing and new CS projects, and to inform the field more broadly through a range of dissemination strategies. We conclude by sharing ways that others might consider pursuing their own secondary analysis to help fill gaps in our current understanding related to volunteer skills.
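As a rough illustration of the kind of secondary analysis described above, the sketch below pairs a simple correlation analysis with a generalized additive model relating volunteer accuracy to engagement records. The input file, the column names (n_sessions, months_active, accuracy), and the pygam dependency are illustrative assumptions, not details drawn from the projects in this study.

# Hypothetical secondary analysis: do engagement records already held by
# the project predict a volunteer's data-collection accuracy?
import pandas as pd
from scipy.stats import spearmanr
from pygam import LinearGAM, s

# Existing project records, one row per volunteer (hypothetical file/columns).
df = pd.read_csv("volunteer_records.csv")

# Correlation analysis: is engagement associated with accuracy?
rho, p = spearmanr(df["n_sessions"], df["accuracy"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Generalized additive model: smooth terms allow non-linear effects of
# engagement (n_sessions) and tenure (months_active) on accuracy.
X = df[["n_sessions", "months_active"]].to_numpy()
y = df["accuracy"].to_numpy()
gam = LinearGAM(s(0) + s(1)).fit(X, y)
gam.summary()

The two methods bracket the range of approaches the teams reported: a correlation suffices when a monotonic relationship is plausible, while a GAM accommodates non-linear skill trajectories without committing to a functional form.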
With the widespread availability and pervasiveness of artificial intelligence (AI) across application areas worldwide, crowdsourcing has grown in importance as a way to scale up data-driven algorithms in rapid cycles through a relatively low-cost distributed workforce, or even through volunteers. However, the interplay among the processes and activities that combine crowd and machine in hybrid interaction has not been systematically and empirically examined. To uncover the enduring aspects of the human-centered AI design space when ensembles of crowds and algorithms are involved, along with their symbiotic relations and requirements, we adopt a Computer-Supported Cooperative Work (CSCW) lens rooted in the taxonomic tradition of conceptual scheme development, with the aim of aggregating and characterizing the main component entities in the burgeoning domain of hybrid crowd-AI systems. The goal of this article is thus to propose a theoretically grounded and empirically validated analytical framework for the study of crowd-machine interaction and its environment. Based on a scoping review and several cross-sectional analyses of research on hybrid forms of human interaction with AI systems and applications at crowd scale, the available literature was distilled into a unifying framework of taxonomic units distributed across integration dimensions, ranging from the original time and space axes in which every collaborative activity takes place to the main attributes that constitute a hybrid intelligence architecture. The upshot is that, for the challenges inherent in tasks requiring massive participation, novel properties emerge across a set of potential scenarios that go beyond the single experience of a human interacting with the technology to encompass a vast set of massive machine-crowd interactions.
Citizen science connects scientists with the public to enable discovery, engaging broad audiences across the world. Many attributes make citizen science an asset to the field of heliophysics, including agile collaboration. Agility is the extent to which a person, group of people, technology, or project can work efficiently, pivot, and adapt to adversity. Citizen scientists are agile: they are adaptable and responsive. Citizen science projects and their underlying technology platforms are also agile in the software-development sense, using beta testing and short timeframes to pivot in response to community needs. As they capture scientifically valuable data, citizen scientists can bring expertise from other fields to scientific teams. Through the impact of their projects and communities, citizen scientists serve as a bridge between scientists and the public, facilitating the exchange of information. These attributes of citizen scientists form the framework of agile collaboration. In this paper, we contextualize agile collaboration primarily for aurora chasers, a group of citizen scientists actively engaged in projects and independent data gathering; nevertheless, these insights scale across other domains and projects. Citizen science is an emerging yet proven way of enhancing the current research landscape, and agile collaboration with citizen scientists will become necessary to tackle the next generation's biggest research problems.