Evaluation offers many benefits for citizen science, including the ability to inform design and improve project programming; to aid in understanding impacts on volunteer outcomes; to validate project successes; and to advance best practices in the field. However, evaluation and the subsequent use of its findings in citizen science remain limited. Here, we applied an existing typology to document evaluation use among 15 citizen science project leaders who were deeply involved in a collaborative evaluation process. From their evaluation efforts, these leaders gained new and deeper understanding of their volunteers and programming (conceptual use); made critical changes to their projects (programmatic use); shared their evaluation findings with others (dissemination use); and expanded their attitudes and actions with regard to evaluation (process use). Knowledge gains from evaluation prompted the project leaders in our study to change their training, revise their protocols, add resources, and even terminate an unproductive project. Through reports, presentations, and publications, the project leaders shared findings related to skill proficiency with their volunteers, other staff members, practitioners in other citizen science projects, funders, researchers, and evaluators. Our study connects the evaluation-use literature with citizen science practice and offers recommendations to address the challenge of limited application of evaluation within citizen science. As such, this paper can help project leaders understand the important and diverse ways evaluation can support individual projects and the larger field. It also raises questions about the role of collaboration in citizen science evaluation.
This paper is the culmination of several facilitated exercises and meetings between external researchers and five citizen science (CS) project teams who analyzed existing data records to understand CS volunteers' accuracy and skills. The CS teams identified a wide range of skill variables that were "hiding in plain sight" in their data records and that could be explored as part of a secondary analysis, which we define here as an analysis based on data the project already possesses. Each team identified a small number of evaluation questions to explore with their existing data. Analyses focused on the accuracy of data collection, and all teams chose to add to their analyses complementary records documenting volunteers' project engagement or the data-collection context. Most analyses were conducted as planned and included a range of approaches, from correlation analyses to generalized additive models. Importantly, the results from these analyses were then used to inform the design of both existing and new CS projects, and to inform the field more broadly through a range of dissemination strategies. We conclude by sharing ways that others might consider pursuing their own secondary analysis to help fill gaps in our current understanding of volunteer skills.
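To make the analytic approach concrete, the sketch below shows one way such a secondary analysis might look in Python: a simple correlation screen followed by a generalized additive model relating volunteer accuracy to engagement records. The column names, synthetic data, and choice of the pygam library are all assumptions for illustration and do not reproduce the teams' actual analyses.

```python
# Minimal sketch of a hypothetical secondary analysis: relate volunteer
# accuracy to engagement variables "hiding in plain sight" in project
# records. All variable names and data here are invented for illustration.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from pygam import LinearGAM, s  # assumes pygam is installed

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "sessions": rng.poisson(12, n),          # engagement: sessions logged
    "tenure_months": rng.uniform(1, 36, n),  # engagement: time in project
})
# Synthetic outcome: share of a volunteer's records matching expert review.
df["accuracy"] = np.clip(
    0.6 + 0.01 * df["sessions"] + 0.004 * df["tenure_months"]
    + rng.normal(0, 0.05, n),
    0, 1,
)

# Correlation screen between one engagement variable and accuracy.
rho, p = spearmanr(df["sessions"], df["accuracy"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

# Generalized additive model: accuracy as smooth functions of engagement.
gam = LinearGAM(s(0) + s(1)).fit(df[["sessions", "tenure_months"]], df["accuracy"])
gam.summary()  # prints partial-effect statistics for each smooth term
```

A correlation screen of this kind is cheap to run on records a project already holds, which is part of what made the secondary analyses described above feasible for teams without dedicated analysts.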
Retention of volunteers and participants is a critical concern for programs that rely on their labor, but limited empirical research exists, especially on youth volunteers. This descriptive, cross-sectional quantitative study examined the influence of volunteer motivation, participation, and science project type on the retention of 4-H youth volunteers, ages 12 to 19 years, participating in science projects in three states. An instrument created for this study combined existing survey scales with researcher-developed items drawing on research from the citizen science, volunteer development, and youth development fields. The research revealed that consistency and engagement were correlated with the predictors of retention, whereas race was not. 4-H science programs had a significantly higher likelihood of retaining youth participants than 4-H citizen science programs. Suggestions for youth educators to develop retention strategies are discussed based on the findings, and future research into youth volunteer engagement is proposed.
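For readers unfamiliar with this style of retention modeling, the sketch below shows one plausible way to test whether survey-scale predictors relate to a binary retained/not-retained outcome. The predictor names, effect sizes, and data are hypothetical; the study's actual instrument and analysis may well have differed.

```python
# Hypothetical sketch of a retention analysis: a binary retention outcome
# modeled from standardized survey-scale scores. Names, coefficients, and
# data are invented for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
motivation = rng.normal(0, 1, n)     # standardized motivation-scale score
participation = rng.normal(0, 1, n)  # standardized participation-scale score

# Simulate retention with assumed positive effects of both predictors.
logit = -0.2 + 0.8 * motivation + 0.5 * participation
retained = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a logistic regression of retention on the two scale scores.
X = sm.add_constant(np.column_stack([motivation, participation]))
result = sm.Logit(retained, X).fit(disp=False)
print(result.summary(xname=["const", "motivation", "participation"]))
```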
This paper describes the collaborative process by which a group of citizen science project leaders, evaluators, and researchers worked together to develop, validate, and test embedded assessments of two volunteer science inquiry skills. The development process for creating these embedded assessments (activities integrated into the learning experience that allow learners to demonstrate competencies) is articulated, as are the challenges encountered in assessing two science inquiry skills common in citizen science projects: "notice relevant features" and "record standard observations." The authors investigate the extent to which the assessments succeeded in meeting four criteria identified as ideal for shared embedded assessments of volunteers' skills: that they be broadly applicable, authentic, performance-based, and integrated.