While evaluation capacity building (ECB) may hold promise for fostering evaluation, little is known about how it is operationalized within a network. This article presents initial findings from a National Science Foundation-funded research project (Complex Adaptive Systems as a Model for Network Evaluations) that used concepts from complex adaptive systems theory to develop case studies of ECB within the Nanoscale Informal Science Education Network. The project used a multiple case study approach to explore ECB within four Network workgroups. Cross-case themes documented characteristics of the system and ECB within it. Evaluation capacity was evident in several ways, including people's comfort with evaluation, evaluation-related skills, evaluation processes used, and the value placed on evaluation. Ultimately, the study identified several complex adaptive system features that fostered Network ECB: massive entanglement and neighbor interactions, information
The 2012 election season provided increased opportunities for collaboration among citizens, new media, and democracy. The “social media election” saw a rise in online user-generated political content posted to YouTube. These videos, often satirical in nature, were viewed by millions, making the potential impact of this new form of political communication worthy of inquiry. Using an experimental design, this study explored the relationship between user-generated political satire and “normative” political attitudes. The results revealed that viewing satirical representations of political candidates did not affect individuals’ level of political cynicism or political information efficacy; however, perceptions of candidate credibility and favorability were altered.
Informal STEM education (ISE) organizations, especially museums, have used evaluation productively but unevenly. We argue that advancing evaluation in ISE requires that evaluation capacity building (ECB) broadens to include not only professional evaluators but also other professionals such as educators, exhibit developers, activity facilitators, and institutional leaders. We identify four categories of evaluation capacity: evaluation skill and knowledge, use of evaluation, organizational systems related to conducting or integrating evaluation, and values related to evaluation. We studied a field-wide effort to build evaluation capacity across a network of organizations and found it important to address individuals’ evaluation capacities as well as capacities at the organizational level. Organizational factors that support ECB included redundancy of evaluation capacities across multiple people in an organization, institutional coherence around the value of evaluation, and recognition that ECB can be led from multiple levels of an organizational hierarchy. We argue that the increasing emphasis on evaluation in the ISE field represents an exciting opportunity and that, with targeted strategies and investments, ECB holds great promise for the future of ISE and ISE evaluation.