Scientifically substantiated evaluations are pivotal to ensuring the effectiveness and improvement of the growing number of science communication projects. Yet current evaluation practices are still lacking in various respects. Based on a systematic review of evaluation reports, an online survey of science communication practitioners in the German-speaking countries, and discussion rounds with practitioners, we discuss three main challenges of science communication evaluation: (1) Impact goals are conflated with measurable project objectives, and objectives and target groups are rarely defined precisely, which complicates the assessment of projects' success. (2) Although many evaluations highlight the impact-oriented interest of those responsible, the methods chosen rarely allow scientifically valid assessments of effects; the lack of comparative reference points and the partially unsuitable use of self-report measures are key issues in this regard. (3) The fact that few evaluation processes are made transparent and that formative evaluation designs are a rarity indicates a tendency to understand evaluation as the final ‘success story’ of a project rather than as a learning process. This stands in the way of a constructive discussion of the actual impact of science communication. Our exploratory insights contribute to an understanding of the weaknesses of science communication evaluation and of the needs in the field. They also provide impulses for future improvements for stakeholders in practice, research, funding, and science management.
Summary (translated from German): Evaluations offer important added value for science communication, because their results make it possible to design future science communication in a goal-oriented and effective way. At present, however, the evaluation of science communication in Germany still faces challenges. Problems arise even before an evaluation begins, owing to a lack of strategic planning of science communication. In addition, evaluations often lack suitable designs and appropriate data collection methods. Finally, the prevailing image of evaluation in German science communication practice hinders a collective and constructive learning process for the field. These challenges must be overcome so that evaluation, as a collective process of reflection, can contribute to the constructive further development of science communication.
A large-scale field experiment tested psychological interventions to reduce engine idling at long-wait stops. Messages based on theories of normative influence, outcome efficacy, and self-regulation were displayed on street poles on the approach to railway crossings. Observers coded whether drivers (N = 6,049) turned off their engines while waiting at the railway crossings (only 27.2% did so at baseline). Automatic air quality monitors recorded levels of pollutants while the barriers were down. To different degrees, the social norm and outcome efficacy messages successfully increased the proportion of drivers who turned off their engines (by 42% and 25%, respectively) and significantly reduced concentrations of atmospheric particulate matter (PM2.5) two meters above ground level. Thus, the environment was improved through behavior change. Moreover, and of both theoretical and practical significance, there was an 'accelerator effect' in line with theories of normative influence, whereby the social norm message became increasingly effective as the volume of traffic increased.