This article discusses factors affecting the implementation of European policies at the national level. Four main independent variables are distinguished: political institutions, the degree of corporatism, citizens' support for the EU, and political culture in the member states. The impact of these variables on the success of implementing EU directives in ten policy areas is then tested with a multiple regression model. The results suggest that political culture and the design of political institutions in the member states had the most significant impact on implementation behavior. Countries with a high level of trust and political stability, combined with efficient and flexible political institutions, had the most success in implementing European policies.
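As an illustrative sketch only (the variable names and coding below are placeholders inferred from the abstract, not taken from the study itself), the multiple regression specification might be written as:

$$\text{Implementation}_i = \beta_0 + \beta_1\,\text{Institutions}_i + \beta_2\,\text{Corporatism}_i + \beta_3\,\text{EUSupport}_i + \beta_4\,\text{Culture}_i + \varepsilon_i$$

where $\text{Implementation}_i$ is the measured implementation success of member state $i$ in a given policy area, the four regressors correspond to the independent variables named above, and $\varepsilon_i$ is the error term.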
Programme evaluation has become a widely applied mode of systematic inquiry for making judgements about public policies. Although evaluation, as a form of systematic inquiry, has provided feedback for policy makers, it still too often produces banal answers to complex, multi-dimensional societal problems. In this article, we take a close look at the ontological premises of different programme evaluation paradigms, their conceptions of causality, and their relationships to rational theories of action. There is a paradigm crisis in evaluation, resulting from differences over assumptions about causality. Evaluation paradigms clearly provide research strategies, but more particularly they map causal links in contrasting ways. Traditional cause-and-effect logic disregards the fact that programme effects are always brought about by real actors rather than constructed ideal actors. A new interpretation of causes and effects is needed, one that would strengthen the core ideas behind the now widely applied and consolidated realistic evaluation tradition.
Meta-evaluation has many definitions. Some use the term to describe the aggregation of findings from several individual evaluations, while others define it as a systematic tool for quality control of evaluation studies. This article elaborates the latter approach by introducing a more learning-oriented interpretation of the concept of meta-evaluation. In this sense, meta-evaluation should be part of an open dialogue between the various parties in the evaluation process. Making evaluation as transparent as possible enhances the preconditions for organizational learning through meta-evaluation. The empirical basis for this article is a meta-evaluation study of 15 mid-term evaluations of European Structural Fund programmes in Finland. The article concludes that critical analysis of single evaluations serves both policy and organizational learning: by policy learning we refer to the mechanisms of knowledge dissemination in various policy arenas, and by organizational learning to continuous and reflexive evaluative inquiry.