Background
The idea that underlying, generative mechanisms give rise to causal regularities has become a guiding principle across many social and natural science disciplines. A specific form of this enquiry, realist evaluation, is gaining momentum in the evaluation of complex social interventions. Rather than asking whether an intervention ‘works’, it focuses on ‘what works, how, in which conditions and for whom’, using context, mechanism and outcome configurations. Realist evaluation can be difficult to codify and requires considerable researcher reflection and creativity; as such, there is often confusion when operationalising the method in practice. This article aims to clarify and further develop the concept of mechanism in realist evaluation and, in doing so, to aid the learning of those operationalising the methodology.

Discussion
Using a social science illustration, we argue that disaggregating the concept of mechanism into its constituent parts helps to distinguish between the resources offered by an intervention and the ways in which these change the reasoning of participants. This in turn helps to distinguish between a context and a mechanism. We explore the notion of mechanisms ‘firing’ in social science research and discuss how this framing may stifle researchers’ realist thinking, underlining the importance of conceptualising mechanisms as operating on a continuum rather than as an ‘on/off’ switch.

Summary
We hope the discussions in this article will help to progress and operationalise realist methods. Such development is both likely and needed, given the relative infancy of the methodology and its recently increased profile and use in social science research. The arguments we present have been tested and are explained throughout the article using a social science illustration, evidencing their usability and value.
Background
Realist evaluation is increasingly used in health services and other fields of research and evaluation. No previous standards exist for reporting realist evaluations. This standard was developed as part of the RAMESES II project, whose aim is to produce initial reporting standards for realist evaluations.

Methods
We purposively recruited a maximum variation sample of international experts in realist evaluation to an online Delphi panel. Panel members came from a variety of disciplines, sectors and policy fields. We prepared the briefing materials for the panel by summarising the most recent literature on realist evaluations to identify how and why rigour had been demonstrated and where gaps in expertise and rigour were evident. We also drew on our collective experience as realist evaluators, on our experience of training and supporting realist evaluations, and on the RAMESES email list. Through discussion within the project team, we developed a list of quality issues that need to be addressed when carrying out realist evaluations; these were shared with panel members and their feedback was sought. Once panel members had commented on the briefing materials, we constructed a set of items for potential inclusion in the reporting standards and circulated these online. Panel members were asked to rank each potential item twice on a 7-point Likert scale, once for relevance and once for validity, and were encouraged to provide free-text comments.

Results
We recruited 35 panel members from 27 organisations across six countries and nine disciplines. Within three rounds the Delphi panel reached consensus on 20 items to be included in the reporting standards for realist evaluations. The overall response rates for all items in rounds 1, 2 and 3 were 94%, 76% and 80%, respectively.

Conclusion
These reporting standards for realist evaluations were developed by drawing on a range of sources. We hope that they will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.
These minimum measurement standards are intended to promote the appropriate use of patient-reported outcome (PRO) measures to inform patient-centered outcomes research (PCOR) and comparative effectiveness research (CER), which in turn can improve the effectiveness and efficiency of healthcare delivery. A next step is to expand these minimum standards into best practices for selecting decision-relevant PRO measures.
Integrating PROs into clinical practice has the potential to enhance patient-centered care. The online version of the User's Guide will be updated periodically.