Kimberley R. Isett's research focuses on institutional dynamics in implementing government services, with a particular interest in the delivery of services to vulnerable populations. Increasingly, her work examines the use of evidence in decision making, both in policy and in systems. Dr. Isett has been awarded just over $1 million in research grants and has worked with elected officials and policy makers at all levels of government.
Developing ways to bridge the long‐recognized gap between researchers and policy makers is increasingly important in this age of constrained public resources. As noted by recent scholarship, progress toward evidence‐informed policy making requires both improving the supply of research that is reliable, timely, and relevant to the policy process and promoting demand and support for this information among decision makers. This article presents a case study of the Pew‐MacArthur Results First Initiative, which is working in a growing number of state and local governments to build systems that bring rigorous evidence on “what works” into their budget processes and to support its use in resource allocation decisions. The initiative's experience to date is promising, although creating lasting and dynamic evidence‐based policy‐making systems requires a long‐term commitment by both researchers and policy makers.
Although scholars have proposed many steps to increase evaluation use, there has been little comparative empirical study of whether researchers follow these recommendations and whether specific steps are associated with greater utilization. This study of state legislative evaluators finds that those who regularly meet with stakeholders and provide readily actionable products were considered by senior legislative staff to have more impact, as were larger offices. Legislative evaluators working in auditing organizations were viewed less favorably than those working in other units; this appears to be related to their adherence to Government Auditing Standards, which prescribe organizational independence. Evaluators following these standards had less stakeholder engagement than did those in other legislative units, which adhered to research standards that stress meeting stakeholder needs. Environmental factors such as changes in party control may also play a role in how the work of evaluators is valued and used in the legislative process.
Although there is growing interest in applying benefit-cost analysis (BCA) to public policy questions, limited information is available on states' use of this methodology. The nationwide assessment presented here begins to fill that void and finds that the states and the District of Columbia are increasingly conducting BCAs and using the results to inform their policy choices. Both the number of reports released by the states and the number of statutory mandates to conduct these studies increased substantially between 2008 and 2011. An analysis of the studies released by states shows that most lack some recommended technical features of rigorous BCA, yet the reports are having a reported impact on state policy and budget decisions. Like other forms of policy research, BCA faces challenges including resource and data limitations, timing problems, and gaining policymaker buy-in for the approach and findings.
Evaluators have long sought a world in which our work makes a tangible difference to society, but that goal has often seemed out of reach. However, in recent years, advocates have proclaimed an era of evidence-based policymaking in which the ‘What Works’ data generated by evaluations will be increasingly used to inform programme and policy choices. Four primary factors have been critical to the rise of this approach – attaining a critical mass of curated ‘What Works’ evidence, growing interest among political leaders in considering this information when making choices, new budgetary mechanisms for using these data and new tools that facilitate rigorous outcome studies. However, the movement also faces critical challenges, including the growing distrust of empirical data among some political factions, leaks in the evaluation pipeline that generates the data to identify ‘What Works’ and the replication failure of many evidence-based interventions. The evaluation field should support this movement through efforts to plug leaks in the evidence pipeline, stronger efforts to assess implementation challenges, training students in evidence-based approaches and assisting in outreach to policymakers.