Background
The KT Challenge program supports health care professionals (HCPs) in effectively implementing evidence-based practices. Unlike other knowledge translation (KT) programs, this program is grounded in capacity building, focuses on HCPs, and uses a multi-component intervention. This study presents an evaluation of the KT Challenge program, assessing its impact on uptake, KT capacity, and practice change.
Methods
The evaluation used a mixed-methods retrospective pre-post design involving surveys and a review of documents such as teams’ final reports. Online surveys collecting both quantitative and qualitative data were deployed at four time points (after each of the two workshops, 6 months into implementation, and at the end of the 2-year funded projects) to measure KT capacity (knowledge, skills, and confidence) and impact on practice change. Qualitative data were analyzed using a general inductive approach, and quantitative data were analyzed using non-parametric statistics.
Results
Participants reported statistically significant increases in knowledge and confidence across both workshops, at the 6-month mark of their projects, and at the end of their projects. In addition, at the 6-month check-in, practitioners reported statistically significant improvements in their ability to implement practice changes. In the first cohort of the program, half of the teams that completed their projects demonstrated practice changes.
Conclusions
The KT Challenge was successful in improving the capacity of HCPs to implement evidence-based practice changes and has begun to show demonstrable improvements in a number of practice areas. The program is relevant to a variety of HCPs working in diverse practice settings and is relatively inexpensive to implement. As with other practice-improvement programs in health care settings, a number of challenges emerged, stemming from high staff turnover and the limited capacity of some practitioners to take on anything beyond direct patient care. Efforts to address these challenges have been added to subsequent cohorts of the program, and ongoing evaluation will examine whether they are successful. The KT Challenge program has continued to garner strong interest among practitioners, even in the midst of the COVID-19 pandemic, and shows promise for organizations looking for better ways to mobilize knowledge to improve patient care and empower staff. This study contributes to the implementation science literature by describing and evaluating a new model for embedding KT practice skills in health care settings.
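As a concrete illustration of the non-parametric pre-post analysis described in the Methods above, the following is a minimal sketch in Python, not drawn from the study itself, of a Wilcoxon signed-rank test on paired self-ratings; the scores, sample size, and variable names are all hypothetical.

# Minimal sketch: Wilcoxon signed-rank test on paired retrospective
# pre/post ratings (hypothetical data, not from the KT Challenge study).
from scipy.stats import wilcoxon

# Hypothetical 5-point self-ratings of KT confidence for the same ten
# participants before and after a workshop, paired by position.
pre_scores  = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]
post_scores = [4, 4, 3, 3, 4, 3, 5, 3, 4, 4]

# The signed-rank test suits paired ordinal data where a paired t-test's
# normality assumption is hard to justify.
statistic, p_value = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon W = {statistic:.1f}, p = {p_value:.4f}")

A significant p-value here would indicate a systematic shift between paired pre and post ratings, which is the kind of evidence the abstract summarizes as statistically significant increases in knowledge and confidence.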
Background
Public health professionals are expected to use the best available research and contextual evidence to inform decision-making. The National Collaborating Centre for Methods and Tools developed, implemented, and evaluated a Knowledge Broker mentoring program aimed at facilitating organization-wide evidence-informed decision-making (EIDM) in ten public health units in Ontario, Canada. The purpose of this study was to pragmatically assess the impact of the program.
Methods
A convergent mixed-methods design was used to interpret quantitative results in the context of the qualitative findings. A goal-setting exercise was conducted with senior leadership in each organization prior to implementing the program. Achievement of goals was quantified through deductive coding of post-program interviews with participants and management. Interviews were analyzed inductively to qualitatively explain progress toward identified goals and to identify key factors related to the implementation of EIDM within each organization.
Results
Organizations met their goals for evidence use to varying degrees. The key themes identified as supporting an organizational shift to EIDM include definitive plans for participants to share knowledge during and after program completion, embedding evidence into decision-making processes, and supportive leadership with organizational investment of time and resources. The location, setting, and size of health units were not associated with attainment of EIDM goals; small, rural health units were not at a disadvantage compared to larger, urban health units.
Conclusions
The Knowledge Broker mentoring program allowed participants to share their learning and support change at their health units. When paired with organizational supports such as supportive leadership and resource investment, this program holds promise as an innovative knowledge translation strategy for organization-wide EIDM among public health organizations.
Research on evaluation (RoE) is essential to increase knowledge, develop robust approaches, and help evaluators conduct better evaluations. In this article, we used the concept of meta-evaluation in the field of interprofessional education and collaborative practice to identify current evaluation trends and efforts in RoE reflective of capacity building. The results help identify weaknesses in current evaluation methods and highlight the negative consequences of poor RoE for knowledge development. Specific recommendations are drawn out to increase the quality of evaluation studies, to strengthen the evidence base of RoE, and to tighten the connection between RoE and evaluation practice in the field.
Rooted in the pedagogical literature, three evaluation educators, guided by a facilitator, engaged in reflective practice regarding case-centered teaching and learning. We engaged in (D)escription, (A)nalysis, (T)heorizing, and (A)cting (the DATA model) in relation to case-centered teaching. Based on a cross-case analysis, we identified five common actions in our teaching with cases: (1) use case-centered teaching in various contexts to support a variety of learning outcomes for students from different backgrounds; (2) choose cases intentionally; (3) integrate student learning activities, supports, and materials; (4) evaluate students' learning experiences with cases; and (5) engage in collaborative reflection to facilitate learning and improvement of instructional practices with case-centered teaching.
This article offers readers a glimpse into case-centered teaching in action as three instructors and a facilitator collectively reflect on their use of cases in evaluation education. Although we had previously learned from and taught with cases, we approached case-centered teaching and learning with new insights based on the previous articles in this volume (Bourgeois et al., this issue; Ensminger et al., this issue; Kallemeyn et al., this issue; Linfield & Tovey, this issue; Montrosse-Moorhead et al., this issue). Most evaluation educators who teach with cases draw from their own experiences in designing and using cases in their teaching (Bourgeois et al., this issue). Rather than maintaining our teaching as a private, individual practice, we engaged in collaborative reflection on our practice (Smith et al., 2015). This article follows a tradition of various forms of self-study in evaluation (Boyce & McGowan, 2018; Chouinard & Boyce, 2017; He et al., 2021; van Draanen, 2017). The purpose of this article is to share our collective actions regarding how to implement case-centered teaching in evaluation courses and how it shapes learning experiences.