2018
DOI: 10.4314/gjedr.v17i1.10

Step by step process from logic model to case study method as an approach to educational programme evaluation

Abstract: Logic models and case study approach to programme evaluation have proven effective in evaluating educational programmes. However, there is no article that has described a step by step process of how a logic model can inform the choice of a case study methodology. In this article, we used the clinical components of a bridging programme in Canada to illustrate the step by step process of logic model to case study methodology. We provided a background to the bridging programme, steps for designing programme evalu…

Cited by 6 publications (5 citation statements) · References 15 publications

Citation statements
“…In contrast to these informal evaluative practices, formal evaluations should aim to, first, assess knowledge brokering impact in terms of practice and/or policy change and, second, determine how and to what extent particular knowledge brokering activities helped achieve those outcomes. The literature on evaluative practices, especially programme evaluation, has proliferated in the past decade [27][28][29], and includes widely accepted guidance on developing project-appropriate logic models, outcomes and outcome indicators. Dobbins et al [30] recently found that a knowledge translation intervention delivered by KBs resulted in improvements in evidence-informed decision-making knowledge, skills and behaviours, suggesting that, if KB researchers develop concrete, actionable indicators and ways to measure them (informed by theories, models or frameworks, and keeping in mind a wide range of stakeholder perspectives), perhaps a culture of evaluation can grow within knowledge brokering.…”
Section: Discussion (mentioning)
confidence: 99%
“…The model (see Figure 1) consists of four components: inputs (funding sources as previously described), activities (tuition support, recruitment activities, and resources), outputs (the number of paraprofessionals who received some form of tuition support and obtained certification as anticipated), and outcomes (short-, medium-, or long-term impact of the paraprofessional tuition grant program) both intended and unintended. The conceptual framework of our evaluation is similar in concept while smaller in scale to other program evaluations used in education environments (Kalu & Norman, 2018; Martin & Carey, 2014).…”
Section: Methods (mentioning)
confidence: 99%
“…Case studies are often used for intervention programme evaluations (Crowe et al., 2011; Fetters et al., 2013; Yin, 2013). Kalu and Norman (2018) argue that this can assist in overcoming criticisms of the logic model, in particular its simplification of the context and lack of sophistication in assessing complex interactions (Jones et al., 2020; Funnell and Rogers, 2011; Renger et al., 2011).…”
Section: Evaluation Methodology (mentioning)
confidence: 99%