2021
DOI: 10.1177/13563890211041676

The unused potential of process tracing as evaluation approach: The case of cluster policy evaluation

Abstract: This article shows that process tracing developed in social science research can be used in evaluations of complex structural and technology policy programmes to overcome deficits in the methodological instruments used to date. Cluster policies are a well-suited example because they are characterized by complex impact patterns like many other current structural and innovation policy programmes. The origin and characteristics of the methodological approach of process tracing are discussed and weaknesses of impa…

Cited by 4 publications (3 citation statements) · References 32 publications

Citation statements (ordered by relevance):
“…Process tracing is a theory-driven methodology suited to studying causal mechanisms in single case studies of multifaceted and multiactor interventions such as the 3 feet model. [12] Process tracing goes beyond tracking 'implementation fidelity' (adoption of interventions…”
Section: Evaluation Methodology (mentioning; confidence: 99%)
“…If an evaluation intends to understand how the program actually produced the results (i.e., “how it works”), this requires unpacking the arrows into episodes of interaction between actors. The lack of theorization of interactions between actors also implies that what is going on in-between program activities and results/outcomes are typically not evidenced empirically (see Rothgang and Lageman, 2021: 533; Schmitt and Beach, 2015: 431). This would not be problematic for impact evaluation methods such as randomized controlled trials (RCTs) because the controlled comparison of results/outcomes in cases where activities were present and those where they were absent enables causal inferences to be made.…”
Section: Existing TBEMs (mentioning; confidence: 99%)
“…While RE and CA are very useful for assessing the activities and outputs/outcomes of interventions, the conceptual language they provide for theorizing what links them together does not enable the evaluator to unpack the sequence of actions and interactions between program actors and relevant stakeholders that produced a contribution (Rothgang and Lageman, 2021; Schmitt and Beach, 2015). Yet if the inner workings of an intervention are not unpacked theoretically, it is difficult for evaluators to claim that they have empirical evidence of whether and how an intervention produced a contribution.…”
Section: Introduction (mentioning; confidence: 99%)