In this article I analyze process tracing, a causal mechanism-based technique for testing causal claims in the social sciences that requires one to specify a chain of intervening causes between any putative cause and effect. I argue that one should not only give evidence that the intervening causes are present in a suitable case study, as process tracing methodologists recommend, but also provide counterfactual evidence to show that each link in the chain is genuinely causal. I detail what that counterfactual evidence should consist of, using Woodward’s manipulability theory, and argue that this evidence relies on tentative comparisons to other case studies.
Evidential pluralism has been used to justify mixed-method research in political science. The combination of methodologies within (qualitative) case study analysis, however, has not received as much attention. This article applies the theory of evidential pluralism to causal inference in the case study method process tracing. I argue that different methodologies for process tracing commit to distinct fundamental theories of causation. I show that, problematically, one methodology may not recognize as genuine knowledge the fundamental claims of the other. By evaluating the epistemic reliability of these fundamental claims, we can find a way out of such conflicts and rescue pluralism.
If a human subject knows they are being measured, this knowledge may affect their attitudes and behaviour to such an extent that it affects the measurement results as well. This broad range of effects is grouped under the term ‘reactivity’. Although reactivity is often seen by methodologists as a problem to overcome, in this paper I argue that some quite extreme reactive changes may be legitimate, as long as we are measuring phenomena that are not simple biological regularities. Legitimate reactivity is reactivity that does not undermine the accuracy of a measure; I show that if such reactivity were corrected for, this would unjustifiably ignore the authority of the research subject. Applying this argument to the measurement of depression, I show that under the most commonly accepted models of depression there is room for legitimate reactivity. In the first part of the paper, I provide an inventory of the different types of reactivity found in the literature, as well as the different types of phenomena that one could measure. In the second part, I apply my argument to the measurement of depression with the PHQ-9 survey. I argue that depending on what kind of phenomenon we consider depression to be (a disease, a social construction, a harmful dysfunction, or a practical kind), we will accept different kinds of reactivity. I show that under both the harmful dysfunction model and the practical kinds model, certain reactive changes in measuring depression are best seen as legitimate recharacterizations of the underlying phenomenon, and I define what ‘legitimate’ means in this context. I conclude that in both models, biological aspects constrain characterization, but the models are not so strict that only one concept is acceptable, leaving room for reactivity.
Evidential pluralists, like Federica Russo and Jon Williamson, argue that causal claims should be corroborated by establishing both the existence of a suitable correlation and a suitable mechanism complex. At first glance, this fits well with mixed-method research in the social sciences, which often involves a pluralist combination of statistical and mechanistic evidence. However, statistical evidence concerns a population of cases, while mechanistic evidence is found in individual case studies. How should researchers combine such general statistical evidence and specific mechanistic evidence? This article discusses a very recent answer to this question, ‘multi-method large-N qualitative analysis’ or multi-method LNQA, popular in political science and international relations studies of rare events like democratic transitions and cease-fire agreements. Multi-method LNQA combines a comprehensive study of all (or most) relevant event cases with statistical analysis, in an attempt to solve the issues of generalization faced by other types of qualitative research, such as selection bias and lack of representativeness. I will argue that the kind of general causal claim that multi-method LNQA is after, however, is crucially different from the average treatment effect found in statistical analysis and can in fact only be supported with mechanistic evidence. I conclude from this that mixed-method research, and thereby evidential pluralism, may be inappropriate in this context.
The issue of causal comparability in the social sciences underlies matters of both generalization and extrapolation (or external validity). After critiquing two existing interpretations of comparability, due to Hitchcock and Hausman, I propose a distinction between ontological and epistemic comparability. While the former refers to whether two cases are actually comparable, the latter respects that in cases of incomplete information, we need to rely on whatever evidence we have of comparability. I argue, using a political science case study, that in those cases of imperfect information, an epistemic homogeneity criterion can be an adequate justification for generalization.