This is an electronic version of an article published in Coe, R. (2009) 'Unobserved but not unimportant: the effects of unmeasured variables on causal attributions.', Effective Education, 1(2), pp. 101-122. Effective Education is available online at: http://www.informaworld.com/smpp/content~db=all?content=10.1080/19415530903522519 Additional information:
Use policy
The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:
• a full bibliographic reference is made to the original source
• a link is made to the metadata record in DRO
• the full-text is not changed in any way
The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.
Peer-review status: Peer-reviewed
Publication status: Accepted for publication version
Citation for published item: Coe, R. (2009) 'Unobserved but not unimportant: the effects of unmeasured variables on causal attributions.', Effective Education, 1(2), pp. 101-122.
Objective: To estimate how much difference the inclusion of plausibly important but unmeasured variables could make to estimates of the effects of educational programmes.
Methods: Two examples of policy-relevant research in education were identified. A sensitivity analysis using Monte Carlo simulation was conducted to estimate the size of a possible spurious 'effect' that could in fact be entirely due to the failure to incorporate a plausible unobserved variable.
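The mechanism behind such a spurious effect can be sketched in a few lines of simulation: generate an unobserved variable that drives both programme participation and the outcome, set the true programme effect to zero, and observe that a naive group comparison still yields a sizeable difference. This is a minimal illustrative sketch only; the variable names, distributions, and parameter values are assumptions for exposition, not the simulations reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical unobserved variable (e.g. pupils' prior motivation),
# standardised scale -- an illustrative assumption.
u = rng.normal(0.0, 1.0, n)

# Selection into the 'programme' depends partly on u (non-random assignment).
treated = (u + rng.normal(0.0, 1.0, n)) > 0

# The outcome depends on u but NOT on treatment: the true effect is zero.
outcome = 0.5 * u + rng.normal(0.0, 1.0, n)

# A naive comparison that cannot control for u recovers a spurious 'effect'.
spurious = outcome[treated].mean() - outcome[~treated].mean()
print(f"apparent effect when the true effect is 0: {spurious:.2f}")
```

Because the treated group has systematically higher u, and u raises the outcome, the naive difference in means is clearly nonzero even though the programme itself does nothing.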
Results: In both examples, the effect size reported in the original study was within the range of possible spurious effects.
Conclusions: What appeared to the original researchers to be substantial and unequivocal causal effects were reduced to tiny and uncertain differences once the effects of plausible unobserved variables were taken into account. Evaluators who rely on statistical control should be more cautious in making causal claims, consider the possible effects of unmeasured variables, and conduct sensitivity analyses. Alternatively, stronger designs should be used.