Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these
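The matching idea invoked above can be illustrated in a few lines. This is not the authors' estimator, only a minimal sketch of one-to-one nearest-neighbor matching with replacement on a single covariate; the function names and the scalar-covariate setup are assumptions made for illustration:

```python
import numpy as np

def nearest_neighbor_match(X, treated):
    """For each treated unit, find the control unit with the closest
    covariate value (matching with replacement; illustrative only)."""
    treated_idx = np.where(treated)[0]
    control_idx = np.where(~treated)[0]
    pairs = []
    for i in treated_idx:
        # index of the control whose covariate is nearest to unit i's
        j = control_idx[np.argmin(np.abs(X[control_idx] - X[i]))]
        pairs.append((i, j))
    return pairs

def matched_att(Y, X, treated):
    """Average treatment effect on the treated: mean outcome gap
    across the matched pairs."""
    pairs = nearest_neighbor_match(X, treated)
    return float(np.mean([Y[i] - Y[j] for i, j in pairs]))
```

Because each treated unit is compared only to its closest control, the estimate depends far less on a particular regression specification, which is the appeal of matching that the abstract describes.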
Randomized natural experiments provide social scientists with rare opportunities to draw credible causal inferences in real-world settings. We capitalize on one such unique experiment to examine how the order in which candidates' names appear on the ballot affects election outcomes. Since 1975, California has randomized the ballot order for statewide offices using a complex alphabet lottery. Adapting statistical techniques to this lottery and addressing methodological problems in conventional approaches, our analysis of statewide elections from 1978 to 2002 reveals that, in general elections, ballot order significantly affects only minor party candidates, with no detectable effects on major party candidates. These results contradict previous research, which found large effects for major party candidates in general elections. In primaries, however, we show that
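Because the alphabet lottery randomizes ballot position, the natural analysis is randomization (permutation) inference: compare the observed first-position advantage to its distribution under re-randomized assignments. The sketch below is not the paper's actual procedure; the array names `share` and `first_listed` are hypothetical stand-ins for candidate vote shares and first-position indicators:

```python
import numpy as np

def permutation_test(share, first_listed, n_perm=10_000, seed=0):
    """Two-sided randomization test for a first-position effect.
    Returns the observed difference in mean vote share and the
    permutation p-value."""
    rng = np.random.default_rng(seed)
    obs = share[first_listed].mean() - share[~first_listed].mean()
    extreme = 0
    for _ in range(n_perm):
        # re-draw the lottery: shuffle which candidates are listed first
        perm = rng.permutation(first_listed)
        diff = share[perm].mean() - share[~perm].mean()
        if abs(diff) >= abs(obs):
            extreme += 1
    return obs, extreme / n_perm
```

Since the test's reference distribution comes from the lottery itself, it needs no modeling assumptions beyond the randomization, which is what makes such natural experiments credible.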
We amass a new, large-scale dataset of newspaper editorials that allows us to calculate fine-grained measures of the political positions of newspaper editorial pages. Collecting and classifying over 1,500 editorials adopted by 25 major US newspapers on 495 Supreme Court cases from 1994 to 2004, we apply an item response theoretic approach to place newspaper editorial boards on a substantively meaningful and long-validated scale of political preferences. We validate the measures, show how they can be used to shed light on the permeability of the wall between news and editorial desks, and argue that the general strategy we employ has great potential for more widespread use.