Political knowledge is a central concept in the study of public opinion and political behavior. Yet what the field collectively believes about this construct is based on dozens of studies using different indicators of knowledge. We identify two theoretically relevant dimensions: a temporal dimension that corresponds to the time when a fact was established and a topical dimension that relates to whether the fact is policy-specific or general. The resulting typology yields four types of knowledge questions. In an analysis of more than 300 knowledge items from late in the first decade of the 2000s, we examine whether classic findings regarding the predictors of knowledge withstand differences across types of questions. In the case of education and the mass media, the mechanisms for becoming informed operate differently across question types. However, differences in the levels of knowledge between men and women are robust, reinforcing the importance of including gender-relevant items in knowledge batteries.
Media exposure is one of the most important concepts in the social sciences, and yet scholars have struggled for decades with how to operationalize it. Some researchers have focused on the effects of variously worded self-report measures. Others advocate the use of aggregate and/or behavioral data that do not rely on a person's ability to accurately recall exposure. The present study introduces the prototype of an experimental design that can be used to improve measures of exposure. In particular, we show how an experimental benchmark can be employed to (1) compare actual (i.e., manipulated) and self-reported values of news exposure; (2) assess how closely the self-reported measure approximates the performance of "true" exposure in an empirical application; and (3) leverage the experimental benchmark to investigate whether a variation in question wording improves the accuracy of self-reported exposure measures.
Scholars often study isolated media effects in a single country using one method at one moment in time. We seek to generalize the research in this area by examining hundreds of press-worthy events across dozens of countries at various points in time with an array of techniques and outcome measures. In particular, we merge a database containing thousands of national events with five waves of the European Social Survey to conduct analyses across countries and individuals as well as within countries and subjects. The results suggest that there is an impressive degree of heterogeneity in how citizens react to political developments. In particular, some events generate significant opinion changes when groups of individuals who are "treated" are compared to "control" cases. However, other events produce modest or even null findings with methods that employ different counterfactuals. Thus, findings of both strong and weak media effects that scholars have uncovered over the years could be a function of methodological choices as well as context-specific factors such as institutional arrangements, media systems, eras, or event characteristics. Data limitations also make some research designs possible while precluding others. We conclude with advice for others who wish to study political events in this manner, as well as a discussion of media effects, broadly construed.
Media effects research has produced mixed findings about the size and direction of the relationship between media consumption and public attitudes (e.g., Bartels, 1993). We investigate the extent to which model choices contribute to these inconsistent findings. Taking a comparative approach, we first review the use of different models in contemporary studies and their main findings. In order to extend and validate this review, we consider the implications for national election studies attempting to measure media effects in election campaigns and recreate these models with the British Election Study 2005-10 panel data. We compare the direction and size of effects of media content on attitude change across: between-subjects, within-elections models, in which the effects of individual-level variance in media exposure and content are assessed; within-subjects, within-elections models, which compare the effects of variance in media content for the same individual; and within-subjects, between-elections models, which allow us to analyze the links between media content and exposure and attitude change over time. Our review shows some notable differences between models in terms of significance of effects (but not effect sizes). We corroborate this finding in the British campaign analysis. We conclude that, where possible, researchers should examine different model choices to check the robustness of claims of media effects in observational data.