Low processing fluency fosters the impression that a stimulus is unfamiliar, which in turn results in perceptions of higher risk, independent of whether the risk is desirable or undesirable. In Studies 1 and 2, ostensible food additives were rated as more harmful when their names were difficult to pronounce than when their names were easy to pronounce; mediation analyses indicated that this effect was mediated by the perceived novelty of the substance. In Study 3, amusement-park rides were rated as more likely to make one sick (an undesirable risk) and also as more exciting and adventurous (a desirable risk) when their names were difficult to pronounce than when their names were easy to pronounce.
When asked, "How many animals of each kind did Moses take on the Ark?" most people respond "Two," despite knowing that Noah, rather than Moses, was the biblical actor. Two experiments tested the role of processing fluency in the detection of such semantic distortions by presenting questions in an easy- or difficult-to-read print font. As predicted, low processing fluency facilitated detection of the misleading nature of the question and reduced the proportion of erroneous answers. However, low processing fluency also reduced the proportion of correct answers in response to undistorted questions. In both cases, participants were less likely to rely on their spontaneous associations when the font was difficult to read, resulting in improved performance on distorted questions and impaired performance on undistorted ones. We propose that fluency experiences influence processing style.

When asked, "How many animals of each kind did Moses take on the Ark?" most people respond "Two" despite knowing that Noah, rather than Moses, was the actor in the biblical story (Erickson & Mattson, 1981). This Moses illusion bears on an important aspect of human communication: Under which conditions are distortions in utterances and texts likely to be noticed? Previous research addressed a variety of plausible accounts (for a comprehensive review, see Park & Reder, 2003), including the possibility that recipients are cooperative communicators (Grice, 1975; Schwarz, 1996) who notice the distortion but simply correct for it by responding to what the questioner must have meant.
Political communication has become one of the central arenas of innovation in the application of automated analysis approaches to ever-growing quantities of digitized texts. However, although researchers routinely and conveniently resort to certain forms of human coding to validate the results derived from automated procedures, in practice the actual "quality assurance" of such a "gold standard" often goes unchecked. Contemporary practices of validation via manual annotations are far from being acknowledged as best practices in the literature, and the reporting and interpretation of validation procedures differ greatly. We systematically assess the connection between the quality of human judgment in manual annotations and the relative performance evaluations of automated procedures against true standards by relying on large-scale Monte Carlo simulations. The results from the simulations confirm that there is a substantially greater risk of a researcher reaching an incorrect conclusion regarding the performance of automated procedures when the quality of the manual annotations used for validation is not properly ensured. Our contribution should therefore be regarded as a call for the systematic application of high-quality manual validation materials in any political communication study drawing on automated text analysis procedures.
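The core mechanism the abstract describes can be illustrated with a minimal simulation. This sketch is not the authors' actual simulation design; the parameters (a binary coding task, an automated coder with a known true accuracy, and human annotators who agree with the ground truth at a fixed rate) are illustrative assumptions. It shows how validating against an imperfect "gold standard" systematically biases the measured accuracy of the automated procedure.

```python
import random

random.seed(42)

def simulate(n_items=1000, true_acc=0.80, annot_acc=0.90, n_runs=500):
    """Average measured accuracy of an automated coder when it is
    validated against human annotations of a given quality.

    Hypothetical parameters:
      true_acc  -- the automated procedure's real accuracy vs. ground truth
      annot_acc -- the human annotators' agreement with ground truth
    """
    measured = []
    for _ in range(n_runs):
        hits = 0
        for _ in range(n_items):
            truth = random.random() < 0.5  # binary ground-truth label
            # Automated and human labels each flip with their error rate.
            auto = truth if random.random() < true_acc else not truth
            human = truth if random.random() < annot_acc else not truth
            hits += auto == human  # validation scores auto against noisy gold
        measured.append(hits / n_items)
    return sum(measured) / n_runs

# Perfect annotations recover the true accuracy (~0.80); annotations that
# are only 90% accurate bias the measured accuracy toward
# 0.8 * 0.9 + 0.2 * 0.1 = 0.74, misrepresenting the automated procedure.
perfect_gold = simulate(annot_acc=1.0)
noisy_gold = simulate(annot_acc=0.9)
```

A researcher comparing two automated procedures on such a noisy gold standard can therefore rank them incorrectly even with large validation samples, which is the risk the simulations quantify.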
Informal discussion plays a crucial role in democracy, yet much of its value depends on diversity. We describe two models of political discussion. The purposive model holds that people typically select discussants who are knowledgeable and politically similar to them. The incidental model suggests that people talk politics for mostly idiosyncratic reasons, as by-products of nonpolitical social processes. To adjudicate between these accounts, we draw on a unique, multisite, panel data set of whole networks, with information about many social relationships, attitudes, and demographics. This evidence permits a stronger foundation for inferences than more common egocentric methods. We find that incidental processes shape discussion networks much more powerfully than purposive ones. Respondents tended to report discussants with whom they share other relationships and characteristics, rather than choosing on the basis of expertise or political similarity, suggesting that stimulating discussion outside of echo chambers may be easier than previously thought.