When reviewers write online reviews, they differ in the focus of their attention: some focus on their own experiences, whereas others direct their attention to prospective consumers who may read the reviews in the future. This paper explores how, why, and when reviewers’ attentional focus can influence consumers’ helpfulness evaluations of reviews beyond the impact of substantive review content. Drawing on the attentional focus and persuasion literatures, we develop a theoretical model proposing that reviewers’ attentional focus may influence consumers’ perception of review helpfulness through opposing processes, and that its overall effect is contingent on the review’s two-sidedness. Results of one archival analysis and five controlled experiments provide consistent support for our hypotheses. This work challenges the predominant view of the positive impact of other-focus (vs. self-focus), explores the interpersonal impact of a reviewer’s attentional focus on prospective consumers who are total strangers, and reveals an important, context-specific boundary condition.
Online word-of-mouth studies generally assume that a product's average rating is the primary force shaping consumers' purchase decisions and driving sales. Similarly, practitioners place more emphasis on average ratings by displaying them more prominently than individual reviews. In contrast, emerging evidence suggests that individual reviews also affect the decision-making of consumers who consult both kinds of information. However, because average ratings and individual reviews are often correlated and empirically confounded, little research has attempted to disentangle their effects. To address this empirical challenge, we construct trade-off situations in which the average ratings and top-ranked reviews of different product options do not align with each other. We then investigate consumers' preferences, which can indirectly reveal the relative impact of average ratings versus top reviews. Through an archival analysis of a panel dataset and two laboratory experiments, we find consistent evidence for a swaying effect of individual reviews and reveal their textual content as a likely reason. These findings challenge the commonly accepted assumption that average ratings are the primary driver of consumers' purchase decisions and suggest that consumers may not be as rational as previous literature assumed. In addition, this paper is the first to disentangle the effects of average ratings and individual reviews on consumer decision-making and to explore a possible reason for the swaying effect of individual reviews. Our paper illustrates the importance of information accessibility in consumers' purchase decisions, and our findings offer valuable insights for product manufacturers, online retailers, and review platforms.
How and why positive and negative reviews influence product sales differently has critical implications for both research and businesses. Although earlier online word-of-mouth research empirically documented that negative reviews influence product sales to a greater extent than positive reviews (i.e., a negativity bias), later research has revealed that positive reviews are generally more helpful (i.e., a positivity bias). We propose that an answer to this conundrum may be that negative reviews receive more exposure than positive reviews. Because consumers are often overwhelmed by the massive number of online reviews, they must be selective when searching for reviews. This research investigates consumers’ preference for positive vs. negative reviews during both the information-seeking and information-evaluation stages of their decision-making process. Drawing on the motivated reasoning literature, we propose that consumers exhibit a negativity bias when they search for reviews to read but manifest a confirmation bias when they evaluate the helpfulness of reviews. We conducted three experiments and found consistent support for these hypotheses. Our findings expand the current understanding of consumers’ processing of online reviews to the information-seeking stage, reveal differential biases at different stages, demonstrate a possible explanation for the negativity bias in product sales, and provide important practical implications.