Purpose
The purpose of this paper is to analyze Goodreads’ user-generated book reviews from a linguistic perspective to gain insight into the psychological aspects of reviewers’ perceptions and behaviors. This examination of users’ language and perspectives may shed light on the role and value of user-generated reviews in complementing the traditional representation of resources and facilitating the discoverability of cultural objects.
Design/methodology/approach
This study involved a textual analysis of 474,803 unique reviews of Goodreads’ 2015 top-rated books, generated by 9,335 Goodreads reviewers. To better understand the nuances of user-generated reviews, a content analysis was applied to 2,500 reviews of each of the five top-ranked titles in Goodreads’ Fiction Literature genre category.
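For readers unfamiliar with this kind of textual analysis, the sketch below illustrates how rates of the word categories discussed in the Findings (function words, first-person "I-words" and positive emotion words) can be computed for a review. It is a minimal illustration only: the category word lists and the sample review are hypothetical stand-ins, not the dictionaries or data used in the study.

```python
# Minimal sketch of a word-category rate computation for a single review.
# The category lists below are abbreviated, illustrative assumptions,
# not the study's actual lexicons.
import re
from collections import Counter

CATEGORIES = {
    "function_words": {"the", "a", "an", "and", "but", "of", "to", "in"},
    "i_words": {"i", "me", "my", "mine", "myself"},
    "positive_emotion": {"love", "loved", "great", "wonderful", "beautiful"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of the review's total tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1
    counts = Counter(tokens)
    return {
        name: sum(counts[word] for word in words) / total
        for name, words in CATEGORIES.items()
    }

# Hypothetical example review
print(category_rates("I loved this book; the writing was beautiful and I could not put it down."))
```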
Findings
The analysis of user-generated reviews demonstrates that language is a stable and reliable dimension across Goodreads users. The high rate of function words, in particular I-words, coupled with positive emotion words, suggests that reviewers tended to convey their opinions in order to influence other individuals’ reading choices, or, in Bourdieu’s (1985) terms, to influence cultural production. In line with previous studies of user-generated reviews, the prevalence of positive reviews may also point to their unreliability. This study supports the importance of transparency regarding the inclusion of user-generated reviews in traditional systems of knowledge representation, organization and discovery, such as WorldCat.
Originality/value
This study contributes to a better understanding of the linguistic characteristics of Goodreads reviews, including the role and value of user-generated reviews in complementing the traditional representation of resources and facilitating the discoverability of cultural objects.