Words are part of almost every marketplace interaction. Online reviews, customer service calls, press releases, marketing communications, and other interactions create a wealth of textual data. But how can marketers best use such data? This article provides an overview of automated textual analysis and details how it can be used to generate marketing insights. The authors discuss how text reflects qualities of the text producer (and the context in which the text was produced) and affects the audience or text recipient. Next, they discuss how text can be a powerful tool both for prediction and for understanding (i.e., insights). Then, the authors review methodologies and metrics used in text analysis, providing a set of guidelines and procedures. They also highlight common metrics and challenges and discuss how researchers can address issues of internal and external validity. They conclude with a discussion of potential areas for future work. Along the way, the authors note how textual analysis can unite the tribes of marketing. While most marketing problems are interdisciplinary, the field is often fragmented. By drawing on skills and ideas from each of the subareas of marketing, text analysis has the potential to help unite the field with a common set of tools and approaches.
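To make the flavor of such methods concrete, here is a minimal sketch of a lexicon-based first pass over review text. This is illustrative only and not the authors' procedure: the word lists and metrics are toy assumptions, whereas real applications rely on validated dictionaries (e.g., LIWC) or trained models.

```python
# Illustrative only: a toy lexicon-based pass over review text.
# The word lists and metrics are assumptions chosen for demonstration.
import re

POSITIVE = {"great", "love", "excellent", "helpful"}   # assumed toy lexicon
NEGATIVE = {"bad", "broken", "slow", "disappointing"}  # assumed toy lexicon

def text_metrics(text: str) -> dict:
    """Compute simple per-document features often used as a starting point:
    length plus the share of positive/negative lexicon words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = len(tokens) or 1  # avoid division by zero on empty text
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return {"n_words": len(tokens),
            "pos_share": pos / n,
            "neg_share": neg / n,
            "valence": (pos - neg) / n}

print(text_metrics("Great battery, but the app is slow and disappointing."))
```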
The compromise effect denotes the finding that brands gain share when they become the intermediate rather than an extreme option in a choice set. Despite the robustness and importance of this phenomenon, choice modelers have neglected to incorporate the compromise effect in formal choice models and to test whether such models outperform the standard value-maximization model. In this article, the authors suggest four context-dependent choice models that can conceptually capture the compromise effect. Although the models are motivated by theory from economics and behavioral decision research, they differ with respect to the particular mechanism that underlies the compromise effect (e.g., contextual concavity versus loss aversion). Using two empirical applications, the authors (1) contrast the alternative models and show that incorporating the compromise effect by modeling the local choice context leads to superior predictions and fit compared with the traditional value-maximization model and a stronger (naive) model that adjusts for possible biases in utility measurement, (2) generalize the compromise effect by demonstrating that it systematically affects choice in larger sets of products and attributes than has previously been shown, (3) show the theoretical and empirical equivalence of loss aversion and local (contextual) concavity, and (4) demonstrate the superiority of models that use a single reference point over “tournament models” in which each option serves as a reference point. They discuss the theoretical and practical implications of this research as well as the ability of the proposed models to predict other behavioral context effects.
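As a minimal sketch (not the authors' estimated models), the following contrasts a standard linear value-maximization utility with a contextual-concavity utility in which attribute advantages are evaluated concavely relative to the worst option in the choice set. The attribute values, weights, concavity parameter, and logit scale are illustrative assumptions: under the linear model the three options tie, while the contextual model shifts share toward the intermediate option, which is the compromise effect.

```python
# A sketch of how local (contextual) concavity can produce a compromise
# effect that a context-free linear utility cannot. All numbers are
# illustrative assumptions, not estimates from the article.
import math

options = {"A": (90, 10), "B": (50, 50), "C": (10, 90)}  # (attr1, attr2)
weights = (0.5, 0.5)

def linear_utility(x):
    # Standard context-free value maximization: weighted attribute sum.
    return sum(w * v for w, v in zip(weights, x))

def contextual_utility(x, choice_set, rho=0.5):
    # Concavity is applied locally, relative to the set minimum on each
    # attribute, so an option's value depends on the choice context.
    mins = [min(o[k] for o in choice_set) for k in range(len(x))]
    return sum(w * (v - m) ** rho for w, v, m in zip(weights, x, mins))

def logit_shares(utils):
    expu = {k: math.exp(u) for k, u in utils.items()}
    total = sum(expu.values())
    return {k: e / total for k, e in expu.items()}

xs = list(options.values())
print(logit_shares({k: linear_utility(v) for k, v in options.items()}))
print(logit_shares({k: contextual_utility(v, xs) for k, v in options.items()}))
# Linear model: equal shares. Contextual model: middle option B gains share.
```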
This tutorial provides evidence that character misrepresentation in survey screeners by Amazon Mechanical Turk Workers ("Turkers") can substantially and significantly distort research findings. Using five studies, we demonstrate that a large proportion of respondents in paid MTurk studies claim a false identity, ownership, or activity in order to qualify for a study. The extent of misrepresentation can be unacceptably high, and the responses to subsequent questions can have little correspondence to responses from appropriately identified participants. We recommend a number of remedies to deal with the problem, largely involving strategies to take away the economic motive to misrepresent and to make it difficult for Turkers to recognize that a particular response will gain them access to a study. The major short-run solution involves a two-survey process that first asks respondents to identify their characteristics when there is no motive to deceive, and then limits the second survey to those who have passed this screen. The long-run recommendation involves building an ongoing MTurk participant pool ("panel") that (1) continuously collects information that could be used to classify respondents, and (2) eliminates from the panel those who misrepresent themselves.
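A minimal sketch of the recommended two-survey flow, with hypothetical data and field names: characteristics are collected first, when there is no incentive to misrepresent, and the paid study is later restricted to workers whose stored profile meets the criterion.

```python
# Sketch of the two-survey remedy described above. Worker IDs, profile
# fields, and the criterion are hypothetical placeholders.
screener = {  # survey 1: worker_id -> profile, collected with no stakes
    "W1": {"owns_dog": True},
    "W2": {"owns_dog": False},
    "W3": {"owns_dog": True},
}

def eligible_for_main_study(worker_ids, criterion):
    """Invite only workers whose pre-collected profile meets the criterion,
    so respondents cannot guess the screen inside the paid study."""
    return [w for w in worker_ids if criterion(screener.get(w, {}))]

invited = eligible_for_main_study(screener, lambda p: p.get("owns_dog") is True)
print(invited)  # ['W1', 'W3']
```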