Abstract: User-generated online content serves as a source of product- and service-related information that reduces uncertainty in consumer decision making, yet the abundance of such content makes it prohibitively costly to use all relevant information. Dealing with this (big data) problem requires a consumer to decide what subset of information to focus on. Peer-generated star ratings are excellent tools for making that selection, as they indicate a review's "tone". However, star ratings are not available for all user-generated content and, where available, are often not detailed enough. Sentiment analysis, a text-analytic technique that automatically detects the polarity of text, provides sentiment scores that are comparable to, and potentially more refined than, star ratings. Despite its popularity as an active topic in analytics research, sentiment analysis outcomes have not been evaluated through rigorous user studies. We fill that gap by investigating the impact of sentiment scores on purchase decisions through a controlled experiment with 100 participants. The results suggest that, consistent with the effort-accuracy trade-off and effort-minimization concepts, sentiment scores on review documents improve the efficiency (speed) of purchase decisions without significantly affecting decision effectiveness (confidence).