This paper reports the results of an exploratory field experiment in Singapore that assessed the value of two types of privacy assurance: privacy statements and privacy seals. We collaborated with a local firm to host the experiment on its website with its real domain name, and the subjects were not informed of the experiment. Hence, the study provided a field observation of the subjects' behavioral responses toward privacy assurances. We found that (1) the existence of a privacy statement induced more subjects to disclose their personal information, but that of a privacy seal did not; (2) monetary incentive had a positive influence on disclosure; and (3) information request had a negative influence on disclosure. These results were robust across other specifications that used alternative measures for some of our model variables. We discuss this study in relation to the extant privacy literature, most of which employs surveys and laboratory experiments for data collection, and draw related managerial implications.
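As a rough illustration of the kind of specification such a field experiment implies (the paper does not publish its estimation code), the sketch below fits a logistic regression of a binary disclosure indicator on the privacy statement, privacy seal, monetary incentive, and information request variables named in the abstract. All variable names and the data file are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): logistic regression of disclosure
# on the experimental factors described in the abstract.
# Variable names and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per subject visiting the experimental website.
# disclosed          : 1 if the subject submitted personal information, else 0
# privacy_statement  : 1 if a privacy statement was displayed
# privacy_seal       : 1 if a privacy seal was displayed
# monetary_incentive : size of the monetary reward offered
# info_request       : amount/sensitivity of information requested
df = pd.read_csv("experiment_data.csv")

model = smf.logit(
    "disclosed ~ privacy_statement + privacy_seal + monetary_incentive + info_request",
    data=df,
).fit()
print(model.summary())
```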
Many online review systems adopt a voluntary voting mechanism to identify helpful reviews that support consumer purchase decisions. While several studies have examined what makes an online review helpful (review helpfulness), little is known about what makes an online review receive votes (review voting). Drawing on information processing theories and the related literature, we investigated the effects of a select set of review characteristics, including review length and readability, review valence, review extremity, and reviewer credibility, on two outcomes: review voting and review helpfulness. We analyzed a large set of review data from Amazon using the sample selection model. Our results indicate that there are systematic differences between voted and non-voted reviews, suggesting that helpful reviews with certain characteristics are more likely to be observed and identified in an online review system than reviews without those characteristics. Furthermore, when review characteristics had opposite effects on the two outcomes (i.e., review voting and review helpfulness), ignoring the selection effects due to review voting would result in the effects on review helpfulness being overestimated, which increases the risk of committing a type I error. Even when the effects on the two outcomes are in the same direction, ignoring the selection effects due to review voting would increase the risk of committing a type II error, a risk that cannot be mitigated with a larger sample. We discuss the implications of the findings for research and practice.
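For readers unfamiliar with the sample selection model invoked above, the sketch below illustrates a standard two-step (Heckman-type) estimator: a probit selection equation for whether a review receives any votes, followed by an outcome equation for helpfulness that includes the inverse Mills ratio from the first step. The variable names, the exclusion restriction, and the data file are hypothetical, and the paper's actual specification may differ.

```python
# Minimal sketch (not the authors' code) of a two-step sample selection model:
# probit selection (review voted or not), then an OLS outcome equation
# (review helpfulness) augmented with the inverse Mills ratio from step 1.
# All variable names and the data file are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("amazon_reviews.csv")

# Step 1: probit for selection (did the review receive any votes?)
# review_age is a hypothetical exclusion restriction: it affects exposure
# (voting) but is assumed not to affect perceived helpfulness directly.
sel_vars = ["review_length", "readability", "valence", "extremity",
            "reviewer_credibility", "review_age"]
X_sel = sm.add_constant(df[sel_vars])
probit = sm.Probit(df["voted"], X_sel).fit()

# Inverse Mills ratio from the probit linear predictor
xb = X_sel.dot(probit.params)
df["imr"] = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation on voted reviews only, with the IMR as a regressor
# (standard errors here are not corrected for the generated regressor).
voted = df[df["voted"] == 1]
out_vars = ["review_length", "readability", "valence", "extremity",
            "reviewer_credibility", "imr"]
X_out = sm.add_constant(voted[out_vars])
ols = sm.OLS(voted["helpfulness"], X_out).fit()
print(ols.summary())
```

A significant coefficient on the inverse Mills ratio in step 2 is the usual signal that ignoring the voting (selection) stage would bias estimates of the review characteristics' effects on helpfulness, which is the methodological point the abstract emphasizes.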