Corporate social responsibility (CSR) has been treated as an instrument to differentiate firms in a competitive market. However, because CSR is a credence good, firms can only signal quality along this dimension through advertising or labeling. These signaling mechanisms may be exploited by dishonest firms that claim to be green ("greenwashing"). Many critics argue that greenwashing should be regulated because it deceives the market and discourages firms from going genuinely green. In this article, instead of focusing on the ethical side of this issue, we explore the market outcome from an economic perspective. We show that regulating greenwashing does not necessarily increase the positive environmental externality of green products. In particular, even when greenwashing is regulated, firms may not act green if the additional CSR cost is too high or if the corresponding CSR issue is not sufficiently important. On the other hand, we find that allowing greenwashing may incentivize some firms to go genuinely green as long as there are some informed customers in the market.
Crowdsourcing relies on online platforms to connect a community of users to perform specific tasks. However, without appropriate control, the behavior of the online community may not align with the platform's designed objective, which can lead to inferior platform performance. This paper investigates how feedback information on a crowdsourcing platform and the systematic bias of crowdsourcing workers affect crowdsourcing outcomes. Specifically, using archival data from the online crowdsourcing platform Kaggle, combined with survey data from actual Kaggle contest participants, we examine the role of a systematic bias, namely the salience bias, in influencing the performance of crowdsourcing workers, and how the number of workers moderates the impact of the salience bias through the parallel path effect and the competition effect. Our results suggest that the salience bias influences the performance of contestants, including the winners of the contests. Furthermore, the parallel path effect cannot completely eliminate the impact of the salience bias, but it can attenuate it to a certain extent. By contrast, the competition effect is likely to amplify the impact of the salience bias. Our results have critical implications for crowdsourcing firms and platform designers.
Crowdsourcing is a new way for online crowds to become involved in a company's research and development process. Businesses can host public contests on online platforms (such as Kaggle, Topcoder, and Tongal) to seek new product ideas and technological solutions. In contest communities, members usually have a "coopetitive" relationship: they compete against each other for the contest prize while also cooperating by sharing information and knowledge. This work investigates the effect of knowledge sharing in such crowdsourcing contests. Surprisingly, we find that knowledge sharing does not always improve contestants' performance. Its effectiveness is influenced by the volume, quality, and generativity of the shared knowledge. Shared knowledge is beneficial only when it is of high quality or when it has high potential to be further developed collectively by the community. Meanwhile, the development process has to be divergent; narrowing development in a single direction can restrict community creativity and negatively affect crowdsourcing performance. Our work cautions crowdsourcing practitioners to be careful when enabling collaboration, such as knowledge sharing, within a contest community.
Innovative forecasting methods using new data sources have been developed to address various problems in operations management, such as demand, sales, and event forecasting. One such method for forecasting events is the prediction market, in which participants take financial positions that generate returns depending on whether certain events occur. Results in experimental psychology and behavioral economics have shown that individuals, including experts, can be subject to judgment bias when estimating the probability of future events. In this study, we examine whether prediction markets are immune to such bias. We find that even with large numbers of transactions and high trading volumes, probabilistic fallacies still occur. Moreover, when they occur, they tend to persist over time and to arise in situations similar to those in which individual probabilistic fallacies have been reported. Our results have implications for the design of prediction markets and, at the same time, call for caution when using forecasts generated this way.