When a product of uncertain quality is first introduced, consumers may be enticed to strategically delay their purchasing decisions in anticipation of the product reviews of their peers. This paper investigates how the presence of social learning interacts with the adoption decisions of strategic consumers and the dynamic-pricing decisions of a monopolist firm, within a simple two-period model. When the firm commits to a price path ex ante (pre-announced pricing), we show that the presence of social learning increases the firm's ex ante expected profit, despite the fact that it exacerbates consumers' tendency to strategically delay their purchase. In contrast to the price-skimming policy that is always optimal in the absence of social learning, we find that, for most model parameters, the firm will announce an increasing price plan. When the firm does not commit to a price path ex ante (responsive pricing), interestingly, the presence of social learning has no effect on strategic purchasing delays. Under this pricing regime, social learning remains beneficial for the firm, and prices may either rise or decline over time, with the latter being ex ante more likely. Furthermore, we illustrate that, contrary to results reported in existing literature, in settings characterized by social learning, price commitment is generally not beneficial for a firm facing strategic consumers.
Motivated by the proliferation of online platforms that collect and disseminate consumers' experiences with alternative substitutable products/services, we investigate the problem of optimal information provision when the goal is to maximize aggregate consumer surplus. We develop a decentralized multi-armed bandit framework in which a forward-looking principal (the platform designer) commits upfront to a policy that dynamically discloses information regarding the history of outcomes to a series of short-lived rational agents (the consumers). We demonstrate that consumer surplus is non-monotone in the accuracy of the designer's information-provision policy. Because consumers are constantly in "exploitation" mode, policies that disclose accurate information on past outcomes suffer from inadequate "exploration." We illustrate how the designer can (partially) alleviate this inefficiency by employing a policy that strategically obfuscates the information in the platform's possession; interestingly, such a policy is beneficial despite the fact that consumers are aware of both the designer's objective and the precise way in which information is disclosed to them. More generally, we show that the optimal information-provision policy can be obtained as the solution of a large-scale linear program. Noting that such a solution is typically intractable, we use our structural findings to design an intuitive heuristic that underscores the value of information obfuscation in decentralized learning. We further highlight that obfuscation remains beneficial even if the designer can directly incentivize consumers to explore through monetary payments.

At any time, the history of service outcomes (i.e., the system state x_t) is not directly observable to the consumers. Instead, there is a platform designer who commits upfront to a "messaging policy" that acts as an instrument of information provision to the consumers.
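The under-exploration effect described in the abstract can be illustrated with a minimal simulation. This is a sketch under illustrative assumptions, not the paper's model: two service providers with unknown Bernoulli qualities, uniform Beta(1, 1) priors, and a sequence of myopic consumers who, under full disclosure of past outcomes, each select the provider with the higher posterior mean quality.

```python
import random

def simulate(rounds=500, q=(0.4, 0.8), seed=0):
    """Myopic consumers under full information disclosure.

    Each consumer picks the provider with the higher posterior mean
    (s + 1) / (s + f + 2), where s and f count observed successes and
    failures; q holds the (unknown to consumers) true success rates.
    Returns how many times each provider was chosen.
    """
    random.seed(seed)
    s = [0, 0]  # successes per provider
    f = [0, 0]  # failures per provider
    picks = [0, 0]
    for _ in range(rounds):
        means = [(s[i] + 1) / (s[i] + f[i] + 2) for i in range(2)]
        i = 0 if means[0] >= means[1] else 1  # pure exploitation
        picks[i] += 1
        if random.random() < q[i]:
            s[i] += 1
        else:
            f[i] += 1
    return picks

# With unlucky early outcomes, the crowd can herd on the worse provider:
# once an arm falls behind in posterior mean, it is never tried again.
print(simulate())
```

Because no consumer ever deliberately explores, the fate of the better provider hinges on its first few realized outcomes; this is the inefficiency that strategic obfuscation is meant to mitigate.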
This policy specifies the message that is displayed on the platform, given any underlying system state; in §7.2, we extend

4. The general analysis in §6 can be readily extended to the case of |S| > 2 providers.
5. The probability density function of a Beta(s, f) random variable is given by g(x; s, f) = x^(s−1)(1−x)^(f−1)/B(s, f), for x ∈ [0, 1].
6. The platform and the consumers hold the same prior belief, so that platform actions (e.g., choice of information-provision policy) do not convey any additional information on provider quality to the consumers (e.g., Bergemann and Välimäki 1997, Bose et al. 2006, Papanastasiou and Savva 2016).
7. Commitment is a reasonable assumption in the context of online platforms, where information provision occurs on the basis of pre-decided algorithms and the large volume of products/services hosted renders ad-hoc adjustments of the automatically-generated content prohibitively costly (see also §5.4, where this assumption is relaxed).
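As a quick sanity check of the Beta density in footnote 5, the following minimal Python sketch evaluates g(x; s, f) using the standard identity B(s, f) = Γ(s)Γ(f)/Γ(s + f); the function name is illustrative.

```python
from math import gamma

def beta_pdf(x, s, f):
    """Density of a Beta(s, f) random variable at x in [0, 1].

    Uses B(s, f) = Gamma(s) * Gamma(f) / Gamma(s + f).
    """
    b = gamma(s) * gamma(f) / gamma(s + f)
    return x ** (s - 1) * (1 - x) ** (f - 1) / b

# Beta(1, 1) is the uniform distribution: density 1 everywhere on [0, 1].
print(beta_pdf(0.3, 1, 1))  # 1.0
# One observed success and one failure: Beta(2, 2), peaked at x = 0.5.
print(beta_pdf(0.5, 2, 2))  # 1.5
```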
Consumers often consult the reviews of their peers before deciding whether to purchase a new experience good; however, their initial quality expectations are typically set by the product's observable attributes. This paper focuses on the implications of social learning for a monopolist firm's choice of product design. In our model, the firm's design choice determines the product's ex ante expected quality, and designs associated with (stochastically) higher quality incur higher costs of production. Consumers are forward-looking social learners, and may choose to strategically delay their purchase in anticipation of product reviews. In this setting, we find that the firm's optimal policy differs significantly depending on the level of the ex ante quality uncertainty surrounding the product. In comparison to the case where there is no social learning, we show that (i) when the uncertainty is relatively low, the firm opts for a product of inferior design accompanied by a lower price, while (ii) when the uncertainty is high, the firm chooses a product of superior design accompanied by a higher price; interestingly, we find that the product's expected quality decreases either in the absolute sense (in the former case) or relative to the product's price (in the latter case). We further establish that, contrary to conventional wisdom, social learning can have an ex ante negative impact on the firm's profit, in particular when the consumers are sufficiently forward-looking. Conversely, we find that the presence of social learning tends to be beneficial for the consumers only when they are sufficiently forward-looking.
In the wake of the 2016 U.S. presidential election, social-media platforms are facing increasing pressure to combat the propagation of “fake news” (i.e., articles whose content is fabricated). Motivated by recent attempts in this direction, we consider the problem faced by a social-media platform that is observing the sharing actions of a sequence of rational agents and is dynamically choosing whether to conduct an inspection (i.e., a “fact-check”) of an article whose validity is ex ante unknown. We first characterize the agents’ inspection and sharing actions and establish that, in the absence of any platform intervention, the agents’ news-sharing process is prone to the proliferation of fabricated content, even when the agents are intent on sharing only truthful news. We then study the platform’s inspection problem. We find that because the optimal policy is adapted to crowdsource inspection from the agents, it exhibits features that may appear a priori nonobvious; most notably, we show that the optimal inspection policy is nonmonotone in the ex ante probability that the article being shared is fake. We also investigate the effectiveness of the platform’s policy in mitigating the detrimental impact of fake news on the agents’ learning environment. We demonstrate that in environments characterized by a low (high) prevalence of fake news, the platform’s policy is more effective when the rewards it collects from content sharing are low relative to the penalties it incurs from the sharing of fake news (when the rewards it collects from content sharing are high in absolute terms). This paper was accepted by Vishal Gaur, operations management.