We study the efficiency of bilateral liability-based contracts in managed security services (MSSs). We model MSS as a collaborative service whose protection quality is shaped by the contributions of both the service provider and the client. Adopting the negligence concept from the legal profession, we design two novel contracts: a threshold-based liability contract and a variable liability contract. We find that both can achieve the first-best outcome when post-breach effort verification is feasible. More importantly, they are more efficient than a multilateral contract when the MSS provider assumes limited liability. Our results demonstrate that bilateral liability-based contracts are viable in practice, motivating further research into their properties. We discuss the related implications. The online appendix is available at https://doi.org/10.1287/isre.2018.0806.
Online platforms are experimenting with interventions such as content screening to moderate the effects of fake, biased, and inflammatory content. Yet platforms face an operational challenge in deploying machine learning algorithms for content management because of the labeling problem: labeled data for model training are limited and costly to obtain. To address this issue, we propose a domain-adaptive transfer learning approach, based on adversarial training, that augments fake content detection with collective human intelligence. We start with a source-domain dataset of deceptive and trustworthy general news, constructed from a large collection of news sources labeled according to human judgments and opinions. We then use advanced deep learning models to extract discriminating linguistic features commonly found in source-domain news. We transfer these source-domain features to augment fake content detection in three target domains: political news, financial news, and online reviews. We show that domain-invariant linguistic features learned from a source domain with abundant labeled examples can effectively improve fake content detection in a target domain with few or highly imbalanced labeled data. We further show that these linguistic features offer the most value when the transferability between the source and target domains is relatively high. Our study sheds light on how platforms can manage online content and allocate resources when applying machine learning to fake content detection. We also outline a modular architecture that can be adopted to develop content screening tools across a wide spectrum of fields.
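The core mechanism behind adversarial domain adaptation of this kind can be sketched as follows: a shared feature extractor is trained to support the label classifier on the source domain while simultaneously fooling a domain classifier, so that the learned features become domain-invariant. The sketch below is a minimal toy illustration of that gradient-reversal idea under strong simplifying assumptions (a linear feature map, logistic heads, and synthetic data); it is not the paper's architecture, and every name and hyperparameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a labeled source domain and an unlabeled, shifted target domain.
Xs = rng.normal(0.0, 1.0, (200, 2))      # source examples
ys = (Xs[:, 0] > 0).astype(float)        # source labels (e.g. deceptive vs. trustworthy)
Xt = rng.normal(0.5, 1.0, (200, 2))      # target examples (distribution shift)

W = rng.normal(0, 0.1, (2, 2))           # shared feature extractor (linear here)
u = rng.normal(0, 0.1, 2)                # label-classifier head
v = rng.normal(0, 0.1, 2)                # domain-classifier head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.1, 0.1                       # lam scales the reversed domain gradient
for _ in range(300):
    Fs, Ft = Xs @ W, Xt @ W              # shared features for both domains
    # Label loss: logistic regression on source examples only.
    p = sigmoid(Fs @ u)
    g_u = Fs.T @ (p - ys) / len(ys)
    gW_label = Xs.T @ np.outer(p - ys, u) / len(ys)
    # Domain loss: distinguish source (0) from target (1) features.
    F = np.vstack([Fs, Ft])
    d = np.concatenate([np.zeros(len(Fs)), np.ones(len(Ft))])
    q = sigmoid(F @ v)
    g_v = F.T @ (q - d) / len(d)
    gW_dom = np.vstack([Xs, Xt]).T @ np.outer(q - d, v) / len(d)
    # Updates: both heads descend their own losses, but the shared extractor
    # ASCENDS the domain loss -- the "gradient reversal" that pushes features
    # toward domain invariance.
    u -= lr * g_u
    v -= lr * g_v
    W -= lr * (gW_label - lam * gW_dom)

# Source accuracy of the label classifier on the learned shared features.
acc = (((Xs @ W @ u) > 0) == (ys > 0.5)).mean()
```

In a full implementation the linear map `W` would be a deep network and the heads would be trained on minibatches, but the opposing update signs on the domain term are the essential ingredient.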