Launched in 2009 and later becoming the most popular microblogging service in China, Weibo once conducted a bold experiment to address the unprecedented challenge of large-scale inappropriate content on its platform. Imitating common-law jury institutions, it established a community committee system to involve ordinary users in the content moderation process. However, this innovation with democratic components was later reversed and transformed into a more platform-centric design in which users played only an assistive role. This paper traces all available policy documents and web archives to map out how Weibo's community governance has evolved since 2012 from a semi-autonomous system into a rigidly controlled model. This paper suggests that a democratic content moderation system cannot endure external pressure on the social media company and cannot last without the solid empowerment of users.
While crowdsourcing approaches in content moderation systems increase the governance capacity of social media, they also offer a loophole for malicious users to mass-report and restrict content they dislike. To fill the knowledge gap about large-scale, bottom-up attempts at restraining online expression, we focus on a type of public and institutionalized mass reporting: anti-smear (反黑) campaigns within Chinese online fandom communities, in which fans coordinate to collectively report content they perceive as inappropriate. Based on detailed data on more than two hundred anti-smear groups collected from Weibo and interviews with active participants, our paper examines the motives and dynamics of anti-smear campaigns, the coordination strategies used to game the content moderation system, and the diffusion of anti-smear culture across fandom networks. We argue that anti-smear is essentially a practice of information control and reflects an intolerant mindset among social media users towards dissidents. This paper also points out the vulnerability of community-based content moderation systems to weaponization in a polarized age, which poses great challenges to platform governance.
The tension between the increasing need for fact-checking and the limited capacity of fact-check providers has inspired several crowdsourced approaches to address this challenge. However, little is known about how effectively crowdsourced fact-checking performs, and there is no comprehensive framework to evaluate such fact-check providers. We fill this gap by proposing such a framework, using four dimensions (Variety, Velocity, Veracity, and Viability) to assess and compare the contributions of a crowdsourced fact-checking community and professional fact-checking sites. Our analysis shows the different focuses these two types of sites have in terms of topic coverage (variety) and demonstrates that while crowdsourced fact-checkers answer new requests much faster than professionals (velocity), they often build on existing professional knowledge for repeated requests. In addition, our findings indicate that the accuracy of the crowdsourced community (veracity) parallels that of the professional sources, and that crowdsourced fact-checks are perceived as close to professional ones in terms of objectivity, clarity, and persuasiveness (viability).
Social media companies constantly experiment with different platform governance models to meet content moderation challenges. This calls for a comprehensive and empirical understanding of how content moderation systems evolve and operate on major social media platforms over a long time scale. This study aims to fill this gap with a quantitative and qualitative review of Weibo's community-driven content moderation system, examining three essential actors in the moderation pipeline using eleven million public moderation cases and decision data from 2012 to 2021. We suggest that the Weibo authority weighs socially sensitive cases more heavily than uncivil behavior and leverages jury votes to endorse its final decisions. While Weibo imitates judicial systems, we use a natural experiment on jury-verdict cases to argue that the deterrence effect of Weibo's moderation is questionable and inconsequential. We argue that digital jurors usually start out dedicated but soon lose interest; nevertheless, their votes still reveal mild conflicts in public opinion and indicate that there are no stable majority groups on social media. Also, users who frequently file reports show a pattern of voluntarily policing the community or trolling others, sometimes in coordination, and reporting is a contagious behavior that may be associated with retaliation. Our study extensively scrutinizes Weibo's experimental self-governance model and offers a basis for further studies as well as important insights for future adopters.
Astroturfing, or the simulation of grassroots consensus, is a common component of political propaganda on social media. Previous research on Chinese propaganda has found a complex astroturfing system behind the Great Firewall, but we know little about the corresponding strategies overseas. Here we use machine learning to identify over 18,000 Chinese astroturf accounts, both human- and bot-run, that spread pro-state political propaganda on Twitter. In contrast to internal propaganda, these astroturf accounts focus on internally censored topics and are preoccupied with the character assassination of critics. Despite the resources spent on the task, the group is remarkably ineffective: its content reaches very few people and results in no chilling effects. This study demonstrates the limitations of authoritarian governments in manipulating opinion online.