Social media companies constantly experiment with different platform governance models to meet content moderation challenges. This calls for a comprehensive, empirical understanding of how content moderation systems evolve and operate on major social media platforms over a long time scale. This study fills that gap with a quantitative and qualitative review of Weibo's community-driven content moderation system, examining three essential actors in the moderation pipeline through eleven million public moderation cases and decision records from 2012 to 2021. We suggest that the Weibo authority weighs socially sensitive cases more heavily than insolent behavior and leverages jury votes to endorse its final decisions. Although Weibo imitates judicial systems, we use a natural experiment on jury-verdict cases to argue that the deterrence effect of Weibo's moderation is questionable and inconsequential. We argue that digital jurors usually start out dedicated but soon lose interest; nevertheless, their votes reveal mild conflicts in public opinion and indicate that there are no stable majority groups on social media. In addition, users who frequently file reports show a pattern of voluntarily policing the community or trolling others, sometimes in a coordinated manner, and reporting is a contagious behavior that may be associated with retaliation. Our study extensively scrutinizes Weibo's experimental self-governance model, offering a basis for further research and important insights for future adopters of similar models.