Facebook has become an important platform for news publishers to promote their work and engage with their readers. Some news pages on Facebook have a reputation for consistently low factualness in their reporting, and there is concern that Facebook allows their misinformation to reach large audiences. To date, there is remarkably little empirical data about how often users "like," comment on, and share content from news pages on Facebook, how user engagement compares between sources that have a reputation for misinformation and those that do not, and how the political leaning of the source affects the equation. In this work, we propose a methodology to generate a list of news publishers' official Facebook pages annotated with their partisanship and (mis)information status based on third-party evaluations, and we collect engagement data for the 7.5M posts that 2,551 U.S. news publishers made on their pages during the 2020 U.S. presidential election. We propose three metrics to study engagement (1) across the Facebook news ecosystem, (2) between (mis)information providers and their audiences, and (3) with individual pieces of content from (mis)information providers. Our results show that misinformation news sources receive widespread engagement on Facebook, accounting for 68.1% of all engagement with far-right news providers and 37.7% of engagement on the far left. Individual posts from misinformation news providers also receive consistently higher median engagement than posts from non-misinformation providers in every partisanship group. While most prevalent on the far right, misinformation appears to be an issue across the political spectrum.
CCS Concepts: • Security and privacy → Social aspects of security and privacy; • Information systems → Social networks.
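To make the three metrics concrete, here is a minimal sketch of how they could be computed over a table of post-level engagement data. The column names (engagement, partisanship, is_misinformation, page_id) are illustrative assumptions, not the paper's actual schema.

```python
# Hedged sketch of the three engagement metrics described above,
# assuming a pandas DataFrame with one row per Facebook post.
import pandas as pd

def ecosystem_share(posts: pd.DataFrame) -> pd.Series:
    """Metric 1: share of total engagement captured by (mis)information
    sources within each partisanship group of the news ecosystem."""
    totals = posts.groupby(["partisanship", "is_misinformation"])["engagement"].sum()
    return totals / totals.groupby(level="partisanship").transform("sum")

def per_source_engagement(posts: pd.DataFrame) -> pd.Series:
    """Metric 2: total engagement accrued by each provider's page,
    i.e. engagement between a provider and its audience."""
    return posts.groupby("page_id")["engagement"].sum()

def per_post_median(posts: pd.DataFrame) -> pd.Series:
    """Metric 3: median engagement of individual pieces of content,
    split by partisanship and (mis)information status."""
    return posts.groupby(["partisanship", "is_misinformation"])["engagement"].median()
```

Medians (rather than means) for per-post engagement keep the comparison robust to the small number of viral posts that dominate engagement totals.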
Actors engaged in election disinformation are using online advertising platforms to spread political messages. In response to this threat, online advertising networks have begun making political advertising on their platforms more transparent so that third parties can detect malicious advertisers. We present a set of methodologies and perform a security analysis of Facebook's U.S. Ad Library, the company's political advertising transparency product. Unfortunately, we find several weaknesses that enable a malicious advertiser to avoid accurate disclosure of their political ads. We also propose a clustering-based method to detect advertisers engaged in undeclared coordinated activity. Our clustering method identified 16 clusters of likely inauthentic communities that spent a total of over four million dollars on political advertising. This supports the idea that transparency could be a promising tool for combating disinformation. Finally, based on our findings, we make recommendations for improving the security of advertising transparency on Facebook and other platforms.
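The abstract does not spell out the clustering method, but one plausible sketch is to link advertiser pages that share identical ad creatives or the same funding entity and treat connected components as candidate coordinated communities. The field names below (page_id, creative_text, funding_entity) are assumptions for illustration, not the Ad Library's actual schema.

```python
# A minimal, assumption-laden sketch of coordination detection:
# pages sharing a creative or a funder end up in the same component.
import networkx as nx

def cluster_advertisers(ads):
    """ads: iterable of dicts with keys page_id, creative_text, funding_entity."""
    graph = nx.Graph()
    by_creative, by_funder = {}, {}
    for ad in ads:
        graph.add_node(ad["page_id"])
        by_creative.setdefault(ad["creative_text"], set()).add(ad["page_id"])
        by_funder.setdefault(ad["funding_entity"], set()).add(ad["page_id"])
    for groups in (by_creative, by_funder):
        for pages in groups.values():
            pages = sorted(pages)
            # A chain of edges is enough to place the whole group in one
            # component without adding a quadratic number of edges.
            graph.add_edges_from(zip(pages, pages[1:]))
    # Components with more than one page are candidate coordinated clusters.
    return [c for c in nx.connected_components(graph) if len(c) > 1]
```

In practice one would likely use fuzzy creative matching and additional shared attributes rather than exact string equality, which is used here only to keep the sketch short.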
Social media networks commonly employ content moderation as a tool to limit the spread of harmful content. However, the efficacy of this strategy in limiting the delivery of harmful content to users is not well understood. In this paper, we create a framework to quantify the efficacy of content moderation and use our metrics to analyze content removal on Facebook within the U.S. news ecosystem. In a data set of over 2M posts with 1.6B user engagements collected from 2,551 U.S. news sources before and during the Capitol Riot on January 6, 2021, we identify 10,811 removed posts. We find that the active engagement life cycle of Facebook posts is very short, with 90% of all engagement occurring within the first 30 hours after posting. Thus, even relatively quick intervention allowed posts to accrue significant engagement before removal: during a baseline period before the U.S. Capitol attack, removals prevented only 21% of posts' predicted engagement potential. Nearly a week after the attack, Facebook began removing older content, but these removals occurred so late in the posts' engagement life cycles that they disrupted less than 1% of predicted future engagement, highlighting the limited impact of this intervention. Content moderation likely has limits in its ability to prevent engagement, especially in a crisis, and we recommend investigating other approaches, such as slowing the rate of content diffusion.
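A hedged sketch of the life-cycle accounting described above: given an empirical curve of how engagement accrues with post age (e.g., 90% within the first 30 hours), one can estimate how much of a post's predicted engagement a removal at a given age actually prevents. The curve values below are illustrative placeholders consistent with the 30-hour figure, not the paper's fitted model.

```python
# Sketch of removal-efficacy accounting, assuming an empirical CDF of
# engagement accrual by post age. Only the 30h/90% point is from the
# abstract; all other values are made-up placeholders.
import numpy as np

AGE_HOURS = np.array([0, 1, 6, 12, 30, 72, 168])   # post age in hours
ACCRUED = np.array([0.0, 0.35, 0.65, 0.80, 0.90, 0.97, 1.0])  # cumulative share

def prevented_fraction(removal_age_hours: float) -> float:
    """Share of a post's predicted total engagement that removal at the
    given post age prevents, i.e. the engagement not yet accrued."""
    accrued = np.interp(removal_age_hours, AGE_HOURS, ACCRUED)
    return 1.0 - accrued

# Removal a week (168h) after posting prevents essentially nothing,
# while removal within the first hour prevents ~65% under this curve.
print(prevented_fraction(168.0), prevented_fraction(1.0))
```

This framing makes the paper's headline numbers intuitive: because the accrual curve saturates so quickly, late removals sit on its flat tail and disrupt almost no remaining engagement.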