2022
DOI: 10.1145/3555562
The Effects of AI-based Credibility Indicators on the Detection and Spread of Misinformation under Social Influence

Abstract: Misinformation on social media has become a serious concern. Marking news stories with credibility indicators, possibly generated by an AI model, is one way to help people combat misinformation. In this paper, we report the results of two randomized experiments that aim to understand the effects of AI-based credibility indicators on people's perceptions of and engagement with the news, when people are under social influence such that their judgement of the news is influenced by other people. We find that the p…

Cited by 17 publications (3 citation statements)
References 37 publications
“…This line of inquiry is important because it might provide a possible explanation for the mixed findings concerning the effects of fact-checking labels, in that people's evaluation of labels could affect the accuracy evaluation of, or engagement with, posts containing misinformation. As fact-checking labels on social media are provided by various sources, ranging from institutions such as independent fact checkers (e.g., PolitiFact, Snopes) and news organizations to general social media users to algorithms (Lu et al, 2022; Seo et al, 2019; Yaqub et al, 2020), we examined people's perception of the effectiveness of fact-checking labels based on four different sources: (a) third-party fact checkers, (b) news organizations, (c) algorithms, and (d) social media users (i.e., crowdsourcing or community labels). We asked participants to rate their perceived efficacy of each fact-checking label after showing them a visual example of how social media platforms label posts containing misleading or inaccurate information, so that they could understand what we meant by fact-checking labels.…”
Section: Discussion
confidence: 99%
“…A widely adopted debunking approach is to apply warning tags, labels, or indicators during the misinformation presentation, after fact-checking by professional organizations or artificial intelligence (AI). Empirical user studies reveal that those warnings are generally effective in reducing participants' belief in misinformation (Clayton et al 2020; Yaqub et al 2020; Jia et al 2022; Kreps and Kriner 2022; Lu et al 2022). Yet, the efficacy of the warnings can be impacted by factors such as warning specificity (e.g., general warnings introduce bias, reducing belief in real news), warning design (e.g., simple and precise warning language), the source of the warnings (e.g., fact checker and community), and extra fact-checking details.…”
Section: Misinformation and Its Correction
confidence: 99%
“…However, they are costly and ethically challenging, require hiring fluent speakers for each language, and are difficult to scale sufficiently to typical volumes of user-generated content. Purely automated solutions have remained on the horizon, as they struggle with evaluating facticity and have reduced impact on sharing behavior in the presence of social influence (Lu et al, 2022).…”
Section: Introduction
confidence: 99%