2019
DOI: 10.1093/jcmc/zmz012

Flagging Facebook Falsehoods: Self-Identified Humor Warnings Outperform Fact Checker and Peer Warnings

Abstract: We present two studies evaluating the effectiveness of flagging inaccurate political posts on social media. In Study 1, we tested fact-checker flags, peer-generated flags, and a flag indicating that the publisher self-identified as a source of humor. We predicted that all would be effective, that their effectiveness would depend on prior beliefs, and that the self-identified humor flag would work best. Conducting a 2-wave online experiment (N = 218), we found that self-identified humor flags were most effective…

Cited by 43 publications (38 citation statements)
References 38 publications
“…Attending to the corrective image reduced people's perceptions of credibility of the misinformation and, indirectly, reduced misperceptions. Other research on Facebook has shown that fake news from a source that self-identifies as a satirical outlet can potentially reduce misperceptions by reducing perceptions of credibility (77).…”
Section: Funny Science: How Humor Influences Science Attitudes
mentioning
confidence: 99%
“…Two major traditions have been applied to understanding how best to address misinformation, which differ on one key factor—whether the counteractive facts come before or after the misinformation. Inoculation research largely tests preemptive interventions, administered before exposure to misinformation (Banas and Rains 2010; McGuire and Papageorgis 1961), whereas correction research usually tests disputing misinformation after exposure to misinformation (e.g., Bode and Vraga 2015; Garrett and Poulsen 2019; Nyhan and Reifler 2010). These two approaches have largely been studied in isolation (but see Bolsen and Druckman 2015), limiting the ability to compare their effectiveness as a strategy.…”
Section: Literature Review
mentioning
confidence: 99%
“…The bulk of this research has focused on text-based misinformation and correction strategies, limited to platforms such as Facebook and Twitter (Bode and Vraga 2015; Clayton et al 2019; Garrett and Poulsen 2019; Smith and Seitz 2019; Vraga and Bode 2018). Visual misinformation may represent a different paradigm, requiring unique strategies to debunk.…”
mentioning
confidence: 99%
“…In popular applications of crowdsourcing [18] in misinformation detection [1][2][3][4][5][6][7], communication among human coders is frequently assumed to spread inaccurate judgments. Here we show that communication in online social networks systematically improves both individual and group judgments of news veracity.…”
Section: Discussion
mentioning
confidence: 99%
“…Finally, we measured whether peer networks shaped subjects' trust in online content, which has been identified as a key source of partisan differences [1][2][3][4][5][6]. In the binary condition, subjects indicated trust by voting "yes" when asked whether a news item is true.…”
Section: Methods
mentioning
confidence: 99%
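
The two excerpts above describe binary crowd judgments of news veracity aggregated across subjects. As a purely illustrative sketch (not the cited studies' procedure; the toy data and function names below are hypothetical), a simple majority rule over binary "is this news item true?" votes, scored against assumed ground-truth labels, looks like this in Python:

from collections import Counter

def group_judgment(votes):
    """Majority vote over binary judgments (True = 'item is true')."""
    tally = Counter(votes)
    return tally[True] >= tally[False]  # ties default to True in this sketch

def accuracy(judgments, truth):
    """Fraction of judgments matching the ground-truth labels."""
    return sum(j == t for j, t in zip(judgments, truth)) / len(truth)

# Toy data: 5 subjects vote on 4 news items (rows = subjects, columns = items).
votes_by_subject = [
    [True,  False, True,  False],
    [True,  True,  True,  False],
    [False, False, True,  False],
    [True,  False, False, False],
    [True,  False, True,  True],
]
truth = [True, False, True, False]  # assumed ground-truth veracity labels

group = [group_judgment(col) for col in zip(*votes_by_subject)]
mean_individual = sum(accuracy(row, truth) for row in votes_by_subject) / len(votes_by_subject)
print(f"group accuracy:           {accuracy(group, truth):.2f}")   # 1.00
print(f"mean individual accuracy: {mean_individual:.2f}")          # 0.80

In this toy run the aggregated group judgment outperforms the average individual, a wisdom-of-crowds pattern related to, though simpler than, the communication effects the cited work reports.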