2022
DOI: 10.1057/s41599-022-01174-9
The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions

Abstract: Not all misinformation is created equal. It can adopt many different forms like conspiracy theories, fake news, junk science, or rumors among others. However, most of the existing research does not account for these differences. This paper explores the characteristics of misinformation content compared to factual news—the “fingerprints of misinformation”—using 92,112 news articles classified into several categories: clickbait, conspiracy theories, fake news, hate speech, junk science, and rumors. These misinfo…
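The two “fingerprints” named in the title, cognitive effort and appeal to emotions, are commonly operationalized as readability and sentiment scores. The sketch below illustrates that idea under stated assumptions: it uses textstat's Flesch reading ease as a cognitive-effort proxy and NLTK's VADER compound score as an emotion proxy. These library choices are illustrative assumptions, not necessarily the paper's exact pipeline.

```python
# Minimal sketch: per-article proxies for the two "fingerprints".
# Assumptions: Flesch reading ease stands in for cognitive effort,
# VADER sentiment stands in for appeal to emotions.
import textstat
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon needed by VADER

def fingerprints(text: str) -> dict:
    """Return crude cognitive-effort and emotion proxies for one article."""
    sia = SentimentIntensityAnalyzer()
    return {
        # Higher Flesch score = easier to read = lower cognitive effort.
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        # Compound sentiment in [-1, 1]; larger magnitude = more emotional.
        "sentiment_compound": sia.polarity_scores(text)["compound"],
    }

print(fingerprints("SHOCKING! You won't believe this outrageous cover-up!"))
```

On this reading, misinformation would be expected to score higher on reading ease (less cognitive effort demanded) and further from zero on sentiment (stronger emotional appeal) than reliable news.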

Cited by 41 publications (19 citation statements)
References 107 publications
“…However, focusing on the manipulation techniques and rhetorical strategies that underpin misinformation may significantly improve the potential scalability of inoculation interventions (17). The advantage of this approach is that, although it can be difficult to establish what is and what is not a fact (1), different examples of misinformation often make use of the same underlying tropes (18, 19). These tropes, which include manipulation techniques such as logical fallacies (20) and emotionally manipulative language (18), can be analyzed and used for inoculation without prior knowledge of specific misleading content, thus potentially providing broad resilience against social media or news content that draws on one or more of the techniques that someone has been inoculated against.…”
Section: Introduction (mentioning)
confidence: 99%
“…We created a series of short inoculation videos covering five manipulation techniques commonly encountered in online misinformation. These five techniques were taken from the broader literature on argumentation and manipulation strategies and are consistently identified as epistemologically dubious: (i) using emotionally manipulative rhetoric to evoke outrage, anger, or other strong emotions (18, 22), (ii) the use of incoherent or mutually exclusive arguments (23), (iii) presenting false dichotomies or dilemmas (24), (iv) scapegoating individuals or groups (25), and (v) engaging in ad hominem attacks (19, 26).…”
Section: Introduction (mentioning)
confidence: 99%
“…We review the related work on misinformation concerning our main contributions to (i) the characterization of content spread by this phenomenon and (ii) the models developed to detect such content. In this work, we adopt the definition of misinformation presented in [46], that is, an umbrella term covering all false or inaccurate information that is spread online, such as rumors, clickbait, or fake news, among others [5, 23, 46], whether intentionally or unintentionally propagated. Moreover, we only consider content-based classification models, which rely exclusively on textual data from various misleading online sources, such as web articles or social media posts, supporting content pre-bunking [12].…”
Section: Related Work (mentioning)
confidence: 99%
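To make the “content-based classification models” referenced above concrete (models whose features come exclusively from the text itself), here is a minimal sketch using a TF-IDF plus logistic-regression pipeline in scikit-learn. The toy texts, labels, and model choice are assumptions for illustration, not the cited works' actual method or data.

```python
# Illustrative content-based misinformation classifier.
# Assumptions: toy corpus, binary labels, TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: article text paired with a reliable/misinformation label.
texts = [
    "Scientists publish peer-reviewed findings on vaccine efficacy.",
    "Researchers report modest gains in renewable energy storage.",
    "SHOCKING secret THEY don't want you to know about vaccines!!!",
    "Miracle cure EXPOSED: doctors hate this one weird trick!",
]
labels = ["reliable", "reliable", "misinformation", "misinformation"]

# Content-based: features are derived only from the text. Word n-grams
# capture lexical and stylistic cues such as emotionally charged wording.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["You won't BELIEVE what happened next!"]))
```

Because such a model never consults source metadata or propagation patterns, it can be applied to unseen text before it spreads, which is what makes content-based approaches suitable for the pre-bunking setting the quoted passage mentions.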
“…Despite the remarkable capabilities unveiled by recent advancements in natural language processing for the classification and analysis of written texts [24], this phenomenon remains intricate and far from resolved [30]. The challenges of heterogeneity [5] and cross-modality [27] make it exceedingly difficult to observe this phenomenon at the volume and variety required to curate the annotated datasets essential for training effective and generalizable models through supervised approaches. Nevertheless, progress in generative large language models such as GPT-3 [4] and PaLM [28] has revealed alarming scenarios for the automatic generation of misleading content, which could become an undesirable tool in the hands of mala fide actors [45].…”
Section: Introduction (mentioning)
confidence: 99%
“…However, despite budding interest in the cognitive factors that play a role in misinformation susceptibility (e.g., Roozenbeek, Maertens, et al., 2022), research has yet to investigate whether inoculation can be used to confer psychological resistance against cognitive biases that make individuals vulnerable to conspiracist reasoning. This is important because misinformation can exploit such biases, for example, by appealing to negatively valenced information and conspiratorial reasoning (Carrasco‐Farré, 2022). Furthermore, exploring alternative avenues to existing inoculation techniques would provide researchers and policymakers with more options to tailor misinformation interventions to different contexts.…”
Section: Introduction (mentioning)
confidence: 99%