Proceedings of the Second Workshop on Computational Approaches To Deception Detection 2016
DOI: 10.18653/v1/w16-0802
Fake News or Truth? Using Satirical Cues to Detect Potentially Misleading News

Abstract: Satire is an attractive subject in deception detection research: it is a type of deception that intentionally incorporates cues revealing its own deceptiveness. Whereas other types of fabrications aim to instill a false sense of truth in the reader, a successful satirical hoax must eventually be exposed as a jest. This paper provides a conceptual overview of satire and humor, elaborating and illustrating the unique features of satirical news, which mimics the format and style of journalistic reporting. Satiric…

Cited by 441 publications (309 citation statements)
References 33 publications
“…Additional labels can then be added to the datasets to better predict veracity, for instance by jointly training stance and veracity prediction models. Methods not shown in the table, but related to fact checking, are stance detection for claims (Ferreira and Vlachos, 2016; Pomerleau and Rao, 2017; Augenstein et al., 2016a; Kochkina et al., 2017; Augenstein et al., 2016b; Zubiaga et al., 2018; Riedel et al., 2017), satire detection (Rubin et al., 2016), clickbait detection (Karadzhov et al., 2017), conspiracy news detection (Tacchini et al., 2017), rumour cascade detection (Vosoughi et al., 2018) and claim perspectives detection (Chen et al., 2019).…”
Section: Datasets
confidence: 99%
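The excerpt above mentions jointly training stance and veracity prediction models. As a minimal sketch of what such multi-task training could look like (this is not the architecture of any cited work; the encoder choice, dimensions, and label counts below are illustrative assumptions), a shared text encoder can feed two classification heads whose losses are summed on each batch:

```python
import torch
import torch.nn as nn

class JointStanceVeracityModel(nn.Module):
    """Illustrative multi-task model: one shared encoder, two heads."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256,
                 n_stance_labels=4, n_veracity_labels=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.stance_head = nn.Linear(hidden_dim, n_stance_labels)
        self.veracity_head = nn.Linear(hidden_dim, n_veracity_labels)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, (final_hidden, _) = self.encoder(embedded)
        features = final_hidden[-1]  # last layer's final hidden state
        return self.stance_head(features), self.veracity_head(features)

model = JointStanceVeracityModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 8 sequences of 50 token ids, with gold labels for both tasks.
token_ids = torch.randint(0, 30000, (8, 50))
stance_gold = torch.randint(0, 4, (8,))
veracity_gold = torch.randint(0, 3, (8,))

# Joint training: the two task losses are simply summed.
optimizer.zero_grad()
stance_logits, veracity_logits = model(token_ids)
loss = criterion(stance_logits, stance_gold) + criterion(veracity_logits, veracity_gold)
loss.backward()
optimizer.step()
```

Sharing the encoder lets supervision from one task regularize the other, which is the usual motivation for training stance and veracity jointly rather than as separate models.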
“…This approach likely ends up classifying the writing styles of the two distinct types of account, while in our case no distinction between trusted and non-trusted accounts was made during model building. Similarly, Rubin et al. [29] use satirical cues to detect fakes, which only applies to a specific subset of cases. Another category of methods attempts to include image features in the classification, under the assumption that the image accompanying a post may carry distinct visual characteristics that differ between fake and real posts [14, 34].…”
Section: Related Work
confidence: 99%
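The excerpt above contrasts style-based satire detection with account-level and image-based approaches. A minimal sketch of the style-based idea, assuming simple word n-gram features as a crude stand-in for the handcrafted satirical cues of Rubin et al. [29] (the toy data and pipeline are illustrative, not from the cited paper):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus; real work would use labelled satirical vs. legitimate articles.
texts = [
    "Area man shocked to discover local news is entirely accurate",
    "City council approves budget for road maintenance next fiscal year",
    "Scientists confirm Mondays are a government plot, nation unsurprised",
    "Central bank raises interest rates by a quarter percentage point",
]
labels = [1, 0, 1, 0]  # 1 = satire, 0 = legitimate

# TF-IDF over unigrams and bigrams captures surface style, a rough proxy
# for the richer satirical cues (absurdity, humor) used in the literature.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
classifier.fit(texts, labels)
print(classifier.predict(["Local cat elected mayor in landslide victory"]))
```

As the excerpt notes, a classifier keyed to satirical style only covers the subset of misleading content that is written as satire; it says nothing about non-satirical fabrications.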
“…similarity model does not seem to have much impact on the GCN model, and considering the computing cost, we don't experiment with it for the 4-way classification scenario. Note that we use SLN as an out-of-domain test set (just one overlapping source, no overlap in articles), whereas the SoTA paper (Rubin et al., 2016) reports a 10-fold cross-validation number on SLN. We believe that our results are quite strong: the GAT + 2 Attn Heads model achieves an accuracy of 87% on the entire RPN dataset when used as an out-of-domain test set.…”
Section: Experimental Setting
confidence: 99%
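The excerpt above hinges on the difference between two evaluation protocols: 10-fold cross-validation within one corpus (the number Rubin et al., 2016 report for SLN) versus training on one corpus and testing on a disjoint, out-of-domain one. A minimal sketch of both protocols on toy data (the corpus contents and model choice are illustrative assumptions, not the cited systems):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Tiny stand-ins for the two corpora discussed above (1 = satire).
train_texts = [
    "Nation's dads unveil sweeping grilling reforms",
    "Parliament passes infrastructure spending bill",
    "Moon formally files complaint about being taken for granted",
    "Regional exports rose three percent in the second quarter",
] * 5  # repeated so each class has enough samples for 10-fold splitting
train_labels = [1, 0, 1, 0] * 5

ood_texts = [
    "Study finds office plants now running middle management",
    "Utility company announces scheduled maintenance outage",
]
ood_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())

# Protocol 1: 10-fold cross-validation within a single corpus.
# Train and test folds come from the same distribution.
cv_scores = cross_val_score(model, train_texts, train_labels, cv=10)
print("10-fold CV accuracy:", cv_scores.mean())

# Protocol 2: train once, evaluate on a disjoint out-of-domain corpus.
# Typically harder, since the test distribution was never seen in training.
model.fit(train_texts, train_labels)
print("Out-of-domain accuracy:", model.score(ood_texts, ood_labels))
```

Because out-of-domain evaluation denies the model any exposure to the test corpus, a score obtained this way is generally not directly comparable to an in-corpus cross-validation score, which is the comparability caveat the excerpt is raising.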