Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.48
GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media

Abstract: This paper addresses the fake news detection problem under a more realistic scenario on social media. Given a short-text source tweet and the corresponding sequence of retweet users without text comments, we aim to predict whether the source tweet is fake and to generate explanations by highlighting the evidence in suspicious retweeters and the words they are concerned with. We develop a novel neural-network-based model, Graph-aware Co-Attention Networks (GCAN), to achieve this goal. Extensive experiments conduct…

Cited by 296 publications (161 citation statements)
References 28 publications
“…Yang et al. (2019) also use, somewhat similarly, self-attention to extract n-gram explanations and linguistic analysis to extract features, e.g., the verb ratio. Lu and Li (2020) take a slightly different approach to generating explanations. Like Shu et al. (2019), they make use of a co-attention mechanism; however, their work, which looks at fact-checking tweets, approaches explainability from three perspectives: source tweets, retweet propagation, and retweeter characteristics, i.e., suspicious users (see Figure 3(b)).…”
Section: Attention-based Explanations
confidence: 99%
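The co-attention idea described in the statement above — jointly attending over source-tweet words and retweeter features so that each side's attention is informed by the other — can be illustrated with a minimal sketch. This is not the authors' implementation: GCAN's actual model uses learned projection matrices, recurrent encoders, and a graph convolution over the retweet propagation; here the affinity is a plain dot product and the pooling is a simple max, purely for illustration.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def co_attention(words, users):
    """Toy co-attention between word vectors and user vectors.

    words: list of word feature vectors (source tweet).
    users: list of user feature vectors (retweeters).
    Vectors share a dimension here for simplicity; GCAN instead learns
    projections into a common space (omitted in this sketch).
    Returns (word_weights, user_weights), each summing to 1.
    """
    # Affinity matrix: F[i][j] = similarity of word i and user j.
    F = [[sum(a * b for a, b in zip(w, u)) for u in users] for w in words]
    # Word attention: pool each word's strongest affinity over users.
    word_weights = softmax([max(row) for row in F])
    # User attention: pool each user's strongest affinity over words.
    user_weights = softmax([max(F[i][j] for i in range(len(words)))
                            for j in range(len(users))])
    return word_weights, user_weights
```

The two weight vectors are what makes the model explainable in the sense the survey describes: high-weight words and high-weight retweeters can be surfaced as the evidence behind a fake/real prediction.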
“…Kotonya and Toni (2020) formalize three coherence properties for evaluating the quality of explanations: local coherence, and strong and weak global coherence. ExpClaim (Ahmadi et al., 2019) is evaluated against and outperforms CredEye (Popat et al., 2018), and GCAN (Lu and Li, 2020) is shown to outperform dEFEND (Shu et al., 2019). However, these evaluations are with respect to the prediction, not the explanations.…”
Section: Future Directions
confidence: 99%
“…Khattar et al. [38] used textual and visual information in a multimodal variational autoencoder coupled with a binary classifier for the task of fake news detection. Lu and Li [39] integrated an attention mechanism with graph neural networks, using text information and the propagation structure to identify whether the source information is fake or not.…”
Section: Misinformation Detection
confidence: 99%
“…The recent Hostile Post Detection task in English takes into account information other than news [11]. For example, official news is always considered true.…”
Section: Related Work
confidence: 99%