Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda 2021
DOI: 10.18653/v1/2021.nlp4if-1.7
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News

Abstract: In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications. We experiment with two methods: (1) an extractive method based on Biased TextRank, a resource-effective unsupervised graph-based algorithm for content extraction; and (2) an abstractive method based on the GPT-2 language model. We perform comparative evaluations on two misinformation datasets in the political and health news domains, and find th…
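The extractive method named in the abstract, Biased TextRank, ranks sentences with a personalized PageRank: the usual sentence-similarity graph is combined with a bias vector that steers the random walk toward sentences similar to the claim being checked. The following is a minimal illustrative sketch, not the authors' implementation; it approximates sentence similarity with bag-of-words cosine (the published algorithm uses neural sentence embeddings), and all function names are invented for this example.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def biased_textrank(sentences, claim, damping=0.85, iters=50):
    """Rank sentences by a claim-biased PageRank over a similarity graph.

    Returns sentence indices, most relevant first.
    """
    bows = [Counter(s.lower().split()) for s in sentences]
    claim_bow = Counter(claim.lower().split())
    n = len(sentences)

    # Bias vector: each sentence's similarity to the claim, normalized
    # to sum to 1 so it acts as a teleport distribution.
    bias = [cosine(b, claim_bow) for b in bows]
    z = sum(bias) or 1.0
    bias = [b / z for b in bias]

    # Sentence-to-sentence similarity graph (no self-loops).
    sim = [[cosine(bows[i], bows[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]

    # Power iteration of personalized PageRank.
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [
            (1 - damping) * bias[i]
            + damping * sum(sim[j][i] / (sum(sim[j]) or 1.0) * scores[j]
                            for j in range(n))
            for i in range(n)
        ]
    return sorted(range(n), key=lambda i: -scores[i])
```

A top-k slice of the returned ranking then serves as the extractive explanation; the bias term is what distinguishes this from plain TextRank, which would rank sentences by centrality alone.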

Cited by 8 publications (4 citation statements)
References 15 publications
“…As the training data of the GPT models includes vast amounts of web data, including a filtered version of the Common Crawl Corpus (Brown et al, 2020), there is a significant risk of data leakage for the task of fact-checking. Previous research has shown that misinformation does not exist in isolation, but is repeated (Shaar et al, 2020) across platforms (Micallef et al, 2022) and languages (Kazemi et al, 2022;Quelle et al, 2023). As misinformation is repeated and re-occurs, the ability of models to retain previously fact-checked claims can potentially be seen as a benefit rather than a drawback.…”
Section: Discussion
confidence: 99%
“…The sentences or sentence parts supporting the decision were highlighted, and users were asked to mark whether they agreed or disagreed with the highlighted evidence. Instead of relying on the weights of a model for explanation, Atanasova, Simonsen, Lioma, and Augenstein (2020) , Kotonya and Toni (2020) and Kazemi, Li, Pérez-Rosas, and Mihalcea (2021) applied extractive and abstractive summarisation techniques to provide users with justifications in natural language. Regarding bias mitigation, several mitigation methods have been proposed ( Hovy & Prabhumoye, 2021 ).…”
Section: Related Work
confidence: 99%
“…While fully automatic fake news detection and fact-checking systems (Pérez-Rosas et al 2018;Thorne and Vlachos 2018) remain an active research topic within the NLP community, there have been new research fronts in the fight against misinformation, including claim matching (Shaar et al 2020;Kazemi et al 2021a), check-worthiness detection (Hassan et al 2017;Konstantinovskiy et al 2021), explanations (Kazemi et al 2021b;…”
Section: Related Work
confidence: 99%