Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.761
DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking

Abstract: The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence. The Fact Extraction and VERification (FEVER) dataset provides such a resource for evaluating end-to-end fact-checking, requiring retrieval of evidence from Wikipedia to validate a veracity prediction. We show that current systems for FEVER are vulnerable to three categories of realistic challenges for fact-checking: multiple propositions, temporal …
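For context, each FEVER instance pairs a claim with a three-way veracity label and, for supported or refuted claims, pointers to Wikipedia evidence sentences. Below is a minimal Python sketch of one such record; the field layout follows the public FEVER release, but the identifiers and evidence pointer values are invented for illustration.

    # Illustrative FEVER-style instance; field layout per the public FEVER
    # release (Thorne et al., 2018). The id and evidence identifiers below
    # are invented placeholders, not real dataset values.
    fever_example = {
        "id": 123456,
        "claim": "Colin Kaepernick became a starting quarterback during "
                 "the 49ers 63rd season in the National Football League.",
        "label": "SUPPORTS",  # one of SUPPORTS / REFUTES / NOT ENOUGH INFO
        "evidence": [
            # one evidence set; each entry is
            # [annotation_id, evidence_id, wikipedia_page, sentence_index]
            [[100001, 200001, "Colin_Kaepernick", 5]]
        ],
    }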

Cited by 38 publications (28 citation statements) | References 41 publications
“…To select evidence sentences we follow the approach proposed by Hidey et al. (2020). Given the true claims and the 5 evidence documents for each claim (Section 2.1), we use cosine similarity on SBERT sentence embeddings (Reimers and Gurevych, 2019) to extract the top 5 sentences most similar to the true claim.…”
Section: Evidence Sentence Selection (citation type: mentioning)
Confidence: 99%
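A minimal sketch of this selection step, assuming the sentence-transformers library; the checkpoint name (all-MiniLM-L6-v2), the helper name top_k_evidence, and the variable names are illustrative choices, not taken from the cited papers.

    # Rank candidate evidence sentences by cosine similarity to the claim
    # and keep the top k, following the selection step quoted above.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint

    def top_k_evidence(claim: str, candidates: list[str], k: int = 5) -> list[str]:
        claim_emb = model.encode(claim, convert_to_tensor=True)
        cand_embs = model.encode(candidates, convert_to_tensor=True)
        scores = util.cos_sim(claim_emb, cand_embs)[0]  # shape: (len(candidates),)
        top = scores.topk(k=min(k, len(candidates)))
        return [candidates[i] for i in top.indices.tolist()]

For example, calling top_k_evidence(claim, sentences_from_five_documents) reproduces the quoted setup of keeping the 5 sentences most similar to the true claim.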
“…Another common setting for fact-checking is to assume a credible evidence source is given (e.g., Wikipedia), and to focus on the evidence retrieval and veracity verification steps only. FEVER (Thorne et al., 2018) and TabFact are two large datasets for this setting, and there are many follow-up studies working on them (Yoneda et al., 2018a; Nie et al., 2019; Zhong et al., 2020; Herzig et al., 2020; Hidey et al., 2020).…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
“…They follow up on this with the FEVER 2.0 task (Thorne et al., 2019b), where participants design adversarial attacks for existing FC systems. The first two winning systems (Niewinski et al., 2019; Hidey et al., 2020) produce claims requiring multi-hop reasoning, which has been shown to be challenging for fact-checking models (Ostrowski et al., 2020). The remaining system (Kim and Allan, 2019) generates adversarial attacks manually.…”
Section: Fact Checking (citation type: mentioning)
Confidence: 99%