Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER) 2021
DOI: 10.18653/v1/2021.fever-1.2
Evidence Selection as a Token-Level Prediction Task

Abstract: In Automated Claim Verification, we retrieve evidence from a knowledge base to determine the veracity of a claim. Intuitively, the retrieval of the correct evidence plays a crucial role in this process. Often, evidence selection is tackled as a pairwise sentence classification task, i.e., we train a model to predict for each sentence individually whether it is evidence for a claim. In this work, we fine-tune document level transformers to extract all evidence from a Wikipedia document at once. We show that thi…

Cited by 14 publications (34 citation statements)
References 9 publications
“…Table 3 reports the fact verification results for ProoFVer and the baselines. Overall, ProoFVer-SB, our configuration using Stammbach's (2021) retriever, is the best performing model in our experiments. ProoFVer-SB, which outperforms Stammbach (2021) itself, is currently the highest scoring model in terms of label accuracy in the FEVER leaderboard.…”
Section: Fact Verification
Mentioning confidence: 88%
“…We use CorefRoBERTa, their best-performing configuration. DominikS (Stammbach, 2021) focuses primarily on sentence-level evidence retrieval, scoring individual tokens from a given Wikipedia document and then selecting the highest-scoring sentences by averaging token scores. It uses a fine-tuned document-level BigBird model (Zaheer et al., 2020) for this purpose.…”
Section: Baseline Systems
Mentioning confidence: 99%
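The selection scheme quoted above (score every token with a document-level transformer, then rank sentences by the mean score of their tokens) can be sketched as follows. This is a minimal illustration, not the paper's code: the `select_sentences` helper, the made-up token scores, and the sentence spans are all hypothetical; in practice the scores would come from a fine-tuned BigBird model.

```python
def select_sentences(token_scores, sentence_spans, k=2):
    """Rank sentences by the mean evidence score of their tokens.

    token_scores: per-token scores for one document (list of floats).
    sentence_spans: (start, end) token-index pairs per sentence, end exclusive.
    Returns the indices of the k highest-scoring sentences.
    """
    means = []
    for idx, (start, end) in enumerate(sentence_spans):
        span = token_scores[start:end]
        means.append((sum(span) / len(span), idx))
    means.sort(reverse=True)  # highest mean score first
    return [idx for _, idx in means[:k]]

# Toy example: three sentences covering ten tokens.
scores = [0.1, 0.9, 0.8, 0.2, 0.1, 0.1, 0.7, 0.6, 0.2, 0.1]
spans = [(0, 3), (3, 6), (6, 10)]
print(select_sentences(scores, spans, k=2))  # → [0, 2]
```

Averaging (rather than summing) keeps the ranking independent of sentence length, so long sentences are not favored merely for containing more tokens.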
“…Long-document encodings for fact verification have been explored by Stammbach (2021), who uses Big Bird (Zaheer et al., 2020) for full-document evidence extraction from FEVER. Domain adaptation for scientific text has been studied in a number of works, including Gururangan et al. (2020); Beltagy et al. (2019); Lee et al. (2020); Gu et al. (2021).…”
Section: Related Work
Mentioning confidence: 99%