Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER) 2020
DOI: 10.18653/v1/2020.fever-1.5
Language Models as Fact Checkers?

Abstract: Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of …

Cited by 32 publications (24 citation statements) | References 7 publications
“…For instance, the event of COVID-19 emerged after the release of the pre-trained GPT-2 model. Second, although LMs have shown a surprising ability to memorize some knowledge, they are not perfect, as pointed out by previous work (Poerner et al., 2019; Lee et al., 2020). Therefore, we propose to incorporate evidence into the perplexity calculation by using it as a prefix of the claim.…”
Section: Evidence Conditioned Perplexity
confidence: 86%
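The evidence-conditioned perplexity idea in the statement above can be sketched as follows. This is a minimal illustration, not the cited implementation: the bigram table and its probabilities are made-up stand-ins for a real causal LM such as GPT-2, and the function names are hypothetical. The key point it shows is that the evidence tokens are prepended as a prefix but only the claim tokens are scored.

```python
import math

# Hypothetical bigram log-probabilities standing in for a real causal LM
# (a real system would score tokens with e.g. GPT-2); values are made up.
BIGRAM_LOGP = {
    ("<s>", "covid"): math.log(0.01),   # claim start is unlikely without context
    ("2019", "covid"): math.log(0.50),  # ...but likely right after the evidence
    ("covid", "is"): math.log(0.40),
    ("is", "contagious"): math.log(0.30),
}
UNK = math.log(1e-4)  # back-off log-probability for unseen bigrams

def conditioned_perplexity(claim, evidence=()):
    """Perplexity of the claim tokens, optionally conditioning the LM on
    evidence by prepending it as a prefix; only claim tokens are scored."""
    tokens = ["<s>", *evidence, *claim]
    start = 1 + len(evidence)  # index of the first claim token
    logp = sum(BIGRAM_LOGP.get((tokens[i - 1], tokens[i]), UNK)
               for i in range(start, len(tokens)))
    return math.exp(-logp / len(claim))

claim = ["covid", "is", "contagious"]
evidence = ["covid", "emerged", "in", "2019"]
print(conditioned_perplexity(claim))            # high: LM saw no context
print(conditioned_perplexity(claim, evidence))  # lower: evidence prefix helps
```

Normalizing by the claim length alone keeps perplexities comparable between the conditioned and unconditioned cases, since the evidence prefix changes the context but not the number of scored tokens.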
“…MNLI-Transfer (U3) trains a BERT model for natural language inference on the MultiNLI corpus (Williams et al., 2018) and applies it to fact verification. LM as Fact Checker (Lee et al., 2020b) (U4) leverages the implicit knowledge stored in the pretrained BERT language model to verify a claim. The implementation details are given in Appendix C.…”
Section: Methods
confidence: 99%
“…LM as Fact Checker (U4). Since there is no publicly available code for this model, we implement our own version following the settings described in Lee et al. (2020b). We use HuggingFace's bert-base as the language model to predict the masked named entity, and the NLI model described in U3 as the entailment model.…”
Section: MNLI-Transfer (U3)
confidence: 99%
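The reimplementation described above can be sketched as a two-step pipeline: mask the claim's named entity, have the LM refill it, then compare the regenerated claim against the original. Everything concrete here is a toy stand-in: the fill table replaces the bert-base fill-mask model, the exact-string check replaces the U3 MNLI entailment model, and the label set is simplified to SUPPORTED/REFUTED.

```python
# Toy oracle standing in for a BERT fill-mask model (hypothetical data).
FILL_TABLE = {"The capital of France is [MASK].": "Paris"}

def predict_masked_entity(masked_claim):
    # A real version would call a bert-base fill-mask model here.
    return FILL_TABLE.get(masked_claim, "<unk>")

def entails(premise, hypothesis):
    # Stand-in for the MNLI entailment model (U3): exact string match.
    return premise == hypothesis

def verify(claim, entity):
    """Mask the claim's named entity, let the LM refill it, and label the
    claim SUPPORTED if the regenerated claim entails the original."""
    masked = claim.replace(entity, "[MASK]")
    regenerated = masked.replace("[MASK]", predict_masked_entity(masked))
    return "SUPPORTED" if entails(regenerated, claim) else "REFUTED"

print(verify("The capital of France is Paris.", "Paris"))  # SUPPORTED
print(verify("The capital of France is Lyon.", "Lyon"))    # REFUTED
```

The design separates knowledge retrieval (the fill-mask step) from verification (the entailment step), which is what lets the reimplementation swap in its own entailment model from U3 without retraining the language model.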
“…We also have a two-step pipeline, but the methods used in each step are distinct from their work. Lee et al. (2020) take a new BERT-based approach to the otherwise traditional fact-checking pipeline of FEVER-like tasks. The authors treat BERT as a knowledge base and use its masked language modeling predictions to decide the factual correctness of the claim.…”
Section: Related Work
confidence: 99%