2020
DOI: 10.1007/978-3-030-45442-5_45
BERT for Evidence Retrieval and Claim Verification

Abstract: Motivated by the promising performance of pre-trained language models, we investigate BERT in an evidence retrieval and claim verification pipeline for the FEVER fact extraction and verification challenge. To this end, we propose to use two BERT models, one for retrieving potential evidence sentences supporting or rejecting claims, and another for verifying claims based on the predicted evidence sets. To train the BERT retrieval system, we use pointwise and pairwise loss functions, and examine the effect of ha…
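The two training objectives named in the abstract (pointwise and pairwise losses for the retrieval BERT) can be illustrated with a minimal stand-alone sketch. This is not the paper's implementation: the exact loss forms are not given in the truncated abstract, so binary cross-entropy for the pointwise case, a margin ranking loss for the pairwise case, and the function names and margin value are all assumptions.

```python
import math

def pointwise_loss(score, label):
    """Pointwise objective (assumed form): binary cross-entropy on a single
    (claim, sentence) relevance score in (0, 1); label is 1 for evidence, 0 otherwise."""
    return -(label * math.log(score) + (1 - label) * math.log(1 - score))

def pairwise_loss(pos_score, neg_score, margin=1.0):
    """Pairwise objective (assumed form): margin ranking loss that pushes an
    evidence sentence's score above a non-evidence sentence's score by `margin`."""
    return max(0.0, margin - (pos_score - neg_score))

# A pair already separated by more than the margin incurs no pairwise loss:
print(pairwise_loss(2.0, 0.5))            # 0.0
print(round(pointwise_loss(0.9, 1), 4))   # 0.1054
```

The pointwise loss scores each candidate sentence independently, while the pairwise loss only constrains the relative order of evidence versus non-evidence candidates.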

Cited by 91 publications (138 citation statements); references 19 publications.
“…We develop a baseline (referred to as VERISCI) that takes a claim c and corpus A as input, identifies evidence abstracts E(c), and predicts a label y(c, a) and rationale sentences S(c, a) for each a ∈ E(c). Following the "BERT-to-BERT" model presented in DeYoung et al (2020a) and Soleimani et al (2019), VERISCI is a pipeline of three components: 1. ABSTRACTRETRIEVAL retrieves k abstracts with highest TF-IDF similarity to the claim.…”
Section: VERISCI: Baseline Model (citation type: mentioning)
Confidence: 99%
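The ABSTRACTRETRIEVAL step quoted above (return the k abstracts with highest TF-IDF similarity to the claim) can be sketched with a self-contained toy implementation. `tfidf_topk` and the example corpus are illustrative assumptions, not code from the VERISCI baseline.

```python
from collections import Counter
import math

def tfidf_topk(claim, corpus, k=3):
    """Rank corpus documents by TF-IDF cosine similarity to the claim and
    return the indices of the top-k matches."""
    docs = [doc.lower().split() for doc in corpus] + [claim.lower().split()]
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}                 # inverse document frequency

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf[t] for t in tf}                 # sparse TF-IDF vector

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(docs[-1])                                          # the claim's vector
    sims = [cosine(q, vec(d)) for d in docs[:-1]]
    return sorted(range(len(corpus)), key=lambda i: -sims[i])[:k]

corpus = [
    "bert models verify claims against evidence",
    "graph neural networks for molecules",
    "evidence retrieval with tf idf similarity",
]
print(tfidf_topk("evidence retrieval for claim verification with bert", corpus, k=2))  # [2, 0]
```

The two BERT stages of the pipeline then operate only on the sentences of these k retrieved abstracts, which keeps the expensive cross-attention scoring tractable.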
“…Experiments are conducted to evaluate the performance of evidence retrieval, claim verification, and aggregation approaches. In addition, we conduct an ablation study.…”
Leaderboard table recovered from the quote (two score columns per system):
(Hanselowski et al, 2018b): 65.46, 61.58
UCL MRG (Yoneda et al, 2018): 67.62, 62.52
UNC NLP (Nie et al, 2019): 68.21, 64.21
BERT Pair (Zhou et al, 2019): 69.75, 65.18
BERT Concat (Zhou et al, 2019): 71.01, 65.64
BERT (Base) (Soleimani et al, 2020): 70.67, 68.50
GEAR (BERT Base) (Zhou et al, 2019): 71.60, 67.10
KGAT (BERT Base) (Liu et al, 2020): 72…
Section: Experimental Results and Analysis (citation type: mentioning)
Confidence: 99%
“…The evidence sentence retrieval component in almost all previous works retrieves all the evidence in a single iteration (Yoneda et al, 2018; Hanselowski et al, 2018b; Nie et al, 2019; Chen et al, 2017; Soleimani et al, 2020; Liu et al, 2020). Stammbach and Neumann (2019) use a multi-hop retrieval strategy over two iterations to retrieve evidence sentences that are conditioned on the retrieval of other evidence sentences.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
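The contrast drawn above, single-iteration retrieval versus retrieval conditioned on earlier evidence, can be sketched as follows. `multihop_retrieve`, `overlap_retrieve`, and the toy corpus are hypothetical names for illustration, a sketch of the conditioning idea rather than Stammbach and Neumann's implementation.

```python
def multihop_retrieve(claim, corpus, retrieve, hops=2, k=2):
    """Two-iteration ("multi-hop") evidence retrieval sketch: each later hop
    re-queries with the claim plus evidence found so far, so a sentence can be
    retrieved conditioned on earlier evidence."""
    query, evidence = claim, []
    for _ in range(hops):
        hits = [s for s in retrieve(query, corpus, k) if s not in evidence]
        evidence.extend(hits)
        query = claim + " " + " ".join(evidence)  # condition the next hop on found evidence
    return evidence

def overlap_retrieve(query, corpus, k):
    """Toy retriever: rank by word overlap with the query, drop zero-overlap docs."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(q & set(s.lower().split())))
    return [s for s in ranked[:k] if q & set(s.lower().split())]

corpus = ["a bridges b", "b bridges c", "unrelated text"]
# "b bridges c" shares no words with the claim "a", so only the second,
# evidence-conditioned hop can reach it:
print(multihop_retrieve("a", corpus, overlap_retrieve))
```

A single-iteration retriever corresponds to `hops=1`, which here would miss the second sentence entirely.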