2020
DOI: 10.48550/arxiv.2012.14500
Preprint
A Paragraph-level Multi-task Learning Model for Scientific Fact-Verification

Cited by 10 publications (3 citation statements) | References 17 publications
“…USE-QA q is a universal sentence encoder [71] model fine-tuned on the SQuAD [94] dataset. Paragraph-Joint [95] uses the BioSentVec [96] model for the retrieval purpose. BioSentVec is a sent2vec [97]…”
Section: Baselines Used (citation type: mentioning)
confidence: 99%
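The retrieval step this statement describes embeds each sentence as an aggregate of its token vectors (the sent2vec approach underlying BioSentVec) and ranks abstracts by cosine similarity to the claim. A minimal sketch, assuming toy two-dimensional hand-made token embeddings in place of trained BioSentVec vectors (the `EMB` table and `retrieve` helper are hypothetical, for illustration only):

```python
import math

# Hypothetical toy token embeddings; a real system would load the
# trained BioSentVec (sent2vec) model instead.
EMB = {
    "aspirin": (1.0, 0.1), "fever": (0.9, 0.2), "lowers": (0.8, 0.3),
    "reduces": (0.85, 0.25), "solar": (0.0, 1.0), "power": (0.1, 0.9),
}

def embed(sentence):
    """sent2vec-style sentence embedding: average of token embeddings."""
    vecs = [EMB[t] for t in sentence.lower().split() if t in EMB]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(claim, abstracts):
    """Return the index of the abstract most similar to the claim."""
    q = embed(claim)
    return max(range(len(abstracts)), key=lambda i: cosine(q, embed(abstracts[i])))
```

Real sent2vec models also average n-gram embeddings, which this sketch omits for brevity.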
“…Setup For SCIFACT, we chose three systems for testing our attack: VeriSci (Wadden et al 2020), ParagraphJoint (Li, Burns, and Peng 2021), and SciKGAT (Liu et al 2020a). The VeriSci model was released by the creators of the SCIFACT benchmark and retrieves relevant abstracts to a claim using TF-IDF.…”
Section: Scifact Study (citation type: mentioning)
confidence: 99%
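The TF-IDF abstract retrieval mentioned here can be sketched without any external libraries: weight each term by its frequency in a document scaled by its inverse document frequency, then rank abstracts by cosine similarity to the claim. This is a toy implementation under assumed whitespace tokenization, not VeriSci's actual pipeline:

```python
import math
from collections import Counter

def tfidf_retrieve(claim, abstracts, k=1):
    """Return the indices of the top-k abstracts by TF-IDF cosine
    similarity to the claim. Minimal sketch; production systems use
    tuned tokenization and optimized vectorizers."""
    docs = [a.lower().split() for a in abstracts]
    n = len(docs)
    # Document frequency per term, then smoothed inverse document frequency.
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    q = vec(claim.lower().split())
    ranked = sorted(range(n), key=lambda i: cosine(q, vec(docs[i])), reverse=True)
    return ranked[:k]
```

Claim terms unseen in the corpus simply get zero weight, so they do not affect the ranking.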
“…When dealing with claim verification, most recent systems fine-tune a large pre-trained language model to do three-way label prediction, including VERISCI ( Wadden et al, 2020 ), VERT5ERINI ( Pradeep et al, 2020 ), and ParagraphJoint ( Li, Burns & Peng, 2021 ). Despite the evident effectiveness of these methods, fine-tuning models depends on the availability of substantial amounts of labelled data, which are not always accessible, particularly for new domains.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
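The three-way label prediction these fine-tuned systems perform (SUPPORTS / REFUTES / NOT-ENOUGH-INFO) reduces to a linear classification head with a softmax over a pooled encoder representation. A very schematic sketch: the encoder is elided, and the pooled vector, weights, and bias passed in are hypothetical stand-ins for a fine-tuned language model's outputs and learned parameters:

```python
import math

LABELS = ["SUPPORTS", "REFUTES", "NOT_ENOUGH_INFO"]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(pooled, weights, bias):
    """Linear head: logits = W @ pooled + b, argmax over 3 labels.
    In a real system `pooled` would be the [CLS] vector of a
    fine-tuned pre-trained language model."""
    logits = [sum(w * x for w, x in zip(row, pooled)) + b
              for row, b in zip(weights, bias)]
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))]
```

The point the quoted passage makes still holds: learning useful `weights` requires substantial labelled data, which is exactly what new domains often lack.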