2022
DOI: 10.7717/peerj-cs.1137

Aggregating pairwise semantic differences for few-shot claim verification

Abstract: As part of an automated fact-checking pipeline, the claim verification task consists in determining whether a claim is supported by an associated piece of evidence. The complexity of gathering labelled claim-evidence pairs leads to a scarcity of datasets, particularly when dealing with new domains. In this article, we introduce Semantic Embedding Element-wise Difference (SEED), a novel vector-based method for few-shot claim verification that aggregates pairwise semantic differences for claim-evidence pairs. We build…
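To make the abstract's description concrete, the sketch below shows one way pairwise semantic differences could be aggregated for few-shot claim verification: encode claim and evidence, take the element-wise difference of the two embeddings, average the difference vectors per class over the few labelled examples, and classify a new pair by similarity to the class representatives. This is a minimal illustration, not the authors' exact implementation; the encoder choice (sentence-transformers with the all-MiniLM-L6-v2 model), the function names, and the cosine-similarity decision rule are assumptions for the example.

```python
# Minimal sketch of the SEED idea (assumptions noted above, not the paper's code).
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed encoder choice

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model name


def pair_difference(claim: str, evidence: str) -> np.ndarray:
    """Element-wise semantic difference between claim and evidence embeddings."""
    claim_vec, evidence_vec = model.encode([claim, evidence])
    return claim_vec - evidence_vec


def build_class_representatives(labelled_pairs):
    """labelled_pairs: iterable of (claim, evidence, label) few-shot examples."""
    diffs_by_label = {}
    for claim, evidence, label in labelled_pairs:
        diffs_by_label.setdefault(label, []).append(pair_difference(claim, evidence))
    # Aggregate the pairwise differences by averaging them per class.
    return {label: np.mean(diffs, axis=0) for label, diffs in diffs_by_label.items()}


def verify(claim: str, evidence: str, representatives) -> str:
    """Predict the class whose representative vector is closest (cosine similarity)."""
    diff = pair_difference(claim, evidence)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(representatives, key=lambda label: cosine(diff, representatives[label]))
```

Because only a handful of labelled pairs are averaged per class, this kind of aggregation needs no task-specific training, which is what makes it attractive in the few-shot, new-domain setting the abstract describes.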

Cited by 5 publications (2 citation statements)
References 15 publications
“…school science classrooms, and recently at the college level (Eden, 2023). LLMs have recently been used for fact-checking and for identifying claim-evidence pairs in scientific content (Koneru, Wu, and Rajtmajer, 2023; Wang et al., 2023; Zeng and Zubiaga, 2024). The hope is that LLMs could provide instructors and their students with assessments of the scientific validity of student writing, aiming for the "gold standard" of conceptual learning (Gere et al., 2019).…”
Section: Declarations (mentioning; confidence: 99%)
“…either supported or refuted. Zeng and Zubiaga (2023) explore active learning in combination with PET (Schick and Schütze, 2021), a popular prompt-based few-shot learning method, and Pan et al. (2021) and Wright et al. (2022) generate weakly supervised training data for zero-shot claim verification. However, none of the aforementioned methods produces (faithful) explanations.…”
Section: Related Work (mentioning; confidence: 99%)