2022
DOI: 10.1609/aaai.v36i10.21291
LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification

Abstract: Given a natural language statement, how to verify its veracity against a large-scale textual knowledge source like Wikipedia? Most existing neural models make predictions without giving clues about which part of a false claim goes wrong. In this paper, we propose LOREN, an approach for interpretable fact verification. We decompose the verification of the whole claim at phrase-level, where the veracity of the phrases serves as explanations and can be aggregated into the final verdict according to logical rules.…
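The logical aggregation described in the abstract can be illustrated with a short sketch. This is not the authors' code; it is a minimal illustration assuming three-way phrase verdicts and the rules that a claim is refuted if any phrase is refuted, supported only if every phrase is supported, and otherwise "not enough info".

```python
# Minimal sketch of phrase-level logical aggregation (not the authors' code).
# Assumption: each phrase receives one of three verdicts, and the claim-level
# verdict follows "any refuted phrase refutes the claim" and
# "all phrases supported => claim supported"; otherwise not enough info.
from typing import List

SUPPORTED, REFUTED, NEI = "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"

def aggregate_claim_verdict(phrase_verdicts: List[str]) -> str:
    """Aggregate phrase-level veracity labels into a claim-level verdict."""
    if any(v == REFUTED for v in phrase_verdicts):
        return REFUTED            # one false phrase falsifies the whole claim
    if phrase_verdicts and all(v == SUPPORTED for v in phrase_verdicts):
        return SUPPORTED          # every phrase is backed by evidence
    return NEI                    # otherwise the evidence is inconclusive

# Example: the second phrase of the claim is contradicted by the evidence.
print(aggregate_claim_verdict([SUPPORTED, REFUTED, NEI]))  # -> REFUTED
```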

Cited by 18 publications (15 citation statements)
References 24 publications
“…To enhance fact-checking capabilities, various techniques have been developed over the past decade. These include employing multi-layer perceptron models (Vlachos and Riedel 2014), incorporating attention mechanisms (Parikh et al. 2016), utilizing Graph Neural Networks (Liu et al. 2020), and employing semantic role labeling and logical reasoning tools (Chen et al. 2020). Transformer-based language models, particularly BERT models, have gained significant attention in claim verification (Soleimani, Monz, and Worring 2019; Portelli et al. 2020; Chernyavskiy and Ilvovsky 2019; Nie, Chen, and Bansal 2019; Tokala et al. 2019; Tan et al. 2023b).…”
Section: Related Work
confidence: 99%
“…We adopt the knowledge distillation strategy with a teacher model and a student model to integrate logic rules into latent variables, providing weak supervision inspired by (Chen et al. 2022a). The teacher model projects the variational distribution q_ω(z | x, y) into a subspace q*_ω(y_z | x, y) adhering to the logic rules, with y_z ∈ {Real, Fake} representing the logical aggregation of z.…”
Section: Logical Rule Constraints
confidence: 99%
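To make the projection step concrete, the sketch below aggregates per-pattern posteriors q(z_i) into a distribution over y_z ∈ {Real, Fake} using a noisy-OR relaxation of the rule "fake iff at least one deceptive pattern is present", and computes the KL term a student model would minimize against the teacher. The function names and the noisy-OR choice are illustrative assumptions, not the cited paper's actual formulation.

```python
# Illustrative sketch (not the cited paper's implementation) of aggregating
# per-pattern posteriors z_i into a distribution over y_z in {Real, Fake}
# via a soft (noisy-OR) relaxation of "Fake iff at least one deceptive
# pattern is present", plus the KL term a student model would match.
import numpy as np

def aggregate_y_z(q_z: np.ndarray) -> np.ndarray:
    """q_z[i] = q(z_i = deceptive). Returns [p(Real), p(Fake)] under noisy-OR."""
    p_fake = 1.0 - np.prod(1.0 - q_z)   # "at least one deceptive pattern"
    return np.array([1.0 - p_fake, p_fake])

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q), used to pull the student's prediction toward the teacher."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Example: two latent patterns, one judged likely deceptive.
q_z = np.array([0.9, 0.2])
teacher = aggregate_y_z(q_z)    # logic-aggregated teacher distribution [Real, Fake]
student = np.array([0.4, 0.6])  # hypothetical student prediction [Real, Fake]
print(teacher, kl_divergence(teacher, student))
```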
“…Fortunately, from the perspective of human cognition, there is at least one deceptive pattern if the news is fake, while there is no deceptive pattern if the news is real. Inspired by the powerful expressive capabilities of first-order logic in capturing complex relationships (Enderton 2001), we start by formalizing these rules in first-order logic as a form of weak supervision, inspired by (Chen et al. 2022a). By doing so, we establish a correlation between the available labels for news authenticity and the presence of unsupervised deceptive patterns, enabling the underlying deceptive patterns to be learned automatically.…”
Section: Introduction
confidence: 99%
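The rule stated above ("at least one deceptive pattern if the news is fake, none if it is real") can be written in first-order logic roughly as follows; this is an illustrative formalization, not a formula quoted from the cited paper.

```latex
% Illustrative first-order formalization of the stated rule (an assumption,
% not quoted from the cited paper): an article x is fake iff some latent
% pattern z_i of x is deceptive, and real iff none is (with Real = not Fake).
\forall x \;\bigl[ \mathrm{Fake}(x) \leftrightarrow \exists i\, \mathrm{Deceptive}(z_i(x)) \bigr]
\quad\Longleftrightarrow\quad
\forall x \;\bigl[ \mathrm{Real}(x) \leftrightarrow \forall i\, \neg \mathrm{Deceptive}(z_i(x)) \bigr]
```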
“…e.g., GopherCite supports answers with verified quotes [183], logic-regularized reasoning for interpretable fact verification [194], a survey on automated fact-checking [195], hallucinated content detection [196], and an RL approach for explainability using entailment trees [197].…”
Section: Strengths
confidence: 99%
“…• Provide references, traceability, faithful explanation logic [194], or draw on the emerging field of entailment tree explanation [197, 329]. • Automated fact-checking [195].…”
Section: Hallucination and Credibility
confidence: 99%