Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1405
A Logic-Driven Framework for Consistency of Neural Models

Abstract: While neural models show remarkable accuracy on individual predictions, their internal beliefs can be inconsistent across examples. In this paper, we formalize such inconsistency as a generalization of prediction error. We propose a learning framework for constraining models using logic rules to regularize them away from inconsistency. Our framework can leverage both labeled and unlabeled examples and is directly compatible with off-the-shelf learning schemes without model redesign. We instantiate our framewor…
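The regularization idea in the abstract can be illustrated with a small, self-contained sketch. This is a hypothetical instance, not the paper's exact formulation: the function name and the symmetry rule are illustrative, showing how a logic rule can be relaxed into a differentiable penalty that needs no gold label, as the framework proposes.

```python
import math

def symmetry_consistency_loss(p_fwd: float, p_bwd: float, eps: float = 1e-12) -> float:
    """Hypothetical consistency penalty for a symmetry rule such as
    'contradiction(a, b) -> contradiction(b, a)' in NLI.

    Under a product t-norm relaxation, the implication p -> q maps to
    the penalty max(0, log p - log q); enforcing the rule in both
    directions yields |log p_fwd - log p_bwd|, which is zero exactly
    when the model's two predictions agree.
    """
    return abs(math.log(p_fwd + eps) - math.log(p_bwd + eps))

# A perfectly consistent model pays no penalty for this pair...
consistent = symmetry_consistency_loss(0.4, 0.4)
# ...while disagreement between the two directions is penalized.
inconsistent = symmetry_consistency_loss(0.9, 0.2)
```

In training, such a term would be added to the usual supervised loss with a weighting hyperparameter; because no gold label appears in the penalty, it can also be computed on unlabeled example pairs.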

Cited by 61 publications (50 citation statements)
References 31 publications
“…Subsequently, our method uses symbolic logic to incorporate consistency regularization for an additional supervision signal beyond the inductive bias given by data augmentation. Our method generalizes previous consistency-promoting methods for NLI tasks (Minervini and Riedel, 2018; Li et al., 2019) to adapt to substantially different question formats.…”
Section: Introduction (mentioning; confidence: 86%)
“…Minervini and Riedel (2018) present model-dependent, first-order-logic-guided adversarial example generation and regularization. Li et al. (2019) introduce consistency-based regularization incorporating first-order logic rules. These previous approaches are model-dependent or rely on NLI-specific rules, whereas our method is model-agnostic and more generally applicable when combined with data augmentation.…”
Section: Related Work (mentioning; confidence: 99%)
“…The goal of learning is to let the model capture the data annotation while regularizing it toward consistency with the logic constraints. Inspired by the logic-driven framework for consistency of neural models (Li et al., 2019), we specify three types of consistency requirements, i.e., annotation consistency, symmetry consistency, and conjunction consistency.…”
Section: Joint Constrained Learning (mentioning; confidence: 99%)
“…State-of-the-art BERT (Devlin et al., 2019) representations have boosted performance in a wide variety of NLP tasks. Rising interest centers on frameworks that combine neural-network-driven representations with logic representations to reason about language and predict correct outputs for tasks such as natural language inference (NLI) (Li et al., 2019). Logic puzzle solving is a task considered in this direction as well.…”
Section: Introduction (mentioning; confidence: 99%)