Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.190
ExpBERT: Representation Engineering with Natural Language Explanations

Abstract: Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text. In this paper, we allow model developers to specify these types of inductive biases as natural language explanations. We use BERT fine-tuned on MultiNLI to "interpret" these explanations with respect to the input sentence, producing explanation-guided representations of the input. Across three relation extraction tasks, our method, ExpBERT, matches a BERT baseline …

Cited by 30 publications (26 citation statements) · References 13 publications
“…Our work joins the class of models that use natural language feedback to improve different tasks, e.g., image captioning (Ling and Fidler, 2017), classification (Srivastava et al, 2017;Hancock et al, 2018;Murty et al, 2020). While these methods use feedback for reward shaping or feature extraction, we use feedback to produce correct response using adversarial learning.…”
Section: Discussion
confidence: 99%
“…In this section, we will briefly review related works on the sentiment classification [11,13], knowledge-aware sentiment analysis [9,14], and natural language explanation [5,16,23] classification. Sentiment Analysis Sentiment analysis and emotion recognition have always attracted attention in multiple fields such as NL processing, psychology, and cognitive science.…”
Section: Related Work
confidence: 99%
“…While there are some prior data engineering solutions to "model patching", including augmentation (Sennrich et al, 2015;Wei and Zou, 2019;Kaushik et al, 2019;Goel et al, 2021a), weak labeling (Ratner et al, 2017;Chen et al, 2020), and synthetic data generation (Murty et al, 2020), due to the noise in WIKIPEDIA, we repurpose BOOTLEGSPORT using weak labeling to modify training labels and correct for this noise. Our weak labeling technique works as follows: any existing mention from strong-sport-cues that is labeled as a country is relabeled as a national sports team for that country.…”
Section: Repurposing With Weak Labeling
confidence: 99%
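The weak-labeling rule quoted in that last citation statement, relabeling country mentions that carry a strong sport cue as national sports teams, can be sketched as below. Everything here is illustrative: the function name, the `strong_sport_cues` set, and the label strings are assumptions, not taken from the cited paper's code.

```python
from typing import Dict, List, Set

def relabel_sport_mentions(
    mentions: List[Dict[str, str]],
    strong_sport_cues: Set[str],
) -> List[Dict[str, str]]:
    """Apply the quoted rule: any mention from strong_sport_cues that is
    labeled as a country is relabeled as the national sports team for
    that country; all other mentions pass through unchanged."""
    out = []
    for m in mentions:
        if m["text"] in strong_sport_cues and m["label"] == "country":
            m = {**m, "label": "national_sports_team"}
        out.append(m)
    return out

mentions = [
    {"text": "Brazil", "label": "country"},  # occurs with a strong sport cue
    {"text": "France", "label": "country"},  # no cue: label is kept
]
fixed = relabel_sport_mentions(mentions, {"Brazil"})
# fixed[0]["label"] -> "national_sports_team"; fixed[1]["label"] -> "country"
```

In the cited work this kind of rule is applied over Wikipedia training data to correct noisy labels before retraining.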