Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1487

Explain Yourself! Leveraging Language Models for Commonsense Reasoning

Abstract: Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.
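
To make the CAGE setup concrete, below is a minimal sketch of the explanation-generation step, assuming the HuggingFace transformers library. The untuned gpt2 checkpoint stands in for a model fine-tuned on CoS-E, and the prompt template is an illustrative assumption modeled on the paper's conditioning scheme, not its exact format.

```python
# Minimal CAGE-style sketch: condition a causal LM on the question and
# answer choices, then decode a free-form explanation. Assumes the
# HuggingFace transformers library; the untuned "gpt2" checkpoint stands
# in for a model fine-tuned on CoS-E, and the prompt is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def generate_explanation(question, choices, max_new_tokens=30):
    prompt = (f"{question} The choices are {', '.join(choices)}. "
              f"My commonsense tells me that")
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    # Keep only the continuation: this is the auto-generated explanation
    # that CAGE feeds to a downstream classifier at train and test time.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

print(generate_explanation(
    "Where would you find a seat used by many people every day?",
    ["bus stop", "theater", "closet"]))
```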

Cited by 228 publications (141 citation statements)
References: 19 publications
“…They fine-tuned GPT-2 on a question answering dataset to generate a question and an answer span for a given passage, and trained BERT to answer the generated question given the passage. Finally, Rajani et al. (2019) proposed a model for CommonSenseQA that generates explanations for its predictions. They collected human explanations and used them to fine-tune LMs to automatically generate explanations.…”
Section: Generating Questions and Explanations
Mentioning confidence: 99%
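
The generate-then-answer pipeline described in this statement can be sketched as follows, assuming HuggingFace transformers. The prompt-based generator stands in for a GPT-2 model actually fine-tuned on a QA dataset, and the SQuAD-tuned BERT checkpoint name is an assumption; any extractive QA model fits the same role.

```python
# A minimal generate-then-answer sketch: a GPT-2 generator proposes a
# question about a passage, and a BERT reader extracts the answer span.
# Assumes HuggingFace transformers; the prompt-based generator is a
# stand-in for a model fine-tuned on a QA dataset.
from transformers import pipeline

question_generator = pipeline("text-generation", model="gpt2")
answerer = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad")

passage = ("The Eiffel Tower was completed in 1889 as the entrance arch "
           "to the 1889 World's Fair in Paris.")

# Step 1: generate a question from the passage (a fine-tuned generator
# would be trained to emit question/answer-span pairs instead).
prompt = passage + "\nQuestion:"
generated = question_generator(prompt, max_new_tokens=20,
                               num_return_sequences=1)[0]["generated_text"]
question = generated[len(prompt):].strip().split("\n")[0]

# Step 2: have the BERT reader answer the generated question.
result = answerer(question=question, context=passage)
print(question, "->", result["answer"])
```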
“…In the context of QA, there are multiple notions of explanation/justification, including showing an authoritative, answer-bearing sentence (Perez et al., 2019), a collection of text snippets supporting an answer (DeYoung et al., 2020), an attention map over a passage (Seo et al., 2016), a synthesized phrase connecting question and answer (Rajani et al., 2019), or the syntactic pattern used to locate the answer (Ye et al., 2020; Hancock et al., 2018). These methods are primarily designed for answers to "lookup" questions, to explain where and how an answer was found in a corpus.…”
Section: Related Work
Mentioning confidence: 99%
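
Of the explanation styles listed in this statement, the attention-map variety is straightforward to illustrate. Below is a minimal sketch, assuming HuggingFace transformers, that reads the last-layer attention from the [CLS] token over the input as a crude token-relevance map; it shows the general idea only, not the specific model of Seo et al. (2016).

```python
# Attention-map explanation sketch: surface last-layer attention from
# [CLS] over the question+passage tokens as a rough saliency signal.
# Assumes HuggingFace transformers and a generic BERT encoder.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_attentions=True)

question = "What does a chicken lay?"
passage = "A chicken lays eggs in a nest every morning."
inputs = tokenizer(question, passage, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# attentions[-1]: (batch, heads, seq, seq); average over heads and read
# the [CLS] row (position 0) as a token-level relevance map.
attn = outputs.attentions[-1].mean(dim=1)[0, 0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, weight in sorted(zip(tokens, attn.tolist()),
                          key=lambda x: -x[1])[:5]:
    print(f"{tok:12s} {weight:.3f}")
```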
“…As mentioned above, semantic parsers have been used to convert language explanations into features (Srivastava et al., 2017) and noisy labels on unlabeled data (Hancock et al., 2018; Wang et al., 2019). Rather than using language to define a global collection of features, Rajani et al. (2019) and Camburu et al. (2018) use instance-level explanations to train models that generate their own explanations. Zaidan and Eisner (2008) ask annotators to highlight important words, then learn a generative model over parameters given these rationales.…”
Section: Related Work
Mentioning confidence: 99%
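
As a toy illustration of the explanations-to-noisy-labels idea mentioned in this statement, the sketch below turns a single natural-language explanation into a keyword-matching labeling function, in the spirit of Hancock et al. (2018). The string-splitting "parser" is a stand-in for their semantic parser, and all names here are hypothetical.

```python
# Toy sketch: convert a natural-language explanation into a noisy
# labeling rule for unlabeled data. The "parser" is a deliberate
# simplification of the semantic parsers cited above.
def rule_from_explanation(explanation):
    # Toy parse: an explanation like "because the word 'wife' appears
    # between the two people" becomes a substring-match labeling function.
    keyword = explanation.split("'")[1]

    def labeling_fn(sentence):
        # Emit a noisy positive label when the keyword is present,
        # abstain (None) otherwise.
        return 1 if keyword in sentence.lower() else None

    return labeling_fn

lf = rule_from_explanation(
    "because the word 'wife' appears between the two people")

unlabeled = [
    "His wife Ann joined the board in 2019.",
    "The committee approved the merger.",
]
print([lf(s) for s in unlabeled])  # [1, None]: weak labels for training
```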