Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.420
Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference

Abstract: Recent methods based on pre-trained language models have shown strong supervised performance on commonsense reasoning. However, they rely on expensive data annotation and time-consuming training. Thus, we focus on unsupervised commonsense reasoning. We show the effectiveness of using a common framework, Natural Language Inference (NLI), to solve diverse commonsense reasoning tasks. By leveraging transfer learning from large NLI datasets, and injecting crucial knowledge from commonsense sources such as ATOMIC 2…

Cited by 1 publication (1 citation statement)
References 29 publications
“…[Falke et al. 2019] use the entailment predictions of NLI models to re-rank the generated summaries of some state-of-the-art models. [Huang et al. 2021] use NLI models to improve unsupervised commonsense reasoning. [Koreeda and Manning 2021] use NLI models to assist contract review.…”
Section: Related Work
confidence: 99%