Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.225
Generated Knowledge Prompting for Commonsense Reasoning

Cited by 86 publications (71 citation statements)
References 24 publications
“…For example, when the final label is binary, Jung et al. (2022) induce a tree of explanations, then use a SAT solver and an NLI verifier to infer the satisfiability of each explanation. For commonsense reasoning tasks, Liu et al. (2022a) generate relevant knowledge as additional input to the model to improve performance. Another line of work proposes to retrieve prompts closer to the target question to further improve task performance (Liu et al., 2022b; Rubin et al., 2021).…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…They utilize a multilingual pretrained XLM-RoBERTa-base [3] model along with prompting methods to conduct natural language understanding tasks in 15 languages. Furthermore, most recent research on prompt-based learning has been applied in various fields of natural language processing, including relation extraction [22], commonsense reasoning [8], and complementing the weaknesses of prompting [4,15,23]. Son et al. [22] introduced a multitask learning approach for predicting a relation in a dialogue by guiding the model on relational cues with an MLM-based relational mention prediction and the prior distribution of entity types.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…Son et al. [22] introduced a multitask learning approach for predicting a relation in a dialogue by guiding the model on relational cues with an MLM-based relational mention prediction and the prior distribution of entity types. Liu et al. [8] proposed generated knowledge prompting to obtain the external knowledge required to solve commonsense reasoning tasks. Cui et al. [4] proposed a soft prototype verbalizer to find a suitable verbalizer within a large vocabulary.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
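The generated knowledge prompting pipeline referenced in these citation statements has two stages: a language model is first prompted to produce knowledge statements about a question, and each statement is then prepended to the question before answering, keeping the highest-confidence prediction. The sketch below is a minimal illustration of that two-stage idea under stated assumptions, not the authors' released code; the generate and score callables, the prompt wording, and the num_knowledge parameter are hypothetical placeholders for whatever language model and answer-scoring method one plugs in.

from typing import Callable, List

def generated_knowledge_prompting(
    question: str,
    choices: List[str],
    generate: Callable[[str], str],      # hypothetical LM wrapper: prompt -> completion
    score: Callable[[str, str], float],  # hypothetical scorer: (context, answer) -> confidence
    num_knowledge: int = 5,
) -> str:
    # Stage 1: knowledge generation -- sample several knowledge statements
    # about the question from a language model (prompt wording is illustrative).
    knowledge_prompt = (
        "Generate a short commonsense fact relevant to the question.\n"
        f"Question: {question}\nKnowledge:"
    )
    knowledge = [generate(knowledge_prompt) for _ in range(num_knowledge)]
    knowledge.append("")  # also keep a no-knowledge baseline

    # Stage 2: knowledge integration -- prepend each statement to the question,
    # score every answer choice, and return the overall highest-confidence choice.
    best_choice, best_score = choices[0], float("-inf")
    for k in knowledge:
        context = (k + "\n" if k else "") + question
        for choice in choices:
            s = score(context, choice)
            if s > best_score:
                best_choice, best_score = choice, s
    return best_choice

As a usage note, generate could wrap a few-shot prompted large language model call and score could return the model's log-probability of an answer choice given the knowledge-augmented question, mirroring the inference-time integration the cited paper describes.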
“…Recent research has demonstrated that relevant knowledge can provide useful context for approaching commonsense tasks. Yet these methods either retrieve from in-domain knowledge bases (Mitra et al., 2019; Chang et al., 2020) that do not have good coverage of commonsense, or generate knowledge from neural models (Shwartz et al., 2020; Gu et al., 2022; Liu et al., 2022), an approach that often requires domain-specific engineering and very large models (e.g., GPT-3 (Brown et al., 2020)).…”
Section: Introduction (citation type: mentioning; confidence: 99%)