2022
DOI: 10.48550/arxiv.2210.03078
Preprint

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

Abstract: Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context for commonsense question answering (QA), it can substantially improve performance, even on top of state-of-the-art models. The fundamental challenge is where and how to find such knowledge that is high quality and on point with respect to the question; knowledge retrieved from knowledge bases is incomplete, and knowledge generated from language models is inconsistent. We present RAINIER, or …
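The abstract describes the core pipeline: a knowledge introspector generates question-specific knowledge, which is then prepended to the QA model's input. Below is a minimal sketch of that pattern using Hugging Face transformers; the model names, prompt format, and decoding settings are illustrative assumptions, not the paper's released configuration (the actual introspector is a model fine-tuned with reinforcement learning).

```python
# Minimal sketch of knowledge introspection for commonsense QA (not the released
# RAINIER code). A generator produces a knowledge statement for a question,
# which is prepended to the input of a fixed QA model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

gen_name = "t5-large"                    # assumed stand-in for the knowledge introspector
qa_name = "allenai/unifiedqa-t5-large"   # assumed stand-in for the fixed QA model

gen_tok = AutoTokenizer.from_pretrained(gen_name)
gen_model = AutoModelForSeq2SeqLM.from_pretrained(gen_name)
qa_tok = AutoTokenizer.from_pretrained(qa_name)
qa_model = AutoModelForSeq2SeqLM.from_pretrained(qa_name)

question = "Where would you store a hammer you use often? (A) toolbox (B) drawer"

# 1) Introspect: generate a knowledge statement conditioned on the question.
gen_inputs = gen_tok("Generate knowledge about: " + question, return_tensors="pt")
knowledge_ids = gen_model.generate(**gen_inputs, max_new_tokens=32, do_sample=True, top_p=0.9)
knowledge = gen_tok.decode(knowledge_ids[0], skip_special_tokens=True)

# 2) Answer: prepend the generated knowledge to the QA model's input.
qa_inputs = qa_tok(f"{knowledge} {question}", return_tensors="pt")
answer_ids = qa_model.generate(**qa_inputs, max_new_tokens=8)
print(qa_tok.decode(answer_ids[0], skip_special_tokens=True))
```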

Cited by 3 publications (2 citation statements) | References 27 publications
“…Furthermore, our work can also be viewed from the perspective of learning discrete prompts for language models. Past work proposes to generate knowledge pieces (Liu et al., 2022) or arbitrary textual snippets (Deng et al., 2022) that are appended to the input via reinforcement learning. These works differ from ours in that their policy is conditioned solely on the input x, whereas in our case we sample critiques of machine-generated predictions based on x and ŷ.…”
Section: Adapters and Discrete Prompt Learning (mentioning)
Confidence: 99%
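For illustration, the distinction drawn in this excerpt, a policy conditioned only on the input x versus one that also sees the model's prediction ŷ, can be sketched as two prompt constructions. The prompt wording below is hypothetical and not taken from either paper.

```python
# Hypothetical prompt templates contrasting the two conditioning schemes.
def knowledge_prompt(x: str) -> str:
    # Knowledge-prompting policies (knowledge pieces or textual snippets)
    # condition only on the task input x.
    return f"Generate knowledge about: {x}"

def critique_prompt(x: str, y_hat: str) -> str:
    # A critique-style policy additionally conditions on the model's
    # prediction y_hat for the same input.
    return f"Question: {x}\nModel prediction: {y_hat}\nCritique this prediction:"
```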
“…Question answering (QA) has become one of the most popular downstream tasks in natural language processing (NLP) in recent years. QA tasks utilize large-scale pre-trained language models (LMs) to obtain token representations, exemplified by BERT [1], GPT [2], ELMo [3], and RoBERTa [4], all of which have achieved remarkable success. Meanwhile, commonsense, which is natural knowledge for humans, is essential external knowledge for QA systems to predict the correct answer [5].…”
Section: Introduction (mentioning)
Confidence: 99%