2021
DOI: 10.48550/arxiv.2112.06318
Preprint
Contextualized Scene Imagination for Generative Commonsense Reasoning

Abstract: Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text generation models (e.g., pre-trained text-to-text Transformers) are often grammatically fluent but may not correspond to human common sense, largely due to their lack of mechanisms to capture concept …

Cited by 1 publication (1 citation statement). References 18 publications.
“…For future work, we would sample sub-graph structures to explore more meaningful event-centric commonsense knowledge (Wang et al, 2021a). Moreover, we will equip our models with generative abilities by finetuning powerful T5 (Raffel et al, 2020) or BART models to help narrative story completion (Ji et al, 2020), commonsense inference (Gabriel et al, 2021), event infilling tasks (Lin et al, 2021).…”
Section: Discussion
confidence: 99%