Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing 2019
DOI: 10.18653/v1/d19-6015

Can a Gorilla Ride a Camel? Learning Semantic Plausibility from Text

Abstract: Modeling semantic plausibility requires commonsense knowledge about the world and has been used as a testbed for exploring various knowledge representations. Previous work has focused specifically on modeling physical plausibility and shown that distributional methods fail when tested in a supervised setting. At the same time, distributional models, namely large pretrained language models, have led to improved results for many natural language understanding tasks. In this work, we show that these pretrained la…


Cited by 10 publications (12 citation statements)
References 36 publications (51 reference statements)
“…Other studies introduced larger datasets, but focused on more specific notions of event plausibility (e.g., plausibility depending on the physical properties of the participants) (Wang et al., 2018; Porada et al., 2019; Ko et al., 2019).…”
Section: Related Work
confidence: 99%
“…We use English Wikipedia to construct the self-supervised training data. As a relatively clean, definitional corpus, plausibility models trained on Wikipedia have been shown to correlate with human judgements better than those trained on similarly sized corpora (Zhang et al., 2019a; Porada et al., 2019).…”
Section: Training Data
confidence: 99%
“…Annotators were instructed to ignore possible metaphorical meanings of an event. We divide the dataset equally into a validation and test set, following the split of Porada et al. (2019).…”
Section: Pep-3k
confidence: 99%
“…Some datasets focus on non-sentential event plausibility (Wang et al., 2018; Porada et al., 2019), such as “gorilla-ride-camel”. In contrast, our dataset is based on statements, which include events, descriptions, assertions, etc., not merely events, such as “China's territory is larger than Japan's”.…”
Section: Related Work
confidence: 99%