Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.138

Modeling Event Plausibility with Consistent Conceptual Abstraction

Abstract: Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events. While distributional models, most recently pre-trained Transformer language models, have demonstrated improvements in modeling event plausibility, their performance still falls short of humans'. In this work, we show that Transformer-based plausibility models are markedly inconsistent across the conceptual classes of a lexical hierarchy, inferring that "a person breathing" is plausible …
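
The abstract's claim is easy to see in practice. Below is a minimal sketch, not the paper's actual method, of the standard way to score event plausibility with a pre-trained Transformer: length-normalized log-likelihood under a causal language model. It assumes the HuggingFace transformers library; the model choice (gpt2), the helper name plausibility_score, and the example sentences are illustrative assumptions.

    # Minimal sketch (not the paper's method): event plausibility as
    # length-normalized log-likelihood under a causal LM.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def plausibility_score(event: str) -> float:
        """Mean token log-likelihood of the event string under the LM."""
        ids = tokenizer(event, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels=input_ids, the model returns the mean
            # next-token cross-entropy over the sequence.
            loss = model(ids, labels=ids).loss
        return -loss.item()  # higher = more plausible under the LM

    for event in ["a person is breathing", "a dentist is breathing"]:
        print(event, round(plausibility_score(event), 3))

The paper's observation is that, under scorers like this, moving down a lexical hierarchy (person to dentist) can swing scores in ways no consistent conceptual abstraction would license.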

Cited by 8 publications (9 citation statements). References 41 publications.

Citation statements, ordered by relevance:
“…A possible solution to overcoming the reporting bias would be to adjust the event distribution via injecting manually elicited knowledge about object and entity properties into models (Wang et al., 2018; although see Porada, Suleman, Trischler, & Cheung, 2021) or via data augmentation (e.g., Zmigrod et al., 2019). Alternatively, information about event typicality might enter LLMs through input from different modalities, such as visual depictions of the world in the form of large databases of images and/or image descriptions (Bisk et al., 2020).…”
Section: Discussion
confidence: 99%
“…Instantiation was attempted by Allaway et al. (2023), who proposed a controllable generative framework to probe valid instantiations of abstract knowledge automatically. Though Porada et al. (2021) and Peng et al. (2022) both showed that existing pretrained language models lack conceptual knowledge, none of the existing works explicitly combines both techniques to derive abstract knowledge that is context-sensitive and generalizable.…”
Section: Related Work
confidence: 99%
“…Does this representation perpetuate the negative stereotype that men are bad at cooking? To investigate this, we should dive deeper into the semantic plausibility learned in language models (Porada et al., 2021; Pedinotti et al., 2021). Unless the focus is on the domain of natural science, there is less agreement on what would lead to spreading desirable and undesirable content, and the borderline can change across time and place.…”
Section: Content Validation for Fair Representation
confidence: 99%
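
A hypothetical illustration of the probing this excerpt calls for: compare a language model's plausibility scores on minimal pairs that differ only in the social group mentioned. This is my sketch, not the cited papers' protocol; it assumes the HuggingFace transformers library, and the model (gpt2), helper name, and sentence pair are invented for illustration.

    # Hypothetical minimal-pair probe (illustration only): score two events
    # that differ only in the social group, using the same length-normalized
    # log-likelihood scorer as above.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def score(event: str) -> float:
        ids = tokenizer(event, return_tensors="pt").input_ids
        with torch.no_grad():
            return -model(ids, labels=ids).loss.item()

    a, b = "a man is cooking dinner", "a woman is cooking dinner"
    # One pair is anecdotal; a systematic gap across many such pairs would
    # be the meaningful signal of a learned stereotyped association.
    print(f"score gap = {score(a) - score(b):+.3f}")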