Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics 2019
DOI: 10.18653/v1/w19-2918

Using Grounded Word Representations to Study Theories of Lexical Concepts

Abstract: The fields of cognitive science and philosophy have proposed many different theories for how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, Classical Theory and Prototype Theory, in the context of visually-grounded lexical representations. We compare when and how the behavior of models based on these theories differs in terms of categorization and…
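As a rough illustration of how the two theories named in the abstract can be operationalized over grounded word vectors, the sketch below treats Prototype Theory as nearest-centroid categorization in embedding space and Classical Theory as a conjunction of necessary-and-sufficient features. This is a minimal sketch under assumptions of my own; the feature names, vectors, and category definitions are hypothetical and not taken from the paper.

```python
import numpy as np

def prototype_categorize(x, prototypes):
    """Prototype Theory sketch: assign x to the concept whose
    prototype (e.g. mean of grounded exemplar vectors) is closest."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

def classical_categorize(feats, definitions):
    """Classical Theory sketch: a concept applies iff all of its
    necessary-and-sufficient binary features hold."""
    return [c for c, required in definitions.items()
            if all(feats.get(f, False) for f in required)]

# Hypothetical toy usage with made-up grounded vectors and features.
prototypes = {"bird": np.array([0.9, 0.1]), "fish": np.array([0.1, 0.9])}
print(prototype_categorize(np.array([0.8, 0.2]), prototypes))   # -> "bird"

definitions = {"bird": ["has_feathers", "lays_eggs"]}
print(classical_categorize({"has_feathers": True, "lays_eggs": True}, definitions))
```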

Cited by 1 publication (1 citation statement)
References 24 publications
“…Their training scheme is more complex as they first separately train the AE for each modality and then fuse them, which we avoid by adopting a single end-to-end architecture. Ebert and Pavlick (2019) used VAEs to learn grounded representations for lexical concepts. However, as discussed in Section 3, VAEs are not as well suited as RAEs to representation learning for our imagination module.…”
Section: Related Work (mentioning)
Confidence: 99%
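For context on the quoted statement, below is a minimal sketch of the kind of variational autoencoder it refers to, i.e. a VAE that encodes a visual feature vector for a word into a latent grounded representation. The architecture, layer sizes, and names here are assumptions for illustration only, not the exact model of Ebert and Pavlick (2019) or of the citing work.

```python
import torch
import torch.nn as nn

class GroundedVAE(nn.Module):
    """Sketch of a VAE that maps a visual feature vector for a word
    to a latent 'grounded' representation and reconstructs the input."""
    def __init__(self, feat_dim=2048, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, feat_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        # KL term of the standard VAE objective, against N(0, I)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```

The training objective would combine a reconstruction loss on `recon` with the `kl` term; this is the standard VAE setup the quoted passage contrasts with RAE-based representation learning.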