2023
DOI: 10.1111/cogs.13388
Modeling Brain Representations of Words' Concreteness in Context Using GPT‐2 and Human Ratings

Andrea Bruera,
Yuan Tao,
Andrew Anderson
et al.

Abstract: The meaning of most words in language depends on their context. Understanding how the human brain extracts contextualized meaning, and identifying where in the brain this takes place, remain important scientific challenges. But technological and computational advances in neuroscience and artificial intelligence now provide unprecedented opportunities to study the human brain in action as language is read and understood. Recent contextualized language models seem to be able to capture homonymic meaning variation…

Cited by 2 publications
References 215 publications (391 reference statements)