2022
DOI: 10.1101/2022.03.01.482586
Preprint
Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain

Abstract: Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. Do language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language? To test this hypothesis, we densely recorded the neural activity in the Inferior Frontal Gyrus (IFG, also known as Broca's area) of three participants using de…
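Studies of this kind typically test the shared-geometry hypothesis with a linear encoding model: fit a regularized linear map from contextual embeddings to recorded neural activity, then evaluate predictions on held-out words. The sketch below illustrates that analysis on synthetic data; all shapes, the ridge penalty, and the train/test split are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a linear encoding analysis: map DLM contextual embeddings to
# per-electrode neural activity with ridge regression. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 500, 50, 10

# Synthetic contextual embeddings (stand-in for e.g. GPT-style hidden states).
embeddings = rng.standard_normal((n_words, emb_dim))

# Synthetic neural activity: a linear function of the embeddings plus noise,
# mimicking the hypothesis that IFG activity shares the embedding geometry.
true_weights = rng.standard_normal((emb_dim, n_electrodes))
neural = embeddings @ true_weights + 0.5 * rng.standard_normal((n_words, n_electrodes))

# Hold out the last 100 words for evaluation.
train, test = np.arange(0, 400), np.arange(400, 500)

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
lam = 1.0
X, Y = embeddings[train], neural[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(emb_dim), X.T @ Y)

# Score each electrode by correlating predicted and observed activity.
pred = embeddings[test] @ W
corrs = [np.corrcoef(pred[:, e], neural[test][:, e])[0, 1]
         for e in range(n_electrodes)]
print(f"mean test correlation: {np.mean(corrs):.2f}")
```

Above-chance held-out correlations are what license the claim that the neural code and the embedding space share geometry, since the map between them is constrained to be linear.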

Cited by 5 publications (6 citation statements)
References 28 publications
“…Our findings were based on LLMs, which recently demonstrated strong resemblance with the brain's language representation (26)(27)(28)(29)(30) and similar behavioral performance during memory recall tasks (31). In conjunction with the findings of our study, this notable similarity offers potentially new insights into generalization mechanisms in the brain, namely that prior knowledge of the learning system can be leveraged to perform new tasks without plasticity.…”
Section: Discussion (supporting)
confidence: 66%
“…Recent studies have identified a strong resemblance between the brain's language representation and the representation generated by LLMs (20)(21)(22)(23)(24) and similar behavioral performance during memory recall tasks (25). In conjunction with the findings of this study, this notable similarity offers new insights into generalization mechanisms in the brain.…”
Section: Discussion (mentioning)
confidence: 99%
“…These objectives are readily available to all language users and may play a critical role during online speech processing in the human brain. Second, like the brain, these models use a population code to represent each word in a high-dimensional embedding space distributed across relatively simple computing elements (Goldstein, Dabush, et al, 2022). Third, the architecture of large language models enables them to express the richness and complexity of real-world language; these models learn to represent the unique meaning of words in context.…”
Section: Discussion (mentioning)
confidence: 99%
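The point quoted above, that a population code in a high-dimensional embedding space lets these models represent the unique meaning of a word in context, can be illustrated with a toy sketch (not the cited paper's model): a single self-attention step maps the same word vector to different outputs depending on the surrounding words. The vocabulary, dimensions, and identity projections below are simplifying assumptions.

```python
# Toy demonstration that self-attention contextualizes a static word vector:
# the same "bank" vector yields different representations in different contexts.
import numpy as np

rng = np.random.default_rng(1)
d = 8  # embedding dimension (arbitrary)

def self_attention(X):
    """Single-head self-attention with identity Q/K/V projections, for clarity."""
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X  # each row is a context-weighted mixture of word vectors

# Static (context-free) vectors for a tiny vocabulary.
vocab = {w: rng.standard_normal(d) for w in ["river", "money", "bank"]}

# The same "bank" vector placed in two different contexts.
ctx_a = np.stack([vocab["river"], vocab["bank"]])
ctx_b = np.stack([vocab["money"], vocab["bank"]])

bank_a = self_attention(ctx_a)[1]  # "bank" contextualized by "river"
bank_b = self_attention(ctx_b)[1]  # "bank" contextualized by "money"

# The contextual representations of "bank" now differ between contexts,
# while the input vector for "bank" was identical in both.
print(np.allclose(bank_a, bank_b))
```

Because attention weights are always nonzero after the softmax, each output mixes in its neighbors, so the two contextualized "bank" vectors cannot coincide; this is the distributed, context-sensitive population code the quotation describes.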
“…Conspicuously absent from our discussion of language models are ones based on deep neural networks. Apart from their enormous success in practical tasks in natural language processing [e.g., as seen with the large language models (LLM) underlying systems like ChatGPT (Floridi and Chiriatti, 2020)], such models have also been used to model neural activity during sentence processing via the surprisal values they provide (Goldstein et al, 2022; Heilbron et al, 2022; Russo et al, 2022). On the surface, it would appear that such models belong to the class of linear lexical models (on a par with n-grams), as they do not appear to embody any sort of linguistic abstraction.…”
Section: Discussion (mentioning)
confidence: 99%