2022
DOI: 10.3389/frai.2022.796741
Beyond the Benchmarks: Toward Human-Like Lexical Representations

Abstract: To process language in a way that is compatible with human expectations in a communicative interaction, we need computational representations of lexical properties that form the basis of human knowledge of words. In this article, we concentrate on word-level semantics. We discuss key concepts and issues that underlie the scientific understanding of the human lexicon: its richly structured semantic representations, their ready and continual adaptability, and their grounding in crosslinguistically valid conceptu…


Cited by 3 publications (2 citation statements)
References 148 publications
“…Even though they are loosely inspired by neural connections in the brain (Perconti & Plebe, 2020; Rumelhart et al., 1986; Rumelhart & McClelland, 1987), artificial neural network models are considered neurobiologically unrealistic, both at the level of their implementation in the human brain (McClelland & Botvinick, 2020; Rosenbaum, 2022; Thomas & McClelland, 2008) and in their functional similarity to human language processing (Arehalli et al., 2022; Arehalli & Linzen, 2020) and learning (Stevenson & Merlo, 2022; Warstadt & Bowman, 2022). This notwithstanding, the engagement in constant next-word prediction is an important functional property shared between language models and human sentence processing (Goldstein et al., 2022).…”
Section: Discussion
confidence: 99%
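The "constant next-word prediction" that the statement above identifies as a shared functional property can be illustrated with a minimal sketch. This toy bigram model (the corpus and function names are hypothetical, not from any cited work) predicts the most frequent continuation of a word, the simplest form of the objective that modern language models optimize at scale:

```python
# Toy sketch of next-word prediction: count bigram continuations in a
# tiny corpus and predict the most frequent follower of a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Map each word to a Counter of the words that follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Neural language models replace these counts with learned distributed representations, but the prediction task itself is the same.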
“…Producing sentence representations is a non-trivial issue, mainly because of the structural, grammatical, and semantic relations sentences express and their varying complexity and length (Stevenson and Merlo, 2022). The deep learning framework has allowed for a variety of elegant solutions to explicitly learn sentence representations or to induce them as a side-effect of modeling a more complex problem (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017; Peters et al., 2018).…”
Section: Related Work
confidence: 99%
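One of the simplest ways to induce a sentence representation from word-level ones, in the spirit of the word-embedding work cited above, is to average the vectors of a sentence's words. The sketch below uses hand-made toy embeddings (the vectors and vocabulary are illustrative assumptions, not from any cited model):

```python
# Minimal sketch: sentence representations as averaged word embeddings.
import numpy as np

# Hypothetical 3-dimensional word embeddings (illustrative values only).
emb = {
    "dogs":   np.array([0.9, 0.1, 0.0]),
    "bark":   np.array([0.8, 0.2, 0.1]),
    "cats":   np.array([0.85, 0.15, 0.05]),
    "meow":   np.array([0.7, 0.3, 0.2]),
    "stocks": np.array([0.0, 0.9, 0.8]),
    "fell":   np.array([0.1, 0.8, 0.9]),
}

def sentence_vector(sentence):
    """Average the word vectors of a whitespace-tokenized sentence."""
    vecs = [emb[w] for w in sentence.split() if w in emb]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = sentence_vector("dogs bark")
s2 = sentence_vector("cats meow")
s3 = sentence_vector("stocks fell")

# The two animal sentences end up closer to each other than to the
# finance sentence.
print(cosine(s1, s2) > cosine(s1, s3))  # True
```

Averaging discards word order and structure, which is exactly the limitation the quoted passage points to: the structural and semantic relations within a sentence motivate the richer contextual models (e.g., Peters et al., 2018) it cites.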