2020
DOI: 10.1101/2020.09.18.304436
Preprint

Understanding and Improving Word Embeddings through a Neuroscientific Lens

Abstract: Despite the success of models making use of word embeddings on many natural language tasks, these models often perform significantly worse than humans on several natural language understanding tasks. This difference in performance motivates us to ask: (1) if existing word vector representations have any basis in the brain’s representational structure for individual words, and (2) whether features from the brain can be used to improve word embedding model performance, defined as their correlation with human sem…

Cited by 3 publications (1 citation statement)
References 31 publications
“…Mishra et al. [121] applied eye-tracking data to improve the quality of sentiment-analysis models. Additionally, Fereidoni et al. [122] introduced fMRI data into vocabulary representation learning, and Roller et al. [123] and Wang et al. [124] introduced human behavior data (lexical association scores) into multimodal vocabulary representation learning.…”
Section: Cognitive Data-Enhanced Models
Confidence: 99%