Word meaning is both categorical and continuous.

2023 | DOI: 10.1037/rev0000420

Abstract: Most words have multiple meanings, but there are foundationally distinct accounts for this. Categorical theories posit that humans maintain discrete entries for distinct word meanings, as in a dictionary. Continuous ones eschew discrete sense representations, arguing that word meanings are best characterized as trajectories through a continuous state space. Both kinds of approach face empirical challenges. In response, we introduce two novel "hybrid" theories, which reconcile discrete sense representations with…

Cited by 6 publications (3 citation statements) | References 94 publications
“…To briefly reiterate some of the points raised in the introduction: we chose BERT for our examination of the representational structure of regular polysemy due to its large size and proven track record in simulating a range of aspects of human cognition (Rogers et al., 2021), as well as its capability to produce contextual word vectors by attending to both the preceding and subsequent contexts of a given target word. However, choosing BERT does not preclude the generalizability of our results to other DSMs, as previous research has reached similar conclusions using a variety of DSMs (Floyd et al., 2021; Lopukhina & Lopukhin, 2016; Trott & Bergen, 2023). More importantly, probing DSMs is a well-established method for examining human language cognition, not only in terms of linguistic behavior, such as response times in lexical decision or naming tasks (Mandera et al., 2017) and eye-tracking-based reading times (Heilbron, van Haren, Hagoort, & de Lange, 2023; Pimentel, Meister, Wilcox, Levy, & Cotterell, 2023), but also in terms of internal cognitive/neural representations aligned with EEG (Ettinger, Feldman, Resnik, & Phillips, 2016) and fMRI data (e.g., Schrimpf et al., 2021).…”
Section: Distributional Semantic Models, Regular Polysemy, and Human Cognition
confidence: 52%
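The passage above turns on BERT producing a contextual vector for a target word by attending to the context on both sides of it. As a minimal, hypothetical sketch of what such probing looks like in practice — assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, with an invented example sentence rather than the cited studies' exact setups — one contextual word vector can be extracted like this:

```python
# Minimal sketch: extract a contextual vector for one target word with BERT.
# Model checkpoint, sentence, and target word are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The chicken pecked at the grain."
target = "chicken"

# Encode the sentence; BERT attends to both the left and right context
# of every token, which is what yields context-sensitive word vectors.
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state  # shape: (1, seq_len, 768)

# Locate the target's (first) subword token and take its hidden state.
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
idx = tokens.index(target)  # assumes the word survives as a single subword
vector = hidden[0, idx]     # 768-dimensional contextual word vector
```

Vectors extracted in this fashion are the kind of quantity that the probing literature cited above correlates with lexical decision and naming latencies, reading times, and EEG/fMRI recordings.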
“…Although we could envision that some detailed aspects of our findings are shaped by the particular implementational details of the BERT model, we nevertheless predict that our overall insights into regular polysemy should be reasonably robust and generalize to other similar models, as has been observed in other contexts using distributional vectors. This prediction is motivated by the qualitatively similar results that have been obtained using different models when studying other aspects of polysemy (e.g., BERT vs. ELMo in Trott & Bergen, 2023; word2vec vs. Sentence-BERT in Floyd et al., 2021), as well as by the prior success in using a variety of models to study other aspects of regular polysemy (e.g., Lopukhina & Lopukhin, 2016). Of course, this prediction is fundamentally an empirical question, which we leave for future work.…”
Section: Theoretical Approach
confidence: 77%
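The generalization claim in this statement rests on the similarity structure of a polysemous word's contextual vectors being comparable across models. As an illustrative sketch under the same assumptions as above (transformers, bert-base-uncased, invented sentences — not the authors' actual materials), the snippet below compares one regularly polysemous word across an animal reading and a meat reading; swapping in a different encoder would let one check whether the pattern replicates:

```python
# Minimal sketch: compare contextual vectors of one polysemous word across
# two contexts (animal vs. meat sense of "chicken"). Sentences are invented.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual hidden state of `word`'s first subword token."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

animal = word_vector("The chicken pecked at the grain.", "chicken")
meat = word_vector("The chicken was roasted with garlic.", "chicken")

# Lower cosine similarity across senses than within a sense would indicate
# that the model separates the two readings in representational space.
print(F.cosine_similarity(animal, meat, dim=0).item())
```

Running the same probe with another contextual encoder (e.g., an ELMo-style model) and comparing the resulting similarity patterns is one concrete way to test the robustness prediction the authors leave for future work.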