2021
DOI: 10.1037/rev0000269
Modeling word and morpheme order in natural language as an efficient trade-off of memory and surprisal.

Abstract: Memory limitations are known to constrain language comprehension and production, and have been argued to account for crosslinguistic word order regularities. However, a systematic assessment of the role of memory limitations in language structure has proven elusive, in part because it is hard to extract precise large-scale quantitative generalizations about language from existing mechanistic models of memory use in sentence processing. We provide an architecture-independent information-theoretic formalization …

Cited by 17 publications (39 citation statements)
References 128 publications (271 reference statements)
“…They show that optimizing the memory-surprisal tradeoff amounts to placing elements close together that strongly predict each other, as measured by mutual information. Hahn et al. (2021) argue that this property of the memory-surprisal tradeoff generalizes previous processing theories suggesting that orderings tend to place together elements that are syntactically related (Hawkins, 1994; Rijkhoff, 1986), conceptually related (Givón, 1985), semantically relevant to each other (Bybee, 1985), or processed together in lexical access (Hay & Plag, 2004). While focused on explaining word order across 54 languages, Hahn et al. (2021) also test whether two languages optimize the memory-surprisal tradeoff at the morphological level.…”
Section: Introduction
confidence: 55%
“…Conversely, the less memory is invested, the higher the surprisal. Hahn et al. (2021) argue for the efficient tradeoff hypothesis: the idea that the order of words and morphemes in language provides particularly efficient memory-surprisal tradeoffs. They show that optimizing the memory-surprisal tradeoff amounts to placing elements close together that strongly predict each other, as measured by mutual information.…”
Section: Introduction
confidence: 99%
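The citation statements above turn on one quantity: how much mutual information a sequence carries between elements a given distance apart. As a rough illustration (not the authors' actual estimator, which works over large corpora), the sketch below computes a plug-in estimate of the mutual information between symbols at lag t for a toy sequence, and compares an ordering where neighbors strongly predict each other against one with the same symbol counts but long-range structure; the first kind of ordering is the one the tradeoff favors.

```python
from collections import Counter
from math import log2

def mutual_info_at_lag(seq, t):
    """Plug-in estimate (in bits) of I(X_i; X_{i+t}) for a symbol sequence.

    Toy illustration: the memory-surprisal tradeoff favors orderings
    that concentrate mutual information at small lags t, i.e. that
    place mutually predictive elements close together.
    """
    pairs = list(zip(seq, seq[t:]))
    n = len(pairs)
    joint = Counter(pairs)                 # empirical joint distribution
    left = Counter(x for x, _ in pairs)    # marginal of earlier symbol
    right = Counter(y for _, y in pairs)   # marginal of later symbol
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy * log2( p_xy / (p_x * p_y) ), with counts cancelled out
        mi += p_xy * log2(p_xy * n * n / (left[x] * right[y]))
    return mi

# Adjacent symbols strongly predict each other (high lag-1 MI)...
local = list("ababababababababab")
# ...versus the same symbol counts arranged in long same-symbol runs.
spread = list("aaaaaaaaabbbbbbbbb")

print(mutual_info_at_lag(local, 1))   # high: each neighbor is informative
print(mutual_info_at_lag(spread, 1))  # lower at lag 1
```

Under the efficient tradeoff hypothesis, a listener predicting the `local` sequence gets most of its predictive information from the immediately preceding symbol, so little memory is needed to keep surprisal low; the `spread` ordering carries relatively more of its information at longer distances.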
“…We think that this is an interesting feature, since semantics precedes syntax in communication, whereas advanced syntax is a later-evolved mechanism that makes the mapping between signals and complex meanings more fault tolerant (and, incidentally, more redundant). In fact, syntax can also be investigated using ideas from information theory [119, 120].…”
Section: Discussion
confidence: 99%