2021
DOI: 10.1111/cogs.13027
Monotone Quantifiers Emerge via Iterated Learning

Abstract: Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution. In particular, we show that quantifiers satisfy the mon…

Cited by 8 publications (7 citation statements) · References 43 publications
“…Thus, we define a graded measure of monotonicity, which we can check for correlation with optimality. (See References [57,58], which show that this measure also increases over time during iterated learning.)…”
Section: Monotonicity
Mentioning confidence: 99%
“…To see how this measure tracks intuitions, References [57,58] report results for the previously mentioned quantifiers "some", "between 3 and 5", and "an even number of" on all models of a fixed size. "Some" gets monotonicity 1.0 because knowing whether a structure has a substructure that verifies "some" eliminates all uncertainty about the truth of the structure.…”
Section: Monotonicity
Mentioning confidence: 99%
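To make the quoted description concrete, here is a small Python sketch of the entropy-based degree of monotonicity these passages describe. The encoding is an assumption on my part (models as binary tuples, submodels as pointwise-smaller tuples; the names `degree_of_monotonicity` and `entropy` are mine, not from the paper), but the quantity computed, 1 − H(Q | S)/H(Q), where S flags whether the quantifier holds at some submodel, matches the intuition in the quote: for "some", S determines Q exactly, so the measure is 1.0.

```python
from itertools import product
import math

def entropy(ps):
    """Shannon entropy (bits) of a distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def degree_of_monotonicity(q, n):
    """Graded upward monotonicity of quantifier q on models of size n.

    Models are binary tuples; m' is a submodel of m iff m' <= m pointwise.
    Returns 1 - H(Q | S) / H(Q), where Q is q's truth at a uniformly drawn
    model and S indicates whether q holds at some submodel of that model.
    """
    models = list(product([0, 1], repeat=n))

    def holds_at_some_submodel(m):
        # submodels of m are exactly the pointwise ANDs of m with any mask
        return any(q(tuple(a & b for a, b in zip(m, mask)))
                   for mask in product([0, 1], repeat=n))

    joint = {}  # (Q(m), S(m)) -> number of models with that combination
    for m in models:
        key = (bool(q(m)), holds_at_some_submodel(m))
        joint[key] = joint.get(key, 0) + 1

    total = len(models)
    p_q = {}
    for (qv, _), c in joint.items():
        p_q[qv] = p_q.get(qv, 0) + c
    h_q = entropy([c / total for c in p_q.values()])
    if h_q == 0:
        return 1.0  # a constant quantifier counts as perfectly monotone

    p_s = {}
    for (_, sv), c in joint.items():
        p_s[sv] = p_s.get(sv, 0) + c
    h_q_given_s = sum(
        (c_s / total) * entropy([joint.get((qv, sv), 0) / c_s
                                 for qv in (False, True)])
        for sv, c_s in p_s.items())
    return 1 - h_q_given_s / h_q
```

On this sketch, the three quantifiers named in the quote behave as described:

```python
for name, q in [("some", lambda m: sum(m) >= 1),
                ("between 3 and 5", lambda m: 3 <= sum(m) <= 5),
                ("an even number of", lambda m: sum(m) % 2 == 0)]:
    print(name, round(degree_of_monotonicity(q, 8), 3))
# "some" -> 1.0; "an even number of" -> 0.0 (every model has the empty
# submodel, so S carries no information); "between 3 and 5" lands in between.
```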
“…Also, they concluded that monotone quantifiers consistently evolved in the iterated learning model, suggesting that the propensity for monotonicity in natural language quantifiers could be explained by the iterated learning process. Moreover, they found that these evolved quantifiers often did not rely on the identity of specific individuals, aligning with another semantic universal, the universal of quantity [14]. Therefore, this outcome supports the hypothesis that certain properties of natural language quantifiers, like monotonicity and quantity, may arise from the cognitive biases of learners and the dynamics of cultural transmission.…”
Section: Monotonicity Universal of Quantifiers
Mentioning confidence: 54%
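The "universal of quantity" mentioned here says that a quantifier's truth value may depend only on how many individuals have the relevant property, never on which ones. Under the same binary-tuple encoding as above (again my own illustration, not the authors' code), the check is a short function:

```python
from itertools import product

def is_quantitative(q, n):
    """Quantity universal: on binary models, q may depend only on the count
    of 1s. Equivalently, q must be constant on each same-count class, which
    is the same as invariance under permutations of the individuals."""
    values_by_count = {}
    for m in product([0, 1], repeat=n):
        values_by_count.setdefault(sum(m), set()).add(bool(q(m)))
    return all(len(vals) == 1 for vals in values_by_count.values())

# "at least two" is quantitative; "the first object is a B" is not
assert is_quantitative(lambda m: sum(m) >= 2, 5)
assert not is_quantitative(lambda m: m[0] == 1, 5)
```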
“…The iterated learning paradigm is one way to study the transmission of any type of cultural knowledge, which makes it an ideal tool to experimentally study language evolution and the emergence of structure. In this paradigm, agents (computational, e.g., Navarro, Perfors, Kary, Brown, & Donkin, 2018; Carcassi, Steinert‐Threlkeld, & Szymanik, 2021; or human, e.g., Raviv & Arnon, 2018) are taught an artificial language, which they are then asked to produce. The artificial language begins with a generation of completely random (i.e., linguistically unstructured) input.…”
Section: Introduction
Mentioning confidence: 99%
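The transmission chain itself is easy to sketch. Below is a schematic Python version of the paradigm as the quote describes it, with generation 0 a completely random (linguistically unstructured) language. The learner here — memorize observed examples, generalize unseen models from the nearest observed count — is a deliberately simple stand-in for the learning models used in the actual studies, and all names are my own:

```python
import random
from itertools import product

def iterated_learning(n_objects=6, generations=20, sample_size=40, seed=0):
    """Schematic iterated-learning chain: each generation observes labelled
    examples of the previous agent's quantifier and induces its own language,
    which it then transmits. Generation 0 is random, i.e., unstructured."""
    rng = random.Random(seed)
    models = list(product([0, 1], repeat=n_objects))
    language = {m: rng.random() < 0.5 for m in models}  # generation 0
    for _ in range(generations):
        # the learner sees a bottlenecked sample of (model, truth value) pairs
        observed = {m: language[m] for m in rng.sample(models, sample_size)}

        def learn(m, obs=observed):
            if m in obs:
                return obs[m]  # memorized example
            # generalize from the observed model closest in count of 1s --
            # a toy stand-in for the learner's inductive bias
            nearest = min(obs, key=lambda o: abs(sum(o) - sum(m)))
            return obs[nearest]

        language = {m: learn(m) for m in models}
    return language
```

Combined with `degree_of_monotonicity` above, one can track how monotone the chain's language is across generations, e.g. `degree_of_monotonicity(lambda m: language[m], 6)`; the finding quoted earlier is that, with realistic learners, this measure increases over time during iterated learning.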