When we describe time, we often use the language of space (The movie was long; The deadline is approaching). Experiments 1–3 asked whether—as patterns in language suggest—a structural similarity between representations of spatial length and temporal duration is easier to access than one between length and other dimensions of experience, such as loudness. Adult participants were shown pairings of lines of different length with tones of different duration (Experiment 1) or tones of different loudness (Experiment 2). The length of the lines and the duration or loudness of the tones were either positively or negatively correlated. Participants were better able to bind particular lengths and durations when they were positively correlated than when they were not, a pattern not observed for pairings of lengths and tone amplitudes, even after controlling, in Experiment 3, for the visual cues to duration that were present in Experiment 1. This suggests that representations of length and duration may functionally overlap to a greater extent than representations of length and loudness. Experiments 4 and 5 asked whether experience with and mastery of words like long and short—which can flexibly refer to both space and time—itself creates this privileged relationship. Nine-month-old infants, like adults, were better able to bind representations of particular lengths and durations when these were positively correlated (Experiment 4), and failed to show this pattern for pairings of lengths and tone amplitudes (Experiment 5). We conclude that the functional overlap between representations of length and duration does not result from a metaphoric construction process mediated by learning to flexibly use words such as long and short. We suggest instead that it may reflect an evolutionary recycling of spatial representations for more general purposes.
Mental abacus (MA) is a technique for performing fast, accurate arithmetic using a mental image of an abacus; experts exhibit astonishing calculation abilities. Over 3 years, 204 elementary school students (age range at outset: 5–7 years old) participated in a randomized, controlled trial to test whether MA expertise (a) can be acquired in standard classroom settings, (b) improves students' mathematical abilities (beyond standard math curricula), and (c) is related to changes in basic cognitive capacities like working memory. MA students outperformed controls on arithmetic tasks, suggesting that MA expertise can be achieved by children in standard classrooms. MA training did not alter basic cognitive abilities; instead, differences in spatial working memory at the beginning of the study mediated MA learning.
Words often have multiple distinct but related senses, a phenomenon called polysemy. For instance, in English, words like chicken and lamb can label animals and their meats, while words like glass and tin can label materials and artifacts derived from those materials. In this paper, we ask why words have some senses but not others, and thus what constrains the structure of polysemy. Previous work has pointed to two different sources of constraints. First, polysemy could reflect conceptual structure: word senses could be derived based on how ideas are associated in the mind. Second, polysemy could reflect a set of arbitrary, language-specific conventions: word senses could be difficult to derive and might have to be memorized and stored. We used a large-scale cross-linguistic survey to elucidate the relative contributions of concepts and conventions to the structure of polysemy. We explored whether 27 distinct patterns of polysemy found in English are also present in 14 other languages. Consistent with the idea that polysemy is constrained by conceptual structure, we found that almost all surveyed patterns of polysemy (e.g., animal for meat, material for artifact) were present across languages. However, consistent with the idea that polysemy reflects language-specific conventions, we also found variation across languages in how patterns are instantiated in specific senses (e.g., the word for glass material is used to label different glass artifacts across languages). We argue that these results are best explained by a "conventions-constrained-by-concepts" model, in which the different senses of words are learned conventions, but conceptual structure makes some types of relations between senses easier to grasp than others, such that the same patterns of polysemy evolve across languages. This opens a new view of lexical structure, in which polysemy is a linguistic adaptation that makes it easier for children to learn word meanings and build a lexicon.
Human language relies on a finite lexicon to express a potentially infinite set of ideas. A key result of this tension is that words acquire novel senses over time. However, the cognitive processes that underlie the historical emergence of new word senses are poorly understood. Here, we present a computational framework that formalizes competing views of how new senses of a word might emerge by attaching to existing senses of the word. We test the ability of the models to predict the temporal order in which the senses of individual words have emerged, using an historical lexicon of English spanning the past millennium. Our findings suggest that word senses emerge in predictable ways, following an historical path that reflects cognitive efficiency, predominantly through a process of nearest-neighbor chaining. Our work contributes a formal account of the generative processes that underlie lexical evolution.
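To make the nearest-neighbor chaining idea concrete, here is a minimal sketch, not the authors' model or data: it assumes each sense of a word is represented by a vector (made-up 2-D embeddings for hypothetical senses of "face") and greedily orders senses so that each newcomer attaches to whichever already-emerged sense it is closest to.

```python
# Hypothetical sketch of nearest-neighbor chaining for word-sense emergence
# (illustration only; not the paper's code or data). Each sense is a vector;
# senses are assumed to emerge in the order that keeps every newcomer as
# close as possible to some already-emerged sense.
import numpy as np

def nearest_neighbor_chain(sense_vectors, first_sense):
    """Greedily order senses: at each step, emit the remaining sense whose
    distance to its nearest already-emerged neighbor is smallest."""
    emerged = [first_sense]
    remaining = set(sense_vectors) - {first_sense}
    while remaining:
        def nn_distance(s):
            return min(np.linalg.norm(sense_vectors[s] - sense_vectors[e])
                       for e in emerged)
        nxt = min(remaining, key=nn_distance)
        emerged.append(nxt)
        remaining.remove(nxt)
    return emerged

# Made-up senses of "face" with made-up 2-D embeddings.
senses = {
    "front of the head":   np.array([0.0, 0.0]),
    "facial expression":   np.array([0.3, 0.1]),
    "front of a building": np.array([1.0, 0.2]),
    "dial of a clock":     np.array([1.3, 0.4]),
}
print(nearest_neighbor_chain(senses, "front of the head"))
# -> ['front of the head', 'facial expression', 'front of a building', 'dial of a clock']
```

Competing mechanisms of the kind the abstract alludes to would differ mainly in the scoring function, for example attaching new senses to a central prototype of all existing senses rather than to the single nearest neighbor.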
Deictic time words like "yesterday" and "tomorrow" pose a challenge to children not only because they are abstract and label periods in time, but also because their denotations vary according to the time at which they are uttered: Monday's "tomorrow" is different from Thursday's. Although children produce these words as early as age 2 or 3, they do not use them in adult-like ways for several subsequent years. Here, we explored whether children have partial but systematic meanings for these words during the long delay before adult-like usage. We asked 3- to 8-year-olds to represent these words on a bidirectional, left-to-right timeline that extended from the past (infancy) to the future (adulthood). This method allowed us to independently probe knowledge of these words' deictic status (e.g., "yesterday" is in the past), relative ordering (e.g., "last week" was before "yesterday"), and remoteness from the present (e.g., "last week" was about 7 times longer ago than "yesterday"). We found that adult-like knowledge of deictic status and order emerges in synchrony, between ages 4 and 6, but that knowledge of remoteness emerges later, after age 7. Our findings suggest that children's early use of deictic time words is not random, but instead reflects the gradual construction of a structured lexical domain.