Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.
Humans regularly produce new utterances that are understood by other members of the same language community [1]. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances [2]. The recursive, hierarchical embedding of language units (for example, words or phrases within larger sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar [2, 3] that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty [4, 5]. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.

The computational complexity of generative grammars is formally defined [3] such that certain classes of temporally patterned strings can only be produced (or recognized) by specific classes of grammars (Fig. 1). Starlings sing long songs composed of iterated motifs (smaller acoustic units) [6] that form the basic perceptual units of individual song recognition [7–9]. Here we used eight 'rattle' and eight 'warble' motifs (see Methods) to create complete 'languages' (4,096 sequences each) for two distinct grammars: a context-free grammar (CFG) of the form A²B², which entails recursive centre-embedding, and a finite-state grammar (FSG) of the form (AB)², which does not (Fig. 2a, b; 'A' refers to rattles and 'B' to warbles).

We trained 11 European starlings, using a go/nogo operant conditioning procedure, to classify subsets of sequences from these languages (see Methods and Supplementary Information). Nine of the eleven starlings learned to classify the FSG and CFG sequences accurately (as assessed by d', an unbiased measure of sensitivity in differentiating between two classes of patterns), but the task was difficult (Fig. 2c). The rate of acquisition varied widely among the starlings that learned the task (303.44 ± 57.11 blocks to reach criterion (mean ± s.e.m.), range 94–562 blocks with 100 trials per block), and was slow by comparison to other operant song-recognition tasks [7].

To assess the possibility that the starlings had learned to classify the motif patterns described by the CFG and FSG correctly through rote memorization of the training exemplars, we further tested them with novel sequences from each grammar (Fig. 3a). The mean d' over the first 100 trials with new stimuli (roughly six responses to each exemplar) was 1.08 ± 0.50, which is significantly better than chance performance (d' = 0). Over th...
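The grammar contrast and the d' measure above can be made concrete in a few lines. The following Python sketch (ours, not the authors' code; the motif labels and the example response rates are hypothetical) enumerates the two 4,096-sequence 'languages' and computes d' as the difference between the z-transformed hit and false-alarm rates.

```python
from itertools import product
from statistics import NormalDist

# Eight 'A' (rattle) and eight 'B' (warble) motifs; these labels are placeholders.
A = [f"rattle{i}" for i in range(1, 9)]
B = [f"warble{i}" for i in range(1, 9)]

# CFG pattern A²B² (AABB): 8 choices at each of 4 positions -> 8**4 = 4,096 sequences.
cfg_language = list(product(A, A, B, B))

# FSG pattern (AB)² (ABAB): likewise 4,096 sequences.
fsg_language = list(product(A, B, A, B))

assert len(cfg_language) == len(fsg_language) == 4096

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms); d' = 0 is chance."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical block: 70% 'go' responses to go stimuli, 35% to nogo stimuli.
print(f"d' = {d_prime(0.70, 0.35):.2f}")  # d' = 0.91
```

Note that with one level of embedding the two languages share the same vocabulary and length and differ only in motif order (AABB versus ABAB), so correct classification requires sensitivity to sequential structure rather than to the individual motifs.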
Observing a speaker's mouth profoundly influences speech perception. For example, listeners perceive an illusory "ta" when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory "ta" percept are more similar to the activity patterns evoked by AV(/ta/) than they are to patterns evoked by AV(/pa/) or AV(/ka/). In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory "ta" in auditory and somatosensory areas and in visual areas initially resembles the activity evoked by AV(/pa/) and AV(/ka/), respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV(/ta/). Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation.
Memory consolidation resulting from sleep has been seen broadly: in verbal list learning, spatial learning, and skill acquisition in visual and motor tasks. These tasks do not generalize across spatial locations or motor sequences, or to different stimuli in the same location. Although episodic rote learning constitutes a large part of any organism's learning, generalization is a hallmark of adaptive behaviour. In speech, the same phoneme often has different acoustic patterns depending on context. Training on a small set of words improves performance on novel words using the same phonemes but with different acoustic patterns, demonstrating perceptual generalization. Here we show a role of sleep in the consolidation of a naturalistic spoken-language learning task that produces generalization of phonological categories across different acoustic patterns. Recognition performance immediately after training showed a significant improvement that subsequently degraded over the span of a day's retention interval, but completely recovered following sleep. Thus, sleep facilitates the recovery and subsequent retention of material learned opportunistically at any time throughout the day. Performance recovery indicates that representations and mappings associated with generalization are refined and stabilized during sleep.