How does acoustic degradation affect the neural mechanisms of working memory? Enhanced alpha oscillations (8–13 Hz) during retention of items in working memory are often interpreted as reflecting increased demands on storage and inhibition. We hypothesized that auditory signal degradation poses an additional challenge to human listeners partly because it draws on the same neural mechanisms. In an adapted Sternberg paradigm, auditory memory load and acoustic degradation were parametrically varied, and the magnetoencephalographic response was analyzed in the time-frequency domain. Notably, during the stimulus-free delay interval, alpha power at central-parietal sensors increased monotonically both with memory load (higher alpha power under greater load) and with acoustic degradation (higher alpha power under more severe degradation). This alpha effect was superadditive when the highest load was combined with the most severe degradation. Moreover, alpha oscillatory dynamics during the stimulus-free delay were predictive of response times to the probe item. Source localization of alpha power during the stimulus-free delay indicated that alpha generators in right parietal, cingulate, supramarginal, and superior temporal cortex were sensitive to combined memory load and acoustic degradation. In summary, the challenges of memory load and acoustic degradation both increase activity in a common alpha-frequency network. The results set the stage for future studies on how chronic or acute degradations of sensory input affect mechanisms of executive control.
Action-theoretic views of language posit that the recognition of others’ intentions is key to successful interpersonal communication. Yet, speakers do not always encode their intentions literally, raising the question of which mechanisms enable interlocutors to exchange communicative intents. The present study investigated whether and how prosody—the vocal tone—contributes to the identification of “unspoken” intentions. Single (non-)words were spoken with six intonations representing different speech acts—as carriers of communicative intentions. This corpus was acoustically analyzed (Experiment 1) and behaviorally evaluated in two experiments (Experiments 2 and 3). The combined results show characteristic prosodic feature configurations for different intentions that were reliably recognized by listeners. Interestingly, identification of intentions was not contingent on context (single words), lexical information (non-words), or recognition of the speaker’s emotion (valence and arousal). Overall, the data demonstrate that speakers’ intentions are represented in the prosodic signal, which can thus determine the success of interpersonal communication.
Our ability to understand others’ communicative intentions in speech is key to successful social interaction. Indeed, misunderstanding an ‘excuse me’ as an apology when it was meant as criticism may have important consequences. Recent behavioural studies have provided evidence that prosody, that is, vocal tone, is an important indicator of speakers’ intentions. Using a novel audio-morphing paradigm, the present functional magnetic resonance imaging study examined the neurocognitive mechanisms that allow listeners to ‘read’ speakers’ intents from vocal prosodic patterns. Participants categorized prosodic expressions whose acoustics varied gradually between criticism, doubt, and suggestion. Categorizing typical exemplars of the three intentions induced activations along the ventral auditory stream, complemented by the amygdala and the mentalizing system. These findings likely depict the stepwise conversion of external perceptual information into abstract prosodic categories and internal social-semantic concepts, including the speaker’s mental state. Ambiguous tokens, in turn, involved cingulo-opercular areas known to assist decision-making in the case of conflicting cues. Auditory and decision-making processes were flexibly coupled with the amygdala, depending on prosodic typicality, indicating enhanced categorization efficiency for overtly relevant, meaningful prosodic signals. Altogether, the results point to a model in which auditory prosodic categorization and socio-inferential conceptualization cooperate to translate perceived vocal tone into a coherent representation of the speaker’s intent.