Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Misophonia is a common disorder characterized by the experience of strong negative emotions of anger and anxiety in response to certain everyday sounds, such as those generated by other people eating, drinking, and breathing. The commonplace nature of these “trigger” sounds makes misophonia a devastating disorder for sufferers and their families. How such innocuous sounds trigger this response is unknown. Since most trigger sounds are generated by orofacial movements (e.g., chewing) in others, we hypothesized that the mirror neuron system related to orofacial movements could underlie misophonia. We analyzed resting-state fMRI (rs-fMRI) connectivity (N = 33, 16 females) and sound-evoked fMRI responses (N = 42, 29 females) in misophonia sufferers and controls. We demonstrate that, compared with controls, the misophonia group show no difference in auditory cortex responses to trigger sounds, but do show: (1) stronger rs-fMRI connectivity between both auditory and visual cortex and the ventral premotor cortex responsible for orofacial movements; (2) stronger functional connectivity between the auditory cortex and orofacial motor area during sound perception in general; and (3) stronger activation of the orofacial motor area, specifically, in response to trigger sounds. Our results support a model of misophonia based on “hyper-mirroring” of the orofacial actions of others, with sounds being the “medium” via which the actions of others are excessively mirrored. Misophonia is therefore not an abreaction to sounds per se, but a manifestation of activity in parts of the motor system involved in producing those sounds. This new framework for understanding misophonia can explain behavioral and emotional responses and has important consequences for devising effective therapies.

SIGNIFICANCE STATEMENT Conventionally, misophonia, literally “hatred of sounds,” has been considered a disorder of sound emotion processing, in which “simple” eating and chewing sounds produced by others cause negative emotional responses. Our data provide an alternative but complementary perspective on misophonia that emphasizes the action of the triggering person rather than the sounds, which are a byproduct of that action. Sounds, in this new perspective, are only a “medium” via which the action of the triggering person is mirrored onto the listener. This change in perspective has important consequences for devising therapies and treatment methods for misophonia. It suggests that, instead of focusing on sounds, as many existing therapies do, effective therapies should target the brain representation of movement.
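To make the group connectivity comparison concrete, the sketch below shows one conventional way a seed-based rs-fMRI analysis of this kind could be set up: Fisher z-transformed Pearson correlations between ROI-averaged BOLD time series (e.g., auditory cortex and ventral premotor cortex), compared between groups with an independent-samples t-test. The ROI pairing, array sizes, and simulated data are illustrative assumptions only, not the authors' actual pipeline.

```python
# Minimal sketch of a seed-based functional connectivity group comparison,
# assuming ROI-averaged BOLD time series have already been extracted.
# ROI names, time-series lengths, and the simulated data are illustrative.
import numpy as np
from scipy import stats

def roi_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Fisher z-transformed Pearson correlation between two ROI time series."""
    r, _ = stats.pearsonr(ts_a, ts_b)
    return np.arctanh(r)  # Fisher z makes values better behaved for group statistics

def group_comparison(conn_group_a: np.ndarray, conn_group_b: np.ndarray):
    """Independent-samples t-test on per-subject connectivity values."""
    return stats.ttest_ind(conn_group_a, conn_group_b)

# Illustration with simulated subjects: stronger shared signal between the
# 'auditory' and 'premotor' ROIs yields higher connectivity in one group.
rng = np.random.default_rng(0)

def simulate_subject(coupling: float, n_timepoints: int = 200) -> float:
    shared = rng.standard_normal(n_timepoints)
    auditory = coupling * shared + rng.standard_normal(n_timepoints)
    premotor = coupling * shared + rng.standard_normal(n_timepoints)
    return roi_connectivity(auditory, premotor)

misophonia = np.array([simulate_subject(0.8) for _ in range(33)])
controls = np.array([simulate_subject(0.4) for _ in range(33)])
print(group_comparison(misophonia, controls))
```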
Speech-in-noise (SiN) perception is a critical aspect of natural listening, deficits in which are a major contributor to the hearing handicap in cochlear hearing loss. Studies suggest that SiN perception correlates with cognitive skills, particularly phonological working memory: the ability to hold and manipulate phonemes or words in mind. We consider here the idea that SiN perception is linked to a more general ability to hold sound objects in mind, auditory working memory, irrespective of whether the objects are speech sounds. This process might help combine foreground elements, like speech, over seconds to aid their separation from the background of an auditory scene. We investigated the relationship between auditory working memory precision and SiN thresholds in listeners with normal hearing. We used a novel paradigm that tests auditory working memory for non-speech sounds that vary in frequency and amplitude modulation (AM) rate. The paradigm yields measures of precision in the frequency and AM domains, based on the distribution of participants' estimates of the target. Across participants, frequency precision correlated significantly with SiN thresholds. Frequency precision also correlated with the number of years of musical training. Measures of phonological working memory did not correlate with SiN detection ability. Our results demonstrate a specific relationship between working memory for frequency and SiN perception. We suggest that working memory for frequency facilitates the identification and tracking of foreground objects like speech during natural listening. Working memory performance for frequency also correlated with years of musical instrument experience, suggesting that the former is potentially modifiable.

Speech-in-noise (SiN) perception is the ability to identify spoken words when background noise is present. Deficits in SiN perception are one of the most common problems in patients with cochlear hearing loss, but there has been increasing interest in the cognitive abilities that determine SiN perception [1]. Akeroyd [2] summarised studies describing the relationship of cognitive measures to speech-in-noise performance. Phonological working memory measures such as the reading span and digit span were found to have an effect on SiN detection after accounting for hearing loss. However, other studies have suggested that phonological working memory only comes into play in older participants or when a participant has high-frequency hearing loss [3]. We consider here the idea that more fundamental forms of working memory that apply to all sounds, including speech, are relevant to SiN perception. From first principles, the ability to hold in mind sound features that are characteristic of particular sources, including voices, might aid SiN perception by allowing sequential outputs from a particular source to be grouped. Previous work has shown that fundamental auditory grouping processes involved in separating non-speech figures from an acoustic background ('figure-ground perception') explain a sizable portion of individual differences in ...
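As a rough illustration of how a precision measure can be derived from the distribution of a participant's estimates of the target and then related to SiN thresholds across participants, the sketch below takes precision as the reciprocal of the standard deviation of estimation errors and computes a Pearson correlation with per-participant SiN thresholds. The precision definition, variable names, and simulated data are assumptions for illustration, not the paper's exact analysis.

```python
# Sketch: working memory precision from the spread of target estimates,
# correlated with SiN thresholds across participants. All names and the
# simulated data are illustrative assumptions.
import numpy as np
from scipy import stats

def precision(estimates: np.ndarray, targets: np.ndarray) -> float:
    """Precision as the reciprocal of the SD of estimation errors."""
    errors = estimates - targets
    return 1.0 / np.std(errors, ddof=1)

def correlate_precision_with_sin(all_estimates, all_targets, sin_thresholds):
    """Per-participant precision scores, then Pearson r with SiN thresholds."""
    precisions = np.array([precision(e, t) for e, t in zip(all_estimates, all_targets)])
    return stats.pearsonr(precisions, np.asarray(sin_thresholds))

# Illustrative use with simulated data: 20 participants, 30 trials each.
rng = np.random.default_rng(1)
targets = [rng.uniform(500, 2000, 30) for _ in range(20)]      # target frequencies (Hz)
noise_sd = rng.uniform(20, 200, 20)                            # each participant's estimation noise
estimates = [t + rng.normal(0, sd, t.size) for t, sd in zip(targets, noise_sd)]
sin_thresholds = 0.01 * noise_sd + rng.normal(0, 0.2, 20)      # noisier estimators -> higher (worse) thresholds
print(correlate_precision_with_sin(estimates, targets, sin_thresholds))
```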