What makes a musician? In this review, we discuss innate and experience-dependent factors that mold the musician brain, and we present new data in children indicating that some neural enhancements in musicians unfold with continued training over development. We begin by addressing the effects of training on musical expertise, presenting neural, perceptual, and cognitive evidence to support the claim that musicians are shaped by their musical training regimes. For example, many musician advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with the extent of musical training, are not observed in young children just beginning musical training, and differ based on the type of training pursued. Even amid the innate characteristics that contribute to the biological building blocks of the musician, musicians demonstrate further training-related enhancements through extensive education and practice. We conclude by reviewing evidence from neurobiological and epigenetic approaches to frame biological markers of musicianship in the context of interactions between genetic and experience-related factors.
Objective: Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital both for music comprehension and for understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. Study Design: Prospective cohort study. Setting: Tertiary academic center. Patients: Fifteen postlingually deafened adults with CIs. Intervention(s): Participants were divided into 3 one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the “Contours” software program, and auditory-only training was completed with the “AngelSound” software program. Main Outcome Measures: Pre- and posttest examinations included tests of speech perception (consonant–nucleus–consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. Results: Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups on the melodic contour identification task (p < 0.05). No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. Conclusions: These data suggest that short-term auditory-motor music training improves pitch pattern recognition in CI users. This study offers approaches for enriching the world of complex sound for the CI user.
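The melodic contour identification task referenced above asks listeners to label the shape of a short pitch sequence. The abstract does not specify the stimulus parameters used by the Contours or AngelSound software, so the sketch below is only a minimal, hypothetical illustration of how such contour stimuli are commonly constructed, with the semitone spacing between notes acting as a difficulty parameter (smaller spacings are harder to identify).

```python
import numpy as np

def contour_frequencies(shape, root_hz=220.0, step_semitones=2.0, n_notes=5):
    """Return the note frequencies (Hz) of a hypothetical 5-note melodic contour.

    shape          : "rising", "falling", or "flat" (illustrative subset of shapes).
    root_hz        : frequency of the starting note.
    step_semitones : spacing between successive notes; smaller values make
                     the identification task harder.
    """
    if shape == "rising":
        steps = np.arange(n_notes)       # 0, 1, 2, 3, 4 steps upward
    elif shape == "falling":
        steps = -np.arange(n_notes)      # 0, -1, -2, -3, -4 steps downward
    elif shape == "flat":
        steps = np.zeros(n_notes)        # constant pitch
    else:
        raise ValueError(f"unknown contour shape: {shape}")
    # Equal temperament: each semitone multiplies frequency by 2**(1/12).
    return root_hz * 2.0 ** (steps * step_semitones / 12.0)

# Example: a rising contour starting at A3 with 2-semitone steps.
print(contour_frequencies("rising"))  # ~[220.0, 246.9, 277.2, 311.1, 349.2] Hz
```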
IMPORTANCE Cochlear implant users generally display poor pitch perception. Flat-panel computed tomography (FPCT) has recently emerged as a modality capable of localizing individual electrode contacts within the cochlea in vivo. Significant place-pitch mismatch between the clinical implant processing settings given to patients and the theoretical maps based on FPCT imaging has previously been noted. OBJECTIVE To assess whether place-pitch mismatch is associated with poor cochlear implant-mediated pitch perception by evaluating the effect of an individualized, image-guided approach to cochlear implant programming on speech and music perception among cochlear implant users. DESIGN, SETTING, AND PARTICIPANTS A prospective cohort study of 17 cochlear implant users with MED-EL electrode arrays was performed at a tertiary referral center. The study was conducted from June 2016 to July 2017. INTERVENTIONS Theoretical place-pitch maps were developed using FPCT secondary reconstructions and 3-dimensional curved planar reformation software. The clinical map settings (eg, strategy, rate, volume, frequency band range) were modified to keep these factors constant between the 2 maps and minimize confounding. The acclimation period to the maps was 30 minutes. MAIN OUTCOMES AND MEASURES Participants performed speech perception tasks (eg, consonant-nucleus-consonant, Bamford-Kowal-Bench Speech-in-Noise, vowel identification) and a pitch-scaling task while using the image-guided place-pitch map (intervention) and the modified clinical map (control). Performance scores for the 2 maps were compared. RESULTS Of the 17 participants, 10 (58.8%) were women; mean (SD) age was 59 (11.3) years. A significant median increase in pitch-scaling accuracy was noted when using the experimental map compared with the control map (4 more correct answers; 95% CI, 0-8). Specifically, the number of pitch-scaling reversals for notes spaced 1.65 semitones or more apart decreased when the image-based map was used rather than the modified clinical map (a decrease of 4 errors; 95% CI, 0.5-7). Although there was no observable median improvement in speech perception during use of the image-based map, the acute changes in frequency allocation and electrode channel deactivations used with the image-guided maps did not worsen median performance on consonant-nucleus-consonant (−1% correct phonemes; 95% CI, −2.5% to 6%) or Bamford-Kowal-Bench Speech-in-Noise (0.5-dB difference; 95% CI, −0.75 to 2.25 dB) testing relative to the patients' clinical maps. CONCLUSIONS AND RELEVANCE An image-based approach to cochlear implant mapping may improve pitch perception outcomes by reducing place-pitch mismatch. Studies using a longer acclimation period with chronic stimulation over months may help assess the full range of benefits associated with personalized image-guided cochlear implant mapping.
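The abstract does not state the formula behind the theoretical place-pitch maps; such maps are typically derived by applying the Greenwood place-frequency function to each electrode contact's imaged position along the cochlea. The Python sketch below illustrates that mapping using standard human-cochlea parameters (an assumption, since the study's exact reconstruction procedure is not given), and also converts the 1.65-semitone spacing reported in the results into a frequency ratio.

```python
def greenwood_hz(x, A=165.4, a=2.1, k=0.88):
    """Greenwood place-frequency function with standard human-cochlea constants.

    x : fractional distance from apex (0.0) to base (1.0) along the basilar
        membrane, e.g. estimated from the imaged position of an electrode contact.
    Returns the characteristic frequency (Hz) at that cochlear place.
    """
    return A * (10.0 ** (a * x) - k)

# Example: characteristic frequencies at a few illustrative electrode places.
for frac in (0.2, 0.5, 0.8):
    print(f"x = {frac:.1f} -> {greenwood_hz(frac):8.0f} Hz")

# Frequency ratio for notes spaced 1.65 semitones apart (equal temperament):
ratio = 2.0 ** (1.65 / 12.0)
print(f"1.65 semitones is roughly a {100 * (ratio - 1):.0f}% change in frequency")  # ~10%
```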
Objectives: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue is particularly important for children with CIs, yet little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectories known in this population prompted the authors to question the extent to which exaggerated prosody facilitates performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. Design: Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users (n = 27), aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as their primary language, participated in the experiment. Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by a male and a female talker in a CDS or ADS manner, in each of five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent-correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation in the ADS condition would result in lower vocal emotion recognition scores than in the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child’s hearing history might predict performance on vocal emotion recognition. Results: Consistent with the hypothesis, pediatric CI users scored higher with CDS than with ADS stimuli, suggesting that speaking with exaggerated prosody, akin to “motherese,” may be a viable way to convey emotional content. Significant talker effects were also observed: scores were higher for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed that results depended on the specific emotion: for the female talker in the CDS condition, participants showed high sensitivity (d’ scores) to happy sentences and low sensitivity to neutral sentences, whereas in the ADS condition, sensitivity was low for scared sentences. Conclusions: In general, participants recognized vocal emotion more accurately in the CDS condition, which had greater variability in pitch and intensity and thus more exaggerated prosody, than in the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, and with adult-directed speech in particular.
The authors believe these results have broad implications for understanding how CI users perceive emotions, from both an auditory-communication standpoint and a socio-developmental perspective.
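The abstract above reports sensitivity as d’ scores derived from confusion matrices but does not detail the computation. A conventional approach treats each emotion as the signal category in turn and takes d’ as the difference between the z-transformed hit rate and false-alarm rate; the Python sketch below shows that calculation on a made-up confusion matrix used purely for illustration (the study's actual correction for extreme rates may differ).

```python
import numpy as np
from scipy.stats import norm

def dprime_per_category(confusion, eps=0.5):
    """Per-category d' from a confusion matrix.

    confusion[i, j] : number of trials with true category i labelled as j.
    eps             : small count added to every cell so hit and false-alarm
                      rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    c = confusion.astype(float) + eps
    n_cat = c.shape[0]
    dprimes = {}
    for i in range(n_cat):
        hit_rate = c[i, i] / c[i, :].sum()
        # False alarms: trials from other categories labelled as category i.
        false_alarms = c[:, i].sum() - c[i, i]
        fa_rate = false_alarms / (c.sum() - c[i, :].sum())
        dprimes[i] = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    return dprimes

# Hypothetical 5-emotion confusion matrix (rows/columns: happy, sad, neutral, scared, angry).
conf = np.array([[10, 0, 1, 0, 1],
                 [ 1, 8, 2, 1, 0],
                 [ 2, 1, 7, 1, 1],
                 [ 1, 2, 2, 5, 2],
                 [ 1, 0, 1, 1, 9]])
print(dprime_per_category(conf))
```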