P2 and N1c components of the auditory evoked potential (AEP) have been shown to be sensitive to remodeling of the auditory cortex by training in pitch discrimination in nonmusician subjects. Here, we investigated whether these neuroplastic components of the AEP are enhanced in musicians in accordance with their musical training histories. Highly skilled violinists and pianists and nonmusician controls listened, under conditions of passive attention, to violin tones, piano tones, and pure tones matched in fundamental frequency to the musical tones. Compared with nonmusician controls, both musician groups showed larger N1c (latency, 138 msec) and P2 (latency, 185 msec) responses to all three types of tonal stimuli. As in training studies with nonmusicians, N1c enhancement was expressed preferentially in the right hemisphere, where auditory neurons may be specialized for processing of spectral pitch. Equivalent current dipoles fitted to the N1c and P2 field patterns localized to spatially differentiable regions of the secondary auditory cortex, in agreement with previous findings. These results suggest that the tuning properties of neurons are modified in distributed regions of the auditory cortex in accordance with the acoustic training history (musical or laboratory-based) of the subject. Enhanced P2 and N1c responses in musicians therefore need not be considered genetic or prenatal markers for musical skill.
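For readers unfamiliar with how component amplitudes and latencies like these are measured, the following is a minimal sketch of peak picking in fixed latency windows bracketing the reported N1c (~138 ms) and P2 (~185 ms) peaks. The window bounds, the synthetic `erp` waveform, and the `peak_in_window` helper are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Return (latency_ms, amplitude) of the most extreme point
    of the given polarity inside the window [t_min, t_max] ms."""
    mask = (times >= t_min) & (times <= t_max)
    seg = erp[mask]
    idx = np.argmax(seg) if polarity > 0 else np.argmin(seg)
    return times[mask][idx], seg[idx]

# Hypothetical grand-average waveform: 1 kHz sampling, -100..400 ms epoch.
times = np.arange(-100, 400)                 # ms
rng = np.random.default_rng(0)
erp = rng.normal(0, 0.2, times.size)         # placeholder data, microvolts

# Windows bracketing the latencies reported in the abstract (assumed widths).
n1c_lat, n1c_amp = peak_in_window(erp, times, 110, 170, polarity=-1)  # negative peak
p2_lat, p2_amp = peak_in_window(erp, times, 160, 220, polarity=+1)    # positive peak
print(f"N1c: {n1c_lat} ms, {n1c_amp:.2f} uV;  P2: {p2_lat} ms, {p2_amp:.2f} uV")
```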
Animals exposed to noise trauma show augmented synchronous neural activity in tonotopically reorganized primary auditory cortex consequent on hearing loss. Diminished intracortical inhibition in the reorganized region appears to enable synchronous network activity that develops when deafferented neurons begin to respond to input via their lateral connections. In humans with tinnitus accompanied by hearing loss, this process may generate a phantom sound that is perceived in accordance with the location of the affected neurons in the cortical place map. The neural synchrony hypothesis predicts that tinnitus spectra, and heretofore unmeasured "residual inhibition functions" that relate residual tinnitus suppression to the center frequency of masking sounds, should cover the region of hearing loss in the audiogram. We confirmed these predictions in two independent cohorts totaling 90 tinnitus subjects, using computer-based tools designed to assess the psychoacoustic properties of tinnitus. Tinnitus spectra and residual inhibition functions for depth and duration increased with the amount of threshold shift over the region of hearing impairment. Residual inhibition depth was shallower when the masking sounds that were used to induce residual inhibition showed decreased correspondence with the frequency spectrum and bandwidth of the tinnitus. These findings suggest that tinnitus and its suppression in residual inhibition depend on processes that span the region of hearing impairment and not on mechanisms that enhance cortical representations for sound frequencies at the audiometric edge. Hearing thresholds measured in age-matched control subjects without tinnitus implicated hearing loss as a factor in tinnitus, although elevated thresholds alone were not sufficient to cause tinnitus.
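As a toy illustration of the prediction tested here, that tinnitus spectra track threshold shift across the region of hearing loss rather than peaking at the audiometric edge, the sketch below correlates tinnitus-likeness ratings with threshold shift across a set of test frequencies. All values, frequencies, and variable names are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical test frequencies (Hz), threshold shifts (dB re: normal hearing),
# and tinnitus-likeness ratings (0-100) for one subject; values are invented.
freqs = np.array([500, 1000, 2000, 3000, 4000, 6000, 8000, 12000])
threshold_shift = np.array([5, 5, 10, 25, 40, 50, 55, 60])   # dB
likeness = np.array([2, 5, 10, 30, 55, 70, 80, 85])          # rating

# The neural synchrony account predicts ratings rise with threshold shift
# over the region of hearing impairment as a whole.
r, p = pearsonr(threshold_shift, likeness)
print(f"tinnitus spectrum vs. threshold shift: r = {r:.2f}, p = {p:.3f}")
```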
Several functional brain attributes reflecting neocortical activity have been found to be enhanced in musicians compared with non-musicians. These include the N1m evoked magnetic field, the P2 and right-hemispheric N1c auditory evoked potentials, and the source waveform of the magnetically recorded 40 Hz auditory steady-state response (SSR). We investigated whether these functional brain attributes, measured by EEG, are sensitive to neuroplastic remodeling in non-musician subjects. Adult non-musicians were trained for 15 sessions to discriminate small changes in the carrier frequency of 40 Hz amplitude-modulated pure tones. P2 and N1c auditory evoked potentials were separated from the SSR by signal processing and found to localize to spatially differentiable sources in the secondary auditory cortex (A2). Training enhanced the P2 bilaterally and the N1c in the right hemisphere, where auditory neurons may be specialized for processing of spectral information. The SSR localized to sources in the region of Heschl's gyrus in the primary auditory cortex (A1). The amplitude of the SSR (assessed by bivariate T² in 100 ms moving windows) was not augmented by training, although the phase of the response was modified for the trained stimuli. The P2 and N1c enhancements observed here, and reported previously in musicians, may reflect new tunings of A2 neurons whose establishment and expression are gated by input converging from other regions of the brain. The SSR localizing to A1 was more resistant to remodeling, suggesting that its amplitude enhancement in musicians may be an intrinsic marker for musical skill or an early experience effect.
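A minimal sketch of the windowed bivariate statistic named here: a one-sample Hotelling's T² computed on per-trial 40 Hz sine and cosine components within 100 ms analysis windows. The quadrature projection, window placement, and synthetic trials are assumptions made for illustration; only the 100 ms window length comes from the abstract.

```python
import numpy as np

def ssr_t2(trials, fs, f=40.0, win_start=0.0, win_len=0.1):
    """One-sample Hotelling's T^2 for the (cosine, sine) components of a
    steady-state response at frequency f within one analysis window.
    trials: array (n_trials, n_samples) of single-trial EEG."""
    n = int(win_len * fs)
    i0 = int(win_start * fs)
    t = np.arange(n) / fs
    seg = trials[:, i0:i0 + n]
    # Per-trial quadrature components at f (least-squares projection;
    # 100 ms holds an integer number of 40 Hz cycles, so the projection is clean).
    c = seg @ np.cos(2 * np.pi * f * t) * (2 / n)
    s = seg @ np.sin(2 * np.pi * f * t) * (2 / n)
    X = np.column_stack([c, s])                      # (n_trials, 2)
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return X.shape[0] * mean @ np.linalg.solve(cov, mean)  # T^2 vs. zero

# Hypothetical data: 200 trials, 500 ms at 1 kHz, with a weak 40 Hz response.
rng = np.random.default_rng(1)
fs, dur = 1000, 0.5
tt = np.arange(int(fs * dur)) / fs
trials = rng.normal(0, 1, (200, tt.size)) + 0.1 * np.sin(2 * np.pi * 40 * tt)
for start in (0.0, 0.1, 0.2, 0.3, 0.4):              # 100 ms moving windows
    print(f"window at {start:.1f} s: T2 = {ssr_t2(trials, fs, win_start=start):.1f}")
```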
The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players and thereby determine the magnitude and direction of information flow among them. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether the players could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present whether or not the players could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers' ratings of the "goodness" of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.

Keywords: leadership | joint action | music performance | body sway | Granger causality

Coordinating actions with others in time and space, or joint action, is essential for daily life. From opening a door for someone to conducting an orchestra, periods of attentional and physical synchrony are required to achieve a shared goal. Humans have been shaped by evolution to engage in a high level of social interaction, reflected in high perceptual sensitivity to communicative features in voices and faces, the ability to understand the thoughts and beliefs of others, sensitivity to joint attention, and the ability to coordinate goal-directed actions with others (1-3). The social importance of joint action is demonstrated by the finding that simply moving in synchrony with another increases interpersonal affiliation, trust, and/or cooperative behavior in infants and adults (e.g., refs. 4-9). The temporal predictability of music provides an ideal framework for achieving such synchronous movement, and it has been hypothesized that musical behavior evolved and remains adaptive today because it promotes cooperative social interaction and joint action (10-12). Indeed, music is used in important situations where the goal is for people to feel a social bond, such as at religious ceremonies, weddings, funerals, parties, sporting events, political rallies, and in the military...
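For concreteness, here is a minimal sketch of pairwise Granger causality between two sway time series, in the spirit of the analysis described above: compare the residual variance of an autoregressive model of one signal with and without lagged terms from the other. The model order, the synthetic leader/follower signals, and the log-variance-ratio measure are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.signal import lfilter

def granger_xy(x, y, p=5):
    """Magnitude of Granger causality from x to y: log ratio of residual
    variances of the restricted (y's own lags) vs. full (y's and x's lags)
    order-p autoregressive models."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    other = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    def rss(A):
        X = np.column_stack([np.ones(len(Y)), A])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    rss_r = rss(own)                       # restricted model
    rss_f = rss(np.hstack([own, other]))   # full model with x's lags added
    return np.log(rss_r / rss_f)           # > 0 means x helps predict y

# Hypothetical sway signals: the "leader" drives the "follower" at a 3-sample lag.
rng = np.random.default_rng(2)
leader = lfilter([1.0], [1.0, -0.9], rng.normal(0, 1, 2000))  # smooth AR(1) "sway"
follower = np.concatenate([np.zeros(3), leader[:-3]]) + rng.normal(0, 0.5, 2000)
print("leader -> follower:", granger_xy(leader, follower))    # large
print("follower -> leader:", granger_xy(follower, leader))    # near zero
```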
Objective: We explored the relationship between audiogram shape and tinnitus pitch to answer questions arising from neurophysiological models of tinnitus: ‘Is the dominant tinnitus pitch associated with the edge of hearing loss?’ and ‘Is such a relationship more robust in people with a narrow tinnitus bandwidth or steeply sloping hearing loss?’ Design: A broken-stick fitting objectively quantified the slope, degree, and edge of hearing loss up to 16 kHz. Tinnitus pitch was characterized up to 12 kHz. We used correlation and multiple regression analyses to examine relationships with many potentially predictive audiometric variables. Study Sample: 67 people with chronic bilateral tinnitus (43 men and 24 women, aged 22 to 81 years). Results: In this sample of 67 subjects, correlation failed to reveal any relationship between tinnitus pitch and edge frequency. Tinnitus pitch generally fell within the area of hearing loss. In a subset of subjects with a narrow tinnitus bandwidth (n = 23), however, tinnitus pitch was associated with the audiometric edge. Conclusions: Our findings suggest that a narrow tinnitus bandwidth can be used as an a priori inclusion criterion; a large group of such subjects should be tested to confirm these results.
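A minimal sketch of a broken-stick (piecewise-linear) audiogram fit of the kind described: a flat baseline below an edge frequency and a linear slope in log-frequency above it, with the edge chosen by grid search over candidate breakpoints. The estimator and the example audiogram are invented for illustration.

```python
import numpy as np

def broken_stick_fit(logf, thresh):
    """Fit thresh = b0 below the edge and b0 + slope*(logf - edge) above it.
    Grid search over candidate edges; least squares for baseline and slope."""
    best = None
    for edge in np.linspace(logf[1], logf[-2], 200):
        # Hinge design matrix: intercept plus a ramp that is zero below the edge.
        X = np.column_stack([np.ones_like(logf), np.clip(logf - edge, 0, None)])
        beta, *_ = np.linalg.lstsq(X, thresh, rcond=None)
        sse = np.sum((thresh - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, edge, beta)
    return best  # (sse, edge in log2(Hz), [baseline dB, slope dB/octave])

# Hypothetical audiogram: near-normal thresholds up to ~2 kHz, sloping loss above.
freqs = np.array([250, 500, 1000, 2000, 3000, 4000, 6000, 8000, 12000, 16000])
thresh = np.array([10, 10, 10, 15, 30, 45, 55, 60, 70, 75], dtype=float)  # dB HL
sse, edge, beta = broken_stick_fit(np.log2(freqs), thresh)
print(f"edge ~ {2**edge:.0f} Hz, slope ~ {beta[1]:.1f} dB/octave")
```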