Research has shown that dyslexia and attention deficit (hyperactivity) disorder (AD(H)D) are characterized by specific neuroanatomical and neurofunctional differences in the auditory cortex. These neurofunctional characteristics in children with ADHD, ADD and dyslexia are linked to distinct differences in music perception. Group-specific differences in the musical performance of patients with ADHD, ADD and dyslexia have not yet been investigated in detail. We investigated the musical performance and neurophysiological correlates of 21 adolescents with dyslexia, 19 with ADHD, 28 with ADD and 28 age-matched, unaffected controls using a music performance assessment scale and magnetoencephalography (MEG). Musical experts independently assessed pitch and rhythmic accuracy, intonation, improvisation skills and musical expression. Compared to dyslexic adolescents, controls as well as adolescents with ADHD and ADD performed better in rhythmic reproduction, rhythmic improvisation and musical expression. Controls were significantly better in rhythmic reproduction than adolescents with ADD and scored higher in rhythmic and pitch improvisation than adolescents with ADHD. Adolescents with ADD and controls scored better in pitch reproduction than dyslexic adolescents. In pitch improvisation, the ADD group performed better than the ADHD group, and controls scored better than dyslexic adolescents. Discriminant analysis revealed that rhythmic improvisation and musical expression discriminate the dyslexic group from controls and from adolescents with ADHD and ADD. A second discriminant analysis based on MEG variables showed that absolute P1 latency asynchrony |R-L| distinguishes the control group from the disorder groups best, while P1 and N1 latencies averaged across hemispheres separate the control, ADD and ADHD groups from the dyslexic group. Furthermore, rhythmic improvisation was negatively correlated with auditory-evoked P1 and N1 latencies: the earlier the mean P1 and N1 latencies, the better the rhythmic improvisation. These findings provide novel insight into differences in music processing and performance between adolescents with and without neurodevelopmental disorders. A better understanding of these differences may help to develop tailored preventive or therapeutic interventions.
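To make the analysis concrete, below is a minimal sketch of how such a group discriminant analysis could be set up with scikit-learn, assuming per-subject features (e.g., rhythmic improvisation and musical expression scores, or the MEG latency measures) have already been tabulated. The dimensions, labels, and data are illustrative placeholders, not values from the study.

```python
# Hypothetical sketch of a linear discriminant analysis (LDA) over
# per-subject features, loosely mirroring the study's two analyses.
# Feature names, dimensions, and data are illustrative placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per adolescent, columns such as
# rhythmic improvisation score and musical expression score (analysis 1),
# or |R-L| P1 asynchrony and mean P1/N1 latencies (analysis 2).
X = rng.normal(size=(96, 3))        # 96 subjects, 3 features (made up)
y = rng.integers(0, 4, size=96)     # group labels: 0=control, 1=dyslexia,
                                    # 2=ADHD, 3=ADD

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Cross-validated check of how well the features separate the groups.
scores = cross_val_score(lda, X, y, cv=5)
print("mean CV accuracy:", scores.mean())

# Coefficients indicate which features drive the group separation,
# analogous to asking whether rhythmic improvisation or P1 latency
# discriminates the groups best.
print("discriminant coefficients:\n", lda.coef_)
```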
Research on the neural correlates of intentional emotion communication by the music performer is still limited. In this study, we evaluated EEG patterns recorded from musicians who were instructed to perform a simple piano score while manipulating their manner of playing to express specific contrasting emotions, and who then self-rated the emotion they conveyed on arousal and valence scales. In the emotional playing task, participants were instructed to improvise variations in a manner that communicates the targeted emotion. In contrast, in the neutral playing task, participants were asked to play the same piece precisely as written, to provide a control for general patterns of motor and sensory activation during playing. Spectral analysis of the signal was applied as an initial step in order to connect the findings to the wider field of music-emotion research. The experimental contrast of emotional vs. neutral playing was employed to probe brain activity patterns differentially involved in distinct emotional states. The emotional and neutral playing tasks differed considerably in the arousal and valence levels of the emotions intended to be conveyed. Differences in EEG activity were observed between distressed/excited and neutral/depressed/relaxed playing.
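As an illustration of this first step, the sketch below computes band-averaged power from a single EEG channel with SciPy's Welch estimator; the sampling rate, band edges, and random data are assumptions for demonstration, not the study's parameters.

```python
# Hypothetical sketch of band-wise spectral analysis of one EEG channel,
# of the kind used as an initial step above. Sampling rate, band edges,
# and the random data are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                      # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

eeg = np.random.randn(60 * FS)        # placeholder: 60 s of one channel

freqs, psd = welch(eeg, fs=FS, nperseg=2 * FS)

# Average power within each canonical band; contrasting these values
# between emotional and neutral playing is the basic spectral contrast.
for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: {psd[mask].mean():.4f}")
```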
The neural correlates of intentional emotion transfer by the music performer are not well investigated, as present-day research focuses mainly on the assessment of emotions evoked by music. In this study, we aimed to determine whether EEG connectivity patterns can reflect differences in information exchange during emotional playing. EEG data were recorded while subjects performed a simple piano score with contrasting emotional intentions and then rated the subjectively experienced success of the emotion transfer. Brain connectivity patterns were assessed from the EEG data using the Granger Causality approach. Effective connectivity was analyzed in different frequency bands: delta, theta, alpha, beta, and gamma. Features that (1) discriminated between the neutral baseline and emotional playing and (2) were shared across conditions were used for further comparison. The low-frequency bands (delta, theta, alpha) showed only a limited number of connections (4 to 6) contributing to the discrimination between the emotional playing conditions. In contrast, the beta and gamma frequency ranges showed a dense pattern of between-region connections (30 to 38) that discriminated between conditions. The current study demonstrates that EEG-based connectivity in the beta and gamma frequency ranges can effectively reflect the state of the networks involved in emotional transfer through musical performance, whereas the utility of the low-frequency bands (delta, theta, alpha) remains questionable.
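The sketch below illustrates the general idea of band-specific Granger causality between two channels, using statsmodels' time-domain test on band-pass-filtered signals; the study's frequency-resolved GC estimation may differ, and all signal parameters here are assumptions for illustration.

```python
# Hypothetical sketch of pairwise Granger causality between two EEG
# channels within one frequency band, approximated here by band-pass
# filtering followed by a time-domain test. Band edges, filter order,
# lag, and data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import grangercausalitytests

FS = 256
rng = np.random.default_rng(1)
ch_a = rng.standard_normal(30 * FS)                           # channel A
ch_b = 0.5 * np.roll(ch_a, 5) + rng.standard_normal(30 * FS)  # lagged copy

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

# Restrict both signals to the beta band (13-30 Hz, assumed edges).
a_beta = bandpass(ch_a, 13, 30, FS)
b_beta = bandpass(ch_b, 13, 30, FS)

# Does channel A Granger-cause channel B? Column order: [effect, cause].
res = grangercausalitytests(np.column_stack([b_beta, a_beta]),
                            maxlag=10, verbose=False)
p_value = res[10][0]["ssr_ftest"][1]
print("A -> B Granger causality p-value (lag 10):", p_value)
```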
A Brain-Computer Music Interface (BCMI) system may be designed to harness electroencephalography (EEG) signals for control over musical outputs in the context of emotionally expressive performance. To develop a real-time BCMI system, accurate and computationally efficient emotional biomarkers must first be identified. In the current study, we evaluated the ability of various features to discriminate between emotions expressed during music performance, with the aim of developing a BCMI system. EEG data were recorded while subjects performed simple piano music with contrasting emotional cues and rated their success in communicating the intended emotion. Power spectra and connectivity features (Magnitude Square Coherence (MSC) and Granger Causality (GC)) were extracted from the signals. Two feature-selection approaches were used to assess the contribution of neutral baselines to detection accuracy: (1) using the baselines to normalize the features, and (2) disregarding them (non-normalized features). Finally, a Support Vector Machine (SVM) was used to evaluate and compare the capability of the various features for emotion detection. The best detection accuracies were obtained from the non-normalized MSC-based features: 85.57 ± 2.34%, 84.93 ± 1.67%, and 87.16 ± 0.55% for arousal, valence, and emotional condition, respectively, while the power-based features yielded the lowest accuracies. Both connectivity features showed acceptable accuracy while requiring short processing times and are thus potential candidates for the development of a real-time BCMI system.
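A minimal sketch of this kind of pipeline follows, assuming epoched multichannel EEG: magnitude squared coherence between channel pairs (via SciPy) as features, classified with scikit-learn's SVM. Channel count, epoch length, labels, and the frequency window are illustrative assumptions, not the study's exact configuration.

```python
# Hypothetical sketch of the MSC-feature + SVM pipeline described above:
# coherence between channel pairs as features, then an SVM classifier.
# Dimensions, labels, band choice, and data are illustrative assumptions.
from itertools import combinations
import numpy as np
from scipy.signal import coherence
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256
N_EPOCHS, N_CH, EPOCH_LEN = 80, 8, 4 * FS   # assumed dimensions
rng = np.random.default_rng(2)
epochs = rng.standard_normal((N_EPOCHS, N_CH, EPOCH_LEN))  # placeholder EEG
labels = rng.integers(0, 2, size=N_EPOCHS)  # e.g. high vs. low arousal

def msc_features(epoch, lo=8.0, hi=30.0):
    """Mean coherence in an assumed 8-30 Hz window for every channel pair."""
    feats = []
    for i, j in combinations(range(N_CH), 2):
        f, cxy = coherence(epoch[i], epoch[j], fs=FS, nperseg=FS)
        feats.append(cxy[(f >= lo) & (f <= hi)].mean())
    return feats

X = np.array([msc_features(e) for e in epochs])

# Non-normalized features (approach 2 above); approach 1 would first
# normalize each feature by its value in the neutral-baseline epochs.
clf = SVC(kernel="rbf", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```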