Autism Spectrum Disorder (ASD) is characterized by difficulty in expressing one's own emotions and in interpreting those of others. In particular, people with ASD have difficulty interpreting emotions encoded in facial expressions. Music interventions have previously been shown to improve autistic individuals' emotional and social skills. The present work describes a pilot study exploring the usefulness of music as a tool for improving autistic children's recognition of emotions in facial expressions. Twenty-five children (mean age = 8.8 years, SD = 1.24) with high-functioning ASD and normal hearing took part in the study, which consisted of four weekly sessions of 15 min each. Participants were randomly divided into an experimental group (N = 14) and a control group (N = 11). During each session, participants in the experimental group were exposed to images of facial expressions for four emotions (happiness, sadness, anger, and fear). Images were shown in three conditions, with the second condition accompanied by music emotionally congruent with the images presented. Participants in the control group were shown only the images in all three conditions. For six participants in each group, EEG data were acquired during the sessions, and instantaneous emotional responses (arousal and valence values) were extracted from these data. Inter- and intra-session improvement in emotion identification was measured in terms of verbal response accuracy, and differences in EEG responses were analyzed. A comparison of the experimental group's verbal responses pre- and post-intervention showed a significant (p = 0.001) average improvement of 26% (SD = 3.4) in emotion identification accuracy. Furthermore, the experimental group's emotional responses at the end of the study were more strongly correlated with the emotional stimuli being presented than were their responses at the beginning of the study. No comparable improvement in verbal responses or EEG-stimulus correlation was found in the control group. These results suggest that music can be used to improve both emotion identification in facial expressions and emotion induction through facial stimuli in children with high-functioning ASD.
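The abstract does not specify how the arousal and valence values were derived from the EEG. The following is a minimal, hypothetical sketch of one common band-power heuristic (frontal alpha asymmetry as a valence proxy, beta/alpha ratio as an arousal proxy) and of correlating per-trial valence estimates with the valence of the presented stimuli; the channel names (F3, F4), sampling rate, epoch handling, and stimulus coding are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: estimate valence/arousal from EEG band power and
# correlate the estimates with the valence of the presented stimuli.
# Channel choice, sampling rate, and heuristics are assumptions for illustration.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

FS = 256  # assumed sampling rate (Hz)

def band_power(signal, fs, band):
    """Mean power of `signal` within the frequency band (low, high) in Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def valence_arousal(epoch_f3, epoch_f4, fs=FS):
    """Per-epoch heuristic estimates:
    - valence ~ frontal alpha asymmetry, log(alpha_F4) - log(alpha_F3), 8-13 Hz
    - arousal ~ beta/alpha power ratio averaged over F3 and F4
    """
    alpha_f3 = band_power(epoch_f3, fs, (8, 13))
    alpha_f4 = band_power(epoch_f4, fs, (8, 13))
    beta_f3 = band_power(epoch_f3, fs, (13, 30))
    beta_f4 = band_power(epoch_f4, fs, (13, 30))
    valence = np.log(alpha_f4) - np.log(alpha_f3)
    arousal = (beta_f3 + beta_f4) / (alpha_f3 + alpha_f4)
    return valence, arousal

def stimulus_correlation(trials, stimulus_valence):
    """Correlate per-trial valence estimates with the valence of the shown
    emotion (e.g., happiness coded +1; sadness, anger, fear coded -1).
    `trials` is a list of (f3_epoch, f4_epoch) signal pairs."""
    estimates = [valence_arousal(f3, f4)[0] for f3, f4 in trials]
    r, p = pearsonr(estimates, stimulus_valence)
    return r, p
```

Under this kind of analysis, an intervention effect would appear as a stronger stimulus-estimate correlation in post-intervention sessions than in pre-intervention sessions, which is the pattern the abstract reports for the experimental group.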