Background: Children with autism spectrum conditions (ASC) show emotion recognition (ER) deficits when tested in different expression modalities (face, voice, body). However, these findings usually focus on basic emotions, using one or two expression modalities. In addition, cultural similarities and differences in ER patterns in children with ASC have not been explored before. The current study examined similarities and differences in the recognition of basic and complex emotions by children with ASC and typically developing (TD) controls across three cultures: Israel, Britain, and Sweden.

Methods: Fifty-five children with high-functioning ASC, aged 5–9, were compared to 58 TD children. At each site, groups were matched on age, sex, and IQ. Children were tested using four tasks examining recognition of basic and complex emotions from voice recordings, videos of facial and bodily expressions, and emotional video scenarios including all modalities in context.

Results: Compared to their TD peers, children with ASC showed ER deficits for both basic and complex emotions in all three modalities and in their integration in context. Complex emotions were harder to recognize than basic emotions for the entire sample. Cross-cultural agreement was found for all major findings, with minor deviations on the face and body tasks.

Conclusions: Our findings highlight the multimodal nature of ER deficits in ASC, which exist for basic as well as complex emotions and are relatively stable cross-culturally. Cross-cultural research has the potential to reveal both autism-specific universal deficits and the role that specific cultures play in the way empathy operates in different countries.
Children with autism spectrum conditions (ASC) experience difficulties recognizing others' emotions and mental states. It has been shown that serious games (SG) can produce simplified versions of the socio-emotional world. The current study performed a cross-cultural evaluation (in the UK, Israel, and Sweden) of Emotiplay's SG, a system designed to teach emotion recognition (ER) to children with ASC in an entertaining and intrinsically motivating way. Participants were 6–9-year-olds with high-functioning ASC who used the SG for 8–12 weeks. Measures included face, voice, body, and integrative ER tasks, as well as parent-reported levels of autism symptoms and adaptive socialization. In the UK, 15 children were tested before and after using the SG. In Israel (n = 38) and Sweden (n = 36), children were randomized into an SG or a waiting-list control group. In the UK, results revealed that 8 weeks of SG use significantly improved participants' performance on ER body-language and integrative tasks. Parents also reported that their children's adaptive socialization improved. In Israel and Sweden, participants using the SG improved significantly more than controls on all ER measures. In addition, parents in the Israeli SG group reported that their children showed reduced autism symptoms after using the SG. In conclusion, Emotiplay's SG is an effective and motivating psycho-educational intervention, cross-culturally teaching ER from faces, voices, body language, and their integration in context to children with high-functioning ASC. Local evidence was found for more generalized gains in socialization and reduced autism symptoms.
Numerical and spatial representations are tightly linked; e.g., when making binary classification judgments on Arabic digits, participants are faster to respond with their left/right hand to small/large numbers, respectively (Spatial-Numerical Association of Response Codes, SNARC effect; Dehaene et al. in J Exp Psychol Gen 122:371–396, 1993). To understand the mechanisms underlying the well-established SNARC effect, it seems essential to explore the considerable inter-individual variability characterizing it. The present study assesses the respective roles of inhibition, age, working memory (WM), and response speed. Whereas these non-numerical factors have been proposed as potentially important in explaining individual differences in SNARC effects, none (except response speed) has so far been explored directly. Confirming our hypotheses, the results show that the SNARC effect was stronger in participants who had weaker inhibition abilities (as assessed by the Stroop task), were relatively older, and had longer response times. Interestingly, whereas a significant part of the age influence was mediated by cognitive inhibition, age also directly impacted the SNARC effect. Similarly, cognitive inhibition abilities explained inter-individual variability in number-space associations over and above the factors age, WM capacity, and response speed. Taken together, our results provide new insights into the nature of number-space associations by describing how these are influenced by the non-numerical factors age and inhibition.
The EU-Emotion Stimulus Set is a newly developed collection of dynamic multimodal emotion and mental state representations. A total of 20 emotions and mental states are represented through facial expressions, vocal expressions, body gestures and contextual social scenes. This emotion set is portrayed by a multi-ethnic group of child and adult actors. Here we present the validation results, as well as participant ratings of the emotional valence, arousal and intensity of the visual stimuli from this emotion stimulus set. The EU-Emotion Stimulus Set is available for use by the scientific community and the validation data are provided as a supplement available for download.
In this study, we report the validation results of the EU-Emotion Voice Database, an emotional voice database available for scientific use, containing a total of 2,159 validated emotional voice stimuli. The EU-Emotion voice stimuli consist of audio-recordings of 54 actors, each uttering sentences with the intention of conveying 20 different emotional states (plus neutral). The database is organized in three separate emotional voice stimulus sets in three different languages (British English, Swedish, and Hebrew). These three sets were independently validated by large pools of participants in the UK, Sweden, and Israel. Participants' validation of the stimuli included emotion categorization accuracy and ratings of emotional valence, intensity, and arousal. Here we report the validation results for the emotional voice stimuli from each site and provide validation data to download as a supplement, so as to make these data available to the scientific community. The EU-Emotion Voice Database is part of the EU-Emotion Stimulus Set, which in addition contains stimuli of emotions expressed in the visual modality (by facial expression, body language, and social scene) and is freely available to use for academic research purposes.