Training methods such as card teaching, assistive technologies (e.g., augmented reality/virtual reality games and smartphone apps), DVDs, human-computer interaction, and human-robot interaction have been widely applied to autism rehabilitation training in recent years. In this article, we propose a novel framework for human-computer/robot interaction and present a preliminary intervention study aimed at improving emotion recognition in Chinese children with autism spectrum disorder (ASD). The core of the framework is the Facial Emotion Cognition and Training System (FECTS), which comprises six tasks that train children with ASD to match, infer, and imitate the facial expressions of happiness, sadness, fear, and anger, and is grounded in Simon Baron-Cohen's empathizing-systemizing (E-S) theory. The system can be implemented on PCs, smartphones, mobile devices such as tablets, and robots. Training records (e.g., tracked records of emotion imitation) produced while the children interact with a device running FECTS are uploaded to and stored in the database of a cloud-based evaluation system, through which therapists and parents can review analyses of the children's progress in emotion learning. Deep-learning algorithms for facial expression recognition and attention analysis are deployed in the back end (e.g., a PC, a robotic system, or a cloud system) hosting FECTS, enabling real-time tracking of the children's imitation quality and attention during the expression imitation phase. In this preliminary clinical study, 10 Chinese children with ASD aged 3–8 were recruited, and each received a single 20-minute training session per day for four consecutive days. The preliminary results validate the feasibility of the developed FECTS and the effectiveness of our algorithms for Chinese children with ASD. To verify that FECTS can be adapted to children from other countries, children from different cultural, sociological, and linguistic backgrounds should be recruited in future studies.
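The abstract does not detail how the back-end imitation and attention tracking is implemented. Purely as an illustrative sketch, the Python fragment below shows one way such real-time scoring could be assembled with OpenCV; `classify_expression` is a hypothetical stand-in for a trained expression classifier, and the face-visibility attention proxy is an assumption rather than the authors' method.

```python
# Illustrative sketch of real-time imitation scoring; NOT the FECTS implementation.
# Assumes OpenCV is installed; classify_expression is a stub for a trained model.
import cv2

TARGET = "happiness"  # expression the child is asked to imitate in this task


def classify_expression(face_bgr):
    """Stub for a deep-learning expression classifier (assumption, not the FECTS model).

    A real model would return class probabilities over the four trained
    expressions; this stub returns a uniform distribution so the sketch runs.
    """
    return {"happiness": 0.25, "sadness": 0.25, "fear": 0.25, "anger": 0.25}


def run_session(camera_index=0):
    """Score one imitation session from a webcam stream; press 'q' to stop."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)
    scores, attention_frames, total_frames = [], 0, 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        total_frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) > 0:
            attention_frames += 1  # crude attention proxy: a face is visible
            x, y, w, h = faces[0]
            probs = classify_expression(frame[y:y + h, x:x + w])
            scores.append(probs.get(TARGET, 0.0))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    return {
        "imitation_quality": sum(scores) / len(scores) if scores else 0.0,
        "attention_ratio": attention_frames / total_frames if total_frames else 0.0,
    }
```

In a system along the lines described in the abstract, a per-session summary like the one returned by `run_session` would be the sort of record uploaded to the cloud-based evaluation system.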
In the field of human-robot interaction, socially interactive robots are often equipped with the ability to detect the affective states of users, the ability to express emotions through synthetic facial expressions, speech, and textual content, and the ability to imitate and engage in social learning. Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions and on assessing people's ability to recognize them. Previous recognition studies presented the facial expressions of the robots in neutral contexts, without any strongly emotionally valenced surrounding material (e.g., emotionally valenced music or video). It is therefore worth empirically exploring whether observers' judgments of a robot's facial cues are affected by a surrounding emotional context. This thesis takes its inspiration from the contextual effects found in the interpretation of expressions on human faces and computer avatars, and examines the extent to which they also apply to the interpretation of the facial expressions of a mechanical robot head. The kinds of contexts that affect the recognition of robot emotional expressions, the circumstances under which such contextual effects occur, and the relationship between emotions and the surrounding situation are observed and analyzed in a series of 11 experiments. In these experiments, the FACS (Facial Action Coding System) (Ekman and Friesen, 2002) was applied to set the parameters of the servos so that the robot head produced sequences of facial expressions. Four different surrounding or preceding emotional contexts were used (i.e., recorded BBC News pieces, selected affective pictures, classical music pieces, and film clips). This thesis provides evidence that observers' judgments about the facial expressions of a robot can be affected by a surrounding emotional context. From a psychological perspective, the contextual effects found on the FACS-based robotic facial expressions indirectly support the claims that human emotions are both biologically based and socially constructed. From a robotics perspective, it is argued that the results of the analyses will be useful for guiding researchers in enhancing the expressive skills of emotional robots within a surrounding emotional context. This thesis also analyzes the possible factors contributing to the contextual effects found in the original 11 experiments. Some future work, including four new experiments (a preliminary experiment designed to identify appropriate contextual materials and three further experiments in which factors likely to affect a context effect are controlled one by one), is also proposed.
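The abstract does not reproduce the servo configuration derived from FACS. As a minimal sketch only, the Python fragment below illustrates the general idea of mapping FACS Action Unit intensities onto servo target angles; the AU profiles, servo names, and angle ranges are illustrative assumptions, not the parameters used in the thesis experiments.

```python
# Illustrative sketch of driving a robot head from FACS Action Units.
# AU profiles, servo names, and angle ranges are assumptions for illustration,
# not the configuration used in the thesis.

# Simplified AU intensity profiles (0.0 = relaxed, 1.0 = full intensity).
# AU numbers follow FACS: 1 = inner brow raiser, 4 = brow lowerer,
# 12 = lip corner puller, 15 = lip corner depressor, 26 = jaw drop.
EXPRESSION_AUS = {
    "happiness": {12: 1.0, 26: 0.3},
    "sadness": {1: 0.8, 4: 0.5, 15: 1.0},
}

# Map each AU to (servo_name, angle_at_zero_intensity, angle_at_full_intensity).
AU_TO_SERVO = {
    1: ("brow_inner", 90, 60),
    4: ("brow_outer", 90, 110),
    12: ("mouth_corner", 90, 140),
    15: ("mouth_corner", 90, 50),
    26: ("jaw", 90, 120),
}


def servo_targets(expression: str) -> dict:
    """Convert an expression's AU profile into per-servo target angles (degrees)."""
    targets = {}
    for au, intensity in EXPRESSION_AUS[expression].items():
        servo, relaxed, full = AU_TO_SERVO[au]
        targets[servo] = relaxed + intensity * (full - relaxed)
    return targets


if __name__ == "__main__":
    for expr in EXPRESSION_AUS:
        print(expr, servo_targets(expr))
```

Interpolating each servo between a relaxed and a full-intensity angle is one simple way to turn discrete AU codes into the graded expression sequences the experiments required; a real robot head would also need per-servo limits and smoothing between frames.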