This paper presents FILTWAM (Framework for Improving Learning Through Webcams And Microphones), a framework for real-time emotion recognition in e-learning using webcams. FILTWAM offers timely and relevant feedback based upon learners' facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and tested in a proof-of-concept study. The main goal of this study was to validate the use of webcam data for a real-time and adequate interpretation of facial expressions into extracted emotional states. The software was calibrated with 10 test persons. They received the same computer-based tasks, in which each of them was requested 100 times to mimic specific facial expressions. All sessions were recorded on video. For the validation of the facial emotion recognition software, two experts annotated and rated participants' recorded behaviours. Expert findings were contrasted with the software results and showed an overall kappa value of 0.77. The overall accuracy of our software, based on comparing the requested and the recognized emotions, is 72%. Whereas existing software allows only non-real-time, discontinuous, and obtrusive facial detection, our software continuously and unobtrusively monitors learners' behaviours and converts them directly into emotional states. This paves the way for enhancing the quality and efficacy of e-learning by taking learners' emotional states into account.
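As a simple illustration of the evaluation reported above (and in the voice study that follows), the agreement between expert annotations and software output can be summarized with Cohen's kappa and raw accuracy. The sketch below is a minimal Python example with hypothetical label lists; it is not the authors' evaluation code.

```python
# Minimal sketch: comparing software-recognized emotions against expert
# annotations using Cohen's kappa and overall accuracy (hypothetical data).
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Hypothetical parallel label lists for the same recorded episodes.
expert_labels   = ["happy", "sad", "angry", "surprise", "neutral", "happy"]
software_labels = ["happy", "sad", "angry", "neutral",  "neutral", "happy"]

kappa = cohen_kappa_score(expert_labels, software_labels)   # chance-corrected agreement
accuracy = accuracy_score(expert_labels, software_labels)   # raw proportion of matches

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Accuracy:      {accuracy:.2%}")
```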
This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learners' vocal intonations and facial expressions in order to foster their learning. Whereas the facial emotion recognition part was successfully tested in a previous study, the study presented here describes the development and testing of FILTWAM's vocal emotion recognition software artefact. The main goal of this study was to show the valid use of computer microphone data for real-time and adequate interpretation of vocal intonations into extracted emotional states. The software was tested in a study with 12 participants. All participants individually received the same computer-based tasks, in which they were requested 80 times to mimic specific vocal expressions (960 occurrences in total). Each individual session was recorded on video. For the validation of the voice emotion recognition software artefact, two experts annotated and rated participants' recorded behaviours. Expert findings were then compared with the software recognition results and showed an overall kappa of 0.743. The overall accuracy of the voice emotion recognition software artefact, based on comparing the requested and the recognized emotions, is 67%. Our FILTWAM software continually and unobtrusively observes learners' behaviours and transforms these behaviours into emotional states. This paves the way for unobtrusive, real-time capturing of learners' emotional states to enhance adaptive e-learning approaches.
This article provides a comprehensive overview of artificial intelligence (AI) for serious games. Reporting on the work of a European flagship project on serious game technologies, it presents a set of advanced game AI components that enable pedagogical affordances and that can be easily reused across a wide diversity of game engines and game platforms. The serious game AI functionalities include player modelling (real-time facial emotion recognition, automated difficulty adaptation, stealth assessment), natural language processing (sentiment analysis and essay scoring on free texts), and believable non-playing characters (emotional and socio-cultural behaviour, non-verbal bodily motion, and lip-synchronised speech). The reuse of these components enables game developers to develop high-quality serious games at reduced cost and in shorter periods of time. All components are open source software and can be freely downloaded from the newly launched portal at gamecomponents.eu. The components come with detailed installation manuals and tutorial videos. All components have been applied and validated in serious games that were tested with real end-users.
This paper presents our newly developed software for emotion recognition from facial expressions. Besides allowing emotion recognition from image files and recorded video files, it uses webcam data to provide real-time, continuous, and unobtrusive recognition of facial emotional expressions. It uses the FURIA algorithm for unordered fuzzy rule induction to offer timely and appropriate feedback based on learners' facial expressions. The main objective of this study was, first, to validate the use of webcam data for a real-time and accurate analysis of facial expressions in e-learning environments and, second, to transform these facial expressions into detected emotional states using the FURIA algorithm. We measured the performance of the software with ten participants: we provided them with the same computer-based tasks, requested them a hundred times to mimic specific facial expressions, and recorded all sessions on video. We used the recorded video files to feed our newly developed software. We then had two experts annotate and rate participants' recorded behaviours in order to validate the software's results. The software provides accurate and reliable results, with an overall accuracy of 83.2%, which is comparable to recognition by humans. This study will help to increase the quality of e-learning.
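To make the classification step above more concrete: FURIA is distributed as a WEKA package rather than a scikit-learn estimator, so the minimal Python sketch below substitutes a decision tree as a stand-in classifier to show the general pipeline of mapping facial-feature vectors to emotion labels. The feature layout and data are hypothetical and not taken from the paper.

```python
# Sketch of a facial-expression classification pipeline. FURIA (fuzzy
# unordered rule induction) is a WEKA package; a decision tree stands in
# for it here so the example runs with scikit-learn alone.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical facial-feature vectors (e.g. per-frame action-unit intensities)
# and the emotion label that was requested in the corresponding task.
X = rng.random((100, 12))                                        # 100 frames, 12 features
y = rng.choice(["happy", "sad", "angry", "surprise", "neutral"], size=100)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                        # 5-fold cross-validation
print(f"Mean CV accuracy: {scores.mean():.2%}")
```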
This article describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, and unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims at deploying a real-time multimodal emotion recognition method for providing more adequate feedback to learners during online communication skills training. Such training requires timely feedback that reflects the intended emotions learners show and that increases their awareness of their own behavior. At minimum, a reliable and valid software interpretation of performed facial and vocal emotions is needed to warrant such adequate feedback. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behavior was recorded on video, and two raters independently scored the shown emotions, which were contrasted with the software recognition outcomes. A hybrid method for multimodal fusion in our multimodal software shows an accuracy between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.
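One common ingredient of such a hybrid multimodal scheme is feature-level (early) fusion, in which face and voice feature vectors are concatenated before classification. The sketch below illustrates this idea in Python with hypothetical data and a random-forest classifier; it is not the authors' implementation, which reports results for WEKA classifiers.

```python
# Sketch of feature-level (early) fusion of face and voice features.
# Data, feature dimensions, and the classifier choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n = 200
face_features  = rng.random((n, 12))   # e.g. facial action-unit intensities per clip
voice_features = rng.random((n, 8))    # e.g. pitch/energy statistics per clip
labels = rng.choice(["happy", "sad", "angry", "neutral"], size=n)

# Early fusion: concatenate the modality-specific feature vectors per sample.
fused = np.hstack([face_features, voice_features])

clf = RandomForestClassifier(n_estimators=100, random_state=1)
print(f"Fused CV accuracy: {cross_val_score(clf, fused, labels, cv=5).mean():.2%}")
```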