Sentiment analysis can be used to study the emotions and attitudes of an individual or a group towards other people and entities such as products, services, or social events. With advances in deep learning, the enormous amount of information available on the internet, chiefly on social media, and powerful computing hardware, artificial intelligence (AI) systems are poised to enter every aspect of human life. In this paper, we propose a multimodal sentiment prediction system that analyzes the emotions predicted from different modalities, namely video, audio, and text, and integrates them to recognize the group emotion of students in a classroom. Our experimental setup uses a digital video camera with microphones to capture live video and audio feeds of the students during a lecture. The students are asked to provide digital feedback on the lecture as tweets sent from their Twitter accounts to the lecturer's official Twitter account. The audio and video frames are separated from the live video stream using tools such as LAME and FFmpeg, and the Twitter API is used to access and extract the students' messages from the Twitter platform. Audio and video features are extracted using Mel-Frequency Cepstral Coefficients (MFCC) and a Haar cascade classifier, respectively. The video features are passed to a Convolutional Neural Network (CNN) trained on the FER2013 facial image database to generate the feature vector for classifying video-based emotions, while a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM), trained on a speech emotion corpus, classifies the audio features. For the tweet texts, we combine a lexicon-based approach using a senti-word dictionary with a learning-based approach in which a Support Vector Machine (SVM) is trained on a custom dataset.
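As a minimal sketch of the media-separation step, the following hypothetical helper builds FFmpeg invocations that could demux a lecture recording into a mono WAV track (for MFCC extraction) and still frames (for the Haar cascade detector). The flag choices here (16 kHz sample rate, one frame per second) are illustrative assumptions, not the paper's exact settings:

```python
def demux_commands(recording, audio_out="lecture.wav", frame_pattern="frame_%04d.png"):
    """Build FFmpeg invocations that split a lecture recording into
    an audio track and a sequence of still frames."""
    audio_cmd = [
        "ffmpeg", "-i", recording,
        "-vn",                      # drop the video stream
        "-acodec", "pcm_s16le",     # uncompressed 16-bit PCM
        "-ar", "16000",             # 16 kHz is a common speech-analysis rate
        "-ac", "1",                 # mono
        audio_out,
    ]
    frame_cmd = [
        "ffmpeg", "-i", recording,
        "-vf", "fps=1",             # one frame per second for emotion sampling
        frame_pattern,
    ]
    return audio_cmd, frame_cmd

audio_cmd, frame_cmd = demux_commands("lecture.mp4")
# Each command can then be executed with subprocess.run(cmd, check=True).
```

Building the argument lists separately from executing them keeps the demux step easy to test without FFmpeg installed.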
A decision-level fusion algorithm is then applied to these three modalities to integrate the classification results and deduce the overall group emotion of the students. Use cases of the proposed system include student emotion recognition, employee performance feedback, and monitoring or surveillance systems. The implemented framework was tested in a classroom environment during a live lecture, and the predicted emotions demonstrated the classification accuracy of our approach.
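A minimal sketch of decision-level fusion, assuming each modality outputs a probability distribution over the seven emotion classes; the equal weighting below is an illustrative assumption, not the paper's tuned scheme:

```python
EMOTIONS = ["anger", "sadness", "happiness", "surprise", "fear", "disgust", "neutral"]

def fuse_decisions(modal_probs, weights=None):
    """Combine per-modality emotion distributions with a weighted average
    and return the overall group emotion plus the fused distribution."""
    if weights is None:
        weights = [1.0 / len(modal_probs)] * len(modal_probs)
    fused = [0.0] * len(EMOTIONS)
    for probs, w in zip(modal_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return EMOTIONS[fused.index(max(fused))], fused

# Illustrative per-modality outputs (video, audio, tweet text).
video = [0.10, 0.05, 0.60, 0.10, 0.05, 0.05, 0.05]
audio = [0.20, 0.10, 0.40, 0.10, 0.10, 0.05, 0.05]
text  = [0.05, 0.05, 0.70, 0.05, 0.05, 0.05, 0.05]
label, fused = fuse_decisions([video, audio, text])
# "happiness" dominates all three modalities, so it wins the fusion
```

Fusing at the decision level (rather than concatenating raw features) lets each modality keep its own specialized classifier, as the three-model design above requires.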
The classroom is a competent platform for students to learn and improve their understanding of a subject. An instructor's primary responsibility lies in engaging the students so that they feel interested and focused during the class. With the aid of automated systems based on artificial intelligence, an instructor can receive feedback on the students' attention span by monitoring their emotions with learning algorithms; this feedback can help the instructor improve their teaching style, which in turn has positive effects on the class. In this paper, we propose an LSTM recurrent neural network trained on an emotional speech corpus, and a convolutional neural network trained on the FER2013 facial emotion recognition database, to predict the speech and facial emotions of the students, respectively, in real time. The captured live video and audio sequences are fed to the trained models to classify the emotions individually. Once emotions such as anger, sadness, happiness, surprise, fear, disgust, and neutral are identified, a decision-making mechanism analyzes the predicted emotions and selects the overall group emotion as the one with the highest peak value. This approach has the potential to be deployed in video conferences, online classes, and similar settings. The proposed implementation should improve the classification accuracy and the relatability of the detected student emotions, and facilitate the design of sophisticated automated learning systems that can be a valuable tool for evaluating both students and instructors. The adopted research methodologies and their results are discussed and found to perform noticeably better than the other research works used in the comparison.
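The "highest peak value" decision step can be sketched as a simple tally over the per-student predicted labels; this plain vote count is an assumption for illustration, and the paper's exact decision rule may differ:

```python
from collections import Counter

def group_emotion(per_student_labels):
    """Pick the overall class emotion as the label predicted most often
    across the individual students (the highest peak in the tally)."""
    counts = Counter(per_student_labels)
    label, peak = counts.most_common(1)[0]
    return label, peak

label, peak = group_emotion(
    ["happiness", "neutral", "happiness", "surprise", "happiness", "neutral"]
)
# "happiness" appears three times, more than any other label
```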
Classroom education is a dynamic environment that brings together students from different backgrounds with diverse abilities. Introducing machine-learning algorithms to learn the sentiments of students in a classroom can provide a better research tool for understanding the psychology behind their attentiveness, as well as the impact the instructor has on them while delivering lectures. Emotions can be analyzed in many ways, such as through facial features, audio signals, and text messages. In this study, we propose a student emotion classification mechanism that works after the lecture by analyzing the tweets the students post to their department's handle on the social media platform Twitter, expressing their sentiments and thoughts as feedback on the classroom lecture. Students can post a tweet to their department's handle with their opinions, emotions, and suggestions. Our application monitors the department's handle, a unique user ID, via the Twitter API; when a new post appears, it collects the tweet and predicts the emotion. A hybrid approach combining lexicon-based and learning-based methods is used to handle the Twitter data and predict a student's emotion: the lexicon-based approach uses a lexicon dictionary, while the learning-based approach trains a Support Vector Machine on a manually curated dataset to classify the emotions. This application is well suited to colleges, companies, and any setting where feedback, suggestions, or complaints from students or employees need to be analyzed efficiently, thereby saving considerable manpower and time. Our proposal is expected to yield good results with improved prediction time and accuracy.
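A minimal sketch of the lexicon-based half of the hybrid approach, using a tiny illustrative senti-word dictionary with simple negation handling; the actual lexicon and the SVM stage are not reproduced here:

```python
# Illustrative senti-word entries; a real system would load a full lexicon.
SENTI_WORDS = {
    "great": 1.0, "clear": 0.8, "helpful": 0.9, "interesting": 0.7,
    "boring": -0.8, "confusing": -0.9, "slow": -0.4, "bad": -1.0,
}
NEGATIONS = {"not", "never", "no"}

def lexicon_score(tweet):
    """Sum the polarity of known words, flipping the sign of a word that
    immediately follows a negation. Positive total -> positive sentiment,
    negative -> negative, zero -> neutral."""
    score, negate = 0.0, False
    for token in tweet.lower().split():
        word = token.strip(".,!?#@")
        if word in NEGATIONS:
            negate = True
            continue
        if word in SENTI_WORDS:
            score += -SENTI_WORDS[word] if negate else SENTI_WORDS[word]
        negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

lexicon_score("The lecture was great and very clear!")  # positive words only
lexicon_score("Not helpful, quite confusing today")     # negation flips "helpful"
```

In the hybrid scheme described above, a score like this would be combined with the SVM's prediction rather than used alone, so lexicon coverage gaps can be compensated by the learned model.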