Virtual reality and the Internet of Things have demonstrated their capability in a variety of tasks; however, their adoption in online learning remains an open problem. To bridge this gap, we propose a virtual reality and Internet of Things-based pipeline for online music learning. A graph neural network is used to generate an automated evaluation of learning performance that was traditionally provided by teachers. Specifically, a graph neural network-based algorithm is employed to identify the real-time status of each student in an online class. In the proposed algorithm, the characteristics of each student, collected from multiple sensors worn on their bodies, are taken as the input features for the corresponding node in the graph neural network. Using convolutional layers and dense layers, together with the similarity between each pair of students, the proposed approach can predict the future state of the entire class. To evaluate our work, we conducted comparison experiments between several state-of-the-art algorithms and the proposed algorithm. The results demonstrate that the graph neural network-based algorithm achieved competitive performance (sensitivity 91.24%, specificity 93.58%, and accuracy 89.79%) relative to the state-of-the-art.
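The abstract above does not give implementation details, but the ingredients it names (per-student node features from body-worn sensors, a similarity-based graph, convolutional and dense layers) correspond to a standard graph-convolution pattern. The following is a minimal NumPy sketch of that pattern; the similarity threshold, layer sizes, and two-class readout are all assumptions for illustration, not the authors' actual model:

```python
import numpy as np

def normalized_adjacency(features, threshold=0.1):
    """Build a graph from pairwise cosine similarity between student feature
    vectors, then symmetrically normalize: A_hat = D^-1/2 (A + I) D^-1/2."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                  # self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    return d_inv_sqrt @ adj @ d_inv_sqrt

def gcn_layer(a_hat, x, w):
    """One graph-convolution layer: aggregate neighbors, project, ReLU."""
    return np.maximum(a_hat @ x @ w, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))        # 8 students, 16 sensor features each
a_hat = normalized_adjacency(x)
w1 = rng.normal(size=(16, 32))
w2 = rng.normal(size=(32, 2))       # hypothetical 2-class per-student status
h = gcn_layer(a_hat, x, w1)
logits = a_hat @ h @ w2             # dense readout per student node
print(logits.shape)                 # (8, 2)
```

Each row of `logits` is a per-student prediction; stacking such layers lets information propagate between similar students, which is presumably how the class-level prediction is formed.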
An aurora is a unique geophysical phenomenon with polar characteristics that can be observed directly with the naked eye. It is the most concentrated manifestation of solar–terrestrial physical processes (especially magnetospheric–ionospheric interactions) in polar regions and the best window for studying solar storms. Because aurora images carry rich morphological information, increasing attention is being paid to studying auroral phenomena from the perspective of images. Recently, machine learning and deep learning methods have been applied to this field and have achieved preliminary results. However, owing to the limitations of these learning models, they do not yet meet the recognition-accuracy requirements for the classification and prediction of auroral images. To address this problem, this study introduces a convolutional neural network–Transformer solution based on vision transformers. Comparative experiments show that the proposed method effectively improves the accuracy of aurora image classification, outperforming state-of-the-art deep learning methods. The experimental results show that the algorithm presented in this study is an effective instrument for classifying auroral images and can provide practical assistance for related research.
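A CNN–Transformer hybrid of the kind this abstract describes typically uses a convolutional stem to turn the image into a grid of tokens, then applies self-attention over those tokens. Here is a toy NumPy sketch of that two-stage flow; the image size, single-kernel stem, token embedding, and pooling are simplifying assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def conv_stem(img, kernel, stride=4):
    """Tiny convolutional stem: strided valid convolution -> feature map."""
    k = kernel.shape[0]
    h = (img.shape[0] - k) // stride + 1
    w = (img.shape[1] - k) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i*stride:i*stride+k, j*stride:j*stride+k] * kernel)
    return out

def self_attention(tokens, wq, wk, wv):
    """Single-head self-attention over the token sequence."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[1]))
    return scores @ v

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))                   # stand-in for an aurora image
fmap = conv_stem(img, rng.normal(size=(4, 4)))    # -> 8x8 feature map
tokens = fmap.reshape(-1, 1) @ np.ones((1, 8))    # embed each cell into 8 dims (toy)
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
attended = self_attention(tokens, wq, wk, wv)
cls_vec = attended.mean(axis=0)                   # pooled vector for a classifier head
print(cls_vec.shape)                              # (8,)
```

The convolutional stage supplies local texture features, while the attention stage relates distant regions of the image, which is the usual motivation for this hybrid on morphology-rich data such as auroras.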
Introduction: Emotion plays a vital role in understanding activities and associations. Because EEG is non-invasive, many researchers have employed EEG signals as a reliable technique for emotion recognition. Identifying emotions from multi-channel EEG signals is becoming a crucial task for diagnosing emotional disorders in neuroscience. One challenge in automated emotion recognition from EEG signals is extracting and selecting discriminative features to classify different emotions accurately.
Methods: In this study, we propose a novel Transformer model for identifying emotions from multi-channel EEG signals. Note that the raw EEG signal is fed directly into the proposed Transformer, which aims to eliminate the issues caused by the local receptive fields of convolutional neural networks. The presented deep learning model consists of two separate channels that address the spatial and temporal information in the EEG signals, respectively.
Results: In the experiments, we first collected EEG recordings from 20 subjects while they listened to music. The proposed approach reached accuracies of 97.3% for binary emotion classification (positive and negative) and 97.1% for ternary emotion classification (positive, negative, and neutral). We conducted comparison experiments on the same dataset between the proposed method and state-of-the-art techniques and achieved a promising outcome relative to these approaches.
Discussion: Given its performance, the proposed approach can be a potentially valuable instrument for human–computer interface systems.
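The two-channel design described in the Methods section (one channel for spatial information across electrodes, one for temporal information across samples) can be sketched with two attention passes over the same raw EEG matrix, transposed between passes. This NumPy toy is only a sketch under assumed dimensions (32 electrodes, 128 samples, 16-dim projections, mean-pooled fusion), not the authors' model:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend(tokens, wq, wk, wv):
    """Single-head self-attention, mean-pooled to one summary vector."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[1]))
    return (scores @ v).mean(axis=0)

rng = np.random.default_rng(2)
eeg = rng.normal(size=(32, 128))    # raw EEG: 32 electrodes x 128 time samples

# Spatial channel: each electrode's whole time series is one token.
ws = [rng.normal(size=(128, 16)) for _ in range(3)]
spatial = attend(eeg, *ws)          # (16,)

# Temporal channel: each time step (all electrodes at once) is one token.
wt = [rng.normal(size=(32, 16)) for _ in range(3)]
temporal = attend(eeg.T, *wt)       # (16,)

fused = np.concatenate([spatial, temporal])   # (32,) fed to a classifier head
w_out = rng.normal(size=(32, 3))              # 3 classes: positive/negative/neutral
probs = softmax(fused @ w_out)
print(probs.shape)                            # (3,)
```

Feeding the raw matrix to both channels, rather than convolutional features, matches the abstract's stated goal of avoiding the local receptive fields of CNNs: every electrode (or time step) can attend to every other one directly.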