This paper presents an in-depth study of students' role perceptions and behavioral tendencies in classroom education using a visual inspection approach, and proposes a multiple-instance learning method for assessing student engagement based on a one-dimensional convolutional neural network. Following the conceptual composition of student engagement, head pose, eye gaze, eye open/closed state, and the most commonly used facial action units are adopted as visual features. For feature extraction, relative change features are proposed: from the per-frame features extracted with the OpenFace toolkit, the standard deviation of the distances of adjacent frames to the center point of each of the three visual feature streams is taken as the video's relative change feature. As information technology injects new vitality into educational innovation, many researchers have introduced computer vision and image processing techniques into students' online learning activities, inferring a student's current learning state by analyzing learning behavior; comparatively few studies address classroom teaching, where students are often highly motivated early on but absenteeism rises markedly in later stages. Because the features within an instance have low relative positional correlation, each instance is analyzed with a one-dimensional convolutional neural network to obtain instance-level engagement, and a multiple-instance pooling layer then infers the video-level student engagement from the instance-level predictions.
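The relative change feature described above can be sketched as follows. This is a minimal illustration, not the authors' code: the window size, the use of the window mean as the "center point," and Euclidean distance are all assumptions, since the abstract only states that the standard deviation of distances of adjacent frames to a center point is used.

```python
import numpy as np

def relative_change_features(frames, window=5):
    """Sliding-window relative change feature for one visual stream
    (e.g. head pose angles, gaze vector, or eye open/closed state).

    For each window of adjacent frames, compute every frame's Euclidean
    distance to the window's center point (here assumed to be the mean),
    then take the standard deviation of those distances as the window's
    relative-change value."""
    frames = np.asarray(frames, dtype=float)   # shape (n_frames, n_dims)
    n = len(frames)
    out = []
    for start in range(n - window + 1):
        w = frames[start:start + window]       # adjacent frames
        center = w.mean(axis=0)                # assumed center point
        dists = np.linalg.norm(w - center, axis=1)
        out.append(dists.std())
    return np.array(out)
```

A stream that does not move yields all-zero relative change values, while a drifting stream yields positive ones, matching the intuition that this feature captures motion magnitude rather than absolute pose.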
Finally, the student classroom attention evaluation and detection system is applied experimentally to real classroom teaching activities. Its effectiveness and accuracy are examined in depth through concrete applications and case analysis, and the accuracy of the proposed method is further verified through interview-based feedback from teachers and students.
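The inference step above, from instance-level engagement to video-level engagement, can be sketched with a simple multiple-instance pooling operator. The abstract does not specify which pooling function is used, so max and mean (two common choices in multiple-instance learning) are shown here purely as assumptions:

```python
import numpy as np

def mil_pool(instance_scores, mode="max"):
    """Aggregate instance-level engagement scores (e.g. the outputs of
    a 1-D CNN over per-segment features) into one bag-level score for
    the whole video.  'max' and 'mean' are illustrative choices; the
    paper's actual pooling layer may differ."""
    s = np.asarray(instance_scores, dtype=float)
    if mode == "max":
        return float(s.max())    # video is as engaged as its most engaged instance
    return float(s.mean())       # video engagement is the average over instances
```

Max pooling suits labels driven by a few salient instances, whereas mean pooling suits labels that reflect the whole video; which assumption holds determines the choice.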
This paper also carries out an in-depth study of a cross-media semantic matching and user-adaptive satisfaction analysis model based on convolutional neural networks. Building on an existing convolutional neural network, the model exploits the rich spatial correlation information of cross-media semantic matching to further improve hyperspectral image classification accuracy while reducing classification time under user-adaptive satisfaction complexity. Two difficulties are addressed: current CNN-based hyperspectral image classification methods struggle to capture the spatial pose characteristics of objects, and principal component analysis discards vital information when only a few components are retained. To handle these, a stereo capsule network model based on extended multi-attribute profile (EMAP) features is proposed for hyperspectral image classification. To ensure good generalization, a new CNN-based pansharpening algorithm for remote sensing images is proposed that widens the model to extract richer image feature information and replaces traditional convolution with dilated convolution. The experimental results show that the algorithm generalizes well while maintaining adaptive satisfaction.
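The substitution of dilated for traditional convolution mentioned above can be illustrated in one dimension (the paper operates on 2-D images; the 1-D analogue is used here only for brevity, and the kernel values are arbitrary):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """1-D dilated (atrous) convolution: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    adding parameters.  dilation=1 reduces to ordinary convolution."""
    x = np.asarray(x, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = []
    for i in range(len(x) - span + 1):
        taps = x[i:i + span:dilation]      # samples `dilation` apart
        out.append(float(np.dot(taps, kernel)))
    return np.array(out)
```

With a kernel of size 3 and dilation 2, each output sees 5 input samples instead of 3, which is why dilation widens context at no extra parameter cost.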