In-class teaching evaluation, which assesses the process and effect of both teachers’ teaching and students’ learning in a classroom environment, plays an increasingly crucial role in supervising and promoting education quality. With the rapid development of artificial intelligence (AI) technology, the concept of smart education has been continuously refined and has gradually permeated all aspects of educational practice. Given the dominant position of classroom teaching in elementary and undergraduate education, introducing AI technology into in-class teaching evaluation has become a research hotspot. In this paper, we propose a comprehensive model based on statistical modeling and ensemble learning, oriented towards in-class teaching evaluation and built on AI technologies such as computer vision (CV) and intelligent speech recognition (ISR). First, we present an index system comprising a set of teaching evaluation indicators that combines traditional assessment scales with new values derived from CV- and ISR-based AI analysis. Next, we design a comprehensive in-class teaching evaluation model using both the analytic hierarchy process–entropy weight (AHP-EW) method and AdaBoost-based ensemble learning (AdaBoost-EL). Experiments not only demonstrate that the two modules of the model are respectively suited to calculating indicators with different characteristics, but also verify the performance of the proposed model for AI-based in-class teaching evaluation. In this comprehensive in-class evaluation model, students’ concentration and participation are evaluated by the ensemble learning module, which achieves lower root mean square errors (RMSE) of 8.318 and 9.375, respectively. In addition, teachers’ media usage and teachers’ type, evaluated by the statistical modeling module, reach higher accuracies of 0.905 and 0.815.
By contrast, the ensemble learning module reaches an accuracy of 0.73 in evaluating teachers’ style, outperforming the statistical modeling module, whose accuracy is 0.69.
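As a minimal sketch of the statistical-modeling side of such a model, the entropy-weight (EW) step and its fusion with subjective AHP weights can be written as follows. The indicator matrix, AHP weights, and the multiplicative fusion rule shown here are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight (EW) method: rows of X are observed samples,
    columns are evaluation indicators (all values positive)."""
    P = X / X.sum(axis=0)          # column-wise proportion of each sample
    P = np.clip(P, 1e-12, 1.0)     # guard against log(0)
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)  # entropy per indicator
    d = 1.0 - e                    # degree of diversification
    return d / d.sum()             # objective (entropy) weights

def combine_ahp_ew(ahp_w, ew_w):
    """Fuse subjective AHP weights with objective entropy weights by a
    common multiplicative rule, then renormalize to sum to one."""
    w = np.asarray(ahp_w) * np.asarray(ew_w)
    return w / w.sum()
```

For example, given scores for two indicators across three lessons, `combine_ahp_ew(ahp, entropy_weights(X))` yields a single normalized weight vector for the index system.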
The Shack–Hartmann wavefront sensor (SHWFS) has been widely used for measuring aberrations in adaptive optics systems. However, its traditional wavefront reconstruction method usually has limited precision under field conditions because the center-of-gravity calculation is affected by many factors, such as low signal-to-noise ratios and strong turbulence. In this paper, we present a ResNet50+ network that reconstructs the wavefront with high precision from the spot pattern of the SHWFS. In this method, a nonlinear mapping is built between the spot pattern and the corresponding Zernike coefficients without a traditional center-of-gravity calculation. The results indicate that the root-mean-square (RMS) value of the residual wavefront is 0.0128 μm, which is 0.79% of the original wavefront RMS. Additionally, under atmospheric conditions, we can reconstruct the wavefront with an RMS reconstruction error of less than 0.1 μm if the ratio between the telescope aperture diameter D and the coherence length r0 is 20, or if a natural guide star of the ninth magnitude is available. The presented method is effective for measuring wavefronts disturbed by atmospheric turbulence in the observation of faint astronomical objects.
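Under the common Noll normalization of Zernike modes (each mode has unit RMS over the pupil), the RMS of a wavefront equals the root-sum-square of its Zernike coefficients, so a residual-wavefront RMS like the one reported above can be computed directly from predicted versus true coefficients. A minimal sketch, assuming Noll normalization (the function name is hypothetical):

```python
import numpy as np

def residual_rms(true_coeffs, pred_coeffs):
    """Residual-wavefront RMS from Zernike coefficients, assuming
    Noll-normalized modes, so wavefront RMS equals the
    root-sum-square of the coefficient errors."""
    resid = np.asarray(true_coeffs, float) - np.asarray(pred_coeffs, float)
    return float(np.sqrt(np.sum(resid**2)))
```

Dividing this value by the root-sum-square of the true coefficients gives the relative residual (the 0.79% figure quoted in the abstract is such a ratio).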
The increasing development of biosensing technologies makes it feasible to monitor students’ physiological signals in natural learning scenarios. With the rise of mobile learning, educators are attaching greater importance to students’ learning immersion experience, especially against the global background of COVID-19. However, traditional methods of evaluating the learning immersion experience, such as questionnaires and scales, are greatly influenced by individuals’ subjective factors. Herein, our research aims to explore the relationship and mechanism between human physiological recordings and learning immersion experiences in order to eliminate subjectivity as much as possible. We collected electroencephalogram (EEG) and photoplethysmographic (PPG) signals, as well as self-reports on the immersive experience, from thirty-seven college students during virtual reality and online learning to form the fundamental feature set. We then proposed an evaluation model based on a support vector machine (SVM) and obtained a precision of 89.72%. Our results provide evidence supporting the possibility of predicting students’ learning immersion experience from their EEG and PPG signals.
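The 89.72% figure above is a precision score for the classifier. As a library-free sketch of how such a metric is computed (the abstract does not specify the class labeling, so the positive class here is an assumption):

```python
def precision(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP) for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0
```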