This study investigated how instructors' expressive versus nonexpressive nonverbal behavior in video lectures affects students' learning performance and affective experience. We conducted two experiments with identical materials and procedures, differing only in the participants. In each experiment, participants were randomly assigned to either the expressive or the nonexpressive condition. In Experiment 1 (227 sixth-graders from rural primary schools), participants in the expressive condition reported better affective experiences and perceived the tasks as less difficult, but showed lower learning performance than those in the nonexpressive condition. In Experiment 2 (175 sixth-graders from urban primary schools), instructors' expressive nonverbal behavior likewise improved students' affective experience and reduced perceived task difficulty, but learning performance did not differ significantly between the two conditions. Comparing pretest scores across the two experiments showed that participants in Experiment 2 scored higher than those in Experiment 1. Overall, instructors' expressive nonverbal behavior can improve students' affective experience and reduce their perception of task difficulty; however, when students' prior knowledge is relatively low, it can hinder learning performance. We suggest that teachers adopt expressive nonverbal behavior when lecturing, as it helps sustain students' long-term interest in learning, while noting that the difficulty of the learning material should be matched to students' prior knowledge.
The teaching effect and learning state are significantly influenced by the many emotions manifested in teachers' behaviour. An affective recognition model can analyse teaching-behaviour data for useful feedback and thereby help teachers raise the quality of their instruction. However, typical emotion recognition models cannot fully distinguish the intricate emotional features and cues in teaching behaviour, which limits the accuracy of emotion classification. To improve classification performance, this paper proposes a multi-modal emotion recognition model of teaching behaviour based on dynamic convolution and residual gating. The model enhances emotion classification by mining higher-level local features and designing an efficient interactive fusion strategy. First, low-level features, high-level local features, and context dependencies are extracted from text, audio, and images. Second, cross-modal dynamic convolution (CMDC) is employed to model both intra-modal and cross-modal interactions, capture dependencies across long time series, and preserve crucial information that would otherwise be lost during fusion. Experimental results show that the model outperforms comparable models on a self-built dataset, reaching an emotion classification accuracy of 83.5% and an F1 score of 83.1%. These results demonstrate that the emotion classification model can give teachers an objective framework for analysing teaching behaviour and so help them become more effective over time.
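The abstract does not specify how CMDC is parameterized, but the core idea of dynamic convolution, predicting convolution kernels from one modality and applying them to another, can be sketched as follows. This is a minimal NumPy illustration under assumed shapes and a hypothetical kernel-prediction projection, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_dynamic_conv(query_feats, context_feats, kernel_size=3):
    """Sketch of cross-modal dynamic convolution (CMDC).

    Per-position kernel weights are predicted from one modality
    (context_feats, e.g. text) and applied as a depthwise 1D
    convolution over the other (query_feats, e.g. audio).
    Both inputs are (T, d) feature sequences; the projection W
    is a hypothetical stand-in for a learned layer.
    """
    T, d = query_feats.shape
    # Predict one kernel per time step from the context modality.
    W = rng.standard_normal((context_feats.shape[1], kernel_size)) * 0.1
    kernels = context_feats @ W                      # (T, kernel_size)
    # Softmax-normalize each position's kernel weights.
    kernels = np.exp(kernels - kernels.max(axis=1, keepdims=True))
    kernels /= kernels.sum(axis=1, keepdims=True)
    # Depthwise convolution: weighted sum over a local window.
    pad = kernel_size // 2
    padded = np.pad(query_feats, ((pad, pad), (0, 0)))
    out = np.empty_like(query_feats)
    for t in range(T):
        window = padded[t:t + kernel_size]           # (kernel_size, d)
        out[t] = kernels[t] @ window                 # (d,)
    return out

audio = rng.standard_normal((6, 8))   # toy audio feature sequence
text = rng.standard_normal((6, 8))    # toy text feature sequence
fused = cross_modal_dynamic_conv(audio, text)
print(fused.shape)  # (6, 8)
```

Because the kernels depend on the context sequence rather than being fixed, each position's receptive field is reweighted by the other modality, which is one way such a model can capture cross-modal interactions without discarding local information.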