This paper assesses the impact of emotional arousal on learning through virtual reality (VR) videos for immersive geography learning. Validity was measured both with traditional questionnaire data and with electroencephalography (EEG). Twenty-four human subjects were recruited and presented with eight immersive geography learning videos with different affective tendencies. EEG data were collected from the subjects while they watched the videos. After watching, subjects were asked to complete an emotion scale, a learning motivation scale, and a flow experience scale. The results show that VR video learning materials can effectively induce both positive and negative emotions in the subjects. Compared with negative emotions, alpha-band power in the frontal lobe and beta- and gamma-band power in the temporal lobe region are significantly higher under positive emotions. In addition, the results of the subjective scales indicate that subjects have stronger intrinsic motivation and a better flow experience under positive emotions. However, there was no significant difference in immersion between positive and negative emotions. Our findings demonstrate the usability of virtual reality situational geography stories for teaching and the broader value of this teaching method for future instruction.
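As a rough illustration of the kind of band-power comparison reported above (not the authors' actual pipeline), the sketch below estimates alpha, beta, and gamma power with Welch's PSD and compares the two emotion conditions. The channel selection, sampling rate, band limits, and array names (eeg_positive, eeg_negative) are all illustrative assumptions.

```python
# Hedged sketch: EEG band power via Welch's PSD, comparing positive vs. negative
# emotion conditions for one channel. Sampling rate and band limits are assumed.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(signal, fs, band):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

# Hypothetical trial data of shape (n_trials, n_samples) for one electrode
# under each emotion condition; random placeholders stand in for real recordings.
rng = np.random.default_rng(0)
eeg_positive = rng.standard_normal((20, FS * 10))
eeg_negative = rng.standard_normal((20, FS * 10))

for name, band in BANDS.items():
    pos = np.mean([band_power(trial, FS, band) for trial in eeg_positive])
    neg = np.mean([band_power(trial, FS, band) for trial in eeg_negative])
    print(f"{name}: positive={pos:.4f}, negative={neg:.4f}")
```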
With the continuous progress of remote sensing object detection in recent years, researchers in this field have gradually shifted their focus from horizontal object detection to object detection in arbitrary directions. It is worth noting that oriented object detection has some properties that differ from horizontal object detection and that researchers have paid little attention to so far. This article presents a straightforward and efficient arbitrary-oriented detection system that leverages the inherent properties of the orientation task, including the rotation angle and the box aspect ratio. When detecting objects with low aspect ratios, the angle matters little to the oriented bounding box, and it is even difficult to define the angle for extreme categories. Conversely, when detecting objects with high aspect ratios, the angle plays a crucial role and can have a decisive impact on the quality of the detection results. By exploiting the aspect ratios of different targets, this letter proposes a ratio-balanced angle loss that allows the model to make a better trade-off between low-aspect-ratio and high-aspect-ratio objects. The rotation angle of each oriented object is naturally embedded into a two-dimensional Euclidean space for regression, which avoids an overly redundant design and preserves the topological properties of the circular angle space. Results on the UCAS-AOD, HRSC2016, and DLR-3K datasets show that the proposed model achieves a leading level of both accuracy and speed.
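To make the two ideas named in this abstract concrete, the sketch below shows one plausible reading: the angle is regressed as a point on the unit circle (a two-dimensional Euclidean embedding with the period of a rotated box), and the angle term is weighted by a monotone function of the box aspect ratio. The exact weighting and loss form used by the authors are not given in the abstract, so w(r) and the squared-error term here are illustrative assumptions.

```python
# Hedged sketch of a circular angle embedding and an aspect-ratio-weighted
# ("ratio-balanced") angle loss; not the authors' exact formulation.
import torch

def encode_angle(theta):
    """Embed a rotation angle into 2-D Euclidean space (period-pi boxes assumed)."""
    return torch.stack([torch.cos(2 * theta), torch.sin(2 * theta)], dim=-1)

def ratio_balanced_angle_loss(pred_vec, target_theta, aspect_ratio):
    """Angle regression loss whose weight grows with the box aspect ratio,
    so near-square boxes contribute little and elongated boxes dominate."""
    target_vec = encode_angle(target_theta)
    per_box = torch.sum((pred_vec - target_vec) ** 2, dim=-1)  # continuous on the circle
    weight = torch.log1p(aspect_ratio - 1.0)                   # assumed weighting w(r)
    return torch.mean(weight * per_box)

# Toy usage: three boxes with increasing elongation and a slightly perturbed prediction.
theta_gt = torch.tensor([0.1, 1.2, 1.5])   # ground-truth angles (rad)
ratio = torch.tensor([1.05, 4.0, 9.0])     # long side / short side
pred = encode_angle(theta_gt + 0.05)       # stand-in for network output
print(ratio_balanced_angle_loss(pred, theta_gt, ratio).item())
```

The embedding avoids the boundary discontinuity of direct angle regression, and the weight lets near-square boxes, where the angle is ill-defined, contribute little to the loss.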
In recent years, deep learning methods have achieved great success on vehicle detection tasks in aerial imagery. However, most existing methods focus only on extracting latent vehicle target features and rarely consider the scene context as vital prior knowledge. In this letter, we propose a scene context attention-based fusion network (SCAF-Net) to fuse the scene context of vehicles into an end-to-end vehicle detection network. First, we propose a novel strategy, patch cover, to preserve as much of the original target and scene context information in large-scale raw aerial images as possible. Next, we use an improved YOLO-v3 network as one branch of SCAF-Net to generate vehicle candidates on each patch. Here, a novel scene context branch is used to extract the latent scene context of vehicles on each patch without any extra annotations. These two branches are then concatenated into a fusion network, and an attention-based model is applied to further extract vehicle candidates in each local scene. Finally, the vehicle candidates from different patches are merged by global non-maximum suppression (g-NMS) to produce the detection result for the whole original image; a sketch of this merging step is given below. Experimental results demonstrate that our proposed method outperforms the comparison methods in both detection accuracy and speed. Our code is released at https://github.com/minghuicode/SCAF-Net.
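The following is a minimal sketch of the final merging step described above, under assumptions: detections from overlapping patches are shifted into global image coordinates and merged with a single NMS pass over the whole image. The patch layout, IoU threshold, and variable names are illustrative and are not taken from the released SCAF-Net code.

```python
# Hedged sketch of global non-maximum suppression (g-NMS) over patch detections.
import torch
from torchvision.ops import nms

def global_nms(patch_detections, iou_thresh=0.5):
    """patch_detections: list of (offset_xy, boxes[N, 4] in xyxy patch coords, scores[N])."""
    all_boxes, all_scores = [], []
    for (ox, oy), boxes, scores in patch_detections:
        # Shift patch-local boxes into global image coordinates.
        shifted = boxes + torch.tensor([ox, oy, ox, oy], dtype=boxes.dtype)
        all_boxes.append(shifted)
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thresh)  # one global pass removes cross-patch duplicates
    return boxes[keep], scores[keep]

# Toy usage: the same vehicle detected near the border of two adjacent patches.
dets = [
    ((0, 0),   torch.tensor([[400., 200., 440., 230.]]), torch.tensor([0.92])),
    ((416, 0), torch.tensor([[-14., 201., 26., 229.]]),  torch.tensor([0.88])),
]
boxes, scores = global_nms(dets)
print(boxes, scores)
```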