In mixed reality (MR) environments, visual search is commonly used for target search and localization tasks. Existing search and localization technologies suffer from problems such as a limited field of view and information overload, and they cannot satisfy the need for the rapid and precise localization of specific flying objects within a group of aerospace targets under modern air and space situational requirements. The result is inefficient interaction throughout the mission process, which in turn degrades human decision-making and judgment. To address this problem, we conducted a multimodal optimization study on auditory-assisted visual search for localization in an MR environment. In a spherical spatial coordinate system, the position of a target flying object is uniquely determined by its height h, distance r, and azimuth θ. There is therefore an urgent need to study the cross-modal connections between auditory elements and these three coordinates on the basis of visual search. In this paper, an experiment was designed to study the correlation between intuitive auditory perception and vision and the underlying mechanism of cognitive induction. The experiment covered three cross-modal mappings: pitch–height, volume–distance, and sound-channel alternation–spatial direction. The conclusions are as follows: (1) High, medium, and low pitches induce visual cognition to be biased towards the high, middle, and low regions of visual space. (2) High, medium, and low volumes induce visual cognition to be biased towards the near, middle, and far regions of visual space. (3) Based on HRTF rendering, the sound-channel alternation scheme is expected to significantly improve the efficiency of visual interaction: left short sounds, right short sounds, left short-and-long sounds, and right short-and-long sounds induce visual cognition to be biased towards the left, right, left-rear, and right-rear directions of visual space, respectively. (4) Incorporating auditory cues significantly reduces the cognitive load of search and localization, and greatly improves the efficiency and accuracy of searching for and positioning flying objects in space. These findings can be applied to research on various types of target search and localization technologies in MR environments and provide a theoretical basis for subsequent study of spatial information perception and cognitive induction mechanisms under visual–auditory coupling in MR.
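As a minimal illustrative sketch (not the authors' implementation), the three cue mappings in findings (1)–(3) can be expressed as a lookup from a target's spherical coordinates (h, r, θ) to an auditory cue triple. All bin thresholds below are hypothetical placeholders; the abstract does not specify numeric ranges.

```python
def auditory_cue(h, r, theta_deg):
    """Map a target's height h, distance r, and azimuth theta (degrees)
    to the pitch, volume, and channel-alternation cues described in the
    abstract. Threshold values are hypothetical."""
    # Finding (1): pitch bin follows spatial height.
    if h > 10.0:
        pitch = "high"
    elif h > 5.0:
        pitch = "medium"
    else:
        pitch = "low"

    # Finding (2): volume bin follows distance (louder = nearer).
    if r < 50.0:
        volume = "high"      # near region
    elif r < 100.0:
        volume = "medium"    # middle region
    else:
        volume = "low"       # far region

    # Finding (3): channel-alternation pattern follows azimuth quadrant.
    theta = theta_deg % 360.0
    if theta < 90.0:
        pattern = "right short"         # right-front
    elif theta < 180.0:
        pattern = "right short + long"  # right-rear
    elif theta < 270.0:
        pattern = "left short + long"   # left-rear
    else:
        pattern = "left short"          # left-front

    return pitch, volume, pattern

print(auditory_cue(12.0, 30.0, 200.0))
```

In a real system the returned labels would drive an HRTF-based spatial audio renderer; here they simply name the cue categories from the experiment.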
In mixed reality (MR) environments, the task of target motion perception is usually carried out by vision alone. This approach suffers from poor discrimination and high cognitive load when tasks are complex, and it cannot meet the air traffic control field's need to rapidly capture and precisely position dynamic airborne targets. To address this problem, we conducted a multimodal optimization study on target motion perception in which a hand-mounted tactile sensor was controlled so that tactile sensation assists vision in an MR environment, adapting the approach to the interactive tasks expected under a future mixed reality holographic aviation tower. According to the number of targets and the division of tasks, motion perception tasks are usually divided into urgency sensing for multiple targets and precise position tracking for a single target. Accordingly, we designed experiments to investigate the correlation between the tactile intensity–velocity correspondence and target urgency, and the correlation between a PRS (position, rhythm, sequence) tactile indication scheme and position tracking, and we then evaluated both through a comprehensive experiment. We reached the following conclusions: (1) High, higher, medium, lower, and low tactile intensities bias human visual cognition towards fast, faster, medium, slower, and slow moving targets, respectively, and this correspondence significantly improves the efficiency of participants' judgments of target urgency. (2) Under the PRS tactile indication scheme, position-based rhythm and sequence cues both improve the tracking of a target's dynamic position, with rhythm cues being the more effective; adding rhythm and sequence cues simultaneously, however, causes confusion. (3) Tactile-assisted vision markedly improves the comprehensive perception of dynamic target motion. These findings are useful for the study of target motion perception in MR environments and provide a theoretical basis for subsequent research on the cognitive mechanisms and quantification of tactile indication in MR environments.
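The five-level tactile intensity–velocity correspondence in finding (1) can be sketched as a simple binning function. This is an assumed illustration only; the speed thresholds and units are hypothetical placeholders, not values from the study.

```python
# Five intensity levels, ordered from the slowest to the fastest target bin.
INTENSITY_LEVELS = ["low", "lower", "medium", "higher", "high"]

# Hypothetical speed thresholds separating the five bins:
# slow | slower | medium | faster | fast
SPEED_THRESHOLDS = [50.0, 100.0, 200.0, 300.0]

def tactile_intensity(speed):
    """Return the tactile intensity level cueing a target's urgency."""
    for level, threshold in zip(INTENSITY_LEVELS, SPEED_THRESHOLDS):
        if speed < threshold:
            return level
    return INTENSITY_LEVELS[-1]  # fastest targets get the strongest cue

print(tactile_intensity(40.0))
print(tactile_intensity(250.0))
```

The monotone mapping (faster target, stronger vibration) is the point of the correspondence; the number of levels matches the five bins reported in the experiment.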