This study is the first to show that tactile display information is perceivable and useful in hypergravity (up to +9 Gz). The results show that the tactile display can capture attention when a threat pops up and can improve awareness of threats behind the aircraft, even in the presence of high-end visual displays. The added value of tactile displays is expected to increase further after formal training and in situations with unexpected target pop-up.
We developed EO-VISTA, a software framework for image-based simulation models spanning the chain scene, atmosphere, sensor, image enhancement, display, and human observer. The goal is to visualize each step in the chain and to quantify Target Acquisition (TA) task performance. EO-VISTA provides an excellent means to systematically determine the effects of individual factors on overall performance in the context of the whole chain. There is a wide range of applications in sensor design, maintenance, TA model development, tactical decision aids, and R&D. The framework is set up in such a way that modules from different producers can be combined, provided they comply with a standardized interface. At the moment the shell runs with the three modules required to calculate TA performance based on the Triangle Orientation Discrimination (TOD) method. To demonstrate the potential of a future comprehensive visualization tool, two example calculations are carried out with two programs that have not yet been integrated: the pcSitoS sensor simulation model and the EOSTAR scene and atmosphere model. With these examples we show that: i) pcSitoS yields a TOD comparable to that of the real sensor it simulates; ii) performance differences between the human visual system model implemented for automated TOD measurement and a human observer are consistent across different sensor types and can be corrected for relatively easily; and iii) simulation results for thermal ship imagery are in line with acquisition ranges predicted with the TOD model. All of these results can be studied more extensively and systematically with EO-VISTA.
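To illustrate the kind of plug-in architecture such a framework implies, the sketch below shows one way a standardized module interface and a chain executor could look in Python. This is a minimal, hypothetical illustration: the class names, the process() signature, and the toy atmosphere and sensor stages are assumptions made for the example and are not the actual interfaces of EO-VISTA, pcSitoS, or EOSTAR.

```python
from abc import ABC, abstractmethod
import numpy as np

class ChainModule(ABC):
    """Standardized interface every pipeline module must implement."""

    @abstractmethod
    def process(self, image: np.ndarray) -> np.ndarray:
        """Transform the incoming image and pass it down the chain."""
        ...

class SceneAtmosphereModule(ChainModule):
    # Hypothetical stand-in for a scene/atmosphere model:
    # attenuates scene radiance with a simple extinction term.
    def __init__(self, extinction: float = 0.1):
        self.extinction = extinction

    def process(self, image: np.ndarray) -> np.ndarray:
        return image * np.exp(-self.extinction)

class SensorModule(ChainModule):
    # Hypothetical stand-in for a sensor simulator:
    # adds Gaussian noise to mimic sensor noise.
    def __init__(self, noise_sigma: float = 0.01):
        self.noise_sigma = noise_sigma

    def process(self, image: np.ndarray) -> np.ndarray:
        rng = np.random.default_rng(0)
        return image + rng.normal(0.0, self.noise_sigma, image.shape)

def run_chain(image: np.ndarray, modules: list[ChainModule]) -> np.ndarray:
    """Push an image through the module chain, one stage at a time."""
    for module in modules:
        image = module.process(image)
    return image

# Example: a two-stage chain; a full framework would add image
# enhancement, display, and observer-model stages the same way.
scene = np.ones((64, 64))
output = run_chain(scene, [SceneAtmosphereModule(), SensorModule()])
```

Because every stage exposes the same process() contract, a module from one producer can be swapped for another without changing the executor, which is the property the standardized interface is meant to guarantee.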
Firefighters searching for victims work in hazardous environments with limited visibility, obstacles, and uncertain navigation paths. In rescue tasks, extra sensor information from infrared cameras, indoor radar, and gas sensors could improve vision, orientation, and navigation. A visual and tactile interface concept is proposed that integrates this sensor information and presents it on a head-mounted display and a tactile belt. Sixteen trained participants performed a firefighting rescue task with and without the prototype interface; task performance, mental effort, orientation, and preference were measured. We found no difference in task performance or orientation, but a significantly higher preference for the prototype compared to the baseline. Participants' remarks suggest the interface overloaded them with information, reducing its potential benefit for orientation and performance. Implications for the design of the prototype are outlined.