Advanced man-machine interfaces (MMIs) are being developed for teleoperating robots in remote and difficult-to-access places. Such MMIs make use of a virtual environment and can therefore immerse the operator in the robot's environment. In this paper, we present our MMI for multi-robot control, which can adapt online to changes in task load and task engagement. By applying our approach of embedded Brain Reading, we improve user support and the efficiency of interaction. The level of task engagement is inferred from the single-trial detectability of P300-related brain activity that is naturally evoked during interaction, so no secondary task is needed to measure task load. The approach builds on research results on the single-stimulus paradigm and on the distribution of brain resources and its effect on the P300 event-related component. It further considers how delayed reaction times modulate the P300 component evoked by complex responses to task-relevant messages. We validate our concept using single-trial machine learning analysis, analysis of averaged event-related potentials, and behavioral analysis. As main results we show that (1) the runtime needed to perform the interaction tasks improves significantly compared to a setting in which all subjects could easily perform the tasks; (2) the single-trial detectability of the P300 event-related potential can be used to measure changes in task load and task engagement during complex interaction while also being sensitive to the operator's level of experience; and (3) this measure can be used to adapt the MMI individually to the different needs of users without increasing total workload. Our online adaptation of the proposed MMI is based on continuous monitoring of the operator's cognitive resources by means of embedded Brain Reading. Operators with different qualifications or capabilities receive only as many tasks as they can handle, avoiding both mental overload and mental underload.
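The abstract above describes inferring task engagement from the single-trial detectability of the P300 and using it to decide how many tasks an operator receives; the exact pipeline is not given here. The following Python sketch is therefore only illustrative: it assumes windowed EEG epochs, a shrinkage-regularized LDA classifier (a common choice for single-trial ERP detection), synthetic data in place of real recordings, and a hypothetical assign_tasks policy.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Illustrative sketch only: synthetic epochs stand in for real EEG recordings.
# Each epoch is a (channels x samples) window locked to a task-relevant message.
rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 200, 8, 100
epochs = rng.normal(size=(n_epochs, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_epochs)  # 1 = attended message, P300 expected

# Add a crude "P300-like" positivity around 300 ms to the attended epochs.
epochs[labels == 1, :, 55:70] += 0.8

# Simple feature extraction: flatten each epoch (a real pipeline would use
# spatial filtering and decimation, e.g. xDAWN plus downsampling).
features = epochs.reshape(n_epochs, -1)

# Single-trial detectability estimated as cross-validated classification accuracy.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
detectability = cross_val_score(clf, features, labels, cv=5).mean()

# Hypothetical adaptation policy: assign fewer parallel tasks when detectability
# drops, i.e. when less attention is available for task-relevant messages.
def assign_tasks(detectability: float) -> int:
    if detectability > 0.80:
        return 3   # operator has spare capacity
    elif detectability > 0.65:
        return 2
    return 1       # reduce load to avoid mental overload

print(f"P300 detectability: {detectability:.2f} -> {assign_tasks(detectability)} task(s)")
```

In an online setting, the detectability estimate would be updated continuously during interaction and fed back to the task-allocation component of the MMI.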
In this paper, a novel approach for real-time heatmap generation and visualization of 3D gaze data is presented. By projecting the gaze into the scene and accounting for occlusions from the observer's point of view, we provide, to our knowledge for the first time, a correct visualization of the actual scene perception in 3D environments. Based on a graphics-centric approach that utilizes the graphics pipeline, shaders, and several optimization techniques, heatmap rendering is fast enough for interactive online and offline gaze analysis of thousands of gaze samples.
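The paper implements heatmap rendering on the GPU with the graphics pipeline and shaders; the CPU-side Python sketch below only illustrates the underlying idea under simplified assumptions: each gaze ray is intersected with the scene geometry, only the nearest hit is kept so that occluded surfaces receive no attention, and a Gaussian footprint is accumulated into a heatmap around the visible hit point. The scene, resolution, and splat parameters are made up for illustration.

```python
import numpy as np

def ray_triangle(origin, direction, tri):
    """Moeller-Trumbore intersection; returns hit distance t or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < 1e-9:
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ q) * inv
    return t if t > 1e-6 else None

def splat(heatmap, hit_xy, extent, sigma=0.05):
    """Accumulate a Gaussian footprint around the hit point (scene x/y plane)."""
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    px = xs / (w - 1) * extent
    py = ys / (h - 1) * extent
    d2 = (px - hit_xy[0]) ** 2 + (py - hit_xy[1]) ** 2
    heatmap += np.exp(-d2 / (2 * sigma ** 2))

def quad(z):
    """Two triangles forming a unit quad at depth z (toy scene geometry)."""
    a, b = np.array([0.0, 0.0, z]), np.array([1.0, 0.0, z])
    c, d = np.array([1.0, 1.0, z]), np.array([0.0, 1.0, z])
    return [(a, b, c), (a, c, d)]

# Toy scene: two parallel quads; the nearer one occludes the farther one.
scene = quad(1.0) + quad(2.0)
heatmap = np.zeros((64, 64))
eye = np.array([0.5, 0.5, -1.0])

for gaze_target in (np.array([0.4, 0.5, 1.0]), np.array([0.6, 0.5, 1.0])):
    direction = gaze_target - eye
    direction /= np.linalg.norm(direction)
    hits = [(t, tri) for tri in scene if (t := ray_triangle(eye, direction, tri))]
    if hits:
        t_near, _ = min(hits, key=lambda h: h[0])  # nearest hit = visible surface
        hit = eye + t_near * direction              # occluded quad gets nothing
        splat(heatmap, hit[:2], extent=1.0)

print("heatmap max:", heatmap.max())
```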
This paper presents a system for the real-time analysis of 3D gaze data arising in mobile applications. Our system allows users to move freely in a known 3D environment while their gaze is computed on arbitrarily shaped objects. The scanpath is analysed fully automatically using fixations and areas-of-interest, all in 3D and in real time. Furthermore, the scanpath can be visualized in parallel in a 3D model of the environment, which makes it possible to observe a subject's scanning behaviour. We describe how this has been realized for a commercial off-the-shelf mobile eye tracker utilizing an inside-out tracking mechanism for head pose estimation. Moreover, we show examples of real gaze data collected in a museum.
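The system described above computes fixations and areas-of-interest on 3D geometry in real time, but its concrete algorithms are not detailed in the abstract. The Python sketch below is therefore a hypothetical minimal version: 3D gaze intersection points are grouped into fixations with a dispersion/duration criterion (I-DT style) and mapped to named AOIs modeled as axis-aligned bounding boxes.

```python
import numpy as np

# Hypothetical AOIs as axis-aligned bounding boxes (min corner, max corner).
AOIS = {
    "painting": (np.array([0.0, 1.0, 0.0]), np.array([1.0, 2.0, 0.1])),
    "sculpture": (np.array([2.0, 0.0, 0.0]), np.array([2.5, 1.5, 0.5])),
}

def aoi_of(point):
    """Return the name of the AOI containing a 3D point, if any."""
    for name, (lo, hi) in AOIS.items():
        if np.all(point >= lo) and np.all(point <= hi):
            return name
    return None

def fixations(points, timestamps, max_dispersion=0.05, min_duration=0.1):
    """Dispersion-based (I-DT style) fixation detection on 3D gaze hit points."""
    out, start = [], 0
    for end in range(2, len(points) + 1):
        window = points[start:end]
        if np.max(np.ptp(window, axis=0)) > max_dispersion:
            # Point end-1 broke the dispersion limit; candidate is [start, end-2].
            if timestamps[end - 2] - timestamps[start] >= min_duration:
                centroid = points[start:end - 1].mean(axis=0)
                out.append((timestamps[start], timestamps[end - 2],
                            centroid, aoi_of(centroid)))
            start = end - 1
    # Flush the final window (its dispersion was already checked in the loop).
    if len(points) - start >= 2 and timestamps[-1] - timestamps[start] >= min_duration:
        centroid = points[start:].mean(axis=0)
        out.append((timestamps[start], timestamps[-1], centroid, aoi_of(centroid)))
    return out

# Toy data: gaze dwells on the "painting", then jumps to the "sculpture".
ts = np.arange(0, 1.0, 0.02)
pts = np.vstack([np.tile([0.5, 1.5, 0.05], (30, 1)),
                 np.tile([2.2, 0.8, 0.2], (20, 1))])
pts += np.random.default_rng(1).normal(0, 0.005, pts.shape)

for t0, t1, c, aoi in fixations(pts, ts):
    print(f"{t0:.2f}-{t1:.2f}s fixation at {np.round(c, 2)} on AOI: {aoi}")
```

In the actual system, the 3D gaze points would come from intersecting the head-pose-transformed gaze ray of the mobile eye tracker with the known environment model rather than from synthetic data.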