Abstract: This paper studies the design and application of a novel visual attention model meant to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute, in real time, a continuous gaze-point position instead of a set of 3D objects potentially observed by the user. To do so, and contrary to previous models that use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and cognitive processes that take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous gaze-point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy more than doubled. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid alternative to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms that can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that relies heavily on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated user's gaze and are meant to improve the user's sensations in future virtual reality applications.

Index Terms: visual attention model, first-person exploration, gaze tracking, visual effects, level of detail.
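The abstract does not spell out how the bottom-up and top-down components are fused into a single continuous gaze point. As a rough, hypothetical sketch (the multiplicative fusion, the softmax-weighted centroid, the smoothing step, and all names and parameters below are illustrative assumptions, not the authors' formulation), the combination could look like this:

```python
import numpy as np

def estimate_gaze_point(bottom_up, top_down, temperature=0.05):
    """Fuse a bottom-up saliency map and a top-down relevance map into one
    continuous gaze point in normalized screen coordinates.

    bottom_up, top_down: non-negative 2D arrays of the same shape (H, W).
    Returns (x, y), each in [0, 1].
    Illustrative sketch only; not the paper's exact model.
    """
    attention = bottom_up * top_down                         # fuse the two components
    weights = np.exp((attention - attention.max()) / temperature)  # stable softmax
    weights /= weights.sum()
    h, w = attention.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = float((weights * xs).sum()) / (w - 1)                # weighted centroid ->
    y = float((weights * ys).sum()) / (h - 1)                # a continuous point
    return x, y

def smooth_gaze(prev, new, alpha=0.3):
    """Exponentially smooth successive per-frame estimates so the gaze point
    stays temporally stable (alpha is an illustrative smoothing factor)."""
    return (prev[0] + alpha * (new[0] - prev[0]),
            prev[1] + alpha * (new[1] - prev[1]))

# Hypothetical usage: image-based saliency plus a navigation-habit weighting.
bu = np.random.rand(90, 160)       # stand-in for a bottom-up saliency map
td = np.ones((90, 160))
td[30:60, :] *= 2.0                # e.g., a tendency to look near the horizon
print(estimate_gaze_point(bu, td))
```

A softmax-weighted centroid yields a continuous screen position rather than a discrete object, which matches the stated goal of the model; the exponential smoothing stands in for the temporal stability a real gaze point exhibits across frames.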
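Likewise, the gaze-contingent depth-of-field blur mentioned above can be sketched as a per-pixel circle-of-confusion computation focused at the simulated gaze point. The linear falloff and all parameter names here are illustrative assumptions, not the paper's exact effect:

```python
import numpy as np

def depth_of_field_radius(depth, gaze_xy, max_radius=8.0, focus_range=0.1):
    """Per-pixel blur radius for a gaze-contingent depth-of-field effect.

    depth: (H, W) array of normalized scene depths in [0, 1].
    gaze_xy: (x, y) gaze estimate in normalized screen coordinates.
    Returns a (H, W) array of blur radii in pixels.
    Illustrative sketch only; the actual blur kernel is applied elsewhere.
    """
    h, w = depth.shape
    gx = int(round(gaze_xy[0] * (w - 1)))
    gy = int(round(gaze_xy[1] * (h - 1)))
    focal_depth = depth[gy, gx]                      # focus where the model looks
    coc = np.abs(depth - focal_depth) / focus_range  # distance to the focal plane
    return np.clip(coc, 0.0, 1.0) * max_radius       # circle-of-confusion radius

# Hypothetical usage with a random depth buffer and a centered gaze estimate.
depth = np.random.rand(90, 160)
radii = depth_of_field_radius(depth, (0.5, 0.5))
```

The same gaze estimate could drive the level-of-detail scheme, e.g., by sampling higher-resolution textures within a small radius of the gaze point, though the abstract does not detail that mechanism.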
We investigated whether perception of affordances for standing on an inclined surface depended on the height of the center of mass of the perceiver-actor. Participants adjusted the angle of inclination of a surface until they felt that it was just barely possible for them to stand on that surface. They performed this task while wearing a backpack apparatus to which masses were attached in one of three configurations: high-mass, low-mass, and no-mass. Moreover, participants performed this task either by viewing the inclined surface or by probing it with a hand-held rod while blindfolded. Perception of affordances for standing on the inclined surface reflected the changes in center of mass brought on by the weighted backpack apparatus: the perceptual boundary occurred at a smaller angle of inclination in the high-mass condition than in the low-mass and no-mass conditions. Moreover, perception of this affordance reflected such changes both when the surface was viewed and when it was probed with the hand-held rod. The results highlight that perception of affordances is dynamic and task-dependent, and they suggest that the stimulation patterns that support perception of affordances are invariant and modality-independent.